The Role of Prediction in Learned Predictiveness

Learning permits even relatively uninteresting stimuli to capture attention if they are established as predictors of important outcomes. Associative theories explain this "learned predictiveness" effect by positing that attention is a function of the relative strength of the association between stimuli and outcomes. In three experiments we show that this explanation is incomplete: learned overt visual attention is not a function of the relative strength of the association between stimuli and an outcome. Human participants were exposed to triplets of stimuli that comprised (a) a target (that defined correct responding), (b) a stimulus that was perfectly correlated with the presentation of the target, and (c) a stimulus that was uncorrelated with the presentation of the target. Participants' knowledge of the associative relationship between the correlated or uncorrelated stimuli and the target was always good. However, eye-tracking revealed that an attentional bias toward the correlated stimulus developed only when it, and target-relevant responding, preceded the target stimulus. We propose a framework in which attentional changes are modulated during learning as a function of the relative strength of the association between stimuli and the task-relevant response, rather than an association between stimuli and the task-relevant outcome.

The study of the relationship between learning and attention has a long history. Pavlov (1927) noted that a novel stimulus presented to an organism would elicit an "investigatory reflex" (p. 12), something that would today be referred to as an orienting response, and Lashley (1929) soon after noted that only stimuli, or components of stimuli, that are organized by the principles of attention will become associated with one another. In the intervening 90 or so years there has been much discussion about the relationship between learning and attention (for reviews see Le Pelley, 2004; Le Pelley et al., 2016; Mackintosh, 1975; Pearce & Hall, 1980), but one feature of this relationship that has sustained interest is the extent to which a stimulus will come to attract attention if it is predictive of a subsequent event. In his influential theory of attention, for example, Mackintosh (1975) proposed that "subjects learn to attend to and ignore stimuli to the extent that those stimuli successfully predict the outcome of a trial," and more recent theoretical analyses of learning and attention make similar claims (e.g., Esber & Haselgrove, 2011; George & Pearce, 2012; Le Pelley, 2004). This general principle has become known as the predictiveness principle, which refers to the idea that "cues become more psychologically salient as a result of their predictiveness with respect to important outcomes; more attention will be allocated to predictive cues than to nonpredictive cues" (Le Pelley et al., 2016, p. 8). There seems to be good reason for advocating a general predictiveness principle, too. So-called "learned predictiveness" tasks reliably show that stimuli that are good predictors of an outcome come to attract more overt visual attention than stimuli that are not. For example, in a study by Le Pelley et al. (2011), participants' eye movements were recorded while they were presented with pairs of stimuli (nonsense words) on a computer screen and, on each trial, asked to predict which of two different outcomes (sounds) would follow each pair.
Across the experimental design, half of the stimuli were perfectly predictive of the identity of the outcome, while the remainder of the stimuli were irrelevant. The specifics of the training design can be seen in Table 1, where the nonsense words are represented as the letters A to D and V to Y. As can be seen, stimuli A and D were predictive of Outcome 1, and stimuli B and C were predictive of Outcome 2, thus permitting the solution of this task. Stimuli V to Y, however, were presented on trials that terminated with both Outcome 1 and Outcome 2. These stimuli were predictively redundant and consequently irrelevant to the solution of the task. The results of Le Pelley et al.'s (2011) study revealed that, as a consequence of this training, participants' visual dwell times were longer to the predictive stimuli than to the irrelevant stimuli, a result that has been reproduced on a number of occasions using a variety of stimuli and tasks, in different laboratories (e.g., Alamia & Zénon, 2016; Aristizabal et al., 2016; Beesley et al., 2015; Griffiths & Mitchell, 2008; Haselgrove et al., 2016; Lochmann & Wills, 2003; Mitchell et al., 2012). It is also a result that concurs with studies of learning and attention in nonhuman animals (e.g., Haselgrove et al., 2010; Mackintosh & Little, 1969; Roberts et al., 1988).

Experimental results that conform to the predictiveness principle are often interpreted in terms of the framework provided by associative theories of learning. These theories stipulate that the attention paid to a stimulus can change according to some function of its associative strength, or the difference between its associative strength and the magnitude of the outcome: the so-called prediction error. To illustrate this, it is useful to consider the theory proposed by Mackintosh (1975), an influential model of learning in its own right, but one that has also been incorporated into more contemporary and comprehensive treatments of learning (e.g., Le Pelley, 2004; Pearce & Mackintosh, 2010). According to Mackintosh, the change in the strength of the association between a stimulus (e.g., A) and an outcome (ΔV_A) is determined by Equation 1:

ΔV_A = θ α_A (λ − V_A)    (1)

Here, the error term (λ − V_A) is the discrepancy between the magnitude of the outcome (λ) and the current associative strength of stimulus A. θ is a learning-rate parameter, determined by the properties of the outcome. Most important for the Mackintosh model, α is a variable stimulus-attention parameter that may increase or decrease after each trial. The rules proposed by Mackintosh for determining these increases and decreases in attention (Δα) are shown in Equations 2a and 2b, respectively:

Δα_A > 0 if |λ − V_A| < |λ − V_r|    (2a)

Δα_A < 0 if |λ − V_A| ≥ |λ − V_r|    (2b)

where V_r is the sum of the associative strengths of all stimuli present on that trial, minus V_A (i.e., it is the remainder). The size of the change in α_A is assumed to be proportional to the magnitude of the inequality in Equation 2a or 2b. Using these equations, it can be seen how Mackintosh's (1975) theory provides a mechanism for understanding the predictiveness principle. For example, the error terms of the predictive stimuli (A to D) in the study by Le Pelley et al. (2011) will on each trial be less than the error terms of the irrelevant stimuli (V to Y). Consequently, it follows from Equation 2a that attention to predictive stimuli will increase, and from Equation 2b that attention to irrelevant stimuli will decrease.
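To make these dynamics concrete, the following is a minimal Python sketch of this scheme (our own toy simulation of a design like that in Table 1, not code from any of the studies cited; the values of θ, the attention step, and the bounds on α are illustrative assumptions):

```python
# Toy simulation of Mackintosh (1975): Equation 1 plus the attention rules
# of Equations 2a/2b. A and B are predictive of Outcomes 1 and 2; X is
# irrelevant. theta, step, and the bounds on alpha are assumptions.
theta, step = 0.3, 0.1
V = {s: {1: 0.0, 2: 0.0} for s in "ABX"}   # V[stimulus][outcome]
alpha = {s: 0.5 for s in "ABX"}

trials = [(["A", "X"], 1), (["B", "X"], 2)] * 50   # AX -> O1, BX -> O2
for present, outcome in trials:
    # Equations 2a/2b: attention to a stimulus rises when its own error for
    # the outcome that occurred is smaller than the error of the remainder
    # (V_r), and falls otherwise, in proportion to the inequality.
    for s in present:
        V_r = sum(V[x][outcome] for x in present if x != s)
        inequality = abs(1.0 - V_r) - abs(1.0 - V[s][outcome])
        alpha[s] = min(1.0, max(0.05, alpha[s] + step * inequality))
    # Equation 1: attention-weighted learning with an individual error term.
    # The outcome that occurred has lambda = 1; the absent outcome has 0.
    for o in (1, 2):
        lam = 1.0 if o == outcome else 0.0
        for s in present:
            V[s][o] += theta * alpha[s] * (lam - V[s][o])

print({s: round(a, 2) for s, a in alpha.items()})
# Expected pattern: alpha rises for predictive A and B, falls for irrelevant X.
```

Running this sketch, attention to the irrelevant stimulus X falls because its association to each outcome is partially extinguished on trials ending with the other outcome, leaving it a persistently poorer predictor than A or B.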
One might wish to make the argument, however, that a sleight of hand is being performed when associative theories of attention, such as Mackintosh's (1975) theory, are applied to our understanding of the predictiveness principle. Associative theories, rather paradoxically, generally say very little about how time is represented during learning (for discussions of this matter see Daw et al., 2006; Gallistel & Gibbon, 2000; Niv, 2009; Sutton & Barto, 2018). According to these theories, when events are paired, an opportunity is provided for an association to form between them. However, no information about the temporal relationship between the events themselves forms a part of the association: these models do not distinguish between events that are presented simultaneously and those that are presented sequentially. Consequently, the notion that a stimulus may be predictive of another event, rather than merely connected to it, is beyond the explanatory scope of most accounts of associative learning. Consider Equations 1, 2a, and 2b, for example: these analyses of learning and attention make no explicit statement about the role of time, sequence, or (crucially) prediction in learning. Instead, association and associative error are used to drive learning and attention; and yet conceptual understandings of learning and attention appeal to the role of prediction and prediction error to explain attentional phenomena such as the learned-predictiveness effect. There has been rather little focus on the question of whether a sequential prediction is necessary for the establishment of a learned attentional bias, or whether mere association will suffice. In a visual search study reported by Beesley et al. (2018), participants were required to make a response about the orientation of a target stimulus that was presented simultaneously with an array of distractors. In one condition, a configuration of relevant distractors provided information about the location of the target. Despite the relationship between the configuration of distractors and the target, attention was not biased toward these configurations. At face value this might be taken to suggest that mere simultaneous association is not sufficient for the establishment of an attentional bias to a stimulus. However, what is unclear from this study is (a) whether participants had knowledge of the association between the configuration of distractors and the target and (b) whether an attentional bias would have been established if the relevant distractors had been presented before the target (i.e., established as truly predictive). The purpose of the experiments reported here was to uncouple association and prediction to determine their roles in learned changes in attention. To do this, we investigated whether overt changes in attention were acquired to stimuli that were associatively correlated, or uncorrelated, with other events in the absence of a sequential, predictive relationship between them. To anticipate our results: we observed that association between stimuli alone was insufficient to modulate an attentional bias. Instead, a learned attentional bias was acquired to stimuli only when predictive responding was necessitated by the task. We consider the role of the association between stimuli and task-relevant responding to explain these results.

Experiment 1

Experiment 1 established a procedure in which stimuli were either correlated or uncorrelated with the presentation of a target stimulus, in the absence of any predictive relationship.
The question of interest was whether, under these circumstances, visual dwell time would be longer to stimuli that were correlated with the target than to stimuli that were uncorrelated with it. To achieve this, participants were trained with triplets of stimuli, each of which comprised a target stimulus (that participants were required to respond to), a correlated stimulus (that was presented with the target stimulus on 100% of the training trials), and an uncorrelated stimulus (that was presented with the target stimulus on only 50% of the training trials). The specifics of the design are shown in Table 2. It can be seen that during the training trials, stimuli U and V were perfectly correlated with the presentation of the targets Y and Z, respectively. However, stimuli W and X were uncorrelated with the target stimuli, as they were presented equally frequently with Y and Z and provided no information about the identity of the target stimulus. Occasional test trials were presented that comprised only the correlated and uncorrelated stimuli. The duration of participants' eye gaze toward the correlated and uncorrelated stimuli was measured to determine whether participants acquired an attentional bias toward the correlated stimuli, on the basis of their association with the target stimulus, relative to the uncorrelated stimuli. According to analyses that emphasize the importance of association in the acquisition of attention (e.g., Esber & Haselgrove, 2011; Le Pelley, 2004; Mackintosh, 1975), a stimulus that is correlated with the occurrence of the target will acquire more attention than a stimulus that is uncorrelated with the occurrence of the target. The same prediction does not necessarily hold if it is thought that prediction is an important determinant of learned changes in attention. To determine participants' knowledge about the relationship between the correlated and uncorrelated stimuli and the target, a final test was conducted in which participants rated the likelihood of the correlated and uncorrelated stimuli being paired with each target stimulus.

Participants

Eighteen participants (14 females; four males) were recruited from the University of Nottingham's School of Psychology. Participants had a mean age of 19.6 years (SD = .61). All participants had normal or corrected-to-normal vision, and were excluded if they reported a history of visual disturbances triggered by flashing lights. Participants received course credit for their participation or a £3 inconvenience allowance. This experiment, as well as Experiments 2 and 3, received ethical approval from the institution's local ethics committee.

Apparatus and Stimuli

The experiment was designed and run using Experiment Builder (Version 1.10.1630). An SR Research (Mississauga, Canada) EyeLink 1000 Plus eye-tracker sampled each participant's right eye at a rate of 1,000 Hz. Gaze location was determined by monitoring the location of the pupil using an infrared camera mounted on the desk in front of the display monitor. The thresholds used to define fixations and saccades were: 0.15° displacement, 30°/s velocity, and 8,000°/s² acceleration. Participants viewed stimuli binocularly from a distance of 67 cm, and head movements were minimized using a chin and forehead rest. Stimuli were presented upon a BenQ XL2420T LED monitor (33 × 57 cm) with the resolution set to 1,920 × 1,080 at 114 Hz. Stimuli consisted of the letters U, V, W, X, Y, and Z, presented in black, font size 60, Times New Roman.
Stimuli were presented 5 cm apart from each other, at the apexes of a notional upright equilateral triangle, the center of which coincided with the center of the monitor screen. All stimuli were presented on a white background and subtended a visual angle of 25.4°. Regions of interest (ROIs) were set to 2.7 × 2.7 cm squares centered on each letter. Stimuli were counterbalanced as either the target, correlated, or uncorrelated stimuli.

Procedure

Eye movement data were recorded from each participant's right eye. The eye-tracker was calibrated for each participant at the outset of the experimental session using a nine-point calibration, except for two participants who could only be calibrated using a 5-point procedure. A brief health questionnaire was administered to screen for exclusion criteria, written instructions were given, and informed consent was gained. At the start of each experimental session, the experimenter checked that both pupil and corneal reflections were present while participants read the on-screen instructions before performing a calibration. The written instructions for the calibration task were: "Thank you for participating in this study. Your first task is to focus on the dot and follow it with your eyes. Press the space bar to begin the experiment." After performing the calibration, participants read further on-screen instructions: "Each trial will begin with a fixation cross in the center of the screen. You need to look directly at the cross. Following this, your task is to indicate whether the letter (Y or Z) is present in the array. Try to remember which letters are paired together because at the end of the experiment you will have a memory test (Y = left, Z = right). Press any key to start the experiment." Nonitalicized text in brackets indicates example text that was counterbalanced for each participant.

Each trial began with the presentation of a 1 cm² fixation cross located in the center of the screen for 1,000 ms. This was then replaced with either the triplets of stimuli (on training trials) or pairs of stimuli (on test trials). The triplet stimuli remained on the screen until a response was made; the test-trial stimuli remained on screen for 5,000 ms. Trials were separated by an interstimulus interval of 1,000 ms, during which the screen was blank. Each of the four compounds of three stimuli shown in Table 2 was presented on 72 occasions across the experiment, providing 288 training trials in total. In addition, there were 24 test trials with the correlated and uncorrelated stimuli presented in the absence of the target stimulus (which might reasonably be expected to command most of the visual attention). No additional instructions were provided on these trials. Trial order was randomized over the whole experiment (312 trials in total). Stimulus position was varied randomly across the experiment to ensure that target, correlated, and uncorrelated stimuli were presented equally frequently in all three positions on both training and test trials.
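As an illustration of this design, a trial list along these lines could be generated as follows (a sketch only: the exact compounds and test pairings are our reading of Table 2, and all variable names are our own):

```python
import random

# Sketch of the Experiment 1 trial list: each of the four training compounds
# (correlated, uncorrelated, target) appears 72 times, plus 24 target-absent
# test trials, with stimulus positions randomized over the triangle apexes.
POSITIONS = ["top", "bottom_left", "bottom_right"]
TRAINING_COMPOUNDS = [("U", "W", "Y"), ("U", "X", "Y"),
                      ("V", "W", "Z"), ("V", "X", "Z")]
TEST_COMPOUNDS = [("U", "W"), ("U", "X"), ("V", "W"), ("V", "X")]

trials = [{"stimuli": c, "type": "training"}
          for c in TRAINING_COMPOUNDS for _ in range(72)]
trials += [{"stimuli": c, "type": "test"}
           for c in TEST_COMPOUNDS for _ in range(6)]
random.shuffle(trials)                      # 312 trials in total
for t in trials:
    # Assign each stimulus on the trial to a random apex of the triangle.
    positions = random.sample(POSITIONS, len(t["stimuli"]))
    t["layout"] = dict(zip(positions, t["stimuli"]))
```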
After participants completed the experiment they were given two paper-based questionnaires, each of which tested their understanding of the relationship between the correlated and uncorrelated stimuli and each target stimulus. At the top of the page was written, for example, "How likely was the letter 'TARGET STIMULUS' to be paired with the following letters? (please circle)." Presented underneath this question were the two correlated and two uncorrelated stimuli, next to each of which was a 10-point Likert scale anchored with the word "Unlikely" adjacent to the number 1 and the word "Likely" adjacent to the number 10. Upon completion of the questionnaires participants were thanked for their time and debriefed.

Transparency and Openness

For each experiment reported here, we detail processes for identifying any data to be excluded, any data exclusions, all manipulations, and all measures in the study. Data will be made available upon request to the corresponding author. The experiments were not preregistered.

Results

Occasionally the eye-tracker lost track of pupil and corneal reflections, resulting in missing eye-movement data. Participants were excluded from further analysis if the percentage of missing data was greater than 20% (zero participants were excluded on this basis). Furthermore, participants were excluded if they achieved less than 60% correct responses across all trial blocks (zero participants were excluded on this basis). For statistical analyses in this and subsequent experiments, Greenhouse-Geisser corrections were applied where sphericity was violated; however, degrees of freedom are rounded to the nearest whole number for the sake of clarity.

Behavioral Data

Panel A of Figure 1 shows the mean proportion of correct responses over 16 blocks of 18 trials and reveals that participants performed the task accurately from trial Block 1, and by Block 16 had a mean accuracy of almost .96, which a one-sample t test revealed to be significantly above chance (.5), t(17) = 69.28, p < .001. A one-way repeated measures analysis of variance (ANOVA) of mean proportion correct with the factor of block (1-16) revealed a nonsignificant main effect, F(5, 87) = .73, ηp² = .04, p = .605, reflecting the lack of change over training. Similarly, the mean response time (RT), measured on correct and incorrect trials from the termination of the fixation cross, varied very little across training (Figure 1, Panel B). A one-way repeated measures ANOVA of RT with the factor of block (1-16) also revealed a nonsignificant main effect, F(4, 60) = .66, ηp² = .04, p = .606.

Eye Gaze Analysis

Mean proportions of dwell time were calculated by dividing the total dwell time each participant spent within an ROI on each trial by the RT for that trial. These dwell times for the correlated, uncorrelated, and target stimuli during the training trials are shown in Panel C of Figure 1. This reveals that the target stimuli attracted the largest proportion of dwell time, and very little was directed toward either the correlated or the uncorrelated stimuli. A two-way repeated measures ANOVA of proportion of dwell time with the factors of stimulus type (target vs. correlated vs. uncorrelated) and trial block (1-16) revealed a significant main effect of stimulus type, F(1, 19) = 58.02, ηp² = .77, p < .001, of trial block, F(4, 73) = 4.43, ηp² = .21, p = .002, and a significant interaction, F(22, 374) = 1.86, ηp² = .10, p = .050.
Simple main effects analysis of this interaction revealed a significant difference between the proportion of dwell time directed toward the target stimuli and the correlated or uncorrelated stimuli from Block 1 onward, smallest F(1, 19) = 26.03, ηp² = .51, p < .001, with the target stimuli always attracting a higher proportion of dwell time than the correlated and uncorrelated stimuli, which did not differ. It was important to ascertain whether this nonsignificant difference between the correlated and uncorrelated stimuli supported the null hypothesis (that there was no difference between the proportion of dwell time directed toward correlated and uncorrelated stimuli) or supported no conclusion at all. To decide between these two possibilities, a scaled JZS Bayes factor was calculated according to the procedure described by Rouder et al. (2009) with a scale r = .707. The scaled JZS Bayes factor for the difference between the correlated and uncorrelated stimuli was 4.08, which is in favor of the null.

One possible explanation for the absence of a difference between the proportion of dwell time directed toward the correlated and uncorrelated stimuli is that participants were directing so much attention toward the task-relevant target stimulus that attention to the remaining stimuli on the screen was at floor levels. To examine this possibility, the 24 test trials on which the target stimulus was absent were examined. Figure 1, Panel D shows that on the target-absent test trials, the mean proportion of dwell time directed toward the correlated and uncorrelated stimuli was moderately longer than it was during the training trials that included the target (Figure 1, Panel C), but there was still no difference in dwell time between these stimuli. A two-way repeated measures ANOVA of proportion of dwell time with the factors of stimulus type (correlated vs. uncorrelated) and trial block (1-4) revealed no effect of stimulus type, F(1, 17) = .04, ηp² = .002, p = .850, or trial block, F(7, 112) = 1.76, ηp² = .09, p = .107, and no interaction between these factors, F(8, 133) = .73, ηp² = .04. The scaled JZS Bayes factor for the difference between the correlated and uncorrelated stimuli was 4.04, which is in favor of the null.

Questionnaire Data

Participants were given a questionnaire that tested their understanding of the associative relationship between the correlated or uncorrelated stimuli and the target stimuli. A difference score was calculated to determine the specificity of the stimulus-target relationship. For a correlated stimulus, this was computed by subtracting the rating of the relationship between the correlated stimulus and the target stimulus it was not paired with from the rating of the relationship between the same correlated stimulus and the target stimulus that it was paired with (e.g., the rating of the relationship between U and Z was subtracted from the rating of the relationship between U and Y). For an uncorrelated stimulus, this was computed by subtracting the rating of the relationship between the uncorrelated stimulus and one target stimulus from the rating of the relationship between the same uncorrelated stimulus and the other target stimulus (e.g., the rating of the relationship between W and Z was subtracted from the rating of the relationship between W and Y). The four difference scores were thus: UY−UZ, VZ−VY, WY−WZ, XZ−XY.
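For concreteness, a minimal sketch of this scoring and the subsequent comparison follows (hypothetical ratings on the 1-10 likelihood scale; we show a paired comparison over participant means, which may aggregate scores differently from the degrees of freedom reported below):

```python
import numpy as np
from scipy import stats

# Sketch of the questionnaire difference scores. Dictionary keys name
# stimulus-target pairs (e.g., "UY" is the rating for U paired with Y).
def difference_scores(ratings):
    corr = [ratings["UY"] - ratings["UZ"], ratings["VZ"] - ratings["VY"]]
    unc = [ratings["WY"] - ratings["WZ"], ratings["XZ"] - ratings["XY"]]
    return np.mean(corr), np.mean(unc)

rng = np.random.default_rng(0)
# Hypothetical sample of 18 participants who rate the trained pairings as
# likely and the uncorrelated pairings nearer the middle of the scale.
sample = [{"UY": rng.integers(8, 11), "UZ": rng.integers(1, 4),
           "VZ": rng.integers(8, 11), "VY": rng.integers(1, 4),
           "WY": rng.integers(4, 8), "WZ": rng.integers(4, 8),
           "XZ": rng.integers(4, 8), "XY": rng.integers(4, 8)}
          for _ in range(18)]
scores = np.array([difference_scores(p) for p in sample])
print(stats.ttest_rel(scores[:, 0], scores[:, 1]))  # correlated vs. uncorrelated
```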
Panel E of Figure 1 reveals that the difference score was higher for correlated stimuli than for uncorrelated stimuli, t(34) = 6.69, p < .001.

Discussion

Participants received trials in which stimuli were either strongly positively correlated or entirely uncorrelated with the presentation of a task-relevant target. Participants engaged quickly and accurately with this task (Figure 1, Panels A and B), and showed clear knowledge about the associative relationship between the correlated stimuli and the target (Figure 1, Panel E). According to theories of learning that emphasize the role of changes in attention to stimuli as a consequence of association, these conditions should be sufficient for the acquisition of a bias in attention toward the correlated stimuli and away from the uncorrelated stimuli (e.g., Esber & Haselgrove, 2011; Le Pelley, 2004; Mackintosh, 1975). However, neither during training trials that included the target stimulus, nor during the test trials that included only the correlated and uncorrelated stimuli, could we find any evidence of differences in the duration of visual dwell times, using either traditional frequentist statistics or a calculation of Bayes factors (Figure 1, Panels C and D). These results imply that mere association, or differences in associative error, alone are not sufficient for the establishment of learned variations in attention.

Experiment 2

In Experiment 1, we arranged for one set of visual stimuli to be positively correlated with a target stimulus while another set of stimuli was not. Despite the presence of appropriate associative knowledge about the relationship between the correlated or uncorrelated stimuli and the target, there was no difference in the extent to which participants' overt visual attention was directed toward these stimuli. As we have noted, these results imply that mere association with a task-relevant stimulus alone is not sufficient for the acquisition of an attentional bias to other stimuli (e.g., Esber & Haselgrove, 2011; Le Pelley, 2004; Mackintosh, 1975). Instead, these results are consistent with the idea that a predictive relationship must be arranged between stimuli for associations to be translated into changes in attention, an idea that is implied by the predictiveness principle (Le Pelley et al., 2016). However, while real and nonsense words are regularly used as visual stimuli in studies of the relationship between learning and attention (e.g., Le Pelley et al., 2011), it is relatively rare for single letters to be used in the same role. It is possible, then, that the stimuli used in Experiment 1 were, in and of themselves, unable to support learned variations in attention, irrespective of whether a predictive relationship was arranged between them or otherwise. The aim of Experiment 2, therefore, was to use single-letter stimuli as in Experiment 1, with the same degrees of correlation or noncorrelation between themselves and the target, but under circumstances in which a predictive relationship was also embedded into the trial structure, to explore whether, now, a bias in learned attention would develop. To do this, three groups of participants were included in Experiment 2. First, we included a group (group Simultaneous) that closely replicated the conditions of Experiment 1, to examine whether the outcome of this study would reproduce.
A second group (group Serial-Target) received identical instructions to participants in group Simultaneous, but for this group the correlated and uncorrelated stimuli were presented simultaneously, as a pair, and then followed immediately afterward by the target stimulus, thus reproducing the temporal arrangement of cues and outcomes more commonly used in studies of learned predictiveness. Finally, a third group (group Serial-Stimuli) received an identical trial structure to group Serial-Target; however, for this group the instructions were subtly changed. For group Serial-Stimuli, participants were again informed that they could respond when the target appeared; however, they were also informed that they could make a response earlier, before the presentation of the target during the preceding stimuli, if they thought they were able to predict its arrival.¹ The purpose of including group Serial-Target and group Serial-Stimuli was to explore whether it was sufficient for just a predictive temporal relationship to be established between the correlated and target stimuli for an attentional bias to develop toward these stimuli; or, instead, whether it was additionally necessary for a predictive response to be made about the identity of a forthcoming, future event. On the basis of the results of Experiment 1, we expected to see no difference in dwell times toward the correlated and uncorrelated stimuli in group Simultaneous. What remained to be determined, however, was whether the two groups that had a serial relationship established between the correlated or uncorrelated stimuli and the target (group Serial-Target and group Serial-Stimuli) would show evidence of differences in attention to the correlated and uncorrelated stimuli; and, perhaps most interestingly, whether these two groups would themselves differ. Finally, and in keeping with the design of Experiment 1, at the end of Experiment 2 all groups received a questionnaire to determine their knowledge of the relationship between the correlated, uncorrelated, and target stimuli.

Participants

Fifty-two participants were recruited from the University of Nottingham's School of Psychology and randomly assigned to group Simultaneous, group Serial-Stimuli, or group Serial-Target. Group Simultaneous consisted of 18 females with a mean age of 18.5 ± .86 (M ± SD) years; group Serial-Target consisted of 18 females with a mean age of 18.33 ± .49 years; and group Serial-Stimuli consisted of 16 females with a mean age of 19 ± 1.27 years. Exclusion criteria for all groups were the same as in Experiment 1 and, as in Experiment 1, all participants had normal or corrected-to-normal vision and received course credit for their participation or a £3 inconvenience allowance.

Apparatus and Stimuli

The apparatus used in Experiment 2 was identical to that used in Experiment 1. Stimuli consisted of the letters J, P, Q, V, W, and Z, which are better matched in terms of frequency in the English language than the letters used in Experiment 1. In keeping with Experiment 1, letters were presented in black, font size 60, Times New Roman. Letters Q and P always served as the target stimuli, while letters J, V, W, and Z were randomly assigned to serve as either the correlated or uncorrelated stimuli for each participant. For group Simultaneous, the target appeared simultaneously with the correlated and uncorrelated stimuli at a random apex of the notional triangle described in Experiment 1.
For groups Serial-Target and Serial-Stimuli, the target appeared alone, at a random location within the notional triangle, on the screen that followed the presentation of the compound of the correlated and uncorrelated stimuli. Each of the four training trial types was presented 36 times, making 144 trials overall. Eight test trials were randomly distributed throughout the experiment, on which each of the four compounds was presented twice in the absence of a target stimulus. On these trials, the correlated and uncorrelated stimuli were presented in the same manner as in Experiment 1.

Procedure

The same calibration, informed consent, and health-screening procedure was used at the start of Experiment 2 as in Experiment 1. After this, participants read the following on-screen instructions: "Each trial will begin with a fixation cross in the center of the screen. This will be followed by a series of letters. Your task is to indicate whether the letter 'Q' or 'P' is present in the array. Q = left arrow, P = right arrow. Pay attention to which stimuli are paired together because at the end you will have a memory test. Press any key to start the experiment." For group Serial-Stimuli, participants were additionally told, "You can press early if you think you know which letter will appear."

¹ Simultaneous and Serial refer to the nature of the presentation of the stimuli on each trial. When the correlated, uncorrelated, and target stimuli are presented at the same time, the group designation is Simultaneous; when the correlated and uncorrelated stimuli precede the target, the group designation is Serial. Stimuli and Target distinguish the two serial groups with respect to the response instructions: when the instructions indicate that responding is to the target, the designation is Target; when the instructions also state that responding may take place during the preceding stimuli, the designation is Stimuli.

Each training trial began with the presentation of a fixation cross located in the center of the screen for 1,000 ms. This fixation cross was then removed and, for group Simultaneous, replaced with the triplet of the correlated, uncorrelated, and target letters, presented simultaneously for 2,000 ms. For groups Serial-Stimuli and Serial-Target, the fixation cross was replaced with the presentation of the correlated and uncorrelated stimuli for 2,000 ms, which were then removed from the screen and followed immediately by the target stimulus, also for 2,000 ms. For all groups, the trial then recycled, after a 1,000 ms blank screen, to the fixation cross. Although participants in group Serial-Stimuli were told that they could press early if they thought they knew which letter would appear, participants in all groups were free to respond at any point during the trial and could respond during the target if they wished. For each group, the stimuli remained on screen, irrespective of the presence or absence of responding, until the termination of the trial. After participants completed this stage of the experiment, they were given a questionnaire to complete that tested their understanding of the relationship between the correlated and uncorrelated stimuli and the target stimuli. With the exception of the identities of the letters used in Experiment 2, the questionnaire was the same as that used in Experiment 1.

Results

The exclusion criteria were identical to those used in Experiment 1. Again, zero participants were excluded.
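As an aside, the exclusion rules used throughout these experiments amount to a simple per-participant filter; the following is a sketch with hypothetical trial-level data and column names, not the authors' code:

```python
import pandas as pd

# Sketch of the exclusion rules used in all three experiments: drop a
# participant if more than 20% of eye-movement samples are missing or if
# overall accuracy is below 60%. Column names are hypothetical.
def apply_exclusions(df: pd.DataFrame) -> pd.DataFrame:
    summary = df.groupby("participant").agg(
        missing=("sample_missing", "mean"),   # proportion of missing samples
        accuracy=("correct", "mean"),         # proportion correct, all blocks
    )
    keep = summary[(summary.missing <= 0.20) & (summary.accuracy >= 0.60)].index
    return df[df.participant.isin(keep)]
```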
Behavioral Data

In keeping with the results of Experiment 1, participants responded with high accuracy from trial Block 1 onward in all groups. In groups Simultaneous, Serial-Target, and Serial-Stimuli the mean proportions of correct responses during the final trial block were .98, .95, and .89, respectively. One-sample t tests revealed that each of these means was significantly greater than chance (.5), smallest t(17) = 773.98, p < .001. The mean RT was again measured from the termination of the fixation cross and, as can be seen in Figure 2, Panel D, remained relatively constant for group Serial-Target (overall M = 2,570 ms) and group Simultaneous (overall M = 872 ms), consistent with participants in these groups responding at the point at which the target stimulus was presented. For participants in group Serial-Stimuli, however, RTs during the first block of trials had a mean of around 2,000 ms, before dropping quickly and terminating in the final block of training at a mean of around 1,000 ms. Thus, with training, participants in this group shifted their response from the time in the trials when just the target was presented to a time in the trials when just the correlated and uncorrelated stimuli were presented. That is to say, they had acquired a predictive response. A two-way mixed ANOVA of mean RT with the within-subjects factor of trial block (1-8) and the between-subjects factor of group (Simultaneous vs. Serial-Stimuli vs. Serial-Target) revealed a main effect of group, F(2, 49) = 250.64, ηp² = .91, p < .001, of trial block, F(7, 343) = 17.85, ηp² = .27, p < .001, and an interaction between these factors, F(14, 343) = 7.32, ηp² = .23, p < .001. Bonferroni-adjusted post hoc tests revealed that each group differed from each other group on each block (smallest mean difference = 305 ms, SE = 95.88, p = .008).

Panel E of Figure 2 shows dwell times to the correlated and uncorrelated stimuli from the test trials. These results largely confirm the observations from the training trials, with dwell times being longer to the correlated stimuli than to the uncorrelated stimuli in group Serial-Stimuli, t(17) = 3.73, p = .002, but not in group Simultaneous, t(17) = 1.30, p = .221, or group Serial-Target, t(17) = 1.50, p = .153. JZS Bayes factors for these three t values were 24.07, 1.99, and 1.59, respectively; these Bayes factors are in favor of the alternative, null, and null hypotheses, again respectively.

Questionnaire Data

Panel F of Figure 2 shows that the mean difference score for the correlated stimuli was higher than that for the uncorrelated stimuli in all three groups. A two-way ANOVA of mean difference rating with the within-subjects factor of stimulus type (correlated vs. uncorrelated) and the between-subjects factor of group (Simultaneous vs. Serial-Stimuli vs. Serial-Target) revealed a main effect of stimulus type, F(1, 35) = 238.99, ηp² = .87, p < .001, no effect of group, F(2, 70) = .02, ηp² = .001, p = .978, and no interaction, F(2, 70) = .18, ηp² = .005, p = .835.

Discussion

Experiment 2 reproduced the effect observed in Experiment 1 by demonstrating, again, that an attentional bias did not develop toward stimuli that were positively correlated with a copresent target event, even when participants had clearly learned the associative relationships between these events (group Simultaneous).
The same outcome was also observed when a serial, rather than a simultaneous, relationship was arranged between the correlated or uncorrelated stimuli and the target (group Serial-Target). An overt attentional bias was established toward stimuli that were correlated with a target only when participants were told that they could make a predictive response about the identity of the target before its presentation (group Serial-Stimuli). Together, these results are difficult to reconcile with simple attentional models of associative learning that use only associations or associative error to determine attentional biases to stimuli that are correlated with task-relevant events (e.g., Esber & Haselgrove, 2011; Le Pelley et al., 2011; Mackintosh, 1975). What these results suggest, instead, is that the prediction of a subsequent event is a key component for the allocation of learned attentional biases.

Experiment 3

It is possible that the training given to group Serial-Target and group Simultaneous in Experiment 2 was, in fact, sufficient to bias attention toward the correlated stimuli but that, for some reason, the conditions of stimulus training (or exposure) were not appropriate to reveal this bias and masked its expression. In other words, the conditions of training used to vary attention, between groups, were confounded with the conditions of testing used to detect that bias. Part of this concern is alleviated by the test trials in Experiment 2, in which both groups were exposed to the correlated and uncorrelated stimuli in the absence of the target. However, on these trials an attentional bias toward the correlated stimulus may have been masked, or interrupted, by a search for the (now absent) target. To explore this possibility, in Stage 1 of Experiment 3 participants first received identical training to that given to participants in group Simultaneous or group Serial-Target in Experiment 2. On the basis of the results observed in Experiment 2, we expected to observe no difference in the dwell times toward the correlated and uncorrelated stimuli in this stage of the experiment. To determine whether this training did, however, establish attentional biases to the correlated and uncorrelated stimuli that were, in some way, masked by the conditions of testing, all participants transferred to a second stage of the experiment in which the conditions of stimulus exposure and response instructions were the same as those given to group Serial-Stimuli in Experiment 2. Thus, in Stage 2, all participants were tested under the same circumstances, and also under circumstances known (from Experiment 2) to permit the detection of a learned bias in overt attention. A crucial manipulation in Stage 2 permitted us to evaluate whether any attentional bias established to the correlated cues in Stage 1 was unexpressed. For half of the participants within each of the two groups, the stimuli that were correlated with the target in Stage 1 remained correlated with the target in Stage 2 (and similarly, the stimuli that were uncorrelated with the target in Stage 1 remained uncorrelated with the target in Stage 2), generating groups Serial Congruent and Simultaneous Congruent.
For the remaining participants within the Serial and Simultaneous groups, the stimuli that were correlated with the target in Stage 1 became uncorrelated with the target in Stage 2 (and similarly, the stimuli that were uncorrelated with the target in Stage 1 became correlated with the target in Stage 2), generating groups Serial Incongruent and Simultaneous Incongruent (see Table 3). The logic behind this manipulation is that if attention was biased toward the correlated stimuli in Stage 1, but unexpressed, then in Stage 2 this bias should be revealed. This should particularly be the case in the two conditions where the contingencies between the stimuli and the targets remained the same between the two stages (that is to say, there should be a behavioral saving in groups Serial Congruent and Simultaneous Congruent) relative to the two groups for whom the contingencies between the stimuli and the targets were reversed between Stages 1 and 2. For groups Serial Incongruent and Simultaneous Incongruent, the prior (putatively masked) biases should hinder the expression of the attentional bias in Stage 2.

Participants

Each group consisted of 18 participants who were recruited from the University of Nottingham's School of Psychology and randomly assigned to one of the four experimental groups. Group Serial-Congruent consisted of 11 females (M ± SD age: 19.90 ± 1.51); group Serial-Incongruent consisted of 12 females (19.83 ± 1.64); group Simultaneous-Congruent consisted of 12 females (21.08 ± 2.11); and, finally, group Simultaneous-Incongruent consisted of 12 females (20.67 ± 3.23). Exclusion criteria for all groups were the same as in Experiment 1 and, as in Experiment 1, all participants had normal or corrected-to-normal vision and received course credit for their participation or a £3 inconvenience allowance.

Apparatus and Stimuli

The apparatus and stimuli used in Experiment 3 were identical to those used in Experiment 2. In Stage 1, each of the four trial types was presented 18 times in a random order. The spatial and temporal arrangement of the stimuli and trials for groups Serial-Congruent and Serial-Incongruent was identical to that given to group Serial-Target in Experiment 2. The spatial and temporal arrangement of the stimuli and trials for groups Simultaneous-Congruent and Simultaneous-Incongruent was identical to that in group Simultaneous in Experiment 2. There were no probe trials (in which the target was omitted) in this stage of the experiment. In Stage 2, the spatial and temporal arrangement of the stimuli and trials was the same for all groups, and identical to that given to group Serial-Stimuli in Experiment 2. At the end of Stage 2 for each group, the four trial types were presented six times in a random order in the absence of a target stimulus, and participants were asked to report which target stimulus they thought accompanied the compound during training.

Procedure

The eye-tracker was calibrated and eye movement data were recorded in the same manner as described in Experiment 2. After performing the calibration, participants read on-screen instructions for Stage 1. For participants in all four groups, the Stage 1 instructions read: "Each trial will begin with a fixation cross in the center of the screen. This will be followed by a series of letters. Your task is to indicate whether the letter 'Q' or 'P' is present in the array. Q = left arrow, P = right arrow. Pay attention to which stimuli are paired together because at the end you will have a memory test.
Press any key to start the experiment." Each trial during Stage 1 began with the presentation of a fixation cross in the center of the screen for 1,000 ms. After this, for groups Simultaneous Congruent and Simultaneous Incongruent, participants were presented with a triplet of the correlated, uncorrelated, and target stimuli for 2,000 ms. For groups Serial Congruent and Serial Incongruent, the correlated and uncorrelated stimuli were presented for 2,000 ms, followed by the target stimulus for 2,000 ms. For all groups, these sequences were followed by an interstimulus interval of 1,000 ms before the trial recycled with the fixation cross on the next trial. Upon completing Stage 1, participants were presented with the following instructions: "In the next stage you will see two letters followed by a target letter, but in addition to indicating its identity, this time you can anticipate whether the target will be a 'P' or a 'Q' by pressing early. Press the space bar to continue." Each trial in Stage 2 began with the presentation of a fixation cross in the center of the screen for 1,000 ms. Participants were then presented with the correlated and uncorrelated stimuli for 2,000 ms; this compound was followed by the target stimulus for 2,000 ms, which was followed by an interstimulus interval of 1,000 ms before the trial recycled to the fixation cross for the next trial. Upon completing Stage 2, participants received a final series of test trials, preceded by the instructions: "In the final stage of the experiment you will be presented with two letters but they will not be followed by a target letter. Based on these you need to guess which letter should have been paired with them. Press the space bar to continue." In keeping with Experiment 2, for each group, the stimuli remained on screen, irrespective of the presence or absence of responding, until the termination of the trial. After participants completed this stage of the experiment, they were given a questionnaire to complete that tested their understanding of the relationship between the correlated and uncorrelated stimuli and the target stimuli. The questionnaire was the same as that used in Experiment 2.

Results

Exclusion criteria were identical to those used in Experiment 1, and again zero participants were excluded.

Stage 1

Behavioral Data. Participants in groups Simultaneous Congruent and Simultaneous Incongruent were treated identically in Stage 1, as were groups Serial Incongruent and Serial Congruent. The data for these two pairs of groups were combined for the analysis of Stage 1. During Stage 1 the mean proportion of correct responses was high (>.9) from Block 1, and increased only slightly over the course of the experiment. A two-way mixed measures ANOVA of individual proportions of correct responses with the within-subjects factor of trial block (1-4) and between-subjects factor of group (Simultaneous vs. Serial) revealed a significant main effect of trial block, F(3, 177) = 3.16, ηp² = .04, p = .034, and group, F(1, 70) = 6.47, ηp² = .09, p = .013, with the mean proportion correct being higher in the serial groups than in the simultaneous groups. The interaction between block and group was not significant, F(3, 177) = 2.53, ηp² = .01, p = .657. The mean RT decreased slightly for the simultaneous and serial groups as training progressed. A two-way mixed measures ANOVA of individual RTs with the within-subjects factor of trial block (1-4) and between-subjects factor of group (Simultaneous vs. Serial) revealed a significant main effect of trial block, F(2, 170) = 12.42, ηp² = .15, p < .001, and group, F(1, 70) = 4.61, ηp² = .06, p = .035, with RTs being faster in the simultaneous groups. The interaction between block and group was not significant, F(2, 170) = 2.73, ηp² = .04, p = .057.

Eye Gaze Analysis. In keeping with the outcome of Experiment 2, Panels A, B, C, and D of Figure 3 demonstrate that there was no difference in the mean proportion of dwell time directed toward the correlated and uncorrelated stimuli for any of the groups over the trial blocks of Stage 1. A four-way mixed measures ANOVA of mean proportion of dwell time with the within-subjects factors of trial block (1-4) and stimulus type (correlated vs. uncorrelated), and between-subjects factors of group (Simultaneous vs. Serial) and congruency (Congruent vs. Incongruent), revealed a significant main effect of group, F(1, 34) = 68.16, ηp² = .67, p < .001; all remaining main effects and interactions were nonsignificant, largest F(1, 34) = 2.23, ηp² = .06, p = .144.

Stage 2

Behavioral Data. Panels A and B of Figure 4 show the mean proportion of correct responses for the four groups during Stage 2. Performance improved across the four blocks of training and, across the majority of these blocks, accuracy was superior in the congruent groups, regardless of whether training in Stage 1 was serial or simultaneous. A three-way mixed measures ANOVA of mean proportion of correct responses with the within-subjects factor of trial block (1-4), and between-subjects factors of group (Simultaneous vs. Serial) and congruency (Congruent vs. Incongruent), revealed significant main effects of trial block, congruency, and group, smallest F(1, 31) = 6.42, ηp² = .17, p = .017. There was a significant Block × Group interaction, F(3, 93) = 3.58, ηp² = .10, p = .017, and a significant Block × Congruency interaction, F(2.41, 74.57) = 4.39, ηp² = .12, p = .011. However, the interaction between congruency and group, and the three-way interaction, were not significant, largest F(3, 93) = 1.66, ηp² = .05, p = .182. Simple main effects analysis of the interaction between block and group revealed that the difference in proportion of correct responses between group Simultaneous and group Serial was significant in Blocks 1 and 2, smallest F(3, 93) = 6.90, ηp² = .33, p = .020, with group Simultaneous having a higher proportion of correct responses. Simple main effects analysis of the interaction between block and congruency revealed that the difference in proportion of correct responses between group Congruent and group Incongruent was significant in Blocks 1, 2, and 3, smallest F(2.41, 74.57) = 11.01, ηp² = .44, p = .005, with group Congruent having a higher proportion of correct responses. This final interaction, as well as the main effect of congruency, confirms that our manipulation of congruency between Stages 1 and 2 had a detectable impact upon behavior, presumably because of a direct transfer of associative strength from the stimuli that remained correlated with the target in Stages 1 and 2 in the two congruent groups. Panels C and D of Figure 4 show that in Stage 2 the mean RT decreased across training in both the simultaneous and serial groups, but was fastest in the congruent conditions, again confirming that switching the contingency of the correlated and uncorrelated stimuli between experimental stages was effective, and that participants were sensitive to the difference in correlation of the stimuli during Stage 1 of the experiment.
A three-way mixed measures ANOVA of mean RT with the within-subjects factor of trial block (1-4), and between-subjects factors of group (Simultaneous vs. Serial) and congruency (Congruent vs. Incongruent), revealed significant main effects of trial block, group, and congruency, smallest F(2.01, 58.19) = 4.30, ηp² = .13, p = .018, and a significant Block × Group interaction, F(3, 87) = 4.30, ηp² = .13, p = .007. All remaining interactions were nonsignificant, largest F(1, 29) = 1.26, ηp² = .04, p = .270.

Eye Gaze. Panels A, B, C, and D of Figure 5 show that a larger proportion of dwell time was directed toward the correlated stimuli in all groups in Stage 2, irrespective of whether participants were in the congruent or incongruent groups, or whether training in Stage 1 was conducted using a serial or simultaneous procedure. A four-way mixed measures ANOVA of mean proportion of dwell time with the within-subjects factors of trial block (1-4) and stimulus type (correlated vs. uncorrelated), and between-subjects factors of group (Simultaneous vs. Serial) and congruency (Congruent vs. Incongruent), revealed significant main effects of block, stimulus type, and group, smallest F(1, 34) = 4.62, ηp² = .12, p = .039; the remaining main effect of congruency was not significant, F(1, 34) = 3.30, ηp² = .09, p = .078. The scaled JZS Bayes factor for this effect was 1.038, in favor of the null. There was a significant Block × Stimulus Type interaction, F(3, 102) = 3.19, ηp² = .09, p = .027, but all remaining interactions were nonsignificant, largest F(1, 34) = 2.61, ηp² = .07, p = .116.

Panels A and B of Figure 6 show the mean proportion of dwell time directed to the correlated and uncorrelated stimuli during the test trials for all groups in Stage 2. Overall, dwell times in the incongruent groups were longer than in the congruent groups, perhaps reflecting the uncertainty associated with the introduction of the manipulation in Stage 2 (Pearce & Hall, 1980). In keeping with the data from the training trials, correlated stimuli attracted the largest proportion of dwell time in groups trained simultaneously or serially in Stage 1. A three-way repeated measures ANOVA of mean proportion of dwell time with the within-subjects factor of stimulus type (correlated vs. uncorrelated) and between-subjects factors of group (Simultaneous vs. Serial) and congruency (Congruent vs. Incongruent) revealed a significant main effect of stimulus type, F(1, 68) = 22.77, ηp² = .25, p < .001, group, F(1, 68) = 4.71, ηp² = .07, p = .034, and congruency, F(1, 68) = 14.19, ηp² = .17, p < .001. However, all interactions were nonsignificant, largest F(1, 68) = 2.64, ηp² = .04, p = .109.

Questionnaire Data. Difference scores for all four groups were calculated in a manner identical to Experiment 2 and are shown in Panels C and D of Figure 6. A three-way repeated measures ANOVA of mean difference scores with the within-subjects factor of stimulus type (correlated vs. uncorrelated) and between-subjects factors of group (Simultaneous vs. Serial) and congruency (Congruent vs. Incongruent) revealed a significant main effect of stimulus type, F(1, 68) = 149.83, ηp² = .688, p < .001, but no effect of group or congruency, largest F(1, 68) = 2.12, ηp² = .03, p = .145.
Significant interactions were found between Stimulus Type × Congruency, F(1, 68) = 7.13, ηp² = .10, p = .009, and Stimulus Type × Group, F(1, 68) = 8.01, ηp² = .11, p = .006, but the three-way interaction was nonsignificant, F(1, 68) = .146, ηp² = .002, p = .704. Simple main effects analysis of the interaction between stimulus type and congruency revealed that the difference in difference rating between correlated and uncorrelated stimuli was significant in both the congruent and incongruent groups, smallest p < .001. Simple main effects analysis of the interaction between stimulus type and group revealed that the difference in difference rating between correlated and uncorrelated stimuli was significant in both the simultaneous and serial conditions, smallest p < .001. Thus, in keeping with the previous experiments, participants' knowledge of the associative relations between the correlated or uncorrelated stimuli and the target was good.

Discussion

The eye-tracking data from Stage 2 of the current experiment reproduced the effect observed in group Serial-Stimuli from Experiment 2: when participants were required to make a predictive response about the identity of a subsequent task-relevant target, their overt attention came to be biased toward the stimuli correlated with the target. The data from Stage 1 reproduce the effects observed in groups Serial-Target and Simultaneous from Experiment 2: when a simultaneous or sequential relationship between correlated stimuli and target was established, without the requirement of a predictive response, there was no indication of the acquisition of an attentional bias. Of most interest, the current experiment revealed that when the predictive contingencies of the correlated and uncorrelated stimuli were swapped between Stages 1 and 2, there was no disruption of the bias in overt attention to the correlated stimuli in Stage 2. In fact, numerically, the difference in dwell times between the correlated and uncorrelated stimuli was more substantial in the Incongruent groups than in the Congruent groups. Thus, it does not appear that the simultaneous and sequential training in Stage 1 established an attentional bias that, for whatever reason, went undetected. If this were the case, then we would anticipate seeing an attenuation of the difference in dwell time between the correlated and uncorrelated stimuli in the Incongruent groups in Stage 2, which was not observed. It is worth reiterating that the behavioral data (mean proportion correct and RTs) and the overall dwell times were different between the incongruent groups and the congruent groups in Stage 2, thus providing a confirmation of the effectiveness of this manipulation. Finally, all groups showed appropriate knowledge about the associative relationships between the correlated stimuli and the targets. The results of the current experiment, together with Experiments 1 and 2, suggest that mere association or associative error is insufficient to result in learned changes in overt attention.

General Discussion

The purpose of the experiments reported here was to investigate the role of prediction in learned predictiveness. Prior studies of this phenomenon (e.g., Le Pelley et al., 2011) have revealed that a stimulus correlated with a task-relevant event comes to control more overt visual attention than a stimulus that is task irrelevant.
These studies have been taken to support the "predictiveness principle": the idea that, through learning, stimuli that are "predictive" of events of importance come to control relatively more attention. However, as we have noted, formal models that are used to simulate these effects (e.g., Esber & Haselgrove, 2011; Le Pelley, 2004; Mackintosh, 1975) use association and associative error to bridge the relationship between learning and attention, and make no reference to time. Consequently, these models make no explicit dissociation between sequentially and simultaneously presented events, and are silent with respect to the actual role of prediction in learned predictiveness.

Experiment 1 revealed that, despite participants having appropriate associative knowledge about the relationship between stimuli that were correlated or uncorrelated with a target stimulus, no bias in visual dwell time was observed toward the correlated stimulus when all these stimuli were presented simultaneously. This result implies that mere association between stimuli is insufficient to modify learned variations in attention. Experiment 2 reproduced this effect in group Simultaneous, and also revealed that arranging for the correlated and uncorrelated stimuli to precede the target stimulus was, in and of itself, also insufficient to result in the acquisition of a bias in visual dwell time to the correlated stimuli (group Serial-Target). This was, again, in the presence of appropriate associative knowledge about the relationship between these stimuli and the target. Thus, a veridical, predictive relationship between stimuli is insufficient for learning to modify attention. Only when participants were asked to make a predictive response, that is to say a response before the presentation of the target, were longer dwell times acquired to the correlated stimulus relative to the uncorrelated stimulus. Experiment 3 confirmed these findings, and also provided evidence that the lack of attentional biases in groups Simultaneous and Serial-Target was not a consequence of a confound between the conditions of training and the conditions of testing.

Together, these results imply a crucial role for predictive responding in the etiology of the learned predictiveness effect. More specifically, they suggest that responding needs to be contiguous with a stimulus at a time when the target is not present for learning to change overt visual attention. To make this clear, consider Figure 7, which shows a timeline of one trial for group Serial-Stimuli, group Simultaneous, and group Serial-Target from a relatively late part of training in Experiment 2. Note that the temporal relationship between the correlated stimulus and the target stimulus was equivalent in group Serial-Stimuli and group Serial-Target. Thus, the contiguity between these events alone was not sufficient to explain the bias in visual attention evident in group Serial-Stimuli. Furthermore, relative to the onsets of the correlated and uncorrelated stimuli, responses were produced at comparable times in group Serial-Stimuli and group Simultaneous. Consequently, the contiguity between the correlated stimulus and the response alone is not sufficient to explain the bias in visual attention evident in group Serial-Stimuli but not in group Simultaneous.
To explain the attentional bias toward the correlated stimulus observed in group Serial-Stimuli, but not in the other groups, we postulate that the target-relevant response must be performed contiguously with the correlated stimulus, at a time when the target is not present. It is relatively straightforward, algorithmically, to modify the equations provided by associative models of learning to realize the conceptual description provided above. For example, taking Mackintosh's theory as a case in point, we could stipulate that Equations 2a and 2b described at the beginning of the article only take effect when the task-relevant response occurs at a time before the onset of the task-relevant target (i.e., t_CR < t_k). In the context of Mackintosh's theory this would mean that associative learning would still take place (as Equation 1 is not limited by the temporal relationship between response and target); however, this learning will not be translated into changes in attentional control unless the task-relevant responding preceded the presentation of the target stimulus. Consequently, this modification to Mackintosh's theory would explain why all three groups in Experiment 2 demonstrated good, and equivalent, knowledge of the associative relationships between the correlated or uncorrelated stimuli and the target, but why only group Serial-Stimuli translated this knowledge into a change in attention.

There are, however, some problems with this algorithmic fudge. The first is relatively minor, as it applies only to the modification as applied to the Mackintosh (1975) model. Note that learning, in Equation 1, is driven by an individual error term (e.g., Bush & Mosteller, 1951), rather than the summed error term used more standardly (e.g., Rescorla & Wagner, 1972). The summed error term is only used, in Mackintosh's theory, to change attention to stimuli in Equations 2a and 2b. Consequently, effects such as blocking (Kamin, 1968) or overshadowing (Pavlov, 1927) are driven only by changes in the allocation of attention to stimuli (specifically, a reduction in attention to redundant stimuli). It follows, then, that if we restrict attentional change to circumstances in which task-relevant responding precedes presentation of the target, then overshadowing and blocking should not be evident under circumstances in which a conditioned, or target-relevant, response coincides with the unconditioned, or target, stimulus; a prediction that is demonstrably false (e.g., Balleine et al., 2005; Dwyer et al., 2011). Fortunately, this problem can be overcome if we relax the assumption that the summed error term is applied only to the mechanism that changes the allocation of attention. If a summed error term is applied at the level of the learning algorithm too (e.g., Esber & Haselgrove, 2011; Le Pelley, 2004), then more than one source of cue competition is available, and effects such as simultaneous-stimulus blocking and overshadowing can be explained, albeit in a manner that does not permit the stimuli used in these studies to undergo a change in attention.
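To make this gating proposal concrete, the following is a minimal sketch of a Mackintosh-style update rule in which attentional change is contingent on the task-relevant response preceding target onset. This is our own illustration rather than code from any published model; the stimulus names, parameter values, and alternating trial structure are simplifying assumptions.

```python
# Minimal sketch of a gated Mackintosh-style rule. Two cues (C1, C2)
# are perfectly correlated with outcome identity; a third (U) appears
# on every trial and is uncorrelated with it. All values illustrative.

def run_group(response_precedes_target, n_trials=200,
              theta=0.3, gamma=0.05):
    outcomes = ["O1", "O2"]
    cues = ["C1", "C2", "U"]
    V = {c: {o: 0.0 for o in outcomes} for c in cues}   # associative strength
    alpha = {c: 0.5 for c in cues}                      # attention

    for t in range(n_trials):
        o = outcomes[t % 2]                             # outcome on this trial
        present = ["C1" if o == "O1" else "C2", "U"]
        lam = {oo: 1.0 if oo == o else 0.0 for oo in outcomes}

        # Individual prediction error per presented cue (current outcome)
        err = {c: abs(lam[o] - V[c][o]) for c in present}

        # Equation 1: associative change, individual error term
        for c in present:
            for oo in outcomes:
                V[c][oo] += theta * alpha[c] * (lam[oo] - V[c][oo])

        # Equations 2a/2b, gated: attention changes only when the
        # task-relevant response precedes target onset (t_CR < t_k)
        if response_precedes_target:
            for c in present:
                best_other = min(err[x] for x in present if x != c)
                if err[c] < best_other:
                    alpha[c] = min(1.0, alpha[c] + gamma)   # Equation 2a
                else:
                    alpha[c] = max(0.0, alpha[c] - gamma)   # Equation 2b
    return alpha

print(run_group(True))    # attention diverges: C1/C2 rise, U falls
print(run_group(False))   # attention unchanged, though V is still learned
```

On this sketch, the gated and ungated runs acquire the same associative knowledge (V), but only the gated run translates it into an attentional bias, mirroring the dissociation between associative knowledge and dwell time reported above.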
The second problem with the modification described above is more fundamental and applies to theories of learning and attention more generally. As we have seen, most models of attentional learning (e.g., Esber & Haselgrove, 2011; Le Pelley, 2004; Mackintosh, 1975), which can successfully explain a broad range of learning phenomena, do so without taking into account how time is represented within the architecture of the model; they are so-called trial-based models. Consequently, before any such algorithmic modification may be applied, one must first begin to tackle the question of precisely how the timing of stimuli and their associated responses will be represented. A common approach to this problem is to acknowledge that, rather than discrete trials being the smallest unit of temporal resolution within learning, shorter windows of associability (epochs, or "bins"; e.g., Moore & Stickney, 1980) are assumed to successively open and close as time passes, with experimental events potentially spanning multiple epochs. Applying this assumption may provide a resolution of the results of the experiments that we report here.

Consider Figure 8, which, for the sake of illustration and simplicity, redraws Figure 7 to exemplify how a single trial may be represented across six different windows of associability that open and close during this series of experimental events. We make the assumption that associations between any experimental events (stimuli or responses) will be more successfully acquired when those events occupy the same epoch (e.g., Moore & Stickney, 1980). We also make the assumption (which we will discuss later) that attention to stimuli is modified not on the basis of their association with other stimuli, but on the basis of their association with the task-relevant response. On the basis of these assumptions, it can be seen that in each group there is at least one epoch in which the correlated stimulus and the target stimulus are both coactive (epoch 4 for the two Serial groups, and epochs 1 to 4 for group Simultaneous). Consequently, associative learning will have the opportunity to take place between these stimuli, and a test of associative knowledge (such as the test trials presented at the end of each experiment) has the opportunity to reveal this. Similarly, it can also be seen that the correlated stimulus and the response are both coactive (in epoch 2 for groups Serial-Stimuli and Simultaneous, and in epoch 4 for group Serial-Target). Note, however, that for groups Simultaneous and Serial-Target the correlated stimulus and the response are coactive within the same epoch as the target stimulus. On the basis of most learning rules, therefore, the association between the correlated stimulus and the response will be overshadowed by the target stimulus. This will not be the case in group Serial-Stimuli, however, because for this group the target is not presented during an epoch in which the correlated stimulus and the response are coactive; consequently, the influence of overshadowing by the target will be far less. If, as we have postulated above, attentional control by stimuli (in this case the correlated stimulus) is a function of their association with the task-relevant response, then what should be observed is the acquisition of an attentional bias in group Serial-Stimuli, but not in the remaining two groups, which is of course the result we observed.
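A minimal numerical sketch of this epoch-based account is given below; again, this is our own illustration rather than code from the article. Stimulus-response learning follows a summed-error rule over whichever events share the epoch occupied by the response; the epoch layouts, the target's greater salience, and all parameter values are simplifying assumptions.

```python
# Toy simulation of the epoch-based overshadowing account. On each
# trial one of two responses (R1/R2) is required; cue C1 (or C2) is
# correlated with the required response, U is not, and in groups
# Simultaneous and Serial-Target the target (T1/T2) shares the epoch
# in which the response occurs. All values illustrative.

def run_group(group, n_trials=200, beta_cue=0.2, beta_target=0.4):
    responses = ["R1", "R2"]
    events = ["C1", "C2", "U", "T1", "T2"]
    # S-R strengths: attention to a stimulus is assumed to track the
    # strength of its association with the task-relevant response
    V = {e: {r: 0.0 for r in responses} for e in events}
    beta = {"C1": beta_cue, "C2": beta_cue, "U": beta_cue,
            "T1": beta_target, "T2": beta_target}

    for t in range(n_trials):
        r = responses[t % 2]                       # required response
        cue, target = ("C1", "T1") if r == "R1" else ("C2", "T2")
        coactive = [cue, "U"]                      # events in the response's epoch
        if group != "Serial-Stimuli":              # target shares that epoch
            coactive.append(target)
        for rr in responses:
            lam = 1.0 if rr == r else 0.0
            error = lam - sum(V[e][rr] for e in coactive)   # summed error
            for e in coactive:
                V[e][rr] += beta[e] * error
    return {"correlated": round(V["C1"]["R1"], 2),
            "uncorrelated": round(V["U"]["R1"], 2)}

for g in ["Serial-Stimuli", "Simultaneous", "Serial-Target"]:
    print(g, run_group(g))
```

The qualitative pattern is that the correlated stimulus acquires a strong association with the response only in group Serial-Stimuli; in the other two groups the target, occupying the same epoch, claims most of the available associative strength, so the basis for an attentional bias toward the correlated stimulus is much weaker.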
Our proposed mechanism for this is a combination of standard associative principles (in this case overshadowing) operating at the epoch level, a proposal that is not particularly controversial (McLaren & Mackintosh, 2000; Vogel et al., 2003), and an attentional modulation mechanism that is sensitive to the correlation of stimulus and task-relevant responding. It is to this second proposal that we now turn our attention.

While associative theories typically take the role of attention to be one of permitting organisms to better select stimuli on the basis of their validity as predictors of other, subsequent, stimuli (e.g., an outcome), an alternative view of attention is afforded by a number of cognitive models, which instead emphasize the primary role of attention to be the control of action (e.g., Allport, 1989; Norman & Shallice, 1986). By placing the emphasis on the relationship between stimuli and responses, the proposals outlined here can be viewed as consistent with the idea that attention is bound to the intended actions of the participant (e.g., Engel et al., 2013). For example, according to premotor theories of attention, the selection of sensory information is determined by current action plans (Rizzolatti et al., 1987). Notably, Craighero et al. (1999) state the general position as one in which "Orienting of attention implies an activation of basic sensory-motor circuits according to the action goal. Attention results, therefore, from an internal representation of the required response during the interval between cue presentation and target presentation" (p. 1690). Similarly, so-called late-selection models of attention (e.g., Deutsch & Deutsch, 1963; Duncan, 1980; Kahneman, 1973) assume that attentional selection, or filtering, occurs at a relatively late stage in information processing (cf. Broadbent, 1958). According to models of this class, the selection of stimuli for attention involves more complex, "categorical" information than mere physical stimulus properties. As Deutsch and Deutsch note, "all sensory messages which impinge on the organism are perceptually analyzed at the highest level" (p. 85), and as Quinlan and Dyson (2008) note, in their discussion of late selection models more generally, "It is as if everything is identified but that the critical constraints relate to responding" (p. 286). The proposal we suggest here, in which learned changes in attention are a consequence of stimulus-response associations, concords with the idea that the purpose of attention is to guide actions.

Perhaps this position has some face validity too. If an organism finds itself engaged in a task whose constraints require the selection of information, this may well be because the limits of processing capacity have been reached, or at least approached. Under these circumstances, a relatively automatic mechanism for determining how attention should be allocated seems necessary. A direct association between stimulus and response fulfills this requirement. By proposing that learned variations in stimulus attention are a function of the strength of the association between that stimulus and the task-relevant response, we align the current results with a literature suggesting that attentional selection of reward-associated stimuli is a consequence of an attentional habit (e.g., Anderson, 2016; Luque et al., 2017).
According to this position, an attentional habit is conceptually similar to the architecture (stimulus-response) of a habit as considered in instrumental-learning paradigms. For example, lever-press responding for food comes under the control of a particular stimulus (e.g., the sight of the lever), an association that is reinforced by reward (Dickinson, 1985). When applied to an attentional habit, the notion is that eye movements can be thought of as the instrumental response that has come under the control of experimental stimuli (e.g., cues predictive of target outcomes). The associative structure of the proposals we describe here is in keeping with this general view; however, our suggestion is that the association between stimuli and task-relevant motor responses (such as key presses), in addition to eye movements, also modifies overt attention.

A counterpoint to this possibility is worthy of note, however. In studies of value-modulated attentional capture (e.g., Anderson et al., 2011a, 2011b), a stimulus that is established as predictive of a large monetary reward will come to attract more visual attention than a stimulus that is predictive of a smaller monetary reward; importantly, this effect can be observed even when orienting responses to the stimulus associated with the high-value reward are detrimental to ongoing task-relevant performance. It is difficult to see how a stimulus-response analysis, such as that developed here, could account for these results.

Our analysis, thus far, has focused on the role of differences in the timing of responding during stimuli in the etiology of learned predictiveness. However, it is important to consider that the three groups in Experiment 2, and the conditions of Stage 1 and Stage 2 in Experiment 3, also differed in terms of the instructions that participants were provided with, and this difference is an equally good predictor of which conditions or groups demonstrated an attentional bias to the correlated stimuli. Specifically, only when participants were instructed that they could anticipate the arrival of the target stimulus, and press the response key early, did dwell times come to be longer to the correlated stimuli relative to the uncorrelated stimuli. It is therefore worthwhile considering whether variations in the instructions were responsible for the effects observed in the current studies.

Evidence for the role of instruction in learned variations in attention has been provided by Mitchell et al. (2012). In their Experiment 2, participants were required to learn to predict the shape that a tree would grow into depending on the sets of cross-pollinating seeds that were used. Their results revealed a standard learned predictiveness effect, in that dwell times were longer to the stimuli (seeds) that were established as predictive of the outcome (tree shape) than to stimuli that were irrelevant to the task. However, this bias in dwell time could be reversed in a second stage of the experiment if participants were given instructions informing them that it was highly unlikely that the seeds that had controlled the shape of the tree previously would influence the shape of the trees from now on. Subsequent studies have also revealed that instructions can influence learned predictiveness; however, a residual bias toward previously predictive cues can survive the instructions (Don & Livesey, 2015; Shone et al., 2015).
Mitchell et al. account for their results by suggesting that the learned predictiveness effect is a consequence of participants' controlled attention being determined (in part) by a "causal model" of the scenario described in the cover story of the task used in their experiment, a model that can be revised on the basis of instruction (see also Mitchell et al., 2009). Consequently, it is clear that the nature of the instructions given to participants can have a substantial influence on how learned variations in overt visual attention are expressed toward stimuli that are correlated or uncorrelated with task-relevant goals.

One possible way in which instructions may have impacted overt visual attention in the current studies is through motivating differing strategies. For example, in both group Simultaneous and group Serial-Target in Experiment 2, participants were not required to predict a subsequent event on each trial because their task instructions emphasized responding to the target stimulus when it was seen. On this basis, then, changes in attention may only take place when participants are required to predict a future event, as was the case in group Serial-Stimuli in Experiment 2.

One way to dissociate this analysis from the analysis based upon S-R overshadowing presented earlier would be to conduct a backward learned predictiveness study. Here, participants would be presented with outcomes or targets before the presentation of compounds of correlated and uncorrelated stimuli, and instructed to respond as to the identity of the target during the compound. According to the instructional analysis just developed, under these circumstances we would expect no attentional bias to develop to the correlated stimuli, as the instructional requirement is to make a response about the identity of a past event rather than a prediction about a future event. However, according to the S-R overshadowing analysis, an attentional bias to the correlated stimulus should still develop because the response is being made contiguously with the presentation of the correlated stimulus, but in the absence of the target stimulus, and hence the contribution of overshadowing of the correlated S-R association would be less.²

In any case, the current experiments reveal that in order for learning to modify attention to a stimulus that is correlated with a target, task-relevant responding must take place at a time before target presentation. Mere "association with some other immediately interesting thing" (James, 1890) is insufficient for a stimulus to derive attention. It seems, then, that learned predictiveness is appropriately named, but perhaps not for the reason suspected by associative theories of learning.
CATHOLIC PARTISANSHIP IN THE 2020 PRESIDENTIAL ELECTION: DEMOGRAPHIC AND CULTURAL CLEAVAGES 327 CATHOLICS AND CONTEMPORARY AMERICAN political strategists and analysts alike reflects the continued power of Dionne's paradox, 4 despite frequent scholarly warnings about "the myth of the Catholic vote." 5 And there was no less interest as the 2022 mid-term elections approached.Catholics constitute over one-fifth of American voters, but despite being part of a single hierarchical religious institution are hardly a unified bloc in national elections. 6And although there are obvious methodological limitations to focusing on a single religious tradition, scholars are still justifiably intrigued by the task of delineating the social underpinnings of Catholic partisanship. 7hat partisanship is a continually moving target as a result of dramatic changes in the Catholic population over the past half century.First, the Catholic community has gone from overwhelmingly white to multi-ethnic, as Latinos constitute at least of a third of Catholic parishioners, while immigration from Southeast Asia and domestic conversions have increased the numbers of Asian and Black Catholics. 8eanwhile, Catholics of European origin have increasingly moved from working to middle class in education and income: fewer than one in twenty white Catholics were college graduates in the 1950s, but well over one-third are today. Furthermore, both white and "new ethnic" Catholics are less "metropolitan" and more geographically dispersed than in the 1940s heyday of big-city Northeastern and Midwestern concentrations.White Catholics are aging as well; the percentage over 65 years of age has more than doubled in the past six decades, reaching almost one-quarter today.As in other religious groups, Catholic marital patterns have also changed: in the 1950s seven of eight adult Catholics were married, compared to just over one-half today.White Catholics today also differ religiously from their 1950s counterparts.After Vatican II Catholic observance dropped significantly, from about 70 percent "regular" attendance in the 1960s to 44 percent in the 1970s, and to around a third in 2020, below that of white evangelicals and black Protestants. 9nd theological conflicts among Catholics have intensified as "traditionalists" face off against "progressives" (and even Pope Francis), producing what The Economist called "the fight for Catholic America." 10 Not surprisingly, given these socioeconomic, ethnic, demographic and religious transformations, Catholic political behavior has also undergone profound changes. Long the bulwark of the New Deal coalition, 11 white Catholics have recently distributed themselves more widely across the political spectrum, both in party identification and vote choice.White Catholics were strongly Democratic in the 1940s, with that partisanship reaching a peak in the Kennedy election of 1960 but receding significantly thereafter.By 2012 they were almost equally distributed on the partisan spectrum; the historic Democratic advantage had disappeared.In comparison, their Latino brethren -and other ethnic groups -have maintained solid Democratic attachments through the past three decades, as their growing numbers augured rising political importance.But even those ties were under threat in the 2020s. 
12e Demographic Bases of Catholic Partisanship In this essay, we examine the demographic bases of Catholic party ties, testing four perspectives used by Shafer and Spady 13 to identify the social underpinnings of partisanship.These perspectives stress (1) social class and education; (2) racial and ethnic influences; (3) "domestic roles," such as gender, sexuality, family structure and residence; and, (4) religiosity and theological cleavages. 14As students of religion and politics will immediately observe, these categories correspond almost precisely to those used by most analysts in explaining the changing partisanship of Catholics. 15e consider each perspective in slightly greater detail before moving to an empirical examination of Catholic partisanship in 2020. Social Class and Education Most accounts of partisan change among Catholics have focused on the role of economic status, especially among whites.Why have Catholics changed their electoral behavior since the Democratic ascendancy of the 1960s?"The most obvious answer is that they occupy a more elevated position in the socioeconomic order." 16s European-origin Catholics climbed the economic ladder and achieved higher education, they began to desert their ancestral party, voting more frequently for Republicans (especially for higher offices) and shifting their identification away from the Democrats.Upward mobility presumably fostered more conservative attitudes on role of government and social welfare issues, leading to a shift toward the GOP.Blue-collar Catholics who remained part of the institutional outposts of the New Deal, such as labor unions, were less prone to defect, at least for a time. 11 See: Robert Axelrod, Where the Votes Come From: An Analysis of Electoral Coalitions, 1952-1968, American Political Science Review, Vol.66, No. 1, 1972, pp.11-20.12 Ruy Teixeira, "The Democrats' Hispanic Voter Problem, " The Liberal Patriot, 9 December 2021.Available at: https://theliberalpatriot. substack.com/p/the-democrats-hispanic-voter-problem-dfc?s=r (accessed April 4, 2023 Although educational advancement historically operated in tandem with economic mobility in producing less Democratic loyalty, in recent years scholars find education playing an independent role, especially among those with postgraduate work.Highly educated Americans have increasingly migrated toward the Democrats, perhaps as a result of their social liberalism, joining many with more modest educations in the Democratic ranks.On the other hand, many voters with intermediate levels of education lean Republican, especially in the "Trump era."Presumably, Catholics should exhibit the same tendencies. Race and Ethnicity Despite the collective image of American Catholics as an overwhelmingly Democratic constituency throughout history, there have always been significant ethnic and racial differences in support for the party.Ethnocultural historians found that Irish Catholics usually excelled in Democratic propensities, while their Italian, German and some other ethnic brethren were often less enthusiastic. 17Such skepticism was strongest in areas where the Irish monopolized local politics or Republican machines offered an open door. 
18And although much of the scholarly interest on the growing ethnic complexity of American Catholics has focused on Latinosthemselves an internally diverse group 19 -there are substantial contingents of Black, Asian, and "other race" Catholic voters.Many controversies over faith and practice within today's Church have important ethnic implications, so we might expect to see such patterns in political life, as ethnic groups adopt varying partisan loyalties. Domestic Roles and Locations As Shafer and Spady argue, "domestic roles" have emerged as another source of American ideological and partisan differences. 20These influences have appeared among Catholics as well in recent decades.The gender gap solidified by the 1990s, with men substantially less Democratic than women; some scholars found that married citizens and those with young children were more likely to be Republicans.Sexual orientation also plays a role, with "straight" Americans locating on the GOP side and sexual "minorities" supporting the Democrats.In the 1980s and 1990s younger Catholics were less Democratic, although this pattern may have reversed in the 21st century, as young people generally have favored the Democrats.Such age cohort CATHOLICS AND CONTEMPORARY AMERICAN POLITICS Thomas V. Feingold, James L. Guth • CATHOLIC PARTISANSHIP IN THE 2020 PRESIDENTIAL ELECTION: DEMOGRAPHIC AND CULTURAL CLEAVAGES • pp (327-351) differences may reflect the effects of maturing during specific political eras, but for Catholics they may also be shaped by seminal religious events, such as Vatican II and its aftermath. 21Some evidence suggests regional differences: the growing Catholic population of the South has been more inclined toward the GOP than its counterparts in traditional Catholic heartlands of the Northeast and Midwest.At the same time, urban-rural differences may also have intensified in recent years. 22Finally, scholars have found that veterans of the armed services tend to favor the GOP. Religiosity and Religious Values Although most work on Catholic political change has focused on socioeconomic, ethnic, and domestic influences, early social science studies saw religion itself as a key determinant of partisanship: "Catholics vote differently from Protestants, and this difference is not simply a function of differing demographic or ideological posi-tions…And the more closely they are bound to their religion, the more Democratic they are." 23Such observations not only posited a religious basis for Catholic behavior, but revealed a common pattern of ethnoreligious politics: the most committed believers were typically the strongest adherents to their tradition's "normative" party.Before the 1970s, regular Mass attendance predicted greater Catholic support for Democratic candidates, just as churchgoing produced Republican affinities among Protestants, thereby reinforcing traditional confessional patterns. 24he "culture wars" starting in the 1960s transformed the nature of religious influence, however, as traditionalists and progressives squared off against each other in many faith communities.As conservative positions on abortion, embryonic stemcell research, gay rights, same-sex marriage and other cultural issues were correlated with "religiosity," Republican strategists used these "wedge issues" to lure observant Catholics away from the Democrats. 
25By the 1990s, both casual and professional observers pointed to the "God gap": regular Mass attenders were prone to vote Republican, while the less-observant leaned Democratic.Of course, this phenomenon was not limited to -or even most pronounced -among Catholics. 26Such divisions became more politically relevant as Mass attendance dropped from very high levels in the 1960s to much lower ones today. 27While some scholars found that the salience of religion, rather than Mass attendance, was the best predictor of Catholic Republicanism, this trait and faithful observance are typically very highly correlated, making such distinctions problematic. Unfortunately, the stress on "the God gap," whether measured by Mass attendance or religious salience, often obscured the fundamental source of partisan cleavages: theological differences.The foundational texts of the "culture wars" or "religious restructuring" perspective reminds us that the key divide in contemporary religious communities is not over observance, but theology -with "traditionalists" moving toward Republicans and "progressives" toward the Democrats. 28nfortunately, the paucity of belief measures in surveys has led most scholars and virtually all journalists to focus on proxies such as Mass attendance or religious salience.Although these items do roughly differentiate the two Catholic factions, as traditionalists are considerably more observant, it is certainly preferable to measure beliefs directly. Data and Methods The analysis here assesses the relative importance of all four categories of demographic factors in shaping Catholic partisanship, using two standard data sources: the 2020 American National Election Study (ANES) and the 2020 Cooperative Election Study (CES).Each survey has advantages: the ANES has a substantial subset of Catholics in its pre-and post-election surveys (usable Ns=1699 and 1537) and a much broader set of religious variables, found in both the pre-and post-election questionnaires.On the other hand, the CES has a much larger subsample of Catholics (usable N=11,191), permitting fuller analysis of racial and ethnic subgroups and, perhaps, surer estimates of the effects of other factors. 29Unfortunately, the CES lacks items on religious belief -limiting conclusions about the full role of religion.Thus, by utilizing both surveys it allows us to examine all four demographic categories in some detail, looking at the socioeconomic, ethnic, domestic and religious roots of Catholic partisanship. 26 See: Lyman A. Kellstedt and James L. Guth, "Religious Groups as a Polarizing Force", in: Polarized Politics: The Impact of Divisiveness in the US Political System, William Crotty (ed.), Lynne Rienner, Boulder, 2015, pp.157-186.Some observers, however, doubt that this phenomenon is significant, finding only minor partisan differences: William V. D' Antonio, Michele Dillon, and Mary L. Gautier, American Catholics in Transition…, or doubt that it is permanent, discovering that observant Catholics are sometimes still more Democratic, at least when other factors are controlled, see Mark M. Grey and Mary E. Bendyna, "Between Church, Party and Conscience: Protecting Life and Promoting Social Justice among U. S. Catholics", in: Catholics and Politics: The Dynamic Tension Between Faith & Power, Kristin E. Heyer, Mark J. Rozell, and Michael A. Genovese, (eds.),Georgetown University Press, Washington, DC, 2008, pp.75-72, and Matthew J. Streb and Frederick Brian, 2008, The Myth of a Distinct Catholic Vote… pp.93-112.27 William V. 
D' Antonio, Michele Dillon, and Mary L. Gautier, American Catholics in Transition, Rowman & Littlefield, Lanham, 2013, 13ff.28 Robert Wuthnow, The Restructuring of American Religion, Princeton University Press, Princeton, 1988; James D. Hunter, Culture Wars: The Struggle to Define America, Basic Books, 1991. Thomas V. Feingold, James L. Guth • CATHOLIC PARTISANSHIP IN THE 2020 PRESIDENTIAL ELECTION: DEMOGRAPHIC AND CULTURAL CLEAVAGES • pp (327-351) A First Cut: Considering Catholic Partisan Evaluations One advantage of the ANES is its multiple measures of partisanship.Although most scholars focus on the classic "Michigan" seven-point party identification scale, we begin with a more nuanced approach, using seven independent partisan evaluations available in the pre-election survey.In Tables 1, 2 and 3 we report the impact of all four categories of factors on (1) "thermometer ratings" of the Democratic and Republican parties, (2) comparable evaluations of the parties' 2020 standard bearers, Joe Biden and Donald Trump; (3) scores summarizing the net "likes" and "dislikes" about each party; and finally (4) the classic "Michigan" party identification scale.Although these measures usually tell a similar story, some differences in the factors influencing each are instructive. For each demographic category, we employ several standard measures: 1) for socioeconomic status we use family income and education level 30 ; 2) for ethnicity, we distinguish whites, blacks, Hispanics, Asians, and other races; 3) for personal status, we include an age variable for Catholics born before or during the Vatican II era, which preliminary examination showed to be the only distinctive age cohort in the multivariate analysis 31 ; Southern residence; sexual identity; gender; marital status; number of children; size of community; and, lastly, veteran status. Finally, we incorporate religious measures.Although the ANES religious battery may have a "Protestant bias", as some scholars contend, such bias should work against strong relationships among Catholics.From the ANES pre-election survey, we use an additive measure of various traditionalist identifications; a religiosity score derived from Mass attendance and religious salience; and views of the Bible.We also include two post-election measures: a thermometer for "Christian fundamentalists" and an item asking how much "discrimination" Christians face in the US. Although not ideal for assessing traditionalist beliefs, both are reasonable proxies."Fundamentalism" is not originally a "Catholic" term, but in popular parlance it has come to signify any religious conservatism, even among Catholics. 32Positive Catholic responses may also reflect recent "co-belligerency" by traditionalists in different Christian confessions. 33And a sense of social discrimination against Christians is held primarily by traditionalists in all American Christian groups, including Catholics. 30 We initially included union membership but found it had little effect at either the bivariate or multivariate levels, so to simplify analysis we have omitted it. 
31 A preliminary review of age effects among different Catholic ethnic groups in the ANES data revealed very complex patterns, due in part to the relatively few respondents in some categories (18-26; 27-41;42-59; 60-79; and 80+).There was a slight tendency for Democratic affiliation to decline through middle age, but dummies for the younger cohorts did not survive multivariate analysis.In the much larger CES sample, younger voters also tended to be more Democratic, but this effect appeared primarily among non-white Catholics and also did not survive multivariate analysis. ПОЛИТИКОЛОГИЈА РЕЛИГИЈЕ бр. 2/2023 год XVII • POLITICS AND RELIGION • POLITOLOGIE DES RELIGIONS • Nº 2/2023 Vol. XVII КАТОЛИЦИ И САВРЕМЕНА АМЕРИЧКА ПОЛИТИКА For example, a content analysis of the traditionalist National Catholic Register and the progressive National Catholic Reporter for the first six months of 2021 revealed over four times as many articles on religious freedom and discrimination against Christians in the former as in the latter. 34Whatever their limitations, these measures provide a solid estimate of Catholic theological traditionalism. As a first cut at the full contours of the American Catholic electorate, Table 1 summarizes bivariate correlations between demographic variables and the seven partisan measures.On the Democratic side, correlations are usually strongest with the party thermometer, followed by that for Biden, and then by likes and dislikes about the party.Nevertheless, as we should expect, the patterns are quite similar.Higher family income predicts cooler feelings toward the Democratic Party, but is not significant in the other two cases.Grade and high school graduates feel warmer toward the party, but they are not more Democratic on the other partisan measures.On the other hand, those with some college tend to be cooler toward the party and Joe Biden -and have more "dislikes" than "likes" about Democrats.Interestingly, those with college and graduate degrees do not differ from the sample as a whole, but postgraduates tend to like Biden a little better. The Republican measures often present the mirror image of their Democratic counterparts, although higher income also results in a dimmer view of the party, just as for the Democrats, and grade school graduates are also negative.The positive effect of a high school diploma on the Republican thermometer is notable, as is the negative effect of postgraduate work.The Trump pattern reveals modest positive coefficients for those with high school diplomas and some college, but more negative evaluations from those with college degrees and postgraduate work.Finally, virtually none of the SES factors influences the GOP like/dislike measure, except for the greater negativism of high school graduates.On the whole, then, we see only modest effects of social class on these six partisan evaluation measures. 
35And, finally, the SES measures are not much more predictive of the classic Michigan identification scale: higher income has a mild significant correlation with Republicanism, while those with only a grade school education tilt more strongly Democratic.But other educational groups do not differ from the rest of the Catholic public.36To no one's surprise, racial and ethnic factors are much more strongly associated with partisanship -and can be summarized quickly.White Catholics on balance exhibit negative evaluations of both the Democratic Party and Joe Biden (although less so about the candidate), and also have more to complain about than like about the party.On the other hand, white Catholics reveal positive evaluations of the Republican counterparts -with Donald Trump getting the strongest endorsement, rather than the GOP.Hispanics tend to be more positive about the Democratic Party than about candidate Biden, and a little more negative on Trump than on the GOP.The small contingent of black Catholics is solidly favorable on all the Democratic indicators, and negative on the Republican ones, while Asian and other minority Catholics appear marginally on the "Democratic" side of the correlations.Finally, the Michigan scale data shows a strong preference of whites for the GOP, of Hispanics and blacks for the Democrats, with Asian and other race Catholics not differing from other Catholic respondents.As a first look, these correlations portray a deep ethnic partisan chasm among Catholics. The domestic status factors work mostly as expected.Interestingly, initial examination shows that the one distinctive Catholic age group in the ANES survey is the oldest, the Vatican II generation -who lean toward the Democrats on the party and Biden thermometers, but do not otherwise differ from other age groups.Southern Catholics are significantly more positive on all the GOP measures and slightly more negative toward the Democratic ones.Straight, male, married, and veteran Catholics give positive ratings to the GOP and are generally negative on the Democratic evaluations, although not always significantly so.Number of children under 18 works solidly against positive Democratic sentiments, but only modestly in favor of the GOP.And residents of larger communities favor the Democrats, but rate the GOP and Trump negatively.The classic Michigan scale tends overall to reflect the solid influence of all these factors, with the exception of number of children, which just misses significance. 
Finally, employment of ANES religious measures bears fruit in the last section of the table.Traditionalist religious self-identification works solidly against positive Democratic evaluations and even more strongly in favor of Republican ones.And, although one should not make too much of this, traditionalists were even more negative about Joe Biden -a fellow observant Catholic -than they were about the Democratic Party, perhaps reflecting the well-publicized "wafer wars" involving pro-abortion rights Democrats.A similar pattern, but with more modest correlations, is seen in results for religiosity.Those who say religion is important in their lives and attend Mass regularly tend to give the Democrats negative ratings, and the Republicans more generous positive ones.A similar effect is seen for the "Bible" item.Although often seen as a "Protestant" measure of Christian orthodoxy, literalism is nevertheless associated with Catholic partisanship, especially on the "pro" Republican side."Born-again" Catholics are a just little more likely than the non-born again to favor the Democratic Party and Joe Biden -a pattern quite different than that among Protestants. 37The most powerful measures, however, are the post-election items tapping theological "culture wars": Catholics who feel warmly toward "Christian fundamentalists" and perceive discrimination against Christians downgrade the Democrats and approve the Republicans.Finally, the Michigan scale shows the solid influence of all the religious measures, except for born-again status, with the fundamentalist thermometer, perceptions of discrimination against Christians, and theological self-identification having the strongest relationships. Thus, apart from the seeming anomaly of the born-again measure, the implications are clear.All the measures tapping religious "traditionalism" or "orthodoxy" (including the proxy of religiosity) are solidly associated with partisan assessments.Although not all are ideal conceptually or in measurement terms, their collective message is that religious restructuring within the American Church is a powerful shaper of partisan affect, joined by stark racial and ethnic divisions. 37 The meaning of this unusual effect is unclear, as born-again status has the "normal" pro-Republican effect among Catholics in the larger CES sample analyzed below. 
Source: ANES 2020 (N=1537) To gain some sense of the relative influence of each demographic category, we ran OLS regressions on the partisan measures, using the variables in each category in turn.(We ran two regressions for the religious variables: one using only the pre-election items, and another adding the post-election measures.)Table 2 reports the variance explained by each set of demographic factors for all seven partisan evaluations.As the earlier discussion hinted and Table 2 confirms, an "SES" model explains little variation on any measure of partisanship, doing best for both party thermometers and the "Michigan" scale.Ethnic and racial identities do considerably better, especially for the classic scale, and then the Democratic Party and Trump thermometers.The domestic factors do not match ethnicity in power, but far outperform socioeconomic status, explaining the Michigan scale best of all the partisan evaluations.Finally, the pre-election survey religious items are modest predictors of Democratic evaluations, but explain much more variance in the GOP measures than the three previous models.If we add the post-election fundamentalism and discrimination items, the model explains substantial variance on the Democratic scores and even more on the GOP evaluations -as well as on the classic Michigan scale. For a comprehensive analysis of influences, Table 3 reports the results of OLS regressions for the seven measures, directly incorporating all the independent variables.Some demographic measures clearly have more direct impact than others.First, as the earlier analysis portended, socioeconomic status virtually disappears from the explanation: "New Deal" class-based party divisions are hard to see, leaving only traces which sometimes work in the wrong direction, such as the negative coefficients between both higher income and postgraduate education and the Republican Party thermometer (although, as noted earlier, these trends may well be a feature of new party alignments).Achievement of some college education seems to provide a fillip for favorable Trump evaluations, but none of the income and education items has a significant independent effect on the Michigan scale. 38ace and ethnicity are much more powerful, as minority Catholics line up with the Democrats and, of course, whites with the GOP.Indeed, even under controls the beta coefficients for "ethnic" groups are often comparable to the bivariate correlations in Table 1 -or actually a little enhanced.The historically minded would argue that the Democratic Party remains the home of "ethnics," -just a different set from the Europeans of the classic party machine era.Indeed, the electoral divisions be-38 Analysis of non-Catholics shows a small but significant influence of higher income favoring the GOP. tween Catholic Democrats and WASP Republicans -"ethnics" versus native "whites" -have come to structure partisan divisions within the Church. 
Not surprisingly, most domestic status variables have some direct influence.The Vatican II age cohort is more Democratic (and younger Catholics more Republican), southerners are more Republican, as are male and heterosexual Catholics.Those with young children tend not to like the Democrats but living in a larger community has a mild pro-Democratic effect.And almost across the board, Catholic veterans dislike Democrats and favor the GOP.Finally, religion demonstrates significant power: traditionalist religious identification favors the Republicans, as does belief in biblical authority.The most powerful indicators, however, are from the "culture wars": sectarian approval and a sense of Christian discrimination militate against the Democrats and in favor of Republicans.Sectarian sentiments especially boost warmth toward the GOP, while perception of discrimination pumps up Trump's evaluations.Born-again status, however, once more characterizes Democratic Catholics, rather than Republicans.Note, however, that when theological belief and identification are in the picture, religiosity drops out.This confirms that survey researchers should do more to tap religious belief, rather than relying on church attendance or religious salience as proxies. Each equation explains respectable amounts of variance, with the results strongest for the Michigan party identification scale (27.9 percent), followed closely by the "GOP" measures: Trump thermometer rating (27.3 percent) and GOP thermometer (26.9 percent).The results are somewhat weaker in explaining Democratic and Biden thermometers (22.7 and 20.4 percent, respectively) and the less extensive likes/dislikes measures (18.3 percent for the GOP and 13.1 percent for the Democrats). To summarize the ANES results: among contemporary American Catholics, socioeconomic factors have very little influence over partisan affections, in either bivariate or multivariate analysis.Racial and ethnic identities are much better predictors of virtually all these measures.Domestic status, on the other hand, usually has the expected influences: elderly voters are on net somewhat more Democratic, while southerners, heterosexuals, men, rural residents, and veterans lean toward the GOP.Finally, theological divisions add a considerable amount to the explanation of Catholic partisanship: traditionalists (on all measures) are aligned with the GOP and progressives with the Democrats. Catholic Party Identification: A Comparison and Robustness Check Although the rich partisanship measures in ANES 2020 provide valuable insights into nuances in party evaluations, we want to validate and extend our findings by examining data from another major academic survey, the 2020 Cooperative Election Study (CES), which has a Catholic subsample of over 11,000.This not only permits comparison with the ANES, but also allows use of more detailed ethnicity measures, with larger numbers of non-white Catholics.At the same time, we show that the limited CES religious measures produce an underestimate of religious influences on Catholic partisanship. We begin with the same analysis applied earlier to the ANES, matching as closely as possible the variables available in both studies.(We have divided the ANES "Hispanics" in Table 3 into Mexican, Puerto Rican and "Other" to match the CES cat-egories. 
39) Table 4 compares results from three OLS regressions: one using the four sets of ANES variables, but restricted to the pre-election religious items; a second ANES analysis adding the two post-election religious items; and, the third replicating as far as possible this analysis in the CES.The table 4 reveals some familiar patterns and offers some cautionary notes, both about adequate specification of variables and about drawing conclusions from single surveys.The first column shows patterns quite familiar from our earlier look at the ANES: when everything is in the equation, socioeconomic status has virtually no independent influence on Catholic party identification, while race and ethnicity reveal a powerful white vs. minority division, and most domestic status variables operate in the expected directions, although married folks and those with minor children are not significantly more Republican.The religious effects are also familiar, with traditionalist identification, high religiosity, and belief in Biblical authority all producing more Republican identifiers.This regression explains almost one-fifth of the variance in party identification among Catholics. If we use the additional post-election religious measures (the fundamentalist Christian thermometer and discrimination against Christians items), we improve the variance explained to well over one-quarter, a substantial boost. 40Note that this addition has little effect on coefficients in the other three categories (sometimes actually increasing them), but reduces substantially that for traditionalist identification, trims that for biblical authority, and eliminates that for religiosity.Although we should be cautious in our interpretation, this suggests that all these items (except for born again status) do get at the conservative end of the religious restructuring continuum, even among Catholics.(And that religiosity is best thought of as an indirect proxy for theological orientation.) 39 We might have included more detailed Latino "ethnicity" data in "race/ethnicity" category for both surveys, but some ANES items were still restricted at the time of writing and, in any event, with the exceptions of Mexicans and Puerto Ricans, other ANES Latino subgroups would be quite small.The different way ethnicity questions were asked also makes it difficult to "line up" comparable categories.We use the more detailed CES ethnicity data later.40 With this formulation, the four categories explain, respectively: SES, 1.3 percent; race and ethnicity, 8.6 percent; domestic status, 5.5 percent; and religion, 14.3 percent.CATHOLICS AND CONTEMPORARY AMERICAN POLITICS Thomas V. Feingold, James L. 
Guth • CATHOLIC PARTISANSHIP IN THE 2020 PRESIDENTIAL ELECTION: DEMOGRAPHIC AND CULTURAL CLEAVAGES • pp (327-351) The CES provides a much larger Catholic sample, more information on ethnicity, and some cautionary tales.First, in contrast to the ANES, the CES data suggest continued presence of class-based partisanship, if not entirely along classic lines.Higher incomes produce more GOP identifiers, but education has the new effects discovered by recent studies: those with less than a college education are moving in a Republican direction, while post-graduates are becoming significantly more Democratic.The race and ethnicity variables, on the other hand, work very much like those in the ANES models (first two columns), as do most domestic status items, with the exception of a considerably smaller age Vatican II cohort effect in favor of the Democrats and a solid and significant CES tendency for married persons to move toward the GOP.But the CES religious measures have less explanatory power: religiosity has a solid coefficient, as expected, capturing a good bit of unmeasured theological "traditionalism," but "born-again" status works in the other direction from that in the ANES, producing a slight pro-Republican effect, more like its effect among Protestants.But the absence of other belief and identification measures means that the CES underestimates religious influences.This largely accounts for the reduced variance explained by the CES analysis -about 15 percent. Thus, comparison of the ANES and CES produces several conclusions.First, given conflicting results on socioeconomic indicators, we must hold open the question of continuing social class influences on Catholic party identification.Second, the strong effects of the ANES religious measures suggest that surveys lacking belief measures are likely to underestimate religious influences.Finally, in many ways the two surveys produce comparable results, emphasizing the cleavages created by ethnicity and domestic status variables, especially gender and sexuality.These findings largely conform to our theoretical expectations. 
Party Identification Among Catholics: Racial and Ethnic Groups The power of the racial and ethnic factors in structuring American Catholic partisanship is certainly evident.But do the other influences we have analyzed work in the same fashion in these major Catholic "constituencies"?We extend the previous analysis first to the two major internal components of the contemporary Catholic electorate: white or "Anglo" Catholics and Latinos, both with large contingents in the CES sample, and then, more cautiously to Blacks, Asians and other racial groups, with smaller but not trivial numbers in the huge CES sample.Do socioeconomic, domestic and religious variables influence these Catholic constituencies in the same way?We examine the four groups on the three non-ethnic demographic categories, and also run a model adding country of origin to the Hispanic model.As an examination shows, many variables operate in somewhat different fashion among racial groups.As in Table 4, higher income produces movement toward the GOP, but this "social class" influence is strongest among Latinos and blacks, much weaker among whites, and close to significantly negative among Asians.On the other hand, the "novel" effects of education are consistently significant only among whites, with those lacking a college education more Republican, and postgraduates more Democratic.The oldest age cohort (the Vatican II generation) is significantly more Democratic only among Latino, black and Asian Catholics, not white Catholics.Southern residence moves all groups except black Catholics toward the GOP, although the effect is not significant among Asians and other race Catholics.While men, married folks and rural dwellers are more Republican in most groups, sexu-al identity influences only whites.Finally, religiosity has a stronger pro-Republican influence among whites, Asians and other races -but the effect is present among Latinos as well (no doubt giving hope to Republicans and discomfiting Democrats).On the other hand, religiosity nudges black Catholics toward the Democrats (the coefficient just misses significance).In this and some other ways, the black Catholic partisan profile is distinct from those of other Catholic groups. What about the effects of national origin among Hispanics?As many observers have noted, American Hispanics are a diverse group, coming from many national backgrounds, with varied histories in this country.The third model in Table 5 incorporates country of ancestry in the model for Latinos.This procedure has very little impact on the coefficients seen in Model 2, but substantially bolsters the variance explained.With everything in the equation, Puerto Ricans are most inclined toward the Democrats, followed by Mexican-Americans.On the other side, Latinos born in the US have a significant Republican slant and Cuban-Americans an even stronger one.Central Americans do not differ much from the "miscellaneous Latinos" omitted reference group (or from the entire Latino subsample, for that matter).All this confirms considerable partisan diversity among Latino communities, despite overall Democratic propensities.The weaker Democratic ties of native-born American Hispanics especially threaten the party's hopes of political hegemony based on the "coalition of the ascendant" social groups, depending on the Democrats' ability to capture the votes of ethnic minorities. 
Partisanship in the Voting Booth: 2020 Presidential Choices

Our last task is to consider the influence of demography on the paradigmatic partisan choice, the vote for president. Table 6 reports the results of two logistic regressions (in the ANES and CES) on the presidential choice among Catholics. Again, we have matched variables from the two surveys as closely as possible. Although the analyses reveal some common features, we find some of the same differences seen earlier. First, the impact of SES measures is much clearer in the CES, where higher income produces a higher Republican vote, as do levels of education below college graduation, with postgraduate work leading in the other direction. The ANES shows only traces of these relationships. Nevertheless, as the summaries for variance explained at the bottom show, SES measures in both surveys improve prediction only marginally beyond one based simply on the distribution of the vote.

Partisan divisions created by race and ethnicity are also evident in the presidential vote, with Latino, Black and Asian Catholics more likely than their white brethren to cast Democratic ballots. In both surveys, ethnicity does a much better job in predicting the vote than SES variables do. And the domestic status variables almost match ethnicity in predictive power, and line up quite well across the surveys, with southerners, straights, men and married folks as well as rural Catholics favoring the GOP, although some coefficients in the smaller ANES subsample miss statistical significance. And Catholic veterans clearly favored Trump.

The ANES' major strength appears in its richer assessment of religious influences. Even with the relatively small sample and everything else in the equation, the religious variables remain statistically significant, with the Bible item just missing. As is often the case when religious beliefs are well measured, religiosity here "flips signs" from the bivariate relationship and favors Biden, while born-again status also becomes a much stronger predictor of a Democratic vote. In the CES, on the other hand, religiosity is conducive to a Trump vote, presumably reflecting unmeasured effects of traditionalism. Also in contrast to the ANES, born-again status favors a Republican vote, suggesting that the pro-Democratic effect in the ANES is partly a residual one, apparent primarily when other measures of traditionalism are in the analysis. In any case, ANES religious variables alone predict 72 percent of the vote correctly, compared with only 57 percent for the two CES measures. The better ANES measurement of religion is a major source of its stronger performance in accounting for electoral partisanship. Of course, voting models in the "Michigan" tradition must always incorporate party identification, V.O. Key's famous "standing decision," as an important influence. When we add party identification to the analysis, we obviously boost the predictive power of each model to over 90 percent, but find that many demographic traits remain significant.
Even in the smaller ANES sample, virtually all the ethnic and religious variables remain significant direct predictors, while both socioeconomic and domestic traits drop out, seemingly absorbed by party identification. In the much larger CES sample, the two religious measures also retain very solid influences, as do the ethnicity variables, but they are joined by most domestic variables in predicting partisan electoral choice. Only income washes out, seemingly absorbed by partisanship, while educational effects are more marginal, though working in the expected directions (data not shown).

Summary and Conclusions

We have examined the many divisions among Catholics which have produced the sharp partisan splits in this large religious community. There is some tantalizing evidence that traditional social class factors are no longer the primary determinants of Catholic party choices, but have been superseded by ethnic, domestic status and religious cleavages. Of course, that assessment depends in part on the data sources considered: ANES Catholics appear less likely to ground their partisanship in their socioeconomic status, at least in comparison with those surveyed by the CES. In a different vein, the greater power of religious factors in the ANES is easier to explain: it has more measures tapping religious belief, the driving force behind the "culture wars" affecting Catholics and other religious communities. The CES lacks such items.

We have also seen that different communities of Catholics connect their own life positions to partisanship in varying proportions: even in the CES, income is a much more important predictor of partisanship among Latinos than it is among Anglos. For the latter, religious belief is a bigger factor, along with some domestic status variables, although there are signs that religion matters for Latinos as well. And we have confirmed that Latino partisanship is not uniform, but varies with many other factors, including national origin, age, religiosity and region, a good reason for the recent soul-searching by Democratic strategists worried about the GOP's stronger showing among Latinos in 2020.
A fuller understanding of Catholic partisanship will require several kinds of future research. First, we still need to address the old question asked primarily by Catholic thinkers but also by other scholars: is there a distinct "Catholic" component to partisanship? Or is Catholic partisanship simply an artifact of all the influences affecting Americans generally? That is a big question and one that is hard to get at, but we have seen here some evidence that SES factors may not influence white Catholics in the same way as those in other religious traditions. Is there a "Catholic perspective" that modifies the operation of these other factors? Perhaps. But if we rerun the equations for party identification and presidential vote reported for the ANES in Tables 5 and 6 for the full sample with dummy variables for "Catholic," "Evangelical Protestant," "Mainline Protestant," and "Jewish," only the coefficients for "Evangelical Protestant" (a very substantial one at that) and "Jewish" remain, while those for Catholics and Mainliners drop out. This strongly suggests that simple membership in the latter two traditions does not add to the explanatory power of other social, demographic, personal and religious variables in predicting partisan choices. The story is a little different in the CES: although "Evangelical" and "Jewish" affiliations still produce greater GOP and Democratic choices respectively, net of all other influences, Mainline membership also has a substantial net Democratic effect (note 43). But again, even in this huge sample, the "Catholic" coefficient is not significant. Perhaps Dionne's famous quip needs modification: "there are many kinds of Catholic votes and they are all important."

43 In comparison with the ANES results, this probably reflects the CES's absence of belief measures and the predominant liberalism of Mainline Protestant denominations.
Studies on peroxidase from Moringa oleifera Lam leaves

Kinetic and physicochemical properties of Moringa oleifera peroxidase, purified using a novel and cost-efficient protocol, were investigated with a view to providing information on its possible biotechnological potential. Moringa oleifera peroxidase was purified to homogeneity in two steps, involving ATPS and size-exclusion chromatography on Sephadex G-100, with a yield of 84.12%. In-gel activity staining revealed the presence of one isoform of peroxidase. The purified peroxidase is monomeric, with native and subunit molecular weights of 38.9 and 43.5 kDa respectively. The kinetic parameters of the purified enzyme, Vmax, Km(app) for o-dianisidine and Km(app) for H2O2, were 2.5 units/mg protein, 0.020 ± 0.04 mM and 1.37 ± 0.18 mM respectively. Its optimum pH and temperature were 5 and 30 °C respectively. The purified enzyme cross-linked BSA into an insoluble matrix with the aid of caffeic acid. The study concluded that the purification scheme adopted is rapid and efficient, and that the purified enzyme exhibits physicochemical properties that make it suitable for various biotechnological applications.

Introduction

Peroxidases (EC 1.11.1.x) are a group of heme-containing enzymes that oxidize a variety of xenobiotics using hydrogen peroxide (Saunders, 1973). Peroxidase is an anti-oxidative enzyme that is widely distributed in microbial, plant, and animal tissues, and peroxidases represent a large family of heme-containing enzymes (Van Huystee and Cairns, 1982). This oxidoreductase catalyzes a reaction in which hydrogen peroxide acts as the acceptor and another compound acts as the donor of hydrogen atoms (Rodrigo et al., 1996). In the presence of peroxide, peroxidase from plant tissues can oxidize a wide range of phenolic compounds, such as guaiacol, catechol, pyrogallol, chlorogenic acid, and catechin (Onsa et al., 2004). Peroxidase has diverse functions in the plant. It is involved in the lignification of cell walls (Adewale and Adekunle, 2018), plant differentiation and growth, plant defense against pathogenic microorganisms, and wound healing (Shigeto and Tsutsumi, 2016). Other functions of peroxidase include its involvement in the biotransformation of drugs, chemicals, and polymers (Sakai et al., 2014). Its ability to oxidize phenolics has also increased its use in various industrial applications, the most important of which include decolorization of waste (Lai and Lin, 2005; Dalal and Gupta, 2007; Jadhav et al., 2009), synthesis of various aromatic chemicals, removal of peroxides from foodstuffs and industrial wastes, and cross-linking of macromolecules (proteins) (Saitou et al., 1991; Kim and Yoo, 1996), as well as uses in the biological field, as diagnostic kits for enzyme immunoassays and as an important component of ELISA systems (Leon et al., 2002; Deepa and Arumughan, 2002).

Horseradish (Armoracia rusticana) roots are the major source of commercially available peroxidase. Because of their biochemical diversity, they are used as a traditional source of peroxidase for commercial production. Horseradish peroxidases exist in multiple isoforms, which makes them a convenient toolbox of plant peroxidases from which an isoenzyme that meets the requirements of an application can be chosen (Adewale and Adekunle, 2018). Combining various features of the horseradish toolbox with the aid of recombinant technology allows for improved biocatalysis and novel syntheses using peroxidase; however, this work is still in its infancy.
Thus, numerous studies have been carried out in search of alternative sources of peroxidase with higher stability, availability, degree of purification, substrate specificity and novel properties. Moringa oleifera (English name: drumstick tree) is a medium-sized tree with small leaves, native to Bangladesh, China, Nepal, Pakistan and India, but also cultivated in tropical America, tropical Africa, Malaysia, and the Philippines, and it is widely known for its medicinal value (Popoola and Obembe, 2013). The flowers and fruits of the Moringa oleifera plant are rich in nutrients. The leaves are rich in vitamins, carotenoids, polyphenols, phenolic acids, flavonoids, alkaloids, glucosinolates, isothiocyanates, tannins, and saponins (Leone et al., 2015). All parts of the Moringa oleifera plant, the root, bark, gum, leaf, pod, and seeds, have been reported to have various biological activities. Generally, Moringa oleifera has diverse medicinal and biomedical applications. The plant extract has been used in the synthesis of nanoparticles that possess better cytotoxicity and antibacterial activity (Ezhilarasi et al., 2016), and has been used traditionally in the purification of water and the treatment of various diseases, from malaria and typhoid fever to hypertension and diabetes (Sivasankari et al., 2014). Previous work by Khatun et al. (2012) had shown the presence of peroxidase with desirable properties in Moringa oleifera leaves. This study, however, focuses on a rapid purification scheme for the enzyme preparation and its possible biotechnological potential.

Materials

Sephadex G-100 and Sephacryl S-300 were obtained from Pharmacia Fine Chemicals, Uppsala, Sweden. The molecular weight standard for SDS-PAGE was obtained from Thermo Scientific, Lithuania. Caffeic acid, bovine serum albumin, ammonium sulphate, and polyethylene glycol (PEG 6000) were obtained from Sigma Chemical, St Louis, USA. All other reagents were of analytical grade and were obtained from reputable chemical suppliers. Moringa oleifera leaves were harvested from Moringa oleifera trees within the Obafemi Awolowo University campus, Ile-Ife.

Enzyme extraction and measurement of enzyme activity

Enzyme extraction from young and mature leaves of Moringa oleifera was carried out in 10 mM phosphate buffer pH 6.0 containing 10% glycerol at 4 °C for 1 min in a Waring blender. The homogenate was centrifuged at 10,000 × g for 30 min at 4 °C to obtain a cell-debris-free supernatant, which was stored as the crude enzyme homogenate. Peroxidase activities were routinely determined according to the method of Kay et al. (1967) as described by Adewale and Adekunle (2018). Briefly, the reaction mixture contained, in final concentration, hydrogen peroxide (1 mM), o-dianisidine (0.25 mM) and 0.1 M sodium phosphate buffer pH 6.0, together with an enzyme aliquot that gave a change in absorbance of 0.02-0.07 per min at 460 nm as a result of the oxidation of o-dianisidine in the presence of hydrogen peroxide. One unit of enzyme activity is defined as the amount of enzyme that oxidizes 1 μmol o-dianisidine/min (ε460 = 11.3 mM⁻¹ cm⁻¹). The leaf homogenate with the highest peroxidase activity was adopted for further studies.
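The unit definition above translates directly into a Beer-Lambert calculation. The following is a minimal sketch of that arithmetic; the rate, volumes and protein concentration used in the example are illustrative values, not data from this study.

# Worked example of the unit definition: 1 U oxidizes 1 umol o-dianisidine
# per minute, using epsilon_460 = 11.3 mM^-1 cm^-1. Numbers are illustrative.
EPSILON_460 = 11.3   # mM^-1 cm^-1, absorptivity of oxidized o-dianisidine
PATH_CM = 1.0        # cuvette path length, cm

def peroxidase_units(dA_per_min, reaction_ml, enzyme_ml):
    """Units of peroxidase per ml of enzyme sample (1 U = 1 umol/min)."""
    rate_mM_per_min = dA_per_min / (EPSILON_460 * PATH_CM)  # mM/min in cuvette
    umol_per_min = rate_mM_per_min * reaction_ml            # mM equals umol/ml
    return umol_per_min / enzyme_ml

units_per_ml = peroxidase_units(dA_per_min=0.05, reaction_ml=3.0, enzyme_ml=0.1)
specific_activity = units_per_ml / 0.9  # divided by protein conc. (mg/ml, assumed)
print(f"{units_per_ml:.3f} U/ml, {specific_activity:.3f} U/mg protein")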
Protein concentration determination

Protein concentration was routinely measured according to the method of Bradford (1976) using bovine serum albumin as the standard protein. The protein standard curve was prepared by pipetting 0-1.0 ml of BSA (10 μg/ml) into labeled test tubes, equivalent to 0-10 μg BSA. Each test tube was then made up to 1.8 ml with distilled water. Then, 0.2 ml of Bradford reagent was added to each of the test tubes, which were mixed and incubated briefly at room temperature. The absorbance of each test tube was read at 595 nm against a blank that contained all other components of the mixture except BSA. Absorbance was then plotted against the corresponding amount (μg) of standard protein to construct the standard curve. Protein concentration in the supernatant was determined by extrapolation from the standard protein curve.

In-gel activity staining of peroxidase

In-gel activity staining for the detection of peroxidase in the crude and purified samples was carried out by native polyacrylamide gel electrophoresis (native PAGE) on a 10% separating gel and a 4% stacking gel using the Tris-glycine buffer system (25 mM Tris-base and 192 mM glycine) at pH 8.3, as described by Laemmli (1970). The crude and purified peroxidase samples were prepared by mixing 30 μl each of crude or purified peroxidase with 30 μl of sample buffer (0.12 M Tris-base pH 6.8 containing 10% glycerol and 0.02% bromophenol blue). Aliquots of the resulting mixtures were loaded separately on a slab gel. Electrophoresis was carried out at room temperature at a constant voltage of 100 mV for the stacking gel and 150 mV for the separating gel. After electrophoresis, gels were immersed for 10 min in a solution containing, in final concentration, 30 mM H2O2, 15 mM o-dianisidine, and 0.1 M sodium phosphate buffer pH 6.5 for the development of chromophores. Thereafter, the gel was removed, washed in distilled water, and inspected for band formation.

Purification of peroxidase

The purification of peroxidase from Moringa oleifera was carried out using a combination of an aqueous two-phase partitioning system and gel filtration chromatography. Purification by the aqueous two-phase partitioning system (ATPS) was carried out according to the method of Srinivas et al. (2002) with slight modification. Polyethylene glycol (PEG 6000, 35% w/v), ammonium sulphate (7.5% w/v) and sodium chloride (2% w/v) were dissolved in the crude extract on ice. The mixture was stirred continuously to achieve a completely homogeneous mixture, which was then incubated on ice for phase separation. The top and bottom phases were collected and assayed for peroxidase activity and protein concentration. The salt-rich bottom phase, which had the higher peroxidase activity, was dialyzed against four changes of buffer (10 mM phosphate buffer pH 6.0 containing 10% glycerol) at 4 °C for 6 h in a cold box to remove salts. The partially purified sample obtained from dialysis was further purified on a Sephadex G-100 (1 × 50 cm) gel filtration column previously equilibrated with 10 mM sodium phosphate buffer pH 6.5 containing 10% glycerol.

Determination of native and subunit molecular weight

The native molecular weight of the purified peroxidase was determined on calibrated Sephadex G-100, and the subunit molecular weight was determined by SDS-polyacrylamide gel electrophoresis on a 12% (w/v) polyacrylamide running gel and a 4% (w/v) polyacrylamide stacking gel according to the method of Laemmli (1970), alongside a Thermo Scientific molecular weight marker.
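Native molecular-weight estimation on a calibrated size-exclusion column rests on the approximately linear relationship between log10(molecular weight) and elution volume for globular proteins. A minimal sketch of such a calibration follows; the standards and elution volumes are hypothetical, not the calibration data of this study.

# Sketch of gel-filtration calibration: fit log10(MW) vs elution volume for
# standards, then interpolate the unknown. All values below are hypothetical.
import numpy as np

std_mw = np.array([12.4, 29.0, 66.0, 150.0])  # kDa, hypothetical standards
std_ve = np.array([42.0, 35.5, 29.0, 23.0])   # ml, their elution volumes

slope, intercept = np.polyfit(std_ve, np.log10(std_mw), 1)  # linear fit

ve_peroxidase = 33.1                                        # ml, observed peak
native_mw = 10 ** (slope * ve_peroxidase + intercept)
print(f"Estimated native MW: {native_mw:.1f} kDa")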
Determination of kinetic parameters

The effect of hydrogen peroxide and o-dianisidine concentrations on the purified peroxidase from Moringa oleifera was determined by assaying peroxidase activity at varying concentrations of hydrogen peroxide (0-5 mM) at a constant concentration of o-dianisidine (0.25 mM), and at varying concentrations of o-dianisidine (0-0.25 mM) at a constant concentration of hydrogen peroxide (1 mM), in 50 mM phosphate buffer. The apparent kinetic parameters Km and Vmax for both substrates were interpolated using non-linear regression software (GraphPad Prism 5).

Effect of temperature on peroxidase activity

The effect of temperature on peroxidase activity was studied by incubating reaction mixtures containing 10 mM phosphate buffer pH 6.0, 0.25 mM o-dianisidine, and 1 mM H2O2 at temperatures ranging from 20 °C to 50 °C in a water bath for 10 min. The enzyme was introduced into the medium, stirred, and the mixture was then placed in the spectrophotometer to read the absorbance. A graph of activity against temperature was plotted and the optimum temperature was interpolated.

Effect of pH on peroxidase activity

The effect of pH on the enzyme activity was examined according to the methods of Adewale and Adekunle (2018). The activity was determined over the pH range 3.0-10.0 at 30 °C. The following buffer systems were used at the indicated pH ranges: 50 mM acetate buffer pH 3.0-5.0, 50 mM MES buffer pH 5.5-6.5, 50 mM phosphate buffer pH 7.0-8.0, and 50 mM borate buffer pH 8.5-10.0. The peroxidase activity was assayed as described by Kay et al. (1967), with the assay buffer replaced by each of these buffers.

Synthesis of cross-linked protein networks

The potential of the purified peroxidase in the synthesis of cross-linked protein networks was assessed using BSA as the protein at varying concentrations (1.0 mg/ml, 10 mg/ml), in the presence of caffeic acid (0.5 mM, 1.0 mM, 2.0 mM) and a fixed concentration of H2O2 as the substrate. The reaction mixture of 0.5 ml contained BSA, caffeic acid, H2O2, 50 mM phosphate buffer, and an aliquot of purified peroxidase. Control reaction mixtures were prepared in the absence of the purified enzyme and substrate. The mixtures were incubated at 50 °C overnight (18 h). The products were observed with a Zeiss LSM 510 META confocal microscope fitted to a Zeiss Axiovert 200 M.

Cross-linking of peroxidase with BSA

The potential of Moringa oleifera peroxidase as a reporter enzyme was investigated by coupling the enzyme to BSA. This was carried out according to the method of Ayhan et al. (2012) as described by Adewale and Adekunle (2018). Briefly, 0.3 mg/ml BSA and 0.2651 mg/ml peroxidase were mixed in a 1:1 ratio, after which glutaraldehyde (0.25%) was added to the solution. The solution was incubated for 2 h and peroxidase activity was determined as described earlier. The ability of the peroxidase to conjugate BSA was monitored on a calibrated Sephacryl S-300 column.

Statistics

All experiments were carried out at least in triplicate unless otherwise stated. Data are expressed as mean ± standard deviation. Statistical analysis was performed using GraphPad Prism 5.0 software.

Expression of peroxidase in young and mature leaves of Moringa oleifera

Peroxidase activity was higher in mature leaves of Moringa oleifera (1.4082 units/mg protein) than in young leaves (1.3541 units/mg protein); however, the difference was not significant (Figure 1).
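The kinetic parameters described under "Determination of kinetic parameters" were interpolated with GraphPad Prism; an equivalent non-linear regression can be sketched with SciPy. The substrate concentrations and rates below are illustrative only, not the study's measurements.

# Michaelis-Menten fit by non-linear regression; data points are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

s = np.array([0.01, 0.025, 0.05, 0.1, 0.15, 0.25])  # [o-dianisidine], mM
v = np.array([0.8, 1.4, 1.8, 2.1, 2.25, 2.35])      # rate, U/mg protein

popt, pcov = curve_fit(michaelis_menten, s, v, p0=[2.5, 0.02])
vmax, km = popt
vmax_se, km_se = np.sqrt(np.diag(pcov))
print(f"Vmax = {vmax:.2f} +/- {vmax_se:.2f} U/mg, Km = {km:.3f} +/- {km_se:.3f} mM")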
In-gel activity staining of peroxidase

In-gel activity staining revealed the presence of only one isoform of peroxidase in the mature leaf extract of Moringa oleifera (Figure 2).

Enzyme purification

Purification of peroxidase from Moringa oleifera leaves, using a combination of ATPS and size-exclusion chromatography on Sephadex G-100, resulted in one peroxidase activity peak with a final recovery of 84% (Table 1 and Figure 3).

Subunit molecular weight determination

The subunit molecular weight of the purified Moringa oleifera peroxidase, determined by 12% polyacrylamide gel electrophoresis in the presence of SDS, gave a single band equivalent to 43.5189 ± 1.6303 kDa (Figure 4).

Native molecular weight determination by gel filtration on Sephadex G-100

The native molecular weight of the purified Moringa oleifera peroxidase was 38.98 ± 1.41 kDa.

Kinetic parameters

The apparent kinetic parameters Km(app) for o-dianisidine and H2O2 were 0.020 ± 0.04 mM and 1.37 ± 0.18 mM respectively, with a Vmax of 2.5 units/mg protein.

Effect of temperature

The optimum temperature of the purified peroxidase from Moringa oleifera was 30 °C (Figure 5).

Effect of pH

The optimum pH of the purified peroxidase from Moringa oleifera leaves was 5.0 (Figure 6).

Peroxidase as a reporter enzyme

When purified peroxidase from Moringa oleifera was cross-linked with BSA and the product separated on calibrated Sephacryl S-300, it gave a single peak with a molecular weight of 81.052 kDa, signifying that the BSA had bonded with the peroxidase to form a single protein possessing peroxidase activity.

Synthesis of cross-linked protein networks

Peroxidase was able to synthesize insoluble fibrous protein from soluble globular BSA in both the presence and absence of caffeic acid. However, the presence of 2 mM caffeic acid facilitated the synthesis of more insoluble matrix, as seen in Figure 7.

Discussion

We report in this study a rapid and efficient purification scheme for peroxidase from Moringa oleifera and demonstrate its potential in the synthesis of cross-linked protein networks and as a reporter enzyme. Activity staining under non-denaturing conditions revealed the presence of only one form of peroxidase. This is consistent with the result obtained in the initial purification of peroxidase from Moringa leaves by Khatun et al. (2012), which also revealed the presence of only one form of peroxidase. Expression of peroxidase in young and mature leaves of Moringa oleifera was found to be 1.3541 ± 0.1002 and 1.4082 ± 0.1278 units/mg protein respectively. Although the mature leaf extract possessed more peroxidase activity than the young leaves, the difference was not significant. In previous work on the extraction of peroxidase from Moringa oleifera leaves, Khatun et al. (2012) reported a specific activity of 2.11 units/mg protein. This difference may be due to the different substrates used in the individual studies. Also, factors such as age, time of harvest, and other environmental conditions that are known to affect peroxidase expression in plants could be involved. Aqueous two-phase separation (ATPS), used in this study as the primary purification step, was found to be viable, rapid, and efficient, combining purification and concentration of the crude enzyme to give about 263% recovery and a purification fold of about 10. The increase in recovery and purification fold with ATPS could be due to partitioning of non-target proteins, natural inhibitors and other contaminants away from the desired enzyme into the PEG-rich phase. Thus, depending on the type of biotechnological application, this purification step alone may be sufficient.
Further purification by gel filtration (Sephadex G-100) yielded a homogeneous protein with a recovery of 84% and a purification fold of 4. The loss of enzyme recovery and purification at this stage suggests that the initial purification step may have been largely sufficient to purify the enzyme. Previous work on peroxidase purification from Moringa oleifera by Khatun et al. (2012) used a combination of ammonium sulphate precipitation, DEAE-cellulose column chromatography, Sephadex G-200 column chromatography, and Con-A column chromatography, resulting in 28% recovery and a purification fold of 164. Compared with the scheme of Khatun et al. (2012), the purification process devised for this study is considerably faster, cheaper, reliable, and less cumbersome, with a higher recovery. To ascertain the purity, integrity, and molecular weight of the purified enzyme, native and subunit molecular weight determinations were carried out on calibrated Sephadex G-100 and by SDS-PAGE respectively. The purified enzyme appears to be monomeric, with native and subunit molecular weights of 38.98 ± 1.41 and 43.5 ± 1.63 kDa respectively. This result is consistent with the initial report by Khatun et al. (2012) of a monomeric peroxidase from Moringa oleifera with a molecular weight of 43 kDa. It is also consistent with the majority of monomeric peroxidases purified from plants, which have molecular weights between 30 and 60 kDa (Khatun et al., 2012). However, some exceptions, like kolanut peroxidase, have been shown to be dimeric with low molecular weights (Adewale and Adekunle, 2018). The purified peroxidase was stable over a pH range of 4.0-6.0, with its optimum pH around 5.0, and was unstable at alkaline pH. The loss of activity may be due to instability of heme binding to the enzyme at low pH, or to protein denaturation or ionic changes in the heme group at high pH (Adams, 1997). Other studies have reported similar results, with most peroxidases from different sources showing optimum activity in the pH range 4.5-6.5 (Pina et al., 2001; Leon et al., 2002; Deepa and Arumughan, 2002; Diao et al., 2011; Adewale and Adekunle, 2018). The optimum temperature of the purified peroxidase was 30 °C, in agreement with the work of Civello et al. (1995), who reported maximum enzyme activity at 30 °C for peroxidase purified from strawberry fruits. It is also interesting to note that Diao et al. (2011) reported optimum temperatures for peroxidase from four different sources as follows: 40 °C for Allium sativum and Sorghum bicolor, and 30 °C for Ipomoea batatas and Raphanus sativus, consistent with our result. However, Khatun et al. (2012) reported an optimum temperature of 50 °C for peroxidase from Moringa oleifera, in sharp contrast to our finding. The reason may lie in the purification scheme adopted in this study: PEG is known to bind to proteins, and this may have slightly altered the structure of the protein, resulting in a lower optimum temperature. Reporter enzymes have immense usage in ELISA kits because they can detect antigens and proteins by triggering the formation of a colored product that can be visualized and quantified (Al-Shaban and Abdel-Hamid, 2009). Horseradish peroxidase is the most widely used reporter enzyme because it possesses the ability to produce a chromogenic product at very low concentrations.
In this study, we were able to cross-link peroxidase from Moringa oleifera with BSA using glutaraldehyde as the crosslinker, which produced a single activity peak on the Sephacryl S-300 chromatogram with a molecular weight of 81 ± 1.9 kDa. The increase in molecular weight of the protein obtained suggests that the peroxidase fused with BSA to create a new high-molecular-weight protein possessing peroxidase activity. In other words, the cross-linked peroxidase was used to report BSA, implying that it could be useful in reporting other proteins. This is similar to the result obtained from the cross-linking of kolanut peroxidase with BSA (Adewale and Adekunle, 2018).

The potential of Moringa oleifera leaf peroxidase as a catalyst in the synthesis of cross-linked protein networks was further analyzed. Enzymatic cross-linking of proteins has gained increasing interest in food technology as a means of creating novel food products or improving the textural properties of dairy products and biopolymers; it is based on the cross-linking of amino acid side chains such as tyrosine, glutamine, and lysine residues in the proteins, resulting in the formation of a new functional three-dimensional protein network (Chen et al., 2002; Thalmann and Lötzbeyer, 2002), and the traditional enzyme used mainly for this purpose is transglutaminase (Bönisch et al., 2007). Effective cross-linking of proteins by Moringa peroxidase was achieved in this study: the enzyme could catalyze the formation of fibrous protein networks from soluble protein (BSA). The cross-linking was carried out in the presence of low-molecular-weight caffeic acid. This phenolic acts as a substrate for the cross-linking process and is integrated into the polymerizing complex as an interconnection between the single protein molecules in each of the samples. In principle, other amino acid residues of the proteins may also have acted as the electrophilic substrate for peroxidase, resulting in direct protein-protein crosslinks.

In conclusion, this study showed that peroxidase is abundant in Moringa oleifera leaves and can be purified to homogeneity in a two-step purification process, thereby considerably reducing cost and time. The purified enzyme possesses a combination of properties that could be useful for biotechnological applications, i.e. in the synthesis of cross-linked protein networks and as a reporter enzyme.

Author contribution statement

Isaac Olusanjo Adewale: Contributed reagents, materials, analysis tools or data. Oluwadare Joel Agunbiade: Analyzed and interpreted the data; wrote the paper.

Funding statement

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Data availability statement

No data was used for the research described in the article.

Figure 7. Photomicrographs of cross-linked protein networks. The synthesis of fibrous protein networks was carried out as described earlier in the presence of peroxidase, caffeic acid, hydrogen peroxide, and BSA at 50 °C for 18 h. (A) Peroxidase-catalyzed BSA cross-linking in the presence of 2 mM caffeic acid; (B) peroxidase-catalyzed BSA cross-linking without caffeic acid; (C) BSA and caffeic acid only (control).
Use of evidence-based therapies after discharge among elderly patients with acute myocardial infarction

Background: Postdischarge use of evidence-based drug therapies has been proposed as a measure of quality of care for myocardial infarction patients. We examined trends in the use of evidence-based drug therapies after discharge among elderly patients with myocardial infarction.

Methods: We performed a cross-sectional study in a retrospective population-based cohort that was created using linked administrative databases. We included patients aged 65 years and older who were discharged from hospital with a diagnosis of myocardial infarction between Apr. 1, 1992, and Mar. 31, 2005. We determined the annual percentage of patients who filled a prescription for statins, β-blockers and angiotensin-modifying drugs within 90 days after discharge.

Results: The percentage of patients who filled a prescription for a β-blocker increased from 42.6% in 1992 to 78.1% in 2005. The percentage of patients who filled a prescription for an angiotensin-modifying drug increased from 42.0% in 1992 to 78.4% in 2005. The percentage of patients who filled a prescription for a statin increased from 4.2% in 1992 to 79.2% in 2005. In 2005, about half of the hospitals had rates of use for each of these therapies that were less than 80%. The temporal rate of increase in statin use after discharge was slower among noncardiologists than among cardiologists (3.5%–2.8% slower). The rate of increase was 4.8% slower among physicians with low volumes of myocardial infarction patients than among those with high volumes of such patients, and was 5.7% greater at teaching hospitals than at nonteaching hospitals.

Interpretation: Use of statins, β-blockers and angiotensin-modifying drugs increased from 1992 to 2005. The rate of increase in the use of these medications after discharge varied across physician and hospital characteristics.

Patients who had been admitted to hospital with myocardial infarction in the year before the index admission were excluded to restrict the sample to patients with new diagnoses. We also excluded patients discharged to complex continuing care hospitals because their medications are not covered under the Ontario Drug Benefit program. The accuracy of the most responsible diagnosis of myocardial infarction, upon which patients were selected for inclusion, has previously been validated: it was shown to have a specificity of 92.8% and a sensitivity of 88.8% among patients admitted to coronary care units.15

Medication use

For each year, we determined the percentage of patients aged 65 and older who, within 90 days after discharge from hospital, filled a prescription for each of the following medical therapies: β-blockers, angiotensin-modifying agents (either ACE inhibitors or angiotensin-receptor blockers) and statins. The reasons for using a 90-day window and for combining ACE inhibitors and angiotensin-receptor blockers are described in an online appendix (available at www.cmaj.ca/cgi/content/full/179/9/895/DC2). We did not examine postdischarge use of ASA because it is available without a prescription; thus, its use is not accurately captured by the Ontario Drug Benefit program.
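The 90-day ascertainment described above amounts to linking each discharge to subsequent drug claims and flagging fills within the window. A minimal sketch follows; the file names, column names and drug-class labels are hypothetical stand-ins for the linked administrative data, not the actual database schemas.

# Sketch of 90-day prescription-fill ascertainment; all names are hypothetical.
import pandas as pd

cohort = pd.read_csv("mi_discharges.csv", parse_dates=["discharge_date"])
claims = pd.read_csv("odb_claims.csv", parse_dates=["fill_date"])

merged = cohort.merge(claims, on="patient_id", how="left")
days = (merged["fill_date"] - merged["discharge_date"]).dt.days
merged["in_window"] = days.between(0, 90)

for drug in ["statin", "beta_blocker", "angiotensin_modifier"]:
    hit = merged[(merged["drug_class"] == drug) & merged["in_window"]]
    filled = cohort["patient_id"].isin(hit["patient_id"])
    print(drug, f"{100 * filled.mean():.1f}% filled within 90 days")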
Physician and hospital characteristics

We conducted a second set of analyses to determine the physician and hospital characteristics that were associated with a more rapid temporal increase in the postdischarge use of evidence-based medications. We included physician specialty and sex, as well as the average annual volume of patients with myocardial infarction seen (all ages) and the average number of years of clinical experience across the study period. Only years in which the physician saw at least 1 patient with myocardial infarction were used to calculate the average annual volume. We included the following hospital characteristics: teaching status and average annual volume of patients with myocardial infarction (all ages) during the study period.

Statistical analysis

We compared patients' demographic and clinical characteristics across different periods of the study. Categorical variables were compared using the χ2 test, and continuous variables were compared using the Wilcoxon rank-sum test. We assessed the statistical significance of the trends in medication prescribing using the Mantel-Haenszel χ2 test. We used random-effects logistic regression models to examine the influence of patient, physician and hospital characteristics on postdischarge medication use.16 The models included the following patient-level variables: age, sex and the 9 comorbid conditions (cardiogenic shock, congestive heart failure, pulmonary edema, cardiac dysrhythmia, malignant disease, cerebrovascular disease, acute renal failure, chronic renal failure, diabetes with complications) that comprise the Ontario acute myocardial infarction mortality prediction model. The derivation and validation of this model have been described elsewhere.17 The regression models incorporated a variable denoting the number of years since 1992, which allowed us to determine changes in the use of each medical therapy over time. We modified each model by incorporating an interaction between time and each physician and hospital characteristic. This allowed us to examine whether the rate of increase in the use of medical therapies differed depending on physician or hospital characteristics. Because of the size and complex nature of the sample, a cross-classified multilevel model was not fit. As was done in an earlier study, if physicians practised at 2 different hospitals, we considered them to be 2 independent physicians.18

Study population

During the study period, 132 778 elderly patients with a myocardial infarction were discharged from hospital. The patient demographic and clinical characteristics are reported in Table 1. The median patient age and the prevalence of each of the comorbid conditions examined varied across the different periods of the study (p ≤ 0.025). Because of the large sample, judgment should be used in interpreting the clinical significance of differences between the different eras.

Overall trends

The annual number of elderly patients with myocardial infarction ranged from 8133 in 1992 to 10 707 in 2001. Overall trends in the postdischarge use of evidence-based therapies are described in Figure 1. The results of our analyses examining physician and hospital characteristics associated with a more rapid increase in postdischarge use of β-blockers, angiotensin-modifying agents and statins are described in Figure 2 and Appendix 2 (available online at www.cmaj.ca/cgi/content/full/179/9/895/DC2).
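The time-by-characteristic interactions described above can be illustrated with a simplified model. The sketch below fits an ordinary logistic regression of postdischarge statin use with a (years since 1992) × specialty interaction; it omits the random effects for physicians and hospitals that the actual analysis used, and all file and variable names are hypothetical.

# Simplified interaction model; the paper's random-effects structure is
# omitted here for brevity, and all names are hypothetical stand-ins.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mi_cohort.csv")
df["years"] = df["discharge_year"] - 1992

model = smf.logit(
    "statin_filled ~ years * C(specialty) + age + C(sex) + comorbidity_count",
    data=df,
).fit()
print(model.summary())
# Exponentiating the years:specialty coefficients gives the relative
# difference in the temporal odds ratio of prescribing versus the
# reference specialty, i.e. the ratios reported in the text.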
For each physician or hospital characteristic for which the rate of uptake of evidence differed from that of the reference group, we provide a ratio and an associated 95% CI. This ratio describes the relative difference between the odds ratio for the temporal increase in prescribing in the given group and the odds ratio for the temporal increase in prescribing in the reference group. The rate of increase in postdischarge use of statins was slower among general and family practitioners (ratio 0.97).

[Figure caption: The plotted probabilities of medication use after discharge were derived from the fitted multilevel regression model and represent the predicted probability at an average hospital, for a patient of average age, all of whose risk factors were set to absent, and whose values for the other physician and hospital characteristics were set to their reference levels.]

Interpretation

We observed gradual and substantial increases between 1992 and 2005 in the postdischarge use of β-blockers, angiotensin-modifying agents and statins among elderly patients with myocardial infarction. The use of β-blockers and angiotensin-modifying agents approximately doubled over the study period, and the use of statins increased 18-fold. Furthermore, at the population level, the use of each of these therapies appears to have reached a plateau. A novel contribution of our study is that it examined the association of physician and hospital characteristics with temporal changes in the use of evidence-based drug therapies. Although we did not find a consistent pattern across the 3 classes of medications, we found an association between physician and hospital characteristics and the rate of increase in the postdischarge use of each of the 3 medication classes over time. For example, in 1992 β-blocker use was higher among patients attended by cardiologists than among those attended by noncardiologists. However, the rate of increase in the prescribing of β-blockers was greater among general internists and other specialists than among cardiologists, such that postdischarge rates were converging between the different specialties by the end of the study period. The temporal increase in postdischarge statin use was greater among patients attended by cardiologists than among those attended by noncardiologists. An explanation for this difference may be the timing of the availability of evidence. Evidence for statin use in myocardial infarction patients accumulated in the 1990s, which comprised the early part of our study period. In contrast, evidence for β-blocker use in this population accumulated in the 1980s, before our study period. Thus, cardiologists may have been aware of the evidence for β-blockers, while it took longer for this evidence to disseminate among noncardiologists. Finally, the temporal rate of increase in postdischarge use of angiotensin-modifying agents was greater among cardiologists than among internists and other specialists. Evidence for the use of angiotensin-modifying agents accumulated during the study period, and cardiologists may have had a better awareness of this evidence. The overall trends in our study are similar to those of a recent study that examined trends in the quality of care provided to myocardial infarction patients in 4 US states between 1992 and 2001.19 That study found improvements, among all patients and among ideal candidates, in the prescribing at discharge of ASA, β-blockers and ACE inhibitors between 1992 and 2001. Importantly, only a minority of patients were identified as ideal candidates for each therapy.
In 2000/01, among all patients, the rates of prescribing ASA, β-blockers and angiotensin-converting-enzyme inhibitors at discharge were 79.4%, 71.4% and 64.6%, respectively. Among ideal candidates, prescribing rates were 87.4%, 80.3% and 74.8%, respectively. Our findings are also relevant to policy-makers and clinicians interested in quality improvement for cardiac care. The steady, as opposed to abrupt, increase in the rates of use of drug therapies for the secondary prevention of myocardial infarction over our 14-year study period suggests that changing physician prescribing behaviour is a process that happens slowly over time and that it is achievable with sustained reinforcement. Multiple clinical trials and observational studies that expanded the indications for these therapies and documented their underuse were published during the study period. Our results suggest that these studies likely had a cumulative effect that eventually resulted in close to saturation levels of therapy for secondary prevention. When restricted to agents for which evidence was disseminated during the study period (angiotensin-modifying agents and statins), our study provides some evidence that cardiologists adopt evidence-based medication use more rapidly than noncardiologists. Furthermore, physicians who cared for a low number of patients with myocardial infarction tended to adopt evidence-based care more slowly than those who cared for many such patients. Finally, teaching hospitals adopted the use of statins more rapidly than nonteaching hospitals. Our findings suggest that there is a need to identify methods to stimulate more rapid uptake of evidence-based drug therapies by physicians practising in nonteaching hospitals, as well as by noncardiologists and physicians who care for a low number of patients with myocardial infarction. Providing low-volume physicians with mentors and encouraging academic institutions to partner with nonteaching hospitals may result in a more rapid uptake of evidence. Finally, we speculate that the development and rapid dissemination of standardized discharge checklists by cardiovascular specialists could improve the uptake of evidence-based practices by groups in which uptake has historically been slower. Bradley and colleagues conducted a qualitative study to identify factors associated with an increase in β-blocker use after myocardial infarction.20 They found that hospitals with greater temporal improvements in β-blocker use had 4 characteristics not found in hospitals with less or no temporal improvement: shared goals for improvement, substantial administrative support, strong physician leadership advocating β-blocker use and use of credible data feedback.20 The final element suggests that hospital report cards that include hospital-specific postdischarge rates of medication use among myocardial infarction patients, similar to one published earlier in Ontario,21 may help to improve evidence-based prescribing.

Limitations

There are limitations to our study. First, we used administrative data, which did not allow us to exclude patients who had contraindications to the therapies under consideration. However, as found elsewhere,19 postdischarge medication use is likely even higher among ideal patients, for whom therapy is indicated and who have no contraindications, than it is in the entire population of patients with myocardial infarction.
Furthermore, our use of administrative data allowed us to examine the use of prescription medications by all elderly patients with myocardial infarction in our jurisdiction. The data for our study came from a population-based database of incident hospital admissions of patients with myocardial infarction in Ontario. Therefore, our data are comprehensive and not restricted to tertiary centres or to a registry subject to voluntary enrolment. A second limitation is that we reported the percentage of patients who filled a prescription; we were unable to capture prescriptions that were not filled by the patient. Therefore, our results likely underestimate postdischarge prescribing. A third limitation is that our analyses were restricted to patients aged 65 and older. Earlier studies have shown that prescribing of evidence-based therapies after myocardial infarction decreases with increasing age.6,7,11 Thus, the use of these therapies is likely even higher among younger patients.

Conclusion

Prescriptions for β-blockers, angiotensin-modifying agents and statins are currently filled by about 80% of elderly patients with myocardial infarction after discharge from hospital. However, there was moderate variation in hospital-specific rates of use of these therapies, with about half of all hospitals prescribing these medications to less than 80% of patients. Furthermore, the rate of increase in the use of evidence-based drug therapies depended on physician and hospital characteristics.
Clinical value of the low-grade inflammation score in aneurysmal subarachnoid hemorrhage

Background and purpose: Multiple inflammatory biomarkers have been shown to predict symptomatic cerebral vasospasm (SCVS) and poor functional outcome in patients with aneurysmal subarachnoid hemorrhage. However, the impact of the low-grade inflammation (LGI) score, which can reflect the synergistic effects of four individual inflammatory biomarkers, on SCVS and poor functional outcome after aneurysmal subarachnoid hemorrhage (aSAH) has not yet been well established. The aim of this study was to evaluate the impact of the LGI score on SCVS and poor functional outcome in aSAH patients.

Methods: The LGI score was calculated as the sum of scores assigned to the 10 quantiles of each individual inflammatory biomarker. The association of the LGI score with the risk of SCVS and poor functional outcome was analyzed with multivariate logistic regression.

Results: A total of 270 eligible aSAH patients were included in this study: 74 (27.4%) had SCVS, and 79 (29.3%) had poor functional outcomes. After adjusting for confounders, a higher LGI score independently predicted SCVS (OR, 1.083; 95% CI, 1.011-1.161; P = 0.024) and poor functional outcome (OR, 1.132; 95% CI, 1.023-1.252; P = 0.016), and the second and third tertile groups had a higher risk of SCVS than the lowest tertile group (OR, 2.826; 95% CI, 1.090-7.327; P = 0.033, and OR, 3.243; 95% CI, 1.258-8.358; P = 0.015, respectively). The receiver operating characteristic (ROC) curve confirmed the ability of the LGI score to distinguish patients with and without SCVS (area under the curve [AUC] = 0.746; 95% CI, 0.690-0.797; P < 0.001) and poor functional outcomes (AUC = 0.799; 95% CI, 0.746-0.845; P < 0.001). The predictive value of the LGI score for SCVS and poor functional outcome was superior to that of PLT, NLR and WBC, but there was no statistical difference between the LGI score and CRP for predicting SCVS (P = 0.567) or poor functional outcome (P = 0.171).

Conclusions: A higher LGI score, which represents a more severe low-grade inflammation status, is associated with SCVS and poor functional outcome at 3 months after aSAH.

Introduction

Aneurysmal subarachnoid hemorrhage (aSAH) is the most common type of spontaneous subarachnoid hemorrhage (SAH) [1] and is a life-threatening neurological emergency in clinical practice. Patients with aSAH often experience a sudden onset and rapid progression of symptoms and have high rates of mortality and permanent disability [2]. Although the rapid development of neurosurgical and neurointensive care techniques has improved patient prognosis, those with severe systemic inflammatory responses still have high rates of mortality and disability [3]. Systemic inflammatory response syndrome occurs in up to 87% of patients after aSAH [4]; although the specific pathophysiological mechanism is not clear, it has been strongly associated with cerebral vasospasm and delayed cerebral infarction (DCI) [3]. Therefore, several peripheral inflammatory biomarkers are widely used in the clinical risk assessment of patients with aSAH. C-reactive protein (CRP), white blood cells (WBCs) and the neutrophil-to-lymphocyte ratio (NLR) have been confirmed to be reliable predictors of various complications and poor functional outcomes after aSAH [5,6]. However, most studies have used single-biomarker approaches or the ratio of two indicators rather than considering a panel of combined biomarkers.
Low-grade inflammation (LGI) is recognized as a risk factor for several chronic diseases, including cardiovascular disease, cancer and neurodegenerative disease [7-9]. The low-grade inflammation score (LGIS) has been used previously to evaluate the possible synergistic effects of individual inflammatory biomarkers (CRP, WBC, platelet count, and NLR) [10]. This novel index can independently predict total mortality in the healthy adult general population and cardiovascular mortality in patients with cardiovascular diseases [11,12]. In addition, an elevated LGIS is associated with a higher risk of stroke recurrence [13]. In aSAH patients, inflammatory biomarkers change simultaneously; therefore, considering their synergistic effects might better illustrate the level of inflammation. To date, few studies have examined the prognostic value of the LGIS in aSAH patients; this study therefore used the predefined LGIS to investigate its relationship with symptomatic cerebral vasospasm (SCVS) and poor functional outcome in aSAH patients.

Study populations

In this retrospective study, data were collected from consecutive patients diagnosed with aSAH at multiple research centers, including the Department of Neurology, The First Affiliated Hospital of Anhui Medical University, Huai'an Hospital and Huai'an No. 1 People's Hospital, from September 2018 to June 2021. The inclusion criteria were as follows: (1) diagnosed with SAH by computed tomography (CT), with aneurysms detected by computed tomography angiography (CTA) and digital subtraction angiography (DSA); (2) admission within 48 h after onset, with laboratory examination; and (3) age of 18 years or older. The exclusion criteria were as follows: (1) nonaneurysmal SAH, such as trauma, vasculitis, and arteriovenous malformation rupture; (2) severe hepatic or renal disease, hematological disease, malignant tumor, autoimmune disease or immunosuppressive therapy; and (3) incomplete clinical data. All patients or their legal representatives signed informed consent, and the protocol was approved by the Ethics Committee of The First Affiliated Hospital of Anhui Medical University.

Data collection

Patient demographics, vascular risk factors (such as hypertension, diabetes mellitus, and history of smoking), surgical approach and aneurysm location were collected and evaluated. The severity of clinical presentation at admission was assessed by the World Federation of Neurological Societies (WFNS) grade and the Hunt-Hess classification [14]. The modified Fisher CT grade was used to assess SAH on CT scans [15]. Functional outcome was defined according to the modified Rankin Scale (mRS) at 3 months, and an mRS score of 3-6 indicated a poor functional outcome. Blood samples were obtained shortly after admission; the white blood cell count (WBC), neutrophil count, lymphocyte count and CRP were collected, and the NLR was defined as the neutrophil count divided by the lymphocyte count.
Assessment of the LGIS

The LGIS was introduced to evaluate the synergistic effects of the inflammatory biomarkers (CRP, WBC, PLT and NLR). The values of each biomarker were divided into 10 quantiles; the highest deciles (7 to 10) received scores increasing from 1 to 4, the lowest deciles (1 to 4) were negatively scored from -4 to -1, and deciles 5 and 6 received zero points. The scores for the four biomarkers were then summed to obtain the LGIS, and this total score, ranging from -16 to 16, represents the intensity of low-grade inflammation [10]. Patients were stratified into three groups (T1-T3) according to the LGI value; the higher the LGI score, the more severe the low-grade inflammation was considered to be.

Definition of symptomatic cerebral vasospasm

Symptomatic cerebral vasospasm was defined as the development of new focal neurological signs, deterioration in the level of consciousness, or both, where the cause of deterioration was considered to be cerebral ischemia attributable to vasospasm after excluding other possible causes (rebleeding, hydrocephalus, seizures, metabolic derangement, infection, excessive sedation, hypotension, hypoxia, fever, heart failure, and cerebral edema) [16]. In our study, SCVS was evaluated by two certified neurologists blinded to the clinical data.

Statistical analysis

Statistical analysis was conducted using the Statistical Package for the Social Sciences, version 26.0 (SPSS Inc., Chicago, IL) and MedCalc 19. Continuous variables are described as the mean (standard deviation) or median (interquartile range, IQR), and categorical variables are expressed as numbers (percentages). Differences in baseline characteristics were assessed by the chi-square or Fisher exact test for categorical variables, and by the t test, Mann-Whitney U test, one-way analysis of variance, or Kruskal-Wallis test as appropriate. Collinearity between candidate variables was examined using variance inflation factors before developing the multivariable binary logistic regression model, which was used to analyze the predictive value of the LGI score for SCVS and poor outcome. We used the lowest tertile as the reference category. The covariates entered into the multivariate logistic regression to evaluate the association of the LGI score with SCVS were age, gender, Fisher CT grade, WFNS grade and albumin; we further adjusted for SCVS and hydrocephalus in the multivariate logistic regression for predicting poor functional outcome. Two-tailed P values of < 0.05 were considered statistically significant. Receiver operating characteristic (ROC) curve analysis was performed to examine the discrimination of the LGI score and each individual biomarker of the score. Pairwise comparisons were performed using DeLong's test.

Results

ROC analysis was performed to investigate the ability of the LGI score and the individual biomarkers of the score to distinguish between aSAH patients who did or did not develop SCVS and poor functional outcome. The LGI score showed a superior ability to predict SCVS (area under the curve [AUC] = 0.746; 95% CI, 0.690-0.797; P < 0.001) and poor functional outcome (AUC = 0.799; 95% CI, 0.746-0.845; P < 0.001) (Figures 1 and 2; Table 6). By pairwise comparison of the AUCs of the LGI score and each individual biomarker, we found that the predictive value of the LGI score for SCVS and poor functional outcome was superior to that of PLT, NLR and WBC, but there was no statistical difference between the LGI score and CRP for predicting SCVS (P = 0.567) or poor functional outcome (P = 0.171) (Table 7).
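The decile-based scoring described in the Assessment subsection can be made concrete in a few lines. The sketch below is illustrative only; the column and file names are hypothetical, and all four biomarkers are scored in the same direction, as the description above specifies.

# Sketch of the decile-based LGI scoring: deciles 1-4 score -4..-1, deciles
# 5-6 score 0, deciles 7-10 score +1..+4; the four scores are summed
# (range -16 to 16). All file and column names are hypothetical.
import pandas as pd

DECILE_POINTS = {0: -4, 1: -3, 2: -2, 3: -1, 4: 0, 5: 0, 6: 1, 7: 2, 8: 3, 9: 4}

def lgi_score(df, markers=("crp", "wbc", "plt", "nlr")):
    total = pd.Series(0, index=df.index)
    for m in markers:
        deciles = pd.qcut(df[m], 10, labels=False, duplicates="drop")
        total = total + deciles.map(DECILE_POINTS)
    return total

cohort = pd.read_csv("asah_admissions.csv")
cohort["lgi"] = lgi_score(cohort)
cohort["lgi_tertile"] = pd.qcut(cohort["lgi"], 3, labels=["T1", "T2", "T3"])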
Discussion

In this study, we used plasma (CRP) and cellular (WBC count, PLT count and NLR) values to construct an LGI score in aSAH patients. The results of this study indicated that the LGI score was independently associated with SCVS and poor functional outcome at 3 months in aSAH patients. Moreover, the discriminatory ability of the LGI score for poor functional outcome was superior to that of the individual biomarkers.

The LGI score is a composite score that has been used to evaluate comprehensive inflammatory effects on stroke recurrence and total mortality. An increased LGI score is significantly associated with a higher incidence of stroke recurrence and total mortality [12,13]. Although the possible mechanisms are not well defined, the inflammatory pathway is clearly the common denominator among the pathogenetic mechanisms of several diseases.

Previous studies have shown that aSAH patients have a dramatic elevation of sympathetic nervous activity, and sympathoexcitation contributes to the elevation of systemic levels of catecholamines [17] and inflammatory cytokines and cells [18]. Experimental evidence has shown that the cerebrovasculature displays a supersensitivity to catecholamines after SAH [19], and the spasmogenic ability of these amines may be involved in the genesis of cerebral vasospasm. In addition, sympathetic nervous system overactivation may contribute to cardiac disturbance and marked blood pressure elevation [20], and the instability of physical conditions and acute stress may be appropriate explanations for poor functional outcomes. Since the sympathetic nervous system plays a key role in regulating the inflammatory process [21], it seems reasonable to use inflammatory factors to assess the level of sympathetic activation in clinical practice.

The early-phase proinflammatory cytokine cascade has been postulated to play a crucial and unifying role in the pathogenesis of cerebral vasospasm and poor functional outcome. Subarachnoid blood is a stimulant that induces the transcription of multiple components of the inflammatory cascade [22]. The main manifestations of the neuroimmune system in aSAH patients are excessive neuroinflammation and immunodepression, which can be indirectly indicated by neutrophil increases and lymphocyte decreases [23,24]. Excessive accumulation of neutrophils in the central nervous system is involved in early brain injury, and lymphocyte depletion after aSAH may lead to adverse complications; both are potential mechanisms of poor prognosis in aSAH patients [25]. The NLR is a novel marker of the systemic inflammatory response, and the peripheral NLR may reflect the severity of neutrophil infiltration after aSAH. Studies have shown that the NLR is an independent predictor of poor outcome and DCI occurrence in aSAH [5]. In addition, systemic leukocytosis is commonly observed in SAH patients, and white blood cells can directly promote free radical formation and release cytokines and chemotactic factors to propagate immune dysregulation [26]. WBC infiltration and neutrophil recruitment both contribute to SCVS by weakening microvascular perfusion and leading to the release of a large number of inflammatory mediators [27]. These characteristics make the WBC count a reliable index for predicting DCI after aSAH [28,29].
Platelet activation and aggregation are also involved in the pathogenesis of DCI, and the potential roles of platelets in microthrombi formation, large-artery vasospasm, microvessel constriction, inflammation and cortical spreading depolarization may all contribute to the pathophysiology of DCI. CRP is an exquisitely sensitive systemic marker of inflammation and tissue damage [30]. In the clinic, CRP has good prognostic value for aSAH [31] and several other diseases. However, CRP is a nonspecific inflammation biomarker that can be elevated in the presence of almost any tissue injury [32]. In clinical practice, measurement of CRP is often combined with that of peripheral inflammatory cells to improve its clinical predictive value. CRP and several peripheral inflammatory cell counts are routinely collected from aSAH patients and are potentially simple ways for clinicians to determine the risk of SCVS and poor prognosis in aSAH patients. Each individual inflammatory biomarker of the LGI score is involved in the pathophysiology of aSAH through different inflammatory pathways. However, compared with a single inflammatory biomarker, the LGI score accounts for the possible synergistic effects of each biomarker, effectively controlling for the variability of the inflammatory biomarkers. In this study, the predictive effect of the LGI score was superior to that of each individual biomarker, and it was a better predictor of poor outcome and SCVS in aSAH patients.

To our knowledge, this is the first study to use a composite score of several biomarkers to assess the risk of SCVS and poor prognosis in aSAH patients. The present study has several potential limitations that should be considered when interpreting the results. First, this is a retrospective study in which we excluded patients with incomplete data, which inevitably produced bias. Second, the sample size was quite small, and the study was performed in a single country, which might limit the generalizability of the results to other patient cohorts. Third, we only evaluated the prognosis of patients at 3 months after discharge; thus, long-term follow-up data are needed to support the findings of this study. Fourth, only the admission LGI score was measured, and other time points were not considered. Finally, some other variables related to outcome (such as intracranial hypertension) were not included in the multivariate logistic regression.

In conclusion, an increased LGI score can be a useful predictor of SCVS and poor functional outcome after aSAH, and the predictive value of the LGI score for poor prognosis is better than that of each individual inflammatory biomarker.
Fig. 1 The receiver operating characteristic curves of the low-grade inflammation score and the individual biomarkers to predict SCVS

Fig. 2 The receiver operating characteristic curves of the low-grade inflammation score and the individual biomarkers to predict poor outcome

Table 1 Baseline data according to tertile of the low-grade inflammation score

Table 3 Univariate analysis of the association with functional outcome

Table 4 Multivariate logistic regression of the impact of LGI (as a continuous variable) on SCVS and functional outcome

Table 5 Multivariate logistic regression of the impact of LGI (as a categorical variable) on SCVS and functional outcome

Table 6 ROC curves for SCVS and poor functional outcome

Table 7 Results of the DeLong test

Abbreviations: LGI, low-grade inflammation score; SCVS, symptomatic cerebral vasospasm; ROC, receiver operating characteristic; AUC, area under the curve; OR, odds ratio; CI, confidence interval; SE, standard error; SD, standard deviation; IQR, interquartile range; DM, diabetes mellitus; mRS, modified Rankin Scale; WFNS, World Federation of Neurological Societies; WBC, white blood cell count; CRP, C-reactive protein; NLR, neutrophil-to-lymphocyte ratio; PLT, platelet count; P, P for trend
New elliptical parallax barrier pattern to reduce the cross talk caused by light leakage

Abstract. This paper proposes a parallax barrier with an elliptical pattern that reduces the cross talk caused by light leakage from adjacent subpixels in autostereoscopic three-dimensional (3-D) displays. To find the optimum size of the elliptical barrier pattern, the relationship between the reduction of the light leakage and that of the luminance is analyzed. In addition, we analyze the relationship between the cross talk and the luminance. By using these relationships, we propose an optimum size of the ellipse. An autostereoscopic 3-D display with the elliptical barrier is compared with 3-D displays with the slanted barrier and the rectangular one. The measured cross talk of the slanted-type 3-D display, whose pixel size is 98 × 294 μm, was 57%. However, the cross talk of the ellipse-type 3-D display was 32% under a similar luminance condition when the minor and major axes were 92 and 278 μm, respectively. For generalization, we investigate autostereoscopic 3-D displays with different pixel sizes and different viewing distances. We find that the optimum area of the ellipse is 70% of the subpixel area for reducing the cross talk.

Introduction

These days, many types of autostereoscopic display devices have been developed. One optical technology used in autostereoscopic displays is the parallax barrier. Autostereoscopic displays with the parallax barrier have superior three-dimensional (3-D) display characteristics and low costs [1,2]. Autostereoscopic displays provide views for multiple viewers to perceive stereoscopic images without glasses [3,4]. A barrier with a slanted pattern has generally been used for autostereoscopic displays because it features a balanced resolution in both the horizontal and vertical directions and reduced moiré artifacts on the display screen. However, a conventional autostereoscopic display with the slanted parallax barrier suffers from undesired cross talk that deteriorates the stereoscopic image quality [5][6][7][8]. There are many factors that influence the cross talk in autostereoscopic displays. Kooi [9] found that the contrast of the display and the binocular disparity of the 3-D images are important factors that determine the cross talk. He found that cross talk is less visible when the displays and 3-D images have a high contrast ratio (100:1) and a reasonable binocular disparity (40 arc min), respectively. Kooi and Toet [10] found that the vertical disparity of human eyes and blur of the 3-D images affect visual comfort. The cross talk becomes more visible with increasing vertical disparity and sharpness of the 3-D images. However, the most important factor that influences cross talk in autostereoscopic displays with a parallax barrier is the light from adjacent subpixels [11,12]. Generally, the pixels of a display are rectangular, so a slanted barrier cannot completely block the light from adjacent subpixels, as shown in Fig. 1(a). In order to solve this problem, many methods have been proposed, such as a modified pixel layout [6], fusion of viewing zones [13], and so on. As shown in Fig. 1(b), Mashitan et al. proposed a multiview autostereoscopic display with a rectangular barrier pattern [14]. However, they did not study the cross talk characteristics of the rectangular barrier pattern. We expect that the rectangular barrier pattern can solve the structural limitation of the conventional slanted barrier pattern.
In our experiment, we verified that the rectangular barrier pattern is superior to the conventional slanted barrier pattern with regard to cross talk. However, autostereoscopic displays with the rectangular barrier pattern also suffer from cross talk caused by unwanted light leakage from adjacent subpixels, as shown in Fig. 2. The unwanted light leakage falls mainly into three classes: the first is light leakage from vertically and horizontally adjacent pixels (e.g., to the second pixels from the first and third), the second is from diagonally adjacent pixels (e.g., to the second pixels from the fourth and sixth), and the third is light leakage from adjacent pixels of the same viewpoint (e.g., to the second pixels from the other second pixels). The cross talk caused by light leakage from adjacent subpixels must be resolved because cross talk is the most critical issue impeding the development of 3-D displays. In this paper, we analyze how the cross talk is influenced by various barrier patterns and various sizes of barrier patterns. Finally, we propose a new parallax barrier pattern and its optimum size, which resolves the cross talk caused by light leakage.

Device Structure

We used a 19-in. SXGA (1280 × 1024) patterned-vertical-alignment (PVA) liquid crystal display (LCD) for our experiments. The size of a subpixel is approximately 98 × 294 μm. The horizontal and vertical lengths of the black matrix are 11 and 21 μm, as shown in Fig. 3(a). The optical structure of the autostereoscopic 3-D display is shown in Fig. 3(b). The LCD panel is located under the parallax barrier with a gap, g, of 0.75 mm. The distance between the right and left sweet spots, b, must be equal to the interpupillary distance, assumed here to be 65 mm [15]. Thus, the optimum viewing distance, d, is determined as 50 cm [15].
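As a quick consistency check on these device numbers, a minimal sketch follows. It assumes the standard similar-triangles relation for a parallax barrier, d ≈ g·b/p, where p is the horizontal subpixel pitch; this relation is our assumption for illustration, not a formula quoted from the paper.

```python
# Hedged sketch: approximate parallax-barrier viewing geometry, d ~ g * b / p.
g_mm = 0.75    # gap between barrier and LCD panel
b_mm = 65.0    # interpupillary distance (sweet-spot separation)
p_mm = 0.098   # horizontal subpixel pitch (98 um)

d_mm = g_mm * b_mm / p_mm
print(f"optimum viewing distance ~ {d_mm:.0f} mm")  # ~497 mm, i.e., about 50 cm
```

The result, roughly 497 mm, agrees with the 50 cm viewing distance stated above.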
New Parallax Barrier Pattern with Elliptical Shape

In the case of the slanted barrier pattern, light leakage is not the main reason for cross talk because of the structural limitation shown in Fig. 1(a). The main reason for cross talk with the slanted barrier pattern is that the slanted barrier cannot completely cover the subpixel boundaries. With the rectangular barrier pattern, the main reason for cross talk is the light leakage from adjacent subpixels. To solve this problem of light leakage in the rectangular barrier pattern, we need to investigate it carefully. Figure 4 shows microscopic images of how the light leakage depends on the test image pattern. In Fig. 4, only the area of the fourth view is open, and the area for the remaining views is blocked. The size of the open area is about 98 × 294 μm, which is the same size as a single subpixel, as shown in Fig. 3(a). Thus, it is natural to expect images through the fourth view only at the fourth view point. Here, the distance between the parallax barrier and the LCD panel is 0.75 mm (g), the same condition as in Fig. 3(b). We took a microscopic image at the position normal to subpixel 4C on the barrier. Subpixel 4C is turned off in Fig. 4. Figure 4(a) shows a microscopic image taken when the two red subpixels located above and below subpixel 4C are turned on. If there were no light leakage from vertically adjacent pixels, subpixel 4C would display a pure black image, as shown in the ideal image of Fig. 4(a). However, we can easily observe the light leakage from vertically adjacent pixels in the actual photo of Fig. 4(a). In Fig. 4(b), we can also observe the light leakage when one subpixel on each of the left and right sides of subpixel 4C is turned on. We also cannot neglect the light leakage from diagonally adjacent pixels, as shown in Figs. 4(c) and 4(d). As a result, we find that two things must be considered to reduce the light leakage from adjacent subpixels: the first is the horizontal and vertical lengths of the barrier pattern, and the second is the shape of the barrier. The light leakage in the horizontal and vertical directions will decrease if the horizontal and vertical lengths of the barrier pattern decrease; the luminance, however, will also decrease. Thus, it is important to find the optimum lengths of the barrier pattern. The rectangular pattern is not appropriate for reducing the light leakage from diagonally adjacent pixels. In order to consider cross talk and luminance simultaneously, we propose a new parallax barrier pattern with an elliptical shape. We investigate the cross talk carefully, taking various lengths of the major and minor axes into account.

Results and Discussion

To verify the effectiveness of the elliptical barrier pattern, we used the six-view autostereoscopic display with various elliptical pattern sizes for our experiments. In addition, we compared the performance of the elliptical barrier pattern with those of the slanted and rectangular barriers. Figure 5 shows the microscopic images and calculated cross talk of slanted parallax barrier patterns with widths ranging from 53 to 108 μm. Figure 6 shows the microscopic images and cross talk of rectangular parallax barrier patterns ranging from 46 × 222 μm to 98 × 294 μm. We can calculate the cross talk as described by [13]:

Cross talk (%) = 100 × (L_2 + L_3 + L_4 + L_5 + L_6) / L_1.  (1)

In Eq. (1), L_1 and L_i represent the luminance of the first-view pixels and the luminance of the i'th-view pixels measured at the first view position, respectively. When we measure the cross talk of the first viewpoint, as shown in Fig. 3(b), we fix the color analyzer at the first view's position, and we display white at the pixels for the first view and black at the pixels for the other views. We can then measure the first view's luminance. Next, we display white at the pixels for the second view and black at the pixels for the other views. We can thereby measure the second view's cross talk at the first view position. We can measure the third to sixth views' cross talk at the first view position in the same way. Throughout, the color analyzer remains fixed at the first view position. We can obtain precise values because the color analyzer and display panel are both fixed. Figure 7 shows a summary of how the cross talk changes depending on the sizes of both the slanted and rectangular barrier patterns. The results show that the rectangular barrier pattern is superior to the slanted barrier pattern. For example, when we compare the slanted barrier pattern with a width of 98 μm (No. 2 in Fig. 5) with the rectangular barrier pattern of size 86 × 262 μm (No. 2 in Fig. 6), the rectangular barrier pattern has lower cross talk than the slanted barrier pattern despite the fact that the luminance of the two is similar. This means that the rectangular barrier pattern has better cross talk performance than the slanted barrier pattern.
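As a minimal sketch of the measurement arithmetic in Eq. (1) (the equation itself is reconstructed above from the in-text definitions), the following computes cross talk from six luminance readings taken with the analyzer fixed at the first view position; the numeric readings are hypothetical.

```python
def cross_talk_percent(luminances):
    """Eq. (1): leakage from views 2..6 relative to the intended view 1.
    `luminances` = [L1, L2, ..., L6], all measured at the first view position."""
    l1, *others = luminances
    return 100.0 * sum(others) / l1

# Hypothetical readings (cd/m^2): the intended view dominates, the rest leak.
print(cross_talk_percent([120.0, 14.0, 9.0, 7.0, 6.0, 5.0]))  # ~34.2
```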
Figure 8 shows the measured characteristics of the slanted barrier with a width of 98 μm and the rectangular barrier with an area of 86 × 262 μm. We display white at the pixels for the first view and black at the pixels for the other views. Then, we move the color analyzer from the fourth view position to the third view position. In this way we can measure the optical characteristics of the first view, as shown in Fig. 8. We can measure the third to sixth views' optical characteristics in the same way. As shown in Fig. 8, the measured luminance of the first view had a peak value, and the perceived luminance values of the two different displays are similar at the sweet spot. As denoted by the solid rectangle in Fig. 8, the cross talk of the rectangular barrier is smaller than that of the slanted barrier pattern. However, the light leakage from the diagonal directions shown in Fig. 2 was not yet considered. Thus, we propose a new parallax barrier pattern with an elliptical shape to reduce cross talk more effectively by blocking this leakage. We investigate the cross talk carefully, taking various lengths of the major and minor axes into account. Here, the major axis means the larger of the two axes, corresponding to the largest distance between antipodal points on the ellipse, whereas the minor axis is the smallest distance across the ellipse. When the lengths of both the major and minor axes are short, the cross talk caused by light leakage is reduced, but so is the luminance of the panel. There is always a trade-off between brightness and cross talk. Therefore, finding the optimum lengths of the ellipse pattern is important. In order to find them, we investigated the relationship between the cross talk and the length of the elliptical pattern with an optical simulator (LightTools). Figure 9 shows a schematic of the optical simulation. We arranged 9 pixels as shown in Fig. 9(a). The pixels have the same size as those of the LCD panel used in the previous experiment. We simulated various sizes of elliptical barrier patterns, as shown in Fig. 10, and we calculated the cross talk using the ratio of the number of light rays in the light leakage component to the number in the luminance component. We then fabricated the elliptical barrier patterns. Figure 10 shows the microscopic images of elliptical barriers with various lengths of the major and minor axes. When the length of the major and minor axes was larger than the pixel size (98 × 294 μm), we blocked the area of the ellipse that exceeded the pixel size to prevent it from invading the area of the adjacent pixel. We need to find the relationship between the cross talk and the various lengths of the major and minor axes of the elliptical barriers. First, we analyzed the relationship between the reduction of light leakage and the reduction of luminance depending on the size of the ellipses, as shown in Fig. 11. The black solid line is introduced as a reference with a slope of 1. Thus, if the slope of the data is higher than 1, the reduction ratio of leakage is larger than that of luminance, which is more efficient. The red circles and solid diamonds show the simulation and measurement results for each size of ellipse, respectively. The blue solid and green dashed lines are fitting lines with different slopes. As shown in Fig. 11, we find that the blue solid line is more efficient than the green dashed line because the slope of the blue line is higher than 1 while that of the green line is almost 1. From the results of Fig. 11, we can deduce that the optimum size of the ellipse is 92 × 278 μm. Second, we analyzed the relationship between cross talk and luminance in the same way, as shown in Fig. 12.
The reduction of cross talk for the blue line is steeper than that for the green line, which means that the optimum size is the same as that determined from Fig. 11. From these results, we can deduce that the optimum size of the ellipse is 92 × 278 μm. Figure 13 shows the overall result of how the cross talk depends on the luminance as well as on the pattern and size. As shown in Fig. 13, the elliptical pattern shows superior cross talk characteristics when compared with the slanted pattern, and the cross talk is further reduced relative to the rectangular pattern. The cross talk of the elliptical barrier with the optimum size of 92 × 278 μm was about 32%, whereas the slanted and rectangular ones show about 57% and 36% cross talk, respectively, at the same luminance. As a result, we obtain the best performance with the elliptical barrier pattern out of all the barriers at the same luminance. Furthermore, we can also deduce that a barrier size smaller than 92 × 278 μm is inefficient in terms of cross talk reduction because of the excessive luminance reduction. We took actual photos for the different barrier shapes, as shown in Fig. 14. Figure 14(a) represents the photos taken when we displayed a red box at the pixels for the first view and a white box at the pixels for the other views. Figures 14(b) and 14(c) show the photos for green and blue, respectively. We used slanted, rectangular, and elliptical patterns with a width of 98 μm, a size of 86 × 262 μm, and a size of 92 × 278 μm, respectively. The measured luminance was almost the same. However, we can see more saturated red, green, and blue colors from the display with the elliptical pattern than from the other ones because of the reduced cross talk. Thus, we can confirm that the actual cross talk is decreased. Figure 15 shows photos taken of a 3-D rectangular image when the horizontal disparity was 30 pixels. We took the photos at the location of the right eye. Thus, the cross talk appeared at the right side of the rectangular image because the viewer cannot help perceiving the left image. As shown in Fig. 15(c), the cross talk region of the elliptical pattern is darker than those in Figs. 15(a) and 15(b). In addition, we ran simulations to find the optimum size depending on the pixel size and viewing distance. First, as shown in Table 1, we simulated five conditions with different pixel sizes. In the simulation, we varied the sizes of the ellipse according to the ratio of the area of the ellipse to the pixel area. For example, the area of the ellipse of 92 × 278 μm is 70% of the area of the pixel of 98 × 294 μm. The gap between the barrier and the panel was adjusted according to the pixel size so that the viewing distance remained fixed at 50 cm. We observe that the ray ratio of light leakage is almost constant regardless of the length ratio of the major and minor axes, as shown in Fig. 16. Thus, we fixed the length ratio of the major and minor axes to 3:1, the aspect ratio of the subpixel. Figure 17 shows the simulation results. The reduction ratio of leakage for 70% to 100% of the pixel area is larger than that of luminance, whereas the reduction ratio of leakage for areas smaller than 70% of the pixel area is the same as that of luminance. We can deduce that the most efficient area of the ellipse is 70% of the pixel area for reducing the cross talk.
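To verify the 70% figure arithmetically, a minimal sketch follows: the ellipse area is π·(minor/2)·(major/2), compared with the rectangular subpixel area, using only the dimensions stated above.

```python
import math

# Check of the 70% figure: ellipse area / rectangular subpixel area.
minor_um, major_um = 92.0, 278.0   # optimum ellipse axes
px_w_um, px_h_um = 98.0, 294.0     # subpixel size

ellipse_area = math.pi * (minor_um / 2) * (major_um / 2)  # pi * a * b
pixel_area = px_w_um * px_h_um

print(f"area ratio = {ellipse_area / pixel_area:.2f}")     # ~0.70
```

The ratio comes out at about 0.70, matching the stated value.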
Second, we varied the viewing distance to find an optimum condition. The gap between the barrier and the panel was adjusted to set the viewing distance to 30, 50, 60, 80, and 100 cm for a pixel size of 98 × 294 μm. As shown in Fig. 18, we find that the most efficient area of the ellipse is 70% of the pixel area for reducing the cross talk regardless of the viewing distance. In summary, the optimum area of the barrier is 70% of the pixel area, regardless of pixel size and viewing distance, for reducing the cross talk.

Conclusion

In this paper, we propose a new parallax barrier pattern with an elliptical shape that reduces the cross talk caused by light leakage from all the adjacent subpixels. Because of the trade-off between the luminance and cross talk in a parallax barrier-type 3-D display, we analyzed the relationship between the reduction of cross talk and that of luminance depending on the size of the elliptical pattern. Through this relationship, we optimized the size of the proposed barrier pattern. In addition, we verified that the cross talk of the proposed barrier pattern was superior to both the conventional slanted and rectangular barrier patterns under the same luminance condition. We think that our proposed barrier pattern can be applied to autostereoscopic displays as a design factor for reducing cross talk. We expect that an autostereoscopic display with our proposed barrier will provide much better cross talk reduction for viewers.
CONTEMPLATION OF SYMBIOTIC MICROBIAL BIOFILMS IN WASTEWATER TREATMENT

A state of symbiosis is created among the species found in naturally occurring biofilms. Biofilm formation provides protection against toxic shocks, mechanical stress, and predation. Biofilms can play an important role in wastewater treatment technologies but, on the other hand, can also plague water systems. Biofilm-based treatments have traditionally been used for the treatment of water, but recent developments in the field have boosted the use of biofilms in various wastewater treatment strategies, especially those targeting BOD and nutrients. However, the design and execution of this idea are still being worked out because of problems that arise in implementation, such as corroding pipes, increasing head loss, allowing pathogens to persist in distribution systems, and fouling membrane processes. The choice of species for biofilm processes in particular techniques is an important design consideration in wastewater treatment. All these data are essential for improving the performance, effectiveness and constancy of biofilm-based wastewater treatment strategies.

INTRODUCTION

A congregation of microbial cells that is irreversibly linked with a surface is called a biofilm. The formation of a biofilm occurs over a series of events leading to adaptation under diverse environmental and nutritional conditions [1,2]. A mature biofilm is organized by a hydrated polymeric matrix and has a highly differentiated structure, often described as a mushroom- and pillar-like assembly [3]. Biofilm formation is regulated by various genetic and environmental factors. Bacterial motility, extracellular polysaccharides, cell membrane proteins and signalling molecules play significant roles in biofilm formation.

Biofilm development: structure and function

The biofilm matrix may contain acellular materials such as corrosion particles, mineral crystals, clay or silt particles, or blood components. Biofilms may form on a wide variety of surfaces, including living tissues, indwelling medical devices, industrial or potable water system piping, and natural aquatic systems. The microbial communities of a biofilm break down various nutrients from wastewater, such as phosphorus- and nitrogen-containing compounds and carbonaceous materials, as well as trapped pathogens. Biofilm development has the following important steps: (a) attachment, (b) maturation and (c) detachment (dispersal).

A. Attachment

Primary adhesion of bacteria to the surface begins the formation of a mature biofilm and involves the reversible attachment of planktonic bacteria [4]. Cell surfaces have locomotor structures such as flagella, pili and fimbriae that provide an advantage in biofilm formation.
Direct primary adhesion to abiotic surfaces is mediated by non-specific physicochemical interactions (hydrodynamic forces, electrostatic interactions, Van der Waals forces and hydrophobic interactions), and planktonic cells adhere to a surface randomly (by Brownian motion and gravitational force) or in a directed way via chemotaxis, flagellar motility and pili [5,6]. Motile bacteria can utilize flagella to overcome hydrodynamic and repulsive forces, which gives them a competitive advantage. Flagellar motility is important for initial attachment, as has been reported for many bacteria [7]. At this stage, the bacteria can commit to the biofilm lifestyle or vacate the surface and return to the planktonic lifestyle. After primary adhesion, the next step is secondary adhesion, which entails irreversible binding to the surface. The microorganisms begin to produce EPS at this stage, forming microcolonies as the EPS complexes with materials present on the surface and firms up the links between the cells and the surface [8,4]. At this point, the cells start to communicate through QS signals, as there is no motility [9]. Surfaces that are rough, hydrophilic and coated provide a better environment for more frequent attachment and biofilm formation [10]. Microorganisms multiply into microcolonies, encapsulating themselves in EPSs.

B. Maturation

The maturation of the biofilm occurs in response to increasing population density and high EPS production; this increases the biofilm thickness and the stability of the colony. Cell division and adhesion of new planktonic cells are the two means by which population growth takes place [8]. QS signalling and EPS build-up through continued cell division are two factors essential for the maturation of the biofilm. During biofilm formation, many species of bacteria are able to communicate with one another through this mechanism, called quorum sensing [11]. More than 90% of the dry mass of mature biofilms consists of EPS [12]. EPS components include polysaccharides, nucleic acids, proteins, lipids, and other biopolymers. EPS is responsible for scaffolding cells together, adhesion to surfaces and maintaining the three-dimensional architecture of the biofilm. Moreover, bacterial cells surrounded by EPS are protected against various stresses such as antimicrobials, host immune systems, oxidation and metallic cations [12]. Inside the biofilm, EPS retains quorum sensing (QS) signalling molecules, extracellular enzymes, and metabolic products. Therefore, EPS supports cell-to-cell communication and the degradation of substances [12,13]. Cells or small portions of the biofilm may detach and disperse after the maturation of the biofilm, as a result of nutrient depletion, QS signalling or shearing of biofilm aggregates by flow effects [14,4].

C. Dispersal

After biofilm formation, bacteria leave the biofilm on a regular basis so that they can undergo rapid multiplication and dispersal. Detachment of planktonic bacterial cells from the biofilm is programmed and follows a natural pattern. Sometimes bacteria are detached from the colony into the surroundings by mechanical stress, but in most cases some bacteria stop EPS production and are released into the environment. Dispersal of biofilm cells occurs either by detachment of newly formed cells from growing cells, by flow effects, or through quorum sensing [15]. The mode of biofilm dispersion affects the phenotypic character of the organisms.
Cells dispersed from the biofilm can retain certain of its properties, such as antibiotic insensitivity. Cells that are dispersed from the biofilm as a result of growth may return quickly to their normal planktonic phenotype. Alterations in nutrient availability, oxygen fluctuations, an increase in toxic products or other stress-inducing conditions may also result in biofilm dispersion [16]. Acellular materials such as mineral crystals, corrosion particles, clay or silt particles as well as blood components (in the case of bacterial biofilms present within the human body) might also be found in the biofilm matrix [14]. There are channels for the circulation of nutrients and water within the matrix [12]; they also allow interspecies bacterial exchange or sharing of different metabolic substrates in the biofilm. Under severely low nutrient conditions, bacteria can produce enzymes that cleave the structural polysaccharides, allowing detachment of cells from the biofilm and hence modulating the structure of the biofilm [17]. EPS can incorporate large amounts of water into its structure by hydrogen bonding and is hence highly hydrated; this prevents desiccation in some natural biofilms. Nutrient availability, temperature, light, pH, ionic strength, carbon source, and water content can alter the structure of biofilms [12]. Cations can improve the structural integrity of the biofilm by increasing cross-linking between polysaccharides [18]. Biofilm thickness can be affected by the number of component organisms. Biofilm architecture is constantly changing because of external and internal processes and is heterogeneous in both space and time. Structure may also be influenced by the interaction of particles of non-microbial constituents from the host or environment. Other examples of particle interactions with biofilms are minerals such as calcium carbonate, corrosion products such as iron oxides and soil particles collecting in the biofilms of potable and industrial water systems.

Surface

Surface topography greatly affects the ability of bacteria to adhere to a surface. Surface roughness reduces the shear force on bacterial cells and communities in fluids at high flow rates, such as in the water pipes of industrial plants. A material surface exposed to an aqueous medium will inevitably become conditioned or coated by polymers from the medium, and the resulting chemical modification will affect the rate and extent of microbial attachment. Other factors influencing microbial attachment include charge, hydrophobicity and elasticity [19].

pH

The growth and development of bacteria and biofilm formation are greatly affected by changes in pH, which can disrupt different mechanisms and have inhibitory or killing effects on the microorganisms. For the majority of bacteria, the optimal pH for polysaccharide production is around 7, but it varies among species.

Salinity

Salt tolerance in plants depends mainly on the capability of roots for (i) restricted or controlled uptake of Na+ and Cl- and (ii) continued uptake of essential elements, particularly K+ and NO3-.

Temperature and moisture content

Limited water availability is typically the most critical of the environmental stress factors to which terrestrial bacterial communities are exposed and exhibits the greatest effect on the survival and activity of these communities.
Nutrient availability

Biofilm bacteria acquire nutrients by concentrating trace organics on surfaces with their extracellular polymers, using the waste products of their neighbours and secondary colonizers, and using different enzymes to break down food supplies. Nutrients such as sucrose, phosphate, and calcium enhance biofilm formation as their concentrations increase.

Velocity, turbulence and hydrodynamics

The boundary layer is the region near the surface where no turbulent flow is experienced. Within this region, the flow velocity has been shown to be insufficient to remove biofilms. The region outside this layer experiences high levels of turbulent flow, which influences the attachment of cells to the surface. The size of the boundary layer depends on the flow velocity of the water. The boundary layer decreases in size at high velocities, and the cells are then exposed to a high level of turbulence. Hydrodynamic conditions can influence the formation, structure, thickness, mass, EPS production and metabolic activities of biofilms [20].

Gene regulation and quorum sensing (QS)

Cell-to-cell signalling, also termed QS signalling, has been proven to play an important role in cell attachment to and detachment from biofilms. Bacterial cells densely packed in an EPS matrix release density-dependent chemical signals to mediate the growth and development of biofilms on different surfaces. QS uses a transcriptional activator protein that acts in concert with small autoinducer (AI) signalling molecules to stimulate the expression of target genes, resulting in changes in chemical behaviour. After sufficient AIs have accumulated, this form of intercellular communication serves to coordinate gene expression, morphological differentiation and the developmental responses of bacterial cells [21].

Production of extracellular polymeric substances (EPSs)

By bridging with multivalent cations and through hydrophobic interactions, EPSs aid in the formation of a gel-like network that keeps bacteria together in biofilms. In addition, EPSs mediate the adherence of biofilms to surfaces, flocculation and granulation, protect bacteria against harmful environmental conditions and enable bacteria to capture nutrients from the surroundings [22].

Extracellular DNA (eDNA)

Extracellular DNA is a major constituent of a number of single- and multispecies biofilms. Its role is very important in numerous stages of biofilm formation, such as initial bacterial adhesion, aggregation and microcolony formation, which favours wastewater treatment. eDNA also helps strengthen biofilms, provides protection from physical stress, antibiotics and detergents, and serves as an excellent source of nutrients for biofilm growth [23].

Divalent cations

Recent studies have shown that eDNA chelates divalent cations, which helps modify bacterial cell surface properties and thus favours the resistance of biofilms to detergents and antimicrobial agents [24]. In terrestrial and aquatic environments, divalent cations such as Ca2+ are present in abundance; therefore, calcium may be one of the factors that bacteria sense during biofilm-associated growth. By associating with negatively charged sites on extracellular polymers, divalent cations such as those of calcium play a critical role in the initial attachment of microbial aggregates in activated sludge flocs, anaerobic sludge granules and biofilms [25].
Introducing more divalent cations can make a biofilm denser and mechanically more stable and enhances its thickness, as shown in recent studies [26]. Calcium has been found to act as a cofactor for certain proteins and is also active in cell signalling, cellular and extracellular product formation, biofilm virulence, and alginate regulation [27].

Bacterial, fungal and microalgal biofilms

Biofilms are intricate surface-associated cell populations embedded in an ECM that are capable of adhering to a wide diversity of surfaces with distinct biotic and abiotic compositions, including human tissue and medical devices. Present-day applications of biofilms include the degradation of toxic effluents in soil and water, the commercial production of chemicals, and the generation of electricity. Bacterial biofilms can be infectious in nature and can result in nosocomial infections. Many species of bacteria are able to communicate with one another throughout biofilm development through a specific mechanism called quorum sensing. Bacterial biofilm formation is considered a widespread microbial lifestyle in natural and artificial environments and occurs on all surface types [28,29]. Some biofilm-forming bacteria are P. aeruginosa, Vibrio cholerae, Listeria monocytogenes, and E. coli. Temperature, pH differences, ultraviolet radiation, oxidation, metal ions and desiccation are some of the external stresses against which the biofilm protects the bacteria. Additionally, biofilms are able to evade innate and/or adaptive immune defences and avoid antimicrobial treatments through several mechanisms [30][31][32]. Fungi habitually flourish as biofilms, which are aggregated communities wrapped in a protective extracellular matrix. Fungal biofilms are communities of adherent cells bounded by an extracellular matrix. Besides bacteria, fungi are also used for pollution removal. Fungal biofilms help in the degradation of environmental organic chemicals, from proteins to complex carbohydrates, lipids, aromatic hydrocarbons, pharmaceutical compounds, heavy metals and endocrine-disrupting chemicals, by means of a wide array of intra- and extracellular enzymes [33,35], and they therefore form a significant group of microscopic communities in wastewater treatment plants [36]. Many medically important fungi produce biofilms, including Candida [37], Aspergillus [38], Cryptococcus [39], Trichosporon [40], Coccidioides [41], and Pneumocystis [42].

Mixed culture biofilm

A single microbial species or a combination of different microbial species, including bacteria, algae, fungi and others, can attach tightly to one another and to biotic or abiotic surfaces to form a biofilm [43][44][45][46][47]. The coexistence of multiple microbial species creates close proximity, which promotes interaction among the members. Synergistic interactions between algae and prokaryotic microbial communities can improve biological wastewater treatment processes. Increases in biomass activity, growth efficiency, and enzyme production are achieved by the effect of the mixture of microorganisms. In mixed culture, the products of one microorganism act as substrate for another, overcoming feedback regulation and catabolic repression. As an example of biofilms in sewage treatment, the association of Nitrosococcus sp. and Nitrospira sp. has proved beneficial [48]. Several microbial processes cannot be achieved with pure cultures.
Tempeh wastewater also contains diverse gram-positive and gram-negative bacterial species, such as Enterobacter cloacae, Klebsiella pneumoniae, K. ozaenae, Enterobacter agglomerans, Streptococcus dysgalactiae, Lactobacillus casei, Enterococcus faecium and Staphylococcus epidermidis [49]. By providing additional oxygen from photosynthesis, microalgae help improve the purification performance of bacterial systems and also decrease the total energy cost of direct or indirect oxygen supply [50].

Undesirable biofilm

In the treatment process, biofilms can have both positive and negative effects. Membrane bioreactors (MBRs) and membrane biofilm reactors (MBfRs) are used for membrane filtration. Membrane biofouling in a moving bed biofilm reactor (MBBR) reduces permeate flow and can cause problems in a membrane bioreactor (MBR) [51]. In large-scale operations, irreversible fouling cannot be removed by cleaning and is therefore very difficult to manage [52]. The major reasons for the occurrence of biofouling are the production of membrane foulants by microorganisms present in the wastewater and the colonization of membrane surfaces by microorganisms. To develop novel solutions to the membrane fouling problem, the interactions and activities of the microbial community in the wastewater and on the membrane surface should be well understood. Microorganisms and their organic products are the main cause of membrane fouling. Through this fouling and the formation of biofilm, the flux and permeability of the membrane are decreased [53]. Therefore, fouling needs to be kept under control to decrease operational costs and increase membrane lifetime.

Biofilm characterization approaches, both traditional and modern

A complex, three-dimensional microbial community that grows at an interface and interacts with the surrounding environment is known as a biofilm [54,55]. Through the sequestration and alteration of potentially toxic compounds, biofilms can potentially serve as a renewable aid in waste, soil and water remediation applications [56][57][58]. The chemical composition of the filter media is critical with respect to its compatibility with the emerging biofilms, so its elemental composition should be assessed. Different techniques can be applied to analyse the surface chemistry of a material and to detect and quantify the elements in a filter medium. Such techniques measure the elemental composition at the parts-per-thousand range, along with empirical formulas and the electronic and chemical states of the elements contained in a material [59].

Determination of viable cell numbers by plate count (colony forming units/ml, or CFUs)

A standard quantification method, the viable cell enumeration (CFU/ml) assay, is used to determine the number of viable cells [60,61]. The basic concept of this assay is to differentiate living cells from dead cells and enumerate them without dyes or instrumentation by separating the individual cells on an agar plate and growing colonies from the cells. It should be noted that in a mixed culture, bacteria replicate at different rates; consequently, culture expansion may not be suitable, as it will disturb the ratio of cells from the original biofilm. To accommodate slow colony-forming bacteria, the incubation time may need to be extended [62] (Table 1).
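As a minimal sketch of the back-calculation behind CFU/ml enumeration, the following converts a colony count on a serially diluted plate into a viable count; the plated volume of 0.1 ml is our assumption for illustration, since the text does not specify one.

```python
def cfu_per_ml(colonies: int, dilution_exponent: int, plated_ml: float = 0.1) -> float:
    """CFU/ml = colonies / (dilution factor x volume plated).
    dilution_exponent = n for a 10^-n serial dilution; plated_ml is an assumed value."""
    dilution_factor = 10.0 ** (-dilution_exponent)
    return colonies / (dilution_factor * plated_ml)

# e.g., 42 colonies counted on the 10^-4 plate with 0.1 ml plated:
print(f"{cfu_per_ml(42, 4):.2e} CFU/ml")  # 4.20e+06
```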
Determination of biofilm weight (wet weight and dry weight)

A digital weighing balance is used to determine the weight of the biofilm in terms of dry weight and wet weight. The wet weight is measured after gentle rinsing with distilled water, whereas the dry weight is estimated by allowing polypropylene and polystyrene filter media to dry under aseptic conditions in laminar flow until a constant weight is achieved [63,64]. Conversely, natural filter media, such as rock, granite or stone media, should be dried in an oven at 60 °C to attain constant weight [65]. The difference between the weight of the medium with biofilm and that of the medium without biofilm gives the weight of the biofilm.

Determination of the biofilm optical density (OD)

The biofilm can also be measured by the OD method. To ensure the removal of any material on their surface, the filter media supporting the biofilm are first rinsed with sterilized water. Then, the biofilm is removed from the filter media in 0.9% saline by sonication for 15 min. Finally, the spectrophotometric absorbance of the dissolved biofilm is recorded at a wavelength of 550 nm (OD550), using saline as the blank [63][64][65].

Determination of heterotrophic plate count (HPC)

The HPC concentration is determined by the conventional serial dilution method. The biofilm dissolved in 0.9% saline is serially diluted (up to 10^-5), spread on selective growth media plates and incubated at 37 °C for a specific time period (24-48 h). The microbial growth appearing on the specific media is enumerated in terms of HPC/ml (pathogen indicators). Further identification of pure cultures from these plates is done by observing colony morphology as well as by microscopic and biochemical tests.

Microscopic analysis of biofilms

Non-invasive microscopic techniques provide a faithful way of visualizing biofilms without disturbing their structure. The traditional microscopic techniques used for imaging analysis of biofilm samples are light microscopy (LM) and electron microscopy (EM). The most commonly used method for structural analysis is scanning electron microscopy (SEM). The overall magnification of SEM can range from about 10 to 500,000 times, and it can be used to produce high-resolution, magnified images of surface topography, making this technique vital in the analysis of microscopic structures, including those of biofilms [66]. High-resolution SEM images are useful for evaluating bacterial interactions, EPS organization and biofilm morphology, and hence for understanding biofilm formation and persistence [67][68][69].

Clone library technique

Since the beginning of the 1990s, cloning and sequencing of the 16S rRNA gene have been comprehensively and effectively employed for the study of microbial biofilms, and this is still the most widely used technique [70]. The clone library method allows complete 16S rRNA sequencing and identification, with very precise taxonomic studies of both cultured and uncultured microorganisms in biofilms, as well as the design of primers for PCR and probes for fluorescence in situ hybridization (FISH) [71]. In combination with other advanced techniques, cloning and rRNA gene library construction have also been applied in wastewater treatment for the exploration of biofilm communities.

Microbial fingerprinting methods

Microbial fingerprinting methods distinguish between microorganisms and groups of microorganisms on the basis of the distinctive characteristics of a universal component of a biomolecule, such as phospholipids, DNA or RNA, providing the overall profile of a biofilm [72,73].
Phospholipid fatty acid analysis (PLFA), denaturing gradient gel electrophoresis (DGGE) and terminal restriction fragment length polymorphism (T-RFLP) are included in this category. Phospholipids are structural components of all cell membranes, they break down rapidly upon cell death, and their type and proportion are distinctive to different microorganisms; the mass of PLFAs in a biofilm sample is therefore directly proportional to the viable biomass. Some groups of organisms have unique or "signature" types of PLFA [74]. DGGE is a nucleic acid-based technique used to generate a genetic fingerprint of a complex microbial community [70]. T-RFLP is a nucleic acid-based method that provides the profile of a microbial community and is used to detect specific microbial populations [75].

Fluorescence in situ hybridization (FISH)

FISH is an excellent method for the identification, localization, visualization and quantification of non-cultured microorganisms in their microcosm. The most commonly used target molecules for FISH are 16S rRNA, 18S rRNA, 23S rRNA and mRNA. The specificity of the fluorescent probe enables detection and identification at any desired taxonomic level, from domain down to a resolution suitable for differentiating between individual species [76]. Digitalization and manipulation of images can be achieved with a charge-coupled device (CCD) and appropriate image analysis software; quantifying rRNA content can help to keep a record of microorganisms and to measure the activity of single cells in biofilms. CLSM is used with FISH analysis to obtain three-dimensional images of thick samples with a high background (sludge flocs, biofilms). To overcome some of its pitfalls and to increase its sensitivity, FISH can be combined with other techniques. By enabling bacteria to be mapped, FISH-based methods have revolutionized investigations into the morphology and microbial composition of biofilms [77].

Next-generation sequencing (NGS) technology

Pyrosequencing, a DNA sequencing technology developed at the Royal Institute of Technology, is based on the sequencing-by-synthesis principle [79] and on the detection of pyrophosphate (PPi) released during DNA synthesis [80]; it has transformed microbial ecology, explores deeper layers of microbial communities and is vital in presenting an unbiased view of the composition and diversity of communities [78]. In comparison with first-generation Sanger sequencing technology, NGS platforms such as Roche/454, Illumina/Solexa, Life/APG and HeliScope/Helicos BioSciences are much faster and less expensive [81]. The technique of pyrosequencing has no need for labelled primers, labelled nucleotides or gel electrophoresis. It has the potential advantages of accuracy, flexibility, parallel processing and easy automation. It has been effective for both confirmatory sequencing and de novo sequencing [80].

Table 1 (summary of biofilm characterization methods). Confocal laser scanning microscopy: biovolume can be calculated with appropriate software and computing capability; usually requires a dedicated technician to run and maintain the instrument; can image any cell or particle that has a fluorescent label detectable by the microscope; is better used for structures and 3D architecture than for counting cells; can image within the thickness of the biofilm and assemble z-stacks. Determination of dry mass: requires an analytical balance and a lab oven capable of reaching 100 °C; the film on the substrate is dried, massed, then cleaned, and the substrate is massed again; the film area should be measured, and the thickness can be measured to give dry mass per unit of wet volume. Optical density: requires a spectrophotometer; to ensure the removal of any material on their surface, the filter media supporting the biofilm are first rinsed with sterilized water, the biofilm is then removed from the filter media in 0.9% saline by sonication for 15 min, and the spectrophotometric absorbance of the dissolved biofilm is recorded at 550 nm (OD550) using saline as the blank. Electron microscopy: toxic chemicals may be involved in some fixation techniques; usually requires a maintenance contract and special housing conditions.
Alternative qualitative characterization methods

Scanning electrochemical microscopy (SECM) can be used to assess the topological structure and chemical properties of biofilm surfaces [82,83]. Based on the distribution of reactive groups, this versatile technique can be used to determine the distribution of extracellular polymeric substance (EPS) components at the biofilm surface and can add an extra dimension to 3D models of biofilms. Literature precedent exists for analysing biofilms with atomic force microscopy (AFM), although it is not commonly utilized at present. AFM is useful for understanding biofilm characteristics such as roughness, topography and stiffness; it can characterize the components of the underlying substratum as well as substratum interactions [84]. However, like similar techniques, it requires specialized equipment costing more than $100K and trained operators. Spectroscopic analyses of biofilms are becoming increasingly recognized for their effectiveness as non-destructive methods for better understanding biofilm aggregation, adhesion and EPS composition. Infrared (IR) spectroscopy delivers vibrational information through the use of IR light, whereas Raman spectroscopy typically uses more energetic light, usually supplied by a near-IR, visible, or ultraviolet laser, to provide similar data. Despite some complications, IR and Raman are good methods to use in combination with one another, with confocal scanning laser microscopy (CSLM), or with specialized IR-compatible surfaces [85][86][87][88].

Biofilm reactors

The origins of biofilm reactors can be traced to modern water sanitation. Advances in academic understanding, design, and mathematical modelling have led to new and emerging biofilm reactors conducive to fundamentally based design approaches, as well as to fundamentally based design and operation procedures for traditional biofilm reactors. All biofilm reactors have two characteristic processes, (1) mass transfer and (2) biochemical conversion, which influence biofilm structure and function. For these processes, every biofilm reactor has common compartments for optimisation.

Moving bed biofilm reactors

The MBBR is a two-phase (anoxic) or three-phase (aerobic) system with free-floating plastic biofilm carriers that requires mechanical mixing to distribute the carriers throughout the tank. The process comprises a submerged, completely mixed biofilm reactor and a unit for liquid-solids separation [89]. MBBRs have been applied across a range of pollutant loadings, bulk-phase external carbon sources in denitrification, and dissolved oxygen concentrations in carbon-oxidation or nitrification, and the response of the system has been evaluated.
Moving bed biofilm reactors The MBBR is a two-phase (anoxic) or three-phase (aerobic) system with free-floating plastic biofilm carriers that require mechanical mixing for distribution throughout the tank. The process consists of a submerged, completely mixed biofilm reactor and a liquid-solids separation unit [89]. MBBRs have been applied across a range of pollutant loadings, bulk-phase external carbon sources (in denitrification) and dissolved oxygen concentrations (in carbon oxidation or nitrification), and the response of the system has been evaluated. Like the activated sludge process, the MBBR process is capable of meeting treatment objectives for carbon oxidation, nitrification and denitrification, but the MBBR makes use of a smaller tank volume. Because the MBBR is a continuously flowing process, it does not require a special operational cycle for biofilm thickness control. The MBBR is well suited for retrofit installation in existing municipal wastewater treatment plants. A plan (length-to-width) ratio greater than 1.5:1 results in non-uniform distribution of the biofilm carriers. MBBRs contain plastic biofilm carriers occupying up to 67% of the liquid volume. Screens are typically installed in one MBBR wall to allow treated effluent to flow to the next treatment step while retaining the free-moving plastic biofilm carriers. Aerobic MBBRs use a diffused aeration system to distribute the plastic biofilm carriers evenly and to meet process oxygen requirements. Anoxic MBBRs, by contrast, have no process oxygen requirement and therefore use mechanical mixers to distribute the carriers evenly. Medium-rate MBBRs designed to meet basic secondary treatment standards are typically designed for a loading of 5-10 g BOD5 m⁻² d⁻¹ at 10 °C, depending on the type of liquid-solids separation. Biologically active filters BAFs consist of natural mineral or random plastic media that support biofilm growth and also serve as a filtration medium. Backwashing removes the solids that accumulate through filtration and biochemical transformation. BAF configuration and backwash regimes are influenced by media density. BAF influent requires preliminary and primary treatment. Downflow BAFs with media denser than water, used for secondary and tertiary treatment, include the Biocarbone process and packed-bed tertiary denitrification filters such as the Tetra Denite process. These BAFs are backwashed using an intermittent counter-current flow. Upflow BAFs with media denser than water, such as the Infilco Degremont Biofor process, have been used for secondary and tertiary treatment; other upflow processes use a floating bed of buoyant media to provide area for biofilm development and filtration. Media selection is integral to meeting treatment objectives and to the flow and backwashing regimes. Media can be categorized as mineral or plastic; in most cases, plastic media are buoyant and mineral media are denser than water. The media must resist breakdown from abrasion during backwashing and from chemical degradation by constituents in municipal wastewater. BAFs designed for carbon oxidation and suspended solids removal in secondary treatment typically have volumetric BOD loading rates in the range of 1.5-6 kg m⁻³ d⁻¹.
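The areal and volumetric loading rates quoted above translate directly into first-pass reactor sizing. The sketch below converts a design BOD5 load into a required MBBR carrier surface area (from the 5-10 g BOD5 m⁻² d⁻¹ range) and a required BAF media volume (from the 1.5-6 kg m⁻³ d⁻¹ range); the influent flow and strength and the carrier's specific surface area are hypothetical example values, and a real design would also account for temperature, peaking factors and oxygen transfer.

```python
# First-pass sizing from the loading rates quoted above. The influent flow and
# strength and the carrier specific surface area are hypothetical examples.

Q_M3_PER_D = 10_000.0      # influent flow, m^3/d (hypothetical)
BOD5_G_PER_M3 = 150.0      # influent BOD5, g/m^3 (hypothetical)
MBBR_AREAL_RATE = 7.5      # g BOD5 per m^2 per day, mid-range of 5-10 at 10 degC
BAF_VOL_RATE_KG = 4.0      # kg BOD5 per m^3 per day, mid-range of 1.5-6
CARRIER_SSA = 500.0        # carrier specific surface area, m^2/m^3 (varies by product)
MAX_FILL_FRACTION = 0.67   # carriers may occupy up to 67% of the liquid volume

bod_load_g_per_d = Q_M3_PER_D * BOD5_G_PER_M3          # total BOD5 load, g/d
carrier_area_m2 = bod_load_g_per_d / MBBR_AREAL_RATE   # required biofilm area
carrier_vol_m3 = carrier_area_m2 / CARRIER_SSA         # bulk carrier volume
mbbr_tank_m3 = carrier_vol_m3 / MAX_FILL_FRACTION      # minimum tank volume

baf_media_m3 = (bod_load_g_per_d / 1000.0) / BAF_VOL_RATE_KG

print(f"BOD5 load:         {bod_load_g_per_d / 1000:,.0f} kg/d")
print(f"MBBR carrier area: {carrier_area_m2:,.0f} m^2")
print(f"MBBR tank volume:  {mbbr_tank_m3:,.0f} m^3 (at 67% fill)")
print(f"BAF media volume:  {baf_media_m3:,.0f} m^3")
```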
Expanded and fluidized bed biofilm reactors Expanded bed biofilm reactors (EBBRs) and FBBRs use small media particles suspended in vertically flowing wastewater, so that the media become fluidized and the bed expands. Individual particles become suspended once the drag force of the relatively fast-flowing wastewater (30-50 m h⁻¹) overcomes gravity. In municipal applications, fluidized beds are typically used for tertiary denitrification. When treating groundwater or industrial wastewater, FBBRs are used for the removal of oxidized contaminants such as nitrate and perchlorate. Suspension of the media maximizes the contact surface between microorganisms and wastewater. It also increases treatment efficiency by improving mass transfer, because there is significant relative motion between the biofilm and the flowing wastewater. Silica sand (0.3-0.7 mm diameter) and granular activated carbon (GAC; 0.6-1.4 mm) are typically used. Other materials, such as 0.7-1.0 mm glassy coke, have been used at pilot scale [90]; this flexibility in media selection is one of the key advantages of the process technology. In a study of tertiary nitrification of activated sludge-settled effluent using a pilot-scale EBBR, [91] found that the process also removed up to 56% CBOD and 62% TSS from the influent stream. Removal of these materials was attributed to the activities of protozoa (free-living and stalked) and metazoa (rotifers, nematodes and oligochaetes). Rotating biological contactors The RBC is an efficient attached-growth system that purifies wastewater from different industries, namely food and beverage, refinery and petrochemical; it is also effective in purifying municipal wastewater, landfill leachate and lagoon effluent. The RBC process has been applied where the average effluent wastewater quality standard is less than or equal to 30 mg l⁻¹ BOD. The RBC consists of a horizontal shaft on which a cylindrical, synthetic media bundle is mounted. The bundled media are partially submerged and rotate slowly, exposing the biofilm alternately to air (when not submerged) and to substrate in the bulk liquid (when submerged). Suspended biofilm fragments are removed from the RBC effluent stream by liquid-solids separation units. With reduced life-cycle costs, less sludge production, a smaller space requirement, ease of operation and high process stability under load variations, as well as high effluent quality with regard to both biological oxygen demand (BOD) and nutrients, the RBC system has an edge over suspended-growth systems. Trickling filters A trickling filter is a three-phase biofilm reactor with fixed carriers. Wastewater enters the bioreactor through a distribution system and trickles down over the biofilm surface, while air, the third phase, circulates and diffuses through the flowing liquid into the biofilm. The components of a trickling filter are an influent water distribution system, a containment structure, rock or plastic media, and an underdrain and ventilation system. Treatment of wastewater in a trickling filter results in a net production of total suspended solids, so liquid-solids separation is required; this is achieved with circular or rectangular secondary clarifiers. The trickling filter process generally includes an influent/recirculation pump station, the trickling filter(s) and liquid-solids separation unit(s). Ideal trickling filter media encourage ventilation and provide a high specific surface area, low cost, high durability and enough porosity to avoid clogging [92]. Trickling filter media types include rock, random (synthetic), vertical-flow and cross-flow (synthetic) media. Fixed-nozzle and rotary distributors are the two types of trickling filter distribution systems. Microorganisms growing in the biofilm that forms on the packing material surface degrade the pollutants from the effluent. CONCLUSION The biofilm provides anchorage and access to nutrients for growth. Complex organics can easily be broken down into metabolizable substrates by enzymes within the biofilm matrix, which also facilitates horizontal gene transfer.
While biofilm strategies help in the treatment of wastewater on one front, biofilms are, on the other hand, difficult to remove from the environment, and their growth is affected not only by the surrounding environment but also by the native microflora. This study presents comparative data on the benefits of biofilm treatment processes, describing their use in several stages of the wastewater treatment process. This study is important because knowledge about the microorganisms involved, the stages of treatment and the factors affecting the treatment process is vital for the improved design of these biofilm-based wastewater treatment strategies.
2021-11-21T16:15:43.053Z
2021-11-15T00:00:00.000
{ "year": 2021, "sha1": "7b81e21c5528c5b5472c698bd8deceee455eb1b7", "oa_license": "CCBY", "oa_url": "https://innovareacademics.in/journals/index.php/ijcpr/article/download/43607/25700", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "e091a8063be1d4de2a1d6d8d3c6a7e31c920a4ff", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
23652029
pes2o/s2orc
v3-fos-license
A new paradigm in low-risk papillary microcarcinoma: active surveillance Classical papillary thyroid microcarcinoma (PTMC) is a variant of papillary thyroid carcinoma (PTC) known to have an excellent prognosis. It has a mortality of 0.3%, even in the presence of distant metastasis. The latest American Thyroid Association guidelines state that although lobectomy is acceptable, active surveillance can be considered in the appropriate setting. We present the case of a 37-year-old female with a history of PTMC who underwent surgical management consisting of a total thyroidectomy. Although she has remained disease-free, her quality of life has been greatly affected by the sequelae of this procedure. This case serves as an excellent example of how first-line surgical treatment may prove more harmful than the disease itself. Learning points: Papillary thyroid microcarcinoma (PTMC) has an excellent prognosis, with a mortality of less than 1% even in the presence of distant metastases. Active surveillance is a reasonable management approach for appropriately selected patients. Patients should be thoroughly oriented about the risks and benefits of active surveillance vs immediate surgical treatment. This discussion should include the sequelae of surgery and the potential impact on quality of life, especially in the younger population. More studies are needed for stratification of PTMC behavior to determine whether conservative management is adequate for all patients with this specific disease variant. Background Classical papillary thyroid microcarcinoma (PTMC) is a variant of papillary thyroid carcinoma (PTC). A meta-analysis of post mortem studies revealed a prevalence of 11.5% (1). The mortality of PTMC is less than 0.3% regardless of the presence of distant metastasis (2). Some experts have suggested modifying the terminology that describes PTMC to 'indolent lesion of epithelial origin' because of its low aggressiveness (3). New evidence is emerging that favors active surveillance over immediate surgery (4,5). Active surveillance consists of regular follow-up, with active treatment delayed until the malignancy shows significant progression. Immediate surgery refers to thyroidectomy as per the most recent guidelines (6). Research has shown that patients who choose active surveillance over immediate surgical intervention have fewer complications overall (7). These undesired effects include severe hypoparathyroidism and vocal cord paralysis, among others. Our case serves as an excellent example of how first-line surgical treatment may prove more harmful than the actual disease. Case presentation We present the case of a 37-year-old female with a history of a thyroid nodule that had been diagnosed two years prior. At the time, she was asymptomatic and had normal thyroid function as shown by serum test results. Investigation Her initial thyroid ultrasound showed a solid hypoechoic right-lobe nodule measuring 0.7 × 0.5 × 0.8 cm and a solid hypoechoic left-lobe nodule measuring 0.4 × 0.2 × 0.2 cm. A fine needle aspiration biopsy (FNAB) was performed and was positive for PTC in the right nodule and suspicious for malignancy in the left nodule. Treatment A total thyroidectomy was performed. She has continued to suffer from the sequelae of her surgery and was referred to our clinic for further management.
Outcome and follow-up Her immediate postoperative course was complicated by hypocalcemia, which prolonged her hospital stay. For the past 2 years, there has been no evidence of disease recurrence. Nonetheless, her quality of life has been greatly affected by the metabolic effects of severe hypocalcemia, requiring multiple hospital admissions due to difficulty with treatment adherence, and by bothersome hoarseness due to vocal cord paralysis. Discussion Classical micropapillary thyroid carcinoma is considered an indolent thyroid neoplasm, with a mortality of less than 0.3% even in the presence of distant metastasis (1). Clinical series of more than 1000 patients report 0% thyroid cancer-related deaths (8,9). Not only is the mortality low, but recurrence of this neoplasm is a rare event as well. Zhang et al. reported that age older than 45 years, male sex, and multifocal tumors or lesions larger than 6 mm were associated with an increased risk of nodal metastasis (9). Various factors have been associated with an increased risk of metastasis and recurrence, such as age older than 45 years, male sex, tumors that are multifocal and/or larger than 6 mm, and the presence of BRAF mutations (10). In an attempt to estimate the risk of recurrence uniformly, a scoring system was developed by Buffet et al.; this system accounts for the presence of lymph node involvement, gender and tumor focality (8). The incidence of PTMC has been increasing over the last decade, making it the most common PTC variant in patients older than 45 years (11). The latest American Thyroid Association (ATA) guidelines for the treatment of differentiated thyroid carcinoma favor more conservative management with lobectomy or even active surveillance for these tumors (6). Japan is home to the pioneers of active surveillance for this type of tumor. During a 10-year follow-up of 340 patients, 15% had an increase in tumor size, and none developed metastasis or died from the disease (4). Since then, several studies have been performed eliciting similar results (Table 1). For active surveillance to be performed, these 'low-risk' lesions should be diagnosed by FNAB (12). Adequate patient selection remains crucial in order to obtain positive results. Brito et al. developed a risk stratification guide for this purpose. It involves neck ultrasound findings, patient characteristics/comorbidities, and the availability of an experienced multidisciplinary team. Patients are classified as ideal, appropriate or inappropriate for active surveillance (13). A decreased likelihood of postoperative complications is one of the benefits of active surveillance. These complications include hematoma formation, hypoparathyroidism and vocal cord paralysis secondary to laryngeal nerve damage, among others. In a multicenter study of 14,934 patients who underwent thyroid surgery, hypoparathyroidism occurred in 10% of the patients and 7.1% suffered laryngeal nerve damage (14). In studies where active surveillance was assessed, the rate of postoperative complications was lower than in the immediate surgery group (Table 2) (15). Although the data seem favorable for active surveillance, its application in clinical practice has its burdens. Overcoming the barrier of anxiety and fear in a patient with a diagnosis of cancer is a limitation to the implementation of active surveillance. Therefore, patient education and more prospective cohort trials are needed to increase the willingness of clinicians and patients to adopt this therapeutic approach.
Evidence supporting active surveillance is increasing. Guidelines are controversial regarding the management of PTMC (Table 3). The 2015 ATA guidelines, based on the evidence from the prospective cohort studies performed in Japan, stated that the approach could be considered for papillary microcarcinomas. The patients who benefit from this approach are those without local invasion, those with a short life expectancy, or those with comorbidities that make them suboptimal surgical candidates (6). The Korean Thyroid Association (KTA) reported preliminary data suggesting that it will adopt this approach in older patients (15). The British Thyroid Association has made no recommendations regarding this topic (16). The American Association of Clinical Endocrinologists, the American College of Endocrinology and the Associazione Medici Endocrinologi state that continued follow-up without immediate surgical intervention 'may be acceptable' (16); nonetheless, the term 'active surveillance' is not used (17). Conclusion Although recent clinical evidence suggests that active surveillance is a reasonable approach for the management of low-risk papillary microcarcinoma, no standard of care has been defined. The available data have been obtained from a small number of cohorts, making it difficult to establish universal guidelines. A clear risk stratification strategy would be very helpful for identifying optimal candidates for active surveillance. This case highlights an important and challenging issue regarding the optimal management approach for these patients; more studies addressing this matter are warranted. Figure 1. (A) (×400, H&E stain) The neoplastic follicles are lined by cells with variation in size that are haphazardly arranged and show nuclear clearing, irregular contours, nuclear grooves and intranuclear pseudoinclusions (arrow), with uneven spacing of the nuclei in the neoplastic follicle; these are the characteristic changes of papillary thyroid carcinoma. (B) (×40, H&E stain) Demonstrates the unencapsulated nature of the tumor. (*) Shows the fibrous septa dividing the neoplastic epithelium. The tumor also presents an infiltrative border with a predominantly follicular architecture and occasional papillae (arrow). The right-side area represents benign thyroid parenchyma composed of macrofollicles filled with colloid (pink proteinaceous material) and small blue follicular cells. Table 1. Active surveillance trials for papillary microcarcinoma.
2018-04-03T03:04:01.072Z
2017-09-04T00:00:00.000
{ "year": 2017, "sha1": "dd2e4712fc4b311956096fb119c236c7392eaaf9", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1530/edm-17-0065", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "d5d847bd68349b21e1aeaf05ba29fa6c93d0a502", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
221324606
pes2o/s2orc
v3-fos-license
Chimeric Antigen Receptor T-Cells in B-Acute Lymphoblastic Leukemia: State of the Art and Future Directions Use of adoptive T-cell therapy modified with chimeric antigen receptors (CAR-T) has revolutionized the treatment of patients with relapsed/refractory (r/r) B-cell acute lymphoblastic leukemia (B-ALL). CAR-T cells directed against the CD19 antigen have produced response rates as high as 90% in clinical trials for r/r B-ALL. Despite high rates of complete remission, the durability of responses has been suboptimal, with frequent relapses, especially in the adult B-ALL population. Systemic toxicity from CAR-T therapy, and the standardization of toxicity grading and management, is another major hurdle in the development of the CAR-T field. In this review, we discuss the latest evidence for CAR-T therapy in B-ALL, potential mechanisms of relapse, and barriers to CAR-T cell therapy in B-ALL. We also debate the role of allogeneic hematopoietic stem cell transplant (allo-HCT) after CAR-T therapy. INTRODUCTION Use of adoptively transferred T cells modified with chimeric antigen receptors (CAR-T) has heralded a new era in the treatment of hematological malignancies, with unparalleled survival outcomes seen in patients with relapsed or refractory (r/r) disease (1-4). The idea of adoptive immunotherapy using T cells to attack cancer was developed in the early 1990s, and the first CAR was conceived by Eshhar et al. (5). CARs are synthetic receptors that include an extracellular target-binding domain combined with a signaling domain, typically CD3ζ, plus costimulatory domains (single or in combination) from genes such as CD28, 4-1BB, and OX40. Over the past decade, the field of adoptive T-cell therapy has progressed at an impressive pace, both clinically and in the development of innovative CAR-based platforms to improve the safety and efficacy of these therapies. In August 2017, a major milestone in the treatment of r/r B-cell acute lymphoblastic leukemia (B-ALL) was achieved with FDA approval of the first gene therapy (Novartis) (6), a CD19-targeted CAR-T cell-based product, tisagenlecleucel (CTL019), for B-ALL in children and young adults up to 25 years of age. Shortly thereafter, Gilead Pharma's CD19 CAR-T product, axicabtagene ciloleucel, obtained FDA approval for adult patients with r/r diffuse large B-cell lymphoma (7). B-ALL is the most common type of acute leukemia in children in the United States, with ∼3,000 new cases per year (8). In children, OS exceeds 85% (9); in adults, however, OS has been poorer, at roughly 50-60% (10,11). Frontline induction chemotherapy regimens in ALL induce high rates of complete remission (CR), of up to 90%, but ∼40-50% of adult ALL patients will eventually relapse. The outcomes in r/r ALL are even more dismal, with CR rates of only 30-40% with first salvage and only up to 10% with second salvage (12-14). With promising results from multiple trials in r/r ALL, CAR-T therapy has been added as a vital part of the therapeutic armamentarium for this disease. The development of novel agents such as monoclonal antibodies (anti-CD20), the anti-CD19 bispecific T-cell engager (blinatumomab), and the anti-CD22 antibody-drug conjugate (inotuzumab ozogamicin) has provided excellent results in both upfront and r/r ALL and continues to change the treatment paradigm for ALL (15-18).
However, the durability of responses achieved with these novel agents used as single-agent treatments in r/r ALL is dismal (15,16) and would probably be better if these agents were used in combination. Although there is no definitive randomized trial or retrospective study comparing these novel agents with CAR-T therapy, recent data from multiple studies have given CAR-T therapy a significant edge over these agents owing to better CR rates and better efficacy in r/r ALL. Combinations of these novel agents with CAR-T therapy, either sequentially or as a maintenance strategy, have the potential to further improve survival outcomes. A review of the literature pertaining to novel agents in ALL is beyond the scope of this article. Here, we focus on the current evidence for the efficacy and safety of CAR-T therapy in ALL and discuss the role of allogeneic hematopoietic stem cell transplantation (allo-HCT) in ALL patients who receive CAR-T therapy. CD19 CAR-T CLINICAL TRIALS IN B-ALL CD19 is uniformly expressed on all B-ALL cells and remains the most widely used target for CAR-T adoptive cell therapy. CD19 is also expressed on normal B cells, and despite the on-target, off-tumor toxicity of B-cell aplasia with hypogammaglobulinemia, patients do well in the short term, and those with recurrent infections can receive intravenous immunoglobulin (IVIG) supplementation. Initial studies using CAR-T cells for B-cell malignancies showed promising preliminary results in indolent lymphomas and chronic lymphocytic leukemia (19-21). A few years later, two complete remissions in pediatric B-ALL were described (22). In a pilot clinical trial led by the Children's Hospital of Philadelphia (CHOP) and the University of Pennsylvania and published in 2014, 25 children and 5 young adults with r/r B-ALL were treated with CD19 CAR-T cells (1). These patients had been heavily pretreated, and in 60% the disease had relapsed after allo-HCT. CR by morphology was achieved in 27 patients (90%), 73% of whom obtained minimal residual disease (MRD)-negative CR as assessed by flow cytometry. Since then, multiple early-phase trials and, later, larger multicenter trials have established the safety and efficacy of CD19 CAR-T therapy (3, 21, 23-26). The larger clinical trials led to the first commercially available product, tisagenlecleucel, which was approved by the FDA in 2017. In the pivotal ELIANA trial (3), which led to this approval, 75 children and young adults were treated with CD19 CAR-T cells. The overall remission rate within 3 months was 81% (61/75), with all patients who had a response to treatment found to be negative for MRD by flow cytometry. The 6-month event-free survival (EFS) and OS rates were 73 and 90%, respectively; the 1-year EFS and OS rates were 50 and 76%, respectively. Table 1 lists major trials of CD19 CAR-T therapy in patients with r/r B-ALL. These trials varied widely by CAR vector construct, eligibility criteria, patient population, and dosing scheme; however, the similar, unprecedented CR rates achieved in almost all trials imparted credibility to CAR-T therapy in general. While tisagenlecleucel contains the 4-1BB costimulatory domain, Memorial Sloan Kettering Cancer Center conducted a trial using a CAR construct with a CD28 costimulatory domain, enrolling 53 adult patients with relapsed B-ALL (23). Complete remission was observed in 83% of the patient population.
Among patients who were assessed for MRD by flow cytometry, 67% had an MRD-negative CR. The median EFS and OS durations among the 53 treated patients were 6.1 and 12.9 months, respectively. These trials, despite variation in CAR constructs and manufacturing, have consistently shown that CD19 CAR-T therapy induces high CR rates in high-risk, heavily pretreated patients with r/r B-ALL. Real-world experience from postmarketing registry data from the Center for International Blood and Marrow Transplant Research (CIBMTR) demonstrates results similar to those of the preceding clinical trials, with 89% of 96 patients achieving a CR; among patients whose MRD data were available (82% of patients), all were MRD-negative (28). This cohort included children and young adults and showed a 66% leukemia-free survival rate and 89% OS at 6 months. Further, various B-ALL populations with historically poorer outcomes, such as those with Ph+ disease, patients whose disease relapsed after allo-HCT, and even patients with extramedullary disease and central nervous system (CNS) involvement, have responded well to CAR-T therapy. In one study of 12 patients with CNS involvement of ALL before CAR-T therapy, no patient experienced CNS relapse (32). Aside from the unique systemic toxicities associated with CAR-T therapy, the major challenge has been difficulty in obtaining durable responses, especially in the adult B-ALL population. Despite the initial impressive deep responses obtained with this therapy, more than half of adult B-ALL patients experience relapse (22, 23, 26, 33-37) if not bridged to allo-HCT. Moreover, we are currently unable to accurately predict which patients will achieve long-term remission and/or in vivo persistence of CAR-T cells. As the CAR-T and gene therapy fields continue to evolve, we will likely see more effective products aimed at improving the potency, safety, and persistence of CAR-T therapy. TOXICITIES ASSOCIATED WITH CAR-T THERAPY The toxicities associated with CAR-T therapy range broadly, from on-target, off-tumor effects such as B-cell aplasia/hypogammaglobulinemia to immune-mediated effects such as cytokine release syndrome (CRS) and immune effector cell-associated neurotoxicity syndrome (ICANS). CRS is characterized by signs and symptoms ranging from fever to widespread, life-threatening systemic sequelae such as hypotension, hypoxia, and multiorgan dysfunction, caused by an immune-mediated cytokine storm resulting from the expansion of the CAR-T cells (29). The severity of CRS almost always correlates with elevation of cytokines and chemokines such as IL-6, IL-8, IL-10, interferon-γ, and monocyte chemoattractant protein 1 (MCP-1) (29). The incidence of CRS in ALL and NHL patients treated with tisagenlecleucel was 77% (3) and 57% (2), respectively; the incidence of severe CRS in ALL and NHL patients was about 46 and 18%, respectively. In contrast, the incidence of severe CRS with axicabtagene ciloleucel in ALL and NHL patients was 13 and 29%, respectively. ICANS manifests clinically as deterioration of neurological function, starting with word-finding difficulty, stuttering, writing impairment, and decreased concentration, and progressing in more severe cases to a depressed level of consciousness, convulsive or non-convulsive seizures, and at times raised intracranial pressure/cerebral edema (38).
The pathophysiology of ICANS is still not completely understood; the mechanism is believed to be related to endothelial activation and blood-brain barrier disruption. The severity of ICANS correlates with elevated cytokine levels as well as with the rate of CAR-T expansion (39). The incidence of neurotoxicity in ALL and NHL patients treated with tisagenlecleucel is about 40% (3) and 39% (2), respectively; severe neurotoxicity is seen in about 13 and 11% of ALL and NHL patients, respectively. In contrast, the incidence of severe neurotoxicity with axicabtagene ciloleucel in ALL and NHL patients is ∼38 and 28%, respectively. ICANS may occur concurrently with CRS or without associated CRS. Host and tumor factors such as higher tumor burden and elevated baseline inflammatory markers may be associated with more toxicity among CAR-T patients. Some authors have suggested preemptive treatment with tocilizumab, an IL-6 inhibitor, for patients at higher risk of severe CRS due to higher disease burden, which resulted in a trend toward fewer grade 4 CRS events in a cohort treated with this agent (40). Another study, which investigated fractionated dosing of CAR-T cells, showed high CR rates with manageable toxicities in the fractionated-dose cohort (41). Norelli et al. developed a mouse model that recapitulates key features of CRS and found that abrogating the physiological function of the IL-1 cytokine can prevent both CRS and ICANS (42). In their study, the major source of IL-1 and IL-6 during CRS was human monocytes. They were able to prevent CRS by blocking the IL-6 receptor with tocilizumab or by monocyte depletion. However, tocilizumab did not protect mice from delayed lethal neurotoxicity. Instead, the IL-1 receptor antagonist anakinra was able to protect mice from both CRS and neurotoxicity. A controversial area in the management of CRS/ICANS has been whether the use of tocilizumab and steroids can blunt the efficacy of, and responses to, CAR-T therapy. A few retrospective analyses (43,44) have suggested that the use of corticosteroids and tocilizumab does not influence the efficacy and kinetics of CAR-T cell therapy. A few medical centers have adopted a strategy of preemptive use of tocilizumab and/or steroids to mitigate CD19 CAR-T toxicities, and initial preliminary evidence does not show a detrimental effect on CAR-T therapy efficacy or on responses to this therapy (43). However, in a recent large retrospective analysis of 100 patients with r/r large B-cell lymphoma treated with the CAR-T product axicabtagene ciloleucel, early and prolonged use of high-dose corticosteroids was associated with early progression and death (45). More data are needed to help answer this controversial question. The CAR construct can also influence the toxicity profile. CD28 costimulatory domains cause rapid proliferation through the B7 signaling pathway (46), whereas 4-1BB-containing constructs are slower to expand, acting through activation of the nuclear factor-κB (NF-κB) and tumor necrosis factor receptor-associated factor (TRAF) pathways (47). Third-generation constructs, which combine both CD28 and 4-1BB, have shown better in vivo persistence as well as comparable safety profiles in limited phase I trials (48,49); however, further data are needed. The antibody's affinity in the CAR-T design can also influence the toxicity profile. A novel second-generation CD19 CAR-T product (AUTO1), based on an antibody (Kd ∼116 nM) with a faster off-rate but an equivalent on-rate compared with the conventional FMC63 antibody (Kd ∼0.9 nM) used in the FDA-approved CD19 CAR-T products, was designed to mimic kinetics similar to the physiological T-cell activation profile (50). In a clinical trial, the AUTO1 CAR showed high efficacy, with an 83% MRD-negative CR rate and a favorable toxicity profile despite relatively high tumor burdens. Unified staging scales and management guidelines for CRS and ICANS were recently introduced, standardizing patient care (51-54). As the number of patients treated with CAR-T therapy and the number of centers using this therapy have increased, knowledge regarding CAR-T-related toxicities and their management has also vastly improved. In the future, newer CAR constructs with improved safety profiles and a greater understanding of the clinical management of toxicities will lead to even wider delivery and generalizability of CAR-T therapy.
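The consensus grading just mentioned is essentially a small decision rule over a few bedside findings. The sketch below is a deliberately simplified, illustrative paraphrase of ASTCT-style CRS grading driven by fever, hypotension, and hypoxia; it omits the published qualifiers (for example, fever masked by antipyretics or tocilizumab), so the field names and thresholds here are approximations, and the consensus documents (51-54) remain the reference, not this toy function.

```python
# Illustrative, simplified paraphrase of consensus-style CRS grading (51-54).
# Real grading includes additional qualifiers; this toy is not a clinical tool.

from dataclasses import dataclass

@dataclass
class CrsFindings:
    fever: bool          # temperature >= 38 degC attributed to CRS
    hypotension: bool    # hypotension responsive to fluids, no vasopressors
    vasopressors: int    # number of vasopressors required (0 if none)
    oxygen: str          # "none" | "low_flow" | "high_flow" | "positive_pressure"

def crs_grade(f: CrsFindings) -> int:
    """Return 0 (no CRS) through 4, taking the worse of the hypotension and
    hypoxia criteria, following the general shape of the consensus scale."""
    if f.vasopressors >= 2 or f.oxygen == "positive_pressure":
        return 4  # life-threatening: multiple pressors and/or positive pressure
    if f.vasopressors == 1 or f.oxygen == "high_flow":
        return 3  # one vasopressor and/or high-flow oxygen
    if f.fever and (f.hypotension or f.oxygen == "low_flow"):
        return 2  # fever with fluid-responsive hypotension and/or low-flow O2
    if f.fever:
        return 1  # fever alone
    return 0

print(crs_grade(CrsFindings(fever=True, hypotension=False,
                            vasopressors=0, oxygen="low_flow")))  # -> 2
```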
RELAPSE MECHANISMS Relapse after CD19 CAR-T therapy can be broadly categorized into two patterns based on flow cytometry assessment of CD19 expression on B-ALL cells: CD19-negative relapses (3,55,56) and CD19-positive relapses. Table 2 summarizes the relapse patterns in multiple trials. CD19-positive relapses are usually a function of low potency and poor in vivo persistence of the manufactured CAR-T cells. Several factors limit the potency and efficacy of CAR-T cells, including limited long-term persistence (57), the immunosuppressive tumor microenvironment (58), and intrinsic dysfunction associated with T-cell exhaustion (59,60). Various components of CAR vector constructs, such as the costimulatory domains (61-63), the single-chain variable fragment (scFv) (64), and the hinge and transmembrane domains (65,66), can influence the potency and in vivo persistence of CAR-T cells (67). For example, the 4-1BB costimulatory domain ameliorates T-cell exhaustion induced by tonic signaling, leading to better in vivo persistence (47,60,68). Replacement of murine binding domains with human binding domains in the CAR construct led to lower cytokine levels in the blood and decreased neurotoxicity (65). Increasing the length of the hinge domain in a CAR can lead to slow and sustained proliferation without causing neurotoxicity or severe CRS (69). An in-depth review of the mechanistic concepts of CAR vector constructs is beyond the scope of this review; they have been detailed elsewhere (67,70). Another important aspect of CAR-T cell manufacturing that is not well understood is the influence of age-related immune changes, and of patients' previous chemotherapies and other treatments, on CAR-T production and efficiency. Guha et al. showed that CAR-T cells from geriatric donors were functionally impaired compared with CAR-T cells from younger donors (71). Compared with geriatric donors, younger donors had higher transduction efficiencies and improved cell expansion with greater cytolytic capabilities. Davila et al. (72) showed that CAR-T cells produced from aged mice had enhanced cytotoxicity but shorter persistence and a phenotype with less effector memory (73). Also, aging-related T-cell senescence and exhaustion pose significant functional challenges for engineered T-cell therapy (74). This fascinating observation may partly explain why pediatric patients with ALL have better survival outcomes and fewer relapses than the adult/geriatric ALL population.
While most relapses are CD19-positive, some ALL tumors evade CAR-T cell-mediated recognition and clearance through loss of CD19 expression on the tumor cell surface. Sotillo et al. (75) examined the genetic/epigenetic mechanisms of CD19-negative relapses by studying tumor samples from patients with CD19-negative disease. In these patients, the authors found deletions in the CD19 locus and de novo frameshift and missense mutations in exon 2 of CD19. They also discovered lower levels of SRSF3 (a splicing factor whose function is to retain exon 2) in patients with r/r ALL, which allowed exon 2 skipping in tumors, producing a truncated CD19 variant that allowed tumor cells to escape killing by CAR-T cells. According to the authors, the underlying mechanism for relapse in these tumors was the selection of preexisting alternatively spliced variants. Grupp et al. (22) described the phenomenon of "selection by immune pressure" in ALL patients treated with CAR-T cells. They observed the presence of both CD19-negative and CD19-positive ALL cells by flow cytometry before CAR-T therapy; later, at the time of relapse, the dominant clone was predominantly CD19-negative, induced by the selective pressure of the CD19 CAR-T cells. Orlando et al. examined specimens by DNA and RNA sequencing from 12 patients who had CD19-negative relapses after CAR-T therapy (76). CD19 mutations were found throughout exons 2-5 in all 12 relapse cases. At least one unique frameshift insertion or deletion was present in each patient, and in a few cases missense single-nucleotide variants were confirmed as well. In addition, loss of heterozygosity was acquired in 8 of 9 patients at relapse. The allele frequency of mutations measured by DNA sequencing was compared with the percentage of CD19-negative tumor cells measured by flow cytometry in patient samples, showing that most tumor cells in the relapsed samples contained a CD19 loss-of-function mutation. These findings again confirmed the selective immune pressure exerted by CD19 CAR-T cells. The authors also interrogated mutations in other B cell-specific genes, including CD10, CD22, CD20, CD34, CD38, and CD45, and found no mutations associated with relapse. However, contrary to the findings of Sotillo et al. (75), the authors found alternative splicing only at extremely low frequencies (0-2.7%) in both initial screening and relapsed samples. This suggests that alternative splicing is incidental to CD19 mutations and may not be involved in tumor evasion of the CAR-T cells' immune selection pressure. Future studies exploring the mechanisms behind CD19-negative post-CAR-T relapses may help determine whether splicing plays an important role in these relapses. Another mechanism underlying CD19-negative relapses has been ascribed to lineage switch. Jacoby et al. (77), in an ALL mouse model, demonstrated that CAR-T cells create sustained immune pressure against ALL cells with the potential to cause a switch to myeloid lineage markers. Further, they showed that deletion of Pax5 or Ebf1 recapitulated the lineage reprogramming occurring under CD19 CAR immune pressure. Although rare, this lineage switch has also been observed in relapsed human patients (55,78). Other reported mechanisms of relapse include downregulation of the CD22 antigen, leading to loss of response to CD22 CAR-T cells in a patient who had previously lost CD19 expression as well (79); tumor cell-mediated CAR-T trogocytosis (the transfer of the target antigen to the effector T cell) (80); and the formation of CAR-neutralizing antibodies (24,25,64).
ALLO-HCT AFTER CAR-T THERAPY FOR R/R ADULT ALL The role of allo-HCT in the remission period after CAR-T therapy is not well established. CAR-T therapy has immunomodulatory properties, and its associated CRS toxicity, with its damaging effect on the endothelium, can affect the safety profile of allo-HCT performed after CAR-T therapy. Moreover, the lymphodepletion chemotherapy preceding CAR-T infusion may have an additive effect on allo-HCT-related morbidity and mortality. The specific CAR product used and the treatment population may also be associated with variable transplant outcomes. Multiple studies have started to establish the safety and efficacy of allo-HCT after CAR-T therapy in r/r ALL patients. In a study conducted at the Fred Hutchinson Cancer Research Center in Seattle, a total of 32 patients (ALL, n = 19; NHL/CLL, n = 13) underwent allo-HCT after ≥1 CD19-targeted CAR-T infusions with a defined CD4:CD8 ratio (36). The median age at allo-HCT was 46 years (range, 23-74 years). The incidences of grade 3-4 acute graft-versus-host disease (GVHD) and chronic GVHD were 25 and 10%, respectively. One-year treatment-related mortality (TRM) was 21%. The 1-year OS rate was 58%, which is impressive in the r/r ALL setting. An important observation was that a longer time from CAR-T therapy to allo-HCT (≥80 vs. <80 days) was associated with a higher risk of death (hazard ratio [HR] 4.01; P = 0.03) and a trend toward higher non-relapse mortality (HR 4.4; P = 0.19). Overall, the toxicities of allo-HCT in patients who had undergone prior CAR-T therapy were not higher than expected in these high-risk patients. Similarly, in a study from Beijing, China, 52 adult patients with r/r ALL underwent reduced-intensity myeloablative allo-HCT after treatment with either CD19 or CD22 autologous CAR-T cells bearing a 4-1BB costimulatory domain (37). The median time from CAR-T treatment to allo-HSCT was 50 days (range, 34-98 days). The 1-year relapse rate and allo-HCT-related mortality (TRM) were 24.7 and 2.2%, respectively. The incidences of acute and chronic GVHD were comparable to those in previously published studies (81). One-year OS and EFS were impressive, at 87.7 and 73.0%, respectively. In this relatively larger cohort, with a quick bridge to allo-HCT after CAR-T therapy, higher leukemia-free survival was achieved in r/r B-ALL. These studies demonstrate that CAR-T therapy can be used as a quick bridge to allo-HCT in patients with r/r ALL and could potentially augment durable remission rates. In the Beijing study, the reduction in the dose intensity of the conditioning regimen may have decreased TRM and increased OS. The use of reduced-intensity conditioning may be a reasonable strategy in these heavily pretreated r/r ALL patients; however, more definitive studies are needed to address this issue. In pediatric patients with B-ALL, CAR-T therapy has produced more sustained, durable responses with lower rates of relapse than in adults with B-ALL. For example, in the ELIANA trial (3), the overall remission rate was 81%: of 75 patients, 45 (60%) had CR, and another 16 (21%) had CRi. However, among the 61 patients who achieved CR or CRi, 22 (36%) experienced relapse. For the whole cohort, the probability of EFS at 12 months was 50%, and median OS was not reached. Eight patients underwent allo-HCT while in remission, and all eight were alive at last follow-up. In the adult B-ALL population, the durability of response to CAR-T therapy alone has been poor compared with that in the pediatric B-ALL population.
Adult r/r B-ALL carries a poor prognosis, with a median survival of less than a year, and less than half of these patients can receive allo-HCT, the only potentially curative modality in this setting (82,83). Table 3 summarizes the relapse rates and outcomes of r/r ALL after CAR-T therapy in multiple studies and compares the outcomes of adult patients who received post-CAR-T allo-HCT with those of patients who did not. Although CAR-T therapy in adults with r/r ALL has produced high CR rates of ∼70-90%, more than half of these patients experience relapse within 1 year if CAR-T therapy is not followed by allo-HSCT (22, 23, 26, 33-37). In one study (23), among 43 ALL patients who had a CR after infusion of CD19 CAR-T cells, 26 were observed with no further therapy and 17 received allo-HCT. The relapse rate in the allo-HCT group (35% [6/17]) was significantly lower than that in the no-allo-HCT group (65% [17/26]). However, the substantial treatment-related mortality rate of 35% (6/17) in the allo-HCT group offset the benefit of the lower relapse rate. Likewise, owing to the increased TRM in the allo-HCT cohort, among patients who had an MRD-negative CR to CAR-T therapy, no significant difference was observed in EFS or OS between patients who received allo-HCT and those who did not. Contrary to the above study, however, most studies have shown significant improvement in survival outcomes in adult patients with r/r ALL who underwent post-CAR-T allo-HCT, as shown in Table 3. For example, in an NCI study (24), 28 of 51 patients achieved an MRD-negative CR. The relapse rate (9.5%; 2/21) was significantly lower in patients who had undergone allo-HCT after CAR-T therapy than in those who had not (85.7%; 6/7) (P = 0.0001). The median leukemia-free survival (LFS) in the allo-HCT group was not reached, compared with a median LFS of 4.9 months in MRD-negative CR patients who did not proceed to allo-HCT (P = 0.0006). In another study, children and young adults (n = 85) who were treated with CD19 CAR and CD22 CAR-T cells were pooled for analysis (84). Of the 51 patients who attained a CR, 43 were MRD-negative by flow cytometry. Based on competing-risk analysis, the 24-month cumulative incidence of post-allo-HCT relapse among all HCT patients was notably low, at 13.5%. B-cell aplasia (BCA) can be used as a pharmacodynamic measure of CAR-T persistence (1), since patients with a short duration of BCA almost always experience relapse (85). In the phase 1/2 PLAT-02 trial (85), patients with a short duration of BCA (<63 days) after CAR-T infusion had an increased risk of relapse. In this study, patients with shorter BCA duration who had attained CR and did not relapse before day 63 derived significant benefit from consolidative allo-HCT (P = 0.007). Of the 15 patients with shorter BCA duration, six did not pursue HCT, and all experienced relapse. Differences in CAR constructs and variability in patient populations make cross-study comparisons difficult. Overall, all of the above studies highlight the effectiveness of CAR-T therapy in patients with r/r disease and the synergistic role of allo-HCT in the post-CAR-T period. However, prospective trials are needed to define the appropriate role of allo-HCT in the post-CAR-T population. The following is a summary of the important points learned from these trials: • Adult patients with r/r ALL can achieve unprecedented CR rates with CAR-T therapy and can be transitioned to allo-HCT.
Previously, the rate of allo-HCT in r/r ALL was dismal, at 10-30% in some studies (82). • Despite the use of various targets and costimulatory domains in CAR constructs, the durability of remission achieved with CAR-T therapy alone in adult patients with r/r B-ALL has been poor, with relapse rates as high as 65-85% in various studies. CAR-T cells with the 4-1BB costimulatory domain show more durable in vivo persistence than those with the CD28 costimulatory domain (60). • Allo-HCT may be associated with more durable remissions and improved overall survival following CAR-T therapies. With the increasing depth of remission achieved with CAR-T therapy, we hypothesize that de-intensification of allo-HCT conditioning will lead to less TRM and increased OS, especially in patients undergoing a second allo-HCT. • Patients with CNS/leptomeningeal disease have had excellent responses to CAR-T therapy, despite most of these patients showing evidence of CAR-T cells in the cerebrospinal fluid (34,37). CNS toxicities, including seizures, are more frequent in patients with evidence of CNS disease, although most can be managed with appropriate and timely interventions (34). OTHER TARGETS FOR CAR-T THERAPY IN ALL Although the majority of recent clinical trials have focused on CD19 as the target antigen for CAR-T therapy, other B-cell surface markers such as CD20 and CD22 could also be used to target B-ALL. Besides CD19, the most common target for CAR-T therapy in clinical trials is CD22. In a phase 1 study of CD22 CAR-T cells, in which the majority of patients had previously failed CD19 CAR-T therapy, treatment with CD22 CAR-T cells resulted in remission in 73% of patients (11/15) (86). Relapses were associated with diminished CD22 density on leukemic cells, which permitted escape from the CD22 CAR-T cells. In another CD22 CAR-T study, from China, 34 patients who had relapsed after CD19 CAR-T therapy achieved a 70% CR rate (24/34) (34). Eleven patients (all in CR) went on to receive allo-HCT, and eight remained in remission at 4.6-13.3 months after allo-HCT, with a 1-year leukemia-free survival rate of 71.6% for the whole cohort. Surprisingly, CD22 antigen loss or mutation was not associated with relapse. To overcome CD19-negative relapses, many research groups have tried to develop dual-target CARs targeting CD19 and another antigen simultaneously, such as CD22 or CD20 (87-89). Gardner et al. used two lentiviral vector constructs targeting CD19 and CD22 individually to create a CAR product with three different populations of CAR-T cells (anti-CD19, anti-CD22, and anti-CD19-22) (90). In preliminary results, seven patients were treated, and CR was obtained in five (71%), four of whom were MRD-negative. In another phase 1 trial (91), a modified cocktail therapy of CD19 and CD22 CARs was tested in 15 patients with B-ALL. All patients achieved CR or CRi, and 14 were MRD-negative. Among the 15 patients, 11 underwent allo-HSCT, and all remained in remission at the time of manuscript submission. In another phase 1 study (91,92) of a bicistronic CAR-T targeting CD19 and CD22 in r/r B-ALL, all seven evaluable patients achieved a remission. At a median follow-up of 8 months, three relapses had occurred, including one with CD19-negative/CD22-low expression. A recently published clinical trial of CD19/CD22 dual CAR-T cells with a 4-1BB costimulatory domain showed all seven patients in the second dose cohort achieving CR, six of them MRD-negative (93).
Other trials targeting dual antigens are currently under way (30), including preclinical work on a dual CD19- and CD123-targeting CAR (94). CAR-T therapies targeting three antigens (CD19, CD20, and CD22) are also under development for ALL (95). The chondroitin sulfate proteoglycan 4 (CSPG4) membrane surface receptor has been found on mixed lineage leukemia (MLL)-rearranged B-ALL cells, and a CSPG4-specific CAR is an active area of investigation for MLL-rearranged B-ALL (96). FUTURE DIRECTIONS AND CONCLUSIONS Over the past 50 years, we have seen several breakthroughs in the treatment of B-ALL, especially childhood B-ALL; however, CAR-T therapy represents a significant innovation and a major milestone in the treatment of both pediatric and adult B-ALL. Relapses after CAR-T therapy and poor in vivo persistence of CAR-T cells have emerged as major obstacles to widespread success in B-ALL patients who undergo this therapy. Novel strategies are being implemented not only to increase the potency and persistence of CAR-T cells but also to decrease toxicities and make the use of CAR-T therapy safer. New cancer-associated antigens are being explored as potential targets for CAR-T cells. Multi-targeted CARs are also being tested in early-phase studies in an effort to reduce antigen loss as a resistance mechanism (92,95,97). Various components of CAR constructs are being refined to maximize their potential and to synergize with the tumor microenvironment. For example, a higher-affinity T-cell receptor may at times actually impair the selectivity of the cells and reduce overall CD8 T-cell function (98,99). One study showed that a lower-affinity scFv CAR construct proliferated better than higher-affinity scFv CAR constructs and produced MRD-negative remission in 12 of 14 patients treated, five of whom had continuous remissions at a median of 14 months of follow-up (64). CAR constructs with cytokine secretion and immune modulation, termed fourth-generation or armored CARs, are being developed to further augment CAR-T activity (100,101). Some of these novel CAR constructs use paracrine signaling, whereas others activate immune cells or counteract immune rejection through PD-1 blockade and other immunoregulatory mechanisms (100,102,103). Checkpoint inhibitors have also been combined with CAR-T cells to improve efficacy. In one study, ALL patients who lost B-cell aplasia after CAR-T therapy were treated with checkpoint inhibitors, and three of six patients reacquired B-cell aplasia after the treatment (104). Multiple other strategies to enhance CAR-T cell expansion and persistence are being devised, including overexpression of certain genes such as c-Jun (59), CRISPR knockouts, TET2 gene disruption (105), enzyme overexpression to metabolically engineer cells against the tumor microenvironment (106), and expression of the erythropoietin receptor in CAR constructs to enable in vivo expansion with erythropoietin. In another example of expanding and enhancing the persistence of CAR-T cells, patient-derived antigen-presenting cells transduced with a lentiviral vector encoding a truncated CD19 (CD19t) (107) were infused into patients at high risk of short CAR-T cell persistence, such as those with a low antigen tumor burden, rapid CAR-T contraction, or early loss of B-cell aplasia. All 11 patients had an increase in CD19 CAR-T cells, and 5 of 10 had ongoing B-cell aplasia at a median follow-up of 8.8 months.
Systemic toxicities from CAR-T therapy are another major hurdle in the developing CAR-T field, and multiple avenues are being explored to make CARs safer. Ying et al. devised a new CD19 CAR construct with a longer CD8α hinge (86 amino acids) and found that CAR-T cells transduced with this construct produced lower levels of cytokines and proliferated at a slower pace than prototypical CD19 CAR-T cells (69). In a phase 1 trial, 6 of 11 patients achieved CR; notably, no neurological toxicity and no severe CRS (greater than grade 1) occurred in any patient. Similarly, a clinical trial using a CAR containing a fully human scFv targeting CD19 demonstrated lower neurotoxicity rates than a cohort using a murine scFv, owing to lower cytokine secretion by the human scFv-containing cells (65). Investigators at the University of Pennsylvania have tried fractionated infusions of CAR-T cells split over 3 days, which allowed the day 2 and 3 doses to be held for early CRS, and found that high-dose fractionated dosing of CD19 CAR with patient-specific dose modification optimizes safety without compromising efficacy (41). Another hurdle that limits the broad applicability of CAR-T therapy is the long manufacturing process, which is not only costly but also leads to a more exhausted CAR-T phenotype in the final product. A new "FasT" platform, which uses electroporation to transduce the CAR gene and has shortened the CAR-T cell manufacturing process by more than 24 h, has shown superior expansion capability and younger, less exhausted phenotypes in a phase I clinical trial (108). Initial clinical reports have been encouraging: CR was achieved in all 10 treated patients, nine of whom were MRD-negative. Another way to mitigate this obstacle is to develop allogeneic "off the shelf" therapies (109); however, allogeneic cells carry the risk of immune rejection by host T cells, as well as alloreactivity of the CAR-T cells against host tissues via the TCR, causing GVHD (110). Many trials of "off the shelf" products are currently enrolling, including a few trials with gene-edited deletion of the surface TRAC molecule to prevent GVHD (111); however, preliminary results suggest that responses are short-lived. How successful these "off the shelf" therapies will be in the future remains an open question. Despite the multiple limitations described in this paper, the CAR-T therapy field has continued to progress with significant innovation and holds great promise to revolutionize our approach to cancer treatment. AUTHOR CONTRIBUTIONS UG and NS contributed to drafting and writing the manuscript. PK, KM, and ES participated in the critical revision of the article. All authors reviewed and approved the manuscript.
2020-08-27T09:07:43.720Z
2020-08-26T00:00:00.000
{ "year": 2020, "sha1": "2d86a79b9a3ef569f87f12f52fe83579701f7d6d", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2020.01594/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "966f90810e389158a8bdd01938acff66691aa9b8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
268570629
pes2o/s2orc
v3-fos-license
Peripheral Nerve Stimulation for the Treatment of Superior Cluneal Neuralgia: A Cadaver Demonstration of a Novel Technique for Lead Placement Abstract Superior cluneal neuralgia (SCN) is a distinct cause of lower back and/or leg pain related to pathology of the superior cluneal nerve. When assessing a patient with low back pain (LBP), superior cluneal neuralgia is frequently misdiagnosed. The pathophysiology of SCN ranges from myofascial compression brought on by aberrant muscle tone to direct iatrogenic injury or trauma. In this technical report we discuss the anatomy of the superior cluneal nerve, superior cluneal neuralgia, current treatment modalities, and a novel approach to peripheral nerve stimulation (PNS) lead placement for SCN, demonstrated in a cadaver. Introduction The superior cluneal nerve is a relatively small but important sensory nerve in the human body. It is a branch of the dorsal rami of the upper lumbar spinal nerves, typically arising from the dorsal rami of L1, L2, and L3, though there can be some variation in its origin. 1 The distinct cause of low back pain with symptoms affecting the groin and/or legs was first identified as superior cluneal neuralgia (SCN) in 1957. Nicknamed pseudo-sciatica, SCN is a commonly missed diagnosis due to its vastly overlapping symptoms. 2 This nerve plays a crucial role in innervating the skin and soft tissues of the lower back, specifically the upper buttock area and the skin overlying the posterior superior iliac spine (PSIS). 1 Treatment modalities include conservative management, radiofrequency ablation, and decompressive surgery. The use of peripheral nerve stimulation has been described for the treatment of SCN; however, the traditional approach to PNS lead placement in the transverse plane is technically challenging, limiting its effectiveness. 3 Herein, we present a novel, minimally invasive interventional approach using peripheral nerve stimulation of the superior cluneal nerve for SCN, which we demonstrate using a cadaver model. Superior Cluneal Nerve Anatomy The superior cluneal nerve is part of a group of nerves known as the cluneal nerves, which includes the superior, middle, and inferior cluneal nerves. These nerves branch off from the posterior rami of the lumbar spinal nerves. The superior cluneal nerve is further divided into medial, intermediate, and lateral branches. 1 The cluneal nerve typically originates from the dorsal rami of the lumbar spinal nerves L1, L2, and L3, but the exact level of origin can vary between individuals. The neural contribution can include the T11-L5 nerve roots, although multiple anatomical studies have demonstrated considerable variation. 2,3 The spinal nerve roots emerge and pass through the paraspinal and psoas major muscles before arriving at the quadratus lumborum muscle in the posterior region. The spinal nerve roots then approach the iliac crest by passing through the thoracolumbar fascia. The thoracolumbar fascia forms the anterior wall of the osteofibrous tunnel, which the SCN may pass through as it crosses over the posterior iliac crest. 2 Note the superficial nature of the superior cluneal nerves as they cross over the iliac crest (Figure 1).
The relationship between the SCN, thoracolumbar fascia, and posterior iliac crest was examined anatomically in fifteen cadavers. Although the SCN's lateral and intermediate branches penetrated or passed through a fissure in the fascia, the SCN's medial branches appeared to be trapped between the superior rim of the iliac crest and the taut thoracolumbar fascia fibers.3 The primary function of the superior cluneal nerve is to provide sensory innervation to the skin and soft tissues of the lower back, particularly the upper buttock region. It carries sensory information from these areas to the central nervous system, allowing individuals to perceive touch, pressure, temperature, and pain in this region.3 Superior Cluneal Neuralgia The superior cluneal nerve can play a significant role in the experience of lower back pain. When it becomes irritated or compressed, the condition is classified as superior cluneal neuralgia (SCN). SCN can lead to referred pain in the lower back, upper buttock, and posterior iliac crest region. This referred pain can be misinterpreted as lower back pain, even though the source of the problem may be the nerve itself.4 Irritation or compression of the superior cluneal nerve can result from various factors, including mechanical stress, poor posture, or injury to the lower back. Prolonged sitting, repetitive activities, or occupational factors that involve bending and twisting at the waist can contribute to the development of this condition.5 Identifying and addressing issues related to this nerve can be essential for diagnosing and managing certain cases of lower back pain. The clinical presentation of SCN typically manifests as chronic, burning, stabbing, or shooting pain in the lower back and upper buttock region. The pain may radiate down the back of the thigh or around the hip area. It is often described as sharp or electric shock-like and may be associated with numbness, tingling, or hypersensitivity in the affected area.4,5 Diagnosis of superior cluneal neuralgia is primarily clinical, based on a thorough medical history and physical examination. Imaging studies such as X-rays, MRI, or CT scans may be conducted to rule out other potential causes of pain, such as lumbar disc herniation or spinal stenosis. Diagnostic nerve blocks, where a local anesthetic is injected into the superior cluneal nerves, can help confirm the diagnosis by providing temporary relief from pain.5 SCN Treatment An early and accurate diagnosis of SCN is crucial for improving the prognosis. A prompt diagnosis enables healthcare providers to initiate appropriate treatment and reduce the risk of chronic and debilitating pain. Conservative treatment is the first line of management. This may include physical therapy, postural correction, and lifestyle modifications to alleviate pressure on the superior cluneal nerves.5,6 Medications like non-steroidal anti-inflammatory drugs (NSAIDs) and neuropathic pain medications can provide relief from pain and reduce inflammation. These medications can be effective in managing symptoms and may contribute to an improved prognosis for some patients.6 In cases where conservative treatments and medications are ineffective or provide only temporary relief, surgical interventions may be considered. Nerve decompression surgery involves releasing entrapped superior cluneal nerves. However, surgery carries its own set of risks, and it is typically reserved for cases where other treatments have failed.6
A less invasive treatment option involves injection of local anesthetic to reduce the pain and inflammation around the superior cluneal nerve. Diagnostic nerve blocks can confirm the diagnosis and provide temporary pain relief. The response to diagnostic nerve blocks can guide treatment decisions: if a patient experiences significant relief from the blocks, it suggests that more targeted interventions such as radiofrequency ablation may be effective.7 More recently, PNS has been investigated as a treatment option for refractory SCN. Abd-Elsayed demonstrated successful utilization of wireless PNS systems in five patients with various neuralgias, including cluneal neuralgia. This report strengthens the evidence that PNS can be used effectively for patients with LBP due to SCN.8 Peripheral Nerve Stimulation for SCN Peripheral nerve stimulation is a relatively new treatment modality for numerous chronic pain conditions. Several studies have demonstrated the effectiveness of peripheral nerve stimulation in managing acute post-surgical pain in orthopedic procedures such as total knee arthroplasty and anterior cruciate ligament surgery, as well as chronic knee pain.9 A retrospective review of 57 patients concluded that PNS is a safe and effective treatment modality with sustained pain relief for up to 24 months.10 Peripheral nerve stimulation leads direct an electric current to the afferent neurons responsible for sensory input in the painful region. The idea behind this method is that the electric current applied to the peripheral nerve will affect the larger, myelinated afferent nerve fibers, which can disrupt the processing of pain signals in the spinal cord by smaller, non-myelinated afferent fibers.9 Although peripheral nerve stimulation has been demonstrated to be effective in treating several chronic pain conditions, there is minimal documentation trialing PNS as a treatment modality for refractory SCN. Dr. Abd-Elsayed demonstrated successful utilization of a wireless PNS system in five patients with different types of neuralgias, including cluneal neuralgia, in 2020.8 This was followed by two separate single case studies in 2022, one performed by Soteropoulos and the other by Chauhan; both demonstrated a refractory SCN case that was successfully treated with PNS.11,12 Novel Approach to PNS for SCN In this section we demonstrate a novel approach to placing a peripheral nerve stimulator for the superior cluneal nerve using a cadaver model and compare this new method to the current standard. This was a cadaveric study in which placement was performed on a cadaver under fluoroscopic guidance, with images saved to demonstrate all steps for placing a peripheral nerve stimulator introducer and lead. The University of Wisconsin ethics committee approved our cadaveric research. The medial, intermediate, and lateral branches are the three main branches of the SCN, as previously mentioned. An appropriate length of lead contacts is required to achieve stimulation of all three branches, if that is the clinical goal. The lead should ideally follow the superior edge of the bone in order to achieve maximum contact over the superior edge of the iliac crest.
Traditionally this is achieved via a transverse approach, which involves placing the introducer and lead from medial to lateral over the iliac crest in an AP view. The problem with this technique is that it does not account for the angle of the iliac crest, which can differ significantly between patients. The anatomy of the iliac crest is not completely transverse but instead follows an angle. Using the transverse placement approach may therefore cause the lead to cross the iliac crest without lying on top of it along the length of the lead. Our novel technique for placing a superior cluneal peripheral nerve stimulator involves positioning the fluoroscope in a contralateral oblique position and using a coaxial approach to advance the introducer. This allows the operator to align the view with the anterior angle of the iliac crest. The patient is first placed in the prone position. Using fluoroscopy and a marking pen, the patient's anatomic landmarks are identified. Notably, when obtaining the appropriate view with fluoroscopy, a contralateral oblique angle is needed; the iliac crest is not transverse, so a transverse image will be deceiving. Once the patient and fluoroscope are in the correct position, a finder needle is utilized (Figure 2). After placing the introducer above the iliac crest in the contralateral oblique approach, it is advanced over the iliac crest until it reaches the desired location; an AP view can then be taken to confirm final placement. This is followed by removing the introducer stylet and placing the peripheral nerve stimulator lead through the introducer to its desired location (Figures 3-6). Figure 1 Superior, middle, and inferior cluneal nerves. Superior cluneal nerves as they cross over the iliac crest. (Image created by Michael Gyorfi). Figure 4 Peripheral nerve stimulator lead placed via the contralateral oblique technique over the left iliac crest (30-degrees).
2024-03-22T15:25:49.554Z
2024-03-01T00:00:00.000
{ "year": 2024, "sha1": "bc00e729dbcc960eb9e7e2ec3b9084987da8fbd7", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.2147/jpr.s450177", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c8550b0b833a834082763c4cdf9241a0fe095a60", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
252458942
pes2o/s2orc
v3-fos-license
Effect of arbuscular mycorrhiza on germination and initial growth of Cinchona officinalis L. (Rubiaceae) Abstract Cinchona officinalis, known locally as cascarilla or cinchona, is a plant species native to South America. It was used as a source of quinine to combat malaria in the 17th century. The species is threatened by various anthropogenic activities. Further, the propagation of the species depends on seed dispersal and its germination capacity. Therefore, it is necessary to conserve and propagate this species. Because C. officinalis seeds have a low germination capacity, we determined the effect of arbuscular mycorrhizae (AM) on their germination and growth. A randomized design was employed with two treatments, one treated with mycorrhizae (CM) and another without mycorrhizae (SM). For each treatment, three replicates of 100 seeds were used. Germination, growth, and fungal characteristics were evaluated. In germination parameters, the CM treatment showed better performance, but the improvement was statistically insignificant. However, the application of AM significantly improved seedling height (cm), root length (cm), leaf area (cm²), and root number by 53.52, 28.72, 29.73, and 61.66%, respectively. Likewise, mycorrhization intensity (%), mycorrhization frequency (%), and extraradical mycelium length (cm) in the CM treatment were 37.13, 3.44, and 174.97% higher compared to the SM treatment, respectively. Therefore, the use of AM fungi proves to be advantageous in the propagation of C. officinalis, and these results provide a basis for the large-scale and sustainable propagation of this species. Introduction Peru is home to diverse cultures, ecosystems, and flora and fauna (Fajardo et al. 2014), including very important medicinal and food plants (De-la-Cruz et al. 2007). Cinchona is a genus of medicinally valuable plants, such as Cinchona officinalis, Cinchona pubescens, and Cinchona calisaya, the barks of which contain alkaloids, such as quinine, which provided the only treatment against malaria for over 300 years (Cóndor et al. 2009; Canales et al. 2020). Several studies have reported that C. officinalis needs specific conditions to grow and its distribution ranges are limited (Armijos-González and Pérez-Ruiz 2016). In Peru C. officinalis is found in small pockets of Andean forest, particularly in the Cajamarca and Piura regions (Huamán et al. 2019). This species is threatened by urbanization, migratory agriculture, cattle ranching, and widespread selective logging (Arbizu et al. 2021), which has led to the prioritization of its conservation and recovery in Peru (Albán-Castillo et al. 2020). The restoration and preservation of this iconic species require the generation of knowledge related to its propagation (Sánchez-Santillán et al. 2021). The survival of C. officinalis in natural environments depends on seed dispersal; however, the species has a low germination capacity (De-la-Cruz et al. 2007; Valdiviezo et al. 2018), which is affected by factors such as seed quality, humidity, temperature, and microbial activity (Santos et al. 2010). Arbuscular mycorrhizae (AM) are a group of obligate symbionts involved in diverse ecological processes (van der Heijden et al. 2015). AM are symbiotically related to more than 85% of terrestrial plants (Brundrett and Tedersoo 2018; Dey and Ghosh 2022). This relationship provides diverse benefits to the host plant; for example, it helps mitigate environmental stress (Hosseyni Moghaddam et al. 2021), enhances the uptake of P (Grümberg et al.
2015) and other less mobile nutrients (Lehmann and Rillig 2015; Garg and Singh 2018), facilitates low hydraulic gradient water uptake (Augé et al. 2015), provides protection from pathogen attack, slows nitrification and nitrogen leaching, and accelerates organic matter degradation (Veresoglou and Rillig 2012; Leifheit et al. 2014; Powell and Rillig 2018; Veresoglou et al. 2019). In addition, AM provides nutrients that are of great importance for seed germination and subsequent plant establishment (Dearnaley 2007). At the laboratory level, AM has been reported to positively affect seed germination in forest species (Ballina et al. 2017). Under natural conditions, AM hyphal networks may positively affect both germination and seedling establishment (Varga 2015). In the present study, we sought to determine whether AM affects the germination and initial growth of C. officinalis, as there is little information on the use of these biofertilizers in the propagation of this important medicinal species. In the future, AM could be used as a biofertilizer to accelerate the growth of C. officinalis. Furthermore, the results of this study could be used to implement recovery plans and programs for this species. Study area The study was conducted from 20 November 2021 to 20 April 2022 in the La Cascarilla community (5°40′21.12″S, 78°53′55.65″W), district of Jaén, Peru, at 1810 m asl. The annual precipitation is 1730 mm, and the mean annual minimum and maximum temperatures are 13.0 and 20.5 °C, respectively (Fernandez et al. 2021; Fernandez and Huaccha 2022). Plant material We used seeds of C. officinalis that were collected in October 2021 from a tree in the San Luis community, Cajamarca region, Peru (6°22′6.68″S, 79°3′29.50″W) at 2489 m asl. We collected 0.5 kg of mature capsules (brown to dark brown in color) in cloth bags and carried them to the La Cascarilla community, located 100 km from the collection site, where the capsules were stored under shade. Twenty days later, seeds without visible cracks, fungi, and/or nematodes were selected and used for the study. Storage of the seeds was avoided because the seeds of C. officinalis are recalcitrant, which makes them lose their germination capacity very quickly (Caraguay et al. 2016). Microbiological inoculation We used MycoGrow®-Complex (Grow More, Gardena, CA, USA), which contains the AM Glomus intraradices, Glomus mosseae, and Glomus aggregatum, as inoculum. The product data sheet recommended using 6 kg of MycoGrow for each cubic meter of substrate. Thus, considering that the volume of the experimental units was 7260 cm³, we incorporated 43.56 g of MycoGrow into the substrate of each unit before sowing the seeds. Substrate The substrate used for the germination of C. officinalis consisted of 100% sand, which was sterilized in an autoclave at 105 °C for 1 h; this process was repeated for 3 consecutive days. The physicochemical characteristics of the substrate were: sandy texture; pH, 7.7; electrical conductivity (dS m⁻¹), 457.33; organic matter, 1.82%; total nitrogen, 0.8%; and phosphorus, 3.21 ppm. Experimental design and set-up A randomized design with two treatments and three replicates per treatment was used; 100 seeds of C. officinalis were used for each replicate, and 600 seeds were used for the whole trial. A sub-irrigation chamber as described by Fernandez et al. (2021) was used in the study.
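The inoculum dose given above follows directly from the label rate; as a worked check using only the figures reported: 6 kg m⁻³ = 6000 g / 1,000,000 cm³ = 0.006 g cm⁻³, and 0.006 g cm⁻³ × 7260 cm³ = 43.56 g per experimental unit.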
The substrate containing mycorrhizae (CM) was introduced to three experimental units, and the substrate without mycorrhizae (SM) was introduced to another three experimental units (Figure 1). The substrate was moistened to field capacity, which was confirmed by touch. Seed viability was determined by the flotation test, wherein seeds that remained floating on the water were discarded. The entire sub-irrigation chamber was manually irrigated every day with 50 mL of water. The sub-irrigation chamber was covered with Raschel mesh of 85% shade to reduce the direct incidence of solar radiation. Data collection and evaluation Germination parameters Germination percentage was calculated as: Germination (%) = (number of seeds that germinated / number of seeds sown) × 100. The germination rate coefficient (RC) was calculated as: RC = 100 × Σn_i / Σ(n_i t_i). Average germination time (T) was calculated as: T = Σ(n_i t_i) / Σn_i. Germination speed (GS) was calculated as: GS = Σ(n_i / t_i), where n_i is the number of seeds that germinated on day i, t_i is the number of days after sowing, and t is the germination time from sowing to the germination of the last seed. We also calculated several germination parameters, such as germination energy (GE), energy period (EP), germination capacity (GC), and maximum germination value (MGV), according to Czabator (1962) and González et al. (2008). GE is the daily cumulative germination percentage, obtained when germination reaches its maximum; EP is the number of days required to achieve the maximum germination; GC is the percentage of seeds that germinated during the study, along with the healthy seeds that failed to germinate; MGV is the final mean germination, calculated by dividing the cumulative germination percentage at the end of the trial by the number of days of the trial; and MV is the maximum daily average germination recorded during the trial. Growth parameters Root length, seedling height, leaf area, and the number of roots per seedling were measured 120 days after sowing the C. officinalis seeds to evaluate the potential influence of AM on the initial growth of seedlings. The seedlings were photographed against a white background (20 × 12 cm cardboard) with a 2 cm reference line drawn next to the leaves for scale in image processing. To extend the leaves, they were covered with 20 × 12 × 0.3 cm transparent glass. The photographs were taken using a smartphone (Huawei P30 Lite, 24-megapixel MAR-LX3A camera). The images were then processed using ImageJ software, according to the following processes: (1) File > Open > Image > Line Width > Analyze > Set Scale > Line Width (to measure the stem and root length); and (2) File > Open > Line Width > Analyze > Set Scale > Polygon Selections > Analyze > Measure (Baker et al. 1996). Fungal characteristics To determine mycorrhizal colonization (MC), a root staining process was performed according to the methods of Phillips and Hayman (1970) with minor modifications; the roots were treated with vinegar and hydrogen peroxide, subjected to a water bath, and stained with trypan blue. After staining, they were cut into thirty 1 cm fragments. Each fragment was placed on a slide and then observed under a microscope at 100× objective magnification. The percentage of internal hyphal colonization, or mycorrhizal frequency (MF), was determined using the formula of Sieverding et al. (1991): MF = (number of roots colonized / number of roots observed) × 100.
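As a rough illustration (hypothetical helper code, not from the paper; it assumes the standard forms of the formulas given above), the germination metrics and mycorrhizal frequency can be computed from daily counts as follows:

```python
def germination_metrics(days, counts, n_sown):
    """days[i] is the day after sowing; counts[i] is the number of seeds
    that germinated on that day; n_sown is the number of seeds sown."""
    n = sum(counts)                                    # total germinated seeds
    weighted = sum(c * d for c, d in zip(counts, days))
    pct = 100.0 * n / n_sown                           # germination (%)
    t_mean = weighted / n                              # average germination time T (days)
    rc = 100.0 * n / weighted                          # germination rate coefficient RC
    gs = sum(c / d for c, d in zip(counts, days))      # germination speed GS (seeds/day)
    return pct, t_mean, rc, gs

def mycorrhizal_frequency(colonized, observed):
    """MF (%) following Sieverding et al. (1991)."""
    return 100.0 * colonized / observed

# Example with made-up counts for one replicate of 100 seeds:
pct, T, RC, GS = germination_metrics([19, 22, 25, 28, 31], [5, 12, 15, 8, 3], 100)
```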
The mycorrhizal intensity (IM) was determined using the formula of Trouvelot et al. (1986): IM (%) = (95n5 + 70n4 + 30n3 + 5n2 + n1) / N, where N is the total number of root fragments that were evaluated, and n5 to n1 are the numbers of fragments classified into colonization classes 5 to 1. To estimate the length of extraradical mycelium (LMER), one gram of soil was weighed and stained according to the methods of Carballar (2010) with some modifications. The stained soil was placed in Petri dishes with 0.5 cm² quadrats at the base. The hyphae were observed at the line intersections under a stereoscope at 3× and 4.5× magnification (Carballar 2010). The LMER was calculated using the equation given by Newman (1966): R = (π N A) / (2 H), where R is mycelium length per unit of soil, A is the area of the plate, N is the number of intersections, and H is the total length of the lines of the plate (cm). Data analysis An independent sample t-test was used to compare the means of the replicates of each treatment (p = 0.05) after confirmation of the normality of the data using the Shapiro-Wilk test. All statistical analyses were performed using StatGraphics Centurion XVI (StatPoint Technologies Inc., Warrenton, VA, USA).
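A minimal sketch of this analysis (hypothetical replicate values; scipy assumed available) might look like:

```python
from scipy import stats

# Hypothetical replicate means for one variable, three replicates per treatment.
cm = [62.0, 58.5, 60.1]   # with mycorrhizae (CM)
sm = [55.2, 57.0, 54.8]   # without mycorrhizae (SM)

# Confirm normality of each group before the t-test, as in the paper.
for label, grp in (("CM", cm), ("SM", sm)):
    w, p = stats.shapiro(grp)
    print(f"{label}: Shapiro-Wilk W={w:.3f}, p={p:.3f}")

# Independent-samples t-test at alpha = 0.05.
t, p = stats.ttest_ind(cm, sm)
print(f"t={t:.3f}, p={p:.3f}")
```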
Results In both treatment groups, C. officinalis seed germination was characterized by a sigmoid curve, with the highest percentage of germination between days 19 and 31 (Figure 2(A)). Notably, the germination curve of the CM treatment group was slightly above that of the SM treatment group. Although a higher cumulative germination percentage was observed in the CM treatment, there were no significant differences between the two groups (Figure 2(B)). Figure 3 and Table 1 show the results of the germination parameters evaluated in the study. The CM treatment enhanced the performance of all evaluated parameters; however, no significant differences were found between the treatments. The analysis of growth parameters after 120 days of sowing the C. officinalis seeds showed that plant height in the CM treatment was 53.7% higher than that in the SM treatment (Figure 4(A)). The root length was 29.1% higher in the CM treatment than in the SM treatment (Figure 4(B)). The leaf area in the CM treatment was 28.7% higher than that in the SM treatment (Figure 4(C)). The number of roots in the CM treatment was 28.7% higher than that in the SM treatment (Figure 4(D)). For all the growth parameters evaluated, significant differences were observed between the CM and SM treatments, demonstrating that AM positively influenced the initial growth of seedlings during germination (Table 2). Mycorrhizal frequency was significantly higher (33.8%) in the CM treatment than in the SM treatment (24.7%) (Figure 5(A)). The mycorrhizal intensity was 100 and 96.7% in the CM and SM treatment groups, respectively; although the CM treatment group showed a higher intensity, the difference was insignificant (Figure 5(B)). The length of the extraradical mycelium was significantly higher in the CM treatment (115.7 cm) than in the SM treatment (42.1 cm) (Table 3). Discussion In tree species, there are studies that show the positive effects of AM on cumulative germination due to the protective-coating action of AM on seeds, preventing infection by pathogens (Dalling et al. 2011; Huante et al. 2012; Ballina et al. 2017). The same effects occur in various species of orchids that depend on mycorrhizae to supply them with nutrients and water owing to the absence of endosperm in their seeds (Smith and Read 2008; Yuan et al. 2016; Huang et al. 2018; Shao et al. 2020; Figura et al. 2021). However, there are reports that AM can also suppress germination owing to the exudates they release (Louarn et al. 2012; Wu et al. 2014; Varga 2015; Maighal et al. 2016; Ballina et al. 2017). In this study, better germination parameter results were observed when AM fungi were inoculated. However, these results were not significant, presumably because of two factors: (1) the high mycorrhizal specificity of some plant species (Zi et al. 2014; Durán-López et al. 2019; Meng et al. 2019; Fuji et al. 2020; Shao et al. 2020); or (2) that C. officinalis is not AM-dependent to initiate or increase its germination. Therefore, we propose that future research should identify AM species specific to C. officinalis, or confirm the results obtained in this study. However, the AM in this study showed a significant positive influence on the initial growth of C. officinalis. This may be attributed to the colonization of the seed radicle by mycorrhizal hyphae, which provide protection and nutrition to seedlings and accelerate their growth (Pankaj et al. 2021). In addition, AM significantly improved the length and number of roots in C. officinalis seedlings, as observed in different studies (Dovana et al. 2015; Khalediyan et al. 2021; Hagh-Doust et al. 2022). Some authors claim that mycorrhizae can increase root growth by up to 200% (Falcón et al. 2021). This further improves plant growth through increased access to soil nutrients (Weisany et al. 2015; Weisany et al. 2016; Khalediyan et al. 2021), as AM can increase nutrient uptake by 7-250 times depending on the crop (Naranjo et al. 2011). Finally, the positive effects of AM on leaf area are attributed to the percentage of AM colonization on seedling roots (Yadav and Aggarwal 2015; Palacios et al. 2021) and the LMER, which was higher in the mycorrhizal treatment. Colonization by AM can lead to increased water and nutrient uptake by increasing the absorptive surface area through the mycelium extending into the soil, allowing the plant to have access to more soil; this can lead to enhanced photosynthesis, improved plant growth, and thus an increase in leaf area (Smith et al. 2003; Huang et al. 2018; Khalediyan et al. 2021). Conclusion The results of this study suggest that AM are beneficial biofertilizers for the propagation of C. officinalis as they significantly improve seedling growth; therefore, AM could be used for the sustainable mass propagation of this important plant species. In addition, the results showed that G. intraradices, G. mosseae, and G. aggregatum can have different effects on the germination process (neutral effect) and growth (positive effect) of C. officinalis. Because our study did not evaluate the effects of a specific mycorrhiza on germination or growth, it was not possible to determine the individual effects of the mycorrhizal species under study. Thus, we suggest that further research is necessary to determine whether C. officinalis has a tendency to associate with particular or diverse mycorrhizal species. From there, it may be possible to identify a mycorrhiza that promotes the germination of C. officinalis. In addition, it would be interesting to evaluate the effects of AM for a period longer than 120 days, to analyze the persistence or cessation of the effects observed in this study.
2022-09-23T15:08:28.766Z
2022-09-21T00:00:00.000
{ "year": 2022, "sha1": "f3b6e86108bcf2bdc05fddfeac84b38eae2f663b", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/21580103.2022.2124318?needAccess=true", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "01f9aa4fee27ea762b8c70881a0f9c58dd2c53ca", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [] }
84951841
pes2o/s2orc
v3-fos-license
Morphoanatomy of Nothofagus alessandrii seeds and its use in the variability of populations Nothofagus alessandrii is an endangered species that is endemic to the Mediterranean area of Chile. There is no information on the anatomical structure of its seeds and there are few studies on the morphometric and germination differences between its populations. Therefore, the purpose of this study was to describe the morpho-anatomical structure of seeds of N. alessandrii in order to compare the morphology and germination of four geographically distinct populations. This was done by selecting seeds of four different origins covering the entire latitudinal distribution of the species and measuring their size, shape, dry weight and germination in order to perform a comparative analysis. Results showed that the anatomical structure of N. alessandrii seeds is similar to that of other species of the Fagaceae family such as Fagus sylvatica. No differences were found between seeds from the four different origins in morphological characteristics or germinative power. Thus, it was not possible to demonstrate the existence of clinal variation, although the southernmost population showed differences in length and weight, suggesting that it may belong to a different ecotype. INTRODUCTION Nothofagus alessandrii Espinosa (ruil) is a Chilean deciduous endemic species whose distribution is very restricted and fragmented along a strip of less than 100 km in the Cordillera de la Costa mountain range of the Maule Region between 35º and 36º S (Olivares et al. 2005). The size of the remaining forest area where N. alessandrii occurs is 314 ha (Santelices et al. 2012); the species is only found on shaded slopes in mountainous areas at elevations from 150 to 500 m and forms mainly pure stands (Olivares et al. 2005, Rodríguez & Quezada 2005, San Martín et al. 2006). The climate of the area of distribution of the species is a combination of Mediterranean climate with oceanic influences, with 5 to 6 arid months and 1 to 2 semi-arid months. The average temperature is 13.7ºC; maximum temperatures of 24.8ºC are reached in January and minimum temperatures of 5.9ºC are reached in July. Annual average rainfall is 830.7 mm, with a peak of 186.4 mm in June (Di Castri & Hajek 1976, San Martín & Troncoso 1993). The soils where N. alessandrii grows usually have a moderately fine textured surface with good drainage. They are also characterized by their low fertility due to low phosphorus content and by being very weathered and thin, with very low water retention and susceptibility to erosion because of their old age and climatic influence (San Martín et al. 2006). Although N. alessandrii forests represent a very distinctive ecosystem and the species is classified as Critically Endangered (UICN 2003), the surface covered by these forests continues to decline, mainly due to anthropogenic causes. Available information in the literature still focuses on aspects related to the ecology of the species and, to a lesser extent, on its regeneration (San Martín et al. 2006). Although histological studies have been globally reported for seeds of the Fagaceae family (Bonner & Leak 2008), none of these studies have addressed the anatomy of N. alessandrii seeds. Moreover, in contrast with other Nothofagus, no clinal variation has been found in the morphological characteristics of the reproductive and germination success of material of different origins (Santelices et al. 2009a).
The fruit is composed of several nuts, which are arranged in groups of 3, 5, or 7 within the dome. The central nut is dimerous, flat, and flanked by two trimerous nuts. The four remaining nuts are dimerous or flat, smaller than the other ones and embedded in the inner supporting base of the valve; sometimes these nuts are not fully developed (Olivares et al. 2005, San Martín et al. 2006). Seeds start to mature in mid-January and dispersal takes place mainly in February (Olivares et al. 2005). The nuts are yellow-green in color and shortly winged (Olivares et al. 2005, San Martín et al. 2006). Each nut contains a single seed, filling all the space, and is surrounded by a dry and hard pericarp. Because of the size of the wings, dispersion is mainly by gravity (Olivares et al. 2005). The cotyledons are 15-20 mm long and 7-10 mm wide and are arranged in opposite positions, expanding outward (Olivares et al. 2005). The seeds of N. alessandrii show some kind of physiological latency that has not been demonstrated yet (Olivares et al. 2005), despite the studies conducted on germination (Hechenleitner et al. 2005, Santelices et al. 2009b). These studies determined that cold stratification and immersion in gibberellic acid are suitable pre-germination treatments to obtain the best germination capacity. Several studies have suggested that Nothofagus populations in Chile have high genetic variability (Donoso et al. 2006a, 2006b, 2006c). Smaller seed size and lower seed weight have been observed in southern populations. By contrast, germination capacity has been found to be higher in the most northern populations. This suggests the hypothesis that some Nothofagus species in Chile have a high degree of variability among populations. In the case of N. alessandrii, San Martín et al. (2006) mentioned the lack of studies showing genetic variations among different populations. However, Santelices et al. (2009a) obtained results that differed from other studies on Nothofagus species. They did not observe a significant clinal variation associated with five N. alessandrii origins. They noted that material from the southernmost origin tended to differ from the other ones, suggesting it may correspond to a different ecotype. The selection of seed origin is very important in afforestation programs. In fact, the choice of the right origin is one of the main elements that determine the success and productivity of reforestation programs (Jara 1995). The scarcity of studies on N. alessandrii and the possible existence of different ecotypes justify the present study. The aim of our study was to explore the anatomical structure of seeds of N. alessandrii and the morphometric and germination differences between four geographically distinct populations of the species. In this latter aspect we intended to complement the work done by Santelices et al. (2009a). In other words, the purpose of this study was to perform a morpho-anatomical study of N. alessandrii seeds and to compare the morphometric characteristics and germination capacity of four populations. PLANT MATERIAL AND STUDY AREA The seeds of N. alessandrii used in this study were collected from four different populations in February 2009 in the Maule Region of Chile (Table I). After being harvested and cleaned, they were sent to the seed laboratory of the School of Forestry Sciences of the University of Chile (April 2009). The seeds were cold-stored at 5°C throughout the entire process in rigid polyethylene bottles.
MORPHO-PHYSIOLOGICAL CHARACTERISTICS OF SEEDS Based on the methodology proposed by Santelices et al. (2009a), the following morphological variables were measured in 5 repetitions of 15 seeds: length and width; in double-winged seeds, thickness was also measured, using a SOYODA® caliper (error ± 0.05 mm). Seed tests were performed according to ISTA standards (ISTA 2006) and particularly involved determining the weight and moisture content of one thousand seeds. Additionally, viability was estimated using a cut test. For each batch of seeds, 4 replications of 100 seeds each were weighed on a four-digit precision balance (Denver Instrument Company®, AA-200). The determination of moisture content was performed on the samples selected for the viability test, considering each of the repetitions of 15 seeds of each lot. Seeds were first weighed to determine their wet weight (WW) and then placed in a forced air oven (WTB Binder) at 105 °C for 17 h, until their weight was constant. Once the drying period was finished, seed dry weight (DW) was determined by weighing them again on the same balance. Seed moisture content (MC) was calculated using the following expression: MC (%) = ((WW - DW) / WW) × 100. A germination test was conducted for each origin based on a cold stratification treatment at 5°C for 30 days plus a control treatment. Each test included three replications of 25 seeds each (ISTA 2006). In both treatments, the seeds were soaked prior to the test for 24 h at room temperature; seeds that floated were rejected. Stratification was performed by mixing the seeds with wet sand and distilled water previously sterilized at 150°C for 2 h. Subsequently, the seeds were packed in labeled plastic bags and left for 30 days at 5ºC. After the pre-germination treatment, the seeds were placed in Petri dishes using filter paper as substrate. The capsules were covered with black polythene to prevent light exposure and excessive moisture loss. They were kept at 20°C during the 31 days of the test in a growth chamber (TRILAB®); seeds were considered to have germinated when the radicle emerged (> 1 mm in length). The effectiveness of the treatments applied to the seeds was determined by measuring their germinating power, that is, the accumulated percentage of germinated seeds at the end of the test. The analysis of variance and mean comparisons were conducted using the General Linear Model procedure of the SPSS statistical program for Windows® version 15.0. The Bliss angular transformation (y' = arcsin √p) was applied before performing the analysis in order to normalize the variables expressed in percentages. Average values showing significant differences were compared with the Tukey test at the 5% level.
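A minimal sketch of this analysis (hypothetical replicate values; numpy, scipy, and statsmodels assumed available) might look like:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical germination percentages, three replicates per origin.
data = {
    "Lo Ramirez": [69.3, 66.0, 70.1],
    "Empedrado":  [60.0, 58.2, 61.5],
    "Los Ruiles": [34.7, 36.1, 33.0],
}

# Bliss angular transformation: y' = arcsin(sqrt(p)), with p as a proportion.
trans = {k: np.arcsin(np.sqrt(np.array(v) / 100.0)) for k, v in data.items()}

f, p = stats.f_oneway(*trans.values())   # one-way ANOVA on transformed values
print(f"F = {f:.2f}, p = {p:.4f}")

# Tukey comparison of origin means at the 5% level.
values = np.concatenate(list(trans.values()))
groups = np.repeat(list(trans.keys()), [len(v) for v in trans.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```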
DESCRIPTION OF THE TISSUE STRUCTURE OF THE SEED The seeds used in the experiment were previously prepared by removing the seed coat (Figs. 1b, 1c). A sample containing non-germinated seeds and sprouts was selected and subjected to small transverse incisions to facilitate infiltration of the fixative solution. The cuts were made in one of the lateral sections, always cutting off areas that were not to be used for the histological cuts. Subsequently, we followed the methodology proposed by Ruzin (1999) for fixing, processing, and obtaining the histological cuts. The cuts were made with a rotary microtome (E. Leitz Wetzlar®, Germany) that produced sections 11 μm thick. During the assembly of the samples on the microscope slides, the slides were first covered with a drop of Mayer glue (egg white, glycerin and sodium silicate). Next, with the slide partly submerged in the hot water dish and with the assistance of a punch, a strip of paraffin was placed on the slides, which were placed in the right position before bringing them completely out of the water. The slide, with the paraffin strip on it, was dried at room temperature on wooden trays. The cuts were stained following the methodology described by Jensen (1962), using a combination of tannic acid, ferric chloride (FeCl3), safranin and fast green. Once the staining was completed, we proceeded to final assembly. We added a few drops of Floo-Tex on the samples and then the cover slips were placed on the stained preparations. The cuts were observed with a confocal microscope (TCS-SP2-Spectral AOBS, Leica®) owned by the research support service (Servicio Central de Apoyo a la Investigación, SCAI) of the University of Cordoba (Spain). The images were captured on a computer thanks to a digital camera attached to the microscope and analyzed following the procedure described by Bonner & Leak (2008) for seeds of the Fagaceae family (Fig. 1a). SEED MORPHOLOGY The fruit is composed of 5 to 7 nut-like dry and indehiscent seeds. Mature seeds are light brown, oblong in shape, elongated, more or less angular, and frequently asymmetric in their main axis (Fig. 1). Seeds are clearly compressed, often with two lateral wings (Fig. 1). The surface pattern (primary ornamentation) of the seed is slightly striate, with slightly prominent outer walls that are curved or sinuate as a result of drying. Seed measures were related to seed type: trimerous seeds were 6.1 to 6.5 mm long by 4.5 to 5.0 mm wide, and dimerous seeds were 6.4 to 6.7 mm long by 4.2 to 4.4 mm wide (Table II). Significant differences were obtained in all morphological parameters both for dimerous and trimerous N. alessandrii seeds according to the one-way ANOVA (P < 0.01) (Table II). It is worth noting the origin of R.N.
Los Ruiles, which showed the highest values in the size of dimerous seeds, and the Empedrado origin, which showed the highest values in the size of trimerous seeds. The general structure of the seed is shown in Figure 1d in a section parallel to the longitudinal plane, where the main elements forming the nucule can be seen: a thin and hardened pericarp and an embryo occupying practically the totality of the seed. Seeds of N. alessandrii have a pericarp derived from the ovary surrounding and protecting it. At maturity, the pericarp has a smooth, light brown surface and is composed of sclerified cells, particularly fibers arranged perpendicular to the cut on the outside and parallel to it on the inside. Cells have thick walls, a narrow lumen and simple pits (Fig. 1e). The pericarp mainly surrounds the embryo because N. alessandrii seeds are not endospermic at maturity, as a result of the complete consumption of this tissue by the embryo during the growing and developing period and the conversion of the cotyledons into storage organs. There is a seed coat, or very thin testa, between the embryo and the pericarp, and it is composed of several layers (Fig. 1e). Although the seed coat is slightly split up, the cells are clearly distinct, with thin dark brown walls that appear to be composed of flattened cells impregnated with tannins. The testa is derived from the integuments of the ovule. There is also a thin-walled parenchymatous tissue attached to the testa (Fig. 1d). Seeds have a terminal hilum, and seed disposition is symmetric from the hilum and rarely laterally displaced. The hilum is rounded, the micropyle is obscure and a vascular bundle traverses the raphe, extending to the chalaza. The whitish embryo is composed of two thin plano-convex and wrinkled cotyledons and the epicotyl and hypocotyl-radicle axis, where it is not possible to clearly distinguish the transition between the hypocotyl and the radicle (Fig. 1c). As N. alessandrii seeds are dicotyledonous, the embryo is formed by an embryonic axis and the first two leaf structures. The embryo axis is located between the cotyledons in the apical zone of the seed and is formed by the hypocotyl-radicle axis and the epicotyl. The cotyledons are thin and very wrinkled, occupying almost the entire nut. According to their function as storage organs, they contain ergastic substances that are used by the embryo during its germination and later by the plant during its early developmental stages, until functional leaves can carry out photosynthesis. They have relatively homogeneous parenchyma inside that functions as storage and consists of several layers of thin-walled polyhedral cells filled with various ergastic substances, notably starch grains (Fig. 1f). The starch grains, made up of insoluble carbohydrates (complex polysaccharides), are the most common storage materials found. Starch is commonly accumulated in the form of different types of starch grains with a shiny refractive point, which is the starting area of active growth. Starch grains have simple concentric shapes; some of them are spherical and others are oval-shaped. The cotyledons have a vascular system that mobilizes the reserve substances into the functional tissues (Figs. 1g, 1h).
The embryo axis is located in the apex zone inside the seed, on the opposite side from where the dome is inserted. Its entire structure is shown in Figure 1d, where the hypocotyl-radicle axis can be distinguished. At its basal end, the embryo axis has an incipient radicle formed mostly by meristematic tissue and covered by the root cap, which is a conical coverage surrounding the root apex that is not visible to the naked eye and consists of soft undifferentiated tissue formed by live parenchyma cells. The root cap covers the meristematic tissue, protecting it and providing mechanical protection to the meristematic cells as the root grows through the soil. The hypocotyl is the portion of the embryonic axis below the insertion point of the cotyledons (the cotyledonary node) and above the radicle. Due to the nature of the embryo of N. alessandrii, it is not easy to distinguish between the structures of the radicle and the hypocotyl. This is why the group is called the hypocotyl-radicle axis. The epicotyl, a promeristem that also consists of meristematic tissue, is found at the apical end of the embryonic axis, above the cotyledonary node; this is the place from which the stem of the plant will develop. SEED ANALYSIS Table III shows the analysis of seeds from the four different origins. Laboratory germination began after 7 days in the incubator; T50 was completed within 11-14 days and germination essentially finished within 21-24 days (data not included). The moisture content of the seeds at maturity varied between 7.0 and 9.0% and the weight of 1000 seeds ranged from 9.0 to 10.8 g. Although significant differences were found for both variables, moisture content was found to be a more variable parameter among origins. Control seeds of some origins almost failed to germinate (RN Los Ruiles, 1.3%) whereas those of other origins had a high germination percentage (Lo Ramirez, 21.3%). Chilling of the intact fruits increased the germination percentage, although the level of germination at the end of the experiment was variable among origins (RN Los Ruiles 34.7% to Lo Ramirez 69.3%); seeds from the southernmost origin showed the lowest germination values. The one-way ANOVA of germination percentage showed a significant effect of the stratification treatment (P < 0.001). Seed viability was not consistent with germination, as seeds from the Empedrado origin showed the highest value (60%), with significant differences among origins as well. DISCUSSION In this study we described the morphology and anatomy of N. alessandrii seeds and compared the morphometric and germination variation of four geographically distinct populations. The anatomical characteristics of Nothofagus resemble those described for similar genera such as Fagus or Quercus (Ledgard & Cath 1973, León-Lobos & Ellis 2005, Cinar-Yilmaz & Akkemik 2007, Bonner & Leak 2008). Morphological and anatomical studies of the Fagaceae family have globally described the anatomy of its seeds (Kirkbride et al. 2006). However, the internal structure of N. alessandrii seeds has not been studied so far. Although the fruits and morphological structure of these seeds have been superficially described (Olivares et al. 2005; San Martín et al. 2006), no histological studies have described their anatomy in detail. N.
alessandrii seeds have a general structure that consists of a hard sclerified pericarp, a very thin seed coat and an embryo with two wrinkled cotyledons - occupying almost the entire volume of the seed - and an embryonic axis at the apex zone of the seed, where the epicotyl and hypocotyl-radicle axis can be differentiated (Kirkbride et al. 2006). The pericarp is smooth, light brown and composed of sclerified cells and fibers arranged in different layers, which give better resistance to this protective layer of the seed. Between the pericarp and embryo there is a thin seed coat or testa; its color suggests that it is formed by flattened cells impregnated with tannins, although it was not possible to confirm this. It would therefore be advisable to verify this in future research with histochemical studies. There is also a thin-walled parenchymatous tissue attached to the testa. This histochemistry may be related to the ecophysiological and protective response of N. alessandrii seeds. We noted the absence of Perzelia sp. damage on N. alessandrii seeds, in contrast to those of other Nothofagus species (Santelices, R., personal observation), although this cannot be derived from the present study. It is possible that they are remains of nucellus or endosperm, though it is not possible to affirm either; it would therefore be convenient in the future to make further cuts at the various stages of embryo development. The last part of the seed - the embryo - consists of the cotyledons and the embryonic axis. Seeds of N. alessandrii have two thin and much wrinkled cotyledons, which occupy almost the entire nut. The cotyledons consist of storage parenchyma formed by polyhedric cells with thin walls and ergastic substances inside. The starch grains are the most common reserve materials found in N. alessandrii seeds. Starch is commonly accumulated in the form of different types of starch grains that are stored in amyloplasts (leucoplasts). Layers of starchy materials are deposited around the hilum, representing lines of stratification (Stern 1994). The hilum also has a vascular system to mobilize the storage substances towards the points with major meristematic activity. The embryonic axis is located in the apical zone of the inner part of the seed and consists of the epicotyl and hypocotyl-radicle axis, whose name is due to the inability to distinguish between the hypocotyl and the radicle. The morphometric parameters studied in seeds of the four origins revealed that, although there were significant differences in seed size (length, width and thickness) among origins, none of the parameters could be related to latitudinal variation. This is in contrast with other species of Nothofagus (Donoso et al. 2006a, 2006b, 2006c). Considering the results of the morphometric analysis, we can conclude that seed size is relatively homogeneous, as noted in former studies (Olivares et al. 2005, Santelices et al. 2009a). As stated by Santelices et al. (2009a), it was not possible to associate any increase or decrease in seed size with the differences in latitude of the different origins of the seeds, in contrast with other species of Nothofagus (Donoso et al. 2006a, 2006b, 2006c). This may be due to the limited and fragmented distribution of the species (Olivares et al. 2005). The small range of N.
alessandrii is exceptional compared to the rest of the Chilean Nothofagus, which have larger ranges throughout the country, leading to a reduced but significant latitudinal variation in Nothofagus nervosa (Phil.) Dim. et Mil., Nothofagus dombeyi (Mirb.) Oerst. and Nothofagus obliqua (Mirb.) Oerst. (Donoso et al. 2006a, 2006b, 2006c). Still, it is worth mentioning that the southernmost origin (R.N. Los Ruiles) showed significant differences from the other populations, with a larger size, which was consistent with that observed by Santelices et al. (2009a). Although significant differences were found between origins in seed germination and viability, as in the morphometric results it was not possible to establish a relationship with latitudinal variation. It should be noted that the highest percentage of viable seeds was from the Empedrado origin (61.3%). There may be a relationship between seed diameter and seed viability and germination capacity. The highest germination power and seed size values were found in seeds from the Empedrado origin. The range of results obtained for the seed batches expressed through seed weight (9.0 and 10.8 g per 1000 seeds) is consistent with the results obtained by several authors (Acuña 2001, Donoso and Cabello 1978). Seeds from the southernmost origin (R.N. Los Ruiles) had the lowest weight, which can be explained by the high moisture content of seeds from this batch. Although seed weight values showed significant differences between the origins studied, as in the morphometric results it was not possible to observe any relationship between weight and latitudinal variation of origins (Santelices et al. 2009a). Still, this result is not so surprising since Olivares et al. (2005) indicated that seeds of N. alessandrii may present very low weights, reaching values of 76,209 seeds kg⁻¹. The low water content and weight of the seeds indicate that N. alessandrii seeds are very close to the area of orthodox seeds in the scheme proposed by Hong & Ellis (1996). The germination test of the present study, which analyzed parameters of the germinative vigor of the seed in two different treatments, was used to determine the positive effect of cold stratification on germination. It can be concluded that cold stratification of N. alessandrii seeds increases their germinative vigor. The results confirm something that has been well-documented for the genus: a large number of Nothofagus species have seeds with internal dormancy (Donoso & Cabello 1978, Donoso et al. 2006a, 2006b, 2006c), which obviously limits germination success. Dormancy phenomena justify applying cold stratification to seeds, which removes the concentration of inhibitors and thus promotes germination (Baskin and Baskin 1998). Differences in germination power among the different origins were also significant; yet, no relationship was found with latitude, unlike other species of Nothofagus such as N. nervosa (Donoso et al. 2006c), N. dombeyi (Donoso et al. 2006b) or N. obliqua (Donoso et al. 2006a). This lack of correlation between latitudinal range and the germination rate of N. alessandrii seeds is consistent with the results obtained by Santelices et al. (2009a). The best germination rates were obtained with the cold stratification treatment at 5°C for 30 days. The origin that obtained the best results was Lo Ramírez, reaching 69.3% of germinated seeds. This origin also differed in the control treatment. The origin that obtained the worst results was R.N.
Los Ruiles, reaching only 34.7% of germinated seeds; it also differed from the others in the control treatment, but with lower values. These results are comparable to those obtained by Santelices et al. (2009a), who also found the worst results with the Cauquenes origin (R.N. Los Ruiles), and very good results with the Lo Ramírez origin. The morphological variables studied did not show any significant relationships between seed size and the latitudinal position of the populations where the seeds came from, unlike other Nothofagus species. Still, one of the origins, R.N. Los Ruiles (in the southeast), differed significantly from the others, with larger seeds. No significant relationships were found between latitudinal variation and viability or between latitude and seed weight. Seeds of N. alessandrii responded very well to cold stratification (30 days at a temperature of 5ºC), which significantly improved seed germination. FIGURE 1: Overview of N. alessandrii seed showing details of the different structures. a: longitudinal section of a Fagus grandiflora seed (Source: Bonner & Leak, 2008); b: seed stripped of the different external layers; c: fresh longitudinal cut of the embryo; d: longitudinal section of a mature seed, showing pericarp (per), seed coat (sc), cotyledons (cot), epicotyl (epi) and hypocotyl-radicle axis (hr); e: cross section of testa showing the endosperm (end), the seed coat (sc) covering it and the different layers constituting the pericarp (per); f: detail of the ergastic substances (dark spots) present in the cells forming the cotyledons; g: cross section of vascular bundle within the cotyledons; h: longitudinal section of the vascular bundle within the cotyledons. TABLE I: Climatic conditions (Santibáñez & Uribe 1993) of the origins of the plant material of Nothofagus alessandrii used in this study. TABLE II: Morphologic characteristics of the Nothofagus alessandrii seeds from the different origins (mean ± standard error). TABLE III: Seed analysis of Nothofagus alessandrii seeds from the different origins (mean ± standard error). Stratification for 30 days. Mean values with the same letter do not significantly differ from each other, P ≤ 0.05.
2018-12-21T20:57:40.022Z
2013-01-01T00:00:00.000
{ "year": 2013, "sha1": "45f4e546e60e7fc05f92d852d0ce64b58cfa53e3", "oa_license": "CCBYNC", "oa_url": "http://www.scielo.cl/pdf/gbot/v70n1/art11.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "45f4e546e60e7fc05f92d852d0ce64b58cfa53e3", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Biology" ] }
5020320
pes2o/s2orc
v3-fos-license
A Simple Method of Isolating Mouse Aortic Endothelial Cells In the study of vascular biology, analyses of endothelial cells (EC) and smooth muscle cells (SMC) are very important. The mouse is a critical model for research; however, the isolation of primary EC from murine aorta is considered difficult. Previously reported procedures for the isolation of EC have required magnetic beads or Fluorescence Activated Cell Sorting (FACS) to purify the cells. In addition, these procedures were applied to the heart, eyeball, or lung, not the aorta. Therefore we developed a simple method of isolating EC or SMC from the murine aorta without the need for any special equipment. To verify the purity of the cell culture, we performed both an immunofluorescence study and a DNA microarray analysis. The immunofluorescence study demonstrated specific expression of PECAM-1 in isolated EC cultures. In contrast, the isolated SMC did not exhibit PECAM-1, but rather smooth muscle actin. The DNA microarray analysis demonstrated the expression of EC-specific (16 genes) or SMC-specific (5 genes) genes in each cell type. Thus, pure EC or SMC can be isolated from the aorta without the use of any special equipment. These results suggest that this method should be particularly useful for vascular biological research. J Atheroscler Thromb, 2005; 12: 138-142. Introduction In recent years, cardiovascular disease has emerged as the leading cause of death in developed countries (1). The formation of atherosclerotic lesions involves the recruitment of blood monocytes to the arterial intima, the engulfment of lipids, and transformation into macrophage foam cells (2, 3). Endothelial cells (EC) are activated during this process. Hence, EC are an important focus for investigations of vascular biology. At present, human umbilical vein endothelial cells (HUVEC) are most often used in investigations into EC (4). This is because these cells are easy to obtain and culture, and have been demonstrated to yield reproducible data. However, there is an intrinsic problem in that HUVEC are not from arteries but from veins. In this paper, we report a novel method of isolating EC and SMC from murine aorta. The mouse is critical to basic research because techniques for genetic manipulation are developed more fully for the mouse than for any other mammalian species. Currently, there have been reports on methods to isolate EC only from the murine heart (5-7), eyeball (8) or lung (5, 7). Moreover, most of these methods require special equipment, such as magnetic beads combined with antibodies (5, 6) or a FACS (9, 10), to remove the contaminating SMC. The method described here can be used to isolate not only EC but also SMC, and does not require any special equipment. We verified the character of EC or SMC, respectively, using both an immunofluorescence technique and a DNA microarray. This method is also applicable to the study of transgenic mice. Materials Experimental materials and reagents Mice C57BL/6J (male and female) mice (8-12 weeks of age) were purchased from Clea Japan Inc. (Tokyo, Japan) and bred. All procedures involving experimental animals were conducted in accordance with protocols approved in the local institutional guidelines for animal care of the Research Center for Advanced Science and Technology, The University of Tokyo. Isolation of EC See Fig. 1. 1. Two male or female mice are anesthetized with an intraperitoneal injection of 0.3-0.4 ml of pentobarbital sodium (10 mg/ml) per mouse. 2.
2. The midline of the abdomen is incised, and the thorax opened to expose the heart and lungs.
3. The abdominal aorta is cut at the middle to release the blood, and the vasculature is then perfused with 1 ml of PBS containing 1,000 U/ml of heparin from the left ventricle.
4. The aorta is dissected out from the aortic arch to the abdominal aorta, and immersed in 20% FBS-DMEM containing 1,000 U/ml of heparin.
5. The fat or connecting tissue is rapidly removed with fine forceps under a stereoscopic microscope.
6. A 24-gauge cannula is inserted into the proximal portion of the aorta. After ligation at the site with a silk thread, the inside of the lumen is briefly washed with serum-free DMEM.
7. The other side is tied off, and the lumen is filled with collagenase type II solution (2 mg/ml, dissolved in serum-free DMEM). After incubation for 45 min at 37°C, EC are removed from the aorta by flushing with 5 ml of DMEM containing 20% FBS.
8. EC are collected by centrifugation at 1,200 rpm for 5 minutes (see the rpm-to-RCF conversion sketch at the end of this section). The precipitate is then gently resuspended by pipette in 2 ml of 20% FBS-DMEM and cultured in a 35 mm collagen type I-coated dish.
9. To remove SMC, after 2 h of incubation at 37°C, the medium is removed, the cells are washed with warmed PBS, and medium G (20% FBS, 100 U/ml penicillin-G, 100 µg/ml streptomycin, 2 mM L-glutamine, 1× non-essential amino acids, 1× sodium pyruvate, 25 mM HEPES (pH 7.0-7.6), 100 µg/ml heparin, 100 µg/ml ECGS, and DMEM) is added (7). One week later, confluent EC are observable.

Isolation of SMC

2. The blood vessel is cut lengthwise, and the inside of the aorta is placed face down onto a 60 mm gelatin dish.
3. With a scalpel, the aorta is cut into pieces that are approximately square, 2-3 mm on each side.
4. The pieces are allowed to dry briefly for one minute; 10% FBS in DMEM is then added gently, and the dishes are placed in an incubator and left undisturbed for approximately 10 days.

Characterization of EC and SMC

Cells were fixed with cold methanol for 10 min at -20°C on a cover slip (18 mm × 18 mm) in a 6-well plate. After incubation with blocking buffer (1% BSA, 0.1% saponin, and 1% FBS in PBS) for 30 min at room temperature, 2.5 µg/ml of antibody against PECAM-1 and 0.5 µg/ml of antibody against smooth muscle actin were incubated with the cells overnight at 4°C or for 3 h at room temperature. After three washes with washing buffer (0.1% BSA and 0.1% saponin in PBS), cells were incubated with Alexa 488 anti-Rat and Alexa 594 anti-Goat IgG antibodies (1:300) at room temperature for 30 min. Cells were washed three more times with washing buffer, mounted with antifade reagent, and photographed using a microscope equipped with a digital camera.
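Step 8 of the isolation procedure specifies the centrifugation speed in rpm, but the corresponding relative centrifugal force depends on the rotor radius, which varies between instruments. As a rough illustration (not part of the published protocol, and using an assumed rotor radius), the standard conversion can be computed as follows:

```python
# Illustrative conversion of the rpm value in step 8 to relative centrifugal
# force (RCF). The rotor radius below is an assumed placeholder; substitute
# the radius of your own rotor.
def rcf_from_rpm(rpm: float, radius_cm: float) -> float:
    """RCF (x g) = 1.118e-5 * r[cm] * rpm^2 (standard conversion formula)."""
    return 1.118e-5 * radius_cm * rpm ** 2

rotor_radius_cm = 8.0  # hypothetical rotor radius in cm
print(f"1,200 rpm at r = {rotor_radius_cm} cm is about "
      f"{rcf_from_rpm(1200.0, rotor_radius_cm):.0f} x g")
```

With this assumed radius, 1,200 rpm corresponds to roughly 130 × g; reporting the speed as RCF makes the step reproducible across centrifuges with different rotors.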
RNA extraction and gene chip analysis

When the cells reached 80% confluence, we extracted total RNA with ISOGEN following the manufacturer's instructions. RNA quality was determined by measurement of absorbance at 260 nm, and RNA integrity was checked by formaldehyde gel electrophoresis. EC total RNA was analyzed as described previously (11). Briefly, 5 µg of total RNA was used to generate first-strand cDNA. After second-strand synthesis, biotinylated and amplified RNA was purified with RNeasy and quantitated by spectrophotometer. Affymetrix (Santa Clara, CA, USA) mouse MOE430 arrays were used in this study. This array contains probe sets for 45,000 transcripts and EST clones. After hybridization, the microarray was washed, scanned, and analyzed with GeneChip Analysis Software (version 4.0, Affymetrix). These data were imported into Microsoft Excel for downstream analysis. Subsequent clustering analysis was carried out with the free software programs Cluster 3.0 (Michael Eisen, Stanford University, http://rana.lbl.gov/EisenSoftware.htm) and Java TreeView version 0.9.5 to compare the gene expression of EC and SMC. Each gene's expression is represented by color intensity in the corresponding sample: the brighter the red, the higher the expression value. Black indicates that the value is effectively null.

Immunofluorescence analysis

Figure 2 shows that EC grew with the characteristic 'cobblestone' morphology, whereas SMC grew in a 'spindle-shaped' pattern. We used EC or SMC that had been passaged up to three times. To confirm that the isolated cells were EC or SMC, we carried out double staining using PECAM-1, a specific marker for EC, and a rabbit polyclonal smooth muscle actin antibody for SMC. Figure 3 shows that EC and MS1 (a mouse endothelial cell line; positive control) were stained by PECAM-1 only. On the other hand, SMC clearly expressed only smooth muscle actin.

Characterization of EC and SMC

Furthermore, each cell type's character was profiled using a DNA microarray (Table 1, Fig. 4). A number of EC-specific genes appeared on the list for EC, such as fms-like tyrosine kinase 1 (flt-1), von Willebrand factor (homolog), cadherin 5 (VE-cadherin), PECAM-1, ephrin B2, and intercellular adhesion molecule-2 (ICAM-2). Smooth muscle actin, calponin-1, and myosin can be seen on the list for SMC. These results also support this method as capable of isolating pure EC or SMC.
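The clustering workflow described above (Cluster 3.0 and Java TreeView applied to the exported expression values) can be approximated with common open-source tools. The sketch below is only an illustration: the gene list mirrors the markers named in Table 1, but the expression values are hypothetical placeholders rather than the actual MOE430 measurements.

```python
# A minimal sketch of hierarchical clustering of expression profiles,
# in the spirit of the Cluster 3.0 / TreeView analysis described above.
# The matrix contains hypothetical log2 expression values, not real data.
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import pdist

genes = ["flt-1", "VE-cadherin", "PECAM-1", "ICAM-2",
         "smooth muscle actin", "calponin-1"]
samples = ["EC_1", "EC_2", "SMC_1", "SMC_2"]
expr = np.array([
    [8.1, 7.9, 2.0, 2.2],  # flt-1: EC marker
    [9.0, 8.7, 1.5, 1.8],  # VE-cadherin: EC marker
    [8.5, 8.8, 2.1, 1.9],  # PECAM-1: EC marker
    [7.2, 7.5, 2.4, 2.6],  # ICAM-2: EC marker
    [1.9, 2.2, 9.3, 9.1],  # smooth muscle actin: SMC marker
    [2.0, 1.7, 8.6, 8.9],  # calponin-1: SMC marker
])

# Correlation distance is a common choice for expression data; average
# linkage is one of the options offered by tools such as Cluster 3.0.
tree = linkage(pdist(expr, metric="correlation"), method="average")
order = dendrogram(tree, labels=genes, no_plot=True)["ivl"]
print("Gene order after clustering:", order)
```

On data of this shape, the EC markers and SMC markers fall into two separate branches of the dendrogram, which is the kind of separation that Fig. 4 conveys for the real arrays.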
Discussion

We have developed a system to isolate and culture EC and SMC from the murine aorta without any special equipment. The character of the EC and SMC was confirmed both immunohistochemically and by DNA microarray analysis. Both cell types could be passaged at least three times. A method of isolating SMC from the murine aorta has already been established (12), and several different ways to isolate mouse EC have been reported. However, these methods are complicated, and the cells are taken not from the aorta but from the heart, eyeball or lung. Indeed, the use of FACS or magnetic beads based on monoclonal antibodies to remove contaminating SMC or other cells is a useful technique, but unfortunately we have not been able to isolate EC from the murine aorta using magnetic beads. We therefore developed a new system to isolate EC from the aorta, because such cells are more directly appropriate to the study of atherosclerosis than EC from other organs. The method described here is quite simple, and can be used to isolate and culture EC or SMC, or even both simultaneously.

There are four critical points in the execution of this method. First, before cardiac arrest, the vascular lumen is flushed with PBS (step 3) to minimize the activation of EC due to clots in the lumen. Second is the replacement of the medium after collagenase treatment (step 9). EC attach to the bottom of the plate more quickly than SMC, so this step helps to reduce contamination by SMC. If contamination persists, the incubation period at step 9 may be reduced to 60 min or even 30 min. The third critical point is medium G, which contains a large amount of ECGS: twice as much as in other reports (5-8). This makes EC grow much faster, and even if SMC are present, medium G weakens them. The fourth point is the collagen I coat. EC prefer collagen I; their growth is faster and their appearance better on collagen I than on gelatin-coated or non-coated plastic plates. We believe that this method of isolating EC or SMC from the murine (wild-type or transgenic) aorta is highly useful for the elucidation of the molecular and cellular mechanisms of cardiovascular disease.

Fig. 3. Immunocytochemistry of murine cells. These cells were immunostained with PECAM-1 (A, C and E) and smooth muscle actin (B, D and F) antibodies. Panels A and B: MS1; Panels C and D: C57BL/6 mouse aortic EC; Panels E and F: SMC (400× magnification).

Fig. 4. Hierarchical clustering of mouse aortic endothelial (day 7) and smooth muscle (passage 3) cells. The expression patterns of certain specific genes of EC and SMC were monitored with a hierarchical clustering computer program (GeneSpring).

Table 1. The principal specifically expressed genes of EC and SMC in the isolated EC and SMC. *: GenBank accession numbers for access to the mRNA sequence or spotted cDNA fragment.
Shit happens on the big screen: faecal motifs in contemporary film

ABSTRACT
The aim of this article is to analyse various excremental motifs and their functions in selected contemporary films. Drawing on concepts such as Julia Kristeva's abject, Mary Douglas's taboo and Mikhail Bakhtin's grotesque body, the authors demonstrate that dirt in the form of excrement holds metaphorical and symbolic potential in cinematic representations. Faecal tropes selected for discussion range from the use of excrement as a means of humiliation (The Help, Green Book, Kornblumenblau) or resistance (Silent Grace, Hunger) to an understanding of defecation as an ideal and peaceful act (Jarhead, Halkaa) or as a trigger for culturally conditioned disgust (Death at a Funeral, Daddy Day Care), to the use of faecal matters as a demarcation line between 'us' and 'them' in the world of the future (Uncanny, The Platform) or as a productive substance entangled with multiple life forms (The Martian). Since the filmic texts discussed can be regarded as a taxonomy of faecal motifs that have received comparatively little scholarly attention, the examples do not exhaust the topic, but lay the foundation for more detailed analysis in the future.

line between 'us' and 'them'. It also frequently exposes the hypocrisy and weaknesses of contemporary consumer society; it shocks and invites the audience into the dark world of matter, unbridled nature, dirt, waste and secretions. Sometimes faeces and defecation feature as telling details of the cultural background in the stories presented or recounted, revealing the mores of a given historical moment or a person's attitude to their body and health. Excrement can also be positively valued as a sign of life and vitality, sometimes even becoming a source of pleasure, joy or creativity. Hence faeces, in association with the abject as conceptualized by Julia Kristeva (Kristeva 1982), can evoke ambivalent feelings: on the one hand intriguing and fascinating, on the other repulsive and disgusting. The unclear status of the excrement - does it belong to me or is it separate? - evokes a sense of horror as well as the sublime and even sacredness.

Given scatological motifs' employment in any number of films in thoughtful and deliberate ways, it may seem surprising that the body of research on cinema's thematic use of faeces and defecation has been quite limited. This may stem from the very subject of defecation remaining hidden on screen, as in life, due to its perception as shameful and impure (Drzał-Sierocka 2019, 127); consider how many film characters appear to function without food and drink, let alone excretion (Drzał-Sierocka 2018, 14). Nevertheless, some scholars have realised the interpretative potential that the occurrence of excreta carries. Much of the research on this subject has been centred on a few flagship examples, including three well-known provocative 1970s films commenting on the distortions of neocapitalism, the overwhelming overconsumption of 'worthless refuse' (Greene 1990, 217), cultural degradation and bourgeois hypocrisy: The Big Feast (La grande bouffe, dir. Marco Ferreri, 1973), The Phantom of Liberty (Le fantôme de la liberté, dir. Luis Buñuel, 1974) and Salò, or the 120 Days of Sodom (Salò o le 120 giornate di Sodoma, dir. Pier Paolo Pasolini, 1975). Furthermore, studies on trash cinema, with its (in)famous representative Pink Flamingos (dir.
John Waters, 1972), offer an aesthetics or even 'poetics of scatology', with the 'holy shit' considered sublime or (with a nod to Waters' memorable muse) 'divine' (Gross 2009). Scholars have also explored the motif of coprophagia in horror films, in which excreta designate the punitive power of humiliation (Phillips 2013). Last but not least, much has been written on arguably the most famous toilet dive in the history of cinema, namely that into 'the worst toilet in Scotland' in Trainspotting (dir. Danny Boyle, 1996), and its relation to the transgressive dimension of corporeality (Harold 2000) in allowing the hero to inspect 'dark matter' up close (Drzał-Sierocka 2019, 130-133).

Given that academic consideration of cinema's scatological themes remains sparse, in this article we offer an overview of various faecal motifs in selected feature films that have remained relatively underexamined for the scatological and connotative meanings they yield. Drawing on such concepts as Julia Kristeva's abject, Mary Douglas's taboo and Mikhail Bakhtin's grotesque body, we demonstrate that faeces carry metaphorical and symbolic potential, as explored by these filmmakers. As these works constitute a catalogue of faecal cinematic tropes and possible methodological perspectives, we hope they will offer an invitation to further in-depth analyses of cultural texts that thematise human relations with regard to the material products of our bodies.

Between nature and culture

Before turning to particular examples of faecal motifs in the selected films, it is vital that we take a closer look at the history of our relation to excrement and defecation, understood as 'a field of polar tensions between nature and culture, private and public, singular and common' (Agamben 2007, 86). This will allow us to outline the historical and cultural background that constitutes a fascinating context for the cinematic images of excretions and their often contradictory meanings. Beginning with ancient cultures, defecation and bodily secretions aroused fear and apprehension due to their ambiguous, mystic status and transgressive nature. 'Shit always occupies a strange and fascinating proximity to God' (Laporte 2000, 111) 1 and it crosses the boundary between what is known - what remains inside - and what is unfamiliar - what accrues outside. It was the inability to accurately define the essence of secretions that was most frightening, as 'dangerous bodily excreta are benign if in their proper place inside the body. (...) feces in the colon (...)
are basically not present, being safely where they belong as long as attention is not called to them' (Miller 1997, 97). The ambiguous, hybrid status of faeces led to their tabooing. Within this logic, one who touches excreta becomes defiled 2; for this reason, body secretions (sputum, urine, faeces) can be used to dishonour and annihilate the enemy. Similarly, in the Christian culture of shame and fear, which rejects carnality as sinful, excrement was also considered unclean and detestable. One result of these restrictive practices was the emergence of a medieval folk culture fascinated by the so-called material bodily lower stratum (Bakhtin 1984), its functionalities and products. During Carnival time, this specific affection was able to reverse the hierarchical order of everyday life, establishing the grotesque defecating body as an emblem of the carnivalesque. The grotesque, lively, 'ever unfinished, ever creating body' (Bakhtin 1984, 26) ignored restrictions, at least for a brief moment, and used its own physiology as a form of opposition to the official order and as a means to transgress its own limits.

The modern era brought about further bodily restrictions, as legalistic culture valued restraint and social convention, which influenced 'the compulsion towards self-control' (Werner 2017, 64). Defecation and faeces thereby evoked feelings of shame, embarrassment and disgust. Since then, a civilised and cultured person has been expected to hide the bodily urges that have been regarded as remnants of a wild nature. Generally, physiological needs, including defecation, were expected to remain unseen, 3 becoming a taboo that should not be discussed, except in toilet humour. A classicist paradigm of the flawless body emerged: the body that did not eat, copulate, masturbate, and certainly did not defecate. In the late modern period, along with the development of the medical gaze that closely observes the body and its physiology, excreta came under scrutiny. 4 Michel Foucault described 'an explosion of numerous and diverse techniques for achieving the subjugation of bodies' (1978, 140), regulating life in its biological dimension as 'bio-power', a supervising and disciplining structure organising the world of social relations. The anarchic, transgressive and subversive nature of corporeality (and faeces for that matter) required continuous control. As a result, excrement became, to some extent, no longer entirely invisible; it is not a merely private or embarrassing matter anymore. It is a 'dark matter' which calls for increased attention. Nowadays, excreta are perceived as physical, tactile, manageable, productive matter (Reno 2014) and 'matter for thought' (Mole 2013, 30). Serving various functions, excreta have appeared in cultural and literary texts dating back to the Middle Ages and early modern era (see, for example, Morrison 2008; Persels and Ganim 2016). Since the second half of the 20th century, they have often become material for numerous artistic projects: visual art, stage plays, performance art and also films and television series (see, for example, Verrips 2017).
Dung and (de)humanisation

As mentioned in the introduction, excrement, as abject and taboo, can be used as a means of drawing clear boundaries between different social groups or, in more extreme cases, as a form of punishment and humiliation. In colonial discourses, 'the other' was often described with the use of various adjectives associated with dirt, such as unclean, filthy, impure and contagious (see, for example, Plumwood 2003). As 'dirt offends against order and certain moral values are upheld and certain social rules defined by beliefs in dangerous contagion' (Douglas 2001, 2-3), in order to maintain the established norms 'the other', associated with filth, needed to be kept separate, along with his or her waste. Three films set in the Southern states of the US, namely Once Upon a Time... When We Were Colored (dir. Tim Reid, 1995), The Help (dir. Tate Taylor, 2011) and Green Book (dir. Peter Farrelly, 2018), explore the perceived contagious potentiality of the excrement of African Americans. All three films take up issues concerning racial segregation in the Deep South, including the Jim Crow laws administering the separation of toilet facilities for whites and Blacks (see, for example, Abel 1999).

In Once Upon a Time... When We Were Colored, a five-year-old boy on a trip with his grandfather goes to use the toilet at a filling station and is stopped by a white attendant pointing at the bathroom door sign stating 'White Only'. As the boy cannot read, his grandfather writes the letters 'W' and 'C' on a piece of paper, explaining: 'This is a "W". That's the first letter of the word white. Now, when you see this, whether it's on a door or a sign or a water fountain, you don't use it. Now, this is the letter "C". This is the first letter of the word color. Now, that's what you look for. That's what you use'. Elizabeth Abel explains that 'initials are the instrument of initiation', through which the boy is taught to adhere to the existing 'racial regime' (1999, 436). Similarly, Green Book's Dr. Don Shirley (Mahershala Ali), during an intermission at his concert held in a Southern mansion, is heading towards the bathroom when he is stopped by the host, who walks him to a back door, points at an old outhouse and says, 'It's right out there 'fore the pines'. Although Shirley's performance receives great acclaim and gratitude, he is still 'the other', who despite being a famous virtuoso is not equal to the Southern white population. In The Help, which abounds with references to the use of toilets by African American domestic workers, the separation of 'dark matter' from clean 'white' waste is most vividly illustrated through the endeavours undertaken by the white antagonist, Hilly Holbrook (Bryce Dallas Howard), who introduces the 'Home Help Sanitation Initiative'. As she explains to her friends during a game of bridge, the initiative concerns 'a disease preventative bill that requires every white home to have a separate bathroom for the colored help'. According to Hilly, the faeces of Black people are 'plain dangerous' as those people 'carry different diseases than we do'; therefore, she is prepared to 'do whatever it takes to protect our children'.
As Stephanie Rountree explains, 'the physical expulsion of the African American body from the white bathroom demarcates a racial boundary of excretion: it implies excretion from Black bodies is not good enough for white folks' toilets' (2013, 64). Although Mary Douglas illustrates that 'eliminating [dirt] is not a negative movement, but a positive effort to organize the environment' (2001, 2), in the context of the aforementioned films only the dirt of Black people carries harmful and contagious potentiality; hence placing it outside the domestic sphere serves as a clear demarcation line between clean whites and dirty Blacks. As in the case of the treatment of Don in Green Book, The Help presents African American domestic workers barred from using indoor toilets even as they play an essential role in white households, in which they clean, prepare food, and most importantly take care of white people's children, to whom they are often closer than are their own mothers. As Douglas states, there is no such thing as absolute dirt: it exists in the eye of the beholder (2001, 2). The understanding of dirt is a matter of the individual's attitude, one which is necessarily socially and culturally constructed. In the films we discuss, it is up to white characters to decide in which spheres dirt functions as a demarcation line and in which it does not impinge on the established order.

While in the above-mentioned examples the issue of waste management becomes a key vehicle through which notions of difference are emphasised, the films discussed below illustrate various ways in which defecation and excrement can be used to humiliate, dehumanise and subjugate the individuals or groups in question. 5 Restricting defecation deprives human beings of their basic physiological needs, as they cannot, in the Kristevan sense, discard the abject (Kristeva 1982, 2). Defecation's governance by strict rules and regulations is especially apparent in the context of prisons and concentration camps, as exemplified by such films as Kornblumenblau (dir. Leszek Wosiewicz, 1988), Silent Grace (dir. Maeve Murphy, 2001) and Hunger (dir. Steve McQueen, 2008). In Silent Grace and Hunger, set during The Troubles (also known as the Northern Ireland Conflict), inmates are not allowed to leave their prison cells to regularly empty their chamber pots, which finally leads to the so-called 'dirty protest', analysed in more detail in the next section. Where the restrictive use of toilets is only implied in Hunger, it is directly explained in Silent Grace by one of the inmates, Eileen (Orla Brady), who during a conversation with the Governor says: '[You] have us on a twenty-three-hour lock-up with no access to toilet facilities, what do you expect?' Offered a bonus food parcel and the future possibility of extra visits to stop the protest, she asserts her position: 'We're prisoners of war. We're looking for political status not a bloody bar of chocolate and an orange'. Refusing the offer, the Governor tells her, 'We will break you, Eileen'. As Florian Werner points out in reference to concentration camps, different kinds of degradations involving faeces were applied 'to arouse a sense of self-disgust and self-revulsion in the prisoners: they [the guards] wished to break their self-respect, and with it to also dissolve any solidarity between the captives amidst the germs and the shit' (2017, 76).
This phenomenon is also found in Kornblumenblau, in which ill prisoners vomit and defecate together in an open-space toilet in the camp, thereby transforming that which in the Western world has long been regarded as a private and singular experience into - referring to Giorgio Agamben's 'polar tension' - a public and common activity (2007, 86). What is more, the prisoners are further debased as they are treated like animals. When a prison functionary with the help of a few inmates runs into the toilet and switches on the light, it illuminates a repulsive image of the prisoners crawling through their own faeces. Hit with a stick and called names, they are thrown, or more precisely expunged, from the toilet together with the waste. The functionary orders one of his wards to disinfect the place, telling him, 'It must be clean like in the chemist's, clear?'; in so saying, he dips his finger in excreta and makes the prisoner lick it. Since 'to touch excrement is to be defiled' (Douglas 2001, 125), being forced to consume it seems to constitute the most abhorrent form of humiliating practice, thus emphasising the power relations within the camp. Similarly, power hierarchies and the humiliating potentiality of faeces are explored in The Power of One (dir. John G. Avildsen, 1992), in which Geel Piet (Morgan Freeman), a Black prisoner suspected of distributing tobacco leaves, is forced by a guard to eat dung off his boots. Suggesting that Piet and other prisoners 'are a bunch of shit eaters', the guard dips his boot in the dung and tells the prisoner to 'get eating'. Piet's response reflecting back on the episode, 'We eat shit every day, all of us', transforms the word's meaning from the merely physiological to the symbolic sphere, as it implies the inmates' inhuman treatment at the hands of the prison authorities. Although the guard attempts to debase Piet, in the eyes of his fellow inmates Piet gains respect, as he has taken sole blame so as to save the other prisoners from punishment. Significantly, these examples illustrate that excremental activities aimed at dehumanising the oppressed in fact dehumanise the oppressors, who lose their humanity through the application of such cruel practices (see, for example, Césaire 2000, 41).
Excremental power

While on the one hand, as illustrated above, excrement can be a source of humiliation, on the other it can become a tool of resistance against the established order, a method of revenge or even a means of reclaiming dignity and power. Nowhere does the subversive potential of faeces seem more apparent than in the cinematic representations of the 'dirty protest', organised by male republican prisoners of the Maze Prison (known as Long Kesh) and then joined by female prisoners of the Armagh Prison. To oppose the discriminatory and inhuman prison conditions referred to in the previous section, to obtain the political status of prisoners of war and to condemn the British occupation of Northern Ireland, both male and female prisoners refused to wash themselves, finally resorting to smearing their excrement on the walls of their cells (see, for example, Weinstein 2007; Yuill 2007). Whereas those events, together with their escalation into a hunger strike in 1981, feature in a considerable number of films, including Some Mother's Son (dir. Terry George, 1996), H3 (dir. Les Blair, 2001), Silent Grace, and Hunger, the body politics of interest to this article come to the fore especially in the latter two films. The motivation for representing such events seems to be similar for both directors. Silent Grace's Maeve Murphy explains that she 'wanted to humanise these women and show that in a situation of total deprivation, human beings endeavour to retain their dignity' (cited in Cantacuzino 2004), whereas Hunger's Steve McQueen states that his film 'is essentially about what we, as humans, are capable of, morally, physically, psychologically. What we will inflict and what we can endure' (cited in O'Hagan 2008).

The audience of Hunger is introduced to the dirty protest through the figure of new prisoner Davey Gillen (Brian Milligan), who, after asserting his identity as a political prisoner by refusing to wear a standard prison uniform and taking an unseen beating, enters the cell housing another prisoner called Gerry Campbell (Liam McMahon). With growing revulsion, Davey slowly looks at the dirty cell walls. Absent any dialogue, the camera lingers on the filthy floor and walls covered with excrement. In a subsequent scene, Gerry is shown smearing faeces on the walls with his hand while Davey eats his food, then scrapes some bits of the unfinished meal from the plate into the corner of the cell, where the remnants are crawling with maggots. While juxtaposing the acts of eating and spreading faeces on the wall may seem shocking and repulsive, it is presented as a mundane quotidian activity. Faeces, despite their politicized usage for the inmates, are here as natural as they are for babies 'before repression and separation intervened' (Agamben 2007, 86). The scene concludes with all of the prisoners simultaneously spilling their urine under the cell doors into the corridor. Similarly to the defecation scene in Kornblumenblau, Agamben's 'polar tension' is exemplified here as both excreta and urine are transformed from a private and singular to a public and common experience (2007, 86). Furthermore, through such joint actions, 'despite the humiliating practices and the dirty cells, the inmates are shown to keep their dignity and pride, and the guards seem to be by-and-large unable to break their resolve' (Merivirta 2015, 130).
While the scenes in Hunger are based mainly on striking visual compositions, Silent Grace explores the dirty protest through the dialogue of the main characters. When Eileen decides to smear her faeces on the wall, she tells her cellmate: 'We gonna have to'. In contrast to Gerry from Hunger, Eileen does not use her bare hand to spread her excrement, but scoops it out on a bit of toilet paper before putting it on the wall. 6 As in Hunger, the dirty protest in Silent Grace is presented through the eyes of a new inmate, Áine (Cathleen Bradley), who, after entering the cell covered in excrement, vomits into the chamber pot. Later on, Eileen tells Áine that she should join the protest, which despite being revolting can be performed with dignity. Verging on getting sick, Áine lifts the chamber pot, puts the excrement on a piece of toilet paper and starts spreading it on the wall, accidentally dirtying her hand. She sits on the bed beside Eileen and after a few seconds of silence starts crying. Although 'there is no evidence that Áine has become politicized', as Aileen Blaney explains, 'the dirty protest is presented as an object through which Áine, as Eileen's protégé, channels her respect and affection for her role model' (2008, 402).

Although by that stage the viewers of both films are undoubtedly repulsed by the conditions in which the prisoners have lived, the intensity of the protest is conveyed mainly through the reactions of people, including guards and priests, entering the cells. In Silent Grace, the priest visiting Eileen covers his nose with a tissue to block the stench in the cell; one of the guards offers to cover for his female colleague who feels sick from the smell while on duty; and food is delivered to inmates by guards wearing masks and gloves. The unbearable smell is illustrated even more vividly in a scene in Hunger in which a man wearing special protective clothing comes to steam clean the cells. This scene is meaningful for another reason, namely the excremental patterns on the walls. With a look of disbelief, the man removes his protective face shield to inspect circular patterns resembling a kind of artistic creation. Indeed, real-life prisoners used their excrement to write messages and to create Gaelic graffiti on their cell walls (Feldman 1991, 217). They were, however, more than just artistic creations. As Allen Feldman emphasises:

Through the sedimentation of its many strata - interrogation white, H-Block feces, Gaelic graffiti - it had become an archeological artifact, a place for the storage and the liberation of memory. An entire genealogy of resistance was etched with pain and endurance into the material imprisonment. Both the mind and the bodies of the prisoners passed into this cell membrane through the media of their writing and the fecal transcription of their political condition. (1991, 217)

Through this kind of creativity, the prisoners were able to mark their presence, tell their stories, express their cultural distinctiveness and highlight their political views. Using Foucault's terminology, both male and female prisoners, through the use of their own faeces, placed themselves 'outside the reach of power and established law' (1978, 6). The marginal filthy substance was transformed into a symbolic weapon against the prison authorities and the British state.

The subversive potential of faeces is also illustrated in the context of The Help and a short dramatic film entitled Eat My Shit (dir.
Eduardo Casanova, 2015). The latter is linked at the linguistic level to the scene from The Help entitled 'Minny's Chocolate Pie' by means of the phrase giving Casanova's film its title, uttered by Minny (Octavia Spencer), for whom it constitutes a turning point in the maid's power relation with her former employer Hilly. Having been fired for using her employer's private toilet, Minny returns to Hilly's house bringing her favourite chocolate pie. Hilly greedily consumes two slices before discovering that apart from that good vanilla from Mexico, Minny has made her cake with something else real special, namely her own excrement. Minny's gift to Hilly is thus a form of revenge as well as a means of resistance against long-lasting repression and devaluation. As Rountree argues, Hilly 'figuratively forces her racist politics down everyone's throat, so Minny physically forces her own political resistance down Hilly's', adding that Minny and Hilly are 'mortal enemies across a racial demarcation line, on either side of a pie that is full of shit' (2013, 66-67). Although somewhat humorous, the scene is ironically subversive as well, for Hilly - who has been doing her utmost to enforce, through her 'Home Help Sanitation Initiative', that Black maids' waste be disposed of outside - invites the maid's shit, though unconsciously, inside her home and body in consuming the pie. Elizabeth Ezra rightly concludes: 'As revenge for her unjust dismissal, Minny's feces pie, consumed by Hilly with such relish, brings about the very comingling of waste materials that Hilly feared in the first place' (2018, 52).

While the use of faeces in the on-screen representations of the dirty protest was based on actual events, the 'Minny's Chocolate Pie' scene and the utterance by a Black maid to 'eat my shit' would have been inconceivable in the South at the time. Casanova's film Eat My Shit presents another highly unlikely scenario in a scene lasting a bit over three minutes. The film begins with the main character Samantha (Ana Polvorosa) explaining during a phone conversation with her mother that a selfie she posted on Instagram 'has been deleted for sexual content'. Only after a few seconds are we shown that Samantha's mouth has been replaced by a hairy asshole. 7
As she orders soup in a restaurant and consumes it with the use of a funnel and tube placed in her rear end, it appears as if she has an inverted digestive system, her anus and mouth interchanged. In the Bakhtinian sense, Samantha's digitally manipulated self becomes the embodiment of the grotesque body, with 'the substitution of the face by the buttocks, the top by the bottom' (Bakhtin 1984, 373). As in the aforementioned films, here too defecating and excreta serve a subversive purpose. Samantha, whose strong sense of exclusion is set off by the mocking of her waitress, who finds the video hilarious, decides to pay her tab with her own faeces. She defecates on top of the bill that the waitress delivered on a saucer, takes a photo and posts it on social networking sites with the phrase 'eat my shit'. Again, faeces are deployed as a means of revenge and a form of opposition to unfair treatment both by the waitress and by social media. Foucault rightly asserts that 'the judges of normality are present everywhere and that it is on them that the universal reign of the normative is based' (Foucault 1995, 304). In the context of Samantha's performance, critique is visited upon social media's enforcement of body standards and its policing of forms of otherness, treated as violations of so-called normality. Significantly, despite exploring disparate issues, Casanova's Eat My Shit as well as McQueen's Hunger, Murphy's Silent Grace and Taylor's The Help feature characters who, far from becoming vulnerable to authority and established norms, are empowered through the use of faeces. As Foucault explains, 'power is exercised through networks, and individuals do not simply circulate in those networks; they are in a position to both submit to and exercise this power' (2003, 29). As evidenced, these characters choose the latter.
(Dis)pleasures of everyday life

The fact of defecation accompanies a human being every day, 'from the cradle to the stretcher', as 'one of the basic conditions of life' (Werner 2017, 63, 67). Defecation is often presented in screen narratives as a long-awaited respite from the hardships of everyday life, a break from duties, a moment of solitude, an ideal peaceful act or even a kind of ritual, even if sometimes unexpectedly interrupted by unforeseen circumstances or interlopers. Usually, the juxtaposition between the peaceful act on the toilet seat anticipated by the protagonist and the events that disturb this moment contributes to a scene's dramatic potential. In Jarhead (dir. Sam Mendes, 2005), the main character, Swofford (Jake Gyllenhaal), takes laxatives inducing diarrhoea in order to evade his duties in the military. We witness him sitting on the toilet reading The Outsider by Albert Camus. The tranquil act is interrupted by Staff Sergeant Sykes (Jamie Foxx), who, not misled by Swofford's subterfuge, calls his subordinate to order before ruthlessly throwing Swofford's book into a dustbin. As Werner notes, our interest in 'dark matter' may point to 'a romantic desire to escape the western world's civilizing mechanisms of repression' (2017, 67). Swofford, hiding in the toilet, rebels (unsuccessfully) against those mechanisms. In another sequence, Swofford, addressing the offscreen viewers, recounts unpleasant memories of his life, including visiting his sister in a psychiatric institution and baking muffins with his depressed mother. These are painful events that he will not discuss openly, yet he significantly engages viewers in these, and in intimate matters of a different type, while sitting on the toilet 'taking a dump', with a comic book in hand. It seems that the predictability of the excretory act offers an escape from the torments of his everyday life, particularly from the protagonist's disturbed family relations.

The image of the ideal act of defecation in 'one's own bathroom in one's own deserted house, with no time limit' (Lea 2001, 105) interrupted by sudden events is also depicted in Lethal Weapon 2 (dir. Richard Donner, 1989) and Pulp Fiction (dir. Quentin Tarantino, 1994). In the first, policeman Murtaugh (Danny Glover) discovers he is sitting on a bomb attached to the toilet. Outlining the situation to his partner, Riggs (Mel Gibson), he says: 'First time in 20 years I get the bathroom to myself. No kids banging on the door. No wife asking me to hurry up. Just me and my new "Saltwater Sportsman" magazine!' Moments later, a team arrives to disarm the bomb, and proceeds to work around Murtaugh, still sitting on the toilet. In Pulp Fiction, the death of Vincent Vega (John Travolta) occurs immediately after a peaceful act of defecating when, bathroom reading still in hand, he is shot with his own gun after leaving the toilet (Lea 2001, 104-106).

Neither do 'peacefulness' and 'defecation' go hand in hand in non-Western film stories. The heroes of Halkaa (dir. Nila Madhab Panda, 2018) and Slumdog Millionaire (dir. Danny Boyle, 2008) are often characterised by the defecation conditions in their neighbourhood, and sometimes, like the protagonist of Halkaa, they fight fiercely for the right to use a secluded place for this private act. Mary Douglas, writing of Indian society's 'normal attitude' towards defecation, states that they do not treat it as dirty or secret (2001, 125). She emphasises that it involves 'slack disregard (...)
to such an extent that pavements, verandahs and public places are littered with faeces until the sweeper comes along' (2001, 125). However, in recent years this issue has garnered increasing attention. Since 2014, the government of India, in partnership with UNICEF, has taken action to end open defecation in the country (UNICEF n.d.). 8 The director of Halkaa does not hide the propagandistic intent of his motion picture: 'We hope this film in some way helps the country to become 100% open defecation-free' (Ians 2018). Obviously, by this logic, defecation in a 'civilised' country should occur behind closed doors. As Werner explains: 'Our western understanding of civilization is (...) intertwined with the disappearance of shit; the degree of its (in)visibility signifies the position of a country on a scale of civilizational development' (2017, 65).

The main character of Halkaa, a boy living in the Delhi slums, dreams of having a toilet built in his neighbourhood. Currently, he has to take care of his business, among other places, by the railroad tracks or using a chamber pot in his house. He does not feel comfortable in either place - on one occasion, during the act of defecation, his father knocks on the door, causing panic and embarrassment in his son. A scene in which the boy visits an elegant bathroom showroom in the city centre is also suggestive; amid the display of shiny bathroom equipment, the salesperson praises one of the toilets, calling it a 'door that opens to heaven' - the image of the toilet thereby representing a vision of escape to a better world. Similarly, the life of the protagonist of Slumdog Millionaire is founded on a significant fecal act: Jamal is using an open latrine, 'the most sordid physical manifestation of urban marginality' (Anjaria and Anjaria 2013, 61), when he hears that his beloved Bollywood film star has visited the neighbourhood. Without thinking twice, he makes his way out of the closed latrine by jumping into the pit that collects the excreta. Jamal, covered with faeces, runs for the actor's autograph, seamlessly making his way through the crowd, which parts due to the boy's odour and repulsive appearance. In this situation, 'the ignominy of the beshitted body (...) becomes an asset' (Phillips 2013, 37). This scene illustrates 'the productive mobilizations of marginality (here symbolized by shit) for navigating urban life' (Anjaria and Anjaria 2013, 61). Excrement is not necessarily disgusting or repulsive; it can also be a mobilising agent, a driving force.

In everyday life defecation induces both hope and horror: on the one hand, it is inseparably connected with life - in Mikhail Bakhtin's words, 'the element of reproductive force, birth, and renewal is alive in it' (Bakhtin 1984, 175) - and on the other hand, with death. 9 Perhaps it is because of this connotative ambivalence that we are so eager to laugh at faecal matters. One of the most clichéd comic uses of defecation appearing on the silver screen is the motif of diarrhoea. Shit as an overwhelming power of nature, that which cannot be resisted, plays prominent roles in Death at a Funeral (dir. Neil LaBute, 2010), Bridesmaids (dir. Paul Feig, 2011), Dumb and Dumber (dir.
Peter Farrelly & Bobby Farrelly, 1994) and many other works.The powerlessness of the characters in the face of the faecal prerogative triggering feelings of horror, embarrassment and disgust constitutes a source of humour.In Death at a Funeral one of the protagonists is accidentally flooded with his uncle's excreta.While he attempts to quickly clean the dirt off his hand, he shrieks in terror: 'Please come off, please come off! (. ..)No, no, no, please God, no!' Of course, this type of disgust does not need to be connected with touching the excrement, as frequently its repulsive smell alone is enough to make one feel ashamed and embarrassed.As Miller notes, 'Disgust undoubtedly involves taste, but it also involves -not just by extension but at its core -smell, touch, even at times sight and hearing' (1997, 2).For example, in the opening scene of Hungry Hearts (dir.Saverio Costanzo, 2014), the main characters are trapped in a restaurant toilet soaked with a horrible odour; for one woman in particular the reek causes a feeling of overwhelming helplessness.On the whole, the aforementioned films under the guise of comedy demonstrate their characters' acute awareness of the existence of a vital material-bodily element related to defecation.Human physiology has become the subject of scatological humour for a good reason.We often try to disregard this element as an uncomfortable part of our life.However, it is sometimes not possible and in such cases 'laughter about shit comes in handy, proposing a way (. ..) to attempt to distance ourselves from its physical reality' (LaCom 2007). We also laugh at the attempts of adults to curb children's faecal matters.Changing diapers in Life as We Know It (dir. Greg Berlanti, 2010) or Three Men and a Baby (dir.Leonard Nimoy, 1987) serves as a consummate example of 'handling excrements, marked by significant negotiations of power relations between parents and the child' (Werner 2017, 74).Child characters also play with their own faeces, to the dismay of adults, as in Daddy Day Care (dir. Steve Carr, 2003).While we are not shown the result of playing with poop, the noticeable pride on the boy's face after leaving the toilet and the shock of an adult peeping into the bathroom are self-evident.One might conclude that the figure of an innocent child who can defecate anywhere, uncontaminated by a culturally conditioned disgust towards excrement, appears in filmic representation to indicate a kind of longing for innocence, playfulness and carelessness concerning faecal matters.Imaginably, an adult subconsciously yearns for 'that lost paradise of shit' (Werner 2017, 67). Could it be that the future will bring us this kind of wonderland?At the final stop of our journey through faecal motifs in feature filmmaking, we turn to the science fiction genre to observe how filmmakers imagine faecal issues in the future. Shitty futures? It should be noted that motifs of defecation and excreta do not appear frequently in science fiction; problems of physiology and especially issues of biological waste produced by the human body seem largely absent in the genre.Archetypal 'cool, rational, competent, (. ..) 
male, and sexless' (Sobchack 1985, 46), conquering new planetary frontiers in the depths of space, are not occupied with such prosaic activities. They reject biology and sexuality (Sobchack 1985, 48); the sterile futuristic interiors discourage thinking about such all-too-human, uncomfortable and embarrassing activities as eating, excreting or intercourse. Sex especially is often understood as a useless relic, a remnant of the past to be replaced by more advanced technologies. 10

However, the functions of the digestive and excretory systems still seem to fascinate some sci-fi filmmakers, as well as creators of artificial organisms. A flagship example of this interest is the design of the famous automatic duck brought to life by French inventor Jacques de Vaucanson. The duck was to simulate the vital functions of a real bird so as 'to test the limits of resemblance between synthetic and natural life' (Riskin 2003, 606). Eighteenth-century technicians, including de Vaucanson, captivated by this possibility of simulating life, constructed 'devices that emitted various lifelike substances': machines that breathed, bled and defecated (Riskin 2003, 606). Allegedly, the mechanical duck was able to consume bits of corn and grain, only to excrete them after a while in a changed 'faecal' form.

At first glance the concepts of 'defecation' and 'artificial creature' may seem unrelated, as sci-fi films' most famous androids or robots are devoid of any traces of digestive or excretory systems. Despina Kakoudaki emphasises: 'Artificial bodies are designed to remain immune to many of the needs and processes of organicity, to sleeping, eating, breathing, and other such functions' (2014, 76). This does not mean, however, that faecal issues are unimportant in these filmic stories; quite the opposite: once they appear on screen, they perform key functions. Firstly, defecation may serve as a source of humour, as filmmakers imagine what excretion could look like and what the faeces of a mechanical being, e.g. a humanoid robot or a robotic dog, could actually consist of. One such character is worth mentioning here. The American talk show Late Night with Conan O'Brien, broadcast on NBC from 1993 to 2009, sporadically featured a costumed character known as 'Robot on the toilet', whose skits built their comic potential on the character's crude construction, on the absurdity of the juxtaposition of a large, heavy, angular robot with the small white toilet he uses, and on the very idea of an artificial creature defecating. The suspense raised in waiting for the final 'toilet success' also prompts a comedic response; upon hearing the hollow, metallic sounds of a 'poop' hitting the toilet bowl, the studio audience (and viewers at home) bursts out laughing. In a similar vein, in Sleeper (dir. Woody Allen, 1973), the main character acquires 'a computerised dog' of whom he most wishes to know: 'Is he housebroken or will he be leaving little batteries all over the floor?' In another example of laughter-provoking excretory-related matters, the farting scene in the animated film Robots (dir. Chris Wedge, Carlos Saldanha, 2005) significantly has robots use their armpits to make farting noises because, obviously, they have no digestive system. As Kakoudaki notes, 'approaches to anthropomorphic designs revolve (...) around imitation' (2014, 18).
Secondly, 'dark matters' related to the digestive and excretory systems contribute to defining 'humanity' and marking the line between 'human' and 'non-human', 'us' and 'them', in sci-fi films. Such narratives distinguish artificial beings from humans in multiple ways: 'They are not real people (...) because they don't have a soul (...), because they cannot procreate, die, or kill, because they cannot love (...), because they are too limited by their bodies, or because their bodies are too limitless (...)' (Kakoudaki 2014, 215-216). By the same token, these characters' ability to eat and defecate provides proof of (non)belonging to the human species. In the TV series Humans (AMC, Channel 4, Kudos, 2015-2018) and Real Humans (Äkta människor, Sveriges Television, Matador Film AB, 2012-2014), which focus on the social and cultural implications of creating sentient anthropomorphic robots, the ability to eat and drink is a feature that distinguishes artificial entities from humans. In order to penetrate the human environment, one of the series' protagonists must demonstrate adherence to social norms by eating and drinking in the company of others. The fourth episode of the first season reveals how the non-human heroine can consume food without damaging her internal mechanism: she has a plastic bag installed in her oesophagus, which she empties in private, pouring the contents into a dustbin or toilet, and thus 'defecating'. 11

The distinction between human and non-human on the basis of excretion is also made evident in a scene in the film Uncanny (dir. Matthew Leutwyler, 2015), in which a builder of robots scolds an android after learning it has been harassing a woman in the restroom. He tries to embarrass the android by asking 'What on Earth were you trying to do? Were you trying to figure out how to take a shit? Because the last time I checked, you don't even have an asshole!' Here the robot's attempts to fit into human categories are mocked. The existence of an anus provides hard evidence of the division between human and non-human. Thus, having one's private parts erased can be read as a deprivation of humanity. In an episode of the dystopian anthology TV series Black Mirror (Zeppotron, 2011-) entitled 'USS Callister', digital copies of human characters are trapped in a computer game and forced to play roles in its sadistic creator's sci-fi fantasies. It turns out that the imprisoned avatars have been deprived of their genitals to prevent them from experiencing bodily sensations. One of the protagonists complains about the situation, saying that now she 'can't even have the basic fucking pleasure of pushing out a shit'. The excretory act is presented here as a symbol of freedom and human dignity. Overall, 'USS Callister' reveals the misery that might be caused once we discover how to 'digitize ourselves, creating clones that can be imprisoned, abused, forced to work for us' (Schopp 2019, 66). Taking away the pleasure connected with experiencing human physiology and biology could constitute a means with which to create a kind of dystopian horror.

Another compelling depiction of faecal matters in sci-fi arises in dystopian or post-apocalyptic films presenting a world 'after the eradication of all we know' (Gurr 2015, 1), in which many of the rules governing social life, including those related to human behaviour in the sphere of physiology, cease to exist. In films such as Waterworld (dir. Kevin Reynolds, 1995) and Blindness (dir.
Fernando Meirelles, 2008), we witness 'an end of civilized decorum' (Stifflemire 2017, 218). As a result, the excretory functions, including urination and defecation, are made public. According to Brett Samuel Stifflemire, post-apocalyptic visions 'mobilize the carnivalesque to highlight the scatological nature of human corporeality' (2017, 219). Noticeably, in a world without norms and structures, transgressive carnal behaviours related to death, sex, eating and excretion come to the fore. Stifflemire emphasises that such stories often criticise rituals and institutions, especially those of organized religion and government (2017, 237). In addition, these narratives often use the metaphor of 'shit' to comment on social inequalities. This is the case in The Platform (El hoyo, dir. Galder Gaztelu-Urrutia, 2019), in which prisoners of a mysterious institution occupy various floors of a skyscraper that indicate their social position at a given moment. A peculiar experiment carried out in this institution relies on the titular platform, overflowing with food, which descends from the penthouse to the basement, stopping on each floor for a short period; once the platform reaches their level, the prisoners can eat as much as they wish. By the time the platform opens onto the lower floors of the building, there is no food left. One of the characters residing on a higher level, hoping for a collective rebellion, seeks to convince his fellow prisoners to eat only their designated portions. Unless they obey his orders, he is going to 'shit in their food every day'. This threatening vision of contaminating food with faeces works more effectively than a previous appeal to a sense of collective solidarity. However, the aspiring revolutionary addresses his request only to those on the lower floors, as he realises that the prisoners on the higher floors will undoubtedly ignore him. 'I can't shit upwards, you see', he explains. Here, faeces are not recognised as 'a great democratizer, erasing distinctions among us' (Miller 1997, 135). Even if everyone defecates, in a dystopian world of social inequality, people from society's upper echelons remain unaffected by the 'shitty matters'.

A final noteworthy faecal motif appearing in a sci-fi film comes from one of the most striking examples of recent years, namely The Martian (dir.
Ridley Scott, 2015). Astronaut Mark Watney (Matt Damon), accidentally left on Mars by his team, must use all available resources to survive on the foreign planet. He hits on the brilliant idea of using human faeces, his own and those left by other team members, as fertiliser for the cultivation of potatoes. Needless to say, the hero succeeds, as he manages to grow crops using his own waste. The Martian explores the weighty discourse of waste productivity, in which excreta are treated as generative material suitable for creative reuse. Faeces do not evoke a sense of horror in Watney or lead him to feel shame or disgust. They are no longer 'something unwanted and discarded' (Reno 2014, 2); on the contrary, they are perceived as both desirable and indispensable. This correlates with the posthuman, anti-anthropocentric perception of waste as connecting human and non-human elements. Joshua Ozias Reno points out that we tend to imagine waste as 'something static and undead, when in reality it is unavoidably entangled with multiple life forms' (Reno 2014, 20). This enlivening discourse is evidenced in The Martian, in which human waste 'continues to be microscopically lively and readily gives way to more macroscopic arrangements' (Reno 2014, 20). Moreover, this is not the case in The Martian alone; much of the faecal matter depicted in contemporary film is laced with a notable optimism. Faeces have come to function as a means of resistance against normativity discourses. Excrement serves as an attempt to dehumanise the victim, yet ultimately debases the abuser. Sitting on a toilet is considered an escape from societal repression mechanisms. Excreta are lively and productive. Defecating equals, and equalizes, humanity - alongside other species and AI entities. Perhaps the future will not be so shitty after all.

Notes

1. The relation between sacredness and bodily secretions often appears in popular culture in satirical contexts, for example, in one of the episodes of the comedy series Avenue 5 (HBO, 2020-2022), wherein a cloud of illuminated faeces drifting around a spacecraft miraculously forms the face of Pope John Paul II.
2. Mary Douglas writes about excreta in the context of caste society in India, while noting that the ritual, 'official' faecal impurity does not translate directly into the everyday life of Indians, who usually did not treat defecation in public as unclean or secret (2001, 125-126). We will return to the issue of defecation in India as depicted in film later in the article.
3. A person higher in the social hierarchy could expose their private parts in the presence of those lower on the social ladder. The king's exposure to courtiers, therefore, did not evoke a feeling of inferiority or shame (see, for example, Elias 2000, 417-418). At this point, it is worth recalling the memorable scene from the Outlander series (2014-) in which we see King Louis XV of France taking care of his physiological need in front of his courtiers and royal guests. The king is clearly constipated. One of the main characters of the series, Jamie Fraser, advises the king to eat porridge, which may help him with this issue.
4. An interesting filmic example of a doctor analysing human faeces can be found in The Last Emperor (dir.
Bernardo Bertolucci, 1987). The court physician examines the infant emperor's stool by checking its smell and texture in order to make appropriate dietary recommendations ('No bean curd today and no meat!'). These instructions aim at changing the consistency of the excreta, as the hard and black stool of the emperor can indicate constipation (Yang 2015).
5. Faeces used as a means of humiliation feature also in such films as Salò, or the 120 Days of Sodom, Jarhead and Black Book (dir. Paul Verhoeven, 2006), to mention only a few.
6. During the dirty protest in the Armagh prison, female prisoners also used their menstrual blood (see, for example, O'Keefe 2006), whereas Hunger's Steve McQueen states that his film 'is essentially about what we, as humans, are capable of, morally, physically, psychologically. What we will inflict and what we can endure' (cited in O'Hagan 2008).
7. Samantha appears also in Casanova's feature film Skins (Pieles, 2017), which explores issues related to various disabilities of the characters.
8. See 'An open defecation free India: Towards maintaining an open defecation free India', Unicef.org, http://www.unicef.org/india/what-we-do/ending-opendefecation (accessed: 23.03.2021).
9. Bakhtin speaks here of the so-called 'Malbrough theme' present in world literature and oral tradition. It considers 'the interweaving of death throes and the act of defecation, or the closeness of defecation to the moment of death' (Bakhtin 1984, 151). The juxtaposition of these elements would customarily serve to connote the degradation of death and dying.
10. Traditional sex is eradicated from the future as neither hygienic nor aesthetically pleasing. Instead, for example, the heroine of Barbarella (dir. Roger Vadim, 1968) 'makes love' by consuming a special 'exaltation transference pill', kneeling and pressing her palms to those of her partner. Similarly, in the distant world of the future presented in Sleeper (dir. Woody Allen, 1973), the traditional unsightly, unsanitary forms of sex have been replaced with a brief, contactless encounter between partners in a futuristic, tube-shaped machine (see, for example, Łapińska 2020, 72).
11. The device of storing in a sack the food 'eaten' by an artificial human is also found in Isaac Asimov's novel The Caves of Steel.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes on contributors

Marzena Keating, PhD, is trained in the field of Humanities in the discipline of Culture and Religion Studies with an MA in English Studies. She is the author of several texts centred on Irish history and culture. She works at the Pedagogical
Analysis of Russia and Other Countries' Economic Parameters and Their Connection with the Development of Science Parks

Economic growth factors in different countries draw on their own special resources and features, owing to differences in development processes and environment structure. The authors analyse the influence of science parks upon economic indicators using the example of Russia. Although there are organizations in Russia created to support the creation, existence and development of science parks, there is no efficient and common mechanism to support the functioning of the science parks and to orient it at the final result (improving the growth of the country's economy). The article contains an attempt to analyse and estimate the growth possibilities of Russia's economy due to the science-park factor, the main component of which is the creation of comfortable conditions for the establishment and development of start-ups and the smooth work of small innovative and other organizations. The analysis of the Russian economy indices is performed on the basis of the statistics data of the Russian Statistics Bureau published during 1995-2012. The regression dependencies contained in this work, built from real statistics data, can answer questions connected with the extensive component of the science-park effect (the increase in the number of innovations created because of the SP factor), such as: the maximum possible Russian GDP in 2010 if the science parks had supported all small businesses; the GDP value expected in 2015 if the situation with the science parks does not change; and the GDP value expected if the science parks fully support all small businesses and start-ups.

• Increase in the number of innovations created because of the SP factor (intensive effect).

Therefore, there are two main directions of the Russian SP-factor analysis that should be considered important:
i. Is there a growth potential in Russia established only on the basis of the quantity (extensive) effect?
ii. What is the growth potential established on the basis of the intensive component of the SP factor?

As long ago as the 1990s there were already attempts to create science parks in Russia from "below", but the legislative base, in the form of regulatory acts, was established only in 2005-2006. The Association of Russian Science Parks, a non-official public organization whose goal is to support the creation, existence and development of science parks, was also established. Still, up to now there is no efficient and common mechanism that would support the functioning of the science parks and that would be oriented at the final result, namely the growth of the country's economy.

In our opinion, such a mechanism should be a state one (possibly hierarchically established and built into the power vertical). It should contain not only the means of support, but also the means for the creation and development of science parks in the most important industries and areas. For this it is necessary to create specific functional systems, in particular a system that would monitor the conditions and parameters of the SPs, a system that would model the variants and possible scenarios of SP development, a system that would provide the SPs with human resources, etc.
The article contains an attempt at the analysis and estimation of the growth possibilities of Russia's economy due to the SP factor. The analysis of the Russian economy indices is performed on the basis of the statistics data of Rosstat (the Russian Statistics Bureau) that were officially published on its website during 1995-2012. It is necessary to mention, though, that not all the data in Rosstat within the mentioned period are presented equally well. This, in its turn, could have an impact upon the accuracy of some estimates.

Gross Domestic Product

One of the most important economic integral indices of any country is the GDP, Gross Domestic Product. Statistics data contain the nominal GDP value (according to the current year's prices). But in order to compare the results of several years, an inflation coefficient is usually included (the price increase in relation to the previous year), which is then used for calculating the deflator, a coefficient that specifies the price change in relation to some definite base year (in this research, 2013). In this case, the actual GDP will show the values that were corrected by the deflator value (see Table 1).

Table 1. Russia's GDP for the period of 1995 to 2012 (in bln Rubles).
Fig. The GDP series s(t) and its linear approximation, which is highly adequate to the values during this period.
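To make the deflator arithmetic and the linear approximation concrete, the sketch below converts a nominal GDP series into real (2013-price) GDP and fits a linear trend of the kind the article's regressions rely on. This is a minimal illustration; the numeric values are placeholders, not Rosstat's actual figures.

```python
import numpy as np

# Illustrative placeholder values, not Rosstat's actual figures.
years = np.array([2008, 2009, 2010, 2011, 2012])
nominal_gdp = np.array([41277.0, 38807.0, 46309.0, 55967.0, 62176.0])  # bln rubles, current prices

# Deflator relative to the 2013 base year: price level of year t / price level of 2013.
deflator = np.array([0.70, 0.71, 0.79, 0.90, 0.97])

# Real GDP in constant 2013 prices: nominal value divided by the deflator.
real_gdp = nominal_gdp / deflator

# Linear trend s(t) = a*t + b, the kind of regression dependency built from the statistics.
a, b = np.polyfit(years, real_gdp, deg=1)
print(f"trend: s(t) = {a:.1f} * t + {b:.1f}")
print(f"extrapolated real GDP for 2015: {a * 2015 + b:.0f} bln rubles (2013 prices)")
```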
Understanding How and Why Developers Seek and Analyze API-related Opinions

With the advent and proliferation of online developer forums as informal documentation, developers often share their opinions about the APIs they use. Thus, opinions of others often shape a developer's perception and decisions related to software development. For example, the choice of an API, or how to reuse the functionality the API offers, is to a considerable degree conditioned upon what other developers think about the API. While many developers refer to and rely on such opinion-rich information about APIs, we found little research that investigates the use and benefits of public opinions. To understand how developers seek and evaluate API opinions, we conducted two surveys involving a total of 178 software developers. We analyzed the data in two dimensions, each corresponding to specific needs related to API reviews: (1) needs for seeking API reviews, and (2) needs for automated tool support to assess the reviews. We observed that developers seek API reviews and often have to summarize those for diverse development needs (e.g., API suitability). Developers also make conscious efforts to judge the trustworthiness of the provided opinions and believe that automated tool support for API review analysis can assist in diverse development scenarios, including, for example, saving time in API selection as well as making informed decisions on particular API features.

INTRODUCTION

APIs (Application Programming Interfaces) offer interfaces to reusable software components. Modern-day rapid software development is often facilitated by the plethora of open-source APIs available for any given development task. The online development portal GitHub [1] now hosts more than 67 million public repositories, a radical increase from the 2.2 million active repositories hosted in GitHub in 2014. While many of the public repositories in GitHub may not be code-based or are personal projects, we observed similar growth in many other online API package managers. For example, we observed an increase in the number of open-source APIs shared in all of the following package managers (as of July 2018; we used the GitHub API and modulecounts.com to collect the statistics): 1) 11,782% for Javascript APIs in the npm package manager (from 5,646 in December 2011), 2) 334% for Java APIs in online Maven central (from 55,785 in March 2013), 3) 2,447% for C# APIs in the online NuGet repository (from 4,799 in February 2012), and 4) 1,477% for Python APIs in PyPI (from 9,362 in March 2010). Javascript, Java, C#, and Python are among the top five most popular programming languages in Stack Overflow. Developers can share their APIs in other package managers as well, such as Bower for Javascript, Rubygems.org for Ruby, etc.

With a myriad of APIs being available, developers now face a new challenge: how to choose the right API. For any given task, we now expect to see multiple competing APIs. For example, in the mapping community, developers can choose from multiple web APIs, such as Google Maps APIs, Bing Maps APIs, Apple Maps APIs, MapBox, OpenLayer, etc. The selection and adoption of an API depend on a number of factors [2], such as the availability of learning resources and the design and usability of the API [3], [4], [5]. Developers can learn APIs by using the API official documentation. However, the official documentation can often be incomplete, obsolete, and/or incorrect [6], [7]. In our previous study of more than 300 developers at IBM, we found that such problems in an API's official documentation can motivate a developer to select other competing APIs.
To overcome the challenge of selecting an API among available choices and properly learning it, many developers seek help and insights from other developers in online developer forums. Figure 1 presents the screenshot of seven Stack Overflow posts: four answers (1, 3-5) and three comments (2, 6, 7). The oldest post (at the top) is dated from November 06, 2009, while the most recent one (at the bottom) is from February 10, 2016. These posts express developers' opinions about two Java APIs (Jackson [8] and Gson [9]) offering JSON parsing features for Java. None of the posts contain any code snippets. The first answer (1), representing a positive opinion about the Gson API, motivates the developer 'binaryrespawn' to use it (2). In the next answer (3), the user 'StaxMan' compares Gson with Jackson, favoring Jackson for offering better support, and based on this feedback, 'mickthomson' (4) decides to use Jackson instead of Gson. Three out of the four answers (3-5) imply a positive sentiment towards Jackson but a negative one about Gson. Later, the developer 'Daniel Winterstein' develops a new version of Gson fixing existing issues, and shares his API (7). This example illustrates how developers share their experiences and insights, as well as how they influence and are influenced by other developers' opinions. A developer looking only for code examples for Gson would have missed the important insights about the API's limitations, which may have affected his development activities. Thus, opinions extracted from informal discussions can drive developers' decision making. Indeed, opinions are key determinants in many activities related to software development, such as developers' productivity analysis [10], determining developer burnout [11], improving software applications [12], and developing awareness tools for software development teams [13], [14], [15].

Research on APIs has produced important contributions, such as automatic usage inference mining [16], [17], automatic traceability recovery between API elements and learning resources [18], [19], [20], as well as recommendation systems to facilitate code reuse [21], [22] (see Section 2). However, to the best of our knowledge, there is no research that focuses on the analysis of developers' perception of API reviews and how such opinions affect their API-related decisions. As illustrated in Figure 1, while developer forums serve as communication channels for discussing the implementation of API features, they also enable the exchange of opinions or sentiments expressed on numerous APIs, their features and aspects. Given the presence of sentiments in the forum posts and opinions about APIs, such insights can be leveraged to develop techniques to automatically analyze API reviews in forum posts. Such insights can contribute not only to the development of an empirical body of knowledge on the topic, but also to the design of tools that analyze opinion-rich information. To fill this gap in the literature, we conducted two surveys involving a total of 178 software developers.
The goals of our study are to understand (1) how software developers seek and value opinions about APIs, and (2) what tools can better support their analysis and evaluation of the API reviews. The subjects are the surveys' participants and the objects are the API reviews that the developers encounter in their daily development activities from diverse resources. The context consists of the various development activities that can be influenced by the API reviews. Through an exploratory analysis of the surveys' responses, we answer the following research questions:

RQ1: How do developers seek and value opinions about APIs in developer forums? The developers reported that they seek opinions about APIs in forum posts to support diverse development needs, such as API selection, documentation, learning how to use an API, etc. The developers valued the API reviews. However, they were also cautious while making informed decisions based on those reviews due to a number of factors related to the quality of the provided opinions, such as the lack of insight into the prevalence of the issue reported in the opinion, the trustworthiness of the provided opinion (e.g., marketing initiatives vs subjective opinion), etc. The developers wished for the support of different mechanisms to aggregate opinions about APIs and to assess the quality of the provided opinions about APIs in the developer forums.

RQ2: What tool support is needed to help developers with analyzing and assessing API reviews in developer forums? The developers consider that automated tools of diverse nature can be developed and consulted to properly analyze API reviews in forum posts, e.g., visualizations of the aggregated sentiment about APIs to determine their popularity, and analyses of the various aspects (e.g., performance) of APIs and how other developers rate those aspects based on their usage of the APIs. The developers mentioned that the huge volume of available information and opinions about APIs in the forum posts can hinder their desire to get quick, digestible, and actionable insights about APIs. The developers also mentioned that, in the absence of any automated summarization technique to help them, they leverage different features in forum posts to get summarized viewpoints of the APIs, e.g., skimming through highly ranked posts, or using tags to find similar APIs. Developers also envision diverse potential summarization approaches to address such needs, e.g., a dedicated portal showing aggregated opinions about APIs.

In Figure 2, we show the three major research phases we undertook during and after conducting the two surveys.

Phase 1. Design, Conduct, and Report Surveys. We conducted two surveys. We analyze the survey responses using both statistical and qualitative analyses. This paper primarily focuses on the design, analysis, and reporting of the two surveys.

Phase 2. Identify Actionable Insights for Tool Design. We identify requirements from the survey results to develop techniques and tools that assist developers in their exploration of opinions about APIs from developer forums. In Section 8, we discuss the actionable findings from the survey results that could be used for future tool designs.

Phase 3. Develop and Evaluate Techniques and Tools. We develop techniques based on the findings from Phase 2. We incorporate the techniques in our prototype tool, called Opiner. Opiner is a search engine [23].
Using Opiner, developers can search for an API by name and explore the opinions and usage scenarios related to APIs. The opinions and usage scenarios are automatically mined from Stack Overflow. In Section 8, we briefly describe Opiner. In this paper, we make the following main contributions:
1) Surveys. The design of two surveys and the collected data involving the responses of 178 software engineers.
2) Analysis. A detailed analysis of the survey responses that provides insights into: (1) how developers seek and analyze opinion-rich API information, and (2) needs for automated tool support to make informed, proactive, and efficient decisions based on the analysis of API reviews.

In Table 1, we compare the major findings of this paper against the state-of-the-art research on APIs:
- How developers learn to select and use APIs. Our finding: developers seek opinions about APIs to support diverse development needs (e.g., API selection) and to compensate for shortcomings in API documentation. Relation to the state of the art: Robillard and DeLine [7] identified, in a series of surveys and interviews, that the most severe obstacles developers faced while learning new APIs were related to the official documentation of the APIs; our study confirms the findings of [7] that API official documentation can be incomplete, and we additionally find that developers leverage API reviews in forums to compensate for those shortcomings. Carroll et al. [24] designed the 'minimal manual' to support task-based documentation; while [24] created the 'minimal manual' manually, our results show that we can leverage developer forums to develop a 'minimal manual' by combining code examples and API reviews, because developers consider both as a form of documentation.
- Sentiment analysis of software artifacts. Our finding: developers use the positive and negative opinions of other developers as an indicator of the quality of the discussed API and code examples.
- Analysis of APIs in developer forums. Our finding: developers prefer to learn about different API aspects from the opinions of other developers in forum posts using automated analysis (e.g., find all opinions discussing the performance of an API). Relation to the state of the art: Zhang and Hou [34] identified problematic API features in Stack Overflow posts by detecting sentences with negative sentiments; Treude and Robillard [35] mined important insights about an API type from the textual contents of Stack Overflow. Unlike [34], our study shows that both positive and negative opinions about API features need to be identified, and the insights gained for each API type by [35] can be enhanced by also including the diverse opinions about API aspects.
- Summarization of software artifacts. Our finding: developers mostly rely on search engines to explore opinions about APIs; they are frequently overwhelmed by the huge volume of opinions about APIs in forums and wished for an automatic summarizer mining those opinions; besides an opinion summarizer, developers also asked for tool support leveraging opinions about APIs, such as an API comparator, trend analyzer, and API opinion miner. Relation to the state of the art: several tools have been developed to harness knowledge about APIs from developer forums, such as automatically generating comments to explain a code example [48] and recommending experts to answer a question in Stack Overflow [49]; unlike [48], [49], we offer insights on the usage of opinions in tool development to support development tasks, and our findings offer possible extensions to existing research, e.g., including reactions towards a code example of [48] to show its quality.

In Section 2, we discuss the related work in detail.

RELATED WORK

As noted in Section 1, the findings of this paper motivated us to pursue a research journey that contributed to the development of our proof-of-concept tool, Opiner (see Section 8). Specifically, our subsequent research projects focused on the design, development, and evaluation of techniques and tools to mine and summarize opinions and usage scenarios about APIs from developer forums. The research journey is captured in the following manuscripts:
1) In [50], we present a benchmark dataset of 4,522 sentences from Stack Overflow, each labelled with API aspects, such as performance, usability, etc. The catalog of API aspects is derived from the two surveys of this paper (Q11 and Q15 in the final survey and Q15 from the pilot survey). We leverage the benchmark dataset to develop supervised machine learning classifiers to automatically detect API aspects discussed in opinionated sentences. We then present a suite of algorithms to automatically mine opinions about APIs from Stack Overflow. We report the evaluation of each technique.
2) In [51], we present two algorithms to summarize opinions about APIs from Stack Overflow. The design and development of the algorithms were motivated by the findings from this paper, e.g., that developers prefer to seek opinions about API aspects, such as performance. We compare the algorithms against six off-the-shelf summarization algorithms.
3) In [23], we present the Opiner architecture that supports the mining and summarization of opinions from Stack Overflow.
4) In [52], we present a framework to automatically mine usage scenarios about APIs from Stack Overflow. We present an empirical study to investigate the value of the framework.
5) In [53], we present four algorithms to automatically summarize usage scenarios about APIs and their evaluation using four user studies.

As noted above, the findings from this paper have formed the cornerstone of the development and evaluation of the techniques and tools presented in the above papers. Other related work can be categorized into four areas: (1) studies conducted to understand how developers learn to select and use APIs, (2) analysis of APIs in developer forums, (3) sentiment analysis in software engineering, and (4) summarization of software artifacts. We discuss the related work below.

How developers learn to select and use APIs

While our surveys focused on the role of opinions in supporting development tasks, previous studies mainly used interviews or empirical studies to understand the role of API official documentation in supporting development tasks [6], [7], [38], [54], [55], [56], [57].
Developers in our survey seek opinions about APIs to support diverse development needs (e.g., API selection) as well as to compensate for shortcomings in API documentation, e.g., when the documentation is incomplete or ambiguous. The problems in API official documentation have previously been reported in multiple studies, such as [6], [7], [24], [58]. Robillard and DeLine [7] conducted a survey and a series of qualitative interviews of software developers at Microsoft to understand how developers learn APIs. The study identified that the most severe obstacles developers faced while learning new APIs were related to the official documentation of the APIs. With API documentation, the developers cited the lack of code examples and the absence of task-oriented descriptions of API usage as major blockers to using APIs. The benefit of task-based documentation over traditional hierarchy-based documentation (e.g., Javadoc) was previously also reported by Carroll et al. [24], who observed developers while using traditional documentation (e.g., Javadoc, a manual) to learn an API. They found that the learning of the developers was often interrupted by the self-initiated problem-solving tasks that they undertook during their navigation of the documentation. During such unsupervised exploration, they observed that developers ignored groups and entire sections of a documentation that they deemed not necessary for their development task at hand. Unsurprisingly, such unsupervised exploration often led to mistakes. They conjectured that traditional API documentation is not designed to support such an active way of learning. To support developers' learning of APIs from documentation, they designed a new type of API documentation, called the minimal manual, that is task-oriented and that helps users resolve errors [24], [59], [60], [61]. In a subsequent study of 43 participants, Shull et al. [58] also confirmed the effectiveness of example-based documentation over hierarchy-based documentation.

The advent of cloud-based software development has popularized the adoption of Web APIs, such as the Google Maps APIs, API mashups in the ProgrammableWeb, and so on. Tian et al. [62] conducted an exploratory study of the features offered by the Web APIs in the ProgrammableWeb portal and of the types of content supported in the documentation of those APIs. They observed that such Web APIs support diverse development scenarios, e.g., text mining, business analysis, etc. They found that the documentation of the APIs offers insights into different knowledge types, e.g., the underlying intent of the API features, step-by-step guides, etc. Sohan et al. [54] observed that REST API client developers face problems while using an API without usage examples, such as correct data types, formats, required HTTP headers, etc. Intuitively, the developer forums can supply missing code examples for an API, because developers in our surveys mentioned that they rely on the API usage discussions in developer forums when the API official documentation is lacking. The generation of such task-based documentation can be challenging. However, the developers in our survey reported that they consider the combination of code examples and the reactions towards those examples about an API in the forum as a form of API documentation. Intuitively, the Q&A format of online developer forums (e.g., Stack Overflow) follows a task-based documentation format: the task is described in the question, and the solution, with code examples and opinions, in the answer posts. Leveraging usage scenarios about APIs posted in the developer forums can be necessary when the API official documentation does not include those scenarios, and such documentation can often be outdated [63]. Indeed, documentation that does not meet the expectations of its readers can lead to frustration and a major loss of time, or even to an API being abandoned [7].
To address the shortcomings in API official documentation, research efforts have focused on linking API types in formal documentation (e.g., Javadoc) to code examples in forum posts where the types are discussed [64], presenting interesting textual contents from Stack Overflow about an API type in the formal documentation [65], etc. However, our study in this paper shows that the plethora of usage discussions available for an API in the forum posts can make it challenging for developers to get quick and actionable insights into how an API can be used for a given development task. One possible way to assist developers is to generate on-demand developer documentation from forum posts [66]. However, to be able to do that, we first need to understand what specific problems persist in API official documentation and should be addressed through such documentation efforts. If we know the common documentation problems, we can then prioritize those problems and investigate techniques to leverage API usage scenarios posted in developer forums to address them. In a recent study [6], we conducted surveys of more than 300 developers at IBM to understand the problems developers face while using API official documentation. We observed 10 common documentation problems, such as incompleteness, incorrectness, ambiguity, etc. Therefore, we can leverage the findings from our study to design API documentation resources that address the problems commonly observed in API documentation.

Sentiment Analysis of Software Artifacts

Developers in our surveys reported that they use the positive and negative opinions towards an API as an indicator of the quality of the features offered by the API. While our findings shed light on developers' perceptions of API quality by leveraging opinions, recent research on the sentiment analysis of software artifacts has focused on mining sentiments and emotions from software repositories. Ortu et al. [10] observed a weak correlation between the politeness of developers in comments and the time to fix an issue in Jira, i.e., bullies are not more productive than others in a software development team. Mäntylä et al. [11] correlated VAD (Valence, Arousal, Dominance) scores [67] in Jira issues with the loss of productivity and burnout in software engineering teams. They found that increases in an issue's priority correlate with increases in Arousal. Pletea et al. [68] found that security-related discussions in GitHub contain more negative comments. Guzman et al. [69] found that GitHub projects written in Java have more negative comments, as do comments posted on Mondays, while developers in distributed teams are more positive. Guzman and Bruegge [13] summarized emotions expressed across the collaboration artifacts of a software team (bug reports, etc.) using LDA [70] and sentiment analysis. The team leads found the summaries to be useful, but less informative. We observed in our surveys that while developers analyze opinions to learn about APIs, they also find it challenging to assess the quality (e.g., trustworthiness) of the provided opinions, because such opinions are scattered across multiple unrelated forum posts.
Related research has focused on assessing post quality using post attributes (e.g., post score) and their roles in the Q&A process [25], [26], [27], [28], [29], [30], on analyzing developer profiles (e.g., personality traits of the most and least reputed users) [31], [32], and on determining the influence of badges in Stack Overflow [33]. In contrast to the above work, our findings motivate the need to develop opinion quality assessment models, tools, and techniques for forum posts.

Analysis of APIs in Developer Forums

Developers in our surveys prefer to seek opinions about different API aspects and wish to leverage automated tools to support such analysis (e.g., get all the opinions discussing the performance of an API). A closely related work is the identification of problematic API features in Stack Overflow by Zhang and Hou [34], who considered sentences with negative sentiment as indicators of problematic API features. In a parallel study, Wang and Godfrey [38] hypothesized that the more discussion an API class generates in the forum posts, the more likely it is to be problematic to use. By applying topic modeling on the posts, they observed several recurrent themes of API usage obstacles, such as learning the interactions among components. Unlike both [34] and [38], our findings motivate the need to study both positive and negative opinions about APIs to obtain finer-grained insights about API aspects. Intuitively, such fine-grained insights can also be useful to compare competing APIs for a given task. A large volume of API research has been devoted to the automatic mining of insights about APIs from Stack Overflow. In contrast to our study, those studies are mainly empirical (i.e., automatic mining techniques) and they do not consider opinions about APIs. For example, Treude and Robillard [35] developed machine learning tools to detect insightful sentences about API types in Stack Overflow. Parnin et al. [71] investigated API classes discussed in Stack Overflow using heuristics based on exact matching of class names with words in posts (title, body, snippets, etc.). Using a similar approach, Kavaler et al. [30] analyzed the relationship between API usage and the related Stack Overflow discussions. Both studies found a positive relationship between API class usage and the volume of Stack Overflow discussions. More recent research [72], [73] investigated the relationship between API changes and developer discussions. Treude et al. [57] categorized questions and posts from Stack Overflow to understand the types of questions asked in Stack Overflow. They observed that developer forums are most effective for code reviews and conceptual questions. These findings corroborate the reports of our study participants, who mentioned that they seek expertise in API usage in forum posts. One potential technique for text-based summarization is topic modeling, which can be applied to summarize opinions (e.g., in other domains [47]). Previously, topic modeling has been used to study dominant discussion topics in developer forums [36], [37]. Our survey findings motivate a finer-grained analysis using topic modeling, such as the analysis of only opinions instead of all textual contents. Given that developers in our survey prefer to analyze opinions by API aspects (e.g., performance, usability), another potential summarization approach could be aspect-based summarization [47] of API opinions, as we developed in Opiner [51] based on the findings from this study.
RESEARCH CONTEXT

We further motivate the need for a better understanding of the impact of opinions about APIs on developers, first by taking cues from other domains (Section 3.1) and then by demonstrating the prevalence of opinions about APIs in Stack Overflow through a small study in Section 3.2. We discuss the rationale behind each research question, along with its sub-questions, in Sections 3.3 and 3.4. We then discuss the motivation behind the two surveys we conducted to answer the research questions (Section 3.5).

Opinion Analysis in Other Domains

Our research on API reviews was motivated by similar research in other domains. Automated sentiment analysis and opinion mining about entities (e.g., cars, camera products, hotels, restaurants) has been a challenging but practical research area due to its benefits for consumers (e.g., guiding them in choosing a hotel or selecting a camera product). In Figure 3, we show the screenshots of three separate initiatives on the automatic collection and aggregation of reviews about camera products. The first two tools were developed in academia and the third was developed as part of Microsoft's Bing Product Search. The first two screenshots show two preliminary outlines of such a tool presented by Liu et al. [77]. The third shows a similar tool in Microsoft Bing Product Search. In all three tools, positive and negative opinions about a camera product are collected and aggregated under a number of aspects (e.g., picture quality, battery life, etc.). When no predefined aspect is found, the opinions are categorized under a generic aspect 'GENERAL' (the aspect is named 'feature' in the second tool). The opinions about the camera can be collected from diverse sources, e.g., online product reviews, sites selling the product, etc. For example, Google collects reviews about hotels and restaurants through a separate gadget in its online search engine. (Figure 3. Screenshots of opinion summarization engines for camera reviews. The different aspects, e.g., picture quality, are used to present summarized viewpoints about the camera; the third screenshot shows an incarnation of the camera product reviews in the now defunct Bing Product Search. The screenshots are taken from [47].)

API Reviews in Developer Forums

Similar to the camera reviews in Figure 3, API reviews can be found in forum posts. As we demonstrated in Figure 1, opinions about APIs can be prevalent in developer forums. In fact, we observed that more than 66% of posts tagged 'Java' and 'JSON' in the Stack Overflow data contain at least one positive or negative sentiment. In Table 2 we show descriptive statistics of the dataset. There were 22,733 posts from 3,048 threads with scores greater than zero. We did not consider any post with a negative score, because such posts are considered not helpful by the developers in Stack Overflow. The last column, 'Users', shows the total number of distinct users who posted at least one answer/comment/question in those threads. To identify the uniqueness of a user, we used the user id as found in the Stack Overflow database. On average, around four users participated in one thread; more than one user participated in 2,940 threads (96.4%), and a maximum of 56 distinct users participated in one thread [79]. From this corpus, we identified the Java APIs that were mentioned in the posts. To identify the Java APIs, we used our API database, which consists of the official Java APIs and the Java APIs listed in the two software portals Ohloh [80] and Maven central [81]. We crawled the Javadocs of five official Java APIs (SE 6-8, and EE 6, 7) and collected information about 875 packages and 15,663 types. We consider an official Java package as an API, in the absence of any guidelines suggesting otherwise.
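The filtering described above (positively scored posts under the 'Java' and 'JSON' tags, distinct users per thread) is straightforward to reproduce on a Stack Overflow data dump. Below is a minimal sketch assuming a hypothetical CSV export with columns ThreadId, Score, Tags, OwnerUserId, and Body; the column names and the tiny sentiment lexicon are illustrative stand-ins, not the paper's actual implementation.

```python
import pandas as pd

# Hypothetical CSV export of Stack Overflow posts; column names are assumptions.
posts = pd.read_csv("so_posts.csv")  # columns: ThreadId, Score, Tags, OwnerUserId, Body

# Keep positively scored posts whose threads carry both the 'java' and 'json' tags.
tagged = posts[posts["Tags"].str.contains("java", case=False, na=False)
               & posts["Tags"].str.contains("json", case=False, na=False)]
kept = tagged[tagged["Score"] > 0]

# Descriptive statistics in the spirit of Table 2: threads and distinct users per thread.
users_per_thread = kept.groupby("ThreadId")["OwnerUserId"].nunique()
print("posts:", len(kept), "threads:", users_per_thread.size)
print("avg distinct users per thread:", round(users_per_thread.mean(), 1))

# Toy lexicon check: does a post contain at least one sentiment word?
positive, negative = {"great", "fast", "love"}, {"slow", "buggy", "hate"}

def has_sentiment(body) -> bool:
    words = set(str(body).lower().split())
    return bool(words & positive) or bool(words & negative)

share = kept["Body"].map(has_sentiment).mean()
print(f"posts with at least one sentiment word: {share:.1%}")
```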
In total, our API database contains 62,444 distinct Java APIs. All of the APIs (11,576) hosted in Maven central are for Java. From Ohloh, we only included the Java APIs (50,863) out of the total crawled (712,663); we considered a project in Ohloh to be a Java API if its main programming language was Java. We collected the opinionated sentences about APIs using a technique we developed in [50]. The technique works as follows: 1) load and preprocess Stack Overflow posts; 2) detect opinionated sentences using a rule-based algorithm, an adaptation for software engineering of the Domain Sentiment Orientation (DSO) algorithm [82] (a similar adaptation was previously reported by Blair-Goldensohn et al. [83] for Google local product reviews), which computes a sentiment score for each sentence by detecting positive and negative sentiment words in the sentence; 3) detect API names in the forum texts and hyperlinks based on a set of heuristics (e.g., exact and fuzzy matching); and 4) associate APIs to opinionated sentences using heuristics, e.g., the proximity between API names and opinionated sentences. In Table 3, we present summary statistics of the opinionated sentences detected in the dataset. Overall, 415 distinct APIs were found. While the average number of opinionated sentences per API was 37.66, it was 2,066 for the top five most reviewed APIs. In fact, the top five APIs contained 66.1% of all the opinionated sentences in the posts. These APIs are Jackson, Google Gson, the Spring framework, Jersey, and org.json.

Reasons for Seeking Opinions About APIs (RQ1)

We aim to understand how and why developers seek and analyze such opinions about APIs and how such information can shape their perception and usage of APIs. The first goal of our study is to learn how developers seek and value the opinions of other developers.

Motivation. By analyzing how and why developers seek opinions about APIs, we can gain insights into the role of API reviews in the daily development activities of software developers. The first step towards understanding the developers' needs is to learn about the resources they currently leverage to seek information and opinions about APIs.
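As a concrete illustration of the four-step mining pipeline described in Section 3.2 above, here is a minimal sketch with a toy sentiment lexicon, a naive fuzzy matcher, and a last-mentioned-API proximity rule; the actual DSO adaptation and heuristics in [50] are substantially more involved.

```python
import re
from difflib import SequenceMatcher

POS = {"great", "faster", "love", "simple"}
NEG = {"slow", "broken", "hate", "verbose"}
API_DB = ["jackson", "gson", "org.json"]  # stand-in for the 62,444-entry API database

def split_sentences(post: str):
    return [s.strip() for s in re.split(r"[.!?]", post) if s.strip()]

def sentiment_score(sentence: str) -> int:
    words = set(sentence.lower().split())
    return len(words & POS) - len(words & NEG)

def mentioned_apis(sentence: str):
    lowered = sentence.lower()
    hits = []
    for api in API_DB:
        # Exact containment, or a fuzzy token match (the 0.8 threshold is a guess).
        if api in lowered or any(
                SequenceMatcher(None, api, tok).ratio() > 0.8
                for tok in lowered.split()):
            hits.append(api)
    return hits

def mine(post: str):
    last_seen = None  # proximity rule: an opinion refers to the most recently named API
    for sentence in split_sentences(post):
        apis = mentioned_apis(sentence)
        if apis:
            last_seen = apis[0]
        score = sentiment_score(sentence)
        if score != 0 and last_seen:
            yield last_seen, "positive" if score > 0 else "negative", sentence

post = "Jackson is great and faster than Gson. The Gson parser felt slow on big files."
for api, polarity, sentence in mine(post):
    print(api, polarity, ":", sentence)
```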
An understanding of how the different development activities can be influenced and supported through the API reviews can then provide us with insights into the factors that motivate developers to seek opinions, as well as the challenges that they may face during this process. By learning about these challenges while seeking reviews about APIs, we can gain insights into the complexity of the problem that needs to be addressed to assist developers in their exploration of API reviews. Finally, opinions are by themselves subjective, i.e., opinions about APIs stem from the personal beliefs or experiences of the developers who use the APIs. Therefore, developers may face challenges while assessing the validity of claims made by other developers. By analyzing what factors hinder and support the developers' assessment of the quality of the provided API reviews, we can gain insights into the challenges developers face while leveraging the API reviews.

Approach. We examine developer needs for API reviews via the sub-questions of RQ1 (seeking API reviews, and assessing their quality).

Tool Support to Analyze API Reviews (RQ2)

The second goal of our study is to understand the needs for tool support to facilitate automated analysis of API reviews, i.e., how the support for automated processing of opinions can assist developers in analyzing API reviews.

Motivation. To understand whether developers need any tools to analyze API reviews, we first need to understand what tools developers may currently be using to analyze the reviews and what problems they may be facing. Such analysis can offer insights into how research in this direction can benefit developers through future prototypes and tool support. A predominant direction in the automated processing of reviews in other domains (e.g., cars, cameras, products) is to summarize the reviews. For the domain of API reviews, it can also help to know how developers determine the needs for opinion summarization about APIs. The first step is to determine the applicability of existing cross-domain opinion summarization techniques to the domain of API reviews. Such analysis can provide insights into how the summarization approaches adopted in other domains can be applied to API reviews. By learning about the specific development needs that can be better supported through the summarization of API reviews, we can gain insights into the potential use cases API review summaries can support. By understanding how developers expect to see summaries of API reviews, we can gain insights into whether and how different summarization techniques can be designed and developed for the domain of API reviews. It is thus necessary to know what specific problems in the API reviews should be summarized, and whether priority should be given to one API aspect over another.

Approach. We pose two research questions to understand the needs for tool support to analyze API reviews:
• RQ2.1: What tool support is needed to help developers with analyzing and assessing API-related opinions?
• RQ2.2: What are the developers' needs for summarization of opinion-rich information?
To answer these two questions exhaustively, we further divide RQ2.2 into three sub-questions:
• RQ2.2.a: What problems in API reviews motivate the needs for summarization?
• RQ2.2.b: How can summarization of API reviews support the developer's decision making processes?
• RQ2.2.c: How do developers expect API reviews to be summarized?

The Surveys

We learn about the developer needs for API reviews, and the tool support for analyzing the reviews, through two surveys. We conducted the first survey as a pilot survey and the second as the primary one.
The purpose of the pilot survey was to identify and correct potential ambiguities in the design of the primary survey. Both the pilot and the primary surveys share the same goals. However, the questions of the primary survey were refined and made more focused based on the findings of the pilot survey. For example, in the pilot survey we mainly focused on GitHub developers. We picked GitHub for our pilot survey because previous research shows that GitHub developers use third-party APIs (e.g., open source APIs) and would like to stay aware of changes in APIs in their development tasks to remain productive [84]. We focused on the open source community because open source development is embraced by individual developers as well as big and small companies (both as contributors to API development and as API users). In addition, due to its openness, the community may also be more reliant on developer forums to inform choices. Such interactions can be visible to any other developers for further analysis (e.g., during their decision making about an API). This offers us access to diverse viewpoints (i.e., opinions) of developers on APIs that may be used in diverse development needs and contexts. We found that most of the respondents in our pilot survey considered the developer forums (e.g., Stack Overflow) as the primary source of opinions about APIs. Therefore, in the primary survey, our focus was to understand how developers seek and analyze opinions in developer forums. Stack Overflow is arguably the most popular online forum for sharing and discussing code and opinions about open-source APIs. Therefore, in our primary survey, we picked developers who are actively involved in the discussions of Stack Overflow posts. In Section 4, we discuss the design and a summary of the results of the pilot survey. In Sections 5 and 6, we discuss the design and detailed results of the primary survey.

THE PILOT SURVEY

The pilot survey consisted of 24 questions: three demographic questions, eight multiple-choice questions, five Likert-scale questions, and eight open-ended questions. In Table 5, we show all the questions of the pilot survey (except the demographic questions) in the order they appeared in the survey questionnaire. The demographic questions concern the participants' role (e.g., software developer or engineer, project manager or lead, QA or testing engineer, other), whether they are actively involved in software development or not, and their experience in software development (e.g., less than 1 year, more than 10 years, etc.). The survey was hosted in Google forms and can be viewed at https://goo.gl/forms/8X5jKDKilkfWZT372.

Pilot Survey Participants

We sent the pilot survey invitations to 2,500 GitHub users, randomly sampled from 4,500 users collected using the GitHub API. The GitHub API returns GitHub users starting with an ID of 1; we stopped calling the API after it returned the first 4,500 users. Of the emails sent to GitHub users, 70 bounced for various reasons, e.g., invalid (domain expired) or non-existent email addresses, leaving 2,430 emails actually delivered. A few users emailed us saying that they were not interested in participating due to the lack of any incentives. Finally, a total of 55 developers responded. In addition, we sent the invitation to 11 developers in a software development company in Ottawa, Canada. The company was selected based on a personal contact we had within the company.
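The user collection step described above maps directly onto GitHub's REST endpoint GET /users, which lists users in ascending ID order and paginates with a since cursor. A minimal sketch (unauthenticated; a real run would send an access token and respect rate limits):

```python
import requests

def first_n_github_users(n: int = 4500):
    """Collect the first n GitHub users in ascending ID order via GET /users."""
    users, since = [], 0
    while len(users) < n:
        resp = requests.get(
            "https://api.github.com/users",
            params={"since": since, "per_page": 100},
            headers={"Accept": "application/vnd.github+json"},
            timeout=30,
        )
        resp.raise_for_status()
        page = resp.json()
        if not page:  # no more users
            break
        users.extend(page)
        since = page[-1]["id"]  # cursor: the last user ID seen on this page
    return users[:n]

# As in the pilot survey: sample 2,500 of the first 4,500 users.
# import random; invited = random.sample(first_n_github_users(4500), 2500)
```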
The company was involved in multiple software projects using open-source APIs. Out of the 11, nine responded. Among the GitHub participants: 1) 78% of respondents said that they are software developers (11% are project managers and 11% belong to the 'other' category), 2) 92% are actively involved in software development, and 3) 64% have more than 10 years of software development experience, 13% have between 7 and 10 years, 9% between 3 and 6 years, 8% between 1 and 2 years, and around 6% less than 1 year of experience. Among the nine industrial participants, three were team leads and six were professional developers. All nine participants were actively involved in software development, with professional development experience ranging from five to more than 10 years.

Pilot Survey Data Analysis

We analyzed the survey data using statistical and qualitative approaches. For the open-ended questions, we applied an open coding approach [85]. Open coding includes labelling concepts/categories in textual contents based on the properties and dimensions of the entities (e.g., an API) about which the contents are provided. In our open coding, we followed the card sorting approach [86]. In card sorting, the textual contents are divided into cards, where each card denotes a conceptually coherent quote. For example, consider the following sentence in Figure 1 (from the answer circled as 5): 'Note that Jackson fixes these issues, and is faster than GSON.' The sentence has two conceptually coherent quotes: 'Note that Jackson fixes these issues', and 'and is faster than GSON'. The first quote refers to fixes of issues (i.e., bugs) by the API. The second quote refers to the 'performance' aspect of the API. In our analysis, themes and categories emerged and evolved during the open coding process as we analyzed the quotes. We created all of the 'cards' by splitting the responses to the eight open-ended questions. This resulted in 173 individual quotes, each generally corresponding to an individual cohesive statement. In further analysis, the first two authors acted as coders to group cards into themes, merging those into categories. We analyzed the responses to each open-ended question in three steps: 1) The two coders independently performed card sorts on 20% of the cards extracted from the survey responses to identify initial card groups. The coders then met to compare and discuss their identified groups. 2) The two coders performed another independent round, sorting another 20% of the quotes into the groups that were agreed upon in the previous step. We then calculated and report the coder reliability to ensure the integrity of the card sort. We selected two popular reliability coefficients for nominal data: percent agreement and Cohen's Kappa [87]. Coder reliability is a measure of agreement among multiple coders on how they apply codes to text data. To calculate agreement, we counted the number of cards in each emerged group for both coders and used ReCal2 [88] for the calculations. The coders achieved an almost perfect degree of agreement; on average, the two coders agreed on the coding of the content 96% of the time (the average percent agreement varies across the questions within the range of 92-100%, while the average Cohen's Kappa score is 0.84, ranging between 0.63 and 1 across the questions).
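The two reliability coefficients reported above can be computed directly from the two coders' labels; the paper used ReCal2, but percent agreement and Cohen's Kappa are standard. A minimal sketch with made-up labels:

```python
from sklearn.metrics import cohen_kappa_score

# Made-up category labels assigned by the two coders to the same ten cards.
coder1 = ["selection", "usage", "usage", "trust", "selection",
          "usage", "trust", "selection", "usage", "trust"]
coder2 = ["selection", "usage", "trust", "trust", "selection",
          "usage", "trust", "usage", "usage", "trust"]

percent_agreement = sum(a == b for a, b in zip(coder1, coder2)) / len(coder1)
kappa = cohen_kappa_score(coder1, coder2)  # chance-corrected agreement
print(f"percent agreement: {percent_agreement:.0%}, Cohen's kappa: {kappa:.2f}")
```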
The following excerpt from Table 5 illustrates the question format and key findings:
- Q11 (59 responses): 'Opinions about APIs need to be summarized because?' (5-point Likert scale for each option). Key findings: too many posts with opinions 60%; an interesting opinion may be in another post 66.7%; contrastive viewpoints missed 55%; not enough time to look for all opinions 56.7%.
- Q12 (60 responses): 'Opinion summarization can improve the following decision making processes.'
- 'What other areas can be positively affected by having support for opinion summarization?' (text box). Key findings: API usage 21.4%; trustworthiness analysis 21.4%; documentation 9.5%; maturity analysis 14.3%; expertise 7.1%.

Summary of Results from the Pilot Survey

In this section, we briefly discuss the major findings from the pilot survey that influenced the design of the primary survey. A detailed report of the findings is in our online appendix [89]. Out of the 64 respondents who completed the demographic questions, at most 60 answered the other questions. In Table 5, the type of each question (e.g., open- or closed-ended) is indicated beside the question (in italics), and the key findings for each question are provided under the question. The major reasons to seek opinions were the learning of API usage and the analysis of the validity of a provided API usage (e.g., whether it actually works). The participants reported that making an informed decision amid too many opinions, and judging the trustworthiness of the provided opinions, are the major challenges they face while seeking and analyzing opinions.

Needs for API Review Quality Analysis (RQ1.2). Most of the participants (73.3%) considered an opinion accompanied by a code example to be of high quality (Q20). 70% of the participants considered the presence of links to supporting documents as a good indicator of the quality of the provided opinion, while 55% believed that the posting date is an important factor. Stack Overflow uses upvotes and downvotes of a post as a gauge of the public opinion on its content; 66.7% of developers took a higher vote count on a post as an indicator of higher quality of the corresponding opinion. The user profile, another Stack Overflow feature, was found useful by 45% of participants for evaluating the quality of the provided opinion. The length of the post where the opinion is provided was not considered a contributing factor in the assessment of opinion quality (agreed to by only 16.7% of the responders). One participant did not agree with any of the choices, while two participants mentioned two additional factors: (1) a real example, i.e., the provided code example should correspond to a real-world scenario; and (2) the reasoning behind the provided opinion.

Tools for API Review Analysis (RQ2.1). More than 83% of the respondents agreed that opinion summarization is much needed, for several reasons. Since opinions change over time or across different posts, developers wanted to be able to track changes in API opinions and to follow how APIs evolve. In our surveys, we sought to explain each option in the closed questions with a short description or examples. The description for each question can be found in the links to the surveys (see https://goo.gl/forms/8X5jKDKilkfWZT372 for the pilot survey). For example, in Q11 of the pilot survey (Table 5), the two options 'Opinions can change over time' and 'Opinions can evolve over time' are differentiated as follows: opinions about an API feature can change from bad to good, e.g., developers did not like it before, but like it now (i.e., we need a contrastive viewpoint).
In contrast, overall opinions about an API can evolve, e.g., becoming more positive due to an increase in adoption by users (e.g., the API becoming more usable), etc.

Tools for API Review Summarization (RQ2.2). The vast majority of responders also thought that an interesting opinion about an API might be expressed in a post that they have missed or have not looked at. The need for opinion summarization was also motivated by the myriad of opinions expressed via various developer forums. Developers believe that the lack of time to search for all possible opinions and the possibility of missing an important opinion are strong incentives for having opinions organized in a better way. While developers were interested in tool support for mining and summarizing opinions about APIs, they also wanted to link such opinions to other related dimensions, such as usage, maturity, documentation, and user expertise. 85% of the respondents believed that opinion summarization could help them select an API among multiple choices. More than 78% believed that opinion summaries can also help them find a replacement for an API. Developers agreed that summaries can also help them develop a new API to address needs that are currently not supported, improve a software feature, replace an API feature, validate the choice of an API, select the right version of an API, and fix a bug (48.3%, 48.3%, 46.7%, 53.3%, 45%, and 40%, respectively).

Needs for the Primary Survey

The responses to our pilot survey showed that opinions about APIs are important in diverse development needs. However, the survey had the following limitations:

Design. A number of similar questions were asked in pairs. For example, in Q4, we asked the developers about the reasons to seek opinions; the developers were given eight options to choose from. In Q5, an open-ended question, we asked the developers to write about the other reasons that could also motivate them to seek opinions. Both Q4 and Q5 explore similar themes (i.e., needs for opinion seeking). Therefore, the responses to Q5 could potentially be biased, as the respondents had already been presented with the eight possible options in Q4. A review of the manuscript based on the pilot survey (available in our online appendix [89]), both by colleagues and by the reviewers at Transactions on Software Engineering, pointed out that a better approach would have been to ask Q5 before Q4, or to ask only Q5. Moreover, a respondent should not be given the option to modify their response to an open-ended question if a similarly themed closed-ended question is asked afterwards.

Sampling. We sampled 2,500 GitHub developers out of the first 4,500 GitHub IDs as returned by the GitHub API. We did not investigate relevant information about the developers, e.g., are they still active in software development? Do they show expertise in a particular programming language? This lack of background information on the survey population also prevents us from drawing formal conclusions from the responses to the survey.

Response Rate. While previous studies involving GitHub developers also reported low response rates (e.g., 7.8% by Treude et al. [84]), the response rate in our pilot survey was still considerably lower (only 2.62%). There can be many reasons for such a low response rate. For example, unlike Treude et al. [84], we did not offer any award/incentives to participate in the survey.
However, our lack of knowledge of the GitHub survey population prevents us from making a definitive connection between the low response rate and the lack of incentives.

We designed the primary survey to address the above limitations. Specifically, we took the following steps to avoid the above problems: 1) We asked all the open-ended questions before the corresponding closed-ended questions. The respondents only saw the closed-ended questions after they completed their responses to all the open-ended questions, and were not allowed to change their response to any open-ended question once they were asked the closed-ended questions. 2) We conducted the primary survey with a different group of software developers, all collected from Stack Overflow. To pick the survey participants, we applied a systematic and exhaustive sampling process (discussed in the next section). 3) We achieved a much higher response rate (15.8%) in our primary survey. In the next section, we discuss the design of the primary survey in detail. In Section 7, we briefly compare the results of the pilot and primary surveys on the similarly themed question pairs.

PRIMARY SURVEY DESIGN

We conducted the primary survey with a different group of software developers. Besides the three demographic questions, the primary survey contained 24 questions. In Table 6, we show the questions in the order they were asked; the last column of Table 6 shows how the questions from the pilot survey were asked in the primary survey. The survey was conducted using Google Forms and is available for viewing at: https://goo.gl/forms/2nPVUgBoqCcAabwj1. As we noted in Section 3.5, in our primary survey we focused on understanding how and whether developers seek and analyze API reviews in developer forums. This decision was based on the observations from our pilot survey: the participants in our pilot survey reported the developer forums as their primary sources for seeking opinions about APIs (along with co-workers). One of our major goals for the surveys is to elicit requirements for tool designs to facilitate API analysis using API reviews. Developer forums, such as Stack Overflow, can be a sharing place for co-workers as well. Moreover, the design and deployment of such tools can be better facilitated if the data is already available and shared in the forum posts.

In Table 6, the horizontal lines between questions denote sections. For example, there is only one question in the first section (Q1). Once a developer responds to all the questions in a section, they are navigated to the next section. Depending on the answer to a question, the next section is determined. For example, if the answer to the first question ("Do you visit developer forums to seek info about APIs?") is a 'No', we did not ask any further questions. If the answer is a 'Yes', the respondent is navigated to the second question. The navigation between the sections was designed to ensure two things: 1) that we do not ask a respondent irrelevant questions (for example, if a developer does not value the opinions of other developers, it is probably of no use asking them about their motivation for seeking opinions about APIs), and 2) that the response to a question is not biased by another related question. For example, the first question in the third section of Table 6 is Q4 ("What are your reasons for referring to opinions of other developers about APIs in online developer forums?").
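This answer-dependent routing reduces to a small lookup from (question, answer) pairs to survey sections. The following is a minimal sketch in Python; the section labels are illustrative assumptions, not the survey's actual internal names, and the Q3 branch anticipates the gating question introduced below:

    # Minimal sketch of the answer-dependent section routing described above
    # (see also Figure 4). Section labels are illustrative assumptions.
    def next_section(question: str, answer: str) -> str:
        routes = {
            # Q1: "Do you visit developer forums to seek info about APIs?"
            ("Q1", "No"): "end_of_survey",         # no further questions
            ("Q1", "Yes"): "section_2",            # continue to the next section
            # Q3: "Do you value the opinion of other developers ...?"
            ("Q3", "No"): "section_q24_only",      # only ask why (Q24)
            ("Q3", "Yes"): "open_ended_sections",  # open-ended questions first
        }
        # Any answer that does not trigger a branch proceeds linearly.
        return routes.get((question, answer), "next_linear_section")

    assert next_section("Q1", "No") == "end_of_survey"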
Q4 was an open-ended question, where the developers were asked to write their responses in a text box. A related question was Q18 in the sixth section ("When do you seek opinions about APIs?"), where the developers were given eight options on a Likert scale (e.g., selection of an API among choices). The developers were able to answer Q18 only after they answered Q4, and were not allowed to return to Q4 from Q18. We adopted a similar strategy for all such question pairs in the primary survey. In this way, we avoided the problem of potential bias in the developers' responses in our primary survey. The second-to-last column in Table 6 shows how the questions map to the two research questions (and the sub-questions) that we intend to answer.

The pilot and the primary surveys contained similar questions. The last column of Table 6 shows how 18 of the 24 questions in the primary survey were similar to 18 questions in the pilot survey. While the two sets of questions are similar, in the primary survey the questions focused specifically on developer forums. For example, Q4 in the primary survey (Table 6) was "What are your reasons for referring to opinions of other developers about APIs in online developer forums?" The similar question in the pilot survey was Q5 (Table 5): "What are other reasons for you to refer to the opinions about APIs from other developers?"

To ensure that we captured developers' experience with API reviews properly in the primary survey, we also asked six questions that were not part of the pilot survey. The first two questions (discussed below) were asked to ensure that we get responses from developers who indeed seek and value opinions about APIs. The first question was "Do you visit developer forums to seek info about APIs?". If a developer responded with a 'No' to this question, we did not ask any further questions. We did this because 1) developer forums were identified as the primary resource in our pilot survey, and 2) automated review analysis techniques can be developed to leverage the developer forums. The second new question asked the participants about the top developer forums they had recently visited. Such information can be useful to know which developer forums can be leveraged for such analysis. The third new question was "Do you value the opinion of other developers in the forum posts while deciding on what API to use?". If the response was a 'No', we asked the participant only one question (Q24): "Please explain why you don't value the opinion of developers in the forum posts". We asked this question to understand the potential problems in the opinions that may be preventing them from leveraging those opinions. In Figure 4, we show how the above two questions are used to either navigate into the rest of the survey questions or to complete the survey without asking the respondents further questions.

Participants

We targeted developers that participated in Stack Overflow forum posts (e.g., asked a question or provided an answer to a question in Stack Overflow). Out of a total of 720 invitations sent, … 4) Student 3.6%, and 5) Other 1.2%. The distribution of experience of the participants is: 1) 10+ years 56.6%, 2) seven to 10 years 28.9%, and 3) three to six years 14.5%. To report their experience, the developers were given five options to choose from (following Treude et al. [84]): 1) less than 1 year, 2) 1 to 2 years, 3) 3 to 6 years, 4) 7 to 10 years, and 5) 10+ years.
None of the respondents reported software development experience of less than three years. Therefore, we received responses from experienced developers in our primary survey. 97.6% of them were actively involved in software development.

Sampling Strategy

To recruit the participants for an empirical study in software engineering, Kitchenham et al. [90] offered two recommendations: Population ("Identify the population from which the subjects and objects are drawn") and Process ("Define the process by which the subjects and objects were selected"). We followed both suggestions to recruit the participants for our primary survey (discussed below).

The contact information of users in Stack Overflow is kept hidden from the public to ensure that the users are not spammed. Vasilescu et al. [91] correlated the user email hashes from Stack Overflow with those in GitHub. To ensure that the mining of such personal information would not be criticized by the Stack Overflow community in general, Bogdan Vasilescu started a question in Stack Overflow on this theme, which attracted a considerable number of developers from the Stack Overflow community in 2012 [92]. The purpose of Vasilescu et al. [91] was to see how many Stack Overflow users are also active in GitHub. In August of 2012, they found 1,295,623 Stack Overflow users, of whom they cross-matched 93,772 in GitHub. For each of those matched users, they confirmed the email addresses by mapping the email hash in Stack Overflow to the email address shared in GitHub. Each user record in their dataset contains three fields: (1) UnifiedId: a unique identifier for each record (an auto-incremented integer), (2) GitHubEmail: the email address of the user as posted in GitHub, and (3) SOUserId: the ID of the user in Stack Overflow. We used this list of 93,772 users as the potential population source of the primary survey.

In our survey invitations, we were careful not to spam the developers. For example, we only sent emails to them twice (the second time as a reminder). In addition, we followed the Canadian Anti-Spam rules [93] while sending the emails, by offering each email recipient the option to opt out from our invitation. We did not send the second (i.e., reminder) email to the users who decided to opt out.

We sampled 900 users from the 93,772 users as follows (Figure 5): … b) For each thread, we collected the list of tags assigned to it, and put all those tags in the tag frequency table of the user. c) We computed the occurrence of each tag in the tag frequency table of the user. d) We ranked the tags based on frequency, i.e., the tag with the highest occurrence in the table was put at the top. 4) Programming Languages: We assigned each user to one of the following nine programming languages: (1) Javascript, (2) Java, (3) C#, (4) Python, (5) C++, (6) Ruby, (7) Objective-C, (8) C, and (9) R. The nine programming languages are among the top 10 programming languages in Stack Overflow, based on the number of questions tagged with each language. In our survey population, we observed that more than 95% of the users tagged at least one of the languages in their Stack Overflow posts. We assigned a user to a language if the language had the highest occurrence among the nine languages in the tag frequency table of the user. If a user did not have any tag resembling any of the nine languages, we did not include them in the sample. 5) Ranking of Users (the reputation of a user in Stack Overflow is based on the votes from other developers):
For each language, we ranked the users by reputation and activity date, i.e., first by reputation, and when two users had the same reputation, we put the more recently active user at the top. 6) Create sample: For each language, we picked the top 100 users. For each user, we created a personalized email and sent them the survey invitation.

In Table 7, we show the summary statistics of the primary survey population and the sampled users. The 88,021 users contributed to more than 3.2M threads. As of September 2017, Stack Overflow hosts 14.6M threads and 7.8M users. Thus, the users in the population correspond to about 1.1% of all the users in Stack Overflow, but they contributed to 22.1% of all threads in Stack Overflow. All of the 900 sampled users were active in Stack Overflow as recently as 2017, even though most of them first created their account in Stack Overflow on or before 2010. Moreover, each of the sampled users was highly regarded in the Stack Overflow community, if we take their reputation in the forum as a metric for that: the minimum reputation was 5,471 and the maximum reputation was 627,850, with a median of 18,434. Therefore, we could expect that the answers from these users would provide informed insights about the needs for opinions in reviews about APIs posted in Stack Overflow.

Participation Engagement

While targeting the right population is paramount for a survey, convincing the population to respond to the survey is a non-trivial task. As Kitchenham et al. [90] and Smith et al. [94] noted, it is necessary to consider the contextual nature of the daily activities of software developers while inviting them to a survey. While sending the survey invitations, we followed the suggestions of Smith et al. [94], who reported their experience with three surveys conducted at Microsoft. Specifically, we followed five suggestions, two related to persuasion (liking; authority and credibility) and three related to social factors (social benefit, compensation value, and timing).

Liking. Nisbett and Wilson [95] explained the cognitive bias known as the halo effect: people are more likely to comply with a request from a person towards whom they feel positive affect [94]. The advice for leveraging this positive affect is to address the people in the study by their name. For our survey, we created a personalized email for each user, e.g., by addressing them by name. In Figure 6, we show a screenshot of an email sent to one of the users.

Authority and credibility. Smith et al. [94] pointed out that "compliance rates rise with the authority and credibility of the persuader". To ensure that the developers considered our survey authentic and credible, we highlighted the nature of the research in both the email and the survey front page. We also noted that the survey was governed by formal Ethics Approval from McGill University and that the survey reporting would be anonymized.

Social Benefit. Edwards found that participants are more likely to respond to survey requests from universities [96]. The likely reason is that participants are more willing to respond to a survey if they know that their responses will not be used for commercial benefit, but rather to benefit society (e.g., through research). We contacted the survey participants using the academic email addresses of all the authors of the paper. We also mentioned in the email that the survey was conducted as part of the PhD research of the first author, and stressed that the survey was not used for any commercial purpose.
Compensation value. Smith et al. [94] observed in the surveys conducted at Microsoft that people will more likely comply with a survey request if they owe the requester a favor. Such reciprocity can be induced by providing an incentive, such as a gift card; previous research showed that this technique alone can double the participation rate [97]. In our primary survey, we offered a 50 USD Amazon gift card to one randomly selected participant.

Timing. As Smith et al. [94] experienced, the time when a survey invitation is sent to the developers can directly impact their likelihood of responding to the email. They advise avoiding the following times when sending survey invitation emails: 1) Monday mornings (when developers just quickly want to skim through their emails), and 2) the most likely out-of-office days (Mondays, Fridays, and the month of December). We therefore sent the survey invitations only on the three other weekdays (Tuesday through Thursday).

We conducted the survey in August 2017. Out of the 900 emails we sent to the sampled users, around 150 bounced back for various reasons (e.g., old or unreachable email addresses), and around 30 users requested by email that they did not want to participate in the survey. Therefore, the final number of emails sent successfully was 720 (900 - 150 - 30). We note that the email list compiled by Vasilescu et al. [91] is from 2013; therefore, not all of the 720 email recipients may have still been using those email addresses. Previously, Treude et al. [84] reported a response rate of 7.8% for a survey conducted with GitHub developers. Our response rate (15.8%) is about twice that of Treude et al. [84].

Survey Data Analysis

We analyzed the primary survey data using the same statistical and qualitative approaches we applied in the pilot survey. We created a total of 947 quotes from the responses to the open-ended questions. We created the quotes from each response as follows (a short sketch of this procedure is given below): 1) We divided the response into sentences. 2) We further divided each sentence into individual clauses, using the semi-colon as the separator between two clauses. Each clause was considered a quote.

Two coders analyzed the quotes. The first coder was the first author. The second coder was a senior software engineer working in industry in Ottawa, Canada, and is not an author of this manuscript. The two coders together coded eight of the nine open-ended questions (Q4-Q7, Q11-Q14 in Table 6). The other open-ended question was Q24 ("explain why you don't value the opinion of other developers . . . "); only 10 participants responded to that question, resulting in 17 quotes, all of which the first coder labelled. In Table 8, we show the agreement level between the two coders for the eight open-ended questions; the last row in Table 8 shows the number of quotes for each of the questions. To compute the agreement between the coders, we used the online ReCal2 calculator [88]. The calculator reports the agreement using four measures: 1) percent agreement, 2) Cohen's κ [87], 3) Scott's Pi [98], and 4) Krippendorff's α [99]. Scott's Pi is extended to more than two coders in Fleiss' κ. Unlike Cohen's κ, Scott's Pi assumes the coders have the same distribution of responses. Krippendorff's α is more sensitive to bias introduced by a coder, and is recommended over Cohen's κ [100]. The agreement (Cohen's κ) between the two coders is reported in Table 8.

PRIMARY SURVEY RESULTS

During the open-coding process of the nine open-ended questions, 37 categories emerged (excluding 'irrelevant' and responses such as 'Not sure').
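The quote-creation step referenced in the Survey Data Analysis subsection can be sketched as a short script. The sentence splitter below is an illustrative stand-in (the exact tokenizer used is not specified here); only the semicolon rule is taken directly from the procedure above:

    import re

    def make_quotes(response: str) -> list[str]:
        """Split a free-text survey response into quotes: sentences first,
        then semicolon-delimited clauses, as described above."""
        sentences = re.split(r"(?<=[.!?])\s+", response.strip())
        quotes = []
        for sentence in sentences:
            for clause in sentence.split(";"):
                clause = clause.strip()
                if clause:
                    quotes.append(clause)
        return quotes

    print(make_quotes("Docs are thin; examples help. I am not sure."))
    # -> ['Docs are thin', 'examples help.', 'I am not sure.']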
We labelled a quote as 'Not sure' when the respondent mentioned that they were not sure of the specific answer or did not understand the question. For example, the following quotes were all labelled as 'Not sure': 1) "Don't know", 2) "I am not sure", 3) "This is a hard problem.", 4) "I'm not sure what this question is asking." The 37 categories were observed a total of 1,019 times in the quotes. 46 quotes had more than one category; for example, the following quote has three categories (Documentation, API usage, and Trustworthiness): "Many responses are only minimally informed, have made errors in their code samples, or directly contradict first-party documentation." In Table 19 (Appendix A), we show the distribution of the categories by question. In Figure 7, we show the number of participants that answered the different questions in the primary survey. We discuss the results below.

Reasons for Seeking Opinions about APIs (RQ1.1)

We report the responses of the developers along the three sub-questions: 1) sources for development and opinion needs, 2) factors motivating developers to seek opinions about APIs from developer forums, and 3) challenges developers face while seeking opinions about APIs from developer forums. We asked the developers three questions (Q2, Q16, and Q17 in Table 6).

Q16 We further asked the developers which sources they use to seek opinions about APIs. We gave them six options to choose from: 1) developer forums, e.g., Stack Overflow (picked by all 83 developers who mentioned that they valued the opinions of other developers and that they visit developer forums), 2) co-workers (63 developers), 3) IRC chats (22 developers), 4) internal mailing lists (24 developers), 5) none (0 developers), and 6) others. For 'Others', we gave them a text box to write the names of the forums. Among the other sources, developers picked a variety of online resources, such as the Google search engine, Hacker News, blogs, Slack (for meetups), GitHub, and Twitter. The Google and Apple developer forums were present in the list of forums that the developers visited in the last two years, but were absent from the list of forums where developers seek opinions. Stack Overflow was picked in both questions as the most visited site by the developers, both as a general-purpose forum to find information about APIs and as a forum to seek opinions about APIs. We observed the presence of Twitter in both lists (Q2 and Q16); therefore, besides Stack Overflow, Twitter can be another resource to support developers' learning needs, as was previously observed by Sharma et al. [102].

Q17 We asked the developers about their frequency of visiting the online developer forums (e.g., Stack Overflow) to get information about APIs. There were six options to choose from: 1) every day (picked by 36.1% of the developers), 2) two/three times a week (32.5%), 3) once a week (14.5%), 4) once a month (16.9%), 5) once a year (0%), and 6) never (0%). Therefore, most of the developers (83.1%) reported that they visit developer forums at least once a week, with the largest group (36.1%) visiting the forums every day. Every developer reported visiting the developer forums to seek opinions about APIs at least once a month.

Reasons for Seeking Opinions about APIs (RQ1.1)
Sources for Opinions about APIs (RQ1.1.a)
Stack Overflow was considered the major source for seeking information and opinions about APIs. Developers also use diverse informal documentation resources, e.g., blogs, Twitter, the GitHub issue tracking system, etc.
Factors Motivating Opinion Seeking (RQ1.1.b)

We asked the developers two questions (Q4 and Q18 in Table 6). For each category reported below (e.g., EXPERTISE^(31,31)), the superscript (n, m) is interpreted as follows: n is the number of quotes found in the responses to the questions, and m is the total number of distinct participants who provided those responses. A similar format was previously used by Treude et al. [84] (except m, i.e., the number of respondents).

Q4 We asked the developers to write about their reasons for referring to opinions about APIs from other developers. The developers seek opinions for a variety of reasons:

1) To gain overall EXPERTISE^(31,31) about an API by learning from the experience of others. Developers consider the opinions from other expert developers as indications of real-world experience and hope that such experience can lead them in the right direction: "Getting experience of others can save a lot of time if you end up using a better API or end up skipping a bad one."

2) To learn about the USAGE^(17,16) of an API by analyzing the opinions of other developers posted in response to the API usage scenarios in the posts. Developers consider that certain edge cases of an API usage can only be learned by looking at the opinions of other developers: "It's especially helpful to learn about problems others faced that might only be evident after considerable time is spent experimenting with or using the API in production." They expect to learn about the potential issues with an API feature from the opinions before they start using the API: "Because there is always a trick and even best practice which can only be provided by other developers."

3) To be able to SELECT^(13,12) an API among multiple choices for a given development task. The developers think that the quality of APIs can vary considerably; moreover, not all APIs may suit the needs at hand. Therefore, they expect to see the pros and cons of the APIs before making a selection: "If I don't know an API and I need to pick one, getting the opinion of a few others helps make the decision." To make a selection or to meet the specific development needs at hand, the developers leverage the knowledge about the API aspects expressed in the opinions about the competing APIs: a) the DOCUMENTATION^(11,11) support, e.g., when the official documentation is not sufficient, "possibility to get an answer to a specific question (which may not be explained in other sources like the API's documentation)"; b) the COMMUNITY^(11,9) engagement in the forum posts and mailing lists, "for instance, Firebase team members are active on Stack Overflow."; and c) the USABILITY^(1,1) and design principles of the API and the PERFORMANCE^(8,8) of the API to assess API quality, "It allows me to see common gotchas, so I can estimate the quality of the API based on those opinions, not just my own."

4) To improve PRODUCTIVITY^(9,8) by saving time in the decision-making process: "Getting experience of others can save a lot of time if you end up using a better API or skipping a bad one."

5) To TRUST^(13,11) and validate the claims about a specific API usage or feature (e.g., whether a posted code example is good/safe to use), because "They represent hands-on information from fellow devs, free of marketing and business."

Q18 We asked the developers about the specific development needs that may motivate them to seek opinions about APIs. We solicited responses using a five-point Likert-scale question.
The analysis of the responses (Figure 8) shows that developers find the selection-related factors (i.e., determining a replacement for an API and selecting an API among choices) to be the most influential reasons for seeking opinions (88% and 80.7% of all respondents agreed/strongly agreed, respectively). Developers also seek opinions when they need to improve a software feature or to fix a bug (69.9% and 68.7% agreement, respectively). 56.6% of the respondents seek help during the development of a new API, such as when addressing the shortcomings of an existing API. Developers are less enthusiastic about relying on opinions for validating their choice of an API to others (38.6%) or for replacing one version of an API with another (27.7%). Only 15.7% of the respondents mentioned that they seek opinions for reasons not covered by the above options. Note that while the 'Neutral' responses are not shown in Figure 8, we include the neutral counts in our calculation of the percent agreements above. Our charts showing the results of Likert-scale questions (e.g., Figure 8) follow formats similar to those adopted in previous research [103], [104].

We asked developers two questions (Q5 and Q6 in Table 6) to determine the way developers seek information and opinions about APIs and the challenges they face while seeking and analyzing the opinions.

Q5 We asked the developers to write about how they seek information about APIs in developer forums and how they navigate through multiple forum posts to form an informed insight about an API. More than 83% of the respondents include some form of SEARCHING^(94,79), using a general-purpose search engine (e.g., Google) or the search capabilities provided in the forums. In the absence of a suitable alternative, the developers learn to trust such resources ("I use Google and trust Stack Overflow's mechanisms for getting the best posts into the top search results"), though they also note that patience is required to get the right results from the search engines ("Google and patience"). Because such a search can still return many results, developers employ different mechanisms to navigate through the results and find the information they can consider the right result. Such mechanisms include the ranking (e.g., top hits) of the results, the manual SIMILARITY^(3,3) assessment of the search results by focusing on the recency of the opinions, and the analysis of the presence of SENTIMENTS^(4,4) in the posts about the API: "I usually google keyword and then look for the positive and negative response".

Q6 We asked developers to write about the challenges they face while seeking opinions about APIs from developer forums. The major challenge developers reported was establishing the SITUATIONAL RELEVANCY^(26,22) of the opinions to their usage needs. Such difficulties can stem from diverse sources, such as finding the right question for their problem, or realizing that a post may have only a fraction of the answers the developer was looking for: "It's hard to know what issues people have with APIs so it's difficult to come up with something to search for." Another relevant category was the RECENCY^(13,13) of the provided opinion, as developers struggle to determine whether a provided opinion is still valid for the features currently offered by an API: "Information may be outdated, applying to years-old API versions".
The assessment of the TRUSTWORTHINESS^(21,20) of the provided opinion, as well as of the opinion provider, was considered challenging for various reasons, such as the lack of insight into the expertise level of the opinion provider and the potential bias of the provider. According to one developer, "Sometimes it's hard to determine a developer's experience with certain technologies, and thus, it may be hard to judge the API based on that dev's opinion alone". Developers also reported challenges with the SEARCH^(15,11) features offered by both Stack Overflow and general-purpose search engines when trying to find the right information. Finding the right keywords was a challenge during the search, and developers wished for better search support to easily find duplicate and unanswered questions. A lack of knowledge of how to search can prove a blocker in such situations: "If I'm inexperienced or lacking information in a particular area, I may not be using the right terminology to hit the answers I'm looking for."

Developers expressed difficulties during their SELECTION^(6,6) of an API by leveraging the opinions. The lack of opinions and information for newer APIs is a challenge during the selection of an API: "With newer APIs, there's often very little information.". Developers can have specific requirements (e.g., performance) for their USAGE^(8,8) of an API, and finding the opinion corresponding to the requirement can be non-trivial (e.g., is the feature offered by this API scalable?). The necessity to analyze opinions based on specific API aspects was highlighted by the developers, such as: 1) links to the DOCUMENTATION^(4,4) support in the opinions, "Ensuring that the information is up-to-date and accurate requires going back-and-forth between forums and software project's official docs to ensure accuracy (in this case official docs would be lacking, hence using forums is the first place)"; 2) the COMPATIBILITY^(4,4) of an API feature across different versions, "filtering for a particular (current) API version is difficult."; and 3) the USABILITY^(2,2) of the API, and the activity and engagement of the supporting COMMUNITY^(9,7), "But looking at several questions in a particular tag will help give a feeling for the library and sometimes its community too." The lack of a proper mechanism to support such analysis within the current search engines or developer forums makes such analysis difficult for the developers. Finally, getting an instant insight into the EXPERTISE^(3,3) of the opinion provider is considered important by the developers during their analysis of the opinions, and they consider that getting such insight can be challenging: "Need to figure out if the person knows what they are talking about and is in a situation comparable to ours."

Reasons for Seeking Opinions about APIs (RQ1.1)
Challenges in Opinion Seeking (RQ1.1.c)
The majority of the developers leverage search engines to look for opinions. They manually analyze the presence of sentiments in the posts to pick the important information. The developers face challenges in such exploration for a variety of reasons, such as the difficulty of associating such opinions with the contexts of their development needs, the lack of enough evidence to validate the trustworthiness of a given claim in an opinion, etc.

Q3 We asked the developers whether they value the opinions about APIs of other developers in the developer forums. 89.2% of the respondents mentioned that they value the opinions of other developers in online developer forums such as Stack Overflow; 10.8% reported that they do not value such opinions.
Q24 We asked the 10.8% of participants who do not value the opinions of other developers to provide the reasons why. The developers cited their concern about TRUSTWORTHINESS^(4,3) as the main reason, followed by the lack of SITUATIONAL RELEVANCY^(2,2) of the opinions to specific development needs. The concerns related to the trustworthiness of an opinion can stem from diverse factors, such as the credibility and the experience of the poster: "They are usually biased and sometimes blatant advertisements.". Lack of trust can also stem from the inherent bias someone may possess towards a specific computing platform (e.g., Windows vs. Linux developers). We note that both of these categories were also mentioned by the developers who do value the opinions of other developers; however, those developers mentioned that by looking for diverse opinions about an entity, they can form a better insight into the trustworthiness of a provided claim and code example.

We now report the results from the responses of the 89.2% of developers who reported that they value the opinions of other developers.

Q7 This open-ended question asked developers to write about the factors that determine the quality of a provided opinion. The following factors are used to assess the quality of the opinions:

1) The quality of the provided opinion as comparable to official API DOCUMENTATION^(54,49). Forum posts are considered informal documentation, and the developers expected the provided opinions to serve as an alternative source of API documentation. Clarity, a proper writing style, and enough detail with links to support each claim are considered important for an opinion to be of good quality. In particular, the following metrics are cited to assess opinions as a form of API documentation: a) completeness of the provided opinion, based on the "Exhaustiveness of the answer and comparison to others"; b) acceptable writing quality, by using "proper language" and "Spelling, punctuation and grammar."; c) accompanying facts, such as screenshots or similar suggestions, examples from real-world applications, or "Cross-referencing other answers, comments on the post, alternate answers to the same question."; d) brevity of the opinion, by being "brief and to the point"; e) providing context, by discussing "identified usage patterns and recognized API intent"; and f) accompanying code examples with reactions, "Code samples, references to ISOs and other standards, clear writing that demonstrates a grasp of the subjects at hand".

2) The REPUTATION^(35,33) of the opinion provider and the upvotes on the forum post where the opinion is found: "If the poster has a high Stack Overflow score and a couple of users comment positively about it, that weighs pretty heavily on my decision."

3) The perceived EXPERTISE^(13,12) of the opinion provider within the context of the provided opinion: "It's easy to read a few lines of technical commentary and identify immediately whether the writer is a precise thinker who has an opinion worth reading."

4) The TRUSTWORTHINESS^(8,5) of the opinion provider, who demonstrates apparent fairness and weighs pros/cons.

5) The SITUATIONAL RELEVANCE^(12,9) of the provided opinion to given development needs.

6) The RECENCY^(6,6) of the opinion: "posts more than a few years old are likely to be out of date and counterproductive."
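The (n, m) superscripts attached to these categories can be tallied mechanically from the coded quotes. A minimal sketch, where each coded quote is a (participant, category) pair and the data is a toy stand-in for the actual coding:

    from collections import defaultdict

    # Toy stand-in for the coded quotes: (participant_id, category) pairs.
    coded = [(1, "DOCUMENTATION"), (1, "REPUTATION"),
             (2, "DOCUMENTATION"), (2, "DOCUMENTATION"), (3, "RECENCY")]

    n_quotes = defaultdict(int)    # n: number of quotes per category
    m_people = defaultdict(set)    # m: distinct participants per category
    for pid, category in coded:
        n_quotes[category] += 1
        m_people[category].add(pid)

    for category in n_quotes:
        print(f"{category}^({n_quotes[category]},{len(m_people[category])})")
    # DOCUMENTATION^(3,2)  REPUTATION^(1,1)  RECENCY^(1,1)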
Needs for API Review Quality Assessment (RQ1.2)
Developers analyze the quality of the opinions about APIs in the forum posts by treating the opinions as a source of API documentation. They employ a number of metrics to judge the quality of the provided opinions, e.g., the clarity and completeness of the provided opinion, the presence of code examples, the presence of detailed links supporting the claims, etc.

Q8 We asked developers whether they currently rely on tools to analyze opinions about APIs from developer forums. 13.3% responded with a 'Yes' and the rest (86.7%) with a 'No'.

Q9 We further probed the developers who responded with a 'No', asking them whether they feel the need for a tool to help them analyze those opinions. The majority (62.5%) of these developers were unsure ('I don't know'); 9.7% responded with a 'Yes' and 27.8% with a 'No'.

Q10 We further probed the developers who responded with a 'Yes', asking them to write the name of the tool they currently use. The developers cited the following tools: 1) Google search, 2) Stack Overflow votes, 3) the Stack Overflow mobile app, and 4) GitHub Pulse.

Q19 was a multiple-choice question, with each choice referring to one specific tool. The choice of the tools was inspired by research on sentiment analysis in other domains [77]. To ensure that the participants understood what we meant by each tool, we provided a one-line description for each choice. The choices were as follows (in the order we placed them in the survey): 1) opinion miner: for an API name, it provides only the positive and negative opinions collected from the forums; 2) sentiment analyzer: it automatically highlights the positive and negative opinions about an API in the forum posts; 3) opinion summarizer: for an API name, it provides only a summarized version of the opinions from the forums; 4) API comparator: it compares two APIs based on opinions; 5) trend analyzer: it shows sentiment trends towards an API; and 6) competing APIs: it finds APIs co-mentioned positively or negatively along with an API of interest and compares those.

The respondents' first choice was the 'Competing APIs' tool (6), followed by the 'Trend analyzer' to visualize trends of opinions about an API (5), the 'Opinion miner' (1), and the 'Summarizer' (3) (see Figure 9). The sentiment analyzer (2) was the least desired tool, i.e., developers wanted tools that do not simply show sentiments, but also show the context of the provided opinions. Note that opinion search and summarization engines in other domains (e.g., camera reviews) not only show the mined opinions, but also offer insights by summarizing and revealing trends; the engines facilitate the comparison among competing entities through different aspects of the entities, and such aspects are also used to show the summarized viewpoints about the entities (see Figure 3). Therefore, it was encouraging to see that developers largely agreed with the diverse summarization needs for API reviews and usage information, corroborating the similar summarization needs in other domains.

Tool Support for API Review Analysis (RQ2.1)
To analyze opinions about APIs in the forums, developers leverage traditional search features in the absence of a specialized tool. The developers agree that a variety of opinion analysis tools can be useful in their exploration of opinions about APIs, such as an opinion miner and summarizer, a trend analyzer, and an API comparator engine that can leverage those opinions to facilitate the comparison between APIs.
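To make the tool descriptions above concrete, the 'Trend analyzer', for instance, reduces to grouping time-stamped opinions about an API by month and averaging their polarity. A minimal sketch, assuming each opinion already carries a polarity score in [-1, 1] from some upstream sentiment classifier (the data below is made up):

    from collections import defaultdict
    from statistics import mean

    # Hypothetical input for one API: (year-month, polarity) pairs.
    opinions = [("2017-06", 0.8), ("2017-06", -0.2),
                ("2017-07", -0.5), ("2017-08", 0.4), ("2017-08", 0.6)]

    by_month = defaultdict(list)
    for month, polarity in opinions:
        by_month[month].append(polarity)

    # Average polarity per month gives the sentiment trend towards the API.
    trend = {m: round(mean(v), 2) for m, v in sorted(by_month.items())}
    print(trend)  # {'2017-06': 0.3, '2017-07': -0.5, '2017-08': 0.5}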
Needs for API Review Summarization (RQ2.2)
As we observed in the previous section, a potential opinion summarization engine for APIs needs to both show summarized viewpoints about an API and support the comparison of APIs based on those viewpoints.

Fig. 9: Tool support for analyzing API opinions.

Factors Motivating Summarization Needs (RQ2.2.a)

We asked the developers three questions (Q20-Q22 in Table 6). The analysis of the responses (Figure 10) shows that nearly all developers agree that opinion summarization is much needed, for several reasons. The vast majority of the respondents think that an interesting opinion about an API might be expressed in a post that they have missed or have not looked at. The need for opinion summarization is also motivated by the presence of too many posts to explore. Developers also believed that the lack of time to search for all possible opinions and the possibility of missing an important opinion are strong incentives for having opinions organized in a better way. (Before conducting the two surveys, the first author manually analyzed around 1,000 posts from Stack Overflow to understand the types of opinions developers share about APIs in the forum posts; the options in these questions were selected based on this observation of API reviews in forum posts.)

Needs for API Review Summarization (RQ2.2)
Factors Motivating Summarization Needs (RQ2.2.a)
The developers reported that they feel overwhelmed by the abundance of opinions about APIs in the developer forums. The developers were in agreement that such difficulty arises for a variety of reasons, such as relevant opinions about an API being missed because they were in posts not checked by the developer, the lack of enough time to find all such opinions, etc.

Summarization Preferences (RQ2.2.b)

We asked three questions (two open-ended) to understand the relative benefits and problems developers may encounter during their usage of API review summaries from developer forums (Q13, Q14, and Q23 in Table 6).

Q13 We asked developers to write about the areas that can be positively impacted by the summarization of opinions about APIs from developer forums. The following areas were considered to benefit from the summaries:

1) The majority of the developers considered API SELECTION^(30,25) to be an area that can reap the benefits of the summaries: "It would make the initial review and whittling down of candidates quicker and easier.". They also considered that API review summarization can improve the PRODUCTIVITY^(12,11) of the developers by offering quick but informed insights about APIs.

2) The developers expected the summaries to assist in their API USAGE^(7,7), such as by showing reactions from other developers to a given code example, which can offer SITUATIONALLY RELEVANT^(3,3) insights about the code example (e.g., whether the code example does indeed work and solve the need as claimed). The developers consider that they can use a summary as a form of documentation to explore the relative strengths and weaknesses of an API for a given development need; they do not expect the official documentation of an API to have such insights, because "You can see how the code really works, as opposed to how the documentation thinks it works". Indeed, official documentation can often be incomplete [7]. Carroll et al. [24] advocated the need for minimal API documentation, where API usage scenarios are shown in a task-based view.
In a way, the developers in our survey see API usage scenarios, combined with the opinions posted in the forum posts, as effective ingredients for completing the task at hand. Hence, they expect that summaries of opinions about an API can help them use the API efficiently during their development tasks: "If summarization is beneficial to understanding API, then any problem even remotely related to that use stands to achieve a network benefit."

3) As we noted earlier in this section, developers leverage the opinions about diverse API aspects to compare APIs during their selection of an API among choices. In the absence of any automatic categorization that can show how an API fares with respect to a given aspect (e.g., performance), the developers expected that an opinion summarizer would help them with such information. For example, the developers considered that opinion summaries can also improve the analysis of different API aspects, such as USABILITY^(2,2), COMMUNITY^(2,2), PERFORMANCE^(3,3), DOCUMENTATION^(6,5), etc.

4) Finally, developers envisioned that opinion summaries can improve the SEARCH^(6,6) for APIs and help them find better EXPERTISE^(4,4) to learn and use APIs.

Q14 We next asked developers to write about the areas that can be negatively impacted by the summarization of opinions about APIs from developer forums. The purpose was to be aware of any drawbacks that could arise from the use of API opinion summaries. 1) Developers considered that opinion summaries may MISS NUANCES^(18,12) in API behavior that are subtle and may not be frequent, but that could be "key design choices or the metainformation . . . "; the REASONING^(7,6) about opinions can also suffer because of that. 2) While developers considered the selection of an API to be an area that can be positively affected by opinion summaries in general, they also raised the concern that summaries may negatively impact API SELECTION^(6,6) when more subtle API aspects are not as widely discussed as others, because "Popularity and fashion trends may mask more objective quality criteria". Summaries may also become a barrier to the adoption of new APIs (API BARRIER^(3,3)) when new APIs with fewer opinions are not ranked highly in the summaries: "Just relying on the summarization for deciding for or against an API will probably not be enough and may lead to decisions that are less than optimal."

Q23 We asked developers how opinion summaries can support their tasks and decision-making processes. There were nine options; we used the same options we used to solicit the needs for opinions about APIs in Figure 8, because the purpose of an opinion summarizer should be to facilitate the easier and more efficient consumption of opinions. In Figure 11, we present the responses of the developers. More than 70% of the developers believe that opinion summarization can help them with two decisions: 1) determining a replacement for an API and 2) selecting the right API among choices. Developers agree that summaries can also help them improve a software feature, replace an API feature, and validate the choice of an API. Fixing a bug, while receiving 38.6% agreement, might not be well supported by summaries, since this task requires certain detailed information about an API that may not be present in the opinions (e.g., its version, the code itself, the feature). The developers mostly disagreed with the use of opinions to select an API version (28.9% agreement only).
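As with Figure 8, the percent agreements reported for these Likert-scale questions count the 'Neutral' responses in the denominator. A minimal sketch of the calculation, with made-up counts:

    # Percent agreement over a 5-point Likert item, with 'Neutral' counted
    # in the denominator as described above. The counts are made up.
    counts = {"Strongly disagree": 5, "Disagree": 10, "Neutral": 20,
              "Agree": 30, "Strongly agree": 18}

    total = sum(counts.values())                      # all 83 respondents
    agreement = (counts["Agree"] + counts["Strongly agree"]) / total
    print(f"{100 * agreement:.1f}%")                  # 57.8%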
Needs for API Review Summarization (RQ2.2)
Summarization Preferences (RQ2.2.b)
The summarization of opinions about APIs can be effective in supporting the selection of an API among choices, the learning of API usage cases, etc. Developers expect the opinion summarizer to provide a gain in productivity in supporting such needs. However, developers expressed caution about a potential opinion summarizer for APIs, for example when subtle nuances of an API can be missed in the summary. Another concern is that summaries can make it difficult for a new API to be adopted: developers may keep using the existing APIs because they are likely reviewed more than the new APIs.

Summarization Types (RQ2.2.c)

We asked developers three questions (Q11, Q12, and Q15 in Table 6) to learn about their preferences for how opinion summaries about APIs should be produced. … a) Active community: "Code quality, maintainer activity, ratio of answered/unanswered questions about the API." b) Mature: "Strong Stack Overflow community (to show that the API is relatively mature)." c) Reputation: "Reputation of the organization.", "Other works of its creators." 4) PERFORMANCE^(30,23): "Has it been used in production grade software before?" 5) COMPATIBILITY^(18,14): a) language, "Implemented in the language I need it."; b) other APIs, "Uniformity with other API . . . "; c) framework/project, "compatibility (will it work for project I'm working on)". 6) LEGAL^(15,14): ". . . the API is open source and maintained." 7) PORTABILITY^(5,5): "Thread safety, sync/async, cross-language and cross-OS support, etc." 8) SECURITY^(2,2): "Are security issues addressed quickly?" 9) BUG^(1,1): "Features, ease of use, documentation, size and culture of community, bugs, quality of implementation, performance, how well I understand its purpose, whether it's a do-all frame-work or a targeted library, . . . " The USAGE SCENARIOS^(10,9) of an API, and the FEATURES^(4,4) it supports, are also considered important factors to be part of the opinion summaries.

Q12 We asked developers to provide their analysis of how opinions about APIs can be summarized when such opinions are scattered across multiple forum posts. The purpose was to learn what techniques developers consider valuable for aggregating such opinions from diverse posts. While developers mentioned using the SEARCH^(12,12) features in Stack Overflow, they acknowledged that they had never thought about this problem deeply, and that search engines or Stack Overflow features are really not the summarizer they may need: "Often there is a sidebar with similar questions/discussions, but no easy way to "summarize"." To address their needs for such a summarizer, the idea of a dedicated API OPINION PORTAL^(18,15) was discussed. In such a portal, all opinions about an API can be aggregated and presented in one place: "aggregate similar information, collect 'A vs B' discussions" . . . "like an aggregated portal about an API with all organized or non-organized pages". Having a centralized portal for such information can be useful, because "Then I can read through them all and synthesize a solution.". Developers advocated machine learning approaches to develop such a portal: "It would be very interesting for this to be automated via machine learning."
Developers wished for diverse presentations of opinions in the portal: 1) CATEGORIZED^(8,8) into different API aspects, such as "Summarization by different criteria, categorization", and 2) distributed by SENTIMENTS^(8,5), such as a star rating (similar to other domains), "do an x-out-of-5 rating for portability, stability, versatility and other attributes". The Stack Overflow Documentation site was shut down on August 8, 2017 [105], mainly due to a lower than expected number of visits to the documentation site. Therefore, we can draw insights both from the problems faced by the Stack Overflow Documentation site and from the responses of the developers in our survey to develop an API portal that can offer better user experiences to the developers.

Q15 We asked developers about 11 API aspects, and whether they would like to explore opinions about APIs around those aspects. The 11 aspects are: 1) performance, 2) usability, 3) portability, 4) compatibility, 5) security, 6) bugs, 7) community, 8) documentation, 9) legal, 10) general features, and 11) only sentiment. We note that each of these options is found in the open-ended questions already; thus, the responses to this question can be used to further ensure that we understood the responses of the developers. The results of the relevant Likert-scale survey question are summarized in Figure 12. The vast majority of the developers agree that the presence of documentation, discussion of an API's security features, mention of any known bugs in an API, its compatibility, performance, usability, and the supporting user community are the main attributes that contribute to opinion quality. Developers could not agree on whether a useful opinion should include a mention of any legal terms, while they agree that posts containing only one's sentiment (e.g., "I love API X" or "API Y sucks") without any rationale are not very useful. We note that there was a similarly themed open-ended question, Q11. However, the developers were not shown Q15 before their response to Q11. Moreover, the two questions were placed in two separate sections, Q11 in section 4 and Q15 in section 6, and there was only one question (i.e., Q11) in section 4. Therefore, the participants did not see Q15 or the options in Q15 while responding to Q11.

DISCUSSIONS

In this section, we summarize the key points from our primary survey with regard to the two research questions that we set out to address (Section 7.1). We then analyze the findings from the two surveys along three demographics: 1) the profession of the survey participants in Section 7.2, 2) their experience in Section 7.3, and 3) the nine programming languages used to sample the primary survey participants in Section 7.4.

Summary

In Table 9 and Table 10, we summarize the responses from our primary survey for RQ1 and RQ2, respectively. The insights are calculated using the same rules that we used in Section 4.

Observation 1. Developers leverage opinions about APIs to support development needs, such as API selection, usage, learning about edge cases, etc. (Table 9). Developers expect that opinion summaries can facilitate those needs by offering an increase in productivity, e.g., saving time by offering quick but synthesized insights (Table 10).

Observation 2. We asked developers about the reasons why they seek opinions about APIs using two questions (Q4 and Q18). Another pair was Q11 and Q15, which we used to ask developers about their preference for the specific API aspects/factors that they expect to see in the opinions about APIs.
In the first pair (i.e., Q4 and Q18), the responses to the open-ended question (Q4) show a slight variation from the responses to the closed question (Q18). In Q4, the main reason mentioned was to build and look for expertise about APIs while seeking opinions; Q18 does not have that option. Nevertheless, the majority agreement for the options in Q18 shows that those needs are also prevalent. In the second pair (i.e., Q11 and Q15), the responses to both questions produced an almost similar set of API aspects, such as performance, usability, documentation, etc.

Observation 3. In both surveys, two closed questions were paired together (Q18 and Q23 in the primary survey, and Q4 and Q12 in the pilot survey). The first question in each pair (i.e., Q18 in the primary and Q4 in the pilot survey) aims at understanding the needs of developers for seeking opinions about APIs. The second question in each pair (i.e., Q23 in the primary survey and Q12 in the pilot one) aims to understand the needs for summarizing such opinions. The options for each of the four questions remained the same (see Table 9). In both surveys, the highest ranked option was "selection among choices". Therefore, developers seek opinions to decide on an API among choices, and they believe that the summarization of opinions can assist them in their selection process. The three selection-related options (i.e., selection among choices, determining a replacement, and validating an API selection) are ranked as the top three in the pair of questions from the primary survey. Therefore, the primary focus of aggregating and summarizing opinions about APIs would be to assist developers in their selection of APIs and any other tasks relevant to it.

For reference, the corresponding rows of the summary table of primary survey responses (question number, number of respondents, question, and response distribution) read as follows:
Q24 (10 respondents): Please explain why you don't value the opinion of other developers in the developer forums. 1) Trustworthiness 23.5%, 2) Situational relevance 11.8%, 3) Community 11.8%, 4) Expertise 11.8%
Q9: If you don't use a tool currently to explore the diverse opinions about APIs in developer forums, do you believe there is a need of such a tool to help you find the right viewpoints about an API quickly? 1) Yes 9.7%, 2) No 27.8%, 3) I don't know 62.5%
Q10 (11 respondents): You said yes to the previous question on using a tool to navigate forum posts. Please provide the name of the tool. 1) Google/Search 4, 2) Stack Overflow votes/app/related post 3, 3) GitHub issue/pulse 1, 4) Safari 1, 5) Reference documentation 1
Q19 (83 respondents): What tools can better support your understanding of API reviews in developer forums? 1) Opinion mining 56.6%, 2) Sentiment analysis 41%, 3) Opinion summarization 45.8%, 4) API comparator 68.7%, 5) Trend analyzer 67.5%, 6) Co-mentioned competing APIs 73.5%, 7) Other tools 8.4%
Q13 (83 respondents): What areas can be positively affected by the summarization of reviews about APIs from developer forums? 1) API selection 26.1%, 2) Productivity 10.4%, 3) API usage 6.1%, 4) Documentation 5.2%, 5) API popularity analysis 3.5%
Q14 (83 respondents; RQ2.2, needs for opinion summarization): What areas can be negatively affected by the summarization of reviews about APIs from developer forums? 1) Opinion trustworthiness 19.3%, 2) Missing nuances 16.5%, 3) Opinion reasoning 6.4%, 4) API selection 5.5%, 5) New API entry 2.8%
Q15 (83 respondents): An opinion is important if it contains discussion about the following API aspects? 1) Usability 88%, 2) Documentation 85.…

Analysis by Professions

In Tables 11 and 12, we summarize the results of the closed questions from our primary survey by the reported professions of the survey participants.

Observation 4. The interest in opinions about APIs is mostly prevalent and consistent across the different reported professions in our surveys. All of the research engineers, students, and team leads, and 89.2% of the software developers who visit developer forums in our survey, also value the opinions about APIs in the forums. Unlike managers, technical leads are expected to work closely with the codebase and the overall system architecture design; the decision on an API during the design of a system can be beneficial for such team leads. According to one team lead: "The quality of APIs can vary considerably. Getting experience of others can save a lot of time if you end up using a better API or end up skipping a bad one." For the researchers, the motivation was to learn from the experts: "they have used the API, probably more extensively then I have, and may be experts in their subfield".

Observation 5. While all the team leads consult API information in the developer forums, most of them visit the forums two or three times a week. In contrast, most of the developers who consult developer forums for API information do so every day.

Observation 6. In both the pilot and primary surveys, we asked developers about their preference for tools to better support their understanding of opinions about APIs in the developer forums (Q19 in the primary survey, Q7 in the pilot survey; for the demographic comparisons we use the Mann-Whitney U test and a 95% confidence level, i.e., p = 0.05). A summarization engine to compare APIs based on different features is considered the most useful among the software developers, team leads, and research engineers. In our pilot survey, the leads also preferred this tool the most among all tools, and the three managers who responded showed equal preference (33.3%) for all the tools except an opinion miner (66.7%).

Observation 7. There is a clearer distinction among the professions in their preference for the API aspects that they would explore in the opinions about APIs (Q15 in the primary survey). The team leads are most interested in finding opinions about API documentation, while the other professions, including software engineers, are most interested in learning about the usability of the API. All the professions agreed the most that the summarization of opinions can help in the selection of APIs.

Analysis by Experiences

In Tables 13 and 14, we summarize the results of the closed questions from our primary survey by the reported experience of the survey participants, following the same reporting principles we discussed in Section 7.2.

Observation 8. In both the pilot and primary surveys, the developers with less experience show more interest in valuing the opinions of other developers (Q2 in Table 13). In both surveys, the more experienced developers show more uniform preferences towards the different tools that could be developed to facilitate opinion analysis from developer forums (Q19 in Table 14).

Observation 9. The less experienced developers visit forums more frequently (Q17 in the primary survey): 75%, 61.2%, and 58.2% of the developers with 3-6, 7-10, and 10+ years of experience, respectively, visit developer forums at least two or three times a week.

Observation 10. Among the less experienced developers (3-6 years of experience), both the Trend analyzer and the API comparator were ranked over the other tools when asked about their preference for tools to better support their understanding of opinions about APIs (Q19 in the primary survey). The developers with 10+ years of experience were most interested in exploring the Competing APIs tool. The preference towards a specific tool decreases as developers become more experienced.
For example, the developers with 10+ years of experience rate the different tools with almost equal preference. The two tools (API Comparator and Competing APIs) are also ranked the highest (70.7%) by developers with 10+ years of experience in the pilot survey (Q7).

Observation 11. The developers with 10+ years of experience show almost equal preference for the different implicit API aspects about which they prefer to see or seek opinions. The less experienced developers, in contrast, have more specific preferences for the API aspects about which they like to explore the opinions of other developers (Q15 in the primary survey):
1) more than 75% agreement for six API aspects by developers with 3-6 years of experience (maximum 100% for usability);
2) more than 75% agreement for three API aspects by developers with 7-10 years of experience (maximum 84.6% for both usability and documentation);
3) maximum 72.7% agreement for two API aspects by developers with 10+ years of experience (documentation and bug).

Analysis by Programming Languages
In Tables 15 and 16, we summarize the results of the closed questions from our final survey by the nine programming languages from which our survey sample was drawn. We follow the same reporting principles that we discussed in Section 7.2.

Observation 12. Among the nine programming languages that we targeted to sample the participants of our primary survey, at least 70% of the respondents from each language reported that they visit developer forums to seek information about APIs. The opinions posted about APIs in the forums are valued by those developers (100% of the Java developers, followed by at least 78% of the respondents from each of the other languages).

Observation 13. All the Java developers also report that they do not have any tools that they can currently use to analyze those opinions. The lack of such tool support is also prevalent among developers across the other languages.

Observation 14. The developers across all the programming languages (except Python and C#) most preferred the tools that can offer summarized comparisons between APIs or show a list of competing APIs. For Python, the most preferred tool was an opinion miner, and for C# it was a trend analyzer.

Observation 15. While developers prefer to explore opinions about diverse API aspects, the usability aspect was ranked the highest among the list of API aspects across all the languages except C, C++ and Javascript. For C and C++, the most pressing aspect is documentation, and for Javascript it is performance.

RESEARCH JOURNEY
We noted in Section 1 (Figure 2) that the findings from the surveys led us to develop techniques and tools to assist developers in their analysis of opinions and usage about APIs from Stack Overflow. In this section, we break down the journey into two major phases that we undertook after the surveys: 1) identification of core and actionable insights from the survey results that can be used to guide the design of tools to assist developers in their analysis of opinions and usage of APIs from developer forums (see Section 8.1), and 2) development and evaluation of techniques and our tool (Opiner) based on those insights (see Section 8.2).

Core and Actionable Findings
In Table 17, we summarize the core findings obtained from the surveys. Each finding is assigned a unique ID (denoted by CID). The first two columns show the research questions; the third column presents the core findings. In Table 18, we identify requirements for future tool designs to support each core finding.
Each requirement is mapped to the CIDs from Table 17 (first column). The second column shows the requirements, i.e., the actionable findings. We cluster the requirements into five categories (R1-R5). The third column lists the questions from the primary survey that we used to identify those requirements. The last column presents the features implemented in our tool Opiner to address those requirements. In Figure 13, we show how the requirements are implemented in our tool. We now discuss the findings below, along with the elicited requirements.

Fig. 13: The major research steps undertaken to incorporate the requirements from the two surveys into our proof-of-concept tool, Opiner.

Needs for Searching Opinions and Usage (R1)
To search for opinions and usage discussions about APIs, developers use search engines or the tags in Stack Overflow. They raise the concern that this approach of using search engines (e.g., to find sentiments about an API) can be sub-optimal when they only explore the top results (CID 3 in Table 17). The developers reported looking for sentiments and situational relevance in the search results (Q5). This exploration can be challenging, because the results may not be (1) situationally relevant: the API about which they would like to see opinions may not be present in the search results, or their development task may not be properly supported by the code example found in the search results; (2) trustworthy: the sentiment towards an API found in the search results may not represent the overall sentiment expressed towards it (e.g., the opinion in the search results may be biased); or (3) recent: the opinion and code example may not be the most recent, and thus the solution may not be applicable to the most recent version of the API. In the absence of a dedicated engine for APIs, developers opt for the reformulation of their search queries by modifying search keywords. This approach is considered challenging and time-consuming. The developers wished for better search support to address their needs. Intuitively, it is easier to find opinions and solutions for an API with regard to a specific task (or situation) if they are not scattered across millions of posts in the forums. As a first step towards facilitating such search, it is thus necessary to collect opinions and code examples about APIs from the millions of posts where their usage is discussed. During our tool design to assist developers in their exploration of opinions and usage about APIs from developer forums, we formulated the following two requirements:

1) Opinions about an API need to be collected. Sentiment and emotion mining in software engineering has so far focused on the usefulness of cross-domain sentiment detection tools for software engineering, the development of sentiment detection tools for the domain of software engineering, and the relationship between team productivity and the sentiments expressed (see Section 2.2). We are aware of no technique that can mine opinions associated with an API from developer forums. We developed a framework to automatically mine opinions about APIs from developer forums. The framework currently supports the following major features: a) detection of API names in the forum texts, b) detection of opinionated sentences in the forum texts, and c) association of opinionated sentences to APIs. A detailed description and evaluation of the framework is the subject of our paper [78].
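As a rough illustration of the three mining steps named in requirement 1 (API name detection, opinionated-sentence detection, and association), the sketch below uses a toy API list and a toy sentiment lexicon; the actual framework [78] is considerably more sophisticated than this stand-in.

```python
# A minimal sketch of the opinion-mining pipeline: detect API mentions,
# detect opinionated sentences, and associate the two. The API list and
# the lexicon-based polarity function are simplifying assumptions.
import re

KNOWN_APIS = {"jackson", "gson", "org.json"}   # assumed lookup list
POSITIVE = {"great", "fast", "easy"}           # toy sentiment lexicon
NEGATIVE = {"slow", "buggy", "confusing"}

def sentences(post):
    return re.split(r"(?<=[.!?])\s+", post)

def polarity(sentence):
    words = set(re.findall(r"[a-z.]+", sentence.lower()))
    if words & POSITIVE: return "positive"
    if words & NEGATIVE: return "negative"
    return None

def mine_opinions(post):
    """Yield (api, polarity, sentence) for opinionated sentences that
    mention a known API."""
    for s in sentences(post):
        apis = {a for a in KNOWN_APIS if a in s.lower()}
        p = polarity(s)
        if apis and p:
            for a in apis:
                yield a, p, s

post = "Jackson is fast for large payloads. GSON felt confusing at first."
for api, pol, sent in mine_opinions(post):
    print(api, pol, "->", sent)
```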
2) Code examples with reactions need to be collected together. A number of recent research efforts have been devoted to mining code examples about APIs from forums, such as detecting all the APIs used in a code example in a forum post [18], [64]. We are aware of no technique that mines both a code example and the reactions towards that code example for an API from a forum post. We developed another framework to automatically mine usage scenarios about APIs from developer forums. Each usage scenario about an API consists of three major components: a) the code example, b) a short natural-language description of the code example, and c) the reactions of developers towards the forum post from where the code example is found. A detailed description and evaluation of the framework is the subject of our technical report [106].

Needs for Summarization of API Opinions (R2)
In our primary survey, 89.2% of the developers mentioned that they are overwhelmed by the huge volume of opinions posted about APIs in forums (CID 7). While the majority of the developers agreed that opinion summaries can help them evaluate APIs, the following summarization types were rated as (potentially) useful (CID 8): 1) categorization of opinions by API aspects, 2) trends of API popularity, 3) contrastive viewpoints, 4) dashboards to compare APIs, 5) opinions summarized as topics, and 6), most importantly, opinions summarized into a paragraph. It would be interesting to further investigate the relative benefit of each summarization type and compare such findings with other domains. For example, Lerman et al. [107] found no clear winner for consumer product summarization, while aspect-based summarization is predominantly present in camera and phone reviews [77], and topic-based summarization has been studied in many domains (e.g., identification of popular topics in bug report detection [108], software traceability [109], etc.). As a first step towards facilitating the summarization of opinions about APIs, we formulated the following design requirements:

1) Categorize opinions by aspect. The developers indicated that they would like to seek opinions about diverse API aspects, such as performance, usability and security. We developed machine learning techniques to automatically categorize each opinionated sentence associated with an API into 11 different categories: (i) Performance, (ii) Usability, (iii) Security, (iv) Compatibility, (v) Portability, (vi) Legal, (vii) Bug, (viii) Community, (ix) Documentation, (x) Other general features, and (xi) Only sentiment. The first nine categories are frequently asked for in the responses of our surveys (e.g., Q11 and Q15 in our primary survey).

2) Statistical summarization. Developers asked for automated analysis to see trends of usage of an API based on sentiment analysis. We developed techniques to create time series of positivity and negativity towards an API by analyzing the sentiments expressed about the API in the different forum posts.

3) API Comparator. The most-asked-for tool was a dashboard to compare APIs and to see competing APIs given an API. We developed two techniques to facilitate comparison between APIs. a) We rank APIs by aspect to offer recommendations; for example, based on the performance aspect the API Jackson is the most popular for JSON-based tasks in Java, but for the usability aspect it is the Google GSON API. b) We further apply a collocation algorithm to the forum posts for each API mention, and show which other APIs were positively or negatively reviewed in the same forum post. This analysis can reveal other similar APIs to developers if they are not satisfied with their initially selected API.
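The co-mention step of the comparator can be illustrated with a minimal sketch along the following lines; the posts and polarities are hypothetical hand-coded tuples, whereas the real system operates on the mined opinions described above.

```python
# A minimal sketch of the co-mention analysis behind the API comparator:
# count, for each API, which other APIs are mentioned in the same post
# and with which polarity. The post data below are illustrative.
from collections import Counter
from itertools import combinations

# (mentioned APIs, overall post polarity) per forum post -- toy data
posts = [
    ({"jackson", "gson"}, "positive"),
    ({"jackson", "org.json"}, "negative"),
    ({"gson", "org.json"}, "positive"),
    ({"jackson", "gson"}, "positive"),
]

co_mentions = Counter()
for apis, pol in posts:
    for a, b in combinations(sorted(apis), 2):
        co_mentions[(a, b, pol)] += 1

# For a given API, list competing APIs ranked by co-mention frequency.
target = "jackson"
for (a, b, pol), n in co_mentions.most_common():
    if target in (a, b):
        other = b if a == target else a
        print(f"{target} co-mentioned with {other} ({pol}): {n} post(s)")
```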
4) Summarize opinions by topics and paragraphs. In our pilot survey, we asked developers about their preference for summarizing opinions by topics and in short paragraphs. The two options were ranked lower than the aspect-based categorization, but were still considered useful. We investigated existing algorithms to summarize opinions along the two options. Each algorithm takes as input all the opinionated sentences (one bucket for positive and another for negative sentences). a) We applied the widely used topic modeling algorithm LDA (Latent Dirichlet Allocation) [70] to find topics in the input and to cluster the opinionated sentences by topic. b) We applied four algorithms based on extractive and abstractive summarization of natural language texts to produce a summary of the input as a paragraph. The details of the summarization algorithms and their evaluation are the subject of [35].

Needs for Summarization of API Usage Scenarios (R3)
Such tools can be integrated within IDEs to assist developers during their API-based tasks. As a first step towards producing summaries of usage scenarios of an API as a form of API documentation, we formulated the following tool design requirements:

1) Show situationally relevant usage scenarios together. We developed an algorithm to find code examples of an API that are conceptually similar, i.e., the tasks supported by the code examples may be closely related to each other. For example, while using the Apache HttpClient API, developers first establish a connection to an HTTP server and then send or receive messages using the connection [110].

2) Integrate usage scenarios into API documentation. In our previous study, we found that the official API documentation can often be obsolete or incomplete [6]. In our primary survey, developers asked for a unified documentation combining API usage scenarios from developer forums with the official API documentation. One such integration is the Live API documentation proposed by Subramanian et al. [64], i.e., linking code examples from Stack Overflow into the Javadoc of each API class. We developed algorithms to produce type-based usage summaries of an API. In a type-based summary of an API, the usage scenarios of each type (e.g., a class) of the API are grouped and further summarized into situationally relevant clusters.

3) Provide situationally relevant information for an API usage. A development task often requires the usage of more than one API. While the above two summaries are focused on producing summaries for a single API, we still need information on whether and how other APIs are co-used with the API of interest in the usage scenarios. We used collocation algorithms to find APIs that are co-used together, and other API types that are co-used together with an API type of interest. The development and evaluation of the usage summarization algorithm is the subject of our recent publication [53].
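Requirement 1 above (grouping conceptually similar usage scenarios) can be approximated by clustering the scenarios' natural-language descriptions with TF-IDF and cosine similarity. The sketch below is a simplified stand-in for our actual algorithm [53]; the descriptions and the similarity threshold are illustrative assumptions.

```python
# A minimal sketch of grouping conceptually similar usage scenarios by
# the TF-IDF cosine similarity of their descriptions. Toy data only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

descriptions = [
    "Open an HTTP connection with Apache HttpClient",
    "Establish a connection to an HTTP server",
    "Parse a JSON string into a Java object",
    "Send a GET request over an existing connection",
]

X = TfidfVectorizer(stop_words="english").fit_transform(descriptions)
sim = cosine_similarity(X)

# Greedily cluster scenarios whose descriptions are similar enough.
threshold, clusters, assigned = 0.2, [], set()
for i in range(len(descriptions)):
    if i in assigned:
        continue
    cluster = [j for j in range(len(descriptions))
               if j not in assigned and sim[i, j] >= threshold]
    assigned.update(cluster)
    clusters.append(cluster)

for c in clusters:
    print([descriptions[j] for j in c])
```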
Needs for Opinion Quality Analysis (R4)
In our surveys, the developers expressed their concerns about the trustworthiness of the provided opinions, in particular those that contain strong bias. Since by definition an "opinion" is a "personal view, belief, judgment, attitude" [111], all opinions are biased. However, developers associate biased opinions with noise, defining them as the ones not supported by facts (e.g., links to the documentation, code snippets, appropriate rationale). While the detection of spam (e.g., intentionally biased opinions) is an active research area [77], it is notable that developers are mainly concerned about the unintentional bias that they associate with user experience. Developers are also concerned about the overall writing quality. As a first step towards assisting developers in analyzing the quality of the provided opinions, we formulated the following design requirements:

1) Contrastive viewpoint summarization. We implemented the contrastive opinion clustering algorithm proposed by Kim and Zhai [112] to find pairs of opinions that offer opposing views about an API feature.

2) Ranking by recency. Developers in our surveys explicitly mentioned the necessity of finding more recent opinions about APIs. The reason is that API versions evolve quickly and some old features can become obsolete or changed. We rank the opinionated sentences of an API by recency, i.e., the most recent opinion is placed at the top.

3) Tracing opinions to posts. Developers in our surveys highlighted the necessity of investigating the context of a provided opinion, such as the features about which the opinion is provided, the particular configuration parameters used in the usage example, etc. To enable such exploration, we link each mined opinionated sentence about an API to the specific forum post from where the opinion was mined. A number of additional research avenues can be explored in this direction, such as the design of a theoretical framework to define the quality of opinions about APIs, or the development of a new breed of recommender systems that can warn users of potential bias in a forum opinion.

Needs for an API Portal (R5)
In our primary survey, developers explicitly asked for a dedicated portal for APIs where they can search and analyze the opinions and usage about APIs that are posted in developer forums such as Stack Overflow (Q12 in the primary survey). Based on the themes and developer needs that emerged from the study, we developed a prototype tool named Opiner. The tool is developed as an opinion search engine where developers can search for an API by its name to explore the positive and negative opinions provided for the API by the developers in the forum posts. Opiner's infrastructure supports the implementation and deployment of all the above requirements (R1-R4). Since its online deployment on October 30, 2017, Opiner has been accessed and used by developers from 57 countries on all the continents except Antarctica (as of July 29, 2018, according to Google Analytics).

• Opiner API Review Summarizer. In Figure 14, we show screenshots of the user interface of the Opiner API review summarizer. The UI of Opiner is a search engine, where users can search for APIs by their names to look for the mined and categorized opinions about the API. There are two search options on the front page of Opiner. A user can search for an API by its name using (1), and can also investigate the APIs by their aspects using (2). Both of the search options provide auto-completion and live recommendations during the search. When a user searches for an API by name, such as 'jackson' in (1), the resulting query produces a link to all the mined opinions of the API. When the user clicks the link (i.e., the link with the same name as the API, 'com.fasterxml.jackson'), all of the opinions about the API are shown. The opinions are grouped as positive and negative (see (7)).
The opinions are further categorized into the API aspects by applying the aspect detectors to the detected opinions (8). By clicking on each aspect, a user can see the opinions about the API (see (9)). Each opinion is linked to the corresponding post from where the opinion was mined (using the 'details' link in (9)). When a user searches by an API aspect (e.g., performance as in (2)), the user is shown the top-ranked APIs for the aspect (e.g., most reviewed, most negatively reviewed, in (4)). For each API on the Opiner page where the top-ranked APIs are shown, we show the most recent three positive and negative opinions about the API (5). If the user is interested in further exploring the reviews of an API from (5), he can click 'Explore All Reviews', which takes him to the page in (7). The system architecture of the Opiner API review summarizer is the subject of our tool demo paper [23].

• Opiner API Usage Scenario Summarizer. In Figure 15, we show screenshots of the Opiner usage scenario summarizer. The user can search for an API by name to see the different usage summaries of the API (1). The front page also shows the top 10 APIs for which the most code examples were found (2). Upon clicking on a search result, the user is navigated to the usage summary page of the API (3), where the summaries are automatically mined and summarized from Stack Overflow. A user can also click on each of the top 10 APIs listed on the front page. An example usage scenario in Opiner is shown in (4). The reactions included in a usage scenario can be simply a "thank you" note (when the code example serves the purpose) or more elaborate (when the code example has certain limitations or specific usage requirements).

In Figure 16, we show an overview of the concept-based API usage summary for the API Jackson in Opiner. In Opiner, each concept consists of one or more similar API usage scenarios. Each concept is titled with the title of its most representative usage scenario (discussed below). In (1) of Figure 16, we show the most recent three concepts for the API Jackson. The concepts are sorted by the time of their most representative usage scenario; the most recent concept is placed at the top. Upon clicking on a concept title, the most representative scenario for the concept is shown in (2). Each concept is given a star rating reflecting the overall sentiment towards all the usage scenarios under the concept (see (2)). Other relevant usage scenarios of the concept are grouped under a 'See Also' section (see (3)), and each usage scenario under 'See Also' can be further explored (see (4)). Each usage scenario is linked to the corresponding post in Stack Overflow where the code example was found (by clicking on the 'details' link after the description text of a scenario). The summaries page of an API in Opiner also contains a 'Search' box, which developers can use to search for the review and usage summaries of another API (e.g., a competing API).

Effectiveness of Opiner
We investigated the effectiveness of Opiner using both empirical evaluation and user studies. We conducted the empirical evaluation to compute the performance of the mining techniques in Opiner (e.g., the precision of the sentiment detection and opinion association algorithms). We observed a reasonable degree of precision in our mining techniques, such as more than 0.73 for sentiment detection and more than 0.90 for our algorithm associating opinionated sentences to APIs.
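As a concrete illustration of this kind of evaluation, the sketch below computes per-class precision over a small labeled sample with scikit-learn; the gold and predicted labels are hypothetical placeholders, not the paper's evaluation data.

```python
# A minimal sketch of a precision computation for sentiment detection.
from sklearn.metrics import precision_score

gold      = ["positive", "negative", "positive", "neutral", "positive"]
predicted = ["positive", "negative", "neutral",  "neutral", "positive"]

# Per-class precision, macro-averaged over the sentiment classes.
p = precision_score(gold, predicted, average="macro", zero_division=0)
print(f"macro precision = {p:.2f}")
```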
We conducted a total of six user studies to assess the usefulness of the opinion and usage summaries in Opiner. We found that developers were able to select the right API with more accuracy while using Opiner and leveraging its opinion summaries. Additionally, we found that developers were able to complete their coding tasks with more accuracy, in less time, and with less effort while using the usage summaries in Opiner. The details of the user studies and the empirical evaluation are the subject of our recent papers [23], [51], [78], [106], [113]. We summarize the user studies below.

Fig. 14: Screenshots of the Opiner opinion search engine for API reviews.

Effectiveness of Opiner Review Summaries. We conducted two user studies to assess the usefulness of the opinion summaries in Opiner (the four studies assessing the usefulness of the usage scenario summaries are discussed below). In the first study, we compared our proposed summaries (aspect-based and statistical) against cross-domain review summarization techniques (summaries in paragraphs, topic-based summaries). We compared the summaries using five development scenarios: 1) selection of APIs among choices, 2) documentation of an API, 3) presentation of an API to others to justify its selection to team members, 4) staying aware of an API over time, and 5) authoring a competing API to address the problems of an existing API. A total of 10 professional software developers were asked to rate the usefulness of the summaries for the five tasks. A number of criteria were used for the ratings, such as the usability of an API over other APIs, the completeness of the summaries to produce documentation, etc. The aspect-based summaries were rated as the most useful (more than 85% rating), followed by the statistical summaries (more than 70%). The paragraph-based summaries were considered the least useful (less than 40% rating).

We conducted the second study on-site at a software company. A total of nine software developers from the company participated in the study. The developers were given access to the Opiner online tool and were asked to complete the tasks using Opiner and Stack Overflow. Both tasks involved the selection of an API among two competing APIs. For example, the first task asked the developers to pick one of two APIs (GSON and org.json); the criteria used to select were the usability and licensing restrictions of the two APIs. The developers were asked to consult Stack Overflow and the review summaries in Opiner. The developers made the best decisions while using Stack Overflow together with Opiner, rather than Stack Overflow alone. All the developers considered Opiner to be usable and wished to use it in their daily development tasks.

Effectiveness of Opiner Usage Summaries. We conducted four user studies to assess the usefulness of the usage scenario summaries in Opiner.
The first study involved the coding of four development tasks, and the others involved surveys. A total of 33 professional software developers and students participated in the four user studies (33 in the coding tasks, and 31 of the 34 in each of the surveys). In the coding study, each participant was given four tasks, for which they wrote code. They used four different development resources (one for each of the four tasks): Stack Overflow, the official API documentation, the Opiner usage summaries, and everything including search engines. We observed an average accuracy of 0.62 in the provided solutions while the participants used the Opiner usage summaries. The second best was the everything setting with an accuracy of 0.55, followed by an accuracy of 0.5 when the participants used only Stack Overflow. The participants showed the least accuracy (0.46) while using the official API documentation. In the subsequent surveys, more than 80% of the participants agreed that the usage summaries in Opiner can offer improvements over the official API documentation and the developer forums. For example, the developers recommended that the usage summaries should be integrated with the official API documentation.

Fig. 15: Screenshots of the Opiner API usage summarizer.

Fig. 16: Concept-based summary for the API Jackson.

THREATS TO VALIDITY
We now discuss the threats to the validity of our study, following the guidelines for empirical studies [114].

Construct Validity
Construct validity threats concern the relation between theory and observations. In our study, they could be due to measurement errors. The accuracy of the open coding of the survey responses is subject to our ability to correctly detect and label the categories. The exploratory nature of such coding may have introduced researcher bias. To mitigate this, we coded 20% of the cards for four questions independently and measured the coder reliability on the next 20% of the cards. We report the measures of agreement in Table 8. While we achieved a high level of agreement, we nevertheless share the complete survey responses in an online appendix [89]. Maturation threats concern changes in a participant during the study due to the passage of time, such as changes in development priorities or environments (e.g., moving from open-source APIs to proprietary APIs). Intuitively, we expect to see a greater concentration of opinions about open-source APIs in the forums. None of these concerns are applicable to our surveys, because each survey was designed to take no more than 30 minutes per participant.

Internal Validity
Threats to internal validity refer to how well the research is conducted. In our case, it is about how well the design of the surveys allows us to choose among alternative explanations of the phenomenon. A high internal validity in the design can let us choose one explanation (e.g., tool support to assist in opinion analysis) over another (e.g., no need for a tool) with a high degree of confidence, because it avoids (potential) confounds. In our primary survey, we sought to avoid confounding factors by asking the participants open-ended questions. The questions with options (i.e., closed questions) were presented to the participants only after they had responded to the open-ended questions. The two types of questions were divided into separate sections, i.e., the participants could not see the closed questions when they answered the open-ended ones. Despite this, we observed similar findings in the responses to the two types of questions.
For example, one open-ended question was about the different factors in an API that can play a role in developers' decisions to choose the API (Q11); the paired closed question was Q15. In both sets of responses, we found that developers asked for similar API aspects about which they prefer to seek opinions, such as performance, usability, etc. Another threat could arise from the placement of the options in a multiple-choice question. Such placement may influence the participants to agree or disagree with an option more than with the other options, such as the option that is placed at the top (i.e., listed first). We did not observe any such pattern in the responses. For example, consider our question about tool support for opinion analysis (Q19 in the primary survey and Q7 in the pilot survey). Both surveys followed the same ordering of the options for this question. Despite this, we observed the lowest rank for the option "sentiment miner" (the second option) in both surveys and the highest rank for the option "API comparator" (the fourth option). In addition, there was no significant difference in the preference of the participants towards a specific tool over another (see Figure 9). Therefore, all such tools were found favorable by the participants, regardless of their placement.

External Validity
Threats to external validity compromise the confidence in stating whether the study results are applicable to other groups. While our sample size of 900 developers out of a population of 88K developers is small, we received a 15.8% response rate in our primary survey. Moreover, the qualitative assessment of the responses offered us interesting insights into the needs and challenges of seeking and analyzing opinions about APIs. Nevertheless, it is desirable that future research replicate our study on a larger and/or different population of developers to make our findings more generic and robust. Due to the diversity of the domains where APIs can be used and developed, the provided opinions can be contextual; thus, the generalizability of the findings requires careful assessment. Such diversity may introduce sampling bias if developers from a given domain are under- or over-represented. One related threat could arise from our sampling of developers from GitHub (for the pilot survey) and Stack Overflow (for the primary survey): the sampling might have missed developers who do not use GitHub or Stack Overflow. As we noted in Section 3.5, we picked developers from GitHub and Stack Overflow because of their popularity among open-source developers. We observed similar findings between our pilot and primary surveys. Nevertheless, a replication of our study involving developers from other online forums could offer further validation towards the generalization of our findings across the larger domain of software engineering. Another potential threat of over-representation would be all the sampled developers being proficient in only one programming language (e.g., Javascript). This could happen because Javascript has been the most popular language on Stack Overflow over the last six years (according to the Stack Overflow survey of 2018 [115]). In our primary survey population of 88,021 Stack Overflow users, we observed that the Stack Overflow posts in which the users participated in 2017 corresponded to all the popular languages found in the Stack Overflow surveys.
Moreover, while our sampling attempted to assign each user to one of the top nine programming languages, those users also participated in discussions of APIs involving other programming languages. Therefore, their opinions in the primary survey may be representative of the overall Stack Overflow population. In our primary survey, we only collected responses from developers who visit developer forums to seek information about APIs. As we noted in Section 5, this decision was based on our observation from the pilot survey that developer forums are the primary resource for developers to seek such information. In the primary survey, we did not collect additional information from the participants who do not visit developer forums. Further probing of such participants to understand why they do not use developer forums could have offered us insights into the shortcomings of developer forums in meeting such needs. Intuitively, such insights about problems in developer forums can be more concrete from a participant who actually uses them. Therefore, to understand the shortcomings of developer forums, we probed the participants with a number of questions, such as "What are your biggest challenges while seeking opinions about APIs from developer forums?" (Q6), "What areas can be positively/negatively affected by summarization of reviews from developer forums?" (Q13, Q14), and "What factors in a forum post can help you determine the quality of the provided opinions?" (Q7). The responses to those questions showed us that developers face numerous challenges while seeking opinions about APIs from forum posts (such as information overload, trustworthiness, etc.). We observed similar findings in our pilot survey. However, we could still have missed important insights from the participants who do not visit developer forums. We leave the analysis of such participants for future work.

Our survey samples are derived from GitHub and Stack Overflow developers. We picked the pilot survey participants from a list of 4,500 GitHub users. We randomly selected the primary survey participants from a list of more than 88K Stack Overflow users, each of whom also had an account on GitHub. We designed the primary survey by leveraging lessons learned from our pilot survey (see Section 4.4), and we attempted to fix each of the observed problems. For example, we applied stratification in our sampling to ensure that we involve developers from Stack Overflow who are proficient in different programming languages. The stratification is necessary because a random sample of the 88K Stack Overflow users would otherwise have picked more developers from the languages that are more popular among developers (e.g., Javascript, Java, and Python, in terms of the number of users participating in the discussions of posts related to the language). In addition, we attempted to include developers who can be considered experts in those programming languages. Intuitively, the expertise of a developer in a programming language can be correlated with his or her reputation in Stack Overflow posts related to the language. The higher the reputation score of a user, the more likely it is that those reputation points were awarded by many different users on Stack Overflow, and hence the higher the likelihood that the user is considered an expert within the community. An expert user is thus more likely to offer concrete information about the APIs used in the language, based on real-world experience.
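A minimal pandas sketch of this stratified, reputation-aware sampling follows; the user table, column names, and per-stratum quota are illustrative assumptions, not the study's actual data.

```python
# A minimal sketch of stratified sampling by programming language, keeping
# the highest-reputation users per stratum as an expertise proxy.
import pandas as pd

users = pd.DataFrame({
    "user_id":    range(1, 9),
    "language":   ["java", "java", "python", "python",
                   "javascript", "javascript", "c#", "c#"],
    "reputation": [9000, 150, 7000, 300, 8000, 220, 6400, 90],
})

# Within each language stratum, keep the top-reputation users, then invite
# the same number of users per stratum.
per_language = 1
sample = (
    users.sort_values("reputation", ascending=False)
         .groupby("language")
         .head(per_language)
)
print(sample)
```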
Reliability Validity
Reliability threats concern the possibility of replicating this study. We attempted to provide all the necessary details to replicate the study. The anonymized survey responses are provided in our online appendix [89]. The complete labels and the list of quotes used in the primary survey are also provided in the online appendix.

CONCLUSIONS AND FUTURE WORK
Opinions can shape the perception and decisions of developers related to the selection and usage of APIs. The plethora of open-source APIs and the advent of developer forums have influenced developers to publicly discuss their experiences and share opinions about APIs. To better understand the role of opinions and how developers use them, we conducted a study based on two surveys of a total of 178 software developers. We are aware of no such previous surveys in the field of software engineering. The design of the two surveys and the survey responses form the first major contribution of this paper. The survey responses offer insights into how and why developers seek opinions, the challenges they face, their assessment of the quality of the available opinions, their needs for opinion summaries, and the desired tool support to navigate through opinion-rich information. We observed the following major findings in our analysis:

1) Developers seek opinions about APIs to support diverse development needs, such as the selection of an API among available choices. A primary source of such opinions is the online developer forums, such as Stack Overflow.

2) Developers face several challenges associated with noise, trust, bias, and API complexity when seeking opinions. High-quality opinions are typically viewed as clear, short and to the point, bias-free, and supported by facts.

3) Developers feel frustrated by the amount of available API opinions and desire tool support to efficiently analyze the opinions, such as an API comparator, an opinion summarizer, an API sentiment trend analyzer, etc.

The findings and insights gained from this study helped us build a prototype tool named Opiner. The Opiner framework can be used to mine and summarize opinions about APIs in a fully automatic way. We observed promising results from leveraging the Opiner API review summaries to support diverse development needs. The detailed analysis of the survey responses and the findings form the second major contribution of this paper. Our future work is broadly divided into two directions:

1) Tool support by developer experience. As we noted in Section 7.3, less experienced developers show more interest in the opinions of others and were also more distinct in their preference for tools to support such analysis (API comparator and trend analyzer). We plan to investigate the particular characteristics of less experienced developers that could motivate them to value the opinions of others and to use the tools. Such insights can then be used to motivate the design of tools and APIs focused on demographic needs. It is thus desirable to replicate our study on a randomized sample of Stack Overflow users, because it may give us a higher concentration of less experienced developers than the sample we used (i.e., users with high reputation in Stack Overflow).

2) Sentiments vs. emotions. We plan to investigate the role of finer-grained emotions (e.g., anger, fear, etc.) in daily development activities involving APIs.
In particular, we are interested in conducting more surveys and developing tools to advance the knowledge on the impact of emotions in API usage analysis.
Multilingual Code-switching Identification via LSTM Recurrent Neural Networks

This paper describes the HHU-UH-G system submitted to the EMNLP 2016 Second Workshop on Computational Approaches to Code Switching. Our system ranked first place for Arabic (MSA-Egyptian) with an F1-score of 0.83 and second place for Spanish-English with an F1-score of 0.90. The HHU-UH-G system introduces a novel unified neural network architecture for language identification in code-switched tweets for both Spanish-English and the MSA-Egyptian dialect pair. The system makes use of word- and character-level representations to identify code-switching. For the MSA-Egyptian dialect pair, the system does not rely on any kind of language-specific knowledge or linguistic resources, such as Part-Of-Speech (POS) taggers, morphological analyzers, gazetteers or word lists, to obtain state-of-the-art performance.

Introduction
Code-switching can be defined as the act of alternating between elements of two or more languages or language varieties within the same utterance. The main language is sometimes referred to as the 'host language' and the embedded language as the 'guest language' (Yeh et al., 2013). Code-switching is a wide-spread linguistic phenomenon in modern informal user-generated data, whether spoken or written. With the advent of social media, such as Facebook posts, Twitter tweets, SMS messages, user comments on articles, blogs, etc., this phenomenon is becoming more pervasive. Code-switching does not only occur across sentences (inter-sentential) but also within the same sentence (intra-sentential), adding a substantial complexity dimension to the automatic processing of natural languages (Das and Gambäck, 2014). This phenomenon is particularly dominant in multilingual societies (Milroy and Muysken, 1995), migrant communities (Papalexakis et al., 2014), and in other environments shaped by social changes through education and globalization (Milroy and Muysken, 1995). There are also social, pragmatic and linguistic motivations for code-switching, such as the intent to express group solidarity, establish authority (Chang and Lin, 2014), lend credibility, or make up for lexical gaps. Code-switching does not necessarily occur only between two distinct languages, like Spanish-English (Solorio and Liu, 2008), Mandarin-Taiwanese (Yu et al.) and Turkish-German (Özlem Çetinoglu, 2016); it can also happen between three languages, e.g., Bengali, English and Hindi (Barman et al., 2014), and in some extreme cases between six languages: English, French, German, Italian, Romansh and Swiss German (Volk and Clematide, 2014). Moreover, this phenomenon can occur between two different dialects of the same language, as between Modern Standard Arabic (MSA) and the Egyptian dialect (Elfardy and Diab, 2012), or MSA and Moroccan Arabic (Samih and Maier, 2016a; Samih and Maier, 2016b). The current shared task is limited to two scenarios: a) code-switching between two distinct languages, Spanish-English, and b) between two language varieties, the MSA-Egyptian dialect pair.

With the massive increase in code-switched writing in user-generated content, it has become imperative to develop tools and methods to handle and process this type of data. Identification of the languages used in a sentence is the first step in doing any kind of text analysis. For example, most data found in social media produced by bilingual people is a mixture of two languages.
In order to process or translate this data into some other language, the first step is to detect text chunks and identify the language each chunk belongs to. The other categories, like named entities, mixed, ambiguous and other, are also important for further language processing.

Related Works
Code-switching has attracted considerable attention in theoretical linguistics and sociolinguistics over several decades. However, until recently there has not been much work on the computational processing of code-switched data. The first computational treatment of this linguistic phenomenon can be found in (Joshi, 1982), which introduces a grammar-based system for parsing and generating code-switched data. More recently, the detection of code-switching has gained traction, starting with the work of Solorio and Liu (2008) and culminating in the first shared task at the "First Workshop on Computational Approaches to Code Switching" (Solorio et al., 2014). Moreover, there have been efforts in creating and annotating code-switching resources (Özlem Çetinoglu, 2016; Elfardy and Diab, 2012; Maharjan et al., 2015; Lignos and Marcus, 2013). Maharjan et al. (2015) used a user-centric approach to collect code-switched tweets for the Nepali-English and Spanish-English language pairs. They used two methods, namely a dictionary-based approach and CRF GE, and obtained F1 scores of 86% and 87% for Spanish-English and Nepali-English, respectively, on the word-level language identification task. Lignos and Marcus (2013) collected a large number of monolingual Spanish and English tweets and used a ratio-list method to tag each token with its dominant language. Their system obtained an accuracy of 96.9% on the word-level language identification task.

The task of detecting code-switching points is generally cast as a sequence labeling problem. Its difficulty depends largely on the language pair being processed. Several projects have treated code-switching between MSA and Egyptian Arabic. For example, Elfardy et al. (2013) present a system for the detection of code-switching between MSA and Egyptian Arabic which selects a tag based on the sequence with the maximum marginal probability, considering 5-grams. A later version of the system is named AIDA2 (Al-Badrashiny et al., 2015); it is a more complex hybrid system that incorporates different classifiers and components, such as language models, a named entity recognizer, and a morphological analyzer. The classification strategy is built as a cascaded voting system, whereby a Conditional Random Field (CRF) classifier tags each word based on the decisions from four other underlying classifiers. The participants of the "First Workshop on Computational Approaches to Code Switching" applied a wide range of machine learning and sequence labeling algorithms, some using additional online resources like an English dictionary, the Hindi-Nepali wiki, DBpedia, online dumps, LexNorm, etc., to tackle the problem of language detection in code-switched tweets for Nepali-English, Spanish-English, Mandarin-English and MSA-Dialects (Solorio et al., 2014). For MSA-Dialects, two CRF-based systems, a system using language-independent extended Markov models, and a system using a CRF autoencoder were presented; the latter proved to be the most successful.
The majority of the systems dealing with word-level language identification in code-switching rely on linguistic resources (such as named entity gazetteers and word lists) and linguistic information (such as POS tags and morphological analysis), and they use machine learning methods that have typically been applied to sequence labeling problems, such as support vector machines (SVM), conditional random fields (CRF) and n-gram language models. Very few, however, have recently turned to recurrent neural networks (RNN) and word embeddings, with remarkable success. Chang and Lin (2014) used an RNN architecture and combined it with pre-trained word2vec skip-gram word embeddings, a log-bilinear model that allows words with similar contexts to have similar embeddings. The word2vec embeddings were trained on a large Twitter corpus of random samples without filtering by language, assuming that different languages tend to appear in different contexts, allowing the embeddings to provide a good separation between languages. They showed that their system outperforms the best SVM-based systems reported in the EMNLP'14 Code-Switching Workshop. Vu and Schultz (2014) proposed adapting recurrent neural network language models to different code-switching behaviors and even using them to generate artificial code-switched text data. Adel et al. (2013) investigated the application of RNN language models and factored language models to the task of identifying code-switching in speech, and reported a significant improvement compared to the traditional n-gram language model.

Our work is similar to that of Chang and Lin (2014) in that we use RNNs and word embeddings. The difference is that we use long short-term memory (LSTM), with the added advantage of memory cells that efficiently capture long-distance dependencies. We also combine word-level with character-level representations to obtain morphology-like information about words.

Model
In this section, we provide a brief description of LSTMs and introduce the different components of our code-switching detection model. The architecture of our system, shown in Figure 1, bears resemblance to the models introduced by Huang et al.

Long Short-term Memory
A recurrent neural network (RNN) belongs to a family of neural networks suited for modeling sequential data. Given an input sequence $x = (x_1, \ldots, x_n)$, an RNN computes the output vector $y_t$ of each word $x_t$ by iterating the following equations from $t = 1$ to $n$:

$$h_t = f(W_{xh} x_t + W_{hh} h_{t-1} + b_h)$$
$$y_t = W_{hy} h_t + b_y$$

where $h_t$ is the hidden state vector, $W$ denotes a weight matrix, $b$ denotes a bias vector, and $f$ is the activation function of the hidden layer. Theoretically, RNNs can learn long-distance dependencies; in practice, however, they fail to do so due to vanishing/exploding gradients (Bengio et al., 1994). To solve this problem, Hochreiter and Schmidhuber (1997) introduced the long short-term memory RNN (LSTM). The idea consists in augmenting the RNN with memory cells to overcome difficulties with training and to efficiently cope with long-distance dependencies. The output of the LSTM hidden layer $h_t$ given input $x_t$ is computed via the following intermediate calculations (Graves, 2013):

$$i_t = \sigma(W_{xi} x_t + W_{hi} h_{t-1} + W_{ci} c_{t-1} + b_i)$$
$$f_t = \sigma(W_{xf} x_t + W_{hf} h_{t-1} + W_{cf} c_{t-1} + b_f)$$
$$c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_{xc} x_t + W_{hc} h_{t-1} + b_c)$$
$$o_t = \sigma(W_{xo} x_t + W_{ho} h_{t-1} + W_{co} c_t + b_o)$$
$$h_t = o_t \odot \tanh(c_t)$$

where $\sigma$ is the logistic sigmoid function, and $i$, $f$, $o$ and $c$ are respectively the input gate, forget gate, output gate and cell activation vectors. More interpretation of this architecture can be found in (Lipton et al., 2015).
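For concreteness, a single LSTM step implementing the equations above can be sketched in NumPy as follows; the peephole connections of the Graves (2013) formulation are omitted for brevity, and the weight shapes are illustrative.

```python
# A minimal NumPy sketch of one LSTM step (no peephole connections).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step. W maps [x_t; h_prev] to the four gate pre-activations."""
    z = W @ np.concatenate([x_t, h_prev]) + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)   # input, forget, output gates
    c_t = f * c_prev + i * np.tanh(g)              # cell state update
    h_t = o * np.tanh(c_t)                         # hidden state
    return h_t, c_t

d_in, d_hid = 3, 4
rng = np.random.default_rng(0)
W = rng.normal(size=(4 * d_hid, d_in + d_hid))
b = np.zeros(4 * d_hid)
h, c = np.zeros(d_hid), np.zeros(d_hid)
h, c = lstm_step(rng.normal(size=d_in), h, c, W, b)
print(h)
```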
Figure 2 illustrates a single LSTM memory cell (Graves and Schmidhuber, 2005).

Word- and Character-level Embeddings
Character embeddings. A very important element of the recent success of many NLP applications is the use of character-level representations in deep neural networks. This has been shown to be effective for numerous NLP tasks (Collobert et al., 2011; dos Santos et al., 2015), as it can capture word morphology and reduce out-of-vocabulary rates. This approach has also been especially useful for handling languages with rich morphology and large character sets (Kim et al., 2016). We also find this important for our code-switching detection model, particularly for the Spanish-English data, as the two languages have different orthographic sequences that are learned during the training phase.

Word pre-trained embeddings. Another crucial component of our model is the use of pre-trained vectors. The basic assumption is that words from different languages (or language varieties) may appear in different contexts, so word embeddings learned from a large multilingual corpus should provide an accurate separation between the languages at hand. Following Collobert et al. (2011), we use pre-trained word embeddings for Arabic, Spanish and English to initialize our look-up table. Words with no pre-trained embeddings are randomly initialized with uniformly sampled embeddings. To use these embeddings in our model, we simply replace the one-hot word representation with its corresponding 300-dimensional vector. For more details about the data we use to train our word embeddings for Arabic and Spanish-English, see Section 4.

Conditional Random Fields (CRF)
When using an LSTM RNN for sequence classification, the resulting probability distribution at each step is supposed to be independent of the others. Still, we assume that code-switching tags are highly related to each other. To exploit this kind of labeling constraint, we resort to Conditional Random Fields (CRF) (Lafferty et al., 2001). A CRF, a sequence labeling algorithm, predicts labels for a whole sequence rather than for its parts in isolation, as shown in Equation 1:

$$p(s_1, \ldots, s_m \mid x_1, \ldots, x_m) \quad (1)$$

Here, $s_1$ to $s_m$ represent the labels of tokens $x_1$ to $x_m$ respectively, where $m$ is the number of tokens in a given sequence. After we have this probability value for every possible combination of labels, the actual sequence of labels for this set of tokens is the one with the highest probability. Equation 2 shows the formula for calculating the probability value from Equation 1:

$$p(s_1, \ldots, s_m \mid x_1, \ldots, x_m) = \frac{\exp\left(\sum_{j=1}^{m} w \cdot \Phi(x, j, s_{j-1}, s_j)\right)}{\sum_{s' \in S^m} \exp\left(\sum_{j=1}^{m} w \cdot \Phi(x, j, s'_{j-1}, s'_j)\right)} \quad (2)$$

Here, $S$ is the set of labels. In our case, $S = \{$lang1, lang2, ambiguous, ne, mixed, other, fw, unk$\}$, and $w$ is the weight vector for weighting the feature vector $\Phi$.

Feature Templates
The feature templates extract feature values based on the current position of the token, the current token's label, the previous token's label, and the entire tweet. These functions normally output binary values (0 or 1) and can be represented mathematically as $\Phi(x, j, s_{j-1}, s_j)$. We use the following feature templates.

Morphological Features: In order to capture the information contained in the morphology of tokens, we used features such as: all upper case, title case, begins with punctuation, @, is digit, is alphanumeric, contains apostrophe, ends with a vowel, consonant-vowel ratio, has accented characters, and prefixes and suffixes of the current token and of its previous and next tokens.

Character n-gram Features: character bigrams and trigrams.

Word Features: This feature uses the token in its lowercased form (with the hash sign removed from the token). It also tries to capture the context surrounding the token by using the previous and next two tokens as features.

Shape Features: Collins (2002) defined a mapping from each character to its type. The type function blinds all characters but preserves the case information. Digits are replaced by # and all other punctuation characters are copied as they are. For example, "London" is transformed to "Xxxxxx" and "PG-500" is transformed to "XX-###". Another variation of the same function maps each character to its type, but repeated characters are not repeated in the mapping, so "London" is transformed to "Xx*". We use both of these variations in our system. These features are designed to capture named entities.

Word-Character Representations: The final representations from the char-word LSTM model, before they are fed to the softmax layer, are used as features for the CRF.
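The two shape mappings can be sketched directly from the examples above:

```python
# A minimal sketch of the Collins (2002) word-shape features: one mapping
# that preserves case/digit/punctuation classes, and a compressed variant
# that collapses runs of the same class.
import re

def shape(token):
    """'London' -> 'Xxxxxx', 'PG-500' -> 'XX-###'."""
    out = []
    for ch in token:
        if ch.isupper():   out.append("X")
        elif ch.islower(): out.append("x")
        elif ch.isdigit(): out.append("#")
        else:              out.append(ch)   # punctuation copied as-is
    return "".join(out)

def short_shape(token):
    """'London' -> 'Xx*': collapse repeated character classes."""
    return re.sub(r"(.)\1+", r"\1*", shape(token))

for t in ["London", "PG-500", "jaja"]:
    print(t, shape(t), short_shape(t))
```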
LSTM-CRF for Code-switching Detection
Our neural network architecture consists of the following three layers:
• Input layer: comprises both character and word embeddings.
• Hidden layer: two LSTMs map both word and character representations to hidden sequences.
• Output layer: a softmax or a CRF computes the probability distribution over all labels.

At the input layer, a look-up table is randomly initialized, mapping each word in the input to d-dimensional vectors for sequences of characters and sequences of words. At the hidden layer, the output from the character and word embeddings is used as the input to two LSTM layers to obtain fixed-dimensional representations for characters and words. At the output layer, a softmax or a CRF is applied over the hidden representations of the two LSTMs to obtain the probability distribution over all labels. Training is performed using stochastic gradient descent with momentum, optimizing the cross-entropy objective function.

Optimization
Due to the relatively small size of the training and development data sets in both Arabic and Spanish-English, overfitting poses a considerable challenge for our code-switching detection system. To make sure that our model learns significant representations, we resort to dropout (Hinton et al., 2012) to mitigate overfitting. The basic idea of dropout consists in randomly omitting a certain percentage of the neurons in each hidden layer for each presentation of the samples during training. This encourages each neuron to depend less on other neurons to detect code-switching patterns. We apply dropout masks to both embedding layers before inputting them to the two LSTMs, and to their output vectors, as shown in Fig. 1. In our experiments we find that dropout decreases overfitting and improves the overall performance of the system.
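A simplified, word-level sketch of this architecture in Keras follows; the character channel and the CRF output variant are omitted for brevity, and the vocabulary size and layer dimensions are illustrative assumptions.

```python
# A minimal Keras sketch of a word-level LSTM tagger with dropout on the
# embeddings and on the LSTM outputs, trained with SGD plus momentum.
import numpy as np
from tensorflow.keras import Input, layers, models, optimizers

vocab_size, embed_dim, hidden_dim, n_labels, max_len = 5000, 300, 100, 8, 40

model = models.Sequential([
    Input(shape=(max_len,)),
    layers.Embedding(vocab_size, embed_dim),
    layers.Dropout(0.5),                       # dropout on the embeddings
    layers.LSTM(hidden_dim, return_sequences=True),
    layers.Dropout(0.5),                       # dropout on the LSTM outputs
    layers.TimeDistributed(layers.Dense(n_labels, activation="softmax")),
])
model.compile(optimizer=optimizers.SGD(momentum=0.9),
              loss="sparse_categorical_crossentropy")

# Toy batch: 2 tweets of 40 token ids each, with per-token label ids.
X = np.random.randint(0, vocab_size, size=(2, max_len))
y = np.random.randint(0, n_labels, size=(2, max_len))
model.fit(X, y, epochs=1, verbose=0)
```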
Dataset
The shared task organizers made available tagged datasets for the Spanish-English and Arabic (MSA-Egyptian) code-switched language pairs. The Spanish-English dataset consists of 8,733 tweets (139,539 tokens) as the training set, 1,587 tweets (33,276 tokens) as the development set, and 10,716 tweets (121,446 tokens) as the final test set. Similarly, the Arabic (MSA-Egyptian) dataset consists of 8,862 tweets (185,928 tokens) as the training set, 1,117 tweets (20,688 tokens) as the development set, and 1,262 tweets (20,713 tokens) as the final test set.

For Arabic, we trained word embeddings using word2vec (Mikolov et al., 2013) on a corpus with a total size of 383,261,475 words, consisting of dialectal texts from Facebook posts (8,241,244 words), Twitter tweets (2,813,016 words), user comments on news (95,241,480 words), and MSA texts from news articles (276,965,735 words). Likewise, for Spanish-English, we combined the English Gigaword corpus (Graff et al., 2003) and the Spanish Gigaword corpus (Graff, 2006) before training word embeddings on the combined corpus.

Data preprocessing: We transliterated the Arabic script to SafeBuckwalter (Roth et al., 2008), a character-to-character mapping that replaces the Arabic UTF-8 alphabet with Latin characters to reduce size and streamline processing. Also, in order to reduce data sparsity, we converted all Persian numerals to Arabic numerals (e.g., 1, 2), converted Arabic punctuation marks to their Latin counterparts (e.g., ',' and ';'), removed kashida (the elongation character) and vowel marks, and separated punctuation marks from words.

Experiments and Results
We explored different combinations of hand-crafted features (Section 3.3.1), word LSTM, and char-word LSTM models with CRF and softmax classifiers to identify the best system. Tables 1 and 2 show the results for different settings for Spanish-English and MSA-Egyptian on the development dataset, respectively. For the Spanish-English dataset, we find that combining the character and word representations learned with a char-word LSTM system with hand-crafted features, and then using a CRF as the sequence classifier, gives the highest overall accuracy of 0.963. We also notice that the addition of character and word representations improves the F1-score for named entity and unknown tokens. For the MSA-Egyptian dataset, we find that a char-word LSTM model with a softmax classifier is better than the CRF, as this setting gives us the highest overall accuracy of 0.90. Moreover, the addition of character and word representations to hand-crafted features improves the F1-score for named entities. Based on these results, our final system for Spanish-English uses a CRF with hand-crafted features and character and word representations learned with a char-word LSTM model, while for MSA-Egyptian it uses a char-word LSTM model with a softmax classifier. We do not use any kind of hand-crafted features for the MSA-Egyptian dataset.

Our final system outperformed all other participants' systems for the MSA-Egyptian dialects in terms of both tweet-level and token-level performance. For the Spanish-English dataset, our system ranks second in terms of tweet-level performance and third in terms of token-level accuracy. Tables 3, 4 and 5 show the final results for tweet- and token-level performance on the Spanish-English and MSA-Egyptian datasets. For the MSA dataset, the differences between our system and the second-highest scoring system are 8% and 2.7% in terms of tweet-level weighted F1 score and token-level accuracy, respectively. Similarly, for the Spanish-English dataset, the differences between our system and the highest scoring system are 1.3% and 0.6% in terms of tweet-level weighted F1 score and token-level accuracy. Our system consistently ranks first in language identification for the MSA-Egyptian dataset (5% and 4% above the second-highest system for lang1 and lang2, respectively). For the Spanish-English dataset, our system ranks third for lang1 (0.8% below the highest scoring system) and third for lang2 (0.4% below the highest scoring system).
However, our system consistently showed weaker performance in identifying ne tokens. Nonetheless, the overall results show that our system outperforms the other systems by a relatively large margin for the MSA-Egyptian dataset and lags behind the other systems by a relatively small margin for the Spanish-English dataset.

6 Analysis

6.1 What is being captured in char-word representations?
In order to investigate what the char-word LSTM model is learning, we feed the tweets from the Spanish-English and MSA-Egyptian development datasets to the system and take the vectors formed by concatenating the character and word representations before they are fed into the softmax layer. We then project them into 2D space by reducing the dimension of the vectors to 2 using Principal Component Analysis (PCA). We see, in Figure 3, that the trained neural network has learned to cluster the tokens according to their label type. Moreover, the position of tokens in 2D space also reveals that ambiguous and mixed tokens lie in between the lang1 and lang2 clusters. The other tokens occupy the space between the clusters for lang1, lang2 and ne, with more inclination toward lang1. We also notice that other in Arabic contains a large number of hashtags, due to the particular annotation scheme.

Table 6 shows the most likely and unlikely transitions learned by the CRF model for the Spanish-English dataset. It is interesting to see that the transitions from lang1 to lang1 and from lang2 to lang2 are much more likely than lang1 to lang2 or lang2 to lang1. This suggests that people, especially on Twitter, do not normally switch from one language to another while tweeting; even when they do switch, there are very few code-switch points in a tweet. However, people tweeting in Spanish have a greater tendency to use mixed tokens than people tweeting in English. We also dumped the top features for the task and found that word.hasaps is the top feature for identifying a token as English. Moreover, features like word.startpunt and word.lower:number are top features for identifying tokens as other. Character bigrams, trigrams, words, suffixes and prefixes are the top features for distinguishing between English and Spanish tokens.

Error Analysis for Arabic
When we conducted an error analysis on the output of our system on the Arabic development set, we found the following mistagging types:
• Punctuation marks, user names starting with '@' and emoticons are not tagged as other.
• Bad segmentation in the text affects the decision, e.g. EamormuwsaY "Amr Musa".
• There are cases of true ambiguity, e.g. 'kariym', which can be an adjective "generous" or a person's name "Kareem".
Based on this error analysis we developed a post-processor to handle deterministic annotation decisions. The post-processor applies the tag other in the following cases:
• Non-alphabetic characters, e.g. punctuation marks and emoticons.
• Strings in Latin script.
• Words starting with a @ character, which usually represent user names.

Error Analysis for Spanish-English
From Table 1, it is clear that the most difficult categories are ambiguous and mixed. These are rare tokens, and hence the system could not learn to distinguish them. While analyzing the mistakes on the development set, we found that the annotation of frequent tokens like jaja and haha, with their spelling variations, was inconsistent. Hence, even though the system was correctly predicting the labels, they were marked as incorrect.
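The deterministic post-processing rules for Arabic listed above translate directly into code. The following is a minimal sketch; the rule set is as described in the text, while the regular expressions are our own approximation.

```python
import re

LATIN = re.compile(r'^[A-Za-z]+$')      # strings written in Latin script
ALPHA = re.compile(r'\w', re.UNICODE)   # any word character

def postprocess(tokens, labels):
    """Overwrite predicted labels with 'other' for deterministic cases."""
    fixed = []
    for tok, lab in zip(tokens, labels):
        if tok.startswith('@'):          # user names
            lab = 'other'
        elif not ALPHA.search(tok):      # punctuation marks, emoticons
            lab = 'other'
        elif LATIN.match(tok):           # Latin-script strings in Arabic text
            lab = 'other'
        fixed.append(lab)
    return fixed
```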
In addition, we found that some lang2 tokens like que, amor, etc. were wrongly annotated as lang1. In most cases, the system predicted either lang1 or lang2 for names of series, games, actors, days, and apps (friday, skype, sheyla, beyounce, walking dead, endless love, flappy bird, dollar tree). We noticed that all of these tokens were in lowercase. Similarly, the system mis-predicted all-uppercase tokens as ne: for example, RT, DM, JK, GO and BOY were annotated as lang1, but the system predicted them as ne. Moreover, we found that tokens like lol, lmao, yolo and jk were incorrectly annotated as ne. The system predicted interjections like aww, uhh, muah, eeeahh and ughh as either lang1 or lang2, but they were annotated as unk. In order to improve the performance for ne, we tagged each token with the Ark-Tweet NLP tagger (Owoputi et al., 2013). We then changed the label of tokens tagged as proper nouns with a confidence score greater than 0.98 to ne. This improved the F1-score for ne from 0.53 to 0.57.

Conclusion
In this paper we present our system for identifying and classifying code-switched data for Spanish-English and MSA-Egyptian. The system uses a neural network architecture that relies on word-level and character-level representations, and the output is fine-tuned (only for the Spanish-English data) with a CRF classifier to capture sequence and contextual information. Our system is language independent in the sense that we have not used any language-specific knowledge or linguistic resources such as POS taggers, morphological analyzers, gazetteers or word lists, and the main architecture is applied to both language pairs. Our system considerably outperforms the other systems participating in the shared task for Arabic, and ranks second for Spanish-English in tweet-level performance.
2016-11-22T08:43:23.465Z
2016-11-01T00:00:00.000
{ "year": 2016, "sha1": "02e713c101e1d14e07424047d6f012b13f74134c", "oa_license": "CCBY", "oa_url": "https://www.aclweb.org/anthology/W16-5806.pdf", "oa_status": "HYBRID", "pdf_src": "ACL", "pdf_hash": "02e713c101e1d14e07424047d6f012b13f74134c", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
202565836
pes2o/s2orc
v3-fos-license
Closed-loop Model Selection for Kernel-based Models using Bayesian Optimization

Kernel-based nonparametric models have become very attractive for model-based control approaches for nonlinear systems. However, the selection of the kernel and its hyperparameters strongly influences the quality of the learned model. Classically, these hyperparameters are optimized to minimize the prediction error of the model, but this process totally neglects the model's later usage in the control loop. In this work, we present a framework to optimize the kernel and hyperparameters of a kernel-based model directly with respect to the closed-loop performance of the model. Our framework uses Bayesian optimization to iteratively refine the kernel-based model using the observed performance on the actual system until a desired performance is achieved. We demonstrate the proposed approach in a simulation and on a 3-DoF robotic arm.

I. INTRODUCTION
Given a dynamics model, control mechanisms such as model predictive control and feedback linearization can be used to effectively control nonlinear systems. However, when an accurate mathematical model of the system is not available, machine learning offers powerful tools for the modeling of dynamical systems. A special class of models that has recently obtained a lot of attention is kernel-based models, such as Support Vector Machines (SVM) and Gaussian Processes (GP). In contrast to parametric models, kernel-based models require only minimal prior knowledge about the system dynamics, and they have been successfully used to model complex, nonlinear systems [1]. Using the kernel-based approach for modeling a system requires the selection of an appropriate kernel function and a set of hyperparameters for that function. Typically, these selections are data-based, e.g. through minimizing a loss function that is often a trade-off between the prediction error and the complexity of the model. However, depending on the task, the full, complex and accurate dynamics model might not even be required. Moreover, this procedure neglects the fact that the learned model is used for the control of the actual system, which can result in reduced controller performance [2]-[4].

In this work, we propose a Bayesian Optimization (BO)-based active learning framework to optimize the kernel and its hyperparameters directly with respect to the performance of the closed loop rather than the prediction error, see Fig. 1. This optimization is performed in a sequential fashion where, at each step of the optimization, BO takes into account all past data points and proposes the most promising kernel and hyperparameters for the next evaluation. The outcome is used to define a kernel-based model that is utilized by a given controller. The obtained model-based controller is then applied to the actual system in a closed-loop fashion to evaluate its performance. This information is then used by BO to optimize the next evaluation. Consequently, multiple evaluations on the actual system must be performed, which is often feasible, e.g. for systems with repetitive trajectories. BO thus does not aim to obtain the most accurate dynamics model of the system, but rather to optimize the performance of the closed-loop system.

Typically, system identification approaches aim to obtain an open-loop dynamics model of the system by minimizing the state prediction error. This problem has been well studied in the literature for both linear systems, e.g.
[5], as well as for nonlinear systems using function approximators such as GPs [6]-[8] and neural networks (NN) [9], [10]. However, a model obtained using this open-loop procedure can result in reduced controller performance on the actual system [4]. To overcome these challenges, adaptive control mechanisms and iterative learning control have been studied, where the system dynamics or control parameters are optimized based on the performance on the actual system, e.g. [11]-[13]. However, these approaches are mostly limited to linear systems and controllers, or assume at least a parametric system model. Recently, learning-based controller tuning mechanisms have also been proposed [14], [15], but such methods can be highly data-inefficient for general nonlinear systems, as they typically disregard the underlying dynamics completely [16]. To overcome the challenges of open-loop system identification, closed-loop system identification methods have been studied that lead to more robust control performance on the actual system [2]-[4]. A similar approach is presented in [17], wherein the authors also propose a goal-driven dynamics learning approach via BO. However, the authors aim to identify a linear dynamics model from scratch, which might be a) unnecessary, as often an approximate dynamics model of the system is available, and b) insufficient for general nonlinear systems. Moreover, the stability of the closed-loop system, where the controller is based on the linear dynamics model, cannot be guaranteed, whereas our approach explicitly allows us to preserve the convergence properties of the initial closed-loop system. To summarize, our key contributions are: a) we present a BO-based framework to optimize the kernel function and the hyperparameters of a kernel-based model to maximize the resulting control performance on the actual system; b) through numerical examples and an experiment on a real 3-DoF robot, we demonstrate the advantages of the proposed approach over classical model selection methods.

Notation: Vectors a are denoted with bold characters. Matrices A are described with capital letters. The term A_{i,:} denotes the i-th row of the matrix A. The expression N(μ, Σ) describes a normal distribution with mean μ and covariance Σ. The set R_{>0} denotes the set of positive real numbers.

II. PROBLEM SETTING
Consider a discrete-time, potentially nonlinear system

x_{k+1} = f(x_k, u_k), y_k = g(x_k, u_k), (1)

in which f, g are unknown functions of the state x_k ∈ R^{n_x} and input u_k ∈ R^{n_u}. For the following, we assume that the state mapping f: R^{n_x} × R^{n_u} → R^{n_x} and the output mapping g: R^{n_x} × R^{n_u} → R^{n_y} are such that there exist a unique state and output trajectory for all u_k ∈ R^{n_u} and x_0, k ≥ 0. We assume that a control law

u_k = h(y_k, m_k), h: R^{n_y} × R^{n_m} → R^{n_u}, (2)

is given for the system (1). The reference r_k ∈ R^{n_y} is assumed to be zero (and is therefore suppressed in (2)), but the framework is also applicable to a varying reference signal. In addition to the reference, the control law depends on the output m_k ∈ R^{n_m} of a kernel-based model, a regression technique that uses a kernel to perform the regression in a higher-dimensional feature space. The output of a kernel-based model, m_k, depends on the kernel function K, its hyperparameters ϕ ∈ R^{n_ϕ}, and the system input and output, i.e. m_k = M(u_{0:k-1}, y_{0:k}, K, ϕ), where the function M depends on the class of the kernel-based model, such as GP or SVM, used for the prediction.
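To make the structure of (1)-(2) concrete, the sketch below simulates one closed-loop rollout with a kernel-based one-step predictor standing in for M. Everything here (the toy dynamics, the Gaussian-kernel predictor, and the proportional control law) is an illustrative assumption, not the system used in the paper.

```python
import numpy as np

def gauss_kernel(a, b, phi):
    # squared exponential kernel with lengthscale phi
    return np.exp(-0.5 * (a - b) ** 2 / phi ** 2)

def model_M(x, X_train, Y_train, phi, lam=1e-3):
    """Kernel ridge prediction of the next state (a stand-in for M)."""
    K = gauss_kernel(X_train[:, None], X_train[None, :], phi)
    k = gauss_kernel(X_train, x, phi)
    alpha = np.linalg.solve(K + lam * np.eye(len(X_train)), Y_train)
    return k @ alpha

def rollout(x0, X_train, Y_train, phi, steps=20):
    """Closed loop: u_k = h(y_k, m_k), here with a toy proportional law."""
    x, traj = x0, [x0]
    for _ in range(steps):
        m = model_M(x, X_train, Y_train, phi)   # model output m_k
        u = -0.5 * m                            # h(y_k, m_k): toy control law
        x = 0.9 * x + np.sin(x) + u             # "unknown" true dynamics f
        traj.append(x)
    return np.array(traj)
```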
Remark 1: For example, the output m_k can be the prediction of the next state or output of the system based on the current state and input, using the mean and possibly the variance of a GP model. This information can then be used by the controller to compute an appropriate system input u_k. The control law h might, for instance, be an output tracking controller designed based on the predicted model output. For possible control laws for different classes of systems, we refer to [6]-[8], [18], [19].

The goal of this work is to optimize the model's kernel and its hyperparameters such that the corresponding model output m_k minimizes the cost functional

C(y_{0:k}, u_{0:k}) = Σ_k c(y_k, u_k), (3)

where c(y_k, u_k): R^{n_y} × R^{n_u} → R represents the cost incurred for the control input u_k and the system output y_k. The cost function here might represent requirements on the closed loop, e.g. accurate tracking behavior or minimized power consumption. Note that the cost functional in (3) implicitly depends on the kernel-based model M through u_k, see (2). The optimization of (3) is challenging, since the system dynamics in (1) are unknown and the kernel-based model output m_k only indirectly influences the cost. To overcome this challenge, we use BO to optimize the kernel and the hyperparameters based on direct evaluations of the control law (2) on the system (1), in order to find those that minimize the cost functional in (3).

A. Kernel-based models
The prediction of parametric models is based on a parameter vector w ∈ R^{n_a}, which is typically learned using a set of training data points. In contrast, nonparametric models typically keep a subset of the training data points in memory in order to make predictions for new data points. Many linear models can be transformed into a dual representation where the prediction is based on a linear combination of kernel functions. The idea is to transform the data points to an often high-dimensional feature space where a linear regression can be applied to predict the model output. For a nonlinear feature map φ: R^{n_a} → R^{n_φ} with n_φ ∈ N ∪ {∞}, the kernel function is given by the inner product K(a, a') = ⟨φ(a), φ(a')⟩ for all a, a' ∈ R^{n_a}. Thus, the kernel implicitly encodes the way the data points are transformed into a higher-dimensional space. The formulation as an inner product in a feature space allows many standard regression methods to be extended. A drawback of kernel-based models is that the selection of the kernel and its hyperparameters heavily influences the interpretation of the data and thus the quality of the model. Commonly, the kernel and hyperparameters are determined by optimizing a loss function such as the cross-validation error or the likelihood function. In our work, the kernel and its hyperparameters are instead optimized with respect to the performance of the closed-loop system.

B. Gaussian process
Extending the concept of kernel functions to probabilistic models leads to the framework of Gaussian process regression (GPR). In particular, GPR is a supervised learning technique which combines several advantages. As a probabilistic kernel technique, a GP provides not only a mean function but also a measure for the uncertainty of the regression. In this work, we use GPR in BO to model the unknown closed-loop objective function, as well as for the kernel-based dynamics model M in the experiment. GPR can be derived from a standard linear regression model q(a) = a⊤w, where a ∈ R^{n_a} is the input vector, w the vector of weights, and q: R^{n_a} → R the function value.
The observed value b ∈ R is corrupted by Gaussian noise ε ~ N(0, σ_n²), i.e. b = q(a) + ε. Using the feature map φ(a) instead of a leads to f(a) = φ(a)⊤w with f: R^{n_a} → R. The analysis of this model is analogous to standard linear regression, i.e. we place a prior on the weights such that w ~ N(0, Σ_p) with Σ_p ∈ R^{n_φ × n_φ}. The mean function is usually defined to be zero, see [20]. Based on m collected training data points A = [a_1, ..., a_m] and B = [b_1, ..., b_m]⊤, the prediction q* ∈ R for a new test point a* ∈ R^{n_a} can be computed using Bayes' rule. In particular, it is given by

q* | A, B, a* ~ N(k*⊤ K_**^{-1} B, K(a*, a*) − k*⊤ K_**^{-1} k*), (4)

where K_** denotes the covariance matrix of the noisy training targets and k* = [K(a_1, a*), ..., K(a_m, a*)]⊤. Thus, based on the training data A, B, the estimate of the function value q* follows a normal distribution whose mean and variance depend on the test input a*. Following Remark 1, the mean and variance can be used for state estimation in the control law (2). The choice of the kernel and hyperparameters ϕ ∈ R^{n_ϕ} can be seen as degrees of freedom of the regression. A popular kernel choice in GPR is the squared exponential kernel, see [20]. One possibility for estimating the hyperparameters ϕ is by means of the likelihood function, i.e. by maximizing the log marginal likelihood

log p(B | A, ϕ) = −(1/2) B⊤ K_**^{-1} B − (1/2) log|K_**| − (m/2) log(2π),

which results in an automatic trade-off between the data fit B⊤ K_**^{-1} B and the model complexity log|K_**|, see [20].

C. Bayesian Optimization (BO)
Bayesian optimization is an approach to minimize an unknown objective function based on (only a few) evaluated samples. We use BO to optimize the cost function (3) with respect to the kernel-based model, as this is in general a non-convex optimization problem with an unknown objective function (because the system dynamics are unknown) and possibly multiple local extrema. BO is well suited for this optimization, as the task evaluations can be expensive and noisy [21]. Furthermore, BO is a gradient-free optimization method which only requires that the objective function can be evaluated for any given input. Since the objective function is unknown, the Bayesian strategy is to treat it as a random function with a prior, often a Gaussian process. Note that this GP is used to approximate the closed-loop cost functional within BO and is not related to the kernel-based model for the controller (2) mentioned in Remark 1. The prior captures beliefs about the behavior of the function, e.g. continuity or boundedness. After gathering the cost (3) of a task evaluation, the prior is updated to form the posterior distribution over the objective function. The posterior distribution is used to construct an acquisition function that determines the most promising kernel/hyperparameter combination for the next evaluation so as to minimize the cost. Different acquisition functions are used in the literature to trade off between the exploration of unseen kernel/hyperparameter combinations and the exploitation of promising ones during the optimization process. Common acquisition functions are expected improvement, entropy search, and the upper confidence bound [22]. To escape a local minimum of the objective function, the authors of [23] propose a method that modifies the acquisition function when it seems to over-exploit an area, namely expected-improvement-plus. This results in a more comprehensive and partially random exploration of the area, and it is thus likely to find the global minimum faster. We also use this acquisition function for BO in our simulation and the experiment.

IV. CLOSED-LOOP MODEL SELECTION
Our goal is to optimize the model's kernel and its hyperparameters with respect to the cost functional C(y_{0:k}, u_{0:k}).
Thus, in contrast to the classical kernel selection problem, where the kernel is selected to minimize the state prediction error, our goal here is not to obtain the most accurate model but the one that achieves the best closed-loop behavior. We now outline the proposed overall procedure for the kernel selection and then describe each step in detail. We start with an initial kernel K with hyperparameters ϕ, and obtain the control law for the system (1) using (2) with the model output m_k = M(u_{0:k-1}, y_{0:k}, K, ϕ). This control law is then applied to the actual system, and the cost function (3) is evaluated after performing the control task. Depending on the obtained cost value, BO suggests a new kernel and corresponding hyperparameters for the kernel-based model M in order to minimize the cost function on the actual system. With this model, the control task is repeated and, based on the cost evaluation, BO suggests the next kernel and hyperparameters. This procedure continues until a maximum number of task evaluations is reached or the user rates the closed-loop performance as sufficient. We now describe the above three steps, i.e. initialization, evaluation and optimization, in detail.

A. Initialization
We define a set K = {K_1, ..., K_{n_K}} of n_K ∈ N kernel candidates K_j from which the kernel for our kernel-based model is to be chosen. BO will be used to select the kernel with the best closed-loop performance from this set.

Remark 2: The selection of candidate kernels can be based on prior knowledge about the system, e.g. smoothness (Matérn kernel) or the number of equilibria (polynomial kernel); see [24] and [1], respectively, for general properties.

In addition, each kernel depends on a set of hyperparameters. Since the number of hyperparameters can differ between kernels, we define a set of sets P = {Φ_1, ..., Φ_{n_K}} such that Φ_j ⊂ R^{n_{Φ_j}} is a closed set. Here, n_{Φ_j} denotes the number of hyperparameters of the kernel K_j. Moreover, we assume that Φ_j is a valid hyperparameter set.

Definition 1: The set Φ is called a hyperparameter set for a kernel function K iff Φ is a domain for the hyperparameters of K.

For the first evaluation of the closed loop, the kernel-based model M is created with an initial kernel K_j from the set K and hyperparameters ϕ_j ∈ Φ_j with j ∈ {1, ..., n_K}.

Remark 3: One way to select the initial kernel and hyperparameters is to set them equal to those of a prediction model optimized with respect to a loss function, e.g. using cross-validation or maximization of the likelihood function [1].

B. Task Evaluation
For the i-th task evaluation, BO determines an index value j ∈ {1, ..., n_K} and a ϕ_j ∈ Φ_j. The control law (2) with the kernel-based model M, using the selected kernel K_j and hyperparameters ϕ_j, is applied to the system (1):

x_{k+1} = f(x_k, h(y_k, M(u_{0:k-1}, y_{0:k}, K_j, ϕ_j)))

with fixed x_0 ∈ R^{n_x}.

Remark 4: We focus here on a single, fixed initial state x_0. However, multiple (nearby) initial states can be considered by using the expected cost across all initial states.

The corresponding input and output sequences u_{0:k} and y_{0:k}, respectively, are recorded. Afterwards, the cost C(y_{0:k}, u_{0:k}) is evaluated.

C. Model Optimization
In the next step, we use BO to minimize the cost function with respect to the kernel and its hyperparameters, i.e.

min_{j ∈ {1,...,n_K}, ϕ_j ∈ Φ_j} C(y_{0:k}, u_{0:k}). (5)
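A minimal sketch of the initialization/evaluation/optimization loop behind (5) is given below. The helpers evaluate_task (one rollout on the real system returning the cost (3)), propose_next (the BO acquisition step over the mixed kernel-index/hyperparameter space), and initial_guess are hypothetical placeholders for whatever BO implementation is used. The mixed nature of this search is addressed next.

```python
def closed_loop_model_selection(kernels, hyper_sets, evaluate_task,
                                propose_next, initial_guess, max_evals=50):
    """BO loop over kernel candidates and their hyperparameter sets.

    kernels:    list of kernel functions [K_1, ..., K_nK]
    hyper_sets: list of hyperparameter domains [Phi_1, ..., Phi_nK]
    evaluate_task(K, phi): applies the controller based on M(., K, phi)
                           to the real system and returns the cost C (3)
    propose_next(history): BO acquisition step returning the next (j, phi_j)
    """
    history = []                                  # [(j, phi, cost), ...]
    j, phi = 0, initial_guess(hyper_sets[0])      # e.g. data-based optimum (Remark 3)
    for _ in range(max_evals):
        cost = evaluate_task(kernels[j], phi)     # one rollout on the real system
        history.append((j, phi, cost))
        j, phi = propose_next(history)            # next promising (K_j, phi_j)
    return min(history, key=lambda rec: rec[2])   # best kernel/hyperparameters seen
```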
Thus, this problem involves both continuous and discrete variables in the optimization task, whereas classical BO assumes continuous variables only. To overcome this restriction, a modified version of BO is used in which the kernel function is transformed such that integer-valued inputs are properly included [25]. Based on the previous evaluations of the cost function, BO updates the prior and minimizes the acquisition function. The result is a kernel K_j and hyperparameters ϕ_j, which are used in the model function M(u_{0:k-1}, y_{0:k}, K_j, ϕ_j). Then, the corresponding control law is evaluated again on the system, and the procedure is repeated until a maximum number of task evaluations has been reached or a sufficient performance level has been achieved.

D. Theoretical Analysis
In this section, we show that, under some additional assumptions, the stability of the closed loop is preserved during the task evaluation process and that BO converges to the minimum of the closed-loop cost function. Here, we focus on stationary kernels with lengthscales ϕ ∈ R^{n_ϕ}_{>0} and Σ = diag(ϕ_1, ..., ϕ_{n_x}). Stationary kernels can always be expressed as a function of the difference between their inputs, and they are a common choice for kernel-based models [1].

Assumption 1: Let ||f||_{K*,ϕ*} < ∞, and let the selected control law (2), based on the model M with stationary kernel K* and hyperparameters ϕ* ∈ R^{n_ϕ}_{>0}, guarantee that ||y_k|| ≤ r_y ∈ R_{>0} for the given system (1) for k > n_1 ∈ N.

The first part of the assumption, i.e. the bounded reproducing kernel Hilbert space (RKHS) norm, is a measure of the smoothness of the function with respect to the kernel K* with hyperparameters ϕ* ∈ R^{n_ϕ}_{>0}. It is a common assumption for stabilizing controllers using kernel-based methods and is discussed in more detail in [19]. Controllers that satisfy this property for nonlinear, unknown systems are given, e.g., by [6], [19], [26]. The focus on stationary kernels is barely restrictive, as many successfully applied kernels for nonlinear control are stationary.

Lemma 1: Under Assumption 1, there exist a non-empty set K and a hyperparameter set Φ_1 ⊃ {ϕ*} such that for all K_j ∈ K and all ϕ_j ∈ Φ_j, the boundedness ||y_k|| ≤ r_y of the system (1) for k > n_1 is preserved.

This lemma guarantees that there exist a kernel set K and a set P of hyperparameter sets that contain the stabilizing kernel K* and the hyperparameters ϕ* of Assumption 1. Thus, the proposed method can be applied to existing kernel-based control methods without losing established guarantees. Before we start with the proof, the following lemma is recalled.

Proof: [Lemma 1] Assumption 1 inherently guarantees that at least one kernel K_1 = K* exists that preserves the boundedness of the system, so we define K = {K_1}. Since Assumption 1 ensures that ||f||_{K*,ϕ*} is bounded, Lemma 2 implies that f ∈ H_{K_1,ϕ} and thus ||f||_{K_1,ϕ} is bounded for any lower bounds ϕ^l_i ∈ R_{>0} with ϕ^l_i < ϕ*_i for all i. For an upper bound, there exist ϕ̄_i ∈ R_{>0} with ϕ*_i < ϕ̄_i such that f ∈ H_{K_1,ϕ̄}, again following Lemma 2. Thus, we define the set

Φ_1 = {ϕ ∈ R^{n_ϕ}_{>0} : ϕ^l_i ≤ ϕ_i ≤ ϕ̄_i for all i} (6)

as a proper superset of {ϕ*}. Based on this set, ||f||_{K_1,ϕ} < ∞ for all ϕ ∈ Φ_1, which guarantees the boundedness. ∎

Consequently, under Assumption 1, the stability of the control loop is preserved during the task evaluation. Furthermore, the minimum cost after BO is no worse than the initial cost, as stated in the following.

Corollary 1: Let C_cl be the minimum cost (3) after BO (5) with K = {K_1 = K*} and Φ_1 as in (6).
Let C_ol be the initial cost based on the controller with kernel K* and hyperparameters ϕ*. Then C_cl ≤ C_ol holds.

Proof: Since C_cl is the minimum cost after BO, which starts with the initial, data-based selected kernel K* and hyperparameters ϕ*, it follows directly that C_cl ≤ C_ol, because K* ∈ K and ϕ* ∈ Φ_1. ∎

We now show that BO can converge to the global minimum of the cost function C under specific conditions, starting with the following assumption.

Assumption 2: The RKHS norm of the cost function is bounded, i.e. ||C||_K ≤ r ∈ R_{>0}, with respect to the kernel K of the GP (4) that is used as the prior C ~ GP(0, K) in the Bayesian optimization (5).

Intuitively, Assumption 2 states that the kernel of the GP for BO is selected such that the GP can properly approximate the cost function. This sounds paradoxical, since the cost function is unknown because the system behavior is unknown. However, there exist so-called universal kernels, which can approximate any continuous function arbitrarily precisely [27, Lemma 4.55].

V. EVALUATION
In this section, we present a simple illustrative example that highlights our closed-loop model selection approach for kernel-based models. In addition, an example with a 3-DoF robot demonstrates the applicability of the proposed approach to hardware testbeds. BO is used with expected-improvement-plus as the acquisition function, because of its satisfactory performance in practical applications, see [23], using a GP as the prior.

A. Simulation
Consider a one-dimensional nonlinear system (7) with state x_k and control u_k at time k. For the purpose of this example, we assume that the system dynamics (7) are unknown, yet we wish to avoid a high-gain control approach due to its unfavorable properties [29], and instead use the proposed closed-loop model selection framework to optimize the control performance. As control law (8), feedback linearization is applied with the prediction f̂ of a support vector machine (SVM) model M. The data set D consists of 11 homogeneously distributed training pairs {x^j_k, x^j_{k+1}}, j = 1, ..., 11, of the system (7) on the interval x_k ∈ [−10, 10] with u_k = 0. The linear, polynomial (cubic) and Gaussian kernels are selected as the kernel candidates, see Table I for details. The Gaussian kernel possesses one hyperparameter, ϕ_1, which is a scaling factor for the data. In addition, the regression of the SVM depends on a hyperparameter ϕ_2 that defines the smoothness of the prediction and affects the number of support vectors, see [30]. First, we evaluate a classical, data-based procedure which optimizes the kernel and the hyperparameters with respect to the cross-validation loss [27] based on the training data only. Using BO, a minimum loss of 0.9127 is found with the linear kernel and ϕ_2 = 0.0336, see Table II. Using this linear model in the control loop with the nonlinear system (7) and the control law (8) for x_0 = 3, the control error remains above zero, see Fig. 2. With the cost function C = Σ_{k=0}^{9} k x_k², the trajectory generates a cost of 204.4769. For comparison, the hyperparameters and the kernel are optimized with the proposed method: we evaluate the performance of the closed-loop system and use BO to compute the next promising kernel and hyperparameter combination. Figure 3 shows the mean and standard deviation of 20 repetitions with 50 trials each. The repetitions are run because the BO exploration of the cost is also affected by randomness. The cost is reduced to a mean value of C = 16.410 and the loss is 2.491.
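For reference, the closed-loop cost used in this simulation can be evaluated directly from a recorded state trajectory, as in the short sketch below (the trajectory values are made up for illustration):

```python
def closed_loop_cost(x):
    # C = sum_{k=0}^{9} k * x_k^2, the simulation's tracking cost
    return sum(k * xk ** 2 for k, xk in enumerate(x[:10]))

# a hypothetical recorded rollout starting at x_0 = 3
trajectory = [3.0, 2.1, 1.4, 0.9, 0.55, 0.3, 0.17, 0.09, 0.05, 0.02]
print(closed_loop_cost(trajectory))
```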
Figure 2 shows that the regression is more accurate, which results in a reduced control error. Table II also presents the results of adding the data collected over all 50 trials to the existing data to refit the model (data-based AT). Even with more training data, the data-based optimization favors the linear kernel.

1) Discussion: The example demonstrates that optimization based on the training data alone can lead to reduced performance of the closed-loop system. Table II clearly shows that the data-based optimization results in a smaller loss with the linear kernel but generates a higher closed-loop cost. In comparison, the closed-loop optimization finds a set of hyperparameters with the Gaussian kernel that significantly reduces the control error, even though the loss of the model is higher. Thus, especially in the case of sparse data, the data-based optimization can misinterpret the data, which can be avoided with closed-loop model selection. We observe that at the beginning of the closed-loop optimization, BO switches frequently between the kernels, and toward the end it focuses on the hyperparameters. Using the data obtained during the 50 trials to refine the model in a data-based manner only slightly improves the performance but heavily increases the computational time of the kernel-based model due to the larger training data set.

B. Robot Experiment
A spoon is attached at the end effector of the robot. The goal is to follow a given trajectory as precisely as possible without using high feedback gains, which would entail several practical disadvantages, see [31]. Therefore, a precise model of the system's dynamics is necessary. Since modeling the nonlinear fluid dynamics with a parametric model would be very time consuming, we use a computed torque control method based on a GP model, which allows high-performance tracking control while also being able to guarantee the stability of the control loop [26]. An underlying low-level PD controller enforces the commanded torque by regulating the voltage based on a measurement of the current. The controller is implemented in MATLAB/Simulink on a Linux real-time system with a sample rate of 1 ms. For the implementation of the GP model, we use the GPML toolbox. The desired trajectory follows a circular stirring movement through the fluid with a frequency of 0.5 Hz.

Modeling: Here, we use a Gaussian process model M as the kernel-based model, based on 223 collected training points. The data are collected around the desired trajectory using a high-gain controller. The placement of the training points heavily influences the control performance; however, the proposed approach focuses on improving the performance based on existing data. Each data pair consists of the positions and velocities of all joints, [q, q̇], and the corresponding torque of the i-th joint, τ_i. Since a GP produces one-dimensional outputs only, 3 GPs are used in total for the modeling of the robot's dynamics. Each GP, i = 1, ..., 3, uses a squared exponential kernel, which can approximate any continuous function arbitrarily exactly. With ϕ = [ϕ_1, ..., ϕ_6] and the signal noise σ_n ∈ R³, see [20], a total of 9 parameters must be optimized. In contrast to the simulation, the kernel is fixed in order to reduce the optimization space and thus the number of task evaluations.

Control law: The control input, i.e.
the torque τ(q, q̇) for all joints, is generated from an estimated parametric model and the mean prediction μ of the GP model as feed-forward components, plus a low-gain PD feedback part:

τ(q, q̇) = τ̂(q, q̇) + μ(q, q̇) + K_p e + K_d ė.

Here, the desired trajectory is given by q_d, q̇_d and q̈_d, with the errors ė = q̇_d − q̇ and e = q_d − q. The feedback matrices are K_p = diag([60, 40, 10]) and K_d = diag([1, 1, 0.4]). The estimated parametric model τ̂ is derived from a mathematical model whose parameters are physically measured. For the discretization of the control input, a zero-order method is used. For more details, see [26]. The cost function accumulates the tracking error over the trajectory, sampled with T = 1 ms; it is therefore a measure of the tracking accuracy of the stirring movement. We consider the squared exponential kernel as the only kernel candidate, such that only the hyperparameters σ_n and ϕ are optimized. Table III shows the comparison between the data-based and the closed-loop optimization. In the data-based case, the hyperparameters are optimized with a gradient method maximizing the log likelihood function (in this case, BO of the hyperparameters results in the same values). In contrast, in the closed-loop optimization, BO is used to minimize the tracking error. The initial values of the hyperparameters are set to the values of the data-based optimization, and the bounds are defined as 0.5 and 2 times the initial values. The evolution of the minimum cost over the trials, where each trial is a single stirring movement, is shown in Fig. 5. The comparison of the joint position errors for the data-based and closed-loop optimization is shown in Fig. 6.

3) Discussion: After 100 trials, the tracking error is decreased by 30% through the optimization of the Gaussian process model alone. Even though the resulting hyperparameters are suboptimal with respect to the likelihood function, see Table III, the performance of the closed loop is significantly improved. In comparison to collecting more training data to improve the model, the proposed method does not increase the computational burden of the Gaussian process prediction, which is often critical in real-time applications. Since only the model is adapted, the properties of the closed-loop control architecture are also preserved.

CONCLUSION
In this paper, we present a framework for model selection for kernel-based models that directly optimizes the overall closed-loop control performance. For this purpose, the kernel and its hyperparameters are optimized using Bayesian optimization with respect to a cost function that evaluates the performance of the closed loop. It is shown that this approach preserves the properties of the control architecture, as only the model is adapted. Simulations and hardware experiments demonstrate the advantages of the proposed approach over data-based model selection techniques.
2019-08-30T07:05:48.150Z
2019-09-12T00:00:00.000
{ "year": 2019, "sha1": "fae76c81c7e0dc4d87d732264a2627e7a98521b1", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1909.05699", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "c129b782806b3278cf50d160fc856520c0018c09", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Engineering" ] }
261924654
pes2o/s2orc
v3-fos-license
Effectiveness of ivermectin and moxidectin against cyathostomins in four horse breeding farms in Mexico

This study aimed to evaluate the effectiveness of oral ivermectin and moxidectin against natural cyathostomin infection on four horse farms located in the central regions of Mexico. A total of 445 horses of the Warmblood (145), Thoroughbred (100), and Quarter Horse (200) breeds, aged between 6 months and 27 years, were used. Data on the horses and parasite control methods were collected through interviews with farm owners and veterinarians. Using the McMaster technique, fecal samples were processed from all 445 horses, 180 of which were positive for cyathostomins. On each farm, 45 animals were selected that met the criterion of a fecal egg count exceeding 150 eggs per gram of strongylid-type nematodes. Subsequently, three separate experimental groups were formed on each farm, each consisting of 15 horses. The first group was treated with oral ivermectin 1.87 %; the second group with oral moxidectin 2 %; and the third was the non-treatment control group. Coprocultures were also performed to identify the nematode species present. The data obtained were analyzed with RESO.exe©. Three of the four farms achieved a 100 % reduction in eggs per gram with both macrocyclic lactones. One farm achieved a 93 % reduction with ivermectin and 87 % with moxidectin. This study demonstrates that macrocyclic lactones effectively reduce cyathostomins on three of the four farms studied. The results suggest potential cyathostomin resistance to macrocyclic lactones.

Introduction
Cyathostomins are the most frequently reported nematodes in horses, and anthelmintics (AH) have been the main method used to control them. (1,2) In Mexico, the total estimated equine population is over 6.3 million. (3) It has been assessed that only 300 000 horses receive nutritional and medical care (B. Monroy-Hérnandez, personal communication, October 21st, 2021). Approximately 150 000 horses receive basic treatments including AH, and 45 000 comprise the high-performance group, which is subjected to more continuous deworming (either monthly or every other month) with macrocyclic lactones (ML), as they offer a pharmacologically approved endectocide action. (4) Regular and non-technical use of AH, derived from customary clinical practice, has favored the selection of cyathostomin populations capable of surviving treatment, thus promoting the anthelmintic resistance (AHR) phenomenon. (5,6) For more than 50 years, horses have been conventionally dewormed using high-intensity, short-interval schemes. This practice originally served the purpose of eliminating the somatic larvae of Strongylus vulgaris, which causes arteritis and aneurysms in horses. (7,8) Although it is not common to find serious cases of S. vulgaris, these deworming practices have prevailed, subjecting cyathostomin populations to high selection pressure and favoring resistant or multi-resistant parasite populations as AH effectiveness decreases. (9,10)

Ethical statement
All applicable international, national, and/or institutional guidelines for the care and use of animals were followed. Feces were obtained while farm veterinarians were performing breeding evaluations, or from the ground in the case of younger animals, as well as during the usual application of deworming pastes. No handling involved stress or injury to the animals.
Type of study and farm location
A longitudinal cohort study with convenience sampling was conducted on four horse breeding farms located in the central and central-western regions of Mexico: in the states of San Luis Potosí (farm 1) and Guanajuato (farm 2), both with a dry climate (BSw on the Köppen scale (15)), and in the State of Mexico (farms 3 and 4), with a temperate subhumid climate (Cw on the Köppen scale (15)).

Animals
The study was conducted using a cohort of 445 horses distributed as follows: farms 1 and 3 held total populations of 70 and 75 racing Thoroughbreds, respectively; farm 2 held 100 racing Quarter Horses; and farm 4 held 200 show-jumping Warmbloods, with ages ranging from birth to 27 years (Table 1).

Questionnaire
Interviews were conducted with the horse owners and veterinarians of the four farms to retrieve information about breed, age, intended use, frequency of AH administration, type of AH used, and the criteria used for parasite control over the past seven years. Geographic location (municipality and state) and climate were also recorded (Table 1).

Fecal Egg Count Reduction Test
No AHs had been administered to the horses for at least 60 days before this study. The effectiveness of ivermectin (IVM) and moxidectin (MOX) against cyathostomins was determined with a Fecal Egg Count Reduction Test (FECRT), a methodology approved by international organizations. (16) It consists of 3 stages, and its rationale is to measure the reduction in the egg count per gram (EPG) after AH treatment (Figure 1).

Stage 1: Pre-treatment
Fecal samples from the total population of horses (n = 445) were processed using a modified McMaster quantitative technique with a sensitivity of 50 EPG. (18,19) Samples of 2 g of feces were homogenized in 28 ml of saturated NaCl solution with a density of 1.250. The solution was then filtered and transferred to a McMaster chamber, and the EPG count was performed under an optical microscope (10x) to determine the parasite load. Forty-five horses per farm, each with a fecal egg count of ≥ 150 eggs per gram of strongylid-type nematodes and aged 6 months or older, were randomly selected.

Stage 2: Treatment
A total of 180 horses (45 horses per farm) were included in the treatment phase. On each farm, three groups of 15 horses were randomly formed (Figures 1 and 2). The first group was treated with a single dose of IVM 1.87 % oral paste (200 µg/kg BW); the second group was treated with a single dose of MOX 2 % oral gel (400 µg/kg BW); and the third group was the non-treatment control group (Figure 1). Body weight was estimated using a morphometric tape according to Wright's recommendations. (17)

Stage 3: Post-treatment
The modified McMaster quantitative technique (20) was performed on all animals (n = 180) fourteen days after treatment (Figure 1). Two assumptions were considered as part of the methodology: 1) if the result was SUSCEPTIBLE, the test would not be repeated, and 2) if the result was RESISTANT, the test would be repeated (Figure 2).

Larval culture
Larval cultures were performed on fecal samples, preserved at 4 °C, from stage 2 (treatment) and stage 3 (post-treatment) for all 180 horses (Figure 1). Following the technique of Corticelli and Lai, (21) larvae were collected using a Baermann device to identify the nematode genera. (2,22)
Larval viability was analyzed following two criteria for species counting and identification: 1) larvae had been shed in their infective stage (L3), and 2) larvae had vigorous motility. A total of 600 larvae were analyzed per farm, following the recommendations and taxonomic keys of Santos et al. (2) and Bevilaqua et al. (22) to classify the species found.

Data analysis
The data obtained were analyzed using the program RESO.exe© (CSIRO Animal Health Division, 1993), which runs in Microsoft Excel©. Populations were considered RESISTANT when the percentage of reduction was < 95 %. (23)

Questionnaire
Basic information about the farms (location and climate), the total horse population, individual horse information (breed and intended use), and the deworming program (frequency, type of AH used, and main criteria) was gathered from the horse owners and farm veterinarians (Table 1).

Fecal egg count reduction test
The results of stage 1, performed on the total population (n = 445), are shown in Table 2; on farm 1, 67 % of horses had ≥ 150 EPG. The effectiveness of IVM and MOX against strongylid-type nematode populations is shown in Table 3 and Figure 3. Farms 1, 2, and 4 showed 100 % efficacy for both MLs, indicating that the nematodes were susceptible to the IVM and MOX molecules. The arithmetic means of the EPG shed pre- and post-treatment confirmed the effectiveness of both MLs on farms 1, 2, and 4 (Figure 4). A lack of efficacy due to parasite resistance (93 % for IVM and 87 % for MOX) was observed on farm 3. Therefore, 6 months after the first test, a second FECRT was performed on farm 3 (3b) (Table 3 and Figure 3), according to the American Association of Equine Practitioners (AAEP) guidelines. (24) No horse on farm 3 received any anthelmintic treatment during that 6-month period before the second trial. Of the same initial population (100 horses), 72 animals met the same inclusion criterion (≥ 150 EPG) at that time. Fifteen horses were randomly assigned to treatment with oral IVM 1.87 %, 15 horses to oral MOX 2 % at the same initial dosages, and a third group of 15 horses served as the non-treatment control group (Figure 3). Effectiveness of 100 % for IVM and 94 % for MOX was then observed (Table 3, Figures 3 and 4).

Larval culture
In both regions (dry and temperate-humid climates), 2 400 L3 (600 per farm) were identified, and 100 % of these larvae belonged to the cyathostomin group (non-migratory strongyles) in both the pre-treatment and post-treatment samples. The morphological characteristics found were sheathed larvae with long, acute larval tails and 6-8 intestinal cells. No larvae with 18 or more than 20 intestinal cells, or with morphology suggestive of migratory strongylids such as Strongylus species, were found.

Discussion
Results from this study show that the AHR phenomenon in the central and central-western regions of Mexico occurs mainly in non-migratory strongylid populations. Only larvae of the cyathostomin group, or non-migratory strongylids, were found on all four farms. Migratory strongylids need a prepatent period of at least 6 months, and when deworming is frequent (3 to 4 times a year), the development of larvae to the adult stage is negatively affected, probably due to the high susceptibility of these nematodes to MLs, as mentioned by Grice et al. (24) In addition, we found a low or zero prevalence of migratory strongylids such as Strongylus spp., which coincides with previous studies. (7,25)
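For reference, the percentage reductions discussed throughout can be computed from the arithmetic mean pre- and post-treatment counts, as done by RESO. The sketch below shows the basic calculation only (the confidence-interval machinery of RESO is omitted, and the example counts are invented):

```python
def fecr(pre_epg, post_epg):
    """Fecal egg count reduction (%) from arithmetic mean EPG counts."""
    pre_mean = sum(pre_epg) / len(pre_epg)
    post_mean = sum(post_epg) / len(post_epg)
    return 100.0 * (1.0 - post_mean / pre_mean)

# hypothetical group of 15 treated horses (EPG at day 0 and day 14)
pre = [350, 200, 150, 600, 450, 300, 250, 150, 500, 400, 350, 200, 300, 250, 150]
post = [0, 0, 50, 0, 0, 0, 50, 0, 0, 0, 0, 0, 50, 0, 0]
reduction = fecr(pre, post)
print(f"FECR = {reduction:.1f}% ->", "RESISTANT" if reduction < 95 else "SUSCEPTIBLE")
```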
Equine cyathostomosis can be caused by at least 53 different species, and approximately 11-15 species have a higher prevalence in the cecum and large colon. (26) Due to the abundance and richness of cyathostomins, AH efficacy tests may not account for individual species, as shown in this study. The FECRT is a field test, not a molecular or serological test. It has not been possible to distinguish between cyathostomin species in equines through coproculture, and therefore the resistance that each species may present is not detected. This has been demonstrated for nematode species of ruminants, in which these methods were first developed, rather than in equines. The results obtained are therefore limited in that we cannot distinguish between the various cyathostomin species; however, the FECRT is the test recommended by international organizations. (4,6,10,16) Serology, molecular biology, and proteomics technologies need to be developed and implemented for the identification of these species in horses. (6,19,27) In addition, factors such as the age of the horses, the time of year (related to early or late L3), cyathostomin hypobiosis, housing types, feeding practices, and group randomization procedures must be considered, as mentioned by Nielsen et al. (28)

The inadequate management on farm 3, where MLs were administered without any established criteria or pretreatment diagnosis, resulted in the lack of effectiveness of MOX. The continuous use of the IVM and MOX molecules resulted in a greater number of resistant strongylid-type nematodes (Figure 4). The effectiveness of IVM (93 % in the first test and 100 % in the second) (Table 3 and Figure 3) highlights the existence of horses carrying parasites resistant to this endectocide (Figure 4). The confidence intervals of both samples [CI 52-99 (3a) and CI 12-97 (3b)] indicate that only a couple of animals were excreting resistant nematodes; these must be treated as potential high shedders. MLs can continue to be used on this farm if strategies to preserve both molecules are implemented. In addition, to avoid high shedders spreading resistant parasites on the pasture, a resistance management strategy (refugia) could be implemented. This strategy involves deliberately allowing the survival of cyathostomin populations that have not been recently exposed to any treatment. The progeny of unselected parasites provide a source of susceptible nematodes that can dilute the resistant nematodes surviving AH treatment, thereby reducing the rate of AHR development. On farm 3, the two fecal samplings were obtained six months apart, so the animals were grazing in different areas when each sample was obtained. This would explain why, in the first FECRT, the results showed lower and upper limits pointing toward IVM resistance (Table 3 and Figure 3), whereas in the second FECRT the animals were in a different pasture (Table 3 and Figure 4). In this sense, the appearance of impending AHR is a factor that should prompt immediate action to change deworming practices, with strategic and selective deworming being a possible solution to the problem. (29) Since the use of AH remains the irreplaceable method in terms of efficacy and practicality, every horse farm should first monitor the need for treatment and subsequently its effectiveness through monthly coproparasitological analyses. Macrocyclic lactone resistance in cyathostomins was first reported in 2005. (30) Since then, multiple cases of this resistance have been reported in Italy, (31,32) France, (14) Germany, (32) England (32) and Lithuania.
(14) In Latin America, ML effectiveness studies have been conducted mainly in Brazil (12,30) and Mexico. (33) Canever et al. (12) evaluated the effectiveness of IVM and MOX against cyathostomins and found efficacy levels of 5-65 % and 16 %, respectively. Rosado-Aguilar et al. (33) reported 60 % resistance to IVM on five horse farms located in the southern region of Mexico in 2014. To the best of our knowledge, no other AHR studies have been published in Mexico. We believe that the AHR phenomenon could be an ongoing health problem on several farms in Mexico, mainly due to the misuse and overuse of MLs: deworming products are easily acquired by horse owners without a veterinary prescription and are frequently administered to treat any type of parasite without a proper diagnosis. This information is fundamental for limiting the negative impact on animal health and welfare and constitutes the first step toward the rational use of AH. On horse breeding farms in central and central-western Mexico, it is necessary to implement integrated parasite management (IPM) in order to preserve the effectiveness of MLs, which are an effective but non-renewable resource. To achieve this objective, it is necessary to include actions such as: 1) strategic deworming based on coproparasitoscopic diagnosis, 2) improvement of feeding management practices, 3) manure management on the pasture to reduce the source of contamination, and 4) biological control methods, such as pathogenic fungi, bacteria, and mites or entomopathogenic nematodes. (6,34,35) IPM is an approach that may shift horse breeding farms toward sustainable parasite-control practices.

Conclusions
The present study demonstrates high levels of effectiveness of IVM and MOX in treating horses against cyathostomins; however, we report increasing and impending resistance to MOX on one farm. IPM of nematodes is mandatory to prolong the effectiveness of MLs in the region.

Figure 1. Diagram of the Fecal Egg Count Reduction Test (FECRT) methodology.
Figure 2. Progressive sampling scheme, selection of animals, and distribution of groups.
Figure 3. Diagram of Fecal Egg Count Reduction Test (FECRT) results.
Figure 4. Pre- and post-treatment arithmetic mean of eggs per gram (EPG) from four horse-breeding farms.
Table 1. Farm and horse information and deworming programs at four breeding farms in central and west-central Mexico.
Table 2. Number of animals that met the selection criterion of ≥ 150 EPG on each farm and from which the animals for stage 2 were selected.
Table 3. Ivermectin and moxidectin effectiveness on four horse-breeding farms in different ecological regions.
2023-09-16T15:15:07.687Z
2023-09-13T00:00:00.000
{ "year": 2023, "sha1": "496e66c532e9777722a1ab666c45fea2953f1065", "oa_license": "CCBY", "oa_url": "https://veterinariamexico.fmvz.unam.mx/index.php/vet/article/download/1192/935", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d045f2d4428473428adf521efe4d551db6e67115", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [] }
229480880
pes2o/s2orc
v3-fos-license
How rapidly do self-compatible populations evolve selfing? Mating system estimation within recently evolved self-compatible populations of Azorean Tolpis succulenta (Asteraceae)

Abstract
Genome-wide genotyping and a Bayesian inference method (BORICE) were employed to estimate outcrossing rates and paternity in two small plant populations of Tolpis succulenta (Asteraceae) on Graciosa island in the Azores. These two known extant populations of T. succulenta on Graciosa have recently evolved self-compatibility. Despite the expectation that selfing would occur at an appreciable rate (self-incompatible populations of the same species show low but nonzero selfing), high outcrossing was found in progeny arrays from maternal plants in both populations. This is inconsistent with an immediate transition to high selfing following the breakdown of a genetic incompatibility system. This finding is surprising given the small population sizes, the recent colonization of the island by self-incompatible T. succulenta colonists from another island in the Azores, and a potential paucity of pollinators, all factors selecting for selfing through reproductive assurance. The self-compatible lineage(s) likely have high inbreeding depression (ID) that effectively halts the evolution of increased selfing, but this remains to be determined. Like their progeny, all maternal plants in both populations are fully outbred, which is consistent with, but not proof of, high ID. High multiple paternity was found in both populations, which may be due in part to the abundant pollinators observed during the flowering season.

| INTRODUCTION
The evolution of self-fertilization (selfing) is one of the most common trends in flowering plants (Barrett, 2013; Grossenbacher et al., 2017; Stebbins, 1957; Wright et al., 2013). The first step in this transition is a change in the breeding system with the loss of genetic self-incompatibility (SI). This loss, often considered a unidirectional transition (Barrett, 2013; Goodwillie, 1999; Herman & Schoen, 2016; Layman et al., 2017; Wright et al., 2013), has pronounced consequences for the subsequent evolution of a species. Most obviously, the loss of SI enables a change from a highly outcrossing mating system (who mates with whom, and how frequently, in populations) to the potential for higher rates of self-fertilization, with cascading effects on the distribution of genetic variation among individuals and populations, and on the balance between mutation and natural selection (Kelly, 1999; Lande & Schemske, 1985). Microevolutionary changes have macroevolutionary consequences (Cheptou, 2018; Igić & Busch, 2013). Comparative studies in at least four plant families have shown that self-compatible (SC) lineages have higher extinction rates and lower net diversification rates than closely related SI lineages (Freyman & Höhna, 2018; Gamisch et al., 2015; Goldberg et al., 2010; de Vos et al., 2014). Comparative studies do not indicate how rapidly the loss of SI leads to mating system change. Natural populations routinely harbor genetic variation in reproductive traits that determine the selfing rate and thus have the capacity to evolve selfing rapidly (e.g., Bodbyl-Roels & Kelly, 2011; Thomann et al., 2013). However, ecological and selective factors may prevent a rapid transition. Selfing is favored both by reproductive assurance and by automatic selection. The latter refers to the 3:2 transmission advantage enjoyed by selfers when they fertilize their own ovules as well as outcrossing (Fisher, 1941; Jain, 1976).
While these are strong forces, high outcrossing can be maintained if selfed progeny are much less fit than outcrossed progeny (inbreeding depression, hereafter indicated as ID, Arista et al., 2017;Husband & Schemske, 1996), if mates are abundant, or if factors like pollen or seed discounting undermine automatic selection. In this study, we examine the tempo of selfing transitions by estimating the mating system in recently evolved SC populations of Tolpis succulenta (Asteraceae: Cichorieae). This is a species with ecological and demographic characteristics that should rapidly select for high selfing (Figure 1). The genus Tolpis (Asteraceae: Cichorieae) is a small monophyletic group occurring mainly in the Macaronesian archipelagos, especially the Canary Islands (Jarvis, 1980;Mort et al., 2015). The breeding system of Tolpis is overwhelmingly SI with little or no self-seed produced in most populations (Crawford et al., 2008. However, SI systems can be "leaky" (pseudo-self-compatible or PSC; Levin, 1996) and greenhouse experiments on Tolpis plants from some populations can produce low levels of selfseed when denied cross-pollen (Crawford et al., 2008. In this paper, we focus on T. succulenta sensu lato, a species that was described from Madeira, but later considered to also occur in the Azores (Jarvis, 1980). The Azorean plants are morphologically distinct from the Madeiran plants, and the two archipelagos are highly divergent at SSR molecular markers (Borges . Lastly, they form distinct clades with genomic markers (Crawford et al., 2019;Kerbs, 2020), indicating that the Azorean plants likely represent a distinct species. Azorean T. succulenta are known from very few populations in rocky areas and coastal cliffs on several islands (Jarvis, 1980;Schaefer, 2005). The breeding system of Azorean T. succulenta is considered largely SI or PSC based on lack of or very low self-seed set from two populations on two islands in the Azores . Recent mating system estimation using field collected progeny sets from two populations on Madeira confirm predominant outcrossing (Gibson et al., 2020). However, despite a functional SI system, two of the 75 offspring scored in that study were confirmed as products of self-fertilization (Gibson et al., 2020). Crawford et al. (2019) recently documented a breeding system shift from SI to SC within T. succulenta from the Azorean island of Graciosa ( Figure 2). In contrast to Madeiran T. succulenta and populations from other islands in the Azores, plants from Graciosa readily self in the greenhouse. This shift has a genetic basis: selfseed set segregates in a nearly Mendelian fashion in F 2 hybrids between Graciosa plants and SI T. succulenta (J. K. Kelly et al., unpubl). As a contrast to T. succulenta in Madeira, we here estimate the outcrossing rate in the field for the two known populations on Graciosa (GRSC and GRBL; Figure 2). These populations are very small, with estimated census sizes of 30-80 (GRBL) and 10-20 individuals (GRSC), respectively. Judging from the few collections of T. succulenta made from Graciosa, it appears that flowering can occur from July through September, which falls within the broad flowering period (June to September) for SI populations of Azorean T. succulenta (Jarvis, 1980;Schaefer, 2005). Both populations occur in disturbed habitats (Crawford et al., Schaefer, pers. obs.). These two populations are sister and form a strongly supported clade in a molecular phylogeny of Azorean T. succulenta (Crawford et al., 2019). 
Field studies conducted on Graciosa after this manuscript was accepted indicate that population GRSC may be extinct, as plants could not be located (H. Schaefer, pers. obs. 2020). The floral parts in plants from GRSC and GRBL are smaller than in SI T. succulenta (Crawford et al., 2019; L. Borges Silva & M. Moura, unpubl.) but the "selfing syndrome" (Cutter, 2019; Ornduff, 1969; Slotte et al., 2012) is not nearly as highly developed as in ostensibly more ancient transitions to predominant selfing in Macaronesian Tolpis (Crawford et al., 2008; Koseva et al., 2017; Soto-Trejo et al., 2013). Another line of evidence supports Graciosa T. succulenta as a recent origin of SC. The transition to selfing was likely associated with the colonization of disturbed habitats (complex volcanic history and/or human colonization) on Graciosa. If so, the loss of SI occurred between 400,000 and 1.05 million years ago as estimated by several radiometric dating studies (Larrea et al., 2014), or 700,000 years ago as dated by Sibrant et al. (2014) for the age of Graciosa. Secondly, the estimated divergence time between SI and SC T. succulenta in the Azores based on a dated Bayesian tree using genome-wide genotyping is 511,000 years (B. Kerbs et al., unpubl.). The range of island age estimates from radiometric dating, the estimated divergence time from a dated molecular phylogeny, and the limited evolution of the floral selfing syndrome all point to a recent origin of SC in Graciosa T. succulenta (Crawford et al., 2019). We hypothesized that the transition to SC could lead to high selfing in GRSC and GRBL because of the limited number of compatible mates and pollinators in small populations, which should produce strong selection for reproductive assurance (Pannell, 2015). Alternatively, ID could select against selfing and maintain outcrossing. | Sampling This study examined the two known populations of T. succulenta on Graciosa. A total of 22 progeny (2-7 progeny per mother plant) from population GRBL and 20 progeny (2-7 per mother plant) from GRSC were genotyped (Table 1). Mean and range of greenhouse self-seed set in the two populations were 60% (31%-96%) for GRBL and 41.9% (0%-99%) for GRSC (Crawford et al., 2019). Vouchers of progeny are deposited in the R. L. McGregor Herbarium of the University of Kansas (KANU). | Cultivation and DNA extraction Seeds from wild maternal plants were germinated and reared in greenhouses at the University of Kansas. Leaf tissue was collected, pressed, and dried. Samples were then frozen using liquid nitrogen and pulverized using chromium beads. DNA was subsequently extracted from the ground, dried tissue using DNeasy Plant Mini Kits (Qiagen Inc.) and DNA quantity was validated using a Qubit fluorometer (Thermo Fisher Scientific). DNA from samples was cut using the restriction enzyme Csp6I (syn. CviQI), 250-300 bp fragments were selected using a BluePippin (Sage Science), and 6 bp barcodes were ligated to the fragments. DNA was sequenced on an Illumina NovaSeq 6000 (Novogene) to produce 150 bp paired-end reads. Following sequencing, the demultiplexing of FastQ files was carried out using STACKS (Catchen et al., 2013) and loci were de novo assembled using the same program with parameters M = 2, m = 3, n = 1, as well as invoking the deleveraging algorithm and specifying alpha = 0.05 in ustacks. De novo assembly and SNP calling yielded a total of 111,613 variant sites.
| Mating system estimation and multiple paternity The resultant VCF from STACKS was assessed, and SNPs were filtered using custom python scripts. We first determined the distribution of reads per SNP per individual at all SNPs called in at least 10 plants. SNPs called in fewer than 10 plants were suppressed. We next eliminated SNPs with excessively high or low coverage: low threshold = 7.4 based on the 10th percentile of the distribution, high threshold = 42 based on the 90th percentile. 8,530 SNPs remained. We then suppressed SNPs that exhibited a statistically significant excess of heterozygotes (relative to Hardy-Weinberg proportions) and then thinned the data to one SNP per RADtag. We selected the one SNP per RADtag with the most minor genotype calls. This produced the list of 516 SNPs that were formatted for input to BORICE. We simultaneously made the "CX" file that specifies the fraction of SNPs called for each plant, an input to the genotype uncertainty calculations in BORICE. The programs used to perform these operations as well as the genotype file and BORICE settings script are contained in Supplemental File 1. BORICE was run with a burn-in of 1,000 and a chain length of 4,000 steps. | Inbredness of individual parents and offspring Individual progeny from both populations were determined to be outcrossed or selfed with strong confidence (posterior probabilities > 0.99; Table 1 gives the assignment of sibship for each family across the two Tolpis populations). One offspring of the five in family 8 from GRSC was found to be selfed; hence, the posterior probability for the overall outcrossing is 0 at t = 1 (Figure 3). All other offspring were found to be produced via outcrossing. All eight maternal plants across the two populations were determined to be outbred, with high posterior probabilities: P[IH = 0] = 100% for families 1, 3, 4, 5, and 6; 95% for family 2; 94% for family 7; and 81% for family 8. There was minimal (0.05) allele frequency divergence between populations at the markers used for BORICE. | Sibships Across all families, the probability that progeny are full sibs is 15.3%. No full siblings were detected in five of the eight families (Table 1). Families 1 and 4 contain one set of full sibs each, and family 5 contains two sets of full sibs (Table 1; Figure 4). Outputs show a very high confidence (>90%) in assignment of offspring to a sire in the vast majority (78%) of contrasts. There was moderate support (~50% to 90% probability) for 23% of contrasts.
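The filtering steps just described are mechanical enough to sketch in code. The following is a minimal sketch and not the authors' actual script (that is in Supplemental File 1): it assumes genotypes and read depths are already loaded as numpy arrays coded 0/1/2 with -1 for missing calls, and it approximates the heterozygote-excess test with a one-degree-of-freedom chi-square rather than an exact test.

```python
import numpy as np
from scipy.stats import chi2

def filter_snps(geno, depth, radtag, min_called=10, alpha=0.05):
    """geno: (snps, plants) coded 0/1/2 with -1 missing; depth: read depths;
    radtag: RADtag ID per SNP. Returns indices of the retained SNPs."""
    called = geno >= 0
    keep = called.sum(axis=1) >= min_called              # called in >= 10 plants

    snp_depth = np.nanmean(np.where(called, depth, np.nan), axis=1)
    lo, hi = np.nanpercentile(snp_depth[keep], [10, 90]) # e.g. 7.4 and 42 here
    keep &= (snp_depth >= lo) & (snp_depth <= hi)        # coverage window

    for i in np.where(keep)[0]:                          # heterozygote-excess test
        g = geno[i, called[i]]
        p = g.mean() / 2.0                               # allele frequency
        exp_het = 2.0 * p * (1.0 - p) * g.size
        obs_het = int((g == 1).sum())
        if exp_het > 0 and obs_het > exp_het:
            if chi2.sf((obs_het - exp_het) ** 2 / exp_het, df=1) < alpha:
                keep[i] = False

    best = {}                                            # one SNP per RADtag,
    for i in np.where(keep)[0]:                          # most minor-genotype calls
        g = geno[i, called[i]]
        counts = [int((g == c).sum()) for c in (0, 1, 2)]
        nonzero = [c for c in counts if c > 0]
        score = min(nonzero) if len(nonzero) > 1 else 0
        if radtag[i] not in best or score > best[radtag[i]][0]:
            best[radtag[i]] = (score, i)
    return sorted(i for _, i in best.values())
```

Applied to the 111,613 raw variants, this sequence of steps (call-rate cutoff, 10th/90th-percentile coverage window, heterozygote-excess filter, one SNP per RADtag) would yield a panel analogous to the 516-SNP set used for BORICE.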
Sporophytic self-incompatibility (SSI), which is characteristic of Asteraceae (Crowe, 1954; Gerstel, 1950; Hughes & Babcock, 1950), may appear more restrictive in terms of self- and cross-compatibility than gametophytic self-incompatibility (GSI) systems. In the latter, the haploid genotype (allele at the S-locus) of the pollen controls compatibility, but with SSI the pollen carries the S-phenotype of the diploid parental plant, and if either of the parental S-alleles matches one expressed in the stigma, fertilization does not occur. However, dominance relationships among S-alleles of SSI in Asteraceae (Crowe, 1954; Gerstel, 1950; Hughes & Babcock, 1950) increase cross- and self-compatibility because recessive alleles are not expressed, and therefore do not prevent fertilization in the presence of more dominant alleles (Brennan et al., 2011, 2013; Hiscock, 2000). Alleles may also show different dominance relationships in the stigma and pollen (Brennan et al., 2006; Hiscock & Tabah, 2003). Unlike codominant alleles, where an increase in frequency of an allele will result in it finding fewer compatible mates in a population (Byers & Meagher, 1992), more recessive alleles in a dominance hierarchy are not subjected to negative frequency-dependent selection and will increase in a population. Thus, species with SSI may set seed both by outcrossing and selfing in small populations despite low S-allele diversity (Brennan et al., 2002; Silva et al., 2016). | Mating system The automatic selection hypothesis posits that SC mutations are selected because selfing variants, in contrast to outcrossers, can fertilize their own ovules, giving them a 3:2 transmission advantage (Fisher, 1941; Pannell, 2015). The highly outcrossing mating system in the two populations of Azorean Tolpis, despite their ability to set self-seed in the greenhouse, indicates that there are negative factors associated with selfing, one of which could be inbreeding depression (ID); see below. Selfing may also be disadvantageous when there is a reduction in pollen and ovules available for outcrossing because they are used for selfing, so-called pollen and seed discounting (Holsinger, 1991; Nagylaki, 1976). The mating system of plants where SC has ostensibly evolved recently may depend on a complex combination of the aforementioned factors (Voillemot et al., 2019). The role(s) of ID and pollen/seed discounting for the outcrossing mating system in the two SC populations of Azorean Tolpis are not known. | Inbreeding depression ID is considered a major factor opposing the evolution of selfing subsequent to SI breakdown (Charlesworth & Willis, 2009). ID within populations can be estimated by comparing the inbreeding coefficients of seeds and adults (Ritland, 1990; Scofield & Schultz, 2006). | Paternity Multiple mating and paternity in plant populations have been reviewed by Pannell and Labouche (2013) and have also been examined in Tolpis (Crawford et al., unpubl.). In the present study, seed was collected in bulk from each maternal plant, thus precluding determination of intra- and intercapitular components of multiple paternity. The percent of full sibs detected in the present study may be compared with other Asteraceae, including the recent study of T. succulenta on Madeira island (Gibson et al., 2020). Those populations had 22% full sibs, some 45% higher than the 15.3% detected in the two small Graciosa populations. Thus, correlated paternity is higher in the SI species in Madeira than in the SC populations on Graciosa. The reason(s) for the differences are obscure and comments would be highly speculative. Hardy et al.
(2004) found that in the rare endemic SI perennial herb Centaurea corymbosa (Asteraceae), 20% of sibs were full (same sire), and Sun and Ritland (1998) found 19% full sibs in the SI annual Centaurea solstitialis. Perhaps the important point is that in the small insular populations on Madeira and Graciosa, the number of sires among fruits of maternal plants is comparable not only to the other few Asteraceae investigated but also to among-fruit values estimated for other plant families (Pannell & Labouche, 2013). There is little evidence that particular sires are contributing disproportionately to the progeny of maternal plants, making it unlikely that biparental inbreeding is occurring in the populations. | Summary and questions for future study There are two major findings of this study. The first is that two small insular populations are highly outcrossing in nature despite the breakdown of SI. However, families of plants grown from seeds collected in nature yield high self-seed set in the greenhouse. The origin of SC in these two small populations that have ostensibly recently colonized Graciosa would seem to favor the transition to selfing (Pannell, 2015). The reasons for the somewhat unexpected results remain to be determined, and in a real sense this study raises more questions than it answers. One likely reason for the retention of outcrossing is high ID (Layman et al., 2017; Pannell, 2015). That is, selfed seeds are presumably aborted on production, do not germinate well, do not flower well, or are vegetatively noncompetitive. Whether this is the situation for T. succulenta awaits further study; two generations of selfed progeny in the greenhouse flowered and set fruit, but more thorough studies are needed. Of course, it may be that fitness of selfed progeny is lower in the natural habitat than under greenhouse cultivation (Arista et al., 2017; Armbruster & Reed, 2006). Self-seed set is sometimes used to infer the breeding system in plants from oceanic archipelagos (e.g., Bernardello et al., 2001; Chamorro et al., 2012; Crawford et al., 2011) and indeed likely provides useful first estimates of the breeding system. Gibson et al. (2020) discuss the advantages of the methods employed herein for studying the small populations of rare island plants. A major advantage, with conservation implications, is that the mating system can be inferred because individual progeny can be called as selfed or outcrossed with few maternal families and few progeny per family. This advantage can hardly be overstated given the low seed set available for progeny arrays and small population sizes. Although small populations provide challenges for mating system and paternity studies, they may offer certain advantages using genome-wide genotyping. In small populations such as the two examined in this study, it may be feasible to map all plants, making it possible to detect genetic structure in the populations (Colicchio et al., 2020) and to infer not just the number of sires but the specific sires of progeny (Gibson et al., 2020). CONFLICT OF INTEREST The authors declare no conflict of interest. DATA AVAILABILITY STATEMENT The sequence data are deposited in the NCBI Sequence Read Archive (SAMN16395570-SAMN16395611) under BioProject ID PRJNA668054.
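A back-of-the-envelope check of how strongly such small progeny arrays constrain the outcrossing rate t can be done with a Beta-Binomial update. The sketch below is purely illustrative and is not BORICE: it ignores genotype uncertainty, family structure and maternal inbreeding, and simply pools the 41 outcrossed and 1 selfed progeny calls reported above under a flat Beta(1, 1) prior.

```python
from scipy.stats import beta

outcrossed, selfed = 41, 1                     # pooled progeny calls, GRBL + GRSC
posterior = beta(1 + outcrossed, 1 + selfed)   # flat Beta(1, 1) prior on t

print(f"posterior mean t = {posterior.mean():.3f}")
lo, hi = posterior.ppf([0.025, 0.975])
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```

Even with only 42 progeny, the resulting credible interval excludes moderate selfing rates, which illustrates the point made above that few families with few progeny per family can suffice for mating-system inference.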
Mirror enhanced directional out-coupling of SERS by remote excitation of a nanowire-nanoparticle cavity We report on the experimental observation of mirror enhanced directional surface enhanced Raman scattering (SERS) from a self-assembled monolayer of molecules coupled to a nanowire-nanoparticle (NW-NP) junction on a mirror in remote excitation configuration. Placing NW-NP junction on a metallic mirror generates multiple gap plasmon modes which have unique momentum space scattering signatures. We perform Fourier plane imaging of SERS from NW-NP on a mirror to understand the effect of multiple hotspots on molecular emission. We systematically study the effect of ground plane on the directionality of emission from NW-NP junction and show that the presence of a mirror drastically reduces angular spread of emission. The effect of multiple hotspots in the geometry on directionality of molecular emission is studied using 3D numerical simulations. The results presented here will have implications in understanding plasmon hybridization in the momentum space and its effects on molecular emission. molecules to confined optical fields like plasmonic cavities, [8][9][10] whispering gallery microcavities [11] and Fabry-Pérot cavities. [12] When a molecular dipole is placed inside an optical cavity, its emission characteristics such as rate of spontaneous emission, [9,13] polarization signatures, [14,15] and direction of emission [4,14] can be influenced. The ability to engineer plasmon-matter interactions has been extensively utilized to design and develop optical antennas to direct optical emission from molecules. [1,5,16] An important aspect of optical antenna design is to achieve low angular spread without compromising the enhancement of light-molecule interactions. To this end, a variety of antennas have been studied to influence secondary emission from molecules and quantum dots. [17][18][19] Of interest to this study is the emission of secondary photons through Raman scattering. [20,21] Thanks to the development in nanoscale fabrication and synthesis, routing Raman scattered light from molecules by designing plasmonic geometries has gained prominence. [22,23] Most of the studies in the context of Raman optical antennas use lithographically fabricated structures and arbitrarily couple molecules to 'top-down" nanostructures. [24,25] A relatively less explored approach is to create Raman optical antennas by self-assembling chemically prepared colloidal nanostructures which are pre-coated with molecules. Molecule coated particle can be placed near a plasmonic structure to utilize the localized electric field provided by the cavity formed between the particle and nanostructures for enhanced spectroscopies. A unique nanostructure, in this regard is a one dimensional plasmonic nanowire. Coupling nanoparticle with a plasmonic nanowire offers the possibility of remotely exciting the cavity using nanowire as a waveguide which eliminates the damage caused to the molecules in the direct excitation of cavity. [26][27][28] Such geometries have been used in our past works to show enhanced directional spectroscopic signals. [25,29] Placing the nanowire-nanoparticle on a metallic mirror can generate particle-mirror particle cavity having ultra-small mode volumes. [30,31] Such types of cavities formed using mirror have been used for various studies such as strong coupling at room temperature, [32] large Purcell enhancement [33] and in tailoring the spectral signatures of two dimensional materials. 
[34] In addition to the enhancement, metal substrates direct majority of the emission to the collection objectives, [16][17][18][19][20][21][22][23][24][25][26][27][28][29][30][31][32][33][34][35] because of the absence of leakage radiation which occurs in using high refractive index substrates. [1] Motivated by this, we modify the geometry of nanowire-nanoparticle junction used in our past studies, [25,29] by placing the junction on a gold mirror and with a superior control on the placement of molecules in the cavity. We study silver nanowire-gold nanoparticle (NW-NP) junction placed on a gold mirror in remote excitation configuration. The fields generated in the NW-NP and NP on mirror (NP-Mirror) cavities enhance the Raman scattering from the molecules coated on the nanoparticle and also reduces the angular spread of emission. Figure 1. Schematic of the experimental configuration. A BPT coated AuNP was assembled near an AgNW placed on a gold mirror using self-assembly. One end of the AgNW was excited with a focused 633 nm laser. The SERS emission from the NW-NP junction was collected and its spectral and wavevector distribution were studied using spectroscopy and Fourier plane and energy-momentum imaging. A schematic of the experimental configuration is shown in figure 1. Gold nanoparticles of size ∼180 nm were coated with a monolayer of BPT molecules using self-assembly process. AgNWs of diameter ∼350nm were prepared using polyol process [36] with polyvinylpyrrolidone (PVP) as a surfactant. [37] Gold mirrors of thickness 160 nm were prepared by thermally evaporating gold on a glass coverslip. A typical NW-NP junction was prepared through capillary force driven self-assembly, [38] by dropcasting silver nanowires on gold mirror, followed by drying and dropcasting of gold nanoparticles on top of it. One end of the AgNW was excited with a 633 nm laser using a 100x, 0.95 numerical aperture objective lens. AgNW surface plasmon polaritons (SPPs) get scattered by NW-NP junction and out-couples as free space photons and also excites the gap plasmons in the NW-NP cavity. By focusing light onto one end of the nanowire, we also excite SPPs on the metal film. These propagating plasmons on the metal film excite the gap mode between the nanoparticle and the mirror. [39] The gap plasmons generated at both the cavities enhance the Raman scattering signatures of the molecules coated on the NP and influence its far-field scattering signatures. Out-coupled free space emission from the junction was collected by the same objective lens by spatially filtering the region and was projected onto the Fourier plane to study the spectral and wavevector signatures (see supplementary information S1 and S2 for details on sample preparation and experimental setup respectively). Figure 2.a (iii) shows the same AgNW when one end was excited with a focused 633 nm laser polarized along the axis of the nanowire. SPPs on AgNW and the metal film remotely excite the gap plasmons in the NW-NP and NP-Mirror cavities respectively. Intense electric field in the cavities due to these generated gap plasmons, results in the enhanced Raman scattering from the BPT molecules coated on the particle. We used a thick nanowire, diameter ∼350 nm, to get better waveguiding properties, [40,41] as we probed the NW-NP using remote excitation mechanism. The size of nanoparticle was chosen such that the localized plasmon resonance of the NP overlapped with the wavelength of the excitation to generate maximum response from the system. 
The out-coupled SERS emission from the junction (shown in a white circle in figure 2a (iii)) was collected. The remotely excited SERS spectrum of the BPT molecules from the NW-NP junction on a mirror is shown in figure 2b. The sharp Raman lines are clearly visible over a broad inelastic background emission. Since the molecules are present only on the nanoparticle, the SERS emission originates only from the junction and not from the nanowire-on-mirror cavity (see supporting information S3). To study the wavevector distribution of remotely excited SERS emission, we performed Fourier plane imaging, [42,43] which quantifies the directionality of emission in terms of radial (θ) and azimuthal (ϕ) angles. The Fourier plane image (figure 2.c) shows that the maximum SERS emission is biased towards higher ky/ko. The SERS emission from the NW-NP on mirror is more directional when the junction is excited remotely as compared to the direct excitation of the junction or of only the NP-Mirror cavity (see supplementary information S4). Along with the SERS emission from the BPT molecules, there is also an inelastic background emission from the PVP coating [37] on the nanowire, which can also out-couple at higher angles (see supplementary information S5). To further confirm that the majority of emission at higher +ky/ko angles is the SERS emission from the molecules, we performed energy-momentum imaging [14,44] on the emission from the junction. A small portion of the Fourier plane image along kx/ko = 0 was projected onto the slit of the spectrometer and dispersed to get the image shown in figure 2.d. The energy-momentum image reveals that both the SERS signal and the inelastic background from the junction out-couple at higher wavevectors. To quantify the emission, we defined the directionality (Dir) as the ratio of forward and backward intensity of emission in the Fourier plane, [45] expressed in decibels. To understand the effect of the ground plane on SERS emission from the NW-NP junction, we studied the wavevector of emission from a NW-NP junction placed on a glass substrate. We calculated the near-field electric field using the finite element method with COMSOL Multiphysics as a solver to study the effect of different hotspots on emission wavevectors. Fourier plane images were then calculated by projecting the near-field to the far-field using the reciprocity argument. [46] We place oscillating x-, y- and z-oriented dipoles to mimic the molecular emission at the hotspots of the geometry. The AgNW was modelled with a pentagonal cross-section with an edge-to-edge thickness of 350 nm and a length of 5 μm. An AuNP of diameter 180 nm is placed at a distance of 5 nm from the AgNW. This 5 nm gap models the PVP coating on the AgNW and the molecular coating on the AuNP. The refractive indices of the materials were taken from reference [47]. Figure 3.c shows the calculated near-field electric field at the NW-NP junction placed on a glass substrate in remote excitation configuration. One end of the AgNW was excited using a focused Gaussian beam at 633 nm. The field at the junction is concentrated only in the NW-NP cavity (shown as α), from where the SERS signal will originate. To study the effect of this cavity on the emission wavevector, we placed oscillating x-, y- and z-oriented dipoles at a wavelength of 703 nm in the NW-NP cavity and calculated the Fourier plane image after incoherently adding the far-field radiation patterns from individual dipoles. The wavelength of the dipolar source was set at 703 nm because it corresponds to a prominent BPT Raman line under 633 nm excitation. To study how the AgNW influences the SERS emission wavevectors, we study the change in the directionality of the emission when the distance between the AgNW and the NP is varied.
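Because the printed definition of Dir did not survive extraction, the sketch below makes one consistent reading explicit: the forward-to-backward intensity ratio of the Fourier-plane image, expressed in dB, which is consistent with the roughly 8 dB figure quoted in the conclusion. The pixel-to-wavevector mapping and the choice of +ky as the forward direction are assumptions about the image layout.

```python
import numpy as np

def directionality_db(fp_image):
    """Forward-to-backward intensity ratio (dB) of a Fourier-plane image.
    fp_image: 2D array with the optical axis at the array centre; rows map
    to k_y/k_0 and the +k_y half is taken as 'forward'. Pixels outside the
    unit circle (the collection cone of the objective) are masked out."""
    ny, nx = fp_image.shape
    ky, kx = np.indices((ny, nx), dtype=float)
    ky = (ky - ny / 2.0) / (ny / 2.0)      # normalised k_y
    kx = (kx - nx / 2.0) / (nx / 2.0)      # normalised k_x
    inside = kx**2 + ky**2 <= 1.0
    i_fwd = fp_image[inside & (ky > 0)].sum()
    i_bwd = fp_image[inside & (ky < 0)].sum()
    return 10.0 * np.log10(i_fwd / i_bwd)
```

Applied to the image of figure 2.c, a value near 8 dB would reproduce the forward-to-backward ratio quoted in the conclusion.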
The wavelength of the dipolar source was set at 703 nm because the BPT To study how AgNW influence the SERS emission wavevectors we study the change in the directionality of the emission when the distance between the AgNW and NP is varied. To conclude, we have shown how unidirectional SERS emission can be achieved by NW-NP junction on mirror cavity in remote excitation configuration. The nanowire and gold mirror helps in providing enhancement and directing the SERS emission to a narrow range of wavevectors. Calculated forward-to-backward emission ratio for SERS emission is ∼8 dB. Three-dimensional numerical calculations reveal the influence of electromagnetic hotspots generated in the geometry on the wavevectors of out-coupled SERS emission. We believe that the results shown in this letter will be extrapolated for studying strong interaction of molecules with extremely small cavities in remote excitation configurations. The NW-NP junction on mirror excited by nanowire plasmons will be a good testbed for remote detection of single molecules and studying quantum electrodynamics effects. Conflict of interest There are no conflicts to declare. ethanol solution were dropcasted on a gold mirror and left to dry. After this, molecules coated gold nanoparticles were dropcasted on the gold substrate. Gold nanoparticles tend to sit near the nanowire forming a self-assembled junction as shown in figure S1. Figure S1. Scanning electron microscope image of a self-assembled NW-NP junction formed using a 356 nm thick silver nanowire and ~180 nm gold particle placed on a 160 nm thick gold mirror. Figure S5 shows the spectrum collected from the end of AgNW placed on a gold mirror. One end of the nanowire was excited with 633 nm laser with polarization along the length of AgNW. Supporting Information The spectrum shows an inelastic background from PVP coating on the AgNW which is sandwiched between nanowire and mirror. The PVP coating is also present in the NW-NP cavity which also out-couples from the junction along with the SERS emission from the molecules. Table S6. Variation of directionality with a change in δ1 and δ2. Figure S4 (f). The possible reason for this difference is that along with SERS from the molecules coated on the nanoparticle, there is also possibility of inelastic emission from the metal substrate or the particles which can out-couple at all the angles.
A phenomenological study of delusions in schizophrenia. Department of Psychiatry, Postgraduate Institute of Medical Education and Research, Chandigarh - 160 012, India. 112 patients with a final clinical diagnosis of schizophrenia were subjected to detailed mental status examination using a structured interview schedule, the Present State Examination. Phenomenology of delusions was determined according to the definitions and criteria of this schedule. The relationships of phenomenology with socio-demographic variables were also studied. It was seen that delusions of persecution were significantly more common in males and in patients above the age of 30 years. Educated patients had more delusional misinterpretation, delusions of reference and delusions of thoughts being read. Systematization of delusions was greater in younger patients. Married patients had more delusions of reference. Introduction - Delusions often dominate the manifest psychopathology of schizophrenics and are usually complex, bizarre, highly systematised and frequently affect the behaviour of patients. Many authors have studied delusions from phenomenological and developmental points of view, the most notable being the studies of Jaspers (1962), Kretschmer (1974) and Schneider (1974a, 1974b). It was proposed by Lucas et al (1962) that symptoms of patients can be more meaningfully related to their socio-cultural background than to the diagnosis of their disorder. It is generally agreed that prevailing cultural and social beliefs and values influence the content of various psychopathological patterns, and many investigators have emphasised cultural determinism of the content of delusions (Carothers 1947, Yap 1951, Stainbrook 1952 & Lambo 1955). In our country, the study of delusions has not received much attention. Bhaskaran (1963) observed male patients to be more deluded than females and also noted delusions of persecution and grandiosity to be more frequent in males. Bhaskaran and Saxena (1970) again reported similar findings in a group of schizophrenics. The frequency of occurrence of delusions has been reported by Subramaniam and Verghese (1977) and Kulhara and Wig (1978). Kala and Wig (1978) commented that the content of delusions was influenced by socio-demographic factors. Significant work has been done by Sharma and Gupta (1978) and Singh and Sachdeva (1981). Most studies, with the exception of Kala and Wig (1978), have only estimated the frequencies of various types of delusions in schizophrenics. Though there is considerable evidence supporting the notion of influence of socio-cultural factors on the content of delusions, there is very little evidence that the form of delusion is affected by such factors. In fact, one of the largest multicentre projects on schizophrenia did not find much difference in the form of delusions across various centres of different cultural backgrounds (WHO 1973). Most of the studies on phenomenological aspects of schizophrenia from our country have grave methodological shortcomings. Many studies have not utilised any structured interview technique of proven reliability and applicability to ascertain the type of delusions. Kulhara and Varma (1985) in a review discussed these issues and pointed out that phenomenology of schizophrenia is an area that warrants more research. The present study was undertaken with the aim of eliciting the types of delusions and their relationship to various demographic parameters.
By employing a structured interview schedule, the Present State Examination (PSE) (Wing et al 1974), a certain degree of credibility and reliability in the assessment of psychiatric phenomena, which was hitherto lacking, has been introduced. Material and Methods Consultant colleagues in the Department were requested to refer to the research team patients with a final clinical diagnosis of schizophrenia. The diagnosis of schizophrenia conformed to the ICD-9 (WHO 1978) concept of schizophrenia. Within 3 to 7 days of referral, the patients were evaluated by one of us (PK) using the 9th Edition of the PSE (Wing et al 1974). The presence of delusions and associated phenomena was determined on the basis of PSE criteria. Analysis of Data The PSE data were analysed at the Institute of Psychiatry, Denmark Hill, London, U.K. using the CATEGO programme. The chi-square test with Yates correction, as applicable, was used to determine the level of significance for non-parametric variables. Results The total number of patients seen by the research team was 112. Of these, 59 were males and 53 females. The mean age of patients was 27.65 years with a standard deviation of 7.61 years. The subtyping of schizophrenia according to ICD-9 (WHO 1978) was as follows: 5 hebephrenic, 10 catatonic, 58 paranoid, 19 acute, 12 chronic, 3 schizo-affective and 5 others. According to the CATEGO classification, 76 patients belonged to CATEGO class S, 16 to class O, 8 to class P, 6 were classified as D, 4 were categorised as M and 2 as N. Thus, the rate of general agreement between ICD-9 and CATEGO classes of schizophrenia is good, being 82.1 percent. The socio-demographic characteristics of the total patient sample and the deluded group are shown in Table 1. Since evasiveness can pose methodological problems in research, this particular PSE item was subjected to further analysis. No definite relationship between socio-demographic variables and evasiveness was observed. All patients who had evasiveness were noted to have delusions of persecution, reference and misidentification. Six patients were noted to have evasiveness because of incoherence, excitement etc. In 8 patients it was felt that evasiveness was because of active concealment on the part of the patients. It is interesting to note that of the 8 patients who were actively concealing delusions, 7 were paranoid schizophrenics. These results are displayed in Table 4 (relationship between evasiveness and socio-demographic and clinical variables: evasiveness due to concealment versus evasiveness due to incoherence etc.). Younger patients were seen to have significantly more systematization. Patients above the age of 30 years had significantly more delusions of persecution. Male patients were observed to have significantly more persecutory delusions. Apart from this, sex of the patient did not have any significant contribution in determining the type of delusions. Delusions of reference were seen more frequently in married and educated patients. Educated patients (education more than matriculation) had significantly more delusional misinterpretation and delusions of thoughts being read. The place of residence did not have any significant influence on the type of delusions displayed by the patients. The relationship of these socio-demographic variables with the types of delusions is displayed in Table 5. Discussion Firstly, our choice of the ICD-9 (WHO 1978) diagnosis of schizophrenia requires some explanation.
Had we used any other definition of schizophrenia, we might have introduced a certain degree of bias towards eliciting delusions, as many of the contemporary systems for the diagnosis of schizophrenia depend on the presence of a particular type of delusion in the patient. The concept of schizophrenia as described in ICD-9 is broad and does not specifically depend on any particular symptomatology or phenomenology, to the exclusion of others, for the diagnosis of schizophrenia. Moreover, the high rate of agreement between the ICD-9 diagnosis and the CATEGO class of schizophrenia supports this choice. In our study 87.5 percent were found to be deluded. This finding is in agreement with Ndetei and Singh (1982), but is higher than the figures reported by Lucas et al (1962), Kulhara and Wig (1978), Sharma and Gupta (1970) and Bhaskaran and Saxena (1970). We have found that delusions of persecution, delusions of reference, delusions of mind being read and delusional explanation in terms of paranormal phenomena are more common than subculturally influenced delusions, fantastic delusions, simple delusions concerning appearance etc. This is in agreement with the findings reported in the literature. Our observation that male patients had more delusions of persecution is in agreement with the findings of Bhaskaran (1963), Bhaskaran and Saxena (1970) and Lucas et al (1962) but at variance with Ndetei and Singh (1982). It is also observed that married people have more delusions of reference than single patients. There does not seem to be any tangible explanation for this. The educational level of the patients appears to have a curious influence on the type of delusions. Delusions of reference, delusional misinterpretation and delusions of thoughts being read were seen significantly more in better educated patients. It could be argued that these patients have better linguistic competence and as such can elaborate and express delusions in a better way. Varma (1982) and Varma et al (1985) have consistently argued that higher linguistic competence is one of the important factors that leads to the sustenance and further systematization of paranoid delusions. Surprisingly enough, place of residence of the patients as a variable did not have any significant influence on the type of delusion. The observation that rural patients have significantly more delusional elaboration in terms of paranormal phenomena and urban patients in terms of physical phenomena, as observed by Kala & Wig (1978), is not substantiated by our study. The relationship between current age of the patient and delusions is intriguing. Though older patients have an excess of persecutory delusions, younger patients have significantly more systematization. Ndetei and Singh (1982) did not find any such difference. We are unable to offer any reasonable explanation for our findings. To conclude, it can be said that delusions are an important association of schizophrenia as identified in this study. The relationship between education and certain types of delusions is striking and needs further exploration, particularly in the context of linguistic competence.
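The Analysis of Data section above specifies chi-square tests with Yates' correction for the non-parametric comparisons. The snippet below shows what that computation looks like with scipy for a 2x2 sex-by-persecutory-delusions table; the cell counts are hypothetical (only the 59/53 row totals come from the paper), so the output illustrates the method, not the study's actual statistics.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = male/female, columns = persecutory
# delusions present/absent. Row totals match the sample (59 males,
# 53 females); the cell splits are invented for illustration only.
table = [[40, 19],
         [24, 29]]
chi2, p, dof, expected = chi2_contingency(table, correction=True)  # Yates
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```

With `correction=True` (the default for 2x2 tables), scipy subtracts 0.5 from each |observed - expected| difference, which is exactly Yates' continuity correction.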
Concerning the detection of electromagnetic knot structures in space plasmas using the wave telescope technique. The wave telescope technique is broadly established in the analysis of spacecraft data and serves as a bridge between local measurements and the global picture of spatial structures. The technique is originally based on plane waves and has been extended to spherical waves, phase-shifted waves as well as planetary magnetic field representation. The goal of the present study is the extension of the wave telescope technique using electromagnetic knot structures as a basis. As the knots are an exact solution of Maxwell's equations, they open the door for a new modeling and interpretation of magnetospheric structures, such as plasmoids. Introduction The classification and mathematical modeling of spatial structures are among the major missions of theoretical physics. Our extraterrestrial space environment in particular provides a diversity of spatial structures with different characteristics. For example, oscillating structures can be classified into plane waves (e.g., MHD waves), spherical waves generated at the bow shock, surface waves triggered by instabilities at the magnetopause and phase-shifted waves caused by field line resonances (Plaschke et al., 2008; Narita et al., 2022). On the other hand, global planetary magnetic fields can be interpreted in terms of a multi-pole series based on spherical harmonics (Gauss, 1839; Glassmeier and Tsurutani, 2014; Toepfer et al., 2020a, b, 2021). For the characterization of such structures, empirical models, such as magnetospheric models or models based on a set of specific basis functions spanning the solution space of differential equations, are required. In general, any spatial structure can be expanded into a set of mathematical basis functions, such as plane waves or spherical harmonics. Plane waves are the simplest spatial structures forming a basis for the representation of spatial fields. The contribution of any plane wave with its characteristic spatial scale to the total field is described by the spectrum of the field. However, in the worst case, infinitely many elements forming the basis have to be incorporated to describe the structure, resulting in an infinite set of expansion coefficients that have to be determined from the measurements. In this case, it is desirable to choose a new representation based on a new set of basis functions that are well-adjusted to the symmetry of the structure with fewer unknown parameters. Electromagnetic knots, proposed by Cameron (2018), are a special superposition of infinitely many plane waves, forming such a new basis set for localized, divergence-free structures, namely the electromagnetic ring and the electromagnetic globule. The geometry of these basis elements is depicted in Fig. 1a and b. A variety of electromagnetic field topologies can be constructed by spatially distributing and superposing several rings and globules as illustrated in Fig. 1c. The complexity of the emerging field geometries prompts the naming electromagnetic knots (Cameron, 2018).
The electromagnetic ring and the electromagnetic globule are an exact solution of Maxwell's equations and provide a new tool in the context of plasma physical and electrodynamical modeling. Based on the elaboration of Cameron (2018), the mathematical foundations of electromagnetic knots are revisited in the present study. Within this context, the formalism is reformulated in terms of the classical wave telescope technique (Motschmann et al., 1996). Additionally, the applicability of describing and interpreting spatial structures in planetary magnetospheres via knots is discussed. The wave telescope technique enables the classification of spatial structures in planetary magnetospheres from a limited number of satellite positions and has successfully been applied to several problems in space physics (Glassmeier et al., 2001; Narita et al., 2003, 2009, 2013, 2022). Originally, the method was based on a plane wave representation and was later extended to spherical waves (Constantinescu et al., 2006), phase-shifted waves (Plaschke et al., 2008) and planetary magnetic fields (Narita, 2019; Toepfer et al., 2020a, b). The goal of the present study is the extension of the variety of spatial structures that can be analyzed from a limited set of measurement positions by considering the electromagnetic knots a new basis set for the wave telescope. The method is tested against synthetically generated magnetic field data describing a plasmoid as a two-dimensional magnetic ring structure. The classical wave telescope Maxwell's equations represent a set of coupled partial differential equations for the magnetic field B(x, t) and the electric field E(x, t). These equations can be transformed into a set of algebraic equations via the Fourier transform. In the following discussion we will focus on the magnetic field. The measurement position x and the measured field B(x, t) are known from a set of magnetometer measurements. Due to the high temporal resolution of the magnetometer, the temporal Fourier transform can be applied to the data, delivering the spectral amplitude B(x, ω) (Motschmann et al., 1996). In general, this spectral amplitude is a continuous function of ω. However, in the practical application outstanding points of the spectrum, for example sharp maxima, are of major interest. Thus, the data are evaluated at a peak, where ω = ω0, with the corresponding amplitude B(x, ω0). So far, the magnetic field can be written as B(x, ω0) = ∫ B̂0(k, ω0) e^{i k·x} d³k, (1) where B̂0(k, ω0) is the spectral amplitude of the magnetic field with respect to the wave vector k. As the magnetic field measurements are solely available at a limited number of measurement points, the spatial Fourier transform is not applicable. Thus, the spectral amplitudes B̂0(k, ω0) and the corresponding wave vectors k are to be determined by the data fitting procedure. Although a variety of inversion techniques are available (e.g., Haykin, 2014), we will focus on the wave telescope technique (Motschmann et al., 1996). Suppose that the magnetic field vector B(x, ω0) is measured at N positions x_i (i = 1, ..., N), summarized into the 3N-dimensional vector B(ω0). Thus, the determination of the spectral amplitude B̂0(k, ω0) results in an overdetermined inversion problem. Following Motschmann et al. (1996), Narita (2019) and Toepfer et al. (2020b), the magnetic field model can be rewritten as B(ω0) = H(k) B̂0(k, ω0), where H(k) = [e^{i k·x_1} I, e^{i k·x_2} I, ..., e^{i k·x_N} I]^T is the shape matrix and I ∈ R^{3×3} denotes the identity matrix.
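As a concrete illustration, the sketch below assembles the plane-wave shape matrix H(k) from the spacecraft positions; the (N, 3) position-array layout is an implementation choice of this sketch, not an interface taken from the paper.

```python
import numpy as np

def shape_matrix(k, positions):
    """Plane-wave shape matrix H(k): a (3N, 3) stack of exp(i k.x_j) * I.
    k: trial wave vector, shape (3,); positions: spacecraft positions (N, 3)."""
    phases = np.exp(1j * positions @ k)          # (N,) phase factors
    return np.concatenate([p * np.eye(3) for p in phases], axis=0)
```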
The magnetic field measurements can be arranged into the data covariance matrix M = ⟨B(ω0) B†(ω0)⟩, where the angular brackets denote the statistical average of the data. The spectrum of the wave can be estimated via P(k) = tr{[H†(k) M⁻¹ H(k)]⁻¹}, where the dagger † denotes the Hermitian conjugate and tr the trace. The maximum values of P(k) may be interpreted as the spectrum of the field. If only a finite number of sharp peaks emerges, the magnetic field may be interpreted as a superposition of plane waves with discrete k values. As P(k) is a nonlinear function of the vector k, the whole three-dimensional k space needs to be scanned to identify the peaks (Motschmann et al., 1996). Electromagnetic knots The classical wave telescope technique does not assume any symmetry or relation between different k vectors of the spectrum. However, to be able to use electromagnetic knots as a system of basis structures, the geometry of the k space needs to be specialized. In this respect, the classical wave telescope technique differs from its extension presented here. The following mathematical derivation of electromagnetic knots is based on Cameron (2018). Construction of the knots For the specific evaluation of the integral in Eq. (1), spherical coordinates (k, ϕ, θ) are introduced in the k space: k = k (sin θ cos ϕ e_x + sin θ sin ϕ e_y + cos θ e_z). The vectors e_x, e_y and e_z denote the unit vectors of the Cartesian coordinate system. In this case, the magnetic field in Eq. (1) can be rewritten as B(x, ω0) = ∫₀^∞ dk k² ∫₀^{2π} dϕ ∫₀^π dθ sin θ B̂0(k, ϕ, θ, ω0) e^{i k·x}, where B̂0(k, ϕ, θ, ω0) is the spectral amount of the field corresponding to k. Due to Maxwell's equations, the magnetic field (as well as the electric field in the absence of free charge carriers) is solenoidal, such that k · B̂0 = 0 for every contributing wave vector. To guarantee the solenoidality of the magnetic field, the ansatz B̂0(k, ϕ, θ, ω0) = α(k, ϕ, θ, ω0) e_ϕ + β(k, ϕ, θ, ω0) e_θ (11) is chosen, with the transverse unit vectors e_ϕ = −sin ϕ e_x + cos ϕ e_y and e_θ = cos θ cos ϕ e_x + cos θ sin ϕ e_y − sin θ e_z, and where α(k, ϕ, θ, ω0) and β(k, ϕ, θ, ω0) are complex functions of (k, ϕ, θ) and ω0. In the following this ansatz is specified by constraining the geometry of the three-dimensional k space. In this respect, the spectral amplitude (Eq. 11) can be rewritten as B̂0 = K(k) [α'(ϕ, θ) e_ϕ + β'(ϕ, θ) e_θ], (16) where the functions α'(ϕ, θ) and β'(ϕ, θ) weight the summation over the k space with respect to the angles ϕ and θ, and the abbreviation K(k) carries the radial dependence. In the following, the functions α'(ϕ, θ) and β'(ϕ, θ) are specified to evaluate the spectral amplitude B̂0 with regard to electromagnetic knots (Cameron, 2018). Each spectral amount (corresponding to a fixed k value) of the field may be characterized by a superposition of plane waves with the same amplitude propagating in every direction (independent of ϕ and θ), such that α'(ϕ, θ) = α0 = const. and β'(ϕ, θ) = β0 = const. In this case, the spectral amplitude results in B̂0 = K(k) (α0 e_ϕ + β0 e_θ), representing a superposition of infinitely many plane waves of the same amplitude, with the spectrum determined solely by K(k). Therefore, the distribution in k space is completely characterized by the value k. Using the definitions of the unit vectors e_ϕ and e_θ, the magnetic field can be further expanded into the form B(x, ω0) = ∫₀^∞ dk k² K(k) ∫₀^{2π} dϕ ∫₀^π dθ sin θ (α0 e_ϕ + β0 e_θ) e^{i k·x}. (22) For the evaluation of the integrals in Eq. (22) it is useful to introduce a cylindrical coordinate system (ρ, φ, z) in the position space: x = ρ cos φ e_x + ρ sin φ e_y + z e_z, where ρ = √(x² + y²). The corresponding unit vectors are given by e_ρ = cos φ e_x + sin φ e_y, e_φ = −sin φ e_x + cos φ e_y, e_z = e_z. The scalar product of the k vector and the position vector results in k · x = k sin θ (x cos ϕ + y sin ϕ) + k z cos θ. Using x = ρ cos φ and y = ρ sin φ provides k · x = k ρ sin θ cos(ϕ − φ) + k z cos θ.
For the further evaluation of the integrals in each component of Eq. (22), the abbreviations η1(θ) := kx sin θ and η2(θ) := ky sin θ are introduced. By means of these preparations, the ϕ integration can be solved analytically, delivering the Bessel functions of the first kind, e.g., ∫₀^{2π} e^{i(η1 cos ϕ + η2 sin ϕ)} dϕ = 2π J0(kρ sin θ). The detailed evaluation of the integrals can be found in the Appendix, resulting in closed-form expressions for the two parts of the field (Eqs. 27 and 28), which combine into the total field of Eq. (26); up to a normalization constant, the part proportional to α0 is purely azimuthal, B ∝ (ρ/r) j1(kr) e_φ with r = √(ρ² + z²), where j1 denotes the spherical Bessel function of the first kind. The complex constants α0 and β0 are the free parameters of the magnetic field in Eq. (26) and can be chosen independently of each other. The first part of the field, which corresponds to the expansion coefficient α0, is called the magnetic ring (see Fig. 1a). The second part, corresponding to the expansion coefficient β0, is the magnetic globule (see Fig. 1b). It should be noted that the electromagnetic knot structures do not form an entire set of mathematical basis functions. Regarding the derivation presented here, electromagnetic knots can be written as a superposition of infinitely many plane waves, as plane waves represent an entire set of basis functions. However, the inverse is not true. The functions α'(ϕ, θ) and β'(ϕ, θ) in Eq. (16) control the angular dependency in the k space. By choosing α'(ϕ, θ) = const. and β'(ϕ, θ) = const., infinitely many plane waves propagating in every direction contribute to the field. The resulting field structures are solenoidal and spatially localized. Thus, the magnetic ring and the magnetic globule can be interpreted as a set of basis functions for isotropically localized, divergence-free structures. Choosing different shapes for the functions α'(ϕ, θ) and β'(ϕ, θ) enables the modeling of structures beyond electromagnetic knots. Electric field The electric field and the magnetic field are connected via Ampère's law. In the absence of ohmic currents, Ampère's law reduces to ∇ × B = (1/c_ph²) ∂E/∂t, where c_ph is the phase velocity. Fourier transformation provides i k × B̂ = −(i ω0/c_ph²) Ê. Ampère's law is valid for every k vector that contributes to the spectrum of the field, yielding Ê = −(c_ph²/ω0) k × B̂ for each spectral component, the physical field being given by the real part. Thus, the electric field is obtained by applying this relation under the integral in Eq. (22). Electric current density When ohmic currents j(x, t) ≠ 0 are present, Ampère's law can be written as ∇ × B = μ0 j under the assumption of stationarity or if the displacement current is negligible. Again, Fourier transformation provides i k × B̂ = μ0 ĵ. In analogy to the electric field, the current density can be calculated via ĵ = (i/μ0) k × B̂. Thus, the current density of the magnetic ring follows the topology of a globule and vice versa. Spatially distributed knot structures Within the derivation of the knot structures, the magnetic ring and the magnetic globule are defined with respect to the same origin of the cylindrical coordinate system (ρ, φ, z). The resulting structures are also known as (electro)magnetic disturbances of the first kind (Cameron, 2018). However, in general the structures can be defined with respect to different (local) coordinate systems, spanned by the local unit vectors (e_ρq, e_φq, e_zq), where q = 1, ..., Q, with different origins O_q. The resulting structures, B(x) = Σ_{q=1}^{Q} B_q(x_q), where x = O_q + x_q and x_q = ρ_q e_ρq + z_q e_zq, are a superposition of Q translated and/or rotated (electro)magnetic disturbances of the first kind (see Fig. 1c) and are also called (electro)magnetic disturbances of the second kind (Cameron, 2018). The field is characterized by 8Q free parameters, i.e., the expansion coefficients α0q and β0q, the origins O_q, and the orientation of the local coordinate system that can be described, for example, via Euler angles (Cameron, 2018).
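The construction above can also be verified numerically without any closed forms, by summing equal-amplitude plane waves over a grid of propagation directions on the sphere |k| = k0. The sketch below does exactly that; the quadrature rule, normalization and array layout are assumptions of this sketch rather than choices taken from Cameron (2018).

```python
import numpy as np

def knot_field(x, k0, alpha0=1.0, beta0=0.0, n_theta=64, n_phi=128):
    """Knot field at points x (shape (M, 3)): equal-amplitude plane waves
    with solenoidal amplitudes alpha0*e_phi + beta0*e_theta, summed over
    all propagation directions on the sphere |k| = k0."""
    theta = np.linspace(0.0, np.pi, n_theta)
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    th, ph = np.meshgrid(theta, phi, indexing="ij")
    khat = np.stack([np.sin(th) * np.cos(ph),          # unit wave vectors
                     np.sin(th) * np.sin(ph),
                     np.cos(th)], axis=-1).reshape(-1, 3)
    e_phi = np.stack([-np.sin(ph), np.cos(ph),
                      np.zeros_like(ph)], axis=-1).reshape(-1, 3)
    e_theta = np.stack([np.cos(th) * np.cos(ph),
                        np.cos(th) * np.sin(ph),
                        -np.sin(th)], axis=-1).reshape(-1, 3)
    amp = alpha0 * e_phi + beta0 * e_theta             # k . amp = 0 by construction
    w = (np.sin(th) * (theta[1] - theta[0]) * (phi[1] - phi[0])).reshape(-1)
    phase = np.exp(1j * k0 * np.asarray(x) @ khat.T)   # (M, D) plane-wave phases
    return np.real((phase * w) @ amp)                  # solid-angle weighted sum
```

Evaluating the field on a grid in the z = 0 plane with alpha0 = 1 and beta0 = 0 reproduces the azimuthal ring pattern of Fig. 1a; swapping the coefficients yields the globule of Fig. 1b.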
Discussion of the knot structures Within the derivation presented above, the spectral distribution of the field with respect to k is controlled by the function K(k). Electromagnetic knots, as originally described by Cameron (2018), are superpositions of infinitely many monochromatic plane waves, i.e., K(k) = δ(k − k0), with the same amplitude, propagating in every direction, with the spectrum concentrated on the sphere |k| = k0. In contrast to single plane waves, knots are localized structures, similar to wave packages. The localization of the structures results from the spatial distribution of the wave phases F(θ, ϕ) = k0 · x = k0 (x sin θ cos ϕ + y sin θ sin ϕ + z cos θ). Thus, the knots are a superposition of plane waves with different phases F(θ, ϕ) at all points in space except at the central point. At the origin of the structure (x = y = z = 0) the phases of the waves are all equal, F(θ, ϕ) = 0, resulting in a constructive interference with a maximum amplitude at the central point. The scale size of the knot is determined by k0, representing a set of infinitely many k vectors with the same length. The superposition of the plane waves is schematically illustrated in Fig. 2. Equation (27) represents the magnetic field with respect to the position vector x and the frequency ω0. However, the spatial structure of the field can also directly be analyzed from the measurement data B(x, t) evaluated at different time steps t, and thus no Fourier transform with respect to time is required. Extension of the wave telescope Following this short derivation and discussion of the electromagnetic knots, the knot model needs to be reformulated in terms of the wave telescope technique to estimate the spectrum of the knots. Reformulation of the model After performing the temporal Fourier transform, the magnetic field (Eq. 27), measured at the position x_i, i = 1, ..., N, can be rewritten as B(x_i, ω0) = H(x_i, k) (α0, β0)^T, where H(x_i, k) ∈ C^{3×2} is the corresponding shape matrix of the position x_i, whose two columns are the ring and globule basis fields evaluated at x_i. Summarizing the measurements into a 3N-dimensional vector B(ω0), the magnetic field can be rearranged as B(ω0) = H(k) (α0, β0)^T, where H(k) ∈ C^{3N×2} stacks the matrices H(x_i, k). Again, the determination of the amplitudes α0 B̂0(k, ω0) and β0 B̂0(k, ω0) results in an overdetermined inversion problem. In analogy to the classical wave telescope technique, the spectrum can be estimated via P(k) = tr{[H†(k) M⁻¹ H(k)]⁻¹}. Since P(k) is a nonlinear function of k, the whole k space has to be scanned to estimate the spectrum of the field (Motschmann et al., 1996). Solely considering the magnetic ring (Eq. 28), the shape matrix transfers onto the shape vector h(k) ∈ C^{3N} (Narita, 2019). In this case, the spectrum of the ring can be estimated via P_r(k) = [h†(k) M⁻¹ h(k)]⁻¹. (51) Application to plasmoids For the first application of electromagnetic knots in the context of magnetospheric structures, we consider the modeling of plasmoids via a magnetic ring (Zhang et al., 2013). Plasmoids are a consequence of magnetic reconnection in the far-tail region of a planetary magnetosphere triggered by the Dungey cycle (e.g., McPherron, 1995). The structures are characterized by a magnetic ring along the neutral sheet line with a length scale of the order of the solar wind's obstacle (e.g., McPherron, 1995; Zong et al., 2004).
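A compact sketch of the ring estimator P_r(k) of Eq. (51) is given below. It assumes the measurements are available as repeated 3-component snapshots at the N positions, uses a pseudo-inverse for numerical robustness, and accepts the model ring field as a callable (for instance the knot_field synthesiser sketched earlier, with beta0 = 0); these interface choices are ours, not the paper's.

```python
import numpy as np

def ring_spectrum(b_samples, positions, k_grid, ring_field):
    """Scan P_r(k) = 1 / (h†(k) M^-1 h(k)) over trial wavenumbers.
    b_samples: (T, N, 3) repeated field measurements at N positions;
    ring_field(positions, k) must return the (N, 3) model ring field."""
    T = b_samples.shape[0]
    B = b_samples.reshape(T, -1)                       # (T, 3N) data vectors
    M = np.einsum("ta,tb->ab", B, B.conj()) / T        # covariance <B B†>
    M_inv = np.linalg.pinv(M)
    P = np.empty(len(k_grid))
    for i, k in enumerate(k_grid):
        h = ring_field(positions, k).reshape(-1)       # shape vector (3N,)
        P[i] = 1.0 / np.real(h.conj() @ M_inv @ h)
    return P                                           # peak marks the ring scale
```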
28), composed of monochromatic plane waves, representing the plasmoid, with the field generated by the neutral sheet current (Harris neutral sheet; Harris, 1962), where the x axis points towards the night side magnetosphere, the y axis points from the southern geographic pole to the northern geographic pole, and the z axis completes the right-handed system. Thus, we model the plasmoid as a two-dimensional structure in the x–y plane (Zhang et al., 2013). The value B₀ represents an arbitrarily chosen background amplitude, B_s = 0.3 B₀, and the length scale of the current sheet is chosen to be L = 10⁻³ R_E, where R_E is the planetary radius, e.g., the terrestrial radius. The characteristic length scale of the plasmoid is chosen to be λ₀ = 1.5 R_E, corresponding to k₀ = 2π/λ₀ ≈ 4.19 R_E⁻¹. The resulting magnetic field data are evaluated at N = 7 synthetically generated spacecraft positions, representing a HelioSwarm-like configuration (Klein and Spence, 2021). As plasmoids are highly dynamical, traveling structures, the measurement positions are shifted along the x axis with respect to the origin of the plasmoid (left, mean, right), representing different time steps. The length scale λ₀ (or equivalently k₀) of the plasmoid is estimated from the virtual spacecraft data via Eq. (51). The resulting field geometry (blue arrows) and the measurement positions (red dots) as well as the corresponding spectra are illustrated in Fig. 3. When the measurement positions are distributed around the origin of the plasmoid (mean), the implemented value of k₀ can be reconstructed with high precision from the data. In the other cases, the spatial length scale is slightly overestimated and the relative error amounts to about 6 % (left) and 4 % (right). Thus, the wave telescope technique is capable of (1) separating the plasmoid from the neutral sheet part and (2) estimating the characteristic length scale of the plasmoid from a limited number of measurement positions.

In analogy to the classical wave telescope technique, the accuracy of the reconstruction depends on the relation between the plasmoid's length scale λ₀ and the mean distance d between the spacecraft positions (e.g., Narita et al., 2022). For example, if d ≫ λ₀, the measurement positions do not properly sample the spatial extent of the plasmoid, resulting in ambiguities within the reconstruction procedure. In the case of d ≪ λ₀, the magnetic field structure of the plasmoid is not detectable. Thus, the mean distance between the spacecraft positions has to be of the order of the plasmoid's spatial scale, d ∼ λ₀, which will be realized by the configuration of the planned HelioSwarm multiscale mission.

Furthermore, the amplitude of the ring B₀ has to be of the same order as or larger than the sheet field B_s to guarantee a precise reconstruction result. For example, in the case of B_s = 10 B₀ no peak occurs within the spectrum P_r(k) and the ring cannot be discerned from the background field. On the other hand, the peak within the spectrum becomes sharper in the case of B_s = 0.1 B₀.
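The scanning estimator used here is, at its core, a minimum-variance (Capon-type) beamformer. As a rough illustration, the following scalar Python toy model (a deliberate simplification; the paper's estimator uses the full 3N-dimensional magnetic field vector and the ring shape vector of Eq. 28, which are not reproduced here, and the positions and noise level below are invented) scans trial wave vectors against a covariance matrix built from synthetic multi-point data and recovers the implanted wavenumber:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "spacecraft" positions (N = 7, scattered over ~1.5 R_E)
N = 7
pos = rng.uniform(-0.75, 0.75, size=(N, 3))

# Hypothetical structure: a single spatial harmonic with |k| = k0
k0_true = 4.19
k_true = k0_true * np.array([1.0, 0.0, 0.0])

def steering(k):
    """Plane-wave shape (steering) vector h(k) for the scalar toy model."""
    return np.exp(1j * pos @ k)

# Cross-spectral (covariance) matrix M from many noisy snapshots
snapshots = 500
h_true = steering(k_true)
data = (rng.standard_normal(snapshots)[:, None] * h_true
        + 0.1 * (rng.standard_normal((snapshots, N))
                 + 1j * rng.standard_normal((snapshots, N))))
M = (data.conj().T @ data) / snapshots
M_inv = np.linalg.inv(M + 1e-6 * np.eye(N))   # small diagonal loading for stability

def capon_power(k):
    """Minimum-variance power estimate P(k) = [h^H M^-1 h]^-1."""
    h = steering(k)
    return 1.0 / np.real(h.conj() @ M_inv @ h)

# Scan |k| along the known direction; the spectral peak recovers k0
k_scan = np.linspace(0.5, 8.0, 300)
spectrum = [capon_power(k * np.array([1.0, 0.0, 0.0])) for k in k_scan]
print(f"estimated k0 ≈ {k_scan[int(np.argmax(spectrum))]:.2f} (true {k0_true})")
```

The d ∼ λ₀ condition discussed above shows up directly in this toy model: shrinking the position spread well below 2π/k₀ flattens the spectrum, while stretching it far beyond introduces secondary (aliased) peaks.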
Further applications
The application of electromagnetic knots presented above indicates the potential of the representation. Spatially distributed electromagnetic knots as described by Cameron (2018) enable the modeling of more complex structures, provide generalized spectral information and open the door for further applications, delivering an alternative interpretation of magnetospheric structures. For example, the magnetic field configuration resulting from a field-aligned current can be modeled as a superposition of magnetic rings stacked on top of each other. Due to Ampère's law, the corresponding current density is given as a superposition of globules. Thus, the inner structure of field-aligned currents can be analyzed directly from the magnetic field measurements (Toepfer et al., 2021). Also, the current system of Alfvén wings can be described as a superposition of rings (e.g., Vernisse et al., 2018), so that the corresponding magnetic field topology follows the structure of superposed globules. Furthermore, field line resonances (Glassmeier et al., 1999; Plaschke et al., 2008) may be described as a special superposition of magnetic rings.

Conclusions
Electromagnetic knots are a superposition of infinitely many monochromatic plane waves with a spherically symmetric spectrum and represent an exact solution of Maxwell's equations. The resulting basis elements, i.e., the electromagnetic ring and the globule, form a basis set for localized, divergence-free spatial structures. For this reason, the concept of electromagnetic knots opens the door for a completely new description and interpretation of spatial structures in planetary magnetospheres.

The classification of spatial structures evaluated at a limited number of measurement points describes an overdetermined inversion problem. The wave telescope technique serves as a robust data analysis tool for the global interpretation of spacecraft measurements in terms of expected physical structures. By reformulating the formalism of electromagnetic knots in terms of the wave telescope technique, we extended the zoo of spatial structures that can be analyzed by the method. In this sense, the present study can be interpreted as a generalization of the wave telescope technique to a structure telescope technique.

For a first validation, the concept of electromagnetic knots has been applied to the modeling of a plasmoid. Using a HelioSwarm-like satellite configuration, the wave telescope technique is capable of separating the plasmoid, modeled as a magnetic ring, from the field generated by the neutral sheet current and enables the estimation of the length scale of the ring. Thus, the presented extension of the wave telescope technique serves as a new data analysis tool for multispacecraft missions, such as the planned HelioSwarm mission. However, the application of electromagnetic knots for characterizing further structures, such as field-aligned currents or Alfvén wings, should be analyzed in future studies. In general, we conclude that the modified wave telescope technique outlined here bears the potential for a new representation and physical description of complex spatial structures existing in space plasmas.

As the integrand is a 2π-periodic function, the integral is independent of γ₀.

Figure 2. Illustration of superposed, monochromatic plane wave fronts (gray lines) with the wavelength λ₀ = 2π/k₀. The knots are localized in the origin of the red coordinate system spanned by the vectors e_ρ, e_φ and e_z.
Figure 3. Reconstructed spectrum P_r(k) resulting from different measurement positions (red dots) with respect to the origin of the plasmoid. The length scale of the plasmoid is chosen to be k₀ ≈ 4.19 R_E⁻¹.
Can Diffusion-Weighted Imaging Serve as an Imaging Biomarker for Acute Bacterial Rhinosinusitis?

Acute rhinosinusitis is defined as symptomatic inflammation of the mucosal lining of the nasal cavity and paranasal sinuses lasting less than four weeks. It is most commonly secondary to viral infection but is often challenging to distinguish from bacterial etiologies. Even with recommendations from several specialty societies, there continues to be a frequent practice of overprescribing oral antibiotics for acute rhinosinusitis, thus leading to multidrug-resistant organisms, and rendering oral medication useless when actually clinically warranted. We observed a potential non-invasive imaging biomarker that could predict which patients would benefit from anti-microbial therapy. Often computed tomography (CT) imaging is obtained by the provider before consultation with the otolaryngologist, sometimes leading to unnecessary radiation to the patient. In addition, there are no clear CT findings to make the diagnosis of acute rhinosinusitis. The diagnosis is challenging for all clinicians involved, and therefore, additional signs on other imaging modalities would be helpful. We present a series of four patients with incidentally discovered culture-positive acute rhinosinusitis. Patients with incidentally discovered culture-positive acute rhinosinusitis were found to also have magnetic resonance imaging (MRI) that showed corresponding restricted diffusion on diffusion-weighted imaging (DWI). An imaging biomarker for acute bacterial rhinosinusitis may improve the appropriate use of antibiotic therapy. DWI MRI should be further investigated as a potential candidate screening modality.

Introduction
Acute sinusitis accounts for over 400,000 emergency department visits a year [1]. Sinusitis affects nearly one in every eight adults in the United States with nearly 30 million diagnosed cases a year. It is responsible for 11 billion dollars in direct costs per year, without accounting for loss of productivity and quality of life. Nearly 20% of all antibiotic prescriptions are for sinusitis [2]. Ultimately, 90% of these prescriptions are deemed not indicated, resulting in multidrug resistance, unnecessary costs, and patient side effects. One study by Smith et al. estimated that more than 80% of patients who present acutely with nasal congestion and discharge will receive oral antibiotics, despite the fact that less than 2% of acute rhinosinusitis cases are bacterial in etiology [3]. The most recent clinical practice guidelines consensus statement published in 2015 defines acute rhinosinusitis as less than four weeks of purulent nasal drainage and nasal obstruction, facial pain/pressure, or both that persist without improvement for at least 10 days or if symptoms worsen after initial improvement [2,4]. The current recommendation from the American College of Radiology (ACR) Appropriateness Criteria and American Academy of Otolaryngology-Head and Neck Surgery is that diagnosis of acute sinusitis should be based on clinical evaluation and history [2,4,5]. Despite these guidelines, general practice includes imaging of patients with computed tomography (CT), exposing them to unnecessary radiation, and often still without a definitive diagnosis. This is especially problematic in the emergency setting where there is pressure to diagnose and treat patients accurately and efficiently, sometimes based solely on imaging.
Since the diagnosis of acute bacterial sinusitis is difficult with both clinical presentation and sinus CT, the use of other imaging modalities should be explored. In our experience, we have noticed that acute bacterial sinusitis can occasionally have restricted diffusion on magnetic resonance imaging (MRI).

Case Presentation
We identified four patients who had incidentally discovered culture-positive acute bacterial rhinosinusitis on MRI. In these cases, we observed restricted diffusion on diffusion-weighted imaging (DWI). The four patients ranged in age from eight to 59 years. There were three males and one female, none were immune-compromised, and all four were referred to MRI for reasons not referable to the sinuses. They had imaging for reasons such as headache, post-treatment surveillance, or pre-operative planning. Based on the restricted diffusion in the sinuses, and despite the fact that the MRI was not performed because of sinus disease, the patients all had sinus cultures obtained and performed by the otolaryngology service. The following organisms were cultured: beta-lactamase positive Bacteroides fragilis, coagulase-negative Staphylococcus, gram-positive Streptococcus, and Pseudomonas aeruginosa. The key feature in all of these cases is that each had restricted diffusion within the sinus that was cultured, and found to be positive for bacteria (Figure 1).

FIGURE 1: Diffusion-weighted imaging (DWI) magnetic resonance imaging (MRI) in patients with acute bacterial rhinosinusitis. Diffusion restriction in culture-positive acute sinusitis (diffusion-weighted image on the left and apparent diffusion coefficient map on the right; arrows point to restricted diffusion).

Discussion
To date, the diagnosis of acute rhinosinusitis has been a clinical dilemma despite recommendations from several specialty societies, including the Centers for Disease Control [6]. This has resulted in overuse of CT scanning, with unnecessary radiation exposure, and without definitive diagnosis given the lack of defined CT imaging findings for acute bacterial rhinosinusitis. It has also led to frequent prescribing of oral antibiotics for presumed acute bacterial rhinosinusitis despite the fact that most patients present with sinusitis secondary to viral etiologies. This overprescribing has contributed to the development of multi-drug resistant organisms. Over-prescribing oral antibiotics for non-proven sinusitis may be due to concern for potential complications from untreated acute rhinosinusitis, and decreased patient satisfaction if no medication is prescribed. From a radiology standpoint, no clear imaging findings exist on CT for the diagnosis of acute bacterial rhinosinusitis, and therefore, diagnosis on imaging has always been controversial. The presence of a fluid level is one of the more common findings suggested for acute rhinosinusitis, but can also be seen in the setting of viral and allergic sinusitis. In addition, air-fluid levels can be seen in chronically bed-ridden and trauma patients. Other findings that have been suggested in the literature include total filling/opacification of a sinus and mucosal thickening greater than 3 mm. However, none showed a correlation with positive bacterial cultures [7,8]. Studies have also shown that mucosal thickening can be a completely incidental finding on imaging [9]. For these reasons, it is clear that CT is not reliable for the diagnosis of acute bacterial rhinosinusitis, exposing patients to unnecessary radiation.
To the best of our knowledge, no study has been done to show that the presence of restricted diffusion within fluid or mucosal thickening in a sinus suggests the diagnosis of acute bacterial rhinosinusitis. One study suggests that lower apparent diffusion coefficient values are associated with mucosal thickening rather than other inflammatory lesions such as mucous retention cysts and air-fluid levels with homogeneous or heterogeneous T2-signal intensity [10]. No correlation with pathology was made in that study. All of the patients presented in this report had restricted diffusion within the sinus with confirmed positive bacterial cultures, consistent with acute bacterial rhinosinusitis. Perhaps a cutoff value for the apparent diffusion coefficient may be more useful to the radiologist in suggesting the diagnosis of acute sinusitis. We found that the signal on T2-weighted images in the area of restricted diffusion was variable. An associated fluid level within the sinus was also variable.

Conclusions
The presence of restricted diffusion within a sinus may be a potential biomarker for acute bacterial sinusitis. A prospective study would be necessary in patients who clinically present with acute sinusitis. Techniques will need to eventually be optimized to address overutilization and cost issues. The sensitivity and specificity of MRI findings could be correlated with endoscopic findings and bacterial culture. It is conceivable that the presence of restricted diffusion in addition to other imaging features may offer greater sensitivity for the diagnosis. If DWI proves sensitive and specific, a simple scan of the sinuses with MRI could transform the practice of diagnosing acute bacterial rhinosinusitis, preventing unnecessary radiation to the patient and the overuse of antibiotics.
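For readers unfamiliar with the quantity at the heart of the proposed biomarker: the ADC is conventionally derived from DWI acquisitions at two b-values via the monoexponential model S_b = S₀·exp(−b·ADC). The short Python sketch below illustrates that computation on a toy region; the voxel intensities and the cutoff value are purely hypothetical and are not derived from the four cases reported here:

```python
import numpy as np

def adc_map(s_b0, s_b1000, b=1000.0, eps=1e-6):
    """Per-voxel apparent diffusion coefficient (mm^2/s) from two b-values:
    S_b = S_0 * exp(-b * ADC)  =>  ADC = ln(S_0 / S_b) / b."""
    s_b0 = np.maximum(np.asarray(s_b0, dtype=float), eps)
    s_b1000 = np.maximum(np.asarray(s_b1000, dtype=float), eps)
    return np.log(s_b0 / s_b1000) / b

# Toy 2x2 "sinus" region: top row with restricted diffusion, bottom row without.
s0 = np.array([[1000.0, 1000.0], [1000.0, 1000.0]])      # b = 0 image
s1000 = np.array([[600.0, 580.0], [150.0, 140.0]])       # high residual signal = restricted

adc = adc_map(s0, s1000)          # ~5e-4 mm^2/s (restricted) vs ~1.9e-3 mm^2/s
threshold = 1.0e-3                # illustrative cutoff only, not clinically validated
print(adc)
print("restricted voxels:", adc < threshold)
```

A prospective study of the kind proposed above would, in effect, be estimating where such a threshold should sit and how well it separates bacterial from viral or allergic disease.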
Profiling of Microbial Contamination in Internal Atmosphere of Hospital Ward

Background: Indoor air is the greatest propagating source of pathogenic microbes, which causes significant contamination in the indoor hospital environment, principally in terms of nosocomial infections. Hence, microbiological testing is necessary to assess air contamination in the indoor air of hospitals. Objective: The present study was undertaken to assess air contamination in different wards of the hospital to obtain a causative relationship between air contamination and the risk of developing infections through microbiological testing. Method: Microbiological sampling was performed in the indoor environment of different wards, namely the pediatric ward, maternity ward, labor room, pediatric intensive care unit (PICU) and neonatal intensive care unit (NICU) of Bharati Hospital, Pune. The settle plate method was selected, wherein MacConkey agar plates were used for isolation of Gram-negative bacteria, one of the pathogenic groups. The Petri plates were then exposed for an hour in different wards and incubated at 37 °C for 24 hours. After incubation, the total colony forming units (CFU) were counted. Results: It was found that the highest CFU count was present in the labor room. As compared to other Petri dishes, 18 ± 3 Gram-negative bacterial colonies were seen in the labor room Petri dishes. The presence of such contamination in the labor room may occur due to improper ventilation and improper sanitization. During each labor, a large amount of blood as well as amniotic fluid along with other body fluids is spilled in the room, which plays a significant role in promoting the growth of microorganisms irrespective of whether the mother had any infection during the labor. Conclusion: Microbiological air contamination testing showed that the labor room was the most contaminated ward. Proper ventilation and sanitization in the hospital wards with regular quality control would reduce the probability of hospital nosocomial infections, thereby promoting the safety of mother and infant health.

Although the cause-and-effect relationship between airborne pathogen levels and nosocomial infections is not yet established, it could be hypothesized that lowering the level of these pathogens in the air would result in providing an environment that would help decrease the risk of nosocomial infections in the hospital. Insufficient ventilation, high dusting, overcrowding, aerosols spread through sneezing and coughing, high movement of personnel and improper validation of hospital units as well as equipment are the main sources of indoor air contamination [8]. Indoor hospital air contains a diverse microbial population which is responsible for nosocomial infections. Nosocomial infections can cause urinary tract infections (UTIs), severe pneumonia and infections of other parts of the body. This risk of nosocomial infections is further escalated by the increasing prevalence of antibiotic-resistant pathogens such as Methicillin-resistant Staphylococcus aureus (MRSA) and Vancomycin-resistant Enterococci (VRE) among Gram-positive organisms and multi-drug resistant (MDR) Pseudomonas aeruginosa and Acinetobacter among Gram-negative organisms [9,10]. In the tropics, researchers have identified that Gram-negative bacteria are the most commonly isolated pathogenic microorganisms from the hospital environment [9,10]. HAIs produce high morbidity and mortality, economic burden and increased hospital stay.
Airborne microorganisms and other sources of contamination in hospitals must be reduced to a minimum, as many of the people passing through hospital lobbies as well as health care workers could be sensitive to these pathogens. Thus, to maintain the lowest possible airborne microbial levels in hospital lobbies, it is crucial to identify the factors influencing these levels [11]. Evaluation of the quality of air in the internal hospital environment can be performed routinely via microbiological sampling techniques. Air sampling of microorganisms is a popular method of conducting microbial examinations as it allows direct evaluation of microbial presence [12]. In the present study, air and surface contamination was measured in different wards of the hospital, namely the pediatric ward, maternity ward, labor room, pediatric intensive care unit (PICU) and neonatal intensive care unit (NICU). These wards were targeted as the risk of developing infections is higher in these wards. The maternity and pediatric wards were selected as there is movement of a large number of personnel and patients with active infections in these wards. The labor room was considered because during each labor a large amount of blood, amniotic fluid and other body fluids is spilled in the room, which plays a significant role in promoting the growth of pathogenic microorganisms irrespective of whether the mother had any infection during the labor [13]. Intensive care units were taken into consideration to assess aseptic conditions in these wards.

Location
Microbiological sampling was performed in the indoor environment of different wards, namely the pediatric ward, maternity ward, labor room, PICU and NICU of Bharati Hospital, a tertiary care teaching hospital, Pune.

Sampling
Sampling was carried out on a busy day using the settle plate method. Agar plates were prepared by suspending 50 grams of MacConkey agar powder in 1 liter of purified water and mixing thoroughly. The mixture was boiled for 2 minutes to dissolve the powder completely, with subsequent autoclaving at 121 °C and 15 psi pressure for 15 minutes. The autoclaved agar suspension was cooled down to 40–45 °C and stirred well before pouring into the sterile Petri dishes. Petri plates containing autoclaved MacConkey agar were exposed to air for one hour without informing the hospital cleaning team, and the plates were incubated at 37 °C for 24 hours.

Counting
The total colony forming units (CFU) were counted using a magnifying colony counter.

Result and Discussion
The study of airborne microorganisms in indoor hospital environments is important to understand the dissemination of airborne microbes, particularly the pathogenic ones. It is assumed that the environment where patients are treated has an important impact on their recovery, wherein acquired infections may complicate their existing medical conditions [13,14]. Patients are exposed to a greater risk in the indoor air environment because confined areas contain aerosols and allow them to breed to an infectious level. Therefore, it is of utmost importance to evaluate the quality of indoor air in the hospital environment. Various wards and areas in the gynecology department are easily prone to microbial contamination owing to the considerable presence of amniotic fluid, blood samples, stem cells, tissues and other such biological fluids, which serve as a rich source for microbial breeding.
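The settle-plate counts described above are raw colonies per plate. A common way to convert such counts into an approximate airborne load, not used in this study but often applied to settle-plate data, is the Omeliansky formula. The Python sketch below applies it to the reported labor-room counts, assuming a standard 9 cm Petri dish (the dish size is an assumption, as it is not stated in the text):

```python
import math

def omeliansky_cfu_per_m3(colonies, dish_diameter_cm=9.0, exposure_min=60.0):
    """Convert a settle-plate colony count to an airborne-load estimate via
    the Omeliansky formula N = 5*a*10^4 / (b*t), with a = colonies per plate,
    b = plate area (cm^2), and t = exposure time (minutes)."""
    area_cm2 = math.pi * (dish_diameter_cm / 2.0) ** 2
    return 5.0 * colonies * 1.0e4 / (area_cm2 * exposure_min)

# Labor-room counts from the study: 18 +/- 3 colonies after a 1 h exposure.
for a in (15, 18, 21):
    print(f"{a:>2} CFU/plate -> {omeliansky_cfu_per_m3(a):6.0f} CFU/m^3 (approx.)")
```

Such a conversion is only an order-of-magnitude estimate, but it makes counts from plates with different exposure times or dish sizes comparable across wards and audits.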
If timely measures and routine microbial control are not maintained, the amplified microbial pathogens may pose a serious threat to the health of mother as well as infant, with life-threatening consequences. Microbial profiles of simultaneous cultures obtained from hospitalized patients of the targeted wards (maternity ward, pediatric ward, labor room, PICU and NICU) are presented in Table 1. Observations revealed the presence of Gram-negative pathogens (Escherichia coli and Pseudomonas aeruginosa) to be more dominant as compared to Gram-positive pathogens (Bacillus subtilis and Staphylococcus aureus) in different wards of the Gynecology and Obstetrics department. As seen in Table 1, these Gram-negative pathogens were significantly present in the labor room, PICU and pediatric wards. Patients admitted in these wards are prone to infections such as pneumonia, sepsis, urinary tract infection (UTI) and meningitis owing to catheterization. Catheterization serves as one of the greatest sources of microbial contamination, further leading to nosocomial infections. Culture tests conducted during catheterization revealed positive results for Gram-negative microorganisms. Additionally, health care workers and paramedics in hospital wards, such as the cleaning staff, nursing staff, patient attendants and even the physicians, may be the cause of Gram-negative infections [15]. The air handling and ventilation system in a hospital setup goes a long way to determine the microbial load per ward of a given hospital. Probing the airborne microorganisms in the hospital wards is important to understand the distribution of microorganisms and the level of cleanliness in that particular area [16]. Thus, the environment where patients are treated has a vital influence on the recovery of the patients and the spread of HAIs [15,16] (Figure 1). As a large amount of biologic fluids (blood, amniotic fluid) is spilled in the room during active labor, it serves as a growth medium for microorganisms and hence escalates the risk of nosocomial infections. During the labor room visit, it was observed that the labor room was used as a passageway for the health care personnel to enter and exit the room while the procedure was going on, which could be another reason correlated with the high number of colonies isolated from the labor room [1,16]. Various other contributing factors responsible for this level of contamination might be building design, improper ventilation, unrestrained movement of individuals, high dusting, poor level of public awareness, inadequate health training and limited use of disinfectants [17]. However, these effects are nullified in tertiary care units where the hospital management has to maintain stringent quality standards as per the norms of the Medical Council of India (MCI). The hospital plays a significant role in limiting the spread of common nosocomial infections, the magnitude of which depends on the level of biological fluids and on personnel and public movement in the hospital environment [18]. Thus, it is advisable that more frequent quality audits and strict measures should be put in place to check the increasing microbial load in the hospital environment [18,19]. This will contribute to the rapid recovery of patients [20,21].

Conclusion
Microbiological air contamination testing confirmed that, along with the labor room, the PICU and pediatric wards are the most contaminated wards.
Indoor unhygienic conditions owing to the excess discharge of biological fluids and unrestricted personnel movement could be responsible for cross infections among patients, hospital care personnel and other associated staff. Gram-negative microorganisms are the major cause of nosocomial infections, as depicted in our study. Nosocomial infections produce high morbidity as well as mortality, thereby escalating the cost and length of hospital stay. It is thus essential to monitor the bacteriological load in the indoor air of hospitals and to improve the quality of the hospital environment so as to reduce the microbial load. Hospitals should enhance the frequency of good sanitation protocols and infection control measures. Routine auditing of hospital air bioburden is strongly recommended. Thorough hand washing and the use of alcohol rubs by medical personnel and public visitors before and after each patient contact would effectively combat nosocomial infections by limiting microbial dispersal within the hospital.
Precision measurements in the beta decay of 6He

We report here on the ongoing data analysis of an experiment performed at GANIL with a 25 keV 6He+ beam to determine the Fierz interference term from the energy spectrum of the β particles.

Introduction
The 6He beta decay played a fundamental role in establishing the V-A character of the weak interaction [1,2]. The fact that 6He decays into 6Li by a pure Gamow-Teller transition made it attractive for searches of physics beyond the standard model. The Fierz interference term is one of the coefficients that can be used to probe new physics, since it is linearly dependent on exotic tensor and scalar couplings. The Fierz term can be accessed experimentally by high precision measurements of the shape of the β-energy spectrum.

Experimental setup
The apparatus is described in detail in Ref. [3]. An experiment with a low-energy beam of 6He+ ions was performed at the Grand Accélérateur National d'Ions Lourds (GANIL), Caen. The 6He+ ions were guided at 25 keV towards the surface of a fixed detector "det 1". A second identical and movable detector "det 2" is used to enclose the ion implantation region and to achieve a 4π solid angle, as shown in Fig. 1. The motion of "det 2" is accurately synchronized with the beam implantation and the data acquisition. The period of the cycles is chosen such as to determine the surrounding background with sufficient precision. The detector coverage ensures the full collection of all β particles emitted by the implanted 6He+ ions, thus eliminating any energy loss due to backscattering. The detectors "det 1" and "det 2" each consist of a cylindrical Ce-doped YAlO3 inorganic scintillator (YAP) surrounded by an EJ-204 plastic scintillator. The two scintillators are mounted in a phoswich configuration in which both of them are read out by one single photomultiplier tube (Fig. 1).

Figure 1. The labels on panel (a) are: 1 and 2, the two Ø6 mm collimators in the first section of the chamber; 3, a movable Si detector; 4 and 5, the moving detector and its mechanical guide; 6, the third Ø4 mm collimator; 7, the fixed detector. The green arrow indicates the 6He+ beam. On panel (b), label 8 indicates the implantation region and 9 the two 241Am calibration sources [3]. Right panel: experimental β-energy spectrum for one run of two hours duration.

A 5-kBq 241Am source is mounted on each of the detectors, as illustrated in Fig. 1. The 59.54 keV γ rays from the 241Am were used as a reference to monitor gain and baseline variations during the experiment. Each event is labeled with a time stamp and an energy charge integration, allowing control over systematic effects in the offline analysis. Five sets of runs were taken with different experimental conditions, to study systematic and background effects [3].

Background investigation
The experimental β-energy spectra showed the expected continuous spectrum extending up to 3.5 MeV, the endpoint energy of 6He decay, along with an unexpected contribution peaked at 0.1 MeV (Fig. 1). This peak was also present in the background runs, where the 6He beam was implanted on the collimator fixed to the moving detector, and not on the YAP (Fig. 2). This peak was identified to be caused by β particles from the 6He+ ions interacting with the material of the collimator itself and generating Bremsstrahlung photons that are detected by the two detectors.
The experimental geometry with the two detectors and the collimator attached to "det 2" was built in GEANT4 (Fig. 2) to validate the origin of this peak at low energy. Events were generated using the phase space of the 6He decay, on the inner surface of the collimator, as shown in Fig. 2. The deposited energy spectrum inside the two YAP scintillators obtained by simulation was found to match the experimental spectrum up to 1 MeV (Fig. 2, left). A wide peak between 1.5 and 3 MeV, which is not reproduced by the former simulation, was also observed in the background data (Fig. 2, left). This peak was attributed to electrons from 6He decay going through a hole in the lower part of the detector, which leads to the YAP scintillator of the movable detector through the plastic scintillator. Another simulation was then performed where the source of electrons was set on the outer surface of the collimator. The simulated spectrum of the deposited energy inside the YAP scintillators showed the appearance of the same distribution along with the Bremsstrahlung peak (Fig. 2, center), thereby confirming the origin of these events. For the β spectrum shape analysis, the presence of these two background contributions within the β-energy spectrum will be suppressed by using data taken during the background runs.

β-energy spectrum fit function
The energy distribution of the β particles emitted from 6He decay can be written as

dN/dW ∝ pW(W₀ − W)² (α₋₁W⁻¹ + α₀ + α₁W + α₂W²),   (1)

where p, W and W₀ are respectively the momentum, total energy and endpoint energy of the emitted electrons in units of the electron mass. α₋₁, α₀, α₁ and α₂ are coefficients that represent all the relevant corrections to the shape of the spectrum (the Fierz term is included in α₋₁). It can be useful to have these coefficients fixed with different values or set as free parameters in the fitting procedure. However, this function cannot be used directly to fit the experimental β-energy spectrum, since it does not account for the energy loss due to Bremsstrahlung energy escape. In the following, we introduce a method used to account for Bremsstrahlung escape within an analytical model. As a first step, four distributions of events were generated following the functions

gᵢ(W) = pW(W₀ − W)² Wⁱ,   i = −1, 0, 1, 2.   (2)

For each gᵢ(W) distribution, the effect of the Bremsstrahlung escape was estimated using GEANT4. The energy spectra of the generated and the deposited energies were built for each of the four sets. The histograms of the normalized difference between the generated and deposited energy spectra were then plotted and fitted with a polynomial function fᵢ(W) (Fig. 3). These functions represent the effect of the Bremsstrahlung escape on each term of the energy spectrum. The deposited energy spectrum resulting from the decay function of Eq. (1) and accounting for Bremsstrahlung escape can then be expressed or fitted with

N(W) = α Σᵢ αᵢ gᵢ(W) [1 − fᵢ(W)],   (3)

where α is a normalization factor and fᵢ(W) are the polynomial functions that describe the effect of Bremsstrahlung energy escape for each term of the β-energy spectrum. These functions were tested with several statistically independent data sets generated in GEANT4 with Eq. (1) for several values of α₋₁, α₀, α₁ and α₂. The deposited energy spectra were built afterwards and fitted with Eq. (3), where α₀, α₁ and α₂ were fixed parameters, while α and α₋₁ were free parameters. The results of the fits were all statistically consistent with the values of α₋₁ that were initially used to generate the spectra.
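As a rough numerical illustration of this fitting strategy, the Python sketch below generates pseudo-data from the allowed spectrum shape of Eq. (1) (with the Bremsstrahlung-escape corrections fᵢ(W) omitted for simplicity) and refits it with α₀, α₁ and α₂ fixed while the normalization and α₋₁ are left free, mirroring the procedure described above. The endpoint, coefficient values and noise level are illustrative only:

```python
import numpy as np
from scipy.optimize import curve_fit

W0 = 7.87  # approximate endpoint total energy in electron-mass units (~3.5 MeV kinetic)

def spectrum(W, norm, a_m1, a0=1.0, a1=0.0, a2=0.0):
    """Allowed-shape beta spectrum with a 1/W (Fierz-like) distortion:
    dN/dW = norm * p*W*(W0-W)^2 * (a_m1/W + a0 + a1*W + a2*W^2).
    Coulomb and Bremsstrahlung-escape corrections are omitted here."""
    p = np.sqrt(W**2 - 1.0)
    shape = a_m1 / W + a0 + a1 * W + a2 * W**2
    return norm * p * W * (W0 - W) ** 2 * shape

# Pseudo-data with a small 1/W term and Gaussian noise
W = np.linspace(1.05, W0 - 0.05, 120)
rng = np.random.default_rng(1)
truth = spectrum(W, 1.0, 0.05)
data = truth + 0.002 * truth.max() * rng.standard_normal(W.size)

# Fit with the normalization and a_m1 free; a0, a1, a2 stay fixed at their defaults
popt, pcov = curve_fit(lambda w, n, am1: spectrum(w, n, am1), W, data, p0=(1.0, 0.0))
print(f"fitted a_m1 = {popt[1]:.4f} +/- {np.sqrt(pcov[1, 1]):.4f} (true 0.05)")
```

The sketch shows the essential degeneracy the analysis has to control: a 1/W distortion and an overall normalization pull the spectrum in different directions, which is why the remaining coefficients are held fixed during the fit.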
The standard residuals of the fit, which are distributed around zero within ±2σ, further confirmed the validity of the fit function (Fig. 3).

Summary
To summarize, the analysis of the spectrum shape of 6He has been briefly introduced. Two background components present within the experimental β spectrum have been identified. They will be dealt with using background runs, during which 6He+ ions were blocked from reaching the detector surface. The effect of the Bremsstrahlung energy escape on the shape of the β-energy spectrum was presented, together with the plan to account for this effect through the addition of polynomial analytical functions that will later be used to fit the experimental data. This project was supported in part by the French Agence Nationale de la Recherche under grant ANR-20-CE31-0007-01 (bSTILED).
Telomerase promoter mutations in human immunodeficiency virus-related conjunctiva neoplasia

Squamous cell carcinoma (SCC) of the conjunctiva is a common cancer in Africa mainly associated with solar ultraviolet (UV) exposure and human immunodeficiency virus (HIV) infection. We analyzed the role of HIV on the occurrence of telomerase reverse transcriptase (TERT) promoter mutations among a cohort of conjunctiva neoplasia Ugandan patients. Telomerase reverse transcriptase promoter mutations were searched in 72 conjunctiva neoplasia cases, comprising SCC and intraepithelial neoplasia grade 1–3 (CIN1–3), as well as in 53 conjunctiva normal tissues and in 24 HIV-related Kaposi sarcoma. The average prevalence of TERT promoter mutations in conjunctiva neoplasia was 31.9%. The mutation rates were significantly higher in HIV-positive (31.8% of CIN1 and CIN2, 46.2% of CIN3 and SCC) than in HIV-negative patients (22.2% of CIN1 and CIN2, 13.3% of CIN3 and SCC). Such mutations were rarely identified among HIV-positive conjunctiva controls (3.6%) and never in Kaposi sarcoma lesions. The most frequent variations were the hot spots −124G>A and −146G>A and the tandem transitions −124_125GG>AA and −138_139GG>AA. Telomerase reverse transcriptase promoter mutations are early events in conjunctival neoplasia and could be used for timely diagnosis of conjunctiva tumours. The high frequency of UV signatures in HIV-positive conjunctiva lesions suggests an additive effect of the virus to UV-related mutagenesis.

Background
Squamous cell carcinoma (SCC) of the conjunctiva is a relatively common tumour in subjects infected with the human immunodeficiency virus (HIV) living in the United States or in tropical regions of Africa [1–6]. In the United States, during the period 1996–2012, the standardised incidence ratio (SIR) for SCC of conjunctiva was 5.56 (CI 95%, 3.44–8.50) among HIV-positive people versus the US general population [3]. In sub-Saharan African countries a strong association between conjunctiva SCC and HIV infection has been reported since the 1980s. Indeed, the incidence of conjunctiva SCC increased more than tenfold between 1960–1971 and 1995–1997 in Kampala (Uganda) and nearly tenfold during the period 1991–2004 in Harare, Zimbabwe [7,8]. More recently, a cross-sectional study performed at the Kenyatta Hospital in Kenya showed that conjunctiva SCC has been the leading non-AIDS defining malignancy during the years 2000–2011 among HIV-positive patients [9]. The high prevalence of tandem CC to TT mutations identified in the TP53 gene of conjunctiva SCC DNA was suggestive of the important role played by UV solar radiation in the pathogenesis of such tumours [13,14]. In addition, UV-related mutations along with hot spot mutations have also been identified in the promoter region of the telomerase (TERT) gene in sun-exposed tumours such as melanoma, basal cell carcinoma, non-melanoma skin cancer and conjunctiva SCC [15–18]. Both hot spot and UV-related mutations in the TERT promoter region act as oncogenic driver events by creating binding sites for the E-twenty-six (ETS) transcription factors, which generally cause a two- to fourfold increase in the expression levels of the TERT gene [19].
In some tumour types, including hepatocellular carcinoma and follicular thyroid adenoma, TERT promoter mutations are early events in the neoplastic process and they might be useful to monitor tumour development from dysplastic lesions [20,21]. Very few studies have compared the mutation profile of tumours arising in HIV-positive patients versus those without HIV infection. Gleber-Netto et al. [22] analyzed the nucleotide sequence of 18 genes in HIV-related and non-HIV-related head and neck SCC and showed that among HIV-positive patients the mutations tended to be TpC>T in all mutated genes but especially in TP53. This type of nucleotide change is mainly caused by the activity of APOBEC family cytosine deaminases as host defence against viral infections, which also cause nucleotide mutations in human DNA [23,24]. In this study we have assessed the presence of TERT promoter mutations in HIV-positive and HIV-negative conjunctiva neoplasia cases to identify a possible synergistic effect of HIV on the accumulation of UV-induced mutations. Moreover, we have included in this analysis conjunctival lesions with different degrees of malignancy in order to determine how early this genetic event occurs during carcinogenesis. HIV-related cutaneous Kaposi sarcoma (KS) biopsies have also been included in this study in order to verify the eventual effect of HIV status on the occurrence of TERT promoter mutations in lesions developing at body sites not exposed to solar UV radiation.

Patients
The cohort study comprised conjunctiva neoplasia patients surgically treated at seven countrywide eye clinics in Southern Uganda, within the Ugandan Ruharo Eye Project coordinated by Dr Waddell KM [11]. The histological diagnosis was performed by Prof Lucas SB at the Department of Histopathology, King's & St Thomas' School, London, UK. The conjunctiva control tissues were obtained from healthy subjects matched to the cases by sex and age (± 10 years), who were treated for eye injuries or pterygium in the seven eye clinics. Moreover, HIV-related cutaneous African KS cases were also included in this study [25]. All cases and controls were previously characterized in terms of histology, DNA quality, HIV serology, cutaneous and mucosal HPV as well as HHV8 DNA positivity [11,25,26]. The study was approved by the Institutional Scientific Board of the Istituto Nazionale Tumori "Fond Pascale", and is in accordance with the principles of the Declaration of Helsinki.

TERT promoter mutation analysis
The telomerase reverse transcriptase promoter region was amplified using the primer pair hTERT-F (5′-ACG AAC GTG GCC AGC GGC AG-3′) and hTERT-R (5′-CTG GCG TCC CTG CAC CCT GG-3′), generating a 474 bp fragment covering the rs2853669, rs34233268, rs34764648 and rs35226131 single nucleotide polymorphisms (SNPs) and the hot spot mutations within the TERT promoter region. PCR-negative samples were further amplified with the primer set hTERT_short-F (5′-CAG CGC TGC CTG AAA CTC-3′) and hTERT_short-R (5′-GTC CTG CCC CTT CAC CTT-3′), which amplifies a sequence of 163 bp encompassing the TERT promoter hot spot sites. PCR reactions were performed in a 50 μl mixture containing 300 ng of genomic DNA, 10 pmol of each primer, 1.25 units of Hot Master Taq DNA Polymerase (5 Prime GmbH, Hamburg, Germany) and 25 μl of PreMix J (Master Amp PCR, Epicentre).
DNA was amplified in the Sure Cycler 8800 thermal cycler (Agilent Technologies) with the following steps: an initial denaturation at 94 °C for 3 min, followed by 32 cycles of annealing at 65 °C for 30 s when using the hTERT-F/-R primer set or at 53 °C for 30 s when using the hTERT_short-F/-R primer set, elongation at 72 °C for 1 min, denaturation at 94 °C for 30 s, and a 10 min final elongation at 72 °C. All amplified DNA samples were subjected to automated bidirectional sequencing analysis at Eurofins Genomics, Munich, Germany. Nucleotide sequences were edited using the BioEdit software package (http://jwbrown.mbio.ncsu.edu/BioEdit/bioedit.html).

Statistical analysis
The statistical analyses were performed using GraphPad Prism Software version 6.00. The two-tailed χ² test, χ² test for trend or Fisher's exact test were used for comparison of categorical data. Differences were considered statistically significant when P values were less than 0.05.

Results
This study included a total of 72 cases of conjunctiva neoplasia, comprising 16 CIN1, 15 CIN2, 17 CIN3 and 24 SCC. Fifty-three conjunctiva non-neoplastic controls and 24 HIV-related KS lesions were also analysed in this study (Table 1). The majority of patients and controls were positive for HIV infection (66.7 and 52.8%, respectively). Overall, TERT promoter mutations were detected in 23 out of 72 (31.9%) conjunctiva neoplasia cases, in one out of 53 (1.9%) control tissues, and were absent in KS lesions (Table 1). The frequency of mutations was statistically significantly higher in the group of HIV-positive CIN3 and SCC (46.2%) compared to HIV-negative CIN3 and SCC cases (13.3%), P = 0.04 (Table 2). Similarly, a higher mutation rate, although not reaching statistical significance, was observed among HIV-positive CIN1 and CIN2 (31.8%) compared to HIV-negative cases (22.2%). Moreover, the occurrence of TERT promoter mutations in conjunctiva neoplasia was not affected by the HPV or HHV8 infection status. The most common nucleotide changes were the hot spot mutations −124G>A (17.4% of all mutated cases) and −146G>A (21.7%) as well as the UV-related tandem mutations −124_125GG>AA and −138_139GG>AA, which together added up to 43.5% of all mutated cases. The two hot spots and the UV-related tandem mutations were found to be mutually exclusive, while sporadic changes were detected as additional variations. Particularly, two cases containing the −124G>A transition also carried a G>A mutation at position −101 or −122 from the ATG TERT start site. One sample harbouring the mutation at nt −146 also contained two additional G>A transitions at nt −100 and −149. Tandem mutations −124_125GG>AA and −138_139GG>AA were also accompanied by sporadic G>A transitions at nt −102 in one case, and at −101 together with a G>T transversion at position −125 in another case. Most of the observed changes lead to the creation of putative transcription factor-binding sites, such as the ETS-binding motif and SPI1 or ELK1 binding sites (Table 3). All nucleotide changes were heterozygous with one affected allele (Fig. 1). Only one HIV-positive case among the conjunctiva control samples harboured a −124_125GG>AA change, suggesting that UV-related mutations may precede the development of conjunctival low grade neoplasia (Table 3). No mutations were identified in the TERT promoter region of DNA extracted from HIV-negative conjunctiva controls or HIV-positive cutaneous KS samples.
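The HIV-positive versus HIV-negative comparison reported above can be reproduced approximately from the published percentages. In the Python sketch below the 2×2 counts are inferred from the reported rates (46.2% of 26 HIV-positive and 13.3% of 15 HIV-negative CIN3/SCC cases) rather than taken from the raw data, and the paper may have used a χ² rather than Fisher's exact test, so the resulting P value may differ slightly from the published 0.04:

```python
from scipy.stats import fisher_exact

# Rows: HIV-positive, HIV-negative CIN3/SCC; columns: mutated, not mutated.
# Counts inferred from the reported percentages (an assumption, not raw data).
table = [[12, 26 - 12],
         [2, 15 - 2]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, two-sided P = {p_value:.3f}")
```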
Several single nucleotide polymorphisms (SNPs) are present within the TERT locus and some have been associated with the risk of cancer. In the present study, the TERT promoter region amplified with the hTERT-F and hTERT-R primer set encompassed four SNPs, and their allele frequencies have been evaluated in conjunctiva cases and controls. Particularly, a higher frequency of minor alleles (MAF) among cases versus controls was observed for the rs2853669 (−245 G, MAF = 0.15 and MAF = 0.10, respectively), the rs34233268 (−218 C, MAF = 3.3 and MAF = 1.9, respectively) and the rs35226131 (−269 T, MAF = 3.3 and MAF = 1.9, respectively); however, such differences did not reach statistical significance. On the other hand, similar allele frequencies among cases and controls were observed for the remaining polymorphism.

Table 3. Pattern of TERT promoter mutations in conjunctival neoplasia and conjunctiva control tissues. Footnotes: (a) positions refer to the distance from the ATG start site of the TERT gene; (b) putative transcription factor binding sites identified in the JASPAR database (http://jaspar.genereg.net); (c) one sample carried an additional mutation at nt −101G>A and another at nt −122G>A (no effect on putative binding sites); (d) one sample carried an additional mutation at nt −102G>A (no effect on putative binding sites); (e) one sample carried two additional mutations at nt −101G>A and nt −125G>T with no effect on putative binding sites; (f) one sample carried two additional mutations at nt −100G>A (no effect on putative binding sites) and nt −149G>A (creating a putative ETS binding site); (g) χ² for trend = 16.13, P = 0.00006.

Discussion
Solar UV radiation exposure is the main cause of the most common skin cancers, such as basal cell carcinoma, cutaneous SCC, cutaneous melanoma and other epithelial tumours such as conjunctiva neoplasia [27,28]. The frequency of C>T or CC>TT UV-related mutations in the TERT promoter region has been found similarly high in basal cell carcinoma (56%), cutaneous SCC (50%), cutaneous melanoma (up to 71%) and conjunctiva neoplasia (43.8%) [16–18,29]. The pattern of TERT promoter mutations identified in our Ugandan conjunctiva neoplasia cohort is similar to that previously described in conjunctiva SCC of German patients as well as in melanoma and non-melanoma skin cancers [17,18,29,30]. Interestingly, the occurrence of TERT promoter mutations in 37.5% of CIN1 observed in our results suggests that the UV-induced DNA damage may precede the progression of conjunctiva early lesions to high grade neoplasia. Several studies demonstrated that HIV infection strongly increases the risk of conjunctiva neoplasia [12]. HIV-related immunosuppression has been demonstrated to play a key role in such association (Holkar et al. [31]; Grulich et al. [32]). In our cohort the frequency of UV-related TERT promoter mutations was significantly higher in HIV-positive compared to HIV-negative conjunctiva neoplasia cases, suggesting a synergistic effect of the virus with UV in the accumulation of DNA damage. No biomolecular study has systematically analyzed the effect of HIV on the occurrence of UV-related mutations; however, such a phenomenon is supported by epidemiologic studies showing that the incidence of basal cell carcinoma and cutaneous SCC was 2.1-fold higher and 2.6-fold higher, respectively, among HIV-positive patients compared with HIV-negative subjects [33].
In their study, the increased risk of skin cancer was correlated with lower CD4 counts for squamous cell carcinoma but not for basal cell carcinoma among HIV-positive patients, suggesting that immunosuppression was only partially responsible for the increased incidence of skin cancer. A recent report compared the pattern of mutations among HIV-related and non-HIV-related head and neck SCC in genes known to be frequently mutated in such tumours and identified a different pattern of nucleotide changes in all mutated genes including TP53 [22]. Particularly, they observed an enrichment of C>T changes in the HIV-infected cases, likely caused by cytosine deamination [34]. We have observed a high rate of single C>T or tandem CC>TT changes in HIV-related conjunctiva lesions but not in HIV-related Kaposi sarcoma, suggesting a synergistic effect of UV exposure and HIV infection but not a direct effect of HIV in non-UV-related cancers such as Kaposi sarcoma. The hot spot nucleotide changes −124G>A and −146G>A in the TERT promoter have been detected at high frequency in cancers of internal organs, such as bladder cancer, hepatocellular carcinoma, thyroid cancer, and gliomas [15,35,36]. In contrast, tandem CC>TT mutations are related to the UV mutagenic activity and are very rarely identified in tumours of internal organs [13,29,35–39]. In our study CC>TT substitutions in the TERT promoter were very frequent, in accordance with their overall frequency in other UV-related tumours [16,40]. TERT promoter mutations create de novo binding motifs for the ETS (E-twenty-six) family or TCF (ternary complex factor) subfamily of transcription factors and increase the expression of TERT by twofold to fourfold [18]. This increased telomerase expression enables tumours to maintain their telomere length and continuously proliferate without becoming apoptotic or senescent due to genetic instability [41,42]. Several SNPs have been analyzed in conjunctiva samples, including rs2853669, rs34233268, rs34764648 and rs35226131, but no significant differences have been noted in the minor allele frequency distribution among cases and controls. However, the limited number of samples, insufficient for robust statistical analysis, may have hindered the possibility to associate specific SNPs with susceptibility to conjunctiva neoplasia.

Conclusion
In conclusion, we observed that HIV infection, a major risk factor for development of conjunctiva neoplasia, significantly contributes to the accumulation of UV-related mutations in the TERT promoter. Both hot spot mutations and UV-related variations are frequently identified in low grade conjunctiva lesions (CIN1 and CIN2) as well as in high grade lesions (CIN3) and invasive carcinoma. More studies are needed to understand the molecular mechanisms underlying this previously unknown phenomenon and to determine whether these genetic traits are useful for early detection of progressing conjunctiva lesions.
Resolving the Function of Distinct Munc18-1/SNARE Protein Interaction Modes in a Reconstituted Membrane Fusion Assay*

Sec1p/Munc18 proteins and SNAP receptors (SNAREs) are key components of the intracellular membrane fusion machinery. Compartment-specific v-SNAREs on a transport vesicle pair with their cognate t-SNAREs on the target membrane and drive lipid bilayer fusion. In a reconstituted assay that dissects the sequential assembly of t-SNARE (syntaxin 1·SNAP-25) and v-/t-SNARE (VAMP2·syntaxin 1·SNAP-25) complexes, and finally measures lipid bilayer merger, we resolved the inhibitory and stimulatory functions of the Sec1p/Munc18 protein Munc18-1 at the molecular level. Inhibition of membrane fusion by Munc18-1 requires a closed conformation of syntaxin 1. Remarkably, the concurrent preincubation of Munc18-1-inhibited syntaxin 1 liposomes with both VAMP2 liposomes and SNAP-25 at low temperature releases the inhibition and effectively stimulates membrane fusion. VAMP8 liposomes can neither release the inhibition nor exert the stimulatory effect, demonstrating the need for a specific Munc18-1/VAMP2 interaction. In addition, Munc18-1 binds to the N-terminal peptide of syntaxin 1, which is obligatory for a robust stimulation of membrane fusion. In contrast, this interaction is neither required for the inhibitory function of Munc18-1 nor for the release of this block. These results indicate that Munc18-1 and the neuronal SNAREs already have the inherent capability to function as a basic stage-specific off/on switch to control membrane fusion.

Membrane fusion in eukaryotic cells is mediated by a conserved machinery consisting of compartment-specific v-SNAREs on transport vesicles and t-SNAREs on the target membrane (1–4). SNAREs are characterized by SNARE motifs, stretches of 60–70 amino acids, which contain heptad repeats with a central "0" layer and assemble into specific four-helix bundles (5). The formation of SNAREpins, trans v-/t-SNARE complexes bridging two membranes, occurs in a zipper-like manner that starts at the membrane distal (N-terminal) end of the SNAREpins and proceeds toward the (C-terminal) membrane-spanning anchors of the SNAREs (6,7). Zippering brings the two lipid bilayers in close apposition, finally resulting in membrane merger (2,8). Thus, the energy required for membrane fusion is provided by the exergonic folding of the largely unstructured v- and t-SNARE proteins into stable four-helix bundles (2,5,9). Although SNAREs can be considered to be the minimal membrane fusion machinery, in the physiological cellular environment, an array of accessory proteins and lipids controls the spatial and temporal activity of SNARE proteins (10). One class of accessory proteins, the SM (Sec1p/Munc18) proteins, directly bind to SNAREs, control their activity, and are required for membrane fusion in vivo (11–22). SM proteins contain about 600 amino acids, which are folded into an arch-shaped structure. At least two SNARE-binding modes have been described. In the first mode, the SM protein binds the t-SNARE component syntaxin in a "closed" conformation, in which the N-terminal three-helical Habc domain of syntaxin folds back on a part of the SNARE motif (23–25). In this conformation, syntaxin cannot bind its cognate SNARE partners (26).
Binding of Munc18 to this closed conformation is also important for the transport of syntaxins from the endoplasmic reticulum to the plasma membrane and syntaxin stability (27–33). In the second mode, the SM protein binds t-SNAREs, SNAREpins, and fully assembled cis v-/t-SNARE complexes and contacts residues on the exposed surface of both the v-SNARE and t-SNARE (29, 34–40). In this binding mode, the SM protein likely assists SNAREpin organization and assembly (34, 41–43). Therefore, SM proteins can function as catalysts of SNARE complex formation, and hence the combination of the SM and SNARE proteins has been designated to be the universal fusion machinery (44). This terminology expands the concept of the SNAREs representing the minimal components of the fusion machinery. Consistent with this notion, reconstituted fusion assays have revealed that defined SM proteins increase selectively the fusion of distinct v- and t-SNARE partners (41,43). Molecular binding sites contributing to the SM/SNARE interactions have been mapped to the N-terminal peptides of syntaxins, the Habc domains of syntaxins, the linker connecting the Habc domain with the SNARE motif, the syntaxin SNARE motifs, and cognate v-SNAREs (35, 36, 45–49). One of the best studied model systems is neurotransmitter release, which employs the v-SNARE VAMP2/synaptobrevin 2 (on synaptic vesicles), the t-SNAREs syntaxin 1 and SNAP-25 (on the plasma membrane), and the cognate SM protein Munc18-1 (19, 50–52). Specific point mutations (L165A/E166A) in the linker of syntaxin 1 reduce the affinity for Munc18-1 by interfering with binding mode 1 and increase t-SNARE assembly (26). This result strongly suggests that this syntaxin 1 mutant mimics an "open" conformation. The transition from the syntaxin/SM conformation to the SNAREpin·SM complex apparently requires such an open syntaxin 1. Binding studies revealed the existence of an open syntaxin 1/Munc18-1 intermediate (29,53). However, components and mechanisms favoring the closed-to-open transition still need to be characterized in detail. In living cells, this transition is facilitated by regulatory components, such as Munc13 and lipids (54–58). Interestingly, in Caenorhabditis elegans, the open syntaxin (L165A/E166A) can rescue the secretion defect observed in unc-13 mutants, but cannot rescue an unc-18 null mutant (18,59). Together with recent studies, these results provide further in vivo support for the late-acting (mode 2) function of Munc18 in SNAREpin formation/assembly (60). The syntaxin 1 N-peptide appears to be essential for the Munc18-1 stimulation of membrane fusion both in vitro and in vivo, but its exact role is still debated, and controversial observations have been published (19,29,31,36,41,53,61). In a reconstituted liposome fusion assay containing preassembled t-SNARE complexes, the presence of the N-peptide favors membrane fusion (41,62). In contrast, using the same components, but now in solution, the presence of the N-peptide inhibits the formation of a stable v-/t-SNARE complex in the presence of Munc18-1 (35). When UNC-18 mutants (F113R and L116K) that selectively abolish the N-peptide interaction were expressed in unc-18 null C. elegans, the defect in regulated exocytosis could not be rescued, supporting the functional importance of the Munc18-1/syntaxin 1 N-peptide interaction (31,53). Furthermore, the N-peptide inhibits neurotransmitter release in the calyx of Held synapses (37).
However, in secretion-deficient PC12 cells, which lack both Munc18-1 and Munc18-2, Munc18-1 mutants (F115E and E132A) that impair the binding of Munc18-1 to the syntaxin N-peptide rescue exocytosis to a large degree (32, 63). This result suggests a more subtle role of the syntaxin 1 N-peptide in dense core vesicle exocytosis. To obtain further insights into the different syntaxin 1/Munc18-1 interaction modes and the role of the N-terminal syntaxin 1 peptide in this reaction cascade, we established a liposome fusion assay, which measures the assembly of the t-SNAREs on liposomes and the subsequent SNAREpin formation between the v- and t-SNARE liposomes. In such an assay, syntaxin 1 can adopt its stage-specific conformations, and the role of Munc18-1 can be studied at distinct steps of the fusion reaction using lipid mixing as the ultimate readout signal. Protein Expression and Purification-Recombinant proteins were expressed in the Escherichia coli strain BL21(DE3) (Stratagene). The culture media were supplemented with the appropriate antibiotics (50 µg/ml kanamycin or 100 µg/ml ampicillin). Cells were grown at 37°C in 12 liters of LB media to an A600 of 0.8. Protein expression was induced with 1 mM isopropyl β-D-thiogalactopyranoside. t-SNARE complexes were formed by cotransforming pFP247 (His6-SNAP-25) together with either pTW20 (syntaxin 1 (WT)) or pYS1 (syntaxin 1 (open conformation)). t-SNARE complex and v-SNARE expressions were induced at 37°C for 3 h. Syntaxin 1 constructs and Munc18-1 were induced at 16°C overnight. Cells were collected by centrifugation, washed once with PBS, resuspended in breaking buffer, snap-frozen in liquid nitrogen, and stored at -80°C. The breaking buffer for the t-SNARE complex was composed of 25 mM HEPES/KOH, pH 7.4, 400 mM KCl, 10% glycerol, 2% Triton X-100, 30 mM imidazole, and freshly added 2 mM β-mercaptoethanol (βME). For syntaxin 1 and Munc18-1 purification, the salt concentration was reduced to 150 mM KCl. The bacterial pellets were rapidly thawed in a final buffer volume of 300 ml, containing 2 mM βME and a protease inhibitor mixture (final concentrations: leupeptin (1.5 µg/ml), antipain (2.5 µg/ml), turkey trypsin inhibitor (25 µg/ml), benzamidine (12.5 µg/ml), Pefabloc SC (6.25 µg/ml), aprotinin (1.25 µg/ml), chymostatin (5 µg/ml), and pepstatin (2.5 µg/ml)). Cells were lysed by one pass at 18,000 p.s.i. through a Microfluidizer M110L (Microfluidics). Insoluble material was removed by ultracentrifugation for 60 min at 40,000 rpm at 4°C in a 45Ti rotor (Beckman Coulter). 50 ml of the supernatants containing His6-tagged proteins was incubated for 1 h at 4°C with 1.5 ml of nickel-nitrilotriacetic acid beads (Qiagen). The beads were washed two times with breaking buffer and two times with buffer A (25 mM HEPES/KOH, pH 7.4, 100 mM KCl, 10% (w/v) glycerol, 2 mM βME) containing 30 mM imidazole and 1% Triton X-100. Beads were packed into a chromatography column and extensively washed with buffer A containing 50 mM imidazole and 1% n-octyl-β-D-glucoside. Proteins were eluted from the nickel-nitrilotriacetic acid resin with a gradient from 50 to 500 mM imidazole in 25 mM HEPES/KOH, pH 7.4, 100 mM KCl, 10% glycerol, 1% (w/v) n-octyl-β-D-glucoside, and 2 mM βME. SNAP-25 and Munc18-1 were purified in the absence of detergent. SNARE proteins were purified as described previously (2, 65).
Munc18-1 was directly eluted in buffer A containing 500 mM imidazole and dialyzed against buffer A, followed by an ultracentrifugation step at 50,000 rpm for 30 min at 4°C in a TLA55 rotor (Beckman Coulter). The supernatant was again dialyzed against buffer A, and protein aggregates were removed by ultracentrifugation as mentioned above. GST-VAMP8 was purified on glutathione beads, eluted by thrombin cleavage, and further purified using Mono S-Sepharose chromatography (GE Healthcare) (65). VAMP2 and SNAP-25 were further purified using Mono Q and Mono S chromatography (GE Healthcare), respectively. Protein concentrations were determined by SDS-PAGE and Coomassie Blue staining using defined amounts of BSA as the protein standard. Protein amounts were quantitated using the ImageJ software (National Institutes of Health). Light Scattering-To determine the hydrodynamic size of the reconstituted liposomes, a Zetasizer 1000HS (Malvern Instruments) was employed. Light scattering was measured at 633 nm, and the mean diameters of the vesicles were determined using the analysis software supplied by Malvern Instruments. Vesicle mean diameters are based on the peak analysis by intensity. Fusion Assays-Assays were performed in white 96-microwell FluoroNunc plates (Nunc). Typically, 5 µl of fluorescently labeled v-SNARE vesicles (containing either VAMP2 or VAMP8, ~6 nmol of lipid) were mixed with 30 µl of t-SNARE or syntaxin liposomes (~157 nmol of lipid), and fusion was measured in buffer A in the absence or presence of additional components in a final volume of 70-80 µl. Specific preincubation and order-of-addition procedures are described under "Results." Briefly, preincubation steps usually occurred in 500-µl standard reaction tubes at the indicated temperatures. The probes were then transferred to and mixed in a preheated 96-well plate, and the fluorescence measurements were immediately started in the prewarmed fluorescence microplate reader (Fluoroskan Ascent FL, Thermo Scientific). NBD fluorescence was detected with filters at 460 nm (excitation) and 538 nm (emission) and monitored at 1-min intervals. After 2 h at 37°C, 10 µl of a 2.5% (w/v) n-dodecyl-β-D-maltoside solution was added to terminate the reaction and to allow maximum fluorescence dequenching. Fluorescence measurements were normalized by setting the lowest NBD fluorescence signal to zero and the NBD fluorescence after n-dodecyl-β-D-maltoside lysis to 100%, as described previously (2). The maximum fusion rates within the first 30 min of the fusion reaction were used to determine the initial fusion rates and to calculate the inhibition/stimulation efficiencies relative to the fusion reaction containing wild type syntaxin 1 in the absence of Munc18-1. The statistical analyses include at least three independent experiments. Syntaxin 1 Liposomes Fuse with VAMP2 Liposomes in a SNAP-25-dependent Manner-To test the role of Munc18-1 in t-SNARE complex assembly in the membrane environment and to monitor subsequent membrane fusion, recombinant syntaxin 1 was expressed in bacteria, purified, and reconstituted into liposomes. To mimic the SNARE density in physiological membranes, VAMP2 was reconstituted at a protein to lipid ratio of about 1:250, corresponding to the VAMP2 density in synaptic vesicles (66). Syntaxin 1 was reconstituted into liposomes at a protein to lipid ratio of ~1:3000.
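To make the readout steps above concrete, the following is a minimal Python/NumPy sketch of the normalization (lowest NBD signal to 0%, post-detergent signal to 100%) and the initial-rate computation (maximum per-minute slope within the first 30 min). The 1-min sampling interval matches the protocol, while the synthetic trace and function names are illustrative assumptions, not the authors' actual analysis code.

```python
import numpy as np

def normalize_nbd_trace(raw, f_max_detergent):
    """Set the lowest NBD signal to 0% and the post-lysis signal to 100%."""
    raw = np.asarray(raw, dtype=float)
    return 100.0 * (raw - raw.min()) / (f_max_detergent - raw.min())

def initial_fusion_rate(norm_trace, minutes_per_point=1.0, window_min=30):
    """Maximum per-minute slope within the first `window_min` minutes,
    taken here as the initial fusion rate (% of max fluorescence/min)."""
    n_points = int(window_min / minutes_per_point)
    slopes = np.diff(norm_trace[:n_points + 1]) / minutes_per_point
    return slopes.max()

# Toy trace: 2 h at 1-min intervals, saturating fusion signal.
t = np.arange(121.0)
raw = 50.0 + 400.0 * (1.0 - np.exp(-t / 40.0))   # arbitrary fluorescence units
norm = normalize_nbd_trace(raw, f_max_detergent=500.0)
print(round(float(initial_fusion_rate(norm)), 2), "%/min")
```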
However, because of some variation in the reconstitution efficiencies of the different syntaxin 1 constructs, the syntaxin 1 to lipid ratios covered a range of 1:2750-1:4500. When different syntaxin constructs were directly compared, liposomes containing similar protein to lipid ratios were employed, and in every case aliquots of the fusion reactions were analyzed by SDS-PAGE and Coomassie Blue staining. Dynamic light scattering revealed that VAMP2 and syntaxin 1 liposomes had mean diameters of about 80 and 130 nm, respectively. Liposome aggregates were not detectable. Taking account of these size estimates, ~17-28 syntaxin 1 molecules would be exposed on the surface of a 130-nm liposome, whereas VAMP2 liposomes would contain about 120 surface-exposed v-SNAREs. To obtain robust fusion signals, v-SNARE liposomes were incubated with a 10-fold molar excess of syntaxin 1 liposomes. Thus, at low temperature, which usually blocks/slows down lipid mixing, an average of 10 syntaxin 1 liposomes would bind to a single v-SNARE liposome containing 120 VAMP2 molecules. Making this assumption, 12 SNAREpins would theoretically be available per fusion site. Because 1-8 SNAREpins appear to be sufficient to drive membrane fusion, the average number of available SNAREpins per docking/fusion site will not become a rate-limiting factor in the fusion assay (67-69). Membrane fusion was measured by a well established lipid-mixing assay based on fluorescence dequenching (2, 70). When donor VAMP2 liposomes, which contain a quenched pair of fluorescently labeled lipids (rhodamine-PE, NBD-PE), fuse with unlabeled acceptor syntaxin 1 liposomes, the fluorophores are diluted, and the NBD fluorescence increases. Fig. 1A shows that syntaxin 1 liposomes do not fuse to a significant degree with VAMP2 liposomes in the absence of SNAP-25. Thus, membrane fusion depends on the presence of SNAP-25, and increasing concentrations of soluble SNAP-25 raise both the initial rate and the final extent of lipid mixing (Fig. 1A). Efficient fusion requires a significant molar excess of SNAP-25 over membrane-embedded syntaxin 1 (Fig. 1B). We noticed that t-SNARE complex formation in the membrane environment is less efficient than t-SNARE complex formation in solution, suggesting that membrane-embedded syntaxin 1 is less reactive. Therefore, the fusion efficiencies are lower than those obtained with liposomes containing already preassembled t-SNARE complexes at similar protein to lipid ratios (data not shown). These results suggest that the formation of productive t-SNARE complexes is the rate-limiting step in this liposome fusion assay. Munc18-1 Inhibition of Liposome Fusion Requires the Closed Conformation of Syntaxin 1 but Occurs Independently of the Syntaxin 1 N-peptide Interaction-Because t-SNARE complex formation is the rate-limiting step, it would be expected that Munc18-1, which stabilizes the closed conformation of syntaxin 1, should block fusion. Indeed, a 30-min preincubation of syntaxin 1 liposomes with increasing amounts of Munc18-1, followed by the addition of SNAP-25, significantly inhibits membrane fusion (Fig. 2A). Even preincubation of the syntaxin 1 liposomes with Munc18-1 for a few minutes was sufficient to obtain the inhibition (data not shown), because the formation of syntaxin 1·Munc18-1 complexes is fast and efficient compared with t-SNARE complex assembly. Control experiments in the absence of SNAP-25 revealed that Munc18-1 does not affect the fluorescence signal (data not shown).
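The surface-exposure estimates quoted above can be reproduced approximately with simple geometry. The sketch below assumes a mean lipid headgroup area of ~0.65 nm² and that about half of the reconstituted copies face outward; both numbers are our assumptions for illustration, chosen because they roughly recover the reported ~17-28 syntaxin 1 copies (1:2750-1:4500) and ~120 VAMP2 copies (1:250).

```python
import math

AREA_PER_LIPID_NM2 = 0.65   # assumed mean headgroup area per lipid
OUTWARD_FRACTION = 0.5      # assumed fraction of copies facing outward

def surface_exposed_copies(diameter_nm, protein_to_lipid):
    """Estimate surface-exposed protein copies per liposome from its
    diameter and the reconstitution protein:lipid molar ratio."""
    r = diameter_nm / 2.0
    lipids_per_leaflet = 4 * math.pi * r**2 / AREA_PER_LIPID_NM2
    total_lipids = 2 * lipids_per_leaflet        # outer + inner leaflet
    total_proteins = total_lipids * protein_to_lipid
    return total_proteins * OUTWARD_FRACTION

# 130-nm syntaxin liposomes over the reported ratio range, and
# 80-nm VAMP2 liposomes at 1:250.
for d, ratio in [(130, 1 / 4500), (130, 1 / 2750), (80, 1 / 250)]:
    print(d, "nm, 1:%d ->" % round(1 / ratio),
          round(surface_exposed_copies(d, ratio)), "copies")
```

Dividing the ~120 outward-facing VAMP2 copies among the ~10 docked syntaxin liposomes then gives the quoted ~12 SNAREpins per fusion site.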
Maximum inhibition was reached when Munc18-1 and syntaxin 1 were present at equimolar amounts (compare lanes in Fig. 2, A and B). Inhibition was efficient (80%) (see also Fig. 3D) but not complete, because the syntaxin 1 liposomes may contain a syntaxin pool that is binding-competent for SNAP-25 but not for Munc18-1. This syntaxin pool could contain syntaxin 1 in the open conformation. To test whether a functional syntaxin 1 N-peptide is required for the Munc18-1-mediated inhibition, a syntaxin 1 construct lacking the N-terminal 24 amino acids (d24) and a syntaxin 1 point mutant (L8A) impairing the interaction of Munc18-1 with the syntaxin 1 N-peptide were reconstituted into liposomes. These mutants displayed slightly reduced membrane fusion kinetics, but the inactivation of the N-peptide did not abolish the inhibitory function of Munc18-1 in the liposome fusion assay (Fig. 3A and supplemental Fig. S1). Fig. 3D shows a comparison of the initial kinetics of the fusion reactions relative to wild type syntaxin 1. Next we analyzed how the open conformation of syntaxin 1 (L165A/E166A), which is characterized by a reduced Munc18-1 affinity and an impaired binding mode 1, affects membrane fusion in the presence of Munc18-1 (26). In the presence of the open conformation of syntaxin 1, Munc18-1 did not show any inhibitory effect; it even stimulated liposome fusion by a factor of ~2 (Fig. 3, B and D). The stimulation suggests that an interaction of Munc18-1 with the open syntaxin increases SNAREpin formation/assembly. Unexpectedly, Fig. 3D also demonstrates that the open conformation of syntaxin shows a 2-fold lower fusion activity than wild type syntaxin 1. The open syntaxin might form oligomers or increased amounts of t-SNARE complexes containing syntaxin 1/SNAP-25 at a 2:1 ratio, which are known to form SNAREpins inefficiently (7). Taken together, the data clearly demonstrate that the inhibitory function of Munc18-1 requires syntaxin 1 in its closed conformation but occurs independently of the Munc18-1/syntaxin 1 N-peptide interaction. Preincubation of Munc18-inhibited Syntaxin 1 Liposomes in the Presence of SNAP-25 and VAMP2 Liposomes Stimulates Lipid Mixing-Previous experiments have shown that Munc18-1 stimulates liposome fusion when the liposomes contain preassembled t-SNAREs (41). This stimulation strictly required a preincubation of Munc18-1 with both the v-SNARE and t-SNARE liposomes under nonfusogenic conditions and was VAMP2-specific. This raises the following question: is the inhibited state of the syntaxin 1·Munc18-1 complex released by a subsequent incubation with both SNAP-25 and VAMP2 liposomes? To test this point, we changed the previous incubation regime; syntaxin 1 liposomes were preincubated with Munc18-1 for 30 min at room temperature as before, but now SNAP-25 and VAMP2 liposomes were added simultaneously, incubated for 1 h on ice, which inhibits lipid mixing, and subsequently warmed up to 37°C to start membrane fusion (compare incubation schemes in Fig. 4A). Remarkably, the preincubation in the presence of VAMP2 liposomes was sufficient to reverse the inhibitory effect of Munc18-1, and more importantly, Munc18-1 stimulated fusion (Fig. 4B). The analysis of the initial kinetics shows that Munc18-1 stimulates liposome fusion by a factor of 5.5, which is comparable with the Munc18-1 stimulation observed with preassembled t-SNAREs (Fig. 4D) (41).
Thus, the closed Munc18-1·syntaxin 1 complex, which, based on our previous experiments, was in an inhibited state (Fig. 2), was able to switch to an open conformation, allowing SNAREpin formation. Munc18-1, which now interacts with VAMP2, apparently stabilizes the newly formed SNAREpins and/or provides additional force to favor SNAREpin zippering and membrane fusion. To test whether this reaction indeed requires the dual interaction of Munc18-1 with the v- and t-SNARE, fusion reactions were also performed with VAMP8 liposomes. When VAMP8 liposomes were used, neither was the Munc18-1 inhibition released nor was the stimulation observed (Fig. 4, B and D). Thus, the inhibition release and stimulation need a productive Munc18-1/VAMP2 interaction. Syntaxin 1 N-peptide/Munc18-1 Interaction Is Required to Stimulate Liposome Fusion-To test whether the stimulatory function requires an interaction of Munc18-1 with the syntaxin 1 N-peptide, the syntaxin 1 constructs containing the L8A and the d24 mutations were used in the fusion assay. Both mutations significantly reduced the Munc18-1-dependent stimulation, indicating that the stimulatory effect requires an interaction of Munc18-1 with the syntaxin 1 N-peptide (Fig. 5, A and D, and supplemental Fig. S2). Interestingly, the syntaxin 1 N-peptide seems not to be necessary to release the Munc18-1 inhibition. We also tested the open conformation of syntaxin 1, which did not show any Munc18-1-dependent inhibition in the t-SNARE complex assembly assay. As expected, the open syntaxin 1 was characterized by a dramatic stimulation, clearly visible in the initial fusion rates (supplemental Fig. S3). Interestingly, when VAMP8 liposomes were used, we observed a small but reproducible stimulation by Munc18-1 (supplemental Fig. S3). Hence, independent of a VAMP2 interaction, Munc18-1 might further support membrane fusion by increasing the number of t-SNARE complexes containing open syntaxin 1. To support the conclusion that t-SNARE complex formation contributes a rate-limiting step in the overall fusion reaction, we bypassed the t-SNARE complex assembly process by reconstituting preassembled t-SNAREs containing either the wt or open syntaxin 1 into liposomes and analyzed them in the fusion assay. In these experiments, the fusion kinetics of the wild type and open syntaxin 1 in the presence of Munc18-1 did not differ significantly (supplemental Fig. S4). Thus, when t-SNARE complexes have formed, an open syntaxin 1 conformation does not contribute additional functions to the subsequent reactions in the reconstituted system. DISCUSSION Here, we have resolved the different functional properties of Munc18-1 and syntaxin 1 at distinct stages of SNARE complex assembly in a reconstituted fusion assay, which analyzes SNARE complex formation in a membrane environment and measures lipid mixing as the ultimate functional readout. Our experiments reveal that the inhibitory effect of Munc18-1 depends on a closed conformation of syntaxin 1 but does not require the N-peptide of syntaxin 1. The observation that Munc18-1 inhibits t-SNARE complex assembly in a lipid environment is consistent with previous experiments demonstrating the inhibitory mode in solution (71). Because we used soluble SNAP-25, which lacks the post-translational palmitoyl modifications, we cannot exclude that t-SNARE complexes might form more efficiently on liposomes in the presence of palmitoylated SNAP-25.
However, the presence of palmitoyl membrane anchors in SNAP-25 should not affect the inhibitory Munc18-1/syntaxin 1 interaction, which is characterized by the distinct binding revealed in the crystal structure of the Munc18-1·syntaxin 1 complex (24). Remarkably, the simultaneous preincubation of the Munc18-1-inhibited syntaxin 1 liposomes with SNAP-25 and VAMP2 liposomes at low temperature (nonfusogenic conditions) releases the Munc18-1 inhibition, and Munc18-1 is converted into a stimulator. The inhibition release and the stimulation depend on the specific Munc18-1/VAMP2 interaction. Thus, at steady state, an inhibitory Munc18-1·syntaxin 1 complex dominates in the presence of SNAP-25, and only a small fraction of t-SNARE complexes might form. In the presence of v-SNARE liposomes, this small fraction of t-SNARE complexes can bind the v-SNARE, resulting in SNAREpins that still need to zipper up in the membrane-proximal region. Because VAMP8 liposomes are not sufficient to efficiently relieve the inhibitory function of Munc18-1, it becomes apparent that the specific interaction of Munc18-1 with VAMP2 is required to further drive the reaction. In the simplest model, Munc18-1 binds VAMP2 in partially assembled SNAREpins, thereby enhancing SNAREpin stability/assembly and membrane fusion (41). Thus, the small fraction of SNAREpins, which forms at steady state, would be constantly consumed by the action of Munc18-1. Alternatively, VAMP2 might function at an earlier stage, before it binds its cognate t-SNARE partners. In an initial step, VAMP2 could interact with the Munc18-1·syntaxin 1 complex and relieve the inhibitory conformation, and SNAP-25 binding would subsequently follow. The exact order in which VAMP2 interacts with Munc18-1 and its cognate t-SNAREs still remains open. A recent study showed that VAMP2 itself, and VAMP2 as part of the SNARE four-helix bundle, can compete with the syntaxin 1 Habc domain for binding to the central cavity of Munc18-1, thereby suggesting a potential molecular reaction mechanism (72). A model emerges in which VAMP2 contributes to the displacement of the Habc domain from the central cavity of Munc18-1 or blocks the reassociation of Munc18-1 with the Habc domain. Mapping of the binding site revealed that Munc18-1 interacts with the membrane-proximal region of VAMP2 (72). Based on FRET experiments, VAMP2 likely binds domain 3a of Munc18-1 (72). In addition, structural analysis showed that domain 3a can adopt a helical structure, which is compatible with a direct interaction with the helical SNARE motif of syntaxin 1 (73). Thus, upon relief of the Habc domain inhibition, the central cavity of Munc18-1 and likely domain 3a will be available to bind the SNARE motifs within the SNAREpin. In this binding mode, Munc18-1 stimulates SNAREpin assembly and membrane fusion. Indeed, in vitro fusion experiments using preassembled t-SNAREs showed that the central cavity of Munc18-1 contains critical amino acids that are required to stimulate fusion (62). Point mutations within the Habc domain that impair the binding to the central cavity of Munc18-1, or removal of the entire Habc domain excluding the N-peptide, do not abolish Munc18-1-dependent fusion stimulation in vitro (62). In contrast, the syntaxin 1 N-peptide/Munc18-1 interaction is required to stimulate membrane fusion (41, 62). Although the exact function of the syntaxin 1 N-peptide still remains unclear, recent structural analyses suggest that it can alter the conformation of Munc18-1 (73).
[Figure legend spilled into the text; condensed: A, syntaxin 1 liposomes were incubated in the absence or presence of Munc18-1 (2-fold molar excess over syntaxin 1) at room temperature for 30 min in a total volume of 50 µl; reactions were transferred onto ice, a 3-fold molar excess of SNAP-25 and 5 µl of v-SNARE liposomes were added, and the samples were incubated for an additional hour in a final volume of 70 µl before fusion was monitored in a preheated microwell plate as described. B, 20% of the fusion reactions shown in A were separated by SDS-PAGE and stained with Coomassie Blue; lanes from the same gel were cropped as indicated. C, initial rates of fusion reactions containing wt, L8A, or d24 syntaxin 1 liposomes in the absence or presence of Munc18-1, normalized to the reaction containing wild type syntaxin 1 liposomes and VAMP2 liposomes (set to 1); all reactions contained SNAP-25; error bars represent means ± S.E.] Functional studies show that the release of the Munc18-1 inhibition indeed requires VAMP2 but can occur independently of the syntaxin 1 N-peptide. Interestingly, the Munc18-1/syntaxin 1 N-peptide interaction is also controlled by phosphorylation, which inhibits regulated exocytosis (61). In general, depending on the intracellular transport step, the physiological requirements for syntaxin N-peptides and the affinities of distinct Munc18 homologs for syntaxin N-peptides, Habc domains, and SNARE complexes can vary significantly (74). A previous study showed that the deletion of the syntaxin 1 N-peptide (first 24 amino acids) permits v-/t-SNARE complex formation in the presence of Munc18-1, suggesting an inhibitory role for the syntaxin 1 N-peptide/Munc18-1 interaction (35). This inhibitory role of the N-peptide in SNARE complex assembly seems to contradict the stimulatory role of the N-peptide in membrane fusion. However, the different experimental approaches provide an explanation. In one set of experiments, the assembly of cytoplasmic SNARE domains was measured (35). These experiments also show that the affinity of Munc18-1 for the cytoplasmic domain of syntaxin 1 (Kd = 1.4 ± 0.3 nM) drops in the absence of the N-peptide (Kd = 8.1 ± 1.0 nM). The binding of Munc18-1 to the N-terminal regulatory domain (amino acids 1-179) is significantly weaker (Kd = 693.9 ± 84.2 nM). Thus, the absence of the N-peptide further reduces the affinity of Munc18-1 for the Habc domain ((35) and data not shown). Because the Habc domain apparently functions as an inhibitor, a weaker Munc18-1 interaction (in the absence of the syntaxin 1 N-peptide) would reduce its inhibitory function, thus allowing cis SNARE complex formation as observed by Burkhardt et al. (35). The other sets of experiments analyze full-length SNAREs in their cellular environment or reconstituted into liposomes, measure SNAREpin assembly, and use membrane fusion as a functional readout system. In such assays, other functions of the N-peptide become apparent. Already the membrane environment adds different constraints. For example, in solution, Munc18-1 shows only a weak interaction with assembled v-/t-SNARE complexes that lack the syntaxin N-peptide (41). Upon reconstitution of assembled v-/t-SNARE complexes into liposomes, the presence or absence of the N-peptide hardly affected Munc18-1 binding to the v-/t-SNARE liposomes (41).
In addition, our experiments indicate that t-SNARE complex formation in liposomes differs considerably from t-SNARE complex formation in solution, consistent with a role of lipids in SNARE complex assembly (75, 76). It is also worth mentioning that the binding of Munc18-1 to membrane-embedded syntaxin 1 could significantly change the functional state of syntaxin 1, in such a manner that an inherently inactive pool of syntaxin might be shifted into a more reactive state (77). Among other possibilities, Munc18-1 could activate syntaxin 1 oligomers or clusters (78, 79). Our observation that Munc18-1 weakly stimulates the fusion of liposomes containing open syntaxin 1, independent of the specific v-SNARE, suggests that Munc18-1 is able to increase the pool of reactive syntaxin 1 molecules. We also noted that upon further reducing the syntaxin 1 to lipid ratio, which coincides with an overall reduction of the fusion signal, the Munc18-1 stimulation became less dependent on the syntaxin 1 N-peptide. At such low syntaxin 1 copy numbers, the reactive syntaxin 1 molecules per liposome could become limiting, changing the rate-limiting step, and additional functions of Munc18-1 might now dominate the overall reaction. Low protein to lipid ratios in the syntaxin liposomes make the assay particularly sensitive to changes in the functional state of the syntaxin 1 population. In reactions using already preassembled t-SNAREs (bypassing t-SNARE complex assembly), such an additional Munc18-1 activity has not been observed (62). In conclusion, by probing the function of Munc18-1 in a reconstituted fusion assay, it is possible to assign distinct Munc18-1/SNARE interactions to different steps of the reaction cascade that mediates membrane fusion in vivo. Remarkably, under the experimental conditions employed, an external factor that releases the Munc18-1 inhibition is not strictly required, indicating that Munc18-1, syntaxin 1, SNAP-25, and VAMP2 are the core components of this off/on switch. Munc18-1 inhibits syntaxin 1 to prevent unspecific SNARE complex assembly and membrane fusion. The binding of Munc18-1 to VAMP2 then provides a basic switch to convert the inhibition into a compartment-specific stimulation. In vivo experiments demonstrate that additional factors such as lipids and Munc13 play an important role in syntaxin activation, suggesting that, at least in the case of regulated exocytosis, additional regulatory components have been added to further control the universal fusion machinery. We expect that variations of the reconstitution assay in combination with careful kinetic studies will be suitable to identify and characterize such factors controlling the Munc18-1 switch.
Posterolateral rotatory instability of the elbow - a new use for the sugar tong cast Posterolateral rotatory instability of the elbow was first described by O'Driscoll. It is described as the first stage in the spectrum of elbow instability. It is usually caused by a traumatic fall onto the outstretched hand and may be associated with a dislocation of the radiohumeral or elbow joint. It can occur in isolation or with a concomitant bony injury, usually a radial head fracture. It is thought to relate to injury with laxity or rupture of the ulnar part of the lateral collateral ligament. This allows a transient rotatory subluxation of the ulnohumeral joint and a secondary dislocation of the radiohumeral joint. The annular ligament remains intact so the radio-ulnar joint does not dislocate. Introduction A sugar tong cast 1 is an effective, comfortable and cheap alternative to commercially available hinge braces in the treatment of posterolateral instability of the elbow. Its use and application is described. Posterolateral instability of the elbow - pathology and diagnosis Patients will often describe a feeling of instability with pain and occasionally clunking on movement. Posterolateral instability of the elbow is notoriously difficult to diagnose as it often has no obvious clinical or radiological signs. O'Driscoll described a posterolateral rotatory instability test 3 which is similar in principle to the pivot shift test of the knee. It is performed by supinating the forearm and applying a valgus moment with axial compression whilst flexing the elbow from full extension. The test is positive if a palpable clunk is felt as the elbow reduces at approximately 40° of flexion. This test can be performed in clinic but is most sensitive under general anesthetic. MRI is a useful examination as it can demonstrate rupture or inflammation within the lateral collateral ligament complex. Patients often present with a chronic or neglected problem with the elbow. In this instance surgical reconstruction of the lateral collateral ligament complex is recommended. 2,4-6 However, if the condition is diagnosed acutely, current literature recommends the use of a hinged brace holding the forearm pronated and blocking extension at 30°. 4,6 This prevents further subluxation whilst the lateral collateral ligaments recover. Hypothesis and methods The use of a commercially available hinged elbow brace is traditionally recommended. These are usually expensive and have to be ordered individually, leading to potential delays in treatment. Patients find the braces cumbersome and as such they are often poorly tolerated. Hypothesis.
It was hypothesised that a simple sugar tong cast 1 would provide adequate stabilization and as such might represent a cheap and readily available alternative to the commercially made hinged brace. Three consecutive patients with posterolateral rotatory instability have been treated using only a sugar tong cast for a total period of between 6 and 8 weeks. Results The sugar tong cast was cheap and easy to apply. It provides an adequate block to extension whilst resisting rotation. If made out of a modern, lightweight and synthetic material it can last for the duration of treatment. These casts are much less bulky and have excellent patient acceptability. We have now successfully completed the treatment of three elbows with posterolateral rotatory instability using the sugar tong cast. We have found high levels of patient comfort and tolerance with no complications. All elbows resolved completely. All elbows have had no recurrence of symptoms or need for surgery at follow up of 18 months. Application of the sugar tong cast Application of the sugar tong cast is based on the 'sugar tong' principle and was originally described for use in forearm fractures of the child by JH Stilwell. 1 It can be applied as demonstrated in Figs. 1-7. The sugar tong cast does allow the forearm to be held in any desired degree of rotation, which increases its versatility. In posterolateral rotatory instability the position has to be pronation. Conclusion In conclusion we recommend the use of a modified sugar tong cast in the treatment of posterolateral rotatory instability of the elbow. The cast could also be utilized in any condition of the elbow requiring control of rotation and limitation of extension.
Video Content Analysis of Human Sports under Engineering Management Incorporating High-Level Semantic Recognition Models In this paper, a high-level semantic recognition model is used to parse the video content of human sports under engineering management. The manifold of the previous layer is embedded into the convolutional operation of the next layer, so that each layer of the convolutional neural network can effectively preserve the manifold structure of the previous layer, thus obtaining a video image feature representation that reflects nearest-neighbor relationships and association features between images. The method is applied to image classification, and the experimental results show that it extracts image features more effectively, thereby improving classification accuracy. Since fine-grained actions usually share very high similarity in appearance and motion patterns, with only minor differences in local regions, and inspired by the human visual system, this paper proposes integrating visual attention mechanisms into the fine-grained action feature extraction process to guide feature extraction toward discriminative cues. Taking the problem as the guide, we formulate an athlete tacit knowledge management strategy, select the distinctive freestyle aerial skills national team as the object of empirical analysis, compose a scientific, organization-specific tacit knowledge management program, exert influence on the members during implementation, and revise the program into a tacit knowledge management implementation plan with promotion value. Group behavior can be identified by analyzing the behavior of individuals and the interaction information between individuals. Individual interactions in a group can be represented by individual representations, and the relationships between individual behaviors can be analyzed by modeling the relationships between individual representations. On mismatched datasets, the performance improvement is comparable between the long short-term memory (LSTM) network based on temporal information and the language recognition method with high-level semantic embedding vectors: compared with the method using the original model, the two methods improve by about 12.6% and 23.0%, respectively, and compared with an i-vector baseline system based on a support vector machine classifier with radial basis functions, the performance improvements are about 10.10% and 10.88%, respectively. Introduction With the continuous development of information technology, the ways in which people obtain and store massive video information keep diversifying, and video has gradually become the mainstream multimedia data carrier. In the context of huge video data resources, users face the challenge of how to efficiently retrieve video resources according to their interests [1]. Therefore, it is necessary to classify and organize massive video resources intelligently to facilitate retrieval according to user preferences. Video semantic analysis technology can annotate and classify important semantic information in videos, and users can retrieve according to their preferred categories, which improves the efficiency of users' access to information. In addition, computer-based video semantic analysis can replace manual annotation work, saving substantial human resources and improving information utilization.
Video semantic concept analysis refers to the generalized description of video content after obtaining video sequences; events, scenes, objects, and so forth constitute the multicategory semantic information contained in semantic concepts. A large amount of video data has large intraclass variations for the same action class, which may be caused by background clutter, viewpoint changes, and variations in movement speed and style [2]. The high dimensionality and low resolution of the videos further increase the difficulty of designing efficient and robust recognition methods. Although traditional manual annotation methods can achieve the understanding and description of video semantic concepts to a certain extent, manual annotation is costly in time and labor, is subjective, struggles to cross the semantic gap between underlying features and the semantic understanding of video data, and cannot annotate fast enough to classify and organize video data efficiently [3]. Therefore, in recent years, researchers have focused on how to automatically access the semantic concepts of video data and annotate, classify, and organize rich video data. This research has significant academic and applied value and helps to make video management techniques more complete and more efficient. Video semantic concept analysis is a key and difficult area in the field of machine learning and pattern recognition, where video data can be efficiently and intelligently retrieved and organized by recognizing and understanding the main events, scenes, and objects in the video. In recent years, with the rapid development of technology and the improvement of the computing power of hardware devices, video acquisition has become faster and easier [4,5]. Video retrieval is the process of finding a match in the video database for the user's textual description according to a specific algorithm and filtering all videos that match the user's needs according to some qualifying conditions. Group behavior recognition technology can derive labels for crowd scene images that can provide clues for the retrieval of images and videos of group scenes. The advancement of group behavior recognition technology is a great boost to crowd scene classification, labeling, and retrieval. With the popularity of electronic devices, especially mobile electronic devices such as mobile phones, the change in people's lifestyles, and the need to record productive lives, a large amount of image and video data has been generated. However, it is not a simple task to manage and utilize these huge amounts of images and data [6]. Current video retrieval technology relies heavily on users submitting and sharing videos along with video subject descriptions.
This is difficult to achieve in real time, and it is difficult to get down to specifics, as describing specific details is tedious and time-consuming; only image and video analysis techniques can achieve real-time frame-by-frame analysis. Group behavior recognition techniques are therefore of great interest for real-time frame-level video classification and retrieval of crowd scenes. Classification and detection of video-based actions is an important research topic in the field of computer vision, with a very wide range of applications in intelligent human-computer interaction, video surveillance, telemedicine, and other fields. The difficulties of traditional action analysis tasks mainly come from several aspects: the differences arising from the same action performed by different people; the influence of environmental factors, such as occlusion, viewpoint changes, lighting differences, and dynamic background interference; and the ambiguity in defining the starting and ending points of an action. However, the existing recognition and detection performance still falls far short of the accuracy requirements of practical applications. The reason for this is that, on the one hand, the number of action classes in existing datasets is limited and cannot cover all actions in realistic scenes, and the definition of the classes is relatively coarse, so models trained on coarse-scale action classes are not able to analyze increasingly fine-grained action classes. Another important reason is that action classes in existing datasets are usually hand-selected and cropped, resulting in more significant appearance and motion differences between classes, while action boundaries in realistic scenes are usually fuzzy and uncertain, and similarities between actions are often large, often with only minor local differences between them. In such cases, more fine-grained action discrimination and detection are required. Therefore, we believe that fine-grained action analysis will facilitate further breakthroughs in the task of action classification and detection, thus promoting the advancement of abstract theoretical research to practical applications in realistic scenes. Related Work How to represent the behavior in the video is the core problem of behavior recognition research, which determines the recognition performance to a certain extent. There exist many feature representation methods, which can be divided into two categories according to the source of the features: handcrafted features and features learned from samples [7]. Handcrafted features are features designed by research experts based on human visual principles. In contrast, features obtained from samples are learned without prior feature design, and suitable feature representations are found directly in training samples by various types of learning algorithms, among which deep learning methods have become the mainstream method for learning features due to their excellent performance [8]. It has been pointed out that implicit knowledge is real in the practice of competitive sports, especially in the acquisition of motor skills, where implicit cognition plays an important role, and coping strategies have been proposed for how implicit cognition and explicit cognition can be transformed into each other in motor skill learning. Large-scale image retrieval systems necessarily have high demands on the time overhead of retrieval [9].
The performance problem must be solved for retrieval over collections of hundreds of millions of images [10]. The basis of image retrieval lies in the similarity calculation between the query image and the database images in the feature space. Calculating the similarity against each vector by a linear traversal is very time-consuming [11]. How to ensure the efficiency of image retrieval systems has been a key research direction in the fields of information retrieval, machine learning, and computer vision [12]. Considering that the image features to be retrieved are not uniformly diffuse in the feature space but follow some specific distribution pattern, it is sometimes not necessary to traverse the whole query space to find the nearest-neighbor feature vector. Based on this idea, many kinds of tree-based index structures were proposed to narrow the retrieval domain of the query vector by recursively partitioning the feature space. The vector quantization approach, on the other hand, approximates the original features by quantizing them to certain representative elements [13]. Using quantization methods, the global features of an image are usually quantized into sparse storage, and the query features are associated with only a small number of relevant quantization points during the similarity calculation, which corresponds to a reduction in the dimensionality of the image features and therefore an increase in retrieval speed [14]. In terms of tacit knowledge measurement, most relevant work applies hierarchical analysis and fuzzy comprehensive measurement methods to specific applications [15]. In recent years, many scholars have started to combine the relevant theories of discrete mathematics with multicriteria decision-making to characterize the weight information for multiple scenario comparisons and decisions. The theory of partially ordered sets, as an important element of order theory in discrete mathematics, is a very attractive decision support tool, as options can be compared and ranked by qualitative weight information; however, no systematic theoretical system has yet been developed. Many scholars have addressed the influence of nontechnical factors on athletes' performance from psychological and philosophical perspectives. There are many discussions about implicit learning in psychology, and about alienation and the sustainable development of competitive sports from a philosophical perspective. The literature applying knowledge management theory to competitive sports practice is sparse and mostly general. A systematic study of the basic theoretical issues of athletes' tacit knowledge will to some extent make up for the lack of research in this field, thus promoting the enrichment and development of the theoretical system of athletes' tacit knowledge. High-Level Semantic Recognition Model Design under Engineering Management.
Interaction information is an important clue for the task of group behavior identification, and mining the interaction information between individuals is crucial for identifying individual behavior and group behavior. Group behavior can be identified by analyzing the behavior of individuals and the interaction information between individuals. Individual interactions in a group can be represented by individual representations, and the relationships between individual behaviors can be analyzed by modeling the relationships between individual representations [16]. It is necessary to extract the appearance representation of individuals, to establish the relationship between individual interactions and individual representations, to analyze individual behavior and model the relation between group behavior and individual interaction, and to obtain the group behavior by analyzing individual representations, individual behaviors, and the interactions between individuals. The attention mechanism is used in the fusion of individual interaction information. By an attention mechanism, we mean that the model devotes more attention to important information and allocates more of its capacity to it. The attention mechanism is essentially designed to pick out the most representative information and discard features with less information. In particular, the feature information generated by deep learning models is very large, and only by using the attention mechanism in a large feature space can the model extract more effective features and discard useless information. Attention mechanisms can be used on different dimensions of features: spatial attention to determine which region in the image is more significant and should receive attention, temporal attention to determine which moment in the temporal sequence contains more information, and channel attention to highlight the important role of certain channels in the feature. Attention can be achieved in both hard and soft ways; the hard attention approach completely retains some information and completely discards other information, whereas the soft attention approach generates new states by weighting the information entering the new state. For the language recognition back-end, a support vector machine classifier is trained on the i-vectors; its training objective can be written as

$\min_{w,b} \; \tfrac{1}{2}\lVert w \rVert^2 \quad \text{subject to} \quad y_i\,(w^{\top} x_i + b) \ge 1,$

where $x_i$ and $y_i$ represent the i-vector of the i-th training language sample and the corresponding label, respectively, and $w$ and $b$ are the parameters of the classification hyperplane to be trained. This is an optimization problem with constraints, and thus it can be optimized using the Lagrange multiplier method, so the following function is defined:

$L(w, b, \alpha) = \tfrac{1}{2}\lVert w \rVert^2 - \sum_i \alpha_i \bigl[\, y_i\,(w^{\top} x_i + b) - 1 \,\bigr], \qquad \alpha_i \ge 0.$

Since the error back-propagation algorithm is, in fact, a search for states close to the extremum in a large numerical solution state space, a good starting point can both prevent the gradient descent algorithm from getting stuck in local extremes that are difficult to jump out of and reduce the time to find the global optimal solution, if the initialization point is close enough to the optimal solution. Moreover, the response threshold of the activation function is finite due to the nonlinear factor of the model [17]. A good initialization can reasonably activate the activation function so that most parameters are involved in the training, allowing most neurons to participate in the expression without dying, as shown in Figure 1.
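As an illustration of the soft-attention fusion of individual representations described at the start of this section, the following is a minimal NumPy sketch; the scalar scoring projection `w_att`, the feature dimensions, and the weighted-sum fusion are simplifying assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def attention_fusion(individual_feats, w_att):
    """Soft attention over N individual representations (N x D).

    Each individual gets a scalar score from a learned projection;
    softmax turns scores into weights, and the group representation is
    the weighted sum, so informative individuals contribute more and
    uninformative ones are damped."""
    scores = individual_feats @ w_att           # (N,) attention scores
    weights = softmax(scores)                   # (N,) sum to 1
    group_repr = weights @ individual_feats     # (D,) fused representation
    return group_repr, weights

# Toy usage: 5 individuals with 8-dim appearance features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 8))
w = rng.normal(size=8)
group, alpha = attention_fusion(feats, w)
print(alpha.round(3), group.shape)
```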
Moreover, the response threshold of the activation function is finite due to the nonlinear factor of the model [17]. A good initialization parameter can reasonably activate the activation function so that most parameters are involved in the training, allowing most neurons to participate in the expression without dying, as shown in Figure 1. e data distribution of the dataset is often fixed and we cannot change the data distribution of the dataset, so the distribution of the parameters directly affects the output response of this network, and if the range of the region of the response is not in the expected interval, then the loss of the model is huge and hard to debug. e parameter-transformed output data should not appear in some uncommon zones, which will make it more difficult to fit the model and reduce its capacity. e conceptual analysis of video semantics has been a more active research direction in recent years due to the potential applications of an effective understanding of human behavior in video and its interactions in the environment in a variety of domains. To accomplish this challenging task, several research areas have worked on modeling multiple aspects of video semantics (emotions, relational attitudes, behaviors, etc). e other subset contains all the remaining video clips as the natural dataset to be recommended. In this context, understanding the underlying semantic concepts in videos becomes crucial in interpreting complex video events. In recent years, deep learning methods have played an important role and have been widely used in computer vision tasks such as image segmentation, detection, recognition, and retrieval. In the field of video semantic concept analysis, how to cross the "semantic gap" and establish a mapping relationship between underlying features and high-level semantics to extract abstract features that are closer to the high-level semantics of video has become a core problem for researchers to solve. Deep learning methods can learn multiple layers of feature hierarchies and automatically construct high-level representations of the original input, using large amounts of data to drive the training of the network and optimization of the model to extract more representative semantic features, thus improving the classification performance, making the limitations of traditional manually designed feature extraction methods largely avoided. Since the feature construction process is fully automated, they are more general. In our experiment, 10 video clips of each dance type are simulated to the video that the user has clicked on, and the final recommendation result is automatically obtained according to the degree of matching with the dance style excavated from the 10 input videos. Locality-sensitive discriminant analysis is a classical supervised dimensionality reduction algorithm that considers both the discriminant information in the data and the geometric structure of the data. By constructing intraclass and interclass graphs, the method can better characterize the original local features of the data manifold and preserve the original class labels of the data with good discriminability. e sparse constrained autoencoder enables the encoded learned feature representation to better obtain the sparse reconstruction relationship between data by introducing SPP-constructed graph constraints for the nonlinear autoencoder. 
This pretraining model not only effectively exploits the natural discriminative power of the sparse representation but also largely alleviates the difficulty of selecting nearest-neighbor parameters. In this framework, to exploit the structural information between images, we wish to obtain the manifold information of the previous layer (which can be the input or a pooling layer) by constructing locality and sparsity graphs and to use this manifold information to redesign the mapping relationships between adjacent layers. These graph construction methods make the learned features more stable and discriminative as the network depth increases, further speeding up convergence and improving the generalization of the model [18]. The objective function of the locality- and sparsity-preserving embedding convolutional neural network for adjacent layers consists of two components: the reconstruction error between feature maps of adjacent layers and a graph regularization term (a schematic form is sketched after this paragraph). After completing the establishment of the random-split-rule-based index, we input the automatically mined dance elements into each random tree in turn and implement a top-down matching process to find their nearest neighbors in the feature space, and we recommend dances in the natural dataset based on the cumulative ranking of the matches of such features, as shown in Figure 2. Depending on the method of selecting spatiotemporal interest points, the current mainstream methods can be divided into spatial-temporal interest point features and trajectory features. Feature detection of local spatial-temporal points usually selects spatial-temporal localization and scale by maximizing a specific saliency function, and different detectors usually differ significantly in the type and sparsity of the selected points. Feature descriptors capture shape and motion features in a neighborhood of the selected point of interest using metrics such as spatial or spatial-temporal image gradients or optical flow. Behavioral event interviews, conducted with both high-performing and average athletes in snow sports, reveal the knowledge, qualities, and abilities that snow sports athletes must have to achieve excellent athletic performance, and this is often implemented by interviewing only the research subjects themselves. However, due to the special nature of sports practice and the role of snow sports athletes, coaches and athletes must spend time together, not only training and competing together but also living together every day and "feeling and fighting" together for a few years or more than ten years; they are in constant contact with each other, and sports practice activities such as training and competing are done jointly by athletes and their coaches. Therefore, coaches may even know athletes' strengths and weaknesses better than the athletes themselves.
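The two-component objective named above can be written schematically as follows; this LaTeX formalization is hypothetical, with the symbols H^(l) (layer-l feature maps), W^(l) (convolution kernels), f (nonlinearity), L_G (Laplacian of the locality/sparsity graph), and λ (trade-off weight) introduced by us for illustration only.

```latex
% Hypothetical two-term objective: layer-to-layer reconstruction + graph regularization
\min_{W^{(l)}} \;
  \underbrace{\bigl\lVert H^{(l+1)} - f\bigl(W^{(l)} \ast H^{(l)}\bigr)\bigr\rVert_F^2}_{\text{reconstruction error between adjacent layers}}
  \;+\; \lambda \,
  \underbrace{\operatorname{tr}\!\bigl(H^{(l+1)}\, L_G \,(H^{(l+1)})^{\top}\bigr)}_{\text{graph regularization}}
```

The Laplacian trace penalty is the usual choice for such graph regularizers: it pulls the embeddings of graph-connected (nearest-neighbor or sparse-reconstruction-related) samples together, which is consistent with the stated goal of preserving the previous layer's manifold structure.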
Based on the relevant knowledge information obtained from the snow sport athletes themselves, behavioral event interviews with their coaches can not only provide a basis for the researcher to confirm the content elements of tacit knowledge but also reveal the differences in the cognitive aspects of the knowledge elements that have a direct impact on the athletes' performance in snow sports, thereby helping coaches to adopt targeted training methods and coaching strategies. The processing of video presents more challenges compared to still images; for example, temporal sequencing is important for behavior recognition in video, but how to reflect temporal information in the representation of behavior still needs further research, as do issues such as occlusion, background noise, and interclass differences; further improvements in both hand-designed features and deep learning features, as well as how to fuse multiple features to improve recognition rates, also require further research. Experimental Design for Video Content Analysis of Sports. The information in the two hidden layers can well contain the language-related identity information of the speech segment and reflect the nature of that speech segment; that is, it can be considered the language-related identity information of that speech segment. This representation of speech segments is more exploitable than the LSTM network model itself. In fact, in the traditional language recognition approach, the i-vector itself is also a language vector representation obtained by highly abstracting the high-level semantic information, which is very similar in nature to the embedding vector. Moreover, the i-vector approach assumes that the sample distribution of language recognition conforms to a Gaussian distribution, whereas LSTM networks make no assumptions of this type [19]. Therefore, if the i-vector, which reflects the nature of speech segments, can be replaced by the embedding vector, and the i-vector-based language recognition classification method can then be used for classification and scoring, it can theoretically achieve better results than the i-vector method (a minimal sketch of this idea follows below). The subset used for dance style mining consists of 10 video segments from each dance genre; another subset contains all remaining video segments as the natural dataset to be recommended. This unbalanced method of slicing the data exactly matches reality. We know that a user browsing videos on a website selectively clicks through only a small number of videos, while the amount of data to be recommended in the web space is huge. A video recommender should therefore be able to efficiently select videos relevant to the content that the user has clicked on from a large amount of distracting data. In our experiments, 10 video clips of each dance genre are treated as the videos clicked by the user, and the final recommendation results are automatically obtained based on the match with the dance styles mined from the 10 input videos.
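A minimal sketch of replacing the i-vector with a network embedding, as discussed above: frame-level hidden states are pooled into a fixed-length utterance embedding and scored against per-language references. Mean pooling and cosine scoring are illustrative assumptions (the paper's back-end is SVM-based), and all shapes and names are hypothetical.

```python
import numpy as np

def utterance_embedding(hidden_states):
    """Mean-pool frame-level LSTM hidden states (T x H) into a single
    fixed-length vector that plays the role the i-vector plays in the
    classical pipeline."""
    return hidden_states.mean(axis=0)

def cosine_scores(emb, lang_centroids):
    """Score an utterance embedding against per-language centroid
    embeddings (L x H) by cosine similarity; argmax = hypothesis."""
    emb = emb / np.linalg.norm(emb)
    c = lang_centroids / np.linalg.norm(lang_centroids, axis=1, keepdims=True)
    return c @ emb

# Toy usage: 200 frames of 64-dim hidden states, 10 candidate languages.
rng = np.random.default_rng(1)
h = rng.normal(size=(200, 64))
centroids = rng.normal(size=(10, 64))
scores = cosine_scores(utterance_embedding(h), centroids)
print(int(scores.argmax()))
```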
In the AP17-OLR dataset description, the dataset provider also points out that there are some differences between the training data and the test data used for the experiments; in the cases of Japanese, Korean, and Russian, the dataset directly gives the sampling environments of the training and test sets, both in quiet conditions and in speech segments mixed with noise. The provider of the dataset also points out that the sampling environments of Kazakh, Tibetan, and Uyghur are completely different from the situations of all other languages, and there are some differences between the training and test sets. DNN-like networks (including LSTMs) are more sensitive to such issues, so the impact of differences like these on the performance of language recognition models is obvious if there is no good method for channel compensation, or if existing channel compensation measures are not sufficient to solve the problem. In fact, in the models described in the previous sections of this paper, there is a significant degradation in the recognition accuracy of some of the languages; taking the LSTM-1-MFCC network as an example, the false rejection and false acceptance rates for each specific language in this network are shown in Table 1. The video semantic concept analysis task is richer and more complex than recognition tasks such as image classification, and complex situations such as background dynamic information interference, angle transformation, and target blocking can occur in different scenes. Although convolutional neural networks have achieved great success in image classification and recognition tasks, how to model the spatial-temporal features of videos and obtain the spatial-temporal information contained in videos is still one of the main problems that urgently need to be solved for video semantic concept analysis using deep learning methods. Many works have designed various effective deep convolutional neural networks for learning and extracting the static frame appearance information and motion timing information of videos, such as adding a temporal dimension to the 2D convolutional kernel of convolutional neural networks and expanding it to a 3D convolutional kernel to extract both spatial and temporal dimensional features. Considering the strong complementarity between spatial stream features and optical flow features, choosing a suitable fusion method can effectively improve video classification performance. As a result, a recognition rate of almost 100% is obtained. When the trajectory feature is used, the trajectory information of the limb movement is captured, which greatly enhances the expression of behavior. Especially when the MBH descriptor is used, the recognition rate of boxing and clapping is increased by more than 20%. The method first extracts video image frames to form image sequences and optical flow sequences, then extracts spatial stream features and optical flow features with a convolutional neural network, and introduces an optical flow attention layer from the temporal stream network to the spatial stream network by mining the nearest neighbor relationships and association information between features in the manifold-embedded spatial stream convolutional neural network, to guide the spatial stream to pay more attention to the human foreground region and reduce the influence of background noise. Thus, the variations and differences between spatial-temporal features are better obtained, as shown in Figure 3.
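One common way to realize the 2D-to-3D kernel expansion mentioned above is to "inflate" a pretrained 2D convolution along a new temporal axis. The sketch below is a minimal PyTorch illustration under assumed settings (replication of the kernel followed by renormalization, tuple-valued stride/padding); it is not the paper's code.

import torch
import torch.nn as nn

# Hedged sketch: expand a 2D convolution into a 3D one by repeating its
# kernel time_dim times along a new temporal axis and dividing by
# time_dim so activations keep roughly the same scale. Assumes the
# Conv2d was built with tuple kernel_size/stride/padding.
def inflate_conv2d(conv2d: nn.Conv2d, time_dim: int = 3) -> nn.Conv3d:
    conv3d = nn.Conv3d(
        conv2d.in_channels, conv2d.out_channels,
        kernel_size=(time_dim, *conv2d.kernel_size),
        stride=(1, *conv2d.stride),
        padding=(time_dim // 2, *conv2d.padding),
        bias=conv2d.bias is not None,
    )
    with torch.no_grad():
        w2d = conv2d.weight                                   # (out, in, kH, kW)
        w3d = w2d.unsqueeze(2).repeat(1, 1, time_dim, 1, 1) / time_dim
        conv3d.weight.copy_(w3d)                              # (out, in, kT, kH, kW)
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d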
The visual attention mechanism is a signal processing mechanism unique to the human brain: by observing the global sample it determines the focus area and the area of interest, so that the key information closely associated with the target is quickly accessed. The attention mechanism frees people from colorful and complicated information and improves the efficiency of information processing, and it has been introduced into the field of computer vision to help computers solve prediction and analysis tasks on images, videos, and other data. Provided that appropriate compensation is applied for the movement of the lens, optical flow can be used to direct attention to the human foreground. We investigate the combination of a spatial stream embedding CNN and a temporal stream CNN to form a dual-stream convolutional neural network to learn video features [20]. The purpose of introducing an optical flow attention layer from the temporal network to the spatial network is to guide the spatial stream to pay more attention to the human foreground region and to reduce the effect of background noise. Thus, the variations and differences between spatial-temporal features are better obtained and the rationality of the network's extraction of video features is improved. Attention is a mechanism used to give more weight to a subset of elements; the optical flow attention map is directed to foreground regions and helps the spatial stream convolutional network to learn distributed feature representations around these regions to accomplish the label prediction task. In dual-stream convolutional networks, we propose an optical flow attention layer to model the interaction of the two networks, which can be trained end-to-end using stochastic gradient descent and back-propagation algorithms. This improves the efficiency with which users obtain information. In addition, host-computer-based video semantic analysis technology can replace manual annotation work, saving substantial human resources and improving the utilization rate of information. Considering the perceptual field of spatial information, the range of neighboring points can be expanded. When building a graph structure, the most extreme case, where the current node can be associated with all other nodes on the graph, can be handled by the subsequent adoption of attention mechanisms or by controlling the amount of information passed. The inclusion of all nodes in the graph, with all nodes interconnected, yields a fully connected graph, constituting a complete graph that allows information about all locations to be perceived mutually. It allows each member to have a large enough perceptual field to recognize a larger range of spatial patterns. Performance Results of High-Level Semantic Recognition Models under Engineering Management. By changing the length of the input sequence from 5 to 10 frames, the accuracy of the model was improved by 0.9%, but as we continued to increase the length of the input sequence to 15 and 20 frames, the accuracy of the model started to decrease. The reason for this phenomenon is that the size of the video dataset is relatively small and overfitting occurs when the input sequence is too long. Since each RGB image frame corresponds to 10 adjacent frames of the stacked optical flow image, the 10-frame input contains 100 consecutive frames of spatial-temporal information in the video clip, which is sufficient to represent the main semantic information of the video clip.
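A minimal sketch of such an optical-flow attention layer is given below: the temporal (flow) stream's feature map is reduced to a single-channel attention map that re-weights the spatial stream's features toward foreground regions. The 1x1 convolution, the residual form, and the channel sizes are illustrative assumptions, not the paper's exact design.

import torch
import torch.nn as nn

# Hedged sketch of an optical-flow attention layer: the flow stream
# produces a spatial attention map in (0, 1) that re-weights the spatial
# stream's feature map. The residual form means the spatial stream is
# guided rather than replaced.
class FlowAttention(nn.Module):
    def __init__(self, flow_channels: int):
        super().__init__()
        self.to_map = nn.Conv2d(flow_channels, 1, kernel_size=1)

    def forward(self, spatial_feat, flow_feat):
        attn = torch.sigmoid(self.to_map(flow_feat))  # (N, 1, H, W)
        return spatial_feat * (1.0 + attn)            # broadcast over channels

Because the layer is an ordinary module, both streams and the attention map can be trained end-to-end with stochastic gradient descent, as the text describes.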
After the selection of the best input sequence is completed, the two-stream network manifold embedding parameter and the confidence fusion parameter are set. The experiments first conduct a grid search over the two parameters, and the best parameter for the manifold embedding is obtained as 0.2. After the manifold embedding parameter is fixed, an experimental analysis of the effect of changes in the confidence fusion parameter on model performance is conducted, and the semantic concept detection accuracy based on different confidence fusion parameters is shown in Figure 4.

Table 1: Network specific to each campaign.

Name  | Ct-cn | Id-id | Kazak | Ko-kr | Tibet
Ct-cn | 1233  | 11    | 0     | 23    | 0
Id-id | 11    | 1234  | 11    | 0     | 0
Kazak | 23    | 11    | 1234  | 11    | 23
Ko-kr | 0     | 0     | 11    | 1234  | 11
Tibet | 35    | 0     | 223   | 11    | 1234

There is difficulty in designing an efficient and robust identification method. Although the traditional manual labeling method can realize the understanding and description of video semantic concepts to a certain extent, the time and labor cost of manual labeling is huge, and the subjectivity is strong. The vertical coordinate in the Cartesian coordinate system indicates the corresponding video semantic detection accuracy at different confidence parameters. The video semantic concept detection accuracy keeps improving when the values are taken in the interval [0.1, 0.7], which proves that the confidence of the classifier based on the probability error makes an important contribution to the final category prediction. The model prediction performance is best when the parameter is 0.7, so this chapter chooses 0.7 as the confidence parameter of the dual-stream network classifier. The performance of the feature-engineering-based IDT algorithm remains competitive; in addition, many deep-learning-based methods have been combined with IDT to achieve better results, but the video semantic analysis methods with the best model performance are deep convolutional network-based algorithms, and C3D methods do not have an advantage over traditional methods due to their many model parameters and more difficult training. The basic dual-stream network model has achieved good results by emulating the human visual mechanism and has a better understanding of the spatial and temporal information of the video. The TSN method is built on the dual-stream network model; it can learn video features efficiently by modeling long time scales and combining sparse sampling strategies and video supervision methods, and it has achieved good results. The proposed method in this paper has 0.4% higher accuracy than TSN. This shows that the proposed method can better reflect the nearest neighbor relationships between samples and structural features, as well as the complementary relationship between images and optical flow, and that the confidence fusion classification method can effectively obtain video semantic concept features and improve the accuracy of video semantic concept detection, as shown in Figure 5. The research has important academic significance and application value and helps to improve the level of video management technology, making it more complete and more efficient.
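The confidence-fusion step described above can be illustrated with a small grid search. In the sketch below the fused score is an assumed convex combination of the two streams' softmax outputs, and the search range mirrors the [0.1, 0.7] interval discussed in the text; this is an illustration, not the paper's implementation.

import numpy as np

# Hedged sketch: fuse per-class softmax scores from the spatial and
# temporal streams with weight w, then grid-search w against validation
# accuracy. Inputs are assumed arrays: (num_samples, num_classes) scores
# and integer labels.
def fuse(spatial_scores, temporal_scores, w):
    return w * spatial_scores + (1.0 - w) * temporal_scores

def grid_search_fusion(spatial_scores, temporal_scores, labels):
    best_w, best_acc = None, -1.0
    for w in np.arange(0.1, 0.8, 0.1):   # mirrors the [0.1, 0.7] interval
        preds = fuse(spatial_scores, temporal_scores, w).argmax(axis=1)
        acc = float((preds == labels).mean())
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc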
In the process of optimizing the learning of video features, the nearest neighbor relationships between samples, association features, and so forth are considered to construct manifold constraint terms; an optical flow attention mechanism was introduced to guide the spatial stream to pay more attention to the foreground region and reduce the influence of background noise, so as to better obtain the changes and differences between spatial-temporal features; and, for acquiring the contextual information of video frame sequences, an LSTM was introduced, yielding a manifold-embedding and optical-flow-attention-based dual-stream CNN model for video semantic concept detection. The proposed method can better reflect the nearest neighbor relationships and structural features between samples, as well as the complementary relationship between images and optical flow, so as to obtain effective video semantic concept features, and the confidence fusion classification method applied to the category scores of the two streams' SoftMax layers can more effectively improve the accuracy of video semantic concept detection. Experimental Results of Sports Video Content Analysis. As shown in Figure 6, the classification accuracies achieved by different coding and normalization methods are compared using spatial-temporal interest point features with the number of topics varying between 10 and 100. As the number of topics increases, all coding methods achieve significant performance gains, but after the number of topics reaches 60, the performance does not change much. The difference between the results obtained when using vector quantization and local soft assignment is small, and the different normalization methods make a limited contribution to the recognition rate, with exponential plus L1 normalization achieving the best classification accuracy for most numbers of topics. The performance mostly improved as the number of topics increased; the best performance was obtained when the number of topics reached 80 and then decreased. For soft assignment coding, there was a more significant decrease in classification performance compared to the results for spatial-temporal interest points, and the performance fluctuated by a maximum of more than 15 percentage points under different normalization methods. For both classes of descriptors, soft assignment coding tended to achieve optimal performance in combination with L1 normalization. Group behavior recognition technology can derive tags for crowd scene images and can provide clues for the retrieval of pictures and videos of group scenes. The advancement of group behavior recognition technology has greatly promoted the classification, labeling, and retrieval of crowd scenes. In Figure 7, the confusion matrix under different features is presented, and it is evident that walking and waving have the highest recognition rates among all cases and, correspondingly, boxing and clapping have the lowest. This is in line with expectation: in terms of form movements, boxing and clapping focus on upper limb movements and have a high degree of similarity, while walking and waving, which are more differentiated from the rest of the behaviors, obtain a recognition rate of nearly 100%.
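Returning to the "exponential plus L1" normalization compared above: it can be read as power (exponential) normalization of the coded histogram followed by L1 normalization. The sketch below is a minimal illustration; the exponent value is an assumption, not a setting taken from the experiments.

import numpy as np

# Hedged sketch of exponential (power) normalization followed by L1
# normalization of a histogram of topic/codeword assignments. The
# exponent alpha = 0.5 is an assumed value for illustration.
def exp_l1_normalize(hist: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    powered = np.sign(hist) * np.abs(hist) ** alpha   # dampen dominant bins
    denom = np.sum(np.abs(powered)) + 1e-12           # avoid divide-by-zero
    return powered / denom

Power normalization damps the largest histogram bins, which is one plausible reason this combination helps soft-assignment coding, whose responses tend to concentrate on a few codewords.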
When trajectory features are used, the trajectory information of the limb movements is captured, which substantially enhances the representation of the behaviors; especially when MBH descriptors are used, the recognition rate of boxing and clapping increases by more than 20 percentage points at maximum. We obtained a classification accuracy of 89.63% using spatial-temporal interest points, which represents a 6-percentage-point improvement. It is reasonable to assume that similar behaviors have similar characteristics and topic distributions. Describing behaviors with mixed topic probability distributions is superior to the approach of mapping one topic to one class of behaviors. One advantage of the topic model is that topics can be considered a mid-level semantic feature and then used to describe more complex behaviors. Inevitably, there are similar form movements in different behaviors; for example, boxing and clapping both involve similar upper body movements. Thus, different behaviors share the same topics, and each behavior has its own distribution over topics, which enhances the discriminative nature of the features. Overall, principal component analysis preprocessing of raw features not only reduces the feature dimensionality, making it less demanding on computational resources, but also retains most of the discriminative primary information while suppressing noise from various sources; whitening was also performed in the experiments to reduce the correlation between features, further improving the robustness of recognition. Recognition is complicated by differences in the performance of the same action by different people; by environmental factors such as occlusion, viewing angle changes, lighting differences, and dynamic background interference; and by the blurred start and end points of actions. The use of principal component analysis to preprocess raw features has an important impact on improving recognition performance. Principal component analysis projects the original features onto the principal components, which objectively suppresses noise to a certain extent but, at the same time, inevitably brings some loss of information. These two effects trade off against each other: if the noise component is large, the benefit achieved by suppressing noise is large, which brings an increase in the recognition rate; if the information loss effect dominates, the corresponding performance decreases. On the other hand, while densely sampled features perform well, the number of features to be processed keeps increasing, and video signals in particular are computationally intensive. If PCA is used to preprocess the original features, the number of feature dimensions is significantly reduced while most of the information is retained, resulting in little degradation in classification performance; this greatly reduces the computational effort and improves the response speed, which is significant for applications that require real-time signal processing. Conclusion. Better performance has been achieved after the introduction of these techniques into the field of computer vision. In the bag-of-words framework, it has been shown that different feature encoding methods have an important impact on performance.
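The PCA-plus-whitening preprocessing discussed above can be illustrated in a few lines with scikit-learn; the 95% retained-variance setting below is an assumed choice for illustration, not a value from the experiments.

import numpy as np
from sklearn.decomposition import PCA

# Hedged sketch: PCA with whitening reduces feature dimensionality,
# keeps most of the variance, and decorrelates the retained components,
# matching the preprocessing described in the text.
def pca_whiten(features: np.ndarray) -> np.ndarray:
    # features: (num_samples, num_dims); 0.95 = assumed variance target
    pca = PCA(n_components=0.95, whiten=True)
    return pca.fit_transform(features)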
Inspired by this, we focused on the impact of different coding methods combined with normalization methods on the classification performance of probabilistic latent semantic analysis models, and found experimentally that local soft assignment coding combined with exponential normalization substantially improves recognition performance. The impact of principal component analysis preprocessing of raw features on performance was also examined: the computational effort is significantly reduced, and when the features contain more noisy components the classification recognition performance is even improved. However, the performance improvement of the fusion model for the language recognition model is limited. In addition, the idea of this paper is still rooted in the traditional pattern recognition task flow of a feature extraction-classification recognition model, and the two separated steps may also affect the performance of the model to some extent. Therefore, a language recognition model based on an end-to-end approach is a very promising direction. Then the spatial stream features and optical flow features are extracted by the convolutional neural network; the nearest neighbor relationships and association information between features are mined by the manifold embedding in the spatial stream convolutional neural network; and the optical flow attention layer from the temporal stream network to the spatial stream network is introduced to guide the spatial stream to pay more attention to the human foreground region and reduce the influence of background noise, so that the variations and differences between spatial-temporal features can be better obtained. Then the features obtained from the two streams are input in temporal order to learn temporal features, and, finally, confidence fusion is performed on the classifier results of the two streams to detect the video semantic concept categories. Data Availability. The data used to support the findings of this study are available upon request to the author. Conflicts of Interest. The author declares that there are no known conflicts of financial interest or personal relationships that could have appeared to influence the work reported in this paper.
2022-01-14T16:17:31.567Z
2022-01-12T00:00:00.000
{ "year": 2022, "sha1": "8303319091fe82b66cb4f0e57c508478ab45a3e8", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/cin/2022/6761857.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4e446fc726e91c0d3ccfd85c8189cb832344bd7a", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
56475635
pes2o/s2orc
v3-fos-license
Chaotic motion in the Johannsen-Psaltis spacetime The Johannsen-Psaltis spacetime is a perturbation of the Kerr spacetime designed to avoid pathologies like naked singularities and closed timelike curves. This spacetime depends not only on the mass and the spin of the compact object, but also on extra parameters, making the spacetime deviate from Kerr; in this work we consider only the lowest order physically meaningful extra parameter. We use numerical examples to show that geodesic motion in this spacetime can exhibit chaotic behavior. We study the corresponding phase space by using Poincaré sections and rotation numbers to show chaotic behavior, and we use Lyapunov exponents to directly estimate the sensitivity to initial conditions for chaotic orbits. INTRODUCTION We study the geodesic motion in a family of spacetimes constructed by Johannsen and Psaltis (2011). The corresponding metric is characterized by an infinite number of parameters, i.e. the mass $M$, the spin $a$ and a series of deviation parameters $\epsilon_k$, where $k \in \mathbb{N}_0$. However, in this work we constrain ourselves to the lowest order of the unconstrained parameters, which is $\epsilon_3$. The Johannsen-Psaltis (JP) metric was designed to be a perturbation of the Kerr spacetime, which is of great astrophysical interest. The so-called no-hair theorem (see, e.g., Carter, 1971) states that the class of uncharged black-hole exterior solutions which are axisymmetric and do not violate causality (i.e. no closed timelike curves) consists of a discrete set of continuous families, each depending on at least one and at most two independent parameters. No other externally observable parameters are required for this description. Typically, the Kerr spacetime is assumed to describe a black hole (Rico, 2013). Kerr black holes are parametrized by their mass $M$ and their angular momentum $a$. However, there is yet to be a proof that black holes are indeed described by the Kerr paradigm. Therefore, it would be of great astrophysical interest to test this conjecture by observing black hole candidates through electromagnetic and gravitational wave signals. The Kerr spacetime is axisymmetric and stationary, but one special feature of this spacetime is that it has an extra "hidden" symmetry that makes geodesic motion in such a background correspond to an integrable system (Carter, 1968). There are spacetimes that deviate from Kerr by a deformation parameter; these spacetimes are called in the bibliography non-Kerr spacetimes (see, e.g., Bambi, 2017). These non-Kerr spacetimes do not usually possess the symmetry that the Kerr spacetime does, making geodesic motion correspond to a non-integrable system. As a result, geodesic motion in such spacetimes exhibits chaotic behavior, which is the topic of our study. The organization of the article is as follows: in section 2 we describe the basics of geodesic motion, deterministic chaos in dynamical systems and some of the properties of the JP spacetime. In section 3 we use numerical examples to show that the JP metric does not correspond to an integrable system. Section 4 summarizes our main findings. Note that geometric units are employed throughout the article, $G = c = 1$. Greek letters denote the indices corresponding to spacetime and the metric signature is $(-, +, +, +)$.
GEODESIC MOTION AND CHAOS The line element of a rapidly spinning black hole introduced in Johannsen and Psaltis (2011) reads in Boyer-Lindquist-like coordinates

$ds^2 = g_{tt}\,dt^2 + g_{rr}\,dr^2 + g_{\theta\theta}\,d\theta^2 + g_{\phi\phi}\,d\phi^2 + 2 g_{t\phi}\,dt\,d\phi$,

where the metric components $g_{\mu\nu}$ are those given in Johannsen and Psaltis (2011), and the metric function $\Sigma = r^2 + a^2 \cos^2\theta$. The function $h(r, \theta)$ is what causes the deviation from the Kerr metric; namely, setting $\epsilon_k = 0$ for all $k \in \mathbb{N}_0$ gives the Kerr metric. The parameters $(\epsilon_k)_{k=0}^{\infty}$ are, however, constrained. As explained in detail in Johannsen and Psaltis (2011), we have to set $\epsilon_0 = \epsilon_1 = 0$, and the parameter $\epsilon_2$ is constrained by observational constraints on weak-field deviations from general relativity (Johannsen and Psaltis, 2011), i.e. $|\epsilon_2| \leq 4.6 \cdot 10^{-4}$. We therefore set $\epsilon_2 = 0$ as well and limit ourselves to the lowest order remaining parameter, which is $\epsilon_3$, setting all the higher order parameters $\epsilon_k = 0$ for all $k \geq 4$. The proper time $\tau$, defined as $d\tau^2 = -g_{\mu\nu}\,dx^\mu dx^\nu$, is employed as the evolution parameter. The geodesic motion of a free particle of rest mass $m$ is then generated by the Lagrangian (see, e.g., Rindler, 2006)

$L = \frac{m}{2}\, g_{\mu\nu}\, \dot{x}^\mu \dot{x}^\nu$,

where a dot denotes a derivative with respect to the proper time. Due to the preservation of the four-velocity $g_{\mu\nu}\dot{x}^\mu\dot{x}^\nu = -1$ along a geodesic orbit, $L = -m/2$ is a constant. The corresponding canonical momenta are

$p_\mu = \frac{\partial L}{\partial \dot{x}^\mu} = m\, g_{\mu\nu}\, \dot{x}^\nu$,

and performing the Legendre transform gives the Hamiltonian

$H = \frac{1}{2m}\, g^{\mu\nu}\, p_\mu p_\nu$.

The JP metric functions are independent of the coordinates $t$ and $\phi$, i.e. the spacetime is stationary and axisymmetric; therefore the energy $E := -p_t$ and the component of the angular momentum $L_z := p_\phi$ are integrals of motion. This allows us to restrict our study to the meridian plane generated by the polar-like coordinates $(r, \theta)$ and move to a simpler system of two degrees of freedom. Namely, one has merely to replace

$\dot{t} = \frac{E\, g_{\phi\phi} + L_z\, g_{t\phi}}{g_{t\phi}^2 - g_{tt}\, g_{\phi\phi}}, \qquad \dot{\phi} = -\frac{E\, g_{t\phi} + L_z\, g_{tt}}{g_{t\phi}^2 - g_{tt}\, g_{\phi\phi}}$

in the equations of motion to reduce the system. The motion in the resulting reduced system is characterized by the Newtonian-like two-dimensional effective potential (for $m = 1$)

$V_{\mathrm{eff}}(r, \theta) = \frac{1}{2}\left(g^{tt} E^2 - 2 g^{t\phi} E L_z + g^{\phi\phi} L_z^2 + 1\right)$.

For $p_\theta = p_r = 0$ the roots of this effective potential, $V_{\mathrm{eff}} = 0$, form a curve in the meridian plane, which is called the curve of zero velocity (CZV). In the Kerr case, an extra "hidden symmetry" exists, giving rise to the Carter constant $K$ (Carter, 1968). This constant, along with $E$, $L_z$ and $H$, forms a set of integrals that are independent and in involution; therefore geodesic motion in the Kerr spacetime background corresponds to an integrable system, and trajectories of the reduced system lie on a family of two-dimensional invariant tori. These orbits oscillate in both degrees of freedom with their respective characteristic frequencies $\omega_r$ and $\omega_\theta$; their ratio $\omega = \omega_r/\omega_\theta$ is called the rotation number and it is useful for the classification of orbits. If $\omega$ is rational, the torus is called resonant and it hosts an infinite number of periodic orbits. If $\omega$ is irrational, the motion is called quasiperiodic and each orbit on the torus covers it densely. When a perturbation is applied to such an integrable system, all the resonant tori are destroyed. According to the KAM theorem (Meiss, 1992), however, most of the non-resonant tori survive in the perturbed system for small perturbations; these are called KAM tori. According to the Poincaré-Birkhoff theorem (Lichtenberg and Lieberman, 1992), where there was a resonant torus, an even number of periodic trajectories survives in the perturbed system, half of them stable and half unstable. We use a Poincaré surface of section to display the phase space structure of the system.
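As a numerical illustration of the CZV just defined, the sketch below evaluates $V_{\mathrm{eff}}$ on an $(r, \theta)$ grid and marks the allowed region $V_{\mathrm{eff}} < 0$. For brevity it uses the standard Kerr inverse-metric components as a stand-in (the JP metric reduces to Kerr for $\epsilon_3 = 0$); the orbital parameters E, Lz, and a are illustrative assumptions, and geometric units with M = 1 are used as in the text.

import numpy as np

# Hedged sketch: Kerr inverse-metric components in Boyer-Lindquist
# coordinates as a stand-in for the JP components (epsilon_3 = 0 case).
def inverse_metric_kerr(r, th, a, M=1.0):
    Sigma = r**2 + (a * np.cos(th))**2
    Delta = r**2 - 2.0 * M * r + a**2
    A = (r**2 + a**2)**2 - a**2 * Delta * np.sin(th)**2
    g_tt = -A / (Sigma * Delta)
    g_tph = -2.0 * M * a * r / (Sigma * Delta)
    g_phph = (Delta - a**2 * np.sin(th)**2) / (Sigma * Delta * np.sin(th)**2)
    return g_tt, g_tph, g_phph

def v_eff(r, th, E, Lz, a):
    g_tt, g_tph, g_phph = inverse_metric_kerr(r, th, a)
    return 0.5 * (g_tt * E**2 - 2.0 * g_tph * E * Lz + g_phph * Lz**2 + 1.0)

# Orbits are allowed where V_eff < 0; the CZV is the contour V_eff = 0.
r, th = np.meshgrid(np.linspace(2.5, 30.0, 400),
                    np.linspace(0.1, np.pi - 0.1, 400))
allowed = v_eff(r, th, E=0.95, Lz=3.0, a=0.5) < 0.0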
We define a surface in the phase space and plot the intersections of the orbits with this surface. Invariant tori correspond to circles in the surface of section. These form the main island of stability around a stable fixed point in the center. Near the now destroyed resonant tori, quite a different structure arises. Around the stable periodic points (corresponding to surviving stable periodic orbits), smaller islands of stability arise, forming together with the unstable points (corresponding to surviving unstable periodic orbits) Birkhoff chains. These unstable periodic points lie between the aforementioned islands of stability. From the unstable points emanate asymptotic manifolds; there are stable and unstable branches. The branches of the same type cannot cross each other, which results in very complicated structures in the phase space. These complicated structures are the driving engines of deterministic chaos. An effective tool to analyze types of motion on a Poincaré section of a nonintegrable system of two degrees of freedom is the angular moment $\nu_\vartheta$, known in the literature as the rotation number (see, e.g., Voglis and Efthymiopoulos, 1998; Voglis et al., 1999). We denote the central fixed point of the main island of stability by $u_c$ and the $n$-th crossing of the surface of section by the orbit by $u_n$. We define the rotation angles and the angular moment as

$\vartheta_n = \mathrm{angle}(u_{n+1} - u_c,\; u_n - u_c), \qquad \nu_\vartheta = \lim_{N \to \infty} \frac{1}{2\pi N} \sum_{n=1}^{N} \vartheta_n$.

The dependence of this angular moment on the distance of the initial condition from the central fixed point is called the rotation curve. In an integrable system, such as the Kerr spacetime, the rotation curve is strictly monotonous, but in a nonintegrable system it has non-monotonic variations when passing through chaotic zones, and plateaus when passing through islands of stability. In order to quantify sensitivity to initial conditions, which is a property of chaotic systems by definition (Devaney, 1989), it is useful to define the deviation vector $\xi^\mu$ as a point of the tangent bundle of the phase space and interpret it as connecting two infinitesimally close trajectories. This vector evolves through the geodesic deviation equation

$\frac{D^2 \xi^\mu}{d\tau^2} = -R^{\mu}{}_{\nu\kappa\lambda}\, \dot{x}^\nu\, \xi^\kappa\, \dot{x}^\lambda$.

As a measure of the deviation vector in a curved spacetime (see, e.g., Lukes-Gerakopoulos, 2014) we use the norm

$\Xi(\tau) = \sqrt{\left| g_{\mu\nu}\, \xi^\mu \xi^\nu \right|}$.

Typically, the deviation vector follows one of two behaviors: a linear one for regular trajectories and an exponential one for chaotic trajectories. These behaviors can be detected by the maximal Lyapunov characteristic exponent (mLCE)

$\mathrm{mLCE} = \lim_{\tau \to \infty} \frac{1}{\tau} \ln \frac{\Xi(\tau)}{\Xi(0)}$,

which gives the inverse of a characteristic deviation time scale for chaotic trajectories. In the case of regular trajectories, it behaves as $\sim \tau^{-1}$ for large $\tau$, so in a plot in logarithmic scale it appears as a line of slope -1. In the numerical examples that follow, the equatorial plane $\theta = \pi/2$ with $\dot{\theta} > 0$ is taken as the surface of section. We notice no difference from an integrable system, as the chaotic behavior is not prominent at this broad scale depiction. This difference becomes, however, clearly visible in the top panel of Fig. 2, which focuses on the left tip of the main island of stability shown in the right panel of Fig. 1. In particular, in the top panel of Fig. 2, alongside KAM curves, appear islands of stability belonging to Birkhoff chains (ellipsoid-like structures) and chaotic zones (scattered points). Under the panel containing this detail of the surface of section, the corresponding rotation curve is plotted. The rotation curve exhibits non-monotonic variations in a chaotic zone and plateaus (denoted by the corresponding fraction) along islands of stability. Thus, Fig. 2 indicates that the JP spacetime corresponds to a non-integrable system.
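The two indicators just defined can be estimated from numerical data as in the sketch below; the input arrays (section crossings, deviation-vector norms, and sample times) are assumed to come from a separate geodesic integrator, which is not shown.

import numpy as np

# Hedged sketch. `points` holds successive crossings u_n of an orbit
# with the surface of section (rows = (r, p_r) pairs, say) and `center`
# the central fixed point u_c; both are illustrative inputs.
def rotation_number(points: np.ndarray, center: np.ndarray) -> float:
    rel = points - center                       # vectors u_n - u_c
    ang = np.arctan2(rel[:, 1], rel[:, 0])      # polar angles of crossings
    dtheta = np.mod(np.diff(ang), 2.0 * np.pi)  # rotation angles in [0, 2*pi)
    return float(dtheta.mean()) / (2.0 * np.pi)

# Finite-time mLCE estimate from the norm Xi(tau) of an integrated
# deviation vector, sampled at times tau[i] > 0; chaotic orbits converge
# to a positive value, regular ones decay like 1/tau.
def mlce(tau: np.ndarray, xi: np.ndarray) -> np.ndarray:
    return np.log(xi / xi[0]) / tau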
To directly estimate the sensitivity to initial conditions, we have calculated the mLCE. Fig. 3 shows the convergence of the mLCE for one regular orbit (left panel) and one chaotic orbit (right panel). For the regular orbit the mLCE convergence indeed follows the -1 slope, while for the chaotic orbit the mLCE converges to a positive value. CONCLUSION We have shown by numerical examples that geodesic motion in the JP spacetime background corresponds to a non-integrable system, since chaos was detected. The astrophysical implication is that if the spacetime around black holes is not described by the Kerr metric, then one should expect imprints of chaos in electromagnetic and gravitational wave signals coming from systems like extreme mass ratio inspirals.
2017-11-08T14:26:24.000Z
2017-11-07T00:00:00.000
{ "year": 2017, "sha1": "62e04ce8949bb12d8a9f08b26add3ca8ac099e50", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "62e04ce8949bb12d8a9f08b26add3ca8ac099e50", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
4567584
pes2o/s2orc
v3-fos-license
Treating EGFR mutation resistance in non-small cell lung cancer – role of osimertinib The discovery of mutations in EGFR significantly changed the treatment paradigm of patients with EGFR-mutant non-small cell lung cancer (NSCLC), a particular group of patients with different clinical characteristics and outcome to EGFR-wild-type patients. In these patients, the treatment of choice as first-line therapy is first- or second-generation EGFR-tyrosine kinase inhibitors (EGFR-TKIs), such as gefitinib, erlotinib, or afatinib. Inevitably, after the initial response, all patients become refractory to these drugs. The most common mechanism of acquired resistance to EGFR-TKIs is the development of a second mutation in exon 20 of EGFR (T790M). Osimertinib is a third-generation EGFR-TKI designed to overcome T790M-mediated resistance. Based on the efficacy and tolerability results of Phase II and Phase III studies, osimertinib has been approved for treatment of advanced EGFR T790M+ NSCLC following progression on a prior EGFR-TKI. Occurrence of acquired resistance to osimertinib represents an urgent need for additional strategies, including combination with other agents, such as other targeted therapies or checkpoint inhibitors, or development of new and more potent compounds. Introduction Lung cancer is the second most commonly diagnosed cancer and the main cause of cancer-related mortality in both men and women. Non-small cell lung cancer (NSCLC) represents ~85% of lung cancer cases and it presents as metastatic disease in over half of all cases. In the last few years, treatment of NSCLC has radically changed after the discovery that inhibition by targeted agents of molecular drivers, such as EGFR, could be effective in reducing tumor burden. The prevalence of EGFR mutations in adenocarcinoma is 10% in Western and up to 50% in Asian patients. It is well known that EGFR mutations are more frequently observed in Asian than in Caucasian patients, in women, in never smokers, and mainly in adenocarcinomas, with deletion in exon 19 or point mutation in exon 21 (L858R) as the most common (>90%) types. Nine randomized Phase III clinical trials (OPTIMAL, First Signal, IPASS, WJTOG 3405, NEJSG 002, EURTAC, ENSURE, LUX-3, LUX-6) demonstrated that, in patients harboring classical EGFR mutations, EGFR-tyrosine kinase inhibitors (EGFR-TKIs) such as erlotinib, gefitinib, or afatinib are superior to standard platinum-based chemotherapy in terms of response rate, progression-free survival (PFS), toxicity profile, and quality of life (Table 1). In up to 60%-80% of patients treated with an EGFR-TKI there is a meaningful tumor regression, but inevitably, after a median time of 9-12 months, all patients develop acquired resistance and become refractory. [1][2][3][4][5] Among the different mechanisms of acquired resistance, a secondary mutation, T790M, in exon 20 of the EGFR gene is the most frequent event, occurring in ~50%-60% of cases. At the present time, only one agent has been US Food and Drug Administration (FDA) approved for treatment of EGFR T790M+ patients. Phase II studies and more recently a large Phase III trial demonstrated that osimertinib (Tagrisso, AstraZeneca, London, UK) is active in EGFR-TKI-pretreated, EGFR T790M+ patients, representing today the best option in the acquired resistance setting. [5][6][7][8] Resistance to EGFR-TKIs According to Jackman's criteria, resistant patients should have the following features: 1. Previously received treatment with a single-agent EGFR-TKI; 2.
Either or both of the following elements: a tumor harboring an EGFR mutation known to be associated with drug sensitivity (ie, G719X, exon 19 deletion, L858R, L861Q) or objective clinical benefit from treatment with an EGFR-TKI (documented partial or complete response [CR] according to RECIST or WHO criteria) or significant and durable (≥6 months) clinical benefit (stable disease [SD] as defined by RECIST or WHO) after initiation of an EGFR-TKI. The following criteria are additional: systemic progression while on continuous treatment with an EGFR-TKI within the last 30 days and no intervening systemic therapy between cessation of EGFR-TKIs and new therapy. 9 Acquired resistance to EGFR-TKIs can be target dependent, if it is characterized by the development of a second mutation in the EGFR sequence, or target independent, if it is a consequence of the activation of alternative pathways. 4 The most frequent mechanism of acquired resistance (up to 60% of cases) is target dependent and consists of the emergence of the T790M mutation, a characteristic point mutation in exon 20 of the EGFR gene. Target-independent mechanisms include MET amplification (4%), human EGFR type 2 (HER2) amplification (8%-13%), PIK3CA mutation (2%), BRAF mutation (1%), histological transformation from NSCLC to SCLC (6%), or epithelial-mesenchymal transition (1%-2%). 2,4 In 18% of the cases, the mechanism of acquired resistance is unknown (Figure 1). Histological and biological review of tissue samples, taken after the development of acquired resistance, demonstrated that, in some cases, these mechanisms overlap and are not mutually exclusive. 10 The complexity of resistance mechanisms highlights the importance of repeating a tumor biopsy at the time of disease progression. Moreover, the availability of new agents specifically effective only in the presence of the EGFR T790M mutation explains why tumor re-biopsy is now entering clinical practice. Unfortunately, in lung cancer patients, repeating a tumor biopsy is not feasible in the majority of cases, mainly because of the risk related to a new biopsy in difficult-to-access disease or patient refusal. Therefore, during the last few years, much interest has grown around the possibility of assessing the mutational status on circulating tumor DNA (ctDNA). The so-called "liquid biopsy," which involves isolating ctDNA in plasma or other biologic fluids, including urine, presents several advantages, including the fact that it is easy to perform, rapid, and repeatable, overcoming the problem of tumor heterogeneity. 11,12 The only relevant limitation is represented by the relatively low sensitivity (60%-80%). Sensitivity is also influenced by the type of mutation and tumor burden, with patients with low tumor burden at a high risk of a false-negative result. Therefore, in clinical practice, liquid biopsy is now recommended as the first test to offer to the patient, with tumor biopsy recommended only in the case of a negative result (Figure 2). 13 Pharmacodynamics The T790M mutation consists of the substitution of threonine at the "gatekeeper" amino acid 790 by methionine. This mutation makes the receptor refractory to inhibition by reversible EGFR-TKIs through both steric hindrance and increased affinity for ATP (its natural substrate). Osimertinib is an oral, irreversible, third-generation TKI targeting T790M and EGFR-TKI-sensitizing mutations while sparing the activity of wild-type EGFR.
First-generation reversible TKIs (erlotinib and gefitinib) are ineffective at targeting T790M, while they strongly inhibit wild-type EGFR cell lines with similar potency to sensitizing mutant EGFR. Second-generation irreversible TKIs (afatinib and dacomitinib) show activity against T790M in vitro, but the concentrations required to overcome T790M activity preclinically are not achievable in humans due to non-selective inhibition of wild-type EGFR, which is associated with significant toxicity. 14 The good safety profile and tolerability of osimertinib are related to its selective inhibition of T790M and EGFR-sensitizing mutations: osimertinib is less potent at inhibiting phosphorylation of EGFR in wild-type cell lines and is accordingly associated with lower skin and gastrointestinal toxicity. Clinical trials AURA, 5 a Phase I study, assessed the safety, tolerability, and efficacy of osimertinib. Eligible patients had locally advanced or metastatic NSCLC, had a known EGFR-TKI-sensitizing mutation or prior clinical benefit from treatment with an EGFR-TKI, and had radiologically documented disease progression while receiving such treatment. This study included dose-escalation and dose-expansion cohorts. In the dose-escalation cohorts, patients received a single dose of osimertinib. The first dose tested was 20 mg daily. Each subsequent dose represented a 100% increase from the previous dose, with the exception of the final dose escalation, which was from 160 mg once daily to 240 mg once daily. In the dose-escalation cohorts, pretreatment EGFR T790M testing was optional, while in the dose-expansion cohorts a new tumor biopsy was required after disease progression on the most recent regimen. Testing for EGFR T790M was performed in a central laboratory or in a local laboratory followed by confirmation in a central laboratory. The objective response rate (ORR) for the entire population was 51% and the disease control rate (DCR) (CR plus partial response plus SD) was 84%. In patients harboring the T790M mutation, the ORR was 61% and the DCR was 95%. There was activity in the T790M-negative patients, with an ORR of 21% and a DCR of 61%. In EGFR T790M+ patients the median PFS was 9.6 months, and in EGFR T790M- patients it was 2.8 months. The most common adverse events (AEs) were diarrhea (47%), rash (40%), nausea (22%), and anorexia (21%). Grade 3 or higher AEs were observed in 32% of patients, with AEs leading to dose reduction in 7% of patients and AEs leading to drug discontinuation in 6% of patients. 5,15 The optimal dose to obtain the best efficacy with the lowest risk of toxicity is 80 mg once daily. AURA ex, 6 a Phase II extension cohort of the Phase I trial, evaluated the efficacy, tolerability, and safety of osimertinib at a dose of 80 mg once daily in EGFR T790M+ patients progressing after EGFR-TKI treatment. Similar to the results from the Phase I trial, the ORR was 61% with a DCR of 91%. Osimertinib was well tolerated, with drug-related grade ≥3 AEs reported in 12% of the patients and a discontinuation rate of 4%. These promising results were confirmed also in the AURA 2 study, 7 a Phase II, single-arm trial conducted in EGFR T790M+ patients, which showed an ORR of 71%, with a DCR of 92% and a median PFS of 6.8 months.
6,7 In a combined analysis of 411 patients in both Phase II trials (AURA ex and AURA 2), the most commonly reported all-grade AEs were diarrhea (42%), rash (41%), dry skin (31%), nail toxicity (25%), eye disorders (18%), nausea (17%), decreased appetite (16%), and constipation (15%). These events were primarily grade 1/2, with a low rate of grade ≥3 AEs. The most common grade ≥3 AEs were pneumonia (2%) and pulmonary embolism (2%). Across both studies, dose reductions as a result of AEs were needed for 4.4% of patients. The most frequently reported AEs that led to a dose reduction or interruption were QTc prolongation (2%) and neutropenia (2%). Other AEs resulting in treatment discontinuation were interstitial lung disease (ILD) or pneumonitis (2%) and cerebrovascular accident (1%). Fatal AEs occurred in 3.2% of patients and consisted of four cases of pneumonitis, which were attributed to osimertinib. 6,7 Since in the AURA and AURA 2 trials brain metastases were assessed as non-target lesions, there were no measurements of metastatic brain lesion diameter. Therefore, it was not possible to calculate an ORR or DCR for central nervous system (CNS) disease. In these studies, the proportion of patients with the CNS as the first site of progression was 12%. 16 Omuro et al reported that the incidence of the CNS as an initial failure site reached 33% in EGFR-TKI responders with advanced NSCLC. 17 Approximately half of patients with EGFR-positive metastatic NSCLC treated with first-line chemotherapy develop CNS disease relapse, and the low rate of primary CNS relapse in the AURA and AURA 2 trials may suggest a CNS antitumor activity 16 of osimertinib. The mechanism underlying the relationship between clinical benefit from EGFR-TKIs and CNS metastasis may involve several causal factors. Prolonged survival through the use of EGFR-TKIs may coincide with a substantial risk of developing CNS metastasis, as the cranial event occurs in a relatively late phase of the disease. The high frequency of EGFR mutations in brain metastases of lung adenocarcinoma suggests an intrinsic brain tropism of these tumors. Incomplete drug penetration of the blood-brain barrier may account for the increased incidence of CNS metastasis. Metastatic CNS clones may possess an inherited resistance to EGFR-TKIs, or they may acquire earlier drug resistance during EGFR-TKI therapy. 17,47 Notably, first-generation EGFR-TKIs hardly penetrate the blood-brain barrier at the recommended doses. 18 Based on the data from the Phase II studies (AURA extension 6 and AURA 2 7), osimertinib was approved by the FDA, in November 2015, and by the European Medicines Agency, in April 2016, for patients with advanced EGFR T790M+ NSCLC following progression on a prior EGFR-TKI. AURA 3, 8 published in December 2016, is a Phase III trial comparing osimertinib with platinum-based doublet chemotherapy in patients with EGFR T790M+ advanced NSCLC after first-line EGFR-TKI therapy. Patients were randomly assigned to receive oral osimertinib (80 mg once daily) or intravenous pemetrexed (500 mg/m2) plus either carboplatin (area under the curve [AUC] 5) or cisplatin (75 mg/m2). The median PFS was significantly longer with osimertinib than with platinum-based chemotherapy (10.1 months vs 4.4 months). This benefit was observed across all predefined subgroups, also among patients with stable, asymptomatic CNS metastases (8.5 months vs 4.2 months), supporting preclinical and clinical data suggesting that osimertinib may be an EGFR-TKI with improved brain exposure.
19 The ORR was significantly better with osimertinib than with platinum-based chemotherapy (71% vs 31%). The good clinical profile of osimertinib was confirmed also in the AURA 3 trial: the proportion of patients with grade 3 AEs was 23% in the osimertinib group and 47% in the chemotherapy group. Osimertinib was associated with a lower rate of AEs leading to permanent discontinuation. 8 Leptomeningeal metastasis is another detrimental complication of advanced EGFR mutation-positive NSCLC. A Phase I study (BLOOM study, NCT02228369) is ongoing to test osimertinib monotherapy at 160 mg once daily against brain and leptomeningeal metastasis. Preliminary data demonstrated encouraging results in terms of safety and efficacy. 20 In view of this clinical activity and tolerability, osimertinib is being tested in other Phase III studies. In the FLAURA Phase III trial (NCT02296125), treatment-naive patients with locally advanced or metastatic EGFR-mutant NSCLC were randomly assigned to receive osimertinib (80 mg qd, orally) or standard-of-care EGFR-TKI (gefitinib 250 mg qd, orally, or erlotinib 150 mg qd, orally). 21 Combination treatment is another strategy to improve efficacy and antitumor activity. The TATTON trial (NCT02143466) is a multi-arm, Phase Ib trial investigating osimertinib 80 mg once daily in combination with durvalumab (an anti-PD-L1 monoclonal antibody), with savolitinib (a MET inhibitor), or with selumetinib (a MEK 1/2 inhibitor) in patients with advanced EGFR-mutant lung cancer. Primary objectives were safety and tolerability, and the secondary objective was the clinical activity of the combinations. An increase in ILD events was observed with the combination of osimertinib plus durvalumab; therefore, enrollment in the osimertinib plus durvalumab combination arm has been suspended. 22 Other trials assessing combination treatment are ongoing: osimertinib plus necitumumab (NCT02496663), plus ramucirumab (NCT02789345), or plus bevacizumab (NCT02803203). In addition to metastatic disease, osimertinib is being tested in the adjuvant setting (ADAURA study, NCT02511106). 23 Currently, osimertinib is the only EGFR-TKI approved for patients with metastatic EGFR T790M+ NSCLC. Rociletinib (Clovis) is another third-generation EGFR-TKI designed to inhibit both T790M and EGFR-activating mutations while sparing wild-type EGFR. Rociletinib has been investigated in EGFR-mutant patients who progressed after at least one line of EGFR-TKI treatment (TIGER X, Phase I/II trial), in the first-line setting versus erlotinib in EGFR-mutated patients (TIGER 1, Phase II/III trial), and in the second-line setting post-standard EGFR-TKI treatment (TIGER 2, Phase II trial) versus chemotherapy in patients who have progressed after standard EGFR-TKI and after platinum-based doublet chemotherapy. 2 Despite initial promising results, Clovis has stopped the clinical development of rociletinib because of updated data revealing lower response rates than initially reported, a negative vote from the FDA's Oncologic Drugs Advisory Committee (ODAC), and FDA approval of osimertinib, rociletinib's main competitor in this setting. 24,25 Further third-generation EGFR-TKIs in clinical development are HM61713, ASP8273, EGF816, and PF-06747775.
26 Immune checkpoint inhibitors and EGFR-TKIs Recent data showed that nivolumab (CheckMate 057 27 and CheckMate 017 28), pembrolizumab (Keynote 010 29), and atezolizumab (POPLAR 30 and, recently, the OAK 31 study) are superior to docetaxel in the second-line setting, and pembrolizumab (Keynote-024 32) also improves survival versus platinum-based chemotherapy in PD-L1-positive untreated NSCLC. As shown in these studies, among EGFR-mutant patients the efficacy of checkpoint inhibitors was lower than in the wild-type population, probably because of the low mutational load of EGFR-mutant tumors. EGFR-mutant NSCLC expresses higher PD-L1 levels than wild-type disease, while gefitinib can reduce PD-L1 expression, suggesting that combined strategies of EGFR-TKI and immunotherapy may be an interesting approach. 33 While promising results come from a combination of nivolumab plus erlotinib in EGFR-mutant advanced NSCLC with acquired resistance to EGFR-TKI 34 and from pembrolizumab plus gefitinib in heavily pretreated (up to four prior therapies) EGFR-mutant NSCLC, 35 combination treatment with osimertinib showed significant toxicity. In the Phase I TATTON trial (NCT02143466) and in the Phase III CAURAL trial (NCT02143466), the combination of osimertinib and durvalumab (anti-PD-L1 monoclonal antibody) in patients with EGFR-mutant NSCLC with acquired resistance to EGFR-TKI and T790M positivity showed a high incidence of ILD. 22,23 However, further Phase I/II combination trials of checkpoint inhibitors are ongoing in EGFR-TKI-naive and pretreated patients (NCT02013219: erlotinib plus atezolizumab; NCT02364609: afatinib plus pembrolizumab). 36 Several clinical studies are underway to assess new immunotherapy strategies in different settings. 37 A combination of ipilimumab and nivolumab has been tested as first-line treatment in advanced NSCLC with an interesting ORR. 48 Durvalumab (anti-PD-L1) and tremelimumab (anti-CTLA4) showed clinical activity in relapsed NSCLC. 49 Many studies are ongoing to test new targets for immunotherapy, such as inhibitory molecules (indoleamine dioxygenase, adenosine, TIM-3, LAG-3) or stimulatory molecules (OX40, CD40, CD27), new peptide vaccines targeting novel antitumor antigens, alternative checkpoint inhibitors, chimeric antigen receptor T cells (CAR-T), and histone deacetylase (HDAC) inhibitors and DNA hypomethylating agents that target epigenetics for tumor growth suppression. 50 Immune checkpoint inhibitors have been approved for therapy of a variety of advanced cancers, and they can also be considered for combination therapy to overcome acquired resistance to EGFR-TKIs. 37 Clinical strategies beyond EGFR-TKI progression The standard of care for patients with acquired resistance to EGFR-TKIs is changing rapidly after the development of third-generation EGFR-TKIs targeting both T790M and the EGFR-TKI-sensitizing mutations. Osimertinib is the new standard of care in patients with metastatic EGFR T790M+ NSCLC after progression on erlotinib, gefitinib, or afatinib. Second-line platinum-based doublet chemotherapy remains the standard of care in patients without the T790M mutation or another targetable resistance mechanism, especially in the case of dramatic progression. 3 Combination treatment yielded a non-significant benefit (PFS 6.7 vs 5.4 months). 38 For patients with oligometastatic progression, local therapies such as radiotherapy, surgery, and stereotactic ablative radiotherapy in conjunction with continued EGFR-TKI can extend disease control by over 6 months.
39 Finally, patients with indolent, asymptomatic progression and good performance status may continue to be treated with an EGFR-TKI beyond RECIST progression if there is no evidence of deterioration or intolerable toxicity (Figure 3). These strategies are supported by the risk of disease flare after EGFR-TKI cessation, considering that some clones remain sensitive to EGFR inhibition after acquired resistance develops. 3,36 Second-generation irreversible TKIs failed to overcome T790M-mediated resistance because the concentrations at which they overcome T790M activity preclinically are not achievable in humans due to dose-limiting toxicity related to non-selective inhibition of wild-type EGFR. 40 Vertical inhibition, the simultaneous inhibition of both extracellular and intracellular receptor domains with the combination of cetuximab and afatinib, demonstrated promising results in a Phase Ib clinical trial, but the high rate of toxicity limited its use in clinical practice. 41 Acquired resistance to a T790M-specific EGFR inhibitor The main mechanism of resistance to osimertinib and to all third-generation irreversible EGFR inhibitors 42 is the acquisition of the missense mutation EGFR C797S in exon 20, which consists of the substitution of cysteine with serine at amino acid position 797 within the kinase-binding site. Osimertinib loses the ability to form a covalent bond with EGFR at the position of the cysteine residue. 12 EGFR C797S arises in approximately one-third of patients treated with osimertinib 11 over a period of 9-13 months. 42 In preclinical models, the configuration of the T790M and C797S mutations affects how cells respond to therapy. If the two mutations are in trans (on different alleles), cells are resistant to third-generation EGFR-TKIs, but a combination of first- and third-generation TKIs can restore EGFR inhibition. If the two mutations are in cis (on the same allele), cells are refractory to any EGFR-TKIs. 12,23 In a case report of a patient with an EGFR-mutant lung cancer, next-generation sequencing (NGS) techniques were performed on three biopsy specimens obtained before treatment with erlotinib, after acquired resistance to erlotinib, and after acquired resistance to AZD9291. The original sensitizing EGFR mutation was present in all tumor samples. Under the selective pressure of EGFR-TKIs, the tumor developed secondary (T790M) and tertiary (C797S) mutations to maintain EGFR signaling. 43 A subsequent study collected plasma samples from 15 patients who received osimertinib therapy and had preexisting plasma EGFR T790M. A total of 40% of patients had EGFR del19/T790M/C797S, 33% of patients had EGFR T790M alone, and EGFR T790M was no longer detectable in 27% of patients. 44 Interestingly, in patients with chronic lymphocytic leukemia treated with ibrutinib (a Bruton tyrosine kinase [BTK] inhibitor), mutations have been detected in C481 (C481S), the cysteine residue analogous to C797 in EGFR, suggesting that mutations in this conserved residue may be a common mechanism of acquired resistance to covalent kinase inhibitors. 45 Additional mechanisms of resistance to osimertinib in patients negative for C797S include HER2 or MET amplification, loss of the T790M mutation, 46 EGFR L718Q, EGFR L798I, and KRAS G12S. 42 In addition to acquired mutations and gene amplifications, phenotype transformation represents a distinct mechanism of resistance: adenocarcinoma turns into small cell lung cancer, with RB1 inactivation as the defining feature.
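As a compact, purely schematic summary of the post-progression strategies laid out above (an illustration only, not clinical guidance), the decision flow can be encoded in a small function; the category names below are assumptions introduced for this example.

# Schematic sketch of the decision flow described above for progression
# on a first-line EGFR-TKI. Illustrative only; not clinical guidance.
def strategy_at_progression(t790m_positive: bool,
                            oligometastatic: bool,
                            indolent_asymptomatic: bool) -> str:
    if t790m_positive:
        return "osimertinib"
    if oligometastatic:
        return "local therapy (e.g., SABR) + continue EGFR-TKI"
    if indolent_asymptomatic:
        return "continue EGFR-TKI beyond RECIST progression"
    return "platinum-based doublet chemotherapy"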
42 The genomic heterogeneity associated with resistance to EGFR-TKIs in NSCLC requires the development of targeted therapies to overcome C797S resistance and of combination therapies that can inhibit the emergence of multiple resistance mechanisms. 44 EAI045 is the first allosteric TKI purposefully designed to overcome the T790M and EGFR C797S mutations. In a genetically engineered mouse model of L858R/T790M mutant-driven lung cancer, EAI045 was tested alone and in combination with cetuximab. While the allosteric inhibitor was ineffective alone due to receptor dimerization, the combination of EAI045 and cetuximab showed significant tumor regression. Clinical trials are required to confirm these results in patients with advanced NSCLC. 42 Conclusion Acquired resistance is one of the most significant limitations in lung cancer treatment. Despite an initial benefit from targeted therapies, all patients become refractory. Identification of the mechanisms involved in drug resistance is essential to tailor the best treatment strategy for each patient. This is the reason why identification of biomarkers should be encouraged, and an appropriate tissue sample or plasma assay is essential for biological characterization. The failure of currently available targeted therapies suggests that a single agent may not be sufficient to overcome drug resistance. New strategies, including combination treatment, are currently under investigation to identify new treatment opportunities.
Flow cytometric analysis of the CD4+ TCR Vβ repertoire in the peripheral blood of children with type 1 diabetes mellitus, systemic lupus erythematosus and age-matched healthy controls

Background

Data regarding the quantitative expression of TCR Vβ subpopulations in children with autoimmune diseases have provided interesting and sometimes conflicting results. The aim of the present study was to assess, by comparative flow cytometric analysis, the peripheral blood CD4+ TCR Vβ repertoire of children with an organ-specific autoimmune disorder, type 1 diabetes mellitus (T1DM), and of children with a systemic autoimmune disease, Systemic Lupus Erythematosus (SLE), in comparison to healthy age-matched controls of the same ethnic origin. The CD4+ TCR Vβ repertoire was analyzed by flow cytometry in three groups of participants: a) fifteen newly diagnosed children with T1DM (mean age: 9.2 ± 4.78 years), b) nine newly diagnosed children with SLE, positive for ANA and anti-dsDNA, prior to treatment (mean age: 12.8 ± 1.76 years), and c) 31 healthy age-matched controls (mean age: 6.58 ± 3.65 years), all of Hellenic origin.

Results

CD4+ TCR Vβ abnormalities (± 3SD of controls) were observed mainly in SLE patients. Statistical analysis revealed that the CD4+ Vβ4 chain was significantly increased in patients with T1DM (p < 0.001), whereas the CD4+ Vβ16 chain was significantly increased in SLE patients (p < 0.001), compared to controls.

Conclusions

The CD4+ Vβ4 and CD4+ Vβ16 chains could possibly be involved in the cascade of events precipitating the pathogenesis of T1DM and SLE in children, respectively.

Background

Autoimmunity involves complex pathophysiological processes; however, no comprehensive model of inflammatory induction has been unraveled as yet. Genetic and environmental immunologic factors, several hormones, and stress have been implicated in the pathogenesis of autoimmune disorders [1-4]. Moreover, the T cell receptor (TCR), with its extended repertoire derived from somatic recombination mechanisms, plays a potential role in human autoimmune disease [5]. Animal and human genetic studies have shown that skewing of the TCR Vβ repertoire is a characteristic feature of some autoimmune disorders, although controversy still exists on that issue [6,7]. The TCR Vβ repertoire is assessed with two methodologies: CDR3 spectratyping, a genetic assay that provides mainly qualitative information about TCR Vβ clonality [8,9], and flow cytometry, which provides a quantitative assessment of TCR Vβ clones and is well established in the clinical setting [10,11]. In T cell-mediated organ-specific autoimmune diseases, such as type 1 diabetes (T1DM), TCR Vβ repertoire analysis, using both spectratyping and flow cytometry, has produced conflicting findings [12,13]. In systemic autoimmune diseases, like Systemic Lupus Erythematosus (SLE), TCR Vβ analysis has been conducted in adult populations using the spectratyping method, and a marked oligoclonality of the TCR Vβ repertoire has been reported, which is more prominent in patients with active disease [14,15]. It is likely that children with SLE display a similar phenotype, although no data exist regarding the expression of the Vβ chains in the pediatric population.
The marked skewing of the TCR Vβ repertoire observed in some SLE patients has not been found in T1DM patients, although, to our knowledge, no comparative study between an organ-specific disease, such as T1DM, and a systemic autoimmune one, such as SLE, has been conducted in a pediatric population as yet. The aim of this study was to compare the peripheral blood CD4+ TCR Vβ repertoire in children with T1DM and SLE with that of healthy age-matched controls of the same ethnic origin, using flow cytometry.

a) Healthy children

Complete blood count values, immunoglobulin levels and lymphocyte subpopulation counts were normal in all healthy children. Autoantibodies, either SLE-specific or diabetes-specific (ANA, anti-dsDNA, ICA), were evaluated and were negative in all controls tested. There was an unavoidable male preponderance among the tested children (boys: 80.6%), due to the higher frequency of anatomical abnormalities requiring surgical reconstruction in boys. Mean values and standard deviations (SD) of each distinct Vβ family percentage on CD4+ lymphocytes of the 31 studied children are shown in Table 1. TCR Vβ usage appears to be non-random, varying from values as low as 0.0% for Vβ4 and Vβ7.2 to values as high as 12.8% for Vβ2. In one individual, the value of the Vβ7.2 chain was 0.0%. The known polymorphism (null expression) in the Vβ20 subfamily was not detected in the population tested. Enrolled infants and children were separated into three age groups, which were compared in order to detect possible differences in Vβ expression across age groups. Table 2 displays median values and interquartile ranges of the CD4+ TCR Vβ repertoire in these subgroups as well as in the total study population. Statistical analysis revealed no significant difference for any of the 24 TCR Vβ subfamilies in the CD4+ T lymphocytes.

b) T1DM children

Assessment of the CD4+ TCR Vβ repertoire was performed in two ways: a) the Vβ repertoire of each T1DM patient was compared with the Vβ mean values of the control group, and b) statistical analysis was performed in order to detect CD4+ Vβ lymphocyte clones with different quantitative expression in the T1DM group compared to the control group. Increased/decreased usage of a Vβ subfamily was defined as a value beyond the control mean ± 2 or 3 standard deviations [10,11]. Five patients (33%) presented no abnormalities in any CD4+ Vβ chain value when compared to controls (Table 3, Figure 2). One T1DM patient had decreased usage of Vβ14 (0.7%; mean value: 4.00 ± 1.78), and in another the Vβ20 value was 0.1% (mean value: 4.08 ± 1.75). None of the healthy individuals had such low usage of these Vβ chains. Increased usage of several CD4+ Vβ chains (Vβ3, Vβ4, Vβ5.1, Vβ5.3, Vβ8, Vβ11, Vβ18, Vβ16 and Vβ21.3) was identified sporadically in T1DM patients. With the exception of the Vβ11 chain, increased usage of these Vβ subpopulations was also found in healthy individuals. Although increased usage of the CD4+ Vβ4 lymphocyte population was found in only 5 T1DM patients when compared to the mean value of controls, statistical analysis showed a significant difference between the two groups (p < 0.001) for the CD4+ Vβ4 subfamily (Table 4, Figure 3). This is expected, since control values ranged between 0.0% and 2.1%, while T1DM values ranged from 0.6% to 3.3%.
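The abnormality criterion used above (a value beyond the control mean ± 2 or 3 standard deviations) can be expressed as a short computation. A minimal sketch follows, assuming hypothetical control percentages rather than the study's raw data:

```python
import numpy as np

# Flag a patient's CD4+ Vbeta subfamily percentage when it falls outside
# the control mean +/- k standard deviations (k = 2 or 3, per the text).
def flag_vbeta_usage(patient_pct, control_pcts, k=3):
    mean = np.mean(control_pcts)
    sd = np.std(control_pcts, ddof=1)  # sample standard deviation
    if patient_pct > mean + k * sd:
        return "increased usage"
    if patient_pct < mean - k * sd:
        return "decreased usage"
    return "within normal limits"

controls_vb14 = [4.0, 3.1, 5.2, 4.6, 2.8, 6.1, 3.9, 4.4]  # hypothetical controls (%)
print(flag_vbeta_usage(0.7, controls_vb14, k=2))           # a low-usage example
```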
It should be mentioned that 17 healthy children had CD4+ Vβ4 chain values < 1%, whereas 12 T1DM children had values > 1% (Figure 4). Taking into account the very low number of CD4+ Vβ4 cells analyzed, and that the maximum coefficient of variation (CV) was 20%, the statistical analysis was adjusted to this CV. As shown in Table 5, the differences remained significant even after "narrowing" the mean values, adjusting for a 20% difference in the mean values of our measurements.

c) SLE children

In the SLE group, skewing of CD4+ Vβ subpopulations was detected in the majority of patients when compared to healthy individuals. In contrast to healthy children and T1DM patients, who presented a similar CD4+ TCR Vβ repertoire pattern, 7 of the 9 SLE children tested displayed concurrently marked increased (mean value ± 3SD) and decreased usage (with values even of 0.0%) of several CD4+ Vβ subpopulations. This was noteworthy in one SLE patient, who displayed increased usage of the CD4+ Vβ12 (9.44%), Vβ16 (5.6%) and Vβ20 (9.17%) chains and decreased usage of the Vβ2 (2.83%) and Vβ5.1 (0.12%) chains, and in another patient with high values of the CD4+ Vβ3 (9%), Vβ12 (14.2%) and Vβ16 (7.3%) chains and very low values of the CD4+ Vβ2 (0.2%), Vβ5.2 (0.2%), Vβ7.1 (0.1%) and Vβ22 (0.3%) chains (Table 6, Figure 5). Markedly low values were found in the following CD4+ Vβ chains: Vβ2 (in 2 patients), Vβ22 (in 2 patients), Vβ5.1, Vβ5.2, Vβ7.1, Vβ8, Vβ13.6, and Vβ18. The CD4+ Vβ4 subpopulation had values of 0.0% in three SLE patients; however, this finding, as previously mentioned, was noticed in healthy children as well. Increased usage of Vβ16 was found in 5 patients, of Vβ4 in 2, of Vβ20 in 2, and of Vβ12 in 2 patients. It should be mentioned, though, that two SLE patients had a normal repertoire at initial diagnosis. When statistical analysis was performed to compare CD4+ Vβ expression between SLE and healthy children, a significant difference in TCR Vβ quantitative expression was noticed only for the Vβ16 chain (p < 0.001) (Table 3, Figure 6). This was expected, since five SLE patients presented with higher values of the CD4+ Vβ16 chain than those of the control group (mean value + 3SD) (Figure 7). Only one healthy individual presented increased usage of CD4+ Vβ16, but he displayed no other discrepancies in the remaining CD4+ Vβ subfamilies.

d) Findings of T lymphocyte populations by flow cytometric analysis

The basic immunophenotyping findings of the T1DM and SLE groups, compared to controls, were as follows: a) increased expression of the CD69 molecule in two SLE patients and in one T1DM patient; b) an elevated percentage of the CD3+ TCRγδ lymphocyte population (20%) in two SLE patients; c) increased expression of HLA-DR+ cells in 2 of the nine SLE children tested; and d) an absolute number of B lymphocytes (CD19+) beyond normal values in one SLE and in one T1DM patient. No difference in CD5+CD19+ expression was noticed among the three groups, and the values (%) of the CD5+CD19+ populations ranged between 0.1 and 1.2% in all individuals tested.

Discussion

Flow cytometric analysis of the CD4+ TCR Vβ repertoire performed in three different groups of children of Greek origin, healthy, T1DM and SLE, provided results similar to previously published studies, but with some differences. No age-related differences in CD4+ Vβ expression were found among the three age groups of healthy Greek children studied. (Table 2 notes: * IQR, interquartile range (75th-25th percentile); † the numbers in brackets indicate the two age groups used for the comparison with the corresponding p-value.)
As far as healthy children are concerned, the results of this study support the non-random usage of TCR Vβ subfamilies in Greek children, as observed in previous studies of normal adults and children [11,16,17]. There is also no statistical difference in the T cell repertoire among the three pediatric age groups studied, in accordance with findings in another pediatric population studied in the United Kingdom [11]. Many healthy individuals also sporadically showed increased expression of CD4+ Vβ chains, as previously reported [11,16,17]. To our knowledge, analysis of the CD4+ Vβ repertoire expression pattern in Greek adults has not been conducted so far, so a comparison with values from adults of the same ethnic origin could not be performed. T1DM children had almost the same CD4+ Vβ repertoire pattern as healthy individuals, with the exception of the CD4+ Vβ4 chain, which showed increased expression in T1DM patients compared to controls. Previous studies have shown similar results, that is, either no differences between the two groups [12], or increased usage of other chains, such as Vβ7 [13,18]. Some putative explanations for the lack of consistency among studies are the scarcity of published studies in human populations, the small sample sizes tested both in our study and in others, and the different methodologies applied for TCR repertoire assessment. It is still unclear, though, whether the increased expression of CD4+ Vβ4 in newly diagnosed T1DM children of Hellenic origin remains a constant finding in later stages of diabetes. Large-scale multicenter longitudinal studies are required to clarify this issue. A similar confirmed polymorphism of the TCRV20S1 gene was found in the healthy Sardinian population, but further research failed to relate this polymorphism to the high incidence of T1DM in Sardinians [19]. In contrast to T1DM patients, skewing of the CD4+ Vβ repertoire is far more prominent in SLE patients. This finding is in accordance with previous studies published in adults [14,15]. CDR3 spectratyping performed on the peripheral blood of 20 adult SLE patients showed prominent usage of the Vβ16 chain among other Vβ subpopulations (Vβ2, Vβ8, Vβ11, Vβ14, Vβ19 and Vβ24) [15]. In the present study, in which flow cytometry was used, increased usage of the Vβ16 chain in peripheral blood CD4+ T cells was found in the majority of SLE children. Higher values of other CD4+ Vβ chains were also noticed in SLE children in comparison to healthy ones, but they concerned individual patients. Our results are difficult to compare with previous studies published in adults, not only because of the different methodologies used (flow cytometry versus spectratyping) and the different samples studied (children versus adults, patients at initial diagnosis versus patients under treatment), but also because of the limited number of studies conducted in this field. Larger-scale analyses in populations categorized by age, timing of the study (before or after treatment), sex, ethnicity and methodology could provide further insight into the role of the Vβ repertoire, not only in autoimmune disease pathogenesis but also in its clinical significance as a prognostic factor in the case of SLE.
A limitation of the present study is the small number of patients studied; this is due to the rarity of lupus in children in the Greek population and to the fact that only patients at initial diagnosis were selected, in order to avoid medication-induced alterations in their immunological profile. It should also be noted that the evaluation of the Vβ repertoire in each patient is not influenced by the number of patients investigated. Although genetic evaluation was not performed, all previously published studies in immune-mediated diseases have shown that there are no discrepancies between the results of flow cytometric analysis and spectratyping, supporting the reliability of flow cytometry. Finally, the CD8+ Vβ lymphocyte subpopulations were not analyzed, and therefore it is not known whether there are abnormalities in the CD8+ repertoire of children with diabetes or lupus. As far as the immunophenotyping analysis is concerned, no major differences among the groups were noticed. Regarding the role of the CD5+CD19+ population in the autoimmune process, its expression was quite similar in the three groups studied. This finding is in accordance with other studies in adult SLE patients, but not with studies in T1DM patients, in whom increased expression was observed [20,21].

Conclusions

This is the first comparative flow cytometric analysis between an organ-specific autoimmune disease, T1DM, and a systemic one, SLE, and it underscores the importance of the TCR Vβ repertoire in patients with autoimmune diseases. This study was possible because of the availability of an easily applicable method in a clinical setting, which enabled TCR Vβ repertoire analysis prior to the initiation of any treatment that could be a confounding factor. Abnormalities in the quantitative expression of the CD4+ TCR Vβ repertoire were observed mainly in SLE children. Low usage of certain CD4+ TCR Vβ chains was also far more prominent in lupus patients. Only the CD4+ TCR Vβ4 and CD4+ TCR Vβ16 lymphocytes were significantly increased in T1DM and SLE children, respectively. However, the potential role of these lymphocyte clones in the pathogenesis of autoimmunity remains to be clarified.

Subjects

Samples were collected from a) fifteen newly diagnosed children with T1DM (mean age ± SD: 9.2 ± 4.78 years; 11 males, 4 females) during their hospitalization upon diagnosis in the First Department of Pediatrics of the University of Athens at "Aghia Sophia" Children's Hospital, and b) nine patients with SLE (mean age ± SD: 12.8 ± 1.76 years; 3 males, 6 females) upon diagnosis and prior to treatment initiation, all of Hellenic origin. Thirty-one healthy Greek children (mean age ± SD: 6.58 ± 3.65 years; 26 males, 5 females) undergoing minor surgery at the First and Second Department of Surgery at "Aghia Sophia" Children's Hospital were included as controls. Taking into account that several germline polymorphisms of the TCR Vβ genes have been described in different ethnicities [22,23], this study was restricted to children of Greek origin in order to include an ethnically homogeneous population. A standardized interview was held with the parents of all children.

Ethical approval

The study was approved by the Ethical Committee of the "Aghia Sophia" Children's Hospital, and participants were included in the study only after informed consent was obtained from their parents or guardians.
Methodology

Blood (4 ml in an EDTA tube and 3 ml for serum) was taken from all participants for investigations, including full blood count, assessment of serum immunoglobulin levels by nephelometry, screening for autoantibodies by indirect immunofluorescence (ANA, anti-dsDNA, ICA), and lymphocyte subpopulation immunophenotyping by flow cytometry.

Flow cytometry

The CD4+ TCR Vβ repertoire was analyzed using three-color flow cytometry with the IOTest Beta Mark TCR Repertoire Kit (Beckman Coulter, Marseille, France), which consists of fluorochrome-conjugated monoclonal antibodies that identify 24 TCR Vβ subfamilies, covering about 70% of normal human CD4+ T cells. Since only three-color flow cytometry was in practice at the time of study initiation, and in order to assess all results with the same method, no changes in methodology were undertaken; for that reason, only the CD4+ TCR Vβ repertoire, and not the CD8+ one, was assessed. Peripheral blood samples were collected in tubes containing anticoagulant (EDTA) and were stained within 2 hours. 100 μl of blood was incubated for 20 minutes at room temperature with 20 μl of a mixture of three distinct anti-Vβ monoclonal antibodies and 10 μl of anti-CD4-PC5 monoclonal antibody (MoAb). Erythrocytes were lysed with NH4Cl solution. At least 10,000 CD4+ lymphocytes were collected for analysis. CD4+ lymphocytes were gated using forward scatter, side scatter and FL4 fluorescence, whereas Vβ repertoire analysis used two additional fluorescence channels (FL1 and FL2). Data acquisition was performed initially on an EPICS XL (Beckman Coulter) flow cytometer and, later on, on an FC-500 (Beckman Coulter) instrument. Listmode analysis was performed using the CXP software.

Statistical analysis

Statistical analysis among three age groups of the healthy individuals included in the study was performed in order to detect possible skewing among them. Age was transformed into a categorical variable, constructing three age groups: 18 months to 4 11/12 years (n = 12), 5 to 9 11/12 years (n = 10) and ≥ 10 years (n = 9). The reason for this transformation was to reveal possible differences in Vβ repertoire usage among age groups that roughly reflect the stages of child development and immune system maturation. Vβ repertoire expression variables are described as median and interquartile range, IQR (75th-25th percentile), due to the small sample size and their non-normal distribution. For the same reason, non-parametric statistics were used: the Mann-Whitney U statistic for the comparison of these variables between two age subgroups, and the Kruskal-Wallis rank test when comparing the same variables among the three age subgroups of the study. All tests were two-sided at a significance level of p < 0.05. Data were analyzed using STATA™ (Version 9.0, Stata Corporation, College Station, TX 77845, USA). To assess possible differences in CD4+ TCR Vβ repertoire expression, pediatric patients with newly diagnosed SLE and T1DM were compared with healthy age-matched controls. Due to the small subgroup sizes of SLE and T1DM (n = 9 and n = 15, respectively), median and IQR (75th-25th percentile) were used to describe the continuous variables. In addition, non-parametric statistical analysis (Mann-Whitney U) was used to test for any significant differences between the subgroups. Due to the multiple comparisons (n = 24), a Bonferroni adjustment was performed in order to correct for inflation in type I error, lowering the significance level from p < 0.05 to p < 0.0021 (= 0.05/24). The statistical analysis was also adjusted to a CV of 20%, which could characterize the results of Vβ chains with rare expression. The significant difference found between controls and T1DM patients in the Vβ4 chain was also present when using Student's t-test for the comparison. Although the t-test is a parametric statistic, and less powerful in our case due to the small number of patients, it allowed us to perform new hypothetical comparisons between the aforementioned groups in order to verify that the comparisons would remain significant even after increasing the mean value in the control group by 20% and decreasing the mean value in the T1DM group (for the Vβ4 chain) by 20%.
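A minimal sketch of this statistical workflow (Mann-Whitney U with the Bonferroni-adjusted threshold, Kruskal-Wallis across the age groups, and the 20% "narrowing" sensitivity check) is shown below; the group values are hypothetical, not the study's raw data:

```python
import numpy as np
from scipy import stats

# Hypothetical Vbeta4 percentages for one control group and one patient group.
controls = np.array([0.8, 1.0, 0.5, 1.2, 0.9, 0.7, 1.1, 0.6, 1.3, 0.8])
t1dm = np.array([1.9, 2.4, 1.6, 3.1, 2.2, 1.8, 2.7, 2.0])

# Mann-Whitney U for a two-group comparison of one Vbeta subfamily.
u_stat, p = stats.mannwhitneyu(controls, t1dm, alternative="two-sided")

# Bonferroni adjustment for the 24 subfamilies tested per comparison.
alpha_adj = 0.05 / 24  # = 0.0021
print(f"U = {u_stat:.1f}, p = {p:.4f}, significant: {p < alpha_adj}")

# Kruskal-Wallis across the three age subgroups of healthy children.
g1, g2, g3 = [0.9, 1.1, 0.8, 1.2], [1.0, 0.7, 1.2], [0.8, 1.3, 0.9]
h_stat, p_kw = stats.kruskal(g1, g2, g3)
print(f"H = {h_stat:.2f}, p = {p_kw:.4f}")

# Sensitivity check for a maximum CV of 20%: "narrow" the two group means
# toward each other by 20% and re-test (Student's t, as in the text).
t_stat, p_narrow = stats.ttest_ind(controls * 1.2, t1dm * 0.8)
print(f"t = {t_stat:.2f}, p after 20% narrowing = {p_narrow:.4f}")
```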
Competing interests

The authors have no financial competing interests to disclose. There are also no non-financial competing interests to declare in relation to this manuscript.

Authors' contributions

FT carried out the study (collection of patients and controls, conduction of the flow cytometric analysis and autoantibody screening) and wrote the manuscript. MK contributed significantly to the conception and design, the acquisition of data, and the analysis and interpretation of data, as well as to drafting the article and revising it critically for important intellectual content; she also contributed significantly to the final approval of the version to be published. MT contributed to the analysis and interpretation of the data and revised the manuscript. CM contributed significantly to the acquisition, analysis and interpretation of data, as well as to drafting the article and revising it critically; more specifically, owing to his qualifications in statistical analysis, he contributed to the interpretation and presentation of the study's principal findings. EP personally conducted the measurement of serum immunoglobulin levels and the screening for autoantibodies, and contributed to the interpretation of data. GC revised the manuscript critically for important intellectual content. CKG contributed to the interpretation of the collected data and the preparation of the manuscript; together with MK, she had the main role in drafting the article and revising it critically for important intellectual content, and contributed significantly to the final approval of the version to be published. All authors read and approved the final manuscript.

Authors' information

FT is the PhD candidate who conducted the study under the supervision of Dr. Maria Kanariou, Director of the Department of Immunology & Histocompatibility, "Aghia Sophia" Children's Hospital, Greece, and with the help of the members of the Department of Immunology & Histocompatibility. She holds a master's degree in "Molecular Medicine" (the supervisor of the two-year training in basic research methodology was Professor Nikolaos Anagnou). She is a pediatrician and is currently a Consultant at the 3rd Department of Pediatrics of "Attikon" University Hospital.
MK is the Director of the Department of Immunology & Histocompatibility, Specific Center & Referral Center for Primary Immunodeficiencies - Paediatric Immunology, "Aghia Sophia" Children's Hospital, Greece, where all laboratory measurements were conducted under her supervision. MT is a medical biopathologist with over a decade of experience in flow cytometry; she is currently responsible for the Flow Cytometry and Cell Culture Laboratories of the same Department. CM is a GP and Consultant at the Hospital of Kimi, Greece, and holds a master's degree in biostatistics and a PhD in epidemiology; he has contributed as a biostatistician to several published studies. EP is also a staff member of the Department of Immunology & Histocompatibility, "Aghia Sophia" Children's Hospital, Greece.
Anticipation of Predicates in Simultaneous Interpretation Between Different Word Order Languages

Abstract: Anticipation, broadly defined as the act of predicting words or phrases before their verbalization by the speaker, is a pragmatic simultaneous interpretation strategy enabling interpreters to minimize the temporal gap between the source and target languages, expedite the retrieval of equivalent words or phrases, and mentally prepare for the progression of the source discourse or speech. The literature on anticipation as an interpretation strategy explains that interpreters harness both linguistic and extralinguistic resources to engage in anticipation during simultaneous interpretation (SI). Linguistic resources include idioms, set phrases, lexical transition probabilities, and common sentence structures, whereas extralinguistic resources include contextual information about the source text and the interpreter's background knowledge about the topic, setting, and speaker. Anticipation is particularly crucial during simultaneous interpretation from Korean into English. The structural difference between Korean, a subject-object-verb (SOV) language, and English, a subject-verb-object (SVO) language, necessitates interpreters' adept anticipation, particularly of the predicates that typically conclude Korean sentences. Predicates in Korean sentences, besides indicating tense, also convey semantic content in the form of verbs or adjectives. Thus, anticipating predicates is often a crucial determinant of the success of SI. However, anticipating predicates is a skill to be acquired and trained, and it may not be effectively employed by interpreting students. This study examined a set of interpretation outputs from a sample of 22 graduate students to examine their utilization of anticipation during SI from Korean into English. The analysis of their interpretation focused on their attempts to anticipate predicates as well as the accuracy of their predictions. The analysis of the students' anticipation attempts and anticipation accuracy revealed a discernible but weak correlation between the two variables. Additionally, the analysis discovered a tendency among the students to predict the auxiliary verb only and wait for more input (English) to complement or repair their partial anticipation of a predicate. This study offers insights into the ways in which students employ anticipation and provides avenues for interpreting trainers to design methods for training the anticipation skills employed during SI.

Introduction

Though unbeknownst to many, anticipation is mobilized almost naturally when humans communicate. In fact, human sensory organs and cognition can be likened to nodes, open not only to other humans but also to the external environment, that make constant attempts to grasp the situation or the communication happening or about to happen. Anokhin (1978) argued that "the human central nervous system developed as a mechanism of maximal anticipation of sequential and iterative phenomena of the outside world at the greatest possible speed" (p.
19). Indeed, as part of the effort to quickly grasp the surrounding situation and thereby protect themselves, humans employ anticipation both passively and actively. Verbal communication is no exception: one unknowingly yet constantly attempts to anticipate where the dialogue or utterance is going and what meanings will be made. This is evident when one can predict the next word to be said before the interlocutor actually utters it. Regardless of the language in use, most people experience this in normal conversations. As suggested by Seleskovitch, the process of SI involves the same tasks as normal communication, namely thinking and speaking the formulated thought at the same time, albeit after a time lag (Seleskovitch, 1978, p. 32). Scholars who have written on anticipation as an interpretation strategy have also acknowledged that anticipation is used in monolingual conversations (Jörg, 1997; Lederer, 1978; Lim, 2011). This innate human ability to anticipate becomes a functional strategy when performing SI. Particularly because a thought comes from outside the interpreter's brain, in other words, because a thought is given to the interpreter by the speaker's input (Seleskovitch, 1978, p. 33), anticipation comes in handy when the interpreter attempts to sync his or her thought with the speaker, so as to minimize the cognitive load and time lag and perform SI more seamlessly. Indeed, according to Bartłomiejczyk (2006) and Pöchhacker (2016), anticipation is one of the most widely discussed interpretation strategies. In conference interpreting, the anticipation strategy is defined as "the prediction and interpretation of source text (ST) units before their actual utterance and can be explained as a response to previously received and processed linguistic and extralinguistic stimuli" (based on similar definitions by Jörg, 1997, p. 218; Setton, 1999; Wilss, 1978). When performing SI, which requires the effective and efficient management and simultaneous utilization of various efforts (listening/analysis, memory, and speech production), the interpreter can efficiently allocate the limited processing capacity by employing anticipation as an interpretation strategy (Moser, 1978, p. 359) based on linguistic and extralinguistic stimuli.

Although the distinction between inferencing and anticipation may be unclear, anticipation, which is categorized as a cognitive interpretation strategy (Li, 2013; Vandepitte, 2001; Won, 2010), is one of the most useful and frequently used strategies employed during SI (Li, 2013). Chernov (2004) even regards anticipation ("probability prediction") of the verbal and semantic structure of the message as "one of the most essential psycholinguistic factors" (p. 140) that enables simultaneity in SI, along with message redundancy. Chernov's (1994) view assumes that linguistic understanding completely hinges on human inference, and also explains that "the objective semantic discourse redundancy of the ST message and its subjective sense redundancy for the interpreter" are the conditions for successful inferencing (p.
200). Inference might not be exactly equivalent to anticipation, yet anticipation can be understood as the observable act of inference. In order to perform successful simultaneous interpretation, the interpreter should not stop at inferring the upcoming idea, word, rheme, or sense; only when anticipating what is about to be said can the interpreter successfully render a word or phrase, or start forming a sentence in the target language, in a way that reduces both the cognitive load and the input load during SI. Hence, this paper focuses on anticipation, an observable act of inference that occurs when unsaid or yet-to-be-verbalized words or phrases are predicted and expressed verbally by the interpreter.

Anticipation makes interpreting more efficient by contributing to lowering "the processing capacity requirements for the listening component" (Kurz & Färber, 2003, p. 123) and is particularly useful when simultaneously interpreting structurally different languages, such as German and English, or Korean and English. However, it is a strategy, or skill, that is acquired and honed through training and experience. Lim (2011) noted that when given the task of simultaneously interpreting Korean into English, students tend to fear that they will not be able to begin or finish a sentence because the verb comes at the end of a sentence (p. 60). When performing SI from Korean into English, waiting for the verb in the source language can lead to a choppy rendering of the interpretation output that is often difficult to follow. Hence, the importance of anticipation as an SI strategy, and particularly the importance of anticipating the predicate part of a sentence, cannot be downplayed. Hoping to provide interpreting trainers and students with some insights into how to encourage students and novice interpreters to employ anticipation and how to guide them, the researcher decided to observe and analyze how student interpreters utilize anticipation of predicates during their SI from Korean into English. Although the way student interpreters utilize anticipation may not be generalizable to professional interpreters, student interpreters who have to interpret from Korean into English do employ anticipation, albeit in a non-masterful way, and their tendencies and ways of anticipating can offer insights into how to train and improve novice interpreters' anticipation strategy during SI. With this aim, this paper explores how second-year graduate students majoring in Korean-English conference interpretation utilize anticipation during Korean-into-English SI, and analyzes both the frequency and the accuracy of the anticipation used by the student interpreters. While it would be fruitful to analyze which linguistic cues or extralinguistic resources each student used to anticipate each predicate, here we content ourselves with finding some tendencies or patterns in the students' use of anticipation as an SI strategy and discussing the implications of such findings.
Types of Anticipation in Simultaneous Interpretation

Anticipation is generally understood as a preemptive action or response triggered or enabled by stimuli. In simultaneous interpreting, depending on the stimuli or resources that the interpreter taps into to anticipate and produce the next word, phrase, or idea to be verbalized by the speaker, anticipation can largely be categorized into "linguistic anticipation" and "extralinguistic anticipation" (Lederer, 1978; Setton, 1999; Wilss, 1978). Similar to how interpreters end up making errors based on wrong expectations arrived at on the basis of linguistic and nonlinguistic cues (Gerver, 1976; Seleskovitch, 1978; as cited in Anderson, 1994), interpreters need to refer to linguistic and extralinguistic resources and information to anticipate during SI. The first is done by relying heavily on "transitional probability (TP)," or "the statistical likelihood of two or more words occurring together in a given language" (Gile, 2009; as cited in Hodzik & Williams, 2017). Collocations, set phrases, or maxims are the typical cases where interpreters can make such language predictions, as Lederer initially labeled this type of anticipation in 1978. Extralinguistic anticipation is done by mobilizing the interpreter's background knowledge of the topic and information about the speaker or the setting, and is used mostly to enable inferences and anticipation of what will be said in the unfolding speech (Setton, 1999). Lederer (1978) referred to extralinguistic anticipation as "anticipation based on sense expectation" (p. 331). Except for typical collocations or phrases with high TP, anticipation of what will be said is not always possible with the linguistic information alone; extralinguistic or sense-based anticipation is possible only if situational or contextual pragmatic information is present. Albeit difficult to observe and capture, interpreters can also utilize "non-verbal, visual information such as the speaker's facial expressions and gestures and his audience's reaction to what he is saying" as extralinguistic resources that contribute to clarifying the situational context (Argyle, 1972; as cited in Anderson, 1994). In line with these explanations, Lim (2011) used the term "contextual anticipation" to refer to the equivalent of extralinguistic anticipation. Lim (2011) further divided "contextual anticipation" into intratextual and extratextual. Intratextual anticipation is explained as anticipation based on "the clues derived from understanding of the speech itself" (p.
61), while extratextual anticipation relies on the interpreter's knowledge about the topic, setting, or speaker. However, as it is quite challenging to determine whether the prediction of a plausible continuation of the source-language speech is derived from intratextual elements, such as the omnipresent redundancy in human discourse, or from the interpreter's knowledge about the subject area gained through preparation for a specific event, this paper will consider any anticipation carried out on the basis of the situation or context of the source text as extralinguistic anticipation. This is also because some may argue that intratextual anticipation hinges more on linguistic knowledge than on extralinguistic resources. Bartłomiejczyk (2006) labeled this latter type of anticipation "general anticipation," meaning anticipation done by building up expectations about the source text. However, the term "general anticipation" may be misleading, at least in this paper, as the adjective "general" is used here to refer to a moderate level of exactness of the anticipated word or expression.

Challenges in Simultaneously Interpreting Between Structurally Different Languages and Anticipation as a Coping Strategy

Whether it is linguistic anticipation or extralinguistic anticipation, anticipation is of great use when interpreting two languages that exhibit structural differences, namely different word orders (Setton, 1999; Won, 2010). Even though Hodzik (2014) found that anticipation relying on TP may be neither possible nor effective during SI between asymmetrical sentence structures, linguistic anticipation, including that utilizing TP, is generally useful in anticipating downstream elements in a sentence. Consider interpreting a language with the SOV (subject-object-verb) word order into a language with the SVO word order. Interpreters frequently anticipate the verb part of an SOV source sentence before it is uttered, to minimize the memory effort and avoid an excessively long EVS (ear-voice span). Although SI is "a linear and forward process" often realized by employing "segmenting" or "chunking" strategies (An, 2009, p. 188), when the source and target languages differ in word order, the interpreter may not be able to process the input and reproduce the output in a linear manner, but may choose to anticipate the final segment or chunk of the sentence before it is uttered. Though it may not be easy, anticipating the sentence-ending segment, which is mostly a verb, is almost necessary when interpreting from an SOV language into an SVO language. Otherwise, the interpreter would have to carry a longer EVS and process a heavier input load at every moment.
Anticipating the predicate, which usually comes at the end of a sentence in an SOV language, is a way to reduce the cognitive and memory load. In addition, anticipation of the predicate, which is often a verb or an adjective, in an SOV sentence holds significant importance when the theme-rheme structure is taken into account. As Chernov (1994) explained, semantic components containing new information are generally placed in the rheme part, and hence the rheme is where the interpreter's attention should be disproportionately distributed, as mistranslation or omission of the rhematic item can lead to a substantial error. It can be argued that it is the predicate, placed in the rheme part in an SOV language, that ultimately decides the sense of an utterance. This statement holds true for the Korean language as well. In Korean, the predicate consists not only of a verb or an adjective but also of a sentence-ending element that signifies the relationship between the speakers communicating and the time-tense of a sentence. To be sure, in English as well, predicates, including verbs, mostly exist in conjugated forms and therefore also signify the time-tense. This makes accurate anticipation of the predicate part of a sentence more challenging. Lee (2014) likened the verb to "the soul" of a sentence, as it determines the syntactic structure of a sentence, and explained that interpreting Korean into Mandarin Chinese places a particular burden on interpreters precisely because the verb comes at the end of a Korean sentence, while in Chinese a verb comes right after the subject. Her analysis also indicates that when performing SI from Korean into Chinese, capturing the linguistic cues at the beginning of a sentence and utilizing them to predict the following verb and the entire sentence structure is crucial for the interpreter to minimize the EVS and utter the target language in time (Lee, 2014).

Structural disparities between Korean and English present unique challenges for interpretation, with anticipation emerging as a key strategy. Korean, an SOV language, places verbs and time-tense markers at the end of sentences, while English follows an SVO structure. This syntactic distinction necessitates reformulating information for interpreters (Lee, 1997). Because verbs and time-tense markers in Korean are typically positioned toward the concluding segment of a sentence, frequently at its very end, anticipation becomes a paramount strategy for effective interpretation from Korean into English. Moreover, because Korean sentences can exhibit a notable temporal gap between the subject and the terminal predicate, and occasionally use a dual-subject construction (An, 2009), anticipating predicates in Korean sentences poses a distinctive challenge.
Albeit challenging to master, anticipation of predicates, leveraging both linguistic cues, including lexical transition probability, and extralinguistic resources, namely contextual information, is a widely used interpretation strategy, especially when simultaneously interpreting Korean into English. This study examined whether and how students in an interpreting training program utilize anticipation when executing SI of a Korean speech into English. The experiment with 22 graduate students majoring in conference interpreting aimed to discover how often and how accurately students attempt to anticipate predicates in Korean sentences, and the ways in which they activate and enable anticipation of predicates during SI.

Experiment Design

This study recruited 22 second-year students at a graduate school of interpretation and translation based in South Korea. The second-year students attending the graduate program can be considered prospective interpreters, as they were only a couple of months away from joining the interpreter workforce in Korea. The second-year students of the graduate program take at least eight hours of classes on simultaneous interpreting every week and invest even longer hours in practicing and studying simultaneous interpreting. Furthermore, the experiment was conducted during the last week of October 2023, about a month before the students' final graduation exam. At this time of the year, the students were accustomed not only to simulated mock conference interpreting settings but also to interpreting typical speeches, especially those delivered by government officials, making them suitable for this study's experiment. Data were collected from two SI classes comprising a total of 22 students.

The source text selected for this experiment was a fictitious speech created by editing and merging two speeches delivered by Korea's Prime Minister Han Duck-soo in July and September of 2023, respectively. The speeches were both on the topic of Korea's population crisis brought about by the low birth rate, and on the government's population and immigration policies. Government speeches are frequently used as training materials in graduate schools for interpretation in Korea, and the participating students had previously practiced SI using government speeches, which tend to show great similarity in style and flow. In addition, considering the growing concern about the low birth rate and population aging in Korea, immigration and population policies were topics with which the students were assumed to be familiar. In short, the chosen source text was presumed to allow ample chances to attempt anticipation during SI. The source text was titled "Congratulatory Remarks delivered at the Population Future Forum," and the participants were informed of the title so they could understand the setting and context of the source speech. The source speech comprised 30 sentences, which included a total of 38 predicates, as some sentences contained more than one predicate. In this experiment, verbs used to modify the subject of a sentence or other nouns were not considered predicates; the focus was on the anticipation of predicates that convey the meaning of an action or state in the source text. According to the researcher's preliminary source-text analysis, among the 38 predicates identified, 19 were possible to anticipate based on (lexical) transitional probability, while the remaining 19 were possible to anticipate using extralinguistic resources, including the
interpreter's background knowledge, knowledge about the setting or the speaker, and the preceding ideas and sentences within the source speech. Given that whether anticipation of each predicate is enabled by linguistic or by extralinguistic resources can be ambiguous, this aspect was neither the topic nor within the scope of this study. Still, a source text containing predicates that could be anticipated based on either type of resource was deemed appropriate for this study and for further studies in the future.

Data Collection and Analysis

After being reviewed by a more experienced professor with decades of experience in teaching and researching SI, the source text was read out as the participants' midterm exam material by the two instructors of the two SI classes; the recording was about 6.5 minutes long. The participants had only a single try at executing the SI and recorded their interpretation in .mp4 or .wav format, then sent the files to the researcher immediately after they finished interpreting. Audacity, an open-source audio editing and recording application, was used to compare the source speech file with each of the interpretation output files sent by the participants. As shown in Illustration 1, Audacity visualizes the recording files, enabling easy comparison of the two audio files to capture whenever anticipation was attempted or made by each student. Whenever a predicate (including a modal verb that appears before the actual verb conveying the sense of action or state) was verbalized by a student before the corresponding predicate was said in the source speech, it was counted as an attempt to anticipate. In this study's analysis, when a participant uttered either a complete predicate or just the auxiliary (modal) verb part of a predicate almost simultaneously with the source sentence's predicate, it was categorized as "freewheeling." Freewheeling is when an idea or a sense is assumed to have been anticipated by the interpreter even though the verbalization was not necessarily executed before the equivalent word was said in the source speech. Predicates in English verbalized as freewheeling were counted toward the total number of anticipation attempts in this analysis.
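The counting rule described above can be made concrete with onset times read off the aligned waveforms. The sketch below is an illustration only; in particular, the numeric tolerance used to operationalize "almost simultaneously" is an assumption, as the text does not state one:

```python
# Classify one rendered predicate relative to the source predicate, using
# hypothetical onset times (seconds) read off the aligned Audacity tracks.
SIMULTANEITY_TOLERANCE = 0.5  # seconds; assumed threshold, not from the text

def classify_rendering(target_onset, source_onset, tol=SIMULTANEITY_TOLERANCE):
    """Label a predicate rendering as an attempt, freewheeling, or neither."""
    if target_onset < source_onset - tol:
        return "anticipation attempt"  # verbalized clearly before the source
    if abs(target_onset - source_onset) <= tol:
        return "freewheeling"  # near-simultaneous; still counted as an attempt
    return "no anticipation"  # produced only after hearing the source predicate

print(classify_rendering(target_onset=41.2, source_onset=43.0))  # attempt
```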
In cases where participants anticipated only the time-tense or auxiliary verbs, like should, could, or may, these attempts were given a weight of 0.5 each. Such partial attempts occurred when participants anticipated only modal verbs, which indicate the time-tense or modalities such as likelihood, ability, permission, obligation, necessity, or advice. The researcher assumed complete anticipation when both the auxiliary verb and the subsequent verb with semantic content were verbalized before the source speech expressed the corresponding predicate, warranting a weight of 1 for the attempt count. However, if an interpreter anticipated only the modal-verb portion and waited for more information to complete the predicate, it was considered a deliberate choice, labeled a "partial" anticipation attempt, and given a weight of 0.5 for statistical purposes. For example, when anticipating in order to interpret the source sentence, "Tto-han, yug-a-dol-bom-e dae-han bu-dam-eul wan-hwa-ha-gi wi-hae, seo-ul-si-leul dae-sang-eu-lo 100myeong gyu-mo-ui oe-gug-in ga-sa-gwan-li-sa si-beom-sa-eob-do chu-jin-ha-go iss-seub-ni-da" [To lessen the burden of childrearing, (the government) is putting forth a pilot project to allow 100 foreign caretakers to be employed in the City of Seoul], Student D predicted only the modal verb that signifies the time-tense by saying "we are" first, and then added "providing support for childcare" after hearing "oe-gug-in ga-sa-gwan-li-sa si-beom-sa-eob-do" [a pilot project to allow foreign caretakers' employment]. In this instance, only the component that signifies the time-tense was anticipated, not the whole predicate, and the attempt was therefore assigned 0.5 of an anticipation attempt.

When it came to evaluating and statistically quantifying the accuracy of the anticipations made by the participants, the researcher, a professional conference interpreter and interpreting instructor with more than two years of teaching experience, had to use her judgment. For the accuracy analysis, the researcher evaluated both the time-tense and the verb choices in each of the interpreted sentences. The accuracy evaluation took interpretation shifts, or paraphrasing, into account. If the source sentence, "Jon-gyeong-ha-neun nae-oe gwi-bin yeo-leo-bun, 'in-gu-mi-lae-po-leom 2023' gae-choe-leul jin-sim-eu-lo chug-ha-hab-ni-da" [Distinguished guests, congratulations to all of you on the opening of the 2023 Population Future Forum], had been interpreted with the verb "welcome" instead of "congratulate" or "congratulations," such a shift in interpretation was accepted, and the output sentence was evaluated as a correctly anticipated interpretation. Such an interpretation is labeled "general anticipation" in this study, to refer to a moderate level of exactness of the anticipated word or expression, meaning that a relatively less exact word-for-word translation was produced. A similar approach was taken throughout the evaluation, meaning the researcher was well aware that multiple interpretation versions were possible and could be considered accurate for each Korean predicate. For instance, predicates such as "no-lyeog-ha-go iss-seub-ni-da" [(the government is) working to], "chu-jin-ha-go iss-seub-ni-da" [(the government is) putting forth], and "jun-bi-ha-go iss-seub-ni-da" [(the government is) preparing to put forth] carry the somewhat general meanings of "working hard on something" or "aiming to implement something," and these predicates can be interpreted in various ways using a wide variety of verbs in English.
To quantify anticipation accuracy rates, each student's total count of accurate anticipations was divided by his or her total count of anticipation attempts, which produced the accuracy rates in percentage terms. When a participant attempted to anticipate a whole predicate made up of a modal verb and the "main" verb (bon-dong-sa or bon-yong-eon in Korean) following it, but managed to predict only one of the two (either the modal verb or the following main verb) correctly, the anticipation was deemed incorrect, which led to the deduction of one whole point, instead of 0.5 point, from the accuracy score. For instance, Student F anticipated the predicate of the sentence, "In-gu-mi-lae-po-leom-eun geu-dong-an 4cha san-eob-hyeog-myeong, AI, gi-hu-wi-gi, in-gu-mun-je deung-eul da-lu-myeon-seo dae-han-min-gug-i na-a-gal bang-hyang-eul mo-saeg-hae-wass-seub-ni-da" [The Population Future Forum has discussed topics such as the Fourth Industrial Revolution, AI, the climate crisis, and the population problem, and sought ways forward for the Republic of Korea], by verbalizing "we have, we are going to talk about..." Although Student F tried anticipating the entire predicate before the predicate was said in the source speech, the student inaccurately repaired the time-tense (modal verb) part of the predicate. This case was deemed an incorrect anticipation, leading to the deduction of one whole point for accuracy, not 0.5 point. The detailed accuracy evaluation results can be found in Appendix 2.
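A minimal sketch of this scoring scheme follows. The data are illustrative, and one ambiguity is resolved by assumption: the text does not state whether the accuracy denominator uses the weighted or the absolute attempt count, so the sketch uses the absolute count:

```python
# Score one participant's attempts, each recorded as (kind, accurate) where
# kind is "full" (whole predicate) or "partial" (modal verb only).
def score_participant(attempts):
    """Return the weighted attempt count and the accuracy rate (%)."""
    # A modal-verb-only ("partial") anticipation counts as 0.5 of an attempt.
    weighted = sum(1.0 if kind == "full" else 0.5 for kind, _ in attempts)
    # Accuracy is all-or-nothing: a wrong modal verb in an otherwise
    # anticipated predicate costs a whole point, not half a point.
    correct = sum(1 for _, accurate in attempts if accurate)
    return weighted, 100.0 * correct / len(attempts)

attempts = [("full", True), ("partial", True), ("full", False), ("full", True)]
weighted, accuracy = score_participant(attempts)
print(f"weighted attempts = {weighted}, accuracy = {accuracy:.1f}%")  # 3.5, 75.0%
```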
Results

All of the 22 participants' names have been anonymized using alphabet letters. Anticipation frequency was analyzed by counting how many predicates were anticipated, regardless of accuracy, out of the total of 38. Anticipation accuracy was determined by counting how many of the 38 predicates each interpreter correctly anticipated, and by comparing the total number of accurate anticipations to the total number of anticipation attempts. These calculations produced the ratio of accurate anticipations to the total 38 predicates and the accuracy rate of attempted anticipations.

Anticipation Frequency

Table 1 below illustrates the anticipation attempts made by each participant, both as the absolute number of anticipation attempts (anticipation of a modal verb weighted equally with anticipation of the entire predicate) and as the number of attempts counted after assigning a 0.5 weight to each attempt at anticipating a modal verb only. The number of anticipation attempts made by the participants ranged from a minimum of five to a maximum of 21. Student A made the most frequent anticipation attempts: 23 in absolute terms and 21 in weighted terms. Student P made only five anticipation attempts, the fewest among the 22 participants. The minimum number of anticipation attempts was the same in both the absolute and the weighted counts, as Student P always anticipated a predicate in its entirety, never attempting to anticipate only the modal-verb part of a predicate. The average number of anticipation attempts made by the 22 participants was 12.3, which translates to 32.48% of the 38 predicates, on average, being anticipated by the students. Figure 1 below visualizes the statistics on anticipation attempts in the form of a graph. In this study's analysis, the number of anticipation attempts calculated by giving a 0.5 weight to each attempt to anticipate only the modal verb of a sentence is used for discussion.

Anticipation Accuracy

Table 2 illustrates each participant's anticipation accuracy in terms of the accuracy rate of attempted anticipations. The accuracy rates of attempted anticipations seem unexpectedly high, with an average value of 76.81%. Looking only at the figures in the far-right column, readers might be misled into believing that the participating students were skilled at using anticipation to its best effect. Table 2 therefore also shows the number of anticipation attempts made for the total of 38 predicates in fractions, so readers can view each participant's anticipation accuracy with due caution: since the number of attempted anticipations varies by participant, the accuracy rates can appear significantly high among those who made significantly fewer anticipation attempts. As reported in the table and graphs above, between a minimum of 7.89% and a maximum of 51.32% of the 38 predicates in the source speech were accurately anticipated by the students. The results indicate that, on average, the participants were able to accurately anticipate more than 25% of the predicates (25.18%) verbalized in the source speech. The data on the accuracy rates of attempted anticipations provide more valuable insights for analysis and discussion. The accuracy rate of attempted anticipations was at minimum 53.85% and at maximum 100%. Although the maximum value of 100% was achieved only by Student L, an outlier who made significantly fewer anticipation attempts (only six) and never tried guessing the modal-verb part of a predicate, anticipation seems to have served as a quite effective interpretation strategy for the participants, as suggested by the average anticipation accuracy rate of 76.81%, with a standard deviation of 0.12. Although direct comparisons of the participants' anticipation rates are neither within the scope nor the focus of this study, merging the statistics on anticipation attempts with the statistics on anticipation accuracy would be useful for directly comparing the accuracy of anticipation between participants who made the same number of anticipation attempts. More will be discussed about anticipation accuracy rates in the following section.

[Figure: correlation between the anticipation attempt rate (weighted) and the accuracy rate of attempted anticipations.]
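For reference, the attempt-rate versus accuracy-rate correlation summarized in the figure above could be computed as in the sketch below; the per-participant values shown are hypothetical (the study had 22 participants):

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant values: weighted attempt rate as a share of
# the 38 predicates, and accuracy rate of attempted anticipations.
attempt_rate = np.array([21.0, 5.0, 12.0, 9.0, 19.0, 14.0]) / 38 * 100
accuracy_rate = np.array([83.3, 60.0, 75.0, 53.9, 80.0, 78.6])

r, p = stats.pearsonr(attempt_rate, accuracy_rate)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
# On the study's actual data, the correlation was discernible but weak.
```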
Anticipation Attempts

The researcher hypothesized that, given the structural differences between Korean and English, the participants would have no choice but to anticipate the predicates, especially the sentence-ending predicates. The conjecture was corroborated by the experiment results. All of the participants attempted anticipation of predicates at least five times throughout their interpretation. On average, the participants attempted anticipation for 12.3 predicates. Although the number of anticipation attempts varied by participant, four participants made anticipation attempts for at least nineteen predicates, accounting for 50% or more of the total predicates used in the source speech. Student A's interpretation, which exhibited notably frequent anticipation attempts (21 in total) and a fairly high accuracy rate (83.33%), caught the researcher's attention, as Student A's native language is English, not Korean. In fact, students who have English as their native language and are not Korean nationals are rare in graduate school programs in Korea. Student A's active use of anticipation and high accuracy raised the possibility of the native language affecting the use and accuracy of anticipation in SI. However, another student (Student B), also a native English speaker and a foreign national, attempted anticipation only nine times in the same experiment, and his anticipation accuracy was 23.68%, which was not impressive compared to Student A's. Because understanding the influence of one's native language on the use of anticipation as a simultaneous interpretation strategy would require a better-designed experiment with controlled variables, this experiment does not offer robust evidence attributing Student A's unusually active use of anticipation and high accuracy to the native-language factor. This study also did not consider each participant's English proficiency level, and therefore did not capture instances where participants attempted anticipation but failed to deliver the anticipated predicate in English in a timely manner. It is possible that anticipation of sense, or of corresponding words in the target language, occurred more frequently but was not observed in the form of interpretation output. With the data collection methodology currently available, however, there seems to be no way to observe or capture such instances. Anticipation attempts were more frequent in the beginning and final parts of the source speech. For instance, the predicates in the sentences that greet the audience, introduce the topic or purpose of the event, and thank the host, which typically appear at the beginning of most speeches, were anticipated by almost all of the participants. The very first sentence of greeting was accurately anticipated by all participants without exception. The predicate of the second sentence, which introduced the topics discussed at the event, was anticipated by 21 participants; only one participant did not attempt it. The second-to-last sentence of the source speech, "O-neul i ja-liga in-gu-mun-je-e dae-han gong-gam-dae-leul hwag-san-ha-go geon-seoljeog-in dae-an-eul ma-lyeon-ha-neun mae-u tteus-gip-eun non-ui-ui jang-i doe-gi-leul gi-dae-hab-ni-da" [I hope today's event
will be a venue for very meaningful discussion to share awareness of population issues and produce constructive solutions], which conveys the speaker's best wishes for the successful hosting of the event and its fruitful discussion, is another case in point. It is a cliché statement in Korean government speeches, and its predicate was anticipated by all participants except one. These cases attest that the participants know how government speeches typically unfold and have ample experience practicing interpretation of government speeches in Korean. The participants evidently know how to draw on their extralinguistic resources to anticipate. The concentration of anticipation attempts at the beginning and end of the speech suggests that an understanding of the text flow makes anticipation easier for the participants.

Correlation between Anticipation Attempts and Anticipation Accuracy

The notably high anticipation accuracy of Students A, K, and N, all of whom attempted anticipation for about half of the 38 predicates (attempt rates of 50% to 55.26% among the three participants), prompted the researcher to examine the correlation between the number of anticipation attempts and the anticipation accuracy rate. When calculating the correlation coefficient, the values of two outliers, Students L and P, who attempted anticipation only six and five times respectively, were deliberately excluded. Students L and P made too few anticipation attempts for their accuracy rates to be reliable. In particular, Student L attempted anticipation only six times, and all six anticipations were correct. Student L's 100% accuracy rate does not necessarily indicate sophisticated use of anticipation during SI; rather, it suggests that the student ventured to predict predicates only when highly assured of accuracy. Meanwhile, Student P made even fewer anticipation attempts, yet achieved an accuracy rate of only 60%, suggesting that fewer attempts do not necessarily lead to higher anticipation accuracy. With Students L and P's data included, the correlation coefficient between anticipation attempt rates and accuracy rates of attempted anticipation stood at only 0.170. When the two outliers' values were excluded, the correlation coefficient rose to 0.250. Evans (1996) suggests the following guide for describing the strength of the correlation coefficient r: 0.00–0.19, very weak; 0.20–0.39, weak; 0.40–0.59, moderate; 0.60–0.79, strong; and 0.80–1.00, very strong. By this guide, the resulting correlation coefficient of 0.250 indicates a weak positive correlation. Figure 3 illustrates this result, suggesting that a positive correlation, albeit weak, exists between the two variables. It is worth noting that the participants who attempted anticipation for at least half of the total 38 predicates, with attempt rates well beyond the average of 36.84% (unweighted), achieved relatively high accuracy rates of attempted anticipations. For instance, Students A, K, N, and U, whose anticipation attempt rates ranged from 48.68% to 55.26%, achieved accuracy rates ranging from 78.38% to 97.50%.
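A minimal sketch of this kind of correlation analysis follows (illustrative only: the per-student values below are invented rather than the study's data, and the outlier rule simply drops students with very few attempts, mirroring the exclusion of Students L and P):

```python
# Pearson correlation between anticipation attempt counts and accuracy rates,
# with low-attempt outliers excluded, plus Evans's (1996) strength labels.
from scipy.stats import pearsonr

def evans_strength(r: float) -> str:
    """Describe |r| using Evans's (1996) guide."""
    a = abs(r)
    if a < 0.20:
        return "very weak"
    if a < 0.40:
        return "weak"
    if a < 0.60:
        return "moderate"
    if a < 0.80:
        return "strong"
    return "very strong"

# Invented per-student values: (weighted attempts, accuracy rate of attempts).
students = {
    "A": (21, 0.833), "K": (20, 0.975), "N": (19, 0.784), "U": (18.5, 0.90),
    "B": (9, 0.55),   "L": (6, 1.00),   "P": (5, 0.60),   "Q": (12, 0.70),
}

MIN_ATTEMPTS = 7  # drop students with too few attempts for a reliable rate
kept = [(a, acc) for a, acc in students.values() if a >= MIN_ATTEMPTS]
r, p = pearsonr([a for a, _ in kept], [acc for _, acc in kept])
print(f"r = {r:.3f} ({evans_strength(r)}), p = {p:.3f}")
```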
An assumption can be made that participants who were confident about the accuracy of their anticipations employed anticipation as an interpretation strategy more actively. The reverse assumption is also possible: the more anticipations one attempted, the higher the accuracy rate, cautiously suggesting that interpreting trainers' encouragement of anticipation in SI from Korean into English could be effective in improving students' anticipation. While it is not appropriate to single out one interpretation strategy when teaching SI, the effectiveness of anticipation can be illustrated using the data presented in this study. The weak yet existent correlation between anticipation attempts and anticipation accuracy could be used to persuade students who are often hesitant to attempt anticipation because of their lack of experience and initially high inaccuracy.

Attempts to Anticipate Auxiliary Verbs Only

An interesting pattern emerged from the analysis of the participants' interpretation output. Initially, the researcher had expected to observe anticipation of each predicate in Korean (the source language) as a whole, comprising both the time-tense or modal verb and the actual verb or adjective carrying the semantic content. Except for three participants (Students L, P, and Q), who anticipated whole predicates in all of their attempts, most participants sometimes anticipated and verbalized only auxiliary verbs (also known as modal verbs), such as "will," "be going to," "should," and "need to," and waited to hear more words in the source language before saying the actual verbs to complete the predicates. Although some might view this as an instance of the stalling strategy, the researcher believes that by verbalizing modal verbs, which also signal the time-tense of each sentence, the participants made the intentional choice of anticipating the time-tense as well as the overall direction in which each sentence would unfold or develop its meaning. It can be considered a safer approach that students tend to take to compensate for a lack of confidence in their anticipations.
Except for Student T, five of whose six anticipations of modal verbs were incorrect, almost all participants achieved relatively high success rates in anticipating the auxiliary verbs. In other words, the participants were quite good at anticipating the time-tense of sentences as well as their intention, whether that intention was to induce or encourage someone to take action or to announce the government's plans for the future. Although a time lag between the moment a modal verb is said and the moment the real verb (carrying the actual semantic content) is uttered is not ideal, student interpreters seem to resort to this approach to lower the input load by at least anticipating and verbalizing the modal verb, and to increase their interpretation accuracy by listening for more words, or for the actual verb, before finishing the interpretation of predicates. Just as the participants tapped into their contextual knowledge and their understanding of how Korean speeches typically flow to anticipate more actively at the beginning and end of the speech, they may also have had more confidence when predicting modal verbs, including the time-tense, owing to knowledge-based and experience-based anticipation of what would be said next in the typical flow of Korean government speeches. Refining this tactic under the umbrella of the anticipation strategy and teaching it properly may help increase students' overall anticipation accuracy and efficiency. The tactic of anticipating only the modal verbs was employed more actively towards the end of the source speech. Especially when interpreting relatively lengthy sentences containing more than one predicate in a parallel syntactic structure, the participants opted to anticipate the modal verb only, well in advance, and then completed their interpretation output after hearing more cues from the source speech or even upon hearing the action verb or adjective. For instance, when the sentence "U-li jeong-bu-do ji-geum-kka-ji-ui in-gu-jeong-chaeg-eul myeon-milhi geom-to-ha-yeo, deo hyo-gwa-jeog-i-go che-gam-do nop-eun jeong-chaeg-eulo ba-kkwo-na-ga-gess-seub-ni-da" [By thoroughly reviewing (or examining) the existing population policy, the government will make revisions to develop a more effective policy that produces tangible results] was uttered in the source speech, 16 of the 22 participants anticipated the time-tense first by uttering "will" and then completed the interpretation of the first predicate after hearing the action verb "geom-to-ha-yeo" [to review or to examine]. This pattern was repeated for the following sentence, "U-li sa-hoe jeon-ban-eul yug-a-chin-hwa-jeog-eu-lo jae-seol-gye-ha-go, go-yong, gyo-yug, ju-geo deung gu-jo-jeog-in mun-je-leul pul-eo-na-ga-gess-seub-ni-da" [The government will redesign Korean society to make it easier for childrearing and address structural issues related to employment, education, and housing].
Eleven of the 22 participants anticipated the modal verb signifying the time-tense first and then added the actual action verb equivalent to "redesign" after hearing more input, or even the Korean word itself. Although the tactic of anticipating the modal verb only was not confined to the latter sentences of the source speech, its use was clearly more frequent in the latter part. As the source speech reached the latter part, where longer sentences were uttered, the participants would have had more cues and confidence prompting them to anticipate at least the modal verb. In addition, the latter part of the speech happened to include more lengthy sentences than the beginning. The longer sentences seem to have forced the participants to anticipate at least the modal verb to lower the memory load and to give themselves a chance to "divide and conquer" each sentence.

Conclusion

It is evident that, while the degree of anticipation varied among the students, a significant number of them actively engaged in anticipation tactics: the average number of anticipation attempts made for the total of 38 predicates was 14, and with attempts at modal verbs weighted as 0.5 of an attempt, the average was 12.3. This suggests that the students attempted anticipation for almost a third of the predicates in the speech. Notwithstanding their lack of experience and expertise, which could often lead to inaccurate anticipation, the students seem to understand the necessity of predicting predicates when simultaneously interpreting Korean into English. The experiment results also revealed that most of the students (all except three) employed the tactic of anticipating, or attempting to anticipate, at least the time-tense or modal verb of a sentence. In these cases, the students would wait to hear more information or words and then complement their initial anticipation with the "real verb" carrying the sense. Indeed, the accuracy rates for anticipating only the auxiliary verbs were relatively high, except for one outlier who predicted five of six auxiliary verbs incorrectly. Although further qualitative analysis, including interviews, is needed to understand why this tactic was employed, it can be concluded that interpreters still in training also draw on extralinguistic resources, including preceding ideas or sentences and the overall context of a speech, to make anticipation attempts, even if these are incomplete attempts requiring repair or complementation.
This analysis, based on the experiment observations, can be developed into a research hypothesis that novice interpreters, such as student interpreters, tend to make anticipation attempts in a very safe manner, choosing to anticipate only the words about whose accuracy they are quite sure. That is why this study examined the correlation between anticipation attempts and anticipation accuracy. The correlation coefficient was 0.250, a value suggesting a weak positive correlation between the two variables. Despite this weak correlation, when examining the values produced by students who made significantly more anticipation attempts (more than 18; Students A, K, N, and U), there appears to be a clearer link between active anticipation and anticipation accuracy. Students A, K, N, and U exhibited (weighted) anticipation accuracy rates ranging from 73.38% to 97.50%. Albeit confined to a few students, the high accuracy rates observed among those who actively employed anticipation may suggest the potential effectiveness of anticipation as a strategy in SI from Korean into English.

This study is not without its limitations. First, whether the selected source text's difficulty level, in terms of both content and sentence structure, was appropriate for the participants may be up for debate. The source text was written and delivered as a speech; however, such a speech does not represent the full spectrum of texts that interpreters encounter in training and in professional settings. In particular, the source text contains specific information about the Korean government's policies on the topic. Anticipating such information accurately might have been challenging for the participants even if they had the capacity to utilize both linguistic and extralinguistic cues for successful anticipation. Ideally, the source text would have been peer-reviewed and selected by multiple professional interpreters and interpreting trainers. Perhaps a similarly designed experiment should be conducted again with a more spontaneous, spoken-language-style source text. In addition, when analyzing the students' anticipation of the modal verb(s) in a sentence, it could not always be clearly determined whether each attempt was truly the result of the anticipation strategy or a way to stall interpretation (verbalization) until more words were heard. In other words, some of this study's analyses relied on the researcher's subjective evaluation. Although the researcher has several years of experience teaching and evaluating interpreting students, the researcher's decisions on when and at which point anticipation occurred may not always be accurate. The distinction between stalling and anticipating a modal verb was blurry at times, but the researcher decided to view the anticipation of a modal verb as the interpreter's intentional choice to attempt predicting the time-tense or the general direction of the sentence's semantics.
Moreover, this study was conducted solely on student interpreters, highlighting the need for further research involving professional interpreters to draw more accurate, concrete conclusions about the anticipatory tactics used in SI from Korean into English. This study discovered a tendency among the students to anticipate auxiliary verb(s) only and wait for more source text input to complement their anticipation of predicates. A similar experiment needs to be replicated with professional interpreters to examine whether this tendency or tactic is unique to student interpreters. If it is not, the observed tactic of anticipating auxiliary verbs could be considered a tried and proven tactic deserving to be taught by interpreter trainers.

Nevertheless, this study has some implications for interpreters in training and their trainers. The findings emphasize the potential effectiveness of active anticipation in improving interpretation accuracy, suggesting that interpreter trainers could benefit from emphasizing and refining anticipation skills as a core component of training. Moreover, the study's identification of a specific tactic, anticipating auxiliary verbs only and complementing the anticipation after more input arrives from the source text, provides interpreter trainers with valuable insight into a cautious coping mechanism for utilizing anticipation, employed by inexperienced and often unconfident students. This result can be studied further to offer insights on how to guide students to engage more actively in anticipation and to increase the accuracy of their anticipation in SI. Furthermore, a more in-depth analysis of which resources each participant used to make anticipation attempts would be useful in explaining which cues, or what kinds of contextual or extralinguistic information, aided their anticipation. In particular, if there were linguistic or contextual cues and specific lexical transitions that made anticipation easier, such examples should be highlighted by instructors who introduce and encourage the use of anticipation in SI from Korean into English. A series of qualitative interviews with the participants may help identify both useful resources for effective anticipation and hurdles students face in making predictions. Overall, the study contributes to the advancement of interpreter training by shedding light on the role of anticipation in simultaneous interpretation. In addition, this empirical study is one of the rare studies to analyze a relatively large sample of 22 students' Korean-English SI renderings to identify and capture when, how often, and whether accurate anticipation occurred. The data and results of this study could serve as a basis for interpreter trainers to develop and incorporate activities and assignments aimed at enhancing anticipation skills, leveraging both lexical transition probability and extralinguistic resources, including contextual knowledge.

Figure 2: Anticipation accuracy

Figure 3: Correlation between anticipation attempts and anticipation accuracy
2024-05-25T15:20:24.993Z
2024-04-30T00:00:00.000
{ "year": 2024, "sha1": "1c7f72e2f1430932c3a96981eac62ac4e6e1a35a", "oa_license": "CCBYNC", "oa_url": "https://incontextjournal.org/index.php/incontext/article/download/77/47", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "9c237093f064942cda4a2ee73b3d0835e0c7ac67", "s2fieldsofstudy": [ "Linguistics" ], "extfieldsofstudy": [] }
231709695
pes2o/s2orc
v3-fos-license
Towards Domain Invariant Single Image Dehazing

Presence of haze in images obscures underlying information, which is undesirable in applications requiring accurate environment information. To recover such an image, a dehazing algorithm should localize and recover affected regions while ensuring consistency between recovered regions and their neighbors. However, owing to the fixed receptive field of convolutional kernels and non-uniform haze distribution, assuring consistency between regions is difficult. In this paper, we utilize an encoder-decoder based network architecture to perform the task of dehazing and integrate a spatially aware channel attention mechanism to enhance features of interest beyond the receptive field of traditional convolutional kernels. To ensure consistent performance across a diverse range of haze densities, we utilize a greedy localized data augmentation mechanism. Synthetic datasets are typically used to ensure a large number of paired training samples; however, the methodology used to generate such samples introduces a gap between them and real images, accounting only for uniform haze distribution and overlooking the more realistic scenario of non-uniform haze distribution, which results in inferior dehazing performance when evaluated on real datasets. Despite this, the abundance of paired samples within synthetic datasets cannot be ignored. Thus, to ensure performance consistency across diverse datasets, we train the proposed network within an adversarial prior-guided framework that relies on a generated image along with its low and high frequency components to determine whether the properties of dehazed images match those of the ground truth. We perform extensive experiments to validate the dehazing and domain invariance performance of the proposed framework across diverse domains and report state-of-the-art (SoTA) results.

Introduction

Visibility degradations arising from environmental variations such as haze, smoke, and fog affect image quality by concealing underlying information, which is undesirable in applications where accurate surrounding information is necessary for safe operation, such as autonomous vehicles, aerial robots, and intelligent infrastructure. To overcome complications arising from deteriorations such as haze and fog, image dehazing has been extensively studied to recover a clean image from its degraded version. Common approaches rely upon haze estimation using the atmospheric scattering model (McCartney 1976; Nayar 2000, 2002) (Eq. 1), which establishes a pixelwise (x) relationship between the ambient light intensity (A) and the transmission matrix t(x) = e^{-βd(x)} (representing the fraction of light reaching the camera sensor), using scene depth (d(x)) and scattering coefficient (β), to generate a hazy image (I(x)) from a clean image (J(x)):

I(x) = J(x)t(x) + A(1 - t(x))    (1)

Traditional computer vision based dehazing algorithms relied upon handcrafted priors such as dark channel (He, Sun, and Tang 2010), color attenuation (Zhu, Mai, and Shao 2015), bi-channel (Jiang et al. 2017), and color lines (Fattal 2014; Berman, treibitz, and Avidan 2016) to estimate the atmospheric light or transmission map and recover the dehazed image, following the atmospheric scattering model. However, strong reliance on priors makes these methods vulnerable in scenarios where the priors don't hold. To avoid dependence on priors, different models leveraging the feature extraction capabilities of convolutional neural networks (CNNs) were proposed, following either the atmospheric scattering model (He, Sun, and Tang 2010; Cai et al.
2016; Zhang and Patel 2018) or an end-to-end approach (Ren et al. 2018; Li et al. 2017; Mei et al. 2018a; Chen et al. 2019; Engin, Genç, and Kemal Ekenel 2018) to estimate dehazed images. Although learning based approaches represent the current state-of-the-art (SoTA), they require a large number of training samples accurately representing haze scenarios in different outdoor settings. Constructing such a large-scale real dataset is both expensive and time consuming, thus the atmospheric scattering model is used to generate synthetic haze corresponding to a clean image. However, such approaches are limited in considering the effect of airborne particles on different wavelengths, apart from the presence of wind, resulting in differences between real and synthetic images in the form of domain difference and haze distribution. This results in a performance gap between real and synthetic haze trained models when evaluated on either of the datasets.

To overcome the dual challenge of varying haze distribution and domain difference, in this paper we propose a framework for domain and distribution invariant dehazing. We begin by focusing on achieving consistent performance irrespective of haze distribution and highlight the importance of localizing haze affected regions as a necessary step towards effective dehazing. To attain such characteristics, we concentrate upon data augmentation and the architecture of the underlying CNN. Specifically, to generate non-uniform haze distributions on synthetic samples, we leverage the greedy localized data augmentation proposed in (Shyam et al. 2020), which copies multiple patches of varying shapes and sizes from a noisy image onto the corresponding paired clean image to generate non-uniform noise patches. For our purpose, this approach results in the generation of non-homogeneous haze. In order to accurately recover image regions affected by an unknown haze distribution, we utilize an encoder-decoder framework built upon UNet (Ronneberger, Fischer, and Brox 2015) to aggregate and represent features across different scales in a higher order latent space, which is subsequently used by a decoder to reconstruct the haze free image. To ensure information and color consistency between recovered patches and neighboring pixels, we aggregate features from multiple scales using our proposed spatially aware channel attention mechanism and fuse these features into the feature encoding obtained at the encoder. We are motivated by prior observations that highlight the domain gap arising from the sensitivity of CNNs towards high frequency (HF) components within an image, and by the ability of adversarial training to incorporate domain invariant properties by focusing upon generalizable patterns within samples. We therefore propose prior based dual discriminators that use low frequency (LF) and high frequency components along with the corresponding dehazed image to determine the similarity between the recovered image and the ground truth. We thus summarize our contributions as:

• Propose an end-to-end dehazing algorithm, directly recovering images affected by unknown haze distributions, using a spatially aware channel attention mechanism within a CNN architecture to ensure feature enhancement and consistency between recovered and neighboring pixels.

• Integrate a local augmentation technique to ensure the network learns to identify and recover haze affected regions in real and synthetic images.
• Perform exhaustive experiments to highlight performance inconsistencies between networks trained on synthetic and real datasets, and attribute these to weak modeling of haze.

• To ensure consistent performance across synthetic and real datasets, introduce a prior based adversarial training mechanism that leverages LF and HF components within an image to ensure retention of color and structural properties within the recovered image.

Related Works

Single Image Dehazing: Image dehazing algorithms can be categorized into model based and end-to-end. Model based algorithms utilize the atmospheric scattering model to recover haze affected images using either a prior based or a learning based approach. Among prior based approaches, (He, Sun, and Tang 2010) proposed using the dark channel prior, on the premise that, in haze-free regions, the pixel value of at least one color channel is close to zero, from which the transmission map and atmospheric light within an image can be estimated. Other approaches (Zhu, Mai, and Shao 2015; Fattal 2014; Berman, treibitz, and Avidan 2016) devise priors such as color attenuation and color lines for estimating the transmission map. Since prior based methods are sensitive to environmental variations, learning based approaches are utilized to leverage the feature extraction capabilities of CNNs to estimate different components of the atmospheric scattering model. Specifically, (Lu et al. 2016) used CNNs to estimate atmospheric light, (Cai et al. 2016) estimates transmission, and (Li et al. 2018a) estimates both transmission and atmospheric light to recover regions affected by haze. Recently, learning based approaches have shown considerable performance improvement in recovering haze affected regions in an end-to-end manner. (Ren et al. 2018) proposed an encoder-decoder formulation to encode features from hazy images, which are then extrapolated by a decoder to reconstruct haze free images. (Qin et al. 2020; Mei et al. 2018a; Li et al. 2018a; Liu et al. 2019b) followed a similar approach with modifications to the CNN network and loss functions. In contrast, (Qu et al. 2019) posed the task of dehazing as image-to-image translation and used a modified variant of Pix2Pix. To reduce reliance on paired datasets, (Engin, Genç, and Kemal Ekenel 2018) modified the CycleGAN formulation for the task of dehazing. However, these approaches are sensitive to domain changes between synthetic and real datasets. To overcome this, (Shao et al. 2020) proposed a domain adaptation mechanism to translate images from one domain to another, thereby aiming to achieve the best of image translation and dehazing. To generate more visually pleasing dehazed images, (Dong et al. 2020) proposed a fusion of frequency priors with the image in an adversarial learning framework. Unlike prior approaches, however, we emphasize disentangling frequency information (LF and HF) from an image to extract multiple priors and independently learn associations between LF (color) and HF (edge) components to retain color and structural consistency, beyond the traditional loss functions.

Domain Invariance: The feature extraction capabilities of CNNs lead to their SoTA performance on various tasks. However, performance inconsistencies arise when a domain gap exists between test and train sets. To overcome such scenarios, domain adaptation is proposed to perform either feature level (Tsai et al. 2018; Tzeng et al. 2017) or pixel level adaptation (Shrivastava et al. 2017; Bousmalis et al. 2017; Dundar et al. 2018; Hoffman et al. 2018).
Feature level adaptation minimizes the maximum mean discrepancy (Long et al. 2015) between source and target domains, while pixel level adaptation focuses upon image-to-image translation or style transfer to increase data in the source or target domain. However, reliance on a target dataset makes this approach less favorable. An extension, domain generalization, focuses on techniques that provide consistent performance across unknown domains by emphasizing the task, using stylization techniques (Somavarapu, Ma, and Kira 2020).

Achieving Domain Invariant Dehazing

The overall structure of the proposed framework comprises two parts, namely greedy data augmentation (Fig. 2) and an adversarial training framework (Fig. 1) comprising the dehazing and discriminator networks.

Greedy Localized Data Augmentation

In a realistic setting, haze can vary wildly across regions within an image (Fig. 2 (f)). Synthetic datasets, while providing a large number of paired samples, are not able to account for such non-homogeneous variations (Fig. 2 (d)), leading to inaccurate recovery in such scenarios (Fig. 5). A simplistic way to overcome this would be to enlarge the dataset to cover these variations, which is costly and time consuming. Thus, to train the network for such diverse scenarios, we leverage the greedy localized data augmentation technique proposed in (Shyam et al. 2020) to generate small hazy patches of random shape and size within clean images and task the CNN with recovering these affected images. This allows utilization of both real and synthetic datasets for homogeneous and non-homogeneous dehazing. Fig. 2 demonstrates the working of this augmentation mechanism, along with a visual comparison to synthetic, real, and non-homogeneous haze examples.

Network Architecture

For recovering haze affected regions, we build upon the encoder-decoder architecture UNet. The encoder of the proposed dehazing network is tasked with representing the noisy input image in a latent space using 4 convolution blocks (conv-blocks) that extract relevant features across different scales. Each conv-block comprises 2 chains of a 3 × 3 convolutional filter, a batch normalization layer, and a ReLU activation function. A max pool operation is performed after each convolution block to aggregate features while increasing the receptive field size for subsequent convolution blocks. To reconstruct the noise-free image, 4 upscaling blocks are used, each comprising a pixel shuffle layer (Shi et al. 2016) followed by a convolutional filter of size 3 × 3, batch normalization, and ReLU activation. Features obtained at each level of the encoder are concatenated with features at the corresponding level of the decoder via long skip connections. This ensures that fine grained features extracted in early layers are present in the noise-free image, helping to maintain the boundary properties of objects within the image. The result from the decoder is passed to a convolution layer with filter size 1 × 1; a minimal sketch of these building blocks follows below.
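As a concrete reference for the encoder and decoder building blocks described above, here is a minimal PyTorch sketch (our own reading of the text, not the authors' released code; channel widths are illustrative):

```python
# Conv-block (two chains of 3x3 conv + BatchNorm + ReLU) and pixel-shuffle
# upscaling block: the building units of the encoder and decoder respectively.
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)

class UpBlock(nn.Module):
    """Pixel shuffle (2x upsampling) followed by 3x3 conv + BN + ReLU."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.PixelShuffle(2),  # divides channels by 4, doubles height/width
            nn.Conv2d(in_ch // 4, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)

# Shape check: one encoder block + max pool, then one decoder upscaling step.
x = torch.randn(1, 3, 64, 64)
feat = nn.MaxPool2d(2)(ConvBlock(3, 32)(x))   # -> (1, 32, 32, 32)
print(UpBlock(32, 16)(feat).shape)            # -> torch.Size([1, 16, 64, 64])
```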
Spatially Aware Channel Attention (SACA): While such an encoder-decoder architecture tends to work on homogeneous haze distributions, in the case of non-homogeneous haze the affected regions might extend beyond the receptive field of the convolutional kernels, resulting in weak representations being extracted along the different scales of the encoder. Thus, dynamic adjustment of the receptive field based on haze distribution is required to encompass relevant features. To provide this, we propose a spatially aware channel attention mechanism comprising a non-local operation followed by a channel attention mechanism. The non-local operation captures long-range dependencies across the spatial dimension, while the channel attention mechanism filters important channels within the feature map. Adding the channel attention mechanism helps reduce the computational cost, thus allowing deployment of such blocks at different scales. The channel attention mechanism (Fig. 3) is constructed using a 1 × 1 convolution, global average pooling, and a softmax activation layer, and works by amplifying relevant channels while suppressing irrelevant ones. To maximize the effect of the spatially aware channel attention layer, we place these layers in the long skip connections; we justify this design choice as ensuring that the long skip connections carry relevant local features, by refining the complete feature map corresponding to a particular scale without modifying features within the conv-blocks of the encoder. The refined features from each SACA block are included in the final embedding representation by performing a max pool operation (of varying size) to match feature map sizes; we highlight that this mechanism enriches the feature space by concatenating additional features.

Frequency Prior based Discriminators for Adversarial Training

The domain gap between synthetic and real haze samples adversely affects the performance of the underlying dehazing algorithms (Tab. 1). Thus, to obtain a domain invariant dehazing algorithm, we take advantage of the frequency domain and propose frequency prior based discriminators that rely on both the high and low frequency components of an image to determine whether a recovered image matches the ground truth. The discriminator architecture comprises 6 conv-blocks resulting in a multi-dimensional output, similar to PatchGAN, with a patch size of 64. We utilize two independent discriminators with the same architecture but using different frequency priors, thereby obtaining different sets of weights. We base this design choice on two observations:

1. HF components cover edge information, while LF components cover structure and color information. In this context, the intensity of LF components is larger than that of HF components, which might lead to LF components gaining more importance in the adversarial process.

2. Prior work highlights that, during early optimization, LF components are learned first owing to a steeper descent of the loss surface.

These observations incline us to introduce two discriminators to avoid over-reliance on one component over the other while optimizing the complete framework. Monitoring the optimization process ascertains that both LF and HF components are learned. To train the discriminators, for a given image we first extract its high and low frequency components using laplacian and gaussian filters (of filter size 3 and 7 respectively) and concatenate them with the original image. To ensure a standard pixel scale, we normalize the HF components before concatenation. Thus, for a given pair of a hazy image I_N and its corresponding reference haze-free image I_R, the dehazing network estimates a dehazed image G(I_N).

Framework Optimization

To train the proposed framework, we follow the standard GAN approach wherein the dehazing algorithm and discriminators are optimized alternately. The optimization function for the dehazing algorithm is composed of L1, SSIM (Wang et al. 2004), and perceptual (Johnson, Alahi, and Fei-Fei 2016) losses along with the dual adversarial loss, where λ_1 and λ_2 are the terms balancing the two adversarial (LF and HF) components.
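A minimal sketch of this composite objective follows (our reading, not the authors' code: the unit weights on the L1/SSIM/perceptual terms and the least-squares adversarial form are assumptions):

```python
# Generator objective: L1 + SSIM + perceptual losses plus dual adversarial
# terms from the LF- and HF-prior discriminators, weighted by lambda1/lambda2.
import torch
import torch.nn.functional as F

def generator_loss(pred, target, d_lf_out, d_hf_out, ssim_loss, perc_loss,
                   lambda1: float = 0.5, lambda2: float = 0.5):
    l1 = F.l1_loss(pred, target)
    # Least-squares adversarial terms (assumed form): push discriminator
    # outputs on generated patches towards the "real" label of 1.
    adv_lf = torch.mean((d_lf_out - 1.0) ** 2)
    adv_hf = torch.mean((d_hf_out - 1.0) ** 2)
    return (l1 + ssim_loss(pred, target) + perc_loss(pred, target)
            + lambda1 * adv_lf + lambda2 * adv_hf)

# Smoke test with dummy tensors; MSE stands in for the SSIM/perceptual terms.
pred, target = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
stub = lambda a, b: F.mse_loss(a, b)
print(generator_loss(pred, target, torch.rand(2, 1, 8, 8),
                     torch.rand(2, 1, 8, 8), stub, stub))
```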
In our experiments we set λ_1 = λ_2 = 0.5 to balance the LF and HF discriminators. We implement the proposed framework in PyTorch 1.6. The input is set to square patches of size 512, normalized to [0, 1]. ADAM (Kingma and Ba 2014) is used as the optimizer with β_1 = 0.5 and β_2 = 0.9, and with learning rates of 0.0001 for the dehazing network and 0.0003 for the discriminator networks, at a batch size of 4. Apart from the aforementioned greedy localized data augmentation (maximum patch size of 50 × 50), we also use random horizontal and vertical flipping as additional augmentation. For our experiments we utilize a system equipped with an Intel 8700K CPU and 64 GB RAM, with Nvidia Titan V and Titan RTX GPUs.

Experimental Evaluations

Datasets and Evaluation Metrics: To evaluate the performance of various algorithms across both synthetic and real datasets exhibiting different haze distributions, we utilize real datasets (NTIRE-18, NTIRE-19, and NTIRE-20) along with synthetic datasets such as SOTS-IN and Haze-RD.

Individual vs Aggregated Dataset for Strong Baseline: Synthetic datasets provide access to extremely diverse characteristics, such as scene settings, differing camera properties, and illumination conditions, covered by a large number of paired samples, making them indispensable despite their flaws in modeling haze. We begin by determining the performance of algorithms when trained and evaluated on datasets having the same distribution, summarizing the results in Tab. 2. We observe that dehazing algorithms perform well on test sets following a distribution similar to the training dataset, but their performance drops drastically when tested on datasets outside the training distribution, even for algorithms trained on synthetic samples. However, compared to previous methods, the performance drop of the proposed approach is not substantial, which we attribute to utilizing frequency priors while training. A common approach to achieving domain invariant performance is to increase dataset size by accumulating data from different sources. Following this, we aggregate the aforementioned datasets and evaluate on the individual sub test sets, with results summarized in Tab. 3. We conclude that this approach helps all algorithms achieve peak performance on real datasets such as NTIRE-19 and NTIRE-20. We further corroborate that all algorithms, including ours, benefit from the increased dataset size resulting from merging synthetic and real datasets. In this scenario the proposed model outperforms the top algorithm, DuRN-US, by 3.78 dB and 0.19, and by 6.84 dB and 0.21 (PSNR and SSIM respectively), on the NTIRE-20 and NTIRE-19 datasets, showcasing improved preservation of structural properties while improving the PSNR of the recovered image. The performance boost on real datasets comes with reduced performance on synthetic datasets. However, from the broader perspective of deploying these algorithms in real scenarios, such a performance trade-off between real and synthetic datasets is reasonable, provided the methods retain their performance when deployed in another domain. To evaluate this scenario, we refer to algorithms trained on the aggregated dataset as the strong baseline for further evaluation.

Performance on Datasets outside the Training Distribution: To ascertain whether the higher performance of algorithms trained on the aggregated dataset ensures performance retention in unknown domains, we use the Haze-RD and NTIRE-18 datasets for blind evaluation of the strong baseline. We summarize numerical results in Tab. 4 and visual results in Fig. 5.
While the proposed framework retains its performance on the NTIRE-18 dataset, the performance of all algorithms on Haze-RD drops significantly. However, the performance drop in terms of PSNR on NTIRE-18 is not substantial for either the proposed framework or the strong baselines. Upon visual examination of the dehazed images, we observe that while performance in PSNR terms is mostly retained, prior works could not dehaze images completely, with some regions still affected by haze; furthermore, the structural properties of recovered objects are not retained. In contrast, the proposed framework was not only able to remove haze but also preserved the color and structural properties of underlying objects to a substantial degree, demonstrating its effectiveness in unknown domains. We observe that using the greedy localized data augmentation technique (GLDA) significantly boosts the performance of the baseline model on both known and unknown datasets. We attribute this to the ability of the network to focus explicitly on haze affected regions. To corroborate this observation, we progressively introduce the SACA module with multi-scale feature aggregation (MSFA) and report continuous performance improvement in terms of both PSNR and SSIM, with SACA contributing more to SSIM and MSFA contributing more to PSNR. This validates the design choice of introducing these enhancements to preserve structural and feature properties respectively. While PSNR and SSIM improved considerably on NTIRE-19, the same was not observed for the SOTS-IN dataset. Thus, we examine the effect of using a HF prior based adversarial learning setup, which improves structural preservation across datasets but not the PSNR of recovered images. Subsequently, we introduce an additional LF prior based discriminator and observe that significant performance retention is achieved. This confirms our hypothesis that adding HF and LF prior based discriminators preserves structural and color consistency within recovered images, which was not possible when using a simple discriminator owing to weak supervision.

Conclusion

In this paper, we focused on the dual challenge of domain and haze distribution that significantly reduces the performance of dehazing models. To overcome this, we first proposed a spatially aware channel attention mechanism integrated within a CNN to increase the receptive field, and utilized local data augmentation to simulate non-uniform haze regions. We then trained the proposed network within an adversarial framework that uses high and low frequency components as priors to determine whether a given image is real or fake, which is shown to improve performance retention in unknown domains. We performed extensive experiments to demonstrate the effectiveness of the different components, enabling the proposed method to achieve SoTA and domain invariant performance.
2021-01-27T02:16:11.325Z
2021-01-09T00:00:00.000
{ "year": 2021, "sha1": "d3faa88ddedd47d5d2acb05a706dc5c6f5d14743", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "d3faa88ddedd47d5d2acb05a706dc5c6f5d14743", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
16730496
pes2o/s2orc
v3-fos-license
Crop Damage by Primates: Quantifying the Key Parameters of Crop-Raiding Events

Human-wildlife conflict often arises from crop-raiding, and insights regarding which aspects of raiding events determine crop loss are essential when developing and evaluating deterrents. However, because accounts of crop-raiding behaviour are frequently indirect, these parameters are rarely quantified or explicitly linked to crop damage. Using systematic observations of the behaviour of non-human primates on farms in western Uganda, this research identifies number of individuals raiding and duration of raid as the primary parameters determining crop loss. Secondary factors include distance travelled onto farm, age composition of the raiding group, and whether raids are in series. Regression models accounted for greater proportions of variation in crop loss when increasingly crop- and species-specific. Parameter values varied across primate species, probably reflecting differences in raiding tactics or perceptions of risk, and thereby providing indices of how comfortable primates are on-farm. Median raiding-group sizes were markedly smaller than the typical sizes of social groups. The research suggests that key parameters of raiding events can be used to measure the behavioural impacts of deterrents to raiding. Furthermore, farmers will benefit most from methods that discourage raiding by multiple individuals, reduce the size of raiding groups, or decrease the amount of time primates are on-farm. This study demonstrates the importance of directly relating crop loss to the parameters of raiding events, using systematic observations of the behaviour of multiple primate species.

Introduction

Understanding and addressing conflict between humans and wildlife due to crop-raiding is a crucial conservation issue [1,2]. Crops near forest are often predictable and accessible sources of nutrition for wildlife [3], and extensive damage through raiding can adversely impact farmer livelihood [4,5], compromise food security [6], reduce tolerance of wildlife [7], and undermine management strategies [8]. Conflict mitigation requires a comprehensive record of crop-raiding activity, including patterns of raiding, farmer and raider behaviour, crop losses, and the parameters of raiding events [9]. The literature on crop-raiding includes many accounts of non-human primates or other animals entering farms and raiding crops [10,11,12,13]; however, these are typically indirect or anecdotal rather than systematic observations of behaviour. There is also little empirical analysis of which attributes of crop-raiding events (CREs) determine amount of crop loss. Although raider age and/or sex, group size, crop-raiding experience, and distance from forest potentially influence the extent of raiding at a farm [14,15,16,17,18], few studies quantify these or other parameters of CREs, or confirm links to the amount of damage that occurs during a CRE. This information is essential when developing techniques to protect crops because (i) deterrents can be designed to address specific raiding characteristics and (ii) methods reducing damage directly have the largest impact on yields and greatest value for farmers [19]. The effectiveness of crop-protection techniques is reflected in crop loss per unit of cost and farmer effort [9]. Therefore, quantifying the CRE parameters that determine damage to crops also measures deterrent efficacy.
These parameters will be behavioural indices of the impact of deterrents and are likely to include how many individuals raid, how far they travel onto a farm, and how long they raid for. Related factors might include whether raids occur in series and/or the age composition of the raiding group. Age probably correlates with raiding experience for primates consuming crops [3]; compared to novice raiders, primates with greater experience should access or process crop items more efficiently, and avoid detection by farmers more frequently or for longer durations. Parameter values may vary across species and/or circumstances, and collectively probably reflect the tactics used by raiding animals. The research investigated the behaviour of multiple primate species to explore links between CRE characteristics and the resultant damage to crops. The parameters of CREs that determine farmers' losses were identified and quantified, to better understand which aspects of raider behaviour should be targeted by deterrents to reduce crop-raiding and manage conflict. This research is part of a study applying systematic observational methods to examine primate crop-raiding behaviour and develop effective conflict-mitigation techniques [9].

Study Site and Farms

The research was conducted at forest-agriculture interfaces around Budongo Forest Reserve in the northern Albertine Rift, western Uganda (Figure 1). The reserve comprises almost 790 km² of moist, semi-deciduous tropical forest and woodland, managed for timber harvesting through selective logging since the 1920s [20,21]. Study farms were located across six villages within Nyabyeya parish (Table 1), which has an ethnically diverse human population reliant on artisanal farming [22]. Contingencies undermining crop yields directly impact local food security, and many farmers perceive crop-raiding by wildlife as the major threat to their livelihood [4]. Mean annual rainfall is 1,500 mm, peaking in April and October; mean monthly temperature is 21°C [23]. The primary crop-growing season is from March to September. All study farms adjoined forest and were selected for (a) vulnerability to crop-raiding [4,16,22], (b) an extensive view of forest edges, (c) a range and distribution of crops representative of local farms, and (d) farmer support for the research objectives. The sample of farmers and farms reflected local demographic diversity and variation in farm size [4,5]; each farmer used guarding as their primary method of crop protection. Farms were mapped using 30 m measuring tapes, a GPSMAP 60CS global positioning system unit, and MapSource 6.11.5 (Garmin Ltd., Olathe, USA). Data compiled were topographic features, structures, perimeter characteristics, edge lengths, field areas, crop distribution, and crop abundance (derived from plot counts). Median farm extent perpendicular to forest was 183 m (range 41 m to 419 m); median length of farm-forest edge was 146 m (range 72 m to 312 m). Distances from farm edges to reference features or structures (e.g. trees, termite mounds, paths, or huts) were recorded to aid distance estimation. Each farm map included numbered sectors to describe locations rapidly and consistently; sector boundaries coincided with features or structures (Figure 2). Crops were along the farm-forest edge of all study farms and covered a median of 88% of farm area.
Maize (Zea mays) and beans (Phaseolus vulgaris) predominated across study farms (73% of total crop area) and locally; sorghum (Sorghum bicolor), bananas (Musa spp.), and cassava (Manihot esculenta, Manihot palmata) were also abundant. Median stem density per square metre was 2.9 for maize and 4.9 for beans. Each study farm adjoined other farms with crops, and therefore none were especially vulnerable to raiding due to isolation [6]. Timing of crop planting, growth, and maturity was equivalent across study farms and adjoining farms.

Data Collection

Data reported here were collected from February to September 2006 by GW and four other observers, including three Ugandans skilled in English and local languages. GW trained and assessed observers to ensure standardised procedures; all observers attained 100% accuracy for crop and primate identification. A crop-raiding event (CRE) was defined as when one or more individuals of a species entered a farm (i.e. crossed a farm boundary), interacted with one or more crop stems, and left the farm. A CRE commenced when the first individual entered the farm and ended when the last individual exited; duration was measured in seconds using digital stop-watches. A stem was one plant, stalk, or fruit of a crop, and a crop was deemed targeted if more-abundant or more-accessible crops were bypassed to acquire it. Primate age categories were adult (full species-sex-specific size), sub-adult (not fully grown, beyond infant development, exhibits independent behaviour frequently), or infant (developmentally small and dependent, carried frequently, maintains close proximity to adults). Data were collected using all-occurrences continuous sampling [24] and included for each CRE (a structured sketch of such a record is given below):

(1) time and distance to the nearest human when the first individual entered the farm;
(2) time when each additional individual entered the farm;
(3) age category and sex of each individual;
(4) farm entry point(s);
(5) incidence and location(s) of crop interaction, including type(s) of crop;
(6) time when each individual exited the farm;
(7) time and distance to the nearest human when the last individual exited the farm;
(8) farm exit point(s);
(9) total number of individuals entering the farm and total number remaining at the forest edge;
(10) maximum distance any individual travelled onto the farm; and
(11) median distance that most individuals (i.e. just over 50%) travelled onto the farm.

Data regarding the behaviour of farmers and other humans on farms were also collected using all-occurrences continuous sampling. These data included presence or absence of humans on farms, nature of on-farm human activity, extent of guarding behaviour, and responses to crop-raiding primates. Crop damage was determined by counting stems interacted with, consumed, and/or carried by primates during CREs. Usually two observers worked together and rotated data-recording to avoid fatigue. Binoculars were often used to aid observations. Observations were made from hides affording a continuous view of on-farm and forest edge activity while rendering observers inconspicuous to wildlife. As agreed with farmers, observers did not respond to animals entering farms and did not disclose raiding activity to any people on farms. Farmers' guarding huts were conspicuous and not used for observations, because observing from them may have appeared as guarding, thereby influencing primate behaviour, biasing data, and suggesting that humans in guarding huts do not respond to raiding.
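To make the recording protocol concrete, here is a minimal sketch of a per-event record holding the variables listed above (the field names and types are our own and purely illustrative):

```python
# One record per crop-raiding event (CRE), mirroring the recorded variables.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CropRaidingEvent:
    species: str
    farm_id: str
    start_s: float                       # first individual enters farm (stopwatch seconds)
    end_s: float                         # last individual exits farm
    entry_times_s: List[float] = field(default_factory=list)  # per individual
    exit_times_s: List[float] = field(default_factory=list)
    ages: List[str] = field(default_factory=list)             # adult / sub-adult / infant
    sexes: List[Optional[str]] = field(default_factory=list)
    entry_sectors: List[int] = field(default_factory=list)    # farm-map sector numbers
    exit_sectors: List[int] = field(default_factory=list)
    crops_raided: List[str] = field(default_factory=list)
    n_raiding: int = 0                   # individuals entering the farm
    n_at_forest_edge: int = 0            # individuals remaining at the edge
    max_distance_m: float = 0.0          # furthest any individual travelled on-farm
    median_distance_m: float = 0.0       # distance most (>50%) individuals travelled
    human_distance_entry_m: Optional[float] = None
    human_distance_exit_m: Optional[float] = None
    stems_damaged: int = 0

    @property
    def duration_s(self) -> float:
        """Raid duration: first entry to last exit, in seconds."""
        return self.end_s - self.start_s
```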
Ad libitum data indicated that observer presence did not modify wildlife or farmer behaviour [9]. All data were collected in accordance with institutional ethics requirements, established ethical guidelines for social and primate research, and with the consent and support of village councils and participating farmers. A total of 1,803 hours of observations were conducted over 346 sessions, each 5 to 6 hours in duration. Sampling was representative across farms, months, days of the week, and time of day from sunrise to sunset; schedules at each farm were varied to avoid confounds from predictable sampling patterns. Inter-observer reliability and distance estimates exceeded 95% concordance during bi-monthly assessments; each observer's estimates were within 10% of measured distances and considered sufficiently accurate for analysis [25].

Data Analysis

Data were analysed using SPSS 14 for Windows (SPSS Inc., Chicago, USA); tests were two-tailed and results considered statistically significant when p ≤ 0.05. Kolmogorov-Smirnov and Shapiro-Wilk tests confirmed non-normal distributions of data, and hence non-parametric tests were used for primary analysis. Median values describe central tendency. For partial correlation and multiple regression analysis, values for continuous variables were logarithmically (base-e) transformed, and Q-Q plots confirmed normality after transformation. Regression models were built using forward, backward, stepwise, and direct-entry methods.

Distance Travelled onto Farm

Because study farms adjoined forest, the distances travelled onto farms by raiding primates were also distances travelled from the forest edge.

Number of Individuals Raiding

A total of 1,115 primates (not necessarily identified individuals) were counted at forest edges immediately prior to or during CREs. Of these, 939 (84%) entered farms, including all black & white colobus monkeys (n = 23), 96% of chimpanzees (n = 46), 87% of vervet monkeys (n = 112), 84% of baboons (n = 485), 79% of red-tailed monkeys (n = 208), and 70% of blue monkeys (n = 65). Red-tailed monkeys and blue monkeys were significantly more likely than other primates to remain near the forest edge while conspecifics raided (Kruskal-Wallis test, χ² = 50.248, df = 5, p < 0.001). The number of individuals entering a farm correlated positively with the number at the forest edge prior to raiding (Spearman's rank correlation coefficient, rs = 0.807, n = 218, p < 0.001); this was the case both when humans were present on the farm (rs = 0.803, n = 163, p < 0.001) and when they were not (rs = 0.814, n = 55, p < 0.001). Most CREs (52%) involved three or fewer individuals, 36% were by a single individual or pair, and only 23.9% involved more than five individuals (Figure 6). Baboons raided in significantly greater numbers than other species (Kruskal-Wallis test, χ² = 41.914, df = 5, p < 0.001); however, most baboon raiding groups were small compared to maximum group sizes, and 70% comprised fewer than ten individuals. Blue monkeys, red-tailed monkeys, and vervet monkeys were more likely than other species to raid alone (Kruskal-Wallis test, χ² = 15.785, df = 5, p ≤ 0.007).

Influence of Farmer Behaviour

The amount and quality of guarding observed during the study varied between farms and did not prevent raiding of crops.
Influence of Farmer Behaviour

The amount and quality of guarding observed during the study varied between farms and did not prevent raiding of crops. Farmers and/or other humans were present on study farms during most CREs (163 of 218); see Wallace [9] for analysis of other aspects of farmer behaviour not impinging upon relationships between primate CRE parameters and amount of crop loss.

Association between CRE Parameters

Primates allocated most on-farm time to interacting with and eating crops, typically only travelling further onto farms to access more or targeted crops [9]. Therefore, stem damage was expected to increase as duration of raid, distance travelled onto farm, and/or size of raiding group increased. Number of stems damaged correlated positively with size of raiding group (rs = 0.819, n = 218, p < 0.001), duration of raid (rs = 0.685, n = 218, p < 0.001), maximum on-farm travel distance (rs = 0.374, n = 218, p < 0.001), and median on-farm travel distance (rs = 0.269, n = 218, p < 0.001). Partial correlation analysis confirmed that these parameters (raiding-group size, raid duration, and on-farm travel distance) were interlinked (Table 3). Maximum and median travel distances were highly associated, as expected; other parameters were not. All inter-correlations were positive, indicating that larger raiding groups travelled further onto farms and raided for longer durations compared to small groups or lone raiders. Partial correlation analysis also showed that number of stems damaged was only significantly associated with duration of raid and number of individuals raiding (Table 3), suggesting crop loss is not directly related to distance travelled onto farm when other parameters are controlled for.

Accounting for Crop Damage

Because primate raiding behaviour is often context dependent [9], it is unlikely that CRE parameters contribute equally to crop loss during a raid. Four models accounting for the number of stems damaged by primates were derived using multiple regression: (i) all types of crop, (ii) maize and beans, (iii) maize only, and (iv) beans only (Table 4). In each case loss was predominantly tied to number of individuals raiding and duration of raid; maximum or median on-farm travel distance did not predict stem damage significantly. Each model accounted for a major proportion (74.6% to 87.2%) of total variance in damage, and high tolerance values (0.708 to 0.840) confirmed they were not compromised by collinearity between variables. The regression equations describing each parameter's contribution to crop loss are reported in Table 4. Finer-scale regression models were derived for each frequently-raiding species and the crop it raided most often. Although these species-crop-specific models were exploratory due to small sample sizes [26], key parameters were again number of individuals raiding and duration of raid, accounting for large proportions of variance in damage (Table 5). Stem damage (a) per CRE and (b) per unit of each CRE parameter (i.e. per minute of raiding, per metre onto farm, or per raiding individual) differed between species (Table 6). Damage per CRE was greatest for baboons and black & white colobus monkeys, and least for blue monkeys. Crop loss per unit of each parameter reflected variation in parameter values across species.
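To make the modelling step concrete, the following is a minimal sketch of the kind of log-log multiple regression described above, fitted with statsmodels on synthetic data; the variable names and coefficients are hypothetical and do not reproduce the study's models:

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 218

# Synthetic CRE parameters, roughly mimicking the variables described.
df = pd.DataFrame({
    "group_size": rng.integers(1, 20, size=n),
    "duration_s": rng.integers(30, 3600, size=n),
    "max_dist_m": rng.uniform(1, 60, size=n),
})
# Stem damage driven mainly by group size and duration, plus noise.
df["stems"] = (0.5 * df.group_size * df.duration_s / 600
               + rng.normal(0, 1, n) ** 2 + 1)

# Base-e log transform of continuous variables, as described in the text.
logged = np.log(df)
X = sm.add_constant(logged[["group_size", "duration_s", "max_dist_m"]])
model = sm.OLS(logged["stems"], X).fit()
print(model.summary().tables[1])
print(f"R^2 = {model.rsquared:.3f}")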
Age Categories of Crop-raiding Primates

Significantly more adults than sub-adults, and more sub-adults than infants, were observed on study farms during CREs (Mann-Whitney U tests: n(sub-adult) = 221, n(adult) = 672, U = 61159.0, p < 0.001; n(infant) = 46, n(sub-adult) = 221, U = 3286.0, p < 0.001); this was also the case for each species (chi-square tests, all p ≤ 0.007) (Table 7). Almost 72% of raiders were adult, including 83% of guenons, and adults were a majority in 92% of CREs by multiple individuals (n = 177). Baboons and chimpanzees raided in mixed age-category groups significantly more frequently than other species (Kruskal-Wallis test, χ² = 28.539, df = 5, p < 0.001), and baboon raiding groups were most diverse (Kruskal-Wallis test, χ² = 53.645, df = 5, p < 0.001) (Table 8). At least one infant was on-farm during 24 baboon raids and one chimpanzee raid; infants were occasionally near forest edges and accompanied by an adult during raids by other primates, but did not enter farms. Almost two-thirds of baboon raiding groups included one or more sub-adults. All on-farm adult and sub-adult primates damaged at least one crop stem. Although infants interacted with crops intermittently by pulling or biting stems, they usually travelled or rested near an adult female, or engaged in play behaviour with other infants or sub-adults, suggesting they were not anxious during CREs. Females with an infant were particularly vigilant on farms, usually first to return to the forest carrying their infant, and first to flee in response to human actions. Sex of raiding individuals was not determined with sufficient reliability for analysis; however, counts of male (n = 62) and female (n = 51) adult baboons on-farm during CREs did not differ significantly (chi-square test, χ² = 1.071, df = 1, p = 0.301). While significantly more crop stems were damaged by mixed-age groups than by adults-only groups, the former also comprised more individuals, travelled further onto farms, and raided for longer durations (Mann-Whitney U tests, n(adults-only) = 73, n(mixed) = 104: stems, U = 1133.5, p < 0.001; individuals, U = 598.0, p < 0.001; maximum distance, U = 2877.0, p < 0.001; median distance, U = 3079.5, p ≤ 0.032; duration, U = 2354.0, p < 0.001).

Multiple versus Single Raids

A significantly greater proportion of raids (65%; n = 141) occurred in series rather than as single raids (chi-square test, χ² = 18.789, df = 1, p < 0.001); 79% of these were within a 2-CRE or 3-CRE series. Vervet monkeys, red-tailed monkeys, and baboons had diverse multiple-CRE profiles (Figure 7) and raided in series significantly more often than other species (Kruskal-Wallis test, χ² = 27.387, df = 5, p < 0.001). Single raids (n = 77) were most likely to involve one raiding individual (Kruskal-Wallis test, χ² = 12.976, df = 5, p ≤ 0.024), indicating that groups often continue to raid whereas single individuals do not. However, crop damage per CRE did not differ significantly between single raids and raids in series. Raiding in series was not associated with non-detection by farmers, suggesting that farmers' responses often failed to deter primates from returning.

Discussion

The prevalence of crop damage by primates across study farms meant that insights about the parameters of primate CREs were integral to understanding the dynamics of raiding. Variability in duration of raid confirmed each species carried out hit-and-run as well as extended raids, as also reported by Maples et al. [27], Crockett and Wilson [28], Warren [29], Priston [30], and Hockings [31]. Although many CREs were terminated by farmers' responses, differences in raid duration could reflect adaptation of raiding tactics to perceived on-farm risks, such as probability of detection.
Whereas other studies observed that primates predominantly raided crops within 10 m of farm-forest edges [29,32,33], median on-farm travel distances during this study exceeded 10 m for each species and were consistent with Naughton-Treves [7]. This suggests that distances travelled onto farms (and hence minimum buffer widths to deter travel) are site-specific, particularly because Warren [29] also observed olive baboons.

Table 7. Proportion of the total number of on-farm primates during CREs (n = 939) that were adults, sub-adults, or infants.

The positive relationship between primate body size and on-farm travel distance indicates baboons and chimpanzees were more comfortable (or less threatened) away from forest than smaller-bodied species. This might be because baboons and chimpanzees are more terrestrial, or because their mass, strength, and average group size reduce fear of humans, even beyond typical habitat [6,34,35,36]. Primates usually remain near the edges of high-risk habitat [37,38], suggesting baboons and chimpanzees did not always regard study farms as dangerous places. Similarly, greater travel distances for groups compared to single raiders could have been due to perceived risk, because primates typically travel in larger numbers under higher-risk conditions [39,40,41]. Planting a crop relatively far from forest is often considered an option to minimise the likelihood of the crop being raided by wildlife [4,16]. Our data demonstrate species differences in on-farm travel distances for raiding primates. Accordingly, the deterrent value of planting crops in fields relatively far from forest edges probably depends on which primates raid each crop. The results indicate most primates at forest edges prior to or during CREs were present to participate in raiding. Red-tailed monkeys and blue monkeys were more likely than other species to only observe. Reports of one or two sentinels remaining at the forest edge when baboons raid [6,14] suggest active involvement in raiding by individuals outside of farms and highly organised, cooperative tactics. However, sentinel behaviour can only be inferred from vigilance and scanning directed over a farm, and this was observed rarely during the study (primarily by blue monkeys or red-tailed monkeys and only once by a baboon). Although sentinels were high in trees affording a broad view of on-farm activity, they did not alarm call when farmers approached raiders. Crop-raiding was not an activity that all members of primate social groups engaged in. Most raids involved small groups relative to species-specific norms for non-raiding activity, and median raiding-group sizes were smaller than typical for primate social groups [42,43,44]. Baboon raiding-group sizes aligned with Warren [29], where mean size was 5 (±3) individuals and markedly smaller than social-group size. Although study farmers stated that baboons and red-tailed monkeys usually raid in large groups [4,6,29], farmers' perceptions can differ from observational data due to imperfect detection of raids. Farmers detected relatively large groups most frequently and regularly failed to detect CREs by one to three individuals [9]; raiding in small groups may therefore be an effective tactic for avoiding detection [15,27,45]. For most primates, crop-raiding alone is probably a tactical behaviour to minimise risks while maximising individual returns. The regression model for all crops raided by primates estimates stem damage generally.
It also reflects the crop mix and range of raiding species it is derived for, so that transferability depends on similarity across sites and contexts. While the model incorporates the broad variety of crops raided, it is unlikely to be the best fit for specific crops because primates were observed to consume stems of different crops at different rates per unit of time. Compared to the all-crops model, the maize & beans model provides an improved estimate of crop loss during primate CREs because it is attuned to crop prevalence; maize and beans were predominant and raided most frequently by almost all species. The maize & beans model retains broad applicability while accounting for a major proportion of local stem damage. However, for either crop the maize & beans model is probably skewed towards (a) rates of damage to beans (i.e. many stems consumed per unit of time) for CREs of relatively short duration and (b) rates of damage to maize (i.e. few stems consumed per unit of time) for CREs of extended duration, irrespective of number of individuals raiding. Because the maize-only and beans-only models incorporate crop-specific rates of damage, they align more closely with observed maize or beans loss than either other model. This is evident in the greater coefficients of determination (R² values) for the maize-only and beans-only models compared to the all-crops and maize & beans models (i.e. 0.869 and 0.872 versus 0.746 and 0.777 respectively). Precision in accounting for stem damage during CREs improved as regression models became increasingly crop-specific. Similarly, models specific to each species that raided frequently and the crop it raided most often also had high coefficients of determination, albeit with smaller sample sizes. The lower coefficient for the baboon & maize model reflects the greater diversity of crops raided by baboons compared to other species. These results suggest it is possible to derive detailed models to understand and predict context-specific crop loss, given sufficient parameter data. Regression models indicate the relative importance of variables for explaining outcomes but do not establish causation [46]. Although the key parameters determining crop loss were raiding-group size and CRE duration, exclusion of on-farm travel distances from models does not mean these variables had no impact on stem damage; rather, their influence was probably secondary. Travelling progressively further from forest was often necessary to access more stems during raids. Distances were tied to crop location and preference when baboons or chimpanzees targeted mangos, papaya, or jackfruit, usually grown relatively far from forest. Similarly, variables determining whether and how quickly farmers detect CREs could impact crop loss by influencing raid duration. As expected from between-species variation in parameter values, stem damage per CRE as well as per unit of each parameter (i.e. per minute of raiding, per metre onto farm, or per raiding individual) was relatively species-specific. In particular, baboons, vervet monkeys, and black & white colobus monkeys often damaged crops quickly and extensively. Rates of damage also reflect interactions between parameters; for example, baboons were likely to cause more crop loss than other primates per CRE and per minute of raiding because they typically raided in greater numbers.
This level of analysis discloses species and/or contextual differences in raiding behaviour, including how damage may vary with changes in parameter values due to modified raiding tactics, perhaps in response to deterrent interventions. Crop-raiding was an adult-led and adult-oriented activity for each species. However, adult predominance does not characterise social-group composition for these primates [42,43], further confirming that not all group members raided crops. Only baboon raiding groups regularly comprised individuals across all age categories. Absence of infants when most primates, and all guenons, raided might reflect species-specific tactics and raiding-group size, perceived on-farm dangers, and/or age-related differences in diet. Although recent studies also report adult primates raiding and leading CREs most frequently [29,30,32,47], early research identified sub-adults as the main raiders [15,45,48,49]. While raiding by sub-adults could be driven by comparatively high rates of exploratory behaviour or risk-taking [50,51], this was rare and observed only for baboons and chimpanzees. However, perceptions of risk may influence the age composition of primate raiding groups; for example, adult females with infants consistently raid least frequently, possibly because they are more cautious [31,52]. This was the case for all species except baboons, indicating baboons restrict the composition and size of raiding groups less than other primates. Absence of infants, and hence of adult females with infants, during most CREs also suggests more males than females raided. Elephants (Loxodonta africana, Elephas maximus) and wild boars (Sus scrofa) exhibit similar behaviour [53,54,55], which could characterise many raiding species. Presence and active raiding by adults and sub-adults during many CREs suggest the skills and tactics of crop-raiding are transferred through imitation and social learning [56,57], as reported for elephants [58]. Because crops provide greater nutrition than many natural primate foods, consuming crops might also allow sub-adults to grow more quickly than normal and benefit from larger body size [15,16]. The diverse composition of baboon raiding groups, on-farm presence of infants, and high rates of raiding by baboons [9] suggest baboons were more comfortable on farms than other primates. Hence, baboons in the study area might learn to raid earlier in development, making them more adept, adaptable, and persistent raiders over time. When primates consume crops regularly, by choice or necessity, they may develop a raiding tradition or culture [59,60]. The group's cumulative experience would then manifest as crop-raiding behaviour adapted and finely tuned to local conditions, including farmer behaviour. Primates with extensive raiding history can therefore habituate quickly to crop-protection techniques. Deterrents might require cycling or modification over time to remain effective, and farmers may need to monitor raiding to plan their responses. Although the age-category composition of raiding groups influenced crop loss, the effect was secondary because it was interlinked with raiding-group size. Similarly, crop-raiding experience may have influenced CRE duration (perhaps by enabling group members to avoid or delay detection by farmers) and/or rate of damage (possibly through greater efficiency when processing stems). Broad multiple-CRE profiles for vervet monkeys, red-tailed monkeys, and baboons indicate these species raid persistently when opportunities arise.
However, variation in damage per raid probably reflects raiding in series per se only to the extent that raiders become satiated over consecutive raids. Preferences for raiding in series are probably explained by the energetic efficiency of crop consumption [48,61], whereby reduced foraging and feeding time allows more time for resting and social behaviour [62,63]. This provides incentives to raid repeatedly, increasing crop loss. Demonstration that crop damage by primates is mainly determined by number of individuals raiding and CRE duration has implications for crop protection. Farmers will benefit most from deterrent techniques that discourage raiding by multiple individuals, reduce the size of raiding groups, or decrease the amount of time that primates spend on farms. This involves increasing perceived risks for raiders; for example, by improving farmer detection of raids, impeding or restricting farm entry and exit, increasing the efficacy of farmers' responses, and/or requiring raiders to be more vigilant on farms. Furthermore, values for parameters of CREs vary between species, probably reflecting unique raiding tactics according to perceived circumstances. Key CRE parameters can therefore be used as quantifiable yardsticks for assessing the behavioural impact of techniques to deter raiding. Specifically, if primates raid in groups of fewer individuals or for shorter durations at a farm (compared to baseline values) after deterrent introduction, it can be concluded that the deterrent is effective because crop loss per raiding event will be reduced. Efficacy may also be indicated if primates raid over reduced distances at the farm, the age composition of raiding groups is relatively homogeneous, or primates rarely raid in series. Assessing CRE parameters provides valid indices of how comfortable primates are on a farm, and is informative for managing and mitigating human-wildlife conflict. The process also confirms the importance of understanding crop-raiding thoroughly in order to address it.
2016-05-16T03:44:51.690Z
2012-10-03T00:00:00.000
{ "year": 2012, "sha1": "128f5001cb0764c4288c42a3115eec5a4f7d657a", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0046636&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "128f5001cb0764c4288c42a3115eec5a4f7d657a", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
225048288
pes2o/s2orc
v3-fos-license
Mixed-methods process evaluation of SafeTea: a multimedia campaign to prevent hot drink scalds in young children and promote burn first aid

Objectives SafeTea is a multifaceted intervention delivered by community practitioners to prevent hot drink scalds to young children and improve parents' knowledge of appropriate burn first aid. We adapted SafeTea for a national multimedia campaign, and present a mixed-methods process evaluation of the campaign. Methods We used social media, a website hosting downloadable materials and media publicity to disseminate key messages to parents/caregivers of young children and professionals working with these families across the UK. The SafeTea campaign was launched on National Burns Awareness Day (NBAD), October 2019, and ran for 3 months. Process evaluation measurements included social media metrics, Google Analytics, and quantitative and qualitative results from a survey of professionals who requested hard copies of the materials via the website. Results Findings were summarised under four themes: 'reach', 'engagement', 'acceptability' and 'impact/behavioural change'. The launch on NBAD generated widespread publicity. The campaign reached a greater number of the target audience than anticipated, with over 400 000 views of the SafeTea educational videos. Parents and professionals engaged with SafeTea and expressed positive opinions of the campaign and materials. SafeTea encouraged parents to consider how to change their behaviours to minimise the risks associated with hot drinks. Reach and engagement steadily declined after the first month due to reduced publicity and social media promotion. Conclusion The SafeTea campaign was successful in terms of reach and engagement. The launch on NBAD was essential for generating media interest. Future campaigns could be shorter, with more funding for additional social media content and promotion.

INTRODUCTION

Scalds from hot drinks are a major paediatric public health concern. 1-3 Hot drinks account for 55% of scald injuries in children less than 5 years old, 1 resulting in an estimated 30 children per day in the UK attending hospital for treatment. 4 The most common mechanism is a pull-down injury, whereby a child reaches up and pulls a hot drink down over themselves. 1 These injuries may cause permanent scarring and long-term psychosocial and mental health difficulties, 5 6 yet they are entirely preventable. While primary prevention is the priority, immediate appropriate first aid administered by a caregiver can mitigate the devastating effects of scald injuries and improve clinical outcomes by facilitating healing and reducing injury severity and the extent of treatment required. 7 Evidence-based burn first aid is simple: COOL the burn with cool running water for 20 min, CALL for medical assistance or advice and COVER the burn with cling film. 8 9 Nevertheless, only 25%-28% of children with a burn receive optimal first aid before they present to the emergency department. 8 10 Parents have limited knowledge of the appropriate first aid treatment for burns, [11][12][13] online information regarding burn first aid is inconsistent and inaccurate, 14 15 and inappropriate or harmful home remedies are often used. 8 12
There are few interventions aiming to prevent hot drink scalds in children, [16][17][18] and limited research into effective burn first aid education for caregivers, although there is evidence that educational interventions may improve caregivers' knowledge of appropriate burn first aid, at least in the short term. 19 'SafeTea' is a community-based, multifaceted intervention designed for delivery by early-years practitioners. 20 SafeTea uses a variety of bespoke materials and teaching activities to disseminate consistent key prevention and first aid messages in a range of environments (homes, parenting groups and childcare settings). 20 SafeTea has been demonstrated to be feasible in a small-scale community setting. 20 The bespoke materials and teaching activities were acceptable to parents and community practitioners, improved parents' knowledge of the risk factors for hot drink scalds and appropriate burn first aid, and demonstrated potential to change caregivers' behaviours with regards to hot drink scalds prevention and burn first aid. 20 SafeTea was developed into a national integrated multimedia campaign, using social media, a dedicated website (www.safetea.org.uk) hosting information and free downloadable materials, digital advertising, targeted promotion to professionals working with parents of young children, and national and local television, radio, and press publicity. The SafeTea intervention materials were modified where necessary according to feedback from the feasibility study, 20 reproduced with new branding and adapted for online delivery. The multimedia approach was chosen to disseminate the SafeTea key messages to as many caregivers of young children as possible, and to encourage professionals to deliver the intervention and promote SafeTea messages in their own settings. Online delivery enabled data collection for a process evaluation to understand not only whether the campaign was successful, but how and why it was successful. 21 22 Google Analytics and social media metrics have been established as useful process evaluation measurements for online health promotion campaigns. 23 24 This paper describes the design and development of the SafeTea campaign and reports a mixed-methods process evaluation of the campaign, to: (1) measure the reach of the campaign, (2) evaluate parents' and professionals' engagement with SafeTea and determine which components generated the most engagement, (3) evaluate the acceptability of SafeTea to parents and professionals, (4) assess the impact of SafeTea on parents' knowledge and behaviours with regards to hot drink scalds prevention and burn first aid and (5) pinpoint strengths and weaknesses in the campaign strategy and implementation to identify areas for improvement.

Design and development of the SafeTea multimedia campaign

SafeTea had five key messages (box 1) and was targeted at parents/caregivers of children aged <5 years, and community practitioners (eg, parent group staff, health visitors) working with these parents/caregivers. The logic model for the SafeTea campaign is shown in figure 1. The intervention is based on behavioural change theories including the health belief model, 25 protection motivation theory 26 and social cognitive theory, 27 and aims to modify caregivers' beliefs about the risks and severity of hot drink scalds, and their self-efficacy for preventing them and mitigating their severity using appropriate first aid.
A steering group of researchers, national injury prevention charities and public health organisations informed the development and implementation of the campaign (figure 2). These collaborating partners were invited to support, inform and promote the campaign as they shared an interest in burns prevention and first aid. They came on board to boost the profile of SafeTea and help to increase reach and engagement. The steering group attended monthly meetings, and members suggested materials and dissemination methods, promoted SafeTea via their existing social media profiles, contacts and networks, shared their experiences of promoting campaigns and shared their own resources. In addition, charities and organisations signed up as Ambassadors and promoted SafeTea messages in blogs, e-news and via Twitter. Raw Marketing (www.raw-marketing.co.uk) designed the marketing strategy and SafeTea branding and collaborated with the Children's Burns Trust to coordinate the launch on National Burns Awareness Day (NBAD). SafeTea social media accounts were established on Facebook, Instagram and Twitter. These platforms were selected due to their popularity with the target audience. 28 29 One author (IB) managed the social media accounts part-time from September 2019 to January 2020. She finalised and scheduled the social media posts using the Hootsuite social media management tool, and responded to comments. There were 3-4 social media posts per day in October, and 1-2 from November onwards. Posts were designed to deliver the key messages and to guide social media users to the dedicated website www.safetea.org.uk, where they could find custom-made educational videos and cut-downs, and free downloadable materials in English: posters, flyers, magnets, reach charts, activity sheets and digital logos and banners (online supplemental appendix 1).

Box 1. The five key SafeTea messages
► Keep hot drinks out of reach of young children.
► Never pass a hot drink over the heads of children.
► Never hold a baby and a hot drink at the same time.
► Create a SafeTea area at home where hot drinks are made and drink them safely away from children.
► Burn first aid: Cool, Call, Cover.

All printed materials used in the campaign had been tested in the feasibility study. 20 Some were modified according to feedback, 20 for example, the first aid magnet was made larger and the prevention message was added, and additional posters were created with a range of different images. All materials were professionally reproduced by a graphic designer, with new branding (a new SafeTea logo, colour scheme and standardised graphics). The branding was part of the marketing strategy designed by Raw Marketing to create an effective online presence and in turn maximise reach on social media. Finally, the materials were adapted so that they could be downloaded and printed from the SafeTea website. Free resource packs of printed materials were available for professionals to request via the website, to facilitate intervention delivery in their own settings. The custom-made videos were produced in collaboration with a professional film company, and featured an adult discussing their life experiences after sustaining a scald in childhood, and an emergency department doctor discussing appropriate burn first aid. Digital advertising during the SafeTea campaign included 2 months of paid posts on Facebook designed to reach a minimum of 5500 UK parents per day, and 1 month of paid adverts on Mumsnet (www.mumsnet.com)
designed to target mums with young children. Targeted promotion via blogs, online articles, newsletters, bulletins and conference presentations was used to brief professional groups (eg, health visitors, childminders, paediatricians) working with families about the campaign and materials, and to encourage them to deliver the intervention to parents in their own settings (online supplemental appendix 2). The SafeTea campaign was launched on NBAD, 16 October 2019, with coordinated and widely circulated public relations press releases, and ran for 3 months. Two volunteers recruited through the Children's Burns Trust publicised their personal stories, which were used as 'case studies' to attract media interest. These included a story from a mother about her toddler who was scalded by hot coffee, and a story from a young father who sustained a serious scald injury in childhood. Several cafés signed up via the website to promote the campaign. A number of National Childbirth Trust baby cafés were contacted and asked to support the campaign; however, no responses were received. Consideration was given to advertising SafeTea in general practitioner surgeries; however, this method was too costly. Figure 2 illustrates the relationships between the campaign components and figure 3 details activities undertaken during the design and development of SafeTea.

Process evaluation measurements

The evaluation was informed by the Medical Research Council guidance for process evaluations of complex interventions, 21 and the intervention mapping framework, 30 which recommend that the evaluation design, data collection and analysis are based on the underlying theory of how an intervention works. The evaluation strategy was, therefore, underpinned by the inputs, processes and assumptions detailed in the logic model (figure 1).

Figure 2. The relationships between the different components of the SafeTea campaign and a list of collaborating partners. TV, television.

The process evaluation used a mixed-methods approach consisting of five components: (1) an appraisal of the publicity generated by the launch on NBAD, (2) the metrics of reach, impressions and engagement from the SafeTea social media accounts and Mumsnet, (3) the analytics from the website usage, (4) quantitative and qualitative analysis of an online survey of professionals who requested free resource packs via the website and (5) qualitative analysis of social media users' comments on the campaign. Website and social media data were collected for the period October 2019 to January 2020, from Google Analytics, Twitter Analytics and Facebook Insights. Reach on Facebook and potential reach on Twitter were defined as the total number of people who saw (or potentially saw) any SafeTea content. Impressions on Twitter were the total number of times any SafeTea content was seen. Engagement on social media referred to any interaction with a social media account or page, and was measured using metrics such as likes, follows, reactions, retweets, comments and shares. Engagement rate on Twitter was calculated by dividing engagement (likes, retweets, etc) by the total number of impressions during the 3-month campaign period. The survey was created using 'Online surveys' (www.onlinesurveys.ac.uk). When requesting free resource packs via the SafeTea website, professionals were required to indicate whether they would complete an anonymous online survey (online supplemental appendix 3).
The survey link was emailed to those who consented, in February 2020, to evaluate the use and impact of the SafeTea materials, and to assess the professionals' opinions of the campaign and materials. The survey remained live for 1 month before it was closed, during which two reminder emails were sent to encourage responses.

Analysis

A triangulation approach was used to integrate the results from the five data sources, to provide a comprehensive overview of the findings and enhance confidence in the conclusions. 31 32 One author (IB) conducted a comprehensive social media evaluation and extracted the key social media metrics. The lead author (LEC) extracted the SafeTea website usage statistics using Google Analytics, analysed the quantitative survey data and undertook a qualitative analysis of survey respondents' and social media users' free-text comments using thematic analysis. 33 The lead author then categorised and summarised the data from the different sources according to themes corresponding to the evaluation goals. Thematic analysis of the qualitative data entailed grouping codes into categories, and arranging categories under the a priori overarching evaluation themes. Findings were discussed at research team meetings and any disagreements regarding data interpretation were resolved by consensus. An analytical framework was developed; categories and their definitions are detailed in the framework (table 1). Qualitative data were summarised using indicative quotations (box 2). Quantitative data were summarised using counts, frequencies and proportions.

Table 1. Analytical framework for the qualitative analysis of survey respondents' and social media users' comments on the SafeTea campaign and materials (theme | category | definition)
Reach | Professionals' proactivity in spreading the SafeTea messages | Any comments by professionals about how they used the SafeTea materials to deliver the intervention in their own settings and spread the SafeTea messages to their colleagues and to parents/caregivers of young children. Any comments on social media promoting the campaign or encouraging others to interact with (eg, share) SafeTea posts.
Reach | Longevity of the SafeTea campaign | Any comments by professionals regarding the continued use of the SafeTea materials after the end of the 3-month campaign.
Reach | Professionals' use of initiative in spreading the SafeTea messages | Any remarks by professionals about actions they have taken in order to continue using the materials to deliver the intervention and spread the SafeTea messages in their own settings.
Engagement | Parents' and children's engagement with the SafeTea intervention | Any comments regarding parents' and children's interest in taking part in interactive demonstrations or engaging with the printed materials.
Engagement | Components of the SafeTea intervention (printed materials, activities, videos) that parents and professionals were most engaged with | Any comments about which of the SafeTea materials or components of the intervention professionals and parents/caregivers were most engaged with and why. Any remarks by professionals regarding the practicality of delivering the SafeTea intervention to parents/caregivers in their own settings.
Acceptability | Parents' and professionals' opinions of the SafeTea campaign and materials | Any comments regarding parents' and professionals' opinions (positive or negative) of the SafeTea campaign or materials.
Acceptability | Clarity and comprehensibility of content in SafeTea materials | Any remarks about the clarity of the information and messages conveyed in the SafeTea materials and how easy the SafeTea materials are to understand.
Acceptability | Use of SafeTea materials for facilitating communication between professionals and parents | Any comments about the role of the SafeTea campaign in helping professionals to convey key burns prevention and first aid messages to parents/caregivers of young children.
Acceptability | Suggestions for improvement | Any comments about the ways in which the SafeTea campaign could be improved.
Impact/behavioural change | Awareness/education of the risk of scalds to young children from hot drinks and of appropriate burn first aid | Any remarks about whether the SafeTea campaign has raised parents' awareness of the risk of hot drink scalds to young children or succeeded in educating parents about the risks, and about appropriate burn first aid.
Impact/behavioural change | Knowledge of correct burn first aid practices | Any remarks about whether the SafeTea campaign has affected parents' knowledge of the correct burn first aid to administer in the event of a burn.
Impact/behavioural change | Responses to the SafeTea materials and messages | Any comments regarding parents' reactions to the SafeTea materials, or about how the SafeTea campaign has affected parents' and professionals' behaviour with hot drinks around children.

Reach of the SafeTea campaign

The SafeTea launch on NBAD reached 50 000 people on Facebook and attracted widespread publicity via media coverage on national and local radio and television news channels, in newspapers, the online press, articles and blogs, magazines and health/injury prevention newsletters and bulletins (online supplemental appendix 4). SafeTea reached an average of 9500 UK parents daily on Facebook (84% were women, mostly 25-44 years old), more than the 5500 anticipated (figure 4); however, reach and impression rates steadily decreased after NBAD. Videos were viewed over 400 000 times. The Mumsnet adverts appeared on 27 781 visitors' screens. Social media users and website visitors originated from across the UK; most website visitors were from London (11%), Cardiff (8%) and Bristol (3%). SafeTea resource packs (577) were requested by 472 professionals, and sent out by post. Of these 472 professionals, 405 (86%) consented to receiving the survey, and 163 (40%) completed it. Professionals were predominantly childminders, health visitors and nursery staff (online supplemental appendix 5) and were proactive in sharing the SafeTea materials with parents and colleagues and spreading the SafeTea messages (box 2). One paediatrician stated: 'I used the resources myself in a toddler group and I emailed the link to the SafeTea website to all health visitors, paediatric consultants and juniors in the Health Board'. Of the 93% (151/163) who used the materials with parents, most estimated that they reached between 1 and 10 parents; however, 10% (15/151) reached more than 100 parents. Many professionals reported that they continued to display the materials around their workplaces and use them in educational sessions, thereby increasing the longevity of the campaign. Organisations used their own initiative to continue using and distributing the materials, for example, by securing funding to print materials.

Engagement with the SafeTea campaign

Facebook and Twitter users and website visitors engaged well with the campaign (figure 4).
Engagement on Instagram was poor from the outset; the platform was deemed unviable for the campaign and was abandoned after 3 weeks. There were regular peaks of Facebook followers throughout the campaign, with few unfollows. Many parents shared personal stories of children's burns.

Box 2 (continued)
► My son had a near-miss burn in day care and this campaign has definitely assisted in educating staff and other parents of the risks. Respondent 60, Parent
► I thought this campaign was excellent. I discussed it with my parents, one admitted she wouldn't really have known what to do if her child poured hot tea or coffee on himself. I educated her and recommended a parents' first aid course. Respondent 5, Childminder
► The message of first aid appears to be getting through. Respondent 78, Paediatric Specialist
Knowledge of correct burn first aid practices
► The children look at and talk about the poster most days as it's on the back of the front door so it's a constant reminder about the dangers of hot drinks and why they must not touch hot drinks… The two three-year-olds have taken the information in from the magnet on the fridge and know how to call 999 and put the burn under cold water. It's been a very useful pack. Respondent 71, Childminder
► A lot of parents were originally unaware of the need to cool the burned area and that they should not apply creams prior to A&E assessment. Respondent 134, Health Visitor
Responses to the SafeTea materials and messages
► A parent straight away saw the poster and said 'oh gosh, I must stop having hot drinks near my child'. Respondent 28, Childminder
► My parent was shocked when I told her that her son could reach my kitchen side and she hadn't realised he could also reach hers. Respondent 5, Childminder
► People were surprised how long a cup of tea stays hot. Respondent 53, Children's Centre Employee
► I feel parents and children take more notice of the height chart and I often see parents measuring their children against it surprised to see how far their child can actually reach. Respondent 155, Children's Centre Employee
► I got each child to demonstrate their reach and this really brought to my attention that I need to move things even further away although I try to use travel mugs. Respondent 40, Childminder
► Unfortunately parents don't always realise the dangers until an accident happens. The parents looked at the magnets and the reach chart but didn't seem to take the warnings seriously. I think most parents were rather blasé about the dangers! Respondent 30, Baby and Toddler Group Leader

Engagement rates on Twitter were high initially (1.60% in October), but steadily decreased to 0.70% in November, and 0.40% in December and January; the average engagement rate across the campaign was 0.78%. Although not definitive, a Twitter engagement rate between 0.09% and 0.33% is considered to be high, and the median engagement rate across every industry is 0.045%. 34 On Mumsnet, two SafeTea adverts received higher engagement than any other advert on the platform at the time. The components of the social media campaign that generated the most engagement were: the launch on NBAD; stories posted by parents; posts accompanied by images and videos; and factual posts with an emotive angle. Posts consisting mainly of statistics, for example '30 babies and toddlers go to the hospital with a hot drink burn every day', generated comparatively less engagement and fewer page views.
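The Twitter engagement-rate calculation defined in the Methods is a simple ratio; a short worked sketch with placeholder counts (not the campaign's actual figures):

# Engagement rate = total engagements / total impressions, as defined above.
# The counts below are placeholders, not the campaign's actual figures.
likes, retweets, comments, shares = 420, 180, 95, 60
impressions = 96_000

engagements = likes + retweets + comments + shares
engagement_rate = engagements / impressions
print(f"Engagement rate: {engagement_rate:.2%}")  # prints "Engagement rate: 0.79%"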
Most website visits (62%) originated from social media and 97% of these originated from Facebook. Most users accessed the website via their mobile phones (57%) and visited the resources pages (>24 000 page views). Professionals mainly reported using the materials directly with families. Posters were used by 95% of professionals; flyers by 93%; magnets by 86%; reach charts by 69%; activity sheets by 54%; and videos by 24%. Professionals felt that parents and children engaged with the posters, magnets and reach charts, and that they were interested in improving their knowledge around burns prevention and first aid (box 2). As one health visitor remarked: 'The posters have quickly caught parents' attention when in clinic and at groups'. Another observed: 'Parents were very interested in developing their knowledge'. However, some professionals commented that they did not have time to deliver interactive demonstrations or discuss the information in the flyers with parents. While the SafeTea videos were perceived as a potentially powerful teaching resource, professionals reported a lack of technology as a barrier to sharing them.

Acceptability of the SafeTea campaign and materials

SafeTea users' opinions of the campaign and materials were extremely positive (box 2). One Facebook user commented: 'SafeTea are doing great work on prevention and first aid awareness of burns in young children'. There were surprisingly few negative comments on social media. Two parents on Facebook expressed the view that the prevention advice was obvious, while another questioned the accuracy of the first aid advice, which prompted debate and eventually clarification from a burns surgeon. Professionals praised the materials' visual appeal, clarity and ease of understanding. Over 87% of professionals rated the materials as 'excellent' or 'good' and over 97% felt they were 'definitely' or 'probably' easily understood by parents. Some professionals suggested producing resources in additional languages, and many wanted more hard copies of the leaflets and magnets to distribute.

Impact/behavioural change

The professionals who completed the survey reported increased awareness of the dangers of hot drinks and the benefits of appropriate first aid among parents, and believed that the campaign improved children's and parents' knowledge of correct first aid for burns (box 2). One nursery employee remarked: 'I feel that all parents, professionals and adults entering the building now have a better awareness'. Another stated: 'Many parents were not aware of how long they should cool the burn for and that they should use cling film to cover the burn'. Increased awareness and knowledge were partially attributed to the fact that materials on display act as a constant reminder, thereby facilitating information retention. Professionals noted that the materials invoked strong reactions in some parents, and that many were surprised to see how high their child could reach. Professionals felt that the posters and reach charts were particularly useful resources for encouraging parents to think about the risks associated with hot drinks. One childminder remarked: 'I think the reach chart helped to make a few parents realise what children could do and stopped them leaving hot drinks in silly places'. The campaign led to parents and professionals discussing how to change their behaviours to minimise risk (box 2). One children's centre employee commented: 'I spoke to parents about SafeTea and this made us make a decision not to have tea in our setting when children are present'.
One baby and toddler group leader was sceptical of the ability of SafeTea to change parents' behaviours with hot drinks.

DISCUSSION

This process evaluation suggests that the SafeTea campaign was successful in reaching a large number of the target audience of parents/caregivers of young children and professionals working with these parents/caregivers across the UK. Parents and professionals engaged with SafeTea both online and in community settings, and expressed positive opinions of the campaign and materials. The findings reinforce those from the feasibility study: the SafeTea materials were acceptable and easily understood, and have the potential to change caregivers' behaviours with regard to hot drink scalds prevention and burn first aid. 20 The acceptability and efficacy of SafeTea were thus maintained following online adaptation, and the online delivery mechanism enabled nationwide dissemination of key messages and made the materials accessible to a wide variety of health professionals and community practitioners. To our knowledge, SafeTea is one of only two interventions aimed at both preventing hot drink scalds in young children and improving caregiver knowledge of correct burn first aid. [16][17][18] A randomised controlled trial of Cool Runnings, 35 a mobile app-based intervention that used social media to recruit mothers of young children in Australia, 36 found that the intervention was effective for improving mothers' knowledge about risks of hot drink scalds and burn first aid. 35 While we did not directly measure improvements in knowledge among parents, these results confirm those of the SafeTea feasibility study, 20 in which parents' perceived risk of hot drink scalds and knowledge of correct first aid procedures improved postintervention. A mobile app containing the SafeTea messages and materials could be considered as a potential delivery channel for future campaigns. Although research evaluating the effectiveness of parent education on scald prevention in preschool children is scarce, 16 17 systematic reviews and meta-analyses suggest that multimedia campaigns and online interventions are effective at influencing behaviour change across a range of issues including alcohol consumption, tobacco use and physical activity. 37 38 The evidence indicates that effective behaviour change interventions are multifaceted and underpinned by theory, and that shorter interventions offer larger impacts. 37 38 Launching SafeTea on NBAD and coordinating the launch with the Children's Burns Trust was critical for optimising publicity. Regional press and media coverage was effective in Cardiff and Bristol, where the project team were based; a more widespread publicity campaign with local case studies in other UK regions may have widened the reach. Reach, impression and engagement rates and website visits were highest in the first week and were maintained for the first month of the campaign, falling off thereafter. This was likely related to the reduced publicity and the reduced frequency of social media posts from November onwards, due to a lack of fresh content to post. This demonstrates the importance of continually generating and posting new social media content, and suggests that the campaign could be equally effective over a shorter time period using additional and more frequent social media posts. Part of the campaign strategy was to guide social media users to the SafeTea website and the downloadable materials.
Given that most website users originated from social media platforms, this strategy was clearly effective. Most of this traffic came from Facebook, with a small proportion originating from Twitter. It is possible that Twitter users were not using the materials to deliver the intervention in their settings but were promoting the key messages in a supportive role. It became clear early on that Instagram was not a viable platform for the campaign, probably due to the lack of influencers or brands to work with, and because the SafeTea posts and messaging were not aesthetically appealing. The strengths of this study are in the use of theory, and of mixed methods and triangulation of data from different sources, which enabled us to increase the scope and depth of the findings and enhance confidence in our conclusions and recommendations. The evaluation was limited because it was difficult to determine precisely how many people the SafeTea campaign reached, and it was not possible to directly measure behaviour change among parents and caregivers. The ability to reach a wide audience is a strength of a multimedia campaign; paradoxically, however, this also presents the greatest challenge for evaluation. 22 It is difficult to reach and follow up the target audience as there is little control over who is exposed to the campaign. 22 The analysis was limited by the information that social media and Google Analytics provide: for example, Google Analytics only provides demographic information if a user is logged into their Google account. The 40% of professionals who responded to the survey were likely those who found the materials beneficial; therefore, the survey results should be interpreted with some degree of caution.

Recommendations

Any future SafeTea campaign should be shorter and more widely advertised in all UK countries. Taken together, the findings from the feasibility study 20 and the campaign process evaluation demonstrate the potential for SafeTea to achieve the intermediate outcomes described in the logic model (figure 1); that is, increase parents' perceived risk of hot drink scalds in preschool children, improve their knowledge of appropriate burn first aid, and encourage them to change their behaviours to minimise risk. Future research should focus on assessing behavioural change directly with parents. In order to measure whether SafeTea is effective in achieving the long-term intended outcomes of reducing the incidence of scalds in preschool children, reducing burn severity and improving burn first aid administration, attention must be given to an evaluation of the epidemiology of hot drink scalds and burn first aid practices over and beyond the time-frame of the campaign. This will require continued surveillance of burns presentations to hospitals, and could be facilitated using the Burns and Scalds Assessment Template, an evidence-based data collection proforma designed to standardise the clinical assessment of childhood burns in the emergency department. 39 The SafeTea campaign strategy and materials are now validated for use with an online population and could be used in future burns awareness campaigns. The materials will remain on the SafeTea website for the foreseeable future.

CONCLUSIONS

The SafeTea multimedia campaign reached a greater number of the target audience (young parents and professionals working with families) with a limited budget in comparison to what is possible in local, face-to-face community-based campaigns.
Collaboration with NBAD enabled us to capitalise on an established 'National Day' with its associated television, radio and news publicity, which significantly enhanced the campaign promotion. Engagement with parents and professionals was successful and they had high opinions of the SafeTea campaign and materials. A shorter campaign is advisable and could be run again alongside NBAD as the resources remain current. A future campaign would benefit from funding for a full-time staff member to coordinate the social media component.

What is already known on the subject
► Scalds from hot drinks are the leading cause of burns in children less than 5 years old and parents lack knowledge of appropriate first aid for burns.
► SafeTea is a multifaceted intervention that aims to prevent hot drink scalds to young children and improve parents' knowledge of appropriate burn first aid.
► The delivery of SafeTea by community practitioners is feasible and the intervention materials and teaching methods are acceptable to both parents and practitioners.

What this study adds
► SafeTea was adapted for a national integrated multimedia campaign that was launched on National Burns Awareness Day 2019 and ran for 3 months.
► This mixed-methods process evaluation using data from multiple sources showed that the SafeTea campaign was successful with a good reach, and that parents and professionals were engaged with the campaign and had high opinions of the campaign and materials.
► The launch on National Burns Awareness Day was critical for optimising publicity. However, a future campaign could run for a shorter period of time using additional social media content, more frequent posting and more funding for social media coordination and promotion.

Contributors LEC extracted the website usage data from Google Analytics, analysed the survey data and social media comments, integrated all of the results using data triangulation, contributed to the interpretation and discussion of the findings, produced all of the tables and figures, drafted the initial manuscript and revised the manuscript. CVB made substantial contributions to the conceptualisation and design of the study and the design and development of the SafeTea campaign and revised the manuscript. IB managed the social media accounts, conducted an evaluation of the social media data and extracted the key social media metrics, contributed to the interpretation and discussion of the findings and revised the manuscript. AE and AMK conceptualised the study, contributed to the design of the study and the design and development of the SafeTea campaign, contributed to the interpretation and discussion of the findings and revised the manuscript. All authors approved the final manuscript.
Banking Innovations and New Income Streams: Impact on Banks' Performance

Banks have increasingly moved beyond traditional lending activities, and there is a need to understand the underlying risks and effectively monitor the provision of such services. Diversification has been more pronounced for banks in the United States and Europe. De Young and Rice (2001) observed that between 1980 and 2001, non-interest income in the US commercial banking system increased from 0.77 percent to 2.39 percent of the aggregate banking industry assets, and increased from 20.31 percent to 42.20 percent of the aggregate banking industry operating income. For developed countries, Kaufman and Mote (1994) showed an increase in the share of non-interest income to total income in the banking sectors of most developed countries between 1982 and 1990. Esho et al. (2004) pointed out that Australian credit unions diversified their activities in the 1990s to reduce their reliance on interest revenue, which took three primary forms: (i) a change in pricing policy, with transaction fees on loans and deposits; (ii) new financial services, including insurance, funds management, and off-balance sheet activities that generated commissions and facility fees; and (iii) a shift in the portfolio mix of assets away from personal loans and advances into residential lending. Importantly, Feldman and Schmidt (1999) pointed out that the composition of non-interest income had been changing for the US economy, with fee income becoming the dominant source of non-interest income received by banks, made possible by technological and regulatory changes opening up new sources of non-interest income. Gamra and Plihon (2011) suggested that emerging market banks are required to innovate in services and products, to differentiate strategies, and to fundamentally transform their business into a much wider array of non-traditional services. Their paper shows that from 1997 to late 2007, emerging market banks saw non-interest income as a share of net operating revenue rise from 28.2 percent to around 36.7 percent, with the biggest increase of non-interest income being in trading. The shift towards non-interest income has been significant for Indian banking. Umakrishnan and Bandopadhyay (2005) compared the difference in income composition for new generation private sector banks, foreign banks, public sector and cooperative banks during 1999-2004 and concluded that the share of interest income in total income had been declining over the years. The RBI Report on International Trade in Banking Services (2010) showed that foreign bank branches operating in India had been more successful in generating income from fee-based services than Indian banks' branches operating outside India. Interestingly, the report also pointed out that while for foreign banks operating in India, 'derivative, stock, securities, foreign exchange trading services' and 'financial consultancy and advisory services' were the major sources of fee income, for Indian banks operating abroad, the largest proportion of fee income came from 'credit-related services' and 'trade-finance related services'.
Uppal (2010) showed that for banks in India, interest income was continuously decreasing on account of deregulation in interest rates while non-interest income was rising; Sahoo and Mishra (2012) reported a similar trend. Much of this shift in the Indian context has been contributed by the private sector and the foreign sector. Private sector banks and foreign banks in India have been successful in generating a greater proportion of their income from fee-based sources, while public sector banks in India have lagged behind in this context. In recent years, there have been increasing efforts by public sector banks to reduce dependence on fund-based sources, majorly through the sale of third-party products. However, as Figure 1 shows, the proportion of non-interest income in total income has not changed much for public sector banks, implying a persistent resistance to the adoption of non-interest streams. The economic explanation of such behaviour could be that banks are unsure of the strategic implications of such a shift.

The primary role of a bank is to accept deposits and make loans, profit being the difference between the costs of deposits and the earnings from lending. However, in the last two decades, the environment facing banks has changed drastically, with banks' income no longer confined to lending and income generated from their own funds. Fee-based income, or income earned from sources that do not involve exposure to a bank's own funds, is globally becoming more and more important for the bank's income statement. Following the methodology of diversification scores developed by Stiroh and Rumble (2006), this paper uses such diversification scores for a comparative analysis of bank groups in India. Using multiple regression analysis, it examines the impact of diversification and the increasing share of fee-based income on profitability and risk-adjusted profitability measures for all banks in India over the period 2005-2012. The paper thus tries to delve into the diversification brought about by the move to innovative income sources and its impact on bank profitability and income stability.

LITERATURE REVIEW
The primary role of banks as intermediaries channelizing savings into investments is underlined by deposit-taking and lending activities. Smith et al. (2003) pointed out that while the basic functions of banks and other financial service companies remained relatively constant over time, these functions are now being provided through different products and services. Economic forces have led to financial innovations, in turn fostering competition and eroding banks' cost advantage to an extent. Traditional banking, as a result, has lost profitability, with banks diversifying into new activities that may bring higher returns. The shift of banks towards new business lines and fee-based income has been more prominent in developed countries, and therefore much of this literature has emanated from the banking industries of Europe and the United States. De Young and Rice (2004b) documented that for the US economy, a part of the increase in non-interest income flow came from new lines of business made possible by deregulations introduced since the 1990s, while a part stemmed from producing traditional banking services with new production processes that were made possible by advances in information technology, communications channels, and financial processes.
In this context, a vital question being raised in the economic literature is whether the growth in income from fee-based activities has contributed to greater stability in bank income. Davis and Tuori (2000) showed that banks obtained diversification benefits in increasing non-interest income, which in turn helped to smooth profitability. Smith et al. (2003) examined the variability of interest and non-interest income and their correlation for the banking systems of EU countries for the period 1994-1998. This paper finds that the increased importance of non-interest income for most bank categories stabilized profits in the European banking system in those years. However, it does not establish that non-interest income is invariably more stable than interest income. Chiorazzo, Milani, and Salvini (2008) found that income diversification increased risk-adjusted returns for Italian banks during the period 1993-2003. Busch and Kick (2009), studying the impact of the growth of non-interest income on the financial performance and risk profile of German banks between 1995 and 2005, found evidence that risk-adjusted returns on equity and total assets had both been positively affected by higher fee income activities for German universal banks. They also found that savings and commercial banks having a greater share of fee-based income charged lower interest margins, implying subsidization between interest and fee business. Inaba and Hattori (2007) showed that Japanese commercial banks had also been expanding their fee-based business and found a positive correlation between Japanese commercial banks' fee business income and net interest income in the second half of the 1990s. They pointed out that this relationship led to an increase in the variability of their ROA but did not affect their management stability over that period. However, during 2001-2005, such a positive correlation was not clearly observed. Umakrishnan and Bandopadhyay (2005) indicated that diversifying to fee-based income was a more viable option for banks in the long run and needed a constant feel of the market requirement, innovation, and skill upgradation. Arora and Kaur (2009) examined the internal determinants of diversification of banks in India using aggregate bank-level data for foreign sector banks, nationalized banks, private banks, and the SBI group. They found that risk, cost of production, regulatory cost, and technological change were very significant in bringing variation in the income structure of the banks.

However, many studies pointed to a greater dependence on non-interest income contributing to increased volatility in bank income. De Young and Roland (2001) and Stiroh (2004) found this to hold for US firms. DeYoung and Roland (2001), analysing the quarterly movements in revenues and profits at 472 large and medium-sized banks between 1988 and 1995, found that earnings volatility increased with a greater share of revenue coming from fee-based activities. They pointed out three reasons why fee-based income may not be more stable than traditional banking activities. Firstly, banks may have qualitatively different relationships with their fee-based customers as opposed to their traditional banking customers, with relationships with the latter tending to be stronger. For example, during a downturn, the fall in revenue from fee-based income like mutual fund sales may be sharp, while interest earnings from lending activities are not likely to fluctuate much.
Secondly, expanding production of fee-based activities requires much greater fixed costs than increasing production of lending activities. Thirdly, as fee-based activities do not require banks to hold capital against them, banks can take advantage of this to raise return on equity. This creates incentives for banks to arbitrage risk-based capital regulations by transforming on-balance sheet risk from interest-based activities to off-balance sheet risk from fee-based activities. De Young and Rice (2003) showed that large banks tended to generate relatively more non-interest income. The paper also found that well-managed banks relied less heavily on non-interest income, while relationship banking tended to generate non-interest income. Further, some technological advances (e.g., cashless transactions, mutual funds) are associated with increased non-interest income, while other technological advances (e.g., loan securitization) are associated with reduced non-interest income at banks. Stiroh (2004), examining the link between risk-adjusted bank performance and diversification for community banks from 1984 to 2000, showed that higher non-interest income was negatively linked with risk-adjusted performance. Esho et al. (2004) pointed out, in their study spanning 198 Australian credit unions, that increased reliance on fee-income-generating activities was associated with increased risk. Stiroh and Rumble (2006) showed that diversification benefits were more than offset by increased exposure to non-interest activities. These non-interest activities were volatile but not more profitable than lending activities. Stiroh and Rumble (2006) decomposed the impact of the move to greater fee-based activities into a 'direct exposure effect' (coming from a greater dependence on new activities) and an 'indirect diversification effect' (coming from the resultant change in revenue concentration). Analysing the performance of US financial holding companies (FHCs) from 1997 to 2002, this paper showed that while FHCs adopted 'cross-selling' for diversifying revenue and lowering costs, this actually meant exposure to multiple income streams with similar shocks, and the greater correlation across revenue streams significantly hampered diversification benefits. Vallascas, Crespi, and Hagendorff (2011), analysing the impact of income diversification on the performance of Italian banks during the recent financial crisis, showed that institutions which were diversified before the crisis experienced the largest decline in performance during the financial crisis.

In the Indian context, Sahoo and Mishra (2012) found that banks with a greater extent of operational diversification suffered from the problem of greater fluctuations in financial performance, and a larger asset base did not necessarily help a bank to bring stability to its financial performance. Moreover, greater efforts by the banks towards creating an entry barrier or image advantage raised fluctuations in their financial performance. While banks in India have recognized the importance of raising income from fee-based activities and thereby reducing the dependence on fund-based income, there are many challenges in the way of moving to more fee-based activities and sustaining them, especially for public sector banks.
It may be that banks face certain barriers to adopting the orientation relevant to such new business lines, recognizing that fee-based income may not contribute to stable income and that the right choice of activities for income diversification is unclear.

DEFINITION AND METHODOLOGY
The empirical analysis uses data on revenue sources and performance measures of banks in India for the period 2005-2012. The key variables to be used are identified, and the importance of these variables is discussed in the context of the study.

Diversification Scores
Following the methodological construct of Stiroh and Rumble (2006), diversification scores are built for the banks. In this paper, two diversification ratios are considered. The first considers the diversification of bank income into interest and non-interest income, and the second considers the diversification of non-interest income into 'commission, exchange and brokerage income' and other components. 'Other income' for banks in India comprises 'commission, exchange and brokerage', net profit (loss) on sale of investments, net profit (loss) on revaluation of investments, net profit (loss) on sale of land and other assets, net profit (loss) on exchange transactions, and miscellaneous income. In this study, the income from 'commission, exchange and brokerage' is denoted as 'fee income'. This methodologically improves upon the existing literature by introducing a diversification score for non-interest income, which helps to underline how banks generate their non-interest income. The components of non-interest income other than fee income are largely income generated from the bank's own investments. This diversification score thus helps to establish whether banks generate their non-interest income from only fee income, only their own investments, or have diversified non-interest income generation by focusing on both. Further, this refinement in methodology also helps to find the impact of such diversification of non-interest income on bank performance. This is important in the face of many studies showing that trading/investment income tends to be volatile and to generate lower risk-adjusted returns (Gamra & Plihon, 2011; Stiroh & Rumble, 2006; Umakrishnan & Bandopadhyay, 2005). Importantly, it is 'commission, exchange, and brokerage' which encapsulates the income generated from the provision of new services and products by banks in recent years. Considering a separate diversification score for this component means the focus can be on the impact of diversification of non-interest income on profitability and stability of income for banks. The first diversification score simply captures the impact of diversified income on bank performance. However, the second diversification score goes beyond that to underline the impact of the banks' movement into newer income streams (which had the most profound impact on 'fee income' in the years under study) on bank performance. It helps to analyse whether, with banks focusing on generating non-interest income, the movement to newer income streams (which will raise 'fee income') brings better performance, or whether the banks should focus more on income from their own investments to positively impact profitability and stability. Further, the focus can also be on the distribution of non-interest income into fee-based and other components of non-interest income.
Further, along with studying the impact of the share of fee income in total income (SH_FEE) and the share of non-interest income in total income (SH_NON) on bank performance, the paper also considers the impact of the share of fee income in non-interest income (SH_FOT) on bank performance. It becomes crucial to consider separately the income generated from 'fee income', along with 'other' or non-interest income, to see the implications of both the increasing share of non-interest income and the increasing share of 'fee income' in non-interest income for bank profitability and stability. Thus, diversification scores are generated from these two different indicators. Following the methodological construct of Stiroh and Rumble (2006), the diversification scores are defined as

DIV1 = 1 - (SH_NON^2 + (1 - SH_NON)^2)
DIV2 = 1 - (SH_FOT^2 + (1 - SH_FOT)^2)

where each score ranges from 0 (fully concentrated income) to 0.5 (an even split between the two income components), and diversification scores for each period were averaged.

Risk-adjusted Measures of Performance
A third measurement of risk-adjusted performance is also introduced following Stiroh and Rumble (2006), the 'Z' score, defined as

Z = (ROA + E/A) / sigma_ROA

where E/A is the mean equity-to-asset ratio and sigma_ROA is the standard deviation of ROA. The Z score thus shows risk-adjusted performance, with a higher score denoting better performance.

Methodological Construct
a. A comparative study of the share of income coming from fee-based activities for public sector banks vis-à-vis private sector banks (old and new) and foreign banks is done.
b. The Z scores are compared with the diversification ratios for the bank groups.
c. For analysing the impact of diversification on risk-adjusted performance, two basic empirical specifications (equations (6), (7), and (8)) are used, regressing profitability and risk-adjusted performance measures on the diversification scores, the income shares, and control variables.

The paper first compares the income generated from fee-based and non-fee income sources for different bank groups in India. Using the diversification ratios DIV1 and DIV2, it further compares portfolio diversification across bank groups in India and uses Z scores to compare risk-adjusted performance between different bank groups. Additionally, the impact of diversification in income and the rising share of fee-based income in other income on risk-adjusted performance for banks in India is examined using regression analysis.

Distribution and Diversification of Total Income
For public sector banks, the proportion of fee-based income in total income over the period 2005-2011 is on average 6.25 percent, while for new private sector banks and foreign banks it is 14.41 percent and 16.10 percent, respectively. Table 1 shows the percentage of income coming from fee-based activities for the three groups of banks considered in this study. As shown in Table 1, foreign banks and new private sector banks derive a considerably larger share of their income from fee-based activities than public sector banks. Table 2 gives the percentage of 'interest income' and 'other income' in total income for the three groups of banks considered for the study. It may also be seen from Table 5 that for the Indian banking industry as a whole, the share of non-interest income in total income (SH_NON) stands at 11 percent, with a maximum of 51 percent. Again, the share of fee income in total income (SH_FEE) is 9 percent for the industry.

Distribution and Diversification of Non-interest Income
It is also observed that the share of 'fee income' in non-interest income stands at an average of 46 percent for the Indian banking industry (Table 5). This implies that the distribution of non-interest income is in favour of an increasing share of fee income, which is also corroborated for the different bank groups, as seen in Table 3.
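To make the score construction above concrete, the following minimal sketch computes both diversification scores and the Z score; the variable names and sample figures are illustrative assumptions, not data from the study.

```python
import numpy as np

def diversification_score(share: float) -> float:
    """Herfindahl-complement score, 1 - (s^2 + (1 - s)^2).

    0 means fully concentrated income; 0.5 means an even split
    between the two income components."""
    return 1.0 - (share ** 2 + (1.0 - share) ** 2)

def z_score(roa: np.ndarray, equity_to_assets: np.ndarray) -> float:
    """Risk-adjusted performance: (mean ROA + mean E/A) / sd(ROA)."""
    return (roa.mean() + equity_to_assets.mean()) / roa.std(ddof=1)

# Illustrative bank-level figures (hypothetical, not study data)
sh_non = 0.11   # non-interest income / total income
sh_fot = 0.46   # fee income / non-interest income

div1 = diversification_score(sh_non)   # diversification of total income
div2 = diversification_score(sh_fot)   # diversification of non-interest income

roa = np.array([0.9, 1.1, 0.8, 1.0, 0.7])   # ROA per year, percent
e_a = np.array([6.0, 6.2, 5.9, 6.1, 6.0])   # equity/assets per year, percent
print(round(div1, 3), round(div2, 3), round(z_score(roa, e_a), 2))
```

Note how the score rises toward its maximum of 0.5 as the split between the two income components becomes even, which is what makes it a convenient one-number summary of diversification.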
The diversification in total income stands at 0.27, implying a moderately diversified portfolio for the Indian banks, while diversification in non-interest income is quite high at 0.39, implying greater diversification of non-interest income compared to total income (Table 5). Looking once again at Table 3, it is observed that public sector banks not only generate much of their non-interest income from 'fee income' (SH_FOT at 43%), they also have a diversified non-interest income base. Private banks (new) have around 62 percent of non-interest income coming from 'fee income', with high diversification of non-interest income. Private banks (old) also have a well-diversified non-interest income portfolio, with 40 percent of the non-interest income coming from 'fee income'. Foreign banks have relatively less diversification in non-interest income, but 49 percent of their non-interest income is generated from 'fee income'.

Impact of Diversification on Risk-adjusted Performance
The descriptives of the key dependent variables, averaged over 2005-2011, as given in Table 6, show that there is high variability in performance measures and risk-adjusted performance for banks in India over this period. ROA averages around 0.90 for banks in India, while ROE averages around 11.26 but with high variability; risk-adjusted performance measures (RAROA, RAROE, and the Z score) average around 4-4.5 with considerable variability. The descriptive statistics for the key predictor variables and control variables are seen in Table 5. As seen in the Table, diversification in total income (DIV1) varies from 0.1 to 0.5, with a mean of 0.27 for the sample, representing a moderately diversified portfolio for the banks in India. Diversification in non-interest income stands higher at 0.39. The share of income coming from commission, exchange, and brokerage activities averages around 9 percent for banks in India, while the share of income contributed by non-fund-based activities together averages around 11 percent for Indian banks (Table 5). The share of 'fee' income in non-interest income is high at 46 percent. Further, control variables (Log Assets, Log Capital, and the Ratio of Non-performing Assets to Income) are introduced, all of which can have a potential impact on performance (as measured by profitability and risk-adjusted profitability). Here again, there is wide variability in these variables for banks in India over this period.
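The kind of specification estimated next can be sketched as an ordinary least-squares regression of a performance measure on the diversification scores, income shares, and controls. The data below are simulated and all variable names and coefficients are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 80  # hypothetical bank-year observations

df = pd.DataFrame({
    "DIV1": rng.uniform(0.1, 0.5, n),       # diversification of total income
    "DIV2": rng.uniform(0.2, 0.5, n),       # diversification of non-interest income
    "SH_FEE": rng.uniform(0.02, 0.20, n),   # fee income / total income
    "log_assets": rng.normal(12.0, 1.0, n),
    "log_capital": rng.normal(9.0, 1.0, n),
    "npa_ratio": rng.uniform(0.0, 0.1, n),  # non-performing assets / income
})
# Simulated dependent variable (coefficients are arbitrary)
df["ROA"] = (0.5 + 1.2 * df["DIV1"] + 2.0 * df["SH_FEE"]
             - 3.0 * df["npa_ratio"] + rng.normal(0.0, 0.2, n))

X = sm.add_constant(df[["DIV1", "DIV2", "SH_FEE",
                        "log_assets", "log_capital", "npa_ratio"]])
fit = sm.OLS(df["ROA"], X).fit()
print(fit.summary())  # F statistic, betas and p-values, as in Tables 7-9
```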
Analysing the impact of diversification and the increasing share of income coming from fee-based sources, using the empirical specifications in equations (6), (7), and (8), the null hypothesis that adding the predictor variables does not improve the relation between the independent and dependent variables is rejected. As seen in Table 7, the ANOVA F values are significant for all three empirical specifications, referred to as Model 1, Model 2, and Model 3. The beta values and their significance are presented in Table 8, and the impact direction of the predictor variables is shown in Table 9. The collinearity statistics suggest that tolerance levels are high. It is seen that for the dependent variable ROA, the impacts of diversification of total income and of the increasing share of fee income in both total income and non-interest income are significant. However, for ROE and the risk-adjusted measures, the impact of the diversification scores and of SH_FEE, SH_FOT, and SH_NON is not statistically significant. The beta values show that the impact of diversification in total income is positive on profitability but negative on risk-adjusted measures. An increasing share of non-interest income in total income has a positive impact direction, while increasing diversification of non-interest income may have a negative impact on profitability and risk-adjusted measures, though these results are not statistically significant. Again, the impact directions and beta values show a positive impact of an increasing share of 'fee income' in both total income and non-interest income on profitability as well as on risk-adjusted measures.

IMPLICATIONS OF RESULTS
The paper asks how innovation-led diversification has been affecting bank profitability and stability of income in the Indian context. There are two distinct ways in which this paper tries to add to the existing body of literature. First, it tries to see the impact of diversification in non-interest income separately from diversification in total income. The rationale behind this is the fact that the major components of non-interest income in recent years have come from innovative income streams and the gestation of new products and services, which have all contributed to increasing 'fee income' and the consequent diversification of non-interest income. This is in keeping with global and Indian studies that have focused separately on fee-income-based diversification and other non-interest-income-based diversification strategies (Stiroh & Rumble, 2006; Gamra & Plihon, 2011; Umakrishnan & Bandopadhyay, 2005; Sahoo & Mishra, 2012). Secondly, while the results show a positive impact of diversification on profitability, the paper underlines that the impact direction of diversification in total income on risk-adjusted measures clearly suggests the need to choose stable sources of fee income for the future. Moreover, diversification in non-interest income may not impact profitability and risk-adjusted income positively, as discussed. Evidently, for foreign and new private sector banks, a greater proportion of income comes from fee-based activities compared to public sector banks. However, as seen in the empirical results, while diversification and an increasing share of fee income in total income positively impact Return on Assets, the impact on Return on Equity and other risk-adjusted performance measures is not statistically significant. Moreover, the impact direction of the diversification measures may be negative, which is in agreement with what many studies have shown in the US, European, Australian, and Indian contexts (Stiroh & Rumble, 2006; Stiroh, 2004; De Young & Roland, 2001; Esho et al., 2004; Inaba & Hattori, 2007; Sahoo & Mishra, 2012). The results suggest that while public sector banks need to generate more income from fee-based activities, it would be imperative to choose sources of fee-based income that remain stable and have a positive impact on risk-adjusted measures. The choice of income streams from which non-interest income can be generated under these circumstances becomes relevant. Encouragingly, a greater proportion of 'fee income' in non-interest income positively impacts both profitability and risk-adjusted performance measures. However, diversification of non-interest income may, in fact, negatively impact profitability or stability of income. The economic literature points out that the impacts of fee-based income and of other components of non-interest income, especially trading income, on stability of income will differ.
Gamra and Plihon (2011), for example, show that fee income can generate some improvement in risk-adjusted measures, while trading income implies lower performance as measured by risk-adjusted returns. Stiroh and Rumble (2006) point to trading income as the most volatile part of non-interest income. Again, for fee-based income, the choice of new products or streams can determine the likely stability of income. Umakrishnan and Bandopadhyay (2005) also point out, in the Indian context, that investment income is the most volatile across all ownership groups.

CONCLUSION
The paper looks at the impact of new business lines and income streams on banks' profitability and stability. It is clear that the adoption of innovations leading to new business lines confronts certain barriers as these evolve from gestation to implementation. Globally, the impact of the adoption of these income streams, and the consequent diversification, on profitability and stability of income for banks has not been clear. In Indian banking, the move to innovation adoption and new income streams has been more pronounced for new private and foreign banks, while there appears to have been a certain hesitation on the part of public sector and old private banks. The study points out that while the impact of diversification of both total income and non-interest income (encapsulating newer income streams) on profitability is positive and significant, the impact on stability is not. The implication of the study may be that banks adopting new income streams must choose those that are likely to enhance stability of income. As seen in the paper, the distribution of non-interest income can significantly impact stability of income, as an increasing share of 'fee income' in non-interest income may have a positive impact on risk-adjusted performance. Trading income, an important component of non-interest income, seems to be more volatile, while core service income, that is, the fee, commission, and brokerage income generated from the provision of new products and services, may lead to greater stability in income. Future research needs to delve deeper into these aspects of innovation-led businesses that banks may consider for adoption.

LIMITATIONS AND FUTURE RESEARCH DIRECTION
The study has confined itself to finding the impact of increasing diversification on performance for banks in India and has not delved into the impact separately for different bank groups. This could be an important direction for future research. Further, on account of data being unavailable on the various components of non-interest and fee income, the impact of an increasing share of the various components of fee-based income on profitability and stability could not be analysed; this needs to be looked into in future. This will help in understanding which components of fee income can contribute to profitability and stability for banks.

Note: the superscripts *, **, and *** denote statistical significance at the 1%, 5% and 10% levels, respectively.
Clavicular non-union treated with fixation using locking compression plate without bone graft

Background
Articles reporting the segmental defect size at which a clavicular non-union requires bone grafting are scarce. This study evaluated the functional and radiologic results of fixation by locking compression plate (LCP) without bone graft when the defect size is less than 2 cm following bone sclerosis removal for the treatment of clavicular non-union.

Methods
The study included 17 patients with mid-shaft clavicular non-union. All patients underwent bone sclerosis resection and fixation using LCP without bone graft. The patients were evaluated preoperatively and after a minimum of 24 months (mean, 44.47 months; range, 24 to 60 months) postoperatively in terms of the disabilities of the arm, shoulder and hand (DASH) score, the Constant-Murley score, and radiography.

Results
No patients were lost to follow-up. The mean DASH score improved from 38.76 ± 7.76 (31.00-46.52) points preoperatively to 19.88 ± 7.18 (12.70-27.06) points 2 years postoperatively (P < 0.01). The mean Constant-Murley score improved from 41.59 ± 8.81 (32.78-50.40) points preoperatively to 75.47 ± 13.50 (61.97-88.97) points 2 years postoperatively (P < 0.01). Radiographs revealed fracture union in all patients. No correlations between the defect size and the postoperative Constant-Murley score or between the defect size and the postoperative DASH score were found based on Pearson tests. No complications, particularly acromioclavicular joint complications and sternoclavicular joint complications, were observed.

Conclusions
From the findings of our study, we can suggest that bone sclerosis resection and fixation using LCP without bone graft is effective for the treatment of clavicular non-union involving a gap of less than 2 cm and has a low rate of complications.

Introduction
Clavicle fractures account for 5 to 10% of all fractures [1]. Clavicular non-union is a rare complication and has been reported in between 0.1 and 24% of patients following non-operative treatment [2-5]. It can be disabling and presents mainly with pain and limitations of shoulder movement [6-8]. Contributing factors to clavicular non-union include clavicle shortening of more than 15-20 mm, female sex, fracture comminution, fracture displacement, older age, severe initial trauma, and unstable lateral fractures (Neer type II) [8]. Schnetzke et al. studied 58 patients with clavicle fracture healing complications and reported rates of 33% atrophic non-union, 20% hypertrophic non-union, 7% mixed type, and 40% delayed fracture healing [9]. Symptomatic non-unions, presenting as functional shoulder impairment and pain, are usually treated surgically. Surgical methods include resection of part of the clavicle or the entire clavicle, and clavicle reconstruction. With the development of implant technology for anatomic reconstruction, clavicle resection has been abandoned. Currently, the general principles for the treatment of clavicular non-unions are the same as those for other fracture sites, i.e., bone union is achieved with fracture site reduction, stable fixation, and bone graft transplantation from the site itself or the iliac crest [10,11]. Fixation is often internal and must provide stability. Many authors recommend the use of bone grafts [2,9,10,12].
The disadvantages of bone graft include the limited volume of available bone [13], increased operative time and blood loss [14], and donor-site morbidity [15,16]. Articles reporting the segmental defect size at which a clavicular non-union requires bone grafting are scarce. In recent years, we have treated non-unions of the clavicle with defects less than 2 cm (after bone sclerosis resection) directly using compressive locked-plate internal fixation without bone graft. Based on the evaluation of postoperative symptom relief, functional improvement of the shoulder, clavicle healing, and complications, we hypothesized that direct internal fixation using an LCP without bone graft for the treatment of clavicular non-unions would result in reduced pain levels, improved shoulder function, and clavicular union, without complications.

Materials and methods
A prospective clinical study was designed. The inclusion criteria consisted of the following: clavicular non-union lasting more than 6 months, functional shoulder impairment and pain, and a defect after bone sclerosis removal of less than 2 cm. The exclusion criteria were the following: clavicular non-union lasting less than 6 months; no functional shoulder impairment or pain; a defect after bone sclerosis removal of more than 2 cm; infected non-unions, tumors, or pathologic fractures; patients with cancer or compromised immune systems; and patients refusing surgical treatment. Between January 2009 and June 2014, 17 patients (12 men and 5 women; age range, 13 to 74 years; mean age, 38.5 years; 9 left side and 8 right side; 12 initially treated non-operatively and 5 treated with fracture reduction and internal fixation) with clavicular non-union were included in this study. On standard A-P X-rays of the clavicle, 8 of the non-unions were atrophic (no osteogenesis) and 9 were hypertrophic. Non-union and non-union type were defined by the lack of both periosteal and endosteal healing responses and absence of bridging of the fracture after 6 months. In cases of doubt, a CT scan was performed to support or reject the diagnosis based on the conventional radiographic images and clinical signs, such as pain and weakness. The demographic data, injury-to-surgery time, clavicular non-union type, defect size, and initial treatment of each patient were recorded (Table 1). All patients reported shoulder pain and impairments of shoulder function. The preoperative radiographs revealed no other injuries. The operations and clinical follow-ups were conducted by two senior surgeons (K.T., X.T.). The defect sizes were measured after bone sclerosis resection.

Clinical evaluation
The disabilities of the arm, shoulder and hand (DASH) score and the Constant-Murley score were used to evaluate shoulder pain and function preoperatively and at follow-up. All patients underwent X-rays before surgery and after a minimum of 24 months to assess fracture union.

Surgical technique
Under general anesthesia, the patient was placed in the supine position with a large bump placed between the scapulae. A 6-cm incision was made along the clavicle anterosuperiorly, centered over the fracture site. Dissection was carefully performed to avoid stripping the periosteum and any injury to the subclavian vessels or the anterior fibers of the brachial plexus, and the fracture site was fully exposed. An oscillating saw was used to remove bone sclerosis and/or the fracture callus until fracture-end bleeding was observed and the fragments were smooth.
The medullary canal was opened via Kirschner wire perforation. A steel ruler was used to measure the gap size. If the defect size was within 2 cm, the fragments were aligned and immobilized with a pointed reduction clamp and fixed with an LCP (Synthes, Switzerland). The LCP was placed on the anterosuperior side of the clavicle. The anatomic reduction and screw lengths were confirmed by fluoroscopy. If the defect was greater than 2 cm, bone grafting was used, and the patient was excluded from these analyses. The wounds were closed in a routine manner, and sterile compression dressings were applied.

Postoperative rehabilitation
The operated extremity was placed in a sling for comfort. Pendulum (also known as Codman) exercises were taught to the patient, and the patient was encouraged to use the arm but to avoid heavy lifting, pushing, or pulling. Full return to activities was allowed when fracture healing was present, typically at 2 to 3 months.

Statistics
The statistical analyses were performed using SPSS software version 13.0 (IBM, Armonk, NY). Paired-sample t tests and Mann-Whitney U tests were used to compare the DASH scores and Constant-Murley scores, respectively, before and after the procedures. Pearson tests were used to analyze the correlation between the defect size and the postoperative Constant-Murley scores and between the defect size and the postoperative DASH scores. The level of significance was set at 95%, and P < 0.05 was considered significant.

Results
According to the previously described criteria, 17 patients were included and treated with fixation using an LCP without bone graft; 3 patients were excluded because the defect size was more than 2 cm. The fracture sites of 9 patients were hypertrophic and 8 were atrophic. The mean follow-up time was 44.47 ± 14.55 (24-60) months. No patients were lost to follow-up. The mean DASH and Constant-Murley scores improved significantly from the preoperative assessment to 2 years postoperatively (P < 0.01; Fig. 1). No correlations between the defect size and the postoperative Constant-Murley score or between the defect size and the postoperative DASH score were found based on Pearson tests (Fig. 2). The radiographs revealed fracture union in all patients in the study group (Fig. 3). No complications, particularly acromioclavicular joint complications and sternoclavicular joint complications, were observed.
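The analysis plan in the Statistics section can be illustrated with a minimal SciPy sketch; the score and defect values below are simulated stand-ins, not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 17  # patients

# Simulated scores, roughly matching the reported means and SDs
pre_dash = rng.normal(38.8, 7.8, n)
post_dash = rng.normal(19.9, 7.2, n)
pre_cm = rng.normal(41.6, 8.8, n)
post_cm = rng.normal(75.5, 13.5, n)
defect_cm = rng.uniform(0.2, 2.0, n)  # defect size after resection, cm

# Paired-sample t test on DASH scores (pre vs. post)
t_stat, p_t = stats.ttest_rel(pre_dash, post_dash)

# Mann-Whitney U test on Constant-Murley scores
u_stat, p_u = stats.mannwhitneyu(pre_cm, post_cm)

# Pearson correlations: defect size vs. postoperative scores
r_dash, p_r_dash = stats.pearsonr(defect_cm, post_dash)
r_cm, p_r_cm = stats.pearsonr(defect_cm, post_cm)

print(f"paired t: t = {t_stat:.2f}, p = {p_t:.4g}")
print(f"Mann-Whitney U: U = {u_stat:.1f}, p = {p_u:.4g}")
print(f"defect vs. DASH: r = {r_dash:.2f}, p = {p_r_dash:.3f}")
print(f"defect vs. Constant-Murley: r = {r_cm:.2f}, p = {p_r_cm:.3f}")
```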
Discussion
There is general agreement that fracture site reduction and stable fixation are key to promoting clavicle union, but there is no agreement in published studies on the requirement for bone substitutes. Most authors recommended bone grafting [2,10,17-19]. One study reported union rates of 71% with the use of bone graft in a series of 21 clavicular non-unions treated after the first surgery; six complications required a revision procedure, three for infection and three for mechanical failure [10]. A retrospective study conducted by Schnetzke et al. revealed that bone graft transplantation can result in a significantly shorter time to bone consolidation, better clinical results in terms of the DASH and Constant-Murley scores, and lower revision rates compared with non-bone-graft transplantation [9]. However, it did not report the defect size after preparation in the grafting and no-grafting groups. Harvesting of bone graft can itself have a high morbidity rate, up to 30% in one study [20]; this includes graft site infection, pain, and fracture. By contrast, others suggest that bone grafting may be unnecessary in every case of clavicular non-union. Some suggest that bone grafts should be used only for atrophic non-unions [21-23]. Baker et al. reported on a series of patients with clavicular non-unions that were fixed with a pre-contoured locking plate and no bone graft. All of these patients returned to work and regular sports activities, a finding consistent with our series [21]. However, that study did not provide information regarding the defect lengths of all patients after bone sclerosis removal, and it did not analyze the extent of clavicle shortening, which did not affect clinical outcomes. In the present study, the defect size was less than 2 cm in 17 of 20 patients; all 17 included patients achieved pain relief and functional improvement, and the X-rays revealed clavicle fracture healing in all patients. No correlations between the defect size and the postoperative Constant-Murley score or between the defect size and the postoperative DASH score were found. Matsumura et al. suggested in a biomechanical study that a clavicle shortening of 10% or more alters scapular position. Moreover, this author reported that the mean length of the clavicle was 158.1 ± 7.0 (151.1-165.1) mm (range, 144.0-176.4 mm) [24]. However, in the present study, a cutoff point for the defect was set at 2 cm, which slightly exceeds 10% of the mean clavicle length reported by Matsumura, and clavicle shortening below 2 cm did not affect shoulder function. This discrepancy may be because Matsumura's results were based on a cadaveric study. Currently, there is no consensus regarding the optimal fixation devices for clavicular non-union. In a comparison of dynamic compression plating (DCP) in 16 patients and low-contact dynamic compression plating (LC-DCP) in 17 patients, Kabak et al. reported that LC-DCP is a more reliable treatment method than DCP because the LC-DCP has several technical advantages that make it an ideal implant for satisfying the unique anatomic and biomechanical requirements of internal fixation of clavicular non-union [25]. In the present study, LCP was used to fix clavicular non-unions and obtained good outcomes. The LCP offers a low-profile solution for plating of the clavicle. The titanium plate offers strength, with a rounded profile and a low-profile screw-plate interface, which is known to promote early callus formation [23].

There are several limitations to our study. First, the sample size was relatively small; calculation of an accurate cutoff point for the defect size, which would help to decide whether or not to transplant bone graft, requires a larger sample and ROC curve analysis. Second, the follow-up time was relatively short. Third, there was no control group. Thus, additional high-level evidence, such as a randomized controlled trial, is needed.

Conclusions
In conclusion, from the findings of our study, we can suggest that bone sclerosis resection and fixation using LCP without bone graft is effective for the treatment of clavicular non-union involving a gap of less than 2 cm and has a low rate of complications.

Abbreviations
DASH: Disabilities of the arm, shoulder and hand; DCP: Dynamic compression plating; LC-DCP: Low-contact dynamic compression plating; LCP: Locking compression plate

Availability of data and materials
All data and materials were in full compliance with the journal's policy.

Authors' contributions
All surgical procedures were carried out by KT and XT. KT and WC designed this study.
WC, CsY, and BhZ participated in the patient selection, investigation in the outpatient clinic and radiographic assessment, literature search, and data monitoring. WC and XT carried out the statistical analysis and manuscript writing. All authors have read and approved the final manuscript.

Ethics approval and consent to participate
The study was approved by the Clinical Academic Committee of the Third Military Medical University Southwest Hospital and was approved by all the members. The study was conducted in compliance with the Declaration of Helsinki.

Fig. 3 (a) Pre-operative radiograph of a clavicle fracture non-union (atrophic) in a 31-year-old female. (b) Immediate post-operative radiograph showing fixation with three screws on either side of the fracture. (c) One year after surgery, the internal fixation devices were removed; the final follow-up radiograph shows fracture union.
Safe-Learning-Based Location-Privacy-Preserved Task Offloading in Mobile Edge Computing

Mobile edge computing (MEC) integration with 5G/6G technologies is an essential direction in mobile communications and computing. However, it is crucial to be aware of the potential privacy implications of task offloading in MEC scenarios, specifically the leakage of user location information. To address this issue, this paper proposes a location-privacy-preserved task offloading (LPTO) scheme based on safe reinforcement learning to balance computational cost and privacy protection. This scheme uses the differential privacy technique to perturb the user's actual location to achieve location privacy protection. We model the privacy-preserving location perturbation problem as a Markov decision process (MDP), and we develop a safe deep Q-network (DQN)-based LPTO (SDLPTO) scheme to select the offloading policy and location perturbation policy dynamically. This approach effectively mitigates the selection of high-risk state-action pairs by conducting a risk assessment for each state-action pair. Simulation results show that the proposed SDLPTO scheme has a lower computational cost and location privacy leakage than the benchmarks. These results highlight the significance of our approach in protecting user location privacy while achieving improved performance in MEC environments.

Introduction
With the continuous development of mobile communication technology, mobile edge computing (MEC) has emerged as a new computing paradigm and has gained significant attention and application. MEC has been successfully applied in various major scenarios such as autonomous driving, smart cities, smart homes, virtual reality, and facial recognition [1,2]. However, during edge computing applications, the leakage of user location privacy poses a potentially significant risk [3].

Traditional task offloading strategies typically focus on reducing latency and network resource consumption, but they often overlook the protection of user location privacy. For instance, attackers can infer user location information by monitoring the status of edge servers or wireless channels [4-6]. Therefore, when users offload their computation tasks to edge servers in the context of MEC, it is important to minimize latency and network resource consumption and to ensure user privacy protection.

In order to protect user location privacy while meeting the performance requirements of MEC applications, such as response time and energy consumption, it is necessary to take corresponding technical measures. These measures include location perturbation, anonymization, and differential privacy [7-9]. However, current research on location privacy protection in MEC applications has the following major shortcomings:
1. The system lacks universality and flexibility and is only suitable for static scenarios or single-application demands [10,11], making it difficult to adapt to ever-changing MEC scenarios.
2. The system falls short of providing adequate security guarantees, leaving it vulnerable to internal and third-party attacks that may compromise the accuracy of location data [12,13]. The lack of a trusted identity verification mechanism in the authentication system increases the risk of user location information leakage.
3. The system lacks a dynamic balance between the computational cost of MEC systems and users' privacy requirements. Notably, the protection of user location privacy can lead to an increase in the computational burden on MEC servers [14].
To address the aforementioned shortcomings, it is necessary to enhance the comprehensiveness of privacy protection mechanisms in MEC applications. This includes establishing secure communication channels, employing secure computing techniques, and taking measures to prevent untrusted third-party attackers from leaking user location privacy [15,16]. In MEC applications, the distance and wireless channel conditions between users and edge servers are closely related: the closer the distance, the better the channel conditions, and the farther the distance, the worse the channel conditions [17]. If the edge server is untrusted or compromised, attackers can infer wireless channel information by monitoring users' task offloading rate and thereby deduce their location information. In the context of MEC systems, location perturbation is crucial in preserving user privacy by adding noise to or modifying sensitive location information. Differential-privacy-based location perturbation techniques have been studied to protect users' location privacy [18]. Thus, this paper investigates location privacy protection in MEC systems based on differential privacy.

This paper aims to investigate the issue of location privacy protection in MEC applications. In addition, different tasks have varying sensitivities to energy consumption and computation delay, resulting in different energy consumption and latency requirements. It is therefore important to consider the dynamic trade-off between computation cost and user privacy requirements. By tackling the challenges associated with location privacy protection and enhancing MEC performance, we can attain sustainable development of location services and optimize MEC applications. To address the trade-off between privacy protection level and energy consumption/latency performance in MEC, we design an objective function that considers both location privacy and computation cost, aiming to maximize the overall performance of the MEC system.

Traditional research on balancing privacy protection, energy consumption, and latency has often relied on rule-based approaches, which typically require predefined rules and models [7]. These approaches neglect the dynamic network environment and fail to adapt to complex and changing protection requirements. In contrast, reinforcement learning (RL) stands out by learning through iterative interaction with the environment, facilitating adjustments to environmental changes and uncertainties. Unlike traditional decision-making approaches that often require manual strategy adjustments, RL achieves adaptive strategy refinement through trial-and-error learning. Moreover, deep RL (DRL) combines the strengths of deep learning and RL, employing neural networks to glean insights from the environment and optimize decision-making strategies. As a result, the integration of DRL enhances decision effectiveness, providing a more robust framework for addressing complicated challenges in dynamic environments. Notably, the inclusion of safe learning mechanisms empowers the learning agent to avoid the selection of high-risk state-action pairs [19]. Thus, we develop a safe DRL algorithm to solve the designed problem.
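As a rough illustration of this safe-learning idea, the sketch below masks out actions whose estimated risk exceeds a threshold before applying epsilon-greedy selection over the Q-values. The function, the risk estimates, and the thresholding rule are hypothetical stand-ins, not the exact SDLPTO risk assessment developed later in the paper.

```python
import numpy as np

def safe_select_action(q_values: np.ndarray,
                       risk_values: np.ndarray,
                       risk_threshold: float,
                       epsilon: float,
                       rng: np.random.Generator) -> int:
    """Epsilon-greedy selection restricted to low-risk actions.

    q_values / risk_values hold per-action estimates for the current
    state (e.g., from a DQN and a separate risk estimator)."""
    safe = np.flatnonzero(risk_values < risk_threshold)
    if safe.size == 0:
        # No action is deemed safe: fall back to the least risky one.
        return int(np.argmin(risk_values))
    if rng.random() < epsilon:
        # Explore, but only among the safe actions.
        return int(rng.choice(safe))
    # Exploit: best Q-value among the safe actions.
    return int(safe[np.argmax(q_values[safe])])

rng = np.random.default_rng(0)
q = np.array([0.8, 1.5, 1.2, 0.3])      # hypothetical Q-values
risk = np.array([0.1, 0.9, 0.2, 0.05])  # hypothetical risk estimates
print(safe_select_action(q, risk, risk_threshold=0.5, epsilon=0.1, rng=rng))
```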
This paper proposes a safe deep Q-network (DQN)-based location-privacy-preserved task offloading (SDLPTO) scheme to dynamically balance computational cost and privacy protection. This scheme utilizes differential privacy techniques to protect user location privacy while considering the trade-off between energy consumption, latency, and privacy protection. Simulation results demonstrate the performance advantage of our proposed scheme compared to benchmarks. The main innovations of this paper can be summarized as follows:
1. We propose a location-privacy-aware task offloading framework that utilizes differential privacy technology to design a perturbed distance probability density function, making it difficult for attackers to infer the user's actual location from a fake one.
2. We model the privacy-preserving location perturbation problem as a Markov decision process (MDP). We use the DRL method to adaptively select location-privacy-preserved task offloading (LPTO) policies to avoid location privacy leakage while ensuring computational performance in a dynamic MEC system. This solution jointly considers location privacy and computational offloading performance, enabling a balance between them.
3. We develop an SDLPTO scheme to find the optimal location-privacy-preserved task offloading policy. We utilize the DQN algorithm to capture the system state and accelerate policy selection in a dynamic environment. Meanwhile, we implement a safe exploration algorithm for location perturbation and offloading decisions, mitigating potential losses from high-risk state-action pairs.
4. Simulation results demonstrate that our proposed SDLPTO scheme better balances location privacy and offloading costs. The scheme consistently outperforms benchmark schemes across various task sizes and perturbation distance ranges, demonstrating its advantage in preserving location privacy while minimizing offloading overhead.

The subsequent parts of the paper are organized as follows: Section 2 discusses related work. Section 3 presents the proposed system model, location perturbation model, and problem formulation. Section 4 introduces the safe DQN-based location-privacy-preserved task offloading scheme. Section 5 gives simulation results and performance analysis, and Section 6 concludes the paper.

Related Work
As a distributed computing model that pushes data processing and storage to the network edge, edge computing has been expanding its application scope, but privacy issues have become increasingly prominent. Data encryption, K-anonymity, blockchain, and location perturbation techniques [7,20-22] have been studied in the context of privacy protection in MEC. More specifically, location perturbation is a technique employed to protect privacy by introducing modifications or perturbations to the original location data. Various technologies, such as differential privacy, path cloaking, temporal clustering, and location truncation, can be utilized for location perturbation [3,23,24].

In recent years, many works have utilized differential-privacy-based location perturbation techniques to effectively protect users' location privacy [14,18]. Differential privacy technology has superior privacy protection effects and can prevent attackers from re-identifying data based on known background knowledge [18,25]. In [18], Wang et al.
propose a location-privacy-aware task offloading framework (LPA-Offload) that protects user location privacy by using a location perturbation mechanism based on differential privacy. The scheme formulates the optimal offloading strategy based on an iterative method and then calculates the computation cost and privacy leakage. In [25], Miao et al. propose MEPA, a privacy-aware framework for MEC that uses differential privacy technology to protect location privacy in the dataset domain. A privacy-preserving computation offloading scheme based on the whale optimization algorithm is proposed in [7]. This scheme uses differential privacy technology to perturb users' locations and makes offloading decisions based on the perturbed distance. However, this scheme faces challenges in adapting to dynamic environments: despite proposing an algorithm to address the convex optimization problem of computation offloading under a given privacy budget, it fails to derive an effective location perturbation strategy. The studies mentioned above do not consider preserving privacy while optimizing for delay and energy consumption in edge computing; alternatively, some studies consider privacy preservation but fail to optimize these factors simultaneously. Furthermore, it is important to note that the aforementioned methods are designed for static scenarios and cannot effectively address optimization challenges in dynamic environments.

RL technology has been widely used in dynamic MEC systems [26-28], and one of its most important applications is to protect user privacy. The algorithm has the characteristic of adaptive learning and can automatically adjust the learning strategy according to changes in the data and the environment. It can also use distributed storage technology to store user data on multiple nodes, thereby preventing user data from being stolen or tampered with by attackers [29]. In [17], Min et al. propose a scheme that can protect both user location privacy and user pattern privacy, and study a privacy-aware offloading scheme based on RL that can reduce computation latency and energy consumption and improve the privacy level of medical IoT devices. In [29], Liu et al. propose a privacy-preserving distributed deep deterministic policy gradient (P2D3PG) algorithm that solves the problem of maximizing the distributed edge caching (EC) hit rate under privacy protection constraints in a wireless communication system with MEC. In [14], Zhang et al. studied a differential privacy and RL task transfer strategy, established an MEC system model, and designed a four-layer policy network as the RL agent, but their approach lacked a balance between privacy and computation offloading performance. To solve the above problems, our work proposes an RL-based algorithm that achieves a balance between privacy protection and computation offloading performance by combining differential privacy and RL technology.

System Model and Problem Formulation
In this section, we present the system model for computation offloading in MEC and the location perturbation model, followed by the formulation of the design objective function.
System Model

We assume that the MEC system consists of an MEC server and a user with a mobile device, similar to the work in [17,18]. The user's offloading strategy takes into account the distance between the user and the MEC server, as well as the wireless channel conditions. In scenarios where the distance is short and the channel quality is good, users are more likely to prefer a high offloading rate, offloading more tasks to the edge server to improve performance. Conversely, in scenarios where the distance is long or the channel quality is poor, users may prefer a lower offloading rate, executing more tasks locally to reduce computational costs. In order to protect the user's location privacy, we perturb the user's real location into a fake location and perform task offloading through the fake location. The user's location perturbation and the offloading process between the user and the MEC server are shown in Figure 1.

Offloading Model

At each time slot k, we assume that the user needs to execute a total of v^(k) (in bits) of computational tasks. The offloading ratio, denoted as χ^(k) ∈ [0, 1], represents the proportion of the total tasks that the user chooses to offload to the MEC server. Accordingly, the user has two offloading strategies: χ^(k) v^(k) bits of computation tasks are offloaded to the MEC server, while the remaining (1 − χ^(k)) v^(k) bits are processed locally [30].

Local computing model. Let f_l and ϕ denote the CPU frequency and the number of CPU cycles required for the mobile device to process one bit of the task, respectively. Accordingly, the energy consumption E_l^(k) [30] and computational delay T_l^(k) [31] for local computation on a mobile device are given by

E_l^(k) = κ ϕ (1 − χ^(k)) v^(k) f_l^2,    (1)

T_l^(k) = ϕ (1 − χ^(k)) v^(k) / f_l,    (2)

where κ represents the effective energy coefficient associated with the chip architecture.
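As a quick illustration of the local computing model, the following is a minimal Python sketch (Python matching the paper's simulation environment) that evaluates the local delay and energy from the quantities defined above. The parameter values are illustrative assumptions rather than the paper's settings, and the closed forms are the standard ones consistent with the definitions of f_l, ϕ, and κ.

```python
# Minimal sketch of the local computing cost model (illustrative values only).
def local_cost(v_bits, chi, f_l, phi, kappa):
    """Delay and energy for the (1 - chi) * v bits executed locally."""
    cycles = phi * (1.0 - chi) * v_bits   # total CPU cycles required locally
    t_local = cycles / f_l                # computational delay T_l^(k)
    e_local = kappa * cycles * f_l ** 2   # energy consumption E_l^(k)
    return t_local, e_local

# Assumed example: a 100 kbit task, half offloaded, 1 GHz device CPU,
# 1000 cycles per bit, and kappa = 1e-27 (a typical chip coefficient).
t_l, e_l = local_cost(v_bits=1e5, chi=0.5, f_l=1e9, phi=1e3, kappa=1e-27)
print(f"T_l = {t_l:.4f} s, E_l = {e_l:.4f} J")
```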
Edge computing model. When the channel state is good, the mobile device can send tasks through the channel and offload them to the edge server, which can reduce energy consumption and latency. There are three main processes in task offloading to the MEC server: user task upload, MEC server computation, and return of the computation results. Because the returned results are usually much smaller than the input data, we ignore the communication delay of returning the computed results in this paper. The data transmission rate R^(k) for offloading tasks from the user to the MEC server can be established as

R^(k) = B_0 log2(1 + p h^(k) / N_0),    (3)

where B_0 denotes the communication bandwidth, h^(k) denotes the channel gain between the user and the MEC server, N_0 is the noise power, and p is the transmission power when the user offloads the task. Here h^(k) = (d^(k))^(−ϑ), where ϑ is the path loss index and d^(k) is the distance from the user to the server. According to Equation (3), the distance between the user and the server affects the state of the wireless communication channel: the greater the distance, the worse the channel state, so R^(k) is directly tied to the distance between the user and the edge server.

The computational latency of the edge server consists of the data transfer time and the execution time of the task on the edge server. Considering that the output data size is usually much smaller than the input data, we can ignore the time overhead of transferring data from the edge server back to the mobile user. Therefore, the computational delay T_s^(k) and energy consumption E_s^(k) for offloading are given by

T_s^(k) = χ^(k) v^(k) / R^(k) + ϕ χ^(k) v^(k) / f_s,    E_s^(k) = p χ^(k) v^(k) / R^(k),

where f_s denotes the CPU-cycle frequency of the MEC server. To sum up, at time slot k, the total execution delay T^(k) and energy consumption E^(k) are given by

T^(k) = T_l^(k) + T_s^(k),    E^(k) = E_l^(k) + E_s^(k).
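A companion sketch for the edge side combines the distance-dependent rate of Equation (3) with the upload and execution delays above. Again, all numeric values (bandwidth, powers, distances) are placeholder assumptions chosen only to make the snippet runnable.

```python
import math

def transmission_rate(B0, p, d, N0, theta):
    """Uplink rate R = B0 * log2(1 + p * h / N0) with path loss h = d**(-theta)."""
    h = d ** (-theta)
    return B0 * math.log2(1.0 + p * h / N0)

def edge_cost(v_bits, chi, rate, f_s, phi, p):
    """Delay and transmission energy for the chi * v bits offloaded to the server."""
    off_bits = chi * v_bits
    t_up = off_bits / rate           # upload delay
    t_exec = phi * off_bits / f_s    # execution delay on the MEC server
    return t_up + t_exec, p * t_up   # (T_s, E_s)

R = transmission_rate(B0=1e6, p=0.5, d=200.0, N0=1e-9, theta=0.2)
t_s, e_s = edge_cost(v_bits=1e5, chi=0.5, rate=R, f_s=1e10, phi=1e3, p=0.5)
print(f"R = {R / 1e6:.2f} Mbit/s, T_s = {t_s:.4f} s, E_s = {e_s:.6f} J")
```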
Location Perturbation Model

Due to the limited coverage range of the edge servers within a local area, applying traditional differential privacy techniques to the MEC servers for task offloading faces challenges [18]. Therefore, a novel distance perturbation probability density function is used to protect the user's location privacy. Users can use this function to perturb the distance between themselves and the edge server, ensuring the security of their location information and preventing any potential leakage. Assume that the maximum coverage radius of the edge server is d_max, the upper bound of the user perturbation distance range is d_1, and the lower bound is d_2. The probability density function P(d*|d) for perturbing the true distance d to a fake distance d* is constructed over this range. We apply differential privacy techniques to introduce randomness into neighboring datasets, thereby making it challenging for attackers to deduce the user's actual location from the fake positions. The process of utilizing the perturbation probability function is depicted in Figure 2.

Kullback-Leibler divergence (KL divergence) [32], also referred to as relative entropy, is used to quantify the degree of fit between the probability distribution of the privacy-preserving mechanism P(d*|d) and the mechanism without privacy protection Q(d*|d), where d is the true distance and d* is the perturbed distance between the user and the edge server. The KL divergence is defined as

K_LD(P||Q) = ∫ P(d*|d) log( P(d*|d) / Q(d*|d) ) dd*.

According to this definition, a smaller value of K_LD(P||Q) indicates a higher degree of fit between P(d*|d) and Q(d*|d), resulting in a higher probability of user location information leakage. Conversely, a larger value of K_LD(P||Q) indicates a poorer fit between P(d*|d) and Q(d*|d), resulting in a lower probability of user location information leakage. Therefore, the degree of privacy leakage PL_{d1,d2} can be defined as a decreasing function of K_LD(P||Q), where Q(d*|d) denotes the probability distribution of offloading the task at the true distance between the user and the server, and P(d*|d) denotes the probability distribution of perturbing the true distance to a fake distance by adding a differential privacy mechanism.

The Proof of Differential Privacy Guarantee

Given the true distance d between the user and the server and a neighboring distance d′, the probability of d being perturbed to d* is Pr(d*|d), and the probability of d′ being perturbed to d* is Pr(d*|d′). The ratio of Pr(d*|d) to Pr(d*|d′) satisfies the definition of ε-differential privacy, i.e., it is bounded by e^ε for every output d*.

Problem Formulation

The system's total cost to process the user terminal device's task is C^(k). In the context of mobile edge computing, energy consumption and latency are the two most commonly used metrics to measure the performance of offloading schemes. Considering the computational cost required during task offloading, the total computational cost consists of computational latency and energy consumption, which is expressed as

C^(k) = λ E^(k) + (1 − λ) T^(k),

where λ ∈ (0, 1) is the weight balancing energy consumption and computational latency in task offloading. Users can automatically select a suitable offloading strategy based on factors such as the current network congestion level and the distance between the user and the server, and develop an offloading scheme targeting the minimum computational cost based on the state of the wireless channel. The design objective in this paper is to minimize the weighted sum of the computational cost C^(k) and the level of privacy leakage PL^(k) (i.e., to maximize the utility U^(k)), which is given by

max U^(k) = −( ω PL^(k) + (1 − ω) C^(k) ),

where ω ∈ (0, 1) is an influencing factor reflecting the user's concern about the level of privacy leakage. The larger ω is, the more concerned the user is about the degree of privacy leakage. We adjust ω to weight the privacy and computational costs.
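To make the privacy-cost trade-off concrete, the sketch below discretizes the distance domain and computes the KL divergence between a perturbed mechanism P and a no-privacy mechanism Q. The exact functional form mapping K_LD(P||Q) to PL is not specified above, so the exp(-KL) mapping used here is only an assumed decreasing function, and the Gaussian-shaped P and Q are stand-ins for the paper's perturbation densities.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """K_LD(P||Q) over a discretized grid of perturbed distances d*."""
    p = np.asarray(p) + eps
    q = np.asarray(q) + eps
    return float(np.sum(p * np.log(p / q)))

def utility(cost, leakage, omega):
    """U = -(omega * PL + (1 - omega) * C); a larger omega stresses privacy."""
    return -(omega * leakage + (1.0 - omega) * cost)

grid = np.linspace(0.0, 500.0, 501)              # candidate d* values in meters
q = np.exp(-0.5 * ((grid - 200.0) / 10.0) ** 2)  # no-privacy mechanism Q
p = np.exp(-0.5 * ((grid - 200.0) / 60.0) ** 2)  # perturbed mechanism P
p, q = p / p.sum(), q / q.sum()

kl = kl_divergence(p, q)
leakage = float(np.exp(-kl))  # assumed decreasing mapping from KL to PL
print(f"KL = {kl:.3f}, PL = {leakage:.3f}, U = {utility(1.0, leakage, 0.5):.3f}")
```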
Safe DQN-Based Location-Privacy-Preserved Task Offloading

It is typically difficult to employ traditional optimization techniques to obtain the optimal location-privacy-preserved task offloading policy in a dynamic MEC system. In this section, we show how to utilize a safe DRL method to protect the user's location privacy while ensuring the performance of MEC. In detail, we first model the privacy-preserving location perturbation problem as an MDP [33]. Then, we propose an SDLPTO scheme in which risk assessment is performed on state-action pairs to avoid the selection of high-risk perturbation policies, as shown in Figure 3.

The system's next state is related only to the state and the selected policy of the current time slot. Hence, the MDP model can describe the location-privacy-preserved task offloading process, and we can use RL technology to dynamically explore the optimal location-privacy-preserved task offloading policy [34]. We define the state, action, reward, and risk-level function of the SDLPTO scheme, which can be represented by a tuple (s, a, U, l).

• State: s^(k) = [v^(k), R^(k)] is the system state at time slot k, s^(k) ∈ S, where S is the state set. Before optimizing the performance of the edge computing system and the degree of privacy leakage, we set the number of tasks v^(k) generated by the user device and the wireless channel condition R^(k) between the user and the edge server as the environment state.

• Action: a^(k) = [χ^(k), ε^(k)] is the action at time slot k, a^(k) ∈ A, where A is the action set. We use the task offloading ratio χ^(k) and the privacy budget ε^(k) as actions, which affect the computational offloading decision, the privacy leakage situation, and the perturbed location d*, respectively.

• Reward: The reward is the utility U^(k) defined in the problem formulation, which jointly evaluates the energy consumption, latency, and privacy leakage level.

• Risk level: The risk level of the current state-action pair, l(s, a), is evaluated by the user based on the privacy leakage level p(s^(k), a^(k)), which is computed from Equation (10). It represents the extent of privacy leakage caused by the perturbation policy a^(k) in state s^(k). We assume that there are L risk levels, with the highest risk level L_h representing the most dangerous behavior state; conversely, zero represents the lowest risk level. We define {ξ_d}, 0 ≤ d ≤ L_h, as the safety performance indicators with L risk thresholds. Consequently, following prior work, the risk level l(s^(k), a^(k)) can be evaluated by thresholding the leakage level,

l(s^(k), a^(k)) = Σ_{d=0}^{L_h} d · I{ ξ_d ≤ p(s^(k), a^(k)) < ξ_{d+1} },

where I{·} is an indicator function.

Although the user evaluates the current state-action pair's risk level l(s^(k), a), a location-privacy-preserved task offloading policy may still result in severe privacy leakage over time. Therefore, the user also estimates the long-term risk level E(s^(k), a) of the previous location-privacy-preserved task offloading policies, in order to assess their impact on the future, by tracing back the prior B experienced state-action pairs:

E(s^(k), a^(k)) = Σ_{b=0}^{B−1} γ^b l(s^(k−b), a^(k−b)),

where γ is the decay factor. The long-term expected reward (Q-value) of the user that adopts the perturbation policy a^(k) at state s^(k) is updated as follows:

Q(s^(k), a^(k)) ← (1 − α) Q(s^(k), a^(k)) + α ( U^(k) + δ max_{a′} Q(s^(k+1), a′) ),

where α ∈ [0, 1] is the learning rate and δ ∈ (0, 1] is the discount factor. The user takes account of both the Q-value and the E-value while selecting the location-privacy-preserved task offloading policy. The policy function π(s^(k), a), given in Equation (17), is the probability distribution over the offloading policy and location perturbation policy a^(k) in the current state s^(k): it favors actions with high Q-values while screening out actions whose long-term risk E(s^(k), a) is high.

At time slot k, based on the current state s^(k), the user selects the location-privacy-preserved task offloading policy π(s^(k), a) according to Equation (17). Then, the user executes the action a^(k) = [χ^(k), ε^(k)] and obtains the system reward U^(k) after evaluating the energy consumption, latency, and privacy leakage level. The system state then transfers to the next state s^(k+1).

The experience replay technique is an important part of the DQN algorithm. The transitions Υ^(k) = (s^(k), a^(k), U^(k), s^(k+1), l(s^(k), a^(k))) are stored in a replay pool B, and experiences are then randomly sampled from B to train on a small batch M.
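The tabular sketch below captures the core of this safe selection logic: a Q-table for expected utility, an E-table for the γ-decayed long-term risk, and a selection rule that masks actions whose risk estimate exceeds a safety threshold. The class, the threshold value, and the epsilon-greedy exploration are illustrative simplifications of the network-based scheme, not its implementation.

```python
import numpy as np

class SafeQAgent:
    """Tabular sketch: Q-values for utility, E-values for long-term risk."""
    def __init__(self, n_states, n_actions, alpha=0.1, delta=0.9,
                 gamma=0.8, risk_threshold=2.0, explore=0.1):
        self.Q = np.zeros((n_states, n_actions))
        self.E = np.zeros((n_states, n_actions))  # decayed risk estimates
        self.alpha, self.delta = alpha, delta     # learning rate, discount
        self.gamma, self.thr, self.explore = gamma, risk_threshold, explore

    def select(self, s, rng):
        safe = np.flatnonzero(self.E[s] <= self.thr)    # mask high-risk actions
        if safe.size == 0:
            safe = np.array([int(self.E[s].argmin())])  # fall back: least risky
        if rng.random() < self.explore:
            return int(rng.choice(safe))
        return int(safe[self.Q[s, safe].argmax()])

    def update(self, s, a, reward, s_next, risk):
        target = reward + self.delta * self.Q[s_next].max()
        self.Q[s, a] = (1 - self.alpha) * self.Q[s, a] + self.alpha * target
        # Decayed risk trace, mirroring the gamma-discounted backtracking above.
        self.E[s, a] = risk + self.gamma * self.E[s, a]
```

For example, agent = SafeQAgent(n_states=10, n_actions=6) with a = agent.select(s, np.random.default_rng()) keeps both exploration and exploitation inside the safe action set. In the full SDLPTO scheme, the two tables are replaced by the Q-network and E-network operating on the experience sequence φ^(k).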
The system state s^(k) is extended to the location-privacy-preserved task offloading experience sequence denoted by φ^(k), consisting of the state s^(k) and the previous H state-action pairs, i.e., φ^(k) = [s^(k−H), a^(k−H), ..., s^(k−1), a^(k−1), s^(k)]. The experience sequence φ^(k) is input to the E-network and the Q-network to estimate E(s^(k), a) and Q(s^(k), a), respectively. Then, the policy a^(k) is selected based on Equation (17).

The current state-action pair is fed into the E-network to obtain the network's estimate of the E-value. Then, the target E-value is calculated. The difference between the estimated E-value and the target E-value is computed, and this difference is used to update the weights θ_E of the E-network. The loss function of the E-value, L_E^(k) (Equation (18)), is defined as the mean square error between the estimated E-value and the target E-value.

During training, we use a stochastic gradient descent algorithm to update the weights of the convolutional neural network (CNN). The CNN evaluates the strategy as a Q-value so that the agent can choose the optimal action based on the current state. By minimizing the mean square error between the estimated network's output Q-value and the optimal target Q-value,

L_Q^(k) = E[ ( U^(k) + δ max_{a′} Q(s^(k+1), a′) − Q(s^(k), a^(k); θ_Q) )^2 ],    (19)

the agent updates the Q-network weights θ_Q and improves its performance in the environment. The process is repeated H times to update θ_Q^(k) and θ_E^(k). We also adopt the transfer learning technique to initialize the weights of the two deep CNNs to improve training efficiency and to avoid random exploration at the beginning of learning. In the traditional DQN algorithm, the usual approach involves calculating the target Q-value and selecting the action with the highest Q-value as the current policy choice. In the SDLPTO algorithm, however, a risk assessment method is employed during action selection to avoid choosing high-risk actions. The detailed safe DQN-based location-privacy-preserved task offloading procedure is described in Algorithm 1.

Algorithm 1 Safe DQN-based LPTO (SDLPTO)
1: Initialize the real distance, ω, γ, α, θ_Q, and θ_E according to transfer learning
2: for k = 1, 2, 3, ... do
3: Observe the system state s^(k)
4: Input φ^(k) = [s^(k−H), a^(k−H), ..., s^(k−1), a^(k−1), s^(k)] to the Q-network and E-network to estimate the Q-values and E-values
5: Select a^(k) = [χ^(k), ε^(k)] based on the offloading policy and location perturbation policy obtained from the networks
6: Execute the action a^(k) and observe the next system state s^(k+1)
7: Store the transition Υ^(k) in the replay pool B
8: Calculate the average cost C^(k) and privacy leakage PL^(k) to obtain the utility U^(k)
9: Update the weights of the CNNs for θ_Q^(k) and θ_E^(k) by applying minibatch updates via (18) and (19)
10: end for
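Algorithm 1 can also be mirrored as a compact, self-contained loop. The environment below is a toy stand-in whose cost and leakage responses are synthetic, so the snippet shows only the observe-select-execute-update control flow, not the paper's Equations (10)-(19).

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 10, 6          # discretized (v, R) states; (chi, eps) actions
alpha, delta, gamma_r, omega, thr = 0.1, 0.9, 0.8, 0.5, 1.5
Q = np.zeros((n_states, n_actions))
E = np.zeros((n_states, n_actions))

def step(s, a):
    """Toy environment: returns (cost, leakage, next_state)."""
    chi = a / (n_actions - 1)                    # interpret action as offload ratio
    cost = 1.0 - 0.5 * chi + 0.1 * rng.random()  # offloading lowers cost here
    leakage = 0.4 + 0.6 * chi * rng.random()     # but may expose the location
    return cost, leakage, int(rng.integers(n_states))

s = int(rng.integers(n_states))
for k in range(4000):                            # 4000 time slots, as in the paper
    safe = np.flatnonzero(E[s] <= thr)           # risk assessment before selection
    if safe.size == 0:
        safe = np.array([int(E[s].argmin())])
    a = int(rng.choice(safe)) if rng.random() < 0.1 else int(safe[Q[s, safe].argmax()])
    cost, leak, s_next = step(s, a)
    U = -(omega * leak + (1 - omega) * cost)     # reward = utility
    Q[s, a] = (1 - alpha) * Q[s, a] + alpha * (U + delta * Q[s_next].max())
    E[s, a] = leak + gamma_r * E[s, a]           # decayed long-term risk trace
    s = s_next
print("greedy action per state:", Q.argmax(axis=1))
```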
Simulation Setup and Results

In this section, we evaluate the performance advantage of our proposed scheme through simulation experiments. In the context of task offloading in edge computing, we assume that there is one user and one edge server. The coverage range of the MEC server is 500 m [7]. The mobile user is randomly distributed within this area, and the path loss exponent is assumed to be ϑ = 0.2. All experiments are implemented in Python 3.8 on the same machine, with 16 GB RAM and an Intel(R) Core(TM) i5-12500 processor.

The learning rate of the agent is set to 0.004, the discount factor is set to 0.99, and we train the agent for 4000 time slots. Each time slot has a duration of 1 s, similar to [35]. We determined these parameters by conducting multiple experiments, fine-tuning them to achieve the best simulation performance. The rest of the parameter settings for the experiments are shown in Table 1. Adjusting the system environment parameters might affect the numerical results, but it does not alter the overall trends and advantages of our approach.

When the privacy parameter ω set by the user is larger, the level of privacy leakage is lower, and the user tends to use a larger privacy parameter to protect location privacy. On the other hand, however, the distance between the user and the server may increase, which means that the average cost incurred by the user will be higher. As shown in Figure 4a, the computational cost increases with the parameter ω. When the value of ω is high, a larger perturbation range is required to perturb the user's location to protect location privacy. Because the perturbed distance may deviate from the actual distance between the user and the server, and might be greater than the real distance, more computation is needed when offloading tasks, thus increasing the computational cost. As the parameter ω increases, the level of privacy leakage of the user's location decreases. The parameter ω balances the trade-off between the user's privacy leakage level and the computational cost, reflecting the user's concern about location privacy protection. A higher ω value indicates that the user pays more attention to location privacy protection and requires a larger disturbance range to perturb the user's location. When the disturbance area becomes larger, the distance difference between the perturbed pseudo-location and the actual location may increase, making it difficult for attackers to infer the user's actual location and thus protecting the user's location privacy.

Figure 4b shows that as the parameter d increases, the computational cost also increases. As the distance gradually increases, the range of the perturbation region becomes larger, and the probability of the user's perturbed location being further from the true location grows, protecting location privacy. When the distance between the user and the server after perturbation becomes greater, the state of the wireless channel may worsen, the user may perform more tasks locally, and the offloading strategy may not be optimal, thus increasing the computational cost. Furthermore, the figure shows that as the parameter d increases, the level of location privacy leakage of the user decreases. As the distance gradually increases, the range of the perturbation region becomes larger: the attacker needs to search within a broader area to determine the user's real location, which greatly increases the difficulty, and lowers the probability, of the attacker finding the user's real location. In this way, the user's location privacy is protected.
Figures 5 and 6 illustrate the performance of the proposed mechanism versus time. As shown, our proposed SDLPTO mechanism outperforms the No DP and DPRL mechanisms: at time slot 2000, SDLPTO reduces the privacy leakage level by 18.2% and 11.2%, reduces the computational cost by 33.1% and 35.2%, and improves the utility by 33% and 27.2%, respectively. This is because No DP does not consider location privacy and DPRL implements location privacy protection and offloading optimization separately, whereas our proposed method jointly optimizes user privacy and computational offloading cost, effectively improving the overall user benefit. Moreover, compared with LPTO, SDLPTO reduces the privacy leakage level and computational cost by 7.7% and 26.7%, respectively, and improves the utility by 9.1%. This is because safe exploration avoids selecting operations with higher risk levels, thus reducing both privacy leakage and computational cost.

Figure 7a,b illustrate the relationship between average computational cost, privacy leakage, and task size for the four mechanisms. As the task volume increases from 50 Kbit to 300 Kbit [35], SDLPTO exhibits a 6.0% increase in privacy leakage level and a 5.2-times increase in computation cost. This indicates that as the task scale grows, more computational resources are required to execute these tasks, leading to a significant rise in computation cost. Moreover, as the task volume increases, users tend to offload more tasks to edge servers, which entails greater collection and processing of location information data, potentially increasing the risk of location privacy leakage.

Figure 1. Illustration of location-privacy-preserved task offloading in MEC.

Figure 2. Location privacy protection process based on differential privacy.

Figure 4. The relationship between ω, d, and the cost and privacy leakage. (a) Evaluation of parameter ω. (b) Evaluation of parameter d.

Figure 5. Utility of the four schemes versus time.
2023-12-28T16:09:18.257Z
2023-12-25T00:00:00.000
{ "year": 2023, "sha1": "f3c2fd3c028cff6cdebe7c969c40cf60a7de750c", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-9292/13/1/89/pdf?version=1703476606", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "851d214b70d8399885af396dc521b784943528db", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [] }
260114049
pes2o/s2orc
v3-fos-license
Crystal Structure of de Novo Designed Coiled-Coil Protein Origami Triangle

Coiled-coil protein origami (CCPO) uses modular coiled-coil building blocks and topological principles to design polyhedral structures distinct from those of natural globular proteins. While the CCPO strategy has proven successful in designing diverse protein topologies, no high-resolution structural information has been available about these novel protein folds. Here we report the crystal structure of a single-chain CCPO in the shape of a triangle. While neither cyclization nor the addition of nanobodies enabled crystallization, it was ultimately facilitated by the inclusion of a GCN2 homodimer. Triangle edges are formed by the orthogonal parallel coiled-coil dimers P1:P2, P3:P4, and GCN2 connected by short linkers. The triangle has a large central cavity and is additionally stabilized by side-chain interactions between neighboring segments at each vertex. The crystal lattice is densely packed and stabilized by a large number of contacts between triangles. Interestingly, the polypeptide chain folds into a trefoil-type protein knot topology, and AlphaFold2 fails to predict the correct fold. The structure validates the modular CC-based protein design strategy, providing molecular insight underlying CCPO stabilization and new opportunities for the design.

Recent developments in protein design combined with machine learning enable the design of de novo globular proteins. 1−4 Nevertheless, extensive experimental validation is still required to identify the sequences with desired structure and function. 5 An alternative strategy to protein scaffold design is to use modular building blocks with a well-understood sequence−structure relationship. 5−9 In a similar manner, DNA nanotechnology takes advantage of the modular base-pairing in the DNA duplex to design DNA nanostructures. 10,11 The principle of modular pairing also applies to some protein motifs such as coiled-coils. 12−14 Coiled-coils (CCs) associate according to well-defined pairing rules encoded in the heptad repeat pattern of abcdefg positions. 13,15 CC dimers pair with high specificity through a combination of hydrophobic and electrostatic interactions at the heptad positions a/d and e/g, respectively (Figure 1a). 16 In the coiled-coil protein origami (CCPO) design strategy, CCs are used as modular building blocks to design protein nanostructures. 17 The desired shape is defined through the topological arrangement of parallel and/or antiparallel CC dimers arranged into a precisely defined sequential order, based on the underlying mathematical rules. 18,19 Protein folds such as the tetrahedron and bipyramid, as well as multichain assemblies, have been assembled using CCPO, 19−22 and even the folding pathway of those assemblies has been designed. 23 Although CCPO has proven to be a robust strategy for the design of various protein topologies and their shape has been confirmed by electron microscopy and small-angle X-ray scattering (SAXS), no high-resolution structural information has been available for these structures. The main difficulty concerns the high flexibility and small size of CCPO structures, which makes them challenging to study using high-resolution methods such as cryoelectron microscopy and X-ray crystallography. To address this issue, we sought to determine the crystal structure of the most elementary CCPO, the triangle.

Figure 1. Design of triangular CCPO using coiled-coil building blocks.
(a) Helical wheel representation of a parallel coiled-coil with hydrophobic interactions between a/d residues and electrostatic interactions between e/g pairs. (b) To design a triangular origami, three parallel coiled-coil pairs are selected and arranged sequentially in the polypeptide chain.

We designed a triangular protein using three orthogonal CC heterodimers concatenated in a single polypeptide chain. The triangular topology can only be achieved using parallel CC dimers, since the polypeptide chain has to traverse each triangle edge in the same direction (Figure 1b). For the initial design, we used charged CC variants (abbreviated SN) P1:P2, P3:P4, and P5:P6 19 connected with a 5-residue linker (GSGPG) (Table S1). TRI-6SN had CD spectra with high helical content (Figure S1); however, no crystals could be obtained under the studied conditions. As TRI-6SN likely resists crystallization due to flexibility, we designed three variants with shorter linkers of 1−3 residues (G, GS, GSG). Variants with 2- and 3-residue linkers showed a high helix content and unfolded cooperatively (Figure S1). However, no crystals were obtained. We speculated that the termini may be responsible for the high flexibility. Therefore, we designed a cyclized variant, TRI-cySN, in which the termini were covalently linked using a trans-splicing reaction based on orthogonal split-inteins (Figure S2). 24 While cyclization significantly increased protein thermal stability, it still did not lead to crystallization. Finally, we resorted to the ultimate strategy for the crystallization of difficult proteins and used nanobodies as crystallization chaperones. We designed the TRI-SHb variant using stabilized and helical peptides (abbreviated SHb) of P1:P2, P3:P4, and P5:P6. Since no specific nanobodies are available to bind these CCs, we applied an epitope transplantation strategy. 25 By substituting several solvent-exposed residues, we mimicked the helical epitope of the IB3 intrabody, which binds a helical segment of the huntingtin peptide. 26 In this way, we successfully introduced the IB3 binding site into P1:P2 or both P1:P2 and P5:P6 pairs (Figure S3). However, even the triangle-nanobody complexes failed to yield any crystals.

Previously, we characterized a set of specific nanobodies that recognize different CCs of the designed tetrahedron. 27 During crystallographic experiments, we observed that nanobody complexes with P5:P6 and P7:P8 CC heterodimers were difficult to crystallize compared to the homodimers, like APH2, GCN2, and BCR2. Based on this experience, we tested whether substitution of one CC heterodimer with a parallel homodimer would improve crystallization. We designed the variant TRI-4SHbGCN, in which P5:P6 is replaced by a GCN2 homodimer and the segments are connected with GSG linkers (Figures 2a, S4). The purified TRI-4SHbGCN (Figure S5) is monodisperse in solution with a molecular weight of 24.5 ± 0.2 kDa, in agreement with the theoretical value, and a hydrodynamic radius of 5.5 ± 1.2 nm (Figure 2b,c). CD analysis shows around 90% helix content, exceptional thermal stability with a melting temperature > 85 °C, and protein refolding upon cooling (Figure 2d,e). Importantly, the TRI-4SHbGCN variant crystallized in a range of different conditions. One crystal form belonged to space group P1, diffracted to 2.05 Å, and the structure was solved using molecular replacement (PDB: 8P4Y, Table S2). The asymmetric unit contains one TRI-4SHbGCN molecule with a triangular fold, as designed (Figure 3a).
The triangle is nonequilateral, with a shorter GCN2 side (34 Å) and two longer (47 Å) P1:P2 and P3:P4 sides. There is an internal cavity of about 600 Å². The electron density is continuous for the entire chain except for the linker sequences, where only two linkers have clear density and could be modeled in the structure. These two linkers are attached to the P4 segment, suggesting that this is a more rigid part of the structure, as also reflected in the lower average B-factor for P4 (Figure S6). Interestingly, in the linker connecting P4 to P2, both Ser100 and Gly101 become part of the P2 helix, and the Ser100 side chain hydrogen bonds to Trp95 on the preceding P4 segment (Figure 3b). Thus, part of the GSG linker is integrated into the helix, leaving only Gly99 as a flexible linker residue. Our final crystallographic model is consistent with the SAXS profile of TRI-4SHbGCN in solution (χ² = 1.34) and fits well into the ab initio protein envelope calculated from SAXS data (Figure 3c). Individual CC dimers are well resolved in the structure and show the expected packing interactions between a/d residues (Figure 3d).

The CCPO strategy relies on the modularity and orthogonality of CC dimers. It is therefore relevant to examine whether the incorporation of CCs into larger assemblies affects their structure. Superposition of the GCN2 dimer as observed in CCPO with the isolated GCN2 shows no significant changes in terms of Cα RMSD and Crick's parameters (Figure S7, Table S3). The structures of the P1:P2 and P3:P4 dimers have not been determined before, as the only available structure of CCs from this design set is that of the P5:P6-nanobody complex. 27 Pairing Asn residues at position a has a stabilizing effect and contributes to peptide orthogonality. Within the structure, we observe the formation of a hydrogen bond network involving the backbone on one side and the adjacent Glu and Lys residues on the other side (Figure S8). Superpositions of these dimers likewise reveal no significant deviations (Table S3). Therefore, the structures of CCs incorporated into the triangular fold remain essentially identical to the structures in isolation.

The crystal lattice is assembled by dense stacking of triangles on top of one another along the a and b unit cell axes and through an end-to-end arrangement along the c axis (Figure 4a). Despite the presence of a cavity in the triangle center, the solvent content of the crystals is 43.5%, below the average for this point group. 28 Each molecule interacts with 10 symmetry-related molecules via 5 unique interfaces, all of which are heterologous (Figure S9). Crystal packing buries 3600 Å² of solvent-exposed surface area per molecule, which represents about 30% of the molecular surface. While typical crystal contacts are formed via a subset of residues, we observe that a considerable number of the residues are involved in crystal contacts, mostly hydrogen bonds and salt bridges. As expected, the majority of hydrogen bond crystal contacts are formed by the residues at the exposed positions b/c and f (Figure 4b); however, an almost equal number of hydrogen bond contacts is established by residues at the e/g positions. Generally, e/g positions provide electrostatic complementarity between CC dimers, but here, electrostatic interactions between two e/g positions also promote crystal contacts (Figure 4c). An unexpected feature of the structure is the interactions between the CC segments at the vertices.
The contact map of TRI-4SHbGCN shows the designed interactions between orthogonal CC segments parallel to the main diagonal, while the interactions between CC pairs appear cross-diagonally (Figure 5a). For example, at vertex 1 (P1:P2/GCN2), Arg23 at position c on the P1 segment stacks against Tyr150 from the second GCN segment. At the neighboring position f on P1, Arg26 hydrogen bonds to Asn149 from GCN, while Trp30 packs on top of the Leu-Leu pair of the GCN hydrophobic core (Figure 5b). At vertex 2 (GCN2/P3:P4), there is a hydrogen bond network between Arg158 at position c of the second GCN segment and two glutamate side chains (Glu69 and Glu73) on P4 (Figure 5b). At vertex 3 (P3:P4/P1:P2), Arg91 from P4 forms electrostatic interactions with Glu11 from P1, and Trp95 of P4 hydrogen bonds to Ser100 from the GSG linker (Figures 3d, 5b). Although the interactions between CC dimers were not intentionally designed, they likely form due to the acute angles at the triangle vertices, which bring the side chains from the CC segments into proximity.

A closer inspection of the TRI-4SHbGCN topology revealed that the chain forms a relatively shallow protein knot, known as the trefoil-type knot. 29,30 From the top view, the helix segments in each dimer are approximately parallel and alternate by packing on either the inner or outer side of the triangle (Figure S10). From the side view, the helical axis in each dimer crosses the plane of the triangle (Figure S10), so the first three helix segments are arranged in a triangle that intertwines with the triangle formed by the last three segments. Interestingly, AlphaFold2 31,32 is unable to predict the fold of TRI-4SHbGCN (and other CCPO structures, Figure S11), most likely due to the complex folding topology and the absence of this type of fold in structural databases. CoCoPOD 19 was used to generate an ensemble of TRI-4SHbGCN models. The knot is not present in these models, and the agreement with SAXS data is systematically worse compared to the crystallographic model (Figure S12). However, due to the SAXS resolution limit, we cannot conclusively resolve whether the knot is also present in solution.

The presented high-resolution structure not only validates the designed CCPO topology but also reveals previously unobserved structural features, such as stabilizing interactions between CC segments at vertices and integration of linkers into the CC helix, while also confirming that the structure of CC dimers is unperturbed in the context of protein origami. TRI-4SHbGCN forms, to our knowledge, the smallest knot in a designed protein, occurring due to a supercoil of CCs, similar to designed knots in DNA nanostructures. 33
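As a side note on the superposition analyses above (Figure S7, Table S3), the Cα RMSD between two conformations is conventionally computed after optimal superposition with the Kabsch algorithm; the following minimal numpy sketch illustrates this, with randomly generated coordinates standing in for real Cα traces.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """Calpha RMSD between two (N, 3) coordinate sets after optimal superposition."""
    P = P - P.mean(axis=0)                    # center both structures
    Q = Q - Q.mean(axis=0)
    V, S, Wt = np.linalg.svd(P.T @ Q)         # SVD of the covariance matrix
    d = np.sign(np.linalg.det(V @ Wt))        # guard against improper rotations
    R = V @ np.diag([1.0, 1.0, d]) @ Wt       # optimal rotation (Kabsch)
    diff = P @ R - Q
    return float(np.sqrt((diff ** 2).sum() / len(P)))

# Stand-in data: a 30-residue trace and a rotated, lightly perturbed copy.
rng = np.random.default_rng(1)
ca = rng.normal(size=(30, 3)) * 10.0
rot = np.linalg.qr(rng.normal(size=(3, 3)))[0]
rot *= np.sign(np.linalg.det(rot))            # ensure a proper rotation
moved = ca @ rot + rng.normal(scale=0.3, size=(30, 3))
print(f"Calpha RMSD: {kabsch_rmsd(ca, moved):.2f} A")
```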
2023-07-25T06:17:27.664Z
2023-07-24T00:00:00.000
{ "year": 2023, "sha1": "58bfd361cc484f730d155c843e4a199531ec5410", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "0c9e27f1bbd7956492f4e46630d47117e9904cf9", "s2fieldsofstudy": [ "Materials Science", "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
42884507
pes2o/s2orc
v3-fos-license
Enterocyte dendritic cell-specific intercellular adhesion molecule-3-grabbing non-integrin expression in inflammatory bowel disease

AIM: To investigate dendritic cell-specific intercellular adhesion molecule-3-grabbing non-integrin (DC-SIGN) expression in intestinal epithelial cells (IECs) in inflammatory bowel disease (IBD).

METHODS: The expression of DC-SIGN in IECs was examined by immunohistochemistry of intestinal mucosal biopsies from 32 patients with IBD and 10 controls. Disease activity indices and histopathology scores were used to assess the tissue lesions and pathologic damage. Animal studies utilized BALB/c mice with dextran sodium sulfate (DSS)-induced colitis treated with anti-P-selectin lectin-EGF domain monoclonal antibody (PsL-EGFmAb). Control, untreated, and treated mice were sacrificed after 7 d, followed by isolation of colon tissue and IECs. Colonic expression of DC-SIGN, CD80, CD86, and MHC II was examined by immunohistochemistry or flow cytometry. The capacity of mouse enterocytes or dendritic cells to activate T cells was determined by co-culture with naïve CD4+ T cells. Culture supernatant and intracellular levels of interleukin (IL)-4 and interferon (IFN)-γ were measured by enzyme-linked immunosorbent assay and flow cytometry, respectively. The ability of IECs to promote T cell proliferation was detected by flow cytometry staining with carboxyfluorescein diacetate succinimidyl ester.

RESULTS: Compared with controls, DC-SIGN expression was significantly increased in IECs from patients with Crohn's disease (P < 0.01) or ulcerative colitis (P < 0.05). DC-SIGN expression was strongly correlated with disease severity in IBD (r = 0.48; P < 0.05). Similarly, in the DSS-induced colitis mouse model, IECs showed upregulated expression of DC-SIGN, CD80, CD86, and MHC, and DC-SIGN expression was positively correlated with disease activity (r = 0.62; P < 0.01). IECs from mouse colitis stimulated naïve T cells to generate IL-4 (P < 0.05). Otherwise, dendritic cells promoted a T-helper-1-skewing phenotype by stimulating IFN-γ secretion. However, DC-SIGN expression and T cell differentiation were suppressed following treatment of mice with DSS-induced colitis with PsL-EGFmAb. The proliferation cycles of CD4+ T cells from mice with DSS-induced colitis appeared as five cycles, which was more than in the control and treated groups. These results suggest that IECs can promote T cell proliferation.

CONCLUSION: IECs regulate tissue-associated immune compartments under the control of DC-SIGN in IBD.
INTRODUCTION

Inflammatory bowel disease (IBD), primarily comprising Crohn's disease and ulcerative colitis, is an idiopathic disease characterized by chronic, relapsing, nonspecific inflammatory reactions of the bowel [1,2]. The exact etiology of IBD is still unknown. Recent studies have provided substantial insight into how functional mucosal immunity is maintained and how the pathogenesis of IBD is initiated. IBD is generally attributed to inappropriate and continuing inflammatory stimulation [3-6].

Dendritic cells (DCs) play a key role in the initiation of inflammation, which is associated with the migration of DCs mediated by the adhesion molecule P-selectin. Adhesion and migration of DCs are inhibited by anti-P-selectin lectin-EGF domain monoclonal antibody (PsL-EGFmAb), which targets the carbohydrate recognition domain of P-selectin [7,8]. Our previous work demonstrated that PsL-EGFmAb had a blocking effect on DC-specific intercellular adhesion molecule-3-grabbing non-integrin (DC-SIGN) in a mouse model of nephritis and improved disease progression and outcome [7]. DC-SIGN, also designated CD209, is a member of the C-type lectin superfamily and has a carbohydrate recognition domain similar to P-selectin [9].

In this study, we investigated the expression of DC-SIGN in the intestinal tissues of patients with IBD and its significance for disease activity. To further study the mechanisms of how DC-SIGN functions in colitis, we examined its expression with PsL-EGFmAb treatment in an experimental model of dextran sodium sulfate (DSS)-induced colitis.
DSS-induced colitis mouse model

The DSS-induced colitis mouse model of IBD was described by Okayasu et al [10]. Thirty female BALB/c mice (aged 6-8 wk, 16-20 g) were purchased from the Hayes Lake Experimental Animals Co. (Shanghai, China) and randomly assigned into three groups (n = 10 each): control, DSS-treated, and PsL-EGFmAb + DSS-treated. The DSS-treated group was orally administered a 5% DSS solution for 7 d. The PsL-EGFmAb + DSS-treated group was given daily injections of 2 mg/kg PsL-EGFmAb (ip) for 3 d during the 7 d of 5% DSS administration. Control animals were orally administered a sterile saline solution. The clinical disease activity index for DSS-induced colitis was measured by weight loss, stool consistency, and bleeding [11]. All mice were sacrificed at day 7, and intestinal mucosa and spleens were quickly removed for histologic and cellular function analyses.

Immunohistochemical staining

Paraffin sections of human and mouse intestinal mucosal tissues were treated for endogenous peroxidase and nonspecific protein blocking, and incubated with 1:100 primary antibody at 4 ℃ overnight and 1:400 secondary antibody for 1 h at room temperature. Antibodies used were as follows: mouse anti-human DC-SIGN mAb (R&D Systems, Minneapolis, MN, United States) and biotinylated anti-mouse IgG (Invitrogen of Thermo Fisher Scientific Inc., Waltham, MA, United States) for human tissues, and rat anti-mouse DC-SIGN mAb (eBioscience Inc., San Diego, CA, United States) with biotinylated anti-rat IgG (Invitrogen) for mouse tissues. Finally, the sections were stained with diaminobenzidine for microscopic examination. The primary antibody was replaced with phosphate-buffered saline as a negative control, and known positive sections were used as positive controls.

Disease severity assessment of colitis

Paraffin-embedded sections (5 µm) prepared from the distal colons of experimental mice were stained with hematoxylin/eosin and examined under a Zeiss Axioplan 2 imaging microscope equipped with an AxioCam MRc5 camera (Carl Zeiss AG, Oberkochen, Germany). Histologic scoring was ranked according to the amount and depth of inflammation and the amount of crypt damage [13]. Isolated mouse splenic cells (1 × 10^5 cells/mL) were incubated with fluorescein isothiocyanate-labeled CD4 mAb and stained with allophycocyanin-labeled interferon (IFN)-γ mAb and phycoerythrin-labeled interleukin (IL)-4 mAb to evaluate the systemic inflammatory response in mice [14].

Flow cytometry

Mouse intestinal epithelial cells (IECs) were sorted by flow cytometry using anti-mouse phycoerythrin-conjugated CD326 (epithelial cell adhesion molecule) and incubated with fluorescein isothiocyanate-labeled DC-SIGN, CD80, CD86 or MHC Ⅱ mAb at a density of 5 × 10^5 cells/mL. Phenotypic analysis was performed by flow cytometry using a FACSCalibur and FACSAria cell sorter (Becton, Dickinson and Co., Franklin Lakes, NJ, United States), and data were analyzed with FCS Express version 3.

Statistical analysis

SPSS version 16.0 (SPSS Inc., Chicago, IL, United States) was used for the database analysis. Data are presented as mean ± standard deviation and were compared by nonparametric rank sum test and one-way analysis of variance. Numerical data were assessed using Fisher's exact test. The correlation between groups was analyzed using Spearman coefficients. A value of P < 0.05 was considered statistically significant.
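The statistical workflow above (nonparametric rank sum test, one-way analysis of variance, and Spearman correlation) can be reproduced outside SPSS. The sketch below uses scipy.stats with fabricated placeholder arrays purely to show the calls; the numbers are not the study's data.

```python
import numpy as np
from scipy import stats

# Stand-in arrays: disease activity scores for three illustrative groups.
control = np.array([0.5, 0.4, 0.6, 0.5, 0.7])
dss = np.array([11.0, 12.1, 10.8, 11.9, 11.2])
dss_mab = np.array([8.2, 9.1, 8.8, 8.4, 8.6])

# Nonparametric rank sum test (Mann-Whitney U) between two groups.
u_stat, p_rank = stats.mannwhitneyu(dss, dss_mab, alternative="two-sided")

# One-way analysis of variance across the three groups.
f_stat, p_anova = stats.f_oneway(control, dss, dss_mab)

# Spearman correlation, e.g., between marker expression and activity score.
expression = np.array([0.1, 0.9, 0.8, 0.2, 0.7])
activity = np.array([0.2, 1.0, 0.7, 0.3, 0.6])
rho, p_spear = stats.spearmanr(expression, activity)

print(f"rank sum P = {p_rank:.3f}, ANOVA P = {p_anova:.3g}, Spearman r = {rho:.2f}")
```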
Expression of DC-SIGN in human IECs

Expression of DC-SIGN was rarely detected in the intestinal mucosa of healthy children but was elevated in the intestinal mucosa of children with IBD, especially in the IECs and mesenchymal cells (Figure 1). DC-SIGN expression was significantly higher in children with Crohn's disease (61%; P = 0.002) and ulcerative colitis (50%; P = 0.019) compared with controls (10%). However, there was no significant difference in expression between Crohn's disease and ulcerative colitis.

Correlation of DC-SIGN expression with IBD disease activity

To determine whether increased expression of DC-SIGN was correlated with IBD progression and severity, we used the PCDAI and PUCAI to evaluate disease activity in children with IBD. The scores were significantly higher in the DC-SIGN-positive group than in the DC-SIGN-negative group (PCDAI: 25.91 ± 10.20 vs 13.93 ± 7.20; PUCAI: 32.14 ± 13.50 vs 15.71 ± 8.86; both P < 0.01), and DC-SIGN expression was strongly correlated with disease severity in IBD (r = 0.48; P < 0.05) (Figure 2).

Characterization of DSS-induced colitis model

Hematoxylin and eosin staining revealed greater neutrophil infiltration in the intestinal tissue from the DSS and DSS + PsL-EGFmAb groups compared with the control group (Figure 3A). The disease activity index score was significantly elevated in DSS-treated mice compared with the controls (11.4 ± 0.70 vs 0.5 ± 0.53, P < 0.01), and was significantly decreased by PsL-EGFmAb treatment (8.6 ± 3.60, P < 0.05) (Figure 3B). In addition, histologic examination of intestinal biopsies showed significantly higher scores in the DSS-treated group (6.6 ± 1.78, P < 0.01) compared with the controls (0.7 ± 1.06), but suppressed disease following treatment with PsL-EGFmAb (4.7 ± 1.06, P < 0.05) (Figure 3C). IL-4 and IFN-γ expression levels in mouse splenic CD4+ T cells were increased in the DSS-treated group compared with the control and DSS + PsL-EGFmAb groups (Figure 3D).

Expression of DC-SIGN, CD86, CD80 and MHC Ⅱ in mouse IECs

DC-SIGN expression was rarely detected in normal intestinal tissues but was clearly observed in the intestinal tissues of the DSS-treated and DSS + PsL-EGFmAb groups (Figure 4A). Further analysis revealed that DC-SIGN expression was significantly correlated with disease activity scores (r = 0.62; P < 0.01). Flow cytometric analysis revealed that the co-stimulatory molecules CD80 and CD86, as well as MHC Ⅱ, were markedly elevated in IECs of DSS-treated mice and downregulated with PsL-EGFmAb treatment (Figure 4B).

T cell differentiation and proliferation induced by mouse IECs

IECs are not traditional antigen-presenting cells. However, we report here that after co-culturing naïve CD4+ T cells and IECs, T cells were activated and T helper (Th) cytokines (IFN-γ and IL-4) were detected by flow cytometry and enzyme-linked immunosorbent assay. The results show that, compared with controls, IL-4 expression levels peaked in the DSS-treated group (P < 0.05) and increased in the PsL-EGFmAb + DSS-treated group (Figure 5A, B). No significant changes in IFN-γ were observed among the three groups. In addition, the IL-4/IFN-γ ratio of the co-culture supernatant was higher in the DSS-treated group but downregulated with PsL-EGFmAb treatment (P < 0.05). The proliferation cycles of CD4+ T cells in the DSS-induced colitis group appeared as five cycles, which was more than in the other groups (Figure 5C).
T cell differentiation induced by DCs

Co-culturing naïve T cells from normal BALB/c mice with DCs from mouse spleens resulted in an increased proportion of IFN-γ in the DSS-treated group compared with the PsL-EGFmAb-treated and control groups (Figure 6).

DISCUSSION

IBD is a chronic intestinal disorder of unknown etiology and pathogenesis. However, it is generally believed that an uncontrolled intestinal immune response facilitates the onset and development of IBD [6,16,17]. Murine models of IBD have demonstrated that the imbalance in Th1/Th2 cells plays a pivotal role in determining the type of immune response generated in the gut and that distinct cytokine profiles characterize each CD4+ T cell subset [18,19]. The formation of a physical barrier by IECs plays an important role in innate immune defense [20-22]. Recently, it has been found that IECs are not only a passive barrier that limits the access of pathogens but also participate in mucosal immune regulation through pattern recognition receptors [23-25].

DCs are dysregulated in IBD, which leads to overproduction of chemokines and proinflammatory cytokines that stimulate the activation and differentiation of pathogenic Th cells [26-28]. DCs are antigen-presenting cells that are responsible for the regulation of abnormal T cell activation. Upon activation, a number of cell surface molecules and maturation markers are expressed, such as Toll-like receptors, Nod-like receptors, and C-type lectin receptors. Among them, DC-SIGN is a member of the C-type lectin superfamily, functioning as an adhesion receptor and a pattern recognition receptor [29-31]. It plays a critical role in regulating the migration of DCs and the subsequent activation of T lymphocytes involved in the immunoregulation of infectious and inflammatory diseases [9,32].

Our study showed that IECs express DC-SIGN, which is significantly correlated with intestinal disease severity. In vitro, we further demonstrated that IECs stimulate CD4+ T cells to secrete IL-4, suggesting that they potently induce a Th2-predominant host immune response in experimental colitis. In contrast, DCs from animals with experimental colitis induced T cells towards a Th1-skewing phenotype through IFN-γ secretion. Based on the above results, we propose that injured IECs might trans-differentiate, leading them to acquire immune properties. Trans-differentiation is a biologic process by which one differentiated cell type converts into another [33,34]. In disease states, IECs exert an antigen-presenting function through trans-differentiation and regulate mucosal immunity together with DCs in the local microenvironment to determine the type of immune response. This phenomenon may be associated with the regulation of gut immune compartmentalization [35,36].

In summary, DC-SIGN modulates the trans-differentiation of IECs, interacts with DCs as well as the local intestinal immune compartment, and might play a vital role in facilitating damage to the gut mucosa in IBD. Further study is needed to investigate the underlying mechanisms of the immunomodulatory effects of IECs.
Background

Dendritic cell-specific intercellular adhesion molecule-3-grabbing non-integrin (DC-SIGN) is a DC phenotypic molecule and plays an important role in mediating DC adhesion and migration, inflammation, and activation of primary T cells. This study aimed to confirm whether intestinal epithelial cells (IECs) express DC-SIGN and to explore whether it plays a role in inflammatory bowel disease (IBD).

Figure 2. Inflammatory bowel disease activity in patients with and without dendritic cell-specific intercellular adhesion molecule-3-grabbing non-integrin expression. Disease activity was assessed in DC-SIGN+ and DC-SIGN- patients using (A) the pediatric Crohn's disease activity index (PCDAI) and (B) the pediatric ulcerative colitis activity index (PUCAI).

Figure 3. Histology, disease activity index, and cytokine expression profiles in mouse splenic CD4+ T cells. A: Hematoxylin and eosin staining of intestinal tissue (magnification × 200); B: Disease activity index (DAI) scores; C: Histopathology scores from examination of intestinal biopsies; D: Interferon (IFN)-γ and interleukin (IL)-4 expression levels in mouse splenic CD4+ T cells. a: control group, b: dextran sodium sulfate-treated group, c: anti-P-selectin lectin-EGF domain monoclonal antibody-treated group; aP < 0.05, cP < 0.01 vs control.

Figure 6. T lymphocyte differentiation induced by mouse dendritic cells. Flow cytometric analysis of interferon (IFN)-γ and interleukin (IL)-4 levels in mouse CD11c+ dendritic cells. A: control group; B: dextran sodium sulfate-treated group; C: anti-P-selectin lectin-EGF domain monoclonal antibody-treated group.
2018-04-03T05:26:13.646Z
2015-01-07T00:00:00.000
{ "year": 2015, "sha1": "3b390c118c809552a1bf5781697b5516f323ee40", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.3748/wjg.v21.i1.187", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "ed1f013bcb451b07189c452a66ef87a76209d4be", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Chemistry", "Medicine" ] }
237691777
pes2o/s2orc
v3-fos-license
Diversity is not the Enemy: Promoting Encounters between University Students and Newcomers

In today's globalized world with dynamic processes of political, social, and societal change (Mergner et al., 2019), the university should be a place of encounter between people with different (cultural) backgrounds. The learning arrangement presented here therefore initiates intercultural exchange and aims to help students see diversity as an asset rather than a challenge (Roos, 2019). To this end, an intercultural project was initiated at TU Dortmund in Germany in 2017. In the context of different learning environments, future teachers were invited to have encounters with young newcomers through a nearly completely self-managed learning arrangement. The students were prepared for the encounters in focused courses dealing with theoretical backgrounds and didactic concepts. They would then prepare the lessons with the newcomers. In the context of this learning arrangement, the following questions were important: What did the university students expect with regard to the encounter with newcomer students from schools? How did they prepare the lessons? What did students and newcomers think about the encounters later? What have they learned? And what do these reflections mean for inclusive and intercultural teacher education at universities? In the project, we could observe that the didactic approach supports the students' level of sensitivity towards differences and encourages future teachers to practice the education of newcomers in a non-judgmental framework (Bartz & Bartz, 2018). Based on a selection of qualitative empirical findings (an ethnographic approach during six lessons in a period of two years and 147 interviews including the students' and newcomers' points of view about their learning encounters at TU Dortmund), this article discusses opportunities to create more innovative spaces for inclusive practices and cultures under the restricted terms of a mass university.

Introduction

In Germany, as in many other parts of the world, globalization and migration have led to an increasing influx of students from different (cultural) backgrounds in schools and universities. The student body is very diverse, with students speaking different languages and having different religious or ethnic backgrounds (Florian, 2017, p. 11). Particularly in the context of teacher training, it is important to help future teachers to use this diversity as an opportunity. Research conducted in this field shows that many teachers already have positive attitudes towards heterogeneous student groups (Ruberg & Porsch, 2017) but are struggling with the practical tasks in school (Grimm & Schlupp, 2019). So, there is still a big gap between the theoretical idea of seeing a heterogeneous student body as an educational resource rather than an excessive burden and the practical implementation of this idea. Particularly in the case of teaching newcomers, this well-known gap becomes highly relevant. Recent studies call attention to the fact that teachers feel unprepared to teach newcomers, have many insecurities about teaching them, and are struggling with increased learning demands like language support (e.g., Bačáková & Closs, 2013; Kipouropoulou, 2019; Lechner & Huber, 2017). To put it in a nutshell, teaching refugee students is often perceived as a challenge for teachers (Kleina & Ruberg, 2020).
Apart from that, many refugee students face discrimination and experience racism in school systems (e.g., Block et al., 2014; Correa-Velez et al., 2016; Uptin et al., 2016). Thus, both educational systems and teachers must learn to adapt to the needs of newcomers. Consequently, teacher education programs in universities should offer possibilities to reflect on the fixed idea of newcomers as extraordinary students who are an additional burden in the classroom (Grimm & Schlupp, 2019). Results of previous studies indicate that personal encounters with disadvantaged or marginalized learners can support teachers in having a more positive attitude towards them and encourage them to teach in more inclusive ways (Fichten et al., 2005; Seifried, 2015). Following the contact hypothesis formulated by Allport (1954), the facilitation of accompanied learning processes, in which students experience real learners, including their needs, and can see for themselves that there is no such thing as one homogeneous group of newcomers with a single story to tell about them, entails a great opportunity for educational settings. It is quite important to underline that this experience works both ways: newcomers, for their part, get the opportunity to become more familiar with higher education settings, are invited into a new learning arrangement, and can speak their own truth, if they like, instead of being addressed as passive and as people in need (Brewer, 2016, p. 136). These research findings encourage programs that provide teachers and refugees with appropriately insightful encounters and learning. So far, there has been insufficient research on how such programs can be designed in terms of content, didactics, and organization, and what outcomes can be expected. This research gap is to be closed with this work. For this purpose, an explorative, qualitative research design was chosen in order to create the preconditions for larger-scale, hypothesis-testing studies in the future. This article provides insights into an experimental seminar project at TU Dortmund in Germany that intends to help future teachers experience and reflect on cultural diversity in the context of higher education. In our research project, the future teachers and the newcomers are both included as target groups. The seminar project takes an innovative approach to opening the university towards the community and is connected to the local meeting center TU@Adam's Corner. Based on the concept of reflective inclusion and the use of the Universal Design for Learning (UDL) method, the seminar project contributes to reflective and difference-sensitive teacher education. By using the concept of reflective inclusion, we also try to think about stereotypes. We encourage students to talk about their thoughts honestly and to reflect on them together in the group. Thereby we try to avoid the possibility of participants remembering only the information from the encounters that fits their existing views or stereotypes. The cooperation project introduced in this article is associated with the DoProfiL program (Dortmunder Profil für inklusionsorientierte LehrerInnenbildung), which focuses on inclusive teacher education at TU Dortmund. This project is part of the Qualitätsoffensive Lehrerbildung, a joint initiative of the Federal Government and the Länder which aims to improve the quality of teacher training. The program is funded by the Federal Ministry of Education and Research. The authors are responsible for the content of this publication.
About Migration in Germany and First Intercultural Projects in Dortmund Germany has a long tradition of migration, and the topic of immigration is therefore not new, but since 2015 the intensity and extent of migration have reached a different level. In that year, Germany recorded the highest rate of immigration in its history. One-third of the refugees coming to Germany were children and young adults (Statistisches Bundesamt, 2016). The city of Dortmund, the setting for the activities and methodological reflections presented here, is a place of encounter between different cultures. Like the entire region known since the 1920s as the Ruhr area, it has always and fundamentally been shaped by migration. The migration movements of recent years have brought many unaccompanied, underage refugees to the region and to Dortmund. For this reason, TU@Adam's Corner has been created in the city to facilitate the arrival of these young people. TU@Adam's Corner is a meeting place where learners and teachers jointly design a learning space for the international classes at Dortmund's vocational colleges. In these international classes, young refugees learn German together with other students who are new to Dortmund. Since February 2016, the project has been supplemented by TU@Adam's Corner: Scientists from TU Dortmund share their knowledge with young refugees and immigrants, and in this way open up perspectives of belonging, arriving, and shaping the future. The main goal of this attached university organization is to take an active stance towards working with refugee students and to help them get to know their new surroundings, including local educational institutions. Requirements for Teacher Education and Reflective Inclusion as the Main Concept Schools and universities play a significant role in facilitating the human right to education for all newcomers. In Germany, however, many universities are only just beginning to find appropriate ways to prepare students for teaching newcomers. Related topics like migration and critical race theories are still not part of the mainstream curriculum for teacher education (Karakaşoğlu et al., 2017). Nevertheless, future teachers should be prepared for the situation of diversity in German schools as early as possible. This is "a matter of social justice and equity in education" (Florian, 2017, p. 9) and should be addressed as a permanent task for educational systems. Particularly with the worldwide agenda of inclusive education, there is an ongoing discussion about how teachers can learn to fully address the needs of all learners and how teacher education programs can support this goal. In this respect, the concept of (self-)reflection is one of the most widely discussed ideas (Watkins, 2012). There is broad agreement that it is the universities' task to create learning settings in which students can experience irritation, explore new ground, deal with possible misperceptions while being guided, and learn to frame their experiences with the help of scientific theory and through communicative exchange with peers and training staff. Inclusive education succeeds above all through reflection by all those involved in teaching processes (Beutel & Pant, 2020). In particular, this is underpinned by the approach of reflective inclusion, which understands difference as a product of social interactions in which (dis)advantages are inscribed.
Such an understanding requires a specific mode of reflection that comprises a permanent reflection on the individual consequences and structural conditions of one's own actions (Dannenbeck & Dorrance, 2009). Already a subject of general discussion as an important dimension of professionalism for teacher education, (self-)reflection is thus of significant importance for difference-sensitive teacher education as well. Such an approach involves the challenge of reflecting on school practice with regard to the (re-)production and processing of differences concerning cultural diversity, as well as illuminating processes of stereotyping and othering (Ashcroft et al., 2000). Universal Design for Learning as a Method for Difference-Sensitive Higher Education One of the most promising methods for managing diversity in the classroom and for education in universities is UDL (Powell & Pfahl, 2018). This concept, developed in the US, can provide orientation in the planning and implementation of inclusive and difference-sensitive teaching. Based on the design concept of the same name, it highlights key points of a learning environment with as few barriers as possible, an environment that considers a variety of learning strategies and levels. Three basic principles ensure that learners can acquire knowledge and skills according to their individual requirements: 1. Offering various options for task processing (representation) 2. Design of active learning and expression possibilities (action and expression) 3. Enabling motivated learning (commitment) One major benefit of UDL is the fact that it provides a systematic guide for creating didactic settings. Given the documented insecurities about teaching newcomers who are still learning German, it seems especially important that future teachers feel capable of planning the didactic setting and can use this highly structured method to gain confidence. The basic principles of UDL allow the students to anticipate difficulties in learning and find new, creative ways of working with them. For instance, UDL offers much inspiration for using easy language and different visualization methods. What We Do: Acknowledging Diversity through Guided Encounters between Future Teachers and Newcomers The basic idea of the seminar concept is to help students prepare for the task of teaching newcomers. This includes reducing uncertainties, sensitizing the students towards the different backgrounds of learners, and creating a safe space for exchanges between future teachers and newcomers. These goals result in a two-pillar agenda with support in didactic techniques and guided (self-)reflection. The 65 students involved in this project are studying to obtain a master's degree in special needs education. At the time of the encounters, they were in their first, second or third semester. Over the last two years, 82 young newcomers have taken part in this project. Some of them came from Iraq, Syria, Eritrea or Afghanistan, others from Europe, e.g., from Poland or Albania. The participants had been in Germany for an average of ten months, and their language level at the time was between A1 and A2. Prior to the encounters, the university students developed a teaching concept for a period of 90 minutes using UDL in order to deal well with the linguistic, cognitive and cultural diversity of the newcomers. They focused on the following: 1. The students decide on teaching topics on which the newcomers are motivated to work. 2.
Both the students and the newcomers work in an action-oriented and product-oriented way. 3. After welcoming the group of 15 to 20 newcomers, the students divide them into small groups of up to 5 to allow for more intensive encounters. 4. The learning material used in class is clearly structured and explains German terms with the additional help of pictures. 5. The newcomers receive a product that they can take home. The following topics were worked on in our project: celebrations, happiness, school, healthy eating, leisure activities and games. They are very general and intended to invite the newcomers to share their experiences. We always take an advisory and supportive role in creating the materials and preparing the lesson. As a rule, all lessons observed followed a similar schedule: welcome and introduction of all participants (10 minutes), information about the respective topic and the structure of the lesson (5 minutes), work at different group tables (60 minutes) and discussion of the results (15 minutes). Small groups of 5 newcomers worked at a topic table at a time. Each table was supervised by 3-4 university students to ensure that a close and, if desired, personal exchange could take place. It is important to us that all participants meet each other with acknowledgment and allow for personal conversations. In this way, people get to know each other more intensively and can exchange ideas more easily. Empirical Design: Research Questions, Materials and Methods For the accompanying research, we selected four guiding research questions to highlight different aspects of the seminar setting and to receive multi-perspective insights from students and newcomers (Table 1). In order to gain differentiated insights into the learning processes, we chose a complex qualitative research design with different survey times (pre-post design). The research sample includes all participants in the program, a total of 147 individuals (65 students, 82 newcomers). The data was collected through ethnographic observation and semi-structured interviews. Thus, it was possible to obtain differentiated answers to our research questions through systematic observation, collecting materials in the field, and subsequent documentation of the experiences by the participants (Flick, 2014, p. 302). The semi-structured interviews identified students' expectations and didactic considerations for the planned learning arrangement in combination with assumptions about the newcomers. In addition, the newcomers were asked about their expectations and wishes with regard to the upcoming encounter with the university students. Both groups of individuals were asked about their experiences during the post-encounter interviews. A special focus was placed on the learning processes that the subjects observed themselves going through. To document the encounters, observation protocols were used by the students and by us as observers of the study. The following were central points of observation: 1. The manner of opening the encounter 2. The involvement of the newcomers during the first round of introductions 3. The type and intensity of the newcomers' involvement during the group work 4. Didactic successes and failures of the students during the group work 5. Non-verbal communication 6. Changes in behavior or involvement of all individuals An argument for using the ethnographic method was the uniqueness of the encounters.
Although encounters between newcomers and students are organized every semester (usually one or two times), the participants and teaching concepts change every time. Thus, from the point of view of ethnographic research, it makes sense to be methodologically pragmatic and to document information and impressions comprehensively (Flick, 2014, p. 302). The ethnographic data collected were analyzed deductively according to Mayring (2015) using three categories. For both target groups, these are: (1) expectations, (2) arrangements, (3) learning experiences. The semi-structured interviews conducted before and after the encounters were evaluated using a qualitative content analysis according to Mayring (2015). This procedure serves the purpose of reduction by systematically creating inductive and deductive categories from the given data. The main categories developed in this process are the basis for a typification of the observed results. Findings from the ethnographic data and the interviews are treated equally in the formation of categories. Results As pointed out before, the research questions were divided into three categories: (1) expectations, (2) didactic arrangement and (3) learning experiences on both sides. Along these categories, the collected data from the ethnographic observations and the interviews is summarized. Expectations The expectations of the students prior to the encounters varied greatly and related to a wide range of feelings. Based on the evaluation of the interviews conducted with 65 students, three different types of student expectations were identified. There were students with no objections who seemed to be very open-minded in relation to teaching newcomers (type 1: 5 students), students with mixed feelings (type 2: 35 students), and students who had major concerns (type 3: 25 students). This is rather surprising, because Dortmund is located in a very multicultural region of Germany. The chance of university students meeting newcomers at schools is rather high. Upon closer examination, it turned out that many students came from a section of society that can be described as affluent and not very intercultural. This could explain why many of them had intense feelings and concerns in the run-up to the encounter. Only a minority of the students (5 out of 65) had had previous experiences with refugees in general. These students made a conscious decision to get involved in refugee work. A 21-year-old student, Laura, explained this in her interview: I grew up very privileged. I am doing very well. I don't know what hunger, war or displacement means. But I know that as a teacher I will later encounter many children and young people who have had these hard experiences.… I don't want to do that unprepared. I want to get in touch with people like that while I'm still at university, which is why I help out in a school. I help [refugee] children with their homework and it gives me great pleasure.… That's why I'm looking forward to meeting the newcomers at our university. Within the type 1 group, there is also a different kind of reasoning. For example, 20-year-old Luke said in the interview: "I am looking forward to meeting the newcomers. They are people who have had special experiences. But apart from that, they are people like you and me. If I approach them with an open heart, it will work out." So, he was very optimistic and open to new experiences. The latter also applied to the students from group type 2.
However, it also became clear in the interviews that even students who had had good experiences or were open-minded used forms of othering: They often referred to newcomers as "these people" and displayed a very distant attitude. This dissociation may be due to a lack of personal encounters with refugees in their leisure time and in school settings. One student, Maria, explained it as follows: I have little experience [as a teacher] and many questions. Especially, I have no experience with newcomers. The only thing I know is all the bad reports in the media. There is often talk on TV about Muslim boys not accepting women, even assaulting them. So I wonder how to protect myself from that. This statement shows how the media have negatively influenced Maria's perceptions. This phenomenon was particularly evident among the type 3 students, who clearly expressed their fears and insecurities. For example, Markus said: I don't know if we can do it. We all have no experience in school teaching. And many of us have no experience with newcomers. That's totally difficult. What do you do as a teacher if the newcomers don't respect you? Or what do I do if they don't understand me? I just can't imagine that it will be that easy. Both students clearly pointed out fears associated with negative stereotyping of refugees, such as men being disrespectful towards women or refugee students lacking respect for authority figures. However, as said before, these students had not experienced this kind of behavior themselves; they seemed to have taken on these concepts from the media or from general public discourses. German media reports often create the impression of new immigration as a topos of danger (Geier & Mecheril, 2021), and refugees are often marked as people who are unfamiliar with democratic values. This may be one reason why some of the students were so concerned about being respected by the newcomers. In addition to that, many students expected to hear stories of flight and very drastic accounts of war and conflict. However, the newcomers had very different backgrounds; some were from neighboring European countries, others from faraway countries like Iraq; they all had different (flight) stories to tell, and these were above all stories of resilience. What also became apparent were stereotypical ideas of restricted gender roles, as Markus' statement points out: We have chosen the theme of celebrations, and at my group table it'll be weddings. I really don't know how I'll react when young girls with headscarves tell me that they really want to get married when they are 16 or 17. Other students expected the female newcomers to be "very shy" and to "need help from the students to be confident." They seemed to think of female newcomers as persons who lack confidence and are needy. This reflects a common discourse in which refugees are often seen as people in need and not as active participants (Brewer, 2016). Other students also expressed prejudices towards newcomers, the most common being a lack of German language skills, the assumption that their bad experiences in the past would affect them in teaching contexts, and the expectation that they would be unfamiliar with regular school settings given their long absence from school. It was important to us that the students had the opportunity to freely express their doubts and prejudices during the interviews and seminars and to reflect on them with each other. In this exchange, it became apparent that type 1 students critically questioned many of the prejudices named by their peers.
In some cases, we intervened to contradict prejudices from which the newcomers needed to be protected. For instance, many students thought that the newcomers did not want to know much about Germany at all because they would be going back home after the war. We used studies and official data to underline that these were misperceptions and that the schooling of newcomers is a task that goes beyond a short-term emergency (UNHCR, 2016). Beyond that, the students discussed their concerns, let some of them stand, and awaited the encounter with the newcomers. Didactic Arrangement We examined the didactic arrangement both ethnographically and through interviews. First, the results of our ethnographic observation: As mentioned before, the students worked with the inclusive method of UDL. They created different learning materials, formulated tasks, researched information, used pictures and symbols, and did a lot of crafting. In addition, the students researched information about the newcomers' countries of origin and presented some of it in different languages. During the preparation, the students talked about how they would act in case of problems and gave each other hints. Didactically relevant questions asked by the students, such as "how good are the language skills?" and "can the newcomers read?", as well as decisions they had to make before the encounters, indicate that they expected teaching newcomers to be a challenge. These questions can be explained by the uncertainty of the students. On the one hand, they had little experience of teaching schoolchildren and, on the other, they had little or no experience with language learners or newcomers. The interviews revealed, however, that with the detailed preparation of the material according to the UDL guidelines, the students' uncertainty diminished and they felt well prepared. All of them pointed out that the preparation involved an enormous amount of time, which they had not expected. However, they agreed that this would make the meeting all the better. From our experience, these lessons generally run very well in all semesters, and the good preparation allows the lessons to proceed in a structured way. According to our observations, everyone involved in the situation feels comfortable, and inspired conversations arise. Furthermore, it turns out that the newcomers are not the only ones who learn something. Learning Experiences on Both Sides From the ethnographic perspective, it can be stated that the students reacted mainly with surprise, pleasure in teaching, and relief about well-functioning processes. Only two participants in the sample were not surprised or not satisfied with the results. This is also evident in the interviews. For example, Maria told us: I have learned so much. I am so happy that there were no problems at all.… Honestly, I feel ashamed that I thought so badly about male Muslim newcomers. We were talking about school, and they explained to me that in Islam you treat every teacher respectfully. It doesn't matter if it's a man or a woman. They were so polite and kind to me. I was really pleased, and now I find it really embarrassing that I was so unreflective before. But precisely because I had such prejudices, this encounter is so precious to me. Like Maria, Mark also had an unexpected experience: I was at the group table on the topic of marriage. And I told them that Germans often marry later than people in other cultures. Then a girl comes forward and says that she doesn't want to get married until she's 30 or so. Definitely not earlier.
I was totally surprised and asked her why that was so important to her.… She said that in her home country women are oppressed and she doesn't want that. She is now in Germany and wants to graduate from high school and go to university. She wants to take care of herself and only then look for a handsome man.… When she said that, I realized that I hadn't expected her to be so self-confident and take her life into her own hands like that.… This experience showed me that I'd had pretty strong prejudices. But it is important to get involved with people individually. That's what I've learned. We also interviewed the newcomers who met Mark and Maria. Sahid said about his meeting with Maria: Maria will be a very good teacher. She was very friendly to us.… We talked about school in Syria and about teachers. Then we told her that we Muslims pay special attention to teachers because they put a lot of effort into teaching us. She was very happy about that. After that it was really good. We laughed a lot and talked about our school days.… And I got to know the university. I would like to become a teacher one day and I have already met some nice colleagues. Now I really feel like doing my Abitur and studying. We also asked Lilas about her encounter with Mark. She said: It was so much fun to learn about German weddings. I didn't know all the traditions. But there are also things we all have in common: good food, friends and relatives.… It was funny when I told him that I would never get married before I was 30. I think Mark was very surprised. I find that funny, because why should a woman marry early and have children? I think he watched too much television. But it's good if a teacher knows that we girls want to study and not get married right away. I think he also learned something important. These statements from Lilas and Sahid reflect the overall perceptions of the newcomers. The interviews show that they learned a lot about German festivities and focused on the similarities to their own traditions. It was great fun and encouraging for them to see that the university is reachable, both through its regional proximity and through the shared learning experience. Most of them showed self-confidence and saw themselves as people who enriched the teaching processes. This is especially evident in their joyful realization that the students also learned something from them. They found this eye-to-eye encounter very important because, in their everyday lives, their experiences are different, especially with public authorities and administrations. Sadly and shockingly, their reports have in common that they often encounter people who are prejudiced towards them. One newcomer summed it up: "Finally I am being treated as an intelligent person and not like a stupid animal. The students have done a really good job and I feel welcome." Discussion and Conclusion The results can be summarized in five main points: 1. Even though the future teachers are studying in a multicultural city, only five of them had reached out to newcomers before. 2. 60 of the 65 students interviewed expressed (major) worries, fears and even prejudices before the encounter. 3. The encounter itself was evaluated as fruitful and educative by nearly all participants. 4. The 82 newcomers interviewed had a positive view of the past encounter; 45 of them feel motivated to study in Germany, and all interviewed newcomers felt very welcome and liked the open dialogue. 5.
Type 2 and type 3 students pointed out that the encounter with the newcomers was very meaningful for them and that their previously negative perspective changed significantly. Looking at these results, the importance of guided encounters in the context of inclusive and intercultural educational processes becomes clear. Learning and reflection processes among the university students were only initiated by real encounters with newcomers. The project revealed that the majority of the students had had no points of contact with refugees before, but held many negative assumptions. In part, the students' monocultural social environment and the predominantly negative media reports on migration may explain these findings (Geier & Mecheril, 2021). After meeting the newcomers, many students questioned their views and learned to see the individual instead of an imagined group. Therefore, this encounter is instructive not only in terms of the content imparted, but above all through the personal interaction. The basis of this interaction is simple but important: strengthening commonalities and understanding particularities. For teacher education, these findings imply the need to regularly examine the extent to which students have had intercultural experiences. In our view, it is imperative for the students' future work in inclusive German schools that they have already made, and reflected on, first experiences during their studies. Regarding the newcomers interviewed, our project has also shown some interesting and disturbing findings. Obviously, a large proportion (45 of 82) have had mainly bad experiences in interactions with public authorities and administrations. In contrast, they describe the encounter with the students as enriching and consider it special because it also took place in a public institution. It is rather alarming that the majority of the newcomers regarded it as exceptional to be welcomed and to be treated as equals, but this is certainly no isolated case (Brewer, 2016; Seukwa, 2007). A relevant encounter also needs to be well prepared methodically. Studies underline that it is not the encounter itself that matters but rather the quality of the experience (Urton et al., 2015). Based on the ethnographically collected data, we were able to determine that the differentiated preparation with UDL supported lively discussions and made the active participation of all newcomers possible. These controlled conditions give the students greater confidence in their actions, which is particularly important when they first come into contact with teaching (newcomers). Of course, the concept does not guarantee success, but the students learn how to deal more competently with heterogeneous learning requirements. They learn not only at the personal but also at the methodical level: Diversity is not the enemy. In fact, from the results it is clear to us that one encounter alone made a major difference for the students involved. Nevertheless, it is essential that such encounters are well accompanied. There is a need for reflection spaces in order to critically question one's own (professional) actions (self-reflection) and to learn new skills (didactic competence). However, the results should also be viewed in light of the limitations of our research. We did not conduct a representative study that would allow for generalizations, and we cannot say anything about long-term effects.
Our focus was on the individual experiences and encounters and on the accompanying reflective processes that the interviewees went through. Our research is an exploratory design that invites further (quantitative) research. This is especially important if meaningful results on changes in students' attitudes towards newcomers are to be collected. For future studies, we would recommend the objective measurement of attitude changes.
2021-09-27T20:55:49.434Z
2021-07-21T00:00:00.000
{ "year": 2021, "sha1": "c79195a32350add48d3945a07ee92b7eb91eec8b", "oa_license": "CCBY", "oa_url": "https://www.cogitatiopress.com/socialinclusion/article/download/4121/4121", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "5d59222973217ab7aa1fa1d95f9f65053cba3a6a", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Sociology" ] }
146102160
pes2o/s2orc
v3-fos-license
A Toolbox for the Analysis of the Grasp Stability of Underactuated Fingers † In the design of humanoid robotic hands, it is important to evaluate the grasp stability, especially when the concept of underactuation is involved. The use of a number of degrees of actuation lower than the degrees of freedom has shown some advantages compared to conventional solutions in terms of adaptivity, compactness, ease of control, and cost-effectiveness. However, limited attention has been devoted to the analysis of grasp performance. Some specific issues that need to be further investigated are, for example, the impact of the geometry of the fingers and of the objects to be grasped, and the value of the driving mechanical torques applied to the phalanges. This research proposes a software toolbox aimed at supporting a user towards an optimal design of underactuated fingers that satisfies stable and efficient grasp constraints. Introduction In the last few years, increasing interest has been devoted towards compliant and underactuated hands as a compact, reliable, and flexible grasping solution in manipulation applications [1,2]. However, relatively limited attention has been given to the development of simulation tools that address the specific challenges connected with underactuated grasping [3]. Notable examples are GraspIt! [4] and OpenGrasp [5]. Both simulators allow a set of common objects and various types of grippers to be analyzed, but they are not well suited for grasp stability analysis, especially when underactuated architectures are considered. Other recent efforts that address the specific design of underactuated hands include [6], whereas SynGrasp [7] is a MATLAB toolbox for grasp analysis of fully or underactuated robotic hands. Finally, in [8], the Yale-CMU-Berkeley (YCB) object and model set is presented, which is intended to facilitate benchmarking in robotic manipulation, prosthetic design, and rehabilitation research.
This paper introduces a simulation toolbox that the authors developed during their current efforts at BionIT Labs towards an efficient design of Adam's Hand: a transradial myoelectric prosthesis that uses a highly underactuated mechanism composed of 14 differential stages actuated by a single motor, i.e., 15 degrees of freedom (DOFs) and 1 degree of actuation (DOA) [9]. The underactuation among the fingers is obtained by symmetrically stacking five bevel gear differential stages, while the underactuation within each finger is obtained by serially stacking two differential idler pulleys per finger. A functional scheme of the designed mechanism is shown in Figure 1, while a more in-depth analysis of the proposed mechanism can be found in [10,11]. This paper extends the study, preliminarily presented by the authors in [12], on the contact forces generated by the underactuated fingers during enveloping grasps. The overall goal is to optimize their features by maximizing the contact conditions for which a stable grasp can be achieved. As explained in the following section, these contact situations are identified by a combination of phalanx flexion/extension angles and contact points of the phalanges with the object to be grasped. In order to simplify the analysis, which involves a high number of variables, a software tool is presented in Section 4 to support a user during the design stage. The software framework, available upon request, is developed in the Mathematica environment, which is a very powerful symbolic language, well established in the scientific and industrial world. The Mathematica programming environment makes it easy to exploit other specific tools and built-in math functions, enabling the exploration of multiple approaches and the integration with other analysis tools, e.g., statistical processing of experimental data, optimization, dynamic models, and simulations. General Static Model Drawing on [13], the model of the underactuated finger used in this paper is shown in Figure 2. The following assumptions hold: the finger motion is planar (no abduction/adduction), and all the n phalanges, which are driven by a single actuator, are linked through revolute joints. Equating the input and output virtual powers of this system, one obtains

t^T ω_a = Σ_{i=1..n} ξ_i • ζ_i (1)

where: • t is the input torque vector exerted by the actuator (T_a) and the springs located between the phalanges (T_2, ..., T_n):

t = [T_a T_2 ⋯ T_n]^T (2)

• ω_a is the corresponding joint velocity vector:

ω_a = [θ̇_a θ̇_2 ⋯ θ̇_n]^T (3)

where θ_i is the ith joint variable. • ξ_i is the twist of the ith contact point on the ith phalanx (assuming one contact per phalanx) with a corresponding wrench ζ_i, and the operator "•" stands for the reciprocal product of screws in the plane. It can be shown that

Σ_{i=1..n} ξ_i • ζ_i = f^T J T ω_a (4)

where: • f is the vector of the resultant contact forces f_i, normal to phalanges 1, ..., n:

f = [f_1 f_2 ⋯ f_n]^T (5)

• J is the Jacobian Matrix (Equation (6)), an n × n lower-triangular matrix which depends only on the locations k_i of the contacts on the phalanges and their relative orientations r_ij, the phalanx lengths l_i, and the friction coefficients µ_i and η_i. • T is the Transmission Matrix (Equation (7)), an n × n square matrix which depends on the stage transmission ratios x_i of the mechanism used to propagate the actuation torque to the phalanges; in its expression, I_{n−1} and 0_{n−1} denote the identity matrix and the zero vector of dimension (n − 1).
Then, considering Equations (1) and (4), the equilibrium of virtual power for the system results in

t^T ω_a = f^T J T ω_a (8)

from which one obtains a useful relationship between the actuator torques and the contact forces:

f = J^{-T} T^{-T} t (9)

It should be noted that an n-output, m-input underactuated mechanism requires n − m springs in order to be statically determined. For this reason, depending on the mechanism design, it is possible that a torsion spring will also be required in the base joint O_1. In this case, matrix J remains the same, while vector t and matrix T^{-1} respectively become

t = [T_a T_1 T_2 ⋯ T_n]^T (10)

and the rectangular matrix T* of dimensions (n + 1) × n (Equation (11)), where I_n is the identity matrix of dimension n. Equation (9) in this case becomes

f = J^{-T} (T*)^T t (12)

and the forces obtained are the same as those calculated in the absence of the base joint spring, except for f_1, which contains the additional term T_1/(k_1 + η_1) (and this holds for any number of phalanges). This result represents the most general one, since if T_1 = 0, f_1 also equals the one previously obtained. For this reason, from now on, matrices J (Equation (6)) and T* (Equation (11)) and vector t (Equation (10)) will be used to optimize the finger design using Equation (12). Impact of Phalanx Thickness As can be seen in Figure 3, when the ith phalanx thickness e_i is not negligible, the angle θ_i should be augmented by a corrective quantity (Equation (13)) and the contact location k_i should be shifted accordingly (Equation (14)). In addition, when friction is non-zero, the equilibrium locus changes due to the moment generated by the tangential force, which can be modelled using the coefficient η_i: the tangential force produces a moment about O_i equal to −f_ti e_i. This moment can be seen as a wrench with the same normal and tangential forces and a torque τ_i, as in the case of a zero-thickness phalanx. Therefore, one obtains an additional coefficient (Equation (15)) that must be added to the previous value of η_i describing the contact friction (even if it is zero). This change can be reflected directly in the matrix J to obtain the new force expressions. Positive Definiteness of the Forces Given a set of geometric parameters, Equation (9) or (12) provides the contact configurations defined by the pair (k*, θ*) that ensure full positiveness of the vector f. The set of these contact situations corresponds to the stable part of the space spanned by the contact situation pairs (k*, θ*), which is referred to as the space of contact configurations or grasp-state space. Stable grasps correspond to contact situation pairs for which the vector f has no negative component, that is, the phalanges in contact with an object have positive (or zero) contact forces. The other phalanges that are not in contact with the object must correspond to zero contact forces. It should be underlined that this approach tries to characterize the finger itself, independently of the object being grasped. It should also be considered that grasps involving all the phalanges correspond only to a subset of all the possible grasps: fewer-than-n-phalanges grasps can also be stable if each phalanx in contact with the object has a strictly positive contact force and each phalanx not in contact with the object has a null contact force.
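To make this pipeline concrete, the following is a minimal numerical sketch of Equation (9) and the positiveness check, written in Python rather than in the toolbox's Mathematica environment. The explicit entries of J and of the torque mapping are assumptions for illustration: J uses a classic frictionless two-phalanx form from the underactuated-finger literature the model draws on [13], and the transmission is an assumed tendon-pulley-like stage with unit ratio, not the paper's Equations (6), (7), or (16).

```python
import numpy as np

def jacobian_2phalanx(k1, k2, l1, theta2):
    """Frictionless two-phalanx contact Jacobian (assumed form from the
    general theory [13]): contact normal velocities v = J @ [dtheta1, dtheta2]."""
    return np.array([[k1, 0.0],
                     [k2 + l1 * np.cos(theta2), k2]])

def contact_forces(J, Tinv_T, t):
    """Equation (9): f = J^{-T} T^{-T} t, with T^{-T} supplied explicitly."""
    return np.linalg.solve(J.T, Tinv_T @ t)

def is_stable(f, tol=1e-9):
    """Stable grasp: no phalanx in contact pushes with a negative normal force."""
    return bool(np.all(f >= -tol))

# Illustrative parameters only (hypothetical values, SI units).
l1, l2 = 0.040, 0.035                 # phalanx lengths [m]
k1, k2 = 0.5 * l1, 0.5 * l2           # mid-phalanx contact locations [m]
theta2 = np.deg2rad(80.0)             # relative flexion of phalanx 2
t = np.array([0.25, -0.10])           # [actuator torque T_a, spring torque T_2] [N m]

# Assumed tendon-pulley transmission with x_12 = 1: the actuator torque
# reaches both joints and the spring acts on joint 2 only, so T^{-T}
# maps t to joint torques [T_a, T_a - T_2].
Tinv_T = np.array([[1.0, 0.0],
                   [1.0, 1.0]])

J = jacobian_2phalanx(k1, k2, l1, theta2)
f = contact_forces(J, Tinv_T, t)
print(f, "stable" if is_stable(f) else "unstable")   # both forces positive here
```

Lowering the spring torque or the flexion angle in this sketch drives f_1 negative, reproducing the ejection behaviour that the grasp-state space analysis below is designed to map.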
Contact Force Formulation for the Proposed Finger Mechanism The general equations presented in Section 2 are written for the scheme proposed in Figure 1, both for the two-phalanx thumb (I) and for the four three-phalanx fingers, from the index (II) to the pinkie (V). As mentioned in Section 1, the prosthetic hand under study features n = 15 DOFs that are actuated by just m = 1 DOA, so n − m = 15 − 1 = 14 springs are required to solve the static equilibrium equations. Due to symmetry considerations, these springs have to be located in all the joints of each finger (three springs each for fingers II-V and two springs for finger I), so that the base joint (O_1) of each finger is also linked through a spring to the fixed palm. Two-Phalanx Finger Three DOFs are assigned to the thumb, corresponding to proximal and distal phalanx flexion/extension and to metacarpus abduction/adduction. These members are interconnected via three revolute joints. A torsional spring is positioned in each joint, linking the phalanges, the metacarpus, and the palm. Since the analysis carried out in Section 2 does not consider out-of-the-flexion-plane movements, in the following analysis the metacarpus motion is constrained, so that the thumb is composed only of the proximal and the distal phalanges. Although this approximation reduces the generality of the analysis, it should be considered that many grasp typologies can be obtained with the metacarpus fixed relative to the palm. The model of the thumb is presented in Figure 4a. Both the flexion/extension of the two phalanges and the abduction/adduction of the metacarpus are driven by the torque deriving from bevel gear differential stage 4, as shown in Figure 1. Matrix J_I, obtained according to Equation (6), is given in Equation (16); its physical meaning is shown in Figure 4b-d. When the phalanx thickness e_i is taken into account, the finger model becomes that shown in Figure 5a, as discussed in Section 2.1, and matrix J_I is modified accordingly (Equation (17)), since the vectors r_ki in this case acquire another component, proportional to the phalanx thickness e_i, along the y_i axis, as shown in Figure 5b-d. This matrix represents a generalization of the matrix reported in Equation (16), which is recovered for e_i = 0 (i = 1, 2). Therefore, in the following analysis, matrix J_I will be derived from Equation (17). Matrix T_I and vector t_I can be obtained from Equations (7) and (10), respectively (Equations (18) and (19)), with x_12,I being the transmission ratio between the base and the middle idler pulleys, T_a,I the torque exerted by one of the two sun gears of bevel gear differential stage 4, K_h,I the spring stiffness, and Z_h,I the spring preload for joints h = 1, 2 of the thumb. The contact forces are then obtained from Equation (12) (Equation (20)). Neglecting friction (µ_i = η_i = 0 ∀i) and considering x_12,I = 1, these equations become much simpler (Equation (21)). The grasp is stable only if f_i > 0 for i = 1, 2. By studying the contact situations defined by the pair (k_2, θ_2) for a determined set of geometric, static, and dynamic parameters (phalanx lengths and thicknesses, friction coefficients, spring stiffnesses and preloads, actuation torque, ...), the portion of the grasp-state space in which all the forces are positive can be obtained. By varying the design parameters, this portion can be maximized in order to ensure a stable grasp for the largest number of achievable contact situations.
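As a rough illustration of this study (a sketch under the same assumed frictionless Jacobian and tendon-like transmission as in the previous snippet, not the toolbox's symbolic Equations (16)-(21)), the (θ_2, k_2) grasp-state space can be sampled on a grid and each point marked as stable or unstable; the stable fraction is then a crude version of the performance index integrated by the software.

```python
import numpy as np

def forces_2phalanx(theta2, k2, k1=0.020, l1=0.040, T_a=0.25, T_2=0.10):
    """Simplified frictionless two-phalanx contact forces (x_12 = 1), reusing
    the assumed Jacobian and joint-torque mapping of the previous sketch."""
    J = np.array([[k1, 0.0],
                  [k2 + l1 * np.cos(theta2), k2]])
    tau = np.array([T_a, T_a - T_2])       # assumed torques after transmission
    return np.linalg.solve(J.T, tau)

l2 = 0.035
thetas = np.linspace(np.deg2rad(5.0), np.deg2rad(120.0), 60)
ks = np.linspace(0.05, 0.95, 60) * l2      # k_2 swept as a fraction of l_2

stable = np.array([[np.all(forces_2phalanx(th, k2) > 0) for k2 in ks]
                   for th in thetas])

# Fraction of the sampled (theta_2, k_2) space that yields a stable grasp:
print(f"stable portion of the grasp-state space: {stable.mean():.1%}")
```

Maximizing this fraction over the design parameters (spring preloads, phalanx lengths, transmission ratios) is the optimization loop that the toolbox supports interactively.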
Three-Phalanx Fingers In this case, three DOFs are assigned to each of fingers II-V. They all feature three phalanges linked through three revolute joints among them and to the fixed palm. A torsional spring is located in each joint. The model of this finger is presented in Figure 6a. The model is modified as shown in Figure 6b when the phalanx thickness is not negligible. Each finger is driven by the torque delivered by bevel gear differential stage 2 (fingers II and III) or 3 (fingers IV and V), as shown in Figure 1. Considering the same assumptions made for the two-phalanx finger, the contact forces obtained from Equation (12) are given in Equation (22). Again, neglecting friction (µ_i = η_i = 0 ∀i) and considering x_12 = x_23 = 1, these equations become much simpler (Equation (23)). The grasp is stable only if f_i > 0 for i = 1, 2, 3. Proposed Software The software toolbox provides a useful tool to simplify the grasp-state space analysis and parametric optimization. It was developed in the Wolfram Mathematica environment [14]. The graphical user interface (GUI) is shown in Figure 7. It consists of four main areas. The first row (2-phalanx finger) and the second row (3-phalanx finger) refer to the finger geometric parameters explained in the previous sections; the third row collects the parameters relative to the target object (position, size, shape), other parameters used to define the limits of the grasp-state space, and the constraints on the normal and tangential contact forces, both for two- and three-phalanx fingers; the fourth row includes the graphs that represent the grasp-state space (k_2, θ_2) and the scheme of the two-phalanx finger on the left side, whereas two graphs representing the grasp-state spaces (θ_2, θ_3) and (k_2, k_3) and the scheme of the three-phalanx finger are shown on the right side. Each subsection of the GUI is explained in more detail in the remainder of the paper. • Sections 1 and 2-Phalanx length and semi-thickness Phalanx lengths can be set by matching those of the human hand, using standard biomechanical measurements [15], as shown in Table 1. Phalanx thickness, instead, has been set considering the size of the mechanical transmission, in order to simplify the subsequent design validation and due to the lack of standard biomechanical measurements in the literature. Note that the software requires the phalanx semi-thickness as input. The software foresees the adoption of a spring in each joint; when a joint (e.g., O_1, O_2, O_3) is checked, the rotary spring is activated and the relative stiffness and preload can be set; otherwise, the spring is neglected. The user can also choose whether the spring opposes the opening or the closing of the prosthetic hand. As stated before, the proposed mechanism requires a spring in each joint of the finger in order to obtain a statically determined finger. • Section 6-Friction coefficients The user can choose whether or not to consider friction by checking or unchecking the relative button; in the first case, the values of the friction coefficients µ_i and η_i (i = 1, 2, 3) can be set. These values depend on the material of the object-finger contact: typical values are 0.8 for a steel-steel contact and 1-4 for solid rubber (in both cases clean and non-lubricated [16]). As a matter of fact, it should be considered that robotic finger surfaces can be coated with a rubber-like layer, either directly to increase friction or indirectly through the use of a tactile sensing device.
It should also be noted that, for a given value of µ_i,static, two values of µ_i should be considered, each corresponding to one sliding direction, i.e., µ_i = +µ_i,static or µ_i = −µ_i,static. • Section 7-Torque and transmission ratios The base joint actuation torque T_a must be provided in order to calculate the contact forces. Specifically, in the case of fingers II-V, this torque is found under the assumption that, in the steady-state condition, it is equally distributed among all fingers. The user can also choose the values of the transmission ratios between the phalanges: in order to simplify and speed up the mechanism prototyping, the current version uses unitary transmission ratios. • Sections 8 and 9-Force application points and flexion angles The parameters adopted to study the grasp-state space are the phalanx flexion/extension angles θ_2, θ_3 and the force application points, expressed as a percentage of the phalanx length (k_2 ≡ %l_2 and k_3 ≡ %l_3): for two-phalanx fingers, this space has dimension 3 and is therefore easily readable on a single 3D graph parameterized as a function of (θ_2, k_2); for three-phalanx fingers, instead, at least two different graphs should be considered: the current version of the software shows the force vector components as a function of (θ_2, θ_3) in the first graph (on the left of Figure 7) and as a function of (k_2, k_3) in the second graph (on the right) of the same figure. However, other parameter combinations, such as (θ_2, k_2) or (θ_3, k_3), are easily implementable. • Section 10-Grasped object parameters When the object button is checked, the software working modality is affected: force application points, in this case, are automatically defined by the intersection between the phalanges and the object's outer shape. The user can choose the object dimension, shape, and position relative to the finger base joint O_1. • Section 11-Graphic settings The sliders in this section help define the grasp-state space boundaries, both in terms of (θ_2, θ_3) and (k_2, k_3). They also define the number of points for which the numeric integration of a performance index is performed. This index indicates the percentage of the defined grasp-state space that allows for a stable grasp. The boundaries for the contact forces can also be set, in order to analyze their trend. Moreover, the visualization of each single contact force, both in the grasp-state space graphs and in the finger schemes, can be activated by checking the relative button. In detail: the contact forces f_1, f_2, and f_3 are denoted respectively by yellow, orange, and blue surfaces in the grasp-state space graphs, while the green surfaces indicate the portion of the grasp-state spaces where the forces are all positive, therefore indicating a stable grasp. The green (stable grasp) or red (unstable grasp) point indicates the current configuration of the parameters (θ_2, k_2) for the two-phalanx finger or (θ_2, θ_3) and (k_2, k_3) for the three-phalanx finger; the vectors representing the contact forces in the finger schemes are green or red if the forces are, respectively, positive or negative. The blue vectors, instead, indicate the tangential forces acting at the object contact points. Furthermore, the GUI language can be set (the current version only supports English and Italian).
• Section 12-Results This section shows the main analytic outcomes obtained from the software: the normal and tangential forces and the values of the performance indexes, both for two- and three-phalanx fingers. In the configuration considered in Figure 7, the grasped object, a disk, is positioned at the same distance from the base joint O_1 for the two finger architectures, but the grasp is stable only for the three-phalanx finger. This is due to the fact that, for the given combination of the chosen parameters, in the case of the two-phalanx finger the force f_2 is negative, while in the case of the three-phalanx finger all the forces are positive. Specifically: as can be seen in the grasp-state space graph of the two-phalanx finger (Figure 8a), the force f_2 (orange surface) is always negative for every value of the (θ_2, k_2) parameters, so that the only way to obtain a stable grasp is to change the other parameters, such as the phalanx lengths or thicknesses, the friction coefficients, or the spring features; in the case of the three-phalanx finger (Figure 8b), the first grasp-state space, which is a function of the (θ_2, θ_3) parameters, shows a stable grasp (green surface) in just 38.2% of the defined space, mainly due to the trend of the f_2 surface; on the other hand, the second grasp-state space, which is a function of the (k_2, k_3) parameters, shows a stable grasp in 91.2% of the defined space. As an example, if, in the case of the three-phalanx finger, the friction coefficients are modified from µ_1 = µ_2 = µ_3 = 0.8 to µ_1 = µ_2 = µ_3 = 0.6, the grasp becomes unstable (as shown in Figure 9); this result highlights the importance of friction in the grasp stability problem, and it also shows how this software can be useful in finding the best design parameters for an efficient underactuated gripper. The proposed toolbox helped in the design choices of the Adam's Hand prototype family shown in Figure 10. It was especially useful in setting the stiffness of the joint springs to increase the stable portion of the grasp-state space of Adam's Hand. Conclusions In this paper, a software framework was presented to help a user during the design stage of a humanoid robotic hand that employs underactuated fingers towards grasp stability optimization. The tool is highly parameterized to cope with various parameters that include phalanx thickness and length, friction, joint spring properties, and driving torque. Although it was primarily intended for the parametric design of a humanoid underactuated hand using simulated grasping, it could also be valuable as an educational tool to help non-expert users or students understand the principles underlying underactuated grasping by visualizing how the parameter slide bars impact the stability indexes. Future developments will be devoted to extending the single-point contact model by also taking into account linear and circular contacts. The interaction with the grasped object by more than one finger at a time will be added to the system. Efforts will be made to include the deformability of the phalanges, fingers, and grasped object towards a fully soft underactuated design. Finally, while many quality measures for grasps have been proposed in the literature, the use of these measures for automatic grasp choice remains an open issue [17]. Therefore, grasp quality metrics other than stability will be considered, for example, by taking into account the task requirements and following knowledge-based approaches.
Figure 3. Impact of the phalanx thickness. Figure 7. Graphical user interface (GUI) of the developed software. Please refer to the online colored version for a better view. Figure 9. Grasp-state space graphs and configuration of the three-phalanx finger for an unstable grasp due to insufficient friction. Figure 10. Adam's Hand prototype family: alpha-prototype at the top and beta-prototype at the bottom of the image.
2019-05-07T13:57:21.638Z
2019-04-06T00:00:00.000
{ "year": 2019, "sha1": "e6ecff41bb377e44d9d4846bbcffba9d43751b36", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2218-6581/8/2/26/pdf?version=1556623455", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "e6ecff41bb377e44d9d4846bbcffba9d43751b36", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Engineering", "Computer Science" ] }
92994606
pes2o/s2orc
v3-fos-license
Life Cycle Assessment for the Production Phase of Nano-Silica-Modified Asphalt Mixtures Abstract: To combat the rutting effect and other distresses in asphalt concrete pavement, certain modifiers and additives have been developed to modify the asphalt mixture and improve its performance. Although a few such additives exist, nanomaterials have recently attracted significant attention from the pavement industry. Several experimental studies have shown that the use of nanomaterials to modify asphalt binder results in improved oxidative aging properties, increased resistance to the rutting effect, and improved rheological properties of the asphalt mixture. However, despite the numerous benefits of using nanomaterials in asphalt binders and materials, there are various uncertainties regarding the environmental impacts of nano-modified asphalt mixtures (NMAM). Therefore, this study assessed a nano-silica-modified asphalt mixture in terms of materials production emissions through the Life Cycle Assessment (LCA) methodology, and the results were compared to a conventional asphalt mixture to understand the impact contribution of nano-silica in asphalt mixtures. To be able to compare the relative significance of each impact category, the normalized score for each impact category was calculated using the impact scores and the normalization factors. The results showed that the NMAM had a global warming potential of 7.44563 × 10^3 kg CO2-Eq per functional unit (FU), compared to 7.41900 × 10^3 kg CO2-Eq per functional unit for the conventional asphalt mixture. The application of LCA to NMAM has the potential to guide decision-makers on the selection of pavement modification additives, so as to realize the benefits of using nanomaterials in pavements while avoiding potential environmental risks.
Introduction Asphalt is the most widely used pavement layer in the world. It consists of a binding material called bitumen and crushed or natural aggregates. The mixture of these materials forms asphalt mixtures. Demand for paved roads exceeded the supply of lake asphalts in the late 1800s and led to the use of petroleum asphalts [1]. Asphalt is often used as a shortened form of asphalt concrete, which is the material of choice in the pavement sector. In the United Kingdom and the rest of Europe, the term 'bitumen' is used as a synonym for the term 'asphalt binder', while 'asphalt cement' is often used in the United States [2]. Asphalt cement or bitumen is used to bind the aggregates together to provide the strength and stiffness required to transfer vehicular loads. In addition to its strength and stiffness, asphalt pavement offers a damping ability due to the viscous-elastic nature of the bitumen [3]. Consequently, asphalt mixtures are qualified to provide optimal driving comfort as well as flexible maintenance actions. Asphalt pavements are designed to provide maximum performance throughout the design life. Bitumen (asphalt binder) performs two functions: binding aggregates together and protecting the aggregates from distortions. However, unlike concrete pavements, asphalt pavements experience deformations over short periods of time. This, coupled with increased traffic loads and extreme weather conditions, has resulted in asphalt pavement authorities seeking alternative solutions to improve the resistance of road pavements to the adverse effects of mechanical and environmental loading [4]. Currently, several additives and modifiers produced commercially are used to modify the properties of the asphalt binder. Ref. [3] stated that additives and other modifiers are added to asphalt mixtures to lower mixing and compaction temperatures. This was found to improve adhesion and increase resistance against cracking and rutting. Regarding the viscosity of bitumen, Ref. [5] studied the effects of asphaltene on the rheological properties of diluted Athabasca bitumen. Nanotechnology and nanomaterials have recently attracted significant attention from the pavement industry. The application of nanomaterials is considered to have the potential to improve asphalt binder properties. As mentioned by Ref. [6], the application of nanomaterials as asphalt modifiers is growing rapidly in popularity due to their unique characteristics that significantly improve the performance of the asphalt binder. It has been shown in several studies that the addition of nano-silica to asphalt mixtures improves the oxidative aging properties, increases resistance to the rutting effect, improves the rheological properties of the asphalt mixture, and decreases the interaction between asphalt molecules [7][8][9][10]. In addition, Ref. [11] found that increasing the nano-silica content in asphalt mixtures decreases the ductility and temperature sensitivity of the asphalt mixture.
It is becoming increasingly important to explore the full benefits of additives and modifiers on the long-term performance of asphalt pavements. With sustainability in mind, and also embracing the global effort to reduce the environmental impacts associated with these newly developed materials, being able to make decisions and judge the benefits and environmental friendliness linked to long-term pavement performance has become important. Consequently, having life-cycle assessment (LCA) tools available to assess modified-asphalt materials on a life-cycle basis becomes necessary. Due to concerns about global warming and resource depletion, LCAs of different materials, products, and systems have gained significant popularity among researchers. LCA studies can help to determine and minimize the energy consumption, use of resources, and emissions to the environment by providing a superior understanding of the systems [3]. LCA studies can also help to consider different alternatives if the environmental performance of a particular material or product is not favorable. There have been several studies that attempt to assess the environmental impact of asphalt materials, and some studies have also been conducted on asphalt binders modified with additives [12][13][14][15]. However, to the authors' knowledge, no studies that assess the complete LCA for the production phase of nano-silica-modified asphalt mixtures have previously been published. As nano-silica is a new material being used as a modifier, there are uncertainties regarding the environmental impacts associated with nanomaterials. Therefore, it is of paramount importance to investigate the extent to which the use of nano-silica-modified asphalt mixtures for asphalt concrete pavement is beneficial from an environmental perspective. This study presents the assessment of a nano-silica-modified asphalt mixture in terms of materials production emissions through the LCA methodology. The environmental impacts of a conventional asphalt mixture were assessed so that a comparison could be made to understand the impact contribution of nano-silica in the asphalt mixture. In addition, to be able to compare the relative significance of each impact category, the normalized score was computed for each impact category using the impact scores and normalization factors. The application of LCA to NMAM has the potential to guide decision-makers on the selection of pavement modification additives to realize the full benefits of the use of nanomaterials in pavements while avoiding potential environmental risks. Life Cycle Assessment LCA is described by Ref. [16] as a tool for systematically analyzing the environmental performance of products or processes over their entire life cycle, which includes raw material extraction, manufacturing, use, end-of-life disposal, and recycling. LCA is described as a 'cradle to grave' method for the evaluation of environmental impacts [17]. In a similar description, Ref. [18] defines LCA as a methodology that quantifies the environmental impacts of a process or a product. In their study, Ref. [19] stated that most of the environmental impacts do not occur during the use, maintenance, and repair of the product but during the manufacturing, transportation, and disposal stages.
[20] claimed that it would be premature to make any claims on the environmental benefits of a particular product or manufacturing process without first considering its consequences in a life cycle context. LCA methodology includes the establishment of an inventory of all types of emissions and waste products [21,22]. LCA studies are conducted in accordance with the specifications and standards of the International Organization for Standardization (ISO). The four major components of an LCA study according to Ref. [23] are illustrated in Figure 1. The inventory analysis part is made up of the material extraction phase, the manufacturing or production phase, the use or operational phase, and the disposal phase. However, it is quite difficult to effectively assess the environmental impact of a product during its in-service life. Therefore, the analysis of this study does not include the operational phase and/or the disposal phase of the inventory analysis.
Nanomaterials and their Application as a Modifier in Asphalt Mixtures Nanotechnology is an emerging technology and is regarded as a key enabling technology due to its numerous associated benefits to many areas of society. Nanotechnology is defined as the use of very small particles of materials (either by themselves or by their manipulation) to create new large materials [24]. The author added that nanotechnology is not a new science or technology, but an extension of the science and technology that has been in development for many years, and is used to examine nature at an ever-smaller scale. Ref. [25] defines nanomaterials as those physical substances with at least one dimension between 1 and 150 nm (1 nm = 10⁻⁹ m). With reference to the European Commission's recommended definition of nanomaterials, Ref. [26] defines a nanomaterial as a "natural, manufactured material containing particles, in an unbound state or as an aggregate or as an agglomerate and where, for 50% or more of the particles in the number size distribution, one or more external dimensions is in the size range 1-100 nm". The application of nanomaterials in the field of construction is growing rapidly. Ref.
[27] mentioned that nanotechnology is a rapidly expanding area of research where novel properties of materials manufactured at the nanoscale can be utilized for the benefit of constructing infrastructure. Although some nanomaterials are already being used in the concrete industry, their application as a modifier in asphalt binder has attracted more interest recently. Several experimental studies have been conducted to determine the effect of nanomaterials, especially nano-silica, on the properties of asphalt mixtures. Nano-silica materials are used as additives which are applied in small percentages by weight of the asphalt binder to improve the rheological and other properties of asphalt mixtures. Ref. [7] investigated the characteristics of asphalt binder and mixture containing nano-silica and found that the addition of nano-silica has a positive influence on different properties of the asphalt binder and mixture. Ref. [28] also studied the effect of nano-silica and rock asphalt on the rheological properties of modified bitumen. In their study, Ref. [29] found that the inclusion of nano-silica reduces the rutting susceptibility of nano-modified asphalt mixtures. Ref. [30] carried out a laboratory evaluation of a composite modified binder and mixture containing nano-silica/rock asphalt/SBS. In a similar experimental study, Ref. [31] found that increasing the percentage of nano-silica increases the Brookfield Rotational Viscosity (RV). Ref. [32] worked on the application of nano-silica to improve asphalt mixture self-healing. In another study, Ref. [33] investigated the effect of nano-silica on the thermal sensitivity of hot-mix asphalt. Nano-silica increases the strength and durability of asphalt mixtures [34,35]. Refs. [36][37][38][39][40] also reported similar studies on the effect of nano-silica on asphalt binders and mixtures. Table 1 summarizes the review of previous studies on the characterization of asphalt binder modified with nano-silica. Regarding the cost of using nanomaterials, Ref. [41] provides the prices for almost all nanomaterials based on the quantity required. For example: precipitated calcium carbonate nanopowder, 50 nm (100 g = $45, 1 kg = $85); nano-silica nanopowder, 60-70 nm (100 g = $55, 1 kg = $155); titanium oxide nanopowder, 20 nm (100 g = $165, 1 kg = $468); zinc oxide nanopowder, 80-200 nm (100 g = $58, 1 kg = $168). While some nanomaterials may seem costly, others may be cheap. However, on a large scale, an extensive economic analysis is required to determine the optimum cost for each nanomaterial based on the quantity required.

Table 1. Review of previous studies on modification of asphalt binder with nano-silica.
Author | Type of Nanomaterial | Effect on Asphalt Binder and Mixtures
[32] | Nano-silica | Improves the self-healing of HMA
[10] | Nano-silica | Improves Marshall stability, resilient modulus, and fatigue life
[29] | Nano-silica | Enhances antiaging property and rutting and fatigue cracking performance
[30] | Nano-silica | Improves temperature stability, decreases temperature cracking resistance and reduces susceptibility to moisture damage
[28] | Nano-silica | Enhances the complex shear modulus and improves the anti-rutting performance of asphalt mixture
[34] | Nano-silica | Reduces the susceptibility to moisture damage and increases the strength of asphalt mixes
[35] | Nano-silica | Improves the performance and durability of asphalt mixtures
[36] | Nano-silica | Improves rutting and fatigue performance of asphalt binder
[37] | Nano-silica | Decreases the interaction between asphalt molecules and increases free volumes in the configuration
[38] | Nano-silica | Decreases the consistency, rate of water absorption and porosity of the roller compacted concrete pavement
[39] | Nano-silica | Improves the rheological characteristics, toughness, and viscosity of bitumen
[40] | Nano-silica | Reduces the creep strain deformation and increases the dynamic shear modulus

Methodology LCA methodology was used (as standardized by the ISO in 2006) to assess the environmental impact of nano-silica-modified asphalt mixtures. There are numerous nanomaterials whose effects on asphalt binders and mixtures have previously been evaluated. However, based on the extensive literature review, the common nanomaterials which have been experimentally shown to have a greater impact on asphalt concrete performance include nanoclay and nano-silica. Consequently, nano-silica was used in this study. However, any other nanomaterial (especially nanoclay) which uses a similar production process could give similar results when used to modify asphalt binder and materials. Also, the analysis of this study focused only on the material extraction and production phases and does not include the operational or the disposal phase. The inclusion of the operational phase in the LCA analysis could change the inference about the conformity of nano-silica-modified asphalt mixtures.

The structure of the LCA study adopted includes the goal and scope definition, inventory analysis, impact assessment, and improvement assessment or interpretation stages.

Goal and System Boundaries The goal of this study is to assess the potential life-cycle environmental impacts resulting from modifying asphalt materials with nanomaterial (i.e., the environmental impacts of nano-silica-modified asphalt mixtures). Additionally, a comparison is made with the environmental impacts of unmodified asphalt mixture to provide a better understanding of the impact contribution of nanomaterials in asphalt materials to allow for informed decisions to be made. In other words, the extent to which the use of nano-silica-modified asphalt mixtures for asphalt concrete pavement is beneficial from the environmental perspective is evaluated.

Two alternative case scenarios were examined. In CASE 1A, the environmental impact of nano-modified asphalt material was assessed. The use of nanomaterial (nano-silica), asphalt materials, and the production processes of asphalt mixtures were considered. Modification of bitumen with nanomaterial is depicted in Figure 2.
In CASE 2A, the environmental impact of asphalt material production, excluding nanomaterial (conventional asphalt mixture), was assessed. The system boundaries, which define the unit processes considered in the LCA studies [4], were limited to cover the following life cycle stages in this study: (1) raw materials extraction; (2) transportation of raw materials for a unit product manufacturing; (3) modification and production of asphalt materials in the plant. Transportation of asphalt materials to the field, use, and the end-of-life were not included. The life cycle stages and key processes of nano-modified asphalt production in the plant are shown in Figure 3. The flow emissions and resource consumption (such as electricity and natural gas) for heating and the production processes were also included in the system boundaries. The results of this study will help practitioners in the asphalt concrete pavement industry to make informed decisions by considering the numerous benefits of nanomaterials (nano-silica) and the environmental impacts resulting from modifying asphalt mixtures with the nano-silica material.

Functional Unit (FU) The FU is the heart of any LCA study. The FU is a quantified performance of a product system for use as a reference unit in an LCA study [21] (referring to the Malaysian standards handbook on environmental management). A fixed value must be defined, and the output results of the environmental impacts of the impact categories depend on this selected FU. In this study, a FU of 1000 kg production of nano-silica-modified asphalt mixtures was assumed.
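Because the output results scale with the selected FU, per-kilogram inventory data must be scaled to the 1000 kg reference before characterization. The following minimal Python sketch illustrates this bookkeeping; only the 1000 kg FU and the 3% nano-silica dosage come from this study, while the function and variable names are illustrative.

# Illustrative sketch: scaling per-kg inventory data to the functional unit.
# Only the 1000 kg FU and the 3% nano-silica dosage are taken from the study;
# everything else here is a placeholder.

FU_KG = 1000.0               # functional unit: 1000 kg of asphalt mixture
NANO_SILICA_DOSAGE = 0.03    # 3% nano-silica used for binder modification

def scale_to_fu(amount_per_kg: float, fu_kg: float = FU_KG) -> float:
    """Scale a per-kg inventory amount (mass, energy, ...) to the FU."""
    return amount_per_kg * fu_kg

nano_silica_input_kg = scale_to_fu(NANO_SILICA_DOSAGE)  # 30.0 kg per FU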
Material Extraction and Production Processes The life cycle inventory stage is the stage of actual data collection and the modeling of the product system. For the data on material extraction, processing, and production, an openLCA database was used. OpenLCA is an open-source LCA tool from GreenDelta, located in Berlin, Germany. The software uses the Ecoinvent 2.2 database and other proprietary databases and produces equally good results compared to other proprietary LCA tools such as SimaPro, GaBi, etc. The software allows the user to import any external database into its platform, and it can be used to model any product. For the production process of nano-silica, silica gel and precipitated silica type, the process outlined by Ref. [42] was followed. A 1000 kg production of bitumen and aggregates was assumed. For the input amount of 1 kg nano-silica production, the data provided by Ref.
[43] were referred to. Additives are often applied in small percentages (1-10%) by weight of the asphalt binder. This study used 3% of nano-silica for asphalt binder modification. Therefore, an input amount of 30 kg nano-silica was required to modify the bitumen. Data from Refs. [13,14] were used regarding the energy consumption per kg of material required for bitumen production, aggregates, and the mixing of asphalt materials at the plant. In other studies, such as the one reported by Ref. [3], the modification of asphalt binder with additives results in an increase in fuel consumption of approximately 15%. Therefore, it was assumed that a 15% increase in energy for bitumen production was required to modify the bitumen. Hence, to account for the asphalt binder modification with nano-silica in the analysis, an additional 15% increase in energy (fuel) was added to the 0.51 MJ energy for bitumen production. Transportation of nano-silica material to the milling terminal for modification was assumed as 100 km, the total transportation for bitumen to the asphalt plant was also assumed to be 100 km, and that for aggregates was assumed as 5 km to the mixing plant. In Table 2, the material and energy requirements for the production of nanomaterial (nano-silica) and asphalt materials are summarized.

Life Cycle Impact Assessment (LCIA) The life cycle impact assessment (LCIA) stage involves analyzing the data to evaluate the contribution to each impact category. LCIA consists of characterization, normalization, evaluation, and weighting, depending on the LCIA method used. In this study, the Tool for the Reduction and Assessment of Chemical and other Environmental Impacts (TRACI) method version 2.1 (provided in openLCA) was used to calculate the impact category indicator scores. TRACI is a tool from the US Environmental Protection Agency (EPA), Durham, NC, USA. TRACI uses Equation (1) [44] to determine the impact score for each individual environmental impact category:

I_i = Σ_xm (CF_i,xm × M_xm) (1)

where I_i is the potential impact of all substances (x) for a specific category of concern (i), CF_i,xm is the characterization factor for substance (x) emitted to media (m) for each impact category (i), and M_xm is the mass of the substance emitted to media (m). OpenLCA version 1.7.4 was then used for modeling the processes in this study.

Finally, to be able to compare the relative significance of each impact category, the normalized score for each impact category was calculated. Normalization is the ratio of the impact score in each category to the estimated impacts from a reference (often called normalization factors). These factors represent the impact produced by an average person in a reference place per year. Equation (2) was used for the computation:

NS_i = EnvI_i / NF_i (2)

where NS_i is the normalized score of impact category i, EnvI_i is the environmental impact result of impact category i, and NF_i is the normalization factor of impact category i. Regarding the normalization factors, US 2008 reference data were used, i.e., the impact per person-year updated in the research by Ref. [45]. Table 3 provides the details of the normalization factors.
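To make the two computations concrete, the sketch below implements Equations (1) and (2) in Python. The emission inventory, characterization factors, and normalization factor shown are illustrative placeholders, not values taken from the TRACI 2.1 database.

# Minimal sketch of Equations (1) and (2). All numbers below are
# illustrative placeholders, not TRACI characterization factors.

def impact_score(emissions_kg, char_factors):
    # Equation (1): I_i = sum over (substance x, medium m) of CF_i,xm * M_xm
    return sum(cf * emissions_kg[key]
               for key, cf in char_factors.items()
               if key in emissions_kg)

def normalized_score(impact, normalization_factor):
    # Equation (2): NS_i = EnvI_i / NF_i
    return impact / normalization_factor

# Toy inventory for one impact category (global warming, kg CO2-Eq):
emissions = {("CO2", "air"): 7000.0, ("CH4", "air"): 10.0}   # M_xm, kg per FU
cfs_gwp = {("CO2", "air"): 1.0, ("CH4", "air"): 25.0}        # CF_i,xm, kg CO2-Eq/kg
gwp = impact_score(emissions, cfs_gwp)                        # 7250.0 kg CO2-Eq/FU
ns_gwp = normalized_score(gwp, 24000.0)                       # placeholder NF_i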
The units of four categories (ecotoxicity, carcinogenic, non-carcinogenic, and acidification) differ from the reference units; their impact results were therefore first converted to the reference units before computing the normalized scores:

Acidification potential = 1.98 × 10⁻² SO2/kg substance (the impact result was multiplied by this value).

Ecotoxicity potential for rural air = 0.064 CTU eco/kg substance (the impact result was multiplied by this value).

Human health cancer potential for rural air = 1.2 × 10⁻⁷ CTU canc/kg substance (the impact result was multiplied by this value).

Human health non-cancer potential for rural air = 3.0 × 10⁻⁸ CTU canc/kg substance (the impact result was multiplied by this value).

CASE 1A: Impact Assessment of Nano-Silica-Modified Asphalt Mixtures Analysis OpenLCA version 1.7.4 was used to model and analyze the environmental impacts of nano-modified asphalt materials, and the analysis results are shown in Table 4. An increase in the production of the raw materials and/or the FU results in an increase in fuel and energy usage and will cause a significant increase in the impact scores in each category. The environmental performance of 7.44563 × 10³ kg CO2-Eq/FU global warming of the nano-silica-modified asphalt mixture is better than the results of Butt et al., who found the modification of asphalt materials with a polymer to yield 44.9 × 10³ kg CO2-Eq per FU of a 1 km long by 3.5 m wide asphalt pavement.

CASE 2A: Impact Assessment of Unmodified (Conventional) Asphalt Mixture Analysis The analysis of unmodified (conventional) asphalt materials was needed to better understand the environmental implications of modifying conventional asphalt with nanomaterials. The results of the analysis are shown in Table 5. Any increase in the production of raw materials or a change in the FU will result in an increase in the impact scores in each category, and vice versa.

The modification of asphalt materials with nanomaterials results in an increase in environmental impacts, which is clear when comparing the results in Table 5 with those in Table 4. Across all impact categories, there is an increase in the impact scores. This is reinforced by Ref. [4], whose authors found that using Ethylene-Vinyl-Acetate (EVA) polymer as a modifier agent leads to a deterioration of the life cycle profile of the pavement compared to unmodified asphalt binder. However, the deterioration of the life cycle environmental profile with nano-modified asphalt materials is insignificant. Specifically, there was only a 0.4% increase in global warming, a 0.8% increase in respiratory effects, a 0.009% increase in ozone depletion, a 0.98% increase in eutrophication, a 1.0% increase in human health carcinogenic, a 0.7% increase in photochemical oxidation, a 0.96% increase in human health non-carcinogenic, a 0.72% increase in ecotoxicity, and a 0.85% increase in acidification. This means the modification of asphalt materials with nanomaterials (nano-silica) causes more impact in human health carcinogenic than in the other impact categories. Apart from ozone depletion, the modification of asphalt materials with nano-silica contributes the fewest impacts in global warming per 3% by weight of asphalt binder production of nano-silica.

Computation of Normalised Score Table 6 and Figure 4 show the normalized score in each impact category of nano-modified asphalt materials. According to Ref. [46], by inspection, normalized scores that are large compared to the total are classified as worse-performing impact categories, while those with small normalized scores of approximately less than 2% of the total are classified as better-performing impact categories. Table 6 shows that nano-modified asphalt materials only perform significantly better in four impact categories: photochemical oxidation (0.0217 pts/FU), ecotoxicity (0.0634 pts/FU), ozone depletion (0.2323 pts/FU), and global warming (0.3102 pts/FU).
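The relative increases quoted above follow from a simple ratio of the CASE 1A and CASE 2A impact scores. A short sketch, using the two global warming values reported in Tables 4 and 5 (the only pair quoted numerically in the text):

# Sketch of the percentage-increase computation behind the CASE 1A vs.
# CASE 2A comparison; the two global warming scores are the values
# reported in this study, and the rounding to "0.4%" follows the text.

def pct_increase(modified: float, unmodified: float) -> float:
    return 100.0 * (modified - unmodified) / unmodified

gwp_nmam = 7.44563e3   # kg CO2-Eq/FU, nano-silica-modified mixture (Table 4)
gwp_conv = 7.41900e3   # kg CO2-Eq/FU, conventional mixture (Table 5)
print(f"{pct_increase(gwp_nmam, gwp_conv):.2f}%")  # 0.36%, reported as ~0.4%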
However, to fully understand when and how an impact category is classified as either better or worse performing, a logarithmic scale criterion was used. This was especially useful in situations where there existed large variation in the normalization scores. It is argued by Ref. [47] that dimensionless data are more appropriately plotted on a logarithmic scale to clearly understand where the data points lie (better or worse trend). On a logarithmic scale, the center of gravity (where the eye is drawn) lies at the geometric mean, and the line starts at 1 and not 0. Hence, applying the logarithmic scale plot (see Figure 4), all the impact categories below the 1 pts line are referred to as ZONE 1 (better performance zone). It can thus be said that NMAM performs better in five categories: global warming (0.3102 pts/FU), ozone depletion (0.2323 pts/FU), eutrophication (0.6779 pts/FU), photochemical oxidation (0.0217 pts/FU), and ecotoxicity (0.0634 pts/FU). All the impact categories above the 1 pts line are referred to as ZONE 2 (worse performance zone). NMAM performs worse in this zone in four categories: respiratory effects (36.9556 pts/FU), human health carcinogenic (5.8258 pts/FU), human health non-carcinogenic (182 pts/FU), and acidification (41.0101 pts/FU).
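The zone assignment itself is a simple threshold rule on the normalized scores. A sketch using the values reported in Table 6:

# Sketch of the zone classification: normalized scores below 1 pts/FU fall
# in ZONE 1 (better performance), scores above in ZONE 2 (worse performance).
# The scores are the values reported in Table 6 for NMAM.

normalized_scores = {                        # pts/FU
    "global warming": 0.3102, "ozone depletion": 0.2323,
    "eutrophication": 0.6779, "photochemical oxidation": 0.0217,
    "ecotoxicity": 0.0634, "respiratory effects": 36.9556,
    "human health carcinogenic": 5.8258,
    "human health non-carcinogenic": 182.0, "acidification": 41.0101,
}

zones = {category: ("ZONE 1" if score < 1.0 else "ZONE 2")
         for category, score in normalized_scores.items()}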
The worst performance in acidification, which is the increase in hydrogen ion (H+) concentration within the environment as a result of the presence of acids, can be attributed to the sulfuric acid used in the production of nano-silica and to the sulphur dioxide and nitrogen oxides released during the transportation of the materials, including the asphalt materials. As mentioned previously, the modification of asphalt materials with nanomaterial causes only a 0.4% per unit increase in global warming. This is because carbon dioxide (the main cause of global warming) is released during the production of bitumen and aggregates, asphalt mixing, and also during transportation. In short, the fact that the modification of asphalt materials with nanomaterial causes an increase of less than or equal to 1% in impact score across all impact categories suggests that modifying asphalt materials with nanomaterials does not cause an unreasonable risk to the environment. However, the results of this study using nano-silica do not imply that all other nanomaterials will have a very low impact. The impact on the environment, and the combined impact when modified with asphalt materials, depend on the production process of the nanomaterial. Therefore, it is expected that some nanomaterials may have a more negative environmental impact.

Conclusions LCA is a tool that helps to assess the environmental impacts of materials and products so that decisions can be made not just on the benefits of using these materials but also considering their environmental contributions (especially to climate change and human health). This study assessed nano-silica-modified asphalt mixtures in terms of materials production emissions through the Life Cycle Assessment (LCA) methodology, and the results were compared to a conventional asphalt mixture to understand the impact contribution of nano-silica in asphalt mixtures. The results showed that NMAM had a global warming potential of 7.44563 × 10³ kg CO2-Eq per FU as compared to 7.41900 × 10³ kg CO2-Eq per FU for the unmodified asphalt mixture. The study also computed the normalized score for each impact category, and the results showed NMAM performs better in five categories: global warming (0.3102 pts/FU), ozone depletion (0.2323 pts/FU), eutrophication (0.6779 pts/FU), photochemical oxidation (0.0217 pts/FU), and ecotoxicity (0.0634 pts/FU). NMAM performs worse in four categories: respiratory effects (36.9556 pts/FU), human health carcinogenic (5.8258 pts/FU), human health non-carcinogenic (182 pts/FU), and acidification (41.0101 pts/FU). The modification of asphalt materials with nano-silica causes less than or equal to a 1% per unit increase in impact score across all impact categories. The application of LCA to NMAM has the potential to guide decision-makers on the selection of pavement modification additives to realize the benefits of nanomaterials in the pavement while avoiding potential environmental risks. Additionally, this study has shown that even though the modification of asphalt mixtures with nano-silica results in an increase in fuel consumption, it does not cause an unreasonable risk to the environment, nor does its application as a modifier result in significant deterioration of the life cycle environmental profile. However, future research is required considering the analysis of the whole life cycle for nano-modified asphalt materials using different nanomaterials as modifiers to confirm that nanomaterials are sustainable materials.
Figure 3. Key processes of nano-modified asphalt materials.

Table 2. Materials and energy requirements for 1 kg unit production of nano-silica and asphalt materials.

Table 3. Normalization factors for impact categories based on inventories from the US (2008) and US-Canada [45].

Table 4. LCIA results of nano-silica-modified asphalt mixtures per FU.

Table 5. LCIA results of unmodified (conventional) asphalt materials per FU.

Table 6. Normalized score per FU of the impact categories for NMAM.
2019-04-02T13:19:04.676Z
2019-03-29T00:00:00.000
{ "year": 2019, "sha1": "0c37d99ce9e0faabc789ae35b0636e0116b36041", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/9/7/1315/pdf?version=1554098796", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "0c37d99ce9e0faabc789ae35b0636e0116b36041", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Materials Science" ] }
229356760
pes2o/s2orc
v3-fos-license
Improving Accuracy of Brainstem MRI Volumetry: Effects of Age and Sex, and Normalization Strategies Background: Brainstem-mediated functions are impaired in neurodegenerative diseases and aging. Atrophy can be visualized by MRI. This study investigates extrinsic sources of brainstem volume variability, intrinsic sources of anatomical variability, and the influence of age and sex on brainstem volumes in healthy subjects. We aimed to develop efficient normalization strategies to reduce the effects of intrinsic anatomic variability on brainstem volumetry. Methods: Brainstem segmentation was performed from MPRAGE data using our deep-learning-based brainstem segmentation algorithm MD-GRU. The extrinsic variability of brainstem volume assessments across scanners and protocols was investigated in two groups comprising 11 (median age 33.3 years, 7 women) and 22 healthy subjects (median age 27.6 years, 50% women) scanned twice and compared using Dice scores. Intrinsic anatomical inter-individual variability and age and sex effects on brainstem volumes were assessed in segmentations of 110 healthy subjects (median age 30.9 years, range 18-72 years, 53.6% women) acquired on 1.5T (45%) and 3T (55%) scanners. The association between brainstem volumes and predefined anatomical covariates was studied using Pearson correlations. Anatomical variables with associations of |r| > 0.30, as well as the variables age and sex, were used to construct normalization models using backward selection. The effect of the resulting normalization models was assessed by % relative standard deviation reduction and by comparing the inter-individual variability of the normalized brainstem volumes to the non-normalized values using paired t-tests with Bonferroni correction. Results: The extrinsic variability of brainstem volumetry across different field strengths and imaging protocols was low (Dice scores > 0.94). Mean inter-individual variability/SD of total brainstem volumes was 9.8%/7.36. A normalization based on either total intracranial volume (TICV), TICV and age, or v-scale significantly reduced the inter-individual variability of total brainstem volumes compared to non-normalized volumes and similarly reduced the relative standard deviation by about 35%. Conclusion: The extrinsic variability of the novel brainstem segmentation method MD-GRU across different scanners and imaging protocols is very low. Anatomic inter-individual variability of brainstem volumes is substantial. This study presents efficient normalization models for variability reduction in brainstem volumetry in healthy subjects.
INTRODUCTION The brainstem, as the anatomical and functional link between the cerebrum, the cerebellum, and the spinal cord, is a vitally important structure, playing a key role in controlling respiratory and cardiac function, defense reflexes, and awareness. From cranial to caudal, the brainstem is divided into the three substructures mesencephalon, pons, and medulla oblongata. It carries white matter tracts to and from the cerebrum, the spinal cord, and the cerebellum, multiple cranial nerves, and reticular nuclei (Nieuwenhuys, 1985; Naidich et al., 2009). While the mesencephalon plays an important role mainly in oculomotor, optic, and acoustic function, the pons contains important white matter tracts as well as cranial nerve nuclei for facial sensory and motor functions (Basinger and Hogg, 2020). The medulla oblongata regulates respiratory function and contains important reflex centers, e.g., for coughing and swallowing (Bolser et al., 2015; Ikeda et al., 2017). Studying the brainstem is crucial for our understanding of both physiologic neurological function and neurological diseases.

Brainstem tissue loss, acquired either during aging or in neurodegenerative diseases, can be visualized and quantified by MRI in vivo, offering potential as a diagnostic, prognostic, or therapeutic marker in these diseases. A recently published deep-learning-based algorithm provided accurate, highly reproducible, and robust brainstem segmentation in healthy subjects (HS) and patients with Alzheimer's disease and multiple sclerosis (Andermatt et al., 2016, 2018; Sander et al., 2019). Despite its successful application in patients, volumetry of the brainstem and its substructures has not yet been systematically assessed in HS.

Brainstem volume assessments are subject to inter-individual variation, e.g., due to head size, head position in the scanner, sex, or body height. Volume normalization reduces the physiologic inter-individual measurement variation due to individual anatomical effects, ideally without interfering with measurements related to possible disease processes. This allows for better statistical comparison between two inhomogeneous groups, such as healthy controls and patients.
Frequently used normalization parameters for brain volumes are the FreeSurfer-derived total intracranial volume (TICV; Whitwell et al., 2001) and the SIENAX-derived volumetric scaling factor (v-scale; Fein et al., 2004). So far, normalization covariates for brainstem segmentation have not yet been investigated, and relevant normalization factors for brainstem volumetry are not known. Using a novel fully-automated deep-learning-based segmentation approach, the objectives of this study were to assess: (a) the extrinsic variability of brainstem volumes depending on different scanners, field strengths, and acquisition protocols; (b) the intrinsic anatomical variability and the influence of age and sex on the brainstem and its substructure volumes in HS; and (c) the effects of normalization models on variability in brainstem volumetry.

Brainstem Segmentation Brainstem volumes were assessed using a recently published fully-automated segmentation approach based on multi-dimensional gated recurrent units (MD-GRU). The deep-learning-based algorithm provides accurate, robust, and reproducible segmentations of the brainstem and its substructures (Andermatt et al., 2016, 2018; Sander et al., 2019). Written informed consent was obtained from all participants mentioned above. All brainstem segmentations were visually inspected for anatomic accuracy.

Statistical Analyses Statistical analyses were performed using JMP Pro 14 and SPSS 25.

Assessing the Extrinsic Variability of Brainstem Volumes Dice coefficients were each calculated comparing brainstem segmentations obtained in the same individual (a) on different scanners (1.5T vs. 3T), (b) on the same scanner but with different protocols, and (c) on different scanners and protocols.

Assessing the Intrinsic Variability of Brainstem Volumes To assess the inter-individual variability, the respective deviation from the group mean was calculated for each subject as (measured volume - mean volume)/mean volume.

Age and Sex Effects Differences in brainstem/brainstem substructure volumes between men and women were assessed using linear regression analysis with (a) age as covariate, as well as in sensitivity analyses with (b) field strength, (c) acquisition protocol, and (d) TICV as additional covariates, respectively. Correction for multiple testing (4 analyses) was performed using the Bonferroni correction, adjusting the level of significance to p < 0.05/4. The associations between age and brainstem/brainstem substructure volumes were assessed using linear regression analysis covarying for sex. Differences in brainstem volumes in younger vs. older persons (below vs. above the group mean) were assessed using linear regression analysis with (a) field strength and (b) acquisition protocol as additional covariates, respectively. The associations of these parameters with brainstem volume were first assessed using Pearson correlation coefficients. To correct for multiple tests, Bonferroni correction was performed with a correction factor of n = 12 (11 anatomical variables and age) (p < 0.05/12). Only those anatomical metrics showing a significant association with all brainstem and brainstem substructure volumes with a Pearson correlation coefficient of |r| > 0.30 (Cohen, 1988), respectively, were considered as potential normalization covariates in further analyses.
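For illustration, the two quantities defined above can be computed as in the following minimal numpy sketch (the inputs are placeholders; this is not the study's analysis code):

# Sketch of the two metrics defined above: the Dice coefficient between two
# binary segmentation masks, and each subject's relative deviation from the
# group-mean volume. Inputs are placeholders.

import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    # Dice = 2|A ∩ B| / (|A| + |B|) for binary masks
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def deviation_from_group_mean(volumes: np.ndarray) -> np.ndarray:
    # (measured volume - mean volume) / mean volume, per subject
    return (volumes - volumes.mean()) / volumes.mean()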
We then performed a backward selection procedure starting with a model with total brainstem volume as the outcome parameter, and all anatomical variables with a Pearson correlation coefficient of |r| > 0.30 in univariate analysis as well as age and sex as predictor variables. This procedure was performed (a) with TICV and (b) with v-scale separately, as these parameters are co-linear. The adjusted r² of the models resulting from the backward selection were reported, as well as of a further simplified model (considering simple application with preference for fewer and easy-to-measure covariates). The normalization of brainstem volumes and its substructure volumes was then performed using the following equation (Sanfilipo et al., 2004; Papinutto et al., 2019):

V_normalized = V_measured - a·(X - X̄) - b·(Y - Ȳ) - c·(Z - Z̄)

with a, b, c being the estimates (regression coefficients) obtained by the linear regression analysis, X, Y, Z their measured values, and X̄, Ȳ, Z̄ the corresponding group means. To assess the performance of the different normalization models, the inter-individual variability of the normalized brainstem volumes of each model was first compared to the variability of the non-normalized brainstem volumes using paired t-tests, with Bonferroni correction for multiple tests (3 models, p < 0.05/3). In a second step, we compared the performance between the normalization models by comparing the inter-individual variability of the normalized brainstem volumes with a one-way ANOVA (analysis of variance). The performance of the different normalization models was also expressed by the % relative standard deviation (%RSD) reduction of the predicted volumes relative to the %RSD of the non-normalized, measured volumes of the whole group (n = 110). The relative standard deviation (RSD) is the standard deviation divided by the mean volume.

Brainstem Segmentation The automated brainstem segmentation approach yielded anatomically accurate results in all subjects in under 200 s/scan on an NVidia GeForce GTX 1080 GPU: all obtained brainstem segmentations were considered anatomically correct in their location and borders when visually inspected; no manual correction was needed.

Assessing the Influence of Different Scanners and Protocols on Brainstem Volume Variability The results of the brainstem segmentation comparisons from different scanners and protocols are shown in Table 1. MD-GRU-derived brainstem segmentations from scans of the same individual obtained on different 3T scanners (Prisma vs. Skyra) using the same acquisition protocol showed Dice scores between 0.95 and 0.98. Similarly, Dice coefficients comparing segmentations from scans of the same individual using different imaging protocols as specified above on the same 1.5T Avanto scanner were between 0.94 and 0.97.

Age and Sex Influence Men had significantly larger unadjusted volumes of the total brainstem, mesencephalon, pons, and medulla oblongata (all p < 0.0001, respectively) compared to women, with total brainstem volumes of 28274.0/2670.1 (mean [mm³]/SD) for men vs. 24826.2/2824.1 (mean [mm³]/SD) for women. However, after adjustment for age and TICV (to account for head size differences), men showed significantly larger medulla oblongata volumes compared to women (Appendix A in Supplementary Material), with all other comparisons being insignificant after Bonferroni correction. Adjustment for field strength or protocol did not alter these observations. With adjustment for sex, there was no significant association between age and total brainstem volumes (p = 0.4131).
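A minimal sketch of this normalization and of the %RSD metric, assuming the mean-centred residual-adjustment form given above (placeholder inputs; the sample-SD convention in pct_rsd is an assumption):

# Sketch of the covariate normalization and %RSD, assuming the mean-centred
# residual form above; a, b, c are coefficients from a fitted linear
# regression, and X, Y, Z are per-subject covariate arrays.

import numpy as np

def normalize_volumes(v, X, Y, Z, a, b, c):
    # V_norm = V - a*(X - mean X) - b*(Y - mean Y) - c*(Z - mean Z)
    return v - a * (X - X.mean()) - b * (Y - Y.mean()) - c * (Z - Z.mean())

def pct_rsd(volumes: np.ndarray) -> float:
    # relative standard deviation in percent: 100 * SD / mean
    return 100.0 * volumes.std(ddof=1) / volumes.mean()

# %RSD reduction of a normalization model relative to the raw volumes:
# reduction = 100 * (pct_rsd(raw) - pct_rsd(normalized)) / pct_rsd(raw)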
In line with this observation, total brainstem volumes did not differ significantly between older subjects (aged above the group mean of 35 years; n = 44) and younger subjects (<35 years; n = 66) (p = 0.3068) with adjustment for sex. Results were comparable for mesencephalon, pons, and medulla oblongata volumes (Appendix B in Supplementary Material). This finding was independent of additional adjustment for field strength or acquisition protocol (Appendix C in Supplementary Material). Table 2 reports the strength of the correlations of total brainstem volume with each of the investigated variables. Amongst these metrics, nasion-opisthion, dens length, TICV, v-scale, WM, GM, and BV (all normally distributed) showed a significant correlation with brainstem and all substructure volumes, surviving the Bonferroni correction for multiple tests with a Pearson correlation coefficient |r| > 0.30 (Table 2 and Appendix D in Supplementary Material), and are therefore potential univariate predictors. As BV, GM, and WM volumes can be altered by neurodegenerative processes, these variables were not considered in further analyses.

Assessing Potential Normalization Models for Anatomical Variability Reduction Pearson correlation coefficients of all variables and brainstem substructure volumes are shown in Appendix D in Supplementary Material.

Comparison of Different Normalization Models The backward selection procedure resulted in two models based on TICV and age (Model 1a) and v-scale (Model 2). The model based on TICV and age was further simplified to TICV alone (Model 1b) (Table 3). Results of the linear regression analysis with brainstem substructure volumes as outcome are shown in Appendix E in Supplementary Material. The model with TICV and age consistently yielded the highest adjusted r². However, eliminating the variable age from Model 1a did not substantially reduce the variance explained. Brainstem volume normalization by TICV (Model 1b) or v-scale (Model 2) yielded comparably high r².

DISCUSSION Using a novel, accurate, fully automated, and rapid brainstem segmentation method (Sander et al., 2019), we explored sources of extrinsic (field strength, protocol) as well as intrinsic anatomical variability, investigated age and sex influences on brainstem volumes on high-resolution MPRAGE images in HS, and developed potential normalization strategies for variability reduction in brainstem volumetry.

The extrinsic variability of our brainstem volumetry assessment method with respect to different acquisition protocols, hardware, and magnetic field strengths was low; the comparisons of brainstem segmentations obtained in the same individuals assessed on different scanners, with different protocols, and with both different scanners and protocols yielded very high Dice scores (≥0.94). These results confirm the robustness of the applied brainstem segmentation algorithm with respect to different image acquisition settings, i.e., different scanners with 1.5T and 3T field strengths and different acquisition protocols.

Consistent with previous studies, our results showed no relevant age-dependent volume reduction of the brainstem and its substructures in this cohort aged between 18 and 72 years. With a mean age of 34.9 years and a median age of 30.9 years, this cohort might, however, be more representative of middle-aged and younger adults. Based on these results, we cannot fully exclude a decline in brainstem volume in healthy persons of advanced age.
The lack of an age-dependent volume reduction observed in this cohort is consistent with previous studies: several cross-sectional studies based on manual brainstem segmentation reported no association of ventral pons volumes with age (Raz et al., 2001; Sullivan et al., 2004). Likewise, no age effects were found in total brainstem and medulla oblongata volumes (Luft et al., 1999; Lee et al., 2009). Lambert et al. (2013) found isolated midbrain atrophy in HS older than 60 years, predominantly due to a volume loss of the superior cerebellar fiber bundles, which are not taken into account in our mesencephalon volumetry definition.

In our study, men showed significantly larger unadjusted volumes of the brainstem and its substructures compared to women, which is in line with findings by Raz et al. (2001) and Sullivan et al. (2004). Lee et al. (2009) also reported larger medulla oblongata volumes in men. However, after adjustment for TICV (to account for head size differences) and age, the differences observed between men and women remained significant only for medulla oblongata volumes.

Anatomical variations between HS are an important source of brainstem volume variability, with this cohort showing an inter-individual variability of about 10% for brainstem volumes. Therefore, normalization of brainstem volumes is crucial to reduce measurement variation and to facilitate the applicability of brainstem volumetrics as a surrogate marker for prognosis, disease course monitoring, and therapeutic monitoring in neurodegenerative diseases such as amyotrophic lateral sclerosis, Alzheimer's disease, and Parkinson's disease. Intracerebral metrics like GM, WM, and BV are expected to be altered by neurodegenerative pathologies, and their potential use as covariates of brainstem volumes might therefore only be adequate in studies involving HS. Hence these parameters were not considered as adequate brainstem normalization parameters. Models based on FreeSurfer-derived TICV and SIENAX-derived v-scale, two commonly used normalization parameters, as well as TICV and age, yielded the highest adjusted r² in linear regression analyses with brainstem and brainstem substructure volumes as outcomes and were therefore further tested as normalization variables. Normalization for anatomic variation of head size by TICV and age reduced the %RSD of total brainstem volumes by 36% and of mesencephalon volumes by up to 46%. Normalization with TICV or v-scale alone showed comparable results. Brainstem volume normalization based on each of the three normalization models significantly reduced the inter-individual variability compared to the non-normalized volumes. Comparison between the three normalization models showed no significant differences in the inter-individual variability of brainstem and brainstem substructure volumes, indicating an equal efficiency of normalization by these models. TICV and v-scale are frequently applied normalization parameters for brain volumes because they are in general not affected by neurological/neurodegenerative diseases. By normalization with TICV, inter-individual variation of brain volumes was previously reduced by about 4% (Whitwell et al., 2001). Using a similar methodological approach, normalization with v-scale reduced variation in spinal cord volumetry by up to 10.24% (Papinutto et al., 2019).
By reducing measurement variability, we expect the proposed normalization methods to improve the sensitivity in detecting subtle brainstem volume differences between patients with diseases affecting the brainstem and/or its substructures and healthy controls, or between patient subgroups. Previous studies similarly showed improved detection of spinal cord volume differences between multiple sclerosis patients and controls after cervical volume normalization (Oh et al., 2014). Brainstem volume normalization, by reducing anatomical variability, might help to reveal and strengthen clinical-radiological correlations in neurodegenerative diseases such as multiple sclerosis or Alzheimer's disease (Zhou et al., 2014).

The absence of brainstem volume reductions with increasing age observed in our cross-sectional study is in line with findings in other cross-sectional studies by Raz et al. (2001), Sullivan et al. (2004), and Walhovd et al. (2011). Walhovd et al. reported age-related volume differences in all examined brain structures except the brainstem, based on FreeSurfer assessments in a large cohort of HS. Disentangling the exact mechanisms underlying the relative volume preservation of the brainstem with increasing age is beyond the scope of this descriptive study. As a phylogenetically relatively old structure, the brainstem is crucial for survival. The reasons for its relative resilience to atrophy compared to other phylogenetically old structures like the hippocampus (Jack et al., 1998; Schröder and Pantel, 2016), amygdala (Kurth et al., 2019), and entorhinal cortex (Hasan et al., 2016) remain unknown.

Potential limitations of this study include the underrepresentation of very advanced age and the cross-sectional design, which does not allow intra-individual comparisons. Longitudinal studies covering a sufficiently long time-span are difficult to perform but are certainly necessary to confirm our cross-sectional results in this regard.

The vital function of the brainstem, its clinical involvement in neurodegenerative and neuroinflammatory diseases, and the absence of volume reductions observed in HS aged from 18 to 72 years in this study render atrophy assessments of the brainstem and its substructures an interesting imaging surrogate candidate for the study of neurodegeneration, e.g., in progressive multiple sclerosis. This study analyzed different sources of both extrinsic and intrinsic variability of brainstem volumetry assessments and evaluated normalization models for variability reduction in healthy controls. The inter-individual anatomical variability of total brainstem volumes is relatively high but can be efficiently reduced by 36% using a normalization based on both TICV and age, and by about 34% based on TICV or v-scale alone. This study's automated segmentation approach proved to be robust across different scanners, field strengths, and imaging protocols and allows very fast, efficient, anatomically accurate, and reliable automated brainstem segmentation.

DATA AVAILABILITY STATEMENT The data analyzed in this study is subject to the following licenses/restrictions: upon reasonable request, we will render the detailed results derived from the reported analyses available. Requests to access these datasets should be directed to regina.schlaeger@usb.ch.

ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Ethikkommission Nordwest- und Zentralschweiz. The participants provided their written informed consent to participate in these studies.
AUTHOR CONTRIBUTIONS

LS: conceptualization, methodology, analysis, and writing. AH, SP, and SA: methodology and analysis. MA: data collection. TS: analysis. MW and EK: proof reading and analysis. ÖY: data collection and proof reading. LK and CG: proof reading and methodology. JW and PC: supervision, proof reading, and acquiring funding. RS: conceptualization, analysis, writing, supervision, and acquiring funding. All authors contributed to the article and approved the submitted version.

FUNDING

This work was supported by the Swiss National Science Foundation (MHV program, PMPDP3 171391) and the Swiss MS Society.
Sustainability in Ecovillages – A Reconceptualization

This paper identifies and explores factors affecting sustainability and their interrelationships within the context of ecovillages. Through critical analysis of the theoretical concepts of sustainability and ecovillages as intentional communities, as well as their practical exploration through a multiple case study, a contextualised reconceptualization of sustainability is developed. The conceptual framework proposed in this paper depicts sustainability as a dynamic, context-dependent concept consisting of a variety of interdependent factors. The ecovillages looked at are held together by shared principles, which act as unifying themes. They translate the community characteristics (lifestyle, commitment, understanding) into community activities. These activities can be organised into different dimensions (environmental friendliness, economic alternatives, social network, organisation), all including the element of self-sufficiency to a greater or lesser extent. Sustainability has different levels (personal, community, global); ecovillages are connected to society and network with other communities, creating a link between the internal and external factors. Regular review processes address the dynamics of the factors; this self-reflexivity helps to keep the communities dynamic, and through the interrelationships of the factors can lead to enhanced sustainability. With a holistic approach to a reconceptualisation in a practical context, the conceptual model proposed in this paper facilitates a deeper understanding of the concept. Drawing on findings from a multiple case study conducted in Scotland and Germany, through in-depth interviews with inhabitants of ecovillages involved in the organisation of their communities, it offers a guideline for understanding sustainability in ecovillages and can serve as inspiration to rethink conceptualisations of sustainability to date.

Paper type: Research paper

Introduction

The contradiction of the claimed necessity for continuous growth in a world with finite resources, the irreversible impacts on the ecosystem, as well as continuous change in society have led to an increased awareness of the consequences of our actions (Yanarella and Levine 2011; Bates 2005; Taylor 2003). The aggravation of these changes through the globalization of society and economy (Levitt 1983) - resulting in increased instability of the political, economic, social and ecological environment - has provoked and reinforced the sustainability debate (Yanarella and Levine 2011). This debate has gained increasing attention ever since the World Commission on Environment and Development coined a definition and programme for 'sustainable development', seeking to reconcile the trends of economic growth and declining resources (WCED 1987; Bell and Morse 1999). The sustainability debate is increasingly relevant, as the pace of change continues to step up. However, sustainability remains a vague and contested concept without any universally accepted definition or meaning (Connolly, cited by Yanarella and Levine 2011).
Despite the acknowledgement of the challenge of sustainability at the higher political level, the focus tends to be on resolving issues with a short-term focus, which often restrains sustainability, leading to a 'sustainability schizophrenia'. The task of facilitating sustainability has been delegated to the local level, where it is mainly local grass root organizations that forward it (Yanarella and Levine 2011). Ecovillages can be seen as such organizations: they address the challenges modern society faces using alternative approaches that tend to be more sustainable (Dobson 2007). An insight into what keeps these communities together - or what factors affect their sustainability - and how these factors are interconnected can serve as inspiration and role model for different contexts, and change the way sustainability is looked at from an intercontextual point of view.

With a top-down approach to defining and implementing, and bottom-up forwarding of possible solutions, the sustainability debate shows the incoherence of the concept (Bell and Morse 1999; Yanarella and Levine 2011). This is reflected by research to date, which has been too narrow (Dobson 2007), mainly focusing on defining without context, or on implementing predetermined concepts, aiming at measuring sustainability rather than understanding it. Increased exposure to environmental problems did not necessarily lead to a better understanding of their causes, consequences or solutions (Dobson 2007). What is needed is a holistic reconceptualization of sustainability (Imran et al., 2014) that reflects the bottom-up approach by incorporating the practical dimension, without losing sight of the theoretical knowledge that has been created so far, avoiding a resource-intensive 'reinvention of the wheel' (Mather 2014). It is essential to be aware of realistic and available routes to sustainability, and of ideas about a sustainable society that help to bring these about (Dobson 2007). This paper addresses the need for such a holistic view by examining the concept of sustainability in the context of ecovillages, identifying and exploring the key factors that affect sustainability in ecovillages. Presenting the outcomes of a multiple case study conducted among ecovillages in Scotland and Germany, the paper proposes a conceptual model with a bottom-up approach that facilitates a deeper understanding of the concept, as well as indicating key factors to be addressed for enhancing sustainability.

The paper is organized in six sections. After the introduction, a critical analysis of the concepts - sustainability and ecovillages - looking at key aspects and approaches, points out their vagueness and dynamics. Both are perceived in different ways and lack a final definition, which indicates that they are contested. The third section briefly outlines the methodology of the study that was conducted prior to the writing of this paper. The paper then presents the outcomes of this study, proposing a conceptual framework that visualizes the key factors affecting sustainability in ecovillages, indicating that shared principles and commitment are the main unifying elements. It then continues to explain how sustainability in ecovillages can be enhanced using this framework, emphasising the importance of self-reflexivity in the communities. The sixth and last section concludes with the implications of the study, pointing out the importance of the contextualization of the concept of sustainability and the need to address its dynamics.
Both Scotland and Germany play a decisive role in the ecovillage movement, which was initiated in Scotland, with German communities being highly involved in the founding and developing process (Jackson 2004; GEN 2013). Both countries feature ecovillages that serve as an inspiration for the founding of new intentional communities, which grow to be more experienced and stabilised over the years (Stengel 2005; Bang 2005; GEN 2013). The number of ecovillages continues to increase, especially in Germany (Stengel 2005), and these two countries are therefore considered a suitable setting for the case study.

The terms sustainability and sustainable development are used interchangeably, as has been done in previous literature (Bell and Morse 1999). An ecovillage in this paper is an intentional community that attempts to continuously improve its approach to support healthy human development and decrease its impact on the environment by considering the sustainability of its actions.

The concept of sustainability - relevant but contested

Ever since the World Commission on Environment and Development coined the term 'sustainable development' to capture the concept of continuous growth in a world with finite resources, the concept of sustainability has attracted worldwide attention (Yanarella and Levine 2011; Imran et al., 2014). Since then the sustainability debate has not ceased to be of major concern to research, economy and politics. The general growth-resource paradox had been discovered long before the Earth Summit in Rio by researchers such as Malthus (1798). The Limits to Growth report conducted for the Club of Rome in the 1970s expanded on this concept and pointed out the consequences of unchecked growth, stating that besides the tangible physical needs, there are social needs that have to be addressed to sustain world economic and population growth (Meadows et al., 1972). Although this report encouraged a political debate and movement that addressed the outlined environmental concerns, it is only with the apparent irreversible impacts of an increasingly industrialised and globalised world that sustainability has become of central concern (Dobson 2007).

There have been various attempts at defining and measuring sustainability - with the triple bottom line dividing sustainability into social, economic and ecological dimensions (Yanarella and Levine 2011) and sustainability indicators such as the ecological footprint (Bell and Morse 1999). The vagueness of these approaches and the lack of a final universal definition confirm the contested nature of the concept (Dobson 2007; Connolly, cited by Yanarella and Levine 2011). The definition of 'sustainable development' given by the WCED as "[...] development that meets the needs of the present without compromising the ability of future generations to meet their own needs"
(WCED 1987, p. 43) acknowledges the concepts of needs and limitations and their implications for economic and social development (WCED 1987; Meadows et al., 1972). It has been criticised for being too anthropocentric (Imran et al., 2014), and it has been questioned whether sustainable development can lead to sustainability (Yanarella and Levine 2011), but as it is the most agreed-upon definition in politics and research (Imran et al., 2014; MacGillivray 1998) as well as community networks (GEN 2013), it can serve as an approximation to provide an initial understanding. It led to the development of Agenda 21, a framework for sustainability that can be divided into social, economic, ecological and institutional factors (United Nations 1993; Bell and Morse 1999). Many of its parameters, elements and factors are kept vague (Webster 1998; Bell and Morse 1999), but when comparing research to date, there is a notable compliance with this approach.

The sustainability schizophrenia - acknowledgement of the relevance of sustainability at the higher political levels, yet a short-term focus on resolving issues that often hampers sustainability efforts - leaves the task of forwarding sustainability with individuals and local, often so-called grass root organizations (Yanarella and Levine 2011) that can address the issues more effectively (Brenton 1994). As the nature of the consequences of growth is transnational, international agreements and incentives are never obsolete. There is a need to work with local incentives rather than across them (Brenton 1994); therefore the approach to the concept of sustainability has to be a practical, bottom-up approach, meeting the top-down approach outlined in the Rio Declaration.

More recent events like the Rio+20 conference and the UN decade for sustainable development confirm the relevance of the concept and the ongoing debate about it (United Nations 2012; Yanarella and Levine 2011). As the sustainability schizophrenia leaves the task of progressing sustainability with local grass root organizations (Yanarella and Levine 2011), it is important to understand these organizations.

Ecovillages - new approaches to old social structures and a potential alternative

Ecovillages are predominantly intentional communities, sharing time, visions and values on a voluntary basis in an informal cooperation of everyday life (Stengel 2005). As such they are organizations that - consisting of individuals - are set within society, and face the same challenges and trends. Ecovillages themselves are difficult to define, as they are communities that have "different meanings in different contexts and to different people" (Warburton 1998, p. 17). The definitions vary, with some including high ecological standards (Jackson 2004), while others include a particular scale (GEN 2013). For the purpose of this paper, ecovillages are defined as intentional communities that attempt to continuously improve their approach to support healthy human development and decrease their impact on the environment by considering the sustainability of their actions. This reflects the view of ecovillages as learning and developing communities rather than fixed institutions, as well as the importance of sustainability as an element of ecovillages (GEN 2013; Bang 2005).
The main focus in ecovillages is on social aspects, as they try to combine individual and community life (Stengel 2005). As the term indicates, they have a strong ecological focus as well, trying, for instance, to grow their own organic food in the community bio-region, and supporting localised systems of economic and ecological management (GEN 2013). With their willingness to experiment, ecovillages promote an alternative lifestyle that tends to be more ecologically and economically sustainable and is often self-sufficient to a great extent (Bang 2005; GEN 2013). To better understand ecovillages, it is important to grasp the concept of intentional communities.

The continuous change of modern society - accelerated through globalization - has led to increasing segregation, separation, exclusion (Bauman 1998) and the breaking up of old structures, resulting in an increasing need for belonging and security (Bauman 2001; Blackshaw 2012). Together with the loss of the 'traditional communities' that used to meet these needs without a sufficient substitute, this led to the creation of new kinds of communities - intentional communities (Bauman 2001; Delanty 2010). 'Community' itself has become an increasingly vague and romanticised concept (Blackshaw 2012); a community can broadly be defined as a group that shares something or whose members agree on certain characteristics, behaviour or interests (Stengel 2005).

Communities create security and a sense of belonging; they provide a form of support and stability through networks, reciprocity and trust (Bauman 2001; Taylor 2003). Intentional communities are often mutual, sharing trust and knowledge through a shared purpose, pooling economic risk and offering opportunities for social and political interaction (Leadbeater and Christie, cited by Taylor 2003). However, the concept of community has been subject to criticism.

Where 'old communities' had a natural, shared understanding, intentional communities have to continuously agree on shared, unifying elements in a 'rolling contract' (Bauman 2001, p. 12) that reflects the differing interests within a community (Taylor 2003). This indicates the frailty of these consciously formed communities, as they face the danger of dissolving as soon as a rolling contract can no longer be re-established (Bauman 2001). Intentional communities are often perceived as exclusive groups, distinguishing between 'us' and 'them' (Bauman 2001). They can become fixed, hierarchical and oppressive (Taylor 2003).

As part of the same environment, ecovillages can be seen as alternatives to the 'standard lifestyle' of modern society, with environmental and social concerns as central elements (Bang 2005). The concept of ecovillages is developing, and it has drawbacks that need to be addressed. However, ecovillages as alternatives to existing norms and practices show the possibility of a different and more sustainable life (Dobson 2007). They can be seen as model projects for sustainable development (GEN 2013), addressing the consequences of unchecked industrial development and the challenges faced by modern society in an alternative way (Bates 2003; Dobson 2007).
Exploring sustainability in ecovillages - two complex, connected concepts

With increasing political and economic instability, social isolation and the danger of resource scarcity (where not already present), there is a need for alternatives that are more sustainable (Yanarella and Levine 2011; Bell and Morse 1999). Intentional communities, and ecovillages in particular, can be seen as such alternatives: model settlements that provide inspiration on how to approach the challenges faced by modern society in a different way. As such they offer a suitable context for a more practically oriented, bottom-up reconceptualization of sustainability. Besides the rationalization of sustainability in ecovillages, the question of how it can be secured and enhanced is addressed in this paper.

It has been argued that to promote change in communities - and an enhancement of sustainability arguably involves change - the people whose actions have an impact on the community have to relate to the things that are going to change (Lawrence 1998; Bell and Morse 1999). Especially in communities it is important to have a clear idea of how to operate in a sustainable way (Dobson 2007). This can be achieved by involvement and engagement of the individual members of the community, their ownership of the processes and personal commitment (Webster 1998). A deeper understanding of the concept of sustainability can serve as a foundation for common action, and as such promotes change (Lawrence 1998).

Methodology of the study

The study that serves as the basis for this paper was set up as a multiple case study, including data from six ecovillages that were founded between 15 and 50 years ago and are equally distributed between Scotland and Germany. Five in-depth interviews were conducted with five voluntary respondents who live and work in ecovillages and are involved in the organisation of their community. Drawing on 'first-hand knowledge', this approach allowed for a practical, bottom-up understanding of the concept. Through the multiple case study design a conceptual model could be developed that integrates both practical and theoretical knowledge with an interpretive research philosophy.

The exploratory study followed a mainly critical hermeneutic approach. It analyses the research phenomena acknowledging the subjective values and presuppositions of both researcher and respondents, and avoids bias of opinion through the multiple case study research design (Schwandt 1997). It is sceptical of claims to finite truth; interpreted meaning is seen as a provisional assumption continuously made and revised as a result of complex situations (Rickman, cited by Burrell and Morgan 1979). Through aiming at emancipation from taken-for-granted circumstances, this approach bears the potential for social change (Schwandt 1997).

Data were analysed with thematic analysis, using open as well as axial coding, which allowed the development of patterns both emerging from the data and predetermined from theory (Schwandt 1997). To explore the concept of sustainability in a suitable practical context, the selected cases are ecovillages according to the definition given in the first section of the article. The selected cases should not be seen as a representative group for ecovillages, as their geographical scope is limited to Germany and Scotland. With five research participants who act as representatives for their communities, the case study qualitatively explores the research phenomena.
Reconceptualising sustainability in ecovillages

The findings of the study indicate that the sustainability of ecovillages is dependent on various interconnected factors. These have been summarised in the conceptual model in figure 1. The key factor that keeps a community together is shared principles. As the basis for community life, these are the unifying themes the community members commit to, despite diversity in individual values and priorities. These shared principles often reflect the experimental character of the ecovillages:

"[...] our aim is to change something within the system in which we live at the moment, to show new ways. And at the same time to build an alternative that can support if the system [...] breaks down."

"But even though the opinions might differ [...] we're all pulling together and we are living together [...] as sustainable as I can imagine it to be."

The shared principles form the basis for community life; as the agreed-upon elements members commit to, they are an anchoring point for communication and understanding, and have implications for the lifestyle. They are formed and influenced by the understanding, commitment and lifestyle of the members, and can be seen as the 'common interest' that keeps the community together (Bauman 2001; Taylor 2003). In the words of a respondent:

"[...] we have a common ground we live by, [...] agreements we all agree to for long. [...] it's part of our life basically."

This is especially the case in the 'pioneering phase' of the community, when the commitment to engage in an alternative lifestyle is high, with a shared understanding of what the ecovillage should be (and therefore, what it should not be). This phase requires a considerable amount of energy and willingness of all members, often referred to as 'pioneering spirit'. The decision to become a member is taken voluntarily and consciously, with the membership being something desirable (Taylor 2003; Bauman 2001):

"[...] there is a pioneering phase, where initially there is a lot of deprivation involved, to build something like this. It simply takes a huge amount of energy on different levels to build something like this. And then there's a phase where this has some kind of normality, where it is a little bit like a made nest, where people come to and do not have a pioneering feeling which is that strong."
The loss of pioneering spirit is a critical development encountered in communities, often together with a process of becoming 'absorbed in society' (Pepper, cited by Dobson 2007). This highlights the importance of keeping the momentum of the pioneering phase, which is achieved through reaffirming the shared principles with a consensus-based 'rolling contract' (Bauman 2001). Commitment, understanding and lifestyle - in their reciprocal relationship with the shared principles - can be summarized as community characteristics, constituting the character of the ecovillage. The community characteristics are discernible but invisible, concentrated in the shared principles that transfer the factors from the individual to the community level.

The shared principles also provide a foundation for and determine the approach taken to the community activities. These activities reflect community life in its observable and measurable form. As the term suggests, ecovillages have a focus on the ecological aspects, aiming to be environmentally friendly (Bang 2005; Bates 2003). This includes, for instance, reduction of the ecological footprint and ensuring local and seasonal supply. It is often the desire of the members for their ecovillage to be

"[a] community of people that live together and try to live as ecologically as possible [...]"

As communities, the social aspects are equally if not more important - there is a notable shift of focus towards the social aspect - with mutual support and communication as two key elements building a strong social network within the community.

"[...] there is more and more isolation [...]; community life is one possibility to counteract that. And in that line of thought leads to sustainability on the social level [...]"

The economic aspects are important by necessity, as the ecovillages are still 'part of the system' and 'have to pay the bills':

"[...] the individuals and the community need income to survive in this world. Even though the needs are not as high as if you live on your own somewhere in the city. Still, the economic pressures are there."

The communities react to this requirement in a way that is coherent with their shared principles. They use alternative approaches, such as community-supported agriculture or a shared economy. Although some communities aim at reducing hierarchy, all ecovillages feature structured decision-making processes, as well as often complex organizational structures, with councils and clear responsibilities. This confirms the view that the contested nature of a concept - in this case community - becomes evident in politics (Connolly, cited by Yanarella and Levine 2011). Nevertheless the role of the individual in the community - regarding all aspects, but in its consequences particularly in decision-making processes - is emphasized.

"The one thing is how our community institution decides, like as association or collective, and the other thing is the private level."

Environmental friendliness, social network, economic alternatives and structured organization - summarised as community activities - confirm the framework of sustainable development derived from the Earth Summit in Rio 1992, which suggests that sustainability can be divided mainly into social, economic and ecological, but also institutional factors (United Nations 1993; Bell and Morse 1999). These different aspects are interdependent, which leads to inevitable trade-offs, but also positively reinforcing effects:
"[...] there's a strong relation, in short: If I am well off on the social level, then I don't need that many products or means as compensation, so to speak."

"[...] it's all about how can we sustain ourselves as well. As way to care for the environment and put lots of work in, but if we are suffering and attacked because we don't get what we need, then it's also not sustainable [...] on a personal level, and then on a community level, and then also on a global level [...]"

This indicates not only that the different dimensions of community life influence each other, but also that there are different levels of sustainability that have to be balanced as well.

The degree of self-sufficiency in an ecovillage determines how independent the community is of the external environment. It is an aspect of every community activity; however, it tends to be more prominent in the economic and partly the ecological dimension - with increased independence through community-owned resources and systems, as well as the application of the 'cradle to cradle' idea when it comes to, for instance, waste management. This increases the awareness of the needs and limitations that are essential concepts of sustainability (WCED 1987):

"[...] our perception of self-sufficiency [...] has an aspect of sustainability [...] to have an awareness of how much [...] is available and consumed."

Self-sufficiency has to be balanced in order to contribute in a positive way to sustainability (Paech 2010). An ecovillage has to acknowledge the interdependences that exist in a community consisting of individuals, situated within the environment it depends upon for providing resources (Hatch 1997). As has been said before, the ecovillages are still part of the system, therefore they inevitably face these interdependences:

"[...] we don't see ourselves as exit from society, we are connected with it in various ways."

"Especially in the summertime we have guests from all over the world joining our community and joining our lifestyle, so there's a close link going out to society again."

Ecovillages are in a reciprocal relationship with their external environment. This relationship has both positive and negative effects. The communities are examples of alternative approaches to issues society faces today; they inspire people to change their lifestyles to become, for instance, more environmentally friendly. The idea is

"[...] to pass on ideas for people who are interested. About how can I lead a more ecological life [...] people come here and are interested, and take things home with them, and say: 'okay, I want to change this and that'."

Some ecovillages provide educational and retreat functions to guests and visitors, engage in a dialogue with institutions and in educational activities, and are even acknowledged at the higher political level as examples for sustainable development. In turn, guests and visitors provide the ecovillages with an income. Through this (regulated) exposure, the communities are frequently questioned in their approaches and have to continuously define their characteristics and activities.

The ecovillages in the study tend to operate inside the prevailing cultures rather than outside them, thus facing the threat of becoming absorbed into conventional society, which most of them oppose in one way or the other (Pepper, cited by Dobson 2007). Despite this openness to society, there is often little understanding on a local political level, and in the direct neighbourhood.
"So we encounter difficulties on a direct local level, that some things are not shared of how we live here." Another external connection is the networking with other communities, especially ecovillages.Although not always a priority, networking is seen as useful tool for sharing knowledge, and to engage in shared activities and further initiatives: "[...] we [can] ask someone else to help us, to facilitate the process. [...] we can reach out for help [...]" Community networks form around unifying themes, from geographical proximity to common interests, like the ecovillage movement.They require resources, and diverging views of diverse communities often complicate the efforts taken -the networks face similar problems with these unifying themes -or shared principles -as the communities do on an internal level. To ensure the continued consent within the communities, the ecovillages feature review processes that help to reaffirm their shared principles or unifying themes, linking back to the rolling contract (Bauman 2001).These review processes range from informal community meetings, where arising issues are addressed at once, to formal convocations dedicated to reflection.A respondent explains how the process looks like in his community: "[…] we have a rotational intensive time […] to see, […] what can we change and what are possibilities for future change." In most cases the topics addressed in the reviews mainly refer to the observable community activities.Where community characteristics are reviewed, members reflect on their understanding of and commitment to the shared principles. Respondents emphasized the importance of the differences between the individual members, particularly in their understanding.The study shows that there is no final definition of sustainability in ecovillages, even though the members of ecovillages agree on shared principles, and tend to be more aware of the concept in general. "Indeed, the term 'sustainability' is not really used in everyday life.[ ...] it appears only rarely, because it is more... integrated implicitly." This confirms the contest of a concept that is shared widely but imperfectly (Yanarella and Levine 2011).Sustainability tends to be understood individually and implicitly.Each ecovillage has an individual approach to the different factors, showing the importance of contextualization of the sustainability concept. Enhancing sustainability -key factors to be addressed The findings suggest that it is more important to raise general awareness about sustainability in order to have a common ground for community action rather than having fixed guidelines.What is required is a much broader, holistic conceptualisation of sustainability (Imran et al., 2014).The reconceptualization of sustainability proposed in this paper facilitates change towards enhanced sustainability through deepening of the understanding.This enables the community members to effectively address problems by increased action taking (Yanarella and Levine 2011; Lawrence 1998). 
The loss of pioneering spirit reflects a decrease of organizational commitment over time, and potentially results in negative attitudes toward the ecovillage, also known as organisational cynicism (Dean et al., 1998). The key to enhanced sustainability is dynamic shared principles. Referred to as common interest, they work as 'community glue', keeping the community together despite differing individual interests (Taylor 2003). Together with individual commitment they are a condition for sustainability-related action (Yanarella and Levine 2011). It is therefore crucial for the sustainability of ecovillages to keep the commitment dynamic through self-reflexivity and at the same time to ensure the stability of the community by reinforcing shared principles. This is a way of assessing the commitment, monitoring the changes that inevitably affect the community, and reacting to them. This self-reflexivity has to be participative, integrating the community members (Lawrence 1998; Bell and Morse 1999), so the shared principles can be adapted whenever this is needed. This means that they reflect the community characteristics not only in the pioneering phase, but throughout the whole existence of the community.

Besides increasing adaptability and flexibility, a continued reaffirmation of unifying themes in the form of a rolling contract increases commitment and understanding among the community members (Bauman 2001). Through practical implementation of the shared principles, the effect on commitment and understanding is increased (Lawrence 1998), as they not only represent, affect and partly determine the characteristics of the community, but as a basis for shared action also determine the approach to community activities. This in turn enhances the willingness to act, and the potential for positive change, which can result in increased sustainability through alignment of the internal and external factors.

Networking with other communities can help to reinforce internal efforts, as it facilitates sustainability through the provision of support without creating unnecessary dependences. Being part of a community network can enhance learning capabilities through the exchange of knowledge, and therefore avoid repeated mistakes and duplicated research efforts (Mather 2014; Bang 2005). An awareness of the relation to society - and a conscious decision regarding the positioning inside or outside the prevailing culture - helps to clarify and support the intent of the ecovillage as being an inspiring alternative or a means for radical change (Dobson 2007).

Figure 1. The proposed conceptual model for sustainability in ecovillages.
Lewis antigen-negative pancreatic cancer: An aggressive subgroup

Carbohydrate antigen 19-9 (CA19-9) is the most important biomarker for pancreatic cancer. Approximately 5-10% of individuals are Lewis antigen negative, with scarce secretion of CA19-9 and fucosylation deficiency. However, the characteristics of Lewis-negative pancreatic cancer are unidentified. Clinicopathological characteristics of 853 patients with pancreatic cancer were examined. Pancreatic cancer cell lines were sequenced for Lewis status. Morphological and molecular features of pancreatic cancer cells were compared. Orthotopic animal models were established. Lewis-negative patients had poorer outcome (P<0.001), higher metastatic rate (P=0.004), lower CA19-9 expression (P<0.001) and higher MUC16 expression (P<0.001) than Lewis-positive patients. Lewis-negative cells (CaPan-1, MiaPaCa-2 and Panc-1) showed a shuttle shape with scarce pseudopods. Overall, Lewis-negative cells had a higher proliferation rate, higher migration ability, lower fucosylation, lower CA19-9 expression and higher MUC16 expression than Lewis-positive cells (BxPC-3, SU8686, SW1990). The Lewis-negative cell line MiaPaCa-2 corresponded to larger orthotopic tumors than the Lewis-positive cell line SU8686. Potential proteoglycans were identified in Lewis-positive cancer, including EGFR, HSPG2, ADAM17, GPC1, ITGA2, CD40, IL6ST and GGT1. Therefore, Lewis-negative pancreatic cancer is an aggressive subgroup with special clinical and molecular features.

Introduction

Pancreatic cancer is one of the most lethal malignancies in the world, with its mortality close to its incidence (1,2). In recent years, the incidence of pancreatic cancer has kept rising due to the popularization of the westernized lifestyle (3). Approximately 80% of patients with pancreatic cancer are diagnosed at an advanced stage and miss the chance for curative resection (2). Pancreatic cancer, as a highly heterogeneous tumor, is a major clinical challenge (2,4,5). Therefore, identifying subgroups with special biology is urgently needed for the management of pancreatic cancer.

Carbohydrate antigen 19-9 (CA19-9), also called sialyl Lewis antigen A, is the most important biomarker for pancreatic cancer (6-8). The sensitivity of CA19-9 in detecting pancreatic cancer is ~80% (9). In the population, ~5-10% of individuals are Lewis antigen negative, with no or low secretion of CA19-9 (10). In a previous study, we showed that Lewis-negative patients had poorer outcome than Lewis-positive patients (11). Fucosyltransferase 3 (also called the Lewis gene), an α1,3/4-fucosyltransferase, is the key enzyme of CA19-9 biosynthesis and plays a critical role in protein fucosylation (12). Protein fucosylation undoubtedly has an important effect on the function of proteins, and affects cancer development (12). Therefore, Lewis-negative pancreatic cancer, which is deficient in fucosylation, may have a special biology, different from Lewis-positive cancer (11). However, the characteristics of Lewis-negative pancreatic cancer are largely unidentified.

In the present study, the characteristics of Lewis-negative pancreatic cancer were investigated in both clinical findings and basic research. The clinicopathological characteristics of 853 patients with pancreatic cancer classified by Lewis status were examined. Six pancreatic cancer cell lines were sequenced to determine their Lewis status. Morphological and molecular features of pancreatic cancer cells classified by Lewis status were compared. An orthotopic tumor model was constructed.
Materials and methods

Patients and data collection. Medical data were retrieved from a prospectively maintained database of the Fudan University Shanghai Cancer Center (Shanghai, China) covering September 2004 to November 2011. Data including age, sex, tumor location, metastasis, grade, CA19-9, carbohydrate antigen 125 (CA125, also called MUC16), nerve invasion, lymphovascular invasion, and lymphatic metastasis were retrieved. The primary endpoint was overall survival, and follow-up data were updated until October 2019. The study protocol was authorized by the Ethics Committee of the Fudan University Shanghai Cancer Center. Written informed consent was acquired from all of the patients enrolled in the study.

Immunohistochemistry. Tissues were fixed in 10% formalin for 12 h at room temperature. Formalin-fixed, paraffin-embedded sections (4 µm) of surgically resected pancreatic cancer tissues were obtained [20 cases of Lewis (-), 19 cases of Lewis (+)]. After the tissue sections were deparaffinized with xylene, the endogenous peroxidase activity was blocked with 3% H₂O₂ in methanol at 37˚C for 20 min. Sections were incubated with specific primary antibodies against MUC16 (1:200, cat. no. 60261-1-Ig; ProteinTech Group, Inc.) overnight at 4˚C. The antibody solution was removed, and the sections were washed in wash buffer 3 times for 10 min each. Secondary antibody (GTVision III immunohistochemical detection kit, GK5005; Gene Tech Co., Ltd.) was added to each section and the tissues were incubated for 1 h at room temperature. An avidin-biotin-peroxidase complex solution was used for the visualization of immunoreactions, with 3,3'-diaminobenzidine to detect the protein-antibody complexes. Protein expression levels were classified as positive or negative staining using an optical microscope at x400 magnification.

Phase-contrast microscopy and scanning electron microscopy. Human pancreatic cancer cells were seeded into 10-cm dishes and images were captured by a phase-contrast microscope (Leica Microsystems GmbH). For the FEI Quanta 200 scanning electron microscope (Philips Healthcare), the cells were seeded onto 0.8-cm glass slides treated with a polylysine coating. The cells were fixed in 2.5% glutaraldehyde solution at 4˚C for 5 h. After being washed with 0.1 mol/l phosphate buffer 3 times, the cells were dehydrated through a graded alcohol series, transferred to pure alcohol, critical-point dried with carbon dioxide, and then observed and photographed with the FEI Quanta 200 scanning electron microscope after gold coating. All images were captured from random fields.

Cell proliferation assay. For cell proliferation, the pancreatic cancer cells were trypsinized, and 3x10³ cells were seeded into 96-well plates (Corning, Inc.). After defined culture periods, 10 µl of Cell Counting Kit-8 (CCK-8; Dojindo Molecular Technologies, Inc.) was added to the wells and the cells were incubated in a humidified incubator at 37˚C with 5% CO₂. Absorbance was detected on a multifunctional microplate reader at a wavelength of 450 nm.

Transwell migration assay. Pancreatic cancer cells were trypsinized, and 3x10⁴ cells were seeded into Transwell inserts (8.0-µm pore; BD Falcon; BD Biosciences) without serum. Medium with 10% FBS and penicillin/streptomycin (Gibco; Thermo Fisher Scientific, Inc.) was placed in the lower chamber.
After 24 h, the upper side of the membrane was wiped with a cotton swab, and the cells were fixed in 4% paraformaldehyde and stained with 0.3% crystal violet for 20 min at room temperature. After crystal violet staining, the number of cells that had migrated to the basal side of the insert was counted. Stained cells were counted in seven randomly selected fields using an optical microscope at x400 magnification.

Liquid chromatography-mass spectrometry (LC-MS) for protein glycosylation. Pancreatic cancer cells (>2x10⁷ cells) were freshly prepared prior to use. The sample proteins were extracted using SDT lysis buffer (4% SDS, 100 mM DTT, 100 mM Tris-HCl pH 8.0). Samples were boiled for 3 min and further ultrasonicated. Undissolved beads were removed by centrifugation at 16,000 x g for 15 min. The supernatant containing proteins was collected. Protein digestion was performed with the FASP method, as described by Wiśniewski et al (14). Proteins were subjected to glycopeptide enrichment and were deglycosylated. Eluted peptides were collected and dried for further LC-MS analysis (Thermo Fisher Scientific, Inc.) using a positive or negative ionization mode. Reverse-phase high-performance liquid chromatography separation was performed with the EASY-nLC system (Thermo Fisher Scientific, Inc.) using a self-packed column (75 µm x 150 mm; 3 µm ReproSil-Pur C18 beads, 120 Å; Dr. Maisch GmbH) at a flow rate of 300 nl/min. MS data were acquired using a data-dependent top-20 method, dynamically choosing the most abundant precursor ions from the survey scan (300-1,800 m/z) for HCD fragmentation.

Results

Clinicopathological characteristics of Lewis-negative pancreatic cancer patients. A total of 853 patients with pancreatic cancer were included and underwent Lewis antigen evaluation; 11.7% of patients were Lewis negative (Table I). The median survival time of Lewis-negative patients was 7.4 months, significantly shorter than that of Lewis-positive patients (13.3 months, P<0.001; Fig. 1). In addition, Lewis-negative patients had a higher proportion of metastasis (P=0.004) than Lewis-positive patients. Lewis-negative patients had a lower serum level of CA19-9 (106.0±273.1 U/ml) than Lewis-positive patients (499.7±635.0 U/ml, P<0.001). However, contrary to CA19-9, Lewis-negative pancreatic cancer secreted a higher level of serum CA125 (251.9±642.0 U/ml) compared with Lewis-positive cancer (135.8±401.6 U/ml, P<0.001). These data show that Lewis-negative pancreatic cancer has aggressive clinicopathological characteristics with low secretion of CA19-9 and high secretion of CA125.

MUC16 expression in pancreatic cancer tissues. To confirm the association between Lewis status and CA125 secretion, the expression of MUC16 in pancreatic cancer tissues was detected by immunohistochemistry. Lewis-negative pancreatic cancer tissues (16/20) had higher levels of MUC16 expression than Lewis-positive cancer tissues (9/19, P=0.048; Fig. 2).

Lewis antigen status of human pancreatic cancer cell lines. Sanger sequencing of the Lewis gene was carried out for the determination of the Lewis antigen status of the human pancreatic cancer cell lines (BxPC-3, SU8686, SW1990, CaPan-1, MiaPaCa-2 and Panc-1).

Figure 2. Immunohistochemical detection of MUC16 expression in pancreatic adenocarcinoma tissues. Lewis-negative pancreatic cancer tissues presented higher levels of MUC16 expression than Lewis-positive cancer tissues.
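The survival comparison reported above (median 7.4 vs. 13.3 months) is the kind of analysis typically performed with Kaplan-Meier estimation and a log-rank test. The following is a minimal sketch of such a comparison using the Python lifelines library; it is not the authors' analysis code, the paper does not state which statistical software was used, and the data below are synthetic, merely echoing the reported group sizes and medians.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
# Synthetic follow-up times (months); exponential scale = median / ln(2).
# Group sizes (100 + 753 = 853) and medians echo the reported cohort.
t_neg = rng.exponential(7.4 / np.log(2), 100)    # Lewis-negative
t_pos = rng.exponential(13.3 / np.log(2), 753)   # Lewis-positive
e_neg = np.ones_like(t_neg)                      # 1 = death observed
e_pos = np.ones_like(t_pos)

kmf = KaplanMeierFitter()
kmf.fit(t_neg, event_observed=e_neg, label="Lewis-negative")
print("median, Lewis-negative:", kmf.median_survival_time_)
kmf.fit(t_pos, event_observed=e_pos, label="Lewis-positive")
print("median, Lewis-positive:", kmf.median_survival_time_)

result = logrank_test(t_neg, t_pos,
                      event_observed_A=e_neg, event_observed_B=e_pos)
print("log-rank p-value:", result.p_value)
```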
Cell morphology. The difference in morphology between Lewis-positive and -negative cell lines was examined by phase-contrast microscopy and scanning electron microscopy. Lewis-positive cell lines (BxPC-3, SW1990 and SU8686) grew in a cluster pattern, whereas Lewis-negative cell lines (CaPan-1, MiaPaCa-2 and Panc-1) showed a shuttle-like morphology by phase-contrast microscopy (Fig. 4A). Scanning electron microscopy showed that Lewis-positive cells were characterized by abundant pseudopods closely attached to the culture dish, whereas Lewis-negative cells were not (Fig. 4B). Hence, these results suggest that there is a difference in cell morphology between cells of different Lewis phenotypes.

Cell proliferation. The proliferative abilities of Lewis-positive and -negative cells were evaluated by CCK-8 assay. Overall, Lewis-negative cells had a significantly higher proliferation rate than Lewis-positive cells at 96 h after seeding (P=0.006; Fig. 5). MiaPaCa-2, a Lewis-negative cell line, had the highest proliferation rate among all cells.

Cell migration. The migration ability of Lewis-positive and -negative cells was examined by Transwell assay. Approximately 3x10⁴ pancreatic cancer cells were seeded into Transwell chambers and crystal violet staining was examined 24 h after seeding. Overall, Lewis-negative cell lines exhibited higher migration ability compared with Lewis-positive cell lines (P=0.003; Fig. 6).

Level of fucosylation. The level of fucosylation in Lewis-positive and -negative cells was determined by AAL blotting analysis, which is often used as a carbohydrate probe for core fucose in glycoproteins. Lewis-negative cell lines (MiaPaCa-2 and Panc-1) exhibited lower levels of AAL binding compared with Lewis-positive cell lines (Fig. 7). This finding suggests that the lower fucosylation level may be attributed to the loss of function of the Lewis gene in Lewis-negative pancreatic cancer.

Glycoprotein and protein expression levels. According to the clinical data, Lewis-negative patients had lower levels of serum CA19-9 than Lewis-positive patients (Table I). This result was further verified in pancreatic cancer cell lines. Western blot analysis revealed that the level of CA19-9 was significantly higher in Lewis-positive cells than in Lewis-negative cells (Fig. 8). Lewis-negative cells displayed a higher level of MUC16 compared with Lewis-positive cells. The association between MUC16 and Lewis status was consistent with the clinical results for CA125 and Lewis status. Differences in Lewis genotype had no significant effect on EGFR or STAT3 expression.

Network of cancer-related proteoglycans. The Lewis gene is a regulator of glycosylation and plays a key role in the fucosylation of proteins. In order to further verify the role of the Lewis gene in fucosylation, cancer-related proteoglycans were detected by LC-MS in the Lewis-positive cell line SU8686 (Fig. 9). Potential proteoglycan interactions were identified, involving EGFR, HSPG2, ADAM17, GPC1, ITGA2, CD40, IL6ST and GGT1.

Orthotopic animal model. In order to examine the in vivo growth ability of pancreatic cancer cell lines classified by Lewis status, an orthotopic animal model was constructed by injection of tumor cells into the pancreas. The Lewis-negative pancreatic cancer cell line MiaPaCa-2 corresponded to a higher tumor weight than the Lewis-positive pancreatic cell line SU8686 (P=0.008; Fig. 10).

Discussion

The Lewis gene is critical for fucosylation and protein modification (12).
In the present study, a total of 853 patients with pancreatic cancer were included and 11.7% of the patients were Lewis negative. Lewis-negative pancreatic cancer presented aggressive clinicopathological characteristics with low secretion of CA19-9 and high secretion of CA125. Three cell lines were classified as Lewis positive (BxPC-3, SU8686 and SW1990) and three were classified as Lewis negative (CaPan-1, MiaPaCa-2 and Panc-1). Lewis-negative pancreatic cancer cells had a shuttle shape with scarce pseudopods. Overall, Lewis-negative pancreatic cancer cells demonstrated a higher proliferation rate, higher migration ability, lower fucosylation, lower expression of CA19-9 and higher expression of MUC16 than Lewis-positive cells. Potential proteoglycan interactions were identified by LC-MS, involving EGFR, HSPG2, ADAM17, GPC1, ITGA2, CD40, IL6ST and GGT1. These findings suggest that Lewis-negative pancreatic cancer is a unique and aggressive subgroup of pancreatic cancer with special clinical and molecular features.

CA19-9 is the most widely used biomarker in the management of pancreatic cancer (6,11,15,16). Some studies have even reported that CA19-9 is not a bystander but an effector that could promote pancreatic cancer progression (17-19). CA19-9 activation could lead to the modification of fibulin-3, which hyperactivates EGFR signaling and boosts pancreatic cancer development (18). Approximately 5-10% of the population are Lewis antigen negative and have no or scarce secretion of CA19-9 (10). Therefore, it is reasonable to infer that Lewis-negative pancreatic cancer is associated with lower levels of CA19-9 secretion, and CA19-9 is not recommended as a biomarker for Lewis-negative pancreatic cancer (11). In the present study, 11.7% of patients with pancreatic cancer were Lewis negative. However, 24% of Lewis-negative pancreatic cancer patients had high secretion of CA19-9 (>37 U/ml), which has also been reported in previous studies (9,11,20). Therefore, the potential mechanisms should be explored.

The Lewis gene plays an important role in the fucosylation of proteins, its product catalyzing the addition of fucose at the α1-3,4 position (21,22). Several studies have shown that the Lewis gene is an oncogene that could accelerate cancer development (21,23). Silencing of Lewis by shRNA could reduce the expression of Lewis antigens and therefore decrease the adhesion of cancer cells to endothelial cells expressing E-selectin (21). Theoretically, Lewis-negative pancreatic cancer, which has Lewis gene dysfunction and fucosylation deficiency, would be expected to be an indolent subgroup, given the role of the Lewis gene in boosting cancer development. Interestingly, in the present study, Lewis-negative pancreatic cancer was shown to be an aggressive subgroup of pancreatic cancer with special clinical and molecular features, which may be explained by the fact that fucosylation is an important biological process, and fucosylation deficiency affects both cancer development and human body physiology.

MUC16, also known as CA125, is a membrane-bound mucin that belongs to the glycoprotein family (24). Fucosylation is an essential process for MUC16 biosynthesis. MUC16 is an important biomarker for the diagnosis of various types of cancer, such as ovarian and digestive cancers (11,15,25). MUC16 could also be applied in the management of pancreatic cancer, including diagnosis, predicting resectability, monitoring therapeutic response and follow-up (11).
Importantly, several studies have reported that MUC16 could promote cancer progression (24,26). One study showed that MUC16 could mediate cell-cell adhesion by affecting the E-cadherin/β-catenin complex (26). In our previous study, MUC16 was shown to promote pancreatic cancer progression via Foxp3 expression and tumor-associated Treg enrichment through activation of the IL-6-JAK2/STAT3 pathway (24). In the present study, Lewis-negative pancreatic cancer was shown to have higher levels of MUC16 secretion than its counterpart. The molecular mechanism explaining the association of the Lewis gene with MUC16 biosynthesis, and the effect of high MUC16 secretion on cancer development, undoubtedly deserve further research.

CaPan-1 was confirmed by sequencing to be a Lewis antigen-negative cell line. However, CaPan-1 presented properties similar to Lewis antigen-positive cell lines, including a low proliferation rate, low migration ability and a high level of fucosylation. These findings indicate that heterogeneity exists even within the Lewis-negative subgroup.

The present study is limited in that it only presents clinicopathological and molecular features of Lewis-negative pancreatic cancer. The potential mechanisms accounting for the aggressive properties of Lewis-negative pancreatic cancer should be investigated. In addition, the clinical value of the identification of Lewis-negative pancreatic cancer in guiding clinical practice should be further explored. Efforts should also be made to explain why some Lewis-negative pancreatic cancers show Lewis antigen expression. Finally, the reasons for CaPan-1, a Lewis-negative pancreatic cancer cell line, having characteristics different from other Lewis-negative pancreatic cancer cell lines should also be investigated.

Funding

This work was supported in part by a grant (no. 17ZR1406300), the Shanghai Cancer Center Foundation for Distinguished Young Scholars (grant no. YJJQ201803), and the Fudan University Personalized Project for 'Double Top' Original Research (grant no. XM03190633).

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Authors' contributions

CL, KJ and SD performed the experiments and the scientific literature search, and contributed to the figures and the writing of the manuscript. All authors participated in the data analysis and reviewed the manuscript. GL and XY conceived and designed the study and wrote the manuscript. All authors read and approved the final manuscript.

Ethics approval and consent to participate

The study protocol was authorized by the Ethics Committee of the Fudan University Shanghai Cancer Center (Shanghai, China). Written informed consent was acquired from all of the patients enrolled. All animal procedures were approved by the Institutional Animal Care Committee of Fudan University (Shanghai, China).
Health Related Quality of Life among Breast Cancer Patients: A Study from Turkey

The purpose of this study was to assess the quality of life of 123 newly diagnosed breast cancer patients who had been followed up after the initial treatment by the outpatient clinic for breast surgery of a university hospital. The Turkish versions of the QLQ-C30 (Quality of Life Questionnaire-Cancer 30) and QLQ-BR23 (Quality of Life Questionnaire-Breast Cancer 23) were used to measure quality of life. The mean score for global health status/QOL was 64.43. Patients with localized cancer had higher scores. Those in the advanced stages of breast cancer had lower physical, social and sexual functioning than those in the early stages. Patients who were currently receiving chemotherapy had lower global health/QOL, significantly different from those receiving only hormone therapy. Breast cancer patients experience problems in multiple quality of life domains. Health professionals must recognize and take into consideration the importance of QOL in order to improve the health of breast cancer patients.

Introduction

Worldwide, breast cancer is the most common malignancy among women, with an estimated 715,000 new cases for the year 2008 diagnosed in the more developed regions (26.5% of the total) and 577,000 (18.8%) in less developed countries (WHO-IARC, 2008). Breast cancer is also the most important cause of neoplastic deaths among women; the estimated number of deaths in 2002 was 410,000 worldwide (WHO-IARC, 2008). In developed countries, survival from breast cancer has slowly increased to the current rate of 85%, following improvements in screening practices and treatments. On the other hand, the survival rate in developing countries remains around 50-60% (WHO-IARC, 2008).

In Turkey, breast cancer is responsible for the largest proportion of female deaths from any form of cancer, and has accounted for approximately 16.7% of all cancer-related deaths in recent years (IARC, 2002). Furthermore, breast cancer is responsible for the largest proportion of new cancers that are reported in Turkey, making up 24.2% of female cancers (IARC, 2002). The incidence of breast cancer among women in Turkey was found to be 35.47 per 100,000 in the year 2005 (KETEM, 2005).
While early detection, along with advances in treatment, is expected to result in better rates of survival, problems related to the treatment can have negative effects on health-related quality of life. Today the QOL of patients is considered an important issue in the treatment of women with breast cancer (Ahn et al., 2007; Jayasekara et al., 2008; Kontodimopoulos, 2010; Montazeri, 2008; Montazeri et al., 2000; 2008; Munshi et al., 2010; Potter et al., 2009; Salonen et al., 2009). The time of diagnosis, the initial stages of the treatment course and the months following the end of treatment are hard times for patients, both physically and emotionally. During these periods, poor adjustment and decreased quality of life in breast cancer patients can easily occur (Frost et al., 2000; Schnipper, 2001). Studies have shown that decreased QOL as a result of chemotherapy side effects may predict early treatment discontinuation in patients (Richardson et al., 2007). Randomized clinical trials revealed that the use of chemotherapy, especially more aggressive chemotherapy, was associated with worse QOL than was seen with hormonal interventions or less aggressive chemotherapy (Fairclough et al., 1999; Goodwin et al., 2003; Hurny et al., 1997; Levine et al., 1998). HRQOL data are intended to help guide clinical decision-making regarding selection of the optimal treatment, to provide information about the experience of patients receiving treatment, and potentially to predict prognosis (Goodwin et al., 2003). However, it is currently not clear whether health-related QOL measurements influence clinical decision-making, or whether the contribution of QOL measurement to clinical decision-making varies according to the stage of the disease or the type of intervention (Goodwin et al., 2003).

In Turkey, health-related QOL among cancer patients is a neglected subject. Compared with the western literature, there are few published studies on health-related QOL among breast cancer patients in Turkey (Akin et al., 2008; Alicikus et al., 2009; Karakoyun-Celik et al., 2010; Uzun et al., 2004). One of these studies (Alicikus et al., 2009) evaluated only the psychosexual and body-image aspects of QOL by comparing breast-conserving treatment and mastectomy. Another measured QOL and self-efficacy among Turkish breast cancer patients undergoing chemotherapy (Akin et al., 2008). A third study used a quality of life scale that was not specifically designed for breast cancer patients but for measuring quality of life in general (Uzun et al., 2004), and the last study used the same QOL instruments that we used but evaluated quality of life only in relation to depression and anxiety. Ours can therefore be considered the first study among Turkish breast cancer patients to use the QLQ-BR23 instrument for the evaluation of QOL in a broader sense. There are also a few studies on the translation and validation of various QOL measures for Turkish cancer patients (Can & Aydiner, 2009; Can et al., 2010; Cankurtaran et al., 2008; Bektas and Akdemir, 2008; Guzelant et al., 2004; Hoopman et al., 2006), all of which were performed in order to validate the Turkish versions of the QLQ-C30 and QLQ-BR23 instruments.

The purpose of this study was to assess the quality of life of breast cancer patients who had been followed up after the initial treatment by the outpatient clinic for breast surgery of Uludag University Hospital in Bursa, Turkey.
Research setting and study participants

This study was performed at the outpatient clinic for breast surgery of Uludag University Hospital in Bursa, Turkey. The study group comprised patients who were followed up for breast cancer at this clinic. During a period of two months, 179 patients attended for follow-up. All of the followed patients were informed about the purpose and anonymity of the study and asked if they would like to participate voluntarily. One hundred and fifty-eight patients agreed to participate and gave their written consent. Due to missing data, 35 participants were excluded, so the final study group consisted of 123 patients. Approval for this study was granted by the Ethics Committee of Uludag University. The questionnaires regarding demographic characteristics and QOL were completed by the participants. The medical history data regarding breast cancer were gathered by the authors from the medical records of the corresponding participants.

Instruments

The EORTC (European Organization for Research and Treatment of Cancer) QLQ-C30 version 3.0 is a 30-item core cancer-specific questionnaire measuring QOL in cancer patients (Aaronson et al., 1993). This self-administered questionnaire incorporates five functional scales: physical (PF), role (RF), cognitive (CF), emotional (EF) and social (SF); three symptom scales for fatigue, pain and nausea/vomiting; a global health QOL scale; and several single items for the perceived financial impact of disease and treatment and for the assessment of additional symptoms such as dyspnoea, appetite loss, sleep disturbance, constipation and diarrhoea, which are commonly reported by cancer patients. All items were scored on 4-point Likert scales ranging from 1 (not at all) to 4 (very much). As an exception, items 29 and 30 in the global health QOL subscale were scored on a modified 7-point linear analogue scale (Fayers et al., 2001). All functional scales and individual item scores were transformed to a 0-100 scale, with higher values indicating higher functioning on the functional scales and an increased presence of symptoms on the symptom scales. Approval was obtained from the EORTC Quality of Life Group. We used the Turkish version of the questionnaire, which had been validated in previous studies (Cankurtaran et al., 2008; Guzelant et al., 2004; Hoopman et al., 2006; Ozturk et al., 2009).

The EORTC QLQ-BR23 is a 23-item breast cancer-specific questionnaire about the common side effects of therapy, body image, sexuality, and outlook for the future (Jayasekara et al., 2008; Montazeri et al., 2008). All items were scored on 4-point Likert scales ranging from 1 (not at all) to 4 (very much). The scoring approach for the QLQ-BR23 is identical in principle to that for the function and symptom scales/single items of the QLQ-C30. We used the Turkish version of the QLQ-BR23, which was obtained from the EORTC Quality of Life Group.
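For readers who wish to reproduce the scoring, the EORTC scoring manual cited above (Fayers et al., 2001) specifies a simple linear transformation: the raw score of a scale is the mean of its items; functional scales are reversed so that higher values mean better functioning, while symptom and global scores increase with the raw score. The following Python sketch illustrates this transformation; the function name and the example responses are ours, chosen only for illustration.

```python
import numpy as np

def eortc_scale_score(item_values, kind, item_range=3):
    """Transform EORTC QLQ items to a 0-100 scale score.

    item_values: responses for the items of one scale (1-4 for Likert
        items, 1-7 for the two global health/QOL items).
    kind: 'functional', 'symptom', or 'global'.
    item_range: max minus min possible item value (3 for 4-point
        Likert items, 6 for the 7-point global items).
    """
    raw = np.mean(item_values)                 # raw score = mean of the items
    if kind == 'functional':
        # Functional scales are reversed: higher score = better functioning
        return (1 - (raw - 1) / item_range) * 100
    # Symptom scales: higher = more symptoms; global scale: higher = better QOL
    return ((raw - 1) / item_range) * 100

# Example: a 5-item functional scale answered on the 4-point Likert format
pf = eortc_scale_score([1, 2, 1, 1, 2], kind='functional')    # -> 86.7
# Example: global health/QOL (items 29-30, 7-point scale)
gh = eortc_scale_score([5, 5], kind='global', item_range=6)   # -> 66.7
print(round(pf, 1), round(gh, 1))
```

Because each scale has only a few items, transformed scores cluster at discrete values; for instance, answering 5 on both 7-point global items yields 66.67, one of the median values reported later in this paper.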
Analysis

Statistical analyses were performed using the Statistical Package for the Social Sciences (SPSS) 13.0 program for Windows (SPSS Inc., Chicago, IL). Scale internal-consistency reliability was assessed via Cronbach's alpha, and the 0.70 standard for group-level comparisons was adopted (Nunnaly & Bernstein, 1994). Construct validity was assessed by the interscale correlations between the QLQ-C30 and QLQ-BR23, under the assumption that conceptually related scales would correlate substantially and, conversely, that scales with less in common would show lower correlations (Aaronson et al., 1993; Jayasekara et al., 2008). Calculation of the quality of life scores from both study instruments was performed according to the scoring manual developed by the EORTC study group (Fayers et al., 2001). Quality of life scores were compared with demographic and clinical parameters in order to understand the patterns. Student's t test and analysis of variance (one-way ANOVA and Kruskal-Wallis) were used to test the statistical significance of differences between the groups. All results were regarded as statistically significant at p<0.05.

Patients' demographic and clinical characteristics

The patients' mean age was 49.37 ± 9.55 years (mean ± SD), with a range of 27-67 years. Most of the patients were married (91.1%), primary school graduates (56.9%) and housewives (67.5%). The median length of time since the diagnosis of cancer was twenty-four months (mean ± SD = 43.46 ± 42.30 months; range 4-168 months). At the time of diagnosis, 55.3% of patients had a local cancer; 25.2% were at stage I, 35.0% at stage II and 23.6% at stage III, whereas 1.6% and 6.5% were at stages 0 and IV, respectively. According to the type of cancer, 58.5% had invasive ductal and 3.3% invasive lobular cancer, with some 38.2% other types. Most of the patients had undergone surgical treatment (57.7%), followed by combined therapies (23.5%). The other therapies were chemotherapy only (16.4%) and radiotherapy only (2.4%). Most of the surgically treated patients had undergone mastectomy (50.5%), followed by breast-conserving surgery (35.1%). At the time of the study, 40.6% of the patients were receiving no therapy, whereas 30.1% were having hormone therapy and 29.3% chemotherapy.

QLQ-C30 and QLQ-BR23 scales

Data on the central tendency and reliability of the QLQ-C30 and QLQ-BR23 scales are presented in Table 1. Across both instruments, all of the scales met the 0.70 internal-consistency criterion. Among the items of the QLQ-C30, nausea/vomiting, dyspnoea, appetite loss, constipation and diarrhoea had a large proportion of patients (>50%) at the minimum score, implying a lack of these symptoms in this sample but possibly also hinting at reduced discriminative ability. Among the scales/items of the QLQ-BR23, the sexual functioning scale likewise had a large proportion of patients at the minimum score, which may reflect diminished sexual functioning but may also point to reduced discriminative ability.
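As a concrete illustration of the reliability and group-comparison analyses described above, the sketch below computes Cronbach's alpha from an item matrix and compares a 0-100 scale score across stage groups with one-way ANOVA and the Kruskal-Wallis test. The simulated responses, data-frame layout and column names are hypothetical stand-ins for the study data, which we do not have.

```python
import numpy as np
import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha; rows = patients, columns = items of one scale."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)

# Hypothetical 1-4 Likert responses for a 5-item scale, 123 patients
latent = rng.integers(1, 5, size=(123, 1))
items = pd.DataFrame(np.clip(latent + rng.integers(-1, 2, size=(123, 5)), 1, 4))
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}  (>= 0.70 required)")

# Hypothetical 0-100 scale scores compared across stage groups
df = pd.DataFrame({
    'global_qol': rng.normal(64, 20, 123).clip(0, 100),
    'stage': rng.choice(['0-I', 'II', 'III-IV'], 123),
})
groups = [g['global_qol'].to_numpy() for _, g in df.groupby('stage')]
print("ANOVA:          p = %.3f" % stats.f_oneway(*groups).pvalue)
print("Kruskal-Wallis: p = %.3f" % stats.kruskal(*groups).pvalue)
```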
<Table 1>

The QLQ-BR23 scales showed high correlations with the QLQ-C30 scales (Table 2); significant correlations were observed in 78 out of 105 comparisons (74.3%). Global health status/QOL and emotional functioning were correlated with all of the functional and symptom scales of the QLQ-BR23: body image, sexual functioning, sexual enjoyment and future perspective were positively correlated, whereas therapy side effects, breast symptoms, arm symptoms and being upset by hair loss were negatively correlated. Emotional and social functioning were strongly and positively correlated with body image. Cognitive functioning was strongly and negatively correlated with therapy side effects. Physical functioning was correlated with all of the functional and symptom scales of the QLQ-BR23 except body image and being upset by hair loss.

<Table 2>

The correlations between the QLQ-BR23 subscales were also high; significant correlations were observed in 19 out of 28 comparisons (67.9%). Future perspective was strongly and positively correlated with body image, and being upset by hair loss was strongly and negatively correlated with body image and future perspective.

QOL according to some demographic characteristics

We compared the mean scores of the functional scales of the QLQ-C30 and QLQ-BR23 in patients of different age groups and levels of educational attainment. We did not find a significant difference in terms of global health status/QOL among patients of different educational status. There was a significant difference in terms of body image among patients of different educational status, with those with primary education having the highest score (77.38±22.57).

We did not find a significant difference among the age groups in terms of global health status/QOL. Those who were 50 years of age and older had the highest scores in emotional functioning (76.69±20.34), social functioning (88.13±16.39) and body image (78.53±20.00).

QOL according to some characteristics of breast cancer and treatment

The mean scores for the QLQ-C30 and QLQ-BR23 scales according to the localization of breast cancer at diagnosis are shown in Table 3. The global health status of patients with localised breast cancer was found to be higher than that of patients with local and axillary breast cancer. A similar result was obtained for cognitive functioning. Among the symptom scales of the QLQ-C30, the pain score was higher among patients with local and axillary breast cancer, whereas appetite loss was higher among patients with local breast cancer. According to the QLQ-BR23 scale, sexual functioning was better among patients with localised breast cancer, and fewer arm symptoms were observed than in patients with local and axillary breast cancer.

<Table 3>

The comparison of QOL among patients in different stages of breast cancer showed significant differences in physical functioning, social functioning, sexual functioning, sexual enjoyment, pain and arm symptoms. We found no significant difference in terms of global health status/QOL among patients in different stages of breast cancer. Physical functioning among stage II patients was significantly better than among those in stages III and IV (80.78±12.95 versus 69.19±19.20, p=0.011). Social functioning among stage II patients was significantly better than among those in stages 0-I (85.27±21.58 versus 68.69±32.74, p=0.023). We found no significant differences among patients in different stages in terms of the symptom scales of the QLQ-C30, except for pain, which was higher among patients in stages III and IV than among those in stages 0-I (32.43±23.22 versus 18.60±17.62, p=0.020).
In terms of the functional scales of the QLQ-BR23, only the sexual functioning and sexual enjoyment scales were found to differ significantly among patients at different stages. Sexual functioning of patients in stages 0-I was significantly better than that of patients in stages III and IV (19.19±17.24 versus 6.31±13.24, p=0.002). Similar results were obtained for sexual enjoyment (37.50±11.39 versus 13.33±28.11, p=0.043). Among the symptom scales of the QLQ-BR23, only arm symptoms were found to differ significantly among patients at different stages, being higher among patients in stages III-IV than among those in stages 0-I and II (37.84±24.77 versus 18.18±17.13 and 24.29±19.74, p=0.001).

Table 4 shows the comparison of the QOL of breast cancer patients according to the current treatment. The global health status of patients currently receiving no treatment was found to be higher than that of patients receiving hormone therapy or chemotherapy. Similar results were obtained for physical, role and social functioning. Sexual functioning and sexual enjoyment were found to be lower among patients currently receiving chemotherapy, and symptoms such as fatigue, nausea/vomiting, insomnia, appetite loss, systemic therapy side effects and breast symptoms were seen more frequently among patients currently receiving chemotherapy.

<Table 4>

The comparison of the QOL of patients according to the time passed since diagnosis is shown in Table 5. Role functioning and sexual enjoyment were found to be higher among patients who had been diagnosed with breast cancer five years or more previously. Social functioning was better among those who had been diagnosed 2-4 years previously. Pain, insomnia, appetite loss, systemic therapy side effects and breast symptoms were frequently seen among patients who had been diagnosed one year or less previously. In terms of global health status, we found no significant differences among the groups with different times since diagnosis.

Discussion

In this study we assessed HRQOL among a group of breast cancer patients who had been diagnosed, treated and followed up by a single clinic. The median and mean scores for global health status/QOL were 66.67 and 64.43, respectively. Median scores for the functional scales of the QLQ-C30 varied between 75.00 and 83.33. When the breast cancer-specific QLQ-BR23 scale was taken into account, the median scores for the functional scales were between 33.33 and 83.33, except for sexual functioning. Most of the patients (56.9%) scored 0 points on sexual functioning. In the western literature, the prevalence of sexual dysfunction is reported to be between 40% and 100%; however, it is hard to define a certain rate because of ethnic and cultural differences (Ganz et al., 1998; Schover, 1991). A study among Turkish breast cancer patients showed no significant correlation between depression and QOL scores related to sexuality, a finding attributed to Turkish women having fewer expectations of sexual life and to their timidity when answering the questions in this module, owing to their cultural and social behaviour (Karakoyun-Celik et al., 2010). Among the symptom scale scores, the highest values were for fatigue, financial difficulties, insomnia and pain, whereas for the breast cancer-specific symptoms these were distress about hair loss, systemic therapy side effects, arm symptoms and breast symptoms, respectively. Previous studies among breast cancer patients in Turkey using different QOL instruments showed similar, moderate QOL scores (Akin et al., 2008; Ogce et al., 2007; Uzun et al., 2004).
Studies have concluded that there is a negative relationship between age and physical and emotional well-being among breast cancer patients (Avis et al., 2005; Lu et al., 2007; Vacek et al., 2003). We did not find significant differences in terms of global health status/QOL between patients who were younger than 50 years and those 50 years of age or older. However, emotional and social functioning and perception of body image were significantly better among patients who were 50 years of age and older. A study among Turkish breast cancer patients that used a different HRQOL measure found that the overall quality of life and its dimensions were more negatively affected in younger patients (Akin et al., 2008). The results of all these studies draw attention to young breast cancer patients, who may need more physical, emotional and social support. Why were younger patients more negatively affected? This issue needs further evaluation, but one explanation could be that physical appearance is more important at younger ages, and women whose image has been changed by hair loss and surgical interventions may feel emotionally depressed, a feeling that may hinder them from taking part in social activities.

Many studies have reported that educational level has an effect on quality of life (Akin et al., 2008; Cui et al., 2004; Guner et al., 2006; Pandey et al., 2005; Spagnola et al., 2003). We found no significant relationship among women with different educational levels in terms of global health status/QOL, but physical functioning and body image were found to be better among those who were primary school graduates. Why patients with primary education scored better on physical functioning and body image than those with more education needs to be studied further. The perception of self-efficacy, the value placed on life and the ability to adopt simple coping mechanisms may be some of the reasons for this finding.

In general, patients with advanced cancer have more difficulty adjusting and experience greater distress than those with early-stage disease (Akin et al., 2008; Bull et al., 1999; Cui et al., 2004; Ogce et al., 2007). Our findings indicated that patients with advanced-stage breast cancer had lower physical, social and sexual functioning than those with early-stage cancer, and they also presented with more arm symptoms and pain. However, we did not find significant differences in terms of global health status/QOL among patients at different stages of the disease. Some studies have shown the negative impact of chemotherapy on the HRQOL of breast cancer patients (Akin et al., 2008; Lee et al., 2001). Our results were consistent with the results of previous studies: we found that patients who were currently receiving chemotherapy had lower global health/QOL, physical functioning, role functioning, social functioning and sexual functioning, significantly different from those receiving only hormone therapy. Furthermore, symptoms such as fatigue, nausea/vomiting, insomnia, appetite loss, systemic therapy side effects and breast symptoms were seen more frequently in that group. Cui et al.
(2004) reported that there was a relationship between the duration of the breast cancer diagnosis and the general quality of life and all its dimensions. According to Kessler (2002), HRQOL is more severely affected in patients newly diagnosed with breast cancer. The results of our study were consistent with the results of both of these studies. However, we did not find significant differences in global health status/QOL among patients with different durations since diagnosis, but role functioning, social functioning and sexual enjoyment were better among those with a longer duration, whereas symptoms such as pain, appetite loss, insomnia, breast symptoms and systemic therapy side effects were more common among those with a shorter duration. Among the symptom scales, only the arm symptoms score was found to be higher among patients with a longer duration of breast cancer than among those with a shorter duration. However, another study of Turkish breast cancer patients showed that women who had been diagnosed less than a year previously had a higher overall quality of life than women for whom more than a year had passed since diagnosis (Ogce et al., 2007). Why role functioning, social functioning and sexual enjoyment were better among patients with a longer duration of breast cancer is an interesting result of our study and may be explained by the many challenges of survivorship. The searing recognition of mortality changes everything. From that moment forward, all of life is viewed through a double lens: the possibilities of both a long life and a greatly abbreviated one are appreciated. Over time, this dual view may enrich patients' lives. They learn to live with cancer, to go on, and to appreciate the dark side as well as the daylight; thus they try to make the best of their lives.

Health care professionals must recognize and take into consideration the importance of QOL, alongside treatment, in order to improve the health of breast cancer patients. The results of this study should help to fill gaps in the current limited knowledge and identify areas in which patients need extra support. Since there clearly are negative effects of cancer and chemotherapy on patients' quality of life, healthcare providers need to focus on designing psychosocial interventions to improve self-care and quality of life and to support cancer patients throughout their illness and chemotherapy. This will improve cancer patients' adaptation to their disease and their emotional well-being. Planned education programs addressing patients' needs, helping patients by providing verbal encouragement, introducing patients to positive role models and incorporating pain-management guidelines into the delivery of patient care are important interventions for improving quality of life among breast cancer patients.

Study limitations

Despite the limitations of this study due to its small sample size and cross-sectional design, the results indicate that breast cancer patients experience problems in multiple quality of life domains, and further studies are needed. Another limitation might be the mixing of patients with different stages and treatments. The internal-consistency reliability and cross-sectional construct validity of the Turkish QLQ-C30 and QLQ-BR23 were satisfactorily demonstrated, but test-retest reliability and longitudinal construct validity were not addressed. Thus a longitudinal study design could be considered to overcome these limitations.

Table 3. QOL and localization of breast cancer
Upanayana Saṃskāra Vis-A-Vis Sandhyāvandanam for Refined Personality

Abstract. Upanayana saṃskāra (thread ceremony), one among the Ṣoḍaṣa saṃskāras (sixteen rituals) described in Indian culture and Hindu philosophy, is the rite through which a man is initiated into the vows of the guru, the Vedas (wisdom), the restraints (penance), observances, values and the vicinity of God (ideals). It is an important saṃskāra performed at the crucial adolescent age of an individual, with a view to boosting the physical, psychological, moral, social and spiritual life of that individual. In the present scenario, the increasing use of modern gadgets, the indiscreet use of social media and inadequate moral education have resulted in lowered concentration levels, diminished memory and deteriorating practical skills, besides increased stress, anxiety and depression levels in adolescents. Upanayana saṃskāra is observed just as a symbolic ceremony in most parts of the society that perform it, while only a select few understand its core intention as included in the ancient Indian classics. This article is an earnest attempt to briefly apprehend and analyze a few aspects of the Upanayana saṃskāra, such as the season and time of performance, the age, the Kaupīna, the Yagnopavita, Sandhyāvandanam, Gāyatri japa and others, and their contribution to enhancing physical, intellectual, psychological and social wellbeing, with specific reference to the Indian classical Vedic literature. As the Vedic literature strongly describes, meaningful performance of Upanayana saṃskāra followed by regular practice of Sandhyāvandanam will not only enhance scholastic performance but also bring about the comprehensive development of an individual and discipline in the society.

Keywords. Upanayana saṃskāra, Sandhyāvandanam, Comprehensive development, Refined personality

Introduction

India, the salient land of ancient civilizations, has been the abode of Sanātana dharma, more often than not referred to as Hindu dharma, considered its core component and earnest, esteemed essence. Sanātana dharma, loosely translated to imply "the natural, ancient and eternal way", is much more than just a religion; it is a code of ethics, a way of living with a coherent and rational view of reality, as Dharma includes three components: ethics, metaphysics and spirituality.
Sanātana dharma is the world's most ancient culture and a socio-spiritual and religious tradition which has been alleviating man of his sufferings, with a view to aiding him in the attainment of the fourfold bliss: Dharma, Artha, Kāma and, ultimately and most importantly, Moksha (enlightenment, liberation). India, the bedrock of benevolence, has also been the backbone of ancient wisdom, with exuberant and empirical knowledge bases such as vyākaraṇa, saṅgīta, nāṭya, shilpa, vāstu śāstra, Āyurveda and others. Many of these knowledge forms extensively and exceptionally owe their origin and development to the Vedas, the universally recognized primary sources of Indian culture and Hindu Dharma. The Vedas are venerable, vast, virtuous and valuable sources of knowledge in all its forms. Considering the difficulty in apprehending and analyzing the authentic attributes of the Vedas, the ancient rishis, with their sense of enquiry, expertise and experience, explored and extracted the Vedic wisdom, like extracting a precious metal from ore. The teachings were later elaborately explained and exceptionally exemplified for the benefit of the common man, under the aegis of the Upanishads, Sūtras, Smṛtis, Puranas and other texts. The Vedas, the Brāhmanas, the Gṛhyasūtras, the Dharmasūtras, the Smṛtis and numerous other treatises illuminate, illustrate and interpret the importance of absolute harmony in thought (mānasa), word (vācika) and deed (kāyaka) in making an individual principled, pure and perfect. The scriptures describe numerous rites, ceremonies and customs to be performed and pursued, beginning from the time of conception until the death of an individual, in a disseminated manner; these are commonly and conventionally considered 'Vaidika Saṃskāras'. The Vaidika Saṃskāras thus facilitate the development of an individual into a total person. This article is an earnest attempt to apprehend, analyze and amalgamate the various attributes of Hindu saṃskāras in general, and of Upanayana saṃskāra in particular, for the promotion of peace, purity of thought, perfection in deeds and the prosperity of society.

'Saṃskāras', loosely expressed and explained as the 'sacraments' of Hindu philosophy, often refer to the religious purificatory rites and ceremonies for sanctifying the body, mind and intellect of an individual, so that he becomes a full-fledged member of the community. The word 'Saṃskāra' can be etymologically elucidated as comprising सम् उपसर्ग + कृ धातु + घञ् प्रत्यय [1], its implications including and indicating several meanings such as i) purification [2], ii) preparation or purification of Havis or oblation for the Gods [3], iii) an act which makes a certain thing or a person fit for some purpose [4], iv) education, v) cultivation, vi) training, vii) a purificatory rite or ceremony to change the qualities or intrinsic worth, and so on. Āyurveda, the science of life and the ancient Indian art of holistic medicine, defines saṃskāra as "Saṃskāro hi guṇāntarādhānam" and describes it as the instrument which initiates qualitative improvement by incorporating specific qualities [5].

Saṃskāras: definition and significance

Saṃskāras, the authentic attributes of the Hindu faith, which initially indicated the specific, special qualities secured [atishaya vishesha], were subsequently assumed to signify the very ritual or ceremony itself.
Also, Saṃskāras are assumed to accomplish two things for an individual: 'doṣāpanayanam', that is, the elimination of physical and mental impurities, in addition to 'Guṇāvadhānam', which aims at adding special virtues or ātma guṇās. As saṃskāras imbibe new and noble qualities, the entire lifestyle of an individual can be contemplated as a process of saṃskāra, with every stage of life being envisioned, expressed and made evident by a particular saṃskāra. Thus, the saṃskāras, having been perceived to impart purity and positivity, are postulated to be an integral part of one's life, from the time of conception up to death. Although there are numerous ancient classics which describe and delineate the substance and significance of the saṃskāras, the Gṛhyasūtras and Dharmasūtras are considered the most legitimate treatises. The texts acknowledge the existence of two principal types of saṃskāras, i.e. i) Daiva saṃskāras (the sacrifices: pāka yagnya, havi yagnya) and ii) Brahma saṃskāras (the ceremonies performed at various occasions in the life of an individual) [6].

The variation among different treatises with respect to the enumeration of the saṃskāras, extending from the Garbhādāna (conception) to the Antyeṣṭi (funeral) saṃskāra, is evident and can be summarized as follows [7]:

• the Vedic literature refers to 3 main saṃskāras: Upanayana (the initiation), Vivaha (the marriage), and Antyeṣṭi (the last rites)
• the Āśvalāyana Gṛhyasūtra mentions the number of saṃskāras to be 11
• the Pāraskara, the Baudhāyana and the Varaha Gṛhyasūtras mention 13 saṃskāras each
• the Vaikhānasa Gṛhyasūtra gives a list of 18 saṃskāras
• the Gautama Dharmasūtra gives a list of 40 saṃskāras
• as per the Manu Smṛti, the number of saṃskāras is 13

The variation in the total number of saṃskāras across different texts reflects the prevalence of saṃskāras in the respective eras and might also reflect differences among the various Vedic teaching schools (Veda Śākhā). However, the term 'Ṣoḍaṣa saṃskāra' has gained wide acceptability, enumerating 16 very important saṃskāras in the life of an individual, from the Garbhādāna (conception) to the Antyeṣṭi (funeral) saṃskāra.

Upanayana saṃskāra

Definition and implications

'Upanayana' saṃskāra is one among the 'Ṣoḍaṣa saṃskāras', and its indispensable inclusion in the various classifications of saṃskāras authenticates its importance. The literal conception of the word 'Upanayana' being 'taking near' or 'leading to' or 'initiating', it can be etymologically defined and described as:

अध्ययनार्थम् आचार्यस्य उप समीपं नीयते अनेन कर्मणा इति ॥ उप + नी + ल्युट् [8]

Upanayana saṃskāra can further be cited and comprehended as: i) the rite through which the child is taken to the teacher [9]; ii) introducing the novice to the stage of studenthood; iii) the rite by which a boy is able to realize the Gāyatri mantra [10]; iv) one of the most important rituals for acquiring knowledge of the Vedas [11]; vi) the rite of passage by which a boy entered the first stage of 'Aśrama dharma' and obtained 'dvijatva' or second birth; vii) the ceremony for the investiture of the sacred thread, by which act spiritual birth was supposed to be conferred on a child. A comprehensive connotation of Upanayana saṃskāra thus cites it as 'the rite through which a man is initiated into the vows of the guru, the Vedas, the restraints, observances and the vicinity of God'.
Thus, Upanayana saṃskāra brings one from ignorance to wisdom, a new life, supporting the Vedic hymn "जन्मना जायते शूद्रः । संस्कारात् द्विज उच्यते ॥ Janmanā jāyate śūdraḥ, saṃskārāt dvija ucyate": one is born in ignorance, and through saṃskāra (virtues) one becomes Dwija (superior). Upanayana saṃskāra marks the commencement of the first stage of life, the Brahmacarya aśrama, and precedes the disciplined, dynamic association in the second stage, the Gruhasta aśrama. The saṃskāra, which was initially organized in a simple manner, developed into a comprehensive ritual as described in the Gṛhyasūtras, thus being reckoned an important epoch in the life of a 'dvija', the twice-born (the birth of a new, pure and refined personality). Upanayana instigated the imperative time to acquire knowledge, dedication, devotion and discipline in life. It played a pivotal role in sculpting the physical, psychological, moral, social and spiritual life of an individual.

Important aspects of Upanayana saṃskāra for personality refinement

With a change in the consideration of the saṃskāra from its educational concerns to ceremonial considerations, numerous enactments are proposed and practiced, thus making this saṃskāra associated with adolescence gain substantial social significance. A few aspects associated with this saṃskāra can be apprehended and analyzed as follows.

i) Ideal time and age for the ceremony. Spring, the season which symbolizes rebirth, renewal, youthfulness and hope, is considered auspicious for the performance of the Upanayana saṃskāra of all individuals [12]. The period when the sun is in the northern hemisphere (uttarāyaṇa), the bright half (Śukla-Pakṣa) of a month and the five months starting from Māgha were all considered favorable for initiation, as they were regarded as representations of the brightness of knowledge and learning. The ages of eight, eleven or twelve years, as per the cāturvarṇya system of earlier days, can be understood to reflect the intellectual capacity of children, with intermediate options allowed in consideration of the uniqueness of every child.

ii) Kaupīna, girdle and Daṇḍa. The Kaupīna is given to the child as encouragement to observe social decorum and to maintain his own dignity and self-respect, while the clothes given by the Ācārya represent the bond of protection established between the teacher and the student being initiated. The girdle used in the ceremony is a symbol of purity and strength: tied around the waist of the child, it is thought to protect his purity and to confer strength. The Daṇḍa, or staff, given to the child symbolizes him as a guardian of the Vedas and a protector of the social order. The different woods used for the Daṇḍa, such as palāśa, bilva, badara, udumbara and nyagrodha, have their own symbolic and spiritual importance besides their medicinal values.

iii) Yagyopavita and its relation to the Gayatri Mantra. The sacred thread used in the ceremony is called the Yagyopavit. The scriptures describe the length of the Yagyopavit as ninety-six times the breadth of the four fingers of a man, the four fingers representing the four parts of the Yagyopavit, i.e., one knot and three threads (Figure 1). Each thread consists of 3 sub-threads. The thread and knot have the symbolic and spiritual meanings described in Figure 1 and Table 1. The three folds of the cord represent the Triguṇas and the 3 parts of the Gayatri Mantra (Tripada), i.e., i) Tatsaviturvareniyam, ii) Bhargo Devasya Dhimahi, iii) Dhiyo yo naha Prachodayat (Figure 1).
Figure 1 depicts the Yagyopavit as a physical representation of the Gayatri Mantra. After the Upanayana saṃskāra, the student lives his life and gradually imbibes the 24 virtues of the Gayatri Mantra to become a Dwija with a purified and refined personality. The three cords (threads) and one knot (brahmagranthi) constantly remind the student of the vow taken.

Table 1. Yagyopavit parts, the corresponding parts of the Gayatri Mantra, and the associated virtues.

The Gāyatri mantra occurs in all four Vedas, i.e. the Ṛigveda, Yajurveda, Sāma Veda and Atharva Veda. It is a prayer to the almighty supreme in the form of the sun (Savitā). The Gayatri Mantra is supposed to be recited ritualistically at dawn, midday and dusk. It comprises 24 letters, 9 words and 4 phrases. Spiritually, chanting of the Gayatri Mantra provides vibrations to 24 spiritual centers present in the human body, aiding the awakening of the 24 divine virtues described in Table 1. The Yagyopavit is worn by the student during the Upanayana saṃskāra, which marks the start of Vedic study; in this process one becomes a Dwija (pure, with wisdom, as a new life out of ignorance). In the ritual, the Gayatri Mantra is given to the student being initiated. The Gayatri is considered the Guru Mantra, and the student is required to practice its qualities from the Yagyopavit saṃskāra onward. The practice of becoming a Dwija extends over the whole of life and every moment, but a specific ritual, Sandhyāvandanam, is performed twice daily [13].

iv) Sandhyāvandanam. The term literally denotes 'salutation to the goddess of dawn and dusk'; the ritual of Sandhyāvandanam includes Ācamanam, Dhyāna, Prāṇāyāma (Pūraka, Kumbhaka, Rechaka), Mārjanam, Nyāsa (Aṅga Nyāsa and Kara Nyāsa), Mudrās, Bandhas, Gāyatri japa, Agnihotra and others. Various procedures of nyāsa in Sandhyāvandanam are described in the Vedic literature. One of them involves touching 12 different parts of the body with the fingertips, invoking divine virtues in them; the tips of the fingers are said to be terminal points of Prāṇa. The nyāsa (Aṅga Nyāsa, Kara Nyāsa and others) is the spiritual process of activating the divine virtues and vital centers of the body and mind, preparing oneself before the chanting of the Gayatri Mantra so as to have a refined personality.

Upanayana saṃskāra and initiation into Sandhyāvandanam are done for Dwijatva, i.e., to have a refined personality. The dynamics of Upanayana saṃskāra vis-à-vis Sandhyāvandanam bring positive, multifaceted changes in one's life [13] and can be discerned and demonstrated to bestow salubrious benefits in terms of i) physical wellbeing, i.e., immunity, stamina and the like; ii) intellectual progression, i.e., increases in medhā śakti, dhi, dhruti and Smṛti, and psychological wellbeing, i.e., tranquility, increased concentration and strong will power; and iii) social wellbeing, i.e., simplicity, discipline and self-respect, individually and collectively. These multifaceted outcomes are achieved once the virtues of the Gayatri Mantra are implanted in life through Upanayana saṃskāra and Sandhyāvandanam and are practiced in life, i.e., self-realization, karma-yoga, self-control, lifemanship, mightiness, superiority, serenity, divine vision, virtue, discriminative wisdom, self-restraint and service (Table 1). Upanayana saṃskāra and initiation into Sandhyāvandanam refine the personality. The resulting pure consciousness portrays the similarities between an individual and nature, showing that the individual is a part of nature and not separate from it [loka puruṣa sāmyata]. It also helps in the journey, i.e., the
creation of consciousness appropriate to the stage of life (for example, after Upanayana, the Brahmacarya aśrama; after Vivāha, the Gruhasthāśrama) and the development of a sanctity for life itself. Finally, the process leads to spiritual wellbeing, which helps keep the consciousness awake to the almighty truth and establishes harmony with nature.

Discussion and Conclusion

It is crucial and consequential to note that sincerity and steadfastness in performing any ritual without a proper perception of its principles and purport result only in the incurring of heavy expenditure. Logical, legitimate and lawful spreading of the principles, practice and precise knowledge of the Vedas enables even the destitute and deprived classes to perform and participate earnestly and enthusiastically in the saṃskāras. Upanayana saṃskāra and initiation into Sandhyāvandanam exist in Indian culture as core practices. Personality development and improvement in scholastic performance as consequences of Upanayana saṃskāra are benchmarks which demand further apprehension and analysis. In the present scenario, the quality of education, learning standards, and the safety and efficacy of students at institutions have always been matters of chief concern. Lowered concentration levels, diminished memory, deteriorating practical skills, increased stress, anxiety and depression levels, obscured orientation and many other problems may be considered the pessimistic consequences of the inconsiderate, inappropriate implementation of the contemporary education system, the improper and increasing use of modern gadgets, the indiscreet use of social media, inadequate moral education and ignorance of social responsibilities. It is at this instant of time that the importance of Upanayana saṃskāra in general, and Sandhyāvandanam in particular, can be apprehended, analyzed and appreciated. They not only instigate the imperative time to acquire knowledge, dedication, devotion and discipline in life but also play a pivotal role in sculpting the physical, psychological, moral, social and spiritual life of an individual. It can thus be concluded that Upanayana saṃskāra is a regenerative symbolic ceremony of immense importance. Explicating and establishing the realities of the Vedic classics with modern parameters is, however, the need of the hour.
Sleep Apnea, the Risk of Developing Heart Failure, and Potential Benefits of Continuous Positive Airway Pressure (CPAP) Therapy

Background: Whether there is an association between sleep apnea (SA) and the risk of developing heart failure (HF) is unclear. Furthermore, it has never been established whether continuous positive airway pressure (CPAP) therapy can prevent development of HF. We aimed to investigate SA patients' risk of developing HF and the association of CPAP therapy.

Methods and Results: Using nationwide databases, the entire Danish population was followed from 2000 until 2012. Patients with SA receiving and not receiving CPAP therapy were identified and compared with the background population. The primary end point was first-time hospital contact for HF, and adjusted incidence rate ratios of HF were calculated using Poisson regression models. Among 4.9 million individuals included, 40 485 developed SA during the study period (median age: 53.4 years, 78.5% men), of whom 45.2% received CPAP therapy. Crude rates of HF were increased in all patients with SA relative to the background population. In the adjusted model, the incidence rate ratios of HF were increased in the untreated SA patients of all ages, compared with the background population. Comparing the CPAP-treated patients with SA with the untreated patients with SA showed significantly lower incidence rate ratios of HF among older patients.

Conclusions: In this nationwide cohort study, SA not treated with CPAP was associated with an increased risk of HF in patients of all ages. Use of CPAP therapy was associated with a lower risk of incident HF in patients >60 years of age, suggesting a protective effect of CPAP therapy in the elderly.

Sleep apnea (SA) is associated with increased risk of cardiovascular events, worsening of heart failure (HF), metabolic disturbances, and overall a reduced quality of life. [1][2][3][4] One study even found an increased risk of incident HF among men with obstructive sleep apnea (OSA). 5 Continuous positive airway pressure (CPAP) therapy is a documented treatment for SA because of symptom relief, but it may also reduce endothelial damage and improve blood pressure, glucose tolerance, and cardiac function in patients with HF. [6][7][8][9][10] However, a direct beneficial effect of CPAP therapy on cardiovascular outcomes has never been established in a controlled setting. 11,12 Furthermore, adaptive servoventilation has recently been shown to in fact increase mortality in a randomized setting of patients with HF with reduced ejection fraction and central SA, 13 causing some concern about the use of noninvasive ventilation in these patients. Although the latter study investigated central SA patients with established chronic HF with reduced ejection fraction, it calls for further studies because the interplay between SA, cardiac disease, and pressure therapies is not fully understood. 14 It may be speculated that in patients with OSA, CPAP therapy could prevent development of HF because of its positive effect on blood pressure and metabolic function. [8][9][10] We therefore aimed to investigate the relationship between SA, incident HF, and the association of CPAP therapy in an unselected real-life population.

Methods

The data, analytic methods, and study materials will not be made available to other researchers for purposes of reproducing the results or replicating the procedure. The raw data are available through Statistics Denmark on request.
Databases

All Danish inhabitants are provided with a unique personal identification number at birth or immigration that enables cross-linkage of information from nationwide databases. The Danish National Patient Registry holds information on hospital contacts, including diagnoses and procedural codes. Contacts are coded per the International Classification of Diseases (ICD): the 8th revision before 1994 (ICD-8) and the 10th revision thereafter (ICD-10). The National Prescription Registry holds information on the date and amount of all redeemed prescriptions, coded per the Anatomical Therapeutic Chemical (ATC) classification system. All ICD and ATC codes used are shown in Table S1. The Danish Civil Registration System provides data on date of birth, sex, immigration/emigration history, and vital status.

Clinical Perspective

What Is New?
• Among patients with sleep apnea, there is an increased risk of the development of heart failure across all ages.
• Among patients with sleep apnea, aged ≥60 years, the use of continuous positive airway pressure (versus no treatment) decreases the risk of the development of new-onset heart failure.

What Are the Clinical Implications?
• Clinicians should acknowledge growing evidence of the adverse cardiovascular effects of sleep apnea, and in particular, its association with heart failure.
• Early diagnosis and treatment of sleep apnea may potentially reduce associated morbidity and mortality, in addition to increasing the quality of life.

Study Population and Baseline Characteristics

All individuals (the entire Danish population) were included on January 1, 2000 and followed until December 31, 2012. Individuals immigrating within the study period were included at the date of immigration. Exclusion criteria were age <18 or >100 years and a prior diagnosis of SA or HF (Figure 1). The following characteristics were defined binarily as present or not present at the date of inclusion: myocardial infarction (MI), ischemic stroke, atrial fibrillation, peripheral arterial disease, chronic kidney disease, liver disease, chronic obstructive pulmonary disease, and cancer (excluding nonmelanoma skin cancer). These diagnoses have been previously validated with high positive predictive values. 15 Medication was defined as a prescription of any of the following medicines filled up to 180 days before the date of inclusion: statins, β-blockers, loop diuretics, antihypertensive drugs, antiplatelet agents, and NSAIDs. In order to include patients being treated for hypertension and diabetes mellitus outside of hospitals (eg, in general practice), we defined hypertension as combination treatment with at least 2 antihypertensive drugs and diabetes mellitus as treatment with a glucose-lowering drug, as has been done previously. 16

Definitions of SA and CPAP Therapy

We identified all patients in the study population registered with a diagnosis of SA. The SA diagnosis in the Danish National Patient Registry has previously been validated with a positive predictive value of 82%. 1 At the date of SA diagnosis, patients changed status from the background population to patients with SA. Procedural codes involving CPAP therapy were used to identify patients with SA who received CPAP therapy. To ensure adherence to therapy, 2 successive procedural codes were required: the first code representing distribution of CPAP equipment and the second code indicating redistribution and probable continuous use after a tryout period. Thus, the second procedural date defined initiation of CPAP therapy.
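To make the baseline covariate definitions above concrete, the following pandas sketch flags hypertension (at least 2 antihypertensive drugs filled within 180 days before inclusion) and diabetes mellitus (any glucose-lowering drug in the same window) from a prescription table. The table layout, column names, and the ATC prefixes shown are illustrative assumptions, not the registries' actual schema or the study's exact code lists (those are given in Table S1).

```python
import pandas as pd

# Hypothetical extracts of the prescription registry and study cohort
rx = pd.DataFrame({
    'pid':       [1, 1, 1, 2, 2],
    'atc':       ['C03CA01', 'C07AB02', 'C09AA02', 'A10BA02', 'C07AB02'],
    'fill_date': pd.to_datetime(['1999-09-01', '1999-11-15', '1999-12-01',
                                 '1999-10-10', '1998-01-01']),
})
cohort = pd.DataFrame({'pid': [1, 2],
                       'inclusion_date': pd.to_datetime(['2000-01-01'] * 2)})

ANTIHYPERTENSIVES = {'C02', 'C03', 'C07', 'C08', 'C09'}  # illustrative prefixes
GLUCOSE_LOWERING = {'A10'}

m = rx.merge(cohort, on='pid')
window = ((m['fill_date'] >= m['inclusion_date'] - pd.Timedelta(days=180))
          & (m['fill_date'] < m['inclusion_date']))
recent = m[window].copy()
recent['atc_class'] = recent['atc'].str[:3]

# Hypertension: >= 2 distinct antihypertensive drugs filled at baseline
n_aht = (recent[recent['atc_class'].isin(ANTIHYPERTENSIVES)]
         .groupby('pid')['atc'].nunique())
cohort['hypertension'] = cohort['pid'].map(n_aht).fillna(0) >= 2
# Diabetes: any glucose-lowering drug filled at baseline
has_glu = recent[recent['atc_class'].isin(GLUCOSE_LOWERING)]['pid'].unique()
cohort['diabetes'] = cohort['pid'].isin(has_glu)
print(cohort)
```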
Outcome

HF was defined as a first-time primary or secondary diagnosis registered at hospitalization or at an outpatient visit. The following diagnoses were included: hypertensive HF, cardiomyopathy, cardiac insufficiency, left-sided HF, lung edema, and unspecified HF (Table S1). Two separate studies have validated the HF diagnosis in the Danish National Patient Registry and found positive predictive values of 84% (77.9% for first-time HF) and 81%, respectively. 17,18

Statistical Analysis

Both SA diagnosis and CPAP therapy were treated as time-dependent variables, meaning that subjects contributed time at risk in the background population until the date of SA diagnosis. Individuals were followed until study end, emigration, death, or the event of interest, whichever came first. A statistical interaction between age and the use of CPAP therapy was found (P<0.0001); thus, the patients were stratified into 2 age groups: 18 to 60 and >60 years of age. This age stratification was applied to all analyses.

Multivariable Poisson regression models were fitted to estimate incidence rate ratios (IRRs) of incident HF between 3 groups: (1) the background population (reference), (2) untreated patients with SA, and (3) patients with CPAP-treated SA. We calculated crude rates as events per 1000 person-years at risk with respect to age and follow-up time. Furthermore, we estimated IRRs using 2 different models. Model 1 was adjusted for age, sex, calendar year, and comorbidities present at the date of inclusion (including MI, ischemic stroke, hypertension, atrial fibrillation, peripheral arterial disease, chronic obstructive pulmonary disease, cancer, diabetes mellitus, and the use of NSAIDs). Model 2 was a fully adjusted model with all covariates mentioned in Model 1 incorporated as time-dependent variables (eg, if subjects developed hypertension 5 years into the study period, they contributed 5 years of at-risk time to the model without hypertension, and for the rest of the study period the subjects were considered to have hypertension). The latter model was also used to estimate the association of CPAP therapy. For this analysis, the reference group was SA patients not receiving CPAP therapy; thus, we compared the use of CPAP therapy versus no CPAP therapy among patients with SA only. Interactions between use of CPAP therapy and predefined clinically relevant comorbidities were systematically checked and were found to be statistically significant for MI and hypertension; thus, we performed subgroup analyses of these groups (Table 1). We found no effect-modification for sex. Model assumptions, including proportional hazards, independent observations, the goodness-of-fit χ2 test, and homogeneity of variance, were found to be valid.

(Table 1 footnote) All analyses were performed using the fully adjusted time-dependent Model 2 with the background population as reference. CI indicates confidence interval; CPAP, continuous positive airway pressure; HT, hypertension; IRR, incidence rate ratio; MI, myocardial infarction; SA, sleep apnea.

Sensitivity Analyses

Two sensitivity analyses were performed. First, we altered the primary outcome definition to include both a HF diagnosis and a filled prescription of any dose of loop diuretics between 45 days before and 45 days after the HF diagnosis, as has been done previously. 16,18 Loop diuretics are considered first-line symptomatic treatment; thus, this restricted HF outcome was intended to include only potentially symptomatic patients with HF.
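The core estimation step described above can be sketched as a Poisson regression with a log person-time offset fitted to person-period data, in which follow-up has been split so that the time-dependent exposure group and covariates are constant within each record; exponentiated coefficients are then IRRs relative to the background population. The simulated data, rates, and variable names below are illustrative only and do not reproduce the study's models.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical person-period data (one row per constant-exposure interval)
rng = np.random.default_rng(42)
n = 5000
d = pd.DataFrame({
    'group': rng.choice(['background', 'sa_untreated', 'sa_cpap'],
                        n, p=[0.90, 0.06, 0.04]),
    'age_band': rng.choice(['18-60', '>60'], n),
    'sex': rng.choice(['F', 'M'], n),
    'pyrs': rng.uniform(0.5, 13.0, n),   # person-years in this interval
})
base_rate = {'background': 0.004, 'sa_untreated': 0.008, 'sa_cpap': 0.005}
d['events'] = rng.poisson(d['group'].map(base_rate) * d['pyrs'])

# Crude rates per 1000 person-years at risk, by exposure group
agg = d.groupby('group')[['events', 'pyrs']].sum()
print((1000 * agg['events'] / agg['pyrs']).round(2))

# Poisson regression with log person-time offset; exponentiated
# coefficients are IRRs versus the background population
fit = smf.glm('events ~ C(group, Treatment("background")) + age_band + sex',
              data=d, family=sm.families.Poisson(),
              offset=np.log(d['pyrs'])).fit()
print(np.exp(fit.params))        # IRRs
print(np.exp(fit.conf_int()))    # 95% CIs
```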
In the second analysis, we excluded all patients with a filled prescription of any dose of loop diuretics up to 180 days before the date of inclusion, thereby excluding patients possibly in treatment for unregistered HF. Both analyses were performed using the adjusted time-dependent Model 2 (Table 2).

Ethical Considerations

Statistics Denmark provided access to the databases mentioned earlier, and permission to use data from the registries was granted by the Danish Data Protection Agency (Ref. j.no. 2007-58-0015/local j.nr. GEH-2014-015, I-suite no. 02733). In Denmark, registry-based studies do not require ethical approval.

Results

We identified 4.9 million Danish adults who were followed for up to 13 years (Figure 1). During the study period, 40 485 patients (0.8%) received a first-time diagnosis of SA (86% unspecified SA, 13% obstructive SA, and <1% other forms of SA). SA patients were followed for a median time of 3

Risk of Incident HF and Association of CPAP Therapy

Crude rates per 1000 person-years at risk were calculated for the background population and SA patients stratified by CPAP therapy (Figure 2 and Table 4). Compared with the background population, IRRs in patients with SA not receiving CPAP therapy (from the adjusted time-dependent Model 2) were increased in both age groups, but only significantly, and most pronouncedly, among the patients >60 years of age. We did not find a statistically significant difference in IRRs between the CPAP-treated patients with SA and the background population (Table 4). In patients who had SA and who were >60 years of age, CPAP therapy was associated with a significantly lower IRR of HF compared with patients with SA of the same age who did not receive CPAP therapy. We found the same nonsignificant tendency in patients with SA between 18 and 60 years of age (Table 4).

Subgroup and Sensitivity Analyses

The subgroup analyses were performed to evaluate effect-modification by MI and hypertension on the risk of developing HF. The results of the subgroup analyses were comparable to the main analysis (Table 1). Likewise, both sensitivity analyses (restriction of the HF diagnosis and exclusion of potentially nonregistered HF patients) were comparable to the main results or nonsignificant because there were few events (Table 2).

Discussion

This study has 2 main findings. First, SA not treated with CPAP was associated with an increased risk of HF in patients of all ages, but only significantly in patients >60 years of age. Second, use of CPAP therapy was associated with a lower risk of incident HF in patients >60 years of age.

Risk of HF

There are different theories concerning the role of SA in the development of HF. First, obstructive SA generates a negative intrathoracic pressure, increasing both cardiac preload and afterload. Second, consecutive intermittent hypoxic periods increase oxidative stress and inflammation markers, which in turn can damage the endothelial walls. Third, disrupted sleep increases sympathetic nervous activity, leading to an increase in blood pressure and heart rate, which demands cardiac activity at a time when the heart should be regenerating. 2,19,20 In addition, several studies have shown that SA is associated with hypertension, coronary artery disease, arrhythmias, obesity, diabetes mellitus, and metabolic disturbances, all known to increase the risk of HF. 2,3,21,22 Hence, we applied many of these comorbidities in our regression models, including continuous assessment of their presence.
Use of NSAIDs has been associated with an increased risk of HF and was therefore included in the model as well. 23,24 Although lower estimates were found (Model 2 compared with Model 1, Table 4), the IRRs were still significantly increased; thus, SA seems to be an independent risk factor for developing HF. Nevertheless, the pathophysiology of HF and the causal pathway and interplay between SA and HF are heterogeneous, and other mechanisms associated with SA could possibly be involved. The relationship between SA and HF has been investigated in several studies. In a prospective study, nearly 4500 patients with OSA were followed, and no significant association between the severity of OSA and HF was found. 5 However, the study was relatively small.

Effect of CPAP Therapy

Because it provides symptom relief, CPAP therapy is first-line therapy for SA, but it could theoretically also reduce the negative cardiovascular impacts of SA, as it has been found to improve glucose tolerance, reduce endothelial damage, lower blood pressure, and improve cardiac function. 6-10 Consequently, one could hypothesize that CPAP therapy protects patients from developing cardiovascular disease such as HF. The effect of CPAP therapy on cardiovascular disease has been investigated to some extent, but with conflicting results. The SAVE (Sleep Apnea Cardiovascular Endpoints) trial found no significant difference in cardiovascular events between a "usual-care group" and a "CPAP therapy+usual-care group." 11 In line with this, a randomized controlled study of 725 patients with OSA was unable to show a significant effect of CPAP therapy on the development of HF. 12 In contrast to these neutral findings, the previously mentioned matched study by Marin et al found a potential beneficial effect of CPAP therapy on cardiovascular complications in patients with severe OSA. 25 Likewise, our study, in what is to our knowledge the largest cohort of SA patients to date, showed that CPAP therapy was associated with a lower risk of developing HF in patients >60 years of age. A potential difference in adherence to medical treatment in general could be an explanation for the observed difference between patients with SA who did and did not receive CPAP therapy, assuming that a patient who accepts CPAP therapy might also be more adherent to pharmacotherapy, for example. In our cohort, the CPAP-treated patients with SA received more medication on the date of inclusion compared with the patients with SA not receiving CPAP therapy. However, these patients also had a higher comorbidity burden (included in the model), which could explain the increase in medication (Table 3). These differences cannot be statistically confirmed because of the time-dependent study design and the absence of independent observations, though the observed differences seem substantial. Adherence to medical treatment in patients with SA has been investigated in a cohort of 2158 patients with severe OSA; in this study the authors found high medication adherence among all patients and, remarkably, no difference in medication adherence between patients adherent and nonadherent to CPAP therapy. 27 This supports our finding that CPAP therapy may play an active role in reducing the risk of developing HF in SA patients.
Concerningly, recently published data from the SERVE-HF (Servoventilation in Patients with Heart Failure) trial showed that adaptive servoventilation increased overall mortality risk in 1325 patients with chronic HF with reduced ejection fraction and predominantly central SA. 13 As a result, adaptive servoventilation is not recommended in patients with HF and predominantly central SA. 28 However, the safety concern that arose from this study should not be applied to patients with SA without HF receiving CPAP therapy, because adaptive servoventilation and CPAP therapy are completely different modes of applying positive airway pressure, 29,30 and the increased mortality was only demonstrated in patients already diagnosed with HF with reduced ejection fraction. 13 Interestingly, the US Preventive Services Task Force recently recommended against screening for OSA because of insufficient evidence on the benefits and harms of a screening program and, especially, because of the lack of evidence concerning the potential beneficial effect of CPAP therapy on hard outcomes. 31 Our study adds important data to the discussion concerning the cardiovascular risk of SA as well as the potential favorable effect of CPAP therapy. Although our study was observational by design and thus only hypothesis-generating, future studies should focus on the interplay between HF and SA, including the type of SA therapy.

Strengths and Limitations

The inclusion of a large, unselected, and nationwide cohort of patients with SA, independent of age, sex, socioeconomic status, access to health services, and ethnicity, is the major strength of this study. The main limitations are the observational study design, the use of administrative databases for all diagnoses, and the inability to eliminate the possibility of unmeasured confounders. We did not have information on body mass index or smoking status, both of which could be relevant confounders and may be overrepresented among the patients with SA. However, we had information on the mediators between smoking/overweight and HF (eg, arterial disease, diabetes mellitus, chronic obstructive pulmonary disease, and MI). A propensity-matched analysis could have strengthened our results, but it was not possible because of our time-dependent study design. We lacked information on the severity of SA, and 86% of the diagnoses were unspecified. Likewise, 57% of the HF diagnoses were nonspecific, which prevented us from making any assumptions about the direct causal pathway between SA and HF. Positive predictive values of SA and HF diagnoses in the Danish National Patient Registry are high; however, sensitivities are lower, which could have led to an underestimation of our results because of type 2 errors. 1,18 No significant association was found between the use of CPAP therapy and HF among patients who have SA and are aged between 18 and 60 years. This could simply be explained by a lack of power because of fewer events among the younger patients. However, another explanation could be that clinical HF has a multifactorial pathway, with factors such as hypertension and hyperlipidemia being less prevalent among the younger patients. Some studies show a potential beneficial effect of CPAP therapy on hypertension and lipid profile, which could explain why the association between CPAP therapy and reduced risk of HF is less pronounced among the younger patients. 8,9
Finally, the pathophysiology of the development of SA may differ between younger and older individuals (eg, body mass index might be an important factor in older compared with younger patients); consequently, any effect of CPAP therapy may also differ according to age. We found effect modification by MI and hypertension; therefore, for comparison with the main results, we performed subgroup analyses of patients with previous MI, patients with previous hypertension, and patients without one or the other (Table 1). The results were all similar to the main analyses. However, one subgroup (patients with SA and previous MI, not receiving CPAP therapy, and 18-60 years of age) had a decreased IRR of HF. Possible explanations could be the small number of events (low power), healthy-survivor bias (early MI not resulting in HF, in contrast to the high risk of MI causing immediate HF in the elderly 32), or simply a random finding. We had no information on the reasons for initiating CPAP therapy, nor did we know the reasons for discontinuing CPAP therapy, which is why we required 2 consecutive procedural codes for CPAP therapy to ensure adherence. Also, we lacked information on daily compliance with CPAP therapy, as use <4 hours per night is known to have only marginal effects.

Conclusions

In this nationwide cohort study of patients with SA, SA not treated with CPAP was associated with an increased risk of HF in patients of all ages, but only significantly, and most markedly, in patients >60 years of age. Use of CPAP therapy was associated with a lower risk of incident HF in patients >60 years of age, which suggests a protective effect of CPAP therapy in this group.

Sources of Funding

This study was supported by a grant from the Danish Heart Foundation, Copenhagen, Denmark (15-R99-A5912-22916).

Disclosures

Gislason is supported by an unrestricted clinical research scholarship from the Novo Nordisk Foundation and has received research grants from Bayer, Pfizer, AstraZeneca, Boehringer-Ingelheim, and Bristol-Myers Squibb and speaker honoraria from Pfizer, AstraZeneca, and BMS. Lamberts has received speaker honoraria from Bristol-Myers Squibb and Bayer. Nielsen has received funding for educational and research tasks from Resmed and has participated in advisory boards for Novartis Pharma. Tønnesen has been a member of the steering committee for the annual sleep scientific meeting in Denmark arranged by Maribo Medico A/S, and had the registration fee and hotel costs waived. The remaining authors have no disclosures to report.
2018-07-03T23:06:13.227Z
2018-06-22T00:00:00.000
{ "year": 2018, "sha1": "48ff4aaf7c1b8d0dc5c268d059b1bb7317d5b731", "oa_license": "CCBYNC", "oa_url": "https://www.ahajournals.org/doi/pdf/10.1161/JAHA.118.008684", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "48ff4aaf7c1b8d0dc5c268d059b1bb7317d5b731", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
238425853
pes2o/s2orc
v3-fos-license
Risk assessment of adherence to non-pharmaceutical measures toward COVID-19 among residents of Mashhad in the North-East of Iran during the awful wave of the epidemic

Background: Since coronavirus disease 2019 (COVID-19) rampaged in Iran, three waves of the epidemic have occurred. Objective: In the present study, two issues are considered. First, what proportion of the people adhere to the mitigation approaches toward the disease? Second, what are the reasons for disobeying these rules? Methods: A cross-sectional, population-based phone survey was conducted among the population aged over 16 years in Mashhad between November 5 and December 1, 2020. A valid and reliable knowledge, attitude, and performance (KAP) questionnaire was used, and logistic regression was performed with STATA 14. Results: The final sample size was 776; 90.59%, 89.8%, and 48.1% of the participants had sufficient knowledge, attitude, and practice, respectively; 20.1% of the participants did not wear masks; nearly half of them visited traditional healers for prevention and cure; 97.8% of them believed in the efficacy of the vaccine and stated that they would take it if it were distributed. Among the sociodemographic factors, only the unemployed had low adherence to the preventive approach. For 51.7% of participants, the main worry was the weak economic situation, and 69% reported that their jobs and expenditures were adversely affected. The odds ratio (OR) for a positive attitude decreased from 4.64 to 3.22, and that for good performance from 5.64 to 5.43, after adjusting for the economic, knowledge, and perception factors. Conclusion: Despite all the health rules and the probable global access to COVID-19 vaccines (COVAX), it seems that the most effective way to reverse this horrific wave and its economic consequences is to improve the economy and livelihood of the society.

Introduction

Coronavirus disease 2019 is caused by a virus that belongs to a larger family of ribonucleic acid (RNA) viruses which cause various types of illness. [1] The main symptoms of patients affected by COVID-19 have been reported as fever, dry cough, fatigue, myalgia, shortness of breath, and dyspnoea. [2,3] Rapid transmission is characteristic of COVID-19; close contact with an infected person is the commonest route of transmission. [4-6] However, the facts on the disease are still evolving. [7] The WHO has established uniform guidelines for tackling the pandemic. [8] The non-pharmaceutical interventions (NPI) to control COVID-19 are personal hygiene, public health measures such as promoting and facilitating physical distancing, advising the population to voluntarily self-isolate if infected by COVID-19, limiting the size of indoor and outdoor gatherings, promoting teleworking where possible, school closure, and environmental measures. [9] Public adherence in adopting healthy practices and responsive behavior differs around the world. The disease was controlled very well in the city where it was first reported (Wuhan, China) and in some other countries, but in most countries, such as Iran, a third or fourth wave has occurred. [9,10] As of April 15, 2021, when the present manuscript was being written, the first report of confirmed cases in Iran had been announced on 20th February 2020; three waves of confirmed cases had since occurred, and the fourth peak was ongoing. [10] The first peak appeared at the end of March 2020, when our country passed the holy days of the new solar year.
The second wave appeared after Ramadan (the holy month in Islam) at the beginning of June; [11] the third peak, which started following the summer vacation and was concurrent with the start of the school season in the last days of September, was bigger and longer than the two former peaks. [10] After a three-month epidemic relaxation, the fourth peak has been rising again after the holidays of the new solar year. [10,12] During these third and fourth peaks, however, several COVID-19 vaccines were introduced with the promise of global access, but global access to vaccines is taking a long time. So, the best way to protect against this infection is through preventive efforts, and the success of these efforts depends largely on public behaviours. The public's knowledge of and attitude toward COVID-19 are important in tackling this pandemic. [13,14] In addition to sociodemographic factors, several factors such as underlying disease, major concerns, job or economic issues, and awareness and perspectives regarding COVID-19 prevention can impede public health responses. After more than 10 months of the epidemic, at the beginning of the third wave, there was an essential need to investigate adherence to the preventive measures against COVID-19. In this study, we aimed to describe the knowledge, attitudes, and practices (KAP), and some related factors of adherence, regarding COVID-19 among residents of Mashhad during the third wave of this epidemic.

Subjects and Methods

This cross-sectional study was conducted from November 5 to December 1, 2020 in Mashhad, the capital of Razavi Khorasan Province and a pilgrimage city located in the Northeastern part of Iran in the vicinity of Afghanistan and Turkmenistan. Mashhad's population is 3,001,184 (2016 census), making it the second-most-populous city in Iran. The Mashhad University of Medical Sciences (MUMS) has an integrated information system for providing health services to covered households. This system holds demographic information and phone numbers of households; therefore, a population-based survey was feasible. We examined the KAP and risk factors of a simple random sample of adults (16 years or more) regarding COVID-19. The total sample size was calculated as 800 persons, with a 95% confidence level, a 5% margin of error, and 18.8% good adherence toward COVID-19 according to a related KAP study. [15] To this sample size, we added 80% for the assessment of eight main confounders and 46% for the expected participation in a household telephone survey. [16] The KAP survey instrument was based on three studies on KAP toward COVID-19. [15,17,18] The questionnaire was translated and back-translated from English to Persian and vice versa to preserve the meaning of the content. The content validity of the questionnaire was assessed by 15 expert panellists. Two questions with content validity ratios (CVR) below 0.49 were eliminated. [19] In a pilot study among 45 of our participants, the reliability of the knowledge section toward COVID-19 showed an acceptable level of internal consistency (Cronbach's α = 0.91). [16] The final questionnaire had 40 items in four parts. The sociodemographic characteristics, with six items, came at the beginning of the interview. The KAP questions covered modes of transmission, clinical symptoms, treatment, risk groups, isolation, prevention, and control; these parts had 14, 9, and 11 questions, respectively. The responses to these questions were on a Yes/No/I don't know basis.
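The scoring and classification of these answers, described in the following paragraph, reduces to a few lines of code. The sketch below is illustrative only: the function and field names are hypothetical, while the cutoffs (≥12, ≥6, ≥9) are those reported.

```python
# Minimal sketch of the KAP scoring rule described below; function and variable
# names are hypothetical, the cutoffs (>=12, >=6, >=9) are the reported ones.

def score_part(answers, key):
    """Count answers matching the answer key; wrong or 'I don't know' scores 0."""
    return sum(1 for a, k in zip(answers, key) if a == k)

def classify(knowledge, attitude, practice):
    """Sufficiency flags for the three KAP parts (maximums 14, 9, and 11)."""
    return {
        "knowledge_sufficient": knowledge >= 12,
        "attitude_sufficient": attitude >= 6,
        "practice_sufficient": practice >= 9,
    }

# Example: a respondent scoring 13/14, 7/9, and 8/11 on the three parts.
print(classify(13, 7, 8))
```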
Correct responses were assigned 1 point, and incorrect or unknown responses were scored 0. The minimum score for each KAP part was 0, and the maximum scores were 14, 9, and 11, respectively. We considered more than 80% of the total score toward COVID-19 as indicating sufficient knowledge, attitude, and practice; these cutoffs were ≥12, ≥6, and ≥9, respectively. [20] This household survey was conducted by telephonic interviews after informed consent for recruitment into the study was given. The length of the interview was approximately 10-15 min. Follow-up was attempted three times if the selected participants were not available. In order to maintain the study power, random samples were substituted for unanswered cases. Univariable and multivariable logistic regression analyses were applied to determine the factors associated with adherence toward COVID-19. In the final model, statistically significant factors (P value < 0.05) were retained. Data were analysed with STATA 14.

Results

In this investigation, 773 questionnaires were completed, and the response rate was 40.6% [Diagram 1]. Table 1 presents the knowledge assessment regarding COVID-19 prevention. In 90.59% of the participants, the knowledge score was in the range of sufficient knowledge to manage prevention of the disease. The majority of the participants had correct awareness of the clinical features and modes of spread of the infection, but more than half of them (61.9%) answered incorrectly about the incubation period, and 38.6% of the answers about the treatment of COVID-19 were incorrect. About 89.8% of the responders had a positive attitude toward mitigation approaches against COVID-19. More than 95% of the participants agreed that handwashing, wearing face masks, and disinfection of vegetables and fruits must be done to prevent the disease. The proportion agreeing with the closure of public places such as parks, gyms, salons, mosques, and others was fairly high (85%). The participants' views on the usefulness of herbal prevention or treatment for the disease were split about fifty-fifty. About 75.4% of our sample believed that the community had sufficient knowledge of mitigation measures, and about 50% of them stated that the preventive approach of the government and community was insufficient. Only 2.2% of the participants disagreed with the preventive effect of the vaccine against COVID-19 and doubted whether they would use it if it were distributed [Table 2]. According to Table 3, handwashing was adopted by 99.5% of the participants as a common approach to prevent the disease. The proportion practising other preventive approaches, such as disinfecting vegetables and fruits and avoiding public transport without face masks, travelling, and going to crowded places and parties, was noticeable. Vitamin supplements and traditional medicines were reported as preventive consumption by 57.9% and 46.7% of the responders, respectively. Moreover, 41.1% of the participants visited traditional healers or used herbal medicines, while 95.8% of them visited physicians if they had suspicious symptoms of the disease. In addition, two independent questions were asked to shed light on the conditions for better adherence in guarding against the disease: 1. What is your main concern recently? About 368 (47.4%) of the responses were COVID-19, 401 (51.7%) were the economic situation, and only 7 (0.9%) of the responses were neutral. 2. What happened to your job, and which economic issues followed the pandemic?
Reduced working hours (n=160 [20.6%]), salary reduction or job loss (n=74 [9.9%]), and exponentially increasing economic inflation were reported. Sociodemographic factors, history of underlying disease, infection history of COVID-19, major current anxiety, job or economic issues, and level of awareness of mitigation approaches were entered into univariate and then multivariate logistic regression to determine the factors associated with gaps in perception among the population [Table 4]. In the same way, we modelled these determinants to find the factors associated with good practice against the disease, but with the attitude score replaced by the knowledge score [Table 5]. The attitude level toward the non-pharmaceutical approaches was approximately equal across age, gender, education, nationality, and experience of COVID-19 in relatives. The unemployed and soldiers had a more negative perception of the preventive approach than others; the participants who stated that the economic situation was their major current worry had a less positive attitude than those who replied that COVID-19 was the main problem; and the people with reduced work time, decreased salary, or lost jobs had a more negative attitude than the participants who said the pandemic did not affect their jobs (P < 0.001). In contrast, the participants with a background of underlying disease had adequate knowledge, and those who complained about the increase in prices had a significantly more positive attitude toward battling the disease. In the final model, the adjusted OR for soldiers, unemployment, economic concern, reduced work time, and decreased salary or lost jobs was 0.05 (0.005-0.58) [Table 5].
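For readers who want to reproduce this kind of univariate-then-multivariate screening, the sketch below shows one conventional way to do it. It is a minimal sketch with simulated data and hypothetical variable names, not the authors' STATA code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical respondent-level data mirroring the modelling described above:
# a binary "good practice" outcome and candidate predictors.
rng = np.random.default_rng(0)
n = 773
df = pd.DataFrame({
    "good_practice": rng.integers(0, 2, n),
    "unemployed": rng.integers(0, 2, n),
    "economic_worry": rng.integers(0, 2, n),   # main worry is economy (vs COVID-19)
    "job_affected": rng.integers(0, 2, n),
    "sufficient_knowledge": rng.integers(0, 2, n),
    "age": rng.integers(16, 80, n),
})

# Univariate screen: fit each predictor alone, keep those with P < 0.05.
candidates = ["unemployed", "economic_worry", "job_affected",
              "sufficient_knowledge", "age"]
kept = [v for v in candidates
        if smf.logit(f"good_practice ~ {v}", df).fit(disp=0).pvalues[v] < 0.05]

# Multivariate model on the retained predictors; exponentiate into ORs.
if kept:
    final = smf.logit("good_practice ~ " + " + ".join(kept), df).fit(disp=0)
    print(np.exp(final.params).rename("OR"))
```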
Discussion

During the outbreak of COVID-19 in Iran, two waves of the epidemic occurred, with about 3,000 cases per day and lasting less than 2 weeks each. But the third wave of the COVID-19 outbreak has two awful characteristics: it began in September and was still going on at the start of this study (November), with more than 14,000 cases per day. [10] This situation can be largely related to disregard of the preventive protocols for the disease. [13] This study was conducted in Mashhad, a pilgrimage city, with the potential for an increased incidence of this disease. What proportion of the people adhere to the mitigation approaches toward the disease? And what are the reasons for disobeying the rules of these preventive protocols? These are the two issues we pursue in this investigation. The epidemic course of the disease has become long, and it can adversely affect all dimensions of people's lives. So, socioeconomic factors, underlying disease, major concerns, job or economic issues, and awareness and perspective regarding COVID-19 prevention were investigated in relation to obeying preventive measures toward COVID-19 among residents of Mashhad between November 5 and December 1, 2020. The samples of the household phone survey were randomly extracted from an integrated information system owned by MUMS. The minimum information in this database is demographic and contact information. Such information is scarcely available, so similar studies have stated that the only feasible way of data collection during the pandemic is web-based or through virtual networks. [9,15,18-22] The distribution of these possibilities varies among the population, and the generalizability of such results is doubtful. In our study, only 7 out of 1,912 subjects did not have a contact number. So, we can claim that this study is a population-based investigation. However, less than half of the total contacts participated in the investigation, so the results of this study should be interpreted with caution. As we expected after 10 months of a spreading epidemic, the results on adherence to the individual response to COVID-19 show that the participants had a good level of knowledge (90.5%) and attitude (89.8%), but only 48.1% of them had a good level of performance; 20% of the participants refused to wear masks regularly. The attitude and practice sections of our questionnaire covered individual, national, and community responses to COVID-19. There are conflicting views in the scientific and general community about traditional healing. Many people, particularly in developing countries, prefer traditional medicines. [23] The claimed benefits of these traditional medicines lack scientific evidence; there are also many profit motives, and they sometimes lead to false certainty and increase the spread or severity of the disease. [23] In our study, about half of the participants believed in and visited traditional healers for the prevention and cure of the disease. A web-based survey carried out in Iran during the first week of March 2020 indicated that 40% of the participants used herbal products to guard against the infection; [18] despite the time lag between the two surveys, that result is very close to our outcome. Another finding of this study was lower endorsement of, and adherence to, the precautions against COVID-19 among people whose close relatives had had the disease. In a detailed analysis, this group of participants accepted and used traditional medicine in the treatment of COVID-19 more than those without the disease among close relatives. However, 95.8% of the participants visited doctors when suspecting this illness and considered them a reliable source of information. In comparison with the study of Kakemam et al., [22] this correct perception is growing (65.4% vs. 95.8%). It seems that people are hesitant in decision-making and, to get rid of this problem, they used both scientific and traditional approaches. About 97.8% of the participants believed in the efficacy of the vaccine and said they would take it if it were distributed. Contradictory news about the effect and composition of the vaccine has been published in the media, but the participants in our sample had good literacy about it (97.8%). This optimistic perception is higher than in the studies conducted at the beginning of the epidemic in Iran. [18,22] On the other hand, this increased vaccine acceptance may be due to concerns about the growing incidence of the disease; the participants may also be tired of the prolongation of, and the limitations caused by, the outbreak. The WHO has established uniform guidelines for national and community responses to COVID-19 and keeps updating them for tackling the pandemic, [8] although each country follows them according to its own feasibility. The policymakers of Iran decided to implement smart distancing for restarting economic activities. [11] About 51.7% of the participants had no positive perception of reducing public transportation for mitigation of the infection, yet 98% of the participants wore face masks in these places. These findings show that more than 50% of our sample needed these community facilities, and they often tried to compensate for the risk of infection by observing individual precautions.
The main worry of more than half of the participants (51.7%) was the weak economic situation, and the jobs and expenditures of 69% of them were adversely affected. Among the sociodemographic factors, the unemployed and soldiers had a significantly less optimistic perspective on guarding against this infection. Economic concerns and job issues led to negative attitudes and risky behaviour toward precautions against the infection. The economic harm is obvious and indicates that the world has experienced a huge economic shock. In addition, the United States has imposed unfair sanctions on Iran; this human crime caused an escalation of the economic woes brought by the COVID-19 outbreak. However, underlying disease, sufficient knowledge, and a positive attitude enhanced, in the univariate model, the perception of and adherence to mitigation approaches. After adjusting for job and economic anxieties and their consequences, the positive effect of underlying disease diminished, and there was no significant gap in attitude or action toward the ways of preventing the infection between those who had chronic diseases and those who did not. Similar studies have addressed the effects of work status and monthly income on changing attitudes and behaviour, [9,20] and they are in line with our findings. A distinguishing feature of this study is that it was conducted a long time after the start of the epidemic; it is thus more likely to identify the true impact of the determinants of adherence to the principles of disease prevention, with the immediate effects of interventions reduced. Despite good knowledge and an optimistic attitude, less than 50% of the participants had a good practical response to the disease. Despite all the health rules and the probable global access to COVID-19 vaccines (COVAX), the loss of livelihoods under the pressure of deprivation caused by the COVID-19 epidemic and the cruel foreign sanctions has led to an increased risk of the disease. It seems that the most effective way to reverse this unprecedented and horrific wave of the disease and its economic consequences is to improve the economic and living conditions of society.
2021-10-08T13:37:52.202Z
2021-09-01T00:00:00.000
{ "year": 2021, "sha1": "0028be7775c887f4e325e46ac73b564ed40ba5c5", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/jfmpc.jfmpc_130_21", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8e56cea4a63a2575a079180765734879d47f7f49", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
53363372
pes2o/s2orc
v3-fos-license
Thermodynamics as a nonequilibrium path integral

Thermodynamics is a well-developed tool to study systems in equilibrium, but no such general framework is available for non-equilibrium processes. The only hope for a quantitative description is to fall back upon the equilibrium language, as is often done in biology. This gap is bridged by the work theorem. By using this theorem we show that the Barkhausen-type non-equilibrium noise in a process, repeated many times, can be combined to construct a special matrix ${\cal S}$ whose principal eigenvector provides the equilibrium distribution. For an interacting system ${\cal S}$, and hence the equilibrium distribution, can be obtained from the free case without any requirement of equilibrium.

Introduction

A system in thermodynamic equilibrium has no memory of its past. Consequently there is no leading role for time in the ensemble-based statistical mechanics, except the subservient one of maintaining equilibrium among the internal degrees of freedom and with external sources. This wisdom gets exploited in dynamics-based algorithms like Monte Carlo, molecular dynamics, and stochastic quantization, to name a few, to attain equilibrium from any arbitrary state, albeit in infinite time. Even a thermodynamic process involving changes in parameters is an infinite sequence of equilibrium states, and is therefore infinitely slow. A finite-duration process, not destined to equilibrate at every instant of time, maintains a memory of the initial conditions or a short-time correlation of states. The biased sampling of the phase space keeps these processes outside the realm of statistical mechanics and thermodynamics. In this equilibrium-nonequilibrium dichotomy, a work theorem [1,2,4,5,6] attempts to bridge the gap by providing a scheme for getting the thermodynamic free energy difference from a properly weighted nonequilibrium path integral [4,5]. We show in this paper that purely nonequilibrium measurements of work give an operator S, defined on the phase or configuration space, whose normalized principal right eigenvector is the equilibrium probability distribution. Our result is valid for any number of parameters, including temperature and interaction. With this extension we can get the equilibrium distribution by constructing a matrix S connecting any two allowed states of the system without any reference to equilibrium anywhere, thereby completely blurring the boundary between equilibrium and nonequilibrium. This finds direct application in out-of-equilibrium phenomena like hysteresis. Barkhausen noise is an example of the nonequilibrium response of a ferromagnet as the magnetic field is changed at a given rate [8,9]. By measuring the voltage induced in a secondary coil as the current in the primary coil wound around a ferromagnet is changed, one gets the time variation of the magnetization. The noisy signal one gets is not unique but stochastic in nature, reflecting the fluctuating microscopic response to the external field. Such signals have been analyzed in the past to extract information like avalanche statistics, material characteristics, etc. Our results find a different use of the Barkhausen noise: to construct the S matrix. Similar constructions for other cases, like protein or DNA dynamics in vivo, pulling of polymers in single-molecule experiments, etc., call for a new class of experiments to monitor the noise signals during these events. This paper is organized as follows: In Sec.
2, we recapitulate the work theorem, introduce the paths, and discuss the connection between the work theorem and the histogram transformation of equilibrium statistical mechanics. In Sec. 3 we give a simple, general, dynamics-independent proof of the relation between the equilibrium probability distribution and the work done along nonequilibrium paths. This relation in some form is already known [5,4], but our derivation allows us to generalize the result to other cases involving temperature, interactions, etc. Sec. 4 deals with the main result of this paper: there we prove the eigenvalue equation for S, and a few examples are also given. How to get the operator S directly from experimental measurements of Barkhausen noise is also discussed there. Numerical verifications of some of the results are presented in Sec. 5 by taking the 2D Ising model as an example. We summarize in Sec. 6.

Work theorem

Consider a classical system described by a Hamiltonian H(Λ, x), where Λ is an external field that couples to its conjugate, a microscopically defined quantity x. The thermodynamic state is specified by temperature T and field Λ. Let us start with the system at Λ = 0 in thermal equilibrium at temperature T. The external field Λ is changed in some given way from 0 to a final value λ in a finite time τ, or in a finite number of steps n, letting the system evolve in contact with the heat reservoir. No attempt is made to ensure equilibrium during the process. The variation of x along the nonequilibrium path (x(t) vs t) and the instantaneous final (boundary of the path) value of x, $x_b$, when the field reaches λ, are noted. The work done along a nonequilibrium path by the external source (as in ref. [2]) is

$$W = \int_0^\tau \frac{\partial H(\Lambda, x)}{\partial \Lambda}\, \dot\Lambda\, dt \quad (1)$$

in time τ, and it varies from path to path. The difference between two definitions of work in the context of the work theorem, one used in ref. [1] and the other in ref. [2], is discussed in ref. [3]. For the sake of notational simplicity we choose

$$H(\Lambda, x) = H_0 - \Lambda x, \quad (2)$$

where $H_0$ is the energy for Λ = 0. There is not much loss of generality in choosing the form of Eq. 2, because Λ and x refer to any pair of conjugate variables, so that x itself need not be a linear function of the internal coordinates. As an example, in an interacting spin problem in a magnetic field h (≡ Λ), $H = H_0 - h\sum_k s_k$, where $s_k$ is the spin variable at a site denoted by k, with $x = \sum_k s_k$. Often Λ can be taken as the switching parameter to turn on a perturbation or interaction in a Hamiltonian. The work theorem [1,2] provides the equilibrium free energy difference ΔF between the two states with Λ = 0 and Λ = λ, both at inverse temperature β = 1/k_B T (k_B is the Boltzmann constant), from the nonequilibrium work done as

$$\exp(-\beta\, \Delta F) = \langle \exp(-\beta W) \rangle, \quad (3)$$

where ⟨...⟩ denotes the average over all possible paths.

Paths: equilibrium and nonequilibrium

We are using here a description of a state by the intensive parameters which actually characterize the surroundings. In equilibrium, any system is expected to have the same values of the intensive parameters as the environment. A change in any of the parameters, say Λ, from λ_0 to λ, would require heat and/or energy transfer. The work done on or by the system is then determined by the change in the free energies, independent of the path of variation of the intensive parameters. This is expressed as

$$W = \Delta F = -\int_{\lambda_0}^{\lambda} x_{\rm eq}(\Lambda)\, d\Lambda, \quad (4)$$

where $\Delta F = F(\beta, \lambda) - F(\beta, \lambda_0)$. Here $x_{\rm eq}(\Lambda) = \int x\, P_\Lambda(x)\, dx$ is the equilibrium average at the instantaneous values of the intensive parameters, and $P_\Lambda(x)$ is the corresponding equilibrium probability distribution of x.
This follows from the identification of the equilibrium value of x as $x_{\rm eq} = -\partial F/\partial\Lambda$, in contrast to the conjugate-ensemble definition $\Lambda = \partial \bar F/\partial x$, where $\bar F(\beta, x)$ is the fixed-x ensemble free energy. For convenience, let us discretize the integrals. For example, for Λ ∈ [λ_0, λ], we have a sequence (Λ_0, Λ_1, ... Λ_n = λ), and the continuum is recovered by taking the usual limit of n → ∞ with max{ΔΛ_i = Λ_{i+1} − Λ_i} → 0. The work done can be rewritten as

$$W = -\sum_i \sum_x x\, P_{\Lambda_i}(x)\, \Delta\Lambda_i. \quad (5)$$

By interchanging the sums over x and Λ, we define (i) a sequence {x_i | i = 0, ...n} of instantaneous values, and (ii) a sequence-dependent work done, $W\{x_i\} = -\sum_i x_i\, \Delta\Lambda_i$, to reinterpret Eq. (5) as an average over these x_i's. Therefore,

$$\Delta F = \sum_{\{x_i\}} {\cal P}\{x_i\}\, W\{x_i\}, \quad (6)$$

where ${\cal P}\{x_i\} = \prod_i P_{\Lambda_i}(x_i)$ is the joint probability of getting the particular {x_i} sequence, because, for a thermodynamic process, there is no memory. Going over to the continuum limit, the thermodynamic process of varying Λ is now seen as equivalent to choosing a path in the configuration space and re-weighting the paths according to the probability of their occurrence in the Λ-ensemble. The relation between the free energy change and work, Eq. (4), now gets a path integral meaning, where the process takes the system over the microstates and one averages the work over individual paths. This thermodynamic connection is valid only in equilibrium. The work theorem generalizes this idea by replacing ${\cal P}\{x_i\}$ by the nonequilibrium probability of getting a path and asserting

$$e^{-\beta\, \Delta F} = \int {\cal D}X\; e^{-\beta W}, \quad (7)$$

where ${\cal D}X$ stands for the normalized sum over paths, i.e., the sum over intermediate x's with appropriate probabilities.

Histogram transformation and infinitely fast process

There is a fundamental transformation rule obeyed by the partition function, often used in numerical simulations as the histogram method [7]. This transformation connects the equilibrium probability distributions at two parameter values, Λ = λ_0 and Λ = λ, as

$$P_\lambda(x) = \frac{P_{\lambda_0}(x)\, e^{\beta(\lambda - \lambda_0)x}}{\sum_x P_{\lambda_0}(x)\, e^{\beta(\lambda - \lambda_0)x}}, \quad (8)$$

where the sum in the denominator is over the allowed values of x. The denominator of the right-hand side of Eq. 8 is $Z_\lambda/Z_{\lambda_0}$, where $Z_\lambda$ is the partition function at inverse temperature β,

$$Z_\lambda = \sum e^{-\beta(H_0 - \lambda x)}. \quad (9)$$

From Eq. 1, $-(\lambda - \lambda_0)x$ can be taken as the work done in an instantaneous process that changes Λ from λ_0 to λ without changing x. The probability of getting x in equilibrium at λ_0 is $P_{\lambda_0}(x)$, and therefore the sum in the denominator of Eq. 8 is the path integral of Eq. 7, because x does not change. This gives the work theorem.
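As a quick numerical sanity check of Eqs. 3 and 7, the sketch below drives a single spin with H = −Λs through sudden field jumps, each followed by one Metropolis attempt, so that no step equilibrates. The setup and parameters are my own illustrative choices, not from the paper.

```python
import numpy as np

# Single spin, H = -Lambda*s with s = ±1; field built up in n sudden jumps,
# each followed by ONE Metropolis attempt (far from equilibrating).
# Checks <exp(-beta*W)> = Z_lam / Z_0 (Eqs. 3, 7). Illustrative parameters.
rng = np.random.default_rng(0)
beta, lam, n_steps, n_paths = 1.0, 1.0, 10, 200_000
d_lam = lam / n_steps

acc = 0.0
for _ in range(n_paths):
    s = 1.0 if rng.random() < 0.5 else -1.0   # equilibrium at Lambda = 0 is uniform
    W, h = 0.0, 0.0
    for _ in range(n_steps):
        W += -d_lam * s                        # jump at fixed spin: dW = -x dLambda
        h += d_lam
        # Metropolis attempt: flipping the spin costs dE = 2*h*s
        if rng.random() < min(1.0, np.exp(-2.0 * beta * h * s)):
            s = -s
    acc += np.exp(-beta * W)

# Z_lam / Z_0 = cosh(beta*lam); the two should agree up to sampling error.
print(acc / n_paths, np.cosh(beta * lam))
```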
Equilibrium probability distribution

We in this section use the discrete version of the process to re-derive the equilibrium probability distribution from the work theorem in a general and dynamics-independent way. For the kind of nonequilibrium processes mentioned in Sec. 2.2, the equilibrium probability distribution of x at a parameter value λ can be obtained from a weighted path integral

$$P_\lambda(x) = \frac{\int {\cal D}X\; \delta(x - x_b)\, e^{-\beta W}}{\int {\cal D}X\; e^{-\beta W}}, \quad (10)$$

where $x_b$ is the instantaneous boundary value at the end of the path, and the denominator is the same as the r.h.s. of Eq. 7. This is in the form of a path integral where the paths are weighted by a Boltzmann-like factor exp(−βW). The same was established previously in specific cases like the Master equation approach [2], the Feynman-Kac formula [5], and Monte Carlo dynamics [4]. The equilibrium average $x_{\rm eq}$ is defined as

$$x_{\rm eq} = \frac{1}{\beta}\, \frac{\partial \ln Z_\lambda}{\partial \lambda}, \quad (11)$$

where the work theorem is to be used for the partition functions. The system starts in equilibrium at temperature T and Λ = 0, and then Λ is built up at constant T as a sequence of infinitely fast jumps of Δλ = λ/n, each jump followed by a finite-time evolution in contact with the heat bath. Consider now two n-step processes, one process with final field λ and the other with λ − δ (δ → 0 at the end). In fact, the second process is just a copy (replica) of the first one in every respect except at the last stage (Fig. 1). For the last jump, the change in Λ for replica 1 is Δλ, while for replica 2 it is Δλ − δ. A path is specified or defined by the sequence {x_i | i = 0, ... n − 1}. The change in x_i at any step is because of internal dynamics or exchange of heat with the external reservoirs. We do not need to let the system evolve once the field reaches the final desired value. Therefore, the sequence {x_i | i = 0 ... n − 1} is the same for both replicas. The work done W_1, W_2 along an n-step nonequilibrium path for replicas 1, 2 are related via

$$W_2 = W_1 + \delta\, x_b, \quad (12)$$

with W_1 of the form given above Eq. (6) and $x_b = x_{n-1}$. The work theorem of Eq. (7), when used in Eq. 11, yields

$$x_{\rm eq}(\lambda) = \frac{\langle x_b\; e^{-\beta W} \rangle}{\langle e^{-\beta W} \rangle}. \quad (13)$$

This shows that the equilibrium average can be expressed in terms of the boundary value with proper weightage of the paths. The above proof can be generalized to any moment of x. Now if ${\cal P}(x)$ is the distribution of $x_b$ that gives the average in Eq. 13, then ${\cal P}(x)$ can be written as

$${\cal P}(x) = \frac{\langle \delta(x - x_b)\; e^{-\beta W} \rangle}{\langle e^{-\beta W} \rangle}, \quad (14)$$

as quoted in Eq. 10. We now invoke the moment theorem [12] which, in our case, states that for a probability distribution without sufficiently long tails, the moments uniquely specify the distribution. Since these conditions are satisfied by the equilibrium probability distributions of any finite system, the moment theorem applies. Since the moments from the nonequilibrium path integral are the equilibrium moments, ${\cal P}(x)$ is the equilibrium distribution: ${\cal P}(x) = P_\lambda(x)$. This completes the proof.

Generalization

In general, for a Hamiltonian of the form $H = H_0 - \sum_\alpha \Lambda_\alpha x_\alpha$, the equilibrium distribution $P(x, E)$ at some given parameter values {λ_α} and temperature β^{−1} can be obtained in the same way, provided the paths start from an equilibrium state for H = H_0, where H_0 gives the energy for all Λ_α = 0 and W is the total work done on the system along a nonequilibrium path by each of the externally controlled parameters. E here corresponds to the energy from H_0 only. Our starting H_0 may be a free Hamiltonian for a mechanical system, and can as well be zero for interacting spin-like systems. Consider the Hamiltonian H = γH_0 for a spin-like system (i.e. without any kinetic energy). In this case one of the {Λ_α} could be the strength of interaction. Let's start with γ = 0, i.e. the starting point is any random configuration of the free system, or a non-interacting system, and then change γ in some given way from γ = 0 to γ = 1. We thus generate the equilibrium distribution of H_0 at a particular β by doing a similar nonequilibrium path averaging. Note that everywhere we need the product βW. So we can discretize temperature instead of Λ, and the process can be reinterpreted as cooling down to a finite temperature from an initial infinite temperature. In the usual formulation of the work theorem, Λ refers to mechanical parameters, such as the pulling force in AFM, which are under the direct control of the experimentalists. In contrast, other intensive parameters such as temperature may not be controlled with this level of precision in experiments. But this finds various applications in numerical experiments. Such thermal quenches are quite common in numerical simulations, and our results show how these can be harnessed to extract equilibrium information as well. The ensemble of states obtained in the above-discussed way at the end of the path is not a representative sample of the equilibrium ensemble at the concerned temperature and field.
However, the history-averaged distribution is the equilibrium distribution. The boundary states would relax to reach equilibrium via energy transfer to the reservoirs, but that part of the process is not required. This difference becomes important and visible in systems exhibiting hysteresis, e.g. for a ferromagnet.

Application to a ferromagnet to get the equilibrium magnetization curve

The above-mentioned scheme can be used to get the equilibrium probability distribution, or a thermodynamic quantity, from a process which is arbitrarily far away from equilibrium, and at all temperatures including phase transition points. Now we apply our result to the case of hysteresis of a ferromagnet below the critical temperature (T_C). Consider a Hamiltonian H = H_0 − hM. The external magnetic field is varied from −h_0 to +h_0 in a fixed manner and then reversed. M is calculated using Eq. 13. Below the critical temperature, the magnetization (M) vs. magnetic field (h) curve shows a discontinuity at h = 0 for infinite system size. For a finite system there is no discontinuity: the M-h curve is continuous, passing through the origin, and the slope of the M-h curve at h = 0 increases as the system size increases. But, in reality, when experiments or simulations are done, instead of a single retraceable curve passing through the origin we get a loop, called the hysteresis loop, no matter how slowly we vary the magnetic field. The common technique known to get the equilibrium curve is to connect the vertices of the sub-loops [9]. Here the weighted nonequilibrium path integral scheme is a way out to get the equilibrium magnetization curve. We verify this for the Ising ferromagnet and discuss the observations in Sec. 5.

Equilibrium probability distribution from an eigenvalue equation: Operator S

In this section we derive the main result of this paper: the equilibrium probability distribution as an eigenfunction of a nonequilibrium operator S. Using the discrete notation, we can write Eq. 10 as

$$P_\lambda(x_f) = \frac{\langle \delta_{x_b, x_f}\; e^{-\beta W} \rangle_{\rm paths}}{\langle e^{-\beta W} \rangle_{\rm paths}}, \quad (17)$$

or, by using the work theorem, Eq. 3,

$$P_\lambda(x_f) = e^{\beta\, \Delta F}\, \langle \delta_{x_b, x_f}\; e^{-\beta W} \rangle_{\rm paths}. \quad (18)$$

Again, writing $\langle \cdots \rangle_{\rm paths} = \sum_{x_i} P_{\lambda_0}(x_i)\, \langle \cdots \rangle'_{\rm paths}$, where the primed summation denotes the sum for a fixed initial value of x = x_i with appropriate probability, and $P_{\lambda_0}(x_i)$ denotes the equilibrium distribution of x_i for Λ = λ_0, we get

$$P_\lambda(x_f) = e^{\beta\, \Delta F} \sum_{x_i} P_{\lambda_0}(x_i)\, \langle \delta_{x_b, x_f}\; e^{-\beta W} \rangle'_{\rm paths}. \quad (19)$$

Use the transformation rule for the partition function (Sec. 2.3) to absorb $Z_{\lambda_0}/Z_\lambda$ into the probability distribution. This transforms $P_{\lambda_0}(x_i)$ into $P_\lambda(x_i)$ in Eq. 18, so that

$$P_\lambda = {\cal S}\, P_\lambda, \quad (22)$$

with $P_\lambda$ as a column vector of {P_λ(x)} and the matrix elements of S as

$${\cal S}_{x_f, x_i} = {\sum_{\rm paths}}' \delta_{x_b, x_f}\; e^{-\beta [W + (\lambda - \lambda_0)\, x_i]}. \quad (23)$$

The summation in Eq. 23 is over all paths that start from an equilibrium distribution at Λ = λ_0 with the value of x being x_i, and end in a state with Λ = λ and x = x_f, with proper normalization (denoted by the prime). Although we use the simple Hamiltonian H = H_0 − Λx in the construction, Eq. 23 can be generalized to a Hamiltonian H = H_0 + H_1(Λ, x), because Eq. 19 then has the general form ${\cal S}_{x_f, x_i} = {\sum_{\rm paths}}'\, \delta_{x_b, x_f}\, e^{-\beta [W + H_1(\lambda_0, x_i) - H_1(\lambda, x_i)]}$. Now we address the remaining problem: the normalization of the primed summation over paths in Eq. 23. This problem is inherited from Eq. 17. Note that the l.h.s. of Eq. 17 should add up to 1 for λ = λ_0 with W = 0. So we choose the hidden factor a posteriori by demanding proper normalization of the final probability distribution. This condition can be ensured in a process- or system-independent way by choosing $\sum_x {\cal S}_{x, x_i} = f(x_i) = 1$ (Eq. 21), i.e. by making the column sum of S independent of x_i. By this normalization of the sum of each column to unity, it is also guaranteed that the principal eigenvalue is 1.
The corresponding right principal eigenvector has all elements real and non-negative (a necessary condition for a probability distribution), and when normalized such that the sum of all elements is unity, this eigenvector gives the equilibrium probability distribution. The number of rows and columns in S is determined by the number of allowed values of x. For a continuum of states, the matrix equation is to be replaced by an integral eigenvalue equation. Hence, in brief, the scheme to get the equilibrium distribution at some parameter value λ and temperature β^{−1} is as follows. Pre-fix some arbitrary or convenient-to-start-with initial parameter value λ_0, which will be the same for all paths/experiments. Choose a microstate from the equilibrium distribution at field λ_0 and call its value of x x_i. Change the parameter value from λ_0 to λ in some predetermined way and measure the work done by the external parameter on the system according to Eq. 1. Repeat the experiments several times and construct the matrix S using Eq. 23. Next, each column of the matrix is normalized to unity. The normalized principal eigenvector is the equilibrium probability distribution, P_λ(x), at the field λ. Eq. 22 is the main result of this paper; it is not restricted to one external parameter only and can be generalized to any parameter, as mentioned above. The matrix S connects any two allowed states of the system without any reference to equilibrium anywhere, and yet its principal eigenvector determines the equilibrium distribution. Despite the resemblance, there is no similarity either with the stochastic matrix of a Markov process or with the adiabatic switching-on of interaction in a quantum system, because S is constructed out of a finite process and needs global information about the work done. Another issue that comes up in this approach via S is the question of ergodicity, which connects Gibbsian statistical mechanics with equilibrium thermodynamics. The nonequilibrium dynamics used to construct S may not respect ergodicity, but the starting points for the paths in principle span the whole phase space, even in the case when one starts with a free non-interacting system. It seems the ergodicity of the free non-interacting system is sufficient to generate the equilibrium distribution.

Example 1: Extreme cases

Consider an extreme case: a completely equilibrium evolution of the system, where at each step the system reaches its equilibrium. Take a simple system, a single-spin problem in a magnetic field h at temperature β^{−1}: βH = −Ks, where s = ±1 and K = βh. For an n-step process, K varies from 0 to nk in steps of k and, when at each step the spin reaches the corresponding equilibrium state, the column-normalized S matrix can be calculated exactly as

$${\cal S} = \begin{pmatrix} P_{nk}(+) & P_{nk}(+) \\ P_{nk}(-) & P_{nk}(-) \end{pmatrix}, \quad (24)$$

where P_{nk}(±) is the equilibrium probability of finding the spin ±1 at the n-th step. Thus, for a completely equilibrium evolution of the system, the elements of the matrix S are unique and, therefore, S has only one unique eigenvector. In that case the principal eigenvalue is 1 and all other eigenvalues are zero. We may conclude that a complete reducibility of S is the signature of a thermodynamic process. Eq. 24 is to be compared with the extreme nonequilibrium process as embodied in Eq. 8. For this instantaneous change in λ, S = I, the identity matrix, with no zero eigenvalues. If at each of these n steps the system evolves for a time Δt in contact with the bath, then S_{n,Δt} → S_{eq} as Δt → ∞.
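Between these two extremes, the recipe can be run for any finite amount of relaxation. The sketch below is my own minimal illustration, not the authors' code: a single spin with H = −Λs is driven by sudden jumps with one Metropolis attempt per step, the matrix is accumulated per Eq. 23 and column-normalized, and its principal eigenvector is compared with the exact equilibrium distribution.

```python
import numpy as np

# Single spin, H = -Lambda*s, driven from lam0 = 0 to lam in sudden jumps with
# one Metropolis attempt per step; S accumulated per Eq. 23, column-normalized.
rng = np.random.default_rng(1)
beta, lam, n_steps, n_paths = 1.0, 1.0, 10, 200_000
d_lam = lam / n_steps
idx = {+1: 0, -1: 1}                     # state ordering (+1, -1)
S = np.zeros((2, 2))                     # rows: final x_b, columns: initial x_i

for _ in range(n_paths):
    s = 1 if rng.random() < 0.5 else -1  # equilibrium at Lambda = 0 is uniform
    s_i, W, h = s, 0.0, 0.0
    for _ in range(n_steps):
        W += -d_lam * s                  # jump at fixed spin: dW = -x dLambda
        h += d_lam
        if rng.random() < min(1.0, np.exp(-2.0 * beta * h * s)):
            s = -s                       # one Metropolis flip attempt
    # Eq. 23 weight exp(-beta*(W + (lam - lam0)*x_i)), binned by (x_b, x_i)
    S[idx[s], idx[s_i]] += np.exp(-beta * (W + lam * s_i))

S /= S.sum(axis=0, keepdims=True)        # a posteriori column normalization
w, v = np.linalg.eig(S)
p = np.real(v[:, np.argmax(np.real(w))])
p /= p.sum()                             # normalized principal eigenvector
p_exact = np.exp([beta * lam, -beta * lam])
p_exact /= p_exact.sum()                 # exact P(s) proportional to exp(beta*lam*s)
print("eigenvector:", p, "  exact:", p_exact)
```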
The smallness of the remaining eigenvalues would indicate how close to equilibrium the system is. The dynamics of a many-body system might be compartmentalized into slow modes and fast modes, where the fast modes equilibrate much more quickly than the slow ones. How many such fast modes have actually equilibrated can be gauged by the number of zero eigenvalues. The S matrix, though real, is not necessarily symmetric, and there is the possibility of pairs of complex-conjugate eigenvalues, with their magnitudes going to zero as equilibrium is reached.

Example 2: Barkhausen noise and matrix S

We now show the practical feasibility of the operator method for a magnet by using the Barkhausen noise [8,9], as recorded through the output voltage across a secondary coil wound around a ferromagnetic material. Though Barkhausen noise has seen many applications, its use for equilibrium properties has not been anticipated. Consider the Hamiltonian

$$H = H_0 - hM. \quad (25)$$

Here the magnetic field h and the magnetization M correspond to Λ and x, respectively. The field is varied from h_i to h_f in a time interval τ at a constant rate ḣ. The Barkhausen effect is a noisy signal proportional to the change in magnetization, η(t) = dM(t)/dt. So, by integrating the Barkhausen noise up to time t, one gets the nonequilibrium instantaneous magnetization of the material. Therefore, we can write the work-related exponent in Eq. 23 as

$$-\beta\, [W + (h_f - h_i)\, M_i] = \beta\, \dot h \int_0^\tau dt \int_0^t \eta(t')\, dt', \quad (26)$$

which, in a discretized form, looks like

$$\beta\, \Delta h \sum_{k=1}^{n} \sum_{j=1}^{k-1} \eta_j, \quad (27)$$

where the Barkhausen noise at the k-th step is η_k = M_k − M_{k−1}. Hence the matrix elements S_{M_f, M_i} take the form

$${\cal S}_{M_f, M_i} = {\sum_{\rm paths}}' \delta_{M_b, M_f}\; \exp\Big( \beta\, \Delta h \sum_{k=1}^{n} \sum_{j=1}^{k-1} \eta_j \Big), \quad (28)$$

expressed entirely in terms of the Barkhausen noise along the nonequilibrium paths. The primed summation over paths that start with M_i and end at M_f includes proper normalization, as mentioned earlier. To go to other cases, e.g. a polymer pulled at a constant rate of change of force, one needs to monitor the time variation of the pulled-point displacement, dx/dt vs t. This information can then be used in Eq. 28 to get the corresponding S.

Numerical verification of results

Our claims about the probability distributions have been verified for the case of the 2D Ising model on a square lattice, L×L, where L is the size of the lattice, with periodic boundary conditions. Consider the Hamiltonian

$$H = -J \sum_{\langle k,l \rangle} s_k s_l - h \sum_k s_k, \quad (29)$$

where J is the interaction strength, h is the external magnetic field, and s_k = ±1 is the spin at the k-th site of the square lattice. Here ⟨k,l⟩ denotes the sum over nearest-neighbor spins. Here J and h play the roles of the external parameters (Λ), and $\sum_{\langle k,l \rangle} s_k s_l$ and $\sum_k s_k$ are the internal variables (x). We find the equilibrium probability distribution for given J and h using the weighted nonequilibrium path integral, normalize the eigenfunction of S, and compare these with the equilibrium probability distribution obtained from a usual Monte Carlo procedure. The overlap of two distributions P_1 and P_2 is determined by the Bhattacharyya coefficient [10], defined as

$$BC = \sum_x \sqrt{P_1(x)\, P_2(x)}, \quad (30)$$

with BC = 0 for no overlap and BC = 1 for complete overlap; below we quote the deviation ε = 1 − BC. Let us take an 8 × 8 lattice and start from H = 0. Each time we start from a state chosen from a uniform distribution and reach the final state with J = 1 and h = 1 in n steps.
At each i-th step, J is switched from J_i to J_{i+1} and the external magnetic field from h_i to h_{i+1}, with ΔJ_i = J_{i+1} − J_i = J/n and Δh_i = h_{i+1} − h_i = h/n, keeping the spin configuration unchanged; the amount of work done on the system at this step is

$$W_i = -\Delta J_i\, E_i - \Delta h_i\, M_i,$$

where M_i is the magnetization and E_i is $\sum_{\langle k,l \rangle} s_k s_l$ at the i-th step. Then we let the system relax at that field h_i, J_i and β for a while, but do not equilibrate it. Thus the work along a path consisting of n steps is $W = \sum_i W_i$, which is different for different paths. We find the weighted distribution as in Eq. 14. (Figure 2: numerical verification of the equilibrium probability distribution starting from a uniform distribution.) It is observed that these distributions merge well with the corresponding equilibrium distributions, and for P_{J,h}(E) (Fig. 2(a)) and P_{J,h}(M) (Fig. 2(b)) we get ε ∼ 10^{−3} (Eq. 30).

Equilibrium magnetization curve using the nonequilibrium path integral

For this case the lattice size is 8 × 8 and the interaction strength is kept fixed at J = 1. Each time we start from an equilibrium distribution at h = −h_0. The field is varied from −h_0 to +h_0 in n steps. W(n) vs. n data are recorded, and M(h) is calculated using Eq. 13. We plot the weight-averaged magnetization curve, M(h), along with the hysteresis loop (the magnetization nominally averaged over samples), against h for h_0 = 0.2 in Fig. 3 and h_0 = 2 in Fig. 4. A retraceable equilibrium curve is obtained as expected, though the nominally averaged magnetization neither changes sign nor makes a complete loop (Fig. 3) [11]. This reflects the fact that, though in the majority of samples the magnetization does not reach the correct value, there are a few rare samples for which the spins do flip, and these rare configurations, which are close to equilibrium, get more weight in the weighted path integral, giving the correct equilibrium curve. For the larger field, we obtain a curve which is much narrower than the hysteresis curve (Fig. 4). The equilibrium curve obtained this way is still not a single curve. The width of the loop might be connected to the droplet time scale, and it signals the need for a more careful sum over paths to take care of droplet fluctuations.

Numerical verification of the eigenvalue equation

We start from an equilibrium ensemble at inverse temperature β = 0.2 (kept fixed throughout the experiment), J = 1 and h = 0. Each time we start from a state chosen from this equilibrium distribution and reach the final state with J = 1 and h = 1 in n steps in the same way as described above, and calculate the amount of work done on the system at the i-th step: W_i = −Δh_i M_i. We find the matrix elements according to Eq. 23. After the matrix is constructed, we normalize the sum of each column to unity and find the normalized principal eigenvector corresponding to the principal eigenvalue 1, which is guaranteed. We compare the normalized eigenfunction with the actual equilibrium distribution for L = 4 and 8. We see that these distributions merge with the corresponding equilibrium distributions for L = 4 (Fig. 5(a)) and L = 8 (Fig. 5(b)), with ε ∼ 10^{−4} (Eq. 30).

Summary

In this paper we show, and verify numerically, that repeated nonequilibrium measurements of the work done to connect any two microstates of a system can be used to construct a matrix S whose principal eigenvector is the equilibrium distribution.
The matrix elements of S (Eq. 23) for a Hamiltonian H(Λ, x), with (Λ, x) a conjugate pair, are S_{x_f,x_i} = Σ′_paths e^{−βW}, where the summation is over all paths that start from an equilibrium distribution at the externally controlled parameter Λ = λ_0 with the value of the conjugate variable x as x_i, and end in a state with Λ = λ and x = x_f, with proper normalization. The work done W is defined in Eq. 1. The values of the elements of S depend on the details of the process and, therefore, there can be many different S, but all will have the same invariant principal eigenvector. In this way the distribution of an interacting system can be obtained from a free, non-interacting one without any reference to equilibrium anywhere. In the process, we also provide a dynamics-independent proof of the result that the equilibrium probability distribution can be obtained using the nonequilibrium path integral. Besides giving a new perspective on thermodynamics and statistical mechanics, our result has direct implications for new approaches in numerical simulations and experiments.
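A schematic sketch of the construction summarized above, assuming the recorded path data (initial state index, final state index, work W) come from a separate nonequilibrium simulation; the state discretization and the exact normalization bookkeeping are illustrative choices of this sketch, not prescriptions from the paper.

```python
import numpy as np

def build_S(paths, n_states, beta):
    """S[x_f, x_i] accumulates exp(-beta * W) over recorded paths x_i -> x_f,
    then each column is normalized to sum to one, as described in the text."""
    S = np.zeros((n_states, n_states))
    for xi, xf, W in paths:
        S[xf, xi] += np.exp(-beta * W)
    col = S.sum(axis=0, keepdims=True)
    return S / np.where(col > 0, col, 1.0)  # column-stochastic S

def equilibrium_from_S(S, iters=2_000):
    """Power iteration for the principal eigenvector (eigenvalue 1)."""
    v = np.full(S.shape[0], 1.0 / S.shape[0])
    for _ in range(iters):
        v = S @ v
        v /= v.sum()
    return v  # estimate of the equilibrium distribution over states
```

Column normalization makes S column-stochastic, so the principal eigenvalue is exactly 1 and power iteration recovers the corresponding right eigenvector.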
2010-04-21T11:32:52.000Z
2009-11-15T00:00:00.000
{ "year": 2009, "sha1": "b9d129ccbfe59807aa226418e3d84bb3df0e8275", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0911.2874", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "b9d129ccbfe59807aa226418e3d84bb3df0e8275", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
218562857
pes2o/s2orc
v3-fos-license
Tryptophan pathway catabolites (serotonin, 5-hydroxyindolacetic acid, kynurenine) and enzymes (monoamine oxidase and indole amine 2,3 dioxygenase) in patients with septic shock Abstract Septic shock is associated with a strong inflammatory response that induces vasodilation and vascular hyporeactivity. We investigated the role of tryptophan-pathway catabolites of proinflammatory cytokines in septic shock. We prospectively included 30 patients with very recent-onset septic shock and 30 healthy volunteers. The following were assayed once in the controls and on days 1, 2, 3, 7, and 14 in each patient: plasma free and total tryptophan, platelet and plasma serotonin, total blood serotonin, urinary serotonin, plasma and urinary 5-hydroxyindolacetic acid, plasma kynurenine, monoamine oxidase activity, and total indole amine 2,3-dioxygenase activity. Organ-system failure and mortality were recorded. Compared with the healthy controls, the patients with septic shock had 2-fold to 3-fold lower total tryptophan levels throughout the 14-day study period. Platelet serotonin was substantially lower, while monoamine oxidase activity and 5-hydroxyindolacetic acid were markedly higher in the patients than in the controls, consistent with the known conversion of tryptophan to serotonin, which is then promptly and largely degraded to 5-hydroxyindolacetic acid. Plasma kynurenine was moderately increased and indole amine 2,3-dioxygenase activity markedly increased in the patients versus the volunteers, reflecting conversion of tryptophan to kynurenine. Changes over time in tryptophan metabolites were not associated with survival in the patients but were associated with the Sequential Organ Failure Assessment score and hemodynamic variables including hypotension and norepinephrine requirements. Our results demonstrate major tryptophan pathway alterations in septic shock. Marked alterations were found compared with healthy volunteers, and tryptophan metabolite levels were associated with organ failure and hemodynamic alterations. Tryptophan metabolite levels were not associated with surviving septic shock, although this result might be ascribable to the small sample size. Trial registration: ClinicalTrials.gov; No: NCT00684736; URL: www.clinicaltrials.gov. Introduction Septic shock (SS) is associated with an excessive inflammatory response that involves a variety of pathways and induces vasodilatation and vascular hyporeactivity with hypotension, tissue hypoxia, lactate production, and potentially fatal multiorgan failure. The levels of proinflammatory cytokines and their metabolites are elevated and correlate with the outcome. [1][2][3] The essential amino acid tryptophan is metabolized via the serotonin (5HT) pathway or the kynurenine pathway (Fig. 1). [4,5] Conversion of tryptophan to kynurenine is regulated by the enzyme indole amine 2,3-dioxygenase (IDO). During sepsis, IDO gene transcription is modulated either via various cytokines, notably interferon γ, or directly by lipopolysaccharides. [4] Serotonin is primarily found in platelets (10%), the gastrointestinal tract (90%), and the central nervous system. Lipopolysaccharides induce serotonin release from platelets, platelet aggregation, vasodilation, and the production of reactive oxygen species in the lungs and central nervous system. Serotonin is either rapidly metabolized to 5-hydroxyindolacetic acid (5HIAA) by monoamine oxidase (MAO) or excreted by the kidneys. 5HIAA has little clinical effect but serves as a mechanism for eliminating released serotonin.
The aims of this study were to evaluate the potential changes in the tryptophan pathways (serotonin, kynurenine, 5HIAA, IDO, and MAO levels or activities) during SS compared with healthy controls and to look for associations linking these compounds to clinical complications, laboratory parameters, and patient outcomes. Methods We conducted a prospective, observational, single-center study that compared 30 patients with septic shock to 30 healthy volunteers, included between June 2004 and April 2007. All individuals who performed the laboratory assays and the statistical analyses were blinded to the study group. The study was approved by the ethics committee of the Saint Germain-en-Laye Hospital and registered on ClinicalTrials.gov (NCT00684736). Written informed consent was obtained from the patients or relatives and from the healthy volunteers before study inclusion. The laboratory assays done to assess tryptophan pathways were the only procedures performed specifically for the study. Selection criteria for the group of patients with septic shock We included adults (≥18 years) admitted to our intensive care unit (ICU) with a strong presumption of septic shock, defined as body temperature >38°C or <36°C, heart rate >90 bpm, systolic blood pressure <90 mmHg despite adequate fluid replacement or need for vasopressor therapy initiation within the last 3 hours, need for mechanical ventilation, and presence of at least one of the following: PaO2/FiO2 < 300 mmHg, urine output <0.5 mL/kg/h or <30 mL/h for at least 1 hour, and/or arterial lactate >2 mmol/L. We did not include patients with any of the following: age <18 years, pregnancy, underlying disease expected to be fatal within 24 hours, do-not-resuscitate order, psychopathology (e.g., depression or psychosis), seizures, migraine, drug addiction, neuroendocrine tumor, obstructive cardiomyopathy or acute myocardial ischemia, pulmonary embolism, advanced malignancy or hematological malignancy, acquired immunodeficiency syndrome with a decision to withhold or withdraw aggressive treatments, inclusion in another clinical study, exposure to medications known to modify serotonin levels (Table 1), shock not due to sepsis, and/or septic shock onset at night, during the weekend, or on a weekday outside the laboratory opening hours. Selection criteria for the control group of healthy volunteers For each patient, we recruited a healthy volunteer among the staff members and students of our hospital. Only volunteers who were found upon detailed questioning to be free of exposure to compounds known to affect the serotonin or kynurenine pathway (Table 1) were eligible. Each volunteer was matched to a patient on age (±3 years), sex, smoking history, and season of the year. Laboratory assays In the patients, blood samples were drawn into EDTA tubes and urine samples were collected between 8:30 and 9:00 am on the day of inclusion (D1) and on D2, D3, D7, and D14. In the volunteers, blood and urine samples were obtained once. Blinded technicians used 3 different high-performance liquid chromatography techniques with colorimetric electrochemical detection, as previously described, [6,7] to assay plasma free tryptophan (TRPpf) and total tryptophan (TRPtot); plasma serotonin (5HTp), platelet serotonin (5HTpt), total blood serotonin (5HTtot), and urinary serotonin (5HTu); and plasma 5HIAA (5HIAAp) and urinary 5HIAA (5HIAAu). Plasma kynurenine (KYNp) was assayed using a KYN ELISA (Cusabio Biotech, Wuhan, Hubei, China). The results of the 5HTu and 5HIAAu assays were adjusted for renal function.
MAO activity was measured in arbitrary units (AU) as the ratio of 5HIAAp over 5HTp, and IDO activity, also in AU, as the ratio of KYNp over TRPtot. Follow-up of the patients On D1, D2, D3, D7, and D14, in the morning at the same time as the blood sample collection, we recorded the vital signs, Sequential Organ Failure Assessment (SOFA) score, treatments, laboratory test results, and microbiological findings from any samples taken from new sites of infection. Organ-system failure was defined for each of the 6 major organ systems (respiration, coagulation, liver, cardiovascular, central nervous system, renal) as a score of 3 or 4 on a 0 to 4 scale for each organ system; thus, the total score could range from 0 to 24, with higher scores indicating greater organ dysfunction severity. [8] The corticotropin test was defined as non-responsive if the cortisol level rose by less than 9 µg/dL (248 nmol/L). [9] Mortality was recorded. Endpoints The primary endpoints were the differences in TRPpf, TRPtot, 5HTp, 5HTpt, 5HTtot, 5HTu, 5HIAAp, 5HIAAu, KYNp, MAO, IDO, and platelet count values between the patients and controls on D1. In the patients, we also evaluated the changes in the same variables across the study period, looked for differences in these variables between survivors and nonsurvivors, and looked for correlations between these variables and the criteria for SS. Statistics The statistical analysis was carried out by a consultant statistician (FS) who had no role in patient care and was independent from the study ICU. Quantitative variables did not follow a normal distribution and are described as the number of observed and missing data points, the median, and the interquartile range. Qualitative variables are described as the number (%) of observed data in each category. Differences between patients and controls and differences between survivors and nonsurvivors were assessed using the Wilcoxon signed-rank test for matched pairs. The Kendall tau rank correlation coefficient was computed to evaluate associations between 2 quantitative variables. A Bonferroni correction for repeated measures was applied; therefore, a two-tailed P value of 0.01 was considered statistically significant. SAS 9.1 software (SAS Institute, Inc., Cary, NC) was used for the statistical analysis. Results Figure 2 shows the patient flow diagram. During the 3-year period, 255 patients with septic shock who required mechanical ventilation and had >2 organ failures were screened at ICU admission. Among them, 34 patients were included, and 4 were subsequently excluded. Table 2 reports the main patient characteristics. The initial infection was pulmonary (16/30), septicemia (12/30), abdominal (6/30), urinary (3/30), or a central venous catheter (3/30); 11 patients had >1 initial infection site (septicemia and intraabdominal infection, n = 3; septicemia and central venous catheter …). Table 3 shows the changes over time in tryptophan, serotonin, 5HIAA, kynurenine (KYN), MAO, IDO, and platelet counts. The total tryptophan level was lower in the patients than in the controls throughout the 14-day study period. In the patients, platelet serotonin decreased markedly until D7, while MAO activity and 5HIAAp increased throughout the 14-day period, reflecting the fact that tryptophan is metabolized to serotonin, which is promptly degraded to 5HIAA. The major increase in plasma kynurenine and IDO activity throughout the 14 days in the patients is consistent with the known conversion of tryptophan to KYN.
The 5HIAA/5HT ratio, reflecting MAO activity, fluctuated over time between 1.8 and 4.2, and the KYN/TRP ratio, reflecting IDO activity, between 2.7 and 4.8, indicating a very high level of tryptophan and serotonin metabolism in the patients with septic shock compared with the controls. Table 4 reports the changes over time in tryptophan, serotonin, 5HIAA, KYN, MAO, and IDO levels, as well as the comparison of survivors and nonsurvivors. No laboratory parameter was significantly associated with survival. Table 5 shows the analysis of correlations linking the hemodynamic and laboratory parameters to the SOFA score. The epinephrine dose and the 5HIAA level were positively correlated, and the platelet serotonin level negatively correlated, with the SOFA score throughout the study period. Table 6 reports the analysis of correlations linking norepinephrine dose, blood pressure values, and the SOFA score to platelet serotonin. Platelet serotonin showed significant positive correlations with blood pressure values and negative correlations with the norepinephrine dose and SOFA score throughout the 14-day period. Discussion Tryptophan is an essential amino acid that is normally metabolized to kynurenine, kynurenic acid, and quinolinic acid via IDO activation and to serotonin, melatonin, and 5HIAA via MAO activation (Fig. 1). [4] To the best of our knowledge, our study is the first investigation of these 2 pathways in patients with septic shock versus healthy volunteers. We found major differences in tryptophan metabolites during the first 14 days after the onset of septic shock compared with controls. In addition, tryptophan metabolite levels correlated significantly with septic shock severity, although not with mortality. Compared with healthy volunteers, tryptophan levels in the patients were substantially lower; kynurenine pathway metabolites were elevated, with an increase in IDO activity and a 2-fold rise in kynurenine levels; and serotonin pathway activity was also increased, although serotonin was promptly metabolized to 5HIAA, which was 1.9-fold to 2.5-fold higher than in the controls, concomitantly with an increase in MAO activity. Renal excretion of 5HT and 5HIAA was not marked and was clearly not the main mechanism of tryptophan clearance. Quantitatively, the serotonin pathway compounds showed major changes that were significantly associated with septic shock severity. Worse SOFA score values correlated with lower serotonin and higher 5HIAA values. Serotonin is chiefly stored in gastrointestinal tract cells (95%). The remainder is within the brain and platelets. Platelets capture but do not produce serotonin. The half-life of plasma serotonin is equal to the half-life of platelets, that is, 4 to 5 days. The effects of serotonin are mediated by numerous receptors belonging to 7 different families. Serotonin exerts major effects at many sites including the central nervous system, heart, vessels, platelet aggregation, and smooth muscles. In particular, septic shock is associated with hemodynamic variations that can be fatal. [10] The hemodynamic alterations seen in septic shock are related to increased endothelial barrier permeability with microvascular leakage. Serotonin plays a major role in these abnormalities. [11] Moreover, endothelial cell activation by various mediators or cytokines induces the release of serotonin. Thus, the interactions between serotonin and endothelial cells are complex and bidirectional.
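For readers who want to reproduce the style of analysis described in the Statistics section, here is a hedged SciPy sketch. All numerical values are placeholders rather than study data, and the Bonferroni divisor of 5 is our assumption based on the five sampling days (D1, D2, D3, D7, D14).

```python
import numpy as np
from scipy.stats import wilcoxon, kendalltau

rng = np.random.default_rng(1)
# Placeholder values only -- NOT study data
trp_patients = rng.normal(25, 8, 30)    # e.g., TRPtot in patients
trp_controls = rng.normal(60, 10, 30)   # matched healthy volunteers
sofa = rng.integers(2, 18, 30)          # hypothetical SOFA scores

w_stat, w_p = wilcoxon(trp_patients, trp_controls)  # matched-pairs comparison
tau, tau_p = kendalltau(trp_patients, sofa)         # rank correlation

alpha = 0.05 / 5  # Bonferroni over the 5 sampling days -> 0.01, as in the paper
print(f"Wilcoxon p={w_p:.3g}; Kendall tau={tau:.2f} (p={tau_p:.3g}); alpha={alpha}")
```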
One of the limitations of our study lies in the nature of the controls, who were healthy volunteers, as opposed to ICU patients without septic shock. However, few ICU patients are free of diseases and/or treatments known to alter the tryptophan pathways. Moreover, no data are available on tryptophan metabolism abnormalities in ICU patients. Another study limitation is the small number of patients. We were only able to include patients admitted on weekdays during the laboratory opening hours. The biologists took the blood and urine samples at the patient's bedside, transported them to the laboratory, and performed the assays immediately, in order to improve the quality of the results. Another major reason for study exclusions was the high prevalence of patients with neuropsychiatric disorders and/or treatments known to modify tryptophan metabolism. Finally, we included only patients who met published criteria for septic shock. [2] A study of IDO activity and kynurenine metabolites had a similar number of patients (n = 36). [12] We included only patients at the very early stage of septic shock (e.g., with vasopressor initiation within the last 3 hours), in order to ensure comparability of the findings across patients on a given day, since tryptophan metabolites might fluctuate over time. We believe that including patients several days after the onset of septic shock may produce unreliable results. [12] However, we found that the values changed only very slowly, at least during the first week. Septic shock severity and the microbiological findings were consistent with other studies and, therefore, cannot explain differences in results across studies. [12] Those differences may be related to variations in assay methods, in assay compartments (e.g., free vs total plasma, urine, platelets), and in time since septic shock onset. Details on these points are not always available in study reports. IDO elevation may be induced by interleukin 10, whose levels correlate with the severity of sepsis. Tryptophan 2,3-dioxygenase was not tested in our study but may play only a minor role (<1%) in tryptophan metabolism. [12] The changes in tryptophan metabolism and enzyme activities induced by the cytokines released during septic shock may be influenced by genetic factors and by the environment, including the diet. [13] These factors were not taken into account in our study and may have contributed to some of the alterations seen. The decrease in tryptophan was due to activation of the serotonin and kynurenine pathways, as demonstrated by the elevations in the corresponding catabolites. However, the gut microbiome may have contributed to diminishing the tryptophan levels. A decrease in tryptophan modifies the immune response and alters the homeostasis of the gut, generating a vicious cycle. [13] Moreover, serotonin induces manifold responses that are mediated by multiple receptors with different effects.
Our study is consistent with experimental data obtained using 5HT-receptor antagonists or tryptophan treatment. [10,14-16] However, mortality did not correlate with tryptophan metabolism, in keeping with another study. [17] Specific antagonism of serotonin receptors has been reported to improve survival in experimental studies, [5,10] but this effect requires evaluation in humans. Oddly enough, despite numerous studies, the role of the tryptophan-serotonin axis is not considered in consensus statements or reviews on sepsis. [1,2] Conclusion In ICU patients who had septic shock with hypotension, a requirement for norepinephrine and mechanical ventilation, lactate elevation, and a high SOFA score, the metabolism of tryptophan was severely altered. Tryptophan levels were low compared with healthy controls during the 14-day study period, with conversion to kynurenine via IDO and conversion to serotonin, which was promptly catabolized to 5HIAA. Overall, the decreases in total tryptophan and platelet serotonin and the increases in MAO activity and plasma 5HIAA were associated with organ failures, hypotension, and higher norepinephrine requirements, but not with mortality. This last result may be related to the small sample size, and larger studies are needed. Our findings suggest that interventions capable of acting on the tryptophan-serotonin axis, notably specific serotonin receptor antagonists, may deserve evaluation as a treatment for septic shock.
2020-05-10T13:04:36.156Z
2020-05-01T00:00:00.000
{ "year": 2020, "sha1": "d3f185f17893f679e60816f63471e538c58275e0", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1097/md.0000000000019906", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "453b25c5be81aae7de7241f9171bbb5f39773b10", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
15247299
pes2o/s2orc
v3-fos-license
EFFECTIVENESS OF HOME BLEACHING AGENTS IN DISCOLORED TEETH AND INFLUENCE ON ENAMEL MICROHARDNESS Objectives: This study evaluated the effectiveness of different home bleaching agents on color alteration and their influence on surface and subsurface microhardness of discolored bovine enamel. Material and Methods: Forty-five fragments of bovine incisors were randomly allocated into 3 groups (n=15) according to the bleaching agent: 10% carbamide peroxide gel (CP10), 16% carbamide peroxide gel (CP16) and 6.5%-hydrogen-peroxide-based strip (HP6.5). Before bleaching treatment, initial values of Knoop surface microhardness and color (CIEL*a*b*) were obtained and the fragments were artificially stained in hemolyzed rat blood. Then, bleaching treatments were performed over a 21-day period. Color changes (ΔE) were assessed at 7, 14 and 21 days, and final surface microhardness reading was done after 21 days. Thereafter, the fragments were bisected to obtain subsurface microhardness. Data were subjected to ANOVA and Tukey's tests (α=5%). Results: Color changes produced by CP16 were similar to those of CP10, and the color changes produced by these materials were significantly superior to those produced by HP6.5. Color changes at 21 days were superior to 7 days and similar to 14 days. The time did not influence color changes for CP16, which showed similarity between the 14- and 21-day results. No statistically significant differences were found among the home bleaching agents for surface and subsurface microhardness. Conclusions: Microhardness of bovine enamel was not affected by the bleaching agents. The 16% carbamide peroxide gel was the most effective for bleaching the stained substrate. INTRODUCTION Tooth color is currently believed to be one of the most important esthetic concerns for patients 1 . Vital tooth bleaching has widely been performed, and several materials and techniques have been presented for this purpose 16 . Vital tooth bleaching is considered a safe, effective, minimally invasive, non-destructive and well-accepted procedure for the treatment of discolored teeth 27,29 . A number of methods and approaches have been described for the bleaching of vital teeth 17 . However, basically, there are three techniques: in-office or power bleaching, mass market bleaching products, and dentist-supervised home bleaching 15 . Since its introduction by Haywood and Heymann 14 (1989), several products have been employed for home bleaching. These products are mainly available in gels or strips containing several concentrations of carbamide peroxide or hydrogen peroxide 29 . Carbamide peroxide gel at a 10% concentration is one of the most commonly employed home bleaching agents due to its safety and effectiveness 12 . Although different concentrations of carbamide and hydrogen peroxides have shown similar results 10,18,29 , increased concentrations and application times may cause enamel surface alterations, such as loss of mineral content and microhardness 2,22 . Thus, considering the divergences on the effects of bleaching agents on properties of hard tooth tissues, the purposes of this study were to evaluate the effectiveness of different home bleaching agents and their effects on surface and subsurface microhardness of bovine enamel. The hypotheses tested in this study were that the type of bleaching agent influences the color alteration in stained teeth, but does not influence the surface and subsurface microhardness of bovine enamel. 
Enamel Specimen Preparation Forty-five freshly extracted sound bovine incisors were selected and stored under refrigeration in a saturated thymol solution until preparation for testing. Teeth had their roots removed 2 mm apically to the cementoenamel junction in a sectioning machine with a water-cooled diamond saw (Minitom, Struers A/S, Copenhagen, D2610, Denmark). The buccal surfaces of the crowns were cut longitudinally and transversely, using double-faced diamond discs mounted in a low-speed handpiece, in order to obtain fragments with dimensions of 11 mm x 11 mm. To obtain the plane surfaces required for microhardness tests, the external enamel surfaces of the fragments were flattened in a water-cooled polishing machine (Politriz; Struers A/S) using 600-, 1000- and 1200-grit silicon carbide abrasive papers sequentially, followed by a final polishing with a 0.3-µm gamma alumina suspension on a felt wheel. The specimens were kept in an ultrasonic bath (Sonic Clean, D.M.C. Equipamentos Ltda., São Carlos, SP, Brazil) in distilled and deionized water for 10 min to remove polishing debris. Staining Procedure Artificial staining of the specimens was performed following a modification 5 of the method proposed by Freccia and Peters 8 . Blood samples collected from adult male Wistar rats (weighing 200 to 250 g) were heparinized to avoid coagulation and were centrifuged at 10,000 rpm for 10 min. The blood serum was discarded and 40 mL of distilled and deionized water were added to 60 mL of the precipitated blood. This mixture was centrifuged at 10,000 rpm for 20 min, resulting in a hemoglobin-rich hemolyzed blood solution. Specimens were immersed in this solution for 4 days, with a centrifugation cycle (10,000 rpm, 20 min) performed every 24 h. After 4 days, the specimens were removed, washed in distilled water, dried with absorbent paper and kept at 37ºC in 100% relative humidity for 15 days. Bleaching Treatment After staining, the specimens were randomly assigned to 3 groups (n=15) according to the bleaching treatment: 10% carbamide peroxide gel (CP10), 16% carbamide peroxide gel (CP16) and 6.5%-hydrogen-peroxide-based strip (HP6.5). The bleaching treatment was performed over 21 days, according to the manufacturers' instructions. Specifications of the materials used in this phase are given in Figure 1. For each specimen of the groups treated with the bleaching gels (CP10 and CP16), a tray was fabricated using a low-density polyethylene sheet and a vacuum-thermoforming machine. Every day, a thin layer of each gel was applied on the enamel surface and the tray was placed on each specimen, which was stored for 8 h at 37°C in a container holding artificial saliva (pH 7.0; Figure 1). After 8 h, the gel was removed from the enamel surface with running distilled water for 15 s. Enamel surfaces of the group treated with the bleaching strip (HP6.5) were covered with the strip for 30 min twice a day. After each application, the strips were removed from the enamel surfaces, which were rinsed with running distilled water. In all groups, when the specimens were not in contact with the bleaching agents, they were kept at 37°C immersed in artificial saliva, which was changed daily. Color Measurement The color of the specimens was measured after the artificial staining (baseline) and 7, 14 and 21 days after the beginning of the bleaching treatment. Before each color measurement, specimens were rinsed with water and dried with absorbent paper.
Color was measured over a white background employing a color spectrophotometer (Color guide 45/0, PCB 6807; BYK-Gardner GmbH, Geretsried, Germany), which records the color variables L*, a*, b* according to the CIEL*a*b* (Commission Internationale de l'Eclairage L*, a*, b*) color system 18 , where L* stands for the luminosity dimension or whiteness, ranging from 0 (pure black) to 100 (reference white), a* for the green-red contrast (-a* = green and +a* = red) and b* for the blue-yellow contrast (-b* = blue and +b* = yellow). Color change (ΔE) was calculated from the L*, a* and b* values employing the following formula: ΔE = [(ΔL*)^2 + (Δa*)^2 + (Δb*)^2]^(1/2). A positive ΔL* means the specimens became whiter, while a negative ΔL* means the specimens became darker. Microhardness Testing Knoop microhardness measurements of the enamel surface were performed before the staining procedure (initial values) and following the bleaching treatment (after 21 days) using a microhardness tester (HMV-2000; Shimadzu Corporation, Kyoto, Japan). The specimens were individually fixed in the device in such a way that the test surface was perpendicular to the micro-indenter tip. Three indentations (100 g load, 30 s) equally spaced over a circle and not closer than 1 mm to the adjacent indentations or the margin of the specimen were taken, and the average was calculated. The post-bleaching measurements were repeated for all the specimens at locations near the previous series of indentations. Microhardness of the enamel subsurface was obtained after measuring the post-bleaching surface microhardness. Specimens were individually embedded in acrylic resin blocks so that they could be perpendicularly bisected into halves. One cross-sectioned face of each specimen was ground and polished following the protocol described initially. Measurements of subsurface microhardness were accomplished at distances of 50 µm, 100 µm, 150 µm and 200 µm from the external enamel surface. At each distance, three linearly 100-µm-spaced indentations were performed and the mean was calculated. Settings for load and penetration time were equal to those employed for surface microhardness. Statistical Analysis Data from color change and microhardness testing were analyzed for homogeneity and normality and were subjected to ANOVA and Tukey's test at a 5% significance level. RESULTS Microhardness Test The means and standard deviations of surface microhardness and subsurface microhardness are shown in Table 2 and Table 3, respectively. The analysis of the data did not reveal significant differences among the studied factors in either the surface or the subsurface microhardness. DISCUSSION When the objective of an in vitro study is to evaluate the effectiveness of bleaching agents, it is very difficult to observe differences in the effect of bleaching techniques on samples without staining 7 . Thus, the present study used the method proposed by Freccia and Peters 8 (1982) for artificial staining of extracted teeth. In addition to being reliable, safe and easily reproducible, this method simulates one of the main causes of intrinsic tooth discoloration, that is, the oxidation of hemoglobin inside dentinal tubules 3 . A number of methods are available for evaluating the efficacy of bleaching products 4 . Shade guides, photography, colorimeters or computer digitization can be employed to assess tooth color changes 20,25,28 .
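The ΔE formula given in the Color Measurement section is simple enough to express directly; a minimal helper follows, with illustrative (not measured) L*a*b* readings.

```python
import math

def delta_e(lab1, lab2):
    """CIELAB color difference: dE = [(dL*)^2 + (da*)^2 + (db*)^2]^(1/2)."""
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(lab1, lab2)))

# Illustrative (not measured) readings: stained baseline vs. day 21
baseline = (62.0, 4.5, 18.0)  # (L*, a*, b*)
day21 = (74.0, 2.0, 12.0)     # whiter (higher L*), less red, less yellow
print(round(delta_e(baseline, day21), 2))  # larger dE = greater color change
```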
Tooth color determination employing the commonly cited shade-based guides has limitations, as it is subject to examiner and environmental factors that can potentially influence the classification of color 9,19 and hamper the visualization of the color variation existing among the thirds of a tooth crown 26 . The CIEL*a*b* three-dimensional color space system is the most frequently quoted index employed in dental bleaching research and can be generated from colorimeters, spectrophotometers or digital image analysis 7,11,28 . In the present study, the 16% carbamide peroxide bleaching agent was more effective than the 10% carbamide peroxide gel and the strip containing 6.5% hydrogen peroxide. The reason why 16% carbamide peroxide gave the highest change in L* values is probably its extended contact time (8 h/day) with enamel compared to the considerably shorter contact time of the 6.5% hydrogen peroxide strip (30 min twice a day). It is known that the outcome of a bleaching procedure depends mainly on the concentration of the bleaching agent, the ability of the agent to reach the chromophore molecules, and the duration and frequency with which the agent is in contact with the chromophore molecules 6 . Moreover, the difference among the bleaching systems could be explained by the different kinetics of hydrogen peroxide and carbamide peroxide. Hydrogen peroxide acts as a strong oxidizing agent through the formation of free radicals 23 , reactive oxygen molecules, and hydrogen peroxide anions 13 . These reactive molecules attack the long-chained, dark-colored chromophore molecules and split them into smaller, less colored and more diffusible molecules. Carbamide peroxide also yields urea, which theoretically can be further decomposed to carbon dioxide and ammonia. The high pH of ammonia facilitates the bleaching procedure 24 . The influence of hydrogen peroxide and carbamide peroxide on enamel and dentin properties, such as surface morphology and chemistry, surface and subsurface ultrastructure and microhardness, has been extensively investigated in the literature. According to a recent review 16 , the most frequently employed technique for evaluating the effects of peroxide and bleaching products on enamel and dentin has been surface microhardness. Surface microhardness testing is a simple method for determining the mechanical properties of enamel and dentin surfaces and is related to a loss or gain of mineral in the dental structure 16 . On account of the acidic properties of the bleaching agents, changes in the mineral content of dental hard tissues may occur, leading to a decrease in the microhardness values after the bleaching treatment 22 . However, in the present study, neither the enamel surface microhardness nor the enamel subsurface microhardness was affected by the different bleaching agents employed. As the specimens were maintained in artificial saliva during and after the bleaching treatment, it may be assumed that this solution promoted a continuous mineral uptake due to the presence of calcium and phosphate ions in its composition, justifying the absence of microhardness changes. This also suggests that peroxide bleaching products have no significant effects on microhardness, and that contrasting results, in general, reflect methodological limitations that do not represent the clinical situation or employ highly acidic agents 16 . Furthermore, a decrease in microhardness can be recovered in the post-bleaching period, following remineralization from saliva storage 21,22 .
CONCLUSION It may be concluded that the 16% carbamide peroxide gel was the most effective in bleaching stained bovine enamel without causing changes in the microhardness of this substrate.
2017-04-20T00:49:11.967Z
2009-08-01T00:00:00.000
{ "year": 2009, "sha1": "b434fd7d453fba3a14a6b686f5dc8093f382f0b2", "oa_license": "CCBYNC", "oa_url": "http://www.scielo.br/pdf/jaos/v17n4/04.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b434fd7d453fba3a14a6b686f5dc8093f382f0b2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
221238698
pes2o/s2orc
v3-fos-license
A Refractive Index Study of a Diverse Set of Polymeric Materials by QSPR with Quantum-Chemical and Additive Descriptors Predicting the activities and properties of materials via in silico methods has been shown to be a cost- and time-effective way of aiding chemists in synthesizing materials with desired properties. Refractive index (n) is one of the most important defining characteristics of an optical material. Presented in this work is a quantitative structure–property relationship (QSPR) model that was developed to predict the refractive index for a diverse set of polymers. A number of models were created, where a four-variable model showed the best predictive performance with R2 = 0.904 and Q2LOO = 0.897. The robustness and predictability of the best model was validated using the leave-one-out technique, external set and y-scrambling methods. The predictive ability of the model was confirmed with the external set, showing R2ext = 0.880. For the refractive index, the ionization potential, polarizability, and 2D and 3D geometrical descriptors were the most influential properties. The developed model was transparent and mechanistically explainable and can be used in the prediction of the refractive index for new and untested polymers. Introduction The refractive index is an important property of polymers in optical applications because it defines the velocity at which light travels through the material relative to a vacuum [1][2][3][4][5][6][7][8]. The refractive index has been widely known since before the 19th century and has been used to understand the optical activities of many materials [9]. Polymers have many uses in industry today. High refractive index polymers (HRIP) have been studied by many groups for industrial process applications [5,[10][11][12]. Some of the applications include complementary metal oxide semiconductor image sensors (CIS) [13], polymer films for high-performance antireflection coatings [14], UV nanoimprinting lithography [15], the covering of light-emitting diodes [16], and coatings of fiber gratings [17]. These uses were found only recently, and many more niches will be found in the future. Recent works that investigated the refractive index typically used a refractometer, which measures the amount by which light changes direction when passing through a substance [18][19][20]. The Standard Test Method for Index of Refraction of Transparent Organic Plastics, ASTM D542, has been used by some researchers [18]. Crena et al. used a refractometer to measure the refractive index of mixtures of styrene-methyl methacrylate and glycidyl methacrylate. It was reported that the desired range of refractive index of 1.5750-1.5894 was affected by the different mixtures of styrene-methyl methacrylate-glycidyl methacrylate. Others, such as Kasaroa et al., used similar methods, specifying different wavelengths, or colors, to determine the change in the refractive index of poly(methyl methacrylate), polystyrene, and others [20]. Another method has been suggested recently, that of using scanning angle Raman spectroscopy for nondestructive testing [21]. This method points a laser toward a sapphire prism and focuses the signal onto an imaging spectrometer, from whose output the refractive index can be extracted; the values obtained were found to be within 10% of those from other methods. These methods use different strategies to obtain the refractive index of polymers.
Differing techniques have been used to model the refractive index of various materials, such as the Lorentz-Lorenz equation, Mie theory, the Rayleigh-Debye-Gans theory, the group contribution method, and quantitative structure-property relationship (QSPR) methodology [22,23]. QSPR has been used as a powerful tool to predict the properties of various chemical systems and materials for the last three decades [22][23][24][25][26][27][28][29][30]. It is worth noting that an interesting approach to predicting the refractive index of aerosols, presented in the work [29], can be applicable to polymers. The earliest modeling of the refractive index of polymers using QSPR methods concluded that quantum-chemical descriptors such as the HOMO-LUMO gap and the nuclear repulsion for the C-H bond heavily affected the refractive index [31]. This five-variable model was found to have R2 = 0.940, F = 282.13, and S2 of 3.13 × 10^-4, with an average prediction error of 0.9%. This model used only 655 descriptors and 95 amorphous polymers, whereas more are available today [31]. A later model was developed by Xu et al. using a dataset of 121 linear polymers [32]. The four-variable model included: the sum of valence degrees, the degree of unsaturation, the relative number of halogen atoms, and the electrostatic attraction between the main chains. The R2 and prediction error were found to be 0.964 and 0.87%, respectively [32]. Jabeen et al. published a model using a dataset of 127 diverse polymers, which achieved values of R2 of 0.932, R2ext of 0.882, Q2LOO of 0.922, and Q2-F1 of 0.875. The four-variable model asserted that polarizability, sp2 hybridization and the frequency of C-F at distance 1 greatly affected the refractive index [33]. A set of models that Khan et al. presented used only 2D descriptors and had 221 polymers in the dataset. These six-variable models were created by partial least squares (PLS) and were shown for comparison. All four models included the descriptor MLFER_E, which is the molecular linear free energy relation and relates polarizability and the solute/solvent interaction through n- and pi-electron pairs. Moreover, the mean ionization potential descriptor (Mi) was in each model. Each of the four models had at least three latent variables, and the R2 values ranged from 0.899 to 0.895 [34]. Descriptors that relate Mi and the interaction of electron pairs are a large factor in many of these models. Duchowicz et al. developed a model using QSPR methods with a dataset of 234 structurally diverse polymers [35]. The model considered using 1-5 monomeric repeating units to generate flexible descriptor models. The paper describes a descriptor model of linear combination of correlation weights (DCW4) that was used to model the refractive index of polymers. It is worth noting that the DCW descriptor is actually a latent variable and combines a number of descriptors in it, so it is not a single descriptor. Statistical values of R2 of 0.96 and R2ext of 0.85 were reported for this model. The current work discusses a newly developed model to predict the refractive index of a diverse set of polymers. A large set of 262 polymers was used, collected from multiple sources [1][2][3][4]6,7], and the collected set was larger than in previously reported works [31][32][33][34][35]. The current work resulted in the development of a robust four-variable model.
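As a point of reference for the Lorentz-Lorenz route mentioned above, here is a small sketch that solves the relation for n. The CGS form with number density N and mean polarizability alpha is assumed, and the function name and toy inputs are ours, not values from any cited work.

```python
import math

def n_from_lorentz_lorenz(N, alpha):
    """Solve (n^2 - 1)/(n^2 + 2) = (4*pi/3) * N * alpha for n (CGS units).

    N: number density of polarizable units; alpha: mean polarizability.
    """
    x = 4.0 * math.pi / 3.0 * N * alpha
    if not 0 <= x < 1:
        raise ValueError("unphysical input: need 0 <= (4pi/3) N alpha < 1")
    return math.sqrt((1 + 2 * x) / (1 - x))

# Toy numbers chosen only to give a plastic-like n around 1.5
print(round(n_from_lorentz_lorenz(N=1.0e22, alpha=7.0e-24), 3))
```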
The applied descriptors included several categories: topological, quantum-chemical, functional group counts, constitutional, and geometrical. The developed model has encompassed the combination of the largest number of polymers and the largest initial set of descriptors to generate a robust, transparent and mechanistically explainable model. The generated model was internally and externally validated by different techniques, such as the leave-one-out technique, y-scrambling, the r2m_ave and CCCcv metrics, and splitting of the data into prediction and training sets. Collection and Preparation of Experimental Dataset A diverse dataset of 262 polymers was collected from multiple sources [1][2][3][4]6,7,35]. The dataset included organic-based polymers such as cellulose acetate and non-renewable-sourced polymers such as poly(ethylene). This dataset includes polyamides, polyesters, polyolefins, polysilylenes and others. Information on the collected data of polymers is shown in the Supplementary Information in Tables S1 and S2 [1][2][3][4]6,7,35]. Table S1 shows each structure and the number it was assigned in the dataset. Table S2 includes the SMILES (Simplified Molecular-Input Line-Entry System) notations for monomers, the chemical names produced by ChemDraw 16 [36], the experimental refractive index, the logarithmically transformed experimental refractive index, the predicted logarithm of the refractive index, and the set each specific compound was a part of. Any duplicates and unknown polymers/polymer mixtures were removed. Each polymer was drawn in a polymerized-monomer 2D structure format using ChemDraw 16 [36]. These monomer structures were "end-capped" with hydrogen atoms for consistent monomer functionality. This monomer end-capping format was implemented due to the limitations of computational descriptor generation for long polymer chains; the technique replicates the monomer structure functionality after polymerization. The monomer structures were then optimized using HyperChem 8 [37]. The dataset was then split into training and prediction sets: approximately 75% of the 262 polymers were used as the training set and the other 25% were used as the prediction set. The training and prediction sets were used for model generation and validation, respectively. The refractive index value was then converted to a logarithmic scale for convenient comparison of the results to previously published works and for the linearity of the data with respect to free energies. Descriptor Set Generation The monomer unit was drawn using ChemDraw 16 [36]. Each monomer structure was optimized in HyperChem 8 [37], using the MM+ force field. Then, a set of quantum descriptors was calculated, including the highest occupied molecular orbital (HOMO), the lowest unoccupied molecular orbital (LUMO), the dipole moment, the energy gap between HOMO and LUMO, and the ionization potential. These descriptors were calculated using the semi-empirical method RM1 [37]. The rest of the descriptors were generated using Dragon 6 [38] by inputting the structures optimized in HyperChem. Dragon 6 generated about 4500 descriptors per structure [38]. These descriptors include the following categories: constitutional indices, 2D and 3D matrix-based descriptors, 2D autocorrelations, topological descriptors, charge-based descriptors, 0D, 2D, and 3D descriptors, molecular properties and more.
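Returning to the dataset split described above (and detailed in the next section as every fourth structure in ascending order of refractive index), a sketch of that scheme follows. The values are synthetic, and the counts it yields (197/65) differ slightly from the 203/66 reported, so this illustrates the scheme rather than reconstructing QSARINS' exact behavior.

```python
import numpy as np

rng = np.random.default_rng(4)
log_n = rng.uniform(0.11, 0.21, 262)     # synthetic stand-in for log(n) values

order = np.argsort(log_n)                # ascending refractive index
pred_mask = np.zeros(262, dtype=bool)
pred_mask[order[3::4]] = True            # every fourth structure -> prediction set

train_idx = np.flatnonzero(~pred_mask)   # used for model development
pred_idx = np.flatnonzero(pred_mask)     # held out for external validation
print(len(train_idx), len(pred_idx))     # 197 and 65 with this exact scheme
```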
Descriptors with high correlation, single-value variables, and non-informative entries were discarded based on constant-value, near-constant-value, and pair-correlation criteria. Model Development and Validation For the QSPR correlation, a combined genetic algorithm (GA) and multi-linear regression analysis (MLRA) method was used to develop the models for this work. The QSARINS software [39] was used for the final steps in the models' development. The dataset was accompanied by the log(n) value of the refractive index associated with each polymer structure for model development. The training and prediction sets were created with the sorting tool in QSARINS, and every fourth structure (25%) was assigned to the prediction set in order of ascending refractive index value. The dataset was split into the training set with 203 structures and the prediction set with 66 structures. The descriptors chosen for the model were determined by the genetic algorithm (GA). A particular setup was used during the model development to select the best model. Thus, the number of generations was set to 2000, the population size of the final model was set to 20, and a mutation rate of 40% was used. For validation purposes, multiple methods were applied, including leave-one-out (LOO) cross-validation, leave-many-out (LMO), y-scrambling, as well as internal and external validation protocols. Some of these methods were used to show the possible existence of fortuitous correlations. After the validation techniques were applied, the best model was chosen based on multiple criteria: (1) high statistical performance variables such as R2 and Q2 (including R2 - Q2 < 0.3); (2) the lowest number of outliers; (3) a low number of variables in the model; and (4) low cross-correlation between the descriptors in the selected model. Results and Discussion The molecular structures and descriptor information, together with the refractive index values, were analyzed by QSARINS [39], applying the GA-MLR technique. Numerous models were generated with one to five descriptors in the model. An initial developmental run of models was executed, and the most notable models with high statistical values were retained. A Williams plot developed for the best four-variable model is shown in Figure 1, with all points within the three-standard-deviation limits of the applicability domain (AD). The AD was developed using the leverage approach [40].
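QSARINS computes the LOO statistic internally; an equivalent hand-rolled Q2LOO for a multi-linear regression might look like the following, with synthetic data standing in for the four descriptors.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

def q2_loo(X, y):
    """Leave-one-out Q^2 = 1 - PRESS/TSS for a multi-linear regression."""
    preds = np.empty_like(y, dtype=float)
    for train, test in LeaveOneOut().split(X):
        preds[test] = LinearRegression().fit(X[train], y[train]).predict(X[test])
    press = np.sum((y - preds) ** 2)      # predictive residual sum of squares
    tss = np.sum((y - y.mean()) ** 2)     # total sum of squares
    return 1.0 - press / tss

# Synthetic stand-in: 203 training structures, 4 descriptors
rng = np.random.default_rng(2)
X = rng.normal(size=(203, 4))
y = X @ np.array([0.03, -0.02, 0.01, 0.005]) + rng.normal(0, 0.005, 203)
print(round(q2_loo(X, y), 3))
```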
The results of the best QSPR models with 1 to 5 descriptors are shown in Table 1, where the performance data are given for the training and prediction sets. In Table 1, the following performance indicators are shown: R2 is the regression coefficient for the training set, R2adj is an adjusted R2, S represents the average distance of the observed values from the regression line, Q2LOO is the "leave-one-out" coefficient, CCCcv is the "concordance correlation coefficient cross-validation", R2y-scr and Q2y-scr are the "y-scrambling" performance coefficients, RMSEtr is the "root-mean-square error", and r2m_ave is the "metrics parameters average" that shows the robustness of the model. These are all internal and cross-validation parameters which test the predictive ability and robustness of the model. The concordance correlation coefficient (CCC) was calculated as a more restrictive parameter for expressing the external predictivity of each model, as shown in Equation (1). The four-variable model was chosen as the best model due to its high R2 value and its passing of the internal and external validation criteria. The descriptors of the chosen model and their statistical coefficients are shown in Table 2. The first descriptor, Mi, is the mean first ionization potential generated by the Dragon software [38]. The ionization potential is the amount of energy required to remove an electron from a gaseous atom or ion; it specifies the energy required for the most loosely bound electron to leave. According to Koopmans' theorem, the ionization energy is the negative of the HOMO energy [42]. In addition, Reddy et al. showed in experimental work on alkali halides that the ionization potential negatively influences the refractive index [43]. The second descriptor, GATS1p, is a descriptor weighted by polarizability [38] (Table 2). Polarizability is the ability of an atom or molecule to form an instantaneous dipole in reaction to an external field, i.e., electric or magnetic. Polarizability is related to the refractive index via the Lorentz-Lorenz expression [44,45]. Specifically, the GATS1p descriptor is the Geary coefficient weighted by polarizability. When weighting data, the descriptor decreases as the weighting factor, in this case the polarizability, increases. This relates back to the model by suggesting that as polarizability increases, so does the refractive index; but because polarizability enters as a weighting factor, the GATS1p descriptor negatively affects the refractive index, which matches the model. The two discussed descriptors (Mi, GATS1p) are related to the excitation energies of ground-state electrons [43,46]. For example, optical polarizability in quantum theory results from a mixing of suitable excited-state wave functions with the ground-state wave function. The mixing coefficient is inversely proportional to the excitation energy from the ground to the excited state. A small HOMO-LUMO gap (and a higher HOMO energy and a lower ionization potential) automatically means small excitation energies to the manifold of excited states. Therefore, soft molecules, with a small gap, will be more polarizable than hard molecules. Based on the QSPR model obtained, these two descriptors have been found to heavily influence the refractive index due to their interaction with the light that passes through the materials.
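Equation (1) itself is not reproduced in this extraction; the sketch below assumes the standard form of Lin's concordance correlation coefficient, which is what QSPR validation papers typically mean by CCC.

```python
import numpy as np

def ccc(y_obs, y_pred):
    """Lin's concordance correlation coefficient (assumed form of Eq. (1))."""
    y_obs, y_pred = np.asarray(y_obs, float), np.asarray(y_pred, float)
    mo, mp = y_obs.mean(), y_pred.mean()
    cov = ((y_obs - mo) * (y_pred - mp)).mean()
    return 2.0 * cov / (y_obs.var() + y_pred.var() + (mo - mp) ** 2)

# Perfect agreement gives 1; a systematic offset or scale error lowers it
y = np.linspace(0.11, 0.21, 50)
print(ccc(y, y), round(ccc(y, y + 0.01), 3))
```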
Polymer properties are known to be affected by the chemical structure of the monomer units, as well as by the bulk interactions of the chains [47]. In the developed model, the monomer unit length is represented by the WiA_RG and SpMAD_A descriptors, which are 2D and 3D matrix descriptors. The size and length of a molecule affect the speed of light by creating a "barrier" of material that light needs to travel through. The length of polymeric units/chains (monomers) has also been reported to increase the refractive index of the material with branching [48]. It is believed that the larger the molecule, the larger the refractive index, which is due to the previously mentioned material "barrier". Figure 1 presents the plots of the experimental and predicted data correlation (a), the Williams plot (b) and the y-scrambling analysis plot (c). Thus, Figure 1a is the correlation plot for the training (yellow dots) and prediction sets (blue dots). The represented plot for the best 4-variable model has an R2 value of 0.904 and a very good Q2LOO of 0.897. With these values being high and comparable, the internal validation indicates the model to be stable and internally robust. Figure 1b shows the Williams plot, and all values are within three standard deviations. The molecules that lie outside the HAT cutoff do so due to structural differences only, and the rest are within the applicability domain. The y-scrambling plot is generated and represented in Figure 1c to ensure that the selected model was not chosen by chance. As the model shows compliance with the applied validation, the model is concluded to be highly predictive and robust. The Williams plot allows one to evaluate the deviations from the ideal matching between experimental and predicted data. Figure 1b shows that there are no anomalous trends, which is evident from the regular distribution of points in both halves of the plot. The twelve structures which are out of the applicability domain (with a HAT value larger than 0.075) are not outliers; they represent structurally peculiar compounds. These twelve structures (1, 2, 3, 17, 21, 30, 38, 85, 125, 166, 235, 237) have log(n) values of the refractive index that range within 0.117-0.204, well within the range of all other data points. It is necessary to reiterate the robustness criteria for the internal and external validation tests. In our case, the chosen model has values of 0.904 and 0.897 for R2 and Q2LOO, respectively, which are the results from the internal validation tests. These values are shown in Table 2. To ensure predictability, the external validation was conducted based on the external test set, R2ext. With the current model, the R2ext was found to be 0.880. This is acceptable for an external validation test and reinforces the predictive ability of the model. Further criteria were used to ensure the external predictive capability: the predictive squared correlation coefficients Q2-F1, Q2-F2, and Q2-F3. The coefficient values for the chosen model were 0.874, 0.873, and 0.899, respectively, which confirm once again the high predictive performance of the selected model. Other works have shown that the concordance correlation coefficient cross-validation (CCCcv) and the metrics parameters (r2m_ave) can be used for additional validation. The chosen model has a CCCcv of 0.946, which further reinforces the stability and predictive ability of the developed model. Moreover, the r2m_ave average was found to be 0.824, which adds to the predictive capability.
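The HAT (leverage) values behind the Williams plot can be computed directly from the descriptor matrix, as sketched below. Note that the common warning leverage h* = 3(p+1)/n evaluates to about 0.074 for 4 descriptors and 203 training structures, consistent with the 0.075 cutoff quoted above, although the exact QSARINS settings are not stated here.

```python
import numpy as np

def leverages(X):
    """Diagonal of the hat matrix H = X1 (X1'X1)^-1 X1' (intercept included)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    return np.einsum("ij,ji->i", X1 @ np.linalg.pinv(X1.T @ X1), X1.T)

n, p = 203, 4                       # training size and descriptor count
X = np.random.default_rng(3).normal(size=(n, p))
h = leverages(X)
h_star = 3 * (p + 1) / n            # ~0.0739, close to the 0.075 cutoff quoted
print(f"h* = {h_star:.4f}; {np.sum(h > h_star)} structures above h*")
```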
The root-mean-square error (RMSE) has low values of 0.007 and 0.008 for the internal and external validation, respectively. With all the internal and external criteria surpassed by the model, it is confirmed that the model is robust and stable and can be used to predict the refractive index of polymers. It has been validated by passing the R2, Q2LOO, RMSE, internal and external CCC, and r2m_ave criteria for the training and prediction sets. Additionally, the y-scrambling validation procedure confirmed the robustness and the absence of chance correlation. Conclusions A four-variable QSPR model was developed using a comprehensive and diverse dataset of 262 polymers. The influential descriptors in the model were found to be the ionization potential (Mi), a descriptor weighted by polarizability (GATS1p), and the molecular structural topology (WiA_RG and SpMAD_A). This model was found to have an R2 value of 0.904, while the internal and external validation parameters were found to be a Q2LOO of 0.897 and an R2ext of 0.880. The model was also subjected to and passed the y-scrambling, CCCcv, r2m_ave and applicability domain examinations. The ionization potential (Mi) and polarizability (GATS1p) were connected to the refractive index via the excitation energies of ground-state electrons. These descriptors have a large influence on the refractive index due to their interaction with the energy gap between ground-state and excited-state electrons. The matrix descriptors WiA_RG and SpMAD_A are associated with the molecular structural topology and size. By varying the shape and size of the molecule, light has a different path to travel through, which varies the speed of light and changes the refractive index. The characteristics of this model, as well as the internal and external validation parameters, confirm the predictive ability and robustness of this model. Further polymers with optical uses, of high or low refractive index, can be developed using this transparent and reproducible model. Conflicts of Interest: The authors declare no conflict of interest.
Long-term quality of life outcomes in patients undergoing microsurgical resection of vestibular schwannoma Background While previous studies have assessed patient-reported quality of life (QOL) of various vestibular schwannoma (VS) treatment modalities, few studies have assessed QOL as related to the amount of residual tumor and the need for retreatment in a large series of patients. Objective: To assess patient-reported QOL outcomes following VS resection with a focus on extent of resection and retreatment. Methods A retrospective chart review was performed using single-center institutional data of adult patients who underwent VS resection by the senior authors between 1989 and 2018 at Loyola University Medical Center. The Penn Acoustic Neuroma Quality of Life (PANQOL) survey was sent to all patients via postal mail. Results Fifty-five percent of 367 total patients were female, with a mean age of 61.6 years (SD 12.63). The mean period between surgery and PANQOL response was 11.4 years (IQR: 4.74-17.37). The median tumor size was 2 cm (IQR: 1.5-2.8). The mean total PANQOL score was 70 (SD 19). Patients who required retreatment reported lower overall scores (μdiff = -10.11, 95% CI: -19.48 to -0.74; p = 0.03) and face domain scores (μdiff = -20.34, 95% CI: -29.78 to -10.91; p < .001). There was no association between extent of resection and PANQOL scores in any domain. Conclusion In an analysis of 367 patients who underwent microsurgical resection of VS, extent of resection did not affect PANQOL scores, in contrast to previous reports in the literature, while the need for retreatment and facial function had a significant impact on patient-reported outcomes. Introduction Historically, VS resection carried a high risk of morbidity and mortality. However, with technological advancements and the development of surgical practice, outcomes have dramatically improved while balancing extent of resection with optimized neurological outcomes, including facial function and hearing preservation. 4-11 Patient-reported metrics have the ability to shed light onto the most clinically important outcomes that influence QOL, which can guide our focus and management of VS. The Penn Acoustic Neuroma Quality of Life (PANQOL) questionnaire has been validated as a method to assess QOL in patients with VS, and stratifies QOL outcomes into an overall score and subdomains relevant to various areas of life function. 12,14-21 Here, we utilize the PANQOL questionnaire in patients who elected to receive microsurgical resection of VS to determine the baseline and surgical characteristics that most significantly impact QOL outcomes. The present study not only analyzes QOL outcomes in a large microsurgical cohort, but also provides analysis of how facial and hearing preservation, extent of resection, and retreatment vary in their association with QOL outcomes. Patient population A retrospective review was conducted of patients aged 17 years and older who underwent microsurgical resection of VS by the senior authors (D.E.A. and J.P.L.) between 1989 and 2018 at Loyola University Medical Center. PANQOL surveys were mailed to patients' homes along with a return envelope with postage and a letter from the corresponding author, D.
E.A., detailing the study. Patients who did not respond within six weeks received a single follow-up phone call. This study was approved by the Loyola University Stritch School of Medicine Institutional Review Board (reference number 211402), and informed consent was received from all survey respondents. Patient characteristics A retrospective chart review was conducted to collect baseline demographic information and tumor characteristics, including tumor lateralization and size in the greatest dimension as seen on magnetic resonance imaging (MRI). Pre- and postoperative symptoms were recorded, including tinnitus, imbalance, headache, cerebrospinal fluid leak, and postoperative House-Brackmann (HB) grade. PANQOL survey The PANQOL questionnaire, developed in 2010 by Shaffer et al., 12 consists of 26 questions grouped into 7 domains determined to be most pertinent to quality of life (QOL): hearing, balance, facial, anxiety, energy, pain, and general health. Patients self-reported answers on a scale of 1-5, with 1 representing strong disagreement and 5 representing strong agreement. Individual scores from each question were then converted to a 100-point scale as follows: for all questions except question 25, strong agreement reflected a lower quality of life, so a score of 5 was assigned a value of 0, a score of 4 a value of 25, and so forth. For question 25, strong agreement reflected a higher quality of life, so a score of 5 was assigned a value of 100, a score of 4 a value of 75, and so forth. For each domain, the scores for the associated questions were averaged, with a higher score indicating a better QOL. The primary outcomes were the individual domain and total PANQOL scores. Extent of resection The presence of residual tumor was assessed intraoperatively using a custom 1 cm graduated bayoneted micro-measuring instrument. Extent of tumor resection was divided into two groups: gross total resection (GTR) and less than GTR. GTR was defined as total removal per the surgeon's operative note with no observable residual tumor identified on postoperative MRI. Less than GTR was characterized by any presence of a residual tumor placode, either measurable intraoperatively or on postoperative imaging. Retreatment Retreatment was defined as requiring at least one additional microsurgical intervention for resection of residual or recurrent tumor. The decision for retreatment was made between the operating surgeon and the patient in the setting of residual or recurrent tumor causing return or continuation of symptoms associated with VS. Hearing preservation Postoperative hearing preservation was graded based on the audiogram at most recent follow-up according to the Gardner-Robertson (GR) hearing scale. 22 Pure tone average (PTA) was calculated as the mean of 500, 1000, 2000, and 3000 Hz. Serviceable hearing was defined as GR grades 1 and 2. 22 Patients who were not candidates for hearing preservation surgery due to complete hearing loss prior to surgery, or whose data were lost, were excluded from analysis. Facial outcome Facial outcomes were graded using the House-Brackmann scoring system. 23
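Returning to the PANQOL scoring rules described above, a small Python sketch may make the conversion concrete (illustrative only; the three-item domain in the usage note is hypothetical, since the actual question-to-domain mapping is defined by the PANQOL instrument):

def item_to_100(question, response):
    # Map a 1-5 response to 0-100 (higher = better QOL).
    # Item 25 is the only question where agreement reflects better QOL.
    if question == 25:
        return (response - 1) * 25   # 1 -> 0, ..., 5 -> 100
    return (5 - response) * 25       # 1 -> 100, ..., 5 -> 0

def domain_score(responses, domain_items):
    # Average the converted scores of the questions in one domain.
    vals = [item_to_100(q, responses[q]) for q in domain_items]
    return sum(vals) / len(vals)

# Hypothetical usage, with responses mapping question number -> answer:
# domain_score({4: 2, 9: 5, 14: 1}, domain_items=[4, 9, 14])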
Grade I describes normal facial function, grade II mild dysfunction, grade III moderate dysfunction (notably with complete eye closure with effort), grade IV moderately severe dysfunction with incomplete eye closure, grade V severe dysfunction, and grade VI total paralysis. For the purpose of our study, grades I and II were considered ideal facial nerve functional outcomes. Year of operation Based on our previous study of the learning curve of VS resections of the operating neurosurgeons from 1988 to 2018, we examined PANQOL scores from 1988-2004, 2005-2009, and 2010-2018. The probability of attaining an HB score of I was twofold higher in 2005-2009 and 2010-2018 as compared with 1988-2004. 24 Statistical analysis Continuous variables were summarized using the mean with standard deviation or the median with interquartile range (IQR). Nominal and ordinal variables were summarized using counts and proportions. A linear regression model was used to test whether PANQOL scores were associated with age, months since symptoms began, and tumor size. In this model, the assumptions of linearity, normality, and homoscedasticity were assessed using residual plots. An independent-samples t-test was used to test whether the distribution of PANQOL scores varied by sex, residual symptoms, retreatment, hearing preservation status, presence of a CSF leak, and House-Brackmann grade. A general linear model was used to test whether the distribution of PANQOL scores varied by tinnitus, balance, and headache symptoms. A Kruskal-Wallis test was used to test for differences in the distribution of each PANQOL score among patients with HB grades of I, II, III, and IV-VI; when the overall significance value was less than .05, post-hoc pairwise comparisons were conducted with adjustment for inflated Type I error. 25-27 Following univariable analysis, multivariable general linear models were used to estimate adjusted PANQOL scores. In these models, covariates were included if they improved model fit as measured by Akaike's Information Criterion (AIC). In this study, a p-value less than .05 was considered statistically significant. All analyses were completed using SAS version 9.4 (Cary, NC). Baseline demographics Of the 881 patients who underwent VS microsurgical resection, 175 patients were either deceased or lost to follow-up with no valid address or phone number available. Of the 706 remaining patients, 367 PANQOL surveys were returned (52%). Fifty-five percent of patients were female, with a mean age of 61.6 years (SD 12.63). The mean time between surgery and PANQOL survey was 11.4 years (median 10.53; IQR: 4.74-17.37), and the median tumor size was 2 cm (IQR: 1.5-2.8). Two hundred four patients (55.6%) had a left-sided tumor, while the remaining 163 (44.4%) had a right-sided tumor. No patients had bilateral VS. The retrosigmoid approach was performed more commonly than the translabyrinthine or combined retrosigmoid/translabyrinthine approach (49% vs 37% vs 10%). A minority of patients underwent a middle fossa approach. The mean period of follow-up between surgery and last clinical encounter was 81.3 months (median 40.05 months; IQR: 13.10-91.77) (Table 1).
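For readers who wish to reproduce univariable comparisons of this kind outside SAS, the tests described above map directly onto standard routines; a minimal SciPy sketch (the arrays are randomly generated placeholders, not study data):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
retreat = rng.normal(60, 19, 30)      # placeholder PANQOL total scores
no_retreat = rng.normal(70, 19, 300)

# Independent-samples t-test, as used for retreatment status:
t, p = stats.ttest_ind(retreat, no_retreat)

# Kruskal-Wallis test across HB grade groups I, II, III, and IV-VI:
hb_groups = [rng.normal(m, 20, 40) for m in (75, 68, 62, 58)]
h_stat, p_kw = stats.kruskal(*hb_groups)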
Functional outcomes House-Brackmann (HB) scores at most recent follow-up were recorded for 359 patients. Seventy-six percent of patients reported HB grade I, 12.8% HB grade II, 8.4% HB grade III, 0.8% HB grade IV, 0.8% HB grade V, and 1.1% HB grade VI. HB scores were broken down into patients with ideal HB grade (I-II) (n = 319, 88.9%) and poor HB grade (III-VI) (n = 40, 11.1%) (Tables 2-5), in addition to further stratification (I vs II vs III vs IV-VI) in Table 6. Gardner-Robertson (GR) scores were used to determine the degree of hearing preservation, with serviceable hearing defined as GR I-II. Serviceable hearing at most recent follow-up was reported in 28% of patients with follow-up audiometric data. Hearing The mean score in the hearing domain was 56/100 (SD 24). PANQOL hearing domain scores were higher for males (μdiff = 5.21, 95% CI: .26 to 10.15; p = 0.04) and for those with hearing preservation (μdiff = 11.22, 95% CI: 1.33 to 21.11; p = 0.03). Otherwise, PANQOL hearing domain scores were not associated with the remaining patient characteristics in this sample of data (Supplemental Table 1). (Table note: Valid N = the number of patients used for the estimates; the sample size for the adjusted estimates = 260. PANQOL total scores range from 0 to 100, with higher scores indicating greater quality of life.) Balance The mean score in the balance domain was 69/100 (SD 26). For every 1-year increase in age, PANQOL balance scores declined by approximately 0.35 points (95% CI: −0.56 to −0.14; p = 0.001). Otherwise, PANQOL balance domain scores were not associated with the remaining patient characteristics in this sample of data (Supplemental Table 2). Face The mean score in the face domain was 78/100 (SD 25). Controlling for tumor size, retreatment status, and residual status, patients with poor HB grades reported lower QOL scores in the face domain than patients with ideal HB grades (μdiff = −35.31, 95% CI: −43.45 to −27.17; p < .001). Similarly, controlling for all other variables in the model, patients who underwent retreatment reported lower quality of life scores in the face domain (μdiff = −20.34, 95% CI: −29.78 to −10.91; p < .001) (Table 3). Anxiety The mean score in the anxiety domain was 80/100 (SD 25). Patients with a postoperative headache reported lower QOL in the PANQOL anxiety domain (μdiff = −14.68, 95% CI: −26.98 to −2.37; p = 0.01). Otherwise, PANQOL anxiety domain scores were not associated with the remaining patient characteristics in this sample of data (Supplemental Table 3). Energy The mean score in the energy domain was 68/100 (SD 26). Patients with a postoperative headache reported lower QOL in the PANQOL energy domain (μdiff = −15.31, 95% CI: −28.33 to −2.29; p = 0.02). Conversely, males reported higher PANQOL energy scores (μdiff = 5.60, 95% CI: .17 to 11.02; p = 0.04). Otherwise, PANQOL energy domain scores were not associated with the remaining patient characteristics in this sample of data (Supplemental Table 4). (Table notes: Valid N = the number of patients used for the unadjusted estimates; the sample size for the adjusted face-domain estimates = 260; PANQOL face scores range from 0 to 100, with higher scores indicating greater quality of life. For the pain item, N = the number of patients used for the estimates; the sample size for the adjusted estimates = 359. The PANQOL pain item asks participants to respond to the following statement using a five-point ordinal scale (1 = strongly disagree to 5 = strongly agree): "I have problems with head pain on the side of my acoustic neuroma tumor.")
Pain Of the 365 patients who responded to the pain domain question, 26.8% indicated that they experienced residual pain on the surgical side. Controlling for residual status, tinnitus, and months since symptoms began, patients with solely a postoperative headache were 2.8 times more likely to report higher agreement with the PANQOL pain item (95% CI: 1.32 to 6.14; p = 0.01). Controlling for all other variables in the model, patients with solely postoperative tinnitus were 3.3 times more likely to report higher agreement with the PANQOL pain item (95% CI: 1.23 to 8.90; p = 0.02) (Table 4). PANQOL scores by House-Brackmann grade There was significant variability in the PANQOL face and total scores when analyzed by HB grade. For the face domain, patients with HB grade I had a higher QOL (median = 100, IQR: 75-100) than patients with HB grade II (median = 58, IQR: 38-75), HB grade III (median = 42, IQR: 33-58; p < .001), and HB grade IV-VI (median = 42, IQR: 25-58; p < .001). No other pairwise comparisons were statistically significant. While there was overall variability among the four HB groups in the PANQOL total score (overall p = 0.02), no post-hoc pairwise comparison was statistically significant after adjusting for inflated Type I error (all p > 0.05) (Table 6). PANQOL scores by extent of resection Ninety-one percent of patients received GTR (n = 334), with the remainder receiving less than GTR for the purpose of neurologic preservation in the setting of closely adherent tissue. Extent of resection was not associated with PANQOL total score (μdiff = −1.91, 95% CI: … to .09). (Table note: N = the number of patients used for the estimates; the sample size for the adjusted estimates = 259. PANQOL health scores range from 0 to 100, with higher scores indicating greater QOL.) Discussion While most studies utilizing PANQOL have assessed how treatment modality impacts QOL in patients with VS, 13-21 we sought to determine the specific baseline characteristics and surgical outcomes that influence the QOL of patients who undergo microsurgical resection. Expectedly, we identified that functional outcomes, such as hearing preservation and HB grade, had a significant impact on self-reported scores within the hearing and facial domains. Also as expected, scores within the facial domain between 1988 and 2004 were, on average, lower than the facial domain scores in both 2005-2009 and 2010-2018, given that the chance of a patient having a postoperative HB I score was higher during the later periods. 24 In concordance with prior literature, postoperative headache was associated with lower QOL scores in the anxiety, energy, pain, and health domains. 15 Total PANQOL scores were negatively influenced by poor HB grade, retreatment status, and female sex. Our results demonstrate that extent of resection had no significant impact on PANQOL domain or overall scores in our patient population, which differs from a pivotal study by Link et al., who showed that greater extent of resection is associated with improved self-reported QOL. Their study analyzed long-term QOL in 143 patients who received either GTR (85%) or less than GTR (15%) from microsurgical removal of VS, demonstrating that those receiving GTR reported better facial, energy, health, and total PANQOL scores than those receiving less than GTR. 28 While Link et al. reported on extent of resection, the effect of the need for retreatment on QOL was not evaluated.
Significantly, we found that when controlling for extent of resection and HB grade, retreatment was significantly associated with lower face domain and total PANQOL scores. Thus, we suggest that retreatment impacts QOL while extent of resection does not. This idea builds upon suggestions by Link et al. that there is a psychological component to overall QOL outcomes among patients undergoing microsurgery, and that there may be an interplay between preoperative expectations, patient perception of residual tumor, and requirement of retreatment influencing satisfaction and QOL despite functional outcomes. 28 Although Link et al. demonstrated that extent of resection was positively associated with improved QOL, differences in study design may explain our differing results. Our present study consisted of a larger patient population (367 patients vs. 143 patients) and a longer mean period between surgery and PANQOL survey (11.4 years vs. 7.7 years), which may elucidate trends in patient perception of the importance of extent of resection on QOL over time. The approach to VS resection remains variable among the neurosurgical and otolaryngologic communities. Some surgeons approach VS management with tumor debulking followed by planned stereotactic radiosurgery, while others plan for complete resection with willingness to sacrifice neurological function. 10 Our previous study looking specifically at surgical approach and PANQOL scores showed that patients who had the retrosigmoid approach have higher PANQOL scores than those who underwent the translabyrinthine approach. 29 The two senior authors (DEA and JPL) aim for GTR while recognizing that a small amount of residual tumor as a means of functional preservation is acceptable. Inclusion of various tumor sizes in this analysis highlights a potential difference in expectations between patients with small and large tumors; patients with larger tumors showed a trend towards reporting improved PANQOL health and face scores, although these trends were not statistically significant. Reasonably, we agree with Link et al. in their proposition that patients who elect to have microsurgery for VS less than 3 cm may be psychologically biased to expect to have their tumors completely removed, and thus are less satisfied when discovering that residual tumor was left. 27 Limitations of this study include the retrospective nature of chart review, in addition to self-report bias on the PANQOL. Additionally, this study was performed at a single center by one interdisciplinary team, which may limit generalizability. Our response rate was 52%, with a lower response rate in earlier years, which may contribute significant response bias, and was notably lower than in other PANQOL studies. 15,28 Some patients included comments on their surveys indicating that they were unsure whether problems indicated by the survey were due to their VS or due to aging and other health conditions. Furthermore, due to the wide time range of this study, there were minor differences in reporting and data collection between patients with paper versus electronic medical records. It is also important to note that there have been advancements in technology, intraoperative monitoring, and surgical techniques over the time range of this study; however, there are numerous studies examining the effect of these advancements on the variables measured in our study, such as facial and hearing preservation and extent of resection. Therefore, reasonable inferences can be drawn about the effect of these advancements on our results.
Conclusion Our results demonstrate that patients who undergo surgical retreatment of VS following initial microsurgical intervention have lower patient-reported QOL than those who do not. In contrast to previously reported studies, we found that extent of resection does not impact QOL. Additionally, poor HB grade and female sex were negatively associated with total PANQOL scores, while postoperative headache was negatively associated with anxiety, energy, pain, and health domain scores. These results suggest that technological advancements should continue to focus on maximizing facial and cochlear nerve preservation and minimizing the chances of reoperation and postoperative headache. Furthermore, stratification of HB grades revealed a significant decrease in PANQOL face domain scores between HB grades I, II, and III, suggesting that definitions of favorable facial function outcomes should be revisited. Declaration of competing interest The authors do not have any disclosures. Table 1: Baseline characteristics among 367 patients who underwent microsurgical resection of vestibular schwannoma. Table 2: PANQOL total scores as a function of patient characteristics. Table 3: PANQOL face scores as a function of patient characteristics. Table 4: PANQOL pain scores as a function of patient characteristics. Table 5: PANQOL health scores as a function of patient characteristics.
Tuning as Ranking We offer a simple, effective, and scalable method for statistical machine translation parameter tuning based on the pairwise approach to ranking (Herbrich et al., 1999). Unlike the popular MERT algorithm (Och, 2003), our pairwise ranking optimization (PRO) method is not limited to a handful of parameters and can easily handle systems with thousands of features. Moreover, unlike recent approaches built upon the MIRA algorithm of Crammer and Singer (2003) (Watanabe et al., 2007; Chiang et al., 2008b), PRO is easy to implement. It uses off-the-shelf linear binary classifier software and can be built on top of an existing MERT framework in a matter of hours. We establish PRO's scalability and effectiveness by comparing it to MERT and MIRA and demonstrate parity on both phrase-based and syntax-based systems in a variety of language pairs, using large scale data scenarios. Introduction The MERT algorithm (Och, 2003) is currently the most popular way to tune the parameters of a statistical machine translation (MT) system. MERT is well-understood, easy to implement, and runs quickly, but can behave erratically and does not scale beyond a handful of features. This lack of scalability is a significant weakness, as it inhibits systems from using more than a couple dozen features to discriminate between candidate translations and stymies feature development innovation. Several researchers have attempted to address this weakness. Recently, Watanabe et al. (2007) and Chiang et al. (2008b) have developed tuning methods using the MIRA algorithm (Crammer and Singer, 2003) as a nucleus. The MIRA technique of Chiang et al. has been shown to perform well on large-scale tasks with hundreds or thousands of features (2009). However, the technique is complex and architecturally quite different from MERT. Tellingly, in the entire proceedings of ACL 2010 (Hajič et al., 2010), only one paper describing a statistical MT system cited the use of MIRA for tuning (Chiang, 2010), while 15 used MERT. 1 (Footnote 1: The remainder either did not specify their tuning method (though a number of these used the Moses toolkit (Koehn et al., 2007), which uses MERT for tuning) or, in one case, set weights by hand.) Here we propose a simpler approach to tuning that scales similarly to high-dimensional feature spaces. We cast tuning as a ranking problem (Chen et al., 2009), where the explicit goal is to learn to correctly rank candidate translations. Specifically, we follow the pairwise approach to ranking (Herbrich et al., 1999; Freund et al., 2003; Burges et al., 2005; Cao et al., 2007), in which the ranking problem is reduced to the binary classification task of deciding between candidate translation pairs. Of primary concern to us is the ease of adoption of our proposed technique. Because of this, we adhere as closely as possible to the established MERT architecture and use freely available machine learning software. The end result is a technique that scales and performs just as well as MIRA-based tuning, but which can be implemented in a couple of hours by anyone with an existing MERT implementation. Mindful that many would-be enhancements to the state-of-the-art are false positives that only show improvement in a narrowly defined setting or with limited data, we validate our claims on both syntax and phrase-based systems, using multiple language pairs and large data sets.
We describe tuning in abstract and somewhat formal terms in Section 2, describe the MERT algorithm in the context of those terms and illustrate its scalability issues via a synthetic experiment in Section 3, introduce our pairwise ranking optimization method in Section 4, present numerous large-scale MT experiments to validate our claims in Section 5, discuss some related work in Section 6, and conclude in Section 7. Tuning In Figure 1, we show an example candidate space, defined as a tuple ⟨∆, I, J, f, e, x⟩, where: • ∆ is a positive integer referred to as the dimensionality of the space • I is a (possibly infinite) set of positive integers, referred to as sentence indices • J maps each sentence index to a (possibly infinite) set of positive integers, referred to as candidate indices • f maps each sentence index to a sentence from the source language • e maps each pair ⟨i, j⟩ ∈ I × J(i) to the j-th target-language candidate translation of source sentence f(i) • x maps each pair ⟨i, j⟩ ∈ I × J(i) to a ∆-dimension feature vector representation of e(i, j) The example candidate space has two source sentences, three candidate translations for each source sentence, and feature vectors of dimension 2. It is an example of a finite candidate space, defined as a candidate space for which I is finite and J maps each index of I to a finite set. A policy of candidate space ⟨∆, I, J, f, e, x⟩ is a function that maps each member i ∈ I to a member of J(i). A policy corresponds to a choice of one candidate translation for each source sentence. For the example in Figure 1, policy p1 = {1 → 2, 2 → 3} corresponds to the choice of "he does not go" for the first source sentence and "I do not go" for the second source sentence. Obviously some policies are better than others. Policy p2 = {1 → 3, 2 → 1} corresponds to the inferior translations "she not go" and "I go not." We assume the MT system distinguishes between policies using a scoring function for candidate translations of the form h_w(i, j) = w · x(i, j), where w is a weight vector of the same dimension as feature vector x(i, j). This scoring function extends to a policy p by summing the scores of the policy's candidate translations: H_w(p) = Σ_{i∈I} h_w(i, p(i)). As can be seen in Figure 1, using w = [−2, 1], H_w(p1) = 9 and H_w(p2) = −8. The goal of tuning is to learn a weight vector w such that H_w(p) assigns a high score to good policies and a low score to bad policies. 2 To do so, we need information about which policies are good and which are bad. This information is provided by a "gold" scoring function G that maps each policy to a real-valued score. Typically this gold function is BLEU (Papineni et al., 2002), though there are several common alternatives (Lavie and Denkowski, 2009; Melamed et al., 2003; Snover et al., 2006; Chiang et al., 2008a). We want to find a weight vector w such that H_w behaves "similarly" to G on a candidate space s. We assume a loss function l_s(H_w, G) which returns the real-valued loss of using scoring function H_w when the gold scoring function is G and the candidate space is s. Thus, we may say the goal of tuning is to find the weight vector w that minimizes loss. MERT In general, the candidate space may have infinitely many source sentences, as well as infinitely many candidate translations per source sentence. In practice, tuning optimizes over a finite subset of source sentences 3 and a finite subset of candidate translations as well.
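The formalism above is easy to make concrete; here is a short Python sketch reproducing the Figure 1 numbers (purely illustrative):

import numpy as np

x = {  # feature vectors x(i, j) from Figure 1
    (1, 1): np.array([2, 4]),   (1, 2): np.array([3, 8]),  (1, 3): np.array([6, 1]),
    (2, 1): np.array([-3, -3]), (2, 2): np.array([1, -5]), (2, 3): np.array([-5, -3]),
}
w = np.array([-2, 1])

def h(w, i, j):                 # h_w(i, j) = w . x(i, j)
    return w @ x[(i, j)]

def H(w, policy):               # H_w(p) sums over the chosen candidates
    return sum(h(w, i, j) for i, j in policy.items())

p1 = {1: 2, 2: 3}               # "he does not go", "I do not go"
p2 = {1: 3, 2: 1}               # "she not go", "I go not"
print(H(w, p1), H(w, p2))       # 9 and -8, as in the text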
The classic tuning architecture used in the dominant MERT approach (Och, 2003) forms the translation subset and learns weight vector w via a feedback loop consisting of two phases. Figure 2 shows the pseudocode; its surviving optimization step reads "find vector w that minimizes l_s′(H_w, G)" followed by "return w". The Figure 1 example candidate space, reconstructed from the extracted table:

i | f(i)             | j | e(i, j)          | x(i, j)  | h_w(i, j) | g(i, j)
1 | "il ne va pas"   | 1 | "he goes not"    | [2 4]    |   0       | 0.28
1 | "il ne va pas"   | 2 | "he does not go" | [3 8]    |   2       | 0.42
1 | "il ne va pas"   | 3 | "she not go"     | [6 1]    | -11       | 0.12
2 | "je ne vais pas" | 1 | "I go not"       | [-3 -3]  |   3       | 0.15
2 | "je ne vais pas" | 2 | "we do not go"   | [1 -5]   |  -7       | 0.18
2 | "je ne vais pas" | 3 | "I do not go"    | [-5 -3]  |   7       | 0.34

During candidate generation, candidate translations are selected from a base candidate space s and added to a finite candidate space s′ called the candidate pool. During optimization, the weight vector w is optimized to minimize loss l_s′(H_w, G). For its candidate generation phase, MERT generates the k-best candidate translations for each source sentence according to h_w, where w is the weight vector from the previous optimization phase (or an arbitrary weight vector for the first iteration). For its optimization phase, MERT defines the loss function as follows: l_s′(H_w, G) = (max_p G(p)) − G(argmax_p H_w(p)). In other words, it prefers weight vectors w such that the gold function G scores H_w's best policy as highly as possible (if H_w's best policy is the same as G's best policy, then there is zero loss). Typically the optimization phase is implemented using Och's line optimization algorithm (2003). MERT has proven itself effective at tuning candidate spaces with low dimensionality. However, it is often claimed that MERT does not scale well with dimensionality. To test this claim, we devised the following synthetic data experiment: 1. We created a gold scoring function G that is also a linear function of the same form as H_w, i.e., G(p) = H_{w*}(p) for some gold weight vector w*. Under this assumption, the role of the optimization phase reduces to learning back the gold weight vector w*. 2. We generated a ∆-dimensionality candidate pool with 500 source "sentences" and 100 candidate "translations" per sentence. We created the corresponding feature vectors by drawing ∆ random real numbers uniformly from the interval [0, 500]. 3. We ran MERT's line optimization on this synthetic candidate pool and compared the learned weight vector w to the gold weight vector w* using cosine similarity. We used line optimization in the standard way, by generating 20 random starting weight vectors and hill-climbing on each independently until no further progress is made, then choosing the final weight vector that minimizes loss. We tried various dimensionalities from 10 to 1000. We repeated each setting three times, generating different random data each time. The results in Figure 3 indicate that as the dimensionality of the problem increases, MERT rapidly loses the ability to learn w*. Note that this synthetic problem is considerably easier than a real MT scenario, where the data is noisy and interdependent, and the gold scoring function is nonlinear. If MERT cannot scale in this simple scenario, it has little hope of succeeding in a high-dimensionality deployment scenario. Optimization via Pairwise Ranking We would like to modify MERT so that it scales well to high-dimensionality candidate spaces. The most prominent example of a tuning method that performs well on high-dimensionality candidate spaces is the MIRA-based approach used by Watanabe et al. (2007) and Chiang et al. (2008b; 2009).
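Before turning to that approach, here is a Python sketch of the synthetic-experiment harness just described (illustrative only; the line-optimization step is abstracted behind an `optimize` placeholder, which in the real experiment would be Och-style line search):

import numpy as np

def make_pool(dim, n_sents=500, n_cands=100, seed=0):
    rng = np.random.default_rng(seed)
    w_star = rng.standard_normal(dim)                     # gold weights
    feats = rng.uniform(0, 500, (n_sents, n_cands, dim))  # x(i, j)
    gold = feats @ w_star                                 # g(i, j) = w* . x(i, j)
    return feats, gold, w_star

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

feats, gold, w_star = make_pool(dim=100)
# w_learned = optimize(feats, gold)   # plug in MERT line search / PRO here
# print(cosine(w_learned, w_star))    # 1.0 would mean w* fully recovered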
Unfortunately, this approach requires a complex architecture that diverges significantly from the MERT approach, and consequently has not been widely adopted. Our goal is to achieve the same performance with minimal modification to MERT. With MERT as a starting point, we have a choice: modify candidate generation, optimization, or both. Although alternative candidate generation methods have been proposed (Macherey et al., 2008; Chiang et al., 2008b; Chatterjee and Cancedda, 2010), we will restrict ourselves to MERT-style candidate generation, in order to minimize divergence from the established MERT tuning architecture. Instead, we focus on the optimization phase. Basic Approach While intuitive, the MERT optimization module focuses attention on H_w's best policy, and not on its overall prowess at ranking policies. We will create an optimization module that directly addresses H_w's ability to rank policies, in the hope that this more holistic approach will generalize better to unseen data. Assume that the gold scoring function G decomposes in the following way: G(p) = Σ_{i∈I} g(i, p(i)) (1), where g(i, j) is a local scoring function that scores the single candidate translation e(i, j). We show an example g in Figure 1. For an arbitrary pair of candidate translations e(i, j) and e(i, j′), the local gold function g tells us which is the better translation. Note that this induces a ranking on the candidate translations for each source sentence. We follow the pairwise approach to ranking (Herbrich et al., 1999; Freund et al., 2003; Burges et al., 2005; Cao et al., 2007). In the pairwise approach, the learning task is framed as the classification of candidate pairs into two categories: correctly ordered and incorrectly ordered. Specifically, for candidate translation pair e(i, j) and e(i, j′), we want: g(i, j) > g(i, j′) ⟺ h_w(i, j) > h_w(i, j′). We can re-express this condition: g(i, j) > g(i, j′) ⟺ w · x(i, j) > w · x(i, j′) ⟺ w · (x(i, j) − x(i, j′)) > 0. Thus optimization reduces to a classic binary classification problem. We create a labeled training instance for this problem by computing difference vector x(i, j) − x(i, j′), and labeling it as a positive or negative instance based on whether, respectively, the first or second vector is superior according to gold function g. To ensure balance, we consider both possible difference vectors from a pair. For example, given the candidate space of Figure 1, since g(1, 1) > g(1, 3), we would add ([−4, 3], +) and ([4, −3], −) to our training set. We can then feed this training data directly to any off-the-shelf classification tool that returns a linear classifier, in order to obtain a weight vector w that optimizes the above condition. This weight vector can then be used directly by the MT system in the subsequent candidate generation phase. The exact loss function l_s(H_w, G) optimized depends on the choice of classifier. 4 Typical approaches to pairwise ranking enumerate all difference vectors as training data. For tuning, however, this means O(|I| · J²_max) vectors, where J_max is the cardinality of the largest J(i). Since |I| and J_max commonly range in the thousands, a full enumeration would produce billions of feature vectors. Out of tractability considerations, we sample from the space of difference vectors, using the sampler template in Figure 4. For each source sentence i, the sampler generates Γ candidate translation pairs ⟨j, j′⟩ and accepts each pair with probability α_i(|g(i, j) − g(i, j′)|). Among the accepted pairs, it keeps the Ξ with greatest g differential and, for each of the first Ξ members of the accepted list V, adds (x(i, j) − x(i, j′), sign(g(i, j) − g(i, j′))) and (x(i, j′) − x(i, j), sign(g(i, j′) − g(i, j))) to the training data. 5 (Footnote 4: See (Chen et al., 2009) for a brief survey. Footnote 5: The intuition for biasing toward high score differential is that our primary goal is to ensure good translations are preferred to bad translations, and not to tease apart small differences.)
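The reduction to binary classification is a few lines of code; a Python sketch using the Figure 1 pair from the text:

import numpy as np

def pair_instances(x_j, x_j2, g_j, g_j2):
    # Emit both labeled difference vectors for one candidate pair.
    label = 1 if g_j > g_j2 else -1
    return [(x_j - x_j2, label), (x_j2 - x_j, -label)]

# e(1, 1) vs e(1, 3): g = 0.28 vs 0.12
print(pair_instances(np.array([2, 4]), np.array([6, 1]), 0.28, 0.12))
# -> ([-4, 3], +1) and ([4, -3], -1), matching the text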
Scalability We repeated the scalability study from Section 3, now using our pairwise ranking optimization (hereafter, PRO) approach. Throughout all experiments with PRO we choose Γ = 5000, Ξ = 50, and, for each α_i, a step function that accepts a pair only when its g differential exceeds a fixed threshold. 6 (Footnote 6: We obtained these parameters by trial-and-error experimentation on a single MT system (Urdu-English SBMT), then held them fixed throughout our experiments. We obtained similar results using Γ = Ξ = 100 and, for each α_i, a logistic sigmoid function centered at the mean g differential of candidate translation pairs for the i-th source sentence. This alternative approach has the advantage of being agnostic about which gold scoring function is used.) We used MegaM (Daumé III, 2004) as a binary classifier in our contrasting synthetic experiment and ran it "out of the box," i.e., with all default settings for binary classification. 7 (Footnote 7: With the sampling settings previously described and MegaM as our classifier, we were able to optimize two to three times faster than with MERT's line optimization.) Figure 3 shows that PRO is able to learn w* nearly perfectly at all dimensionalities from 10 to 1000. As noted previously, though, this is a rather simple task. To encourage a disconnect between g and h_w and make the synthetic scenario look more like MT reality, we repeated the synthetic experiments but added noise to each feature vector, drawn from a zero-mean Gaussian with a standard deviation of 500. The results of the noisy synthetic experiments, also in Figure 3 (the lines labeled "Noisy"), show that the pairwise ranking approach is less successful than before at learning w* at high dimensionality, but still greatly outperforms MERT. Discussion The idea of learning from difference vectors also lies at the heart of the MIRA-based approaches (Watanabe et al., 2007; Chiang et al., 2008b) and the approach of Roth et al. (2010), which, similar to our method, uses sampling to select vectors. Here, we isolate these aspects of those approaches to create a simpler tuning technique that closely mirrors the ubiquitous MERT architecture. Among other simplifications, we abstract away the choice of MIRA as the classification method (our approach can use any classification technique that learns a separating hyperplane), and we eliminate the need for oracle translations. An important observation is that BLEU does not satisfy the decomposability assumption of Equation (1). An advantage of MERT is that it can directly optimize for non-decomposable scoring functions like BLEU. In our experiments, we use the BLEU+1 approximation to BLEU (Liang et al., 2006) to determine class labels. We will nevertheless use BLEU to evaluate the trained systems. Experiments We now turn to real machine translation conditions to validate our thesis: we can cleanly replace MERT's line optimization with pairwise ranking optimization and immediately realize the benefits of high-dimension tuning. We now detail the three language pairs, two feature scenarios, and two MT models used for our experiments. For each language pair and each MT model we used MERT, MIRA, and PRO to tune with a standard set of baseline features, and used the latter two methods to tune with an extended set of features. 8 (Footnote 8: MERT could not run to a satisfactory completion in any extended feature scenario; as implied in the synthetic data experiment of Section 3, the algorithm makes poor choices for its weights, and this leads to low-quality k-best lists and dismal performance, near 0 BLEU in every iteration.)
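Summarizing the Figure 4 sampling procedure from the previous section in code (a sketch; the acceptance threshold of the step function is a placeholder, since the paper leaves its exact value to the trial-and-error tuning of footnote 6):

import numpy as np

def sample_pairs(x_i, g_i, gamma=5000, xi=50, thresh=0.05, seed=0):
    # x_i: (n_cands, dim) feature vectors; g_i: (n_cands,) local gold scores.
    rng = np.random.default_rng(seed)
    accepted = []
    for _ in range(gamma):
        j, j2 = rng.integers(len(g_i), size=2)
        diff = abs(g_i[j] - g_i[j2])
        if diff > thresh:                    # step-function alpha_i
            accepted.append((diff, j, j2))
    accepted.sort(reverse=True)              # keep greatest g differentials
    data = []
    for _, j, j2 in accepted[:xi]:
        sign = 1 if g_i[j] > g_i[j2] else -1
        data.append((x_i[j] - x_i[j2], sign))
        data.append((x_i[j2] - x_i[j], -sign))
    return data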
At the end of every experiment we used the final feature weights to decode a held-out test set and evaluated it with case-sensitive BLEU. The results are in Table 1. Systems We used two systems, each based on a different MT model. Our syntax-based system (hereafter, SBMT) follows the model of Galley et al. (2004). Our phrase-based system (hereafter, PBMT) follows the model of Och and Ney (2004). In both systems we learn alignments with GIZA++ (Och and Ney, 2000) using IBM Model 4; for Urdu-English and Chinese-English we merged alignments with the refined method, and for Arabic-English we merged with the union method. Table 2 notes the sizes of the datasets used in our experiments. All tune and test data have four English reference sets for the purposes of scoring. Urdu-English The training data for Urdu-English is that made available in the constrained track of the NIST 2009 MT evaluation. This includes many lexicon entries and other single-word data, which accounts for the large number of lines relative to word count. The NIST 2008 evaluation set, which contains newswire and web data, is split into two parts; we used roughly half each for tune and test. We trained a 5-gram English language model on the English side of the training data. Arabic-English The training data for Arabic-English is that made available in the constrained track of the NIST 2008 MT evaluation. The tune set, which contains only newswire data, is a mix from NIST MT evaluation sets from 2003-2006 and from GALE development data. The test set, which contains both web and newswire data, is the evaluation set from the NIST 2008 MT evaluation. We trained a 4-gram English language model on the English side of the training data. Chinese-English For Chinese-English we used 173M words of training data from GALE 2008. For SBMT we used a 32M-word subset for extracting rules and building a language model, but used the entire training data for alignments and for all PBMT training. The tune and test sets both contain web and newswire data. The tune set is selected from NIST MT evaluation sets from 2003-2006. The test set is the evaluation set from the NIST 2008 MT evaluation. We trained a 3-gram English language model on the English side of the training data. Features For each of our systems we identify two feature sets: baseline, which corresponds to the typical small feature set reported in the current MT literature, and extended, a superset of baseline that adds hundreds or thousands of features. Specifically, we use 15 baseline features for PBMT, similar to the baseline features described by Watanabe et al. (2007). We use 19 baseline features for SBMT, similar to the baseline features described by Chiang et al. (2008b). We used the following feature classes in both SBMT and PBMT extended scenarios: • Discount features for rule frequency bins (cf. Chiang et al. (2009), Section 4.1) • Target word insertion features 9 We used the following feature classes in SBMT extended scenarios only (cf. Chiang et al. (2009)): … We used the following feature classes in PBMT extended scenarios only: • Unigram word pair features for the 80 most frequent words in both languages plus tokens for unaligned and all other words (cf. Watanabe et al.
(2007), Section 3.2.1) 11 • Source, target, and joint phrase length features from 1 to 7, e.g., "tgt=4", "src=2", and "src/tgt=2,4" The feature classes and the number of features used within those classes for each language pair are summarized in Table 3. Tuning settings Each of the three approaches we compare in this study has various details associated with it that may prove useful to those wishing to reproduce our results. We list the choices made for the various tuning methods here, and note that all our decisions were made in keeping with best practices for each algorithm. MERT We used David Chiang's CMERT implementation of MERT that is available with the Moses system (Koehn et al., 2007). We ran MERT for up to 30 iterations, using k = 1500, and stopping early when the accumulated k-best list does not change in an iteration. In every tuning iteration we ran MERT once with weights initialized to the last iteration's chosen weight set and 19 times with random weights, and chose the best of the 20 ending points according to G on the development set. The G we optimize is tokenized, lower-cased 4-gram BLEU (Papineni et al., 2002). MIRA We for the most part follow the MIRA algorithm for machine translation as described by Chiang et al. (2009), 12 but instead of using the 10-best of each of the best h_w, h_w+g, and h_w−g, we use the 30-best according to h_w. 13 (Footnote 12: We also acknowledge the use of David Chiang's code. Footnote 13: This is a more realistic scenario for would-be implementers of MIRA, as obtaining the so-called "hope" and "fear" translations from the lattice or forest is significantly more complicated than simply obtaining a k-best list. Other tests comparing these methods have shown a 0.1 to 0.3 BLEU drop using 30-best h_w on Chinese-English (Wang, 2011).) We use the same sentence-level BLEU calculated in the context of previous 1-best translations as Chiang et al. (2008b; 2009). We ran MIRA for 30 iterations. PRO We used the MegaM classifier and sampled as described in Section 4.2. As previously noted, we used BLEU+1 (Liang et al., 2006) for g. MegaM was easy to set up and ran fairly quickly; however, any linear binary classifier that operates on real-valued features can be used, and in fact we obtained similar results using the support vector machine module of WEKA (Hall et al., 2009) as well as the Stanford classifier (Manning and Klein, 2003). We ran for up to 30 iterations and used the same k and stopping criterion as was used for MERT, though variability of sampling precluded list convergence. While MERT and MIRA use each iteration's final weights as a starting point for hill-climbing in the next iteration, the pairwise ranking approach has no explicit tie to previous iterations. To incorporate such stability into our process we interpolated the weights w learned by the classifier in iteration t with those from iteration t−1 by a factor of Ψ, such that w_t = Ψ · w + (1 − Ψ) · w_{t−1}. We found Ψ = 0.1 gave good performance across the board. Discussion We implore the reader to avoid the natural tendency to compare results using baseline vs. extended features or between PBMT and SBMT on the same language pair. Such discussions are indeed interesting, and could lead to improvements in feature engineering or sartorial choices due to the outcome of wagers (Goodale, 2008), but they distract from our thesis. As can be seen in Table 1, for each of the 12 choices of system, language pair, and feature set, the PRO method performed nearly the same as or better than MIRA and MERT on test data.
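The inter-iteration weight smoothing described above is a one-liner; a sketch with Ψ = 0.1 as reported:

def interpolate(w_new, w_prev, psi=0.1):
    # w_t = psi * w + (1 - psi) * w_{t-1}
    return [psi * a + (1 - psi) * b for a, b in zip(w_new, w_prev)]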
In Figure 5 we show the tune and test BLEU using the weights learned at every iteration for each Urdu-English SBMT experiment. Typical of the rest of the experiments, we can clearly see that PRO appears to proceed more monotonically than the other methods. We quantified PRO's stability as compared to MERT by repeating the Urdu-English baseline PBMT experiment five times with each configuration. The tune and test BLEU at each iteration are depicted in Figure 6. The standard deviation of the final test BLEU of MERT was 0.13 across the five experiment instances, while PRO had a standard deviation of just 0.05. Related Work Several works (Shen et al., 2004; Cowan et al., 2006; Watanabe et al., 2006) have used discriminative techniques to re-rank k-best lists for MT. Tillmann and Zhang (2005) used multi-class stochastic gradient descent to learn feature weights for an MT model. Och and Ney (2002) used maximum entropy to tune feature weights but did not compare pairs of derivations. Ittycheriah and Roukos (2005) used a maximum entropy classifier to train an alignment model using hand-labeled data. Xiong et al. (2006) also used a maximum entropy classifier, in this case to train the reordering component of their MT model. Lattice- and hypergraph-based variants of MERT (Macherey et al., 2008; Kumar et al., 2009) are more stable than traditional MERT, but also require significant engineering efforts. Conclusion We have described a simple technique for tuning an MT system that is on par with the leading techniques, exhibits reliable behavior, scales gracefully to high-dimension feature spaces, and is remarkably easy to implement. We have demonstrated, via a litany of experiments, that our claims are valid and that this technique is widely applicable. It is our hope that the adoption of PRO tuning leads to fewer headaches during tuning and motivates advanced MT feature engineering research.
On rate of Universe expansion We offer another method for describing the contemporary stage of the evolution of the Universe. The method is based upon the classical Einstein equations, without dark energy or other hypothetical fields. ON RATE OF UNIVERSE EXPANSION Lennur Ya. Arifov 1 Results of measurements of the dependence of z on the photometric distance d_L to supernovae SNe Ia, carried out on the Hubble Space Telescope, revealed the failure of the standard cosmological models for the modern epoch of the evolution of the Universe. In order to match the theoretical curve with the experimental one within the framework of the standard models, it was necessary to change the Einstein equations. Selection of a special value for the cosmological constant Λ allowed the harmony to be restored, but only within the ΛCDM standard model. The measurements of the function d_L(z) are now taken as experimental proof of the existence of dark energy, with a density several times larger than the average density of ordinary matter. According to the ΛCDM model, the negative pressure of dark energy caused the accelerated expansion of the Universe. This induced the author to think more carefully about the bases of the standard cosmological models and to propose another method for describing the modern epoch of the Universe within the framework of the classic Einstein equations. INTRODUCTION Use of the supernovae SNe Ia as a standard candle (Branch & Tammann 1992) for extragalactic objects allowed the measurement accuracy of the dependence of the photometric distance d_L of objects on the shift z of their spectral lines to be increased. As a result of the measurements of the function d_L(z) in the interval z ∈ [0, 0.9] and the theoretical interpretation of the obtained data, Riess et al. (1998), Garnavich et al. (1998), and Perlmutter et al. (1999) came to a conclusion fundamental for cosmology: the expansion of the Universe is accelerating. This means that exotic forms of gravitational-interaction sources with the energetic characteristic ρ + 3p ≤ 0 (here ρ is the mass-energy density and p the pressure) dominate over all the reliably known forms, whose equations of state satisfy the condition ρ + 3p > 0. What is more, if the standard models are adequate to the modern state of the Universe, then the results of the measurements of d_L(z) for SNe Ia are to be interpreted as experimental proof of the existence of the previously hypothetical form of dark energy (Weinberg 1989; Peebles 1999; Coldwell & Steinhardt 1998; Wang et al. 2000), which is connected with the cosmological constant Λ (equation of state ρ + p = 0). We do not call into question the results of the measurements of the function d_L(z) for SNe Ia. But we note that a negative value of the deceleration parameter q_0 (corresponding to accelerated expansion of the Universe) is not a direct consequence of the measurement results, but follows from their theoretical interpretation. The real result of the measurements is the detection of a nonlinear dependence of the registered energy flux density of SNe Ia on the shift parameter z of their spectral lines. The character of this nonlinear dependence is compatible with the theory of the function d_L(z) in the standard models only if the equation of state of the dominating component of all possible sources of the gravitational field satisfies the condition ρ + 3p < 0. In that way, there is a dilemma: either the bases of the standard models are wrong, or, in addition to the reliably known forms of gravitational-field sources, the Universe is filled with other forms with exotic equations of state and a mass-energy density several times the typical matter density. These forms would then define the character of the evolution of the Universe.
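For reference, the standard definitions of the quantities q_0, ρ_c, and Ω used throughout, restated here in LaTeX from general knowledge because the paper's displayed equations did not survive extraction:

H \equiv \frac{\dot a}{a}, \qquad
q_0 \equiv -\left.\frac{\ddot a \, a}{\dot a^{2}}\right|_{t_0}, \qquad
\rho_c \equiv \frac{3 H_0^{2}}{8 \pi G}, \qquad
\Omega_X \equiv \frac{\rho_X}{\rho_c}.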
In the second case of this dilemma, negative pressure acts as a gravitational repulsive force, and the results of the experiments are interpreted as accelerated expansion of the Universe. In the ΛCDM standard model this dilemma is solved in favour of the traditional homogeneous isotropic models introduced in the seminal articles (Friedmann 1922, 1924) dedicated to the construction of cosmological models on the basis of the Einstein equations. Taking into account the fundamental role of the value q_0 < 0 for cosmology and physics as a whole, we think it quite suitable to consider critically those basic cosmological propositions which form the basis for the interpretation of the experimental data and the theory of the function d_L(z). We investigate this problem in the following two sections. In sections 4-6 we state an alternative model of the evolutionary stage A of the Universe and the derivation of a theoretical formula for d_L(z). This formula agrees with the experimental curve without any additional parameters or exotic energy forms (see section 7). The Universe decelerates while expanding: the deceleration parameter q_0 equals Ω_r (ρ_r being the cosmic microwave background energy density), and the spatial curvature of the Universe is negative because k = −1. BASIC PROPOSITIONS OF STANDARD MODELS Let us concentrate on the following basic propositions of the standard cosmological models: 1. Space-time is homogeneous and isotropic. Its geometry is defined by the quadratic form (Robertson 1933), the Robertson-Walker line element (in units with c = 1): ds² = dt² − a²(t)[dχ² + f_k²(χ)(dθ² + sin²θ dφ²)], where f_k(χ) = sin χ, χ, sinh χ for k = +1, 0, −1, and χ, θ, φ are the radial (dimensionless) and angular coordinates of points on the sphere. 2. The distribution of gravitational-field sources is continuous, homogeneous, and isotropic. It is described by the stress-energy tensor of a multicomponent perfect fluid filling the space, with mass-energy density ρ(t) and pressure p(t). 3. In the reference frame co-moving with the perfect fluid, the Einstein equations reduce to the Friedmann form: (ȧ/a)² + k/a² = (8πG/3)ρ (2a), ä/a = −(4πG/3)(ρ + 3p) (2b). Here and further, a dot over a symbol denotes the derivative of the corresponding quantity with respect to time t. 4. Of all the known components of the perfect fluid, the main contribution to the mass-energy density comes from baryonic and dark matter. The most investigated, baryonic matter is concentrated mainly in the stars of the galaxies and partly in the interstellar gas-dust medium. Its mass density is evaluated by … Dark matter cannot be observed directly, but according to many independent indirect measurements, the estimation (Olive 2002) of the dark mass density gives the value … Dark mass is also concentrated in galaxies and galactic clusters; that is why it consists of elements with non-zero rest mass. The energy density of the cosmic microwave background (CMB) radiation is measured much more accurately … The spatial energy distribution of the CMB radiation is homogeneous and isotropic, the relative value of the inhomogeneity being ∼10⁻⁵ (Sazhin 2004). If one puts k = 0, then formulas (4)-(6) reduce to the following form: … In the CDM standard models Ω_Λ = 0, and formulas (4)-(6) can be reduced to: … Riess et al. (1998), Garnavich et al. (1998), and Perlmutter et al. (1999) showed that formula (8) gives understated values as z increases. The conclusion is that the results of the measurements testify in favour of ρ_Λ ≈ (0.6÷0.7)ρ_c, and correspondingly in favour of the accelerated expansion of the Universe. CRITICAL ANALYSIS OF BASIC PROPOSITIONS OF STANDARD MODELS States 2 and 3 are in doubt. State 2 replaces the real matter distribution of the Universe by a homogeneous multicomponent thermodynamic system. Such a replacement in cosmology is called matter "homogenization" (Weinberg 1972).
State 3 means that, for the real matter distribution, the Einstein equations are replaced by equations (2) for the homogenized matter. These replacements need to be justified, and have a right to exist only if they do not contradict the observational data. The Einstein equations are local and establish an exact equality between a geometrical characteristic of space-time and an energetic characteristic of the gravitational-field sources at every point of space at every time: G(g) = κT (9). Here G is the Einstein tensor, g the metric tensor of space-time, and T the stress-energy tensor; the Einstein and stress-energy tensors are defined on the given metric tensor. If we turn to the evolution of the Universe, then in the past we can conventionally distinguish several significantly different stages of evolution (Fig. 1). That the Universe passed through stages C and B follows with high probability from the extrapolation into the past of known observational data: the expansion of the Universe and the non-zero homogeneous isotropic CMB radiation in the Universe at the present time. Stage C is characterized by a high temperature of the gravitational-field sources and a high value of the energy density; matter is in the state of an electrically neutral plasma. Due to the expansion of the Universe, the structure of the plasma changes with time, from elementary particles of different types in the early stages of evolution C to electrons, neutrinos, hydrogen, helium and deuterium atomic nuclei, and other chemical elements in the late stages. Matter at that stage comprises the elements of the future dark matter. As the plasma and the electromagnetic radiation were in thermodynamic equilibrium, the plasma should have had a homogeneous and isotropic mass and energy distribution, with small deviations ∼10⁻⁵ from homogeneity. That is why it is possible to model the right-hand side of equations (9) by a homogeneous equilibrium system in stage C. Form (1) in this case is the one which reveals the symmetry (homogeneity and isotropy) of the left-hand side of equations (9). Therefore, the standard cosmological models adequately describe stage C of the evolution of the Universe. Stage A differs radically from stage C. The distribution of gravitational-field sources is quite inhomogeneous in stage A. Here we have a complicated system in which two thermodynamic phases can be distinguished quite definitely and reliably, and, under special conditions, a third phase. 1 The radiation phase, constituted mainly by the CMB radiation, occupies most of the space of the Universe. The relative value of its local deviations from homogeneity is ∼10⁻⁵. The mass density inside an object of the stellar phase varies by a factor of 10¹⁰ or even more, while the star number density changes from zero in the intergalactic voids (their volume is ∼10²³ pc³) to ∼10⁷ pc⁻³ in the centers of the galaxies. The surface temperature of the stellar phase greatly exceeds the temperature of the radiation phase. The galactic phase is the aggregate of galaxies, groups, and galactic clusters. It comprises, as parts of a system, the stellar phase, the dark matter of the Universe, and the interstellar matter inside galaxies. The galaxy number density in the Universe changes from zero in the intergalactic voids to … . That is why the volume of the galactic phase is ∼10⁻⁷-10⁻⁸ of the volume of the Universe. Galaxies and galactic clusters are gravitationally bound local objects and have precise bounds. But the bounds of the galactic phase do not coincide with the bounds of the radiation phase. The bounds of the radiation phase are the surfaces of the stellar phase and the surfaces of ionized gas clouds inside galaxies.
So the separation of the galactic phase is conventional and is introduced for convenience in analysing the spatial matter distribution. It is precisely the galactic phase that constitutes the large-scale structure of the Universe (Oort 1983). Signs of homogeneity appear in this structure on scales $> 200\ \mathrm{Mpc}$; above the lower limit of the homogeneity scale, according to different estimates (Tarakanov 2005), there are no signs of inhomogeneity of the galactic phase. For the homogenization of the real matter distribution according to proposition 2 of the standard models, the existence of a scale $l \ll L_U$ is required ($L_U$ being the size of the Universe), and the degree of homogeneity of matter has to be no less than $\sim 10^{-5}$ (matching the degree of homogeneity of the radiation phase). Assume that, as a result of data processing, such a scale $l$ is found. After averaging the stress-energy tensor on the scale $l$, it is necessary to make a double substitution in the right-hand side of equations (9). Here $T_{ik}$ is the stress-energy tensor of the perfect fluid, and $\langle g \rangle_l$ corresponds to the metric tensor of form (1). Then equations (2a, b) used in the standard models coincide with the substituted equations but do not coincide with the averaged Einstein equations.

Therefore we can identify several problems; without their solution, the application of the standard models to the description of the evolutionary stages A and B of the Universe is not correct.

1. A simple definition of the value $\langle X \rangle$ for tensor quantities in Riemannian geometry in the absence of spatial symmetry. (The definition of $\langle X \rangle$ for scalar functions is beyond doubt.)

2. Substantiation of this equality for the real matter distribution in stage A of the Universe.

3. Proof of the equality for an arbitrary metric tensor.

It seems that the first problem has no solution. Summation (integration) of tensor quantities, excepting scalar functions, defined on some region of a Riemannian space has no single-valued meaning, yet the definition of $\langle X \rangle$ is based on summation. Such problems have already been discussed in general relativity in attempts to formulate integral laws of conservation of energy, momentum and angular momentum; as is known, no satisfactory solution was found. The second and third problems are connected with the first. Proving equation (14) is not a simple task, because of the high degree of nonlinearity of the function $G(g)$, even if the first problem is somehow solved. Without proving equality (14) it is not possible to establish a connection between the Einstein equations in the form (9) or (12) and equations (2a, b) of the standard models.

Stage B of the evolution of the Universe is transitional. After the recombination of the negative and positive electric charges of the plasma in stage C, the radiation phase separates from the neutral matter; the fluctuations of the matter density increase and the matter becomes structured. The description of the dynamics of the matter in this stage requires kinetic methods and correspondingly transformed Einstein equations. The arguments for applying the standard models in stage B are even weaker than for stage A.

EVOLUTIONARY STAGE A OF THE UNIVERSE

In the evolutionary stage A of the Universe we can distinguish quite definitely two forms of gravitationally bound objects, comprising the stellar and galactic phases with well-defined boundaries. Denote the part of space inside these boundaries as $D_m$. Equations (15a) and (15b) differ essentially from the physical point of view. The mass density of matter in the stellar phase greatly exceeds the mass density in the galactic phase, and the latter exceeds the mass density of the radiation phase.
That is why equations (15a) define the role of the gravitational field in the internal structure of stars, in the motion of stars and other matter in the galaxies, and in the motion of galaxies in galactic clusters relative to the centre of inertia, while the homogeneous radiation phase fills the whole space of the Universe. We assert that the qualitative and, for the most part, quantitative characteristics of the dynamics of the evolution of the Universe in stage A are defined by equations (15b). The centres of inertia of galactic clusters are at rest in the reference frame co-moving with the radiation phase.

EINSTEIN EQUATIONS FOR THE RADIATION PHASE

The energy of the radiation phase is continuously distributed in space and is not strictly homogeneous. There are two reasons for its inhomogeneity. The first reason is evolutionary: fluctuations of the energy density of the plasma and the electromagnetic field in stage C were conserved by the radiation phase after its separation from matter in stage B and were transformed during the expansion of the Universe. The other reason is the gravitational interaction of the radiation phase with the matter of the galactic phase. In stage A the inhomogeneity of the radiation phase is tied to the matter inhomogeneity in the stellar and galactic phases through equations (15c), which sew the gravitational field together on the boundary of the space regions $D_m$ and $D_r$. An estimate of the inhomogeneity of this nature is given by the dimensionless ratio of the mass to the linear size of the objects of the galactic phase; for most galaxies this ratio lies within the limits $10^{-5}\text{-}10^{-7}$. The stress-energy tensor of the radiation phase has the form given in (18); denote the metric tensor of the form in brackets on the right-hand side of equation (18) accordingly.

PHOTOMETRIC DISTANCE FUNCTION IN STAGE A OF THE UNIVERSE

The theory of the Hubble effect and the derivation of the photometric distance function $d_L$ are based on the assumption that the observer and the object emitting the light registered by the observer are at rest in the reference frame co-moving with the radiation phase. Both the observer and the object belong to the region $D_m$, and the light propagates in the region $D_r$. Information about spectral-line shifts of cosmological origin is contained precisely in the properties of the region $D_r$ of the Universe. But it is overlaid by the Doppler effect (caused by the velocities of the object and the observer relative to the centres of inertia of the gravitationally bound subsystems $D_m$ in which they are located) and by the gravitational shift of spectral lines (caused by the local gravitational fields in $D_m$), and it is therefore estimated with a large error. In any case, the relative errors of modern estimates of $H_0$ are $\sim 10^{-1}$, far exceeding the magnitude of this ratio. That is why the geometrical and physical properties of the radiation phase in stage A of the Universe are revealed, with the necessary accuracy, by formulas (21), (22) and (27). Using the formula defining the frequency of an electromagnetic wave, where $k$ is the tangent vector of the isotropic (null) geodesics, we find that in the space-time (27) the expansion of the Universe in stage A takes place with deceleration; the deceleration parameter is equal to $\Omega_r$. If $L$ is the luminosity of the object on the world line $\Gamma_1$ (see Fig. 2), then the photometric distance $d_L$ from the observer on the line $\Gamma_2$ to the object is defined by the corresponding equality. Solving equality (32b) for the function $\eta(z)$ and substituting into eqs. (39) and (38a), we obtain formula (40). When $z > 100$ this model is not applicable.
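For the comparison carried out in the next section, it is convenient to have the flat $\Lambda$CDM luminosity distance on which formula (7a) is based. The source's rendering of (7a) did not survive extraction, so we quote the standard textbook expression (with $\Omega_m + \Omega_\Lambda = 1$), which we take to be equivalent to it:

$$d_L^{\Lambda}(z) = \frac{c\,(1+z)}{H_0}\int_0^z \frac{dz'}{\sqrt{\Omega_m\,(1+z')^3 + \Omega_\Lambda}}.$$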
COMPARISON OF THE PHOTOMETRIC DISTANCE FUNCTION WITH EXPERIMENTAL DATA

To compare the theoretical function $d_L(z)$ defined by formula (40) with experimental data, the numerical results of the measurements of the dependence of $z$ on the energy flux density coming from the supernovae SNe Ia, the standard candle, are required. As we do not have them, we use formulas (7) as the experimental function $d_L(z)$. Formula (7a) is an exact theoretical formula in the $\Lambda$CDM model and agrees with the measurement results (Riess et al. 1998; Garnavich et al. 1998; Perlmutter et al. 1999) in the interval $z \in [0, 0.9]$. Therefore, denoting its right-hand side as $d_L^\Lambda$, we rewrite (7a) accordingly, corresponding to the "data processing" of the measured photometric distances of SNe Ia (Riess et al. 1998; Garnavich et al. 1998; Perlmutter et al. 1999).

1. The distribution of baryonic and dark matter in the modern epoch of the evolution of the Universe is not homogeneous. Therefore the use of the Einstein equations in the form traditionally used in the standard cosmological models, since the publication of Friedmann's papers (1922, 1924), cannot be acknowledged as correct. Replacing the real distribution of matter and radiation by "homogenized" matter, a perfect fluid, in the right-hand side of the Einstein equations presupposes a corresponding averaging of the left-hand side. Here lies the hidden reason for the impossibility of describing theoretically, within the frame of the CDM ($\Lambda = 0$) standard models, the nonlinear dependence of the function $d_L(z)$ obtained in measurements with the Hubble Space Telescope. By introducing into the classical Einstein equations a special term interpreted as vacuum energy, and by selecting the value of the additional parameter $\Lambda$, it was possible to fit the theoretical curve in the interval $z \in [0, 0.9]$. As the interval increases (measurements of the photometric distances of more distant supernovae SNe Ia), new problems can be expected to appear.

2. The standard models adequately describe the evolutionary stage C of the Universe, when the real distribution of matter and radiation at high temperature is, to a high degree of precision, a homogeneous equilibrium thermodynamic system. The alternative to the standard models in stage A, including the modern epoch, is either a transition from the Einstein equations to averaged ones, or the statement of a boundary-value problem for the Einstein equations. In this work an account of a variant of the second alternative is given.

3. ... of the volume of the radiation phase. 3.3. The centres of inertia of the members of the large-scale structure are at rest in the reference frame co-moving with the radiation phase.

4. Within the scope of this model a function $d_L(z)$ was obtained whose precision is determined by the degree of homogeneity of the energy distribution of the radiation phase and exceeds by several orders of magnitude the accuracy of modern measurements of the dependence of $d_L$ on $z$. It agrees satisfactorily with the measurement results (Riess et al. 1998; Garnavich et al. 1998; Perlmutter et al. 1999).

This calibration is incorrect because the transformations (A19) do not change the character of the reference frame: after the coordinate transformations it remains co-moving, as is assumed in the Einstein equations (A5). The equalities (A18) in the calibration (A20) coincide with eqs. (A8a, c, d, e). Now use the definition of the kinematical functions of the CMB radiation; in the co-moving reference frame they reduce to a set of equalities. Taking into account the lemma conditions (A7) and equality (A20a), we can easily derive equalities (A8f, g, i) from eqs. (A21)-(A24). ∎

Necessity. Equality (A7b) follows from eq. (A8c).
Equating now the left-hand sides of equations (A21b) and (A23b) to zero, in accordance with eqs. (A8f) and (A8h), and taking into account eq. (A8a), we obtain a pair of equations. Combining these equations, it is easy to verify that $\sigma = 1$ and $g_k = 0$. ∎

Note that the space $R^3$, according to (A8c), is homogeneous and isotropic. To it correspond the homogeneous component $\langle \rho_r \rangle$ of the energy density of the radiation phase and its kinematical functions: three isotropic ones (A8f, g, h) and one homogeneous one (A8i). The physical meaning of the lemma is therefore that conditions (A7) are sufficient and necessary for the derivation of a simple solution of equations (A5) that agrees with the isotropic and homogeneous component of its right-hand side. Now we can formulate the following theorem.
Primary renal cell carcinoma in crossed fused ectopia: Nephron-sparing surgery for a "rare of rarity" entity

Primary renal cell carcinoma (RCC) in crossed fused renal ectopia represents a "rare of rarity" entity. Only eight cases have been reported in the literature, comprising seven RCCs and one transitional cell carcinoma. This report presents the case of a 39-year-old female with an incidentally discovered renal mass in a crossed fused ectopia. Careful preoperative planning and meticulous delineation of the renal vasculature were performed to avoid unpredicted anatomy. Nephron-sparing surgery with preservation of the normally functioning moiety was performed, with an uneventful postoperative course. The clinical, morphological and immunohistochemical features are presented together with a review of the current literature.

Introduction

Crossed fused renal ectopia is a markedly rare congenital anomaly in which one kidney crosses the midline and is located on the other side, mostly fused with inferior ectopia. The association of renal malignancy with crossed ectopia is extremely rare as well. 1 The exact incidence of crossed fused renal ectopia is not known, as most patients are asymptomatic. An estimated prevalence of 1:2,000 live births has been reported in autopsy series, with females less frequently affected. Furthermore, in most cases it is the left kidney that crosses and fuses with the right kidney, between the inferior pole of the orthotopic kidney and the superior pole of the ectopic kidney. 2 Surgery in these patients may be challenging because of the atypical vasculature of both moieties. In this report, a patient with left-to-right crossed ectopia harboring a renal tumor is presented, with a review of the current relevant literature.

Case presentation

A 39-year-old non-smoking female was referred to our tertiary care center after being found incidentally to have a right renal mass in a crossed fused renal ectopia. Her medical and surgical history was unremarkable apart from a long-standing umbilical hernia, present since 2005. There were no constitutional symptoms and no family history of malignancy or of similar congenital anomalies. Physical examination was unremarkable, and routine laboratory investigations were within the normal range as well. Serum creatinine was 0.76 mg/dl, and the hemoglobin level was 10.3 g/dl. Contrast-enhanced CT of the abdomen and pelvis showed evidence of crossed fused renal ectopia on the right side, with a single, separate renal pedicle and collecting system for each kidney, as confirmed by CT angiography (Fig. 1). A well-defined hypervascular mass originating from the lower pole of the left kidney was observed, measuring 4.6 × 4 × 4.7 cm, with no hydronephrosis. There was no evidence of tumor thrombus in the venous drainage system, lymphadenopathy, or radiological signs of distant metastasis. Prior to surgery, flexible cystoscopy was performed to check the urinary bladder and ureteral orifices; no abnormalities were detected, and bilateral ureteral stents were placed. The patient underwent transperitoneal partial nephrectomy with preservation of the ureter and renal pelvis of the affected kidney in October 2018 (Fig. 2A). Postoperative urinary leakage continued for 10 days, after which it started to decrease gradually and then stopped completely. The tube drain was removed after two weeks, when a contrast-enhanced CT scan demonstrated normal function of the remaining part of the affected kidney (Fig. 2B).
Otherwise, convalescence was uneventful, and the patient was discharged home in good general condition, with stable vital signs and normal kidney function. Serum creatinine at discharge was 0.76 mg/dl, and the hemoglobin level was 9.7 g/dl. Histopathology revealed a unifocal chromophobe RCC: a 6 × 4 × 3.5 cm lower-pole mass, yellowish in color, friable, with hemorrhagic areas, of pathologic stage pT1b, pNx, pMx, with no identified sarcomatoid or rhabdoid features (Fig. 3A-C). Immunohistochemistry showed strong diffuse positivity for CK7 (Fig. 3D) and weak multifocal positivity for CD10; CD34 did not highlight any venous invasion, while both C-kit and Vimentin were negative. Follow-up after 6 months showed normal kidney function and a normal complete blood count. A contrast-enhanced CT study revealed no local recurrence or distant metastasis.

Discussion

How these kidneys are drawn to the opposite side of the body has never been satisfactorily explained. One theory links this entity to abnormal development of the ureteric bud and the metanephric blastema during early gestational age: both kidneys fuse into a single mass, giving rise to two separate and distinct ureters with normally located ureteral orifices in the urinary bladder. 3 Most of these patients develop complications such as hydronephrosis, nephrolithiasis, infection and, rarely, malignancy. 3 RCC is the tumor most frequently associated with fusion anomalies. Meanwhile, the prevalence of malignancy in kidneys with congenital anomalies is comparable to that in normal kidneys, with similar prognostic parameters. 4 Only eight cases of carcinoma in crossed fused renal ectopia had been reported in the literature up to 2017, since the first case was presented in 1942. A variety of surgical approaches have been used, with transperitoneal nephrectomy being the standard of care. The choice between complete nephrectomy of the renal moieties and excision of the mass with preservation of the normally functioning moiety has depended on the clinical presentation and associated pathology. Four decades ago, Gerber and associates reported spread of malignancy to the normal moiety; the authors excised the tumor within the affected kidney with subsequent auto-transplantation of the residual kidney. 1 Only one case of renal malignancy in crossed fused ectopia has been managed with a laparoscopic approach. 3 Intraoperative ultrasound was performed to determine the extent of the tumor, which was excised and removed through a lower midline incision, with an uneventful postoperative course. 5 In the present index case, several factors favored nephron-sparing surgery, including the young patient age, the small tumor size, the location of the mass in a favorable site, and the availability of computed angiography, which is necessary before intervention.

Conclusion

RCC in crossed fused renal ectopia represents a "rare of rarity" entity. Nephron-sparing surgery with preservation of the normally functioning moiety seems to be an excellent option in young patients with a localized or small mass. However, careful preoperative planning and meticulous delineation of the renal vasculature are mandatory prior to surgery, to preserve the uninvolved renal unit and to avoid unpredicted anatomy.

CONSENT FORM: Written consent was obtained from the patient for publication of this case report and accompanying images.

Conflicts of interest

No potential conflicts of interest were disclosed.
Evaluation of the ultrasound test for estimating the depth of cracks in concrete

The objective of this study is to evaluate the ultrasound test for estimating the depth of cracks in concrete, using a mathematical model published in the literature, and to check the estimated depths against known values. Four concrete test specimens were molded for each proposed crack depth (5 cm, 10 cm, and 15 cm), the cracks being simulated with zinc plates placed during molding and removed before the concrete hardened. The results show that the test is sensitive enough to detect the presence of cracks in the concrete. The mathematical model used allowed the depths of most cracks to be estimated, but the results are scattered and have a high margin of error for the depths of 5 cm and 15 cm. The 10-cm-deep cracks produced better results.

INTRODUCTION

Cracks are the most common pathological manifestations found in concrete structures, usually appearing as a result of tensile stresses, which concrete has difficulty absorbing. Among the types of cracks that occur are those caused by thermal phenomena or by shrinkage (which are not structural hazards but may compromise sealing and performance), and those due to the inability of the structure to absorb tensile stresses, caused either by underestimation of the forces during sizing or by a decrease in material strength, the latter being of concern according to Silva Filho and Helene (2011). The timely detection of these defects can prevent rapid deterioration and prolong the useful life of the structures (Aggelis et al., 2010). The evaluation of structures is usually performed through visual inspection, the results of which can be subjective because they depend on the experience of the inspector (Rocha and Póvoas, 2017). However, there are several non-destructive tests (NDTs) that allow important information about concrete properties to be extracted (Rehman et al., 2016); they are usually used to locate and evaluate defects in hardened concrete (Lorenzi et al., 2016). Lee, Chai, and Lim (2016) consider that the available methods for evaluating concrete cracks all have their own limitations. The most commonly used NDT techniques for the inspection of concrete structures are ultrasound (Aggelis et al., 2010), thermography (Bagathiappan et al., 2013), pachymetry (Maran et al., 2015), radar (Dabous et al., 2017), and sclerometry (Tomazali and Helene, 2017). The ultrasound test can determine the modulus of elasticity and specific mass of the concrete (Pacheco et al., 2014), estimate the compressive strength with reasonably good approximation (Bungey, Millard, and Grantham, 2006), and locate and determine the size of discontinuities in the structure (Menezes et al., 2016). Several studies have been undertaken to detect cracks and fissures in concrete using the ultrasound test (Aggelis et al., 2010; Wolf, Pirskawetz, and Zang, 2015), and others to estimate their depth (Bungey, Millard, and Grantham, 2006; Pinto et al., 2010; Souza, 2016). The study by In et al. (2017) used the diffuse ultrasound technique to estimate the depth of cracks in concrete pieces that simulated real beams, performing a two-dimensional finite element simulation; it concluded that it is possible to estimate the depth of cracks with deviations of 1 cm in relation to the real central measurement. Seher et al.
(2013) also used diffuse ultrasound, combined with two-dimensional finite element simulations, analyzing the wave parameters to detect the variations between cracked and uncracked elements; they concluded that it is possible to estimate the depth of cracks with a maximum error of 10%. All of these studies demonstrated that the results are influenced by several factors, such as crack depth, concrete quality, and material saturation, among others. The objective of the present article is to evaluate the ability of the ultrasound method to estimate the depth of cracks in concrete structures and, specifically, the influence of the depth of the cracks on the results, by analyzing the variation of the travel time of the sound wave through cracked and uncracked areas.

ULTRASOUND TEST PROCEDURE

The ultrasound equipment used for concrete is designed to generate longitudinal waves, also known as sound waves (Bungey, Millard, and Grantham, 2006). Waves whose frequency falls within the range of 20 Hz to 20,000 Hz are audible to the human ear, whereas waves below 20 Hz are called infrasonic and those above 20,000 Hz are known as ultrasonic (Possani et al., 2017). The results obtained from the test may be affected by several factors, such as the distance between the contact surfaces of the transducers; the presence of reinforcement, especially if aligned in the direction of wave propagation; the specific mass of the concrete, which depends on the concrete mixture and its condition; the type, specific mass and other characteristics of the aggregate; the type of cement and degree of hydration; the type of densification; and the age of the concrete (Pacheco et al., 2014; Lorenzi et al., 2013; Mohamad et al., 2015). There are several advantages to using ultrasonic tests on concrete structures: the tests are non-destructive, the equipment is cheap and easy to operate, and the test can be applied at any time, as it does not contribute to the deterioration of the structure. However, the test does have limitations, because the interpretation of its results is merely qualitative with respect to the quality of the concrete; it is therefore necessary to use it in conjunction with other tests in order to obtain more conclusive results (Aggelis et al., 2010). Ultrasound tests in Brazil are regulated by NBR 8802 - Hardened concrete - Determination of the propagation of ultrasonic waves (ABNT, 2013). According to this standard, there are three ways for waves to be transmitted along the surface of the concrete: direct, semidirect, and indirect, as shown in Figure 1.

METHODOLOGY

To achieve the objectives of this study, concrete blocks were molded to represent real structural elements, in which cracks were induced so that their depths could be estimated using the mathematical model proposed by Bungey, Millard, and Grantham (2006). A total of 12 concrete blocks were molded, four for each of the three proposed crack depths (5 cm, 10 cm, and 15 cm). Four distances between the transducers were considered when performing the test (10 cm, 20 cm, 30 cm, and 40 cm). To facilitate the analysis of the results, the blocks were divided into three groups (series) according to crack depth: Series I, blocks with 5-cm-deep cracks; Series II, blocks with 10-cm-deep cracks; and Series III, blocks with 15-cm-deep cracks. The equipment used was the 58-E4800 UPV, with a standard frequency of 54 kHz, using 50-mm-diameter transducers (CONTROLS GROUP, 2017).

Test specimens

The concrete blocks had dimensions of 20 × 20 × 50 cm.
The crack was induced along the axis of the block by placing a 0.95-mm-thick zinc plate during molding, which was then removed before the concrete hardened. All of the cracks were produced with the same width, because the literature indicates that crack width does not influence the results. Figure 2 shows the details of these test specimens. The water/cement ratio used was 0.5, and the mix proportion (cement:gravel:sand) was 1:1.46:2.51. The cement used was CPII Z-32. The gravel and sand were tested according to standard NBR 7211 (ABNT, 2009); the granulometric distribution met the recommended limits, and the maximum diameter of the gravel was 19 mm. For the number of blocks used in the study to be statistically representative for the analysis of results, the observations of the independent variables should be in a proportion greater than 5 to 1, that is, more than 5 observations for each independent variable; the recommended level is between 15 and 20 observations per variable, so that the sample can be considered representative (Hair et al., 2009). This study analyzed two independent variables, crack depth and test execution distance, which, multiplied by 20, gives an ideal quantity of 40 observations. In total, 96 observations were performed (4 blocks × 3 depths × 4 distances × 2 repetitions), a value well above the recommended amount.

Mathematical model for estimating crack depth proposed by Bungey, Millard, and Grantham (2006)

The model allows the depth of a crack perpendicular to the concrete surface to be estimated when the mode of transmission is indirect. The transducers should therefore be placed equidistant from the crack, as shown in Figure 3. In order to apply this model, the velocity of the ultrasonic wave through intact concrete, obtained using the indirect mode, should be adopted as Vc: that is, a speed Vc is found in a region of the concrete without cracks, with a distance Y = 2X between the transducers. Considering that the wave will travel around the crack and that the velocity should remain the same, because it is propagating through similar material, it is possible to estimate the depth of a crack whose axis is located at distance X from each transducer, as shown in Figure 3. The longer path around the crack results in a longer propagation time, because the speed Vc is the same. The model assumes that the velocity is equal for the two paths and that the wave travels around the crack, since a mechanical wave requires a continuous medium through which to propagate. Equation (1) represents the proposed mathematical model, obtained by equating the velocities along the two wave paths:

h = x · √((Tf / Tc)² − 1)    (1)

where: h = crack depth estimated by the model (cm); x = distance from the transducer to the axis of the crack (cm); Tc = wave propagation time through intact concrete, defined as in (2):

Tc = 2x / Vc    (2)

and Tf = wave propagation time around the crack, defined as in (3):

Tf = 2√(x² + h²) / Vc    (3)

Test execution

The ultrasound test was performed using the indirect mode, avoiding roughness on the tested surface as indicated by NM-58 (ABNT, 1996). The calibration of the equipment was performed before beginning the measurements, according to the procedure described in the manual (CONTROLS GROUP, 2017). An observation grid was marked on the test surface, composed of an upper and a lower line, as detailed in Figure 4. At all points where measurements were to be taken, Vaseline was applied to couple the transducer to the surface.
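As a minimal numerical sketch of equations (1)-(3): the depth follows from h = x·√((Tf/Tc)² − 1), with Tc′ at a given spacing taken from a straight-line fit of time versus distance on intact concrete, as the procedure described next specifies. The numbers below are illustrative, not measurements from the paper.

```python
# Sketch of the Bungey/Millard/Grantham crack-depth estimate, eqs. (1)-(3).
# Illustrative values only; not data from the experiment.
import numpy as np

def crack_depth(x_cm, tf_us, tc_us):
    """Depth h (cm) from the times around the crack (Tf) and through intact
    concrete (Tc), with each transducer at distance x from the crack axis."""
    ratio = tf_us / tc_us
    if ratio <= 1.0:
        # Tf <= Tc: the model cannot be applied (cf. the 11.5% of such cases).
        return float("nan")
    return x_cm * np.sqrt(ratio**2 - 1.0)

# Tc' for any spacing Y comes from a linear fit of the three indirect
# readings on intact concrete (Y = 5, 10, 15 cm).
y_cm = np.array([5.0, 10.0, 15.0])
t_us = np.array([18.0, 31.0, 44.0])        # illustrative times (microseconds)
slope, intercept = np.polyfit(y_cm, t_us, 1)
tc_20 = slope * 20.0 + intercept           # Tc' at Y = 20 cm

# With Y = 2X = 20 cm, x = 10 cm; take an illustrative Tf 25% above Tc'.
print(crack_depth(x_cm=10.0, tf_us=1.25 * tc_20, tc_us=tc_20))  # ~7.5 cm
```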
In order to apply the Bungey, Millard, and Grantham (2006) model, it is necessary to obtain the propagation time of the wave in intact concrete using the indirect mode (Tc) for the same distances at which the time in the cracked region (Tf) is to be measured. To obtain Tc, the emitting transducer was fixed at the first point of the grid and the receiving transducer was moved in 5-cm steps, giving times for the distances Y = 5 cm, 10 cm, and 15 cm, according to Figure 5 (a), (b), and (c), where E is the emitter, R the receiver, and Y the distance between transducers (cm). The results of the three readings of distance (cm) versus time (μs) were plotted to obtain the best-fit line (Figure 6) and to find the propagation times through intact concrete, adjusted by the line (Tc′), for all the distances required to apply the model: Y = 10 cm, 20 cm, 30 cm, and 40 cm, as shown in Table 1, which presents the results of the first repetition for the first test specimen of Series II. The measurements are identified first by the number of the block in the series (1, 2, 3, or 4), then by the depth (P5 = 5 cm, P10 = 10 cm, P15 = 15 cm), followed by the test execution distance (D10 = 10 cm, D20 = 20 cm, D30 = 30 cm, D40 = 40 cm), and finally by a unique number for each repetition of the test (1 for the first and 2 for the second).

Table 1. Adjusted time Tc′ (μs), Series II, block 1 (columns: Identification, Tc (μs), Y (cm), Tc′ (μs)).

The adjusted times (Tc′) were found for the distances Y = 10 cm, 20 cm, 30 cm, and 40 cm, where Y = 2X, with X being the distance between the axis of the crack and each transducer. In order to measure the propagation time of the wave around the crack (Tf), readings were taken with distances between the transducers of Y = 10 cm, 20 cm, 30 cm, and 40 cm, as shown in Figure 7 (a), (b), (c), (d). Once the values of Tc′ and Tf have been obtained for the same distances, the depth of the crack can be estimated using the model proposed by Bungey, Millard, and Grantham (2006), following the procedure presented in the previous section. The procedure for obtaining the Tc′ and Tf values was repeated twice for each of the four concrete blocks per series, for all three series.

ANALYSIS AND DISCUSSION OF RESULTS

A statistical analysis was performed on the crack depths (h′) found for Series I, II, and III by applying the mathematical model proposed by Bungey, Millard, and Grantham (2006), in order to determine which series presented the most significant results. Table 2 shows that the mathematical model of Bungey, Millard, and Grantham (2006) could be applied to calculate the depth of cracks in 88.5% of the observations. For the remaining 11.5%, it was not possible to determine the depth, because the wave propagation time in the cracked region was less than the time in the uncracked region, making the model impossible to apply. An analysis of the results for each series using descriptive statistics is presented in Table 3. It can be seen from the results that the model presented significant variation for all three series. Series III (15 cm) presented the highest variance and standard deviation in comparison to the other series, showing a high level of dispersion in the data set. Series II (10 cm) had the best indices in the analysis of the dispersion of the data, presenting the smallest variance, standard deviation, and coefficient of variation.
Series I (5 cm) had an intermediate level of dispersion, but its coefficient of variation was the highest, as the standard deviation represented about 55% of the mean. Pinto et al. (2010) also studied the estimation of the depth of cracks in concrete blocks, analyzing four depths (50 mm, 75 mm, 100 mm and 150 mm) with test execution distances of 100 mm and 150 mm, applying the same mathematical model; their estimates are presented in Figure 8, where the tests are identified by the series, then the depth, and then the test specimen analyzed (for example, S1-75-B indicates Series 1, depth 75 mm, block B). The authors concluded that the results were mostly within a margin of error of 15% of the actual crack depth. A similar result occurred in this study, where the best results were found for crack depths of 10 cm. To aid understanding of the behavior of the results, boxplots of the three series are shown in Figure 9; the boxplot is a graphical statistical tool that represents the variation of a numerical variable through its quartiles. A boxplot is formed by drawing a box parallel to the axis of the variable: the lower edge represents the 1st quartile, the thick line the median (2nd quartile), and the upper edge the 3rd quartile, while the lines that extend vertically indicate the upper and lower limits of the data. The box represents the central 50% of the distribution, and the flatter the box, the less scattered the data. It can be seen that the data from Series II (10 cm) had the least variability, while the data from Series III (15 cm) had the greatest dispersion. It can be said that the data from Series II (10 cm) behaved best in the descriptive statistical analysis. To complement the analysis, inferential statistics were used, applying a confidence interval (CI) of 95%. This is a numeric interval around the mean that will contain 95% of the values, on average; the CI value represents, more or less, a margin of error in relation to the mean. For this study, a 20% margin of error was considered acceptable. Using the actual crack depth as a reference, this implies margins of error of 1 cm for Series I, 2 cm for Series II, and 3 cm for Series III. Table 4 shows the confidence intervals of the depth variable for the three series analyzed, and Figure 10 shows the graphs of the confidence intervals for each series with regard to depth. Corroborating the descriptive analysis, the data from Series III (15 cm) had a higher C.I. value than the other series, while Series II (10 cm) had the smallest C.I. A greater C.I. means that the margin of error that ensures 95% confidence increases, making the interval larger, as can be seen in Table 4. With too large an interval, meaning a high margin of error, the application of this procedure to real structures becomes impractical, as it leads to great variation in the estimate of the depth of the cracks. For Series III, the depth calculated by the model falls within the range of 13.14 cm to 17.57 cm, the actual depth being 15 cm. In Series II, which had the smallest C.I., the depth calculated by the model varies from 6.48 cm to 8.14 cm; although this is a small interval, the actual depth of 10 cm does not fall within it, a fact that compromises the application of the model.
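For reference, such a 95% interval around a series mean can be computed with a t-based formula. The paper does not state its exact CI procedure, so this sketch, with hypothetical depth estimates, is only one plausible reading of Table 4.

```python
# Sketch of a t-based 95% confidence interval for the mean estimated depth.
# Hypothetical h' values (cm); the paper's CI formula is an assumption here.
import numpy as np
from scipy import stats

depths = np.array([9.1, 10.4, 8.7, 11.2, 9.8, 10.9])
mean = depths.mean()
sem = stats.sem(depths)                      # standard error of the mean
low, high = stats.t.interval(0.95, df=len(depths) - 1, loc=mean, scale=sem)
print(f"mean = {mean:.2f} cm, 95% CI = [{low:.2f}, {high:.2f}] cm")
```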
For Series I, which had an intermediate C.I., with crack depths calculated by the model ranging from 5.51 cm to 8.40 cm, the actual depth of 5 cm also lies outside the range. It can also be seen from the results that, for the larger distances between the transducers (30 and 40 cm), there is greater variation in the data, whereas the smaller test execution distances (10 and 20 cm) showed less dispersion and a smaller margin of error. It is possible to apply the proposed model and determine the depths of cracks, but with a high degree of dispersion in the results. This high variability is mainly due to the inhomogeneity of the concrete, in which the wave propagation velocity can vary; it is also possible that the actual propagation path of the wave differs from the ideal path assumed by the model.

FINAL CONSIDERATIONS

In the current study, an experiment was carried out to statistically evaluate the efficiency of the ultrasound method for estimating crack depth. The test provides clear information for crack detection in concrete, since the wave propagation time across a crack is considerably longer than in areas of intact concrete. The estimation of crack depth using the model proposed in the literature made it possible to find values for a large percentage of the observations. However, the values are widely dispersed and have a high margin of error, compromising the results and the applicability of the model in the field. Combining this test with other non-destructive tests may provide a better characterization of these defects and more information, thereby eliminating some of the uncertainties presented by the ultrasound method alone.
CEO and Chairperson Characteristics and Corporate Environmental Performance: A Study of Cooperatives in Vietnam

This research aims to examine the influence of chairperson/CEO demographic characteristics on the environmental performance of Vietnamese cooperatives, based on Upper Echelon Theory. To measure environmental performance, this study uses energy (electricity) consumption levels to test the hypotheses. A sample of 1,508 cooperatives (from 2014 to 2016) was used to estimate OLS regression models, controlling for year and industry fixed effects. This study shows that the relationship between a chairperson's educational level and electricity consumption is negative and significant (i.e., higher education is associated with lower energy consumption). A similar finding holds for CEO duality, which also shows a negative association with electricity use. In addition, no significant association was found between chairperson/CEO gender and energy consumption, whereas the relationship with chairperson age is positive. Drawing on Upper Echelon Theory, the current research provides novel insights into the relationships between chairpersons'/CEOs' characteristics and cooperatives' environmental performance. As a practical implication, since cooperatives are a relatively common type of business in the rural areas of Vietnam and environmental protection is essential, it is important for cooperatives to choose suitable chairpersons/CEOs based on their demographic characteristics. JEL: G30, K32, M14

Introduction

Global warming and climate change are issues that challenge many countries in the world, including Vietnam. Vietnam is a developing country that has experienced both rapid economic growth and increasing environmental pollution due to high energy consumption (Tang & Tan, 2015). Energy consumption is considered one of the reasons for increasing CO2 emissions (Acaravci & Ozturk, 2010; Apergis & Payne, 2009; Qader et al., 2022; Tang & Tan, 2015). Reducing energy consumption therefore contributes to reducing CO2 emissions into the environment. Moreover, energy saving is one of the main goals of sustainable development in many businesses. The study of Hori et al. (2014) found a positive relationship between environmental perception and energy-saving actions, and lower energy consumption also means better corporate environmental performance. This study investigates one of the indicators of positive environmental performance: energy-saving actions. In Vietnam, the assessment of environmental performance is quite challenging because of the scarcity of published corporate social responsibility (CSR) reports and of CSR assessment organisations; energy consumption level is thus a useful measure of environmental performance. Because environmental issues have risen in importance recently, scholars have paid more attention to corporate environmental actions (Lewis et al., 2014). Increasing stakeholder expectations regarding corporate environmental responsibility make firms change their strategies. In this situation, firms have to concentrate not only on economic growth but also on sustainable development.
Firms regarded by stakeholders as environmentally responsible can thus benefit from lower employee turnover, lower operating costs (fuel, energy and water costs), a reduced risk of legal sanctions relating to the environment, better access to resources and more market opportunities (de Villiers et al., 2011). In addition, corporate environmental responsibility is a factor influencing customers' purchasing decisions (Bhattacharya & Sen, 2004). From this, firms receive sustainable benefits (McWilliams & Siegel, 2011), including beneficial influences on firm performance (Kao et al., 2018) and positive shared value (McWilliams & Siegel, 2011; Porter & Kramer, 2006). Previous studies suggest that CEOs/chairpersons have the ultimate impact on their firms' policies and are under pressure from governments, regulators, investors, consumers and other stakeholders to make decisions that are favourable to the environment, such as improving corporate energy efficiency and making greener decisions (Amore et al., 2019; Elsayih et al., 2021). According to Upper Echelon Theory, there is a significant relationship between CEOs' characteristics and firms' strategies (Hambrick & Mason, 1984). The demographic characteristics of firms' executives are associated with high or low levels of social and environmental performance (Altunbaş et al., 2022; Slater & Dixon-Fowler, 2009) and lead to different social and environmental strategies (Cannella et al., 2008). For example, CEO characteristics such as age, work experience and educational level have been found to be related to firms' environmental performance, as have CEOs' incentive plans (McGuire et al., 2003). Sun et al. (2021) also find empirical evidence that CEOs with high education levels positively impact environmental and social performance. A large number of papers have examined the relationship between senior executives'/chairpersons' characteristics and environmental performance (Kanashiro & Rivera, 2019; Kang, 2017; Le et al., 2015), environmental disclosure (Velte, 2019) and environmental strategy (Mazutis, 2013). However, few studies examine this relationship in the context of Vietnam, especially with respect to corporate environmental performance, because of the lack of a database on firms' environmental performance. To mitigate this limitation, this study uses energy consumption/energy efficiency as an indicator of corporate environmental performance, and electricity consumption was chosen as a reliable indicator of a firm's overall energy consumption (Amore et al., 2019). Hence, the goal of this paper is to investigate the relationship between CEO/chairperson characteristics and cooperatives' environmentally responsible behaviour: energy consumption. Cooperatives are a popular and encouraged economic organisational model in Vietnam, especially in rural areas. According to the Law on Cooperatives 2012, cooperatives are created when there are at least seven members, and are based on self-determination, self-responsibility, equality and democracy. Although cooperatives are not the main economic sector in Vietnam, they play an important role in creating jobs and ensuring the living standards of many workers, contributing to socio-political stability and promoting socio-economic growth. There are three important actors in cooperatives: the board of directors, the chairperson and the manager.
The board of directors consists of the members and the chairperson, who is the legal representative of the cooperative and responsible for planning the activities of the board and assigning tasks to members. In addition, some cooperatives hire an outside professional manager/CEO for administrative work. The manager is responsible for organising the implementation of production and business plans; carrying out the resolutions and decisions of the board of directors; signing contracts as authorised by the chairperson of the board; reporting annual financial statements to the board; developing the departmental organisation plan; and recruiting employees according to the decisions of the board of directors. However, because the role of cooperatives is to benefit their members, the managerial discretion of outside managers/CEOs is limited and overshadowed by the board of directors and the chairperson, who also takes responsibility for management. In summary, the CEO and the chairperson are the most important actors in environment-related decision making. In this study, I collected data from the General Statistics Office of Vietnam (GSO), including the energy consumption and chairperson/CEO characteristics of 1,508 cooperatives from 2014 to 2016. The results show that chairperson/CEO characteristics affect cooperatives' energy consumption. To be specific, chairperson age is positively associated with the energy consumption of cooperatives. By contrast, chairperson education and duality of roles have a negative influence on energy use. In addition, no significant relationship was found between CEO/chairperson gender and energy consumption. The study has several theoretical and practical implications. Regarding the theoretical contribution, this work supports Upper Echelon Theory, because it indicates that chairperson characteristics have an impact on cooperatives' environmental performance. As the theory argues that leaders' demographic characteristics can influence corporate performance, the findings of this study provide support for this suggestion. This paper also applies other theories to explain the direction of the impact of these characteristics on environmental performance. In terms of practical contributions, the study identifies chairperson/CEO characteristics that critically affect the environmentally responsible behaviour of cooperatives in Vietnam. The results show that cooperatives can reduce energy use with a chairperson who has suitable characteristics, such as being highly educated and younger. Moreover, to promote energy efficiency, cooperatives should give priority to having the chairperson also hold the position of chief executive officer (duality) instead of hiring an outside manager. In addition, cooperatives can develop appropriate policies for the economical and efficient use of electricity, avoiding the waste of natural resources and protecting the environment, especially in rural areas with low education levels and difficult access to information.

Theoretical Foundation: Upper Echelon Theory

Performance or growth can be affected by many factors, and the characteristics of the top management team are crucial among them. However, it is not easy to evaluate the impact of a CEO's background (his or her cognitive, social and psychological characteristics) on a firm's outcomes. Hambrick and Mason (1984) developed a management theory called 'Upper Echelon Theory' to address this issue.
The idea is that each top manager has their own perceived base of values and observes business operations through a personalised lens (Finkelstein et al., 2009). The theory explains the correlation between organisational performance and the basic characteristics of managers and shows that organisational performance is partly predictable from the demographic characteristics of the top executive (Nishii et al., 2007). The insights these demographic characteristics offer into strategic situations arise from differences among executives in experience, values, personality and other human factors. In addition, the theory argues that the more complex a decision is, the more important the individual characteristics of the decision maker become, such as age, tenure and expertise. Nielsen (2009) also found that top managers' characteristics, such as age or experience, directly affect firms' strategic choices and organisational performance. Upper Echelon Theory has been applied in much academic research: managers' academic level impacts environmental disclosure (Lewis et al., 2014), environmental performance (Tran & Pham, 2020), internationalisation (Ramón-Llorens et al., 2017) and financial outcomes (King et al., 2016). R&D activities are also affected by the characteristics of managers: companies managed by younger managers with STEM backgrounds tend to spend more on R&D (Barker & Mueller, 2002). Other previous research has also demonstrated that managers' characteristics influence firms' other activities. For example, managers' characteristics are a significant predictor of internal control quality (Lin et al., 2014), organisational culture (Giberson et al., 2009), the internationalisation process (Hsu et al., 2013), the adoption of IT (Hameed et al., 2012), firms' innovation activities (Lin et al., 2011) and firms' long-term financial outcomes (Muller & Kräussl, 2011; Wang & Qian, 2011). With respect to social and environmental activities, Manner (2010) concludes that female executives and executives with a bachelor's background in the humanities are positively associated with environmental performance. In this study, because chairpersons take on most of the leadership roles in cooperatives, it is appropriate to apply Upper Echelon Theory to the characteristics of both chairpersons and managers.

Hypotheses

CEO/chairperson age. Previous studies have shown that age can be seen as an essential demographic factor affecting CEOs' behaviour in firms' decisions and strategies (Suárez-Rico et al., 2018). Similarly, a manager's age is a factor that affects their attitude towards social and environmental issues (Fabrizi et al., 2014). Some studies have suggested that young CEOs may be less willing to make long-term investments such as environmentally oriented activities. For instance, Fabrizi et al. (2014) found that firms managed by a 60-year-old CEO have a 13% higher social and environmental performance level than firms with a 50-year-old CEO. The difference arises because young executives concentrate on promoting short-term performance such as financial outcomes. When young CEOs pursue the goal of profit maximisation, corporate environmental responsibility issues are of less concern to them (Shahab et al., 2020). Compared with the young, older people show significantly greater concern about firms' environmental issues and have a positive impact on environmental performance (Forte, 2004; Kollmuss & Agyeman, 2002).
Being early in their careers, younger CEOs are pressed to deliver positive short-term financial results to the market and thus to neglect social/environmental activities (Fabrizi et al., 2014). Moreover, older executives have a stronger orientation towards supporting the development of local communities (McCuddy & Cavin, 2009) and a better understanding of the role of diversity management practices in business than young ones (Ng & Sears, 2012). In addition, although many managers are willing to improve environmental performance, they lack the expertise to do so; a higher level of management skill, experience and knowledge (the intellectual capabilities of the CEO) can also be an advantage of older executives. Building on these arguments, the following hypothesis is proposed:

Hypothesis 1: There is a negative correlation between CEO/chairperson age and energy consumption.

CEO/chairperson gender. Some previous studies have tried to investigate the impact of CEO gender on the decision whether or not to undertake social and environmentally oriented practices (Galletta et al., 2022; Manner, 2010). Based on Upper Echelon Theory, scholars have argued that there is an association between CEOs' gender and firms' social and environmental practices. Using a sample of 650 US firms, Manner (2010) concluded that female executives are positively associated with environmental performance. Similarly, the study of Glass et al. (2016) finds that female CEOs have a more positive impact on a firm's CSR than male CEOs. In general, most studies have found a positive relationship between female leaders and businesses' environmental/social performance (Cook & Glass, 2018; Kassinis et al., 2016). There are several reasons for this. Theories of gender difference suggest that men and women follow different behaviour types because socialisation differentially encourages and rewards them (Glass et al., 2016). Socialisation leads women to care and show concern for others, whereas men tend to be more autonomous and individualistic (Gilligan, 1982). Women are more stakeholder-oriented, focusing on the interests of various stakeholders, including customers, employees, suppliers and communities (Harrison & Coombs, 2012; Matsa & Miller, 2013). Thus, previous studies have argued that female CEOs are generally more concerned with social and environmental problems than their male colleagues (Glass et al., 2016). Female managers are more willing than male colleagues to explore approaches to decreasing corporate environmental pollution (Fukukawa et al., 2007). Other studies also demonstrate that female managers pay more attention to problems related to ethics and social responsibility (Eagly & Johannesen-Schmidt, 2001) and are more environmentally oriented than men (Jiang & Akbar, 2018). In addition, while male managers tend to focus on their self-interest as their main goal in business, their female colleagues take social and environmental issues into account (Jiang & Akbar, 2018). In summary, it can be suggested that CEO gender is a relevant factor in explaining the relationship between CEO characteristics and environmental performance. Consequently, the following hypothesis is proposed:

Hypothesis 2: A female CEO/chairperson is associated with a lower level of energy consumption than male counterparts.

CEO/chairperson education. Regarding the CEO/chairperson's level of education, several notable studies have focused on this topic.
Most of these studies have described the influence of CEOs' educational background on firms' social and environmental performance (Çera et al., 2022; Manner, 2010; Slater & Dixon-Fowler, 2009). Studies have shown that CEOs' education level is a significant predictor of activities relating to firms' environmentally responsible behaviour (Shahab et al., 2020; Slater & Dixon-Fowler, 2010). Specifically, there is a positive correlation between CEO/chairperson education level and environmental awareness: highly educated CEOs tend to be more concerned about climate change (Amore et al., 2019). Bhagat et al. (2012) suggested that educational level is one of the criteria reflecting executives' knowledge and technical skills, which can affect environmentally responsible behaviour. Better-educated CEOs are better able to identify and pursue energy-saving campaigns for lower utilisation of energy inputs because of their better managerial skills (Amore et al., 2019). Furthermore, more highly educated individuals tend to behave responsibly towards society and the environment (Meyer, 2016), and highly educated CEOs are concerned with the benefit of both shareholders and the environment (Amore et al., 2019). Nowadays, education about social and environmental issues has been integrated into university curricula around the globe to support social development and promote students' awareness of environmental issues (Matten & Moon, 2008). Although there is no conclusive evidence that going to college changes student behaviour, studies indicate that environmental lessons increase students' awareness of environmental problems (Thomas, 2005). Therefore, the education level of CEOs may be associated with firms' environmental performance. Taking into account the arguments mentioned above, the following hypothesis is proposed:

Hypothesis 3: CEOs/chairpersons with higher education levels are associated with lower energy consumption.

CEO duality. Although prior studies have concentrated on the relationship between CEO characteristics and social and environmental performance, they have mostly overlooked CEO duality. CEO duality occurs when the CEO of the company also holds the chairman's position on the board of directors. The simultaneous holding of both positions enhances the power and control of a single individual, increasing conflicts of interest (Li et al., 2010). Previous studies have suggested that the dominant power of the CEO on the board (duality) leads to a lack of social and environmental attention in the firms they manage (Mallin & Michelon, 2011). García Martín and Herrero (2020) also supported the separation of the executive and chairman positions to reduce conflict between them, facilitating the promotion of firms' social and environmental investment. First, a dual role makes CEO-chairpersons tend to concentrate on their self-interest rather than the benefit of the firm's stakeholders, including society and the environment (Simpson & Gleason, 1999). Second, according to Agency Theory, a CEO's powerful domination of an organisation tends to decrease the effectiveness and efficiency of the board's monitoring ability (Oh et al., 2016). Lim et al. (2008) also argued that CEO duality is a source of conflict in governance mechanisms, especially with audit committees and non-executive/outside directors.
Outside directors are important in encouraging firms to be involved in social and environmental responsibility because of their broader experience and stakeholder orientation (Oh et al., 2016). However, the strong power of a dual CEO on the board can distort and limit the ability and benefits that outside directors can bring to the CSR of firms (Oh et al., 2016). Moreover, the combination of the CEO and chairman positions is associated with lower corporate transparency and social/environmental performance (Li et al., 2010). In summary, the following hypothesis is proposed:

Hypothesis 4: There is a positive correlation between CEO duality and cooperatives' energy consumption.

Data Collection

The sample was taken from the GSO (General Statistics Office of Vietnam) database of annual surveys. The subjects of the investigation are corporations, state corporations and independent economic accounting enterprises, cooperatives, cooperative unions and credit funds, established and regulated by the Law on Enterprises. The survey was carried out nationwide. Data were collected through direct interviews and electronic questionnaires. The survey information includes the company's name, type of business, labour and income of employees, and business performance: assets, capital, financial performance, corporate tax and investment. The sample consists of the cooperatives in this database, followed for 3 years from 2014 to 2016 (because of the limitations of the database, this time period could not be extended). The study sample is strongly balanced (data were available for all cooperatives for the whole research period), with 4,524 firm-year observations (1,508 cooperatives) from 69 industries. Appendix A shows the composition of the sample. Frequency analysis indicates that the farming service activities industry takes first position with 3,296 cooperatives, accounting for about 72.86%. The farming, mixed livestock industry occupies second place with 598 cooperatives, accounting for about 13.22%. This statistical result reflects the context in Vietnam, where most cooperatives operate in the field of farming and livestock.

Regression Model

This research uses the OLS regression model, controlling for year and industry fixed effects. In addition, a 1-year time lag is applied to all explanatory and control variables to reduce the endogeneity problem between CEO/chairperson demographic characteristics and electricity consumption and the potential for reverse causality (Abdullah et al., 2016; Shamir, 2011).

Dependent variables. Energy consumption is measured as the logarithm of a firm's electricity use divided by the number of employees (KWHEMP). To improve the robustness of the results, three other measures of energy consumption are used: the logarithm of a firm's electricity use divided by gross profit (KWHG), fixed assets (KWHFA) and total assets (KWHA). This measurement approach follows Amore et al. (2019).

Explanatory variables. The variables CHAIRAGE and CEOAGE represent the age (in years) of the chairperson and the CEO, respectively. Two dummy variables were created to identify the gender of the chairperson and the CEO. These variables take the value of 1 if the CEO or chairperson is a woman, and 0 otherwise.
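To make the estimation strategy just described concrete, here is a minimal sketch of the pooled OLS with year and industry fixed effects and one-year-lagged regressors. This is an illustration, not the author's code: all column names (firm_id, year, industry, kwh, emp, and the CEO/chairperson variables) are assumptions, and the specification is simplified to a single one of the reported models.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_energy_model(df: pd.DataFrame):
    """Pooled OLS of log(electricity per employee) on 1-year-lagged
    chairperson characteristics, with year and industry dummies."""
    df = df.sort_values(["firm_id", "year"]).copy()

    # Dependent variable: KWHEMP = log(electricity / employees).
    df["kwhemp"] = np.log(df["kwh"] / df["emp"])

    # Lag explanatory and control variables within each cooperative
    # to mitigate reverse causality, as described in the text.
    lagged = ["chair_age", "chair_female", "chair_edu", "duality",
              "asset", "emp", "femp_rate", "roa", "rev"]
    for col in lagged:
        df[col + "_l1"] = df.groupby("firm_id")[col].shift(1)
    df = df.dropna(subset=[c + "_l1" for c in lagged])

    # Year and industry fixed effects enter as dummy variables.
    rhs = " + ".join(c + "_l1" for c in lagged)
    return smf.ols(f"kwhemp ~ {rhs} + C(year) + C(industry)", data=df).fit()
```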
Categorical variables were used to identify the chairperson's/CEO's education level, with a higher number meaning a higher education level: 1 - untrained, 2 - trained but no degree, 3 - vocational elementary degree, 4 - vocational intermediate degree, 5 - vocational college degree, 6 - bachelor's degree, 7 - post-graduate degree. CEO duality is the situation in which the CEO also holds the position of chairperson in the cooperative. A dummy variable was used for CEO duality, which takes the value of 1 if the CEO is also the chairperson and 0 if not.

Control variables. A set of control variables was chosen based on previous studies (Amore et al., 2019; Fan et al., 2017). The control variables are ASSET (the firm's total assets), EMP (the firm's total labour force), FEEMP (female employee rate), ROA (return on assets) and REV (revenue). They are firm characteristics and should be included in the model. All variables used are shown in Table 1.

Descriptive Statistics

The descriptive statistics of the sample are presented in Table 2. As shown in Table 2, electricity usage is measured in 1,000 kWh used by the cooperatives' activities annually. The variable KWH has a mean of 42.3262, a minimum of 0.05 and a maximum of 8,000. The four dependent variables KWHEMP, KWHG, KWHFA and KWHA are measured as the natural logarithm of electricity consumption divided by the total labour force, gross profit, total fixed assets and total assets, respectively. The means of these variables lie between 20.6415 and 24.9221. Chairpersons and CEOs have a mean age of 52 years. Only 4.08% of the CEOs and 3.79% of the chairpersons are female. Regarding CEOs' and chairpersons' education level, the average level is between 3 and 4, that is, between a vocational elementary degree and a vocational intermediate degree. CEO duality is not common within the sample, as the mean of its dummy variable is 0.1055. Table 3 reports the correlation matrix for the variables used in the estimations. To eliminate strong correlations between the main explanatory variables that could distort the regression results, they were separated into different regressions. The VIF of all explanatory variables is below 5, which indicates the absence of multicollinearity in all the regression models.

Main Results

Tables 4 and 5 show the results of the pooled OLS regression controlling for industry and year fixed effects, with electricity consumption as the dependent variable. To mitigate causal effects and endogeneity problems, explanatory and control variables are lagged by one period (1 year). In terms of chairperson age, the results show a positive relationship between chairperson age and electricity consumption.

Discussion

The main goal of this study is to examine the relationship between CEO/chairperson characteristics and cooperatives' energy consumption. Using a sample of 1,508 cooperatives over the 3-year period from 2014 to 2016, this research supports the perspective of Upper Echelon Theory, according to which CEOs'/chairpersons' characteristics affect organisations' activities. Specifically, the results show that the age of the chairperson is positively related to energy consumption. This study also finds that the gender of the chairperson and the CEO does not correlate with electricity use, whereas CEO duality is negatively associated with energy consumption. In line with previous studies, a higher level of education leads to a higher awareness among chairpersons of environmental issues and decreases energy consumption in cooperatives.
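Before continuing the discussion, one methodological aside: the multicollinearity screen reported above (all VIFs below 5) can be reproduced along the following lines. This is a hedged sketch that reuses the assumed column names from the previous example; `variance_inflation_factor` regresses each explanatory variable on the others and reports 1/(1-R²).

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(X: pd.DataFrame) -> pd.Series:
    """VIF for each column of X (numeric, no missing values)."""
    Xc = sm.add_constant(X)  # include an intercept, as in the fitted model
    return pd.Series(
        {col: variance_inflation_factor(Xc.values, i)
         for i, col in enumerate(Xc.columns) if col != "const"},
        name="VIF",
    )

# Variables with VIF >= 5 would be flagged; strongly correlated CEO and
# chairperson characteristics were instead placed in separate regressions.
# Example: print(vif_table(df[["chair_age_l1", "chair_edu_l1", "roa_l1"]]))
```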
First, the results show a positive nexus between chairperson age and the level of energy used. The findings agree with previous studies (Davidson et al., 2007; Gray & Cannella, 1997; Oh et al., 2016), which argued that older leaders are associated with poorer organisational outcomes. This is because of the 'career horizon problem', which refers to the tendency of CEOs who are near retirement to make decisions with a short-term orientation (Matta & Beamish, 2008; Oh et al., 2016). Previous studies have also indicated that older managers are less motivated to invest in long-term activities such as R&D (Hambrick & Fukutomi, 1991) and advertising (Dechow & Sloan, 1991), and in long-term performance (Davidson et al., 2007). Reducing energy consumption by investing in energy-reduction facilities incurs costs in the short term but contributes to the sustainable development of a firm. Thus, because of the long-term payoff of energy-use efficiency, older CEOs, who are near retirement age and less motivated to make long-term-oriented decisions, are not favourably disposed towards reducing energy consumption.

Second, regarding CEO/chairperson gender, there is an insignificant relationship between female CEOs/chairpersons and cooperatives' energy consumption levels. These results are inconsistent with the hypothesis proposed. This may be because of token female representation on the top management team and the board of directors. More specifically, a critical mass of female managers and directors is needed for female managers/directors to have a positive impact on corporate social responsibility (Yarram & Adapa, 2021). A solo female member will be under pressure from the other male members of the board, so she tends to replicate the behaviour of the male majority, her point of view being token representation only (Yarram & Adapa, 2021). Moreover, Vietnam has an East Asian culture in which prejudices about female roles in society (doing housework and taking care of children) are common (Pham & Hoang, 2019). Thus, Vietnamese women have to overcome many challenges to achieve a position in the top management team or on the board of directors, and their role in business is overshadowed by male colleagues.

Third, in terms of chairperson education level, the results indicate that better-educated chairpersons help to reduce electricity consumption by spurring firms' energy efficiency. This result is in line with the previous studies of Amore et al. (2019) and Zhou et al. (2021). An increased chairperson education level leads to managerial styles concerned with corporate energy efficiency and environmental responsibility (Amore et al., 2019; Zhou et al., 2021). Chairpersons trained at a higher education level may have a broader knowledge of environmental issues.

Fourth, regarding CEO duality, contrary to the original assumption, the study's results show a positive relationship between CEO duality and the efficiency of energy use (i.e. reduced energy consumption). The results nevertheless support the findings of Bear et al. (2010) and Jizi et al. (2014). A possible reason is that, to improve tenure prospects and enhance their reputation, powerful CEOs may pursue more activities in social and environmental fields (performance and disclosure) (Jizi et al., 2014).
In addition, a cooperative is an economic organisation belonging to the collective economic sector, established on a voluntary basis for the common benefit of its members. Members cooperate and support each other in operations, business and job creation to satisfy the interests and needs of members, based on autonomy, democracy, equality and self-responsibility in cooperative management. The situation in which the Chief Executive Officer also holds the presidency of the cooperative can decrease the conflict of interest between managers and owners, in line with stewardship theory (Lam & Lee, 2008). Hence, the alignment of interests between members, which requires cooperation and support, is very important to cooperatives.

Conclusion

This study's findings confirm that CEOs' characteristics and functions have an important effect on corporate environmental performance. However, although previous studies have addressed the relationship between CEOs'/chairpersons' characteristics and firms' environmental performance, there is a lack of studies investigating the impact of CEOs'/chairpersons' characteristics on energy consumption. This study explores whether CEO characteristics matter for energy saving. Using the OLS method and lagged independent variables with a sample of 4,524 firm-year observations from cooperatives in Vietnam from 2014 to 2016, it was found that chairperson age is positively related to energy consumption. By contrast, chairperson education level and CEO duality are negatively related to energy consumption, and CEOs'/chairpersons' gender is not related to it. The results are summarised in Table 6.

Table 6. The Summary of Results.
H1: There is a negative correlation between CEO/chairperson age and energy consumption. Result: rejected; a positive relationship between chairperson age and energy use was found.
H2: A female CEO/chairperson is associated with a lower level of energy consumption than their male counterparts. Result: rejected; CEO/chairperson gender has an insignificant impact on energy consumption.
H3: CEOs/chairpersons with higher education levels lead to lower energy consumption. Result: supported; chairpersons with higher education levels lead to lower energy consumption.
H4: There is a positive correlation between CEO duality and cooperatives' energy consumption. Result: rejected; there is a negative correlation between CEO duality and cooperatives' energy consumption.

This paper contributes to both the literature and practice. First, considering the theoretical contribution, this study supports Upper Echelon Theory in that CEOs'/chairpersons' demographic characteristics (age, education level and duality) can affect corporate environmental performance such as energy-saving actions (Hori et al., 2014). These findings further validate this theory, and future studies can continue to apply it in examining the relationship between executives' characteristics and firm performance. In addition, based on other theories and previous empirical findings, the impact of CEOs'/chairpersons' characteristics on environmental performance, as well as the direction of that impact, is explained. Second, regarding practical implications, because environmental performance assessment in Vietnam is not easy, the energy consumption level is considered an appropriate indicator.
Thus, understanding the influence of CEO/chairperson characteristics on energy-saving actions is an appropriate way to investigate the relationship between managers' demographic characteristics and environmental performance. Companies can use the findings of this study to select CEOs/chairpersons whose characteristics are likely to improve energy savings as well as environmental performance. For example, a chairperson who is younger, more educated and also holds the executive position offers a higher chance of reduced energy consumption. This is also a useful indicator for stakeholders seeking to learn more about firms' environmental performance. However, this paper also has some limitations. First, this study only samples cooperatives in Vietnam, so the findings may not generalise to other samples, such as listed firms or SMEs. It is therefore suggested that future studies examine these research questions in other samples, for example in other countries. Second, this study uses electricity use as the measure of energy consumption, but cooperatives also use other types of energy, such as coal, diesel and petroleum. Hence, future studies should use coal, diesel or petroleum consumption as dependent variables and test whether the findings remain similar. Future studies could increase the robustness of the results in this way.
Patient with myelodysplastic syndrome presenting with recurrent pericardial effusion diagnosed as epicardial hemangioma: case report of a rare diagnosis with a rare presentation

Primary cardiac tumors are extremely rare, and cardiac hemangiomas comprise less than 3% of them. Presentation of such disease with recurrent pericardial effusion is even rarer. Our patient is a known case of myelodysplastic syndrome and, to our knowledge, there is no reported case in which a cardiac hemangioma was diagnosed in a patient with myelodysplastic syndrome. This 64-year-old male patient presented to our department with recurrent pericardial effusion of uncertain cause; after an extensive workup he was found to have a cardiac tumor based at the pulmonary artery and right ventricle. We operated on cardiopulmonary bypass and performed complete resection of the mass, and the biopsy showed a mixed hemangioma. Recurrent pericardial effusion is most commonly a sign of malignancy. Even with the advancement of medical technology, diagnosis of cardiac hemangiomas remains difficult. Definitive diagnosis and treatment still rest on complete surgical resection and histopathological examination.

Keywords: Cardiac hemangioma; Myelodysplastic syndrome; Cardiac surgery; Recurrent pericardial effusion; Primary cardiac tumors

Introduction

Primary cardiac tumors are extremely rare [1,2]. The majority of studies are based on autopsy findings, in which such tumors are found in 17 per million to 0.28 per cent of autopsy cases [1,2]. Secondary metastasis, on the other hand, is much more common [2]. Approximately 75% of these primary tumors are benign, and most of these are myxomas, which constitute 50% of all primary cardiac tumors [2]. Cardiac hemangiomas are extremely rare, accounting for less than 3% of all primary cardiac tumors [1,3]. In addition, cardiac hemangiomas presenting with effusion or tamponade are extremely rare [1,4]. Myelodysplastic syndrome (MDS) is a clonal marrow stem-cell disorder characterized by ineffective haemopoiesis resulting in anemia [5]. Progression to acute myeloid leukemia has been reported in several patients [5]. This report presents the case of a patient with recurrent pericardial effusion who, upon workup, was found to have a mass on the epicardium that was subsequently resected.

Case report

The patient, a 64-year-old man with a history of MDS, was referred to the Cardiac Surgery Department in Sulaymaniyah for recurring pericardial effusions. His illness began 1 year earlier, when he presented with dyspnea and orthopnea and was diagnosed with pericardial effusion.
After he failed to respond to medical therapy, he underwent pericardiocentesis, and 1 L of serous fluid was extracted. Cytological examination of the sample revealed no evidence of malignancy. After 3 months, the patient developed dyspnea and chest pain and was diagnosed with recurrent pericardial effusion. No diagnosis was reached despite multiple imaging studies and cytological examination of the fluid after re-aspiration. The patient was referred to our department with signs of yet another pericardial effusion, and we began a diagnostic workup. Besides MDS, for which he receives danazol treatment and repeated transfusions when his hemoglobin level falls below 8 g/dL, he has no history of other chronic ailments. He has been on danazol treatment for the last 4 years because he developed thrombocytopenia. He receives no other form of treatment. He does not smoke or consume alcohol regularly. Physical examination revealed that the patient was dyspnoeic and could not lie comfortably. A transthoracic echocardiogram (Philips CX50 ultrasound machine with a linear transducer) revealed a hypoechoic mass near the pulmonary valve on the right side of the heart (Fig. 1). A chest X-ray showed cardiomegaly (Fig. 2). A CT scan of the chest with IV contrast (dilute iodinated contrast) revealed a 19 mm outpouching to the left of the pulmonary trunk, and the structure showed central isodensity. In addition, there was a large pericardial effusion (Figs. 3A and B). Surgical exploration was decided upon. A classical sternotomy was performed, the pericardium was opened, a large quantity of fluid was drained from the pericardium, and a tissue sample was taken for cytology. The right ventricle and pulmonary trunk carried an epicardial vascular tumor. Aortic and right atrial cannulation was performed, and the patient was put on cardiopulmonary bypass, since we did not know whether the invasion was intracardiac or not, and the heart was arrested in diastole (Figs. 4A and B). The tumor was completely resected and sent for histopathological examination (Fig. 5). Biopsy revealed an epicardial hemangioma of mixed capillary and cavernous type (Fig. 6). Hematoxylin and eosin stain (H&E) was used at a magnification of 4×. We illustrated the location of the mass in a sketch (Fig. 7). The postoperative period was uneventful. The patient was discharged from the hospital after 4 days of hospitalization. At 3 months of follow-up, neither the pericardial effusion nor the tumor had recurred. Preoperative blood investigations included a hemoglobin of 10.7 g/dL and a platelet count of 290,000 per microliter of blood.

Discussion

Hemangiomas are benign, proliferative tumors characterized by an increased turnover of endothelial cells [2,3]. Cardiac hemangiomas may originate from any of the layers of the heart, including the endocardium, myocardium, and epicardium [6,7]. In addition, there have been reports of tumors originating from the pericardium. Hemangiomas are further classified based on their histopathological characteristics into cavernous, capillary, arteriovenous, or dysplastic types [1,6]. The cavernous type is slightly more common than the other types [1]. Mixtures of these types are not uncommon [1]. The pathology of our patient's tumor revealed mixed cavernous and capillary features.
The location of such masses varies: 39% are found in the left heart (left ventricle, left atrium, mitral valve, and aortic valve) [1], and a further 44.1% are found in the right heart, most commonly the right atrium, followed by the right ventricle [1,5]. Most tumors are located on the epicardium rather than inside the chambers [8]. These tumors are usually asymptomatic in the early stages unless cardiac insufficiency occurs [1]. In the majority of cases, the patient is either asymptomatic or normal on physical examination [1]. A decrease in exercise tolerance is the most common presenting symptom [1]. Pericardial effusion is rare with benign epicardial masses [1,2,9]; it occurs especially with malignant tumors, which cause bloody pericardial effusion or tamponade [1,9]. In our case, however, there was neither a bloody effusion nor a malignant tumor. The literature rarely mentions diseases associated with benign epicardial hemangiomas; reported associations include Turner syndrome, pectus excavatum, and Ebstein anomaly [1]. There have been reports of cases of MDS with splenic hemangiomas [10]. As far as we know, there are no cases in the literature in which a patient with MDS was diagnosed with an epicardial hemangioma. We also could not find anything in the literature linking the treatment the patient has received for the last 4 years, danazol, with the formation of any form of hemangioma. It is extremely difficult to diagnose epicardial masses on the basis of clinical findings, and cases are still commonly missed even with advances in technology [1]. Transthoracic echocardiography with contrast should be the first modality of choice, since a hemangioma is a vascular tumor and will take up the contrast rapidly [6]. It also shows the location and size of the tumor [8]. CT scans with contrast likewise show intense enhancement. Cardiac MRI shows lesions that are isointense on T1-weighted images and hyperintense on T2-weighted images compared with the myocardium [6,11]. After utilizing multiple imaging modalities in our patient, doubt remained as to the extent of the invasion. The definitive diagnosis of such entities remains surgical resection and histopathological examination [8,11]. Because the natural history of the disease does not favor conservative treatment, surgical resection is still the treatment of choice. A high recurrence rate is observed among large hemangiomas, particularly when partial resection has been performed [8]. Even asymptomatic patients should undergo surgery, because life-threatening complications may occur [7,8,11].

Conclusion

Benign primary cardiac tumors are extremely rare, and hemangiomas are among the rarest. Epicardial hemangiomas rarely present with recurrent pericardial effusions, and most recurrent pericardial effusions caused by tumors are malignant. A relationship between myelodysplastic syndrome and cardiac hemangiomas has not yet been established in the literature. Despite advances in medical technology, cardiac hemangiomas remain challenging to diagnose. For the diagnosis and treatment of such disease, complete resection remains the best option. We recommend that clinicians be aware of the possibility of primary cardiac tumors when treating patients with recurrent pericardial effusions without a primary cardiovascular disease.
Patient consent

Written consent was provided by the patient to the Cardiac Center's legal committee and the surgeon in charge to publish this case and to show photos that include investigations and intraoperative images. These are available for review if the journal so requests.

Ethical statement

Ethical approval was obtained from the hospital's Ethical Committee. The patient consented to participate in the study, and written consent was obtained from the patient for this case report and the accompanying images. This consent is available for review upon request.

Authors' contributions

Dler is the surgeon in charge who decided on surgery and was responsible for data collection. Shkar gave a scientific opinion regarding the operation and reviewed the discussion of the case report. Yad wrote and finalized the manuscript and is the corresponding author. Erfan reviewed the literature and helped with data collection. Zryan helped with manuscript writing and sketched the figure illustrating the tumor site. Razhan reviewed the manuscript for grammar and sentence structure. Han is the pathologist who diagnosed the specimen and wrote the pathology report. Othman is the radiologist who wrote the radiology report and defined the tumor's location. All authors accepted the final draft.
3D models of buildings of the future for use in construction printing

The article presents the development of conceptual models of buildings constructed using Building Information Modeling software complexes. It describes the advantages and disadvantages of modern additive technologies, specifies the features of their use, and shows examples of their integration into modern construction. Construction areas for which building models were developed were selected, and the main directions of development in these areas are indicated. An analysis of current trends in the field of architecture is presented, and a forecast of architectural trends for the coming decades is made based on the capabilities provided by the 3D Fast Build technology. The functional features of building construction in the designated time period are taken into account. They were theoretically integrated into selected objects belonging to various construction areas, which were then designed and modeled in the Autodesk Revit program, taking into account all identified trends and the expected functionality.

Introduction

The erection of buildings using a construction 3D printer is a promising and rapidly developing area in which new results are achieved every year [1]. From 2009 to 2016, innovative additive technologies entered construction practice, including D-Shape (based on the stereolithographic printing process), Minibuilders (a system of three robotic 3D printers with a clear division of functionality), DeltaWASP (high-speed clay printing of residential buildings at an affordable price), as well as the S-1160 construction 3D printer, the Big Delta 3D printer, the ProTo R 3DP robot manipulator, the CyBe RC 3Dp 3D printer, and the INNOprint printer [2-3]. The use of 3D printing is cost-effective [4]. A study by the Irkutsk company ApisCor confirms this: calculations of approximate prices per square metre show that the construction of a townhouse using additive technologies is 55% more profitable than construction using traditional methods [5]. It is already possible to construct buildings entirely at the location of the 3D printer. Examples of such projects can be found both abroad (the villa of the Hua Shang Tengda company, China; houses by the Shanghai WinSun Decoration Design Engineering company, Shanghai; the 'office of the future', Dubai; a hotel complex, Philippines) and in national construction practice (the residential building project of the ApisCor company [6]). There is reason to believe that 3D printing of homes will continue to develop rapidly in the coming decades [3]. However, modern 3D printers are still limited in their capabilities. Research shows that in addition to their undeniable advantages, they have a number of disadvantages [1-3, 5, 7], among them:
1. The appearance of defects during printing due to errors in the digital model of the building;
2. Difficulties in the construction of complex architectural structures;
3. The lack of a complete legal and regulatory framework for this type of construction;
4. The lack of a suitable material (a concrete that both hardens reliably when deposited at height and keeps the cold out of the house);
5. The dependence of construction work on the season and climatic conditions;
6. The high cost of equipment, associated with the absence of large-series production;
7. The need for mechanical and chemical processing of parts due to rough edges and burrs.
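As a small worked example of the per-square-metre comparison cited above, the sketch below computes the relative saving from two assumed unit costs. The prices are hypothetical placeholders; only the 55% figure from the ApisCor study is taken from the text.

```python
def printing_saving(traditional_cost_m2: float, printed_cost_m2: float) -> float:
    """Relative saving of 3D printing versus traditional construction."""
    return (traditional_cost_m2 - printed_cost_m2) / traditional_cost_m2

# With assumed prices of 600 and 270 currency units per square metre,
# the saving is 55%, matching the figure reported for the townhouse:
assert abs(printing_saving(600.0, 270.0) - 0.55) < 1e-9
```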
Currently, the Russian construction company AOCG is working on a printer called «3D Fast Build». This is a hybrid additive 3D-printing technology that uses concrete on polymer binders with composite reinforcement of building structures of any shape, without the use of formwork. The technology makes it possible to print horizontal overhanging elements and to integrate insulation into structures, which allows curved structures and surfaces with reverse bending to be printed. The purpose of this research is to develop 3D architectural models of buildings to be used in printing on the AOCG construction printer. The models must combine internal functionality, curved shapes (to test the printer's capabilities), and the relevance of the «3D Fast Build» technology at the time of commissioning. The objectives of this article are to:
1. Analyze architectural trends for the next 5-10 years;
2. Select the most relevant areas of construction in which complex curved shapes can be used;
3. Determine the type of building within each selected area and explore trends for these buildings;
4. Model architectural 3D models of buildings in the Autodesk Revit and Lumion software.

Search and analysis of architectural styles and construction areas

A study was conducted and several of the most relevant architectural styles were identified. They are focused on meeting the physical, aesthetic and other needs of a person: retrofuturism and futurism [8], bio-tech [9], and eco-architecture [10-11]. Within the developed architectural styles, the following areas of construction were considered: industrial, civil, administrative, and municipal [12]. In these areas, various buildings were selected that are expected to incorporate technological innovations such as energy efficiency, artificial intelligence, new ways of generating energy, and modern computer technologies that will create a new, fantastic virtual reality [13]. Special attention was paid to changes in people's needs at work [14], at school [15] and at home [16], as well as to changes in the technological processes in these areas. All changes in functionality will lead to changes in the space-planning solutions of future projects.

Modeling 3D building models

The modeling procedure consists of the following steps:
1. Creating zoning plans in Autodesk Revit 2019. This software package was chosen for two reasons: in comparison with ArchiCAD, Revit has a number of unique features that yield a higher-level product, especially for models of complex structure [17], and it supports third-party programs via the universal BIM format, IFC, with well-established training manuals [17];
2. Shaping in SketchUp, Autodesk Inventor Professional, Autodesk Revit 2019, and Autodesk Revit 2020;
3. Creating the final models in Autodesk Revit 2019 and 2020;
4. Creating photorealistic images of the developed projects in the Lumion 9.0 program.

Long-term architectural trends

The analysis of various sources and articles by different authors substantiates the relevance and objectivity of such directions as bionic architecture, sustainable architecture and neofuturist architecture. These directions reflect many contemporary long-term tendencies and trends, including the application of green technologies, modularity and mobility of constructions and dwellings, space and property sharing, and the policy of the new consumerism.
Modeling results

The results of this research are the building concepts whose characteristics are shown in Table 1.

Table 1. Characteristics of the building concepts.
- Industrial building. Space-planning solutions: manufacturing zone sharing, modularity, a 'black box' zone, a tourist zone, an energy-efficient building shape, changing functionality and size of zones for employees. Functional features: customization of production, minimal human participation in manufacturing, development of industrial tourism and edutainment services, customer participation in product creation.
- Office building. Space-planning solutions: open-space, showroom and techno-office areas, increased recreation areas, flexible layouts. Functional features: introduction of artificial intelligence in production, rooms for VR meetings, interactive work, no strict division of workplaces, edutainment service.
- Educational organization (school). Space-planning solutions: coworking, open-space and project-room areas, free layout of workspaces, no standard forms. Functional features: introduction of project-based education, creation of comfortable conditions for students, dismantling of the hierarchical 'teacher-student' system, multifunctionality of educational areas and zones.

The seashell shape of the school building allows a wide open space and non-standard layouts to be implemented; the dynamism of the internal spaces is supported by minimizing the number of walls and partitions. The new form opens a new perspective, changing the ordinary idea of the educational process, eliminating the archaic hierarchical system, and introducing project-based education and an individual approach. The 3D model was produced in the Autodesk Revit 2020 software program and is shown in Fig. 1 (3D model of the school building). The visualization of the school building, performed in the Lumion 9.0 program, is shown in Figs. 2 and 3.

The cottage building idea is three surging waves. This form provides efficient use of space and high functionality of the building. Communications and networks are located in the floor of the lower premises, and the new drone parking zone is combined with the technical room. The 3D model was produced in the Autodesk Revit 2019 software program and is shown in Fig. 4 (3D model of the cottage building). The visualization of the cottage building, performed in the Lumion 9.0 program, is shown in Figs. 5 and 6. The visualization of the office building, performed in the Lumion 9.0 program, is shown in Figs. 8 and 9.

The industrial building is based on the idea of cyclicity, which is reflected in its form, zoning and energy efficiency. The structure consists of three modular blocks in the form of turtles, which can be detached and transported to another location. Transport routes and workspaces inside the plant are connected by circular passageways. There are also recreation areas and interactive manufacturing zones for visitors and customers. The building is completely self-powered. The 3D model of the industrial building was produced in the Autodesk Revit 2019 software program and is shown in Fig. 10. The visualization of the industrial building, performed in the Lumion 9.0 program, is shown in Fig. 11.

Discussion

The main objectives of this study were accomplished. The created projects can be used as a data source for the architectural section by design engineers and HVAC engineers. The project was created in the BIM software package Autodesk Revit; therefore, when the additive technology 3D Fast Build comes into operation, the calculation of the volume and price of the materials selected for the printer can be carried out quickly through the Revit calculation complex.
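As an illustration of the volume-and-price calculation just mentioned, the following sketch totals material cost from element volumes of the kind a Revit material takeoff would provide (for example via an IFC export). The element names, volumes and unit price are hypothetical, and this is plain Python, not Revit API code.

```python
from dataclasses import dataclass

@dataclass
class PrintedElement:
    name: str
    volume_m3: float      # volume from the BIM model's material takeoff
    price_per_m3: float   # assumed price of polymer-binder concrete

def total_material_cost(elements: list) -> float:
    """Sum volume times unit price over all printable elements."""
    return sum(e.volume_m3 * e.price_per_m3 for e in elements)

elements = [
    PrintedElement("curved walls", 48.0, 120.0),
    PrintedElement("overhanging slab", 15.5, 120.0),
]
print(f"Estimated concrete cost: {total_material_cost(elements):,.2f}")
```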
The concepts can serve as the basis for shaping future architectural projects, and research in the field of architecture provides a clear understanding of the functional and architectural changes in the selected areas of construction. Additive technologies are a promising way of implementing various construction processes. It is possible to create complex curved shapes using 3D printing, whereas with standard technological processes it is impracticable to achieve such atypical shapes. The created building concepts can fully demonstrate the potential of AOCG's 3D Fast Build printer and of additive technologies in general. Using a 3D printer during construction can make buildings as comfortable as possible for people, since the convenience of the building is the focus of all subsequent construction work and planning processes. The models meet functional and aesthetic requirements, which makes them a good example of top-quality design. The models also correspond to the functional and architectural trends expected in future buildings. The emergence of new departments, rooms and functional divisions likely to appear in the coming years was taken into account. The designs follow the clear tendency of minimizing human participation in manufacturing and in auxiliary processes, so complete automation is provided for in each building. The developed models have absorbed all the advantages that the use of additive technologies provides. In the future, they can be used to design other concepts of buildings and structures, and as a basis for shaping.
A thermophilic phage uses a small terminase protein with a fixed helix-turn-helix geometry

Tailed bacteriophage use a DNA packaging motor to encapsulate their genome during viral particle assembly. The small terminase (TerS) component acts as a molecular matchmaker by recognizing the viral genome as well as the main motor component, the large terminase (TerL). How TerS binds DNA and the TerL protein remains unclear. Here, we identify the TerS protein of the thermophilic bacteriophage P74-26. TerS P74-26 oligomerizes into a nonamer that binds DNA, stimulates TerL ATPase activity, and inhibits TerL nuclease activity. Our cryo-EM structure shows that TerS P74-26 forms a ring with a wide central pore and radially arrayed helix-turn-helix (HTH) domains. These HTH domains, which are thought to bind DNA by wrapping the helix around the ring, are rigidly held in an orientation distinct from that seen in other TerS proteins. This rigid arrangement of the putative DNA-binding domain imposes strong constraints on how TerS P74-26 can bind DNA. Finally, the TerS P74-26 structure lacks the conserved C-terminal β-barrel domain used by other TerS proteins for binding TerL, suggesting that a well-ordered C-terminal β-barrel domain is not necessary for TerS to carry out its function as a matchmaker.

INTRODUCTION

Viruses infecting all domains of life, from bacteria to eukaryotes, replicate and encapsulate their genetic material to create infectious particles. For viruses with large genomes, transporting genetic material into the capsid is an energetic challenge, and many viruses have evolved motor systems to accomplish this task. Viruses with concatemeric double-stranded DNA genomes, such as herpesviruses and most phage, use a motor known as a 'terminase motor'. Terminase motors are composed of three components: a 'portal' channel, a 'small terminase' DNA-recognition protein, and a 'large terminase' that contains both nuclease and ATPase activities (Feiss and Rao, 2011). The portal, which is embedded within the capsid wall, acts as an adaptor to connect the capsid to the large terminase. The large terminase (TerL) binds the portal and pumps DNA through its pore into the capsid. For this packaging step to occur, the motor must first specifically recognize the viral genome. This DNA-recognition task is performed by the small terminase (TerS), which binds a recognition sequence known as 'cos' or 'pac' and transfers the DNA to TerL for subsequent cleavage and packaging. Cos- and pac-containing phage are distinct in their cleavage mechanisms: cos phage cleave only at the cos site between genomes, whereas pac-containing phage use the pac site solely for packaging initiation, with the position of subsequent cleavage events dependent on a head-full sensing mechanism. It has been demonstrated that TerS has an important role in packaging initiation, as aberrant pac recognition impedes faithful genome packaging (Casjens et al., 1992; Schmieger, 1972). Despite several decades of investigation, how TerS binds to pac is still unclear. In many viral genomes, the pac site is located within the gene for TerS itself (Baumann and Black, 2003; Casjens et al., 1987; Chai et al., 1995; Leavitt et al., 2013; Roy et al., 2012; Wu et al., 2002). The pac site of phage SPP1 appears to be flexible, suggesting a role for DNA bending in TerS recognition (Chai et al., 1995). Further clues to the DNA-binding mechanism come from structures of TerS proteins.
All currently known pac-recognizing TerS proteins multimerize into a ring with a central pore (Buttner et al., 2012; Roy et al., 2012; Sun et al., 2012; Zhao et al., 2010). In some of these assemblies, such as those of Shigella flexneri phage Sf6 and Bacillus subtilis phage SF6, the pore is too narrow to accommodate double-stranded DNA binding (Suppl. Table 1) (Buttner et al., 2012; Zhao et al., 2010). In these structures, the outward-facing N-terminal domain is a helix-turn-helix motif, a common DNA-binding domain. Studies of Sf6 TerS indicate that mutation of this region of the protein abrogates DNA binding, suggesting a nucleosome-like wrapping mechanism (Zhao et al., 2012). The exception to this model is the TerS structure of phage P22. In P22, the perimeter of the ring lacks the helix-turn-helix motif, and the pore is wide enough to accommodate DNA (Roy et al., 2012). This finding led to a second 'threading' model in which DNA binds in the center of the ring, traversing the pore (Suppl. Table 1). Regardless of the location of the DNA-binding regions, all known TerS rings retain the same mushroom-like shape with a C-terminal β-barrel. TerS interacts with TerL using this β-barrel region, which is conserved in all TerS structures to date (Gao and Rao, 2011; Roy et al., 2012). TerS binding increases TerL's ATPase activity while inhibiting its nuclease activity (Baumann and Black, 2003; Gual et al., 2000; Leffers and Rao, 2000; Roy et al., 2012; Sun et al., 2012), suggesting that TerS has a regulatory effect on DNA packaging. Additionally, the β-barrel can control TerS assembly, as removing it causes polydisperse ring formation (Buttner et al., 2012; Sun et al., 2012). Therefore, the C-terminal β-barrel has been hypothesized to be important for both TerS oligomerization and regulation of TerL activity. In past studies, we have used the thermophilic phage model system P74-26 to probe the mechanisms behind different stages of the viral life cycle (Hilbert et al., 2015, 2017; Stone et al., 2018). Here, we identify and characterize the small terminase gene of phage P74-26, hereafter known as TerS P74-26. TerS P74-26 binds DNA and both activates the ATPase and inhibits the nuclease activity of TerL P74-26. We report symmetric and asymmetric cryo-EM reconstructions of TerS P74-26 at overall resolutions of 3.8 Å and 4.8 Å, respectively. Our structures show that TerS P74-26 retains the N-terminal helix-turn-helix motif, while also having a pore wide enough for DNA binding. In comparison to other TerS proteins, the helix-turn-helix domain is in a distinct conformation, with implications for the DNA-binding mechanism. Finally, the C-terminal region of P74-26 TerS is unstructured, indicating that the β-barrel fold is not strictly conserved, nor is it essential for regulating P74-26 TerL activity.

Identification of P74-26 gp83 as the small terminase (TerS)

To investigate how thermophilic small terminase proteins recognize the viral genome, we sought to identify and characterize the TerS of phage P74-26. TerS proteins commonly exhibit low sequence conservation, which can make their identification challenging. However, synteny can be used to identify the gene, as the small terminase gene often directly precedes the large terminase gene. Because gene 84 encodes the large terminase (Minakhin et al., 2008), we hypothesized that the gp83 protein is TerS.
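The synteny argument can be made concrete with a toy example: given a genome annotation, the natural TerS candidate is the gene immediately upstream of the large terminase gene. The annotation list below is a made-up stand-in for a real genome record, not data from the paper.

```python
# Toy annotation as (gene, product) pairs in genome order.
annotation = [
    ("gene_82", "hypothetical protein"),
    ("gene_83", "hypothetical protein"),   # the TerS candidate
    ("gene_84", "large terminase (TerL)"),
    ("gene_85", "portal protein"),
]

def ters_candidate(genes):
    """Return the gene directly upstream of the large terminase gene."""
    for upstream, (name, product) in zip(genes, genes[1:]):
        if "large terminase" in product.lower():
            return upstream[0]
    return None

print(ters_candidate(annotation))  # -> 'gene_83'
```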
Although gp83 has low sequence homology to any known TerS protein (its closest relative being T4 TerS, with 19% identity), its length of 171 amino acids is similar to that of known TerS proteins. To further verify its identity, the putative TerS protein was recombinantly expressed and purified to homogeneity (Figure 1A). Size-exclusion chromatography with multi-angle light scattering (SEC-MALS) shows that gp83 assembles into a stable 9-mer complex, with a measured molecular mass of 170 kDa (compared to 171 kDa calculated from the sequence) and a polydispersity index of 1.000, indicating a monodisperse assembly (Figure 1B). The oligomerization state of gp83 is consistent with that of mesophilic TerS proteins, which assemble into 8- to 11-subunit oligomers (Buttner et al., 2012; Roy et al., 2012; Sun et al., 2012; Zhao et al., 2010). To determine whether gp83 binds DNA like other TerS proteins, we performed electrophoretic mobility shift assays. Because many other TerS oligomers recognize a sequence within their own gene (Baumann and Black, 2003; Casjens et al., 1987; Chai et al., 1995; Leavitt et al., 2013; Roy et al., 2012; Wu et al., 2002), we used the P74-26 gp83 DNA sequence to evaluate DNA binding. The gp83 complex binds DNA weakly, as indicated by smearing within the gel (Figure 1C). Low DNA-binding affinity is commonly seen in other TerS proteins (Greive et al., 2016; Zhao et al., 2012). We also find that gp83 modulates the enzymatic activities of TerL. Upon mixing gp83 with TerL P74-26, ATPase activity increases 4.4-fold (Figure 1D). This suggests a direct interaction between TerL and gp83, as no DNA is present in the experiment. gp83 also inhibits TerL nuclease activity 3.3-fold (Figure 1E). The modulation of TerL enzymatic activities is consistent with previous studies of TerS proteins from other phages (Alam et al., 2008; Gual et al., 2000; Leffers and Rao, 2000; Sun et al., 2012). Taken together, our results identify gp83 as the TerS of P74-26.

The structure of TerS P74-26

We next used electron microscopy (EM) to determine the structure of TerS P74-26. Negative-stain EM shows homogeneous TerS particles with an even distribution of top and side views (Suppl. Figure 1A). From 2D classification, we observe that TerS P74-26 forms a ring-shaped assembly with a central pore. To further elucidate the structure of TerS P74-26, we prepared samples of the complex for single-particle reconstruction by cryo-EM. Unlike negative-stain samples, cryo-EM samples show strong preferred orientation for the top and bottom views of the ring and slight aggregation (Suppl. Figure 1B). The lack of side views severely hampered initial structure determination, and the middle portion of the ring could not be resolved (Suppl. Figure 1C). To increase particle side views, we used a combination of sample additives and tilted data collection. Of the numerous additives tested, amphipol A8-35 had the greatest effect on particle view distribution. After collecting a set of un-tilted images, we used a 30° tilt to obtain additional particle views (Suppl. Figures 2A-C). Initial 3D classification of the combined datasets produced six different classes, several of which are of particular interest (Figure 2A). Classes 1 and 2, which account for over 50% of all particles, show apparent 9-fold symmetry. Asymmetric refinement of these combined classes generates a reconstruction with an overall resolution of 4.4 Å (Figures 2B&C; Suppl. Figure 3B; Table 1). The features of this reconstruction remain 9-fold symmetric.
Therefore, we refined class 1, the best-resolved class, containing 84,460 particles, with C9 symmetry to further improve the resolution. (Refinement including both classes 1 and 2 resulted in a slightly poorer resolution.) 3D refinement of class 1 with imposed symmetry results in a reconstruction of the TerS ring at an overall resolution of 3.8 Å (Figures 2D&E; Suppl. Figure 3C; Table 1). Subsequent classification steps with and without alignment did not provide any improvement in the overall resolution. Using the symmetric reconstruction, we built an atomic model of TerS P74-26 (Figures 3A-C; Table 1). The model was constructed using the crystal structure of TerS from phage g20c as a starting model (PDB: 4XVN; 98.2% identity to TerS P74-26 over the full-length protein). Each TerS P74-26 monomer has an N-terminal helix-turn-helix (HTH) motif, followed by an oligomerization domain consisting of two antiparallel helices. These helices pack against the oligomerization-domain helices of the neighboring subunit, forming a helical barrel. From the oligomerization-domain barrel, the HTH domains extend outward like the spokes of a wheel. The helical barrel arrangement of the oligomerization domains is highly reminiscent of the central oligomerization domains of the TerS proteins from phages SF6 and 44RR, with ɑ-helix 5 of the oligomerization domain positioned in the crevice between ɑ-helices 4 and 5 of the counter-clockwise adjacent subunit when viewed from the C-terminal region (Buttner et al., 2012; Sun et al., 2012) (Figure 3D). The central oligomerization domains appear to be well ordered, as the local resolution of the 3D reconstruction shows that the center of the pore has the highest resolution, at 3.6 Å (Suppl. Figures 4A&B). The poorest resolution, as low as 4.5 Å, is found around the perimeter of the ring in the tips of the HTH domains (Suppl. Figures 4A&B). The HTH domain of one subunit interacts with both of the subunits to its right through a series of hydrophobic interactions (Figure 4A). Furthermore, the linker connecting the HTH to the ring (residues 51 to 56) is firmly packed against the adjacent subunit's oligomerization domain (Figure 4A). Altogether, the HTH domains and linkers bury ~1570 Å² of area and complete the hydrophobic core of the oligomerization domain. These interactions lock the HTH domains in place and strengthen the nonameric ring by an estimated ~9 kcal/mol according to the PISA server estimation tool (Krissinel and Henrick, 2007). In comparison to mesophilic TerS structures, no other TerS assembly has a similar interaction between the HTH domain and the neighboring oligomerization domain (Fig. 4). We propose that this unique arrangement in TerS P74-26 contributes to the rigidification of the HTH domains, with implications for the DNA-binding mechanism (see below). Contrary to our expectations, the last 35 C-terminal residues of the protein are missing in the reconstruction. In mesophilic TerS proteins, this region forms a β-barrel with neighboring subunits and is responsible for TerL binding (Buttner et al., 2012; Gao and Rao, 2011; Roy et al., 2012; Zhao et al., 2010). Both the asymmetric and symmetric TerS reconstructions lack density for this region (Figures 2C&E). In 2D classification, side views of the protein show blurry density in the region where the C-terminal region is expected, indicating that the region is present but not resolvable (Suppl. Figure 3A). Interestingly, secondary structure prediction designates this region of TerS P74-26 as ɑ-helical (Suppl. Figure 5), which is unexpected because all other TerS structures exhibit C-terminal β-barrels (Buttner et al., 2012; Roy et al., 2012; Zhao et al., 2010).

Comparison of TerS P74-26 with mesophilic TerS proteins

The oligomerization domain of TerS P74-26 is similar to that of phage 44RR, a close relative of phage T4 (Sun et al., 2012). In both species, the oligomerization domain consists of two straight, antiparallel helices that assemble into a helical barrel structure (Suppl. Figure 6). The overall Cɑ RMSD between the helices of the oligomerization domains of 44RR and P74-26 is 2.6 Å, suggesting that the two domains have considerable structural similarity. However, the barrel of TerS P74-26 is a strict 9-mer (Fig. 1B), while that of TerS 44RR is less well defined, ranging from an 11-mer to a 12-mer (van Duijn, 2010; Sun et al., 2012). This suggests that ring stoichiometry is controlled by slight differences in intersubunit interactions rather than by overall secondary structure. In comparison, the TerS of Shigella phage Sf6 uses a similar fold of antiparallel helices, although the helices are quite bent (Suppl. Figure 6) (Zhao et al., 2010). Furthermore, the interactions between neighboring oligomerization domains of TerS Sf6 are different from those in other TerS proteins, as was pointed out previously (Sun et al., 2012). The oligomerization domain of the TerS of Bacillus phage SF6 is also quite distinct, with a beta-hairpin inserted at the turn between the two antiparallel helices; these twisted beta-hairpins extend the barrel structure formed by the helical region of the oligomerization domain (Buttner et al., 2012). Despite the substantial differences in primary amino acid sequence, secondary structure (Suppl. Figure 7), and mechanism of assembly, the overall structure is remarkably similar across phage, with the barrel architecture retaining an overall outer dimension of 52 to 77 Å between Cɑ atoms across the barrel. Therefore, we hypothesize that it is not the core oligomerization fold that is conserved, but rather the barrel shape itself.

The HTH domain of TerS P74-26 is also arranged distinctly from those of other phage. In TerS SF6 and TerS Sf6, the HTH domains are flexible with respect to the central oligomerization domain (Buttner et al., 2012; Zhao et al., 2010, 2012). It is speculated that this flexibility permits the HTH domains to stagger during DNA wrapping, allowing the DNA to adopt a less strained conformation. We performed several analyses to investigate whether the same conformational changes occur between the HTH domains of TerS P74-26. First, we examined class 6 (86,969 particles), which is the most asymmetric class, with only eight HTH domains visible (Figure 2A). As other TerS structures show flexibility in the HTH domains (Buttner et al., 2012; Zhao et al., 2010, 2012), it is possible that the missing domain in this class is due to the inherent flexibility of this region. 3D refinement with no symmetry applied produces a reconstruction with an overall resolution of 4.8 Å (Suppl. Figure 8A-C; Table 1). This reconstruction was used to create an atomic model of the class 6 structure by rigid-body fitting each domain of the symmetrical model into the density (Suppl. Figure 8D; Table 1). Comparing each chain of the class 6 asymmetric model to all other chains within the model, no differences in HTH motif orientation relative to the oligomerization domains were observed (Suppl. Figure 8E).
To determine whether the missing HTH domain is the result of proteolytic removal rather than protein flexibility, we ran concentrated purified protein on an SDS-PAGE gel. The gel shows minor proteolysis of TerS, with a band at the approximate size of a subunit missing a HTH domain (Suppl. Figure 8F). Using gel densitometry, we estimate that approximately 4.5% of the protein is proteolysed, which is comparable to the ~3% estimated by cryo-EM. This result suggests that the missing HTH domain in class 6 is due to proteolysis, rather than conformational heterogeneity within the TerS ring. Our attempts to visualize any conformational heterogeneity using multi-body refinement or localized reconstruction methods were complicated by the small size of the HTH domain (~6 kDa; data not shown). Nonetheless, our data indicate very little conformational heterogeneity in the HTH domains of TerS P74-26. The arrangement of the HTH domains around the perimeter of the ring is critical for examining the wrapping model that has been proposed for most TerS proteins (Buttner et al., 2012; Gao et al., 2016; Zhao et al., 2010, 2012). HTH domains usually contain three helices and interact with the DNA major groove using ɑ-helix 3 (Aravind et al., 2005). In comparison to the crystal structure of Shigella phage Sf6 TerS, the P74-26 HTH domains extend outward and rotate 56° counter-clockwise with respect to the central oligomerization domains (Figure 4B). This rotation positions ɑ-helix 3 of TerS P74-26 nearly perpendicular to the central oligomerization domains, whereas in Sf6 this helix is at a 70° angle relative to the oligomerization domains. In the crystal structure of Bacillus SF6 TerS, the three HTH domains in the asymmetric unit are tethered to the ring by highly flexible linkers, with one HTH domain invisible and the other two positioned in dramatically different orientations (Buttner et al., 2012). Neither of the two visible conformations of TerS SF6 is similar to that observed in TerS P74-26. While one HTH domain of TerS SF6 is oriented downward similarly to TerS P74-26, it exhibits a 53° clockwise rotation with respect to the oligomerization domain (Figure 4C). The second HTH orientation in the SF6 crystal structure is even more dissimilar, and is positioned in an 'up' conformation with a 113° clockwise rotation (Figure 4C). Therefore, in comparison to Sf6 and SF6 TerS proteins, the helix-turn-helix domains of the P74-26 TerS model are oriented differently in relation to the oligomerization domains, suggesting there are mechanistic distinctions in how the three TerS proteins bind DNA. The 'turn' of the HTH domain in TerS P74-26 contains basic and polar residues. These residues, specifically Lys31, Arg32, Lys33, and Thr35, may bind the DNA phosphate backbone. In phage SF6, it was shown that residues in this 'turn' region contribute non-specifically to DNA binding (Greive et al., 2016). Helix 3 of TerS P74-26 is also lined with polar and charged residues (Suppl. Figure 9), similar to other HTH domains (Beamer and Pabo, 1991; Brennan et al., 1990; Schultz et al., 1991). From this, we predict that the 'turn' region of the P74-26 HTH domain primarily binds DNA phosphates through nonspecific interactions, while polar residues of helix 3 interact with DNA bases and sugars.
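As a numerical cross-check of the two proteolysis estimates discussed above, the gel-densitometry fraction and the cryo-EM fraction can be compared directly. The sketch below is illustrative only: the band intensities are hypothetical placeholders chosen to reproduce the reported ~4.5%, and the cryo-EM estimate assumes each class 6 ring lacks exactly one of its nine HTH domains.

```python
# Illustrative cross-check of the two proteolysis estimates.

# Gel densitometry: fraction of subunits running as the truncated species.
# Band intensities below are placeholder values, not measured data.
full_length_band = 955.0   # arbitrary densitometry units
truncated_band = 45.0
gel_fraction = truncated_band / (full_length_band + truncated_band)

# Cryo-EM: class 6 rings (86,969 of 295,395 particles) are each assumed
# to lack one of nine HTH domains, so the fraction of clipped subunits is:
class6_particles = 86_969
total_particles = 295_395
domains_per_ring = 9
cryoem_fraction = class6_particles / (total_particles * domains_per_ring)

print(f"gel densitometry estimate: {gel_fraction:.1%}")   # ~4.5%
print(f"cryo-EM estimate:          {cryoem_fraction:.1%}")  # ~3.3%, i.e. the reported ~3%
```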
The unresolved C-terminal region A C-terminal β-barrel region is thought to be a necessary component in other phage TerS proteins, as the β-barrel stabilizes the oligomerization state of the complex and its removal results in polydisperse oligomers (Buttner et al., 2012; Sun et al., 2012). The formation of the barrel requires strict interactions between β-strands of neighboring subunits, which enforces proper stoichiometry of the ring. However, in our extensive analysis of the cryo-EM data, we find no evidence of β-barrel formation, yet our TerS assemblies remain completely monodisperse according to SEC-MALS (Figure 1B). Moreover, the crystal structure of the nearly identical TerS protein from the Antson Lab (PDB code 4XVN) also lacks any density for this region. Therefore, we propose that a β-barrel is not critical for retaining correct stoichiometry in TerS P74-26. Additionally, it is known that the TerS C-terminal region makes critical contacts with the large terminase for packaging (Gao and Rao, 2011; Roy et al., 2012). This raises the question of how the small terminase of this thermophilic phage binds TerL, and what the nature of this interaction is. It is possible that TerS P74-26 requires a partner, such as DNA, TerL, or another protein, to order the C-terminal region. Because the C-terminal region is predicted to be ɑ-helical, this interaction mechanism could be distinct from that of TerS proteins from other phage with β-barrel domains. The lack of a rigid connection between the β-barrel and the oligomerization domain core could have a functional role, as perhaps this flexibility allows the motor to function more efficiently. Future studies will investigate this issue. The role of the fixed HTH domains in binding DNA The HTH domains of TerS P74-26 are rigidly bound to the central hub of oligomerization domains. This is in contrast to structures of TerS rings from other phage, in which the HTH domains are flexibly tethered to the hub. The interaction between the HTH domains and the oligomerization hub is mediated by residues in the cleft between helices 1 and 3 of the HTH domain. Other HTH domains often have idiosyncratic interactions or structural features that are positioned within this cleft, suggesting that the cleft is a hotspot for evolution of new interactions (Aravind et al., 2005). We hypothesize that this interface evolved in progenitors of TerS P74-26 to increase the stability of the TerS ring. By tightly locking to the oligomerization domains of neighboring subunits, the HTH domains of TerS P74-26 can also play a role in stabilizing the overall ring assembly. The interface formed between HTH domains and the neighboring oligomerization domains is substantial, and consists primarily of hydrophobic interactions (Figure 4A). Because the entropically driven hydrophobic effect becomes stronger with increasing temperature (Huang and Chandler, 2000), we anticipate that the HTH domains remain locked in place even in the elevated-temperature environment of phage P74-26. This unique interaction between the HTH and oligomerization domain serves to enforce the stability and stoichiometry of the TerS P74-26 ring. The linker between the HTH and oligomerization domain is nearly fully extended, yet locked in place through hydrophobic interactions forming part of the hydrophobic core (Figure 4A).
This constrains ring stoichiometry, as each HTH domain contacts two other subunits within the assembly through this linker, and other oligomeric states would likely not support the geometry of these interactions. With strict HTH-oligomerization domain interactions enforcing stability and stoichiometry of the ring, the constraints of an ordered β-barrel domain are released. Thus, we hypothesize that these interactions allowed the C-terminal β-barrel domain of TerS P74-26 to no longer adopt a rigid conformation relative to the oligomerization domain. Furthermore, we propose that the conformation of the HTH domains observed for apo-TerS P74-26 represents the overall location and orientation of TerS HTH motifs after DNA binding. Although we currently lack a DNA-bound structure of TerS P74-26, the tight interaction between HTH and oligomerization domains makes it doubtful that the ring undergoes a substantial rearrangement upon binding DNA. If the HTH domain releases from the oligomerization domain, this would expose the hydrophobic core and linker to solvent. The energetic penalty for hydrophobic exposure would be even more acute at the elevated temperature of P74-26's native environment. Therefore, it is likely that the HTH domains remain locked into position, even after DNA binding. The fixed orientation of the HTH domains places major constraints on how TerS P74-26 wraps DNA around the ring. HTH domains most often bind DNA by inserting the recognition helix (helix 3 in ringed TerS proteins) into the DNA major groove to achieve specificity, with residues in the 'turn' used for binding the phosphate backbone (Beamer and Pabo, 1991; Brennan et al., 1990; Schultz et al., 1991). The homologous protein TerS SF6 appears to adopt this typical HTH-DNA binding mode, as the 'turn' and N-terminal region of ɑ-helix 3 contribute to non-specific DNA binding (Greive et al., 2016). In TerS P74-26, the localization of basic residues in this region (Lys31, Arg32, Lys33) creates a positively-charged surface (Figure 5B) that could interact with negatively-charged DNA phosphates. Helix 3 of TerS P74-26 lies on the top of the HTH domain, with the exposed surface containing several polar groups that could be used for hydrogen bonding to DNA bases and sugars (Suppl. Figure 9). Therefore, we predict that the DNA is positioned along the 'top' of the HTH domains of TerS P74-26. The spacing between helix 3 of adjacent subunits is ~30 Å, which is approximately what is expected for the major groove spacing of DNA wrapping around the TerS P74-26 ring (~80-100 Å diameter between recognition helices). As a point of comparison, the major groove spacing in nucleosomal DNA is slightly tighter (~28 Å), for wrapping around a particle that is smaller (~65 Å) (Luger et al., 1997). We hypothesize a different DNA binding mode for TerS P74-26 compared to its mesophilic cousins. DNA wrapping would favor superhelix formation, as this allows the two ends of DNA to pass each other freely without steric hindrance (Suppl. Figure 10). (An example of a superhelix is the nucleosome, in which the DNA spirals around the histone core.) The flexibility in the HTH domains observed for TerS SF6 and TerS Sf6 could accommodate superhelix formation. However, the rigid orientations of the TerS P74-26 HTH domains would prevent a superhelical conformation. Therefore, we propose that at least one of the HTH domains disengages from the DNA so that one end of the DNA can pass the other unimpeded.
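The spacing argument above reduces to simple ring geometry: if the DNA major grooves must land on the recognition helices of all nine subunits, the arc length between adjacent helix 3 positions is the wrapping circumference divided by nine. The sketch below works through this under that assumption, using diameters from the ~80-100 Å range quoted above.

```python
import math

# Arc spacing between recognition helices for DNA wrapped around a 9-mer ring.
# Assumes the DNA path is a circle whose diameter equals the distance
# between recognition helices (taken from the ~80-100 Å range in the text).
subunits = 9
for diameter_A in (80.0, 86.0, 100.0):
    circumference = math.pi * diameter_A
    arc_per_subunit = circumference / subunits
    print(f"diameter {diameter_A:5.1f} Å -> {arc_per_subunit:4.1f} Å between helices")

# A diameter near ~86 Å gives ~30 Å spacing, matching the observed helix 3
# spacing, which in turn must match the major-groove repeat of the wrapped DNA.
```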
Future studies will examine how DNA binding and sequence recognition are achieved. Alternatively, DNA could thread through the central pore instead of wrapping around the HTH domains. The narrowest diameter of the TerS P74-26 pore is 29 Å, which is large enough to accommodate double-stranded DNA (~20 Å diameter). Although some TerS proteins have central pores too small to accept double-stranded DNA (Buttner et al., 2012; Zhao et al., 2010), TerS P22 is hypothesized to bind DNA using a threading mechanism, as it lacks a HTH domain (Roy et al., 2012). Interestingly, it is predicted that TerS P22 has an ɑ-helical C-terminal region following the β-barrel (Roy et al., 2012), similar to the secondary structure prediction of TerS P74-26 (Suppl. Figure 5; Suppl. Table 1). The inner pore of TerS P74-26 has a mixed electrostatic surface, with interspersed layers of basic and acidic residues (Figure 5A). The pore surface may form tracts of attractive and repulsive DNA binding regions. If DNA threads through the central pore, the DNA may tilt relative to the central pore axis of TerS P74-26 to avoid interactions with acidic residues. There is precedent for an off-axis mode of DNA binding within a ring, as DNA binds inside DNA polymerase sliding clamps in a tilted fashion (Georgescu et al., 2008). Future studies will test this threading model. (It is worth mentioning that the threading and wrapping models are not mutually exclusive.) Together, our work presents a novel thermophilic system for studying small terminase proteins and their role in viral maturation. To our knowledge, this is the first cryo-EM structure of a small terminase protein at a resolution permitting atomic modeling, yet the C-terminal region is not well-ordered. Future studies of TerS P74-26 will elucidate the conformation of the C-terminal region and its role in TerL binding and enzymatic regulation, as well as the DNA binding mechanism. Cloning: The TerS P74-26 gene was synthesized with codon optimization for expression in E. coli by Genscript Corporation. The gene was cloned into the BamHI and NdeI sites of a modified pET28a vector with an N-terminal His6-T7-gp10 expression tag and a PreScission protease cleavage site. Enzymes were purchased from New England BioLabs. Oligonucleotides were purchased from IDT. Protein expression and purification: Protein was expressed in BL21-DE3 cells containing the pET28a-TerS plasmid. Bacterial cultures were grown at 37°C in Terrific Broth supplemented with 30 µg/ml kanamycin until an OD600 of 0.7 was reached. Cells were moved to 4°C for 20 minutes, after which expression was induced by addition of IPTG (isopropyl-β-D-thiogalactopyranoside) to 1 mM. Cells were then returned to an 18°C incubator to shake overnight. Cells were pelleted and resuspended in 'Buffer A' (500 mM NaCl, 20 mM Tris pH 7.5, 20 mM imidazole, and Roche cOmplete™ EDTA-free Protease Inhibitor Cocktail dissolved to a final concentration of 1x). Resuspended cells were flash frozen in liquid nitrogen for long-term storage at -80°C. Thawed cells were lysed using a cell disrupter, and cell debris was pelleted by centrifugation. Cleared lysate was filtered using a 0.45 µm filter. All subsequent steps occurred at room temperature unless noted. Lysate was loaded and recirculated over Ni-affinity beads (Thermo-Scientific) for 2.5 hours, which had been pre-equilibrated with Buffer A. Beads were subsequently washed with 5 column volumes of Buffer A without protease inhibitors.
The protein-bound beads were transferred to a 50 mL conical tube containing 1.25 mg of purified PreScission protease, which was incubated overnight on a nutator. The following day, the resin was transferred to a gravity flow column, and the flow-through was collected, alongside a 1 column volume wash of the resin with Buffer A. The flow-through was then concentrated and injected onto a HiPrep 26/60 Sephacryl S200-HR gel filtration column that had been pre-equilibrated with gel filtration buffer (250 mM NaCl, 20 mM Tris pH 7.5) at 4°C. Fractions corresponding to the TerS peak were pooled, concentrated to 17 mg/mL, and flash frozen in liquid nitrogen for storage at -80°C. TerL P74-26 was expressed and purified as previously described (Hilbert et al., 2015). Size exclusion chromatography-multi-angle light scattering (SEC-MALS): SEC-MALS was performed at room temperature using a 1260 Infinity HPLC system (Agilent), a Dawn Helios-II multi-angle light scattering detector (Wyatt Technology), and an Optilab T-rEX differential refractive index detector (Wyatt Technology). Detectors were aligned, corrected for band broadening, and photodiodes were normalized using a BSA standard. Samples were diluted to 1 mg/mL with gel filtration buffer and filtered through a 0.22 µm filter. 50 µL of sample was injected onto a WTC-030S5 size exclusion column with a guard column (Wyatt Technology) that had been pre-equilibrated overnight with gel filtration buffer. Data analysis was performed with Astra 6 software (Wyatt Technology). DNA binding and enzymatic assays: TerS DNA binding was performed using the P74-26 gp83 DNA sequence, which was PCR amplified from the P74-26 phage genome. P74-26 forward primer: ATGAGCGTGAGTTTTAGGGACAGGG; P74-26 reverse primer: CTAGGTCTTAGGCGTTTCATCCGCC. Oligonucleotides were purchased from IDT. To assess DNA binding, TerS was dialyzed into a buffer containing 25 mM potassium glutamate and 10 mM Tris pH 7.5. TerS was then incubated for 30 minutes with 50 ng of the P74-26 gp83 gene in an 8 µL volume sample. After incubation, 2 µL of 5x Orange G loading dye was added to the samples, yielding the final protein concentration indicated on the gel. Samples were run on a 1% (wt/vol) TAE-agarose gel with a 1:10,000 dilution of GelRed dye (Phenix Research) for 90 minutes at 80 volts. ATPase and nuclease experiments were performed as previously described (Hilbert et al., 2015, 2017). Electron Microscopy: Negative Stain EM 3.5 µL of 900 nM TerS (monomer) was applied to a glow-discharged carbon-coated 400 mesh copper EM grid and incubated for 30 seconds. Sample was blotted off, and the grid was washed with water and blotted two times. The grid was stained with 1% uranyl acetate and imaged using a 120 kV Philips CM-120 electron microscope with a Gatan Orius SC1000 detector. Relion 2.0 was used for 2D classification (Kimanius et al., 2016). Cryo-EM sample preparation For dataset one, 400 mesh 2/2 Holey Carbon C-Flat grids (Protochips) were incubated with ethyl acetate until dry. Grids were glow-discharged for 60 seconds at 20 mA (negative polarity) with a Pelco easiGlow glow discharge system (Pelco). Samples were prepared to yield a final concentration of 19.5 µM TerS (nonamer), 150 mM NaCl, 20 mM Tris (pH 7.5), and 0.015% amphipol A8-35. For dataset two, the same sample was applied to a 200 mesh 2/2 UltrAuFoil Holey Gold grid (Quantifoil) that was glow-discharged for 60 seconds at 20 mA.
For both datasets, 3 µL of sample was applied to the grid at 10°C and 95% humidity in a Vitrobot Mark IV (FEI). Samples were blotted for 4 seconds with a blot force of 5 after a 10 second wait time. Samples were then vitrified by plunging into liquid ethane and were stored in liquid nitrogen until data collection. Cryo-EM data collection Micrographs were collected on a Titan Krios electron microscope (FEI) at 300 kV fitted with a K2 Summit direct electron detector (Gatan). Images were collected at 130,000x in super-resolution mode with a pixel size of 0.529 Å/pixel and a total dose of 50 e-/Å2 per micrograph. Micrographs were collected with a target defocus range of -1.4 to -2.6 µm for both datasets one and two. Dataset one was collected with one shot focused on the center of the hole. For dataset two, the first 549 images were collected with four shots per hole at 0° tilt, and the remaining 1,077 images were collected at a 30° tilt with two shots per hole. After combining datasets 1 and 2, a total of 2,822 micrographs were collected. Data Processing Micrograph frames were aligned using the Align Frames module in IMOD with 2x binning, resulting in a final pixel size of 1.059 Å/pixel. Initial CTF estimation was performed using CTFFIND (Rohou and Grigorieff, 2015) within the cisTEM suite. Particles were picked with a characteristic radius of 40 Å using 'Find Particles' in the cisTEM software package (Grant et al., 2018). Particles were then extracted with a largest dimension of 120 Å and a box size of 256 pixels. Selected particles were subjected to 7 rounds of 2D classification using cisTEM. Each round of 2D classification consisted of 20 iterative cycles with 50 to 100 classes. After each round, the classes were examined and noisy classes were excluded before subjection to the next round of classification. The final round of 2D classification yielded 295,395 particles, which were exported into Relion format. Ab-initio 3D reconstruction was performed with cisTEM using a particle subset selected for an even distribution of views from the 2D classification images. Ab-initio 3D reconstruction was performed using 2 starts with 40 cycles per start. CTF correction was re-estimated using Gctf (Zhang, 2016) and the particles were re-extracted in Relion 3.0 (Zivanov et al., 2018). 3D classification was done in Relion 3.0 using C1 symmetry into 6 classes for 60 iterations with a mask diameter of 140 Å. For the first asymmetric structure, classes 1 and 2 were combined (152,315 particles) for 3D refinement in Relion 3.0 using C1 symmetry. For the symmetric reconstruction, class 1 (84,860 particles) was sub-selected for 3D refinement in Relion 3.0 using C9 symmetry. For the second asymmetric structure, class 6 (86,969 particles) was sub-selected for asymmetric refinement using C1 symmetry. CTF refinement and subsequent post-processing were performed after 3D refinement for all symmetric and asymmetric reconstructions in Relion 3.0. Resolution was calculated using gold-standard FSC curve calculation and a cutoff of 0.143. Model Building To build the atomic models of the TerS structure, the helix-turn-helix motifs and oligomerization domains of the g20c crystal structure (PDB code 4XVN) were rigid body fit into the cryo-EM density for each subunit separately using the Chimera 'Fit to map' command (Pettersen et al., 2004). Each chain in the symmetric and asymmetric models consisted of residues 1 to 137.
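The gold-standard resolution quoted above is read off where the half-map FSC curve first falls below 0.143. A minimal sketch of that lookup is shown below; the frequency and FSC arrays are hypothetical stand-ins for the values a post-processing run writes out, with linear interpolation between the samples that bracket the threshold.

```python
import numpy as np

def fsc_resolution(freq, fsc, threshold=0.143):
    """Return resolution (Å) where the FSC curve first crosses the threshold.

    freq: spatial frequencies in 1/Å (ascending); fsc: matching FSC values.
    Linearly interpolates between the two samples that bracket the crossing.
    """
    below = np.where(fsc < threshold)[0]
    if len(below) == 0:
        return 1.0 / freq[-1]  # never crosses: limited by the sampled range
    i = max(below[0], 1)
    f0, f1 = freq[i - 1], freq[i]
    c0, c1 = fsc[i - 1], fsc[i]
    # Crossing frequency by linear interpolation between the two samples.
    f_cross = f0 + (c0 - threshold) * (f1 - f0) / (c0 - c1)
    return 1.0 / f_cross

# Hypothetical curve: FSC decaying with frequency, crossing 0.143 near 1/(3.8 Å).
freq = np.linspace(0.01, 0.47, 50)
fsc = np.exp(-((freq / 0.19) ** 2))
print(f"estimated resolution: {fsc_resolution(freq, fsc):.1f} Å")
```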
For the symmetric structure, one chain was manually refined in Coot (Emsley et al., 2010), and 9-fold symmetry was repopulated using PyMol. For the class 6 asymmetric structure, the symmetric model was fit into the density and each helix-turn-helix motif and oligomerization domain was separately fit in Coot using the 'rigid body refine' tool. Model refinement was performed in Phenix using the real-space refinement tool with three cycles of refinement per round. Rotamer restraints, Ramachandran restraints, and NCS restraints were used during refinement. Group ADP values were calculated on a per-residue basis. Electrostatic maps were generated using the PyMol APBS plugin. Author Contributions and Notes JAH and BAK designed research; JAH and BJH performed research; JAH, BJH, CG, NPS, and BAK analyzed data; and JAH and BAK wrote the paper. CG and NPS provided valuable insight into optimization of cryo samples and reconstruction refinement. The authors declare no conflict of interest. This article contains supporting information online.
2019-11-28T12:40:46.196Z
2019-11-21T00:00:00.000
{ "year": 2019, "sha1": "20f853f5f2aa4812b6df5c0813c33aed9dcdaa2b", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/article/S0021925817487734/pdf", "oa_status": "GREEN", "pdf_src": "BioRxiv", "pdf_hash": "4950f1380d3d7d14726e08e2fb63fe6c3589b154", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry", "Biology" ] }
253343336
pes2o/s2orc
v3-fos-license
Artificial Intelligence in Weaning Clinical Practice: Finding New Rules in Ventilator Support Care INTRODUCTION Ventilatory support is used for patients with post-operative, acute, or chronic respiratory failure to support their breathing and prolong their lives by helping them survive critical situations. However, in approximately 5% to 13% of cases, when the ventilatory support system is used for over 72 hours, patients may become more dependent on the ventilator (Banfi et al., 2019; Knebel et al., 1994). According to clinical results, patients who are overly dependent on the ventilatory support system may develop acute respiratory distress syndrome (ARDS), multiple organ failure (MOF), and ventilator-induced lung injury (VILI), all of which lead to a higher mortality rate (Gando et al., 2020; Gatto, Fluck, & Nieman, 2004; Slutsky, 2005; Spence et al., 2006). Prior literature also showed that the use of ventilatory support for over 48 hours may increase the risk of ventilator-associated pneumonia (VAP), which has a fatality rate of 43% (Chastre et al., 2003; Osman et al., 2020). Eliminating patients' reliance on ventilatory support systems as soon as possible can reduce family members' suffering associated with patient care, national healthcare expenses, and the workloads of caregivers. Therefore, evaluating the proper time to remove the ventilatory support system is an important issue in critical care medicine. Physicians usually apply their expertise and experience to estimate when to remove a patient's respiratory support system. Such a decision-making mode may lead to differences in medical care quality and the risk of medical misconduct (Mello, Frakes, Blumenkranz, & Studdert, 2020). To avoid this condition, clinical practice guidelines (CPGs) and clinical protocols are developed to provide physicians with references for medical care. These guidelines can be referenced in the clinical decision-making process to avoid subjectivity in decision making and provide consistent and appropriate medical care to patients (Correa et al., 2020; De Clercq, Kaiser, & Hasman, 2008; Ghai, Subramanian, Jan, Loganathan, & Doumouchtsis, 2021). With regard to clinical protocols for the removal of ventilatory support, Randolph, Green, Peacey, and Rogers (2000) pointed out that the current agreement rate is only 66%, and since the protocols are not updated regularly, they cannot satisfy actual requirements. Removal of ventilatory support determined by individual decisions may be unsuccessful due to differences in physicians' experience and the support of medical institutions. The procedure may also deviate from the standard due to colleagues' suggestions and limited medical resources (Girard & Ely, 2008; Kydonaki, Huby, Tocher, & Aitken, 2016; Wennberg, 2002). As such, it is important to regularly update clinical protocols on the removal of the ventilatory support system so as to increase compliance and reduce caregivers' workload (Gómez et al., 2020; Pereira Lima Silva et al., 2020).
In clinical decision support systems based on the Arden Syntax, medical knowledge is mainly expressed via if-then Medical Logic Modules (MLMs) (Papadopoulos, Soflano, Chaudy, Adejo, & Connolly, 2022). Rule-based technology provides non-ambiguous and interpretable methods that are widely used in the evaluation of clinical protocols (Gomoi & Stoicu-Tivadar, 2010; Mani, Shankle, Dick, & Pazzani, 1999; Musen, Middleton, & Greenes, 2021; Papadopoulos et al., 2022). Off-line evaluation of clinical protocols for ventilatory support removal permits safe, well-structured, and automated analyses. This study used C4.5 decision trees to compare current ventilatory support protocols against 1,014 weaning cases collected in Taiwan to develop a new comprehensive evaluation for ventilatory support removal. METHODOLOGY In this study, 1,014 medical cases were collected and used for mining clinical weaning rules so that the existing weaning protocols could be improved. We used the C4.5 decision tree algorithm for analysis. The procedures were as follows: Step 1: Collecting the meta-data of medical care activities related to the ventilator support system. Based on a literature review and interviews with medical personnel, the patients' data were collected from the following five perspectives when removing the ventilator support system: demographic characteristics, physiological condition, oxygenation and gas exchange conditions, blood, and psychological state. Step 2: Discretizing the collected numeric data attributes for rule mining and rule interpretation. Most attributes related to patients' physiological conditions, oxygenation and gas exchange conditions, and blood test results are numeric attributes. Although the C4.5 decision tree is able to handle numerical attributes, the cut-point values generated by C4.5 may not be meaningful to clinicians in actual practice. Therefore, the numeric attributes were discretized based on the clinical staff's common practice for convenience of rule interpretation when significant clinical rules were extracted from the clinical cases through the C4.5 decision tree. For example, physicians could judge whether a patient meets the oxygenation criterion based on the following two rules: the oxygen concentration setting value of the ventilator is lower than 50%, and the positive end-expiratory pressure (PEEP) is below 5 cm H2O (a minimal rule sketch appears after this list). Thus, the values of 50% O2 and 5 cm H2O were considered appropriate discretization cut-points for the attributes FiO2 and PEEP. Setting appropriate parameters is also important for machine-learning algorithms to benefit from experience derived from the norms of clinical practice; this study adopted the machine-learning platform Weka as the mining tool and tuned two of its parameters (described below). Step 3: Interpreting the extracted rules and identifying the characteristics and differences of the rules relative to the existing weaning protocol/plan.
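As referenced in Step 2 above, the following minimal Python sketch encodes the oxygenation example as an if-then rule in the spirit of an Arden Syntax MLM. The function names and record keys are hypothetical; only the cut-points (FiO2 below 50%, PEEP below 5 cm H2O) come from the text.

```python
# Illustrative if-then rule mirroring the oxygenation example in Step 2.
# Function names and record keys are hypothetical; only the cut-points
# (FiO2 < 50%, PEEP < 5 cm H2O) are taken from the text.

def oxygenation_criterion_met(patient: dict) -> bool:
    """Return True if the ventilator settings meet the oxygenation rule."""
    return patient["FiO2"] < 50.0 and patient["PEEP"] < 5.0

# The same cut-point doubles as a discretization boundary for rule mining.
def discretize_fio2(fio2: float) -> str:
    return "L1" if fio2 < 50.0 else "L2"

print(oxygenation_criterion_met({"FiO2": 40.0, "PEEP": 4.0}))  # True
print(oxygenation_criterion_met({"FiO2": 55.0, "PEEP": 4.0}))  # False
```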
Data Collection The participants were from the medical and surgical intensive care units of a teaching hospital located in southern Taiwan who had respiratory failure and required ventilatory support. The study received IRB approval from the Chi-Mei Medical Center before data collection. All collected data were re-coded to unlink patient identities, retaining only descriptive information such as physiological condition and state during removal, oxygenation and gas exchange function before removal, and blood analysis results. All of the participants met the following two requirements: (1) they were greater than 20 years of age and (2) they used ventilator support due to respiratory failure. Cases were excluded under the following conditions: (1) those who required intubation because of receiving anesthesia for surgery; (2) those who had used a respirator for more than 7 days; (3) those using a ventilator after tracheostomy; and (4) those with an end-stage malignant tumor. Based on a literature review and consultation with 3 medical experts, we adopted 48 hours as the criterion for successful ventilatory support removal (Ashutosh et al., 1991; Grieco et al., 2021; Griffiths et al., 2019). A patient was viewed as an unsuccessful removal if he/she required intubation and continued ventilatory support within 48 hours after being weaned from the ventilator. We collected 1,021 cases which met the mentioned criteria from medical case reports from January to December 2012. After deleting the cases with incomplete data, we had 1,014 cases for this study, of which 901 were successful cases and 113 were unsuccessful cases of ventilatory support removal; the demographic characteristics of the cases are shown in Table 1. Variable Categorization There were eight categorical variables (i.e., sex, diagnosis, cough function, sedation, mode, irritability, cold sweat, and weaning result) and 35 numerical variables, as shown in Table 1. To interpret and apply the generated decision tree rules, the numeric values of the 35 attributes were discretized and converted into categorical variables. Twenty-five of the 35 attributes were converted into three-level (i.e., L1, L2, and L3) categorical variables. L1 indicated the attribute value was lower than the clinical norm or average value, L2 indicated the attribute had the clinical norm or average value, and L3 indicated the attribute value was higher than the clinical norm or average value. For example, the attribute Ca was recoded as: 1 for low value, 2 for normal value, and 3 for high value. Six attributes (i.e., BMI, BH, ICU_day, RR, Hb, and Sugar) were converted into four-level categorical variables. For example, the normal value of the variable Hb (hemoglobin) is 13-17 g/dl, but the value 10 g/dl is also often used in clinical judgement. Thus, we converted Hb into a four-level categorical variable with the following level meanings: L1 was a low value (Hb<10), L2 was an acceptable value (Hb=10-13), L3 was a normal value (Hb=13-17), and L4 was a high value (Hb>17). For convenience of interpretation, 3 variables (age, urine output, and body weight) were converted into six-level categorical variables, while the attribute RSI was converted into a two-level categorical variable. Regardless of the number of levels in a categorized variable, a higher level represented a higher numeric value.
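The four-level recoding of Hb described above is interval binning, a minimal pandas sketch of which is shown below. The sample values are hypothetical; only the boundaries (10, 13, and 17 g/dl) and the L1-L4 labels come from the text, and the assignment of exact boundary values to the upper bin is our assumption, since the text leaves it unspecified.

```python
import pandas as pd

# Recode hemoglobin (g/dl) into the four clinical levels described above:
# L1: Hb < 10, L2: 10 <= Hb < 13, L3: 13 <= Hb < 17, L4: Hb >= 17.
hb = pd.Series([8.2, 11.5, 14.0, 18.3, 12.9], name="Hb")  # hypothetical values
hb_level = pd.cut(
    hb,
    bins=[float("-inf"), 10, 13, 17, float("inf")],
    labels=["L1", "L2", "L3", "L4"],
    right=False,  # left-closed bins, so 10, 13, 17 fall into the upper level
)
print(pd.concat([hb, hb_level.rename("Hb_level")], axis=1))
```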
Parameter Tuning and Rule Generating We adopted the C4.5 algorithm of the machine learning platform Weka 3.8.1 (Ng, Ling, Chew, & Lau, 2021) to build a C4.5 decision tree for extracting the clinical experience on ventilator support removal. To build an accurate decision tree model, we had to tune two parameters, C and M, for the C4.5 decision tree. The C parameter is a confidence factor for tree pruning and subtree raising during tree generation, and the M parameter is the minimum number of objects in the leaf nodes. The default values for C and M were 0.25 and 2, respectively. This study implemented a 10-fold cross-validation procedure. Three performance metrics widely used in clinical studies, namely sensitivity, specificity, and accuracy, were used to evaluate the performance of the classification tree for developing an effective diagnostic model. Sensitivity measured the predictive performance for the successful removal of ventilatory support. Specificity measured the predictive performance for the unsuccessful removal of ventilatory support. Accuracy measured the extent to which the prediction for each case was accurate. To develop an accurate model, the C value was set from 0.15 to 0.35 in increments of 0.05, and the M value was set from 1 to 15 in increments of 2. The model evaluation results indicated that sensitivity ranged between 93.0% and 98.9%, specificity ranged between 92.6% and 98.9%, and accuracy ranged between 92.5% and 99.0% while changing C and M according to the parameter tuning procedure described above. The classification model reached its best performance when C was set to 0.35 and M was set to 7. The performance of the decision tree model is shown in Table 2. We employed the tuned best parameter values to generate a decision tree and to extract clinical weaning practice rules. The generated decision tree is shown in Figure 1. The decision tree was constructed based on the following eight attributes: ICU_day, TV_hour, Cold sweat, Cough Function, Sedation, APACHE II, MIP, and RSI. The decision tree had a six-level structure. Tracking the decision tree from the root node to the leaf nodes, we extracted 17 rules: 10 rules describing the practice of successful ventilator removal cases and 7 rules describing the unsuccessful ones. The coding schema of variable categorization for the tree variables is shown in Table 3. In Figure 1, each tree leaf node is annotated as (S/E), where S is the total number of classified cases and E is the number of misclassified cases. For example, for the extracted rule ICU_day=2 → Y (556.0/8.0), S was 556 and E was 8.0. We adopted the rule criteria of "Support" and "Confidence" from the association rule technique to evaluate the significance of each decision tree rule. The support value of a tree rule is S/N (where N is the total number of cases in the tree). The confidence value of a rule is (S - E)/S. The support of the mentioned rule (ICU_day=2 → Y) was calculated as 556/1014 = 0.548, and its confidence was calculated as (556-8)/556 = 0.986. The support and confidence values for each extracted tree rule are shown in Table 4. With the assistance of the clinical staff and the rule evaluation results, we extracted rules No. 1, 3, and 5 for the successful weaning cases and rule No. 13 for the failure case.
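The support and confidence arithmetic just defined is easy to reproduce. Below is a minimal Python sketch (the function names are ours, not from the study's software) implementing support = S/N and confidence = (S - E)/S, checked against the worked example for the rule ICU_day=2 → Y.

```python
# Support and confidence of a decision-tree leaf rule, as defined in the text:
# support = S / N, confidence = (S - E) / S, where S is the number of cases
# reaching the leaf, E the misclassified cases, and N all cases in the tree.

def rule_support(S: float, N: float) -> float:
    return S / N

def rule_confidence(S: float, E: float) -> float:
    return (S - E) / S

# Worked example from the text: ICU_day=2 -> Y, leaf annotated (556.0/8.0).
N = 1014
S, E = 556.0, 8.0
print(f"support    = {rule_support(S, N):.3f}")     # 0.548
print(f"confidence = {rule_confidence(S, E):.3f}")  # 0.986
```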
The confidence of the extracted rules was greater than 0.7. In the study, this hospital had a flowchart weaning protocol for removing ventilatory support. However, the flowchart-like protocol could not fully meet the immediate needs of on-site clinical care because it was too complicated. Therefore, the medical staff developed a lean and simple weaning plan as follows to monitor patients' status and improve weaning-care performance: 1. Spontaneous tidal volume (TV) = 5-8 ml/kg. Besides the original simple weaning plan, the extracted decision rules were used to improve weaning care. The study found that rules 1 and 2, with support values greater than 0.2, were the most common successful rules, and these extracted decision tree rules could provide important clinical assessment criteria (i.e., duration of ICU_day, Cold_Sweat, Cough_function, etc.). Comparing successful rule 2 and failure rule 4, we found that cough function and APACHE II were important factors for successful weaning. Comparing rule 3 and rule 4, cough function and Sedation were two important factors for determining patients' weaning condition. DISCUSSION AND CONCLUSIONS Currently used clinical protocols may not be able to provide precise medical recommendations because they cannot be regularly updated; thus, more precise recommendations may be delayed once an issue is discovered in an existing protocol. This becomes a major issue for junior doctors, who might keep using an old clinical protocol to evaluate the rules for removing ventilatory support. For even higher precision, medical records should also be integrated into personal decision-making experience. The results of this study provide a model for updating existing clinical protocols. Updating existing clinical protocols based on actual medical cases can increase compliance with the new protocol, reduce debates among physicians, and provide a more stable quality of care. That is why there is a growing trend toward the development of automated clinical protocols based on medical record databases and a logical architecture (López-Espuela et al., 2022).
This study adopted the C4.5 machine learning method for analysing medical records related to the removal of ventilatory support. The study aimed to identify statistically significant correlations among the removal-related indices mentioned in the cases. Awareness of these relationships could help care providers judge the correct time for weaning patients from the ventilator, reduce patients' dependence on ventilatory support, and increase patients' quality of life. In addition, medical institutions can allocate medical resources more reasonably. The main findings of this study were as follows: First, the large number of numeric variables derived from the practical data may cause difficulties in their interpretation in both research and clinical practice. Thus, these variables were discretized based on clinical definitions in order to provide more easily understood rules that fit clinical needs. By using the C4.5 algorithm for decision tree generation, a classification model was constructed which provided simplified rules, allowing improved applicability and timeliness in clinical practice. Second, the C4.5-generated rules for the removal of ventilatory support included eight attributes: duration of ICU stay (in days), duration of ventilatory support (in hours), severity of disease according to APACHE II, sedation, cough function, RSBI, MIP, and cold sweat. The rules generated in this study could be applied by care providers when evaluating the appropriate time for weaning patients who had stayed in the ICU for 8 to 14 days from ventilatory support. Application of these rules could help decrease the time on mechanical ventilation and increase the probability of successful removal. LIMITATIONS AND FURTHER STUDY Limitations of this study and suggestions for future research are described as follows: First, this study collected 1,014 clinical records from a tertiary teaching hospital in southern Taiwan. Diagnoses for each case were complex, and it was difficult to determine whether all relevant attributes were considered. Second, the 43 variables collected were based on the assumption of their relation to the removal of ventilatory support in past literature. Further analysis using different classification approaches could be conducted to identify potential or extraneous variables in order to improve the efficiency and quality of the classification model. Third, this study only discussed two variables regarding the psychological state of patients weaned from ventilatory support, namely irritability and cold sweat, and did not provide a complete description of patients' psychological states. It is suggested that future studies use an objective psychological scale and evaluate patients' physiological reactions to create an objective reference for predicting procedural outcomes. Fourth, indicators such as minute ventilation volume (MVV) and tidal volume (Vt) were not considered in this study; thus, their relation to the removal of ventilatory support could not be confirmed. In the future, more accurate rules for the removal of ventilatory support can be generated with larger datasets. Compliance with Ethical Standards 1. Conflict of Interest Statement: All of the authors declare that they have no conflict of interest. 2. Role of Funding Source: This work was supported by the Ministry of Science and Technology [Grant no. NSC102-2410-H-218-018]
3. Ethical Approval: All subjects gave their informed consent for inclusion before they participated in the study. The study was conducted in accordance with the Declaration of Helsinki. 4. Informed Consent: Written informed consent was obtained from all subjects before the study. 5. Authors' Contributions: Tsang-Hsiang Cheng: Conceptualization, Methodology, Formal analysis, and Review and editing; Shih-Chih Chen: Methodology, Data curation, and Review and editing; Mei-Lan Su: Data curation; Mai-Lun Chiu: Project administration, Methodology, Formal analysis, and Review and editing. All authors have read and agreed to the published version of the manuscript. Table 1. Descriptions and Statistics of Variables Table 2. Performance Assessment of a Decision Tree Model with M=7 Table 3. Coding Schema of Variable Categorization for the Tree Variables Table 4. Support and Confidence of Each Extracted Rule
2022-11-05T16:00:16.822Z
2022-10-31T00:00:00.000
{ "year": 2022, "sha1": "d9165ec867b646881b7da832e34584a74bbb36d3", "oa_license": "CCBY", "oa_url": "https://journal.formosapublisher.org/index.php/ijba/article/download/1252/1318", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "fe5e4a97f1299d10104de0963504d1b869785260", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [] }
225118582
pes2o/s2orc
v3-fos-license
Association of Rs61764370 polymorphism within let-7 microRNA-binding site with lung cancer in Iranian population Introduction Polymorphisms within miRNA binding sites are associated with miRNA function. The aim of this study was to investigate the relationship between the rs61764370 polymorphism within the let-7 miRNA binding site in the KRAS gene and the risk of lung cancer in the Iranian population. Methods This case-control study was conducted with 100 lung cancer patients and 100 healthy persons. The rs61764370 polymorphism was analyzed using the PCR-RFLP technique and direct sequencing. Results We found a significant relationship between the rs61764370 (T/G) polymorphism and lung cancer risk; the GT genotype (OR: 6.25; 95% CI = 2.605-15.00; P = 0.000) and G allele (OR: 5.25; 95% CI = 2.259-12.208; P = 0.000) were significantly associated with an increased risk of lung cancer. Conclusion According to our findings, there is a significant relationship between the KRAS rs61764370 polymorphism and lung cancer risk in the Iranian population, and this polymorphism may be used as a marker in the detection of lung cancer in the future. Introduction Lung cancer is one of the most common cancers in the world, and more than 80% of patients with lung cancer die within the first five years after diagnosis. The two major types of lung cancer are small cell lung cancer (SCLC), which accounts for about 20% of patients with lung cancer, and non-small cell lung cancer (NSCLC), which accounts for the other 80% of patients 1,2. Environmental factors and genetic factors are the two main risk factors for lung cancer. Identification of molecular markers for early diagnosis of this disease can be effective in decreasing the mortality rate and improving patient treatment 3. The KRAS gene is one of the most important human oncogenes and plays an important role in MAPK signaling. KRAS mutations, particularly mutations in the regulatory region of this gene, can increase the expression of this gene and are associated with various human cancers including lung cancer 4,5. MicroRNAs are a group of non-coding RNAs of about 18-25 nucleotides in length, which play important roles in gene regulation; they can change the expression of the target gene by pairing with complementary regions in the mRNA of the target gene 6. MicroRNAs regulate about 60% of all coding genes and play important roles in many biological processes including proliferation, differentiation, development, cell death, and cancer 7. Single-nucleotide polymorphisms (SNPs) located within miRNA binding sites in KRAS regulatory regions can affect KRAS gene expression 8. A single-nucleotide polymorphism (T/G), rs61764370, within the binding site of the let-7 miRNA in the KRAS 3'UTR region causes changes in the expression of this gene 9,10. A number of recent studies have shown that the rs61764370 polymorphism is associated with an increased risk of different cancers, including ovarian cancer, cervical cancer, breast cancer, colorectal cancer, and lung cancer [11][12][13][14][15]. In a study on tissue and blood samples in Italy and the United States, Ratner et al. reported a significant relationship between rs61764370 and ovarian cancer; they described this polymorphism as a genetic marker of ovarian cancer 16. On the other hand, some studies reported no association between this polymorphism and cancer. In a study on the Norwegian population, Kjersem et al. reported no significant relationship between the rs61764370 polymorphism and colorectal cancer 17. Luong et al. did not find a significant relationship between the rs61764370 polymorphism and uterine cancer in Australia 18.
Considering the importance of the KRAS gene and let-7 in the development of cancers, we decided to investigate the association between this polymorphism and lung cancer risk in the Iranian population. Materials and methods: Sample collection This case-control study included 100 patients with lung cancer and 100 healthy individuals. The patient group included patients with lung cancer who had been referred to Maseeh Daneshvari Hospital in Tehran from 2017 to 2018. The control group consisted of healthy volunteers who had no known systemic disease and who were referred to the hospital for periodic examinations; a pathologist confirmed the lung cancer diagnosis for all patients. All subjects filled out information forms that included age, occupation, family history, smoking, and alcohol use. Then, a 4 cc venous blood sample was obtained from each person. The blood samples were maintained at -20°C until DNA extraction. Genomic DNA was extracted from 200 μL of peripheral blood using a standard DNA isolation kit (DNPTM, CinnaGen Co, Iran) according to the manufacturer's protocol and stored at -70°C for future use. rs61764370 Genotype Determination The PCR-RFLP method was used for rs61764370 genotyping. The forward and reverse primers were 5'-GTGTCAGAGTCTCGCTCTTGTC-3' and 5'-AGACCACATAGCACTACCTAAGGA-3', respectively. To conduct the PCR reaction in a total volume of 20 μL, master mix (10 μL), forward primer (0.5 μL), reverse primer (0.5 μL), ddH2O (8 μL), and 1 μL of genomic DNA were mixed together. Then, the PCR reaction was performed on the thermocycler with a denaturation stage (95°C, an initial 15 minutes followed by 10 seconds per cycle), an annealing stage (62°C for 32 seconds), and an extension stage (72°C for 34 seconds) over 32 cycles. To investigate the KRAS gene polymorphism (rs61764370), the restriction enzyme Hinf I was used according to the manufacturer's protocol. The results of the Hinf I enzyme digestion were confirmed by direct sequencing (Fig. 1). Statistical analysis Data were analyzed using SPSS version 23. Chi-square and t-tests were used to compare the data between the two groups. P<0.05 was considered significant in all calculations. Clinical features of the subjects The total number of patients (patient group) and healthy subjects (control group) was 100 each. The characteristics of the study population are presented in Table 1. The average ages of the patient group and control group were 52 years (range: 35-77 years) and 61 years (range: 34-77 years), respectively. There was no significant difference between the control group and the patient group in terms of age, sex, or cigarette smoking. Discussion According to the findings of this study, there is a significant relationship between the rs61764370 polymorphism and the risk of lung cancer; the GT genotype and the G allele were associated with an increased risk. In previous studies, it has been shown that low expression and decreased levels of let-7 are associated with different tumors, including lung cancer, and the reduction of let-7 expression has been considered a risk factor for lung cancer. Hollestelle et al. reported that the rs61764370 G allele in the KRAS gene was associated with an increased risk of breast cancer 22. These findings are consistent with our findings for the Iranian population. On the other hand, Pharoah et al. did not find any association between the rs61764370 genotypes and the risk of ovarian cancer 23.
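The odds ratios and 95% confidence intervals reported above follow the standard 2×2-table calculation with a Wald interval on the log scale. The sketch below shows that arithmetic; the genotype counts are hypothetical placeholders chosen only for illustration, since the paper reports the resulting OR and CI rather than the raw counts.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI for a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls.
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical genotype counts (GT carriers vs TT) for illustration only;
# the study reports OR = 6.25 (95% CI 2.605-15.00) for the GT genotype.
cases_gt, cases_tt = 40, 60
controls_gt, controls_tt = 10, 90
or_, lo, hi = odds_ratio_ci(cases_gt, cases_tt, controls_gt, controls_tt)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```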
In a meta-analysis, Zhang et al. reported no significant relationship between the rs61764370 polymorphism and lung cancer in the Caucasian population 24. In previous studies, it has been shown that the rs61764370 KRAS variant interferes with the binding of the let-7 miRNA and increases the expression of KRAS 20. This polymorphism was first introduced as a biomarker for lung cancer in the American population 21. Given the findings of this study and previous studies, and the function of the let-7 miRNA as a tumor suppressor miRNA, it seems that the presence of the G allele at the let-7 binding site blocks let-7 binding in this region and results in an increase in the expression of the KRAS gene, which can increase the risk of lung cancer. Conclusion We found a significant relationship between the rs61764370 G allele and lung cancer risk in the Iranian population, and it seems that this polymorphism can be used as a biomarker for lung cancer in the future; however, further studies with more samples are recommended to confirm the findings of this study.
2020-10-28T19:10:55.012Z
2020-09-01T00:00:00.000
{ "year": 2020, "sha1": "019b059878e584fbba39d90dbb1e2c832435234b", "oa_license": "CCBY", "oa_url": "https://www.ajol.info/index.php/ahs/article/download/200337/188935", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c13d8ab07f90b8dc2f588cedd271a901d5eda5b8", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
62426092
pes2o/s2orc
v3-fos-license
A Study on Sonet and SDH with their Defects in Optical Network Data transmission Introduction In today's fast-moving world, with countless telephones in use and a growing number of internet users, network providers find it challenging to efficiently manage the increase in telephone traffic. In this growing market, as telephone connections and users increase, the technologies developed over the past 60 years must be adapted to the data-provision market and made as economical as possible. This process resulted in the introduction of the Frequency Division Multiplexing (FDM) system, in which every telephone channel is modulated with a unique carrier frequency, shifted into a different frequency range, and transmitted over the telephone channel. Innovations in semiconductor circuits for communication lines then increased the transmission capacity of telephone lines with Pulse Code Modulation (PCM) in the 1960s. The PCM method first samples the analog signal, which has a 3.1 kHz bandwidth. After quantization and encoding, the signals are transmitted at a 64 kbps bit rate. A transmission rate of 2048 kbps is achieved with 30 coded channels, all collected into a frame along with all the necessary signalling information. 2048 kbps is considered the primary rate and is followed across the world, except in countries like the USA, Canada, and Japan, where the primary rate is 1544 kbps. With greater demand for bandwidth, more stages of multiplexing are required worldwide. SDH is an ideal network, especially for network providers, with efficient delivery and an economical network management system that can easily be adapted to accommodate the demands on bandwidth for applications and services [1][2][3]. In SONET, optical signals are generated by laser light or by Light Emitting Diodes (LEDs). SONET was established and introduced in the United States by the ANSI T1X1.5 committee. ANSI's work started in 1985 with the help of CCITT (now called the ITU), which initiated standardization in 1986. The United States sought a data rate close to 50 Mbps, while the Europeans wanted data rates around 150 Mbps. The US data rates were finally issued as a subset of the ITU specification, which was called the Synchronous Digital Hierarchy (SDH). SONET and SDH are variant terms that are often used to describe the same features and functions. This may lead to confusion about their differences; with some exceptions, SDH can be considered a superset of SONET, and both are defined as protocols. Figure 1 shows where SONET and SDH are placed in core networking areas and how data is transmitted over the optical fiber 1,2. Both protocols are fully multiplexed, designed such that the header describes the encapsulated data and each payload is allowed its own rate within the frame. Multiplexing needs a physical medium to transfer or carry the various signals. A SONET is a set of links between endpoints. In SONET, when several calls share one line, time division multiplexing (TDM) concepts are used. Telephone lines are designed to carry about 1.5 Mbps of data over a single line. By SONET's design, no priority-based data transfer is used. SONET and SDH have their own frame structures for splitting data while transferring it from one end to another. SONET levels are called Synchronous Transport Signal levels, and SDH levels are called Synchronous Transport Module levels. The rates over cable for SONET and SDH are shown in Figure 2.
Technically there is no difference between SONET and SDH; the figure is displayed to show the data rates of the two technologies. PDH Plesiochronous Digital Hierarchy (PDH) is a transmission network that was not designed for synchronous operation: signals entering a digital multiplexer may not be synchronous even though their bit rates are similar, because they originate from distinct crystal oscillators and therefore vary. High-order digital multiplexing was implemented for this situation and is called the first generation of high-order digital multiplexing. There are three standards available for plesiochronous digital multiplexing, situated in Europe, North America, and Japan. PDH could not satisfy customers in terms of bandwidth, quality, and related requirements, so it failed network providers. SDH Since internet connections and cell phones have increased, there have been frequent calls for more bandwidth, reliable connections, and high-quality services. SDH rose to prominence in the 1980s, overcoming many demerits of PDH. This is the situation in which network providers experienced economic and technological growth. This technological advance covers major categories: high transmission rate, simplified add and drop function, high availability and capacity, reliability, interconnection, and the layered model of the synchronous digital hierarchy. These features support the Synchronous Digital Hierarchy, and optical fiber is the cable medium used most often to transfer data from one point to another. The main advantages of optical fiber cables are that they can transmit data at speeds no other medium can match, with no distortion or damage to the data; the main disadvantage of optical fiber cable is that the cost of installation is very high. The regenerator section is the path between regenerators; signaling over this section for communication through the cable is carried in the regenerator section overhead (RSOH). The multiplex section sits above the regenerator section layer and is used to link the multiplexers; its signaling is carried in the multiplex section overhead (MSOH). A carrier, or virtual container, is installed to carry the payload across these two sections. The SDH layer representation is shown in Figure 3. SONET SONET can be expanded as Synchronous Optical Network and is designed mainly for use over optical network cable, transferring data at a faster speed than other network media. SONET was initiated and implemented by ECSA, the Exchange Carriers Standards Association, and allows standardized connections between fiber-optic systems even when they have been designed by different manufacturers. SONET and SDH have been designed and implemented mostly for the same purpose: transporting circuit-mode communications from various sources to various destinations. An additional feature is support for real-time, uncompressed, circuit-switched voice encoded in PCM format. The major struggle in proceeding with SONET/SDH is that the synchronization sources of these various circuits differ in data transfer rate and circuit phase. SONET/SDH allows simultaneous transport of data over various circuits of various origins using a single frame format. SONET/SDH is more a transport protocol than a communication protocol. The transport technology defined by SONET carries multiple signals of varying capacity through a synchronous optical hierarchy. Signals are multiplexed with byte interleaving to achieve this. Multiplexing is simplified because of byte interleaving, and byte interleaving also offers network administration at every point. SONET multiplexing implies the generation of several lower-level signals in the structure. The basic signal is abbreviated as Synchronous Transport Signal level 1 (STS-1). An STS-1 frame comprises 810 bytes spread over 9 rows of 90 bytes each, and this set of bytes is transmitted every 125 microseconds.
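The STS-1 frame arithmetic just described fixes the whole SONET/SDH rate ladder: 810 bytes per frame at 8,000 frames per second (one per 125 microseconds) gives 51.84 Mbps, and every higher level is an integer multiple of that. A minimal sketch of the calculation:

```python
# SONET/SDH line rates derived from the STS-1 frame described above:
# 9 rows x 90 columns = 810 bytes per frame, one frame every 125 us
# (8,000 frames per second); STS-N is N x STS-1, and STM-(N/3) = STS-N.

ROWS, COLS = 9, 90
FRAMES_PER_SECOND = 8_000            # 1 / 125 microseconds
sts1_bps = ROWS * COLS * 8 * FRAMES_PER_SECOND
print(f"STS-1 (OC-1): {sts1_bps / 1e6:.2f} Mbps")   # 51.84 Mbps

for n in (3, 12, 48, 192):
    rate = n * sts1_bps / 1e6
    print(f"STS-{n} (OC-{n}) / STM-{n // 3}: {rate:.2f} Mbps")

# STS-3/STM-1 = 155.52 Mbps, matching the ~150 Mbps rate the Europeans wanted.
```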
The transport technology defined by SONET multiplexes several signals of varying capacity into an optical synchronous hierarchy. This is achieved by byte-interleaved multiplexing. Byte interleaving simplifies the multiplexing and also gives network administrators visibility at every point of the network.

SONET multiplexing combines several lower-level signals into the structure. The basic signal is the Synchronous Transport Signal Level 1 (STS-1). An STS-1 frame consists of 810 bytes arranged in 9 rows of 90 bytes each, and one frame is transmitted every 125 microseconds (a small rate-derivation sketch is given below).
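The STS-1 frame numbers above fix the base line rate of the whole hierarchy. The short sketch below (Python, illustrative; the frame dimensions are the standard SONET values quoted in the text) derives the 51.84 Mbps STS-1 rate from the 810-byte/125 µs frame and scales it to a few higher levels; the STS-to-STM mapping shown is the usual correspondence, not data from this paper's Figure 2.

```python
# Derive SONET line rates from the STS-1 frame geometry given in the text.

ROWS, COLS = 9, 90            # STS-1 frame: 9 rows x 90 columns = 810 bytes
FRAME_PERIOD_S = 125e-6       # one frame every 125 microseconds (8000 frames/s)

frame_bits = ROWS * COLS * 8
sts1_rate_bps = frame_bits / FRAME_PERIOD_S
print(f"STS-1 / OC-1  : {sts1_rate_bps / 1e6:.2f} Mbps")     # 51.84 Mbps

# Higher levels are exact byte-interleaved multiples of STS-1;
# SDH's STM-n corresponds to SONET's STS-3n.
for n, stm in [(3, "STM-1"), (12, "STM-4"), (48, "STM-16")]:
    print(f"STS-{n:<3}/ {stm:<6}: {n * sts1_rate_bps / 1e6:.2f} Mbps")
```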
Terminal Multiplexer

The terminal multiplexer concentrates tributary DS-1 signals (and other signals derived from them) and converts the electrical signal into an optical signal and vice versa. The simplest SONET links consist of two terminal multiplexers joined by fiber optics, with or without a signal regenerator between them.

Signal Regenerator

A signal regenerator is needed when the distance between the two terminal multiplexers is too long or the optical signal has become too weak. On receiving the signal, the regenerator recovers the clock and adds a header to the signal pattern before retransmission; the information in the data is not affected.

Add/Drop Multiplexer (ADM)

An ADM inserts new traffic into the network at a particular point and, in addition, extracts (drops) a portion of the passing traffic. With an ADM, signals can be removed from or inserted into the main flow without disturbing the other signals.

Point to Point

A point-to-point setup consists of two terminal multiplexers connected by fiber-optic cable, with a single regenerator used whenever the link requires it.

Point to Multipoint

This architecture adds ADM network elements to the network. The ADM has been designed specifically for this task: it avoids cross-connects and full remultiplexing and demultiplexing, and it connects intermediate network points to the network channels.

HUB Network

A hub deals with sudden traffic growth and network changes more smoothly and efficiently than point-to-point networks. A hub distributes the signal traffic from a central point to the various circuits.

RING Network

The most valuable element of a ring network is the ADM. Many ADMs can be placed in a ring structure for one-way or two-way traffic. The ring network is advantageous because of its resilience: the working nodes of the ring can reroute the traffic if any fiber-optic cable or multiplexer is damaged. The ring network is represented in Figure 4.

What Makes SONET Popular?

SONET's popularity rests on the fact that it allows different interfaces to asynchronous sources, so existing equipment can be replaced by new equipment that supports the SONET network. Bandwidth flexibility gives SONET a major advantage in the telecom industry. The ability to multiplex, inject and extract traffic at intermediate points of the system reduces the cost of a SONET deployment. As network reliability increases, the number of users also increases, which makes the network and its connections more efficient. SONET provides header bytes that allow administration of the data bytes and maintenance of the system, significantly reducing the maintenance cost of the SONET network infrastructure. The generic standard allows the interconnection of products from different manufacturers, which has encouraged them to adopt SONET and support the main network standard.

Structure of SONET

The structure of SONET is shown in Figure 5.

Threats in SONET and SDH

This section concentrates on the vulnerabilities of all-optical networks (AONs) and describes SDH, SONET and related technologies in order to understand the possible attacks on optical networks. The attacks of greatest concern here are jamming and signal-tampering attacks. Device crosstalk is present in most devices: signal leaks from one portion of an optical network device into another. Crosstalk can be exploited for denial-of-service or eavesdropping attacks.

Safety Measures against Attacks

There are several reasons why an AON must be secured when data is transmitted over the optical network:
• Every point of the network should be able to detect and identify an attack on its data.
• Attack-detection speed should keep pace with the data-transmission rate, since at the high data rates of an AON the amount of compromised data can be huge.
• Attacks should be identifiable at all possible target locations, irrespective of the high AON data rate.

Attack Types and Methods

Attacks on a network can be broadly divided into six areas based on the aim of the attacker:
• Traffic Analysis,
• Eavesdropping,
• Data Delay,
• Denial of Service,
• Quality-of-Service Degradation, and
• Spoofing.

Several systems already exist for attack detection in optical networks. The systems listed below are the best available, and detection and correction have been performed with their help:
• Wideband Power Detection Method,
• Optical Spectral Analysis Method,
• Optical Time Domain Reflectometry Method.

New Method for Detecting Defects in All-Optical Networks

This section describes a new method for detecting attacks upon optical networks with amplified links in transparent AONs. Several techniques are used to recover affected data and retransmit it to the intended end through the AON. The newer attack-detection techniques are:
• Amplitude Comparison,
• Phase and Amplitude Comparison,
• Important Detection Issues.

A toy illustration of the amplitude-comparison idea is given below.
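The paper names amplitude comparison but does not give an algorithm, so the following is only a minimal sketch of the general idea, assuming power monitors at the input and output of one amplified span: a jamming or tapping attack typically shows up as a power deviation beyond what the span's gain and normal noise explain. The gain and threshold values here are invented for illustration.

```python
# Minimal sketch of amplitude-comparison attack detection on one amplified span.
# Assumed setup: power monitors (in dBm) at span input and output; the span's
# nominal gain and the alarm threshold below are illustrative values only.

NOMINAL_GAIN_DB = 20.0    # expected amplifier gain across the span (assumed)
THRESHOLD_DB = 1.5        # max tolerated deviation before raising an alarm (assumed)

def check_span(p_in_dbm: float, p_out_dbm: float) -> bool:
    """Return True if the measured gain deviates enough to suggest an attack,
    e.g. added jamming power (gain too high) or a tap/cut (gain too low)."""
    measured_gain = p_out_dbm - p_in_dbm
    return abs(measured_gain - NOMINAL_GAIN_DB) > THRESHOLD_DB

# Example readings: a healthy span, then one with ~3 dB of unexplained loss.
for p_in, p_out in [(-10.0, 10.2), (-10.0, 7.0)]:
    status = "ALARM" if check_span(p_in, p_out) else "ok"
    print(f"in={p_in} dBm out={p_out} dBm -> {status}")
```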
Conclusion

The enormous growth in communication supports the network infrastructure and the data traffic that flows all over the globe. PDH was a good transmission medium for its time, but in the end it could not deliver advanced features such as faster data rates or take advantage of improvements in fiber-optic cabling. A new standard was then introduced that greatly improved on it, adding the functionality mentioned above to fulfil the requirements of the fast-moving communications world. Further network improvements include Ethernet over SDH and the set of rules that allows Ethernet traffic to be carried efficiently and flexibly. SONET and SDH will continue to handle communication channels all over the globe, maintaining data rates while efficiently adding ever more users.

Figure 2. Table of the signal rates of SONET and SDH.
2019-02-15T14:21:23.438Z
2015-11-24T00:00:00.000
{ "year": 2015, "sha1": "534f7139d442256716edf0f1697cd6abedca09fe", "oa_license": "CCBY", "oa_url": "https://sciresol.s3.us-east-2.amazonaws.com/IJST/Articles/2015/Issue-29/Article7.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "8b41ae4fd58bf9554c19574b45bf0d0b9322508a", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
252928177
pes2o/s2orc
v3-fos-license
Diltiazem as a cyclosporine A-sparing agent in heart transplantation: Benefits beyond dose reduction

Diltiazem (DZ) is widely prescribed in transplant recipients because of its drug-drug interactions with calcineurin inhibitors (CNI). However, these interactions have been primarily investigated in renal transplantation, and data regarding the long-term efficacy and safety of DZ in orthotopic heart transplantation (OHT) are still sparse. Our study aimed to elucidate the extent to which the co-prescription of DZ reduces the dose required to maintain adequate blood levels of cyclosporine A (CsA) and the resulting effect on morbidity and mortality in OHT recipients. We performed a retrospective single-center analysis of OHT recipients on a long-term immunosuppressive regimen based on CsA and mycophenolate mofetil (MMF). The study population consisted of 95 adult OHT recipients with a mean follow-up of 15.8 ± 6.7 years. DZ was co-prescribed in 39 subjects (41.1%) and was associated with a 28.6% reduction of the mean CsA daily dose (P < .001). Patients on DZ had less frequent rejection episodes (P = .002), better renal function (P = .009) and a lower rate of end-stage renal disease (P = .008). Additionally, they developed cardiac allograft vasculopathy (CAV) later. We observed no prognostic relevance of DZ co-prescription in univariate and multivariate Cox-regression analyses. In addition to reducing the CsA dose required to maintain adequate blood trough levels, DZ may have nephroprotective properties in OHT. The co-administration of DZ may decelerate the development of CAV and reduce the frequency of rejection episodes. However, the beneficial influence on morbidity has no impact on mortality.

Introduction

Since 1967, orthotopic heart transplantation (OHT) has been the ultima-ratio therapy in selected patients with terminal heart failure. [1] After initial drawbacks due to graft rejection, the introduction of cyclosporine A (CsA) in the 1980s revolutionized the world of organ transplantation. [2,3] However, the improvement in survival is still limited by the side effects of immunosuppressants, which cause increased morbidity over time. Thus, alternative approaches have been suggested, such as the use of potential drug-drug interactions and identical metabolic pathways. One such consideration is the co-administration of the calcineurin-inhibitor (CNI)-sparing agent diltiazem (DZ) along with immunosuppression. This concept may aid in reducing costs while limiting the side effects of the immunosuppressive therapy. [4] Furthermore, previous studies revealed that DZ might reduce the hepatotoxicity and nephrotoxicity of CsA, limit the incidence and progression of cardiac allograft vasculopathy (CAV), and improve survival. [5,6] However, the clinical utilization of this concept is primarily based on investigations in a real-life setting in the field of renal transplantation. [6] Our study aimed to examine the extent to which the co-administration of DZ can reduce morbidity and mortality in the population of patients who have undergone OHT.

The datasets generated and/or analyzed during the current study are not publicly available due to privacy restrictions, but are available from the corresponding author on reasonable request.
Study design

Our study was based on a retrospective analysis of patient data collected during the most recent routine follow-up in the outpatient clinic for terminal heart failure and heart transplantation. The overall study population consisted of 268 OHT recipients. Of these, 114 (42.5%) patients were on an immunosuppressive regimen containing an agent other than CsA. Additionally, 15 (5.6%) patients were excluded due to insufficient data, and 3 subjects (1.1%) due to heart-lung transplantation or re-OHT. In 41 (15.3%) OHT recipients, CsA was combined with azathioprine or everolimus, or prescribed as a monotherapy. The largest homogeneous group on an immunosuppressive regimen based on CsA was the one receiving maintenance therapy with CsA and mycophenolate mofetil (MMF) (n = 95, 35.4%). We stratified the patients into 2 groups according to whether DZ was co-prescribed or not (Fig. 1). Inclusion criteria were long-term maintenance immunosuppressive therapy with CsA and MMF with/without prednisone, at least 1 year of follow-up after OHT, and a stable clinical condition at the last presentation in the outpatient clinic. We note that 63 patients (66.3%) were on an immunosuppressive maintenance regimen including CsA/MMF for the entire posttransplant period. The remaining 32 patients (33.7%) were either on a different immunosuppressant initially, or data regarding the therapy modality in the immediate posttransplant period were insufficient. However, the mean time span during which the immunosuppressive regimen of this cohort was CsA/MMF comprised 9.7 ± 4.7 years until the last visit. The study was performed in compliance with the Declaration of Helsinki, and data sampling was approved by the local ethics committee (2019-021-f-S).

Laboratory and clinical examinations

The post-OHT follow-up of the study population was performed at 3-month intervals. It included the patient's history, clinical examination, electrocardiogram, laboratory assessment of liver and renal function, cardiac enzymes, complete blood count, and N-terminal prohormone of brain natriuretic peptide (NT-proBNP). Echocardiography was routinely performed at every other follow-up or if clinically indicated. Additional tests were conducted if necessary. Venous blood sampling for the estimation of CsA trough levels was performed at every presentation prior to the next dose, so that the values represent blood levels approximately 12 hours post-dosing. All measures were expressed in ng/mL, and the target dose was based on the recommendations of the guidelines of the International Society for Heart and Lung Transplantation (ISHLT). [3] As our study was conducted in a retrospective setting and CsA trough levels were used to monitor the immunosuppressive therapy at our center, we cannot provide any CsA blood levels at 2 hours post-dose (C2). However, recent studies revealed no beneficial effects of C2 monitoring over trough levels (C0) on the frequency of rejection episodes, the incidence of hypertension, or renal parameters in heart transplantation. [7] Additionally, research in the field of renal transplantation revealed that both C0 and C2 were useful in predicting immunosuppressant-related side effects. [8] Furthermore, the utility of C2 is yet to be proven in a maintenance setting with long-term follow-up. [9] CAV was defined according to the International Society for Heart and Lung Transplantation classification.
We differentiated between patients with no evidence of CAV on the last invasive assessment (corresponding to ISHLT CAV0) and patients with detectable coronary lesions irrespective of graft function or grade of angiographic involvement (≥ISHLT CAV1). [10]

Statistical analysis

IBM SPSS Statistics software was used for the analyses. Continuous variables, expressed as mean ± standard deviation (mean ± SD), were assessed with Student's t-test. Categorical variables were reported as numbers (percentages) and examined with the chi-square test. Risk estimation was based on univariate and multivariate logistic regression analyses. The prognostic evaluation of all factors was performed using univariate and multivariate Cox-regression models. For all statistical analyses, P < .05 was defined as significant.

Demographics

Our study population consisted of 95 OHT recipients with a mean follow-up of 15.8 ± 6.7 years. The mean age at the time of OHT was 47.4 ± 14.4 years. One-fourth of the population were female (n = 25, 26.3%). DZ was a concomitant medication in 39 patients (41.1%). Males were more likely to be prescribed DZ (P = .017), although we observed no relevant gender-related differences in the prevalence of hypertension (n = 59, 84.3% in males vs n = 18, 72.0% in females, P = .234), nor any disparities concerning the heart rate (83.9 ± 15.4 in males vs 84.9 ± 14.6/minute in females, P = .772) or the systolic blood pressure (126.2 ± 17.9 in males vs 126.0 ± 18.1 mm Hg in females, P = .975) at the most recent presentation. We found no differences between the DZ and non-DZ groups regarding the etiology of the pretransplant heart disease or the recipient age at OHT (Table 1). The dosing of diltiazem (DZ) was stable in most of the patients. The mean daily doses of DZ at the first evaluable follow-up and at the last presentation were 182.3 ± 69.6 mg/day and 160.8 ± 83.0 mg/day, respectively (P = .065). In 19 cases (48.7% of the DZ cohort), the daily dose was constant over the years.

Clinical characteristics

Patients not receiving DZ experienced rejection episodes more often in the past (OR 2.9, 95% CI 1.2-7.1, P = .019), although no association with rejections requiring therapy (≥2R) according to the revised classification of the ISHLT was observed (OR 2.2, 95% CI 0.8-6.3, P = .139). [2] The left ventricular ejection fraction was within the normal range, with no significant difference. The prevalence of hypertension was similar between both groups, and there were no disparities in the blood pressure values measured at the most recent examination. The prevalence of cancer, representing one of the most common comorbidities in patients on long-term immunosuppression, was also comparable, and we observed no association of DZ with the incidence of cancer (OR 1.3, 95% CI 0.5-3.1, P = .571). However, patients on DZ had a significantly better renal function, expressed as glomerular filtration rate (GFR). Additionally, DZ was associated with a lower rate of end-stage renal disease (ESRD) (OR 0.2, 95% CI 0.1-0.7, P = .012).

Immunosuppressive regimen

Patients were on an immunosuppressive maintenance regimen containing CsA and MMF with/without prednisone. The co-administration of DZ was associated with a lower CsA dose (1.5 ± 0.6 on DZ vs 2.1 ± 0.8 mg/kg/day without DZ, P < .001) while achieving comparable blood trough levels (122.6 ± 46.0 on DZ vs 120.3 ± 55.1 ng/mL in the non-DZ group, P = .834).
We observed no significant differences in the daily MMF dose between the groups (Fig. 2). Treatment with higher doses of DZ was associated with a greater reduction in CsA requirements (P = .003). However, this effect was primarily observed at doses of up to 180 mg/day, and the CsA-sparing effect was limited when higher drug doses were applied (Fig. 3). Low-dose prednisone (either 2.5 mg/day or 5 mg/day) was a co-medication in 65 patients (68.4%), without significant differences between the groups (Table 2).

Concomitant medication

Although with a significant intergroup contrast, beta-blockers were commonly prescribed in OHT recipients, covering more than half of the DZ group and three-fourths of the non-DZ group. As expected, further differences were detected in the use of calcium channel blockers (CCB), as only 1 patient from the DZ group was on an additional CCB. In contrast, in the non-DZ group, one-fourth of the population was receiving a CCB as an antihypertensive agent. Except for 1 patient on lercanidipine, amlodipine was the drug of choice in the remaining patients. Angiotensin-converting enzyme inhibitors (ACE inhibitors) and angiotensin II type 1 receptor antagonists (AT1 antagonists) were utilized in more than half of the overall population, without significant differences between the study groups. Diuretics were prescribed in 60% of the overall cohort, with almost the same frequency in both groups. Aldosterone antagonists were a concomitant medication in only 7.4% of the population. Statins were the most commonly prescribed agents, covering up to 87.4% of the overall cohort (Table 2).

Survival

We observed no significant impact of DZ use on posttransplant survival in a univariate Cox regression analysis (HR 0.8, 95% CI 0.4-1.5, P = .471) or in a multivariate analysis after adjustment for the factors that are not in direct association with the use of DZ (pretransplant age and diagnosis).

Discussion

To our knowledge, this is the first study examining the potential benefit of DZ co-prescription in a relatively large cohort of adult OHT recipients. The study was conducted in an observational setting but provides evidence from a very long-term follow-up. Additionally, most of the patients were on a CsA/MMF-based immunosuppression for almost the entire posttransplant period.

Gender-related differences

We did not observe relevant differences in age, pretransplant disease, or survival status between the study groups, but there were sex-related differences. While 50% of male OHT recipients were on DZ, the co-administration of a CNI-sparing agent was considered in only 20% of the females. Interestingly, there were no statistically significant sex-related disparities in the use of beta-blockers as a potential explanation for the limited utilization of DZ in women (n = 46, 65.7% in males vs n = 13, 52.0% in females, P = .240). We were unable to identify any prior research focusing on sex-related differences in the metabolism of DZ that could result in its differing utilization. Additionally, a focused assessment of the incidence of cancer (P = .330), hypertension (P = .234), CAV (P = .471), ESRD (P = .597), rejection episodes (P = .353) or rejections requiring therapy (P = .583) according to the revised classification of the International Society for Heart and Lung Transplantation showed no significant sex-related differences in potential long-term immunosuppressant-related side effects that might explain these results. It was previously reported that males undergoing OHT have enhanced morbidity. [11]
Awareness of the comorbidities may influence the clinical decision-making process and, as our results show, possibly influence the comorbidity profile in a very long-term follow-up.

CsA-sparing

We observed a significant reduction of the mean CsA daily dose associated with DZ use (a 28.6% reduction in the mean CsA daily dose). In contrast, the estimated blood trough levels were comparable, thus confirming the CsA-sparing effect of DZ as previously reported. However, a steep decrease of the CsA dose requirements potentially resulting from the co-administration of DZ was observed primarily in patients receiving up to 180 mg/day, and the CsA-sparing effect was limited when higher doses were prescribed. This is in line with the findings of previous studies, which reported an increase of CsA blood concentrations at initial up-titration but no further benefit, and potentially increasing side effects, when higher DZ doses are used. [12]

Clinical benefit

CAV is common in long-term follow-up after OHT and may have prognostic implications. [13,14] In contrast to previous reports on the potential of DZ to reduce its progression in short-term follow-up, we found no association between DZ prescription and CAV prevalence. [5] However, CAV was diagnosed significantly earlier after transplantation in patients not receiving DZ. Thus, the almost equalized prevalence may be a consequence of the prolonged follow-up in the DZ group. Our observations indicate a potential beneficial effect of DZ in decelerating CAV development in OHT recipients, resulting in a delayed onset. [13] In line with previous studies reporting on the potential of DZ to reduce the hepatotoxicity and nephrotoxicity of CsA in kidney transplant recipients, patients from the DZ group had significantly better renal function and less frequent ESRD at the last follow-up. [5,6] Additionally, as observed for CAV, ESRD was diagnosed after a longer follow-up in patients on DZ, indicating the nephroprotective properties of CsA-sparing. We observed no differences in the systolic left ventricular function (LVEF) of the allografts between the groups. However, the estimated NT-proBNP values were significantly elevated in patients from the non-DZ group. This may be a consequence of the more impaired renal function in this population. Cancer is one of the most common comorbidities among patients on long-term immunosuppressive therapy. It was previously reported that the incidence of malignancies is up to 30% at 10 years of follow-up and has a significant influence on patients' survival. [15,16] The overall incidence of cancer at 15 years of follow-up in our population was 32.6%, without significant differences between the study groups, except that we observed non-significant time-related differences with an earlier diagnosis in patients not receiving DZ. The potential impact of CsA-sparing on the incidence of hypertension cannot be evaluated in an observational setting, as hypertension is also a common comorbidity in the pretransplant period. [11] However, the antihypertensive therapy was optimized over the years of continuous monitoring, and the blood pressure measurements at the most recent follow-up delivered normal results. We observed no prognostic relevance of DZ co-prescription in a univariate Cox regression analysis or in a multivariate analysis after adjusting for the pretransplant factors evaluated in our study. However, the impact on morbidity is a factor justifying its use in OHT recipients.
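For readers who want to reproduce this kind of survival analysis, the sketch below shows the general shape of the univariate and covariate-adjusted Cox proportional-hazards models described above, using the Python lifelines package; the column names and data are hypothetical placeholders, not the study's actual dataset.

```python
# Sketch of a univariate and a covariate-adjusted Cox proportional-hazards
# analysis with lifelines. All column names and values are placeholders.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "years_followup": [12.0, 15.5, 20.1, 8.3, 17.2, 10.4, 14.8, 9.9],  # time to event/censoring
    "died":           [1, 0, 0, 1, 0, 1, 0, 1],                        # event indicator
    "diltiazem":      [1, 1, 0, 0, 1, 0, 1, 0],                        # DZ co-prescription
    "age_at_oht":     [45, 52, 39, 60, 48, 55, 41, 58],                # pretransplant covariate
})

# Univariate model: DZ co-prescription only.
cph = CoxPHFitter()
cph.fit(df[["years_followup", "died", "diltiazem"]],
        duration_col="years_followup", event_col="died")
cph.print_summary()   # the hazard ratio is exp(coef) for diltiazem

# Multivariate model: adjust for pretransplant covariates such as age.
cph_adj = CoxPHFitter()
cph_adj.fit(df, duration_col="years_followup", event_col="died")
cph_adj.print_summary()
```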
Additional drug-drug interactions

Dihydropyridine CCBs are commonly prescribed for the treatment of hypertension. The most frequently prescribed antihypertensive agent from this group in our patient population was amlodipine; lercanidipine was used in only 1 case. Previous research focusing on the possible interactions of CCBs from this group in renal transplant recipients revealed no relevant interactions with CsA. Investigations in a real-life setting demonstrated that cyclosporine biotransformation was not altered by the concomitant administration of amlodipine. [17,18] Regarding the additional use of corticosteroids, the results of the studies to date have been conflicting. However, as we observed no statistically significant differences in the frequency of their use between the study groups, potential bias related to corticosteroid use is limited in our population. [19]

Strengths and limitations

Our study provides evidence based on a very long-term follow-up in a relatively large cohort. Additionally, this is the first study investigating the potential benefit of DZ use in a real-life setting in a population of OHT recipients, as the previous evidence is derived from the field of renal transplantation. However, its observational nature is a factor limiting the utility of the study findings. Furthermore, due to data storage regulations, we only had access to information covering the last 30 years. As a result, we have insufficient evidence regarding the pretransplant factors and the immediate posttransplant period in some patients. Additionally, as some OHT recipients considered for our study were on long-term therapy at our center but had undergone OHT at other transplant centers, we had no detailed information on the perioperative period. Nevertheless, an asset of our study is that it was conducted over an exceptionally long follow-up after OHT.

Conclusions

DZ has CsA-sparing properties and may aid in reducing the CsA dose required to maintain adequate blood trough levels. Consequently, DZ may ameliorate CsA's side effects in OHT recipients. We observed a positive association of DZ prescription with better renal function, less frequent ESRD, later onset of CAV and ESRD, and less frequent rejection episodes. This evidence suggests a potential beneficial effect of DZ on patients' morbidity. However, we observed no mortality benefit in a very long-term follow-up.
2022-10-18T15:48:12.160Z
2022-10-14T00:00:00.000
{ "year": 2022, "sha1": "5fc3c944067cd14882ae24348c6e9c2a05f53b4b", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "c5a2ee33ab4d3f27de897c5f0261a1db3991acd3", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
248668028
pes2o/s2orc
v3-fos-license
Medical students’ attitude towards cultural diversity: a cross-sectional study at a health sciences university in eastern Nepal

Objectives To assess the attitude of medical students towards cultural diversity, aiming to elucidate our current status in understanding cultural awareness and sensitivity.
Design, setting and participants A web-based cross-sectional study was carried out among 601 undergraduate health science students (medical and dental courses) at a health sciences university in eastern Nepal via various social-media platforms such as WhatsApp, Messenger, Gmail, etc.
Outcome measures Medical students' attitude towards cultural diversity and its association with the sociodemographic profile of the students.
Results A total of 601 students participated in the study, of whom 64.2% were men, with a sex ratio of 1.8:1 and a mean age of 22.3±1.9 years. More than three-quarters (77.2%) of the students had an excellent to good attitude towards cultural diversity. The proportion of students reporting an 'excellent' attitude towards cultural diversity was higher among male students than female students (37.8% vs 20.5%) and among students aged >22 years compared with younger students (37.1% vs 26.7%). Gender (p<0.001) and age (p=0.009) were significantly associated with the attitude towards cultural diversity.
Conclusions Medical students, in general, are aware of the impact of a cross-cultural society on the delivery of quality healthcare and of the need to be aware of the prejudices doctors may hold towards certain cultures. The majority suggest the inclusion of the concepts of multicultural awareness and sensitivity in the medical curriculum itself.

INTRODUCTION

Nepal is a culturally diverse country consisting of people of 126 different castes and ethnic groups speaking around 123 different languages. 1 In a multicultural society, the delivery of quality healthcare hinges on providers' ability to understand, communicate with and care for patients from various ethnic backgrounds. The essence of cultural competence lies in acknowledging the significance of culture in people's lives, respecting cultural differences and minimising any repercussions arising from them. Having culturally competent clinicians supports the idea of ethical medical practice by advancing patient autonomy and justice. 2 'Tomorrow's doctors', the General Medical Council's publication that sets out the framework for undergraduate medical education in the United Kingdom, states that 'students should have acquired respect for patients and colleagues that encompasses, without prejudice, diversity of background and opportunity, language, culture, beliefs, race, colour, gender, sexuality, age, mental or physical disability and social or economic status and way of life. They must understand a range of social and cultural values and differing views about healthcare and illness'. 3 Various studies on the influence of a cross-cultural set-up on the professional competence of healthcare providers have highlighted a lack of awareness of the importance of cultural competence and an unpreparedness to provide cross-cultural care owing to the exclusion of formal training in these areas. 4 5

STRENGTHS AND LIMITATIONS OF THIS STUDY
⇒ This research is on an under-researched topic of medical humanities among a large number of future healthcare professionals in a culturally diverse low-income country.
⇒ This study advocates the idea of incorporating cultural competence and intersectionality into the curriculum of medical schools in Nepal.
⇒ Findings from a web-based survey at a single health sciences university may not be generalisable to all medical students in Nepal; however, the findings provide a basis for future research on the attitude of medical students towards cultural diversity in Nepal.
⇒ The study assesses only the attitude, using close-ended questions at one point in time, and does not monitor changes in attitude over time owing to its cross-sectional nature.

Cultural competence or competemility is intricately linked to social justice, aimed at removing institutional, social and systemic oppression to ensure equity for all individuals. 6 However, the concept of cultural competence or competemility alone does not suffice to attain the aim of equity for all. Thus, the concept of intersectionality is needed to understand individual experiences, or the ways in which systems of power intersect in individual lives, as culture is not static and social identities do not operate in isolation. 7 Studies suggest that diversity in culture, race and ethnicity, and the understandings and attitudes around them, do affect medical practice. 4 5 8-16 According to a study of first-year medical students in the USA, the students were unfamiliar with the key concepts of culture, race and ethnicity and struggled with the issues that diversity raises in medical practice. 4 Medical students have also realised the need to develop cultural competence at medical colleges and in the hospital environment. 8 A lag in the preparedness of resident physicians to deliver cross-cultural care has been indicated, with very little clinical time allotted during residency to addressing real-world cultural issues, which argues for training to help them alleviate ethnic and racial disparities in healthcare. 5 It has been established that cultural competency training programmes integrating topics of culturally and linguistically appropriate healthcare standards improve not only the knowledge, attitudes and skills of healthcare providers but also patients' satisfaction. 9-12 School-wide culturally responsive behaviour support and the incorporation of diversity education within a coherent educational framework are thought to help enhance cultural responsiveness. 13 14 As the sociodemographic characteristics of medical students are likely to significantly impact their attitude towards patient-centred care, a curriculum that includes cultural diversity and competence is a necessity for medical schools. 15 16 There is a dearth of literature on medical students' attitude towards cultural diversity in Nepal. In this study, we aim to assess the attitude of medical students towards cultural diversity and to identify the factors associated with this attitude.

Study setting, analysis period and participants

A web-based cross-sectional study was carried out among 601 undergraduate medical and dental students at the B.P. Koirala Institute of Health Sciences (BPKIHS), Dharan, Nepal, from 1 July to 15 July 2019. Students at BPKIHS use the internet regularly, and web-based studies in the past have received a good response in participation. 17 18 BPKIHS is a leading and the oldest public-funded health sciences university in Nepal. The teaching hospital of BPKIHS serves as a tertiary referral specialist centre with a catchment area covering more than a quarter of the country's population. 19
Patient and public involvement statement

The study did not have direct involvement of the participants in the development of the study design, research questions, data collection, result analysis or interpretation. The consenting participants were enrolled in this study individually as independent sampling units.

Data collection

We used a structured questionnaire including questions on multiple aspects of cultural competence and diversity, which was developed by reviewing previous literature 4 and modified after consultation with sociology and public health academics. The questionnaire was pretested on 20 undergraduate nursing students, who were not included in the final data set. The pretested and verified questionnaire was then sent to the participants, selected using the convenience sampling method, via different modes of online media such as email or other social-media platforms (Messenger, WhatsApp, etc) in the form of a Google Form. The first page of the Google Form contained the 'Information and Informed Consent Sheet', which explained the objectives and aims of the study to the participants, and e-consent was obtained as their approval to participate in the study. A prior verbal request was also made by personally meeting the individual or through a phone call wherever possible to enhance the participants' interest in participating in the study, with reminders sent every 4th day until the 15th day after the start of the study. After the 15th day, no more entries were taken.

Outcome variables and analysis

We employed sociodemographic variables such as age, sex, address, nationality, ethnicity, academic category of study, stream of study, economic status and socioeconomic status as our independent variables, and attitude towards cultural diversity as our dependent variable. The responses to the questions on cultural issues were categorised as strongly agree, agree, neutral, disagree and strongly disagree on a 5-point Likert scale. Statements that supported the notion of cultural diversity and tolerance were given a score of 1 for a response of 'agree' or 'strongly agree', while responses of 'neutral', 'disagree' or 'strongly disagree' were given a score of 0. Conversely, statements that opposed the notion of cultural diversity and tolerance were given a score of 1 for 'disagree' and 'strongly disagree' responses, with the other responses scoring 0. For example, for question 2.1.1 in the questionnaire (online supplemental file), the responses 'strongly agree' and 'agree' received a score of 1, with the other responses receiving a score of 0; for question 2.1.2, the responses 'disagree' and 'strongly disagree' received a score of 1, with the other responses receiving a score of 0. 'Not answered' responses to individual items were considered 'neutral' and scored 0. Based on this, the attitudes of the students were categorised using the cumulative percentage method of 5-point Likert scale analysis as excellent attitude (≥80%), good attitude (60%-80%) or poor attitude (<60%). 20 21 A short illustrative sketch of this scoring scheme follows below.
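To make the scoring scheme concrete, here is a small illustrative sketch of the scoring and categorisation rules described above; the item polarities and example responses are hypothetical, and only the scoring logic follows the paper.

```python
# Sketch of the Likert scoring and attitude categorisation described above.
# Item polarity and the example responses are hypothetical placeholders.

SUPPORTIVE = {"strongly agree", "agree"}       # score 1 on pro-diversity items
OPPOSING = {"strongly disagree", "disagree"}   # score 1 on reverse-coded items

def score_item(response: str, pro_diversity: bool) -> int:
    """'Not answered' is treated like 'neutral' and scores 0, as in the paper."""
    response = response.lower()
    return int(response in (SUPPORTIVE if pro_diversity else OPPOSING))

def categorise(scores: list[int]) -> str:
    pct = 100 * sum(scores) / len(scores)
    if pct >= 80:
        return "excellent"
    if pct >= 60:
        return "good"
    return "poor"

# Example: three pro-diversity items and one reverse-coded item.
responses = [("agree", True), ("strongly agree", True),
             ("neutral", True), ("disagree", False)]
scores = [score_item(r, pro) for r, pro in responses]
print(scores, "->", categorise(scores))   # [1, 1, 0, 1] -> good (75%)
```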
Similarly, the economic status of the students was evaluated with the help of per capita income (PCI) per day and categorised as below the poverty line (PCI per day US$<1.9) or above the poverty line (PCI per day US$≥1.9). 22 Likewise, using the Modified Kuppuswami Scale, which considers the occupation and education of the head of the family and the average family income per month as parameters, the socioeconomic status of the students was categorised as upper class (score: 26-29), upper middle class (score: 16-25), lower middle class (score: 11-15), upper lower class (score: 5-10) or lower class (score: <5). 23 The final data set was analysed using Microsoft Excel and the Statistical Package for Social Sciences V.11.0 and interpreted using descriptive statistics.

Sociodemographic details

A total of 601 undergraduate medical students participated in the study. The mean age of the students was 22.3±1.9 (range 18-31) years, with 52.9% of them aged 22 years or less and 47.1% above 22 years. Of the total respondents, 64.2% were men and 35.8% were women, a sex ratio of 1.8:1. Three-fourths of the students involved in this study were Nepali (76.0%) and the remaining one-fourth were Indian (24.0%). Of the Nepali students, almost equal numbers were from Province 1 (20.3%) and Madhesh Province (19.3%), followed by Bagmati Province (14.6%), while the remaining 21.8% were from other provinces (Gandaki Province, Lumbini Province, Karnali Province and Sudurpaschim Province). Considering the ethnicity of the Nepalese students, 38.3% were Brahmin/Chhetri from the hills and mountains, 12.5% were Brahmin/Chhetri from Terai or Madhesh, 9.6% were Janajati from the mountains, hills and Terai, while the remaining 15.6% comprised other ethnic groups such as Dalit, Muslim, etc. Likewise, this study included 71.2% of students from the Bachelor of Medicine and Bachelor of Surgery (MBBS) stream and the remaining 28.8% from the Bachelor of Dental Surgery (BDS) stream. Similarly, 35.9% of the students were in the preclinical courses, 46.3% in the clinical courses, and 17.8% were pursuing their internship at the institute. The average per capita income per day of the students was US$10.3±26.5. There were 15.1% of the students below the poverty line, and the remaining majority, 84.9%, were above the poverty line. Likewise, 52.7% belonged to the upper class, 35.3% to the upper middle class, 9% to the lower middle class and the remaining 3% to the lower class (table 1).

Responses of the students on various cultural issues

Of the total participants, more than 90.0% agreed with the notion that every individual has a responsibility to learn about other ethnicities and cultures. Likewise, more than half of the respondents disagreed with the belief that minority members of the population should adopt the values and customs of the majority. Nearly three-fourths of the respondents disagreed that international students should abandon their customs and values, with 90.0% of them also agreeing that such students may adapt to a new culture but need not let go of their own values and culture. About three-fourths of the participants agreed with the belief that different cultures can coexist in harmony. Almost all of them agreed that belonging to a particular ethnic group should not be a barrier to establishing a friendship with someone from a different cultural background.
Adding to this notion, more than 80.0% also believed that a doctor more versed with the mother tongue of the patient was more likely to be perceived as more competent than others not knowing the language. More than two-thirds of the students believed that issues concerning the provision of holidays on major festivals must be addressed by the college administration. The majority also believed that various student clubs in their college must ensure proportional involvement of students from all cultures and nationalities. Almost three-fourths of the participants also felt the need for the incorporation of concepts of multicultural awareness and sensitivity in the curriculum (table 2). Based on the responses of the students on various cultural issues, excellent, good and poor attitudes of the medical students towards cultural diversity were observed in 31.6%, 45.6% and 22.8% of the respondents (table 2). Responses to hypothetical situations The students were given two hypothetical situations and were asked how they would respond to these situations. When asked how they would prepare themselves for a visit if one of their friends belonging to a different ethnic background invited them to their home to meet and interact with their parents, 62.6% of the students preferred asking their friend what to do, 20.6% preferred reading on their culture while rest 16.8% were reluctant to do anything by themselves ( figure 1). Similarly, when asked about how they would respond to an invitation to celebrate a festival native to someone of a different ethnic group than theirs, more than half of the respondents (62.6%) said they would be eagerly joining and learning about the respective traditions and rituals of that culture without any hesitation, about one-third of the respondents (33.1%) said that they would be happy to join but would make sure that there were not any customs or rituals those would make them feel uncomfortable while the remaining 4.3% preferred declining such an offer ( figure 2). Association of the attitude of medical student with various sociodemographic variables A significant association was found between the attitude of the students towards cultural diversity and gender (p<0.001), with more percentage of men having an 'excellent' attitude compared with women (37.8% vs 20.5%). Even as 52.5% of the women fell into the category of those having a 'good' attitude, there were significantly more percentage of women who had a 'poor' attitude as compared with the men (27.0% vs 20.5%). Also found significant was the association between the attitude towards cultural diversity and the age of the respondents (p=0.009). Though the proportion of those that had 'poor' attitude was similar for age categories ≤22 years and >22 years, a significantly high proportion of over 22 years old fell into the category of those with 'excellent' attitude than those below 22 years (37.1% vs 26.7%). Other variables like stream of study, economic status, socioeconomic class and academic level of the respondents were not found to be significantly associated with the attitude with p values of 0.148, 0.087, 0.750 and 0.081, respectively (table 3). DISCUSSION This study presents first evidence in context of Nepal where previous per reviewed literature on this topic is not found on the issues of cultural diversity and sensitivity among medical students. 
Nepal has a collectivistic society with a multicultural and multilinguistic population, which is expected to influence the professional behaviour of all healthcare professionals towards their patients. This study of future medical professionals demonstrated that medical students, in general, have a positive attitude towards the issues of cultural diversity. These findings are consistent with the literature reporting that respondents generally have an open attitude towards cultural diversity. 4 The respondents in our study widely acknowledged that healthcare providers should be culturally competent in order to cater to people of diverse backgrounds. Participants also acknowledged that doctors may be prejudiced against certain cultures. This is consistent with findings from the literature. 4 Older students displayed better attitudes regarding cultural diversity in our study, in line with the findings of a study 15 that reported older individuals displaying a significantly more positive attitude towards patient-centred care, as the concept of 'patient-centred care' itself incorporates the issue of cultural competence. 15 Male students showed a more positive attitude towards cultural diversity in our study. Given that Nepal is a patriarchal society, 24 it is interesting to see that male students have a more positive attitude regarding cultural diversity. These findings need further exploration, as it is equally possible that female students were simply more open in their responses. 25 26 This finding contradicts the finding of a study in which women exhibited more positive attitudes than men. 15 Similarly, we did not find a significant association between attitude and the socioeconomic status of the students, which is not in line with the findings of a study in which medical students from a lower socioeconomic background were found to have a more positive attitude towards patient-centred care compared with their upper-class counterparts. 15 Another point of discussion in this study is a potential similarity with the findings of a fairly recent study, which argued that the self-belief of healthcare providers in their ability to cater to the healthcare needs of a diverse population seemed to stem from mere knowledge of a few key norms and customs rather than from principles of systematic cross-cultural approaches. 27 In other words, the healthcare providers ignored things like power balance, recognition of systemic racism, their own prejudices and other subtle nuances of a cross-cultural setup. 27 This clearly indicates that the concept of intersectionality is equally important to impart to healthcare providers along with the concept of cultural competence. In this study too, we could only access the participants' beliefs and attitudes through very direct questions. Hence, the assessment of attitude might not have been completely valid; in other words, the responses might simply have been politically correct. Likewise, we assigned a score of 0 to 'neutral' responses on the questions assessing cultural competence, the same as for 'disagree' or 'strongly disagree' responses, because the concept of cultural competency is interlinked with social justice, where remaining 'neutral' cannot be considered a progressive response.
The ultimate aim of all academic and research activities in health should be patient satisfaction and improved care. Studies in the past have demonstrated the efficacy of interventions to improve the cultural competence of health professionals in the form of improved knowledge, attitudes and skills, as well as patient satisfaction. 12 Thus, it is apt that we stress the immediate need for the inclusion of concepts of cross-cultural care in the undergraduate curriculum at the very least. Finally, a much more detailed study is needed in this regard to address this issue better in our setting.

LIMITATIONS OF THE STUDY

As we collected data online, there was a chance of information bias to some extent. There might be social desirability bias, as the identity of the participants was not anonymous. This might have affected the findings; however, the effect is likely to be small, as we conducted the study online. There were multiple questions contributing to the overall attitude of each participant, which might also minimise the overall effect of bias. Also, attitude being itself a normative idea, it was difficult to classify it objectively as poor, good or excellent; we have tried our best to present the findings objectively after a literature review. Likewise, we received fewer responses than expected from some batches, such as the interns, who were busy with their clinical practice and obligatory services; this limitation was mitigated as far as possible with regular reminder emails and personal calls and messages to obtain sufficient responses. Likewise, excluding other undergraduates within the institute, such as nursing and imaging technology students, could hinder the generalisation of the findings of this study to health sciences students in general.

CONCLUSION

The cultural diversity and competence required of health professionals are considered a key topic for the medical education curriculum. This study formally calls on medical educators in Nepal to incorporate cultural diversity and competence into the medical education curriculum. Cultural competence as well as the concept of intersectionality needs to be included in the curriculum. How exactly a lack of cultural awareness impacts healthcare is, however, beyond the scope of this study, for which further work is recommended.
2022-05-11T06:23:26.172Z
2022-05-01T00:00:00.000
{ "year": 2022, "sha1": "ddc5bff6e214b82f82a04c229dd60b4b6351366f", "oa_license": "CCBYNC", "oa_url": "https://bmjopen.bmj.com/content/bmjopen/12/5/e057062.full.pdf", "oa_status": "GOLD", "pdf_src": "Highwire", "pdf_hash": "aa8456f13bd7dd5afb472a66c744fa65f841f686", "s2fieldsofstudy": [ "Medicine", "Education" ], "extfieldsofstudy": [ "Medicine" ] }
259212174
pes2o/s2orc
v3-fos-license
First-principles prediction of structural, magnetic properties of Cr-substituted strontium hexaferrite, and its site preference

To investigate the structural and magnetic properties of Cr-doped M-type strontium hexaferrite (SrFe$_{12}$O$_{19}$) with x = (0.0, 0.5, 1.0), we perform first-principles total-energy calculations based on density functional theory. From the calculated substitution energies of Cr in strontium hexaferrite and a formation-probability analysis, we conclude that the doped Cr atoms prefer to occupy the 2a, 12k, and 4f$_{2}$ sites, in good agreement with the experimental findings. Because the Cr$^{3+}$ ion moment, 3 $\mu_B$, is smaller than that of the Fe$^{3+}$ ion, 5 $\mu_B$, the saturation magnetization (M$_{s}$) decreases rapidly as the Cr concentration in strontium hexaferrite increases. The magnetic anisotropy field $\left(H_{a}\right)$ rises with increasing Cr fraction, driven by the significant reduction of the magnetization together with a slight increase of the magnetocrystalline anisotropy $\left(K_{1}\right)$. The reason for the rise of the magnetic anisotropy field $\left(H_{a}\right)$ with increasing Cr fraction is further underlined by our formation-probability study. Cr$^{3+}$ ions prefer to occupy the 2a sites at lower temperatures, but as the temperature rises, it becomes more likely that they occupy the 12k site. Above a certain annealing temperature (>700°C), Cr$^{3+}$ ions are more likely to occupy the 12k site than the 2a site.

I. INTRODUCTION

Hexaferrites, also known as hexagonal ferrites or hexagonal ferrimagnets, are a class of magnetic materials that have been of great interest to researchers since their discovery in the 1950s. Hexaferrites come in several types, such as M, Y, Z, W, X, and U, commonly doped with zinc, strontium, nickel, aluminum, and magnesium. Properties common to all hexaferrites are that they are ferrimagnetic, that their magnetic properties depend on the crystal structure, and that, because of the spin-orbit interaction, different amounts of energy are needed to magnetize them along different directions within the crystal$^{1}$. In particular, we are interested in M-type strontium hexaferrite (SrFe$_{12}$O$_{19}$, SFO), which belongs to space group P6$_{3}$/mmc and has the hexagonal magnetoplumbite crystal structure. The unit cell of SFO, containing two formula units, is presented in Fig. 1. The iron ions in this structure are coordinated by oxygen ions in tetrahedral, trigonal-bipyramidal, and octahedral arrangements. The magnetism of SFO arises mainly from the Fe$^{3+}$ ions occupying five inequivalent sites (namely 2a, 2b, 4f$_{1}$, 4f$_{2}$, and 12k): three octahedral sites (2a, 12k, and 4f$_{2}$), one trigonal-bipyramidal site (2b), and one tetrahedral site (4f$_{1}$). However, the degree of the magnetic properties is also influenced by the shape and size of the material particles, especially in the context of thin films and nanoparticles$^{2-5}$.

In SFO, interactions between the moments, or between the moments and the lattice ions, contribute an anisotropic energy that is termed magnetocrystalline anisotropy (MA). More explicitly, MA is the dependence of the magnetic properties on the direction of the applied field relative to the crystal lattice.
Magnetocrystalline anisotropy energy (MAE), an integral property of a ferromagnetic crystal, is the energy difference between magnetizing a crystal along its easy and hard directions of magnetization$^{2,6}$. The primary source of MA is spin-orbit coupling (SOC). Because of this coupling, the electron orbitals are coupled to the electron spins and follow the spin direction however the magnetization changes its direction in space$^{6}$. The anisotropy of a crystal arises mainly from the shape of a magnetic particle at the quantum scale, from atomic diffusion at sufficiently high temperature, and from the interaction between ferromagnetic and antiferromagnetic materials$^{6-9}$.

SFO is one of the best candidates among the hexaferrites owing to its industrial and electronic applications. In the early years, technologists were motivated to use SFO for permanent magnets, recording media, and electric motors because of its high saturation magnetization, large coercivity, optimal Curie temperature, superior magnetocrystalline anisotropy, and good chemical stability. Nowadays, thanks to technological advances, there is equally growing interest in developing nanofibres and electronic components for mobile and wireless communications. Recently, researchers have characterized Y, M, U, and Z ferrites as multiferroics even at room temperature. These multiferroics have a wide range of practical applications, such as multi-state memory elements, memory media, and novel functional sensors$^{2,10,11}$.

Several successful investigations of the electronic structure of SFO have been performed using computational and experimental approaches. To enhance the magnetic and electric properties, researchers have substituted ions or pairs of ions, in various concentrations, mainly on the Fe sites of SFO. The majority of researchers have substituted non-magnetic ions onto the Fe sites to further enhance the saturation magnetization (M$_{s}$). In the case of Zr-Cd substitution (SrFe$_{12-2x}$(ZrCd)$_{x}$O$_{19}$), the value of M$_{s}$ increased up to the concentration x = 0.2, whereas the coercivity declined with increasing Zr-Cd concentration$^{12}$. Substitution of the Er-Ni pair in SFO showed a continuous rise of both M$_{s}$ and coercivity with concentration$^{13}$. However, substitution of certain pairs such as Zn-Nb$^{14}$, Zn-Sn$^{15-17}$, and Sn-Mg$^{18,19}$ showed an increasing trend of M$_{s}$ and a decreasing trend of coercivity.

In this study, we performed first-principles total-energy calculations to analyze the link between site occupation and the magnetic properties of substituted strontium hexaferrite, SrFe$_{12-x}$Cr$_{x}$O$_{19}$ with x = 0.5 and x = 1.0. Every configuration of substituted SFO appears with a particular probability. To determine the formation probabilities of its various configurations at a typical annealing temperature (1000 K), we used the Boltzmann distribution function. We show that our calculation predicts a decrease of the saturation magnetization (M$_{s}$) as well as a decrease of the magnetic anisotropy energy (MAE) of SrFe$_{12-x}$Cr$_{x}$O$_{19}$ at x = 0.5 and 1.0 compared to pure M-type SFO. This result is in good agreement with the experimental observation of Ghasemi et al. (2009)$^{19}$.

II. COMPUTATIONAL DETAILS

We pursued first-principles total-energy calculations for the configuration SrFe$_{12-x}$Cr$_{x}$O$_{19}$ at x = 0.5 and 1.0. In our calculations, a unit cell of two formula units of SFO is used.
The structural optimization calculations, along with the total energies and forces, were carried out using density functional theory with projector augmented-wave (PAW) potentials as implemented in VASP. Given the ground-state ferrimagnetic spin ordering of Fe, all our calculations were spin-polarized 3,20 . The wave functions were expanded in plane waves with a 520 eV energy cutoff for both pristine SFO and Cr-substituted SFO. A 7 × 7 × 1 Monkhorst-Pack k-mesh was used to sample the Brillouin zone, with a Fermi-level smearing of 0.2 eV applied through the Methfessel-Paxton method 21,22 . The electronic relaxation was continued until the changes in the free energy and the band-structure energy were less than 10$^{-7}$ eV. In addition, we fully optimized the structure by relaxing the ionic positions and cell shape until the change in total energy between two ionic steps was less than 10$^{-4}$ eV. We used the Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation (GGA) to describe the electron exchange-correlation effect 23 . Furthermore, we implemented the GGA + U method in the simplified rotationally invariant approach described by Dudarev et al. to treat the localized 3d electrons of Fe 24 . U$_{eff}$ for Fe was set to 3.7 eV based on a previous study; we set U$_{eff}$ to zero for all other elements 25 . To evaluate the magnetocrystalline anisotropy energy, we first carried out an accurate collinear calculation in the ground state, and then performed spin-orbit coupling calculations for two different spin orientations within a non-collinear setup. The substitution of foreign atoms into the five crystallographically inequivalent Fe sites can change the magnetic characteristics of SFO. When foreign atoms are substituted in an SFO unit cell, there is a variety of energetically distinct configurations. The magnetism of substituted SFO is highly dependent on the site preferences of the substituted atoms, since SFO is ferrimagnetic. Understanding the site preference of substituted atoms is therefore crucial for studying how substitution affects the magnetic characteristics. The substitution energy can be calculated to find the substituted atom's preferred site. The substitution energy E$_{sub}$[i] for configuration i at 0 K is given by $E_{sub}[i] = E_{CSFO}[i] - E_{SFO} - \sum_{\beta} n_{\beta}\,\mu_{\beta}$, (1) where $E_{CSFO}[i]$ is the total energy per unit cell of Cr-substituted SFO in configuration i, $E_{SFO}$ is the total energy per unit cell of pure SFO, and $\mu_{\beta}$ is the total energy per atom of element β (β = Cr and Fe) in its most stable crystal structure. $n_{\beta}$ is the number of atoms of type β added or removed; if one atom is added, $n_{\beta}$ = +1, and when one atom is withdrawn, $n_{\beta}$ = −1. The calculation of the magnetic anisotropy energy (MAE) is important for understanding the preferred magnetization directions in a material. Mathematically, the MAE is defined as the difference between the two total energies obtained with the spin quantization axis oriented along two distinct directions: 26 $\mathrm{MAE} = E_{(100)} - E_{(001)}$, (2) where $E_{(100)}$ is the total energy with the spin quantization axis along the magnetically hard axis and $E_{(001)}$ is the total energy with the spin quantization axis along the magnetically easy axis. The total energies in Eq. (2) are computed by non-self-consistent calculations, in which the spin densities are kept constant. With the help of the MAE, the uniaxial magnetic anisotropy constant, K$_1$, can be computed as 27,28 $K_{1} = \dfrac{\mathrm{MAE}}{V \sin^{2}\theta}$, (3) where V is the equilibrium volume of the unit cell and θ is the angle between the two spin-quantization-axis orientations (90° in the present scenario).
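As a concrete illustration of Eqs. (1)-(3), the short Python sketch below evaluates the substitution energy and the uniaxial anisotropy constant from total energies of the kind a VASP run produces. All numerical inputs, function names, and variable names here are hypothetical placeholders chosen for illustration; they are not values or tools from this work.

import math

EV_TO_J = 1.602176634e-19  # J per eV

def substitution_energy(e_csfo, e_sfo, mu_cr, mu_fe):
    """Eq. (1) for a single Cr-for-Fe swap:
    E_sub = E_CSFO - E_SFO - (n_Cr*mu_Cr + n_Fe*mu_Fe),
    with n_Cr = +1 (one Cr added) and n_Fe = -1 (one Fe removed)."""
    return e_csfo - e_sfo - ((+1) * mu_cr + (-1) * mu_fe)

def mae(e_hard_100, e_easy_001):
    """Eq. (2): MAE = E(100) - E(001), in eV per unit cell."""
    return e_hard_100 - e_easy_001

def k1(mae_ev, volume_angstrom3, theta_deg=90.0):
    """Eq. (3): K1 = MAE / (V sin^2 theta), converted to J/m^3."""
    v_m3 = volume_angstrom3 * 1e-30
    return mae_ev * EV_TO_J / (v_m3 * math.sin(math.radians(theta_deg)) ** 2)

# Hypothetical total energies (eV) and unit-cell volume (Angstrom^3):
print(f"E_sub = {substitution_energy(-580.10, -579.30, -9.50, -8.30):+.3f} eV")
print(f"K1 = {k1(mae_ev=1.5e-3, volume_angstrom3=700.0):.3e} J/m^3")

With a placeholder MAE of 1.5 meV over a roughly 700 Å³ double-formula-unit cell, K1 comes out near 3 × 10⁵ J/m³, the order of magnitude typically quoted for M-type hexaferrites.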
The anisotropy field, H$_a$, which is related to the coercivity, can be expressed as 29 $H_{a} = \dfrac{2K_{1}}{M_{s}}$, (4) where K$_1$ is the magnetocrystalline anisotropy constant and M$_s$ is the saturation magnetization. When the difference in substitution energies ΔE$_{sub}$ between different configurations is relatively small compared to the thermal energy at high annealing temperatures (∼1000 K), the site preference of substituted atoms in hexaferrite can change. This change in site-occupation preference can be described using the Maxwell-Boltzmann distribution, which gives the formation probability. The site-occupation probability, or formation probability, P$_i$(T) of configuration i at temperature T is given by $P_{i}(T) = \dfrac{g_{i}\, e^{-\Delta G_{i}/k_{B}T}}{\sum_{j} g_{j}\, e^{-\Delta G_{j}/k_{B}T}}$, (5) with, to a first approximation, $\Delta G_{i} = \Delta E_{sub}(i) + P\,\Delta V_{i}$, (6) and, including the entropy term explicitly, $\Delta G_{i} = \Delta E_{sub}(i) + P\,\Delta V_{i} - T\,\Delta S_{i}$, (7) where ΔG$_i$, ΔE$_i$, ΔV$_i$, and ΔS$_i$ are the changes in free energy, substitution energy, unit-cell volume, and entropy of configuration i relative to the ground-state configuration. P, k$_B$, and g$_i$ are the pressure, the Boltzmann constant, and the multiplicity of configuration i; g$_0$ is the multiplicity of the ground-state configuration. We considered ΔS$_i$ to be the same for all configurations based on the prior literature 2 . Eq. (7) enhances the model through the explicit computation of the entropy change with respect to the most stable configuration 30,31 . Hence, when the probability of higher-energy configurations becomes significant at the annealing temperature, it can be inferred that a substituted SFO sample contains multiple configurations rather than a single one. Consequently, any physical quantity of the SFO sample will be a weighted average of the corresponding property over these different configurations, $\bar{Q} = \sum_{i} P_{1000\,\mathrm{K}}(i)\, Q_{i}$, (8) where P$_{1000 K}$(i) and Q$_i$ are the formation probability at 1000 K and the value of a physical quantity Q for configuration i. The weighted average calculated by Eq. (8) represents the material's low-temperature property even though 1000 K is used for the computation, because the crystalline configurations of Cr-substituted SFO (CSFO) become distributed according to these values during the annealing process. III. RESULTS AND DISCUSSION In order to visualize the doping effect of the Cr$^{3+}$ ion on the structural and magnetic properties of SFO, we replaced Fe$^{3+}$ at various lattice locations. We found that this has a significant effect on the structural and magnetic properties of the system. We fully relaxed the volume, ionic positions, and shape of SFO and Cr-doped SFO. The crystal structure remains hexagonal under all circumstances. We carried out our further calculations after confirming that the optimized lattice parameters of pure SFO (a = 5.928 Å, c = 23.195 Å) match the experimental lattice constants (a = 5.890 Å, c = 23.182 Å), with less than 1% difference between the calculated and experimental values. For x = 1.0, the calculated lattice parameters (a = 5.930 Å, c = 23.076 Å) were found to be very consistent with the experimental lattice parameters (a = 5.902 Å, c = 23.024 Å) 32,33 . Fig. 2 shows the variation of the lattice parameters from theory and experiment explicitly. To compensate for the unavailability of an experimental lattice parameter at x = 0.5, we compare the experimental values at x = 0.6 with our calculated values at x = 0.5 in Fig. 2. The substitution of Cr in SFO has little effect on the lattice parameters or the unit-cell volume. Because the radius of Cr$^{3+}$ (0.630 Å) is similar to that of Fe$^{3+}$ (0.645 Å), this is an expected outcome. In this paper, we computed the various physical quantities while varying the concentration of Cr in the unit cell of SFO.
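The formation probabilities of Eq. (5) and the weighted average of Eq. (8) are straightforward to evaluate once the substitution energies and multiplicities are known. Below is a minimal Python sketch using made-up relative energies and example multiplicities rather than the values of Tables I-III.

import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def formation_probabilities(dE, g, T):
    """Eq. (5): P_i(T) = g_i exp(-dE_i/kT) / sum_j g_j exp(-dE_j/kT).

    dE: substitution energies relative to the ground state (eV)
    g:  multiplicities of the configurations
    """
    w = np.asarray(g, dtype=float) * np.exp(-np.asarray(dE) / (K_B * T))
    return w / w.sum()

def weighted_average(P, Q):
    """Eq. (8): weighted average of a physical quantity over configurations."""
    return float(np.dot(P, Q))

# Hypothetical x = 0.5 example: five inequivalent sites with made-up
# relative substitution energies (eV) and their crystallographic multiplicities.
sites = ["2a", "12k", "4f2", "2b", "4f1"]
dE = [0.00, 0.08, 0.15, 0.40, 0.55]  # placeholder values, not Table I data
g = [2, 12, 4, 2, 4]

P = formation_probabilities(dE, g, T=1000.0)
for s, p in zip(sites, P):
    print(f"P({s}) = {p:.3f}")

# Weighted average of, e.g., a hypothetical per-configuration Ms (emu/g):
Ms = [69.0, 68.5, 70.2, 71.0, 72.5]  # placeholder values
print("Ms (weighted) =", weighted_average(P, Ms))

With these placeholder numbers, the higher-multiplicity 12k-type configuration already dominates at 1000 K even though it is not the 0 K ground state, which is the qualitative behavior described in Sec. III.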
For x = 0.5, one Cr atom was substituted at one of the 24 Fe sites of the unit cell. Many of these Fe sites are equivalent under the crystallographic symmetry operations, leaving only five inequivalent structures. We label these inequivalent configurations in order to understand the site preference of the substituted Cr atom. Table I displays the results of our calculation for each of the five inequivalent configurations in ascending order of substitution energy (E$_{sub}$). The configuration [2a] has the lowest E$_{sub}$, followed by [12k] and [4f$_2$], which is consistent with the experimental outcomes 32,34 . We can conclude that the [2a] site is the most preferred site for the Cr atom at 0 K. We used Eq. (5) to calculate the probability of forming each configuration as a function of temperature. Because the change in volume between different configurations is so small (less than 0.3 Å³), we may discard the PΔV term as negligible (of the order of 10$^{-7}$ eV at a standard pressure of 1 atm) compared to the ΔE$_{sub}$(i) term in Eq. (6). The entropy change ΔS has two components: configurational, ΔS$_c$, and vibrational, ΔS$_{vib}$ 31 . ΔS$_{vib}$ is around 0.1-0.2 k$_B$/atom for binary substitutional alloys like the present system, and ΔS$_c$ is 0.1732 k$_B$/atom. As a result, we assign ΔS = 0.3732 k$_B$/atom. The resulting formation probabilities as a function of temperature are shown in Fig. 3. For x = 1.0, two Cr atoms were substituted at two of the 24 Fe sites of the unit cell. Many of these Fe sites are equivalent under the crystallographic symmetry operations, leaving only 15 inequivalent structures. These structures were found by fully optimizing the unit-cell shape, volume, and ionic positions. To understand the site preference of the substituted Cr atoms, we estimated the substitution energy, E$_{sub}$. Table II provides the results of our calculation for each of the fifteen inequivalent configurations in ascending order of substitution energy (E$_{sub}$). The configuration [2a, 2a] has the lowest E$_{sub}$, followed by [12k, 2a] and [12k, 12k]. We again used Eq. (5) to calculate the probability of forming each configuration as a function of temperature. Because the change in volume between different configurations is so small (less than 0.7 Å³), we may discard the PΔV term as negligible (of the order of 10$^{-7}$ eV at a standard pressure of 1 atm) compared to the ΔE$_{sub}$(i) term in Eq. (6). The resulting probabilities are shown in Fig. 4. The Cr$^{3+}$ ions have a 100% probability of occupying the [2a, 2a] configuration at 0 K, and its value declines sharply as the temperature rises, while the occupation probability of Cr$^{3+}$ in the [12k, 12k] configuration is maximal (66.4%) at 1000 K. At a typical annealing temperature of 1000 K for CSFO, the site-occupation probability of the [12k, 2a] configuration is 17.8%. In CSFO, the doped Cr$^{3+}$ ions are therefore more likely to replace Fe$^{3+}$ ions in the [12k, 12k] configuration than in the [2a, 2a] configuration because of its higher multiplicity. We exclusively utilize the formation probabilities at elevated temperature for computing weighted averages, because the arrangement of CSFO configurations during the annealing process will be distributed according to these values. Table III displays the weighted averages of the corresponding quantities as the concentration of Cr$^{3+}$ increases. The volume of CSFO decreases as we increase the concentration of Cr$^{3+}$ because of the smaller atomic radius of the Cr$^{3+}$ ion.
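The reported reversal of site preference with temperature follows directly from Eq. (5): a higher-multiplicity site overtakes a lower-energy site once the entropic prefactor wins. The sketch below estimates the crossover temperature for two competing configurations; the energy difference is a made-up placeholder, not a value from Tables I or II.

import math

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def crossover_temperature(delta_e, g_low, g_high):
    """Temperature T* at which two configurations become equally probable
    under Eq. (5): g_low = g_high * exp(-delta_e / (kB * T*)), i.e.
    T* = delta_e / (kB * ln(g_high / g_low))."""
    return delta_e / (K_B * math.log(g_high / g_low))

# Hypothetical: a 12k-type configuration lying 0.15 eV above the 2a-type
# ground state, with multiplicities 12 and 2.
t_star = crossover_temperature(delta_e=0.15, g_low=2, g_high=12)
print(f"T* = {t_star:.0f} K ({t_star - 273.15:.0f} deg C)")

With this placeholder ΔE the crossover lands near 700 °C, the same scale as the crossover temperature quoted in the abstract; the actual value depends on the computed ΔE$_{sub}$ and multiplicities.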
TABLE I. Physical properties of the inequivalent configurations of SrFe$_{12-x}$Cr$_x$O$_{19}$ with x = 0 and 0.5: doped amount (x), multiplicity (g), substitution energy (E$_{sub}$), total magnetic moment (M$_{tot}$), volume of the unit cell (V), saturation magnetization (M$_s$), magnetocrystalline anisotropy energy (E$_a$), uniaxial magnetic anisotropy constant (K$_1$), anisotropy field (H$_a$), and the formation probability at 1000 K (P$_{1000K}$). All values are for a double-formula-unit cell containing 64 atoms. The magnetic moment of CSFO also shows a decreasing trend as the amount of doped Cr$^{3+}$ increases, owing to the smaller magnetic moment of Cr$^{3+}$ (3 $\mu_B$) compared with Fe$^{3+}$ (5 $\mu_B$). Similarly, the saturation magnetization decreases monotonically as we increase the Cr$^{3+}$ concentration, which is consistent with K. Praveena et al. 35 . Although the value of the magnetocrystalline anisotropy (K$_1$) is slightly increased, the reduction in the saturation magnetization (M$_s$) is much more significant; their combined effect causes the anisotropy field (H$_a$) to increase as the fraction of Cr is raised. In Table IV, we provide the atomic contribution from each sublattice to the overall magnetic moment of CSFO. It can be observed that the total magnetic moment of the unit cell is slightly different from the sum of the local magnetic moments. This disparity arises from the contribution of the interstitial region to the overall magnetic moment. IV. CONCLUSIONS First-principles total-energy calculations based on density functional theory were used to study Cr-substituted SFO (SrFe$_{12-x}$Cr$_x$O$_{19}$) with x = 0.0, 0.5, and 1.0. The results showed that increasing the fraction of Cr atoms reduces the total magnetic moment of the SFO unit cell. This reduction in magnetization is caused by low-moment Cr atoms replacing Fe$^{3+}$ ions at two of the majority-spin sites, 2a and 12k, resulting in a negative contribution to the magnetization. Our substitution-energy and formation-probability analysis predicts that Cr atoms preferentially occupy the 2a, 12k, and 4f$_2$ sites, consistent with experimental observations. Increasing the fraction of Cr in SFO leads to a rise in the magnetic anisotropy field (H$_a$) despite a decrease in magnetization, together with a slight increase in the magnetocrystalline anisotropy (K$_1$). This increase in the anisotropy field (H$_a$) is supported by the formation-probability study, which shows that at higher temperatures (>700 °C) Cr$^{3+}$ ions are more likely to occupy the 12k site rather than the 2a site because of its higher multiplicity. ACKNOWLEDGMENTS This work was supported by the Center for Computational Science (CCS) at Mississippi State University. Computer time allocation was provided by the High-Performance Computing Collaboratory (HPC²) at Mississippi State University.
2023-06-22T06:42:53.101Z
2023-06-21T00:00:00.000
{ "year": 2023, "sha1": "6ff86419400842339e3f493f8e14003ab81181b9", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "6ff86419400842339e3f493f8e14003ab81181b9", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "extfieldsofstudy": [ "Physics" ] }
240289286
pes2o/s2orc
v3-fos-license
Adenocarcinoma mucinosum of exstrophy bladder: A rare case report Introduction Bladder exstrophy is a rare congenital anomaly, while mucinous adenocarcinoma of the bladder is a rare type of bladder cancer with aggressive behavior and an inadequate response to radiation and chemotherapy. In extremely rare cases, untreated bladder exstrophy can transform into mucinous adenocarcinoma of the bladder. Case presentation We report the case of a 41-year-old male with untreated bladder exstrophy that transformed into mucinous adenocarcinoma. The patient also had epispadias and a right inguinal hernia. Joint procedures were conducted to perform radical cystectomy, total penectomy, W-pouch continent urostomy, inguinal hernia repair, osteotomy, and wound closure with keystone and scrotal flaps plus a split-thickness skin graft (STSG). The patient progressed well after surgery; two months after the initial procedure, nephrostomies were performed because of pouch stenosis. Owing to the government's transportation limits and lockdown policy during the COVID-19 pandemic, the patient could not come to the hospital for routine follow-up, and he died nine months after surgery. Clinical discussion Bladder exstrophy is one of the risk factors for bladder cancer. Transformation of bladder exstrophy into mucinous adenocarcinoma is extremely rare; this is the first such case reported in Indonesia. Surgery, followed by a strict follow-up regimen, is the mainstay of treatment for this type of malignancy. Conclusion Mucinous adenocarcinoma is a very rare type of bladder exstrophy malignancy. A multidisciplinary approach is mandatory in these cases, and strict, regular follow-up is recommended. Introduction Bladder exstrophy is a rare congenital anomaly, with an incidence of one per 50,000 newborns; neglected cases are rarer still, with fewer than 90 cases reported in total [1-3]. One of the most common types of malignancy in bladder exstrophy is adenocarcinoma (80%). Of all these cases, the mucinous type is quite rare, comprising only two cases reported to date [4]. We report a case of mucinous adenocarcinoma in an untreated bladder exstrophy. We propose that the pathogenesis of the malignancy is neglect and progression from chronic severe inflammation. Previous studies have also shown that severe inflammation leads to progressive changes from mucinous metaplasia to mucinous adenoma to mucinous adenocarcinoma [4,5]. Case presentation We report, in compliance with the SCARE guidelines [6], the case of a 41-year-old male who had lived with bladder exstrophy and had no notable prior medical history or family illness. No procedure had ever been performed for the bladder exstrophy. The patient initially sought medical treatment for an enlarging lump in his right inguinal region. By the time the patient came to our hospital, the bladder was already ulcerated and infected. On physical examination, there was a bladder exstrophy-epispadias complex in the suprapubic region measuring 10 × 9 cm. The mass was granulated and bled easily over the entire surface of the bladder (Fig. 1). Penile epispadias was present, and both testes were normally descended. There was a well-defined lump in the right inguinal area with an incarcerated hernia. Other physical examinations were within normal limits. There was no known comorbid illness. Excisional biopsy under anesthesia was performed, and mucinous adenocarcinoma was found.
CT scan of the whole abdomen showed a lobulated, poorly defined mass in the suprapubic area and a hernia in the right inguinal region; there was no lymph node involvement or other organ metastasis (Fig. 1). The mass caused obstruction of both kidneys, producing moderate hydronephrosis and hydroureter in the right kidney and mild hydronephrosis in the left kidney. Chest X-ray was normal. On pelvic X-ray, a 6.8-cm symphyseal diastasis was found. During surgical preparation, the creatinine level increased, with worsening hydronephrosis found on ultrasonography. A multidisciplinary team board meeting was held with the digestive, plastic, and orthopedic departments, which concluded that a joint surgery by the respective departments should be performed after the urine had been diverted through nephrostomies placed by the urologist. Wide excision with a two-cm resection margin was done, together with removal of the distal ureters, prostate, and seminal vesicles, followed by lymph node dissection of both obturator nodes. There was evidence of penile skin infiltration, so the procedure was extended to total penectomy. The digestive department evaluated the digestive system and found no evidence of infiltration; the operation then continued with appendectomy and preparation of an ileal segment for the W-pouch. The osteotomy and plate-screw reconstruction with a fibular graft were done by the orthopedic team. Continent urinary diversion (a W-shaped ileal reservoir) was constructed using a 40-cm ileal segment. Finally, the defect was closed using a scrotal flap and a type-4 keystone flap with a split-thickness skin graft (STSG) harvested from the right thigh (Fig. 2). Histopathological examination revealed mucinous adenocarcinoma of the bladder infiltrating the prostate (shown in Fig. 3). There was no lymph node involvement. The penile and ureteric margins were free of tumor. No adjuvant therapy was given. Two months after the surgery, bilateral nephrostomies were performed by the urologist owing to stenosis of the W-pouch. Subsequent follow-up examinations were disrupted because of difficulty accessing our institution under the lockdown policy. The patient died nine months after surgery of unknown causes. Discussion Epidemiologically, global data show that bladder exstrophy occurs in approximately 2 of every 100,000 live births [1,3,7]. In Indonesia, the data are not well documented. In neglected cases of bladder exstrophy, malignancy typically arises in the third to fourth decade of life. Patients with bladder exstrophy have a 700-fold higher incidence of bladder cancer than the general population of the same age [8]. Such cases are reported in both developed and developing countries. Surgery to repair the exstrophy is usually advised at an early age. In this case, however, the patient's parents had chosen not to have the surgery because of the family's economic condition. Later, when the patient reached adolescence, he felt that he had accepted his condition and again did not consider surgery because he had no complaints. This case shows that patients with untreated bladder exstrophy can live healthily until other conditions emerge; in this case, the patient sought help for his hernia. Grignon et al. identified five subtypes of bladder adenocarcinoma: papillary, mucinous, signet-ring cell, adenocarcinoma not otherwise specified, and mixed [9]. Currently, only one comparable case has been reported in the literature, suggesting the extreme rarity of this variant.
The risk of carcinoma in untreated bladder exstrophy is highest in males, with cases occurring on average in the fourth decade of life; around 80% are adenocarcinoma, 12.5% squamous cell carcinoma, and 5% of unknown type [4,5]. In contrast, adenocarcinoma accounts for only 0.5-2% of bladder cancers in the general population. In the normal-bladder population, the 5-year survival rate of this type of cancer is 35-55% [4]. Hypotheses for the pathogenesis of these cases have been proposed, but the exact cause is difficult to determine owing to their rarity. Smeulders et al. described chronic inflammation and infection causing metaplastic transformation of the urothelium [4]. McIntosh et al. concluded that recurrent infection and environmental exposure result in glandular metaplasia that produces protective mucus; over time, however, malignant changes occur [5]. Surgery is the leading choice of treatment because systemic chemotherapy is ineffective in non-urothelial carcinoma. To date, no comprehensive post-treatment survival data have been published. Biopsy to characterize the tumor followed by radical cystectomy with a neobladder can be chosen, and permanent nephrostomy is another option if strict follow-up cannot be carried out for any reason. Multidisciplinary approaches are recommended, as abdominal defects are also an issue: imprecise repair can lead to tension wounds and accompanying morbidity. In our case, the abdominal defect was corrected with a scrotal flap, a type-4 keystone flap, and an STSG from the right thigh, with the help of our plastic surgery colleagues. Another choice is the tensor fascia lata flap, as suggested by Bango et al. We performed an extensive search and came across about 90 cases of adenocarcinoma in neglected bladder exstrophy patients, none of which was from Indonesia; there was only one case of the same type, from India. In 2016, Abhishek et al. reported a similar case in India of a 63-year-old male who had never sought help for his condition and presented with left flank pain; after a holistic examination, mucinous-type bladder cancer was found [2]. That patient underwent radical cystectomy, lymph node excision, and wide local excision with a 1-cm skin resection margin, with an ileal conduit for urinary diversion. For surgical wound closure, a rectus abdominis rotation flap was used because of the significant defect. The patient was not given adjuvant therapy and, up to the time the case was reported, had been followed up for one year without any complaint. There is no specific follow-up guideline for such cases, so we applied the general bladder-cancer follow-up plan. Slaton et al. recommended that physical examination, serum chemistry indices, and computed tomography be performed annually or at shorter intervals depending on the stage [10]. Austen et al. proposed annual endoscopy from the third postoperative year when a mixed urinary and fecal stream is used as a diversion [11]. In this case, periodic routine follow-up examinations were not properly conducted, as there were difficulties in accessing our center and seeking medical treatment because of the COVID-19 lockdown policy. The patient subsequently died nine months after surgery. Conclusion Untreated exstrophy and primary adenocarcinoma of the bladder are rare. Adenocarcinoma is the more common type of associated malignancy, but the mucinous subtype is much less common in previous reports. Surgical treatment is the first choice in such cases.
A multidisciplinary approach followed by carefully planned management is important for better holistic care in such patients. Ethics approval and consent to participate This case report has been exempted from ethical approval by the Universitas Indonesia Ethical Committee. Availability of data and materials The datasets generated and/or analysed during the current study are available on request. Funding This study received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors. Provenance and peer review Not commissioned, externally peer-reviewed. Declaration of competing interest The authors declare that they have no competing interests.
2021-10-18T15:09:42.303Z
2021-10-01T00:00:00.000
{ "year": 2021, "sha1": "21987feeef3506274a84bf18ea406f4e2ee3517c", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.ijscr.2021.106493", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "026b8a797aa422908ee5b79de54c6ae5af5cd0fa", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
235850307
pes2o/s2orc
v3-fos-license
Mechanical stretching induces the apoptosis of parametrial ligament fibroblasts via the actin cytoskeleton and Nr4a1 Background The anatomical positions of the pelvic floor organs are maintained mainly by ligaments and muscles. Long-term excessive mechanical tension on the pelvic floor tissue, beyond the endurance of the ligaments or muscles, leads to the occurrence of pelvic organ prolapse (POP). In addition, cytoskeletal reconstitution is a key process by which cells respond to mechanical stimulation. The aim of the present study was to investigate the protective effect of the actin cytoskeleton against mechanical stretching (MS)-induced apoptosis of parametrial ligament fibroblasts (PLFs) and the underlying mechanisms. Methods Eight women who underwent hysterectomy for reasons excluding the presence of malignant tumors and POP served as controls, and seven patients who underwent hysterectomy for advanced POP alone comprised the POP group. MS was applied with a four-point bending device. We examined the effects of MS on the actin cytoskeleton and on apoptosis of PLFs. Apoptosis was then measured after exposure to latrunculin A (Lat-A, a potent actin inhibitor) and after interference with Nr4a1. Results MS significantly induced apoptosis of PLFs from non-POP patients, which exhibited an apoptosis rate close to that of PLFs from POP patients, and the apoptosis rate was higher following latrunculin A treatment. In addition, Nr4a1 and Bax expression was increased, while Bcl-2 and caspase-3 expression was clearly decreased, after treatment with MS and Lat-A. However, MS-induced apoptosis of PLFs was reduced when siRNA targeting Nr4a1 was used to downregulate the level of Nr4a1. Conclusions These outcomes reveal a novel mechanism that links the actin cytoskeleton and apoptosis in PLFs via Nr4a1; this mechanism will provide insight into the clinical diagnosis and treatment of POP.
Full Text Due to technical limitations, full-text HTML conversion of this manuscript could not be completed. However, the manuscript can be downloaded and accessed as a PDF. Figures E, Protein levels in PLFs were determined by Western blotting and normalized to those of GAPDH. F, Band intensities were quantified by Quantity One. G, mRNA levels in PLFs were quantified by real-time RT-PCR and normalized to those of GAPDH. *** indicates p < 0.001. (CON: PLFs isolated from patients without POP; MS: PLFs isolated from patients without POP that were exposed to mechanical stretching; POP: PLFs isolated from patients with POP.) Figure 4 The effect of mechanical stretching on apoptosis after actin cytoskeleton disassembly. A, Cell apoptosis was detected by flow cytometry analysis; B, Quantified apoptosis rates in each group. C, PLFs were stained with phalloidin and imaged by fluorescence microscopy (magnification: 200×). D, Relative cell surface areas were quantified with ImageJ software. E, Protein levels in PLFs were determined by Western blot analysis and normalized to those of GAPDH. F, Band intensities were quantified by Quantity One. G, mRNA levels in PLFs were quantified by real-time RT-PCR and normalized to those of GAPDH. * indicates p < 0.05, ** indicates p < 0.01, and *** indicates p < 0.001 compared with the CON group; # indicates p < 0.05, ## indicates p < 0.01 compared with the MS group; ^ indicates p < 0.05, ^^ indicates p < 0.01, and ^^^ indicates p < 0.001 compared with the Lat-A group. (CON: PLFs isolated from patients without POP; MS: PLFs isolated from patients without POP and exposed to mechanical stretching; Lat-A: PLFs isolated from patients without POP and exposed to Lat-A; Lat-A+MS: PLFs isolated from patients without POP and exposed to Lat-A and mechanical stretching.) Figure 5 The effect of mechanical stretching on apoptosis after Nr4a1 deficiency. A, Cell apoptosis was detected by flow cytometry analysis. B, Protein levels in PLFs were determined by Western blot analysis and normalized to those of GAPDH. C, The levels of Nr4a1 after Nr4a1 gene interference. D, Quantified apoptosis rates in each group. E, Band intensities were quantified by Quantity One. F, mRNA levels in PLFs were quantified by real-time RT-PCR and normalized to those of GAPDH. * indicates p < 0.05, ** indicates p < 0.01, and *** indicates p < 0.001 compared with the CON group; # indicates p < 0.05, ## indicates p < 0.01 compared with the MS group; ^ indicates p < 0.05, ^^^ indicates p < 0.001 compared with the si-Nr4a1 group. (CON: PLFs isolated from patients without POP; MS: PLFs isolated from patients without POP and exposed to mechanical stretching; si-Nr4a1: si-Nr4a1-mediated transfection was used to silence Nr4a1 in PLFs; si-Nr4a1+MS: si-Nr4a1-treated cells treated with mechanical stretching.)
2020-03-05T11:02:47.961Z
2020-02-28T00:00:00.000
{ "year": 2020, "sha1": "58488260fc7cb7b23147bbef5d674dfe0dbe19de", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-15383/v1.pdf?c=1585628154000", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "0b38f705da9c9821c8b848b62657de8d72f16979", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Chemistry" ] }
205325308
pes2o/s2orc
v3-fos-license
Tropomyosin Dephosphorylation Results in Compensated Cardiac Hypertrophy* Background: Changes in the phosphorylation status of sarcomeric proteins allow rapid alteration of cardiac function. Results: Tropomyosin dephosphorylation results in myocyte hypertrophy with increases in SERCA2a (sarcoplasmic reticulum Ca2+ ATPase 2a) expression and phospholamban phosphorylation but without functional changes. Conclusion: Tropomyosin phosphorylation can influence calcium regulatory proteins and cardiac remodeling in response to stress. Significance: This is the first report detailing that altering tropomyosin phosphorylation affects calcium handling proteins. Phosphorylation of tropomyosin (Tm) has been shown to vary in mouse models of cardiac hypertrophy. Little is known about the in vivo role of Tm phosphorylation. This study examines the consequences of Tm dephosphorylation in the murine heart. Transgenic (TG) mice were generated with cardiac-specific expression of α-Tm with serine 283, the phosphorylation site of Tm, mutated to alanine. Echocardiographic analysis and cardiomyocyte cross-sectional area measurements show that α-Tm S283A TG mice exhibit a hypertrophic phenotype at basal levels. Interestingly, there are no alterations in cardiac function, myofilament calcium (Ca2+) sensitivity, cooperativity, or response to β-adrenergic stimulus. Studies of Ca2+ handling proteins show significant increases in sarcoplasmic reticulum ATPase (SERCA2a) protein expression and an increase in phospholamban phosphorylation at serine 16, similar to hearts under exercise training. Compared with controls, the decrease in phosphorylation of α-Tm results in greater functional defects in TG animals stressed by transaortic constriction to induce pressure-overload hypertrophy. This is the first study to investigate the in vivo role of Tm dephosphorylation under both normal and cardiac stress conditions, documenting a role for Tm dephosphorylation in the maintenance of a compensated or physiological phenotype. Collectively, these results suggest that modification of the Tm phosphorylation status in the heart, depending upon the cardiac state/condition, may modulate the development of cardiac hypertrophy.
Tropomyosin (Tm) is an α-helical coiled-coil protein involved in the Ca2+-dependent regulation of the thin filament of the sarcomere. Upon binding of Ca2+ to the troponin complex, a conformational change occurs that allows the Tm filament to move away from the myosin-head binding site on the sarcomeric actin filament. Previous and recently published studies show that striated muscle α-Tm is phosphorylated at one site, the penultimate amino acid, serine 283, by several potential kinases including tropomyosin kinase, protein kinase A, and protein kinase C (PKC) (1-8). During fetal development, 70% of cardiac α-Tm in rat hearts is phosphorylated, which decreases to ~30% post-natally (9). In vitro studies investigating the functional role of Tm phosphorylation indicate that low phosphorylation levels decrease the ability of α-Tm to polymerize in a head-to-tail fashion; conversely, increasing phosphorylation enhances the interaction between the C- and N-terminal ends of adjoining Tm molecules. Additionally, changes in α-Tm phosphorylation status seem to alter sarcomeric function, as shown by differential function of the actin-activated myosin S1-ATPases (4,10). Taken together, these in vitro data suggest that altering phosphorylation status affects the ability of Tm to cooperatively activate the thin filament upon binding of Ca2+ to troponin (Tn). In recent years, in vivo studies performed on animal models indicate that changes in the phosphorylation status of sarcomeric proteins such as troponin I (TnI), myosin binding protein C (MyBPC), and the regulatory myosin light chain result in alterations in Ca2+ sensitivity of the myofilament and changes in cardiac function and may play a role in the development of cardiac disease (11-15). Investigation of a dilated cardiomyopathy transgenic (TG) mouse model bearing a human α-Tm mutation (E54K) shows that phosphorylation levels of Tm decrease relative to non-transgenic (NTG) littermates (16,17). Additionally, phosphorylation is increased in the familial hypertrophic cardiomyopathy α-Tm N175D mice generated by this laboratory, indicating a link between striated muscle Tm phosphorylation, sarcomeric function, and cardiac disease (18). 3 To investigate the in vivo effect of decreased or ablated Tm phosphorylation, we substituted serine 283 with an alanine (S283A), removing the phosphorylation site and effectively inhibiting the ability of α-Tm to be phosphorylated. Several TG mouse lines expressing this α-Tm S283A mutation were generated and analyzed. These TG hearts show no changes in functional parameters when investigated by echocardiography, myofilament Ca2+-tension relations, or work-performing heart studies during β-adrenergic stimulation. However, these animals do have sex-specific differences in heart morphology, likely due to the cardioprotective effects of estrogen that have been described previously (19,20). Male TG mice show a hypertrophic phenotype as measured by echocardiography and supported by cardiomyocyte cross-sectional area measurements, whereas female mice do not.
Male TG mice also show significant modifications in proteins controlling Ca2+ fluxes, such as increases in the expression of the sarcoplasmic reticulum Ca2+ ATPase (SERCA2a) and phosphorylation of phospholamban (PLN). Thus, phosphorylation of α-Tm may be part of a signaling cascade that results in changes in Ca2+ handling protein levels, which may explain the tight regulation of α-Tm phosphorylation levels. Additionally, when male TG animals are subjected to pressure overload via transaortic constriction (TAC), they exhibit a significant increase in hypertrophy as well as functional defects, including a striking decrease in fractional shortening compared with NTG littermates. This is the first investigation to show that alterations in the phosphorylation status of a thin filament protein, namely α-Tm, can cause a moderate hypertrophic response and increase SERCA2a expression and PLN phosphorylation. Taken with our previous findings in cardiomyopathy models, these results firmly establish that α-Tm phosphorylation is necessary for an appropriate response during cardiac disease. EXPERIMENTAL PROCEDURES Generation of S283A α-Tm TG Mice-Mouse striated muscle α-Tm cDNA was subjected to QuikChange II site-directed mutagenesis (Agilent Technologies) utilizing the primer 5′-CAC GCT CTC AAC GAT ATG ACT GCC ATA TAA GTT TCT TTG CTT CAC-3′, mutating the penultimate serine to an alanine. The mutation was verified through sequencing of the construct by Genewiz. The α-Tm S283A construct was then cloned into a vector containing the α-myosin heavy chain (α-MHC) promoter and a human growth hormone 3′-UTR and poly-A tail sequence (21). Transgenic mice were generated using the FVB/N strain as previously described (22). Founder mice were identified using PCR. Copy number was determined using genomic Southern blot analysis. Nucleotide sequencing of TG mouse DNA verified the sequence of the α-Tm S283A transgene. Genotyping-DNA samples were obtained from 14-day-old mice, and PCR was utilized to determine which animals carried the transgene. Primers specific for the transgene are α-MHC forward 5′-GCC CAC ACC AGA AAT GAC AGA-3′ and α-Tm reverse 5′-TCC AGT TCA TCT TCA GTG CCC-3′. GAPDH is used as an internal control, and the primers are as follows: GAPDH forward 5′-AGC GAG CTC AGG ACA TTC TGG-3′ and GAPDH reverse 5′-CTC CTA ACC ACG CTC CTA GCA-3′. Transgenic Protein Quantification and Western Blot Analyses-Myofibrillar proteins were extracted from NTG and TG male mouse ventricles as previously described (22). 30 μg of the myofibrillar protein preparations were separated on a 10% SDS-PAGE gel. The Tm band was excised from the gel, reduced, alkylated, and subjected to tryptic digestion. Recovered peptides were desalted with a μ-C18 ZipTip and spotted onto a matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) target plate. All spectra were acquired in reflector positive-ion mode on an ABSciex 4800 MALDI-TOF/TOF instrument. The percentage of TG protein was calculated after normalization and subtraction of background contributions. Western blot analyses on myofibrillar protein preparations (4 μg) from 3-month-old male NTG and TG hearts were conducted using the Tm-specific antibody CH1 (Sigma), a Tm Ser-283 phosphorylation-specific antibody generated for this laboratory (YenZyme), and the sarcomeric α-actin antibody 5c5 (Sigma) as a loading control. To confirm that the mutant Tm was properly assembled into the sarcomere, cytoplasmic protein fractions (4 μg) isolated from 3-month-old male NTG and TG mice were examined by Western blot analyses.
Two-dimensional Isoelectric Focusing-PAGE-Two-dimensional isoelectric focusing-PAGE was performed on mouse hearts as previously described, with modifications (17). 3 μg of myofibrillar preparations were resolved on a 24-cm, pH 4.0-5.0 immobilized pH gradient isoelectric focusing strip. After isoelectric focusing, the samples were resolved in the second dimension on a 10% SDS-PAGE gel and transferred to a nitrocellulose membrane for Western blotting. The Tm muscle-specific antibody CH1 was used to visualize both the unphosphorylated and the phosphorylated Tm species. The percentage of Tm phosphorylation was calculated as (phosphorylated Tm)/(phosphorylated Tm + nonphosphorylated Tm) × 100, where the values for the two protein species were determined using Image Quant v5.1. Histopathological Analyses and Cardiomyocyte Cross-sectional Area Analyses-Male mouse hearts at 3, 6, and 9 months were analyzed. Heart weight to body weight ratios were calculated to evaluate for the presence of cardiac hypertrophy. For histological analyses, the hearts were stained with hematoxylin/eosin or Masson's trichrome and evaluated for the presence of necrosis, fibrosis, myocyte disarray, and calcification. Images were taken on a Nikon SM2-2T dissecting microscope and an Olympus BX4C compound microscope. To quantify changes in cardiomyocyte cross-sectional area, tissue sections were stained with wheat germ agglutinin from Triticum vulgaris conjugated with Texas Red (Sigma) to visualize cardiomyocyte membranes. DAPI was used to stain the nuclei of cardiomyocytes. Randomized images of the left ventricular free wall were taken using a fluorescent camera mounted on a Zeiss Axioskop, and the cardiomyocyte cross-sectional area was measured using ImageJ (NIH). Quantitative Real-time PCR Analyses-RNA was isolated from 3-month-old NTG and TG male mouse ventricular tissue using TRIzol reagent (Invitrogen). Real-time RT-PCR was performed using an Opticon 2 real-time RT-PCR machine (MJ Research). Each sample was measured in triplicate, and each experiment was repeated twice. Target mRNA was normalized to GAPDH expression as described by Pfaffl (23). Echocardiographic and Pressure Overload Measurements-Echocardiographic measurements were performed utilizing a 30-MHz high-resolution transducer (Vevo 770 high-resolution imaging system) after anesthetization of 3-month-old mice as previously described (24). Echocardiographic dimensions and thicknesses were taken from two-dimensionally guided M-mode from the parasternal long-axis view in triplicate on NTG and TG 12-16-week-old mice. Fractional shortening (in %) was obtained by the formula 100 × (LVIDd − LVIDs)/LVIDd, where LVIDd and LVIDs are the left ventricular (LV) internal dimensions in diastole and systole, respectively. The relative wall thickness index was calculated by the formula (LVAW + LVPW)/LVIDd, where LVAW and LVPW indicate anterior and posterior wall thicknesses, respectively, and LVIDd is the LV diastolic internal dimension. The LV outflow tract (LVOT) diameter (D) was measured to calculate the LVOT cross-sectional area (LVOT CSA = π(D/2)²). The velocity-time integral (VTI, in cm) was calculated by integrating the Doppler velocities in the LVOT. The product LVOT CSA × VTI is the LV stroke volume, which, multiplied by heart rate, gives the cardiac output (ml/min). 12-16-week-old male mice of both genotypes were subjected to TAC or sham operation as previously described (25). Echocardiographic measurements were taken in M-mode.
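The derived echocardiographic quantities above are simple arithmetic on the M-mode and Doppler measurements. The following minimal Python sketch collects them in one place; the input values are hypothetical illustrations, not data from this study.

import math

def fractional_shortening(lvidd_mm, lvids_mm):
    """FS (%) = 100 * (LVIDd - LVIDs) / LVIDd."""
    return 100.0 * (lvidd_mm - lvids_mm) / lvidd_mm

def relative_wall_thickness(lvaw_mm, lvpw_mm, lvidd_mm):
    """RWT index = (LVAW + LVPW) / LVIDd."""
    return (lvaw_mm + lvpw_mm) / lvidd_mm

def cardiac_output_ml_min(lvot_diameter_cm, vti_cm, heart_rate_bpm):
    """CO = LVOT CSA * VTI * HR, with CSA = pi * (D/2)^2 in cm^2;
    CSA * VTI gives the stroke volume in ml (cm^3)."""
    csa_cm2 = math.pi * (lvot_diameter_cm / 2.0) ** 2
    stroke_volume_ml = csa_cm2 * vti_cm
    return stroke_volume_ml * heart_rate_bpm

# Hypothetical murine values:
print(f"FS  = {fractional_shortening(4.0, 2.7):.1f} %")
print(f"RWT = {relative_wall_thickness(0.8, 0.8, 4.0):.2f}")
print(f"CO  = {cardiac_output_ml_min(0.12, 2.2, 450):.1f} ml/min")

With these illustrative inputs the fractional shortening is about 33% and the cardiac output about 11 ml/min, both in the range commonly reported for healthy adult mice.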
Because only male TG mice exhibit changes in echocardiographic measurements, only male animals were used in this study. Pressure gradients across the constriction were measured using Doppler echocardiography as previously described (25). Two weeks post-surgery, mice were again subjected to echocardiography and sacrificed. Calcineurin/Protein Phosphatase Activity Assay-Calcineurin activity (CnA), also known as protein phosphatase 2B activity, was measured using a calcineurin/protein phosphatase 2B activity kit (Calbiochem). Cardiac homogenates from 3-month-old male NTG and TG hearts were used. CnA activity is measured as the rate of dephosphorylation of a synthetic peptide in the presence and absence of EGTA, okadaic acid, and EGTA with okadaic acid. Phosphate release was measured with the colorimetric Green Reagent (Calbiochem). Measurements of Ca2+-dependent Activation of Tension-Fiber bundles from papillary muscles of 5-month-old male NTG and TG hearts were detergent-extracted in high-relaxing buffer as described previously (26) and mounted between a force transducer and a micromanipulator. The sarcomere length was adjusted to 2.0 and 2.2 μm using laser diffraction patterns, and isometric tension was measured. Fiber bundles were then subjected to sequential Ca2+ solutions (pCa), and isometric tension was again measured. All experiments were carried out at 22 °C. Isolated Work-performing Heart Model-Three-month-old male NTG and TG mice were anesthetized and treated with heparin to prevent microthrombi as previously described (27). The aorta was cannulated, preserving the aortic valve and the coronary artery. To measure intraventricular systolic and diastolic pressures, an intraventricular catheter was inserted into the left ventricle. A cannula was also inserted into the left pulmonary vein, allowing the direction of the perfusate to be switched from retrograde (Langendorff) to anterograde (working). COBE pressure transducers were utilized to measure aortic pressure, atrial pressure, and left ventricular pressure, which were recorded using a Grass polygraph and a digital acquisition system. Statistics-All statistics are presented as the mean ± S.E. Where appropriate, paired and unpaired t tests, analysis of variance with Bonferroni correction, and analysis of variance with repeated measures were used to detect significance. Significance was set at p < 0.05. RESULTS Generation of α-Tm S283A TG Mice-To determine the functional significance of Tm phosphorylation, we generated TG mice in which the Tm phosphorylation site (serine residue 283) was replaced with a non-phosphorylatable alanine residue (S283A). The transgene construct used to generate α-Tm S283A TG mice is shown in Fig. 1A. Multiple TG lines were generated and studied. Line 2 has the highest TG mRNA expression and the second highest copy number of all transgenic animals generated (17 copies), as determined by genomic Southern blot analysis. Forward and reverse sequencing of the construct indicates no mutations or deletions in the transgene. Cardiac α-Tm S283A Protein Expression and Phosphorylation in Transgenic Mice-Often, mutations in Tm isoforms lead to differential migration on SDS-PAGE gels (16,18,22,28). However, because serine is only 16 daltons larger than alanine and has a nearly identical isoelectric point, expression levels of TG and endogenous protein cannot be separated using traditional methods.
Instead, myofibrillar protein preparations of age-matched NTG and TG mouse hearts, as well as recombinantly expressed NTG or TG protein, were resolved by a combination of SDS-PAGE and MALDI-TOF analyses (Fig. 1B). The ratio of serine-containing peptides (endogenous Tm) to alanine-containing peptides (TG Tm) is calculated after normalization and background subtraction. As an additional control, the peptides corresponding to both the serine and alanine profiles were further fragmented to ensure that the proper tryptic peptide was being analyzed. Line 2 has ~93.7% TG protein expression, Line 25 has ~86% TG protein expression, and Line 97 has ~88% TG protein expression (Fig. 1C), with a concomitant decrease in NTG protein, maintaining total Tm levels at 100%. Investigation of the cytosolic fraction shows no significant accumulation of either endogenous or TG Tm, indicating that the TG protein is properly incorporated into the myofibril (data not shown). Additionally, myofibrillar protein preparations run on an SDS-PAGE gel show that all myofibrillar proteins are present in the proper ratio in all three TG lines, indicating that the myofibrils are properly assembled and there is no change in total Tm levels (data not shown). Tm Phosphorylation in NTG and α-Tm S283A Mouse Hearts-To study Tm phosphorylation in TG mice, it was necessary to establish the basal level of Tm phosphorylation in NTG hearts using two-dimensional isoelectric focusing-PAGE. Results show that an unphosphorylated and a singly phosphorylated species of Tm appear in NTG heart samples (Fig. 2A). Upon calf intestinal phosphatase treatment, the phosphorylated Tm protein species is lost. These results are in agreement with previously published studies that identify Ser-283 as the phosphorylation site in striated muscle Tm (1,8,9,17). Further analysis shows a trend of decreasing Tm phosphorylation from 6 weeks to 5 months of age, with an average of ~30%. At 15 months of age, animals show a significant increase in Tm phosphorylation, indicating a possible return to fetal gene programs due to senescence (Fig. 2B) (29-31). To determine the phosphorylation status of Tm in TG myofibrillar preparations, we generated a Tm Ser-283 phosphorylation-specific antibody. As seen in Fig. 2, C and D, there is a clear decrease in the phosphorylation status of Tm in the TG myofibrillar preparations compared with the NTG preparations. As TG Line 2 had the greatest decrease in phosphorylation and exhibited the same phenotype as the other TG lines, we chose to focus on Line 2 TG mice. When considering the phosphorylation status of these S283A TG mice, it is important to remember that endogenous Tm in NTG mice is phosphorylated at 30%. Line 2 has ~5-fold less (or 80% less) phosphorylation than NTG littermates, corresponding to the 6% endogenous Tm available for phosphorylation in this line. We believe Line 25 has more endogenous Tm phosphorylation because of its lower level of transgene expression. These results suggest that most, if not all, of the endogenous Tm in the TG mice is being phosphorylated. Gravimetrics and Cardiac Morphology of α-Tm S283A TG Hearts-Morphological analyses of the left ventricular wall show a very mild increase in cardiomyocyte disarray and disorganization, as indicated by centrally located nuclei and partial loss of the typical cobblestone shape of the cardiomyocytes (Fig. 3A).
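Both quantifications used above (the percentage of TG protein from MALDI-TOF peptide peaks and the percentage of phosphorylated Tm from two-dimensional gels) reduce to a normalized two-species intensity ratio. Below is a minimal Python sketch with invented intensities; the exact normalization and background-correction steps of the original analysis are not reproduced here.

def percent_of_total(signal_a, signal_b, background=0.0):
    """Generic two-species quantification:
    % A = 100 * (A - bg) / ((A - bg) + (B - bg))."""
    a = max(signal_a - background, 0.0)
    b = max(signal_b - background, 0.0)
    return 100.0 * a / (a + b)

# Hypothetical MALDI-TOF peak areas for the Ala (TG) and Ser (endogenous)
# versions of the C-terminal tryptic peptide:
pct_tg = percent_of_total(signal_a=9.4e5, signal_b=6.3e4, background=1.0e3)
print(f"TG Tm = {pct_tg:.1f} % of total Tm")

# Hypothetical 2D-gel spot densities for phospho- and unphospho-Tm:
pct_p = percent_of_total(signal_a=3.1e4, signal_b=7.2e4)
print(f"phospho-Tm = {pct_p:.1f} % of total Tm")

With these made-up inputs the sketch returns roughly 94% TG protein and roughly 30% phosphorylation, i.e., numbers on the scale of those reported for Line 2 and for basal NTG hearts.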
Staining the membranes with wheat germ agglutinin and measurement of the cross-sectional area show a significant increase in TG cardiomyocyte area (445.5 ± 17.4 μm² versus 686.9 ± 66.9 μm², p < 0.05, NTG and TG, respectively) (Fig. 3B). Gravimetric analysis was performed on TG animals from 1 month to 9 months of age. Interestingly, the results show no changes in heart weight to body weight ratios, likely due to the moderate nature of this hypertrophy (Fig. 3C). There are no differences in the survival of NTG and TG mice. Cardiac Function of α-Tm S283A TG Hearts-To determine whether the relationship between Ca2+ concentration and force-tension development is altered in myofilaments at the sarcomeric level in TG hearts with significantly decreased phosphorylation of Tm, we analyzed skinned fiber bundles from the papillary muscles of 5-month-old hearts. No significant changes in absolute tension or normalized tension in NTG versus TG mice were found (Table 1, Fig. 4, A and B). Additionally, there are no significant differences in pCa50 or the Hill coefficient (nH), a measure of the cooperative activation of the thin filament of the sarcomere. The work-performing heart model was utilized to determine the ex vivo functional effects of the decrease in Tm phosphorylation status. These measurements were performed in mice at 3 months of age. At basal levels, there are no changes in contraction and relaxation parameters in TG hearts. Additionally, when isoproterenol is administered to determine whether the β-adrenergic response is impaired, there are no significant differences in contraction and relaxation (Fig. 4, C and D). To assess whether decreasing the phosphorylation level of α-Tm has an effect on in vivo cardiac function, we performed echocardiographic analysis on 3-month-old NTG and TG mice. There are no physiological changes in heart function between the NTG and TG mice, as shown by fractional shortening, cardiac output, or ejection fraction (Table 2). However, there are sex-specific differences in cardiac morphology. Male TG animals show significant increases in LV mass, LV anterior wall thickness, LV posterior wall thickness, and LV relative wall thickness index, indicating that TG mice have a hypertrophic phenotype without attendant functional defects. Female TG mice show no changes when compared with female or male NTG hearts. Differences in the development of cardiac hypertrophy between sexes have been previously noted (20,32). Thus, the increase in cardiomyocyte area and left ventricular hypertrophy, with no change in heart weight to body weight ratio or female cardiac enlargement, demonstrates the moderate nature of this hypertrophic phenotype. Gene Expression and Protein Changes in α-Tm S283A TG Hearts-Given that histological and echocardiographic analyses indicate that TG mice exhibit a moderate hypertrophic phenotype at 3 months of age, altered gene expression was determined in α-Tm S283A TG hearts. Real-time RT-PCR analysis of the RNA isolated from ventricular tissue indicates a trend toward an increase in β-MHC, brain natriuretic peptide (BNP), and atrial natriuretic peptide (ANP) without statistically significant increases (Fig. 5A). Genes involved in cardiomyocyte Ca2+ handling were also examined. Interestingly, there are no changes in the gene expression of SERCA2a, the L-type Ca2+ channel, NCX, PLN, or RyR2 (Fig. 5B).
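Returning to the skinned-fiber measurements above: the pCa50 and Hill coefficient (nH) values compared in Table 1 are obtained by fitting the pCa-tension data to the Hill equation. Below is a minimal Python sketch of such a fit on invented, illustrative data points, not the study's measurements.

import numpy as np
from scipy.optimize import curve_fit

def hill(ca, pca50, n_h):
    """Relative tension as a function of [Ca2+]:
    P/P0 = [Ca]^nH / ([Ca]^nH + Ca50^nH), with Ca50 = 10**(-pCa50)."""
    ca50 = 10.0 ** (-pca50)
    return ca ** n_h / (ca ** n_h + ca50 ** n_h)

# Hypothetical skinned-fiber data: pCa values and normalized tensions.
pca = np.array([6.4, 6.2, 6.0, 5.8, 5.6, 5.4, 5.0, 4.5])
tension = np.array([0.03, 0.10, 0.28, 0.55, 0.78, 0.91, 0.98, 1.00])

popt, _ = curve_fit(hill, 10.0 ** (-pca), tension, p0=(5.8, 3.0))
print(f"pCa50 = {popt[0]:.2f}, Hill coefficient nH = {popt[1]:.2f}")

A leftward shift of the fitted pCa50 would indicate increased Ca2+ sensitivity, and a larger nH steeper, more cooperative activation; neither parameter differed between NTG and TG fibers here.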
However, there is a significant increase (p < 0.05) in the gene expression of MCIP1, a protein involved in modulating CnA activity in vivo (33). To determine whether the real-time RT-PCR levels of Ca2+ handling genes correlate with the corresponding protein expression in the TG myocardium, protein expression from whole hearts was determined by Western blot analysis (Fig. 5, C and D). Results indicate that phosphorylation at PLN Ser-16 and SERCA2a protein expression are increased by >30% over NTG levels. There are no changes in total PLN, phosphorylation at PLN Thr-17, TnI, phosphorylation at TnI Ser-23/24, or CnA. The lack of increased SERCA2a gene expression by real-time RT-PCR analysis, compared with the significant increase in protein expression, suggests that increased protein stability or translation may be operative. MCIP1 gene expression is utilized as a marker of alterations in CnA protein expression or activity. Although MCIP1 gene expression is increased, there is no increase in CnA expression. However, it is possible that CnA activity may be altered without altering protein expression. Calcineurin Activity Assay-As MCIP1 can be both a facilitator and an inhibitor of CnA activity, it is necessary to determine whether the change in mcip1 gene expression alters CnA activity, as CnA is an important regulator in the Ca2+ handling process. A CnA/protein phosphatase 2B activity assay (Calbiochem) was performed on whole-heart preparations from 3-month-old mouse hearts. There are no significant changes in CnA activity (Fig. 5E). Cardiac Function in α-Tm S283A Pressure Overload Mouse Hearts-To determine the effect of decreased α-Tm phosphorylation during cardiac stress and disease, 12-16-week-old TG mouse hearts were subjected to TAC along with NTG littermates and sham-operated animals of both genotypes. Animals were subjected to echocardiography before surgery as well as 2 weeks after TAC and were then sacrificed for histology and gravimetric studies. NTG and TG TAC-operated animals show significantly increased pressure gradients at 2 weeks, indicating the efficacy of the pressure overload model (Fig. 6A). FIGURE 2. A, two-dimensional isoelectric focusing-PAGE gels show that Tm has one phosphorylation site, indicated by the arrow (upper panel), that can be removed by calf intestinal phosphatase treatment (lower panel). Cardiac myofibrillar protein preparations were probed with the CH1 striated muscle Tm antibody. B, the percent of total Tm phosphorylated in NTG mice was measured using two-dimensional isoelectric focusing on myofibrillar preparations taken at 1.5, 3, 5, and 15 months of age. C, shown is Western blot analysis of α-Tm phosphorylation (pTm) from hearts at 3 months of age. n = 3. D, shown is quantification of the phosphorylation levels of α-Tm found in panel C.
These data indicate that significantly decreasing the phosphorylation status of α-Tm impairs the ability of the myocardium to properly respond to acute stress.

Gravimetrics and Cardiac Morphology in α-Tm S283A TAC- and Sham-operated Animals-NTG TAC-operated animals have a significant increase in heart weight to body weight when compared with NTG sham-operated animals (p < 0.0001) (Fig. 6E). Additionally, TG TAC-operated animals show an increase in heart weight to body weight ratios compared with TG sham-operated animals (p < 0.05). Hematoxylin/eosin staining of TG sham-operated hearts shows a mild increase in disorganization compared with NTG sham-operated hearts. Masson's Trichrome staining of both NTG and TG sham-operated hearts shows no significant increases in the deposition of fibrotic tissue in the left ventricular free wall (Fig. 6G, i and ii). NTG TAC- and TG TAC-operated heart sections stained with hematoxylin/eosin show cardiomyocyte disorganization and centrally located nuclei. Both NTG and TG TAC-operated hearts stained with Masson's Trichrome show increases in fibrosis. Cardiomyocyte cross-sectional analyses demonstrate significant increases in TG sham, NTG TAC-, and TG TAC-operated hearts (p < 0.001) compared with NTG sham mice (Fig. 6Giii). TG TAC cardiomyocytes show greater increases in size compared with both TG sham and NTG TAC cardiomyocytes (p < 0.0001 and p < 0.01, respectively) (Fig. 6F).

DISCUSSION

Post-translational modifications, such as alterations in the phosphorylation status of sarcomeric and Z-disc proteins, can result in altered cardiac contractility with progression to disease and death (11-14). This is the first in vivo study investigating the functional role of cardiac α-Tm phosphorylation. To address this, the single Tm phosphorylation site, serine 283, was changed to an alanine, and TG animals were generated for study. In vivo assessment of basal cardiac function of α-Tm S283A TG mice shows that the hearts exhibit a moderate compensated hypertrophic phenotype with an increase in myocyte size due to a stimulus initiated by decreased Tm phosphorylation. It is possible that the increase in cardiomyocyte size occurs in response to mechanical defects induced by autophagy or apoptosis, two cell death processes involved in the transition from compensated to decompensated hypertrophy (34, 35). However, the data suggest this compensatory response is an attempt to normalize LV wall stress and preserve pump function, which may point toward the role that autophagy plays in maintaining cell and tissue homeostasis (36).

FIGURE 3. A, immunohistochemistry of 3-month α-Tm S283A TG hearts stained with hematoxylin and eosin (i) and wheat germ agglutinin (ii). B, cardiomyocyte cross-sectional area measurements. *, p < 0.05; TG, n = 4; NTG, n = 4. C, heart weight (HW) to body weight (BW) ratios of 3-month-old mice. TG, n = 6; NTG, n = 6.

Increases in LV mass, LVAW, LVPW, and relative wall thickness, together with increases in cardiomyocyte cross-sectional area and a lack of functional defects in contractility, indicate the TG hearts are in a compensated or adaptive state of hypertrophy in response to decreased Tm phosphorylation. Further characterization of contractile function assayed by skinned fiber preparations indicates that TG myofibers develop force-tension relations that are similar to NTG controls. Also, TG myofibers lacking Tm phosphorylation do not exhibit changes in Ca2+ sensitivity or cooperative activation of the thin filament.
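The pCa50 and nH values discussed above are conventionally extracted by fitting the normalized tension-pCa data to the Hill equation. The study's own fitting procedure is not reproduced here, so the snippet below is a generic sketch on synthetic data, assuming the usual Hill parameterization.

```python
# Generic sketch: fit normalized tension-pCa data to the Hill equation
#   P/P0 = 1 / (1 + 10**(nH * (pCa - pCa50)))
# to recover pCa50 (Ca2+ sensitivity) and nH (cooperativity).
# The data points are synthetic, not measurements from this study.
import numpy as np
from scipy.optimize import curve_fit

def hill(pca, pca50, n_h):
    return 1.0 / (1.0 + 10.0 ** (n_h * (pca - pca50)))

pca = np.array([6.8, 6.4, 6.0, 5.8, 5.6, 5.4, 5.0, 4.5])
tension = hill(pca, 5.7, 3.0) + np.random.default_rng(0).normal(0.0, 0.01, pca.size)

(pca50_fit, n_h_fit), _ = curve_fit(hill, pca, tension, p0=[5.8, 2.0])
print(f"pCa50 = {pca50_fit:.2f}, nH = {n_h_fit:.2f}")  # ~5.70 and ~3.0
```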
Tm phosphorylation has long been speculated to play a role in the modulation of the structural and functional properties of the thin filament, given that the site of phosphorylation, serine 283, is located in the 7-11-amino acid head-to-tail overlap region between neighboring Tm molecules. Although cooperativity is not entirely understood, studies suggest that Tm head-to-tail interactions between contiguous Tm dimers play a role. A study by Gaffin et al. (37) indicates that substitution of negatively charged amino acids at the C terminus causes a significant change in the distance between Tm monomer strands and possibly alterations in contiguous Tm molecule interactions. In vitro studies investigating the striated muscle Tm phosphorylation site indicate that changing the phosphorylation status of α-Tm alters the head-to-tail interaction between neighboring Tm molecules (4). Contrary to expectation, removing the phosphorylation of α-Tm at Ser-283 in an in vivo system does not result in any alterations in cooperative activation of the thin filament. Thus, phosphorylation may not be a major modulator of cooperative spread of activation in the myofilament lattice and may be more significantly related to the actin filament independent of Tm head-to-tail interactions (38). NMR studies of the interaction between the N- and C-terminal dimers indicate that the last 2-5 amino acids at the C terminus are flexible (39). The fact that the very last C-terminal residues are mildly disordered may offer some explanation as to why loss of additional negative charge in the form of phosphorylated serine has no effect on cooperativity or Tm head-to-tail interactions. The phosphorylated Ser-283 may be in an area too flexible to allow for strong interaction with residues in the N terminus of the subsequent Tm molecule.

TABLE 1. Parameters involved in Ca2+-tension relations in skinned fiber bundles.

The lack of change in cardiac function, myofilament cooperativity, and Ca2+ sensitivity, coupled with the development of compensated hypertrophy in the TG animals, warranted an investigation into the possible mechanisms involved in the hypertrophic response. The gene expression profile of the α-Tm S283A TG hearts showed that Mcip1 significantly increases. Mcip1 gene expression is utilized as a marker for CnA activity and has been alternatively shown as both a facilitator and inhibitor of CnA activity in vivo (33, 40, 41). Increases in CnA activity have been shown to induce cardiac hypertrophy in mouse models, and conversely, the development of a hypertrophic phenotype can be prevented via CnA inhibition (42, 43).

FIGURE 4. A, normalized force in skinned fiber bundles from NTG and TG hearts. B, Ca2+-tension relations in NTG- and TG-skinned fiber bundles. C and D, isoproterenol dose-response curves in TG and NTG mouse hearts. Hearts were subjected to isolated heart analysis with increasing concentrations of isoproterenol (10^-11 to 10^-6 mol/liter).

Interestingly, in the α-Tm S283A TG animals, the increase of Mcip1 mRNA did not result in changes of CnA expression or activity, indicating another effector downstream of Tm phosphorylation loss may be responsible for the hypertrophic phenotype. As numerous studies have demonstrated the importance of Ca2+ in the modulation of cardiac hypertrophy, we examined whether alterations in expression of Ca2+ handling proteins occurred in TG mice.
Although there are no changes in total TnI expression or TnI phosphorylation at amino acids 23 or 24, there are alterations in sarcoplasmic reticulum proteins. Increasing SERCA2a protein expression and/or activity can rescue multiple disease phenotypes and improve myofibrillar efficiency and contractile parameters both in human cardiomyocytes and rodent hearts (44-47). Conversely, in animal models of cardiac disease as well as human patients with heart failure, SERCA2a protein levels and activity often decrease (48, 49). Surprisingly, SERCA2a protein levels are increased in the α-Tm S283A TG hearts by ~30% over NTG levels. Additionally, the 30% increase in PLN phosphorylation at Ser-16 indicates that a further restriction on SERCA2a activity has been released, as PLN in an unphosphorylated state results in inhibition of the pump (50-52). Similar to increasing SERCA2a expression and activity, phosphorylation of PLN Ser-16 results in a hypercontractile heart. However, there are no changes in cardiac contractility in the α-Tm S283A mice. Rather, normal cardiac function as measured by multiple methods is preserved in the TG hearts rather than enhanced. This indicates that the increase in SERCA2a protein levels as well as the increase in PLN Ser-16 phosphorylation may be necessary to maintain normal cardiac function in a heart in which Tm has largely been dephosphorylated. It is possible, therefore, that if SERCA2a activity were inhibited, a greater degree of hypertrophy and/or a progression to decompensated cardiomyopathy and heart failure would result. Increased SERCA2a protein and activity levels are associated with physiological hypertrophy. In most exercise-trained models, PLN protein levels are unchanged, although changes in PLN phosphorylation are seen (53, 54).

FIGURE 5. A, quantitative real-time PCR analysis of cardiomyopathy marker genes in 3-month-old mouse hearts. There is no significant difference in gene expression for any of the genes included in this profile. NTG, n = 12; TG, n = 10. B, quantitative real-time PCR analyses of cardiac gene expression normalized to GAPDH. SERCA2a, Mcip1: NTG, n = 12; TG, n = 10. NCX, PLN, L-type, RyR2: NTG, n = 6; TG, n = 5. C, Western blot analysis of Ca2+ handling proteins in NTG and TG hearts. D, quantification of Ca2+ handling protein levels. PLN Ser-16 and PLN Thr-17 phosphorylation levels were normalized to total PLN. All other protein expression was normalized to actin. n = 8. E, phosphatase activity in mouse heart extract. NTG, n = 3; TG, n = 3. *, p < 0.05; **, p < 0.01.

This is similar to the results found with the α-Tm S283A mice. Additionally, in animals that have compensated or physiological hypertrophy due to exercise training, there are no changes in gene expression of common cardiomyopathy markers, identical to what is seen in the TG mice investigated here (55). Exercise training improves cardiomyocyte contractility and calcium handling and often improves disease in both animal and human models of cardiac disease (56-58). Although the mechanisms responsible for the development of physiological hypertrophy during exercise training are not well elucidated, we speculate that ablating the phosphorylation site of α-Tm results in a signaling cascade similar to that which occurs in response to exercise training.
In a physiologically exercised heart, the signaling pathways that are activated result in improved cardiac function, whereas in response to Tm dephosphorylation, the TG hearts are able to function normally with a mild hypertrophic phenotype. Increases in the level of SERCA2a appear to be responsible for cardiac dysfunction in the TG TAC-operated animals compared with NTG TAC-operated animals. Previous work indicates that SERCA2a up-regulation by ~20% did not result in increased energy consumption by the heart at basal levels (59). However, when those animals were subjected to pressure overload hypertrophy, SERCA2a-overexpressing TG mice showed significant decreases in contractile force and free energy, leading to increased morbidity. The adaptive response involving SERCA2a up-regulation and increased PLN Ser-16 phosphorylation ensures that Tm-dephosphorylated hearts remain compensated in a physiological state and do not progress to cardiomyopathy and heart failure under normal conditions. However, this increased SERCA2a expression and PLN phosphorylation most likely leads to cardiac dysfunction and pathology after TAC operation. Animals were sacrificed 2 weeks after TAC, but we speculate, based on the increase in hypertrophy in the TG TAC-operated animals, that this group would exhibit increases in hypertrophic markers as well as increased lethality.

FIGURE 6. Echocardiographic analyses of NTG sham, TG sham, NTG TAC, and TG TAC hearts from 12-16-week-old mice. A, pressure gradients in NTG sham, TG sham, NTG TAC, and TG TAC hearts. B and C, diastolic and systolic left ventricular internal dimensions (LVIDd, LVIDs), respectively. D, fractional shortening (% FS). E, heart weight (HW) to body weight (BW) ratio of NTG and TG sham-operated hearts and NTG and TG TAC-operated hearts. F, cardiomyocyte cross-sectional area measurements. n = 6 for all groups. *, p < 0.05; **, p < 0.01; ***, p < 0.001. G, tissue sections from NTG sham, TG sham, NTG TAC-, and TG TAC-operated hearts stained with hematoxylin and eosin (i), Masson's Trichrome (ii), and wheat germ agglutinin (iii). All images were taken at 40×. The scale bar indicates 50 µm.

This is the first study indicating that dephosphorylating a sarcomeric protein can result in maintenance of a compensated or physiological hypertrophic phenotype. Additionally, to our knowledge, this is the first study in which a TG animal with alterations in a sarcomeric protein shows increases in SERCA2a protein expression and PLN Ser-16 phosphorylation. Tm phosphorylation appears to be involved in the development of compensated or physiological hypertrophy, possibly through proteins involved in signaling at the Z-disc. The novel PKC isoforms PKCδ and PKCε are two molecules shown to promote physiological hypertrophy. Both molecules translocate to the Z-disc upon cardiomyocyte stimulation (60, 61). PKCε, specifically, has been shown to associate with the myofilament and bind strongly to actin, resulting in constitutively active PKCε (62). Previous studies indicate that dephosphorylated Tm binds actin differentially from phosphorylated Tm, and it is possible that the replacement of Ser-283 with an Ala residue affects nearest-neighbor interactions in the sarcomere, allowing PKCε greater access to the binding site on actin (4, 10).
Mice expressing a PKCε-specific activator exhibit normal cardiac function and a compensated hypertrophic phenotype, indicating that PKCε can be a positive modulator of compensatory cardiac hypertrophy (63). Additionally, activated PKCε can activate MEK1 through Raf1, which has been shown to also result in compensated hypertrophy, indicating that the PKCε-Raf1-MEK1-ERK1/2 pathway may be playing a role in the S283A TG mouse phenotype (64-66). Studies examining the potential signaling pathways activated by Tm dephosphorylation are currently in progress. In summary, the results presented here firmly establish that the status of Tm phosphorylation can influence expression of Ca2+ regulatory proteins and the response of the heart to acute cardiac stress.
On the nature of Be/X-ray binaries

It has been suggested that most Be/X-ray binaries are low X-ray luminosity nearby objects, containing white dwarfs (Chevalier & Ilovaisky 1998). We show that existing evidence indicates that all known Be/X-ray binaries are relatively bright X-ray sources containing neutron stars and that the spectral distribution of this group differs considerably from that of isolated Be stars. We suggest that the different X-ray properties of the systems can be explained by the sizes of the orbits of the neutron stars. Systems with close orbits are bright transients which show no quiescent emission as a consequence of centrifugal inhibition of accretion. Systems with wide orbits are persistent sources and display no large outbursts. Systems with intermediate orbits present a mixture of both behaviours.

Introduction

Be/X-ray binaries are X-ray sources composed of a Be star and a compact object. The high-energy radiation is believed to arise due to accretion of material associated with the Be star by the compact object. The name "Be star" is used as a general term describing an early-type non-supergiant star, which at some time has shown emission in the Balmer series lines (Slettebak 1988, for a review). Both the emission lines and the characteristic strong infrared excess when compared to normal stars of the same spectral types are attributed to the presence of circumstellar material in a disc-like geometry. The causes that give rise to the disc are not well understood. Different mechanisms (fast rotation, non-radial pulsation, magnetic loops) have been proposed, but it seems that none of them can explain the observed phenomenology on its own. The discs are rotationally dominated (Hanuschik 1996), but UV spectra of Be stars show evidence of a high-velocity low-density wind, suggesting that mass-loss from Be stars takes the shape of a fast radiative wind in the polar regions and a slow higher-density outflow in the equatorial regions, which generates the disc (Lamers & Waters 1987). It is generally believed that the material forming the disc accelerates radially at distances larger than those probed by the optical emission lines (see Chen & Marlborough 1994, Okazaki 1997). X-ray activity in Be/X-ray binaries would then be due to the interaction of the neutron star with this radial outflow (Waters et al. 1988).

Be/X-ray binaries can present very different states of X-ray activity (Stella et al. 1986):
- Persistent low-luminosity (L_x ≲ 10^36 erg s^-1) X-ray emission or no detectable emission.
- Short (a few days) X-ray outbursts (L_x ≈ 10^36-10^37 erg s^-1) separated by the orbital period (Type I outbursts), generally (but not always) occurring close to the time of periastron passage of the neutron star.
- Giant (Type II) X-ray outbursts (L_x ≳ 10^37 erg s^-1), which do not show clear orbital modulation and last several weeks.

Some systems only display persistent emission, but most of them show outbursts and are termed Be/X-ray transients. Both kinds of systems seem to fall in a relatively narrow region of the P_orb/P_spin diagram, known as Corbet's diagram (Corbet 1986; see also Waters & van Kerkwijk 1989).
Based on distance measurements to several proposed counterparts of Be/X-ray binaries by the Hipparcos satellite, Chevalier & Ilovaisky (1998, henceforth CI98) have suggested that the compact object in most Be/X-ray binaries is a white dwarf (WD) and that the class of objects can be characterised as nearby low-luminosity sources. In this paper, we set out to show that the existing evidence does not favour that interpretation, and that Be/X-ray binaries contain mostly neutron stars.

2. The sample of Be/X-ray binaries

CI98 use a sample of 13 proposed counterparts to Be/X-ray binaries. Their sample is limited to objects with V ≲ 12 so that they can be observed with Hipparcos.

Table 1. Known galactic Be/X-ray binaries with detected X-ray pulsation and their basic parameters. Orbital periods marked with '*' represent the recurrence time of X-ray outbursts and not orbital solutions. Objects for which the orbital period is noted as 'large' are persistent low-luminosity X-ray sources, likely to have periods of a few hundred days. Spectral types marked '*' are estimated from photometry and the distances derived should be treated with caution. Objects for which no quiescence luminosity is given have been detected only during outbursts. The distance to EXO 2030+375 and its luminosity, estimated from the change rates in spin period and X-ray luminosity, are from Parmar et al. (1989).

Seven of their sources are unconfirmed candidates to faint unidentified hard X-ray sources observed during the HEAO-1 all-sky survey with the Modulation Collimator. Tuohy et al. (1988) proposed their association with Be stars on the basis of positional coincidence. Because of the large error boxes, Tuohy et al. (1988) warned that several of these identifications could be spurious. Since no further detection of any of these sources has been reported, the question of their identification and the real nature of these X-ray sources remains open. This has not been taken into account by CI98. Moreover, their sample is magnitude-limited and necessarily includes only nearby sources (since there is only a limited range of absolute magnitudes for Be stars). In order to compare these candidates with more secure identifications of Be/X-ray binaries, we set out to select a more appropriate sample. In Table 1, we have listed known galactic Be/X-ray binaries with detected X-ray pulsation and a proposed optical counterpart. Hard X-ray spectra and pulsations are the most typical characteristics of a Massive X-ray Binary. Distances in Table 1 are derived from the spectral type of the counterpart, assuming that they have the average optical luminosity for their spectral type, as given by Vacca et al. (1996) or Schmidt-Kaler (1982) - except for EXO 2030+375 (see caption to Table 1). X-ray luminosities have been calculated using these distances. No attempt has been made to take into account errors due to the uncertainty in the spectral classification or in the luminosity corresponding to a given spectral type, since they are not supposed to be systematic. An important point to be considered here, relevant for the following discussion, is that the optical counterparts to Be/X-ray binaries are supposed to have the same physical characteristics as normal Be stars of the same spectral type.
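The distance estimates described above amount to spectroscopic parallax: combine the apparent magnitude with the calibrated absolute magnitude for the spectral type and an extinction correction. A minimal sketch follows, with invented numbers; the actual magnitudes and reddenings used for Table 1 are not reproduced here.

```python
# Spectroscopic-parallax sketch: d = 10**((V - M_V + 5 - A_V) / 5) parsecs,
# with M_V taken from a calibration (e.g. Vacca et al. 1996) and the visual
# extinction often approximated as A_V ~ 3.1 * E(B-V). Values are invented.

def distance_pc(v_mag: float, abs_mag: float, a_v: float = 0.0) -> float:
    """Distance in parsecs from apparent V, absolute M_V and extinction A_V."""
    return 10.0 ** ((v_mag - abs_mag + 5.0 - a_v) / 5.0)

# A hypothetical B0V counterpart with V = 11.0, M_V = -4.0 and E(B-V) = 1.0:
print(round(distance_pc(11.0, -4.0, a_v=3.1 * 1.0)))  # ~2400 pc
```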
Detailed simulations by Vanbeveren & de Loore (1994) and de Loore & Vanbeveren (1995), in which Be/X-ray binaries are formed from moderately massive close binaries that undergo mass transfer, show that the properties of the Be star are those of a normal star of the same mass, at least while it remains on the main sequence. Under certain circumstances, the star can become an overluminous supergiant at a later stage. Table 2 lists all known Be/X-ray binaries in the Magellanic Clouds (MCs) with detected X-ray pulsation and a proposed optical counterpart. X-ray luminosities in this table, taken from the literature, are calculated assuming standard distances to the MCs. (Table 2 references: [d] Charles et al. 1983; [e] Crampton et al. 1985; [f] Schmidtke et al. 1995; [g] Burderi et al. 1998; [h] Haberl et al. 1997.)

Spectral distribution

As can be seen in Tables 1 and 2, all the optical counterparts to galactic and MC sources have spectral types earlier than B2, and there are several Oe stars. Most objects have firm spectroscopic classifications. A few have spectral classifications based on photometric colours or continuum fitting and, due to the intrinsic reddening of Be stars, could be slightly earlier than classified. Also within this spectral range are the optical counterparts to Be/X-ray binaries with no detected pulsations - LS I +61°303 (B0V, Steele et al. 1998), BD +53°2790 (O9.5III, Hiltner & Bautz 1963), RX J0117.6-7330 (~B1III) - and all the probable counterparts to likely Be/X-ray binaries in the Magellanic Clouds proposed by Crampton et al. (1985) and Schmidtke et al. (1994) - e.g., RX J0501.6-7034, RX J0520.5-6932. The distribution of isolated Be stars is completely different. The number of Oe stars is very low, but the distribution rises sharply at B0, peaking around B2, and then falls off gradually, extending to at least spectral type A0 (Slettebak 1988). In Fig. 1, the spectral distribution of optical components of Be/X-ray binaries is compared with a sample of 150 bright Be stars taken from the catalogue of Slettebak (1982), after Porter (1996). A Kolmogorov-Smirnov test of the probability that both samples are extracted from the same population gives a K-S statistic D = 0.84 with a significance of 5.3 × 10^-12, clearly indicating that the two samples are extracted from different populations (a χ²-test gives a reduced χ² of 7.1).

Fig. 1. The spectral distribution of isolated Be stars is compared to that of Be/X-ray binary optical components. Negative spectral subtypes are used to represent O-type stars. Top panel: the spectral distribution of 150 Be stars present in the BSC, after Porter (1996). Bottom panel: the spectral distribution of 20 Be/X-ray binary components, comprising 13 pulsars with spectroscopic spectral-type determinations, 4 pulsars with photometric spectral-type estimation and 3 sources without detected pulsations with spectroscopic determinations.

In order to assess the statistical significance of this result, we must consider the possible biases in the selection of the two samples compared. The Be star list contains the majority of Be stars in the Bright Star Catalogue (BSC) and is therefore limited by their optical magnitude. The BSC contains stars brighter than V ≲ 6.5 and is therefore biased towards earlier spectral types. As a consequence, in a volume-limited sample, the peak of the distribution would be towards later spectral types. Abt (1987) found the maximum of the distribution to be at B3-B4 for a volume-limited sample of field Be stars.
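The two-sample comparison reported above can be reproduced in outline with SciPy's Kolmogorov-Smirnov test, coding spectral subtypes as numbers (negative for O types, as in Fig. 1). The arrays below are small mock samples, not the actual catalogues, so the statistics they yield are only illustrative.

```python
# Sketch of a two-sample Kolmogorov-Smirnov test on spectral subtypes.
# Subtype coding follows Fig. 1 (negative = O types). Mock data only.
import numpy as np
from scipy.stats import ks_2samp

# Be/X-ray binary counterparts cluster between O8 and B2 ...
bex_subtypes = np.array([-2, -1, -1, 0, 0, 0, 1, 1, 1, 2, 2])
# ... while field Be stars peak near B2-B4 and extend to late B types.
field_subtypes = np.array([0, 1, 2, 2, 3, 3, 3, 4, 4, 5, 6, 7, 8, 9])

stat, p_value = ks_2samp(bex_subtypes, field_subtypes)
print(f"D = {stat:.2f}, p = {p_value:.3g}")
```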
In the BSC sample, the higher proportion of Be stars in comparison with normal B stars (27%) occurs at B4 (Jaschek & Jaschek 1983). The sample of Be/X-ray binaries, on the other hand, is limited by their X-ray luminosity. The spectral distribution of this sample could be biased if there exists a direct correlation between the spectral type of the optical component and the X-ray luminosity, i.e., if there are Be/X-ray binaries containing late-type Be stars, but all of them are very weak X-ray sources. However, there are two strong arguments against this hypothesis. First, there is no evidence of any dependence of the X-ray luminosity on spectral type among the known Be/X-ray binaries - including those in the Large Magellanic Cloud, which are all at approximately the same distance. The bright transients approaching Eddington luminosity extend over the whole spectral range, with the brightest transient known (A 0535-668) having the latest spectral type (B2IV). This is in clear contrast with the sharp cut-off at B3. Second, there is no known correlation between the observable properties of Be stars and their spectral type. The sizes of their envelopes (as reflected in the emission lines) do not seem to depend at all on spectral type. However, if a correlation were to exist between spectral type and X-ray luminosity, it would imply that there is a fundamental difference in the mass-loss processes taking place in early-type and late-type Be stars. From the above arguments, we conclude that the difference seen between the spectral distributions of field Be stars and optical components of Be/X-ray binaries must reflect a real difference in the populations from which they are drawn. With a sample of 20 objects all earlier than B3, it seems unlikely that any optical member of a Be/X-ray binary is going to have a later spectral type. The early limit in the spectral range of Be/X-ray binary components could be simply due to the cessation of the Be phenomenon at earlier spectral types. Although a few O7e stars are known (Conti & Leep 1974), they are very rare. The upper limit is in broad agreement with the predictions of the models of close binary evolution by Van Bever & Vanbeveren (1997). Models in which a large amount of angular momentum per unit mass is lost from the system during non-conservative mass transfer predict no Be + neutron star binaries with late-type Be stars (Portegies Zwart 1995; Van Bever & Vanbeveren 1997). The distribution shown in Fig. 1 indicates that all the Be stars with neutron star companions have masses M* ≳ 8-9 M⊙.

A phenomenological model

The X-ray characteristics of the confirmed Be/X-ray candidates are sufficiently consistent to derive a phenomenological model for these systems. The low-luminosity persistent X-ray emission seen in many objects is due to accretion of low-density material. This could be the fast polar wind, but it is more likely to be the equatorial outflow beyond the regions in which motion is rotationally dominated (and where the optical emission lines form), since X-ray emission from 4U 1258-61 ceased completely when the disc around the companion star disappeared, even though the polar wind should still be present. In the sources with short pulsation (and therefore orbital) periods, this quiescent emission is prevented by centrifugal inhibition of accretion
(Stella et al. 1986): due to the fast rotation and strong magnetic field of the neutron star, matter approaching the magnetosphere is shocked by the supersonic rotation and ejected beyond the accretion radius (the propeller mechanism). Previous authors (e.g., Corbet 1986) have assumed that the long periods without outbursts are due to the shrinkage of the disc and that series of outbursts take place after discrete episodes of mass ejection from the Be star. The results of Reig et al. (1997b) point very strongly to the possibility that the size of the disc is limited by the orbit of the neutron star, presumably due to tidal truncation (Okazaki 1998). The existence of X-ray outbursts, indicating that the neutron star interacts with material from the dense regions of the disc, implies that the density distribution in the disc can differ from this quiescence configuration. Negueruela et al. (1998) have shown how the presence of a density wave in the disc can provide such a perturbed configuration. Systems with small orbits will then accrete from very dense regions and become high-luminosity transients. Systems with wider orbits, in which centrifugal inhibition does not occur, accrete from less dense regions and show smaller outbursts. Like the transients, A 0535+262 and GRO J1008-57 display both Type I and Type II outbursts, but in the case of 4U 1145-619 and A 1118-616 the distinction is not so clear. In systems with relatively wide orbits, outbursts can only occur close to periastron passage (e.g., A 0535+262), but in closer systems they can take place at different orbital phases, depending on the actual density distribution in the disc, e.g., recent outbursts at phase ~0.3 from 4U 0115+634 and at phase ~0.5 from 2S 1417-624 (Finger et al. 1996). The two systems with longer spin periods, X Per and LS I +61°235, have never been observed to undergo X-ray outbursts. In both cases, however, long periods of increased X-ray luminosity have been observed (see Haberl et al. 1998). Given the known relationship between the spin and orbital periods of Be/X-ray binaries (Corbet 1986), both systems are expected to have very long orbital periods (many hundred days). The neutron star in a very wide orbit can only accrete from the outer low-density regions of the circumstellar disc, which explains the low X-ray luminosity. The long periods of increased X-ray luminosity seen in both systems could be associated with the dissipation of the discs (Haberl et al. 1998). Type II outbursts are very likely to be the corresponding events in systems with close orbits.

Discussion

The X-ray luminosities of all the objects listed in Tables 1 and 2 are too high for the expected luminosities of Be + WD binaries, estimated to be in the range 10^29-10^33 erg s^-1, indicating that they contain neutron stars. It is worth noting that only three of these sources could be observed by Hipparcos. The distances given in Table 1 for A 0535+262 and 4U 1145-619 are those derived from their spectral types, since Steele et al. (1998) have shown that the distances to A 0535+262 and LS I +61°303 calculated by Hipparcos (which are based on very poor astrometric solutions) are inconsistent with several other distance indicators (and, at least in the case of A 0535+262, its X-ray spectrum, which can only be explained in terms of accretion on to a neutron star). This could also be true of the distance to 4U 1145-619.
Since γ Cas seems unlikely to be a binary X-ray source (Smith 1997), the sample of objects with accurate distances in CI98 consists of only one confirmed Be/X-ray binary and eight unconfirmed identifications. Six of these objects have spectral types later than B3 (up to B8), and therefore there is an almost negligible statistical probability that they are extracted from the same population as the optical components of standard Be/X-ray binaries. Moreover, if these identifications are correct, they represent a class of objects with much lower X-ray luminosities than those of our sample of Be/X-ray binaries (and this also applies to the two objects with spectral types in the acceptable range, HD 34921 and BZ Cru). The conclusion is that, if the identifications are correct, they represent a class of objects extracted from a different population to the standard Be/X-ray binaries. Can they represent a sample of the population of Be + WD binaries? Since we have no previous sample of this population, we do not know its spectral distribution. Indeed, the only known white dwarf orbiting a massive star is the companion of the B5V star HR 2875 (Vennes et al. 1997). However, there is a major drawback to this interpretation: if the X-ray activity of these sources is attributed to accretion on to a white dwarf, centrifugal inhibition is not a possibility and there is no reason why these systems should not be persistent X-ray sources. However, none of these sources has been detected during the ROSAT all-sky survey, in spite of thorough searches for possible binaries (Meurs et al. 1992; Motch et al. 1997). Berghöfer et al. (1996) list BZ Cru, µ² Cru and HD 109857 (the three objects in the sample that appear in the BSC) as non-detections. No detections of any of the objects have been reported since the discovery paper, where it is reported that BZ Cru and HD 34921 (the two B0 counterparts) had been observed by other satellites (Tuohy et al. 1988). If the identification of these X-ray sources with the proposed Be stars is real, they represent a population of very low luminosity transients. These low-luminosity transients cannot be explained in terms of Be + WD binaries or neutron stars in very wide orbits - since these objects would not be transients. The simplest explanation is that most of these counterparts - if not all - are really field Be stars and not accreting binaries, i.e., they are not optical counterparts to X-ray sources. This is not surprising, given that they were proposed only because of positional coincidence with very large error boxes. The sample used by CI98 is, in consequence, not representative of Be/X-ray binaries, and therefore their conclusions do not apply to these systems.

Conclusions

We have studied the global characteristics of Be/X-ray binaries by comparing the properties of different systems. We find that all the optical counterparts have spectral types in the range O8-B2, which represents a distribution very different from that of isolated Be stars. The very different spectral type distributions of Be/X-ray binaries and isolated Be stars set strong limits on acceptable models of close binary evolution. We have developed a coherent model to explain the different X-ray properties of Be/X-ray binaries, in which the main parameter is the size of the orbit of the neutron stars. Systems with close orbits are fast spinners and show no quiescent emission as a consequence of centrifugal inhibition.
When the density distribution in the circumstellar disc of the Be star becomes very asymmetric, they become bright transients. Systems with wide orbits are persistent sources accreting from a low-density radial outflow and display no large outbursts. Systems with intermediate orbits present a mixture of both behaviours. We have shown that the sample recently used to conclude that Be/X-ray binaries are low-luminosity X-ray sources containing white dwarfs consists of objects extracted from a different population and therefore it is not relevant to the study of Be/X-ray binaries.
Equine nonneoplastic abnormal ovary in a draft mare with high serum anti-Müllerian hormone: a case study

We performed a standing hand-assisted laparoscopic ovariectomy in a draft mare that presented with a high serum anti-Müllerian hormone (AMH) level and had an enlarged single cystic ovary. Histopathological examination revealed no tumor cell proliferation in the ovary, but the presence of a large ovarian cyst was confirmed. In the diagnosis of abnormal ovaries in mares, a comprehensive assessment should be performed, including the monitoring of ovarian morphology and biomarkers over time, to determine the disease prognosis and treatment plan. The case of this mare with a nonneoplastic abnormal ovary and increased serum AMH level was rare. We suggest that standing hand-assisted laparoscopic ovariectomy is useful for the removal of large ovaries in draft mares.

Mares with granulosa cell tumors (GCTs) present with symptoms such as anestrus, persistent estrus, and male behavior and are infertile in most cases. To our knowledge, there is currently no effective medical treatment for GCT, which necessitates the surgical removal of the affected ovary [4]. For GCT diagnosis, in addition to clinical signs and rectal examination, ultrasound may be performed, and blood inhibin and testosterone levels may be measured. However, owing to the lack of consistency among cases and the difficulty in differentiating GCT from similar diseases, such as ovarian hematoma and abscesses, making a definitive diagnosis in clinical settings is challenging [16]. Recently, anti-Müllerian hormone (AMH) has attracted scientific attention as a potential biomarker for GCT. AMH is expressed by granulosa cells in the ovary, and mares with GCT have increased serum AMH levels [1,3]. An increased serum AMH level has a sensitivity of 97.7% in the diagnosis of GCT [2]. In our case, we performed a standing laparoscopic ovariectomy in a draft mare with a unilateral single large cystic ovary and a high serum AMH level. Approximately 5 months after surgery, the mare mated, conceived, and eventually gave birth. This case report presents an overview of our treatment plan in combination with clinical examination and laparoscopy for an ovarian disease in a draft mare, with the hope that it will aid in the diagnosis of abnormal ovaries in clinical settings.

During a genital examination of a primiparous draft mare (age, 9 years; body weight [BW], 726 kg) on day 9 after foaling, a large right ovary, approximately the size of a basketball, and a normal-sized left ovary were observed. The uterus had sunk deeply into the bottom of the abdominal cavity. An ultrasound examination revealed the presence of a single cyst with a diameter of >25 cm in the right ovary but no clear structures in the left ovary (Fig. 1). Hysteroscopy, endometrial biopsy, bacterial cultures, and cytology were then performed, but no abnormal findings were observed. Serum samples were submitted to the laboratory of the Hidaka Training and Research Center, Japan Racing Association, for measurement of the AMH level. The serum AMH level was 33.47 ng/ml, which was higher than the reference range (0.1-6.9 ng/ml) [16]; therefore, a GCT was suspected. Thereafter, the blood AMH level remained high, but cyclic and clear signs of estrus were confirmed in the contralateral (left) ovary. For this reason, the mare was mated, and ovulation was confirmed. In total, the mare mated three times in the season but did not conceive after any of the mating events.
After obtaining consent from the owner, laparoscopy-assisted ovariectomy was performed on the right ovary. Before the surgery, the mare was subjected to fasting for 24 hr and was retained in the treatment stock with standing sedation using intravenous medetomidine (3 µg/kg BW; Dorbene, Kyoritsu Seiyaku, Tokyo, Japan) and butorphanol (0.1 mg/kg BW; Vetorphale, Meiji Seika Pharma, Tokyo, Japan) injections. The location of the blood vessels running through the right flank was confirmed by ultrasound examination (LOGIQ e Premium, GE Healthcare, Tokyo, Japan). In addition, intravenous flunixin meglumine (1 mg/kg BW; Forvet, MSD Animal Health, Tokyo, Japan) and intramuscular penicillin and streptomycin (penicillin, 10,000 IU/kg BW; streptomycin, 12.5 mg/kg BW; Mycillin, Tamura Seiyaku Corp., Saitama, Japan) were administered before surgery. Subcutaneous 2% lidocaine solution (Xylocaine, AstraZeneca, Osaka, Japan) was used to anesthetize the incision sites on the skin. A cannula was then inserted into the central part of the right flank, and a pneumoperitoneum was produced using CO2 at an intra-abdominal pressure of approximately 10 mmHg. A cannula was inserted into the last intercostal space and then used as the laparoscopic port, and a third cannula was inserted into the ventral side of the flank. Using a laparoscope (Stryker Japan K.K., Tokyo, Japan), the right ovary was observed to have significantly descended into a deep site of the abdominal cavity, with only the dorsal part of the ovary visualized. Local anesthetic (2% lidocaine solution) was injected into the mesovarium, and approximately 7.5 l of ovarian fluid was aspirated to reduce the size of the ovary. A part of the mesovarium was then separated using the LigaSure Vessel Sealing System (LigaSure, Covidien Japan, Tokyo, Japan). A hand was inserted through the 12-cm incision site on the right flank (hand-assisted) [9], and the wide area of adhesion between the large ovary and the mesovarium was removed manually. The ovary was then separated from the mesovarium and uterus with LigaSure and scissors. The large ovary was put into a sterilization pouch and then cut into tissue masses with a large quantity of fluid [6]. The sterile bag and excised ovary were then removed from the mare. To close the incision site, three-layer suturing was performed on the muscular layer using absorbable suture, and the skin was subsequently sutured. To close the cannula insertion sites, only the skin was sutured. The total duration of the surgery was approximately 4 hr. The total weight of the excised ovary, including the ovarian fluid, was approximately 15 kg. No postoperative complications were encountered. Intravenous flunixin and intramuscular Mycillin were administered once daily for 3 days after the surgery. The postoperative course was uneventful, and the mare was discharged after 3 days of hospitalization. After the ovariectomy, the serum AMH level decreased to 0.31 ng/ml (Fig. 2). For pathological analysis, some parts of the excised ovary were fixed in 10% formalin solution and embedded in paraffin. The paraffin block was sectioned into 4-µm slices using a REM-710/SB gliding microtome (AS ONE Corp., Osaka, Japan). For histopathological evaluation, tissue sections were stained with hematoxylin and eosin. Histopathological examination of the excised ovary revealed that the lumen of the cyst was filled with inflammatory cell debris and exfoliated epithelial cells. In the examined area, the epithelial lining was unclear (Fig. 3A).
The cyst wall was thick. It contained a smooth muscle layer, collagenous tissue, and a few glandular structures lined with cuboidal epithelium. The epithelial cells did not show cellular atypia, and there were no neoplastic structures. The histological findings suggested that the cyst was an ovarian cyst; however, precise identification could not be achieved. For immunohistochemical analysis, sections were incubated with an anti-AMH goat antibody (1:500; sc-6886, Santa Cruz Biotechnology Inc., Dallas, TX, U.S.A.). For color development, a Vector® NovaRED™ Peroxidase (HRP) Substrate Kit (SK-4800, Vector Laboratories, Inc., Burlingame, CA, U.S.A.) was used according to the manufacturer's guideline. A tissue section from an equine ovary with GCT was used as a positive control, and a tissue section incubated with diluent buffer instead of primary antibody was used as a negative control. Immunostaining of AMH showed negative staining in the ovarian cyst wall (Fig. 3B), whereas the positive control showed positive staining of AMH in granulosa cells (Fig. 3D). The mare mated naturally 5 months after the surgery, conceived, and gave birth in the following year. In the diagnosis of abnormal ovaries in mares, the anatomical and physiological changes of the ovaries vary greatly, and there is no consistency among cases, which poses diagnostic challenges [16]. In the present case, the mare presented with high serum AMH levels immediately after delivery, and a large cyst was found in the right ovary. Although cyclic estrus with ovulation was confirmed several times in the contralateral ovary, the mare did not conceive. The serum AMH level has recently been proposed as a useful biomarker for the diagnosis of GCT in abnormal ovaries [1,3]. However, in our case, a histopathological examination refuted the diagnosis of GCT. In addition, the mare showed normal estrus cycling despite the high AMH level; therefore, we inferred that granulosa cells did not secrete excessive inhibin, which is one of the biomarkers of GCT. Furthermore, after ovariectomy, the serum AMH level in the mare decreased rapidly, suggesting that the source of the excess AMH was the removed abnormal ovary. The function of abnormal ovaries is not uniform; in some cases, even in mares diagnosed with GCT, the blood AMH level repeatedly fluctuates within the normal range [16]; in other cases, however, mares have a functional contralateral ovary [11]. A previous study reported that GCT healed naturally without ovariectomy [12]. The literature thus suggests the need for continuous observation of ovarian morphology and the monitoring of biomarkers to diagnose abnormal ovaries. Murase et al. [14] reported increased serum AMH levels in two mares with transient ovary enlargement, which was not secondary to GCT. Notably, a histopathological examination was not performed for either mare [14]. In addition, Renaudin et al. [16] reported two cases of mares with abnormal ovaries and increased serum AMH levels, although the diagnosis of GCT was excluded after a postovariectomy histopathological examination. The pathophysiology was similar in all these cases. In our case, the serum AMH level went from 33.47 ng/ml on postpartum day 9 to 43.94 ng/ml on postpartum day 30. Thereafter, it decreased to 16.41 ng/ml on postpartum day 60, and estrus returned (Fig. 2). This suggests that ovarian function and AMH secretory ability were altered between postpartum days 30 and 60. The serum AMH level was 12.28 ng/ml at the time of ovariectomy
(Fig. 2); the immunostaining of AMH, however, showed negative staining of the ovarian cyst wall. The excised ovary was quite large, so the lining cells of the cystic ovary could have been compressed and degenerated. We tried immunostaining the remaining cystic wall but failed to evaluate the AMH-producing cells. Thus, we speculate that the serum AMH might have derived from the remarkably enlarged ovary, because the systemic AMH level dropped immediately after ovariectomy. Further studies on other cases of abnormal equine ovaries based on the measurement of AMH levels are warranted. In humans, AMH is an important regulator of follicle maturation; abnormally increased AMH negatively affects reproductive functions through androgen overproduction due to the expression of aromatase and decreased follicle-stimulating hormone sensitivity in ovarian follicles [8]. Similar to the reports in humans, the increased serum AMH level in our case might have negatively affected the reproductive performance of the mare. In addition, the large ovary was adhered to a wide area of the mesovarium, and the uterus was pulled, together with the enlarged ovary, toward the bottom of the abdominal cavity. Therefore, physical disorder of the ovary and uterus might have also been a reason for infertility. Equine laparoscopic surgery has previously been applied to ovariectomy, cryptorchidectomy, and intestinal surgery [10]. In our case, we performed a standing laparoscopic ovariectomy on a draft mare. This procedure is effective, safe, and minimally invasive [9]. To our knowledge, there are no previous reports of a successful standing laparoscopic ovariectomy in a draft mare, although there have been successful reports in other breeds, such as thoroughbreds and warmbloods [9]. Laparoscopic ovariectomy in the supine position should be performed in the Trendelenburg position to facilitate access to the ovaries, and general anesthesia is necessary [10]. From the perspective of reducing the respiratory management risk under anesthesia in large mares, such as draft mares [7,13], standing laparoscopic ovariectomy carries a decreased risk. Furthermore, in the present case, we were able to remove the large ovary with a minimal incision by chopping it into small pieces in a sterilization pouch. Our results suggest that standing laparoscopic ovariectomy is a minimally invasive and safe procedure for draft mares. The abnormal ovary of the mare in our case was confirmed to contain an ovarian cyst. Although some mares might have increased serum AMH levels in association with nonneoplastic abnormal ovaries, this is considered a very rare occurrence. In the diagnosis of abnormal ovaries in mares, it is important to determine the treatment plan and prognosis based on a comprehensive assessment, including the monitoring of ovarian morphological changes and biomarkers over time, and the confirmation of periodic estrus. Furthermore, standing laparoscopic-assisted ovariectomy may be useful for the removal of enlarged ovaries in draft mares with a high risk of anesthesia-related complications.
REHABILITATION PROGRAM FOR CHILDREN WITH BIRTH TRAUMA OF THE BRACHIAL PLEXUS

Birth trauma of the brachial plexus is most often caused by difficult delivery of a large neonate. The data on the so-called obstetrical paralysis is scarce and controversial. In current life conditions and development of the medical science the number of children with plexus brachialis plexitis is supposed to decrease, but still this injury is present. The purpose of the study is analysing the results of the conducted complete one-year rehabilitation program for children with birth trauma of the brachial plexus, at the Clinic of Physical and Rehabilitation Medicine at the University Hospital Pleven. The study covers 24 children with the birth trauma of the brachial plexus, aged from 10-day-old newborns to 13-year-old children, who passed the complete rehabilitation program during the entire 2015; the program includes paraffin application, remedial massage, kinesitherapy, electrical stimulation, electrophoresis with Nivalin, labour therapy, occupational therapy and everyday life activities (for children over 5). An inquiry among parents, with open and standardized questions, is conducted. For good results from rehabilitation of children with the birth trauma of the brachial plexus, early diagnostics and initiation of the complete rehabilitation program are of crucial importance. Good results come hard and slowly, but the quality of life and work of the traumatized children is significantly improved.

Keywords: birth trauma, rehabilitation, everyday life activities, occupational therapy.

Introduction

Birth trauma of the brachial plexus is most often caused after hard delivery of a big infant, at breech presentation or other abnormal infant presentation, unsuccessful manipulations with forceps and vacuum extractors, narrow pelvis of the mother or raised tone of perineal muscles, congenital anomaly of the infant's spinal column, etc. [1]. Data on the frequency of Erb's palsy are scarce and contradictory, and the reasons for this are explicable. According to the data provided by Gacheva, Y. (1982), the annual number of birth traumas in the country is between 320 and 380, which is 0.27% of all newborns [1]. In present-day conditions of life, with the increased number of operative deliveries in risk pregnancies and the existing demographic collapse (annual decrease in the number of newborns), we expect a decrease in the number of infants born with the brachial plexus trauma. According to statistical data of the National Statistical Institute, in 2015 there were 66,370 children born in Bulgaria, but we haven't found any statistics on the number of children diagnosed with Erb's palsy of the brachial plexus [7]. The fact that the brachial plexus is in close proximity to specifically moving structures of the shoulder girdle (Fig. 1) is a prerequisite for traumatic injuries due to direct trauma (pressing) and stretching of nerve fibers in the area of the supraclavicular fossa (upper type injury) and in the area of the axilla (lower type of injury) [3]. The disease is diagnosed mainly through anamnesis, applying various reflex and manual tests, and Rö-graphy to exclude other pathology (clavicle fracture) [1]. In order to establish the damage degree and the level of injury, in the context of modern medical diagnostics, EMG is performed. The clinical picture is typical for peripheral paresis or upper limb paresis, depending on the severity and the level of injury. Three main types of upper limb dysfunction are distinguished: upper type (Duchenne-Erb), lower type (Déjerine-Klumpke) and total type.
The paresis causes full or partial immobility of the limb [2]. As the child grows up, all structures of the upper limb show delayed development (muscle hypotrophy and hypotonia, a shorter limb, often spinal bending - scoliosis toward the paretic limb). With age, significant difficulties in performing everyday activities appear, as well as a diminished capacity for work [10]. Neuropraxia is a lower level of injury; treatment is mostly conservative and is performed by a team of specialists. In severely traumatic conditions, total disruption of axons or nerves is found (axonotmesis, neurotmesis) [3]. Along with medicament treatment (anti-swelling therapy, Nivalin subcutaneously as per schedule, etc.), it is very important to place the injured limb in a suitable orthosis for positional therapy. The main role in the treatment of such an injury is played by physical and rehabilitation medicine; during the age periods of growing, the means vary, being individually selected and dosed [8]. While growing, the child (especially in case of severe, total injury) is examined by an orthopedist, in order to detect possible corrective operative interventions to facilitate self-service skills and to improve the patient's life quality. In 2012, Assoc. Prof. PhD M. Kateva, MD, head of the 1st Clinic of Traumatology at the "N.I. Pirogov" University Hospital in Sofia, with the help of Prof. Edgar Bimer, organized the first course on microinvasive neurosurgery on arms, including the early operative treatment of birth-traumatized plexus brachialis. This was the beginning of new surgery for the world as well, and the aim was to save children with injuries on upper limbs that lead to full disability [12].

Objective of this study is analysis of the results of a one-year complex rehabilitation programme for children with birth trauma of plexus brachialis, conducted in the Clinic of Physical and Rehabilitation Medicine at the University Hospital - Pleven.

Materials and methods

Twenty-four children with birth trauma of the plexus brachialis, aged between 10 days and 13 years (15 boys and 9 girls), are included in the study; the rehabilitation is conducted in the Clinic of Physical and Rehabilitation Medicine at the University Hospital - Pleven. From all monitored patients, 11 …
In compliance with the National Frame Contract for 2015, the Health Fund provides for a 10-day rehabilitation programme, following a clinical path relevant to the age of children with birth trauma of the brachial plexus: 12 rehabilitation courses annually (each month) for children aged 0-24 months, 4 rehabilitation courses annually for children aged 2-5, and 2 rehabilitation courses annually for children over the age of 5 [6]. Grown-up children with Erb's palsy can attend additional rehabilitation courses without limitation, but on an ambulatory basis. The average number of completed rehabilitation courses (in total, through the clinical path and ambulatory) conducted over the one-year period we monitored is displayed in Fig. 2; the children are allocated into age groups. All children with birth trauma of the brachial plexus participated in a complex rehabilitation programme for 10 consecutive days, dosed suitably for their age, which included: paraffin applications; healing massage; kinesitherapy; electrical stimulation; electrophoresis with Nivalin; occupational therapy; ergotherapy (for children over the age of 5) [5,9]. In order to register the level of recovery at the beginning and at the end of the monitored period, measurements and tests are conducted according to the injury type and the children's age. The children aged 0-5 are measured with a centimeter tape (circumference of the forearm and upper arm, and length of the upper limb). With children over 2, various hand grips are tested (spherical, cylindrical, fist grip, hook grip) due to their lower type of injury [4], and children with the upper type of injury over the age of 5 are tested through manual muscle tests of mm. rhomboidei (major et minor), m. supraspinatus, m. infraspinatus, mm. deltoidei, m. biceps brachii. Because of the small number of children in the age groups per type of injury, the results obtained have no statistical reliability, and this prompted us to conduct a survey (with open and standard replies) among the parents, having in mind the age of their children, so that we could register the efficacy of the treatment. Parents of children up to 2 years old replied to the following questions:
1. Do you understand the importance of systematic and active rehabilitation treatment, including treatment in home conditions?
2. Are you familiar enough with the supporting kinesitherapeutic procedures for your child outside the medical centres?
3. Do you see improvement in the movements of the injured limb?
4. Do you consider that your child falls behind in their physical development?
5. Do you have the financial means to provide your child with treatment?
The questionnaire for parents of older children (aged over 2) contains additional questions:
6. Does your child go to a kindergarten?
7. Does your child cope with everyday life activities - toilet and personal hygiene, eating, dressing up and putting shoes on, other domestic activities?
8. According to your observations, does the child use the injured limb in everyday life activities or avoid using it?
Results and analysis

The centimeter measurements of children up to the age of 2 at the beginning of the monitored period showed on average 0.5 cm hypotrophy of the injured limb (difference from the healthy limb at the forearm and upper arm) and preserved length of the limb. These results remained unchanged at the end of the one-year period. For the children up to the age of 5, the hypotrophy at the beginning of the study reached on average 1.6 cm, and the limb was shortened by 2.1 cm. For children over 5, the average results are respectively 2.3 cm hypotrophy and a 3.4 cm shortened limb. At the end of the monitored period changes are difficult to register, which is explained by the normal growth at this age. The grip test was made in age groups 3 and 4 (4 children with total injury), and the average values are 3(+) for the first rehabilitation course of the year and 4(-) at the end of the monitored period. The results of the muscle tests of 6 children with the upper type of injury, aged over 5, showed the improved condition of the tested muscles, but also a tendency toward contractures in the shoulder and elbow joints. When analyzing the results in Fig. 2, we can summarize that for children up to 2 years old the completed rehabilitation courses are extremely insufficient - on average 3.32 (27.1%), against the need and ability to complete a course each month. The children from the second age group completed on average 3 rehabilitation courses, which is 75% of the 4 courses per year they are entitled to. The fact is explained by the replies to the first question, "Do you understand the importance of systematic and active rehabilitation treatment, including treatment in home conditions?", to which 27% of the parents showed hesitation in assessing the situation properly. This is also connected with the replies to the next question, "Are you familiar enough with the supporting kinesitherapeutic procedures for your child outside the medical centres?", where the same parents gave similar replies. This calls for targeted work with the parents, because they are the adults supposed to accompany the children to the rehabilitation ward and be responsible for their health. Of crucial importance is training the parents how to work daily with the child between the rehabilitation courses, applying suitable kinesitherapeutic techniques and healing procedures, and, with older children, sports activities or functional occupational therapy. The children from the 3rd age group completed the optimum number of rehabilitation courses, but these were not sufficient for a better functional recovery. The question "Do you see improvement in the movements of the injured limb?" received positive replies from all parents, especially in relation to smaller children with the upper type of injury (neurapraxia). Re-innervation processes are observed after the age of 5-6, which confirms the need for systematic rehabilitation treatment simultaneously with growth [1]. The question "Does your child go to a kindergarten?" received mainly positive replies from the parents, but they state that the kindergarten staff is not qualified to stimulate and exercise the injured limb during daily activities. The replies to the question "Do you consider that your child falls behind in their physical development?" vary widely, though corresponding to the injury type and age of the child. The motor deficit has no consequences for the mental development, and the parents do not observe any delayed development.
Parents of older children and those having children with the upper type of injury do not point out a substantial delay in physical development, but in cases of total injury and the lower type of injury, the lack of grip seriously impedes the use of the upper limb in occupational and school activities, and this is reflected in the general physical development of the child. As for the questions concerning everyday life activities, the parents of older children also give various replies depending on the type of injury. In families where the child is observed during play time, with remarks made about the position of the limb and its use in all possible activities, improved skills for self-service are found. Parents who are not able to spend more time with their children and belittle the problem point out that there are more difficulties in including the paretic limb in everyday activities. The rehabilitation process is slow and extended. It depends on the parents' responsibility and ability to accompany their children to treatment procedures, especially during the first months after birth. Some parents do not wish to or cannot apprehend the crucial character of the problem, and this inevitably affects the result. The parents must be convinced that a major part of the complex treatment of birth trauma of the brachial plexus is the systematic rehabilitation. According to the medical literature, about 25% of children with the upper type of injury and neurapraxia, undergoing systematic physiotherapeutic and rehabilitation treatment, recover functionally [1]. Often the results are barely visible or unsatisfactory, especially in relation to hard total injuries (in cases of axonotmesis or neurotmesis), but this is not a reason to discontinue the rehabilitation. Two-thirds of the parents replied to the question "Do you have the financial means to provide your child with treatment?" stating that they face serious financial difficulties in conducting the monthly rehabilitation treatment of smaller children and, for grown-up children, difficulties with accompaniment by an adult (sick-leave permission or paid leave). All parents point out the unsatisfactory work of the specialized state institutions expected to support the healing process of their children.

Conclusion

In order to achieve good results from the rehabilitation of children suffering from birth trauma of the brachial plexus, early diagnostics and timely initiation of treatment, regular conduct of rehabilitation courses, and making the severity of the problem, as well as their own role in the treatment, clear to the parents are of crucial importance. The purpose of the complex physiotherapeutic and rehabilitation programme for children with birth trauma of the brachial plexus is to improve their self-service skills and to support the performance of various occupational and domestic activities as the children grow up. Improvement of various hand grips, work with tools and utensils, and the inclusion of amusing and functional occupational therapy stimulate the performance of everyday life activities and promote the children's independence, future professional orientation and realization. Good results come hard and slowly, but they significantly raise the options for a better quality of life and professional realization of children with birth trauma of the brachial plexus.
Cooperative Binding of the Cytoplasm to Vacuole Targeting Pathway Proteins, Cvt13 and Cvt20, to Phosphatidylinositol 3-Phosphate at the Pre-autophagosomal Structure Is Required for Selective Autophagy*

Autophagy is a catabolic membrane-trafficking mechanism involved in cell maintenance and development. Most components of autophagy also function in the cytoplasm to vacuole targeting (Cvt) pathway, a constitutive biosynthetic pathway required for the transport of aminopeptidase I (Ape1). The protein components of autophagy and the Cvt pathway include a putative complex composed of Apg1 kinase and several interacting proteins that are specific for either the Cvt pathway or autophagy. A second required complex includes a phosphatidylinositol (PtdIns) 3-kinase and associated proteins that are involved in its activation and localization. The majority of proteins required for the Cvt and autophagy pathways localize to a perivacuolar pre-autophagosomal structure. We show that the Cvt13 and Cvt20 proteins are required for transport of precursor Ape1 through the Cvt pathway. Both proteins contain phox homology domains that bind PtdIns(3)P and are necessary for membrane localization to the pre-autophagosomal structure. Functional phox homology domains are required for Cvt pathway function. Cvt13 and Cvt20 interact with each other and with an autophagy-specific protein, Apg17, that interacts with Apg1 kinase. These results provide the first functional connection between the Apg1 and PtdIns 3-kinase complexes. The data suggest a role for PtdIns(3)P in the Cvt pathway and demonstrate that this lipid is required at the pre-autophagosomal structure.

Autophagy (Apg) and the cytoplasm to vacuole targeting (Cvt) pathway are distinct membrane trafficking processes that share overlapping mechanistic components. During vegetative conditions, these components are constitutively employed in a biosynthetic capacity by the Cvt pathway. However, under nutrient starvation conditions, intracellular signaling events induce the autophagic pathway, enabling the degradative recycling of proteins and organelles in the lysosome-like vacuole of yeast (1-4). Efficient regulatory mechanisms are necessary to balance these biosynthetic and degradative activities and maintain cellular homeostasis. The majority of cvt mutants are allelic with apg and aut mutants that are defective in autophagy (1,2). The characterization of their gene products has begun to delineate the molecular events of the Cvt and autophagy pathways (reviewed in Refs. 5 and 6). There is significant overlap between the targeting and transport mechanisms of these pathways; however, there also exist proteins specific for each. Interestingly, the specific components appear to interact directly or indirectly with a central component, Apg1. Apg1 is a Ser/Thr protein kinase, which is required for both the Apg and Cvt pathways (7-9). The substrate of Apg1 kinase has not been identified, although Apg1 is known to be involved in autophosphorylation (7). Apg1 catalytic activity is presumably regulated through the association of specific proteins such as Apg13, another component required for both the Cvt pathway and autophagy (8-10). Apg13 interacts with Vac8, a Cvt-specific protein initially characterized as functioning in vacuolar inheritance (9,11). The Apg1 kinase also physically interacts with Cvt9 and Apg17, proteins that are specific for the Cvt and Apg pathways, respectively (8,12).
A second complex that is required for both the Cvt and autophagy pathways includes the yeast PtdIns 3-kinase Vps34. This lipid kinase functions as part of a core complex, consisting of Vps34, Vps15, and Vps30/Apg6. Either Vps38, which is required for Prc1 transport, or Apg14, which is required for the autophagy and Cvt pathways, is also part of the complex (13). The association of Vps38 or Apg14 appears to confer specificity, perhaps by targeting the PtdIns 3-kinase to its functional location. Both the Apg1 kinase and the Apg14-containing lipid kinase complex localize to perivacuolar punctate structures. At present, a direct connection between these complexes has not been established. To gain a better understanding of the molecular mechanisms involved in the Cvt and autophagy pathways, we carried out a systematic analysis of a yeast ORF deletion library. Our screen for strains defective for import of precursor Ape1 (prApe1) identified several previously uncharacterized mutants. Two of these mutants, yjl036w/snx4 and ydl113c, renamed cvt13 and cvt20, respectively, correspond to genes that encode proteins reported to interact with Apg17 by the two-hybrid assay (14). In addition, both proteins contain phox homology (PX) domains. The PX domain is an ~120-amino acid domain that functions as a phosphoinositide-binding module. Protein-lipid interactions enable the PX domain specifically to target proteins to organelle membranes. Through regulation of intracellular localization, the PX domain has been implicated in a wide variety of cellular processes including cell growth, intracellular signaling, cytoskeletal organization, neutrophil defense, protein transport, and vesicular trafficking (reviewed in Refs. 15 and 16). A recent analysis extended phosphoinositide binding to all 15 PX domains encoded by the Saccharomyces cerevisiae genome (17). In most cases, however, a connection has not yet been made between the PX domain and physiological function. In this paper we show that Cvt13 and Cvt20 interact with a component of the Apg1 kinase complex and are specifically required for the Cvt pathway. The PX domains within Cvt13 and Cvt20 drive PtdIns(3)P-specific recruitment to a perivacuolar structure. Mutations in the PX domain block function. Finally, we demonstrate that correct localization of Cvt13 and Cvt20 requires the synthesis of PtdIns(3)P by the Apg14-containing lipid kinase complex at the pre-autophagosomal structure.

EXPERIMENTAL PROCEDURES

Strains and Media-The S. cerevisiae knockout library in strains BY4742 and BY4743 was purchased from ResGen (Invitrogen). The strains used in this study are listed in Table I. The cvt13Δ, cvt20Δ, and apg17Δ strains were generated by PCR-mediated disruption of the YJL036w, YDL113c, and YLR423c loci, respectively, using amplified sequences from either the S. cerevisiae TRP1 gene, the Saccharomyces kluyveri HIS3 gene, or the Escherichia coli kan^r gene (18) flanked by sequences homologous to the CVT13, CVT20, or APG17 coding sequences. All primer sequences will be made available upon request. PCR-based integration of YFP at the 3′ end of CVT13 and CVT20 was used to generate strains expressing fusion protein under control of the native CVT13 or CVT20 promoter. The template for integration of YFP was pDH5 (19). Media for growth of yeast strains, starvation, and induction of peroxisomes were described previously (20). Materials and Antiserum-The preparation of antisera to Ape1 (21), Fox3 (22), Prc1, and Pho8 (23) were described previously.
Antibodies to GST and the hemagglutinin (HA) and c-Myc epitopes were from Santa Cruz Biotechnology (Santa Cruz, CA). Anti-mouse horseradish peroxidase conjugate was from Zymed Laboratories Inc. (South San Francisco, CA). Antiserum to Pgk1 and Apg13 was generously provided by Dr. Jeremy Thorner (University of California, Berkeley) and Dr. Yoshinori Ohsumi (National Institute for Basic Biology, Okazaki, Japan), respectively. Other reagents were identical to those described previously (20,24,25). Plasmid Constructions-A carboxyl-terminal fusion of HA to Cvt13 (pCVT13-HA(426)) was made by PCR amplification of the CVT13 ORF and upstream sequence from genomic DNA and ligation into pRS426-3xHA. Additional details of the plasmid constructions will be provided upon request. To construct a carboxyl-terminal fusion of GFP to Cvt13, GFP was excised from pRS416GFP (26) using EagI and XhoI. After EagI, but prior to XhoI digestion, the 5′ overhang was filled in, creating a blunt end on GFP. Plasmid pCVT13-HA was digested with SmaI and XhoI to remove the DNA encoding the HA epitope and subsequently ligated with GFP, yielding pCVT13-GFP(426). To construct amino-terminal HA fusion plasmids, the 3xHA epitope was PCR-amplified from pRS416-3xHA. The PCR product was digested with SpeI and ClaI. Plasmid pProtA-CVT20 was digested with SpeI and ClaI to remove the DNA encoding protein A and subsequently ligated with the above PCR product, yielding pHA-Cvt20(416,424). DNA encoding the PX domains of Cvt13 and Cvt20 was amplified by PCR with primers containing 5′ restriction enzyme sites that allow for in-frame gene fusions with the carboxyl terminus of GST. Amplification included the DNA encoding the PX domain of Cvt13 (amino acids 156-295) and Cvt20 (amino acids 173-301). PCR product and plasmid pGEX-KG (Amersham Biosciences) were digested with the appropriate restriction enzymes and ligated to generate plasmids pGEX-KG-CVT13PX and pGEX-KG-CVT20PX. Alkaline Phosphatase Enzyme Assay-Induction of autophagy was estimated by measuring the activity of alkaline phosphatase from Pho8Δ60 with p-nitrophenyl phosphate as substrate essentially according to methods published previously (29). Protein concentrations were determined using the BCA protein assay (Pierce). Protein A Affinity Isolation Analyses-For isolation of protein A fusion proteins and associated proteins, 30 A600 equivalents of yeast cells were converted to spheroplasts as described previously (30). Spheroplasts were washed in phosphate-buffered saline, pH 7.4, 1 M sorbitol, and 1 mM phenylmethylsulfonyl fluoride. Spheroplasts were lysed on ice in phosphate-buffered saline lysis buffer, and protein A was recovered on Dynabeads as described previously (27). Bound protein was eluted with sample buffer, resolved by SDS-PAGE, and visualized by immunoblotting. Fluorescence Microscopy-Cells were grown to midlog phase in SMD, induced with 10 μM CuSO4 where appropriate for 1-2 h, and viewed on a Nikon E-800 fluorescence microscope as described previously (27). FM 4-64 labeling was as described previously (12). Protein Purification and Protein-Lipid Overlay Assays-GST-Cvt20 PX domain fusion protein was purified from glutathione-Sepharose beads by elution with reduced glutathione according to the manufacturer's specifications. Purified protein was then desalted with PD-10 columns and concentrated to 5 μg/ml with Centriprep YM-30 centrifugal filters. Protein-lipid overlay assays were conducted as described previously (24,31).
Membranes were incubated with purified GST-Cvt13 and GST-Cvt20 PX domains at 0.05 μg/ml overnight at 4°C. Binding of GST fusions to phosphoinositides was detected by enhanced chemiluminescence. No binding of GST alone to phosphoinositides was detected.

RESULTS

cvt13Δ and cvt20Δ Cells Are Specifically Defective in the Cvt Pathway-In order to develop a comprehensive understanding of the protein components involved in cytoplasm to vacuole protein transport pathways, we undertook a systematic analysis of the yeast ORF deletion library. Each strain was initially screened for the accumulation of precursor aminopeptidase I (prApe1). The vacuolar hydrolase, aminopeptidase I (Ape1), is synthesized in the cytosol as a precursor protein and is transported to the vacuole through the Cvt or Apg pathway (reviewed in Refs. 5 and 6). Two of the mutants that we identified were cvt13 (YJL036w/snx4) and cvt20 (YDL113c). Both mutants accumulated the precursor form of Ape1 in rich medium, indicating a block in the Cvt pathway (Fig. 1A).

FIG. 1. The cvt13 and cvt20 mutants are specifically defective in the Cvt pathway. A, processing of vacuolar hydrolases. Cells from wild type (WT) (SEY6210), cvt13Δ, and cvt20Δ strains were pulse-labeled for 10 min and chased for either 90 (Ape1) or 30 min (Prc1 and Pho8). Ape1, Prc1, and Pho8 were immunoprecipitated from cell lysates and resolved by SDS-PAGE. Prc1 and Pho8 processing was essentially normal in both mutant strains. B, kinetics of transport by the Cvt pathway. Cells from wild type (SEY6210), apg1Δ (NNY20), cvt13Δ (D3Y108), cvt20Δ (D3Y109), and apg17Δ strains were pulse-labeled for 10 min in SMD, washed, and resuspended in either SMD or SD-N medium, and chased for the indicated times. Ape1 was immunoprecipitated from cell lysates and resolved by SDS-PAGE. Note that there is a background band that migrates just below the position of mApe1. C, Pho8 activity assay for autophagy. Wild type (TN124), cvt13Δ (D3Y111), cvt20Δ (D3Y112), and apg17Δ (D3Y113) cells expressing Pho8Δ60 were shifted from nutrient-rich medium (white bars) to SD-N medium (black bars) for 4 h. Autophagy induction was determined by a Pho8 activity assay as described under "Experimental Procedures." Error bars represent the standard deviation from three separate experiments. The cvt13 and cvt20 mutants are not defective in autophagic induction. D, pexophagy analysis. Cells from wild type (BY4742) and the cvt13Δ and cvt20Δ mutant strains in the BY4742 background were grown under conditions that induce peroxisomes (see "Experimental Procedures"), washed, and resuspended in SD-N for the time indicated. The presence of the peroxisomal thiolase enzyme, Fox3, was detected by immunoblotting. The S. cerevisiae strains cvt13Δ and cvt20Δ are defective in pexophagy.

The cvt13 mutant was initially isolated in a random screen for mutants defective in prApe1 maturation (1), but the corresponding gene had not been identified. A yeast genome database search for sequences with similarity to human sorting nexins (SNXs) identified a number of genes, including the open reading frame YJL036w (32). The sequence similarity of YJL036w with human SNX4 grouped it as the yeast ortholog SceSNX4. However, due to its functional placement within the Cvt transport pathway, YJL036w was renamed CVT13. Sequencing of the YJL036w gene from the cvt13 mutant revealed that deletion of the guanine base at position 113 of the ORF changed amino acid 38 and induced a frameshift.
No function for the CVT20 gene product has been shown previously in prApe1 transport. To determine whether these proteins functioned in additional vacuolar transport pathways, we examined the processing of the resident vacuolar hydrolases carboxypeptidase Y (Prc1) and alkaline phosphatase (Pho8). Prc1 and Pho8 transit to the vacuole through the CPY and ALP pathways, respectively, and are useful marker proteins to monitor the state of these transport processes (33). Both Prc1 and Pho8 undergo glycosylation in the endoplasmic reticulum and Golgi complex and are proteolytically processed in the vacuole. The molecular mass shifts that correspond to these modifications provide a convenient means to monitor the transport process. Yeast cells were subjected to a radioactive pulse/chase analysis to determine the kinetics of protein delivery to the vacuole. Wild type, cvt13Δ, and cvt20Δ cells efficiently processed both proteins, although the cvt13Δ strain displayed a weak defect in processing of Prc1 and accumulated ~5% of the protein in the p2 Golgi-modified form (Fig. 1A). These results suggest that the Cvt13 and Cvt20 proteins do not have major functions in the Prc1 and Pho8 vacuolar delivery pathways. Precursor Ape1 is transported to the vacuole through both the Cvt pathway and autophagy (2). Most of the characterized apg/cvt mutants are defective in delivery through both pathways and accumulate prApe1 under both rich and starvation conditions (reviewed in Ref. 34). Some mutants, however, are specific to each pathway. For example, cvt9Δ and vac8Δ strains are blocked in the Cvt pathway but are not completely blocked for autophagic import. These mutants are able to "reverse" their prApe1 accumulation defect through starvation-induced autophagy (9,12). In wild type cells, prApe1 is matured with a half-time of ~30 min in synthetic minimal medium containing nitrogen (SMD). In contrast, very little mature Ape1 was detected in cvt13Δ and cvt20Δ cells under these conditions (Fig. 1B). The cvt13 and cvt20 mutants reversed the accumulation defect and processed prApe1 when shifted to SD-N, nitrogen starvation conditions (Fig. 1B). As a control, we examined the apg1Δ mutant. Apg1 is a Ser/Thr kinase essential for both Cvt transport and autophagy (7-9). As expected, apg1Δ cells did not import prApe1 under either condition. High-throughput two-hybrid analysis indicated a possible three-way interaction between Cvt13, Cvt20, and Apg17 (8,14). In contrast to cvt9Δ and vac8Δ strains, the apg17Δ mutant is not defective for prApe1 import through the Cvt pathway but is defective for autophagy (8,9,12). We found that apg17Δ cells processed prApe1, albeit with a slightly lower efficiency (Fig. 1B). It is thought that starvation conditions result in a signal transduction event that induces autophagy and causes the Apg/Cvt machinery to switch from the production of Cvt vesicles to the formation of autophagosomes. As a result, precursor Ape1 is selectively transported to the vacuole by autophagosomes under starvation conditions (3). Even though autophagy is defective in the apg17Δ mutant (8), we found that prApe1 processing occurred under starvation conditions in apg17Δ cells (Fig. 1B). This result suggests either that transport through the Cvt pathway normally continues even when the autophagic pathway is induced or that Apg17 is necessary for the negative regulation of the Cvt pathway.
The data for the apg17Δ strain indicate that the ability to process prApe1 under starvation conditions is not a valid measure of autophagic function. Accordingly, we utilized an alternative approach to assess further the autophagic capacity of the cvt13Δ and cvt20Δ mutant strains. Pho8Δ60 is a soluble derivative of the integral membrane vacuolar protein Pho8 (alkaline phosphatase) that lacks the transmembrane domain. Pho8Δ60 is only delivered to the vacuole through the autophagy pathway (35). Upon vacuole delivery it is matured into its active form. Cells from wild type (TN124) and the cvt13Δ, cvt20Δ, and apg17Δ mutant strains expressing the Pho8Δ60 protein were measured for alkaline phosphatase activity following a shift to starvation conditions. Comparable with wild type cells, alkaline phosphatase enzymatic activity in cvt13Δ and cvt20Δ cells increased significantly after shifting to SD-N (Fig. 1C), indicating that Pho8Δ60 delivery and subsequent processing to its enzymatically active form occurred in the vacuole. By contrast, no increase in enzyme activity was detected upon shift to SD-N in the autophagy mutant apg17Δ, in agreement with previous studies (8). We also examined a cvt13Δ cvt20Δ double mutant strain to determine whether the autophagic capacity of the cvt13Δ and cvt20Δ strains was due to functional overlap between these two proteins. Alkaline phosphatase enzymatic activity in the double mutant strain was comparable with wild type (data not shown), suggesting that Cvt13 and Cvt20 are not essential for autophagy. Yeast mutants defective for autophagy are unable to recycle cytosolic material and thus die rapidly upon shifting to starvation conditions. When cvt13Δ and cvt20Δ cells were shifted to SD-N, their viability was comparable with wild type cells (data not shown), further indicating that autophagy is not defective in these strains. Peroxisomes are induced under specific nutritional conditions such as growth in oleic acid as the sole carbon source. When cells grown on oleic acid are shifted to glucose-containing medium, peroxisomes are specifically degraded by a process termed pexophagy. The molecular machinery involved in pexophagy has been shown to overlap with that of the autophagy and Cvt pathways (reviewed in Ref. 34). We examined the degradation of the peroxisomal enzyme Fox3 (thiolase) to determine the requirement of Cvt13 and Cvt20 for the specific uptake of peroxisomes by pexophagy. Cells were grown in an oleic acid-containing medium to induce peroxisomes, switched to SD-N, and the level of Fox3 monitored by immunoblotting (22). In wild type cells, Fox3 was rapidly degraded. However, Fox3 was stable in cvt13Δ and cvt20Δ cells, indicating they are defective in the uptake of peroxisomes by pexophagy (Fig. 1D). We did detect a very slight decrease in Fox3 over the time course examined in the cvt13Δ and cvt20Δ strains, but the level was clearly stabilized relative to the wild type strain. With the exception of Cvt19, all cvt and apg mutants characterized thus far display defects in degradation of peroxisomes by pexophagy (9,22). Both Cvt13 and Cvt20 Distribute between the Cytosol and Punctate Structures near the Vacuole-To gain insight into the function of Cvt13 and Cvt20, we examined the localization of these proteins in vivo through the use of GFP fusion proteins. Expression of Cvt13-GFP under the endogenous CVT13 promoter fully complemented the prApe1 transport defect in a cvt13Δ mutant, indicating that it was a functional chimera (data not shown).
GFP-Cvt20 and GFP-Apg17, both under control of the CUP1 promoter, were induced with a relatively low concentration of CuSO4 (10 μM) for 1-2 h. GFP-Cvt20 expression fully complemented the prApe1 defect in a cvt20Δ mutant (data not shown). After labeling the vacuoles with the dye FM 4-64, cells expressing the various GFP constructs were examined by fluorescence microscopy. Cvt13-GFP and GFP-Cvt20 displayed both a diffuse cytosolic distribution and punctate structures near or on the surface of FM 4-64-labeled vacuoles (Fig. 2). Additional faint perivacuolar and vacuolar rim staining was also detectable. GFP-Apg17 displayed punctate structures similar to those seen with Cvt13-GFP and GFP-Cvt20, but no cytosolic staining was evident. GFP-Apg17 punctate structures were found adjacent to the vacuole; however, in contrast to Cvt13-GFP and GFP-Cvt20, they were also seen in other areas of the cell (Fig. 2). The perivacuolar staining seen with both GFP-Cvt20 and Cvt13-GFP is reminiscent of the staining pattern seen with many of the proteins that are involved in the Apg and Cvt pathways (12,20,25,27). Although the nature of this perivacuolar structure is still unknown, it appears to play a physiological role in Cvt vesicle and autophagosome formation (27,36). We co-localized Cvt13 and Cvt20 with Cvt9, Cvt19, and Aut7 to determine whether these components localize to the same perivacuolar structure. Cvt19 is a cargo receptor or adaptor required for transport of the prApe1 complex (37), and Aut7 plays a role in vesicle formation and size regulation (25,38-40). Co-localization of Aut7, Cvt19, and Cvt9 has been demonstrated recently (27). Chromosomally expressed Cvt13-YFP or Cvt20-YFP displayed punctate structures that co-localized with Cvt19-CFP, CFP-Cvt9, and CFP-Aut7 (Fig. 3A). A number of cells expressing Cvt13-YFP and Cvt20-YFP also displayed diffuse cytosolic staining. These results indicate that a population of Cvt13 and Cvt20 localizes to the same perivacuolar structure as Cvt9, Cvt19, and Aut7. The localization of some Apg/Cvt proteins has been shown to depend on other proteins within these pathways. For example, membrane recruitment of Aut7 requires the function of the Apg12-Apg5 and the Aut1 conjugation systems (25,40). We examined the localization of Cvt19-CFP and its co-localization with YFP-Cvt9 and YFP-Aut7 in the cvt13Δ and cvt20Δ strains. Tagged proteins were co-expressed in all combinations, and the localization of each protein appeared normal (Fig. 3B and data not shown). These data suggest that Cvt19, Cvt9, and Aut7 localize to the perivacuolar structure independent of Cvt13 and Cvt20. Furthermore, these results imply that Cvt13 and Cvt20 are not required for the functions associated with these proteins, such as Cvt19-prApe1 complex binding (37) and Aut7-dependent membrane expansion (38). Cvt13 and Cvt20 Fractionate with Membranes-To extend the localization analysis in vitro, we examined the subcellular localization of Cvt13 and Cvt20 by differential centrifugation. The cvt13Δ and cvt20Δ strains were transformed with plasmids pCVT13-HA and pHA-CVT20, respectively, and grown to midlog phase in SMD. The expression of the respective HA epitope-tagged proteins completely complemented the prApe1 accumulation defect in the cvt13Δ and cvt20Δ strains (data not shown), indicating that the addition of the HA epitope did not alter Cvt13 and Cvt20 function.
Spheroplasts were lysed and centrifuged to separate the lysates into low speed supernatant (S13) and pellet (P13) fractions (see "Experimental Procedures"). The S13 fraction was then further separated into high speed supernatant (S100) and pellet (P100) fractions. Immunoblot analysis revealed that HA-tagged Cvt13 and Cvt20 were distributed essentially equally throughout all pellet and supernatant fractions (data not shown), suggesting the presence of both cytosolic and membrane-associated protein. The distribution of the cytosolic protein, phosphoglycerate kinase (Pgk1), confirmed separation of membrane and soluble proteins. The nature of the Cvt13 and Cvt20 pellet association was examined by treating the pellet fraction with various reagents. Treating the pellet fractions with 1 M salt, 3 M urea, or 1% Triton X-100 resulted in partial stripping of HA-tagged Cvt13 and Cvt20 into the supernatant fraction. However, both proteins were completely removed from the pellet fraction by extraction with 0.1 M Na2CO3, pH 10.5 (data not shown). These results indicate that Cvt13 and Cvt20 exist as both cytosolic and peripheral membrane proteins, and are in agreement with the in vivo fluorescence analysis that revealed both a punctate and diffuse cytosolic pool of both proteins (Fig. 2).

FIG. 2. The cvt13Δ (D3Y108), cvt20Δ (D3Y109), and apg17Δ strains were transformed with plasmids encoding GFP fused to the carboxyl or amino termini of the corresponding ORFs (pCVT13-GFP, pGFP-CVT20, and pGFP-APG17, respectively). Cells were grown in SMD, treated with FM 4-64 to label vacuoles, and visualized by fluorescence microscopy as described under "Experimental Procedures." Cvt13-GFP and GFP-Cvt20 display punctate perivacuolar dots and diffuse cytosolic staining. DIC, differential interference contrast.

Cvt13, Cvt20, and Apg17 Interact-The analyses of GFP localization patterns along with two-hybrid studies suggested that Cvt13, Cvt20, and Apg17 may compose a protein complex (14). Previously published data have demonstrated that Apg1 physically interacts with Apg17, an autophagy-specific component, and Cvt9, a Cvt-specific component (8,12). To investigate potential physical associations among Cvt13, Cvt20, and Apg17, a series of affinity purification experiments was undertaken with fusions to protein A. Either protein A alone or a protein A fusion was co-expressed in combination with Cvt13-HA. The expression of protein A-tagged Cvt13 or Cvt20 complemented the prApe1 accumulation defect in cvt13Δ and cvt20Δ strains, respectively (data not shown). Cells were grown to midlog phase, converted to spheroplasts, and lysed in the presence of detergent. Protein A and associated proteins were then affinity-isolated by binding to IgG-coupled Dynabeads as described under "Experimental Procedures." The recovered proteins were separated by SDS-PAGE and examined by immunoblot. Cvt13-HA bound to protein A-Cvt13, protein A-Cvt20, and protein A-Apg17 (Fig. 4). Cvt13-HA did not bind to protein A alone, indicating that the interaction with Cvt13-HA was dependent on Cvt13, Cvt20, or Apg17 fused to protein A. These data support the existence of a protein complex with a three-way physical interaction between Cvt13, Cvt20, and Apg17. This complex may also include multiple copies of Cvt13. At least two proteins that interact with Apg1 modulate its kinase activity. One of these proteins, Apg13, is hyperphosphorylated in a Tor-dependent manner under rich media conditions.
This form of Apg13 has a low affinity for Apg1, and under these conditions in vitro Apg1 kinase activity is reduced (8). Starvation results in dephosphorylation of Apg13, a greater affinity for Apg1, and an increase in Apg1 kinase activity in vitro. It is possible that the phosphorylation state of Apg13 is the signal that regulates the conversion between the Cvt and Apg pathways. We determined whether the phosphorylation state of Apg13 was altered in the cvt13Δ, cvt20Δ, or apg17Δ mutants. Wild type or mutant cells were grown in SMD and shifted to starvation conditions for 10 min, and Apg13 was examined by immunoblot (Fig. 5). In both wild type and mutant cells, Apg13 was seen primarily as the hyperphosphorylated form in SMD prior to shifting to starvation conditions. A smaller population of the protein was also detected as the hypophosphorylated form. Within 10 min in SD-N medium, the protein was efficiently dephosphorylated and migrated at a single position on the gel in all of the strains examined. Shifting the culture back to SMD resulted in rapid hyperphosphorylation (data not shown). These results suggest that Cvt13, Cvt20, and Apg17 do not affect the Cvt and Apg pathways by regulating the phosphorylation of Apg13. The PX Domains of Cvt13 and Cvt20 Bind the Phosphoinositide PtdIns(3)P and Are Necessary for Cvt Transport-The PX domain has been identified as a novel ~120-residue domain that functions as a phosphoinositide-binding module (24). Sequence database analyses identified a number of proteins in yeast, including Cvt13 and Cvt20, that contain a PX domain (17). However, the in vivo role of the PX domain for Cvt13 and Cvt20 has not been addressed. The PX domain binding specificity of Cvt13 and Cvt20 was examined using a protein-lipid binding assay (31). Nitrocellulose membranes were spotted with decreasing amounts of various phosphoinositides and incubated with affinity-purified PX domains of Cvt13 or Cvt20 fused to glutathione S-transferase (GST) (Fig. 6, A and B). Both GST-PX domain fusions displayed a preferential interaction with PtdIns(3)P and a weaker interaction with PtdIns(3,5)P2. The VPS34 gene encodes the sole yeast PtdIns 3-kinase (41). Fab1 is a PtdIns(3)P 5-kinase, which phosphorylates PtdIns(3)P to produce PtdIns(3,5)P2 (42). Localization of Cvt13-GFP and GFP-Cvt20 was examined in both vps34Δ and fab1Δ strains to confirm the PtdIns(3)P specificity and determine whether membrane binding was dependent upon phosphoinositide synthesis. Cvt13-GFP and GFP-Cvt20 punctate staining completely redistributed into the cytosol in the vps34Δ strain (Fig. 6C). In contrast, fluorescence staining in the fab1Δ strain remained similar to that seen in wild type cells, with distribution between the cytosol and punctate structures.

FIG. 3. Cvt13, Cvt20, Cvt9, the cargo specificity factor Cvt19, and the vesicle component Aut7 co-localize on a punctate, perivacuolar structure. A, cells of strains PSY14 (CVT13-YFP) and PSY15 (CVT20-YFP) were transformed with pCVT19-CFP, pCuCFP-CVT9, or pCuCFP-AUT7. Transformed cells were grown to midlog stage in SMD and shifted to YPD for 2-3 h prior to microscopy analysis. Images were captured and analyzed as described under "Experimental Procedures." B, co-localization of Cvt19 with Cvt9 and Aut7 occurs independent of Cvt13 and Cvt20.
D3Y108 (cvt13Δ) and D3Y109 (cvt20Δ) cells expressing YFP-Cvt9 or YFP-Aut7 along with Cvt19-CFP were prepared and examined as in A. DIC, differential interference contrast.

These data indicate that PtdIns(3)P binding is necessary for Cvt13 and Cvt20 membrane localization to perivacuolar punctate structures. Analysis of the PX domain by NMR spectroscopy identified a basic binding pocket and membrane attachment loop as the structural basis for PtdIns(3)P binding (24). Altering a key tyrosine residue in one of two basic motifs, RR(Y/F) (where R is arginine, Y is tyrosine, and F is phenylalanine), within the binding pocket (Fig. 7A) has been shown to block phosphoinositide binding (24,43). A Tyr to Ala mutation of the highly conserved RRY motif was generated within the PX domain of Cvt13 and Cvt20. Cvt13(Y79A)-GFP and GFP-Cvt20(Y193A) punctate staining completely redistributed to the cytosol (Fig. 7B). Immunoblot analysis indicated that Cvt13(Y79A)-GFP and GFP-Cvt20(Y193A) were stable and corresponded to the predicted molecular weights (data not shown). Thus, depleting the cell of PtdIns(3)P or mutating the PX domain abolished intracellular punctate localization of Cvt13 and Cvt20. Next, we investigated the role of the Cvt13 and Cvt20 PX domains in Cvt transport. Expression of Cvt13(Y79A)-GFP in the cvt13Δ strain showed an incomplete block in prApe1 processing (data not shown). A similar result was seen with GFP-Cvt20(Y193A) expressed in the cvt20Δ background. Because Cvt13 and Cvt20 physically interact, partial complementation of prApe1 transport in these single PX domain mutants may have occurred through membrane targeting by the wild type PX domain partner. To address this possibility, we generated a cvt13Δ cvt20Δ double mutant strain expressing Cvt13-HA, Cvt13(Y79A)-HA, GFP-Cvt20, and GFP-Cvt20(Y193A) in varying combinations. When the PX domain of either Cvt13 or Cvt20 contained the Tyr to Ala mutation, prApe1 processing was reduced to ~50%. However, when both PX domains contained the Tyr to Ala mutation, prApe1 processing was almost completely blocked (Fig. 7C). Thus, the Cvt13 and Cvt20 PtdIns(3)P-binding capacity is necessary for Cvt transport.

FIG. 4 (legend, partial). ... with IgG-coated Dynabeads and visualized by immunoblotting with antiserum to HA. For each experiment ~2% of the total lysate or 10% of the total eluate was loaded per lane.

FIG. 5. Cells from wild type (WT) (SEY6210), cvt13Δ (D3Y108), cvt20Δ (D3Y109), or apg17Δ strains overexpressing Apg13 were grown in SMD and then shifted to SD-N at time 0. Samples were collected after 10 min, and Apg13 was detected by immunoblotting. Nutrient-dependent dephosphorylation of Apg13 is normal in strains cvt13Δ, cvt20Δ, and apg17Δ. A schematic representation of the Apg13 dephosphorylation and interaction with Apg1 is depicted below the blots.

Apg14, a Component of the PtdIns 3-Kinase Complex, Is Necessary for Cvt13 and Cvt20 Punctate, Perivacuolar Localization-To investigate further possible Cvt13 and Cvt20 connections with the PtdIns 3-kinase complex, co-localization with Cvt19 was examined in the apg14Δ mutant strain. Apg14 is a component of the PtdIns 3-kinase complex required for the Cvt pathway and autophagy but not for the CPY pathway. As shown in Fig. 3, Cvt13-YFP and Cvt20-YFP co-localize with Cvt19-CFP in wild type cells. Similarly, Cvt19-CFP co-localization with Cvt13-YFP and Cvt20-YFP was normal in aut7Δ cells that are defective in the Cvt pathway and autophagy (Fig. 8). In contrast, punctate structures labeled with Cvt13-YFP or Cvt20-YFP were less frequent and of lower intensity in the apg14Δ strain (Fig. 8). Furthermore, in those cells that displayed punctate staining for Cvt13-YFP or Cvt20-YFP, the labeled structures seldom co-localized with Cvt19-CFP (Fig. 8).
Loss of Cvt13 and Cvt20 punctate perivacuolar localization in this mutant strain occurred concomitant with an increased staining of more diffuse punctate structures near the vacuole and in the cytosol. These results suggest Apg14 may be dispensable for Cvt13 and Cvt20 PtdIns(3)P binding at other membranes that contain this lipid, such as the endosome, but necessary for recruitment to the pre-autophagosomal structure where they participate in Cvt transport. This result is in contrast to the complete loss of punctate staining seen with the vps34Δ strain that completely lacks PtdIns 3-kinase activity (Fig. 6). These data represent the first evidence of the requirement of PtdIns(3)P at the presumed site of pre-autophagosome/Cvt vesicle formation.

DISCUSSION

Cvt13 and Cvt20 Are PX Domain Proteins That Associate with Components of the Apg1 Kinase Signaling Complex-The studies presented in this paper have defined two new proteins in the Cvt pathway, Cvt13 and Cvt20. These proteins contain PX domains that bind to PtdIns(3)P. Both Cvt13 and Cvt20 are localized to a punctate perivacuolar structure, and binding to PtdIns(3)P is critical for this localization. Finally, correct localization of both proteins is necessary for import of prApe1 through the Cvt pathway. Cvt13 is the first Cvt pathway-specific protein that has been characterized that has a mammalian homolog, Snx4 (32). This is an interesting finding because there are no data at present demonstrating that the Cvt pathway exists in any organism other than S. cerevisiae. Autophagy and the Cvt pathway are distinct membrane trafficking processes that share overlapping mechanistic components. These components include the Apg1 kinase and its associated proteins, Apg13, Apg17, and Cvt9 (8,12). A second set of proteins required for both pathways is the PtdIns 3-kinase complex composed of Vps34, Vps15, Apg6, and Apg14 (13). A separate PtdIns 3-kinase complex containing Vps38 instead of Apg14 is needed for protein transport through the CPY pathway. This latter complex is responsible for the synthesis of the majority of PtdIns(3)P in the cell, and at present it has not been possible to assess directly the site of action of the Apg14-dependent PtdIns 3-kinase. Cvt13 and Cvt20 were shown to interact with Apg17 by a two-hybrid screen (14). These proteins were also identified as belonging to a family containing the PX domain (17). However, neither of these analyses provided information on the physiological roles of Cvt13 or Cvt20. We identified the cvt13Δ and cvt20Δ strains in a screen of the S. cerevisiae deletion library as being defective in the processing of the vacuolar hydrolase Ape1. Both proteins are necessary for vacuole delivery of prApe1, and the mutant strains accumulated the precursor form of Ape1 in rich medium, indicating a defect in the Cvt pathway (Fig. 1). In contrast, the cvt13Δ and cvt20Δ strains were not defective in autophagy. Other vacuolar transport systems including the CPY and ALP pathways are essentially normal in cvt13Δ and cvt20Δ cells. However, both mutants are defective in the uptake of peroxisomes by pexophagy. Consistent with two-hybrid results (14), immunoisolation experiments demonstrate that Cvt13, Cvt20, and Apg17 directly interact with each other (Fig. 4). In addition, Cvt13 forms homodimers or higher order oligomers. These data support a model in which Cvt20, Apg17, and one or more Cvt13 subunits form a protein complex. This complex may include additional components, such as Apg1, Apg13, and Vac8.
Apg1 has recently been ascribed a central role in the events signaling a switch between the Cvt and autophagy pathways (8). We were unable to detect an interaction between Apg1 and either Cvt13 or Cvt20 in our affinity isolation experiments (Fig. 4). This may simply mean that Apg1 does not interact directly with either protein, and our experimental conditions did not preserve ternary or higher order complexes. Alternatively, this result may indicate the presence of multiple complexes containing discrete subsets of proteins. Cvt13, Cvt20, Cvt9, Cvt19, and Aut7 Co-Localize on a Punctate, Perivacuolar Structure-Biochemical analyses indicate that Cvt13 and Cvt20 are membrane-associating proteins present in both a soluble and a membrane fraction. In vivo analyses of GFP-tagged proteins suggest that Cvt13 and Cvt20 distribute between the cytosol and punctate structures near the vacuole (Fig. 2). Recent studies have placed Apg1 and Cvt9 at a perivacuolar compartment (12,27). Various other components involved in the Cvt and Apg pathways, including the autophagy/Cvt-specific PtdIns 3-kinase complex, have also been localized to this pre-autophagosomal structure (27,36). Fluorescence microscopy showed Cvt13 and Cvt20 co-localization with Cvt9, Cvt19, and Aut7 (Fig. 3A). This perivacuolar compartment may mark the site of vesicle nucleation and cargo sequestration and has been termed the pre-autophagosomal structure (36). Cvt13 and Cvt20 PX Domains Are PtdIns(3)P-specific Binding Modules That Direct Recruitment to a Perivacuolar Structure and Are Necessary for Cvt Transport-PX domains are involved in the targeting of various proteins to membranes that contain PtdIns(3)P. Proteins with PX domains have been implicated in a range of cellular processes (reviewed in Ref. 15). Until now, PX domain-containing proteins have not been implicated in the Cvt pathway or autophagy. A mutation in the RR(Y/F) basic motif demonstrates that Cvt13 and Cvt20 membrane association requires a functional PX domain (Fig. 7).

FIG. 8. Deletion of APG14 disrupts Cvt13 and Cvt20 co-localization with Cvt19. The aut7Δ and apg14Δ strains expressing chromosomal Cvt13-YFP or Cvt20-YFP, PSY24, PSY25, PSY26 and PSY27, respectively, were transformed with pCVT19-CFP. Cells were prepared and examined as in Fig. 3. Contrast was enhanced for YFP in the apg14Δ strain relative to the aut7Δ strain to allow detection of the Cvt13-YFP and Cvt20-YFP signals. DIC, differential interference contrast.

FIG. 7 (legend, partial). A, ... (17,24). Elements of Vam7 secondary structure are shown below the sequences (24). Two highly conserved PX domain sequence motifs, (R/K)(R/K)(Y/F)XXFXXLXXXL and R(R/K)XXLXX(Y/F), are highlighted in green, and the proline-rich motif, PXXP, is highlighted in purple (reviewed in Ref. 15). The Cvt13 and Cvt20 point mutations are indicated by an asterisk. B, Cvt13 and Cvt20 punctate localization requires a functional PX domain. D3Y108 (cvt13Δ) cells expressing Cvt13-GFP (WT) or Cvt13(Y79A)-GFP (Tyr to Ala) and D3Y109 (cvt20Δ) cells expressing GFP-Cvt20 (WT) or GFP-Cvt20(Y193A) (Tyr to Ala) were grown in SMD and visualized by fluorescence microscopy. C, prApe1 trafficking requires functional Cvt13 and Cvt20 PX domains. PSY10 (cvt13Δ cvt20Δ) cells expressing combinations of Cvt13-HA, GFP-Cvt20, Cvt13(Y79A)-HA, and GFP-Cvt20(Y193A), as indicated, were grown to midlog phase in SMD. Protein extracts were prepared and analyzed by immunoblot with antiserum to Ape1.
The presence of a PX domain in both associating proteins, Cvt13 and Cvt20, may allow for increased affinity and PtdIns(3)P-binding capacity. Similarly, the selective targeting of Cvt13 and Cvt20 to perivacuolar structures may be directed through interaction with additional proteins. The PX domains in Cvt13 and Cvt20 bind PtdIns(3)P with relatively low affinity (Fig. 6; 17), so that they likely require interaction with other components to direct membrane binding and membrane specificity. We are currently investigating the interaction of Cvt13 and Cvt20 with other Apg and Cvt proteins. Vps34, the yeast PtdIns 3-kinase, interacts with Vps15, a membrane-associated Ser/Thr kinase that regulates Vps34 activity (44). Vps15 and Vps34 are subunits of a core PtdIns 3-kinase complex that may function at different membrane sites, regulating different protein trafficking events (13). Vps15/Vps34 complexed with accessory proteins Vps30 and Vps38 may concentrate primarily at the Golgi, functioning in Prc1 targeting to the endosome/prevacuolar compartment. Vps15/Vps34 complexed with accessory proteins Vps30 and Apg14 may concentrate primarily at the pre-autophagosomal structure, functioning in Cvt and autophagy membrane traffic to the vacuole. Processing of prApe1 is completely blocked in vps15, vps30, vps34, and apg14, but not vps38, mutant strains (13). Alkaline phosphatase activity assays employing Pho8Δ60 indicate severely impaired autophagic capacity in vps15, vps30, vps34, and apg14 mutant cells in response to starvation. Thus, a functional autophagy/Cvt-specific PtdIns 3-kinase complex and PtdIns(3)P production are necessary for autophagy and Cvt transport, although the site of action of the Apg14-dependent PtdIns 3-kinase has not been demonstrated. PtdIns(3)P concentration by specific PtdIns 3-kinase complexes may target PX domain proteins to their site of function on discrete membrane domains. Indeed, Cvt13 and Cvt20 punctate perivacuolar localization was lost in apg14Δ cells (Fig. 8), suggesting that the Apg14-dependent PtdIns 3-kinase must function at the pre-autophagosomal structure. Cvt13 and Cvt20 PtdIns(3)P binding and targeted membrane association is required for prApe1 transport through the Cvt pathway. Mutating the tyrosine in the PX domain RR(Y/F) basic motif of either Cvt13 or Cvt20 resulted in diminished prApe1 processing (Fig. 7). With only one functional PX domain, the interaction of Cvt13 and Cvt20 may enable partial, reduced-capacity Cvt transport. The PX domain mutation appears to abolish localization of the protein to the punctate perivacuolar structure; however, the partial function suggests that a low level of the protein may still be binding membrane. If so, this level is below detection. Alternatively, Cvt13 and Cvt20 retain sufficient activity in the absence of efficient membrane binding of either individual protein to permit partial import of prApe1. When the PX domains of both Cvt13 and Cvt20 are mutated, Cvt transport is blocked (Fig. 7). These data indicate a direct physiological function for the yeast PX domain. Furthermore, we know that a PtdIns 3-kinase complex is required for the autophagy/Cvt pathways (13). The PX domains of Cvt13 and Cvt20 function as PtdIns(3)P-binding modules; thus these data provide the first molecular connection between the autophagy/Cvt pathways and the autophagy/Cvt-specific PtdIns 3-kinase complex.
Design of Real-Time Control Based on DP and ECMS for PHEVs

A real-time control is proposed for plug-in hybrid electric vehicles (PHEVs) based on dynamic programming (DP) and the equivalent fuel consumption minimization strategy (ECMS) in this study. Firstly, the resulting controls of mode selection and the series mode are stored in tables through offline simulation of DP, and the parallel HEV mode uses an ECMS-based real-time algorithm to reduce the application of maps and avoid manual adjustment of parameters. Secondly, the feedback energy management system (FEMS) is built based on feedback from the SoC, which takes into account the charge and discharge reaction (CDR) of the battery, and, in order to make full use of the energy stored in the battery, the reference SoC is introduced. Finally, a comparative simulation of the proposed real-time controller is conducted against DP; the results show that the controller has a good performance, and the fuel consumption value of the real-time controller is close to the value obtained using DP. The engine operating conditions are concentrated in the low-fuel-consumption area of the engine, and, when the driving distance is known, the SoC can follow the reference SoC well to make full use of the energy stored in the battery.

Introduction

Hybrid electric vehicles use at least two power sources, usually an internal-combustion engine associated with a motor, in order to minimize fuel consumption and/or emissions. The energy management of a PHEV is often divided into two categories. The first concerns global optimization based on offline simulation. In this case, the vehicle speed is regulated to follow a speed cycle using a torque-at-the-wheel controller. Examples of such methods include Pontryagin's minimum principle [1,2], dynamic programming (DP) [3-7], and the genetic algorithm [8]. A second class of algorithms is real-time optimal control strategies that can be used to control a vehicle. Several algorithms have been proposed, some of which are based on rules [9,10] and the Equivalent fuel Consumption Minimization Strategy (ECMS) [11-16], and others are approximate real-time control strategies based on DP [17-19]. ECMS has strong dynamic adaptability and can in theory achieve results similar to DP [20]; therefore, it has been extensively studied. In this paper, a real-time control for a PHEV based on DP and ECMS is studied. Real-time implementation has remained a major challenge in the design of complex control systems. To address this hurdle, simple and efficient models and fast optimization algorithms are developed. The real-time controllers must be simple in order to be implemented with limited computation and memory resources. Moreover, manual tuning of control parameters should be avoided to reduce the calibration effort. DP can obtain globally optimal solutions, and ECMS can realize real-time computation while theoretically obtaining results similar to DP. This study combines the advantages of both to establish a real-time controller. The contribution of the paper is to use the DP algorithm, which solves for the optimal controls of a driving cycle, to establish the framework of the FEMS. In order to fully utilize the potential of the battery, the charge and discharge reaction (CDR) of the battery is taken into account in the DP-based FEMS, and the reference SoC is introduced into the FEMS. The ECMS real-time algorithm is used for the parallel HEV mode to reduce the application of maps and avoid manual adjustment of parameters.
Hybrid Vehicle Modeling

For this study, two levels of modeling are considered. The first, called the plant model (PM), shown in Figure 1, is used to simulate the vehicle over speed cycles [21]. It represents only the longitudinal behavior and is designed for energy-consumption simulation. It includes the following:

- Dynamic response of engine torque
- Motor model based on the characteristic map provided by the motor supplier
- Dedicated hybrid transmission (DHT) model (including the shift strategy)
- Full dynamic vehicle model
- High-voltage lithium battery model based on battery charge and discharge characteristics

An important part of the PM is the engine fuel consumption model. This uses a classical map for fuel consumption only and is validated against real data, as shown in Figure 2. Based on this PM, a simplified model, called the Energy Consumption Model (ECM), has been derived. The purpose of this paper is not vehicle modeling but control-law synthesis, so only the ECM is used to derive the optimization algorithm. The PM is omitted here, but it is used for the simulation results at the end of this paper. Figure 1 is the simulation model of the PHEV.

Energetic Consumption Modeling. The power flows of the PHEV and the connections between components are shown in Figure 3. The vehicle has three energy converters: an internal-combustion engine (ENG), a drive electric motor (DEM) connected through a dedicated hybrid transmission (DHT), and a generator electric motor (GEM) connected to the engine via the DHT. Both electric machines can work in both motoring and generating modes. The main component parameters of the powertrain are listed in Table 1. As shown in Figure 3, the powertrain allows the vehicle to be driven in the following four modes:

- Mode 1: one-motor pure electric mode: only the DEM is connected to the DHT.
- Mode 2: two-motor pure electric mode: the DEM and GEM are connected to the DHT.
- Mode 3: series HEV: only the DEM is connected to the DHT; the ENG and GEM work as an auxiliary power unit (APU), producing electric power.
- Mode 4: parallel HEV: all energy converters are connected to the DHT.

The kinematic relations in Figure 3 are described in (1)-(4), where n and j correspond to the engine transmission gear and the motor transmission gear, respectively.

Optimal Control Problem

The objective in energy management for hybrid vehicles is to minimize the cumulative fuel consumption, which is equivalent to minimizing the power consumption of the engine. The battery is treated as a dynamical system with the state of charge x as its state. From (1) and (3), formula (5) can be expressed as (6), and the objective function is

J = Σ_k ṁ_f(T_e(k), ω_e(k)) Δt.

The speeds and torques of both engine and motor are limited by mechanical constraints. Constraints on speeds: ω_e,min ≤ ω_e(k) ≤ ω_e,max and ω_m,min ≤ ω_m(k) ≤ ω_m,max. Constraints on torques: T_e,min(ω_e) ≤ T_e(k) ≤ T_e,max(ω_e) and T_m,min(ω_m) ≤ T_m(k) ≤ T_m,max(ω_m). The constraint on the state of charge is SoC_min ≤ x(k) ≤ SoC_max, and ΔSoC, the desired electric energy consumption over the speed cycle, is called the overall SoC variation. The relationships between the different torques and speeds, (2)-(4), allow the constraints (8) and (11) to be rewritten as effective bounds of the form T′_e,min = max(0, …), provided that the desired torque T_wh(k) can be produced by both the motor and the engine.
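To fix ideas, the optimal control problem above can be stated compactly in code. The sketch below is a minimal discrete-time rendering; the fuel map, battery capacity, and SoC bounds are illustrative placeholders standing in for the PM/ECM models, not the paper's calibration.

```python
import numpy as np

# Minimal discrete-time statement of the energy-management problem.
# fuel_rate() is a hypothetical stand-in for the engine map of Figure 2;
# every numeric value here is illustrative, not taken from the paper.

DT = 1.0                 # sample time Δt (s)
Q_BATT = 2.0 * 3600.0    # battery capacity Q (As), placeholder
SOC_MIN, SOC_MAX = 0.25, 1.0

def fuel_rate(T_e, w_e):
    """Hypothetical fuel mass flow ṁ_f(T_e, ω_e) in g/s (stand-in map)."""
    T_e = np.asarray(T_e, dtype=float)
    w_e = np.asarray(w_e, dtype=float)
    return 1e-4 * np.maximum(T_e, 0.0) * w_e / 100.0 + 0.1 * (T_e > 0)

def soc_step(soc, i_batt):
    """SoC update x(k+1) = x(k) - I_batt Δt / Q, clipped to its box constraint."""
    return float(np.clip(soc - i_batt * DT / Q_BATT, SOC_MIN, SOC_MAX))

def cumulative_fuel(T_e_seq, w_e_seq):
    """Objective J = Σ ṁ_f(T_e(k), ω_e(k)) Δt over a drive cycle (g)."""
    return float(np.sum(fuel_rate(T_e_seq, w_e_seq) * DT))
```

Any candidate control sequence can then be scored with cumulative_fuel() subject to soc_step() remaining inside its box constraint.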
3.1. DP Formulation. Dynamic programming (DP) is a multistep decision process that uses Bellman's principle of optimality to make hierarchical decisions and solve for optimal controls [19]. For an optimal decision, regardless of the initial state and the initial decision with stage cost d(x_k, x_{k-1,i}), the remaining decisions (the cost-to-go J_{k-1}(x_{k-1,i})) must themselves be optimal; that is, the second section of an optimal trajectory is also an optimal trajectory. The multistep decision process therefore satisfies

J_k(x_k) = min over u_{k,i} of { d(x_k, x_{k-1,i}) + J_{k-1}(x_{k-1,i}) },

where J_k(x_k) is the optimal value function of the k-stage decision process from starting state x_k to the end state x_f, and u_{k,i} is the control at starting state x_k that transfers the state to the next state. In this paper a reverse (backward) solution is used. Figure 4 shows the optimal path of the WLTC using the DP reverse solution (Figure 4(a)) and the cumulative fuel consumption of the corresponding optimal path (Figure 4(b)).

3.2. ECMS Formulation. After dividing by η_e q_LHV, the battery power is converted into an equivalent fuel flow, which gives the objective function in fuel-equivalent terms. Introducing the Lagrangian parameter λ(k), the Hamiltonian function can be written as

H(k) = ṁ_f(T_e(k), ω_e(k)) + λ(k) ẋ(k).      (19)

To avoid exceeding the boundary values of the constraint conditions, an additional cost function is introduced and (19) is rewritten to include it. To make the SoC meet the constraint condition (12), a penalty function p(x(k)) is introduced, and the Hamiltonian function can then be rewritten as

H(k) = ṁ_f(T_e(k), ω_e(k)) + s(k) P_elec(k)/q_LHV + p(x(k)),      (23)

where s(k) is the equivalent factor. According to Pontryagin's minimum principle, the optimal controls are obtained by minimizing the Hamiltonian function at each instant.

3.2.1. ECMS Algorithm. The following steps must be executed to implement ECMS, as also illustrated in Figure 5:
(1) Identify the acceptable control ranges [T_gem,min(k), T_gem,max(k)] and [T_dem,min(k), T_dem,max(k)] that satisfy the instantaneous torque constraints.
(2) Discretize these ranges into a finite number of controls T_dem,i and T_gem,j, where i = 1, 2, ..., q and j = 1, 2, ..., p, for a total of q × p control candidates.
(3) Calculate the equivalent fuel consumption H corresponding to each control candidate.
(4) Select the control values T_gem(k) and T_dem(k) that minimize H.

Steps 1 to 4 are computed at each instant of time over the entire duration of the driving cycle. This approach has been shown to closely approximate the global optimal solution.

Control Design

To reduce memory use and improve calculation speed, offline simulation is used to calculate the fuel cost in series mode and the mode selection for a given combination (T_w, ω_w, SoC) [7, 19, 25]. Because the battery efficiency does not change greatly with SoC in the desired operating region, the SoC is found to have minor effects on the optimal solution, so that effect is ignored. In addition to storing the control variables in tables, some insights can be gained from the kinematic relations in (1). Mode 4 is implemented using the ECMS algorithm, whose flow is shown in Figure 5. There can be instances where an engine torque command produces the minimum cost but differs greatly from the previously selected engine torque. This occurs when a higher and a lower engine torque produce minimum costs that are close in value, causing the min operation to alternate between high and low engine torque outputs.
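A minimal sketch of the candidate-evaluation loop in steps (1)-(4), reusing the hypothetical fuel map above; the torque limits, efficiency, and penalty weight are placeholders, and only the structure of the search follows the text.

```python
import numpy as np

Q_LHV = 42.5e6  # fuel lower heating value q_LHV (J/kg); typical value, not from the paper

def ecms_step(T_req, w_e, w_m, soc, s_k, n_cand=21):
    """One ECMS time step following steps (1)-(4) above.

    The torque limits, fuel map, and efficiency are placeholders; only the
    structure of the candidate search reflects the text.
    """
    # Steps (1)-(2): admissible motor torque range, discretized into candidates.
    T_m_cand = np.linspace(-100.0, 100.0, n_cand)        # placeholder limits (Nm)
    T_e_cand = np.clip(T_req - T_m_cand, 0.0, 150.0)     # engine supplies the rest

    # Step (3): equivalent fuel consumption for every candidate.
    m_dot = 1e-4 * T_e_cand * w_e / 100.0 + 0.1 * (T_e_cand > 0)  # g/s, stand-in map
    p_elec = T_m_cand * w_m / 0.9                        # electrical power (W), η ≈ 0.9
    H = m_dot / 1000.0 + s_k * p_elec / Q_LHV            # kg/s fuel equivalent

    # Stand-in for the SoC penalty function: when the SoC is low, make
    # further discharge (p_elec > 0) more expensive.
    if soc < 0.30:
        H = H + np.where(p_elec > 0.0, 1e-2 * np.abs(p_elec) / Q_LHV, 0.0)

    # Step (4): select the candidate minimizing H.
    i = int(np.argmin(H))
    return float(T_e_cand[i]), float(T_m_cand[i])
```

Because the whole candidate grid is evaluated with vectorized operations, each time step reduces to a single argmin over the q × p candidates, which is what makes the method viable in real time.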
To suppress this chattering, the difference between the current engine power candidate P_e*(k) and the previously selected engine power P_e*(k-1) is introduced into the Hamiltonian function as a penalty term; this limits the rate at which the engine power (and torque) can change from time step to time step, and the Hamiltonian function (23) is augmented accordingly.

4.1. Controller. The structure of the controller is shown as a block diagram in Figure 7 and consists of three main subsystems. The first is operation mode detection, which uses formulas (14)-(16) as the boundary conditions for mode selection. The second is operation mode management, which realizes the transitions between the four modes with a state machine. The third is torque distribution management, which realizes the torque distribution for the pure electric modes (modes 1 and 2), the series mode (mode 3), and the parallel hybrid mode (mode 4).

Energy Management

The charge-depleting charge-sustaining (CDCS) strategy uses all the electric energy stored in the battery: the PHEV runs as an electric vehicle until the SoC falls below a certain limit and then operates as a hybrid in charge-sustaining mode. It is guaranteed to use the stored electric energy and needs no information about the future driving mission, which is its main advantage. The global optimal strategy based on DP instead blends the use of fuel and electricity throughout the driving cycle. Compared with CDCS-based strategies, optimization-based strategies may yield lower fuel consumption [24]; however, to use all the energy in the battery, the global optimal strategy must know the distance of the driving cycle. To make full use of the electric energy in the battery, this paper implements a mix of the global optimal strategy and the CDCS strategy, and, to reduce the use of maps and avoid manual parameter tuning, the parallel HEV mode is implemented with the ECMS algorithm.

Charge-Discharge Reaction. To extend the life cycle of the battery, the charge and discharge reaction (CDR) of the battery is taken into account in the energy management strategy. The CDR is divided into five states: discharging, effective (Eff) discharging, normal, effective (Eff) charging, and critical (Crit) charging, as shown in Figure 8. When the SoC is close to its maximum boundary value, the CDR is in the discharging state. As the SoC gradually decreases, the CDR passes through the effective discharging state, the normal state, and then the effective charging state; when the SoC is close to its minimum boundary value, the CDR enters the critical charging state, keeping the battery voltage and depth of discharge out of the nonlinear region [22, 23]. In Figure 9, with SoC as the feedback variable, a feedback energy management system (FEMS) is established to maintain the SoC within an allowable interval, as shown in Figure 10(d). When the SoC decreases, the CDR decreases accordingly and the FEMS selects the charging maps (Figure 11); when the SoC increases, the CDR increases and the system selects the discharging maps. Each map is an approximate estimate of the corresponding optimal trajectory of DP and can be generated with the Model-Based Calibration (MBC) toolbox from MathWorks.
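The SoC boundaries separating the five CDR states are shown graphically in Figure 8 but are not given numerically in the text; the cut points below are therefore placeholders chosen only to illustrate the feedback structure of the FEMS.

```python
def cdr_state(soc):
    """Classify the battery charge-discharge reaction (CDR) by SoC.

    The five states come from Figure 8; the threshold values here are
    illustrative placeholders, since the paper does not list them.
    """
    if soc > 0.85:
        return "discharging"
    if soc > 0.65:
        return "effective_discharging"
    if soc > 0.45:
        return "normal"
    if soc > 0.30:
        return "effective_charging"
    return "critical_charging"

def select_map(soc):
    """FEMS map selection: charging maps as SoC falls, discharging as it rises."""
    state = cdr_state(soc)
    if state in ("effective_charging", "critical_charging"):
        return "charging_map"
    if state in ("discharging", "effective_discharging"):
        return "discharging_map"
    return "hold_map"   # 'normal' band: keep the current operating maps
```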
Reference SoC. To make full use of the energy stored in the battery, a blended strategy is implemented in which the instantaneous optimal strategy based on ECMS is combined with the CDCS strategy. To avoid the SoC failing to reach the final value of the reference SoC by the time the end of the trip is reached, the strategy underestimates the approximate distance by 15%, uses that as the horizon for the blended strategy, and then switches to charge-sustaining (CS) mode. This is achieved by setting a reference SoC [25], x_rf, which is linear in the ratio of traveled distance to expected distance according to (27):

x_rf = max(x_f, x_0 - (x_0 - x_f) · D_real/D_cycle),      (27)

where x_f is the minimum reference SoC. The minimum x_rf is set to 0.3 to ensure that the final SoC is 0.3; the shape of x_rf is shown in Figure 12. To improve the robustness of the system, a PI controller acting on the SoC tracking error x_rf - x is designed (28).

Adaptive Optimal Supervisory Control. The adaptive optimal supervisory control is designed on SoC feedback: it dynamically changes s(k) (without using past driving information or trying to predict future driving behavior) so as to compare the SoC with its reference and maintain its value near the reference [26-28]. The adaptation law is a PI controller of the type

s(k) = s_0 + K_p [x_rf(k) - x(k)] + K_i Σ_{j≤k} [x_rf(j) - x(j)],      (29)

where s_0 represents the initial value of s at time t = 0, and K_p and K_i are the proportional and integral gains of the adaptation law. The initialization of this algorithm, i.e., the choice of s_0, is arbitrary; it can be done by averaging different optimal initial values obtained offline [28, 29].

Simulation Result in MATLAB-Simulink

The controller is evaluated in closed loop together with the PM, and the simulation results are compared with the global optimal results of the DP offline simulation. The offline results of the DP reverse solution of the WLTC are presented in Figure 4: Figure 4(a) shows the optimal paths for different initial SoC values, Figure 4(b) shows the cumulative fuel consumption of the corresponding optimal paths, and the SoC constraint is 25% ≤ x ≤ 100%. In Figure 4, the optimal paths with different initial SoC values converge to one path at 900 s, and the SoC fluctuates over a wide interval [30, 95]; the average fuel consumption of all optimal paths over the WLTC is 820 g, corresponding to a one-hundred-kilometer fuel consumption of 4.69 L. Figure 13 shows that the engine operating points are concentrated in the low-fuel-consumption area of the engine, with speeds in the interval [1000 r/min, 3500 r/min]. It can also be seen from Figures 10(b) and 14(a) that the engine torque is mostly concentrated around 80 Nm and that the number of engine starts with the reference SoC (27 times) is lower than without it (31 times). Figure 10(c) is the trajectory of the equivalent factor; its overall trend is stable with upward peaks, and the larger the peak value, the greater the demand for engine power. Conversely, as shown in Figure 14(b), the equivalent factor decreases as the reference SoC decreases and its peaks point downward; the smaller the peak value, the greater the demand for motor power. The resulting SoC trajectories for the tested cycle are shown in Figures 10(d) and 14(c). In Figure 10(d), the SoC fluctuation range of the tested cycle is narrower than in the DP offline simulation of Figure 4(a), lying in the interval [61, 65].
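The reference-SoC profile (27) and the adaptation law (29) that generate the trajectories just discussed can be sketched as follows; the initial equivalent factor and the gains are placeholders, since the paper initializes s_0 by averaging offline optima rather than by a fixed value.

```python
def reference_soc(d_real, d_cycle, x0=0.75, x_f=0.30):
    """Reference SoC x_rf, linear in traveled/expected distance (eq. (27)).

    The expected distance is underestimated by 15% so that the blended
    strategy reaches x_f before the cycle ends, then switches to CS mode.
    x0 = 0.75 matches the DP target SoC used in the WLTC comparison.
    """
    horizon = 0.85 * d_cycle
    frac = min(d_real / horizon, 1.0)
    return max(x_f, x0 - (x0 - x_f) * frac)

class EquivalentFactorPI:
    """PI adaptation of the equivalent factor s(k) (eq. (29)).

    The gains kp, ki and the initial value s0 are placeholders; the paper
    obtains s0 by averaging optimal initial values computed offline.
    """
    def __init__(self, s0=2.5, kp=5.0, ki=0.05, dt=1.0):
        self.s0, self.kp, self.ki, self.dt = s0, kp, ki, dt
        self.integral = 0.0

    def update(self, x_rf, soc):
        err = x_rf - soc                 # SoC below reference -> raise s(k),
        self.integral += err * self.dt   # making electricity more expensive
        return self.s0 + self.kp * err + self.ki * self.integral
```

The sign convention makes the loop self-correcting: when the SoC drops below x_rf, s(k) rises, electrical energy becomes more expensive in the Hamiltonian, and the engine takes over more of the load until the SoC recovers.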
Compared with Figure 14(c), the SoC follows the reference SoC well and varies over a relatively large range, indicating that the energy stored in the battery is fully utilized. As shown in Figure 10(a), the measured vehicle speed follows the target vehicle speed very well. To verify the adaptability of the controller to different test cycles, two further cycles, the China Urban Driving Cycle (CUDC) and the NEDC, are simulated in addition to the WLTC. The results for the three test cycles are shown in Table 2. In WLTC testing, the final SoC may not reach exactly the DP target value (75%); therefore, to compare fuel consumption fairly, the linear correlation between final SoC and fuel consumption is exploited, which is easily approximated by the linear expression [30]

m_f = m_f0 + σ · ΔSoC,      (30)

where m_f is the actual fuel consumption, m_f0 is the value that would correspond to zero SoC variation, and σ is a curve-fitting coefficient that translates ΔSoC into a corresponding amount of fuel; here, σ ≈ s. In Table 2, the fuel consumption of the WLTC without the reference SoC is 4.81 L/100 km with a final SoC of 63%. After correction, the fuel consumption is 4.83 L/100 km, which is 0.14 L/100 km higher than the average fuel consumption of the DP simulation with a final SoC of 75%. For the three test cycles, the fuel consumption without the reference SoC is higher than with it; the final SoC without the reference SoC is close to the target value of 75%, while the final SoC with the reference SoC is close to 30%.

Conclusion

This study proposes a real-time control of a PHEV based on DP and ECMS. To fully exploit the potential of the battery, the FEMS was established by combining the CDR and the CDCS strategy, and the controller was evaluated in closed-loop simulation. The conclusions are as follows:
(1) The proposed DP-ECMS real-time control is a suboptimal solution; the results show that the real-time controller has good control capability and robustness, and its fuel consumption is close to the DP offline simulation results.
(2) The engine operating points are concentrated in the low-fuel-consumption area of the engine, and the engine starts and stops are evenly distributed, effectively avoiding alternating output between higher and lower engine torques.
(3) When the future driving distance is unknown, the controller keeps the SoC within an admissible interval, but the SoC varies over a relatively small range and the system cannot make full use of the energy stored in the battery. When the future driving distance is known, the system makes the SoC follow the reference SoC well, which makes full use of the energy stored in the battery; fuel economy is therefore effectively improved.
Nomenclature

q_LHV: Fuel lower heating value (J/kg)
U_a: Vehicle speed (km/h)
T: Torque (Nm)
i: Gear ratio (-)
η: Efficiency (-)
ρ: Air density (kg/m³)
g: Gravitational acceleration (m/s²)
Δt: Sample time (s)
Q: Battery capacity (As)
P: Power (W)
ω: Angular velocity (rad/s)
b_h: Fuel consumption (kg/h)
x: State of charge (-)
ṁ_f: Fuel mass flow (g/s)
m_f: Fuel consumption (L/100 km)
m_f,xf: Fuel consumption with reference SoC (L/100 km)
D_real: Actual distance traveled (km)
D_cycle: Estimated driving cycle distance (km)
λ: Lagrangian parameter (-)
s: Equivalent factor (-)

Subscripts

wh: Wheel
req: Requirement
gb: Gearbox
elec: Electricity
e: Engine
m: Motor
gem: Generator electric motor
dem: Drive electric motor
red: Reducer
opt: Optimal
rf: Reference
BT: Battery
APU: Auxiliary power unit

Acronyms

ENG: Engine
Eload: Electronic load

Data Availability

The Models.slx data used to support the findings of this study are currently under embargo while the research findings are commercialized. Requests for data 6 months after publication of this article will be considered by the corresponding author.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
The Cygnus region of the galaxy: A VERITAS perspective

The Cygnus-X star-forming region ("Cygnus") is the richest star-forming region within 2 kpc of Earth and is home to a wealth of potential cosmic-ray accelerators, including supernova remnants, massive star clusters, and pulsar wind nebulae. Over the past five years, discoveries by several gamma-ray observatories sensitive in different energy bands, including the identification by Fermi-LAT of a potential cocoon of freshly accelerated cosmic rays, have pinpointed this region as a unique laboratory for studying the early phases of the cosmic-ray life cycle. From 2007 to 2009 VERITAS, a very-high-energy (VHE; E > 100 GeV) observatory in southern Arizona, undertook an extensive survey of the Cygnus region from 67 to 82 degrees Galactic longitude and from −1 to 4 degrees in Galactic latitude. In the years since, VERITAS has continued to accumulate data at specific locations within the survey region. We will review the discoveries and insights that this rich dataset has already provided. We will also consider the key role that we expect these data to play in interpreting the complex multiwavelength picture we have of the Cygnus region, particularly in the vicinity of the Cygnus cocoon. As part of this discussion we will summarize ongoing studies of VERITAS data in the Cygnus region, including the development of new data analysis techniques that dramatically increase VERITAS' sensitivity to sources on scales larger than a square degree.

Introduction

Supernova remnants (SNRs) are believed to produce the bulk of Galactic cosmic rays up to the "knee" at 10^15 eV, a theory supported by recent GeV and TeV gamma-ray and X-ray observations [1-5]. However, conclusive evidence for protons being accelerated all the way to the knee by SNR shocks has so far been elusive. Recent theoretical models indicate that SNRs are PeVatrons for only a brief portion of their life cycle (as little as 30 years) and that the highest-energy protons escape early [6]. The number of nearby historical supernova remnants is small, and the number of potential PeVatrons is even smaller. It is therefore interesting to hunt for other sources of gamma-ray emission that can be identified with populations of recently accelerated cosmic rays, whether these are cosmic rays recently escaped from their accelerator or the result of the collective action of multiple hadronic cosmic-ray accelerators (e.g., superbubbles).

From this point of view, the Cygnus region, rich both in potential VHE gamma-ray sources and in likely cosmic-ray accelerators, including a superbubble, was a natural choice for a VHE gamma-ray survey by the VERITAS observatory. Cygnus is known to be the richest star-forming region within 2 kpc of Earth and to contain a large mass of molecular gas (more than ten times that of its neighboring star-forming regions combined) [7]. It contains a treasure trove of massive stars in stellar nurseries, young open clusters, and OB associations, two of which, Cyg OB1 and OB2 [8], are of particular interest. The EGRET catalog, as well as known supernova remnants, pulsar wind nebulae, and high-mass X-ray binaries, also highlighted potential gamma-ray source candidates.
Survey strategy and initial results

In April 2007 VERITAS began a two-year survey of the portion of the Cygnus region of the Galactic plane between 67° < l < 82° and −1° < b < 4°. The survey consisted primarily of overlapping 20-minute observations on a grid of points with 0.8° separation in Galactic latitude and 1° separation in Galactic longitude, coupled with targeted follow-up observations and serendipitous observations of transients. This strategy provided a fairly uniform effective exposure of ∼6 hours across the entire survey area, plus smaller regions with significantly enhanced exposure [9].

Details of the preliminary survey analysis, based on all VERITAS data in the region through Fall 2009, are given in [9, 10]. Figure 1 shows the resulting VERITAS background-subtracted sky map of the portion of the survey region with 74° ≤ l < 82°. Clear detections were obtained of two moderately extended sources: the well-established VHE gamma-ray source TeV J2032+4130 and a newly discovered patch of VHE gamma-ray emission, VER J2019+407, overlapping the northwest rim of SNR G78.2+2.1. No other significant source detections were found in the survey region.

Multiwavelength context

The multiwavelength picture of the region, particularly in gamma rays, has evolved significantly since the inception of the VERITAS survey. This is particularly true for the two regions associated with Cyg OB2 and Cyg OB1. Each of these regions contains a highly extended gamma-ray source detected at median energies of ∼12 TeV by the Milagro Gamma Ray Observatory, as well as gamma-ray pulsars detected by the Fermi Gamma-ray Space Telescope (Fermi LAT). Most significantly, the region around Cyg OB2 also contains a large region of hard-spectrum gamma-ray emission observed by Fermi LAT above 3 GeV (1FHL J2028.6+4110e) [11, 12]. Because the emission fills a cavity carved in the ISM by stellar winds from a nearby collection of OB stars, it is frequently dubbed the "Cygnus cocoon" for convenience. We can infer from the gamma-ray spectrum between 3 GeV and 500 GeV that the generating cosmic-ray spectrum is quite hard, which suggests the cosmic rays within the cocoon are freshly accelerated [11, 12]. Ackermann et al. [11] argue strongly that the Cygnus cocoon is evidence for cosmic-ray acceleration due to the collective action of massive stellar winds within a superbubble. However, contributions from SNR G78.2+2.1 and the Cyg OB2 association, while disfavored, are not completely ruled out.
Measurements of the cocoon spectrum and energy-dependent morphology at energies above a few hundred GeV are a critical input when differentiating between scenarios, both in terms of the type of accelerated particles and their source. Figures 2 and 3 illustrate the most current knowledge of the cocoon in gamma rays between a GeV and 10 TeV [13]. The cocoon is co-located with an extended source of >10 TeV gamma rays seen by Milagro (MGRO J2031+41) [14, 17]. A recently updated measurement by ARGO-YBJ shows a similarly extended gamma-ray source at intermediate energies. (In Figure 2, large circles denote the positions and 68% containment regions of ARGO J2031+4157 (blue) [13], MGRO J2031+41 (black) [14], and the Cygnus cocoon (dashed black) [11]; crosses denote the position and extension of TeV J2032+4130 [15] and VER J2019+407 [16]; small circles denote the positions of the pulsars PSR J2021+4026 and PSR J2032+4127; reproduced from [13].) Figure 3 shows the spectrum of the cocoon as measured by all three experiments [11-14, 17]. The spectrum is well fit by a pure hadronic emission model with a primary proton energy cutoff somewhere between 40 and 150 TeV and remains consistent with the superbubble scenario advanced in [11]. However, this spectrum is still subject to significant statistical and systematic uncertainties. In particular, ARGO-YBJ's angular resolution means that they can only directly measure the combined emission produced by VER J2019+407, TeV J2032+4130, and the Cygnus cocoon; they must rely on measurements from VERITAS and other IACTs to subtract the contributions from VER J2019+407 and TeV J2032+4130 [13]. VERITAS in principle has the angular resolution to separately measure these contributions but, as previously noted, does not at this time detect the cocoon emission.

VER J2019+407

As noted earlier, the VERITAS Cygnus region survey led to the discovery of VER J2019+407, a compact (σ ≈ 0.23°) region of gamma-ray emission above a few hundred GeV. VER J2019+407 overlaps the brightest part of the northern radio shell of SNR G78.2+2.1, also known as the γ Cygni SNR [16]. This radio, X-ray, and gamma-ray SNR [7, 19-21] is believed to be in an early phase of adiabatic expansion into a low-density medium. The radio and X-ray emission reveal a shell-like structure ∼1° across with high-intensity features to the north and south [22, 23]. By contrast, the hard (power-law spectral index 2.39 ± 0.14) gamma-ray emission detected by Fermi LAT (1FHL J2021.0+4031e) between 10 GeV and 500 GeV is well represented by a featureless disk slightly larger than the radio remnant [11, 12, 21]. The remnant is thought to be at a distance of ∼1.7 kpc. The center of the γ Cygni SNR hosts a low-luminosity gamma-ray pulsar, PSR J2021+4026, which may or may not be the remnant of the γ Cygni SNR's progenitor star [25-27].

It is plausible that VER J2019+407 originates from protons and heavier nuclei accelerated in the SNR shock, as argued in [16]. Yet this interpretation is by no means iron-clad, and a number of puzzles remain to be addressed. VER J2019+407 is peculiarly compact, given that high-intensity radio features are visible in both the north and south, and the only indications of molecular material lie in the portion of the shell opposite VER J2019+407. The discrepancy in size between 1FHL J2021.0+4031e and VER J2019+407 is even more striking, given that a naive power-law extrapolation of 1FHL J2021.0+4031e up to 1 TeV suggests that a much larger portion of the SNR should be detectable by VERITAS.
This puzzle admits at least two competing physical solutions. First, the spectra of different regions of the γ Cygni SNR may evolve differently above 500 GeV, with the emission from the northern shell having a higher-energy cutoff than the remainder of the remnant; this portion of the remnant would consequently appear brighter at higher energies. The effect could be accentuated by systematic effects in the data analysis arising from the size of the γ Cygni SNR and the presence of a magnitude-2 star overlapping the southeastern portion of the shell; such effects would dilute VERITAS' sensitivity to emission from the entire SNR. This scenario is nicely consistent with the fact that both 1FHL J2021.0+4031e and VER J2019+407 have measured spectra consistent with a power law with Γ = 2.4. A less-favored interpretation is that VER J2019+407 is a chance superposition of a PWN along the γ Cygni SNR line of sight, possibly associated with a recently discovered X-ray point source nearby [28].

TeV J2032+4130

The HEGRA collaboration first discovered the VHE gamma-ray source TeV J2032+4130 in 2002 [29, 30]. The detection and source extension were later confirmed by the IACT observatories MAGIC and VERITAS, which also provided more precise measurements of its spectrum. All instruments find a power-law spectrum consistent with Γ = 2.1, although VERITAS and MAGIC disagree slightly as to the flux normalization [15]. No evidence of either an energy-dependent morphology or a spectral cutoff (up to 20 TeV) has been seen thus far [15, 31]. The source overlaps a gamma-ray pulsar, PSR J2032+4127. When the Cordes and Lazio [32] models for dispersion in the Milky Way are applied to recent radio observations [33] of PSR J2032+4127, they place it at 3.6 kpc, beyond Cyg OB2. Other models place it at 1.7 kpc, consistent with standard distance estimates for both Cyg OB2 and the γ Cygni SNR [33].

Subsequent to the survey, VERITAS undertook a set of deep observations of TeV J2032+4130, reported in Aliu et al. [31]. The gamma-ray emission appears to be confined to one of the rare voids in the bright diffuse radio and infrared emission seen from Cygnus. Aliu et al. [31] argue that the void could be due to a long-ago (>30 kyr) supernova explosion, with TeV J2032+4130 being a relic PWN powered by PSR J2032+4127 [25]. It remains to be seen what implications the recent timing studies of PSR J2032+4127, which point to its being part of a binary system, have for this theory [34]. An alternate scenario in which TeV J2032+4130 is powered by winds from OB stars in the Cyg OB2 association cannot be ruled out, but it is less attractive given the paucity of massive OB stars overlapping the observed VHE gamma-ray emission.
The Cygnus cocoon

At first glance, VERITAS' failure to detect the Cygnus cocoon, clearly seen by ARGO-YBJ, is puzzling. A straightforward folding of the measured cocoon spectrum with the VERITAS response functions suggests that VERITAS should detect a substantial number of photons from the Cygnus cocoon above a few hundred GeV, more than enough for a clear detection. However, the two results are consistent once the limitations of the standard VERITAS data analysis methods are taken into account. The reflected-region model and the ring background model [35] both estimate the cosmic-ray background level using regions of the field of view away from the γ-ray source. When a source fills a large fraction of the VERITAS field of view, it becomes impossible both to exclude the source and to select an adequate background estimation region that is not photon-contaminated. When the source is not excluded, portions of the source itself will be incorporated into the background estimate; if the source is sufficiently large, it will self-subtract and become undetectable. Figure 5 illustrates this effect for a toy-model simulation of VERITAS observations of the cocoon region. For simplicity, this study was limited to photon candidate energies between 500 GeV and 1 TeV. The four primary sources in the region (the γ Cygni SNR, TeV J2032+4130, VER J2019+407, and the Cygnus cocoon) are modeled as Gaussians with power-law spectra, with the relevant parameters either taken directly or extrapolated from previous measurements [11, 12, 16, 21, 31]. Although the modeled exposure is increased with respect to both the 2009 survey analysis and the current VERITAS archival data set, the standard ring background model analysis still reveals notable (>4σ) excesses only for VER J2019+407 and TeV J2032+4130.

Future prospects

As noted in the preceding sections, VERITAS has continued to accumulate data in the Cygnus region, and two studies using all VERITAS data to date in the region are planned. One is an updated version of the standard analysis used to obtain the preliminary survey results. The other uses a maximum likelihood method designed to enhance VERITAS' sensitivity to highly extended sources such as the Cygnus cocoon. This "3D" maximum likelihood method (3D MLM) describes the data in terms of two spatial position coordinates, a parameter known as mean-scaled width (MSW) that is used to distinguish between gamma rays and cosmic rays, and energy. The fit is extended and unbinned in three of the four variables, leading us to refer to the maximum likelihood as "three-dimensional" or "3D." The inclusion of MSW as a fit parameter permits extended gamma-ray sources that fill the field of view to be distinguished from the cosmic-ray background, even when the two have strikingly similar spatial distributions. The effectiveness of the technique is easily demonstrated by applying it to the toy-model simulation discussed in Sect. 3.3: the 3D MLM detects significant emission (√TS > 30) from both the simulated cocoon and the simulated γ Cygni SNR. Figure 6 shows that even with minimal smoothing, a broad region of γ-ray excess corresponding to the cocoon appears in the spatial residual map.
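The self-subtraction effect behind Figure 5 can be made concrete with a toy calculation; the geometry and count rates below are hypothetical, and this is not the VERITAS analysis chain.

```python
import numpy as np

# Toy illustration of ring-background self-subtraction for an extended
# source. All numbers are hypothetical placeholders.

x = np.linspace(-2.0, 2.0, 201)              # field of view (degrees)
xx, yy = np.meshgrid(x, x)
r = np.hypot(xx, yy)

BKG = 10.0                                    # isotropic CR background (counts/bin)

def sky(sigma_src, amplitude=50.0):
    """Counts map: isotropic background plus a Gaussian source of width sigma_src."""
    return BKG + amplitude * np.exp(-0.5 * (r / sigma_src) ** 2)

def ring_excess(counts, r_on=0.1, r_in=0.6, r_out=0.8):
    """Apparent excess at the field centre with a ring background estimate."""
    on = counts[r < r_on].mean()                       # ON region at the source
    off = counts[(r > r_in) & (r < r_out)].mean()      # ring OFF estimate
    return on - off

for sigma in (0.1, 0.3, 0.6, 1.0):
    print(f"source sigma = {sigma:.1f} deg -> apparent excess = "
          f"{ring_excess(sky(sigma)):5.1f} counts/bin")

# As the source width approaches the ring radius, source photons leak into
# the OFF estimate and the apparent excess shrinks: the source self-subtracts.
```

For a compact source the ring sees pure background and the full excess is recovered; for a cocoon-sized source most of the excess vanishes, which is why a method that also exploits the MSW distribution, rather than spatial separation alone, is needed.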
Disentangling MGRO J2019+37

MGRO J2019+37, in the vicinity of Cyg OB1, is the brightest Milagro source in the Cygnus region, with a flux of about 80% of the Crab Nebula flux at 20 TeV and a bright core over 1° in extent [14]. A campaign of deep observations targeting MGRO J2019+37 revealed gamma-ray emission not detected in the original survey and strongly suggests that MGRO J2019+37, rather than being a single unique source, is a synthesis of multiple contributions. Figure 7 shows VERITAS' current best picture of MGRO J2019+37 and its vicinity between 600 GeV and 10 TeV [36]. One of the two resolved sources, VER J2016+371, corresponds best with SNR CTB 87. CTB 87's radio morphology buttresses VER J2016+371's identification as a relic PWN, as does the presence of the pulsar candidate CXOU J201609.2+371110 within the radio contours [36]. The other, VER J2019+368, is a ∼1° ridge of diffuse emission, roughly bounded by the bright bubble H II region Sh 2-104 to the west and the energetic gamma-ray and radio pulsar PSR J2021+3651 to the east. Figure 8 shows that the VER J2019+368 spectrum plausibly dominates that of MGRO J2019+37 at high energies, particularly when the ARGO-YBJ upper limits are taken into account. However, VER J2019+368 itself likely incorporates emission from several unresolved sources; these may include the PWN of PSR J2021+3651 and the H II region Sh 2-104.

Future outlook

The recent surveys of the Cygnus region by VERITAS, Milagro, ARGO-YBJ, and Fermi LAT have filled in a complex and fascinating picture in gamma rays from 1 GeV to several tens of TeV. Vivid as this picture is, parts of it remain fuzzy and incomplete. In the short term, we hope to make further advances by combining current and future VERITAS observations of this region with new data analysis techniques and by leveraging the synergy between VERITAS, Fermi LAT, and Milagro's more sensitive successor, the recently commissioned water Cherenkov observatory HAWC. In the longer term, the northern half of the next-generation IACT observatory, the Cherenkov Telescope Array (CTA), is perfectly suited to studies of this challenging portion of the gamma-ray sky.

Figure 1. A background-subtracted photon excess map of the portion of the VERITAS Cygnus region survey with l > 74°, using all data taken through November 2009. This is a slightly modified version of the map shown in [9].

Figure 3. Current constraints on the cocoon spectrum from Fermi LAT (filled circles) [11], ARGO-YBJ (filled squares), and Milagro (hollow circles). Arrows denote Fermi LAT upper limits; the MGRO J2031+41 flux points are at 12, 20, and 35 TeV [14, 17, 18]. The fourth hollow circle, below the first at 12 TeV, has had the TeV J2032+4130 flux subtracted. The black dot-dashed line indicates a power-law fit to the combined Fermi LAT and ARGO-YBJ spectrum; the red lines are hadronic model fits to the combined data assuming proton cutoff energies of 150 TeV (solid) and 40 TeV (dashed). Reproduced from [13].
Figure 4. Background-subtracted gamma-ray counts map showing VER J2019+407 and its fitted extent (black dashed circle). The radio remnant is traced by Canadian Galactic Plane Survey (CGPS) 1420 MHz continuum contours at brightness temperatures of 23.6 K, 33.0 K, 39.6 K, 50 K, and 100 K (white) [24]. The star symbol shows the location of PSR J2021+4026. The fitted centroid and extent of 1FHL J2021.0+4031e are indicated by the inverted triangle and dot-dashed circle (yellow). The open and filled triangles (black) show the positions of the Fermi LAT catalog sources 1FGL J2020.0+4049 and 2FGL J2019.1+4040, now subsumed into the extended GeV emission from the entire remnant [21]. The 0.16, 0.24, and 0.32 photons bin⁻¹ contours of the Fermi LAT detection of the Cygnus cocoon are shown in cyan [11]. The VERITAS gamma-ray PSF is shown for comparison (white circle, bottom right). Reproduced from [16].

Figure 5. Significance map produced by applying the ring background model analysis to a toy-model simulation of ∼200 hours of VERITAS observations of the Cygnus cocoon region. (Caveat: while this illustrates the relevant principle, it is not a precise analog of the VERITAS survey exposure as of 2009.)

Figure 7. Map of the gamma-ray excess above 600 GeV seen by VERITAS in the vicinity of MGRO J2019+37. The color bar indicates the number of excess events within a 0.23° search radius. White dashed circles indicate the regions used to extract the spectra of VER J2016+371 and VER J2019+368. The 9σ significance contour of MGRO J2019+37 is overlaid in solid white. The remaining solid ellipses, diamonds, and crosses indicate the locations of potential counterparts. Reproduced from [36].
The Implementation of the Policy on School Operational Assistance (BOS) at Junior High Schools in Indragiri Hilir Regency (A Case Study at Junior High Schools in Tembilahan Sub-District)

The BOS policy is one of the public policies made by the government to improve the quality of human resources, an indicator of the progress of a nation's development. This study aims to analyze and describe the implementation of the BOS budget, as well as the supporting and inhibiting factors in implementing BOS at junior high schools in Tembilahan sub-district. The research design was qualitative, using a descriptive method. Data and information were collected through observation, interviews, and documentation. Data analysis proceeded through classification, analysis, and interpretation until conclusions were drawn. The results showed that the BOS policy is an appropriate and effective public policy for people in general and, particularly, for schools, teaching staff, and students. The implementation of BOS at public junior high schools in Tembilahan sub-district was good and met the BOS technical guideline; one obstacle remained, however, in that several school activities could not be accommodated within the BOS budget. The supporting factors of BOS policy implementation at public junior high schools in Tembilahan sub-district included good communication between the relevant parties and the availability of competent human resources. Meanwhile, the inhibiting factors were financial resources and policy characteristics that were too rigid for use in every school.

INTRODUCTION

Education is one of the sectors with the biggest impact on improving a nation's human development. The government, as the highest responsible party in a country, has issued public policies supporting the national education vision of producing intellectual and productive Indonesian people of noble character. One of the programs implementing these public policies is the distribution of BOS funds to schools. The implementation of public policies is the activity of applying a program (Jones, 1991). Further, Van Meter and Van Horn stated that this activity comprises actions performed by several parties (individuals, state officials, government, or the private sector) under the policy to achieve the desired goals (Wahab, 1997). The mechanism for allocating BOS funds and their use at schools is regulated in the Regulation of the Indonesian Ministry of Education and Culture Number 3 of 2019 on the Technical Guidance of Regular School Operational Assistance 2019, which governs the procedure for, and the responsibility of, using regular school operational assistance. The School Operational Assistance (BOS) policy carried out by the government has had a positive impact on education in Indonesia. This policy has been implemented in Indragiri Hilir Regency since July 2006. Indragiri Hilir Regency consists of 12 sub-districts, and a total of 135 junior high schools receive BOS funds. In practice, many implementations of the BOS policy still fail to follow the SOP (the BOS technical guidance). Yet the SOP is made to regulate, control, and direct a policy so that it is implemented as intended, does not create new problems that violate the law, and achieves the purposes for which the policy was made.
The mass media have reported many cases of misuse of the allocated funds; consequently, the distribution of BOS funds does not meet expectations. Findings reported in the mass media showed that the weaknesses in implementing the School Operational Assistance policy lay in management and in a lack of transparency by school parties in using the BOS funds. Besides, in compiling the Treasurer Accountability Report for the use of operational funds, schools often make mistakes and submit late, so accountability and credibility remain in doubt. In a study by Nurul Hariswati, it was found that the use of the School Operational Assistance funds did not comply with the regulations on using BOS funds set out in the BOS technical guidance (Hariswati, 2015). It was also found that several schools had not been transparent in managing the BOS funds and that education related to the use of BOS funds had not been implemented in the schools (Regina, Soeaidy, & Ribawanto, 2014; Fauzan, 2014). Based on a preliminary interview on September 24, 2018, with the Board of Education, the problems above also occurred in Indragiri Hilir Regency in terms of managing and using the BOS funds. To examine the implementation of the BOS funds policy, the researcher used the implementation model from the theory of Van Meter and Van Horn. The policy implementation process model of Donald S. Van Meter and Carl E. Van Horn emphasizes the characteristics of a policy in each implementation and connects the policy issue with policy implementation in a conceptual model that links the policy to policy performance. Based on the explanation above, and considering the importance of high-quality education as well as the benefits of the BOS funds, the researcher aimed at: 1) analyzing and describing the BOS policy at public junior high schools in Tembilahan sub-district; 2) analyzing and describing the implementation of the BOS policy at public junior high schools in Tembilahan sub-district; and 3) analyzing and describing the supporting and inhibiting factors of BOS policy implementation at public junior high schools in Tembilahan sub-district.

RESEARCH METHOD

This was a qualitative study using a descriptive method. A qualitative study is a method to explore and understand the meaning that individuals or groups ascribe to social or human problems (Creswell, 2009). Purposive sampling was used to determine the sources of information. The sources of information were four public junior high schools in Tembilahan sub-district, namely SMPN 1 Tembilahan, SMPN 2 Tembilahan, SMPN 3 Tembilahan, and SMPN 4 Tembilahan, together with informants involved in the implementation of the BOS program, such as officers of the Board of Education of Indragiri Hilir Regency, the BOS executing-officer team of Indragiri Hilir Regency, headmasters, school BOS treasurers, heads of teaching staff, and the school committee chairmen of public junior high schools in Tembilahan sub-district, Indragiri Hilir Regency. The instruments used were interview transcripts, observation records, and documentation tools, in the form of a camera or mobile phone, applied to the informants.
To collect the required data, the following techniques were used: observation, interviews, and documentation. The data analysis was qualitative: the data from the various sources were analyzed as a whole, classified, and compared with their phenomena; a descriptive-qualitative analysis was then conducted whereby the collected data were classified, analyzed, and interpreted to reach a conclusion.

RESULT AND DISCUSSION

The BOS Policy in Tembilahan Sub-district

With respect to the general and specific goals of the BOS policy, namely reducing costs or exempting payments for students whose parents have low incomes, none of the schools in this study asked students to make payments. This was established through interviews with the informants: all schools in this study exempted the payments and reduced school operational costs. It emerged from the interview with the regency regular-BOS team, was checked in the field through interviews with the school regular-BOS teams, and was confirmed by the school committees as the representatives of the community and the students' parents. These interview results show that BOS is a public policy, a decision made by a state institution, the government, to solve public issues. Based on the interview findings, the purposes of the BOS policy are considered to have been achieved. This is in line with the theory of Harold D. Lasswell and Abraham Kaplan that a policy is a program to achieve directional goals, values, and practices (Lasswell & Kaplan, 2013). A public policy is a series of decisions related to realistic, directional, and measurable public interests, made by the government with the involvement of stakeholders in certain fields and leading to a certain goal (Ramdhani & Ramdhani, 2017). The BOS policy aims to assist with operational and non-personnel funding and to reduce students' costs in order to improve the quality of the learning process at school, which will eventually improve the quality of education. Thus, it is in line with the three (3) general goals of BOS mentioned in the BOS technical guidance Number 3 of 2019. Goal achievement is the result of policymaking and an important factor for an organization (Iskandar, 2012). Based on the informants' explanations in the interviews, the observations, and the document study, it can be said that the policy on BOS funds is an appropriate public policy whose impacts can be felt by many people, especially schools, teaching staff, and students. In Yoyon's view (Irianto, 2011), a public policy is a series of actions serving as an instruction to achieve goals. Public policy can also be defined as a series of activities established and performed, or deliberately not performed, by the government with a certain goal on behalf of the public interest; it may take the form of laws, regional regulations, and the like (Ambarsari, 2002).

The Implementation of a Public Policy (BOS) at Junior High Schools in Tembilahan Sub-district

According to Van Meter and Van Horn (Subarsono, 2011), the implementation of a policy is performed by the government or the private sector, individually or in groups, and is intended to achieve certain goals.
Six phenomena affect the implementation of a policy in this study, as follows.

Standards and Targets of the Policy. According to Agustino, when the standard and target of a policy are too ideal (utopian or visionary), they will be difficult to achieve. Measuring the performance of policy implementation requires a standard and target that the policy should achieve; in general, the performance of a policy is assessed against the achievement of that standard and target (Agustino, 2016). The interviews showed that nearly all schools, with regard to accountability for the use of BOS funds, had met the regulation of the Ministry of Education and Culture and the BOS technical guidance, namely Regulation Number 3 of 2019. In the observation and document study, the researcher found that the schools held documents such as the circular letter, the regulation of the Ministry of Education and Culture, and the BOS technical guidance as guidelines for managing and using the BOS funds. In further detail, the standard and target of a policy are among the reasons for the success of policy implementation, and they are strongly correlated with the implementers' disposition. Consequently, in this study, all junior high schools in Tembilahan sub-district, in using and managing the regular BOS funds, followed the clear standard and target of the policy, satisfying the regulation of the Ministry of Education and Culture and BOS technical guidance Number 3 of 2019. This result is supported by Van Meter and Van Horn: to avoid interpretations that cause conflict between implementers, a clear and measurable policy is required (Widodo, 2011).

Resources. Policy implementation requires support from resources, both human and non-human. The success of policy implementation strongly depends on the capability to use the available resources (Agustino, 2016). Frank Jefkins, in Public Relations, noted that the existence and influence of limited time, money, and other resources need attention (Jefkins, 1992). The development of human resources has been shown empirically to improve the quality of public services (Mujtahid & Darmi, 2014). Besides human resources, financial resources and time are important considerations in the success of policy implementation. According to Phoebe Wong et al., implementing a policy effectively and efficiently requires analyzing and identifying improvements in, and the use of, available resources (Wong, Ng, Mak, & Chan, 2015). The observations, interviews, and document study showed that the resource capability in managing and using the BOS funds at public junior high schools in Tembilahan sub-district strongly supported the goals of BOS policy implementation; this can be seen from the human-resource capacity to complete the activities and to report on the management and use of the regular BOS funds on time. Even though the financial resources could not entirely cover the schools' operational and non-personnel costs, the implementation was categorized as good.
Communication among Organizations. For a public policy to be performed effectively, Van Meter and Van Horn (Widodo, 2011) state that the standard of the goals should be understood by the individual implementers. Communication is an intentional activity by a speaker or author through a common system of symbols, signals, or behaviors (Wardhani, Hasiolan, & Minarsih, 2016). The communication phenomenon among organizations is identified from the community outreach conducted by the regular-BOS teams at both the regency level and the school level. The observations, interviews, and document study showed that communication among organizations in the implementation of the BOS policy at public junior high schools in Tembilahan sub-district, performed by the BOS management teams at the regency and school levels toward student guardians, the community, and other relevant parties, was categorized as very good. In line with the opinion of Van Meter and Van Horn (Widodo, 2011), the prospect of effective policy implementation is strongly determined by accurate and consistent communication with the implementers of a policy. Communication is one of the key factors for the success of policy implementation (Syarif, Unde, & Asrul, 2014).

The Characteristics of Implementers. The characteristics of implementers, including the structure of the bureaucracy, its norms, and the patterns of relationships within it, determine the result of a program's implementation; this discussion is therefore inextricably linked to bureaucratic structure. In this study, the document study and interviews showed that the implementers' characteristics can be recognized from their willingness to follow all the rules in the BOS technical guidance. The implementers' characteristics in the implementation of the BOS policy, in terms of managing and using the BOS funds at public junior high schools in Tembilahan sub-district, were categorized as very good. This is seen from the competencies of the regency-level and school-level regular-BOS teams, the support from the government and the community, and the regular-BOS teams' openness in communicating with outside parties, all in line with the SOP/BOS technical guidance. This is consistent with the statement by Van Meter and Van Horn (Subarsono, 2011) that the characteristics of an organization determining whether a program succeeds consist of competencies, staff numbers, legislative and executive support, the power of the organization, and the degree of communication openness with outside parties and policymakers.

Implementers' Disposition. The implementers' perception within the organization where the policy is implemented can take the form of rejection, neutrality, or acceptance, related to their personal value system, loyalty, personal interests, and the like. According to Van Meter and Van Horn (Agustino, 2016), acceptance or rejection by the policy implementers strongly affects the success or failure of public policy implementation. This is quite possible when the policy is not formulated from the input of the local residents, who best understand the issues and problems they face.
The researcher examined how far the targeted groups were involved in the use and management of the BOS funds and how ready the implementers were to perform the policy. From the interviews, it can be concluded that the implementers' disposition regarding the implementation of the BOS policy at public junior high schools in Tembilahan sub-district, in managing and using the BOS funds, is excellent. A good implementers' disposition strongly supports the implementation of the BOS policy (Fauzi, 2019), and the disposition of implementers toward the policy greatly helps the process of achieving the goals of its implementation (Handani & Frinaldi, 2020).

Social, Economic, and Political Environment. A non-conducive social, economic, and political environment can be a source of failure in policy implementation performance; effective policy implementation therefore requires a conducive external environment. The data collected from the informants showed that the level of support from several parties for the progress of the BOS funds program is reflected in the involvement of external parties in using and managing the BOS funds. Support and involvement by the school committee, as the representative of the students' parents, is high; this is seen from the committee's involvement, together with the school, in planning the use and management of the BOS funds. Judging from the observations, the document study, and the interviews conducted at public junior high schools in Tembilahan sub-district, the social, economic, and political environment has given good support to the implementation of the BOS policy. Social, political, and economic conditions receive great attention in public policy, even when the implementation of policy decisions receives little (Septian & Suryaningsih, 2019), and these conditions support the success of policy implementation (Handani & Frinaldi, 2020). It can be concluded that, across these six phenomena, the implementation of BOS funds at public junior high schools in Tembilahan sub-district, in using and managing the funds, was good in terms of following the SOP/BOS technical guidance and the Regulation of the Ministry of Education and Culture No. 3 of 2019, based on the interviews, observations, and document study.

The Supporting Factors and the Inhibiting Factors of the BOS Funds Policy at Junior High Schools in Tembilahan Sub-district

Supporting Factors. The first supporting factor is communication, which strongly determines the success of achieving the goals of public policy implementation. Communication is a vital phenomenon affecting the implementation of a public policy: effective implementation occurs when the decision-makers know what they must do.
In terms of implementing the BOS funds program at public junior high schools in Tembilahan sub-district, communication took place between the regency-level regular BOS team and the school-level regular BOS team, and between the school-level BOS team and the parents of public junior high school students in Tembilahan sub-district. Besides communication, another supporting factor in implementing the BOS program at public junior high schools in Tembilahan sub-district was human resources, one of the several phenomena determining the success of the implementation of the BOS policy. Good-quality human resources in managing BOS funds make the process of achieving the goals of policy implementation easier. This is in line with a study by Azis Rachman, which found that good-quality human resources in managing BOS funds can improve education quality by managing and using the BOS funds to supply facilities that support teaching and learning activities (Rachman, 2020). Regarding the BOS program at the regency and school levels, the human resources acting as implementers are the school-level BOS management teams. Providing good knowledge and skills for the BOS management teams requires instruction from the regency board of education through community outreach. The regency board of education is known to conduct community outreach to schools that have received BOS funds in Indragiri Hilir regency.

Inhibiting Factors

The inhibiting factor in the implementation of the BOS program at public junior high schools in Tembilahan sub-district was financial resources. First, the timing of disbursement often came close to the end of the quarter; late disbursement of the BOS funds still occurred. This delay was caused by several schools that had not finished the BOS Treasurer Accountability Report for the office, and in other cases by changes in the school BOS management team for various reasons. Second, the nominal funds received by the schools, based on the total number of students, do not cover the schools' operational and non-personnel costs. According to William N. Dunn, the adequacy of a policy relates to how effectively it satisfies the needs, values, or opportunities that gave rise to the problem (Dunn, 2003). The nominal funds received by the public junior high schools in Tembilahan sub-district did not satisfy or cover the needs and conditions of the schools there. Therefore, additional funds from the regional government, beyond the BOS funds, are required to help cover school operational costs.

The second inhibiting factor is the characteristics of the policy, namely the existence of BOS technical guidance that is insufficiently flexible. The BOS technical guidance is in fact the reference for implementers in carrying out the BOS program; however, it also constrains the schools in using the available BOS funds. The school BOS management teams felt that many other components should be allowed to be covered by the BOS funds.

CONCLUSION AND SUGGESTION

The policy on the BOS funds at public junior high schools in Tembilahan sub-district is an appropriate public policy, and its impact can be felt by many people, especially the schools, teaching staff, and students.
By examining the six phenomena of BOS policy implementation according to the theory of Van Meter and Van Horn, namely the standards and targets of the policy, resources, communication among organizations and activity reinforcement, implementers' characteristics, implementers' disposition, and social, economic, and political conditions, at public junior high schools in Tembilahan sub-district, it can be concluded that the implementation is excellent, even though several obstacles still occur, such as the nominal funds and the legal basis for using the BOS funds. The supporting factors in the implementation of the BOS policy at public junior high schools in Tembilahan sub-district are communication and human resources, while the inhibiting factors are financial resources and the characteristics of the policy. The results of this study can serve as input for the Ministry of Education and Culture and the Ministry of Finance in managing and using the BOS funds at school, so that they can make improvements regarding the legal basis of the policy. This study can also serve as input for the regional government to cover the remaining school operational costs as a counterpart budget to the BOS funds from the central government.
COVID-19 lockdown: Impact on online gambling, online shopping, web navigation and online pornography

Background: The COVID-19 pandemic and its control measures may have had an impact on the unpleasant emotions experienced during the lockdown (LD). This may have increased the number of hours spent online and could have affected the quality of the enacted behaviors, in terms of loss of control over Internet use. In this online survey, we were interested in measuring how much loss of control was perceived regarding online gambling, online shopping, the use of online pornographic content, and web navigation.

Design and methods: The online survey was carried out during the COVID-19 pandemic in the post-lockdown period, and 1232 subjects participated. In the participating sample, healthcare workers (HW) made up 43.1%, of whom 18.7% were directly involved in the Coronavirus emergency; 52.3% of the sample were not HW. Only 0.6% of the sample gambled online, and 37.5% of those reported losing control over their gambling. Most of the sample shopped online during the LD (70.1%), but only 7.2% of those lost control by buying and/or spending more than they had intended.

Results: Significant data emerged showing that those who lost control while shopping online also lost control over the amount of time spent online (p<0.001). Of the sample, 21.6% reported using online pornographic material during the LD; 4.7% of them stated that the frequency increased, and 5.1% reported losing control by spending more money or more time than intended. Finally, 44.7% of the sample experienced loss of control during web navigation. Furthermore, during the LD, 67.8% of the sample reported experiencing unpleasant emotions. Of these, 8.4% stated that they enacted behaviors such as online gambling, online shopping, viewing online pornographic material, and web navigation to counter their negative emotions. Interestingly, we found a correlation between loss of control during web navigation and online shopping and the emotional states "upset", "scared" and "restless" (p<0.05).

Conclusion: There was no significant increase in potentially addictive behaviors, nor an increase in loss of control over these behaviors when enacted online. However, the loss of control in online shopping and web navigation was significantly correlated with the unpleasant emotional states of nervousness, fear and restlessness, whereas those who reported feeling strong and able to handle the situation experienced less loss of control in their web navigation. These correlations may suggest that these online behaviors act as modulators of unpleasant emotional states.

Significance for public health

This online survey was carried out during the COVID-19 pandemic. The pandemic and its control measures may have had an impact on the unpleasant emotions experienced worldwide during the lockdown, and online behaviors, such as online gambling, online shopping, the use of online pornographic content and web navigation, may act as modulators of unpleasant emotions. These online behaviors may be potentially addictive for people with a vulnerability to developing an addiction disorder and may represent a new challenge for the public health system.
Introduction

One of the most relevant events of global significance since the beginning of the millennium is no doubt the spread of the SARS-CoV-2 coronavirus, which was declared a pandemic by the World Health Organization (WHO) on March 11, 2020. Its socioeconomic and psychological impact has been so great that it has been considered a psychosocial catastrophe. 1,2 The radical changes in modus vivendi and the persistent perceived threat to one's survival have had the magnitude of a traumatic event that introduced a breaking point between life pre- and post-lockdown (LD), not unlike a natural disaster, that is, an event of unpredictable arrival and duration (source: Italian Prime Minister's Office, 2020). Indeed, trauma represents a complex emotional response to a stressful life-threatening event, regarding which one feels helpless. A traumatic event is not easily integrated or processed by the individual due to its pervasive nature; therefore, the trauma response is associated with psychological and behavioral alterations such as emotion regulation disorders, alterations in the person's system of meanings, and dysfunctional defense mechanisms. Starting from this definition of trauma, COVID-19 can be seen as the cause of individual and collective traumas, 3,4 recovery from which has not been facilitated by the support of physically close loved ones, due to preventive measures such as the stay-at-home order and social distancing, in addition to the fear of getting sick, of being hospitalized, of dying alone in the hospital, or of infecting friends and family members. 5 Given the scope of the event, the Addiction Medicine team (AM) decided to evaluate the coping skills of the Italian population in the context of the pandemic. Measuring psychological suffering and distress in phase 1 of the pandemic, which corresponds to the "hot" phase of a traumatic event, is an important element in predicting and preventing the risk of future development of Post-Traumatic Stress Disorder (PTSD). Peritraumatic stress reactions refer to behavior, emotions, thoughts and symptoms associated with stress during or immediately after the traumatic event. 6 The set of containment measures against the diffusion of COVID-19, defined and implemented by the Italian executive, suddenly revolutionized personal routines, social life, means of access to work and to the most diverse services, along with the delivery of courses and exams for schools at all levels.
The requirement to leave one's apartment only for matters of proven urgency and necessity, along with the near-total closing of offices, shops, schools and universities, in addition to restricting personal liberty, necessarily led to changes in internal emotional states and to an increase in the use of the Internet and online platforms, which were the almost exclusive link between one's own four walls and the outside world in this period of "forced confinement". During the LD, the time spent online inevitably increased: training activities, work, socializing, shopping and leisure took on an almost exclusively digital set-up. During the LD, the existential constant for young and not-so-young alike was the so-called On-Life 1 : more than in the recent past, it became obvious that it is no longer possible to clearly distinguish "real life" from "virtual life": "Vital, relational, social and communicative dimensions are the result of a continuous interaction between the material, analog reality and the virtual, interactive reality". Due to the extraordinary scope of the event, several studies have been conducted, also in Italian contexts, to evaluate changes in lifestyle, 7 the psychosocial effects induced by the pandemic, 1,8 along with consumer habits, patterns of gambling, 9 addictions and the relationship with digital media. 10,11 Di Renzo and colleagues, for example, reported that during the LD 37.3% of the 3533 Italian subjects involved in their study (from Northern to Southern Italy) modified their eating habits, even though only 16.7% of them made improvements by following a balanced diet. 7 Regarding tobacco use, 3.3% of the sample reportedly stopped smoking during the LD, probably for fear of incurring a greater risk of developing respiratory problems and dying because of COVID-19. 7,12 From a psychological point of view, conversely, it appears that the Italian general population reported a high prevalence of mental health issues during the last few weeks of the LD. 8 A study conducted in China on 1210 subjects residing in 194 different cities showed that 53.8% of the sample reported a moderate to severe psychological impact of COVID-19, with moderate to severe depressive symptoms in 28.8% of cases and moderate to severe distress in 8.1% of cases. 13 Another Chinese study conducted on 7236 people found symptoms of anxiety in 35% of participants, depression in 20.1% and sleep disturbances in 18.2% of participants. 14 In an Italian study, 24.7% of the sample (1515 subjects) presented depressive symptoms and 23.2% an anxiety disorder. Regarding sleep quality, 42.2% of the sample exhibited sleep disorders and, of these, only 1.1% manifested severe clinical insomnia. 8 Healthcare professionals and people living in Northern Italy perceived a significantly higher impact of the epidemic on their health compared with people not working in healthcare and people living in Central and Southern Italy. 1 The scientific literature highlights that psychoactive substance use and other potentially addictive behaviors such as gambling, playing videogames, watching TV series, using social media, watching pornographic content and web navigation have often been employed to reduce stress and anxiety or to lift a low mood. 11,15
Therefore, the tendency to use psychoactive substances or to enact said behaviors as putative coping strategies to manage a moment of crisis, such as the one triggered by the COVID-19 pandemic, considerably increases the chances of developing behavioral patterns that may be difficult to eradicate 16 and habits that can evolve into problematic behaviors. Indeed, to intervene for preventive purposes on problematic Internet use during the pandemic, an international and interdisciplinary group of experts prepared guidelines for the general and clinical population. 15 The literature shows that Internet use, especially access to websites featuring pornography and videogames, increased considerably during the LD. 15 Among behavioral addictions, Internet addiction (especially regarding the use of social media), online sex and videogame addiction stand out at the top of the list. 15,16 Eating disorders and compulsive shopping are less prevalent in the Indian context, but they are increasingly reported in Western countries. 17,18 The quarterly report by Salesforce (2020), the world-leading company in Customer Relationship Management, reports that digital purchases during the quarantine quickly surpassed the volume of online shopping during the Christmas holidays, and that between March 10th and 20th, 2020, the amount of money spent to buy basic commodities via digital means rose by 200%, remaining high throughout the quarter. Regarding gambling, an online survey conducted in Sweden revealed that only 4% of the participants (74 subjects) had increased their gambling behavior in response to the pandemic; a more in-depth analysis revealed that this subgroup correlated significantly with greater severity of gambling addiction, lower age, longer permanence inside the home, greater alcohol consumption, psychological discomfort and a history of social withdrawal; for these reasons, this subgroup may represent an especially vulnerable population to which specific care services should be offered. 9 Starting from this background, AM questioned how much the possible presence of unpleasant emotional states and the increase in the number of hours spent online could have impacted the quality of the enacted behaviors, in terms of loss of control over Internet use. Specifically, we were interested in how much loss of control was perceived regarding online gambling, online shopping, the use of online pornographic content and web navigation.

Design and methods

The survey was carried out during the COVID-19 pandemic in the post-LD period (from May 18th to June 26th, 2020) by means of an online questionnaire developed using Google Forms, an app for creating online surveys. The questionnaire required 10 min to fill out and was distributed via social networks and the AM mailing list. To broaden involvement in the survey, messaging apps were also employed. Participation in the survey was voluntary and without compensation. The questionnaire comprises 56 items. The first section examines the socio-demographic characteristics of the subjects: gender, age, region and province of residence, region in which the LD was spent, employment status, marital status, and the presence of cohabiting individuals.
Moreover, we asked whether the interviewee was a healthcare worker or an active volunteer worker during the health emergency, whether he or she had contracted the virus, whether he or she had ever been hospitalized due to a COVID-19 infection, and whether a cohabitee had gotten sick. In the second section, four potentially addictive online behaviors were examined: online gambling, online shopping, online pornography and aimless web navigation. For each of these behaviors, its presence before the LD and its variations during the LD, in terms of frequency and loss of control, were investigated. In the third section, 20 mood states referring to the LD period were listed, each to be evaluated using a 5-point Likert scale (from "not at all" to "very much"). Participants were asked whether the presence of unpleasant emotions led them to enact the said online behaviors and whether enacting them effectively alleviated their unpleasant emotions.

Statistical analysis

All tests were carried out with the IBM SPSS version 20.0 statistical package. Pearson's chi-square test was used for categorical variables; p<0.05 (two-tailed) was taken as the significance threshold for all tests.

Results

One thousand two hundred and thirty-two (1232) subjects participated in the survey. Of these, 1202 gave their informed consent, and data from 1196 responders were deemed valid. The sample comprised 35.1% males and 64.6% females; 0.3% of subjects reported their gender as "other". The mean age of the sample was 43.25 years (SD ±14.5). Regarding employment status, 19.4% of the subjects did not currently have a job (pensioners, students, unemployed), 60.5% were employees, and 20.1% were independent professionals. The marital status of the sample is shown in Table 1. Regarding the geographic distribution of the sample, the data were divided into four areas: Veneto, Lombardy, Piedmont (the regions most affected by the SARS-CoV-2 virus), and a single category comprising the remaining Italian regions; 51.8% of the sample spent the LD in Veneto, 15% in Lombardy, 2.1% in Piedmont, and 31.1% in another Italian region. In the participating sample, healthcare workers (HW) numbered 530 (43.1% of the sample), of whom 18.7% were directly involved in the Coronavirus emergency; 52.3% of the sample were not HW, and 1.7% were active volunteer workers during the pandemic; 2.9% of the sample provided no answer. Data regarding SARS-CoV-2 infection of the responders are listed in Table 2. Going into detail, we divided the sample by infection status and field of employment (Table 3). Merging the data regarding both the certainty and the possibility of having contracted COVID-19 (Table 3), 6.4% of the subjects may have been infected; 28.8% of the sample had a negative swab test, while 64.5% believed they had not been infected; 0.2% of the sample had been hospitalized for an acute clinical picture due to COVID-19. For the statistical analyses we separately considered each of the four potentially addictive online behaviors investigated. Before the LD, 1.6% of the interviewees had gambled live. During the LD, 2.2% managed to gamble live, 0.6% of the sample gambled online, and 1.7% reported an increase in online gambling frequency. Of those who had gambled online (8 subjects), 3 reported losing control over their gambling (37.5% of those who had gambled online during the LD). Regarding online shopping, 74.7% of the sample had made online purchases before the LD, while 70.1% of the sample shopped online during the LD.
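As a concrete illustration of the analysis described above, the following is a minimal sketch of a Pearson chi-square test of independence on a hypothetical 2x2 table; the counts, variable pairing and threshold check are illustrative assumptions for demonstration, not the study's actual data or code.

```python
# Minimal sketch (hypothetical data): a Pearson chi-square test of
# independence, mirroring the categorical analysis described above
# (two-tailed significance threshold p < 0.05).
from scipy.stats import chi2_contingency

# Hypothetical cross-tabulation: loss of control over time spent online
# (rows: yes/no) against loss of control in online shopping (columns: yes/no).
table = [[40, 20],
         [64, 1072]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
if p < 0.05:
    print("Association is significant at the 0.05 level (two-tailed).")
```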
The frequency of online shopping during the LD proved unchanged in 56.1% of cases, increased in 14% of cases and decreased in 29.9% of cases. Sixty subjects (7.2%) of those who shopped online during the LD stated that they lost control by buying and/or spending more than they had intended. Significant data emerged showing that those who lost control while shopping online also lost control over the amount of time spent online (p<0.001). Table 4 shows the sampling distribution of participants' loss of control over the time spent online, divided into HW and NHW. Two hundred and fifty-seven (257) subjects (21.6% of the sample) reported making use of online pornographic material during the LD; for most of them the frequency of use remained unchanged, for 5.2% it decreased, and for 4.7% it increased. Of those who had made use of online pornographic material, 5.1% (n=13) reported losing control by spending more money or more time than intended. During the LD, 67.8% of the sample (n=835) reported experiencing unpleasant emotions. Of these, 8.4% (n=104) stated that they enacted behaviors such as online gambling, online shopping, viewing online pornographic material and web navigation to counter their negative emotions. Dividing the sample into HW and NHW, no significant differences in said behaviors emerged. We will now analyze in detail the emotions that participants in the study reportedly experienced during the LD. Dividing the sample into two populations, HW and NHW, we obtained what is shown in Table 5. Table 6 shows that only irritability is significant; that is, the NHW group manifested higher levels of irritability than the HW group. Considering the specific sample of HW, both active and inactive during the pandemic, no significant differences regarding the emotions experienced during the LD arose.

Discussion

The participating sample is biased towards the female gender, with approximately twice as many women as men. Mean age covers quite a wide range, offering a rather heterogeneous sample from this point of view. The sample's employment status presents a 2:10 ratio of unemployed individuals, with a balanced distribution by gender. The geographical distribution of the sample is clearly biased towards a higher prevalence of people who spent the LD in Veneto (51.8%). Infection remained quite limited: 1.4% of participants were certain of having contracted COVID-19 and 4.9% suspected having gotten sick. The population that participated in the study is general rather than clinical, and indeed shows a low prevalence of gamblers (of any severity) and online players. As was evident from the data, the increase in online gambling proved trifling in the reference sample, contradicting our expectations of a shift from offline to online gambling. The use of online apps and stores is ever more widespread, with a constant annual increase, and it is radically modifying our habits and means of shopping for consumer goods. 19 In the sample considered in the present work, the habit of buying goods and services online is common: 2 people out of 3 already made use of online stores. With the LD and the closing of most production and sales activities, the increase in online store revenues was a predictable phenomenon. In our sample, 14% of subjects increased this practice. However, loss of control in online shopping was limited: fewer than 1 in 10 were unable to manage their online purchases.
What is interesting, though, is that those who lost control of their online shopping also reportedly lost control of the time they spent online. Regarding the emotional states we considered, it is important to highlight that the sample is distributed quite randomly between HW and NHW; that is, the different percentages that emerged are not ascribable to the participants' employment status. Regarding irritability, we detected a significant difference between HW and NHW: in detail, it appears present in the NHW group. We also found a correlation between loss of control during web navigation and online shopping and the emotional states "upset", "scared" and "restless". This correlation may suggest that these online behaviors act as modulators of unpleasant emotional states.

Conclusions

Given the restrictions caused by the LD, we expected people to use the Internet more than they did before. The question we wanted to address was whether there could be a loss of control in online activities. What emerges from the survey is that there was no significant increase in potentially addictive behaviors, nor an increase in loss of control over these behaviors when enacted online, so we did not find a change in trend towards online activity. However, it is interesting to note how the loss of control in online shopping and web navigation was significantly correlated with the unpleasant emotional states of nervousness, fear and restlessness, whereas those who reported feeling strong and able to handle the situation experienced less loss of control in their web navigation.

Limitations of the study

The survey was designed and implemented during the health emergency, which entailed a tight time schedule during the phenomenon itself. The objective was to create a "photograph" of the state of the situation, to understand whether the LD experience had led to an increase in potentially addictive online behaviors. Given the above, this work presents a few limitations:
- the sample is not randomized, as the data were collected through the contacts of the AM of Verona, and is therefore influenced by a strong presence of HW;
- it was not possible to check that participants did not fill out the questionnaire more than once;
- no power analysis to estimate the necessary sample size was carried out;
- no standardized questionnaires were used.
Is it possible to reduce intra-hospital transport time for computed tomography evaluation in critically ill cases using the Easy Tube Arrange Device?

Objective: Patients are often transported within the hospital, especially in cases of critical illness for which computed tomography (CT) is performed. Since increased transport time increases the risk of complications, reducing transport time is important for patient safety. This study aimed to evaluate the ability of our newly invented device, the Easy Tube Arrange Device (ETAD), to reduce transport time for CT evaluation in cases of critical illness.

Methods: This prospective randomized controlled study included 60 volunteers. Each participant arranged five or six intravenous fluid lines, monitoring lines (noninvasive blood pressure, electrocardiography, central venous pressure, arterial catheter), and therapeutic equipment (O2 supply device, Foley catheter) on a Resusci Anne manikin. We measured transport time for CT evaluation using the conventional method and the ETAD.

Results: The median transport time for CT evaluation was 488.50 seconds (95% confidence interval [CI], 462.75 to 514.75) and 503.50 seconds (95% CI, 489.50 to 526.75) with 5 and 6 fluid lines using the conventional method, and 364.50 seconds (95% CI, 335.00 to 388.75) and 363.50 seconds (95% CI, 331.75 to 377.75) with the ETAD (all P<0.001). The time differences were 131.50 seconds (95% CI, 89.25 to 174.50) and 148.00 seconds (95% CI, 116.00 to 177.75) (all P<0.001).

Conclusion: The transport time for CT evaluation was reduced using the ETAD, which would be expected to reduce the complications that may occur during transport in cases of critical illness.

What is already known: Critically ill patients require many devices. Depending on the transport time of these patients, these devices can cause numerous complications. The patient's risk would therefore be expected to fall as transport time is reduced.

What is new in the current study: We have developed a device (the Easy Tube Arrange Device) to transfer a patient with many fluid lines in a short time. This device was used to compare transport times for a CT scan of a manikin, and the time was shortened.

INTRODUCTION

Most hospitalized patients receive intravenous hydration, electrolytes, medications, nutrients, and blood transfusions. 1,2 Patients with serious conditions receive a larger number of intravenous infusions than those with less severe conditions and require additional devices to monitor the electrical activity of the heart, O2 saturation, blood pressure, central venous pressure, and arterial pressure. Such patients are transported between various places within the hospital for additional testing or treatment, which can cause numerous complications ranging from minor to life-threatening. [3][4][5][6][7][8][9] One report stated that the likelihood of these complications increases as the transport time or the number of fluids attached to the patient increases. 10 According to the research by Parmentier-Decrucq et al., 11 120 of 262 patients (45.8%) had complications during intra-hospital transport, which was to computed tomography (CT) in 93.6% of cases, followed by magnetic resonance imaging, angiography, and nuclear medicine testing. 12 Nurses are often charged with patient transport.
One report stated that, next to checking vital signs, patient transport is the most frequent activity performed by nurses. 13 Therefore, nurses inevitably bear the burden of organizing the monitoring devices and fluid lines during patient transport. Accordingly, this study aimed to compare the time required for transport to CT, the most common destination for patient transport within the hospital, with versus without the use of the Easy Tube Arrange Device (ETAD) developed in a previous study; to examine the complications that occur during patient transport; and to verify the convenience of the newly developed ETAD.

Participants

This study, which received institutional review board approval, included 60 volunteers who were responsible for patient transport, including nurses, emergency medical technicians, and doctors. The profession, sex, and length of employment of the volunteers were recorded.

Newly developed ETAD

To withstand the weight of the fluids, a 400 mm × 400 mm × 5 mm-thick acrylic plate was attached to a 500 mm × 500 mm × 3 mm-thick ethylene-vinyl acetate copolymer plate on each side. Three fluid bags were attached to each plate, with clips under the plates to hold the fluid lines and an attached flexible cable, cut on one side, to organize the fluid lines. To overcome the inability to identify individual fluid lines inside the flexible cable, each line was marked at the start and the end of the flexible cable with a distinctive color for identification (Fig. 1).

Study protocol

Each participant attached an electrocardiogram, an O2 saturation measuring device, an automatic blood pressure measuring device, a central venous pressure measuring device, a continuous arterial pressure monitor, an oxygen supply, and a Foley catheter to an ordinary CPR manikin (Resusci Anne, Laerdal Medical, Stavanger, Norway). In the starting setting, the conventional method or the ETAD was used to organize the fluid lines. Each manikin was treated and attached with IV fluids as a patient in the intensive care unit (ICU) would be treated by the ICU nurse. For each method, a total of five or six fluids were attached, making a total of four different settings (three fluids, one for measuring central venous pressure, one for maintaining arterial cannulation, and one for peripheral venous fluid, were used as the default). The central venous line was connected to the right subclavian vein, while the arterial line was connected to the left radial artery. The remaining intravenous fluids were connected to both arms. The numbers of fluids, five or six, were chosen because they were determined to be statistically significant in a previous study. 14 The time intervals for the six steps were measured in seconds: the step of moving fluids and monitoring devices from the default position (Fig. 2A) to the gurney (Fig. 2B) for patient transport (A period), the step of transporting the patient on the gurney from the ICU to the CT room (B period), the step of moving the patient from the gurney (Fig. 2C) to the CT bed (Fig. 2D) (C period), the step of moving the patient from the CT bed (Fig. 2D) back to the gurney (Fig. 2C) (D period), the step of transporting the patient from the CT room to the ICU (E period), and the step of returning to the starting position (Fig. 2A) from the gurney (Fig. 2B) (F period). The test was carried out using the conventional method or the ETAD with five or six fluids each.
A data recording sheet, with a random order of settings and numbers of fluids, was selected at random by each participant before the start of the test. The time elapsed for each step was measured from the moment the participant handled the fluid, vital sign monitoring device, or therapeutic device to prepare for the transport to the moment the participant stopped handling it (A, C, D, and F periods), while the transport time was measured from the moment the gurney passed through the ICU door to the moment it passed through the CT room door (B and E periods). All six steps were individually measured using the conventional method or the ETAD, with five or six fluid lines for each case, in seconds rounded to the first decimal place. The complications encountered during the study were divided into detachment of a fluid line, detachment of a monitoring device, and dropping of a fluid, monitoring device, or the ETAD. Finally, the participants were asked to score the convenience of the conventional method and the ETAD on a scale of 0 to 10, from most uncomfortable to most comfortable. For the statistical analysis, IBM SPSS Statistics ver. 21.0 (IBM Corp., Armonk, NY, USA) was used, and the normality of each variable was tested using the Shapiro-Wilk test. For analysis of the time elapsed in each group, a paired t-test was used for normally distributed variables, while the Wilcoxon signed-rank test was used for non-normally distributed variables. For analysis of the participants' characteristics against the time elapsed in each group, the Mann-Whitney U-test and the independent two-sample t-test were used for sex, and one-way analysis of variance was used for occupation. Spearman's correlation coefficient was used to assess employment length. To analyze complications, the chi-square test was used, and P-values < 0.05 were considered statistically significant.

Comparison of time required to prepare patient transport using the conventional versus the ETAD method (A period)

The median time required to prepare the five fluid lines using the conventional method was 178.00 seconds (95% confidence interval [CI], 165.25 to 187.75), while the time for preparing six fluid lines was 185.48 ± 11.69 seconds. Using the ETAD, the times required to prepare to the standard state were 124.00 seconds (95% CI, 110.25 to 139.00) and 120.53 ± 19.67 seconds, respectively, and the differences were statistically significant (all P < 0.001) (Table 2).

Comparison of time required to transport patient from ICU to CT room using the conventional versus the ETAD method (B, E periods)

For the conventional method, the time required to transport the patient from the ICU to the CT room was 20.50 seconds (95% CI, 19.00 to 21.00) for five fluid lines and 21.00 seconds (95% CI, 19.00 to 21.00) for six fluid lines, while the time required to transport the patient from the CT room back to the ICU was 21.00 seconds (95% CI, 20.00 to 22.00) regardless of the number of fluid lines. When the ETAD was used, the time required to transport the patient from the ICU to the CT room was 21.00 seconds (95% CI, 20.00 to 22.00) regardless of the number of fluid lines, while the time required to transport the patient from the CT room back to the ICU was 20.50 seconds (95% CI, 19.00 to 21.00) and 21.00 seconds (95% CI, 20.00 to ...) for five and six fluid lines, respectively.
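The normality-gated choice between a paired t-test and a Wilcoxon signed-rank test described in the statistical analysis above can be sketched as follows. This is a minimal illustration with made-up paired timings, not the study's SPSS workflow; the arrays and thresholds are illustrative assumptions.

```python
# Minimal sketch (hypothetical data): select a paired t-test or a Wilcoxon
# signed-rank test based on a Shapiro-Wilk normality check of the paired
# differences, as described in the statistical analysis above.
import numpy as np
from scipy.stats import shapiro, ttest_rel, wilcoxon

# Hypothetical paired preparation times (seconds): conventional vs. ETAD.
conventional = np.array([178.3, 181.0, 175.6, 190.2, 169.9, 184.4])
etad         = np.array([124.1, 130.5, 118.7, 127.9, 115.2, 122.6])

diffs = conventional - etad
_, p_norm = shapiro(diffs)  # test normality of the paired differences

if p_norm > 0.05:  # differences look normal -> paired t-test
    stat, p = ttest_rel(conventional, etad)
    test = "paired t-test"
else:              # otherwise -> Wilcoxon signed-rank test
    stat, p = wilcoxon(conventional, etad)
    test = "Wilcoxon signed-rank"

print(f"{test}: statistic = {stat:.3f}, p = {p:.4f}")
```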
Comparison of time required to transport patient from gurney to CT bed and from CT bed to gurney using the conventional versus the ETAD method (C, D periods)

Using the conventional method, the times required to transport a patient from the gurney to the CT bed were 48.62 ± 7.49 and 46.82 ± 7.60 seconds for five and six fluid lines, respectively, while those using the ETAD were 35.62 ± 5.71 and 34.70 ± 4.90 seconds, differences that were statistically significant (P < 0.001). Using the conventional method, the times required to transport a patient from the CT bed back to the gurney were 47.60 ± 7.19 seconds and 44.50 seconds (95% CI, 41.00 to 49.00), while those using the ETAD were 34.85 ± 4.74 seconds and 33.00 seconds (95% CI, 31.00 to 36.00), differences that were statistically significant (all P < 0.001) (Table 2).

Comparison of time required to return to starting point after ICU arrival using the conventional versus the ETAD method (F period)

Using the conventional method, the time required to return to the starting point was 174.00 seconds (95% CI, 164.00 to 182.00) and 186.00 seconds (95% CI, 175.00 to 192.00) for 5 and 6 fluid lines, respectively, while using the ETAD it was 123.00 seconds (95% CI, 108.75 to 137.00) and 122.00 seconds (95% CI, 108.25 to 133.50), respectively, and the differences were statistically significant (all P < 0.001) (Table 2).

Comparison of total time required for intra-hospital transport based on method and number of fluid lines

Using the conventional method, the total transport time for five fluid lines was 489.00 ± 38.66 seconds, while that for six fluid lines was 504.80 ± 29.04 seconds, and the difference was statistically significant (P = 0.004). Using the ETAD, the total transport time for five fluid lines was 364.50 seconds (95% CI, 335.00 to 388.75), while that for six fluid lines was 363.50 seconds (95% CI, 331.75 to 377.75), and the difference was not statistically significant (P = 0.101).

Comparison of total time consumed for intra-hospital transport using the conventional versus the ETAD method in the same fluid state

Using the conventional method, the total time consumed for the transport was 488.50 seconds (95% CI, 462.75 to 514.75) and 503.50 seconds (95% CI, 489.50 to 526.75) for five- and six-line transport, respectively, while using the ETAD it was 364.50 seconds (95% CI, 335.00 to 388.75) and 363.50 seconds (95% CI, 331.75 to 377.75), respectively (all P < 0.001).

Comparison of total time consumed for intra-hospital transport using the conventional versus the ETAD method based on participant demographics

The total time consumed to prepare the transport and return to the initial state using the conventional method for men was 492.… seconds (Table 3). By profession, the total time using the conventional method for nurses was 473.38 ± 31.16 and 502.16 ± 29.96 seconds for five and six fluid lines, respectively, while that for the emergency medical technicians was 501.67 ± 35.35 and 510.83 ± 32.41 seconds, respectively, and the differences were statistically significant for five fluid lines (P < 0.001) but not for six fluid lines (P = 0.648). When the ETAD was used, the total times were 347.27 ± 52.60 and 343.49 ± 46.10 seconds for nurses, 368.33 ± 30.30 and 359.50 ± 27.86 seconds for emergency medical technicians, and 366.73 ± 23.50 and 363.55 ± 18.89 seconds for doctors, and the differences were not statistically significant (P = 0.243, P = 0.232) (Table 3). The correlation analysis of employment length showed that the correlation coefficients for the total time consumed using the conventional method were -0.163 and -0.730 for five and six fluid lines, respectively (P = 0.215, P = 0.580), while those for the ETAD were -0.360 and 0.052, respectively, implying that employment length and total time consumed do not have a meaningful correlation (P = 0.784, P = 0.695) (Table 4).
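As an illustration of the correlation analysis just reported, the snippet below computes Spearman's rank correlation on hypothetical employment-length and total-time data; the numbers are invented for demonstration and are not the study's measurements.

```python
# Minimal sketch (hypothetical data): Spearman's rank correlation between
# employment length and total transport time, as in the analysis above.
from scipy.stats import spearmanr

years_employed = [1, 2, 3, 5, 7, 10, 12, 15]               # hypothetical
total_time_sec = [505, 490, 498, 487, 492, 480, 495, 488]  # hypothetical

rho, p = spearmanr(years_employed, total_time_sec)
print(f"rho = {rho:.3f}, p = {p:.3f}")  # p > 0.05 -> no meaningful correlation
```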
Frequency of complications using the conventional versus the ETAD method

For the five fluid line setting, a total of ten complications occurred with the conventional method, including three fluid line detachments, four monitoring device detachments, and three instances of a fluid or device dropping, while a total of two complications occurred with the ETAD, including one fluid line detachment and one monitoring device detachment; the difference was statistically significant (P = 0.015). For the six fluid line setting, a total of 12 complications occurred with the conventional method, including five fluid line detachments, three monitoring device detachments, and four instances of a fluid or device dropping, while a total of two complications occurred with the ETAD, including one monitoring device detachment and one instance of a fluid or device dropping; the difference was statistically significant (P = 0.040).

Comparison of convenience of the conventional method and the ETAD

The survey convenience score of the conventional method was 4.0 (range, 4.0 to 5.0), while that of the ETAD was 8.0 (range, 7.0 to 9.0), a difference that was statistically significant (P < 0.001).

DISCUSSION

Patients with critical illness are often transported within the hospital with several fluids and monitoring devices attached, which may cause complications such as decreased O2 saturation, hypotension, arrhythmia, cardiac arrest, and device detachment. Numerous studies have reported that 6% to 71% of patient transports involved complications. 3,15,16 Many studies have been conducted to develop efficient methods for decreasing complications during patient transport. These included ensuring a sufficient oxygen supply for patients on a respirator, confirming appropriate devices and medical staff, sustaining sedation, securing professional medical staff in case of emergency, following the transport protocol accurately, and transporting to an accessible area. [17][18][19][20] Based on a previous study concluding that the ETAD could decrease transport time, 14 this study examined patient transport for CT scans within the hospital, which comprise the majority of patient transports, to determine whether the device could aid the involved medical staff by decreasing transport-related complications. This study showed that the ETAD decreased the time required for in-hospital patient transport to a CT scan. The total time required for in-hospital patient transport for a CT scan using the conventional method was 488.50 seconds (95% CI, 462.75 to 514.75) and 503.50 seconds (95% CI, 489.50 to 526.75) for five and six fluid lines, respectively, while that using the ETAD method was 364.50 seconds (95% CI, 335.00 to 388.75) and 363.50 seconds (95% CI, 331.75 to 377.75); the saving grows as the number of fluid lines increases, and the differences were statistically significant (P < 0.001, P < 0.001). These results showed a larger decrease in time than the previous study on preparation time.
The research conducted by Doring et al. 10 reported that increased intra-hospital patient transport time increases the occurrence of hemodynamic instability. Thus, decreasing the intra-hospital patient transport time for CT scans by using the ETAD is expected to decrease complications. The transport time between the ICU and the CT room did not differ significantly between the conventional and ETAD methods. This result implies that the person pushing the gurney does not affect the time; rather, the distance and route affect transport time. Employment length did not show a statistically significant correlation with transport time. This may imply that the ETAD is effective regardless of staff proficiency, but further research is necessary to confirm this hypothesis. The analysis by profession showed statistically significant differences in transport time for the five fluid line setting using the conventional method. This result may be because nurses, who are normally responsible for patient transport, are more proficient than those in other professions. Since the statistically significant difference disappeared when the number of fluid lines increased to six, it is reasonable to assume that a larger number of fluid lines causes difficulty in organizing the lines attached to the patient regardless of proficiency. The sex-based analysis did not show a significant difference with the conventional method, but when the ETAD was used, the transport time of male participants was significantly shorter than that of female participants. This result is likely explained by the fact that five or six fluids must be moved together using the ETAD, which requires significantly more strength than moving one fluid at a time. In addition to decreasing transport time, the complications considered in this study were confirmed to occur less often when the ETAD was used, and the participants reported that the ETAD was more convenient than the conventional method. Thus, it is expected to ease patient transport and decrease nurse workload. The limitations of this study are as follows. First, it considered a limited number of fluid lines and monitoring devices, whereas a real patient may have more monitoring devices or additional devices attached. Second, the use of the Resusci Anne, which is lighter than a real patient, did not accurately simulate real patients. Patient height and weight are expected to affect transport time, so further research in the clinical setting is necessary. Third, the study was conducted with a pre-installed ETAD. Although the installation time was disregarded in this study, since it compared transport times, installing the ETAD in reality clearly requires more time than the conventional method because the lines must be organized in the flexible cable. However, the inconvenience caused by disorganized lines during transport is greater than that of organizing the fluid lines in one flexible cable, and the installation is not part of the transport itself. Fourth, patient transport is performed by several people in real life; however, only one individual performed the transport in this study. More research is needed to determine the effects of transport by two or more individuals. Finally, only device-related complications were considered in this study; complications such as hypotension, decreases in O2 saturation, and cardiac arrest were not considered. Future studies are warranted to overcome these limitations.
The maximum number of odd cycles in a planar graph

How many copies of a fixed odd cycle, $C_{2m+1}$, can a planar graph contain? We answer this question asymptotically for $m\in\{2,3,4\}$ and prove a bound which is tight up to a factor of $3/2$ for all other values of $m$. This extends the prior results of Cox--Martin and Lv et al. on the analogous question for even cycles. Our bounds result from a reduction to the following maximum likelihood question: which probability mass $\mu$ on the edges of some clique maximizes the probability that $m$ edges sampled independently from $\mu$ form either a cycle or a path?

Introduction

For graphs $G$ and $H$, let $N(G, H)$ denote the number of (unlabeled, not necessarily induced) copies of $H$ in $G$. Furthermore, for a planar graph $H$, define
\[
  N_{\mathcal{P}}(n, H) \overset{\mathrm{def}}{=} \max\bigl\{ N(G, H) : G \text{ is an $n$-vertex planar graph} \bigr\}.
\]
The study of $N_{\mathcal{P}}(n, H)$ was initiated by Hakimi and Schmeichel [11], who determined both $N_{\mathcal{P}}(n, C_3)$ and $N_{\mathcal{P}}(n, C_4)$ precisely. Later, Alon and Caro [1] continued this line of inquiry by pinning down the value of $N_{\mathcal{P}}(n, K_{2,k})$ for all values of $k$. Wormald [16] and Eppstein [5] independently argued that $N_{\mathcal{P}}(n, H) = \Theta(n)$ when $H$ is a 3-connected planar graph. Huynh, Joret and Wood [12] demonstrated that $N_{\mathcal{P}}(n, H) = \Theta(n^{f(H)})$ for every planar graph $H$, where $f(H)$ is a graph invariant called the flap number. See also [13] for a further generalization of this result. Since the order of magnitude of $N_{\mathcal{P}}(n, H)$ is now understood, the next question is to pin down the coefficient in front of the leading term. This leading coefficient has been found for several small graphs beyond those mentioned above: $C_5$ [9], $P_4$ [7], $P_5$ [6] and $P_7$ [3]. The strongest result along these lines to date is that
\[
  N_{\mathcal{P}}(n, C_{2m}) = \left(\frac{n}{m}\right)^m + o(n^m)
\]
for all $m \geq 3$, which was proved for small $m$ in [3,4] and then extended to all $m$ in [14]. This paper is motivated by a desire to understand the maximum number of copies of an odd cycle a planar graph can hold. For $m \geq 3$, a lower bound of
\[
  N_{\mathcal{P}}(n, C_{2m+1}) \geq m\left(\frac{n}{m}\right)^m - O(n^{m-1})
\]
is realized by starting with a copy of $C_m$ and replacing each edge $xy$ by a path on approximately $n/m - 1$ many vertices and connecting each of these new vertices to both $x$ and $y$ (see Figure 1a). We believe this construction to be asymptotically tight for all $m \geq 3$, and we make strides toward proving this to be the case.

Figure 1: (a) Blowup of the cycle $C_5$. (b) Blowup of the cycle $C_6$ with edges between consecutive large-degree vertices.

We note that the constant 3 can be improved without much effort, especially for larger values of $m$; however, bringing this constant all the way down to our conjectured value of 2 is currently beyond our reach. We additionally note that the value of $N_{\mathcal{P}}(n, C_5)$ was previously determined exactly for all $n$ by Győri et al. [9] through wildly different means. However, we include this (weaker) result to demonstrate the method developed in this paper.

Reduction to maximum likelihood estimator problems on graphs

Very generally, a maximum likelihood estimator question asks: which probability distribution maximizes the probability of a certain set of observations? Historically, these questions were focused on determining the member of a family of probability distributions (e.g. the family of normal distributions) that best fits a set of observed data.
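As a minimal worked example of this classical question (not taken from the paper), consider estimating the success probability of a coin from i.i.d. observations:

```latex
% Classical MLE sketch: given i.i.d. observations $x_1,\dots,x_n \in \{0,1\}$
% from the Bernoulli family $\{\mathrm{Bernoulli}(p) : p \in [0,1]\}$,
% the log-likelihood is
\[
  \ell(p) \;=\; \sum_{i=1}^n \bigl( x_i \log p + (1 - x_i)\log(1 - p) \bigr),
\]
% and setting $\ell'(p) = 0$ yields the familiar estimator
\[
  \hat{p}_{\mathrm{MLE}} \;=\; \frac{1}{n}\sum_{i=1}^n x_i .
\]
```

The "maximum likelihood questions on graphs" considered below replace the parametric family by the simplex of edge probability measures and the observation by the event that the sampled edges form a fixed graph.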
Recently, Cox and Martin [3] showed that bounding $N_{\mathcal{P}}(n, H)$, assuming $H$ has a special subdivision structure, can be reduced to a question asking: which probability distribution $\mu$ on the edges of a clique maximizes the probability that $e(H')$ many edges sampled independently from $\mu$ yield a copy of $H'$? While significantly different in scope, this question can be viewed as a "maximum likelihood estimator question on graphs" and appears to be absent from the literature, save the papers resulting from this line of inquiry [2,3,4,14]. The biggest success of this reduction to maximum likelihood estimators to date is the proof that $N_{\mathcal{P}}(n, C_{2m}) = \left(\frac{n}{m}\right)^m + o(n^m)$ for every $m \geq 3$, in which case the corresponding maximum likelihood question was solved for small $m$ in [3,4] and for all $m$ by Lv et al. [14]. We discuss the actual maximum likelihood question in this case shortly (see Lemma 1.4). The key contribution of this manuscript is an extension of the methods of Cox and Martin in order to relate the problem of bounding $N_{\mathcal{P}}(n, C_{2m+1})$ to a maximum likelihood estimator question on graphs.

Definition 1.2. An edge probability measure $\mu$ is a probability measure on the edges of some complete graph. For a complete graph $K$, we denote by $\Delta_K$ the set of all edge probability measures on $K$.

For any clique $K$, note that $\Delta_K$ is naturally identified with the $\bigl(\binom{|V(K)|}{2} - 1\bigr)$-dimensional simplex.

Definition 1.3. For a graph $H$ and $\mu \in \Delta_K$, define
\[
  \beta(\mu, H) \overset{\mathrm{def}}{=} \sum_{\substack{H' \subseteq K \\ H' \cong H}} \ \prod_{e \in E(H')} \mu(e).
\]
$\beta(\mu, H)$ can be viewed as the probability that $e(H)$ many edges sampled independently from $\mu$ form a copy of $H$. The following is one of the key reduction lemmas of Cox and Martin.

Lemma 1.4. For every $m \geq 3$,
\[
  N_{\mathcal{P}}(n, C_{2m}) \leq \Bigl( \max_{\mu \in \Delta_{K_n}} \beta(\mu, C_m) \Bigr)\, n^m + O(n^{m-1}),
\]
where the implicit constant in the big-oh notation depends on $m$. The key contribution of this paper is an analogous reduction lemma for odd cycles: Lemma 1.5 bounds $N_{\mathcal{P}}(n, C_{2m+1})$ above by $n^m$ times the maximum, over $\mu \in \Delta_{K_n}$, of a fixed combination of $\beta(\mu, C_m)$ and $\beta(\mu, P_{m+1})$, plus an error term where the implicit constant in the big-oh notation depends on $m$.

The reduction lemma for even cycles is more general than stated; it actually applies to the class of graphs with linearly many edges and no copy of $K_{3,t}$ for some $t$. This includes, in particular, the class of graphs embeddable onto surfaces of any fixed genus. However, the reduction lemma for odd cycles developed in this paper relies critically on the topology of the plane. The majority of this manuscript is dedicated to proving the reduction lemma for odd cycles. After proving the reduction lemma, we then bound the resulting maximum likelihood questions in order to produce Theorem 1.1. Theorem 1.6 records the resulting bounds on these maximum likelihood quantities: one bound for $m = 2$, tight bounds for $m \in \{3, 4\}$, and a bound with leading constant 2.7 for all other values of $m$. As mentioned previously, the constant 2.7 can be lowered, especially for larger values of $m$, but it is currently beyond our reach to bring it all the way down to our conjectured value of 2. Furthermore, if one seeks only a bound of the form $C/m^{m-1}$ for $m \geq 5$, where $C$ is some absolute constant, then one can naively use the known bounds of $\beta(\mu, C_m) \leq \frac{1}{m^m}$ (Lv et al. [14]) and $\beta(\mu, P_{m+1}) \leq \frac{20}{(m+1)^{m-1}}$ (Antonir and Shapira [2]). The proof of Theorem 1.6 can be found in Section 4. There, the three stated bounds are proved separately as Proposition 4.1, Theorem 4.6 and Theorem 4.7, respectively.
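To build intuition for $\beta(\mu, H)$, the following sketch estimates, by Monte Carlo, the probability that $e(H)$ edges drawn i.i.d. from $\mu$ form a copy of $H$, the loose "sampling probability" reading given above. This is not code from the paper: the helper name and the use of networkx are illustrative choices, and the paper's exact normalization of $\beta$ may differ from this sampling probability by a constant factor.

```python
# Illustrative sketch: Monte Carlo estimate of P(e(H) i.i.d. edge draws
# from mu form a copy of H). Not from the paper; names are hypothetical.
import random
import networkx as nx

def sample_probability(mu, H, trials=100_000):
    """mu: dict mapping edges (u, v) of some clique to probabilities summing
    to 1. H: a networkx graph with m edges. Returns the estimated probability
    that m independent draws from mu give exactly the edge set of a copy of H."""
    edges, weights = zip(*mu.items())
    m = H.number_of_edges()
    hits = 0
    for _ in range(trials):
        draw = random.choices(edges, weights=weights, k=m)
        G = nx.Graph(draw)  # duplicate draws collapse, leaving < m edges
        if G.number_of_edges() == m and nx.is_isomorphic(G, H):
            hits += 1
    return hits / trials

# Example: mu uniform on the edges of a triangle; H = C_3.
mu = {(0, 1): 1/3, (1, 2): 1/3, (0, 2): 1/3}
print(sample_probability(mu, nx.cycle_graph(3)))  # ~ 3!/3^3 = 0.222...
```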
Notation

In this paper, all graphs are simple and we use standard graph theory definitions and notation (generally following [15]). For a graph $G$, we write $e(G)$ for its number of edges and $v(G)$ for its number of vertices. For distinct elements $x, y$, we abbreviate the set $\{x, y\}$ to $xy$, mirroring common shorthand for an edge in a graph. Throughout this paper, we fix the value $m$ and obtain results to compute upper bounds for $N(G, C_{2m+1})$ for graphs $G$ of large order. As such, all implicit constants in any big-oh notation will depend on $m$.

Preliminaries

The following fact is well-known:

Proposition 2.1. If $G$ is a planar graph, then $e(G) \leq 3v(G)$. If $G$ is a planar bipartite graph, then $e(G) \leq 2v(G)$.

One immediate consequence of these bounds is that a planar graph cannot have too many vertices of large degree:

Proposition 2.2. An $n$-vertex planar graph contains at most $6n/d$ many vertices of degree $\geq d$.

A considerable number of arguments in this manuscript rely on the fact that planar bipartite graphs are sparse. In particular, a planar bipartite graph cannot have many vertices of degree 3 or larger:

Proposition 2.3. Let $G$ be a planar graph and let $A, B$ be disjoint sets of vertices such that every vertex of $A$ has at least $k \geq 3$ neighbors in $B$. Then $|A| \leq \frac{2|B|}{k-2}$.

Proof. Consider the bipartite subgraph of $G$ with parts $A$ and $B$. By construction, this subgraph has at least $k|A|$ many edges. Additionally, this is a bipartite planar graph and so it has at most $2(|A| + |B|)$ many edges (Proposition 2.1). Therefore, $k|A| \leq 2(|A| + |B|)$, and rearranging gives $|A| \leq \frac{2|B|}{k-2}$.

One particular consequence of the above proposition is that a bounded-degree planar bipartite graph cannot contain too many copies of $P_3$:

Proposition 2.4. Let $G$ be a planar bipartite graph with parts $A, B$. The number of copies of $P_3$ with both endpoints in $B$ and midpoint in $A$ is bounded above by $|A| + 4d|B|$, where $d = \max_{v \in A} \deg v$.

Proof. For each positive integer $k$, let $A_k \subseteq A$ denote the set of vertices of $A$ with exactly $k$ neighbors in $B$. Then the number of copies of $P_3$ of the desired type is precisely $\sum_{k \geq 1} \binom{k}{2} |A_k|$. By then applying Proposition 2.3, we continue to bound this sum above by $|A| + 4d|B|$.

Finally, we will rely on known orders of magnitude of the maximum number of paths and cycles in a planar graph:

Theorem 2.5 (Győri et al. [10]). $N_{\mathcal{P}}(n, P_{2m}) = \Theta(n^m)$ for all $m \geq 0$, and $N_{\mathcal{P}}(n, P_{2m+1}) = \Theta(n^{m+1})$ for all $m \geq 0$.

Note that these formulas work even in the trivial cases of $P_0$, $P_1$, $P_2$, where $P_0$ is the null graph. We will need this result, even in the trivial cases, in our proofs.

3 Proof of Lemma 1.5: The reduction lemma for odd cycles

Given a planar graph $G$ on $n$ vertices, we will find graphs $G_1$ and then $G_2$ so that the total number of copies of $C_{2m+1}$ in $G$ is the same as in $G_2$, up to a small error term, where $G_2$ will be highly structured and in which counting the cycles is asymptotically equivalent to solving a maximum likelihood problem. Both $G_1$ and $G_2$ will be so-called tumor graphs:

Definition 3.1. A tumor graph $(H; B, S)$ is a graph $H$ together with a partition $V(H) = B \cup S$ such that every vertex of $S$ has at most two neighbors in $B$. For $x, y \in B$, we write $S_{xy}(H)$ for the set of vertices of $S$ whose neighborhood in $B$ is exactly $\{x, y\}$; the sets $S_x(H)$ and $S_{\varnothing}(H)$ are defined analogously. The sets $S_{xy}(H)$ are called tumors. When the tumor graph is understood, we drop the parenthetical and simply write $S_{\varnothing}$, $S_x$, $S_{xy}$.

By the definition of a tumor graph, the sets $S_{\varnothing}$, $S_x$ and $S_{xy}$ partition $S$. Notice that our conjectured asymptotic extremal examples for $N_{\mathcal{P}}(n, C_{2m+1})$ are all tumor graphs (see Figure 1a). We first extract a tumor graph $G_1$ from the original graph $G$; this is the content of Lemma 3.2, whose proof is in Section 3.1. The next step will be to prove that, in $G_1$, almost all of the copies of $C_{2m+1}$ alternate between vertices in $B$ and vertices in $S$ as much as possible: we call a copy of $C_{2m+1}$ good if, read cyclically, its vertices alternate between $B$ and $S$ except for exactly one consecutive repetition. Observe that in Figure 1a, every good copy of $C_{11}$ has the form $BS \cdots BSS$, whereas in Figure 1b, every good copy of $C_{11}$ has the form $BS \cdots BSB$.

Lemma 3.4. Let $(G_1; B, S)$ be a planar tumor graph on $n$ vertices. If $d$ denotes the largest degree of a vertex in $S$, then the number of copies of $C_{2m+1}$ in $G_1$ that are not good is bounded in terms of $d$, $|B|$ and $|S|$.

By our choice of $G_1$ from Lemma 3.2, $d \leq n^{1/5}$, $|S| \leq n$ and $|B| \leq O(n^{4/5})$. Thus, $N(G_1, C_{2m+1})$ is at most the number of good copies of $C_{2m+1}$ in $(G_1; B, S)$ plus $O(n^{m-1/5})$. The proof of Lemma 3.4 is in Section 3.2. Finally, we will find a planar tumor graph $G_2$ which is more refined than $G_1$ but which contains asymptotically the same number of good copies of $C_{2m+1}$ as $G_1$; this is the content of Lemma 3.6, which produces a so-called benign tumor graph $G_2$. Note that our conjectured asymptotic extremal examples for $N_{\mathcal{P}}(n, C_{2m+1})$ are all benign tumor graphs (see Figure 1a).
Putting together the definitions of $G_1$ and $G_2$ along with Lemmas 3.2, 3.4 and 3.6, we obtain eq. (1). The final step in the proof of the reduction lemma is to actually count good cycles in a benign planar tumor graph. The proof of Lemma 3.7 is in Section 3.4. In light of eq. (1), this will complete the proof of Lemma 1.5.

Proof of Lemma 3.2: Finding a tumor graph within a planar graph

Let $G'$ be the subgraph of $G$ formed by deleting all edges between $S'$ and $B_{\geq D}$. After these edges are deleted, every vertex in $S$ is now adjacent, in the resulting $G'$, to at most two vertices in $B_{\geq D}$.

Proof. We simply need to bound the number of copies of $C_{2m+1}$ in $G$ which use some edge within $E_G[S', B_{\geq D}]$.

We may now disregard the graph $G$ and work solely with the graph $G'$. Let $S'' \subseteq S$ be the set of all vertices in $S$ which have at least three neighbors in $B$ and let $G_1$ be the subgraph of $G'$ formed by deleting all edges between $S''$ and $B_{<D}$. By construction, every vertex in $S$ (and hence in $S''$) has at most two neighbors in $B_{\geq D}$, so $G_1$ has the property that every vertex in $S$ has at most two neighbors in $B$. That is to say, $(G_1, B, S)$ is a tumor graph. By reasoning similar to that behind eq. (2), we have the following.

Claim 3.9. With $G_1$ defined as above, only a negligible number of copies of $C_{2m+1}$ is lost in passing from $G'$ to $G_1$.

Proof. We simply need to bound the number of copies of $C_{2m+1}$ in $G'$ which use some edge within $E_G[S'', B_{<D}]$. In other words, we need to bound the number of copies of $C_{2m+1}$ of the form $(v_1, \ldots, v_{2m+1})$ where $v_1 \in B_{<D}$ and $v_2 \in S''$. In order to bound these, we consider cases according to the nature of $v_4$.

In the first case, eq. (3) tells us that there are at most $O(n/d)$ choices for the pair $(v_1, v_2)$. Then there are at most $\deg v_1 < D$ choices for $v_{2m+1}$ and at most $\deg v_2 < d$ choices for $v_3$. After picking $v_3 \in S$, we then have at most $\deg v_3 < d$ choices for $v_4$. Together, this yields a total of at most $O(dDn)$ choices for the tuple $(v_{2m+1}, v_1, v_2, v_3, v_4)$. Finally, by Theorem 2.5, there are at most $2N(G', P_{2(m-2)}) \leq O(n^{m-2})$ choices for the path $(v_5, \ldots, v_{2m})$. We conclude that there are at most $O(dDn^{m-1})$ cycles of this form.

In the second case, eq. (3) again tells us that there are at most $O(n/d)$ choices for the pair $(v_1, v_2)$. Then there are at most 2 choices for $v_3$ since $v_2 \in S$ and $G_1$ is a subgraph of $G'$. Therefore, the number of such cycles is again suitably small. In the remaining situation, Proposition 2.4 implies that the number of choices is suitably bounded as well. This completes the proof of Lemma 3.2.

Proof of Lemma 3.4: Most cycles in a tumor graph are good

Fix a graph $G$ and subsets $V_1, \ldots, V_\ell \subseteq V$. For $k \geq \ell$, we say that a $k$-cycle $C$ in $G$ contains the pattern $V_1 V_2 \cdots V_\ell$ if we can cyclically label the vertices of $C$ as $(v_1, \ldots, v_k)$ with $v_i \in V_i$ for each $i \in [\ell]$.

In order to prove that most copies of $C_{2m+1}$ in a planar tumor graph $(G; B, S)$ are good, we identify the patterns that a bad copy of $C_{2m+1}$ must contain. To do so, we define the following sets:

• SSS is the set of all copies of $C_{2m+1}$ that contain the pattern $SSS$.
• 2SS is the set of all copies of $C_{2m+1}$ that contain the pattern $SS$ at least twice. Note that $2SS \supseteq SSS$.
• BBB is the set of all copies of $C_{2m+1}$ containing the pattern $BBB$.
• BBSS is the set of all copies of $C_{2m+1}$ containing the pattern $BBSS$.
• 1BB1SS is the set of all copies of $C_{2m+1}$ containing both the pattern $BB$ and the pattern $SS$.
• 2BB is the set of all copies of $C_{2m+1}$ containing the pattern $BB$ at least twice. Note that $2BB \supseteq BBB$.

We note that all bad cycles must fall into at least one of these categories because good cycles contain exactly one instance of either $BB$ or $SS$, and otherwise alternate between $B$ and $S$. We now show that each of these six sets is small.
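For small examples, membership in these six sets reduces to cyclic substring matching on the label sequence. The sketch below checks patterns in a single orientation (a full check would also scan the reversed cycle); all encodings are our own.

```python
def label_string(cycle, B):
    return "".join("B" if v in B else "S" for v in cycle)

def contains(cycle, B, pat, times=1):
    """Does the cycle contain `pat` at least `times` times, matched against
    consecutive positions read cyclically?  Overlapping hits are counted,
    which is consistent with the remark that 2SS contains SSS."""
    s = label_string(cycle, B)
    doubled = s + s[: len(pat) - 1]          # wrap around the cycle
    hits = sum(doubled.startswith(pat, i) for i in range(len(s)))
    return hits >= times

def bad_classes(cycle, B):
    """Which of the six classes above a bad copy of C_{2m+1} falls into."""
    out = []
    if contains(cycle, B, "SSS"): out.append("SSS")
    if contains(cycle, B, "SS", 2): out.append("2SS")
    if contains(cycle, B, "BBB"): out.append("BBB")
    if contains(cycle, B, "BBSS") or contains(cycle, B, "SSBB"): out.append("BBSS")
    if contains(cycle, B, "BB") and contains(cycle, B, "SS"): out.append("1BB1SS")
    if contains(cycle, B, "BB", 2): out.append("2BB")
    return out

print(bad_classes([1, 2, "s1", "s2", 3, "s3", "s4"], B={1, 2, 3}))
# ['2SS', 'BBSS', '1BB1SS']
```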
Lemma 3.10. Each of the six sets defined above is small; specifically, the following bounds hold.

Proof. Case: $2SS \setminus SSS$. Given a pair of $SS$ occurrences, there are two paths between them along the cycle, one of odd length and one of even length. Thus, we may assume that the $SS$ pairs occur at $(v_2, v_3)$ and at $(v_{2i}, v_{2i+1})$ for some $i$. Finally, by Theorem 2.5, there are at most $2N(G, P_{2(i-3)}) \leq O(n^{i-3})$ choices for the path $(v_5, \ldots, v_{2i-2})$ and at most $2N(G, P_{2(m-i)}) \leq O(n^{m-i})$ choices for the path $(v_{2i+2}, \ldots, v_{2m+1})$. In conclusion, combining these counts bounds the number of cycles of this form.

Case: BBB. By Theorem 2.5, the number of cycles that contain no $S$ vertices at all is suitably small. Once these are selected, there are at most two choices for $v_5$ because, by the definition of a tumor graph, $v_4$ can be adjacent to at most 2 members of $B$.

Case: $1BB1SS \setminus (2SS \cup BBB \cup BBSS)$. In this case, $m \geq 3$ because any five-cycle containing both the patterns $BB$ and $SS$ must be of the form $BBSSV$, i.e. it belongs to BBSS. Now, any cycle within $1BB1SS \setminus (2SS \cup BBB \cup BBSS)$ must have a form described by block sizes $k_1, \ldots, k_\ell$, and there is at least one $i$ for which $k_i = 2$. In fact, for parity reasons, there are at least two $i$'s for which $k_i = 2$. Therefore, let $t \in [\ell - 1]$ be the smallest index for which $k_t = 2$. If $t = 1$, then our cycle has one form, determined by its initial segment; if $t \geq 2$, then our cycle has a second form, and a similar count applies.

Case: $2BB \setminus (2SS \cup BBB \cup BBSS)$. In this case, $m \geq 3$ because any five-cycle with two instances of $BB$ must be of the form $BBB$. Any cycle within $2BB \setminus (2SS \cup BBB \cup BBSS)$ must have a form with block sizes $k_1, \ldots, k_\ell \in \{1, 2\}$ where $\sum_{i=1}^{\ell} (k_i + 1) = 2m - 2 \geq 4$ and there is an $i$ for which $k_i = 2$. In fact, for parity reasons, there are at least two $i$'s for which $k_i = 2$. If $k_i = 2$ for all $i \in \{1, \ldots, \ell\}$, then $3\ell = 2m + 1$ (hence $m \geq 4$). There are at most $2|S|$ choices for each $BSB$ piece; hence the number of cycles of this form is suitably bounded for all $m \geq 4$. If not all $k_i$'s are equal to 2, we may assume, without loss of generality, that $k_\ell = 1$ and there exists a $t \in \{1, \ldots, \ell - 2\}$ which is the smallest index for which $k_t = 2$. If $t = 1$, then our cycle has a form built from $BSB$ pieces, and there are at most $2|S|$ choices for each of the $BSB$ pieces. If $t \geq 2$, then our cycle has a slightly different form built from $BSB$ pieces; as above, there are at most $2|S|$ choices for each of the $BSB$ pieces.

Proof of Lemma 3.6: Cleaning a tumor graph

The process of cleaning $G_1$ to arrive at $G_2$ will go through several stages and requires a number of facts. In Section 3.3.1, we establish that there are few edges between distinct tumors. In Section 3.3.2, we establish that deleting edges between vertices in $S$ does not decrease the number of good cycles by much, and that an operation we call "contraction-uncontraction" maintains planarity and the number of vertices but does not decrease the number of good cycles at all. In Section 3.3.3 (Cleaning Stage I), we delete certain edges and vertices that cannot participate in good cycles and perform contraction-uncontraction on some edges between vertices in $\bigcup_{x \in B} S_x$. In Section 3.3.4 (Cleaning Stage II), we delete and contract-uncontract some edges such that every vertex in $\bigcup_{x \in B} S_x$ has at most one neighbor in a tumor and there are no edges between distinct tumors. In Section 3.3.5 (Cleaning Stage III), we perform deletion and contraction-uncontraction and modify $S$ and $B$ into $S'$ and $B'$ so that any edge induced by $S'$ is within some tumor. In each stage, we ensure that the total number of good cycles does not change by too much.
Few edges between distinct tumors

We begin with two simple observations about planar tumor graphs.

Proposition 3.11. Let $(G; B, S)$ be a planar tumor graph.

• For each $xy \in \binom{B}{2}$, every vertex $z \notin \{x, y\}$ has at most two neighbors within $S_{xy}$.
• $G$ has at most four edges between any fixed pair of distinct tumors.

Proof. If $|N(z) \cap S_{xy}| \geq 3$, then $x, y, z$ would be three distinct vertices with three common neighbors, so $G$ would contain a copy of $K_{3,3}$, contradicting the fact that $G$ is planar. This establishes the first item. To prove the second item, fix two distinct tumors $S_{xy}, S_{zw}$ of $G$ and set $T = G[S_{xy}, S_{zw}]$. Since $xy \neq zw$, we may relabel these vertices so that $x, y, z$ are distinct. Now, suppose that $T$ contained at least five edges; by the first item, the maximum degree of $T$ is at most two and so $T$ must contain a matching on three edges. Let $a_1b_1, a_2b_2, a_3b_3 \in E(T)$ be such a matching where $a_i \in S_{xy}$ and $b_i \in S_{zw}$ for each $i \in [3]$. But then these six vertices along with $x, y, z$ contain a subdivided copy of $K_{3,3}$ with parts $\{x, y, z\}$ and $\{a_1, a_2, a_3\}$; a contradiction.

The goal of the remainder of this section is to prove that planar tumor graphs have few edges between distinct tumors (Lemma 3.12) and that most other vertices in $S$ interact sparsely with the set of tumors (Lemma 3.13). The latter concerns the set of vertices $v$ satisfying any of the following:

1. $v \in S_\emptyset \sqcup \bigsqcup_{x \in B} S_x$ and $v$ has at least three neighbors within $\bigcup_{xy \in \binom{B}{2}} S_{xy}$.
2. $v \in \bigsqcup_{x \in B} S_x$ and $v$ has neighbors in distinct tumors.
3. $v \in S_x$ for some $x \in B$ and $v$ has two neighbors within $S_{yz}$ for some $yz \not\ni x$.

Unsurprisingly, separated tumor graphs are much easier to handle and so we will need to "separate" a tumor graph in order to obtain the bounds in the preceding lemmas. In a separation $(G'; B', S)$ of $(G; B, S)$:

– If $s \in S_x$, then $s \in S_{x'}$ for some $x' \in B'_x$, and
– If $s \in S_{xy}$, then $s \in S_{x'y'}$ for some $x' \in B'_x$ and $y' \in B'_y$.

That is, each $x \in B$ corresponds to a set of vertices $B'_x \subset B'$, and the vertices of $S$ adjacent to $x$ in $G$ must be adjacent to a unique member of $B'_x$ in $G'$. A separation of a tumor graph preserves the following crucial property (Observation 3.16): edges between distinct tumors of $G$ remain edges between distinct tumors of $G'$.

Seeing how both Lemmas 3.12 and 3.13 bound interactions between distinct clusters, the first step in their proofs will be to find a separation of the original tumor graph that is not too large. This is the content of the following proposition.

Proof. Set $T = T(G; B, S)$. Note that $G$ contains a subdivision of $T$ and so $T$ is additionally planar. Due to this, the bound of $2e(T) + |\{x \in B : \deg_T x = 0\}| \leq 6|B|$ is immediate, so we focus only on the first inequality. We prove the claim by double induction on the pair $(\Delta, \eta)$ (induction is done on $\Delta$ first and then $\eta$) where $\Delta$ is the maximum degree of $T$ and $\eta$ is the number of vertices in $T$ of degree $\Delta$. If $\Delta \leq 1$, then $(G; B, S)$ is separated and so there is nothing to prove. Thus, suppose that $\Delta \geq 2$. To begin, we may suppose that $G$ has no edges between vertices of $B$ since we may remove any such edges without affecting the conclusion of the lemma. Now, fix a straight-edge planar embedding of $G$ and let $x \in B$ be any vertex with $\deg_T x = \Delta$. We may label the neighbors of $x$ as $s_0, \ldots, s_{k-1}$ in counter-clockwise order around $x$. Since $G$ has no edges between vertices of $B$, each $s_i$ resides within $S$. Let $\{t_0, \ldots, t_{\ell-1}\} = \{s_0, \ldots, s_{k-1}\} \setminus S_x$ where the $t_i$'s remain in counterclockwise order about $x$; note that each $t_i$ resides within $S_{xy}$ for some $y \in B$. Note that $\ell \geq 2$ since $x$ has degree $\Delta \geq 2$ in $T$. We claim that there is some $y \in B$ for which $\{t_0, \ldots, t_{\ell-1}\} \cap S_{xy}$ is a non-trivial cyclic interval; that is, $\{t_0, \ldots, t_{\ell-1}\} \cap S_{xy} = \{t_i, t_{i+1}, \ldots, t_{i+r}\}$ for some $i \in \{0, \ldots, \ell-1\}$ and $r \in [\ell-1]$ where the indices are computed modulo $\ell$.
If no such $y$ were to exist, then we could locate indices $a < b < c < d \in \{0, \ldots, \ell-1\}$ for which $t_a, t_c \in S_{xy}$ and $t_b, t_d \in S_{xz}$ for some distinct $y, z \in B \setminus \{x\}$. Now, consider the subgraph of $G$ induced by $x, y, z, t_a, t_b, t_c, t_d$, inheriting its planar embedding from $G$; call this plane graph $H$. In $H$, the neighbors of $x$ are $t_a, t_b, t_c, t_d$, which appear in counterclockwise order around $x$. By a standard argument in planar graph theory, we may suppose that $t_a t_b t_c t_d$ forms a cycle in $H$ since we can add these edges without violating the planarity of $H$. However, as in Figure 2, $t_a y$, $y t_c$ and $t_b z$, $z t_d$ are edges of $H$, and so $H$ is a subdivision of $K_5$; a contradiction.

Figure 2: The vertices $t_a, t_c \in S_{xy}$ and $t_b, t_d \in S_{xz}$ in counterclockwise order. The vertices $\{x, t_a, t_b, t_c, t_d\}$ form the vertices of a subdivision of $K_5$.

Thus, without loss of generality, let $y \in B$ be such that $\{t_0, \ldots, t_{\ell-1}\} \cap S_{xy} = \{t_0, \ldots, t_r\}$ for some $r \in [\ell-1]$. We may additionally suppose that $t_0 = s_0$ and that $t_r = s_{r'}$ for some $r' \in [k-1]$. We form the new tumor graph $(G'; B', S)$ by introducing a new vertex $x'$ to have $B' = B \sqcup \{x'\}$ and replacing all edges of the form $s_i x$ by $s_i x'$ for each $i \in \{0, \ldots, r'\}$ (see Figure 3). Observe that $G'$ is still planar since $s_0, \ldots, s_{r'}$ is a cyclic interval of neighbors of $x$.

Figure 3: (a) A subgraph with several tumors at $x$ before separation.

Furthermore, if $\Delta'$ denotes the maximum degree of $T'$ and $\eta'$ denotes the number of vertices in $T'$ of degree $\Delta'$, then we find that $(\Delta', \eta')$ is strictly smaller than $(\Delta, \eta)$ in the lexicographic ordering. Thus, the claim follows from the induction hypothesis.

Now that we understand how to separate a tumor graph, we can prove Lemmas 3.12 and 3.13. Both proofs follow the same philosophy: separate, contract, bound.

Proof of Lemma 3.12. Let $(G'; B', S)$ be the separation of $(G; B, S)$ guaranteed by Proposition 3.17. This guarantees that edges between distinct tumors of $G$ are between distinct tumors of $G'$ (Observation 3.16). Moreover, $|B'| \leq 6|B|$, so it suffices to prove the claim for $(G'; B', S)$. In other words, we may suppose that $(G; B, S)$ is already separated. Let $R$ denote the set of edges with end-points in distinct tumors and set $T = T(G; B, S)$, which is a matching since $(G; B, S)$ is separated; in particular, $e(T) \leq |B|/2$. Now, create a graph $H$ whose vertex set is $E(T)$ where $\{xy, zw\} \in E(H)$ if there is an edge between $S_{xy}$ and $S_{zw}$ in $G$. Due to Proposition 3.11, we know that $|R| \leq 4e(H)$. The key observation is that $H$ is a planar graph. Indeed, consider starting with $G$ and contracting all edges of the form $sb \in E(G)$ for $s \in \bigcup_{xy \in \binom{B}{2}} S_{xy}$ and $b \in B$. Since $T$ is a matching, the effect of these contractions is to replace each tumor by a single vertex; in particular, $H$ is isomorphic to a subgraph of this contracted graph. Putting these observations together, we finally bound $|R| \leq 4e(H) \leq 12\,v(H) = 12\,e(T) \leq 6|B|$.

The proof of Lemma 3.13 follows along similar lines, but is more involved since we will need to perform many different sequences of contractions.

Proof of Lemma 3.13. Let $(G'; B', S)$ be the separation of $(G; B, S)$ guaranteed by Proposition 3.17. Since $|B'| \leq 6|B|$, it suffices to prove the claim for $(G'; B', S)$ (see Observation 3.16). In other words, we may suppose that $(G; B, S)$ is already separated. Begin by fixing any $xy \in \binom{B}{2}$ and consider $S_{xy}$.
We build an auxiliary graph $H_{xy}$ whose vertex set is $S_{xy}$, where $ab$ is an edge if there is some $s \in S_\emptyset \sqcup \bigsqcup_{z \in B} S_z$ for which $s$ is adjacent to both $a$ and $b$ in $G$. Observe that any such $s$ is adjacent to at most two vertices in $S_{xy}$ (by Proposition 3.11), hence $G$ has a subdivision of $H_{xy}$ and so $H_{xy}$ is a planar graph. In particular, the chromatic number of $H_{xy}$ is bounded by some absolute constant $C$. We may therefore fix a coloring $\chi : \bigcup_{xy \in \binom{B}{2}} S_{xy} \to [C]$ so that $\chi$ is a proper coloring of each $H_{xy}$. Next, fix an arbitrary orientation $\vec{T}$ of the matching $T(G; B, S)$. For each $t \in [C]$, we build a graph $G_t$ from $G$ as follows: for each $(x, y) \in \vec{T}$,

• Contract all edges of the form $xa$ where $a \in S_{xy}$ and $\chi(a) = t$, and
• Contract all edges of the form $ya$ where $a \in S_{xy}$ and $\chi(a) \neq t$.

We will use the graphs $G_t$ to define sets $A_t$ such that $X \subseteq A_1 \cup \cdots \cup A_C$ and then show that each $A_t$ has size at most $2|B|$ and that the number of edges between $A_t$ and $\bigcup_{xy \in \binom{B}{2}} S_{xy}$ is at most $6|B|$. To that end, for each $t \in [C]$, let $A_t \subseteq S_\emptyset \sqcup \bigsqcup_{x \in B} S_x$ denote those vertices that have at least three neighbors within $B$ in the graph $G_t$. (Note that no vertices of $S_\emptyset \sqcup \bigsqcup_{x \in B} S_x$ were lost when creating $G_t$.)

We first consider those $v \in S_\emptyset \sqcup \bigsqcup_{x \in B} S_x$ which have at least three neighbors within $\bigcup_{xy \in \binom{B}{2}} S_{xy}$. Suppose first that $v$ has neighbors within at least three distinct tumors: $S_{x_1y_1}, S_{x_2y_2}, S_{x_3y_3}$. Since $G$ is separated, each of the $x_i$'s and $y_i$'s are distinct. As such, in each $G_t$, $v$ is adjacent to either $x_i$ or $y_i$ (or both) for each $i \in [3]$ and so $v \in A_t$ for each $t \in [C]$. If this is not the case, then since $v$ has at most two neighbors within any individual $S_{xy}$ (by Proposition 3.11), this means that $v$ has neighbors within two distinct tumors, $S_{x_1y_1}, S_{x_2y_2}$, such that it has two neighbors $a, b \in S_{x_1y_1}$. Again, since $G$ is separated, $x_1, y_1, x_2, y_2$ are distinct. Now, since $v$ is a common neighbor of $a$ and $b$, we know that $ab \in E(H_{x_1y_1})$ and so $\chi(a) \neq \chi(b)$. In particular, if $t = \chi(a)$, then $vx_1$ and $vy_1$ are both edges of $G_t$. Finally, since $v$ has a neighbor in $S_{x_2y_2}$, either $vx_2$ or $vy_2$ is an edge of $G_t$ and so $v \in A_t$.

Next, suppose that $v \in S_x$ for some $x \in B$ and suppose that $v$ has neighbors within distinct tumors $S_{yz}$ and $S_{wa}$. Since $G$ is separated, we know that $y, z, w, a$ are distinct; in particular, at most one of these four vertices is equal to $x$. Without loss of generality, we may suppose that $x \notin \{y, z, w\}$. Now, since $v$ has some neighbor within $S_{wa}$, there is some value of $t$ for which $vw$ is an edge of $G_t$. Within this same $G_t$, either $vy$ or $vz$ is also an edge. Finally, $vx$ is additionally an edge of $G_t$ and so $v \in A_t$.

Finally, suppose that $v \in S_x$ for some $x \in B$ and that $v$ has two neighbors within $S_{yz}$ for some $\{y, z\} \not\ni x$; call these two neighbors $a, b$. As before, we know that $ab \in E(H_{yz})$ and so $\chi(a) \neq \chi(b)$. Thus, if $t = \chi(a)$, then both $vy$ and $vz$ are edges of $G_t$. Additionally, $vx$ is an edge of $G_t$ and so $v \in A_t$ since $x, y, z$ are distinct.

Thus $X \subseteq \bigcup_{t \in [C]} A_t$. Since $G_t$ is planar, each vertex in $A_t$ has at least three neighbors in $B$, and $A_t$ is disjoint from $B$, Proposition 2.3 implies that $|A_t| \leq 2|B|$. Next, since each $v \in A_t$ had at most two neighbors in $G$ within any particular tumor (Proposition 3.11), the number of edges between $A_t$ and $\bigcup_{xy \in \binom{B}{2}} S_{xy}$ in $G$ is at most twice as large as the number of edges between $A_t$ and $B$ in $G_t$. Of course, the number of edges between $A_t$ and $B$ in $G_t$ is at most $2(|A_t| + |B|) \leq 6|B|$.
Putting these together, we have shown that $|X| \leq C \cdot 2|B|$ and the number of edges between $X$ and $\bigcup_{xy \in \binom{B}{2}} S_{xy}$ is at most $C \cdot 6|B|$. Since $C$ is bounded, the claim follows.

We will not need the notion of separation in the rest of the proof but will instead use Lemmas 3.12 and 3.13 to conclude structural facts about the graph $(G; B, S)$.

Contraction-uncontraction

In this section, we introduce the main operation used to control tumor graphs: contraction-uncontraction. The first observation is that, given a specific $P_3$ in a planar graph, we may "uncontract" the middle vertex so that the new vertices are adjacent and both adjacent to the end-points of the $P_3$. We omit a proof since a straight-line drawing makes it clear that the operation is valid, as demonstrated in Figure 4. Since contracting an edge into a single vertex preserves planarity and we just observed that one can "uncontract" a vertex to create a new edge while preserving planarity, we can perform these two operations in sequence: first contracting an edge and then uncontracting the resulting vertex. See Figure 4 for a demonstration of this operation, which we dub "contraction-uncontraction".

Figure 4: Contracting the edge $uv$ into a single vertex $(uv)$ and then uncontracting along the 3-path $x(uv)y$ to recover the vertices $u$ and $v$.

The resulting graph $G'$ has the same vertex-set as does $G$ and, due to Observation 3.18, it additionally satisfies:

• $G'$ is planar, and
• $uv$ is an edge and both $u$ and $v$ are adjacent to both $x$ and $y$.

In Lemma 3.20, we show that, in a tumor graph, if a vertex in $S$ has exactly one $B$ neighbor (that is, it lies in $\bigsqcup_{x \in B} S_x$), then under certain conditions we can find a graph with at least as many good cycles and with one fewer vertex in $\bigsqcup_{x \in B} S_x$. This is the key ingredient necessary to "clean" a tumor graph and is accomplished by first contracting an edge and then uncontracting the resulting vertex.
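As a toy rendering of this operation on adjacency lists, the sketch below realizes the guaranteed edges ($uv$, and $u, v$ both adjacent to $x$ and $y$) after contracting and uncontracting along $x$-$u$-$v$-$y$. The redistribution of the remaining neighbors, which in the actual argument follows the planar embedding, is handled here by a crude stand-in (all handed to $u$), so the sketch illustrates only the combinatorial effect and does not preserve planarity.

```python
def contract_uncontract(adj, x, u, v, y):
    """Sketch of the contraction-uncontraction move along the path x-u-v-y.
    Afterwards uv is an edge and both u and v are adjacent to both x and y,
    matching the two guaranteed properties above.  The remaining former
    neighbors of u and v are reattached to u as a crude stand-in for the
    embedding-dependent 'scrambling'."""
    adj = {w: set(nbrs) for w, nbrs in adj.items()}      # work on a copy
    others = (adj[u] | adj[v]) - {x, y, u, v}
    for w in others:
        adj[w] -= {u, v}
        adj[w].add(u)
    adj[u] = others | {x, y, v}
    adj[v] = {x, y, u}
    for w in (x, y):
        adj[w] |= {u, v}
    return adj

G = {"x": {"u"}, "u": {"x", "v", "w"}, "v": {"u", "y"}, "y": {"v"}, "w": {"u"}}
print(contract_uncontract(G, "x", "u", "v", "y")["v"])  # {'x', 'y', 'u'} (order may vary)
```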
Proof. The result of performing a contraction-uncontraction operation along the path $xuvy$ adds the edges $xv$ and $uy$ (should they not already exist) and perhaps "scrambles" the other neighbors of $u$ and $v$. Every good cycle will either have one $SS$ edge and otherwise alternate between vertices in $B$ and vertices in $S$, or will have one $BB$ edge and otherwise alternate between vertices in $B$ and vertices in $S$. We will classify the good cycles of both $G$ and $G'$ according to the vertex in the $SS$ edge (if it exists) that is neither $u$ nor $v$. That is,

• The set $\mathcal{C}_\emptyset(G)$ is the set of all good cycles in $G$ that have no $SS$ edge, or for which the $SS$ edge contains neither $u$ nor $v$, or for which the $SS$ edge is $uv$ and contains the path $xuvy$. The set $\mathcal{C}_\emptyset(G')$ is similarly defined.
• The set $\mathcal{C}_*(G')$ is the set of all good cycles in $G'$ for which the $SS$ edge is $uv$ and contains the path $xvuy$. Note that $xvuy$ is not a path in $G$ because $u \in S_x$.
• For any $w \notin \{u, v\}$, the set $\mathcal{C}_w(G)$ is the set of all good cycles in $G$ for which the $SS$ edge is either $uw$ or $vw$. The set $\mathcal{C}_w(G')$ is similarly defined.

The cycles in $\mathcal{C}_\emptyset(G)$ are unchanged after contraction-uncontraction and so map to themselves in $\mathcal{C}_\emptyset(G')$. Note that if $uv$ is the $SS$ edge then the cycle must contain the path $xuvy$. For $\mathcal{C}_w(G)$, we create a map from $\mathcal{C}_w(G)$ to $\mathcal{C}_w(G')$ according to the subgraph induced by $\{u, v, w\}$.

• If $\{u, v, w\}$ induces a path $vuw$, then the cycle must contain a path $xuwz$ for some $z \in B \setminus \{x\}$. Depending on whether $w$ is a neighbor of $u$ or $v$ in $G'$, keep the cycle with $xuwz$ or replace that path with $xvwz$. This is a one-to-one map.
• If $\{u, v, w\}$ induces a path $uvw$, then the cycle contains the path $yvwz$ for some $z \in B \setminus \{y\}$. We either keep the aforementioned path or replace $v$ with $u$. This is a one-to-one map.
• If $\{u, v, w\}$ induces a triangle, there are two types of cycles in this case: those that contain the path $xuw$ and those that contain the path $wvy$. Replace these two paths with either (a) the two paths $xuw$, $wuy$ or (b) the two paths $xvw$, $wvy$. This is a two-to-two map.

In the second case of the lemma, the cycles in $\mathcal{C}_\emptyset(G)$ are again unchanged after contraction-uncontraction and so map to themselves in $\mathcal{C}_\emptyset(G')$; as before, if $uv$ is the $SS$ edge then the cycle must contain the path $xuvy$. For $\mathcal{C}_w(G)$, we make a map from $\mathcal{C}_w(G)$ to $\mathcal{C}_w(G') \cup \mathcal{C}_*(G')$ according to the subgraph induced by $\{u, v, w\}$.

• If $\{u, v, w\}$ induces a path $vuw$, then the cycle must contain a path $xuwz$ for some $z \in B \setminus \{x\}$. Depending on whether $w$ is a neighbor of $u$ or $v$ in $G'$, keep the cycle with $xuwz$ or replace that path with $xvwz$. This is a one-to-one map.
• If $\{u, v, w\}$ induces a path $uvw$, then the cycle either contains the path $xvwz$ for some $z \in B \setminus \{x\}$ or contains the path $yvwz$ for some $z \in B \setminus \{y\}$. In either case we either keep the aforementioned path or replace $v$ with $u$. This is a one-to-one map.
• If $\{u, v, w\}$ induces a triangle, this is the unique such $w$ by Proposition 3.11. So, there are three types of cycles in this case: those that contain the path $xuwy$, those that contain the path $xwvy$ and those that contain the path $xvwy$. Replace these three paths with either (a) the three paths $xuwy$, $xwuy$, $xvuy$ or (b) the three paths $xvwy$, $xwvy$, $xvuy$. This is a three-to-three map.

We say that a planar tumor graph $(G'; B, S')$ with properties (ii) and (iii) is a Stage I graph.

Cleaning Stage I

Proof. We repeatedly modify the graph $G$ until it has the desired properties. To begin, set $S^* = S \setminus S_\emptyset$ and let $G^*$ be the graph where we remove all vertices within $S_\emptyset$ and remove all edges within $S_x$ for each $x \in B$. Certainly $(G^*; B, S^*)$ is still a planar tumor graph and also the set of good cycles remains unchanged since none of these cycles can use any of the deleted vertices and edges. Now, we define $G'$ by repeating the following: while there is an edge $uv$ with $u \in S_x$ and $v \in S_y$ for some $x \neq y \in B$, perform contraction-uncontraction along the path $xuvy$. Each time we perform such a contract-uncontract operation, the resulting graph is planar and the size of $\bigsqcup_{x \in B} S_x$ strictly decreases. So, eventually this process terminates and we have that, in the resulting $G'$, $\bigsqcup_{x \in B} S_x$ is an independent set. Furthermore, setting $S' = S^*$, it is the case that $\mathcal{G}((G'; B, S'), C_{2m+1}) \geq \mathcal{G}((G^*; B, S^*), C_{2m+1})$ by Lemma 3.20.

Cleaning Stage II

First, we observe that removing few edges within $G[S]$ results in a negligible reduction in the number of good cycles. We now apply this observation along with contraction-uncontraction to clean a planar tumor graph further. Recall that tumors $S_{xy}$ and $S_{zw}$ are distinct if $xy \neq zw$.

(ii) If $v \in S_x$ for some $x \in B$, then $v$ has at most one neighbor within $\bigcup_{yz \in \binom{B}{2}} S_{yz}$. Furthermore, if $v$ has a neighbor within $S_{yz}$, then $x \notin \{y, z\}$.

We say that a Stage I tumor graph $(G'; B, S)$ with properties (i) and (ii) is a Stage II graph.

Proof. Define $U \subseteq \bigsqcup_{x \in B} S_x$ to be the set of vertices $v$ with any of the following properties:

• $v$ has at least three neighbors within $\bigcup_{xy \in \binom{B}{2}} S_{xy}$, or
• $v$ has neighbors in distinct tumors $S_{xy}$ and $S_{zw}$, or
• $v \in S_x$ for some $x \in B$ and $v$ has two neighbors within $S_{yz}$ for some $\{y, z\} \not\ni x$.
After deleting $U$, every remaining vertex $v \in \bigsqcup_{x \in B} S_x$ satisfies:

• $v$ has at most two neighbors within $\bigcup_{yz \in \binom{B}{2}} S_{yz}$, and
• if $v$ does have two neighbors, then both neighbors reside within the same $S_{xy}$ for some $y \in B$.

Now, we define $G_2$ by repeating the following: while there is an edge $uv$ with $u \in S_x$ and $v \in S_{xy}$ for some $x \neq y \in B$, perform contraction-uncontraction along the path $xuvy$. Each time we perform such a contract-uncontract operation, the size of $\bigsqcup_{x \in B} S_x$ strictly decreases and so eventually this process terminates, resulting in the planar graph $G_2$. Thus, $G_2$ has the property that if $v \in S_x$ for some $x \in B$, then $v$ has at most one neighbor within $\bigcup_{yz \in \binom{B}{2}} S_{yz}$, and that if $v$ has a neighbor within $S_{yz}$, then $x \notin \{y, z\}$. Now, recalling that $G_1$ was a Stage I graph, at no point in this process do we introduce any new edges incident to $\bigsqcup_{x \in B} S_x$. Thus, if we perform a contract-uncontract operation along the path $xaby$ where $a \in S_x$ and $b \in S_{xy}$, then $N(a) \subseteq \{x\} \cup S_{xy}$ at this point. In particular, Lemma 3.20 implies that $\mathcal{G}((G_1; B, S), C_{2m+1}) \leq \mathcal{G}((G_2; B, S), C_{2m+1})$.

Finally, we form $G'$ by removing all edges between $S_{xy}$ and $S_{zw}$ for all $xy \neq zw \in \binom{B}{2}$. According to Lemma 3.12, there are at most $O(|B|)$ many edges of this form and so Proposition 3.22 implies that $\mathcal{G}((G_2; B, S), C_{2m+1}) \leq \mathcal{G}((G'; B, S), C_{2m+1}) + O(|B| n^{m-1})$, which concludes the proof.

Cleaning Stage III

Recall that a tumor graph $(G; B, S)$ is benign if whenever $uv \in E(G[S])$, then $u, v \in S_{xy}$ for some $xy \in \binom{B}{2}$. Observe that any tumor graph that is benign is also Stage II. The only difference between a Stage II and a benign tumor graph is that a Stage II tumor graph can contain edges of the form $uv$ where $u \in S_x$ and $v \in S_{yz}$ provided that $x \notin \{y, z\}$. Thus, the last step needed to prove Lemma 3.6 is to control all edges of this form.

Proposition 3.24. Let $(G; B, S)$ be a planar tumor graph. If $Z$ denotes the set of all vertices $z \in S$ such that $z \in S_{xy}$ and $z$ has some neighbor within $S_w$ for some $x, y, w \in B$ with $w \notin \{x, y\}$, then $|Z| \leq 2|B|$.

Proof. Let $G'$ be the graph formed from $G$ by contracting all edges of the form $wu$ for $w \in B$, $u \in S_w$; note that $G'$ is still planar. Within the bipartite graph $G'[Z, B]$, each vertex in $Z$ has at least three neighbors; thus $|Z| \leq 2|B|$ by Proposition 2.3.

Proof. Let $Z$ be as in Proposition 3.24. Now, let $R$ denote the set of edges between $Z$ and $\bigcup_{xy \in \binom{B}{2}} S_{xy}$ and set $G' = G - R$. Since $G$ is a Stage II graph, if $z \in Z \cap S_{xy}$, then the only neighbors of $z$ are in $\{x, y\} \sqcup S_{xy} \sqcup \bigsqcup_{w \in B} S_w$. In particular, any $z \in Z$ has at most two neighbors within $\bigcup_{xy \in \binom{B}{2}} S_{xy}$ due to Proposition 3.11. Thus, $|R| \leq 2|Z| \leq 4|B|$ by additionally applying Proposition 3.24.

Observe that if $u \in S$, $z \in Z$ with $uz \in E(G')$, then $u \in S_x$ for some $x \in B$ and $z \notin S_{xy}$ for any $y \in B$. Additionally, since $G$ was Stage II, if this is the case then $z$ is the unique neighbor of $u$ within $Z$. Now, set $B' = B \sqcup Z$ and $S' = S \setminus Z$. Therefore, $u \in S'_{xz}$ in $(G', B', S')$; in particular, $(G'; B', S')$ is a planar tumor graph. Consequently, it is quick to observe that this tumor graph is both Stage II and has no edges between $S_{xy}$ and $S_{zw}$ for any distinct $xy, zw \in \binom{B'}{2}$. Thus, $(G'; B', S')$ is benign. Moreover, $B' = B \sqcup Z$ and so $|B'| \leq |B| + |Z| \leq 3|B|$.

Next, consider a good cycle in $(G'; B, S)$ which does not exist in $(G'; B', S')$. Any such cycle contains at least one member of $Z = S \cap B'$ which appears in a path of the form $BZB$ or of the form $BZZB$; otherwise there would be only one member of $Z$, which would have to be in a path of the form $BZS'B$, which is still good in $(G'; B', S')$.
For cycles with the pattern $BZB$, the number of ways to choose that path is $2|Z| \leq O(|B|)$ and the number of ways to choose the remaining path is $2N(G', P_{2m-2})$. By Theorem 2.5, $N(G', P_{2m-2}) \leq O(n^{m-1})$, and so the total number of such cycles is bounded above by $O(|B| n^{m-1})$. For cycles with the pattern $BZZB$, the $ZZ$ edge must lie within a single tumor $S_{xy}$. Thus, the number of ways to choose the $BZZB$ piece is at most $4e(G'[Z])$, and $e(G'[Z]) \leq O(|B|)$ since $G'[Z]$ is planar and $|Z| \leq 2|B|$. The number of ways to choose the remaining path is $N(G', P_{2m-3}) \leq O(n^{m-1})$ by Theorem 2.5, and so the total number of such cycles is bounded above by $O(|B| n^{m-1})$. As a result, $\mathcal{G}((G'; B, S), C_{2m+1}) \leq \mathcal{G}((G'; B', S'), C_{2m+1}) + O(|B| n^{m-1})$, which concludes the proof.

Proof of Lemma 3.7: Reduction to maximum likelihood estimators

For a set $X$ and a positive integer $k$, we write $(X)_k$ to indicate the set of all tuples $(x_1, \ldots, x_k) \in X^k$ with $x_1, \ldots, x_k$ distinct. This notation mirrors that of the falling-factorial. We begin by constructing an edge probability measure $\mu$ on the clique with vertex set $B$, where $\mu(xy)$ is proportional to $|S_{xy}|$. Since the tumors are disjoint, we know that $|S_{xy}| \leq \mu(xy) \cdot n$. Furthermore, by Proposition 3.11, $e(G_2[S_{xy}]) \leq |S_{xy}| \leq \mu(xy) \cdot n$.

Next, recall that the good cycles in $(G_2; B, S)$ alternate between $B$ vertices and $S$ vertices except for one consecutive pair, which can either be $BB$ or $SS$. Let $\mathcal{C}_S$ denote those good copies of $C_{2m+1}$ containing an $SS$ edge and let $\mathcal{C}_B$ denote those good copies of $C_{2m+1}$ containing a $BB$ edge. Of course, $\mathcal{G}((G_2; B, S), C_{2m+1}) = |\mathcal{C}_S| + |\mathcal{C}_B|$.

Fix a cycle in $\mathcal{C}_S$ and label its vertices cyclically as $(v_1, \ldots, v_{2m+1})$ so that $v_{2m}, v_{2m+1} \in S$. Then $(x_1, x_2, \ldots, x_m) = (v_1, v_3, \ldots, v_{2m-1})$ has the property that $(x_1, \ldots, x_m) \in (B)_m$ and $v_{2i} \in S_{x_i x_{i+1}}$ for all $i \in [m-1]$, while $v_{2m} v_{2m+1}$ is an edge of $G_2[S_{x_m x_1}]$. Thus, the number of cycles in $\mathcal{C}_S$ which yield the tuple $(x_1, \ldots, x_m) \in (B)_m$ is precisely $\big(\prod_{i=1}^{m-1} |S_{x_i x_{i+1}}|\big) \cdot 2e(G_2[S_{x_m x_1}])$. Of course, there are two cyclic orderings of the vertices of each of these cycles and so, bringing in the edge probability measure $\mu$, we further bound $|\mathcal{C}_S| \leq 2m \cdot \beta(\mu; C_m) \cdot n^m$.

Next, fix a cycle in $\mathcal{C}_B$ and label its vertices cyclically as $(v_1, \ldots, v_{2m+1})$ so that $v_1, v_{2m+1} \in B$. Then $(x_1, x_2, \ldots, x_{m+1}) = (v_1, v_3, \ldots, v_{2m+1})$ has the property that $(x_1, \ldots, x_{m+1}) \in (B)_{m+1}$ and $v_{2i} \in S_{x_i x_{i+1}}$ for all $i \in [m]$. Thus, the number of cycles in $\mathcal{C}_B$ which yield the tuple $(x_1, \ldots, x_{m+1})$ is precisely $\prod_{i=1}^{m} |S_{x_i x_{i+1}}|$. Again, there are two cyclic orderings of the vertices of each of these cycles and so, by dropping the requirement that $x_1 x_{m+1} \in E(G_2)$ and also bringing in the edge probability measure $\mu$, we bound $|\mathcal{C}_B| \leq \beta(\mu; P_{m+1}) \cdot n^m$. Combining eqs. (4) and (5) completes the proof.

4 Proof of Theorem 1.6

We first address the case where $m = 2$.

Proof. We use the definition of $\beta(\mu; P_3)$; note that the last inequality in the resulting chain is an equality if and only if $|\operatorname{supp} \mu| = 1$.

Many of our arguments focus on the mass of a vertex in an edge probability measure. Fix an edge probability measure $\mu \in \Delta_K$ for some clique $K$. The function $\bar{\mu} : V(K) \to [0, 1]$ is defined by $\bar{\mu}(x) = \sum_{e \ni x} \mu(e)$. That is, $\bar{\mu}(x)$ is the probability that an edge sampled from $\mu$ is incident to the vertex $x$, and can be understood as the weighted degree of $x$. Note that $\sum_{x \in V(K)} \bar{\mu}(x) = 2$ (handshaking lemma).
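Both $\bar{\mu}$ and the edge-deletion measure $\nu$ used below (in Proposition 4.4 and Lemma 4.5) are easy to compute directly. Since the displayed definition of $\nu$ is lost in this copy, the renormalization in the sketch is our reading of "effectively deleting the edges incident to $x$"; the encodings are ours.

```python
def vertex_mass(mu, x):
    """bar-mu(x): probability that an edge sampled from mu touches x."""
    return sum(p for e, p in mu.items() if x in e)

def delete_vertex(mu, x):
    """The measure nu used below, assuming the lost display reads
    nu(e) = mu(e) / (1 - bar-mu(x)) for edges e not incident to x."""
    keep = 1.0 - vertex_mass(mu, x)
    return {e: p / keep for e, p in mu.items() if x not in e}

mu = {("a", "b"): 0.5, ("b", "c"): 0.25, ("c", "a"): 0.25}
print(vertex_mass(mu, "a"))        # 0.75
print(delete_vertex(mu, "a"))      # {('b', 'c'): 1.0}
# Handshaking: the vertex masses sum to 2.
print(sum(vertex_mass(mu, v) for v in "abc"))  # 2.0
```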
The next lemma, Lemma 4.3, is a very general statement in the setting of the maximum likelihood graph problems that establishes regularity conditions for local optimizers. It is a direct consequence of the Karush-Kuhn-Tucker (KKT) conditions (see [8, Corollaries 9.6 and 9.10]). We will apply the lemma in the case of $k = 2$, $H_1 = C_m$, and $H_2 = P_{m+1}$; its conclusion is a condition holding for each $e \in E(K)$. We quickly remark that the above maximum is indeed achieved since $\Delta_K$ is compact for any clique $K$ and $\beta(\cdot; H)$ is a continuous function on $\Delta_K$.

Proof. By definition, we can write the objective as a polynomial in the values $\mu(e)$ for $e \in E(K)$. In particular, we may apply the KKT conditions to this optimization problem to find that if $\mu \in \Delta_K$ achieves $O$, then there is some fixed $\lambda \in \mathbb{R}$ such that $D(e) = \lambda$ for all $e \in \operatorname{supp} \mu$, where $D(e)$ denotes the derivative of the objective with respect to $\mu(e)$. Of course, whether or not $e \in \operatorname{supp} \mu$, we always have $D(e) \leq \lambda$. By then summing over all $e \in E(K)$, we find that $\lambda = m \cdot O$, where the penultimate equality follows from the assumption that each $H_i$ has exactly $m$ edges. Substituting this value of $\lambda = m \cdot O$ into eq. (6) yields the first part of the lemma. For the second part of the lemma, we use the first part to obtain the claimed identity for any fixed $x \in V(K)$.

We primarily use Lemma 4.3 to understand how $\beta(\mu; H)$ changes upon deleting a vertex from $\operatorname{supp} \bar{\mu}$, which is key to the proof of Lemma 4.5. Before we can establish Lemma 4.5, we need a brief, general fact about paths.

Proposition 4.4. Fix a clique $K$ and a vertex $x \in V(K)$. For any $\mu \in \Delta_K$ and any integer $m \geq 2$, the total $\mu$-mass of copies of $P_m$ emanating from $x$ obeys the stated bound.

Proof. We prove this by induction on $m$, starting with $m = 2$. In this case, the claim reduces to the fact that $\bar{\mu}(x) + \bar{\mu}(y) \leq 1 + \mu(xy)$. Now suppose that $m \geq 3$. Observe that if $\bar{\mu}(x) = 1$, then the inequality trivially holds since there are no positive-mass copies of $P_m$ emanating from $x$. Therefore, we may suppose that $\bar{\mu}(x) < 1$ and define a new probability mass $\nu \in \Delta_K$ by effectively deleting the edges incident to $x$: $\nu(e) = \mu(e)/(1 - \bar{\mu}(x))$ for $e \not\ni x$, and $\nu(e) = 0$ otherwise. Applying the inequality of arithmetic and geometric means (AM-GM inequality) to the pieces of the resulting expression involving $\bar{\nu}(y)$ then yields the claim.

We now use the above facts to derive an inequality on the vertex-masses in an optimal measure.

Proof. Note that $O > 0$ since the uniform distribution on $K$ contains positive-mass copies of $C_m$, as $K$ has at least $m$ vertices. Fix any $x \in V(K)$ and note that $\bar{\mu}(x) < 1$. We define a new probability mass $\nu \in \Delta_K$ by effectively deleting the edges incident to $x$, exactly as above. Since $\nu \in \Delta_K$ and $O$ is the optimal value, we bound $O \geq 2m \cdot \beta(\nu; C_m) + \beta(\nu; P_{m+1})$. Rearranging this expression yields eq. (7). Next, Lemma 4.3 supplies an identity for the sum over $C \in \mathcal{C}(K, C_m)$; substituting this expression into eq. (7) and then applying Proposition 4.4 finally yields the claim.

We now solve the maximum likelihood question in the case where $m \in \{3, 4\}$.

Proof. Fix a clique $K$ on at least $m$ vertices and set $O = \max_{\nu \in \Delta_K} \big( 2m \cdot \beta(\nu; C_m) + \beta(\nu; P_{m+1}) \big)$. Note that $O \geq 2/m^{m-1}$ since this is the value achieved by the uniform distribution on a copy of $C_m$, which is a member of $\Delta_K$.

We conclude by establishing a bound on $2m \cdot \beta(\mu; C_m) + \beta(\mu; P_{m+1})$ for all $m \geq 5$ which is tight up to a constant independent of $m$. In the corresponding computation, the final inequality follows from the fact that $m \geq 5$; this implies that $s \geq 1.3644$.

Even paths. In [3], Cox and Martin additionally proved a reduction lemma for paths on an odd number of vertices which used many of the same ideas as their reduction lemma for even cycles. It is natural to wonder if the ideas introduced in this paper can be applied to produce an analogous reduction lemma for even paths. This is especially motivated by the fact that the conjectured (asymptotic) extremal structure for $N_P(n, P_{2m})$ is identical to that for $N_P(n, C_{2m+1})$, namely a balanced blow-up of $C_m$ (see [6]). Lemmas 3.2 and 3.4 have direct analogues when trying to bound $N_P(n, P_{2m})$; in fact, the proof that most copies of $P_{2m}$ in a planar tumor graph are "good" (contain at most one instance of $BB$ or $SS$) is arguably simpler than the proof of Lemma 3.4.
Furthermore, there is a direct analogue to Lemma 3.7 relating the number of good copies of $P_{2m}$ in a benign planar tumor graph to a maximum likelihood problem, although this maximum likelihood problem is significantly more complex. Unfortunately, there are major obstructions to proving an analogue of Lemma 3.6, the cleaning lemma. The main operation used in the cleaning lemma is contraction-uncontraction (Observation 3.19), used to rearrange misbehaving edges. Our argument that contraction-uncontraction does not decrease the total number of good cycles relied on "locally rerouting" the good cycles (Lemma 3.20). That is to say, we made no global considerations about the total number of good cycles nor their overall structure. Consider the graph in Figure 6a, which has $x, y \in B$, $u \in S_x$ and $v \in S_y$. The graph in Figure 6b is the graph obtained by performing contraction-uncontraction along the path $xuvy$. Note that there are 7 copies of $P_3$ starting at $x$ and not using $y$ in the former graph, whereas there are only 6 such copies in the latter graph. Because of this fact, upon performing contraction-uncontraction along $xuvy$, there may be no way to "locally reroute" good paths of the form $SB \cdots SBSS$ which use $u$ or $v$ as part of their terminal $SS$ edge. If it is not possible to salvage the contract-uncontract lemma, perhaps the notion of benign tumor graphs can be modified to account for this structure, resulting in this structure being accounted for in the maximum likelihood problem. Currently, we do not see a path around this (and similar) obstacle(s), but we do expect that one exists.
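For readers who want to experiment with this obstruction, the relevant statistic is easy to compute. The sketch below counts copies of $P_3$ starting at $x$ and avoiding $y$ on a stand-in graph of our own, since Figure 6 itself is not reproduced in this copy.

```python
def p3_from(adj, x, y):
    """Copies of P_3 (a path on three vertices) starting at x and avoiding y,
    counted as x-a-b with a, b distinct from each other and from x, y.
    This is the statistic the Figure 6 example tracks (7 copies before the
    operation vs 6 after); the example graph here is illustrative only."""
    return sum(1 for a in adj[x] if a != y
                 for b in adj[a] if b not in (x, y, a))

adj = {"x": {"u", "p"}, "u": {"x", "v", "q"}, "p": {"x", "q"},
       "q": {"u", "p"}, "v": {"u", "y"}, "y": {"v"}}
print(p3_from(adj, "x", "y"))  # 3 on this stand-in graph
```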
2023-07-04T06:42:14.986Z
2023-06-30T00:00:00.000
{ "year": 2023, "sha1": "9ac69b4b2e21114c20eea6059649f557bd7aabc7", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "9ac69b4b2e21114c20eea6059649f557bd7aabc7", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
246370818
pes2o/s2orc
v3-fos-license
Thalamocortical contribution to flexible learning in neural systems

Abstract

Animal brains evolved to optimize behavior in dynamic environments, flexibly selecting actions that maximize future rewards in different contexts. A large body of experimental work indicates that such optimization changes the wiring of neural circuits, appropriately mapping environmental input onto behavioral outputs. A major unsolved scientific question is how optimal wiring adjustments, which must target the connections responsible for rewards, can be accomplished when the relation of sensory inputs, actions taken, and environmental context to rewards is ambiguous. The credit assignment problem can be categorized into context-independent structural credit assignment and context-dependent continual learning. In this perspective, we survey prior approaches to these two problems and advance the notion that the brain's specialized neural architectures provide efficient solutions. Within this framework, the thalamus with its cortical and basal ganglia interactions serves as a systems-level solution to credit assignment. Specifically, we propose that thalamocortical interaction is the locus of meta-learning where the thalamus provides cortical control functions that parametrize the cortical activity association space. By selecting among these control functions, the basal ganglia hierarchically guide thalamocortical plasticity across two timescales to enable meta-learning. The faster timescale establishes contextual associations to enable behavioral flexibility, while the slower one enables generalization to new contexts.

One can roughly categorize credit assignment into context-independent structural credit assignment and context-dependent continual learning. In structural credit assignment, animals may make decisions in a multi-cue environment and should be able to credit those cues that contribute to the rewarding outcome. Similarly, if actions are being chosen based on internal decision variables, then the underlying activity states must also be reinforced. In such cases, neurons that are selective to external cues or internal latent variables need to adjust their downstream connectivity based on the contribution of their downstream targets to the RPE. This is a challenging computation to implement because, for upstream neurons, the RPE will be dependent on downstream neurons that are several connections away. For example, a sensory neuron needs to know the action chosen in the motor cortex to selectively credit the sensory synapses that contribute to the action. In continual learning, animals not only need to appropriately credit the sensory cues and actions that lead to the reward but also need to credit the sensorimotor combination in the right context, so as to retain the behaviors learned in different contexts and even to generalize to novel contexts. In this way, animals can continually learn and generalize across different contexts while retaining behaviors in familiar contexts. For example, when one is in the United States, one learns to first look left before crossing the street, whereas in the United Kingdom, one learns to look right instead. However, after spending time in the United Kingdom, someone from the United States should not unlearn the behavior of looking left first when they return home, because their brain ought to properly assign the credit to a different context.
Furthermore, once one learns how to cross the street in the United States, it is much easier to learn how to cross the street in the United Kingdom because the brain flexibly generalizes behaviors across contexts.

Reward prediction error: A quantity represented by the difference between the expected reward and actual reward.

Credit assignment: A computational problem to determine which stimulus, action, internal states, and context lead to an outcome.

Continual learning: A computational problem to learn tasks sequentially, both to learn new tasks faster and to not forget old tasks.

COMMON MACHINE LEARNING APPROACHES TO CREDIT ASSIGNMENT

One solution to structural credit assignment in machine learning is backpropagation (Rumelhart et al., 1986). Backpropagation recursively computes the vector-valued error signal for synapses based on their contribution to the error signal. There is much empirical success of backpropagation in surpassing human performance in supervised learning such as image recognition (He, Zhang, Ren, & Sun, 2016;Krizhevsky, Sutskever, & Hinton, 2012) and reinforcement learning such as playing the game of Go and Atari (Mnih et al., 2015;Schrittwieser et al., 2020;Silver et al., 2016;Silver et al., 2017). Additionally, comparing artificial networks trained with backpropagation with neural responses from the ventral visual stream of nonhuman primates shows comparable internal representations (Yamins et al., 2014). Despite its empirical success in superhuman-level performance and matching the internal representation of actual brains, backpropagation may not be straightforward to implement in biological neural circuits, as we explain below. In its most basic form, backpropagation requires symmetric connections between neurons (forward and backward connections). Mathematically, we can write down the backpropagation rule in Equation 1:

$$\Delta W_i \propto -\frac{\partial E}{\partial W_i} = -\,e_i \, f(a_{i-1})^{\top}, \qquad e_i = \big(W_{i+1}^{\top} e_{i+1}\big) \odot f'(a_i), \tag{1}$$

where $E$ is the total error, $e_i$ is the vector error at layer $i$, $W_i$ is the synaptic weight connecting layer $i-1$ to layer $i$, and $f$ is the nonlinearity. Intuitively, this is saying that the change of synaptic weight $W_i$ is computed by a Hebbian learning rule between the backpropagation error $e_i$ and the activity from the last layer $f(a_{i-1})$, while the backpropagation error is computed by backpropagating the error in the next layer through symmetric feedback weights $W_{i+1}^{\top}$. Importantly, in this algorithm, error signals do not alter the activity of neurons in the preceding layers and instead operate independently from the feedforward activity. However, such an arrangement is not observed in the brain; symmetric connections across neurons are not a universal feature of circuit organization, and biological neurons may encode both feedforward inputs and errors through changes in spike output (changes in activity; Crick, 1989;Richards & Lillicrap, 2019). Therefore, it is hard to imagine how the basic form of backpropagation (symmetry and error/activity separation) is physically implemented in the brain.

Backpropagation: An algorithm to compute the error gradient of an artificial neural network through the chain rule.

Furthermore, while an animal can continually learn to behave across different contexts, artificial neural networks trained by backpropagation struggle to learn and remember different tasks in different contexts: a problem known as catastrophic forgetting (French, 1999;Kemker, McClure, Abitino, Hayes, & Kanan, 2018;Kumaran, Hassabis, & McClelland, 2016;McCloskey & Cohen, 1989;Parisi, Kemker, Part, Kanan, & Wermter, 2019).
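To make Equation 1 concrete, here is a minimal NumPy sketch of the scheme on a two-layer network; the architecture, learning rate, and variable names are illustrative choices, not specifications from any of the cited studies. Note the line where the error is carried backward through `W2.T`: this is exactly the weight symmetry that the text flags as biologically questionable.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
f = np.tanh
df = lambda a: 1.0 - np.tanh(a) ** 2

def backprop_step(x, target, lr=0.1):
    """One step of the scheme in Equation 1 on a 2-layer net.  The backward
    pass reuses the transpose W2.T of the forward weights -- the 'symmetric
    connections' the text argues are hard to realize biologically."""
    global W1, W2
    a1 = W1 @ x;  h1 = f(a1)          # hidden pre-activation / activity
    a2 = W2 @ h1; out = f(a2)         # output
    e2 = (out - target) * df(a2)      # output-layer error
    e1 = (W2.T @ e2) * df(a1)         # error transported by W2 transpose
    W2 -= lr * np.outer(e2, h1)       # Hebbian-like: error x presynaptic rate
    W1 -= lr * np.outer(e1, x)
    return float(((out - target) ** 2).sum())

x, t = np.ones(3), np.array([0.5, -0.5])
print([round(backprop_step(x, t), 4) for _ in range(5)])  # loss should shrink
```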
Specifically, this problem occurs when the tasks are trained sequentially, because the weights optimized for earlier tasks will be modified to fit the later tasks. One of the common solutions is to interleave the tasks from different contexts to jointly optimize performance across contexts by using an episodic memory system and replay mechanism (Kumaran et al., 2016;McClelland, McNaughton, & O'Reilly, 1995). This approach has received empirical success in artificial neural networks, including learning to play many Atari games (Mnih et al., 2015;Schrittwieser et al., 2020). However, since one needs to store past training data in memory to replay during learning, this approach demands a high computational overhead and is inefficient as the number of contexts increases. On the other hand, humans and animals acquire diverse sensorimotor skills in different contexts throughout their life span: a feat that cannot be solely explained by memory replay (M. M. Murray, Lewkowicz, Amedi, & Wallace, 2016;Parisi et al., 2019;Power & Schlaggar, 2017;Zenke, Gerstner, & Ganguli, 2017). Therefore, biological neural circuits are likely to employ other solutions to continual learning in addition to memory replay.

To solve these two credit assignment problems in the brain, then, one needs to seek different solutions. One of the pitfalls of backpropagation is that it is a general algorithm that works on any architecture. However, actual brains are collections of specialized hardware put together in a specialized way. It is conceivable that, through clever coordination between different cell types and different circuits, the brain can solve the credit assignment problem by leveraging its specialized architectures. Along this line of ideas, many investigators have proposed cellular (Fiete & Seung, 2006;Kornfeld et al., 2020;Kusmierz et al., 2017;Liu et al., 2020;Richards & Lillicrap, 2019;Sacramento et al., 2018;Schiess et al., 2016) and circuit-level mechanisms (Lillicrap et al., 2016;O'Reilly, 1996;Roelfsema & Holtmaat, 2018;Roelfsema & van Ooyen, 2005) to assign credit appropriately. In this perspective, we would like to advance the notion that specialized hardware arrangements also exist at the system level, and propose that the thalamus and its interaction with the basal ganglia and the cortex serve as a system-level solution for these types of credit assignment.

A PROPOSAL: THALAMOCORTICAL-BASAL GANGLIA INTERACTIONS ENABLE META-LEARNING TO SOLVE CREDIT ASSIGNMENT

To motivate the notion of thalamocortical-basal ganglia interactions being a potential solution for credit assignment, we will start with a brief introduction. The cortex, thalamus, and basal ganglia are the three major components of the mammalian forebrain, the part of the brain to which high-level cognitive capacities are attributed (Alexander, DeLong, & Strick, 1986;Badre, Kayser, & D'Esposito, 2010;Cox & Witten, 2019;Makino, Hwang, Hedrick, & Komiyama, 2016;Miller, 2000;Miller & Cohen, 2001;Niv, 2009;Seo, Lee, & Averbeck, 2012;Wolff & Vann, 2019).
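Returning briefly to the replay scheme described above, a minimal sketch of interleaved episodic replay might look as follows; the class layout, capacity, and batch sizes are illustrative assumptions rather than any published implementation. Its memory cost grows with the amount of stored experience, which is the computational overhead noted above.

```python
import random

class ReplayBuffer:
    """Minimal sketch of interleaved replay: store (context, x, y) examples
    and mix old contexts into every batch so that sequential training does
    not overwrite earlier tasks."""
    def __init__(self, capacity=1000):
        self.data, self.capacity = [], capacity

    def add(self, example):
        self.data.append(example)
        if len(self.data) > self.capacity:          # drop the oldest example
            self.data.pop(0)

    def batch(self, fresh, k=8):
        """A training batch: new examples interleaved with replayed ones."""
        replay = random.sample(self.data, min(k, len(self.data)))
        return fresh + replay

buf = ReplayBuffer()
for t in range(20):
    buf.add(("task_A", t, t % 2))
print(len(buf.batch(fresh=[("task_B", 0, 1)], k=4)))   # 5 examples
```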
Each of these components has its own specialized internal architecture; the cortex is dominated by excitatory neurons with extensive lateral connectivity profiles (Fuster, 1997;Rakic, 2009;Singer, Sejnowski, & Rakic, 2019), the thalamus is grossly divided into different nuclei harboring mostly excitatory neurons devoid of lateral connections (Harris et al., 2019;Jones, 1985;Sherman & Guillery, 2005), and the basal ganglia are a series of inhibitory structures driven by excitatory inputs from the cortex and thalamus (Gerfen & Bolam, 2010;Lanciego, Luquin, & Obeso, 2012;Nambu, 2011) (Figure 1).

Catastrophic forgetting: A phenomenon in which the network forgets about previous tasks upon learning new tasks.

A popular view within systems neuroscience stipulates that the BG and the cortex implement different learning paradigms, where BG is involved in reinforcement learning while the cortex is involved in unsupervised learning (Doya, 1999, 2000). Specifically, the input structure of the basal ganglia known as the striatum is thought to be where reward-gated plasticity takes place to implement reinforcement learning (Bamford et al., 2018;Cox & Witten, 2019;Hikosaka, Kim, Yasuda, & Yamamoto, 2014;Kornfeld et al., 2020;Niv, 2009;Perrin & Venance, 2019). One piece of such evidence is the high temporal precision of DA activity in the striatum. To accurately attribute the action that leads to a positive RPE, DA is released into the relevant corticostriatal synapses. However, DA needs to disappear quickly to prevent the next stimulus-response combination from being reinforced. In the striatum, this elimination process is carried out by the dopamine active transporter (DAT) to maintain a high temporal resolution of DA activity, on a timescale of around 100 ms-1 s, to support reinforcement learning (Cass & Gerhardt, 1995;Ciliax et al., 1995;Garris & Wightman, 1994). In contrast, although the cortex also has dopaminergic innervation, cortical DAT expression is low and therefore DA levels may change at a timescale that is too slow to support reinforcement learning (Cass & Gerhardt, 1995;Garris & Wightman, 1994;Lapish, Kroener, Durstewitz, Lavin, & Seamans, 2007;Seamans & Robbins, 2010) but instead supports other processes related to learning (Badre et al., 2010;Miller & Cohen, 2001). In fact, ample evidence indicates that cortical structures undergo Hebbian-like long-term potentiation (LTP) and long-term depression (LTD; Cooke & Bear, 2010;Feldman, 2009;Kirkwood, Rioult, & Bear, 1996). However, despite the unsupervised nature of these processes, cortical representations are task-relevant and include appropriate sensorimotor mappings that lead to rewards (Allen et al., 2017;Donahue & Lee, 2015;Enel, Wallis, & Rich, 2020;Jacobs & Moghaddam, 2020;Petersen, 2019;Tsutsui, Hosokawa, Yamada, & Iijima, 2016). How could this arise from an unsupervised process? One possible explanation is that the basal ganglia activate the appropriate cortical neurons during behaviors and the cortical network collectively consolidates high-reward sensorimotor mappings via Hebbian-like learning (Andalman & Fee, 2009;Ashby, Ennis, & Spiering, 2007;Hélie, Ell, & Ashby, 2015;Tesileanu, Olveczky, & Balasubramanian, 2017;Warren, Tumer, Charlesworth, & Brainard, 2011).
Previous computational accounts of this process have emphasized a consolidation function for the cortex, which naively raises the question of why the brain would duplicate a process that seems to function well in the basal ganglia, perhaps including many details of the associated experience. The answer to this question is the core of our proposal. We propose that the learning process is not a duplication, but instead that the reinforcement process in the basal ganglia selects thalamic control functions that subsequently activate cortical associations to allow flexible mappings across different contexts (Figure 2). To understand this proposition, we need to take a closer look at the involvement of these distinct network elements in task learning. Learning in the basal ganglia happens at corticostriatal synapses, where the basic form of reinforcement learning is implemented. Specifically, the coactivation of sensory and motor cortical inputs generates eligibility traces in corticostriatal synapses that get captured by the presence or absence of DA (Fee & Goldberg, 2011;Fiete, Fee, & Seung, 2007;Kornfeld et al., 2020). This reinforcement learning algorithm is fast at acquiring simple associations but slow at generalizing to other behaviors. On the other hand, cortical plasticity operates on a much slower timescale but seems to allow flexible behaviors and fast generalization (Kim, Johnson, Cilles, & Gold, 2011;Mante, Sussillo, Shenoy, & Newsome, 2013;Miller, 2000;Miller & Cohen, 2001). How does the cortex exhibit slow synaptic plasticity and flexible behaviors at the same time? An explanatory framework is meta-learning (Botvinick et al., 2019;Wang et al., 2018), where the flexibility arises from network dynamics and the generalization emerges from slow synaptic plasticity across different contexts. In other words, synaptic plasticity stores a higher order association between contexts and sensorimotor associations while the network dynamics switch between different sensorimotor associations based on this higher order association. However, properly arbitrating between synaptic plasticity and network dynamics to store such a higher order association is a nontrivial task (Sohn, Meirhaeghe, Rajalingham, & Jazayeri, 2021). We propose that the thalamocortical system learns these dynamics, where the thalamus provides control nodes that parametrize the cortical activity association space. Basal ganglia inputs to the thalamus learn to select between these different control nodes, directly implementing the interface between weight adjustment and dynamical controls. Our proposal rests on the following three specific points.

First, building on a line of literature that shows diverse thalamocortical interactions in sensory, cognitive, and motor cortex, we propose that thalamic output may be described as control functions over cortical computations. These control functions can be purely in the sensory domain like attentional filtering, in the cognitive domain like manipulating working memory, or in the motor domain like preparation for movement (Bolkan et al., 2017;W. Guo, Clause, Barth-Maron, & Polley, 2017;Z. V. Guo et al., 2017;Mukherjee et al., 2020;Rikhye, Gilra, & Halassa, 2018;Saalmann & Kastner, 2015;Schmitt et al., 2017;Tanaka, 2007;Wimmer et al., 2015;Zhou, Schafer, & Desimone, 2016). These functions directly relate
thalamic activity patterns to different cortical dynamical regimes and thus offer a way to establish a higher order association between context and sensorimotor mapping within the thalamocortical pathways.

Figure 2. Two views of learning in the cortex. (A) One possible view is that Hebbian cortical plasticity consolidates the sensorimotor mapping from BG to learn a stimulus-action mapping $a_t = f(s_t)$. (B) We propose that thalamocortical systems perform meta-learning by consolidating the teaching signals from BG to learn a context-dependent mapping $a_t = f_c(s_t)$, where the context $c$ is computed from past stimulus history and represented by different thalamic activities.

Meta-learning: A learning paradigm in which a network learns how to learn more efficiently.

Second, based on previous studies on direct and indirect BG pathways that influence most cortical regions (Hunnicutt et al., 2016;Jiang & Kim, 2018;Nakajima, Schmitt, & Halassa, 2019;Peters, Fabre, Steinmetz, Harris, & Carandini, 2021), we propose that BG hierarchically selects these thalamic control functions to influence activities of the cortex toward rewarding behavioral outcomes. Lastly, we propose that the thalamocortical structure consolidates the selections of BG through a two-timescale Hebbian learning process to enable meta-learning. Specifically, the faster corticothalamic plasticity learns the higher order association that enables flexible contextual switching with different thalamic patterns (Marton, Seifikar, Luongo, Lee, & Sohal, 2018;Rikhye et al., 2018), while the slower cortical plasticity learns the shared representations that allow generalization to new behaviors. Below, we will go over the supporting literature that leads us to this proposal.

MORE GENERAL ROLES OF THALAMOCORTICAL INTERACTION AND BASAL GANGLIA

Classical literature has emphasized the role of the thalamus in transmitting sensory inputs to the cortex. This is because some of the better studied thalamic pathways are those connected to sensors on one end and primary cortical areas on another (Hubel & Wiesel, 1961;Lien & Scanziani, 2018;Reinagel, Godwin, Sherman, & Koch, 1999;Sherman & Spear, 1982;Usrey, Alonso, & Reid, 2000). From that perspective, thalamic neurons, being devoid of lateral connections, transmit their inputs (e.g., from the retina in the case of the lateral geniculate nucleus, LGN) to the primary sensory cortex (V1 in this same example case), and the input transformation (center-surround to oriented edges) occurs within the cortex (Hoffmann, Stone, & Sherman, 1972;Hubel & Wiesel, 1962;Lien & Scanziani, 2018;Usrey et al., 2000). In many cases, these formulations of thalamic "relay" have been generalized to how motor and cognitive thalamocortical interactions may operate. However, in contrast to the classical relay view of the thalamus, more recent studies have shown diverse thalamic functions in sensory, cognitive, and motor processing (Bolkan et al., 2017;W. Guo et al., 2017;Z. V. Guo et al., 2017;Rikhye et al., 2018;Saalmann & Kastner, 2015;Schmitt et al., 2017;Tanaka, 2007;Wimmer et al., 2015;Zhou et al., 2016). For example in mice, sensory thalamocortical transmission can be adjusted based on prefrontal cortex (PFC)-dependent, top-down biasing signals transmitted through nonclassical basal ganglia pathways involving the thalamic reticular nucleus (TRN; Nakajima et al., 2019;Phillips, Kambi, & Saalmann, 2016;Wimmer et al., 2015).
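The context-dependent mapping $a_t = f_c(s_t)$ from Figure 2 can be rendered as a toy computation in which a context-indexed "thalamic" vector multiplicatively gates a shared "cortical" layer. This is only an illustrative sketch of the proposal's logic; all dimensions, names, and the choice of multiplicative gating are our own assumptions, not a claim about the actual circuit.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ctx, n_s, n_h, n_a = 2, 5, 8, 3
W_in  = rng.normal(size=(n_h, n_s))       # slow, shared 'cortical' weights
W_out = rng.normal(size=(n_a, n_h))
thalamic = rng.uniform(size=(n_ctx, n_h)) # one control vector per context

def act(context, stimulus):
    """a_t = f_c(s_t): the same cortical weights produce different
    stimulus-action mappings depending on which thalamic control vector
    gates the hidden layer."""
    h = np.tanh(W_in @ stimulus) * thalamic[context]
    return int(np.argmax(W_out @ h))

s = rng.normal(size=n_s)
print(act(0, s), act(1, s))  # the same stimulus can map to different actions
```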
Interestingly, these task-relevant PFC signals themselves require long-range interactions with the associative mediodorsal (MD) thalamus to be initiated, maintained, and flexibly switched (Rikhye et al., 2018; Schmitt et al., 2017; Wimmer et al., 2015). One can also observe nontrivial control functions in the motor thalamus. Motor preparatory activity in the anterior lateral motor cortex (ALM) is persistent and predicts future actions. Interestingly, the motor thalamus shows similar preparatory activity that predicts future actions, and when motor thalamus activity is optogenetically manipulated, the persistent activity in ALM quickly diminishes (Z. V. Guo et al., 2017). Recently, Mukherjee, Lam, Wimmer, and Halassa (2021) discovered that two cell types within the MD thalamus differentially modulate cortical evidence-accumulation dynamics, depending on whether the evidence is conflicting or sparse, to boost the signal-to-noise ratio in decision-making. Based on the above studies, we propose that the thalamus provides a set of control functions to the cortex. Specifically, cortical computations may be flexibly switched to different dynamical modes by activating a particular thalamic output that corresponds to that mode. On the other hand, the selective role of BG in motor and cognitive control has also dominated the literature, because thalamocortical-basal ganglia interactions are best studied in frontal systems (Cox & Witten, 2019; Makino et al., 2016; McNab & Klingberg, 2008; Monchi, Petrides, Strafella, Worsley, & Doyon, 2006; Seo et al., 2012). However, classical and contemporary studies have recognized that all cortical areas, including primary sensory areas, project to the striatum (Hunnicutt et al., 2016; Jiang & Kim, 2018; Peters et al., 2021). Similarly, the basal ganglia can project to the more sensory parts of the thalamus through lesser studied pathways to influence the sensory cortex (Hunnicutt et al., 2016; Nakajima et al., 2019; Peters et al., 2021). Specifically, a nonclassical BG pathway projects to the TRN, which in turn modulates the activity of the LGN to influence sensory thalamocortical transmission (Nakajima et al., 2019). It has also been argued that BG is involved in gating working memory (McNab & Klingberg, 2008; Voytek & Knight, 2010). This shows that BG has a much more general role than classical action and action-strategy selection. Therefore, combining this with our proposal on thalamic control functions, we propose that BG hierarchically selects different thalamic control functions, through reinforcement learning, to influence all cortical areas in different contexts. Furthermore, a series of studies indicates a role for BG in guiding plasticity in thalamocortical structures (Andalman & Fee, 2009; Fiete et al., 2007; Hélie et al., 2015; Mehaffey & Doupe, 2015; Tesileanu et al., 2017). In particular, there is evidence across different species that BG is critical for initial learning but less involved once behaviors become automatic. In zebra finches, lesioning BG in adults has little effect on song production, but lesioning BG in juveniles prevents the bird from learning the song (Fee & Goldberg, 2011; Scharff & Nottebohm, 1991; Sohrabji, Nordeen, & Nordeen, 1990). Similar patterns can be observed in people with Parkinson's disease.
Parkinson's patients, who have reduced DA and striatal deficits, have trouble solving procedural learning tasks but can produce automatic behaviors normally (Asmus, Huber, Gasser, & Schöls, 2008; Soliveri, Brown, Jahanshahi, Caraceni, & Marsden, 1997; Thomas-Ollivier et al., 1999). This behavioral evidence suggests that thalamocortical structures consolidate the learning from BG as behaviors become more automatic. Furthermore, on the synaptic level, a songbird learning circuit also demonstrates this cortical consolidation motif (Mehaffey & Doupe, 2015; Tesileanu et al., 2017). In the zebra finch, the premotor nucleus HVC (a proper name) projects to the motor nucleus, the robust nucleus of the arcopallium (RA), to produce the song. RA also receives inputs, mediated by the BG nucleus Area X, from the lateral magnocellular nucleus of the anterior nidopallium (LMAN). The latter pathway is believed to be a locus of reinforcement learning in the songbird circuit. By burst-stimulating both input pathways at different time lags, one can show that HVC-RA and LMAN-RA synapses undergo opposite plasticity (Mehaffey & Doupe, 2015). This suggests that learning is gradually transferred from the LMAN-RA to the HVC-RA pathway (Fee & Goldberg, 2011; Mehaffey & Doupe, 2015; Tesileanu et al., 2017), indicating a general role of BG as the trainer for cortical plasticity.

THE THALAMOCORTICAL STRUCTURE CONSOLIDATES THE BG SELECTIONS OF THALAMIC CONTROL FUNCTIONS OVER DIFFERENT TIMESCALES TO ENABLE META-LEARNING

In this section, in addition to BG's role as the trainer for cortical plasticity, we further propose that BG acts as a trainer over two different timescales for thalamocortical structures to enable meta-learning. The faster timescale trains the corticothalamic connections to select the appropriate thalamic control functions in different contexts, while the slower timescale trains the cortical connections to form a task-relevant and generalizable representation. From the songbird example, we see how thalamocortical structures can consolidate simple associations learned through the basal ganglia. To enable meta-learning, we propose that this general network consolidation motif operates over two different timescales within thalamocortical-basal ganglia interactions (Figure 3). First, combining the idea of thalamic outputs as control functions over cortical network activity patterns with the idea that the basal ganglia selects such functions, we frame learning in the basal ganglia as a process that connects contextual associations (higher order) with the appropriate dynamical control that maximizes reward at the sensorimotor level (lower order). Under this framing, corticothalamic plasticity consolidates the higher order association on a fast timescale. This allows flexible switching between different thalamic control functions in different contexts. The cortical plasticity, on the other hand, consolidates the sensorimotor association over a slow timescale to allow a shared representation that can generalize across different contexts. As the thalamocortical structures learn the higher order association, the behaviors become less BG-dependent and the network is able to switch between different thalamic control functions to induce different sensorimotor mappings in different contexts. By having two learning timescales, animals can conceivably both adapt quickly in changing environments, through fast learning of corticothalamic connections, and maintain important information across environments in the cortical connections.
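As a toy illustration of this two-timescale idea, the sketch below (our own construction, not a published model) trains fast corticothalamic weights that map a context cue to a thalamic gain pattern gating a cortical layer, alongside slow cortical weights shared across contexts. A simple gradient rule stands in for the proposed Hebbian consolidation, and the "BG teaching signal" is an explicit error; all dimensions, rates, and target mappings are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n_s, n_h, n_a, n_ctx = 6, 12, 2, 2            # all sizes are illustrative

U = 0.5 * rng.standard_normal((n_h, n_s))     # slow cortical input weights
V = 0.5 * rng.standard_normal((n_a, n_h))     # slow cortical output weights
T = 0.5 * rng.standard_normal((n_h, n_ctx))   # fast corticothalamic weights
lr_fast, lr_slow = 0.05, 0.01                 # the two consolidation timescales

# Two hypothetical contexts demanding different sensorimotor mappings.
M = [rng.standard_normal((n_a, n_s)) for _ in range(n_ctx)]

for step in range(30000):
    c = step % n_ctx
    ctx = np.zeros(n_ctx); ctx[c] = 1.0
    s = rng.standard_normal(n_s)
    g = 1.0 / (1.0 + np.exp(-(T @ ctx)))      # thalamic control pattern
    h = (U @ s) * g                           # thalamus gates cortical activity
    err = M[c] @ s - V @ h                    # stand-in for the BG teaching signal
    # Fast plasticity: learn which thalamic pattern each context should select.
    dg = (V.T @ err) * (U @ s) * g * (1.0 - g)
    T += lr_fast * np.outer(dg, ctx)
    # Slow plasticity: consolidate the shared cortical representation.
    V += lr_slow * np.outer(err, h)
    U += lr_slow * np.outer((V.T @ err) * g, s)

for c in range(n_ctx):                        # held-out evaluation per context
    ctx = np.zeros(n_ctx); ctx[c] = 1.0
    g = 1.0 / (1.0 + np.exp(-(T @ ctx)))
    S = rng.standard_normal((n_s, 500))
    mse = np.mean((M[c] @ S - V @ ((U @ S) * g[:, None])) ** 2)
    print(f"context {c}: test MSE = {mse:.3f}")
```

The design choice to keep U and V shared while only the gain vector g changes with context is what lets one set of slow weights serve multiple fast-switching behaviors.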
One should note that this separation of timescales is independent of the different intrinsic timescales across the cortex (Gao, van den Brink, Pfeffer, & Voytek, 2020; J. D. Murray et al., 2014). While different timescales across the cortex allow animals to process information differentially, the separation of corticothalamic and cortical plasticity allows the thalamocortical system to learn the higher order contextual association needed to modulate cortical dynamics flexibly. Some anatomical observations support this idea. Thalamostriatal neurons play a more modulatory role in cortical dynamics through diffuse projections, while thalamocortical neurons play more of a driver role through topographically restricted, dense projections (Sherman & Guillery, 2005). This suggests that thalamostriatal neurons might serve as control functions in the faster consolidation loop, with feedback to the striatum to conduct credit assignment, whereas thalamocortical neurons might be more involved in the slower consolidation loop, with feedback to the striatum coming from the cortex to train the common cortical representation across contexts.

Figure 3. Two-timescale learning in thalamocortical structures. We propose that the thalamocortical structure can be trained to enable meta-learning by applying the general network motif on two different timescales. First, the corticothalamic connections can be learned by applying the motif on the blue loop at a faster timescale. This allows the network to consolidate flexible switching behaviors. Second, the cortical connections can be learned by applying the motif on the orange loop at a slower timescale. This allows cortical neurons to develop a task-relevant shared representation that can generalize across contexts.

In summary, this two-timescale network consolidation scheme provides a general way for BG to guide plasticity in the thalamocortical architecture to enable meta-learning, and thus solves structural credit assignment as a special case. Along these lines, experimental evidence supports the notion that, when faced with multisensory inputs, the BG can selectively disinhibit a modality-specific subnetwork of the TRN to filter out the sensory inputs that are not relevant to the behavioral outcomes, and thus solve the structural credit assignment problem. Above, we discussed our proposal under a general formulation of thalamic control functions. In the next section, we specify other thalamic control functions suggested by recent studies and observe how they can solve continual learning under this framework as well.

THE THALAMUS SELECTIVELY AMPLIFIES FUNCTIONAL CORTICAL CONNECTIVITY AS A SOLUTION TO CONTINUAL LEARNING AND CATASTROPHIC FORGETTING

One of the pitfalls of artificial neural networks is catastrophic forgetting. If one trains an artificial neural network on a sequence of tasks, performance on the older tasks quickly deteriorates as the network learns the new task (French, 1999; Kemker et al., 2018; Kumaran et al., 2016; McCloskey & Cohen, 1989; Parisi et al., 2019). The brain, on the other hand, achieves continual learning: the ability to learn different tasks in different contexts without catastrophic forgetting, and even to generalize performance to novel contexts (Lewkowicz, 2014; M. M. Murray et al., 2016; Power & Schlaggar, 2017; Zenke, Gerstner, & Ganguli, 2017). There are three main approaches in machine learning to deal with catastrophic forgetting.
First, one can use regularization methods that preferentially update the weights that are less important to prior tasks (Fernando et al., 2017; Jung, Ju, Jung, & Kim, 2018; Kirkpatrick et al., 2017; Li & Hoiem, 2018; Maltoni & Lomonaco, 2019; Zenke, Poole, & Ganguli, 2017). This idea is inspired by experimental and theoretical studies of how synaptic information is selectively protected in the brain (Benna & Fusi, 2016; Cichon & Gan, 2015; Fusi, Drew, & Abbott, 2005; Hayashi-Takagi et al., 2015; Yang, Pan, & Gan, 2009). However, it is unclear how a biological system could compute the importance of each synapse to prior tasks, or how global regularization could be performed locally. Second, one can use a dynamic architecture, in which the network expands by allocating a subnetwork to train on the new information while preserving old information (Cortes, Gonzalvo, Kuznetsov, Mohri, & Yang, 2017; Draelos et al., 2017; Rusu et al., 2016; Xiao, Zhang, Yang, Peng, & Zhang, 2014). However, this type of method is not scalable, since the number of neurons needs to grow linearly with the number of tasks. Lastly, one can use a memory buffer to replay past tasks, avoiding catastrophic forgetting by interleaving the experience of past tasks with the experience of the present task (Kemker & Kanan, 2018; Kumaran et al., 2016; McClelland et al., 1995; Shin, Lee, Kim, & Kim, 2017). However, this type of method cannot be the sole solution, as the memory buffer needs to grow linearly with the number of tasks and potentially with the number of trials.

We propose that the thalamus provides another way to solve continual learning and catastrophic forgetting, via the selective amplification of parts of the cortical connectivity in different contexts (Figure 4). Specifically, we propose that a population of thalamic neurons topographically amplifies the connectivity of cortical subnetworks as its control function. During a behavioral task, BG selects subsets of the thalamus that selectively amplify the connectivity of cortical subnetworks. Because of the reinforcement learning in BG, the subnetwork that is most relevant to the current task will be preferentially activated and updated. By selecting only the relevant subnetwork to activate in one context, the thalamus protects other subnetworks, which may carry useful information for other contexts, from being overwritten. The corticothalamic structures can then consolidate these BG-guided flexible switching behaviors via our proposed network motif, and the switching becomes less BG-dependent. Furthermore, our proposed solution has implications for generalization as well. Different tasks can have principles in common that can be transferred. For example, although the rules of chess and Go are very different, players of both games need to predict what the other player is going to do and counter based on that prediction. Since BG selects the subnetwork at each level of the hierarchy that is most relevant to the current task, in addition to selecting different subnetworks to prevent catastrophic forgetting, BG can also select subnetworks that are beneficial to both tasks to achieve generalization. The cortex can therefore develop a modular, hierarchical representation of the world that generalizes easily. The idea of protecting relevant information from past tasks from being overwritten has been applied computationally before, with decent success in combating catastrophic forgetting in deep learning (Kirkpatrick et al., 2017).
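The following sketch, again our own construction rather than a published model, illustrates the protection mechanism in its simplest form: a binary "thalamic" mask confines both activity and plasticity to half of a hidden layer per task, so training the second task leaves the first task's subnetwork untouched. The masks, task mappings, and all hyperparameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_hid, n_out = 10, 40, 2
W1 = 0.3 * rng.standard_normal((n_hid, n_in))   # cortical input weights
W2 = 0.3 * rng.standard_normal((n_out, n_hid))  # cortical output weights

# Hypothetical thalamic control patterns: each context amplifies a distinct
# cortical subnetwork (idealized here as a hard binary mask over hidden units).
masks = np.zeros((2, n_hid))
masks[0, : n_hid // 2] = 1.0
masks[1, n_hid // 2 :] = 1.0

tasks = [rng.standard_normal((n_out, n_in)) for _ in range(2)]  # toy task maps

def train(task, steps=8000, lr=0.01):
    global W1, W2
    m = masks[task]
    for _ in range(steps):
        s = rng.standard_normal(n_in)
        pre = W1 @ s
        h = np.tanh(pre) * m                 # thalamus gates the active subnetwork
        err = tasks[task] @ s - W2 @ h
        # Plasticity is confined to the amplified subnetwork: synapses of the
        # silenced units receive no update, so other contexts are protected.
        W2 += lr * np.outer(err, h)
        W1 += lr * np.outer((W2.T @ err) * m * (1.0 - np.tanh(pre) ** 2), s)

def test(task):
    S = rng.standard_normal((n_in, 500))
    H = np.tanh(W1 @ S) * masks[task][:, None]
    return float(np.mean((tasks[task] @ S - W2 @ H) ** 2))

train(0); before = test(0)
train(1)                                     # sequential learning of task 1
# Task 0 performance is preserved (up to test-set sampling noise), because
# its subnetwork was never activated, and hence never updated, during task 1.
print(f"task 0 MSE before vs. after task 1: {before:.3f} vs. {test(0):.3f}")
print(f"task 1 MSE: {test(1):.3f}")
```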
Experimentally, we have also found that thalamic neurons selectively amplify cortical connectivity in a way that addresses the continual learning problem. In a task where mice need to switch between different sets of cues that guide attention to a visual or auditory target, performance does not deteriorate much after switching back to the original context, an indication of continual learning (Rikhye et al., 2018). Through electrophysiological recording of PFC and mediodorsal thalamic (MD) neurons, we found that PFC neurons preferentially code for the attentional rule, while MD neurons preferentially code for the context signaled by the different cue sets. Thalamic neurons that encode the task-relevant context translate this neural representation into the amplification of cortical activity patterns associated with that context (despite the fact that cortical neurons themselves encode the context only implicitly). These experimental observations are consistent with our proposed solution: by incorporating a thalamic population that can selectively amplify the connectivity of cortical subnetworks, the thalamus and its interactions with cortex and BG solve the continual learning problem and prevent catastrophic forgetting.

Figure 4. A thalamocortical architecture interacting with BG for continual learning. During task execution, BG selects thalamic neurons that amplify the relevant cortical subnetwork. This protects other parts of the network that are important for another context from being overwritten. When the other task arrives, BG selects other thalamic neurons, and since the synapses are protected from the last task, animals can freely switch between tasks without forgetting the previous ones. Furthermore, as the corticothalamic synapses learn how to select the right thalamic neurons in different contexts (blue dashed line), task execution can become less BG-dependent.

CONCLUSION

In summary, in contrast to the traditional relay view of the thalamus, we propose that thalamocortical interaction is the locus of meta-learning, where the thalamus provides cortical control functions, such as sensory filtering, working memory gating, or motor preparation, that parametrize the cortical activity association space. Furthermore, we propose a two-timescale learning consolidation framework in which BG hierarchically selects these thalamic control functions to enable meta-learning, solving the credit assignment problem. The faster plasticity learns contextual associations to enable rapid behavioral flexibility, while the slower plasticity establishes cortical representations that generalize. Considering the recent observation of the thalamus selectively amplifying functional cortical connectivity, the thalamocortical-basal ganglia network is able to flexibly learn context-dependent associations without catastrophic forgetting while generalizing to new contexts. This modular account of thalamocortical interaction may seem to be in contrast with recently proposed dynamical perspectives (Barack & Krakauer, 2021) on thalamocortical interaction, in which the thalamus shapes and constrains cortical attractor landscapes (Shine, 2021). We argue that both the modular and the dynamical perspectives are compatible with our proposal.
The crux of both perspectives is that the thalamus provides control functions that parametrize cortical dynamics; these control functions can be modular or dynamical in nature depending on their specific input-output connectivity. Flexible behaviors can be induced by selecting either the control functions that amplify the appropriate cortical subnetworks or those that adjust the cortical dynamics to the appropriate regimes.
An Amalgam of Mg-Doped TiO2 Nanoparticles Prepared by Sol–Gel Method for Effective Antimicrobial and Photocatalytic Activity

In this study, undoped and magnesium-doped TiO2 nanoparticles (Mg-TiO2 NPs) are successfully synthesized via a simple, cost-effective sol–gel method. The prepared Mg-TiO2 NPs are characterized by UV–Vis, FTIR, PL, XRD, FESEM, TEM, and EDAX. UV–visible spectroscopy shows that the optical bandgap decreases with increasing concentration of the Mg dopant; the bandgap values were found to be 3.57–3.54 eV. The FTIR spectra show the characteristic stretching and bending vibrational band of Ti–O bonding at 468 cm−1, with shifts in the vibrational bands observed for the Mg-TiO2 NPs. The PL spectra of Mg-TiO2 NPs at different concentrations exhibit a strong UV emission band. X-ray diffraction confirmed the formation of the tetragonal anatase phase. The average crystallite size of the synthesized samples was found to be 22–19 nm and decreases with increasing concentration of the Mg dopant. FESEM and TEM analysis confirmed a spherical morphology for both TiO2 and Mg-TiO2 NPs, and the SAED pattern confirms the crystalline nature of the prepared samples. The EDAX spectra confirm the presence of Ti, O, and Mg and that Mg2+ ions are present in the TiO2 lattice. The prepared samples were investigated against gram-positive and gram-negative bacteria and exhibit more potent antibacterial activity against the gram-negative bacteria. The prepared samples also exhibit significant photocatalytic degradation of methylene blue (MB).

Introduction

Nanotechnology has a wide range of applications, in electronics, catalysis, agriculture, optical communications, food packaging, and beyond [1-3]. In the present era, nanomaterials attract great interest in many fields because their optical and physical properties change when the particle size is reduced to the nanoscale. Recent studies on semiconductor nanoparticles have also suggested that the optical bandgap increases as the particle size decreases, changing the optical and electrical properties and thus making nanomaterials suitable for several applications [4-6]. The performance of nanomaterials depends on size and shape, which determine the high surface-to-volume ratio. Different types of nanomaterials are used to enhance optical, electrical, thermal, photocatalytic, antibacterial, and gas-sensing properties [7, 8]. Exploiting solar energy to eliminate various kinds of organic contaminants from water with the help of a photocatalyst has been offered as a logical and advantageous path that also addresses energy concerns: solar energy can be transformed into chemical energy through the application of a photocatalyst [9-11]. Fe2O3, WO3, Bi2O3, MgO, ZnO, and TiO2 are among the semiconductor nanomaterials most used as photocatalysts and for antibacterial applications, and they are safe for human beings, animals, and plants [12]. Among the metal oxide nanomaterials, TiO2 is an n-type semiconductor with a wide bandgap of 3.2 eV, UV light absorption, high chemical and thermal stability, and a tetragonal structure [13]. TiO2 has many applications in biomedicine, photocatalysis, antibacterial treatment, gas sensors, solar cells, agriculture, water purification, textiles, food packaging, etc. [14].
The chemical and physical attributes and the efficiency of nanoscale compounds can depend on the fabrication route, size distribution, purity, and shape [15]. To date, intense research effort has been undertaken to adjust the size distribution, purity, and shape of nanoscale compounds [16]. TiO2 occurs in three crystalline forms: the anatase, rutile, and brookite phases. These three forms have high refractive indices of 2.488, 2.609, and 2.583, respectively. Among them, anatase is metastable, rutile is highly stable, and brookite is unstable [17]. The anatase form is considered the most physically and chemically active phase of TiO2 [18]. Shape- and size-controlled synthesis of TiO2 nanoparticles to enhance their properties has been extensively studied in recent years [19]. There are several methods for the synthesis of TiO2 nanoparticles, such as sol-gel [20], wet chemical [21], co-precipitation [22], hydrothermal [23], ball milling [24], combustion [25], and biological methods [26]. Among these, sol-gel is the most feasible method for the synthesis of TiO2 nanoparticles because of its ability to control size and surface morphology. The sol-gel method has further advantages, including high purity, the low temperature required for synthesis, and excellent homogeneity of the nanoparticles [27, 28]. TiO2 nanoparticles also exhibit potent antibacterial properties that are useful in many biological applications. Recently, there has been increasing research on metal ion (Al, Ca, Ce, Co, Cu, Fe, Ga, In, Mn, Mg, Nb, Sn, and Sr) doping, which changes the physical and chemical properties of TiO2 nanoparticles and enhances antibacterial applications [29]. Among these metals, Mg-TiO2 NPs exhibit potent antibacterial activity because Mg2+ can substitute at the Ti4+ site owing to its comparable ionic radius, which helps to enhance the antibacterial efficiency [30]. The efficiency of antibacterial activity also depends on the structure of the microbes. Matsunaga et al. [31] reported that TiO2 nanoparticles showed good antimicrobial activity against Escherichia coli under UV irradiation. To overcome the limitation of UV-only activity, transition metals are doped to enhance the antibacterial efficiency under visible light. Karunakaran et al. [32] demonstrated that Cu-doped TiO2 nanoparticles show effective antibacterial activity toward E. coli and S. aureus under visible light; they also studied Ni-TiO2 NPs against gram-positive and gram-negative bacteria. Hamal et al. [33] reported that Ag-doped TiO2 nanoparticles show enhanced antibacterial activity against E. coli and B. subtilis, suggesting that the Ag is responsible for the enhancement of antibacterial efficiency. According to earlier reports, few works have been carried out on the effect of TiO2 nanoparticles on antimicrobial activity [34]. Zinatloo-Ajabshir et al. [35] reported dysprosium stannate nanoparticles synthesized using Ficus carica extract and applied as a novel kind of visible-light-sensitive photocatalyst for the efficient removal and destruction of organic contaminants in water. Moshtaghi et al. [36] showed that nanocrystalline barium stannate could be synthesized by a simple coprecipitation method and examined its degradation of erythromycin as a water pollutant. Zinatloo-Ajabshir et al.
[37] described Nd2Sn2O7 nanostructures synthesized using date palm extract and explored their electrochemical hydrogen storage by chronopotentiometry, finding a discharge capacity of around 4013 mA h/g. Zinatloo-Ajabshir et al. [38] reported Dy2Sn2O7 nanostructures synthesized using banana juice and likewise investigated electrochemical hydrogen storage by chronopotentiometry, finding a discharge capacity of around 4023 mA h/g after 20 cycles. However, to the best of our knowledge, only a few works have been reported on the antibacterial and photocatalytic activity of Mg-TiO2 NPs with different dopant concentrations. Klinbumrung et al. [39] reported antibacterial activity only against a gram-positive bacterium (S. aureus) using a microwave-assisted method, whereas our work reports activity against both gram-positive and gram-negative bacteria using a simple sol-gel method. Moreover, Mg-TiO2 NPs increase the concentration of reactive oxygen species (ROS), and these oxygen species lead to the death of bacterial cells. The antibacterial activity of Mg-TiO2 NPs depends on the doping concentration of Mg and the nature of the bacterial species. Herein, undoped and Mg-TiO2 NPs were synthesized via the simple sol-gel method and investigated for their structural, morphological, and optical properties and for their antibacterial and photocatalytic activity. The effect of various concentrations of Mg in Mg-TiO2 NPs against gram-positive and gram-negative bacteria was studied under visible light, and the photocatalytic degradation efficiency of Mg-TiO2 NPs for methylene blue under UV irradiation was also studied.

Preparation of Undoped and Mg-TiO2 NPs

All chemicals were of analytical grade and required no further purification. For the preparation of Mg-TiO2 NPs, a 0.2 mol% solution of magnesium nitrate was prepared in 100 ml of deionized water. Separately, 5 ml of titanium(IV) isopropoxide was dissolved in 100 ml of isopropyl alcohol. The aqueous magnesium nitrate solution was then added dropwise to form a homogeneous mixture, after which aqueous NaOH solution was added dropwise to form a white precipitate. The mixture was stirred at room temperature and allowed to age for 24 h, and the white precipitate was then washed with ethanol and distilled water to remove unwanted impurities. The solution was centrifuged, and the precipitate was dried at 120 °C for 2 h and annealed at 450 °C for 5 h to obtain the Mg-TiO2 NPs. The same procedure was followed for the other concentrations of the Mg dopant (0.3 mol%, 0.4 mol%, and 0.5 mol%). The obtained samples were ground with a pestle and mortar and stored in an airtight container; the annealed samples were used for further studies. The same method, without the addition of magnesium nitrate, was followed for the undoped TiO2 nanoparticles.

Characterization

The prepared undoped and Mg-TiO2 NPs were examined using the following characterization techniques. UV-visible absorption spectra were obtained in the wavelength range 200-800 nm using a UV-visible spectrophotometer (JASCO V-770). Fourier transform infrared (FTIR) spectroscopy was carried out using a Bruker Alpha FTIR spectrometer over the wavenumber range 400-4000 cm−1. Photoluminescence spectra of the prepared samples were recorded using an FP-3800 spectrofluorometer.
The XRD patterns were recorded using a Bruker D8 Advance X-ray diffractometer with Cu Kα1 (λ = 1.54060 Å) and Cu Kα2 (λ = 1.54443 Å) radiation operating at 30 mA and 45 kV over a 2θ range of 10° to 90°. The surface morphology of the Mg-TiO2 NPs was analyzed using a field emission scanning electron microscope (SIGMA HV, Carl Zeiss), and HRTEM images were taken with a JEOL JEM-2010 (Japan) at an accelerating voltage of 200 kV.

Antibacterial Experiment

E. coli, Pseudomonas aeruginosa, Bacillus sp., and Staphylococcus aureus were chosen as the microbes for the antibacterial assays. The antibacterial activity of the undoped and Mg-TiO2 NPs was tested using the disc diffusion method. In brief, the microbes were cultivated in Müller-Hinton broth at 35 °C ± 2 °C on a rotary shaking incubator (Remi, India) at 160 rpm. A lawn of microbial culture was prepared by spreading 10 mL of culture broth of each test microbe on dense nutrient agar plates. The dishes were allowed to stand for 10-15 min for culture absorption. Discs/wells of 5 mm were punched into the agar using sterilized micropipette tips. Using a spatula, 100 μg of undoped or Mg-TiO2 NPs was placed into each of the discs on all plates. The microbes were inoculated onto the culture media in the petri dishes and incubated at 35 ± 2 °C for 24 h. After incubation, the diameters of the zones of inhibition were measured.

Photocatalytic Degradation Study

The photocatalytic degradation of methylene blue dye by the undoped and Mg-TiO2 NPs under UV light irradiation was analyzed. In a typical photodegradation analysis, 50 ml of methylene blue solution (40 mg/L) was mixed with the appropriate amount of the prepared samples (undoped and Mg-TiO2 NPs) and stirred well in a glass beaker. The obtained suspension was kept in a dark room for 30 min and then irradiated with UV light under constant stirring. Aliquots of 3 ml were extracted from the suspension under UV irradiation at equal intervals of time, and their absorption spectra were analyzed with the UV-visible spectrophotometer (JASCO V-770). The photocatalytic rate constant for methylene blue was calculated using the first-order equation

ln(A0/A) = kt (1)

where A0 is the initial absorbance, A is the absorbance after a time t, and k is the first-order rate constant.

Structural Determination and Purity

The XRD patterns of the undoped and Mg-TiO2 NPs exhibited peaks with tetragonal anatase phase reflections (JCPDS Card no. 78-2486), and the samples possess a pure crystalline nature with trigonal planar O-3 and octahedral Ti-6 coordination geometry [40]. Scherrer's formula was used to calculate the crystallite size of the undoped and Mg-TiO2 NPs as follows [42]:

D = Kλ / (β cos θ)

where D is the crystallite size, K is the Scherrer constant, β is the full width at half maximum (FWHM) of the X-ray diffraction peak (radians), λ is the wavelength of the X-rays (nm), and θ is the diffraction angle. The assessed crystallite sizes of the as-prepared nanoparticles were found to be 22 nm, 21 nm, 20.4 nm, 20 nm, and 19.6 nm, respectively. The crystallite size decreases as the Mg dopant concentration increases, which is attributed to the incorporation of Mg2+ ions into the TiO2 lattice. Doping TiO2 with Mg also increases the oxygen species, and these oxygen species are responsible for enhancing the antibacterial and photocatalytic activity.
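For reference, the Scherrer calculation can be scripted directly. In the sketch below, the Cu Kα1 wavelength is the instrument value given above, while the FWHM of the anatase (101) reflection is a hypothetical input, since the measured peak widths are not tabulated here.

```python
import numpy as np

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.154060, K=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)), beta in radians."""
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)
    return K * wavelength_nm / (beta * np.cos(theta))

# Anatase (101) reflection lies near 2-theta = 25.3 deg; the FWHM below is
# illustrative, not taken from the reported patterns.
print(f"D = {scherrer_size(25.3, 0.42):.1f} nm")
```

With these illustrative inputs the result is about 19.4 nm, within the 19-22 nm range reported above.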
The lattice constants of the tetragonal anatase phase of the undoped and Mg-TiO2 NPs were calculated using the relation for a tetragonal system,

1/d² = (h² + k²)/a² + l²/c²

where d is the interplanar spacing, a and c are the lattice constants, and h, k, and l are the Miller indices. The positional parameter (u), bond length (l), and unit cell volume (V = a²c for a tetragonal cell) of the as-prepared samples were obtained from these lattice constants. The obtained values of crystallite size, lattice parameters, positional parameter, bond length, and unit cell volume of the undoped and Mg-TiO2 NPs are summarized in Table 1. As presented in Table 1, the crystallite size of the as-prepared nanoparticles decreases with increasing Mg dopant concentration, and slight variations in the positional parameter, bond length, and unit cell volume are observed; this might be due to the incorporation of Mg2+ ions into the TiO2 lattice.

The different modes of vibration of the as-prepared nanoparticles and their chemical purity were studied using Fourier transform infrared (FTIR) spectroscopy, as depicted in Fig. 2. The broad band at 3748 cm−1 can be attributed to the stretching mode of vibration of hydroxyl groups on the TiO2 nanoparticles, which form oxygen vacancies in the presence of water. The presence of the OH group also increases the photocatalytic activity, since the OH group serves as a scavenger for photogenerated charge carriers [43]. The absorption bands at 2936 cm−1 and 2348 cm−1 are related to the symmetric and asymmetric vibrations of -CH2 and -CH3 groups. The band at 1626 cm−1 is related to the characteristics of the amide I and II bands and indicates increased surface hydroxylation of the TiO2 nanoparticles upon doping.

Optical Properties and Bandgap Assessment

The optical properties of the as-prepared nanoparticles were examined using UV-visible absorption spectroscopy at room temperature, as depicted in Fig. 3. Generally, TiO2 nanoparticles tend to absorb UV light, with a bandgap of 3.2 eV. The absorption exhibits a UV cutoff wavelength attributed to photoexcitation of electrons from the valence band (formed from the 2p orbitals of the oxide anion) to the conduction band (formed from the 3d orbitals of the Ti4+ cation) [47]. A shift in the absorption edge was observed for the Mg-TiO2 NPs, which is ascribed to the acceptor tendency of Mg in TiO2 and the creation of additional states within the TiO2 lattice, which lead to a reduction of the bandgap. To estimate the bandgap of the as-prepared samples from the UV-visible spectra, Tauc's formula was used [48].
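A typical numerical route to the Tauc estimate is to fit the linear rising edge of (αhν)^(1/n) versus photon energy and extrapolate it to zero; the exponent n depends on the transition type, and the sketch below assumes the direct-allowed form (αhν)², since the paper does not state which form was used. The data are synthetic, generated around a nominal gap near the reported 3.54-3.57 eV values; the measured spectra are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
E = np.linspace(3.0, 4.2, 200)                     # photon energy hv, eV
Eg_true = 3.55                                     # nominal gap for the demo
# Synthetic (alpha*hv)^2 curve: zero below the gap, linear above, plus noise.
tauc = 5.0 * np.clip(E - Eg_true, 0.0, None) + 0.02 * rng.random(200)

# Fit the linear rising edge and extrapolate to (alpha*hv)^2 = 0.
sel = (tauc > 0.5) & (tauc < 2.5)                  # restrict to the linear region
slope, intercept = np.polyfit(E[sel], tauc[sel], 1)
print(f"estimated bandgap Eg = {-intercept / slope:.2f} eV")
```

The choice of fit window matters in practice: including the sub-gap tail or the saturated region biases the extrapolated intercept.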
The structural defects and crystal properties of the as-prepared samples were analyzed by photoluminescence (PL) at room temperature with an excitation wavelength of 345 nm. The PL spectra of the undoped and Mg-TiO2 NPs are depicted in Fig. 5 and exhibit peaks at 388 nm, 458 nm, and 535 nm; no new peaks arise for the Mg-TiO2 NPs. The prominent peak at 388 nm is due to UV emission and self-trapped excitons near the band edge of TiO2; this UV emission band arises from the recombination of electron-hole pairs, that is, near-band-edge (NBE) emission. The intensity of this peak increases slightly when Mg is added to TiO2, owing to an increase in electron-hole pair recombination. The shifts in the peaks are also due to the decreases in particle size and bandgap energy of the as-prepared nanoparticles. The strong peak at 388 nm further confirms the crystalline nature of the TiO2. Another peak, at 458 nm, is due to deep-level emission from structural defects such as oxygen vacancies and impurities on the surface of TiO2 [49]. The PL features observed between 300 and 550 nm might thus arise from intrinsic and extrinsic structural defects, which act on the electron-hole pairs formed when electrons are excited from the valence band to the conduction band. The low-intensity peak at 535 nm corresponds to green emission and can arise from charge carriers formed after the recombination process and from oxygen species on the surface of the TiO2 nanoparticles. The oxygen vacancies increase with increasing Mg dopant, and this may cause the improvement in photocatalytic activity.

Morphology and Elemental Analysis

The surface morphology of the as-prepared undoped and Mg-TiO2 NPs was examined using FESEM, and the results are shown in Fig. 6. The as-prepared nanoparticles show a spherical shape, with particle sizes of around 25 nm for both the undoped and Mg-TiO2 NPs. From the XRD results, it can be inferred that the crystallite size is smaller than the particle size, which confirms that the prepared nanoparticles are crystalline. In addition, aggregation and agglomeration occur in the prepared nanoparticles, as shown in the FESEM images; the agglomeration decreases with increasing Mg dopant concentration, as does the particle size. A crystallite is defined as the smallest coherent crystallographic unit, delimited by the misorientation of adjacent atoms, and a nanoparticle may consist of more than one crystallite with dissimilar orientations. The particle sizes obtained from the FESEM results are in good agreement with the XRD results, as is typically the case for nanoparticles. The high crystallinity of the prepared nanoparticles enhances the antibacterial and photocatalytic activity.

The quantitative microanalysis of the undoped and Mg-TiO2 NPs was performed by EDAX, shown in Fig. 7, and the results are listed in Table 2. The EDAX spectra confirm the presence of Mg, Ti, and O; the percentage of Mg increases with increasing dopant concentration, thereby decreasing the concentration of Ti in the TiO2 lattice. Furthermore, the percentages of Mg and Ti indicate a substitutional mode of doping in the TiO2 lattice. The results also show an increase in the percentage of oxygen on the surface of the TiO2, which contributes to the enhanced antibacterial and photocatalytic activity. No additional impurities were detected in the EDAX spectra.

Figure 8(A-B) depicts the TEM analysis of the undoped and Mg-TiO2 NPs. The prepared nanoparticles are spherical, with a uniform morphology and good crystallinity. Figure 8(C-D) shows the lattice fringes of the undoped and Mg-TiO2 NPs. The d-spacing was obtained from the camera equation d = λL/R, where L is the camera length (120 nm), λ is the wavelength of the electron beam, and R is the radius of the diffraction ring. The d-spacing values of the undoped and Mg-TiO2 NPs were found to be 0.239 nm and 0.268 nm, respectively, corresponding to the (1 0 1) plane of the tetragonal anatase phase of TiO2. The lattice spacing increases slightly for the Mg-TiO2 NPs, which is ascribed to imperfections in the TiO2 lattice due to metal ion doping [51]. The intensity of the crystalline phase of the TiO2 nanoparticles decreased, in good agreement with the intensity of the XRD peaks.
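The camera-equation arithmetic is a one-liner. In the sketch below the camera length is taken from the text, the 200 kV electron wavelength (~2.51 pm, relativistically corrected) is a standard value, and the ring radius is a hypothetical input chosen to land on the anatase (101) spacing; all lengths must simply be in consistent units.

```python
# SAED camera equation: R * d = lambda * L, i.e., d = lambda * L / R.
wavelength_nm = 2.51e-3   # ~2.51 pm electron wavelength at 200 kV
L_nm = 120.0              # camera length quoted in the text
R_nm = 1.26               # hypothetical radius of the (101) diffraction ring
print(f"d = {wavelength_nm * L_nm / R_nm:.3f} nm")   # -> 0.239 nm
```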
The crystallinity of the as-prepared samples was assessed using selected area electron diffraction (SAED), portrayed in Fig. 8(E-F). The ring pattern confirms the anatase crystalline nature of the as-prepared nanoparticles, and the bright spots indicate the high crystallinity of the undoped and Mg-TiO2 NPs in the (1 0 1) anatase phase. The mean particle sizes of the undoped and Mg-TiO2 NPs were found to be 24.6 nm and 21.9 nm, respectively, in good accord with the crystallite sizes from the XRD results. From the TEM results, the particle size decreases with increasing dopant concentration, owing to the incorporation of Mg2+ ions into the TiO2 lattice. The smaller particle size improves the photocatalytic activity.

Antibacterial Activity

Generally, the antibacterial activity of nanoparticles depends on various factors such as phase formation, particle size, surface morphology, specific surface area, chemical composition, and surface hydroxyl groups [52, 53]. The antibacterial activities of pure TiO2 and the different concentrations of Mg-TiO2 NPs were investigated against E. coli, Pseudomonas aeruginosa, Bacillus sp., and Staphylococcus aureus by the disc diffusion method. The zone of inhibition increased with increasing Mg doping concentration, as shown in Fig. 9. The Mg-TiO2 NPs (0.5 mol%) exhibited the best antibacterial activities against both gram-negative and gram-positive bacteria, as shown in Fig. 10. Moreover, gram-negative bacteria are comparatively more sensitive to Mg-TiO2 NPs than gram-positive bacteria. This might be due to the difference in the cell structure of the bacteria: gram-positive bacteria have a thick peptidoglycan cell wall compared with gram-negative bacteria, and this thick wall acts as an additional barrier to the undoped and Mg-TiO2 NPs, leading to relatively lower antibacterial activities against gram-positive bacteria. Several killing mechanisms of TiO2 nanoparticles have been described in the literature, such as ROS generation, surface tension leading to cell damage, penetration of Ti ions through the cell membrane leading to damage of the cell wall, hole creation, and leakage of intracellular electrolytes [54, 55]. Among these, ROS generation is most often used to describe the antibacterial activities of TiO2 nanoparticles. In the ROS picture, additional electron-hole pairs are formed on the surface of the nanoparticles, and the ROS produced mainly consist of hydroxyl radicals, hydrogen peroxide (H2O2), and superoxide anion radicals [56]. Furthermore, TiO2 nanoparticles bind to the outer microbial membrane and enter through the cell wall; this damages the cell wall, DNA, lipids, and protein synthesis and leads to loss of bacterial viability [57]. The killing mechanism of TiO2 nanoparticles can be summarized by the standard photocatalytic ROS reactions [58]: TiO2 + hν → e− + h+; h+ + H2O → •OH + H+; e− + O2 → •O2−; •O2− + H+ → •HO2; •HO2 + •HO2 → H2O2 + O2. López de Dicastillo et al. [59] reported high antibacterial activity of TiO2 nanospheres against Escherichia coli and Staphylococcus aureus, and Zimbone et al. [60] reported the antibacterial activity of TiO2 nanoparticles against the gram-negative bacterium Escherichia coli. The antibacterial activities of the undoped and Mg-TiO2 NPs also depend on the crystallite size and morphology. The efficacies of the antibacterial activities of the undoped and Mg-TiO2 NPs are shown in Table 3. The results reveal that the antibacterial activities of the Mg-TiO2 NPs are higher against gram-negative bacteria than gram-positive bacteria, and that the antibacterial activities increase with increasing Mg concentration.
In this study, the Mg-TiO2 NPs (0.5 mol%) exhibited the highest antibacterial activity because of their smaller crystallite size and larger surface area. In addition, Mg increases the oxygen vacancies involved in ROS generation, enhancing the antibacterial activity. The doping of TiO2 nanoparticles with Mg leads to variations in particle size, morphology, and the solubility of Ti ions. The results reveal that Mg-TiO2 NPs are a promising candidate for potential drug delivery systems to treat some significant infections in the future (Table 3).

Photocatalytic Activities

The UV-irradiated photocatalytic degradation of methylene blue by the undoped and Mg-TiO2 NPs is depicted in Fig. 11; the Mg-TiO2 NPs show enhanced photocatalytic degradation [61]. The photocatalytic mechanism of the as-prepared samples is shown in Fig. 12. The photocatalytic degradation efficiencies from 0 to 45 min were found to be 68%, 74%, and 86% for the TiO2 nanoparticles and 76%, 88%, and 95% for the Mg-TiO2 NPs, respectively. From these photodegradation efficiencies, the Mg-TiO2 NPs exhibit more potent performance than the pure TiO2 nanoparticles. The higher photocatalytic efficiency of the Mg-TiO2 NPs is due to their smaller crystallite size and modified bandgap energy relative to TiO2 nanoparticles, because the crystallite size and bandgap energy play important roles in photocatalytic activity; the Mg dopant also modifies the physical and chemical properties of the TiO2 nanoparticles. The reduced crystallite size of the Mg-TiO2 NPs also decreases the recombination of photogenerated electron-hole pairs, which further enhances the photocatalytic activity. The higher concentrations of Mg-TiO2 NPs show enhanced photocatalytic activity relative to TiO2 nanoparticles because of the charge separation efficiency of the electron-hole pairs; in addition, Mg doping can form surface defects and oxygen species on the surface of the prepared nanoparticles. The enhanced photocatalytic activity of the Mg-TiO2 NPs is thus due to the smaller crystallite size and the change in bandgap energy, confirmed by the XRD and UV analyses, respectively. The calculated rate constants are shown in Fig. 13. The higher rate constant was obtained for the Mg-doped TiO2 nanoparticles, which is attributed to the modified bandgap energy, giving the photogenerated electron-hole pairs a redox potential that extensively enhances the photocatalytic activity. The photodegradation efficiencies of the undoped and Mg-TiO2 NPs for methylene blue over 45 min across 5 cycles are shown in Fig. 14. The efficiency decreased from 86% to 76% for the TiO2 nanoparticles and from 95% to 92% for the Mg-TiO2 NPs. The lower stability of the TiO2 nanoparticles is due to the photocorrosion phenomenon during the photocatalytic reactions; doping, on the other hand, increases the photocorrosion resistance and the chemical stability of the TiO2 nanoparticles during the photocatalytic reactions. The photocorrosion of the TiO2 nanoparticles can occur through the reactions described in [62]; it is caused by the reaction of oxygen species and holes present on the surface of the TiO2 nanoparticles. According to the XRD results, Mg doping of TiO2 increases the oxygen species and also increases the chemical stability during the photocatalytic reactions.
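The rate constants in Fig. 13 follow from Eq. (1) by linear regression of ln(A0/A) against irradiation time. A minimal sketch with hypothetical absorbance readings (illustrative values, not the measured data) is:

```python
import numpy as np

# Hypothetical absorbance readings for methylene blue under UV irradiation,
# taken at equal time intervals (illustrative values only).
t = np.array([0.0, 15.0, 30.0, 45.0])       # irradiation time, min
A = np.array([1.00, 0.62, 0.38, 0.24])      # absorbance, arbitrary units

# Eq. (1): ln(A0/A) = k t, so k is the slope of a zero-intercept linear fit.
y = np.log(A[0] / A)
k = float(np.sum(t * y) / np.sum(t * t))    # least-squares slope through origin
efficiency = 100.0 * (1.0 - A[-1] / A[0])   # percent degradation after 45 min

print(f"k = {k:.4f} min^-1, degradation efficiency = {efficiency:.0f}%")
```

The same absorbance values give both quantities reported in this section: the first-order rate constant and the percent degradation 100 × (1 − A/A0).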
Conclusion

In this work, undoped and Mg-doped TiO2 nanoparticles were synthesized by a facile sol-gel technique and characterized by XRD, FESEM with EDAX, TEM, UV-Vis, FTIR, and PL analyses. The XRD patterns confirm the tetragonal anatase phase of the TiO2 nanoparticles with small crystallite size; the crystallite size decreases from 22 nm to 19 nm with increasing Mg dopant concentration. FESEM and TEM analyses confirm that smooth, spherical TiO2 nanoparticles of various sizes were achieved by this method, and EDAX analysis confirms the presence of Ti, O, and Mg without any other impurities. The red shift in the UV analysis confirms the incorporation of Mg into the TiO2 nanoparticles, and the PL analysis shows UV and green emission regions whose intensity changes likewise confirm the Mg incorporation. The bandgap energy decreases from 3.57 to 3.54 eV with Mg doping, alongside the decrease in crystallite size. The functional groups were confirmed by the FTIR spectra, with the stretching mode of vibration of the TiO2 nanoparticles observed at 468 cm−1. The prepared nanoparticles were also investigated for antimicrobial and photocatalytic activity. The pure and Mg-doped TiO2 nanoparticles show a more potent killing effect against gram-negative than gram-positive bacteria, a difference attributed to cell structure and to the small crystallite size of the prepared nanoparticles. The Mg-doped TiO2 nanoparticles show higher degradation efficiency than the TiO2 nanoparticles for methylene blue, again because of the smaller crystallite size. This method is thus a simple and cost-effective route, with Mg as a good dopant, for the preparation of TiO2 nanoparticles with small crystallite size.
Sub-sewershed Monitoring to Elucidate Down-the-Drain Pesticide Sources

Pesticides have been reported in treated wastewater effluent at concentrations that exceed aquatic toxicity thresholds, indicating that treatment may be insufficient to adequately address potential pesticide impacts on aquatic life. Gaining a better understanding of the relative contributions from specific use patterns, transport pathways, and flow characteristics is an essential first step to informing source control measures. The results of this study are the first of their kind, reporting pesticide concentrations at sub-sewershed sites within a single sewer catchment to provide information on the relative contributions from various urban sources. Seven monitoring events were conducted at influent, effluent, and seven sub-sewershed sites to capture seasonal variability. In addition, samples were collected from sites with the potential for relatively large mass fluxes of pesticides (pet grooming operations, pest control operators, and laundromats). Fipronil and imidacloprid were detected in most samples (>70%). Pyrethroids were detected in >50% of all influent and lateral samples and were significantly removed from the aqueous process stream within the facility, to below reporting limits. Imidacloprid and fiproles were the only pesticides detected above reporting limits in effluent, highlighting the importance of source identification and control for the more hydrophilic compounds. Single source monitoring revealed large contributions of fipronil, imidacloprid, and permethrin originating from a pet groomer, with elevated levels of cypermethrin at a commercial laundry location. The results provide important information needed to prioritize future monitoring efforts, calibrate down-the-drain models, and identify potential mitigation strategies at the site of pesticide use to prevent introduction to sewersheds.

■ INTRODUCTION

Pesticides, including pyrethroids, fipronil, and imidacloprid, have been reported in treated wastewater effluent at concentrations that exceed aquatic toxicity thresholds.1 In arid regions, the discharge of treated effluent can dominate flow in streams and rivers and can contribute to estuarine environments with limited hydrodynamic exchange with the ocean, posing a potential risk to aquatic organisms. Limited data exist on the ability of wastewater treatment technologies to remove pesticides; however, available results suggest treatment may be insufficient to reduce pesticide concentrations below aquatic toxicity thresholds.1−3 Historically, the primary aim of wastewater treatment technologies was to remove bulk organic matter and pathogens and to reduce nutrients present in influent in the parts per million concentration range. The ability to reduce trace organic chemicals (TOrCs) present in the parts per billion to sub parts per trillion range has been an area of intense research over the past several decades. Advanced treatment technologies have been evaluated for their ability to reduce TOrC concentrations, with variable success.4 However, the increased energy requirements and corresponding greenhouse gas emissions associated with advanced treatment technologies have also been documented.5 Reducing toxicity related to pesticides may be better addressed through source control strategies than treatment. The occurrence of pesticides within wastewater treatment facilities at concentrations of ecological concern has been established.1−3
However, little information exists on pesticide transport within a sewershed. The use patterns for a particular chemical may lead to small, continuous, ubiquitous sources throughout a sewershed catchment or to episodic, large-volume pulses. For example, personal care products and pharmaceuticals may be discharged to the system by users throughout the day, in contrast to timed releases of industrial chemicals.6 Large pulses from specific uses have been the focus of wastewater treatment plants' pre-treatment programs; for example, the US EPA codified pre-treatment standards to address the discharge of mercury from dental offices to WWTPs (https://www.ecfr.gov/current/title-40/chapter-I/subchapter-N/part-441e). Advances in wastewater treatment technologies have been driven by the need to address continuous discharges such as personal care products, flame retardants, pharmaceuticals, and pesticides.4,7,8 Generally, the variability of TOrC concentrations increases with decreasing catchment size.9 Sutton et al. (2019) provided a comprehensive conceptual model of potential sources and pathways for pesticides to enter wastewater catchments.1 Every pesticide product used in California must be registered by both the United States Environmental Protection Agency (USEPA) and the California Department of Pesticide Regulation (CDPR), with permitted applications detailed within its label. For pyrethroids, there are additional California-specific regulations imposed on professional-use products during structural applications.10 Thus, it is possible to determine which sources and pathways are possible on a chemical-by-chemical basis. This allows for a qualitative interpretation of measurements and prediction of the potential for high-use pulses to enter the sewershed. For example, sources can be identified by isolating specific sewershed laterals, which are the pipes that connect a structure to a municipal main sewer line. Elevated pesticide residues may be predicted in sewershed laterals originating from pet grooming facilities, pest control operators, or nurseries. However, quantitative information on the relative contributions of these sources is limited. Teerlink et al. demonstrated that the washing of dogs treated with spot-on fipronil products is a significant source of fipronil entering wastewater treatment plants.11 Verifying and quantifying the relative contributions of other source pathways are key data gaps. Single source monitoring is a unique technique to isolate potential source contributions to the sewershed. The goals of this study are to: (1) quantify pesticide residues in influent, effluent, and at the sub-sewershed scale; (2) characterize the variability in sewershed laterals to assess whether pesticides are introduced through large pulses or ubiquitous releases; (3) assess the contribution from specialty sites with potential for increased pesticide discharge; and (4) investigate the relative contribution as a function of sub-sewershed characteristics (e.g., residential, industrial, and commercial). Sampling was conducted over a 9 month period to capture seasonal variability, with sampling events occurring on both weekdays and weekends across an entire municipal sewershed. It is recognized that evaluating the partitioning of hydrophobic TOrCs to biosolids is essential to fully understand the fate of contaminants in wastewater systems; however, this study focuses on the aqueous process stream.

■ MATERIALS AND METHODS

Study Site and Sampling.
Samples were collected at the Palo Alto Regional Water Quality Control Plant (PAWT) between May 2016 and January 2017. PAWT is a tertiary treatment facility that employs a trickling filter, an activated sludge system, clarifiers, dual-medium filtration, and ultraviolet disinfection prior to effluent discharge into the San Francisco Bay. The sewershed is classified as a separate sewer system, with stormwater runoff directed into a different drainage system. The system serves approximately 236,000 residents, with a plant capacity of 148 megaliters per day (MLD) and an average of 68 MLD during dry weather flows. 24 h time-weighted composite samples were collected from influent, effluent, and seven sub-catchment locations. With the exception of one lateral location and the three specialty sites described below, all locations were instrumented with flow meters during the sampling periods (Table S5). Three specialty sites were selected to investigate the potential for intensive pesticide release from a pet groomer, a pest control operation, and a laundromat. Sampling was conducted as close to the source as possible, with limited contribution from additional locations. Prior to sample collection, the sewer was physically cleaned from the structure to the monitoring location to eliminate sediment that might contribute residual pesticides. All samples were collected at 15 min sampling intervals as 24 h time-weighted composites using a combination of ISCO Model 2910, Hach Sigma 900 Max, and Sigma SD900 autosamplers, except for the June 2016 sample, which was collected at 30 min intervals.

Analytical Method Summary. A total of 25 target compounds were selected (Table S1) based on a shelf survey of products available directly to consumers, with consideration of toxicity.12 Procedures for sample extraction and analysis were optimized for the target compounds and derived from methods previously used to investigate pesticide occurrence in surface water samples;3 key details, including method validation, are summarized here, and the methods are described completely in the Supporting Information (Tables S1−S4). Wastewater samples (200 mL of raw wastewater or 1 L of treated effluent) were filtered, spiked with a stable isotope-labeled surrogate solution, and passed over solid-phase extraction cartridges. Cartridges were dried and eluted first with ethyl acetate and then with methanol. Filters were extracted by sonication with hexane/acetone. Each extract was evaporated separately under nitrogen. Samples were analyzed using liquid and gas chromatography coupled with quadrupole time-of-flight mass spectrometers. LC−MS analysis of combined methanol/ethyl acetate/filter extracts was performed in the positive electrospray ionization mode, and GC−MS analyses were performed on combined ethyl acetate/filter extracts in both negative chemical ionization and electron ionization modes.

Data Analysis. Pesticide monitoring data are typically right-skewed in distribution.13 It is also common to introduce bias in data sets through targeted site selection in which prior information about the distribution is known.14 Distribution histograms were created to confirm the skewness of the dataset. To offset the bias, all statistical tests were conducted using R version 4.1 with NADA package macros for censored data, as described by Helsel.13 The Mann−Whitney test was used to determine significant differences in concentrations, and significant differences in median values between multiple groups were evaluated using the Kruskal−Wallis test; both techniques account for multiple analytical reporting limits. An α of 0.05 was used as the level of significance in all statistical tests. Descriptive statistics for left-censored data were calculated using the Kaplan−Meier method.
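The study's censored-data statistics were computed with the R NADA package. Purely to illustrate the underlying Kaplan−Meier idea for left-censored (below-reporting-limit) data, the sketch below implements Helsel's flipping construction in Python with hypothetical concentrations; it ignores edge cases (tie-ordering of censored values, medians outside the observed range) that a production implementation such as NADA handles.

```python
import numpy as np

def km_median_left_censored(values, censored):
    """Kaplan-Meier median for left-censored data via the flipping construction:
    subtract from a constant so nondetects become right-censored, run the
    standard product-limit estimator, then flip the median back."""
    v = np.asarray(values, dtype=float)
    cens = np.asarray(censored, dtype=bool)
    M = v.max() + 1.0                      # flip constant
    t = M - v                              # flipped "survival times"
    order = np.argsort(t)
    t, cens = t[order], cens[order]
    n = len(t)
    s, surv = 1.0, np.empty(n)
    for i in range(n):
        if not cens[i]:                    # an uncensored value is an "event"
            s *= 1.0 - 1.0 / (n - i)
        surv[i] = s
    idx = int(np.argmax(surv <= 0.5))      # first flipped time with S <= 0.5
    return M - t[idx]

# Hypothetical fipronil results in ng/L; True marks a nondetect reported at its
# reporting limit (these numbers are for illustration, not study data).
conc = [12.0, 8.5, 5.0, 5.0, 22.0, 3.1, 5.0, 15.4]
nd = [False, False, True, True, False, False, True, False]
print(f"Kaplan-Meier median = {km_median_left_censored(conc, nd):.1f} ng/L")
```

The flipping step matters because the textbook product-limit estimator assumes right-censoring; reporting limits censor from below, so the axis is reversed before and after the estimate.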
Significant differences in median values between multiple groups were evaluated using the Kruskal−Wallis test. Both statistical techniques account for multiple analytical reporting limits. An α of 0.05 was used as the level of significance in all statistical tests. Descriptive statistics for left-censored data were calculated using the Kaplan−Meier method.

Influent: What Is Entering the Waste Stream?

The detection frequencies (DF) of pesticides in lateral and influent samples are summarized in Figure 1 and Table S6. Fipronil and its degradates (with concentrations summed to produce a total fiprole concentration), imidacloprid, and six pyrethroids (bifenthrin, cyfluthrin, cyhalothrin, cypermethrin, deltamethrin, and permethrin) were detected in more than 50% of influent and/or lateral samples (Figure 1). There were less frequent detections of some other pyrethroids [e.g., prallethrin (29% DF), bioallethrin (14%), cyphenothrin (4%), and esfenvalerate (2%)], as well as the fungicide chlorothalonil (2%). Chlorpyrifos was detected in 22% of sewer lateral samples at levels near detection limits. There is currently one registered chlorpyrifos pesticide product for use by professionals against cockroaches in sewers, which could result in a periodic, direct, low volume source into the sewershed. Etofenprox, novaluron, phenothrin, propoxur, pyriproxyfen, and tetramethrin were not detected in any sample, indicating that their presence in this wastewater system is negligible.

Concentrations of the seven compounds (or compound groups, in the case of fiproles) with DF above 50% in influent and lateral samples are summarized in Figure 2. Concentrations measured in municipal wastewater influent in this study are generally in good agreement with those available in the literature. 1 The maximum observed influent concentrations in this study are below the maximum values previously reported. 1 The median concentrations in influent samples were lower than the highest median values reported by Sutton et al. for imidacloprid and most fiproles, but were consistently higher than previously reported median concentrations for fipronil amide, fipronil sulfide, and all of the pyrethroid insecticides. 1 For several compounds the values were only moderately higher (2.5% for bifenthrin, 86.8% for permethrin), but for others, the values were substantially larger than the highest previously reported median, ranging from 2.6 times higher for cypermethrin to 9.0 times larger for fipronil amide. Influent previously collected at the study facility showed similar total fiprole concentrations (11% RPD) but much higher imidacloprid concentrations (37% RPD) compared with the current study. 2 This might indicate differences in spatial and/or temporal use patterns: previous studies encompassed a large spatial distribution of facilities within the United States, while the current study focused on one facility during multiple time periods. In general, concentration variability was larger in lateral samples than in influent samples (Figure 2). This could be a result of sub-sewershed pulses that, when combined, produce a more consistent signal.

Effluent: What Is Entering Surface Waters?

Concentrations of the pyrethroid insecticides in wastewater effluent were all below the limits of quantification (LOQs), despite these compounds being frequently measured in influent (Figure 1). The LOQs varied by sampling event but had median values of 1 ng/L (bifenthrin), 2 ng/L (cyfluthrin), 3 ng/L (deltamethrin), 5 ng/L (cypermethrin), and 25 ng/L (permethrin).
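With this many effluent values below LOQs, summary statistics cannot simply ignore non-detects. The Kaplan−Meier approach noted under Data Analysis treats a value below its reporting limit as left-censored; Helsel's "flipping" construction converts left-censored concentrations into right-censored survival-style data so that standard Kaplan−Meier tools apply. A minimal sketch, assuming the Python lifelines package and invented values:

```python
# Kaplan-Meier descriptive statistics for left-censored concentrations via
# Helsel's "flipping" trick: subtracting every value from a constant larger
# than the maximum turns left-censoring into right-censoring.
import numpy as np
from lifelines import KaplanMeierFitter

conc = np.array([12.0, 5.4, 1.0, 33.1, 8.2, 2.0])  # ng/L; non-detects hold the LOQ
detected = np.array([1, 1, 0, 1, 1, 0])             # 0 = below reporting limit

flip = conc.max() + 1.0
kmf = KaplanMeierFitter()
kmf.fit(durations=flip - conc, event_observed=detected)

# Flip the estimated median back onto the concentration scale
km_median = flip - kmf.median_survival_time_
print(f"Kaplan-Meier median concentration: {km_median:.1f} ng/L")
```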
Fiproles and imidacloprid were the only pesticides detected above LOQs in effluent samples, at 100 and 71% frequency, respectively (Table S6). The median values in wastewater effluent were below the low end of the median range reported in other peer-reviewed literature for imidacloprid and fipronil (20 and 7% lower, respectively), and 48−275% higher than those reported for the fipronil degradates. 1

Although the present study was not designed to directly determine removal (e.g., influent and effluent sampling times were coincident rather than lagged by the average hydraulic residence time), the comparison of influent and effluent concentrations is still useful in adding to the body of knowledge regarding the treatability of pesticides within the waste stream. The percent change in concentration between influent and effluent was calculated as

percent change = 100 × (C_influent − C_effluent)/C_influent

The minimum percent change was determined by using the maximum of the measured result or the LOQ as the effluent value; therefore, the actual removal efficiencies could be higher than those reported here. The median percent removal for pyrethroids was >90% for bifenthrin, cypermethrin, cyfluthrin, permethrin, and cyhalothrin and ≥80% for deltamethrin, prallethrin, and bioallethrin (Table 1). Weston et al. observed similar removal of pyrethroids from the aqueous process stream within a Sacramento, CA WWTP, with >84% removal of bifenthrin, cyhalothrin, cypermethrin, and permethrin during six monitoring events. 3 It is important to note that the method detection limits were above U.S. EPA aquatic toxicity thresholds for some samples; thus, a non-detect in effluent does not confirm a lack of potential toxicity for any of the permethrin samples, three deltamethrin samples, and one bifenthrin sample (Table S7). All other effluent samples were below their respective minimum aquatic benchmarks, indicating a significant reduction in potential ecological risk.

Removal of the neonicotinoid imidacloprid was less complete, with a median removal value of 46% (10−62%). Two previous studies have demonstrated that imidacloprid is incompletely removed by typical municipal wastewater treatment operations. 2,15 The amount of imidacloprid remaining in treated effluent in the current study is still of ecological concern because, on five out of seven monitoring dates, the imidacloprid in the effluent exceeded the 10 ng/L aquatic toxicity benchmark by factors of 1.03 to 13.

Addressing the removal of fipronil during wastewater treatment operations is more complex, since five different fipronil degradation products were routinely detected in this study and they are formed by different environmental processes (photodegradation, aerobic biodegradation, anaerobic degradation, and hydrolysis). The median reduction of 21% for the parent compound between influent and effluent indicates moderate transference to byproducts or sorption and subsequent settling within the facility (Table 1). The median total fiprole removal was 33%, similar to Sadaria et al. (2016), who observed a 35% reduction in the median total fiprole concentration. 2 Regardless, the incomplete removal/transference of fipronil poses a potential ecological risk to sensitive aquatic species, with all seven effluent concentrations of the parent chemical exceeding the 11 ng/L aquatic benchmark value. Influent total fiprole loads were highly variable, ranging from 2 to over 20 g/d. The parent compound contributed the majority of fiprole loading throughout the sewershed.
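The minimum percent change convention above is easy to misread, so a short sketch with invented numbers shows how substituting the LOQ for an effluent non-detect bounds the true removal efficiency from below:

```python
# Minimum percent change between influent and effluent, following the
# convention above: when effluent is a non-detect, the LOQ stands in for the
# result, so the computed value is a lower bound on the true removal.
def min_percent_change(c_influent, c_effluent, loq):
    """percent change = 100 * (C_in - C_out) / C_in, with C_out >= LOQ."""
    c_out = max(c_effluent, loq)
    return 100.0 * (c_influent - c_out) / c_influent

# Hypothetical bifenthrin sample: 50 ng/L in influent, effluent non-detect (<1 ng/L)
print(min_percent_change(c_influent=50.0, c_effluent=0.0, loq=1.0))  # 98.0
```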
Of the degradates, sulfone and sulfide had the largest average contributions to fiprole loading in influent and effluent, each ranging from 7 to 14% of the total mass loads (Figure 3). On average, the desulfinyl and amide forms combined contributed less than 8% of the total fiprole loads. Findings regarding fiprole speciation in municipal wastewater in this study are largely consistent with the work of Sadaria et al., who found that the amide and desulfinyl forms were minor contributors to overall speciation, with concentrations frequently below detection limits in influent, while the sulfone and sulfide forms were always detectable in influent samples. The primary qualitative difference between our results and those of Sadaria et al. is that, for most of the plants they studied, fipronil sulfone concentrations were much higher than fipronil sulfide concentrations; in our study, the two species were nearly equal in concentration and loading. Reasons for this discrepancy are not immediately clear. 2 Fipronil sulfide consistently had the highest percent decrease in concentration of all fiproles, while fipronil desulfinyl effluent concentrations increased by an average of 264% relative to incoming influent concentrations. This may suggest that aerobic metabolic and photolytic processes have a stronger influence on fiprole speciation than anaerobic metabolism within the treatment train.

Sub-sewershed Evaluation.

This study was not designed to quantify the magnitude of concentrated pulses, which would require high-frequency grab samples; however, the variability within the sewershed can highlight potential concentrated sources. 9 For each pesticide, a Kruskal−Wallis test was conducted to identify statistically significant (p < 0.05) differences by lateral sampling location (n = 9). While pesticides were ubiquitous throughout the sewershed, significant differences in concentrations were noted. Cypermethrin (p = 0.004) and prallethrin (p < 0.001) concentrations varied significantly by site. This result appears to be largely a function of the consistently high concentrations observed at one lateral sampling site (Table S8). In addition to cypermethrin and prallethrin, the highest median concentrations of bifenthrin, bioallethrin, cyhalothrin, deltamethrin, and permethrin were observed within the same lateral. It is unclear what factor is driving the elevated pyrethroid concentrations found within this lateral. However, the overlying area is composed primarily of residential land (79%), largely zoned for high-density residential parcels (Figure S1); this lateral had the highest percentage of residential zoning among the laterals with available land use data. This could indicate that indoor pesticide use is more concentrated within high-density residential areas, leading to higher pesticide loading.

Temporal Variability.

A similar approach was undertaken to evaluate seasonal differences in lateral concentrations by sampling event. Several pesticide concentrations were observed to vary significantly by sampling event, including bifenthrin, cyfluthrin, cyhalothrin, cypermethrin, permethrin, chlorpyrifos, imidacloprid, and fiproles (Table S9). For the pyrethroids, there is a general trend of lower mass loading in the May and June sampling events, with a dramatic increase from July through January (Figure 4). Maximum mass loads of most pyrethroid insecticides were in the range of 1−10 g/d, except for permethrin, which had a maximum load an order of magnitude larger.
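Mass loads such as those in Figure 4 combine a 24 h composite concentration with the daily flow at the sampling point; the unit conversion is simple but worth making explicit. A minimal sketch (the example values are invented, not measurements from this study):

```python
# Daily mass load from a composite concentration and a daily flow:
#   load [g/d] = C [ng/L] * Q [ML/d] * 1e6 [L/ML] * 1e-9 [g/ng] = C * Q / 1000
def mass_load_g_per_day(conc_ng_per_l, flow_mld):
    return conc_ng_per_l * flow_mld / 1000.0

# Hypothetical example: 100 ng/L of permethrin at the 68 MLD dry-weather flow
print(mass_load_g_per_day(100.0, 68.0))  # 6.8 g/d
```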
The mass loading of fipronil and its derivatives followed a pattern similar to that of bifenthrin, cyfluthrin, and deltamethrin, with very low loads in May and June, increased loads from July to November, and distinctly higher loads in January. Loads of imidacloprid followed a nearly inverse trend compared to the other active ingredients, with maximum loads in May/June, moderate loads during July−September, and very low loads in the cooler months (November, January).

The significant increase in fiprole and several pyrethroid concentrations observed during the January 2017 event could be attributed to storm water runoff unintentionally entering the waste stream. The influent flow rate in January (111 MLD) was significantly higher than the upper bound of the 95% confidence interval for influent flow (Figure S2). The large increase in influent flow was caused by a significant storm system that delivered 7.1 cm of precipitation (based on the nearest California Irrigation Management Information System site) over the period Jan 7−10, 2017. The wastewater treatment plant operations staff reported that sewer flows were significant enough to lift manhole covers at various locations across the system during this storm. This suggests an influx of pesticide loads into the waste stream from surface runoff, despite the system typically receiving negligible storm water input. 19 In contrast, imidacloprid loads remained low in January; it is possible that residual concentrations of imidacloprid on the landscape had been depleted during previous storm systems (Figure S4).

There were a few instances where a particular lateral sampling event corresponded with an elevated mass flux, indicating a use or event that resulted in a large pulse of pesticides. In some cases, single samples collected within a lateral represented a disproportionate share of the loading across all sampling events, including cyfluthrin (31%, site F in Jan 2017), fipronil (20%, site A in Jan 2017), fipronil amide (36%, site B in Jan 2017), fipronil desulfinyl amide (64%, site B in Jan 2017), and permethrin (27%, site E in Nov 2016). Except for permethrin, these instances of high fluxes within the laterals occurred during the January 2017 storm event, during which there was evidence of urban runoff entering the system.

To evaluate whether sample timing has any effect on observed concentrations, sampling events were grouped by the day of the week on which they were collected. Samples collected during the weekend included the June and September sampling events, while the May, July, August, October, and January events occurred during the weekday (Table S5). While there were not enough data to perform a robust statistical analysis, there were no obvious trends based on timing. Most pesticide concentrations were slightly higher during the weekday; however, average bifenthrin and imidacloprid concentrations were higher during the weekend (Figure S3).

Potential Sources: What Are the Major Contributors?

Sutton et al. present a comprehensive conceptual model that describes potential pesticide sources and associated pathways that can introduce pesticides to wastewater influent. 1 Single source monitoring data, in conjunction with an evaluation of registered product use types for the individual active ingredients detected in influent samples, provide a line of evidence for identifying significant source pathways for pesticides entering the waste stream.
The information gained during this analysis is utilized to further refine the conceptual model of transport pathways from urban sources into the waste stream (Figure S5). Among possible pathways, there is a growing body of evidence that topical pet products can contribute significantly to pesticide loading within the waste stream. 2,11,21 Fipronil, imidacloprid, and permethrin have been identified as the most common active ingredients (AIs), by mass, within spot-on pet products. 11 Of these three AIs, fipronil has the fewest registered uses in California and is primarily used in spot-on pet and structural pest control products. Structural applications are not expected to be a significant pathway for fipronil into municipal wastewater, as all registered uses are either outdoors or within structural voids. As previously discussed, the extreme precipitation event during January may have provided a temporary direct pathway for fipronil in surface runoff to enter the sewershed within the separate sewer system. Imidacloprid and pyrethroids, including permethrin, are registered for many uses in California besides topical pet products. These include products intended as indoor area sprays, bed bug control, and total release foggers, which may have the ability to transfer pesticides down the drain through direct contact and washing activities. 20,21

Previous work directly measured fiproles released during routine bathing of treated dogs and concluded that spot-on fipronil products are a significant source of fiproles to wastewater. 11 It was estimated that if just 25% of pet owners washed animals 7 days after application in locations plumbed to sewers, this could account for the entire per-capita mass loading observed in the San Francisco Bay area. While the previous study focused on fipronil, it is reasonable to assume that the same transport principles would apply to the other active ingredients. This hypothesis is supported by samples collected at the sub-sewershed locations. Fiprole and imidacloprid concentrations collected at the pet groomer location were significantly elevated above the maximum observed concentrations at the main lateral locations (Figure 5), providing direct evidence that topical pet products can be a significant source of these active ingredients to the waste stream. While lower than the maximum observed concentration, the median permethrin concentration at the groomer sub-sewershed site was approximately five times higher than the associated median concentration in the laterals, highlighting topical pet products as an important source for all three active ingredients. Twenty-five percent of currently registered residential products containing deltamethrin are also pet products; 21 however, all are formulated as pet collars. Given that all deltamethrin concentrations in the groomer samples were below the median lateral concentrations, this suggests that this product type results in minimal transference from the pet to wastewater.

Elevated median concentrations were observed at the commercial laundry sub-sewershed site compared to lateral concentrations for every monitored pesticide (Table S10). The median concentration of cypermethrin observed at the laundry sub-sewershed site was significantly greater than the maximum concentration found within any lateral sample (Figure 5).
An analysis of sales data for residential indoor-use products identified in previous store surveys revealed that the mass of cypermethrin sold was approximately an order of magnitude larger than that of the next highest active ingredient. 22 This is in agreement with an extensive analysis of product use and sales data designed to identify products with down-the-drain potential. 23 Over 80% of identified products containing cypermethrin available to consumers are labeled for indoor use only, with 45% of products formulated as total release foggers. Keenan (2009) found that upward of 30% of cypermethrin mass was available for transfer from various indoor horizontal surfaces after discharging fogger products indoors. 24 Cypermethrin is not currently present in topical personal care products, supporting the hypothesis that indirect transfer from surface applications to clothing prior to washing activities represents a pathway for this active ingredient (Figure S5). 20,21

An extensive shelf survey of pesticide products available to homeowners was conducted at retail locations during a similar time frame (Mar−May 2017) and in regional proximity to the monitored sewershed. The survey identified 140 products containing 66 individual active ingredients, including those found within influent samples. Cypermethrin (19) was identified in the largest number of products for indoor residential use, followed by deltamethrin (14), prallethrin (10), permethrin (8), cyhalothrin (8, γ and λ combined), imidacloprid (7), and bifenthrin (2). 22 The prevalence of these active ingredients within a wide range of readily available indoor use products suggests transport from a variety of potential application sources. However, foggers may be the most influential indoor application type for providing transferable mass to the sewershed: in an assessment of dispersion factors on horizontal surfaces, 100% dispersion was observed for fogger applications, followed by perimeter sprays (50%), crack and crevice treatments (15%), and spot applications (2%). 25 Although cyfluthrin (cyfluthrin and β-cyfluthrin combined) was identified in 14 homeowner products in regional store surveys, only four of these are registered for indoor applications, and cyfluthrin was not identified in any pet product. However, two cyfluthrin products are used to control bed bugs. 12,22 The potential for bed bug control products to contribute pesticides down the drain after residential use is unclear.

The median concentrations of bifenthrin, cypermethrin, and fipronil were slightly elevated at the pest control operator's sub-sewershed location, with median bifenthrin concentrations upward of 8 times the median lateral concentration. However, all concentrations were generally low. Considering that this location type would typically handle professional-grade pest control products containing a higher percentage of active ingredient than products available to homeowners, the low concentrations suggest that regulations on cleaning equipment and containers are working. However, future sampling efforts should isolate commercial laundry services in which pest control operator uniforms may be washed.

Implications.

Pyrethroids, fiproles, and imidacloprid were prevalent in wastewater influent throughout the study period. There was significant removal of pyrethroids from the aqueous process stream within the facility to below reporting limits.
Although the associated ecological risks could not be determined in all instances, this study suggests that treated wastewater may not be a significant source of pyrethroids to receiving surface waters. On the other hand, fiproles and imidacloprid were present in effluent at levels of potential ecological concern, suggesting that the WWTP should be considered when evaluating total pesticide loading to surface waters, particularly for these more hydrophilic compounds within the sewershed. The speciation within the facility suggests that photolytic and aerobic digestion are more influential drivers of fiprole transference than anaerobic metabolism.

Using the single source monitoring data from this study in conjunction with information gained by evaluating registered product labels, we are able to validate transport pathways for the detected pesticides in the development of a refined conceptual model for pesticides entering the waste stream (Figure S5). Elevated levels of active ingredients found at the pet groomer location further support the hypothesis that topical pet products are a significant source of pesticide loads to wastewater. All pesticide concentrations in samples collected at the laundry and the pest control operator were elevated above the respective lateral medians. The elevated level of cypermethrin above lateral concentrations provides evidence that applications of total release foggers may serve as a major source for pesticides to transfer down the drain. This also indicates that pesticides in registered products for home pest control can be transferred from the application point down the drain, likely through laundering/cleaning of materials that come into contact with treated surfaces. Information gained in this study may be utilized to inform down-the-drain evaluations as part of future pesticide registration processes to help predict the relative contributions of pesticides entering the waste stream to surface water loadings.

Future Needs.

While the data from this study provide much needed evidence of pesticide occurrence and associated sources within wastewater, data gaps remain in understanding the full impacts of this pollutant class as a down-the-drain concern. Target analytes in this study consisted predominantly of insecticides; future monitoring should expand to include other chemistries to evaluate the potential risk from other product types. Large-scale evaluations of monitoring data are required to determine the spatial extent of pesticides in the waste stream and to assess whether regional differences in concentrations exist. Long-term monitoring data are necessary to evaluate temporal trends and verify the seasonal variability observed during this study. It is critical to determine removal efficiencies resulting from various treatment technologies to identify the parameters responsible for removal. Also, a more comprehensive evaluation of the fate of pesticides within a facility is warranted, including sorption to the biosolid fraction, to provide a mass balance of chemical transport pathways for future modeling efforts. Lastly, assessing contributing source transport pathways not evaluated in this document is necessary to build a more complete model for pesticides entering the waste stream.

■ ASSOCIATED CONTENT
The Cellular Senescence Stress Response in Post-Mitotic Brain Cells: Cell Survival at the Expense of Tissue Degeneration

In 1960, Rita Levi-Montalcini and Barbara Booker made an observation that transformed neuroscience: as neurons mature, they become apoptosis resistant. The following year Leonard Hayflick and Paul Moorhead described a stable replicative arrest of cells in vitro, termed "senescence". For nearly 60 years, the cell biology fields of neuroscience and senescence ran in parallel, each separately defining phenotypes and uncovering molecular mediators to explain the 1960s observations of their founding mothers and fathers, respectively. During this time neuroscientists have consistently observed the remarkable ability of neurons to survive. Despite residing in environments of chronic inflammation and degeneration, as occurs in numerous neurodegenerative diseases, oftentimes the neurons with the highest levels of pathology resist death. Similarly, cellular senescence (hereon referred to simply as "senescence") is now recognized as a complex stress response that culminates in a change in cell fate. Instead of reacting to cellular/DNA damage by proliferation or apoptosis, senescent cells survive in a stable cell cycle arrest. Senescent cells simultaneously contribute to chronic tissue degeneration by secreting deleterious molecules that negatively impact surrounding cells. These fields have finally collided. Neuroscientists have begun applying concepts of senescence to the brain, including post-mitotic cells. This initially presented conceptual challenges to senescence cell biologists. Nonetheless, efforts to understand senescence in the context of brain aging and neurodegenerative disease and injury have emerged and are advancing the field. The present review uses pre-defined criteria to evaluate evidence for post-mitotic brain cell senescence. A closer interaction between neuro- and senescent-cell biologists has the potential to advance both disciplines and explain fundamental questions that have plagued their fields for decades.

Introduction

Many debilitating diseases affecting our modern population have resulted from the deterioration of biological processes suited for a 40-year lifespan. Exceptional examples are neurodegenerative diseases. Age is the single greatest risk factor for the most common neurodegenerative diseases, including Alzheimer's disease (AD), Parkinson's disease (PD), and amyotrophic lateral sclerosis (ALS). Sporadic neurodegenerative diseases, i.e., those not inherited, rarely affect adults before the age of 50 and are nearly absent in adults younger than 40 years old. In this way, evolution strongly favored nervous system health. Indeed, adaptation accounts for the success of humans in our recent history, a behavioral flexibility dependent on the nervous system. For these reasons, neurodegenerative diseases and central nervous system (CNS) injuries (e.g., stroke) are devastating because of the limited regenerative capacity of the mature tissues.

The complexity of the structure and function of the nervous system is achieved through extensive developmental processes and is maintained in part by the resiliency of the cells, which preserve function and resist activating cell death processes. The first evidence of this phenomenon was reported by Rita Levi-Montalcini and Barbara Booker in 1960 in their pioneering experiments demonstrating the critical role of nerve growth factor (NGF) in sympathetic neuron growth and survival [1].
Applying NGF antiserum to newborn mouse neurons resulted in 97-99% cell loss; the same strategy resulted in only a 34% neuron loss in the adult mouse [1]. Less differentiated cells that fail to interact with their target appear to die by a morphological process (nuclear cell death) that we can now attribute to apoptosis [2]. Apoptosis is an effective and efficient mechanism of cell death involving robust activation of caspases. More mature neurons appear to die in a slower process, termed cytoplasmic cell death, with prominent changes occurring in mitochondria, endoplasmic reticulum (ER), and lysosomes. Several forms of neuronal cell death have been described [3]. Reactivation of the cell cycle in post-mitotic neurons has also been reported to be an initiating event for neuronal death (reviewed in [4]). Alterations in mitochondrial function, fission and fusion, ER stress, protein misfolding and aggregation, autophagy, and expression of proteins associated with cell cycle regulation are observed in neurodegenerative diseases and are assumed to be processes associated with neuronal death. However, in a post-mitotic system, cellular evolution may have favored a pro-survival response for difficult-to-replace cells in post-mitotic tissues that simultaneously prevented malignancy in mitotically competent cells: senescence.

In 1961, Leonard Hayflick and Paul Moorhead reported a dogma-shifting observation from their cell culture experiments. At that time, primary cells grown in culture were believed to be immortal, with indefinite replicative potential. However, Hayflick and Moorhead reported cessation of growth and eventual loss of the lines routinely after 50 passages or one year in culture. They referred to the phenomenon as "senescence at the cellular level" and hypothesized that it was attributed to intrinsic factors [5]. Subsequent studies revealed that human cells track their cell divisions. Cultured human fibroblasts replicate 50-80 times and then no longer divide, which is referred to as replicative senescence or the "Hayflick limit" [6,7]. By mathematical definition, a "limit" can be approached but not achieved, which is ironically fitting for this phenomenon: translating these in vitro observations to tissues and living organisms has been proposed by many, but a consensus definition has not been reached. Toward this end, we frame senescence as a complex stress response that culminates in a change in cell fate.

Replicative senescence, as defined by Hayflick and Moorhead, has clearly defined underlying biology (telomere attrition) and functional phenotypes (inability to divide). Subsequent studies have identified exogenous factors that can cause cell cycle arrest through telomere-independent mechanisms. As the number of exogenous senescence-inducing factors has expanded, so has the number of unique cell types being interrogated. Numerous phenotypes have emerged as a result. Senescent cells have been identified using various morphology markers; gene, protein, and metabolic changes; and functional readouts, and have been the subject of earlier reviews [8][9][10][11]. A specific combination of phenotypes defining senescence currently does not exist [12]; however, most agree that it is a stress-induced change in cell fate which includes a stable cell cycle arrest and cell death resistance. The identity of the parent cell type and upstream signals has consequences for the post-senescence phenotype [13,14].
The resulting heterogeneity has presented challenges for identifying, defining, and studying senescent cells in vivo and across disciplines. Where biologists agree is that interpreting the senescence phenotype requires integrating various lines of distinct evidence placed in appropriate context. This is especially true for post-mitotic tissues such as the brain. The intent of this literature review is not to list studies that used the umbrella term "senescence" to describe a physiological response. Rather, it is to critically evaluate results from reports on brain cell senescence using pre-defined senescence-defining criteria: proliferative/cell cycle arrest, apoptosis resistance, the senescence-associated secretory phenotype (i.e., cytokines, chemokines, pathogenic proteins, exosomes, miRNA, enzymes, etc.), and senescence-associated β-galactosidase activity (SA β-gal) (Figure 1). The International Cell Senescence Association (ICSA) recently provided a list of key cellular and molecular features of senescence [9]. They acknowledged that post-mitotic cells may develop features of senescence; however, their recommendations largely focused on dividing cells, and some of the markers have major limitations when evaluating brain tissue (e.g., lipofuscin and SA β-gal). Here we provide an overview of senescent phenotypes, as relevant to post-mitotic cells, the assays used to assess them, and the markers relevant to senescence, each in the context of brain cell biology. We then review different brain cell types using these criteria. Overall, the present contribution aims to provide an accessible summary on senescent post-mitotic brain cells with criteria and interpretations relevant to the neurobiology of aging and disease.

Figure 1. Senescence-associated mechanisms may confer exceptional resistance to cell death but contribute to pathogenesis through inflammatory SASP. (a) The accumulation of stress, including protein aggregates and DNA damage, contributes to both senescence and apoptosis in post-mitotic cells. Protein aggregates including hyperphosphorylated tau, β-amyloid, and α-synuclein are seen in senescent cells, apoptotic cells, and patients with AD, PD, ALS, and DLB. (b) Mitotic cells exit the cell cycle and terminally differentiate into post-mitotic brain cells. Post-mitotic cell cycle re-entry can lead to cell death or senescence. In senescence, post-mitotic cells show stable cell cycle arrest, upregulate SCAPs, and release SASP. Neuronal SASP includes proinflammatory molecules and neurotoxic proteins. The SASP activates glia, drives inflammation and loss of neuronal connectivity, and perpetuates toxicity in a prion-like spread. SA β-gal can detect senescence, but it has a limited ability to discern between quiescent and senescent cells. AD: Alzheimer's disease; PD: Parkinson's disease; ALS: amyotrophic lateral sclerosis; DLB: dementia with Lewy bodies; CKI: cyclin-dependent kinase inhibitor; SCAPs: senescent cell anti-apoptotic pathways; SASP: senescence-associated secretory phenotype.

Identifying Senescent Brain Cells

Heterogeneity of senescent cells has been revealed through transcriptomic profiling [14][15][16]. The senescence phenotype is guided by context: differences in cell type, upstream stressor, and environment. To yield a deeper understanding of cellular senescence signatures across distinct cell types in vivo, a careful examination of multiple key biomarkers is needed, and this has been the subject of many reviews, for example [17].
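Because no single marker is decisive, transcriptomic studies typically score cells against a panel of senescence-associated genes and then require convergence with orthogonal evidence. A minimal sketch of that multi-marker strategy using the Python package scanpy is shown below; the input file and gene panel are illustrative assumptions, not a validated signature from any study reviewed here.

```python
# Sketch of multi-marker senescence scoring in single-cell/nucleus RNA-seq.
# The gene panel is illustrative only; validated, cell-type-appropriate
# panels should be chosen per study, and scores interpreted alongside
# orthogonal markers (see the criteria in Figure 1).
import scanpy as sc

adata = sc.read_h5ad("brain_snRNAseq.h5ad")  # hypothetical input file

senescence_panel = ["CDKN1A", "CDKN2A", "IL6", "CXCL8", "SERPINE1", "GLB1"]
sc.tl.score_genes(adata, gene_list=senescence_panel, score_name="sen_score")

# Flag candidate senescent cells as the top decile of the composite score;
# candidates still require validation with independent markers.
cutoff = adata.obs["sen_score"].quantile(0.9)
adata.obs["sen_candidate"] = adata.obs["sen_score"] > cutoff
print(adata.obs["sen_candidate"].value_counts())
```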
Below we provide an overview of this strategy, with specific focus on its utility for identifying senescent post-mitotic brain cells.

Absence of Proliferation/Stable Cell Cycle Arrest

Various indicators of cell cycle arrest, such as cell cycle inhibition and telomere attrition, have been discussed for mitotically competent brain cells, for example [18]. Despite the persistent dogma that neurons permanently withdraw from the cell cycle upon terminal differentiation, numerous studies have demonstrated the expression of cell cycle proteins in post-mitotic neurons, which can give rise to dysfunctional hyperploid cells [19]. A decrease in telomere length is associated with cell division; notably, telomere shortening has been observed in non-replicative neural brain populations in C57BL/6 mice in a cell cycle-independent manner [20]. Telomere shortening may occur in replication-independent scenarios in long-lived post-mitotic cells through oxidative stress and downregulation of the telomeric factor POT1 and shelterin subunit TRF2 [21]. While markers of proliferation may not necessarily indicate mitotic competency in post-mitotic cells, they may reflect cell cycle re-entry out of G0 into G1, which could make the cells vulnerable to apoptosis or senescence. Cell cycle re-entry has been estimated to occur in ~11.5% of post-mitotic cortical neurons based on DNA content variation, and in ~20% of post-mitotic neurons in AD based on both DNA content variation and expression of cyclin B1 [22,23]. The data collectively indicate a link between aberrant neuronal cell cycle activity and neuronal dysfunction and disease. For example, multiple studies link AD-associated Aβ [24][25][26] and phosphorylated tau [27][28][29] with aberrant cell cycle activity. Cell cycle re-entry in the absence of AD pathology has also been described [30] and put forth as a potential novel therapeutic target for neurodegenerative diseases [31]. In this context, an open question remains whether proliferation markers may also apply to a pre-senescent phase in post-mitotic neurons. Co-expression of cell cycle mediators, such as G1 proteins, in post-mitotic cells in the absence of apoptotic markers (e.g., caspase 3) suggests an arrest of aberrant cell cycle activity, consistent with senescence. Alternatively, post-mitotic quiescent cells may more readily transition to senescence than mitotically competent cells, for example through changes in lysosomal activity [32]. Nonetheless, measuring molecular signatures of cell cycle activity may provide evidence for or against a senescence stress response in post-mitotic cells. Examples of studies looking at stable cell cycle arrest are discussed in Section 3.

Cell Death Resistance

Senescent cells display enhanced survival over their non-senescent counterparts by activating senescent cell anti-apoptotic pathways (SCAPs) [33,34]. Similarly, post-mitotic cells, including neurons, acquire a greater resistance to cell death as they mature [35,36]. A complete understanding of neuronal cell death resistance is not available; however, some pathways have been identified [37][38][39][40]. Similarly, while some SCAPs have been identified for senescent cells, this is a burgeoning research area. It is tempting to speculate that SCAP-mediated degeneration resistance may contribute to (or use similar mechanisms as) post-mitotic cell death resistance. In this way, identifying molecular regulators of cell death resistance in post-mitotic cells may apply to senescence, and vice versa.
In response to injury, mitotically competent cells may proliferate; in post-mitotic cells, by contrast, cell cycle re-entry triggers degenerative processes [41]. In this review, we provide evidence that post-mitotic senescence in neural tissue may preserve cellular integrity by avoiding cell death [42]. Thus, cell cycle inhibitors such as the INK4 cyclin-dependent kinase inhibitors (CKIs) (i.e., p16, p18, p19) and the CK-interacting protein/kinase inhibitor protein CKIs (i.e., p21, p27, p57) may be protective and contribute to cell death resistance in post-mitotic cells [41]. Though a stable senescence cell cycle arrest may confer degeneration resistance to the affected post-mitotic cell, downstream consequences of its preserved survival may include neural network dysregulation and chronic inflammation through secreted factors.

Secretory Phenotype

Post-mitotic cells can produce a senescence-associated secretory phenotype (SASP) consistent with that of mitotically competent cells. For details on the definition, role, and common factors used to identify SASP in the brain, please refer to previous reviews on this topic [18,43,44]. Briefly, upregulation of NFκB activity and the consequential production of canonical pro-inflammatory markers may occur in post-mitotic cells similarly to dividing cells. Moreover, post-mitotic cells may produce unique SASP factors, like aggregation-prone proteins that impact protein homeostasis and drive neurodegenerative proteinopathies. For example, human postmortem brains from patients clinically diagnosed with AD, PD, and dementia with Lewy bodies (DLB) all show severe hyperphosphorylated microtubule-associated tau, β-amyloid, and α-synuclein loads, although the topographical distributions of the protein aggregates differ [45]. These protein aggregates may influence each other and synergistically promote the accumulation of one another [46,47]. Thus, SASP is an important component of post-mitotic senescence with implications for neurodegeneration.

Senescence-Associated β-Galactosidase

In-depth details of the SA β-gal assay specific to brain tissue were described in our previous review [18]. Briefly, SA β-gal detects lysosomal β-galactosidase activity at pH 6.0 [48]. While useful for distinguishing senescent cells in culture, it is detected in brain tissue independently of age or senescence [49]. The GLB1 gene encodes lysosomal β-D-galactosidase and is the origin of the SA β-gal activity, but SA β-gal's cellular roles and mechanism in senescence are not fully understood [50]. In post-mitotic cells, interpretation of SA β-gal requires extra caution. Purkinje neurons in the cerebellum, CA2 neurons, and a subset of cortical neurons all display SA β-gal even in young mice [49]. A recent study demonstrated that lysosomal activity mediates the transition from deep quiescence to senescence [32]. Given that neurons are quiescent, assigning positive staining to senescence versus quiescence becomes especially challenging in static tissue. Even in vitro, SA β-gal positivity has been reported in neurons in the absence of other markers of senescence [51]. A more detailed review of SA β-gal used to identify senescence in post-mitotic cells is provided in later sections.

Concluding Remarks on Identifying Senescent Cells

Evaluating senescence requires an in-depth understanding of the cell type and markers of senescence. Post-mitotic cells have different characteristics from mitotically competent cells that should be considered when evaluating for senescence.
In the following sections, we review studies reporting senescence in post-mitotic brain cells. We evaluate the methods used by key studies and compare them to our pre-defined criteria presented above and summarized in Figure 1.

Neurons

The adult human brain contains an estimated 86 billion neurons [52]. Barring neurodegenerative disease or brain trauma, nearly all cortical neurons (96-98%) remain alive during the lifespan. Their exceptional survival has been attributed to their restriction of apoptotic pathways, though the precise molecular details are not fully understood [16]. An appreciation for dysfunctional, not missing, neurons has emerged over the past decade. For example, age-associated cognitive decline has been attributed to changes in neuronal chemistry, metabolism, and/or morphology, but not necessarily the progressive loss of neurons [53]. Re-evaluation of the literature and accumulating experimental evidence suggest that age- and disease-induced stressors on neurons initiate a neuronal senescence stress response as a means to avoid active degeneration and cellular loss. However, as we discuss, these neuronal structural and functional changes contribute to pathogenesis in neurodegenerative diseases [49,[54][55][56]. For example, p16 and p21 expression has been reported in neurons and glial cells in postmortem motor and frontal association cortex of ALS patients [57], while microglia express p16, p53, and SASP in late-stage spinal cord of the ALS rat model [58] (for microglial senescence, please refer to our review of mitotically competent brain cells [18]). As terminally differentiated cells arrested in G0, neurons inherently fulfill one of the key defining features of senescence (near-permanent cell cycle arrest); alternatively, they may arrest in G1 after cell cycle re-entry, which has been described in numerous degenerative diseases [22,23] (Figure 1). This phenomenon is not unique to neurons; a recent review provides a discussion of post-mitotic senescence across tissues [42]. Here we review supporting evidence that neurons, like mitotically competent cells, have the ability to mount a canonical senescence stress response.

Neuronal Senescence in Tauopathies and Peripheral Neuropathies

Senescent cell heterogeneity, in part, arises from differences in the cell biology of the parent cell. Growing experimental evidence demonstrates that the phenotypic diversity of neuronal senescence reflects the heterogeneity of neuronal subpopulations. Historically, neurons have been classified by morphology, anatomical location and/or distinct shapes, and function, which can be further classified by direction, action on other neurons, discharge patterns, and neurotransmitter utilization. Recent methodologies, in particular single nucleus transcriptomics, have provided an even deeper insight into neuronal heterogeneity [59]. For example, MAPT encodes the microtubule-associated protein tau, which is often referred to as a "neuron-specific" or "axon-specific" protein. However, the diversity of tau proteins arises from extensive processing at the mRNA and protein levels. Six major tau protein isoforms, arising through alternative splicing, are expressed in the adult brain; post-translational modifications further amplify tau protein diversity by producing dozens of unique forms of tau protein that are differentially regulated and expressed based on developmental age and neuronal subtype [60,61].
Tau protein accumulation is the most common intraneuronal pathology among neurodegenerative diseases, though neuropathology and clinical presentations differ across diseases [62]. Among tauopathies, neurons containing neurofibrillary tangle (NFT) aggregates of heavily post-translationally modified tau are the closest correlate of neurodegeneration and dementia in AD, yet they are long-lived [63]. We recently determined that these neurons display a canonical senescence stress response [49]. Analyzing transcriptomic data from postmortem human brain provided the opportunity for a within-subjects comparison between neurons with or without NFTs. The transcriptomic and pathway analyses revealed expression patterns in NFT-bearing neurons consistent with senescence, including upregulated anti-apoptotic/pro-survival pathways and concomitant inflammatory and secretory pathways. Using four independent tau transgenic mouse models, we found evidence for DNA damage; aberrant cellular respiration; karyomegaly; and upregulation of cell cycle inhibitors, inflammation, and the inflammatory mediator NFκB. These phenotypes occurred concomitant with NFT formation and were reduced by genetically removing endogenous tau protein, indicating a molecular link between tau and neuronal senescence. Moreover, intermittent treatment with senolytics (dasatinib plus quercetin) caused a ~35% reduction in NFTs that coincided with a reduction in the senescence-associated gene signature (cell cycle inhibitors and inflammation). We did not observe neuronal senescence phenotypes in thalamic, midbrain, or cerebellar neurons. It remains unknown whether or not high expression of transgenic tau could ultimately drive neuronal senescence in these other neuronal subpopulations. However, work from other groups suggests that midbrain [56] and cerebellar neurons [54] may utilize molecular mediators other than MAPT/tau.

Overexpression of human non-mutated tau and its persistent phosphorylation also contribute to peripheral neuropathy and memory deficits [64]. Long-term and short-term memory were significantly impaired in female transgenic mice expressing all six human tau isoforms [64]. Peripheral neuropathy was evidenced by motor nerve conduction velocity (MNCV) slowing, paw tactile allodynia, paw heat hypoalgesia, and low paw density of intraepidermal nerve fibers in human tau mice compared to wild type mice [64]. Notably, neuronal senescence has been associated with cisplatin-induced peripheral neuropathy (CIPN). Primary dorsal root ganglia (DRG) neurons treated with cisplatin upregulate SA β-gal activity and expression of Mmp-9, Cdkn1a, and Cdkn2a, and display elevated translocation of HMGB1 compared to controls [65]. In a mouse model of CIPN, DRG neuronal populations upregulated the DNA damage response pathway and Cdkn1a gene expression as determined by single-cell RNA-sequencing. Neuronal senescence was further verified by increased protein expression of p21, p-H2AX, and NFκB-p65; SA β-gal; and lipofuscin granules [66]. Clearing p16- and/or p21-positive cells, either pharmacologically with ABT263 or by utilizing suicide gene therapy (i.e., the p16-3MR ganciclovir/herpes simplex virus thymidine kinase system), reversed CIPN as evidenced by improved mechanical and thermal thresholds [65]. Collectively, these studies indicate senescence-associated neuronal dysfunction in the central and peripheral nervous systems to which tau may be linked.
Neuronal Senescence in Parkinson's Disease

The H1 MAPT haplotype [67] and single nucleotide polymorphisms in MAPT have been associated with age of onset and progression of PD [68]. Despite the strong genetic association with MAPT, tau pathology occurs in only about 50% of patients with PD [69]. Though cognitive dysfunction may occur, PD primarily affects motor behavior. Neurodegeneration in PD predominantly occurs in the substantia nigra, where up to 70% of dopaminergic neurons can be lost in late disease stages [70,71]. These neurons express significantly lower levels of MAPT and tau protein than those in the cortex or hippocampus (~4-fold and ~6-fold difference, respectively) and do not develop tau pathology [72]. Instead, the hallmark protein deposit in PD is α-synuclein. Experiments in cell lines suggest that α-synuclein expression levels differentially regulate the cell cycle [73]; however, conclusive studies demonstrating α-synuclein-mediated neuronal senescence have not been reported.

To date, the most comprehensive work on dopaminergic neuronal senescence involves special AT-rich sequence-binding protein 1 (SATB1) [56]. SATB1 functions as a transcription factor and chromatin architecture organizer [74]. SATB1 is overexpressed in various tumors and has been referred to as a T-cell-specific transcription factor given its importance in T cell development. A meta-analysis of genome-wide association studies comparing PD cases with controls identified SATB1 as a candidate risk gene [75]. Neurons in PD-vulnerable brain regions (e.g., substantia nigra pars compacta) display lower levels of SATB1 than neurons from the less susceptible ventral tegmental area [76]. Genetically reducing SATB1 in dopaminergic neurons drives a neuronal senescence response including elevated p21 protein expression, karyomegaly, SASP, and mitochondrial dysfunction [56]. SA β-gal and lipofuscin, hallmarks of senescence in mitotically competent cells that also co-occur in neurons, were also observed. Mechanistically, SATB1 repressed dopaminergic neuron senescence by binding the regulatory region of CDKN1A. In the absence of SATB1, expression of the CDKN1A-encoded protein, p21, increased to perpetuate the neuronal senescence stress response. When the authors reduced Cdkn1a in SATB1 knockout neurons, fewer senescent cells (as determined by SA β-gal) were observed, providing evidence for a mechanistic link between SATB1 and p21-mediated neuronal senescence. Interestingly, reducing SATB1 in cortical neurons did not modulate Cdkn1a/p21 levels, which was attributed to a more open Cdkn1a locus in dopaminergic than in cortical neurons. In contrast, tyrosine hydroxylase-expressing neurons require Satb1 expression for their survival and will undergo neurodegeneration within three weeks of Satb1 downregulation [76]. This observation indicated that de-repression of Cdkn1a and the concomitant increase in p21 expression caused apoptosis and clearance by microglia. Evidence for this was observed as neuronal SASP production and concomitant microglia co-localization with tyrosine hydroxylase-positive neurons. Follow-up experiments depleting microglia after Satb1 reduction would conclusively demonstrate whether or not Cdkn1a-expressing dopaminergic neurons fulfill the apoptosis-resistance criterion of neuronal senescence. Indeed, elevated p21 expression induced apoptosis in vitro, indicating that it may not confer neuronal apoptosis resistance [77]. Nevertheless, the study by Riessland et al.
determined a dopaminergic neuron-specific role of SATB1 in modulating Cdkn1a/p21 expression and downstream senescence phenotypes, including karyomegaly, mitochondrial dysfunction, production of SASP, lysosomal dysfunction, and the presence of SA β-gal and lipofuscin [56].

In PD, disease-related stressors on neurons contribute to defects in several cellular systems, ultimately involving alterations in Bcl-2 family signaling, JNK activation, p53 activation, and expression of cell cycle regulators [78]. While many of these processes, including those addressed above, are thought to contribute to neuronal degeneration, some are also hypothesized to reflect survival-promoting mechanisms such as senescence. More recent studies focused on neuronal senescence in PD have revealed that overexpression of mutant p53, p21, or mutant leucine-rich repeat kinase 2 (LRRK2) increased SA β-gal and αSyn protein expression and fibril accumulation in vitro [77]. Transgenic mice expressing the same mutant LRRK2 G2019S displayed elevated oligomeric αSyn, β-galactosidase, and p21 expression. The increase in αSyn was due to impaired degradation, not increased transcription [77]. The results suggest that the LRRK2 G2019S mutation may activate the p53-p21 senescence pathway, which is upstream of α-synuclein accumulation. While suggestive of senescence, the study did not evaluate cell cycle activity, apoptosis resistance, or SASP production in the affected cells. Nonetheless, future studies to dissect if or how PD mutations may interact with SATB1 would elucidate whether these pathways converge on a common senescence-associated pathway relevant to PD pathogenesis.

An intellectual framework for proteinopathy-induced senescence in neurodegenerative diseases was first proposed in 2009 by Golde and Miller [79]. The idea warrants further study, as the emerging data from mechanistic studies that have directly tested this hypothesis (i.e., in tauopathy, α-synucleinopathy, and β-amyloid models) indicate that post-mitotic neurons are especially vulnerable to protein aggregation stress, as highlighted in the aforementioned studies.

Neuronal Senescence in Aging

Senescent cells accumulate with advanced age even in the absence of disease. In 2012, Diana Jurk et al. evaluated neuronal senescence in naturally aged mice with or without increased DNA damage induced by genetically manipulating telomerase [54]. Age-associated DNA damage was associated with neuronal senescence in the brains of 32-month-old mice [54]. DNA damage foci, as determined by γH2A.X immunostaining, were elevated in cerebellar Purkinje and cortical neurons from 32-month-old mice compared to 4-month-old mice. These neurons also displayed activated p38 MAPK (phosphorylated at Thr180/Tyr182), indicative of DNA double-strand breaks. Oxidative stress was assessed by visualizing cells with elevated levels of the lipid peroxidation product 4-hydroxynonenal (4-HNE). Immunostaining with 4-HNE revealed cytoplasmic granular accumulation within the same subpopulations of cells. Similarly, these large neurons expressed higher levels of the inflammatory protein IL-6 than other cell types. SA β-gal activity and lipofuscin (as measured by autofluorescence) showed similar overlapping patterns.
Given the overlapping co-staining of multiple marker combinations, the authors hypothesized that DNA damage (γH2AX) increased with advanced age, which activated the DNA damage response (p-p38 MAPK) to induce a senescence-like pro-inflammatory (IL-6) and pro-oxidant (4-HNE) phenotype similar to that of mitotically competent cells (lipofuscin and SA β-gal). To begin evaluating mechanistic mediators of the senescence phenotype, they utilized transgenic mice with telomere dysfunction, with or without Cdkn1a. Neurons from mice with telomere dysfunction (late-generation telomerase knockout mice, F4 TERC-/-) displayed elevated levels of γH2AX, p-p38 MAPK, 4-HNE, and IL-6 compared to those with one functional copy of TERC. The genetic removal of Cdkn1a modulated these phenotypes regardless of telomerase activity; however, genotype- and cell type-specific effects were observed. For example, in TERC wild type mice, the absence of p21 significantly altered only the 4-HNE phenotype, and only in Purkinje neurons (not cortical neurons). In contrast, in mice with telomere dysfunction, removing p21 did not modulate 4-HNE; instead, the absence of p21 significantly reduced γH2AX and IL-6 in Purkinje neurons and p-p38 and IL-6 in cortical neurons. These results again highlight the heterogeneity of the senescence stress response across neuronal subpopulations. Nonetheless, removing p21 robustly reduced inflammation, as assessed through IL-6, in both cellular populations, providing evidence that neuronal senescence may contribute to sterile inflammation with advanced age.

Insulin provides trophic support and drives excitatory signaling in neurons [80,81]. A loss of neuronal sensitivity to insulin, referred to as insulin resistance, coincides with neuronal dysfunction and disease. The mechanisms driving insulin resistance in brain cells are not well understood, but risk factors include advanced age, obesity, peripheral insulin resistance, and metabolic dysfunction [82,83]. Recent studies in mice have demonstrated that brain insulin resistance induces neuronal senescence, which leads to synaptic dysfunction [55,84]. In these studies, insulin resistant neurons displayed several molecular, functional, and morphological changes consistent with senescence [55]. Specifically, mice that developed spontaneous peripheral insulin resistance at either young (3-months-old) or old (24-months-old) age also displayed signs of brain insulin resistance (i.e., elevated insulin in the CSF and elevated pIRS1 (Ser307 and Ser612)) and senescence (i.e., neurite loss, elevated Cdkn1a and Cdkn2a, and SA β-gal activity). This finding indicates that insulin resistance, like tau accumulation or loss of SATB1, may drive premature neuronal senescence in the absence of advanced age. The insulin resistant mice, regardless of age, performed poorly on cognitive behavior tasks, indicating that neuronal insulin resistance/senescence co-occurred with poor brain function. Mechanistically, chronic insulin was shown to reduce hexokinase 2, impair glycolysis, and increase levels of p25, a potent activator of both CDK5 and GSK3β. The simultaneous signals from CDK5 (neuronal cell death) and β-catenin (cell cycle re-entry) pushed neurons to enter a senescence-like state. A detailed signal transduction cascade was elucidated in vitro, whereby insulin increased Ccnd1 and Cdkn2a expression and the nuclear localization of β-catenin, cyclin D1, and p19ARF; the increases in p16INK4a and PML occurred later. Aberrant β-catenin also induced a parallel p53-p21 senescence pathway.
The authors concluded that chronic insulin signaling induced a neuronal senescence phenotype through the over-stabilization and nuclear localization of β-catenin. Tau phosphorylation was not assessed, but given the increased activity of the tau kinases Cdk5 and GSK3β and parallels with findings in Musi et al. [49], it is tempting to speculate that aberrant tau may also contribute to insulin resistance-mediated neuronal senescence.
General Considerations for Evaluating Neuronal Senescence
Observations across the aforementioned studies highlight the complexity of applying canonical senescence measures to post-mitotic cells. For example, we caution against the use of lipofuscin and SA β-gal for neuronal senescence, as these markers seemingly reflect shared phenotypes among neurons, across age and/or disease, that require further investigation into their association with other senescence markers. The best example is provided by cerebellar Purkinje neurons, which display SA β-gal throughout the lifespan [49]. Jurk et al. noted, "the frequencies of neurons showing multiple markers of a senescent phenotype are very substantial, going well beyond 20% in Purkinje cells already in young mice brains" [54]. A key readout for this conclusion was SA β-gal staining. Given the early stages of defining neuronal senescence in vivo, it remains unknown whether SA β-gal positivity truly reflects senescence in Purkinje neurons, which could become senescent in early life due to their high energetic and metabolic demands. Other studies have demonstrated that cerebellar Purkinje neurons can survive and function as polyploid cells [85]. Neuronal polyploidy suggests that DNA replication occurred, but that neuronal mitosis stalled. Indeed, hyperploid neurons have been reported in preclinical and mild stages of AD, as evidenced by immunofluorescence and slide-based cytometry methods cross-validated by chromogenic in situ hybridization [86]. These neurons avoid apoptosis, upregulate several cell cycle mediators and survive for months in the adult mouse brain, which meets several criteria of a senescent cell. Importantly, cerebellar Purkinje neurons are indispensable for motor movement control. Notably, gait speed, coordination, and balance are significant predictors of mortality [87,88]. It is tempting to speculate that senescence of these neurons may contribute to the overall decline in health and increased mortality with advanced age. Alternatively, the physiological function of these neurons may require signaling through cellular and molecular pathways resulting in phenotypes typically attributed to senescence. For example, we routinely observe neuronal lipofuscin throughout the lifespan, though it notably increases with age; similarly, we observed high levels of SA β-gal activity in these same neuronal populations throughout the mouse lifespan [49]. Moreno-Blas et al. also proposed that SA β-gal may not be a reliable marker of senescence by itself [89]. Despite cortical neurons expressing senescence-associated phenotypes such as p21, γH2AX, DNA ruptures, lipofuscin, SASP, and irregular nuclear morphology, they observed normal nuclear morphology in some neurons with high SA β-gal [89]. Instead, their data suggested that autophagy impairment/dysfunction, perhaps through lysosomal fusion with autophagosomes, critically contributed to the neuronal transition from quiescence to senescence, similar to that reported in [32].
Since SA β-gal positivity overlaps with lysosomal dysfunction, it may be useful for narrowing down potential senescent cell candidates; however, as indicated by Moreno-Blas [89] and several studies reviewed here (i.e., [49,51,54]), it cannot be used in isolation. Similarly, neuronal lipofuscin staining was first reported in children in 1903 and has later been confirmed in several studies, where it occurs in at least 20% of neurons by 9 years of age [90]. These aggregates of oxidation products of lipids, proteins, and metals autofluoresce and non-specifically bind antibodies, which can complicate interpretation of immunofluorescence assays, thus requiring multiple controls. The pigment granules change with aging, progressively increasing in size and shifting in subcellular localization; thus, appropriate age-matched negative controls and antibody controls are necessary to interpret results. Within the aging field, the increased rate of lipofuscin formation and accumulation is considered a hallmark of both replicative and stress-induced senescence [91,92], and methods for its specific staining (i.e., Sudan Black B) are increasingly used to detect senescence in vitro and in vivo [93]. It is our opinion that at this time both lipofuscin and SA β-gal require further investigation before they are used as decisive markers for neuronal senescence.
Concluding Remarks
Differentiated neurons are remarkably apoptosis resistant, but their vulnerability to excitotoxicity increases with age [94]. Neurons inherently lack the option to divide, but they upregulate cell cycle proteins in response to stress. The inability to replace these critical cells, indispensable for maintaining life, may have placed strong evolutionary pressure to favor stress-induced senescence over apoptosis. In this way, neuronal survival would be maintained, though the number of dysfunctional cells would increase with advancing age. Indeed, this is what is observed in the human brain [52,53]. As the burgeoning field of neuronal senescence advances, we expect that the next wave of studies will reveal additional molecular regulators, clarify pathways previously identified, and differentiate between shared pathways and neuron subtype-specific mechanisms. Additionally, with the increasing use of single cell technologies, we anticipate an increased ability to identify, track and study senescence with greater clarity on the phenotype(s) and how they change across the lifespan and in disease.
Astrocytes
Astrocytes are an abundant and heterogeneous cell population within the central nervous system (CNS). They comprise 20-40% of the total glial cell population in the brain, depending on region, developmental stage, and species [95-97]. Along with oligodendrocytes, astrocytes originate from the neural tube [98]. Astrocytes differentiate from the glial progenitor cells proliferating in the forebrain subventricular zone as they migrate outwards to other regions of the brain [99]. The majority of astrocytes are considered post-mitotic, and in the absence of pathology or disease, they display low rates of turnover and proliferation [100]. Astrocytes vary in function and morphology. Distinct types, including radial astrocytes, fibrous astrocytes, and protoplasmic astrocytes, have been elucidated within the CNS based on structure, distribution, and function, as well as their expression level of the different isoforms and splice variants of the intermediate filament protein glial fibrillary acidic protein (GFAP) [101,102].
Astrocytes have been implicated in maintaining water and ionic homeostasis, providing metabolic and structural support to neurons, and regulating the blood-brain barrier (BBB) [102,103]. They also cooperate with microglia to control local neuroinflammation and neuronal restoration following damage to the CNS. Similar to microglia, astrocytes prune synapses and remove cellular debris within the synapse in healthy and diseased brains [104,105]. Genes crucial for astrocyte function, such as Excitatory Amino Acid Transporters 1 (EAAT1) and 2 (EAAT2), the potassium transporter Kir4.1, and the water transporter AQP4, which are involved in glutamate, glutamine, potassium, and water homeostasis in the brain, have been shown to be downregulated when astrocytes become senescent [106]. Thus, their change in function associated with senescence can lead to detrimental effects, including the onset of various neurodegenerative pathologies [103,107-110]. Astrocyte senescence is often wrongly conflated with astrogliosis or astrocyte reactivity. Reactive astrogliosis involves structural changes to the astrocytes alongside cellular proliferation and migration [100,109]. Reactive astrocytes, also known as A1 cells, have been shown to be induced by activated neuroinflammatory microglia through the secretion of the cytokines IL-1α, TNFα, and C1q. Upregulated expression of GFAP is a known marker of reactive astrocytes, and its levels are also increased during aging [111,112]. In contrast, radiation-induced senescent astrocytes demonstrated a downregulation of GFAP [113]. Reactive A1s lose their ability to promote neuronal survival, outgrowth, synaptogenesis, and phagocytosis, and induce death of neurons and oligodendrocytes [114]. A1s have also been shown to be present in the brain in many neurodegenerative disorders, including AD, PD, and Huntington's disease [114,115]. The benefit of astrogliosis and subsequent scar formation is the protection of the surrounding neurons and tissue and the restriction of inflammation and pathology. However, dysfunction in reactive astrocytes can lead to neuronal dysfunction, and eventually degeneration, which can contribute to various CNS disorders. Many of these features are similar to a senescence phenotype, including morphology changes and secretion of pro-inflammatory molecules. Astrocytes undergo a senescence-like stress response, which has been referred to as "astrosenescence" and described as a functional change from neurosupportive to neuroinflammatory [116]. Oxidative stress, exhaustive replication, inhibition of proteasomes, and an increase in glucose concentration elicit an astrocyte response consistent with senescence, in vivo and in vitro (reviewed in [116,117]). For example, replicatively senescent primary human fetal [118] and rat [119] astrocytes displayed an arrest of growth and cell cycle progression; the human fetal astrocytes also upregulated gene expression of TP53 and CDKN1A. Astrocytes do not express TERT [120], and replicative senescence was not avoided with telomerase reverse transcriptase (hTERT) expression [118], indicating that telomere-length-independent mechanisms govern replicative senescence in astrocytes. Inhibiting p53 function with human papillomavirus type 16 E6, however, delayed the onset of senescence, implying a p53-dependent mechanism of replicative senescence in astrocytes [118]. Increased SA β-gal activity, detected with staining kits, was also observed in many of these studies [113,117,121,122].
Strengths and weaknesses of using this method for labeling brain cells have been discussed [18]. Radiation cancer therapy has the potential to induce senescence [123]. The effect of radiation therapy on astrocytes in vivo was examined by evaluating human brain tissue from individuals who received cranial radiation cancer therapy [113]. Senescent cells were identified by immunohistochemical labeling of p16, the heterochromatin protein HP1γ, and expression of ∆133p53, an inhibitory isoform of p53. Elevated p16 and HP1γ largely co-localized with astrocytes in patient brains that had received radiation, but not in control tissue. Expression of ∆133p53 was primarily in astrocytes, and its role in senescence was explored in vitro. These irradiated astrocytes in vitro had diminished ∆133p53 and developed a phenotype associated with other senescent cells, such as increased SA β-gal activity, p16, and IL-6. However, restoration of ∆133p53 expression prevented further senescence, promoted DNA repair, and prevented astrocyte-mediated neuroinflammation and neurotoxicity [113]. Collectively, this study [113] and others [106,124] have characterized the radiation-induced senescence phenotype in astrocytes to include decreased proliferation and increased SA β-gal activity, along with the typical increased expression of p53, p21, and p16, which were analyzed using western blot [113,121]. Senescent astrocytes downregulate genes associated with activation, including GFAP and genes involved in the processing and presentation of antigens by major histocompatibility complex class II proteins, while upregulating pro-inflammatory genes [121]. Increased expression of p16, p21, p53, and MMP3 has also been associated with astrocytes undergoing senescence and those isolated from aged brains [125]. The downregulation of genes associated with development and differentiation, coinciding with the upregulation of pro-inflammatory genes, manifests as functional changes (i.e., an inflammatory stress response). This may perpetuate a pro-inflammatory feedback loop that is stably maintained by senescence-associated changes in gene expression and transcript processing [126]. Astrocyte senescence increases with age in the human brain and in AD [127,128] and PD [129]. The consequences of astrocyte senescence are myriad. Functionally, astrocytes communicate with nearby neurons and the surrounding vasculature to clear disease-specific protein aggregates, including β-amyloid, the accumulation of which has been linked to the progression of AD [96,130]. The release of SASP factors by senescent astrocytes, including IL-6, IL-8, MMP3, MMP10, and TIM2, was found to contribute to β-amyloid accumulation, phosphorylation of tau protein, and an increase in NFTs [125,131]. An increased risk for PD has been linked to contact with the herbicide paraquat (PQ), which is an environmental neurotoxin. Complementary in vivo and in vitro approaches were used to evaluate mouse and human astrocyte responses to PQ [129]. PQ-treated astrocytes developed several features consistent with senescence, including upregulated Cdkn2a/p16. Importantly, senescent cell removal improved neurogenesis in the subventricular zone, reduced neuronal loss and rescued motor function deficits in PQ-treated mice [129]. Collectively, these results highlight astrocyte senescence as a mechanism of PQ-associated neuropathology and brain dysfunction, and as an appealing therapeutic target for the treatment of PD.
Concluding Remarks
"Astrosenescence" is a complex and heterogeneous process whose evaluation necessitates assessing astrocyte structure, distribution, function and molecular expression profiles. Measuring the expression level of GFAP [113,125] can help differentiate whether upregulated pro-inflammatory cytokine and chemokine expression reflects astrogliosis or astrosenescence [125,131]. The most consistently shared features across senescent astrocytes were arrest of growth and cell cycle progression and increased expression of p53, p21, and p16 [113,117,121], with some evidence of increased SA β-gal activity [117]. Collectively, the studies reviewed here indicate that functional changes associated with senescent astrocytes contribute to chronic neurodegenerative diseases and may propagate inflammation and induce senescence in surrounding cells [132,133]. Targeting them for removal represents an opportunity to intervene in neurodegenerative diseases.
Endothelial Cells
Endothelial cells form a single layer of cells, called the endothelium, that lines the blood vessels of the circulatory system. They have an array of functions in vascular homeostasis, such as regulating blood flow, immune cell recruitment, maintaining blood vessel tone, and hormone trafficking [134,135]. While endothelial cell function is heterogeneous and tissue-specific, several studies have demonstrated that endothelial cells can become senescent in adipose tissue, coronary arteries, and the human umbilical cord, based on observations of morphology changes, SA β-gal activity, and SASP assessed by DNA microarray [136-138]. While there is a large literature describing senescent endothelial cells throughout the body, the focus of this section turns to brain microvascular endothelial cells. Brain endothelial cells are mostly post-mitotic, with minimal proliferation [139-141]. They express a high density of tight junction and adherens junction proteins and display high transendothelial electrical resistance [142-144]. Functionally, brain endothelial cells contribute to the BBB and regulate local cerebral blood flow as a part of the neurovascular unit (NVU), and thus have important implications for brain diseases [145-147]. The BBB is a highly selective semipermeable barrier with tight junctions that closely regulates the biochemical composition of the brain by restricting the free diffusion of nutrients, hormones, and pharmaceuticals [148]. The tight junctions force molecular traffic to take place through the endothelial membrane by sealing the paracellular space and by establishing a polarized, transporting epithelial and endothelial phenotype [149]. During aging, endothelial cells experience senescence-associated stressors, including oxidative stress, DNA damage accumulation, telomere shortening, increased NFκB signaling and decreased Sirt1 expression [136,150]. Recent studies suggest that brain endothelial cell senescence could contribute to BBB dysfunction through neurovascular uncoupling and reactive oxygen species [151]. Indeed, increased BBB permeability and vascular dysregulation have been observed in patients with early cognitive dysfunction, cerebral microvascular diseases, and AD [152-154]. However, the co-occurrence of senescent endothelial cells with aging and disease makes it difficult to discern whether they are upstream mediators or downstream consequences of disease.
BBB dysfunction has been observed in patients with AD, Multiple Sclerosis (MS), traumatic brain injury (TBI), and stroke, featuring overexpression of MMP-2 and MMP-9 [155-159]. Molecular cascades such as the activation of MMPs have been suggested to induce senescence [160]. Thus, it is possible that brain insult leading to BBB dysfunction causes senescence as well. Other studies also highlight the difficulty of disentangling cause and effect among brain cell senescence, aging and disease. Emerging evidence suggests that aberrant angiogenesis, and potentially endothelial senescence, may occur as bystander effects of other cells' SASP. Studies using the rTg4510 mouse model of tauopathy have revealed an increased number of blood vessels and concomitant upregulation of angiogenesis-related genes such as Vegfa, Serpine1, and Plau [166]. Confocal imaging demonstrated aberrant vasculature near neurons with tau-containing NFTs, which display a senescence-like phenotype (please refer to Section 3: Neurons) [49]. Together, these studies suggest that factors secreted by senescent NFT-containing neurons may negatively impact surrounding cells, which could drive aberrant angiogenesis. Alternatively, aberrant cerebrovasculature could be upstream of tau accumulation and contribute to NFT formation. To translate these studies to human clinical conditions, postmortem human AD brains with tau pathology were investigated for cerebrovascular senescence [167]. Cerebral microvessels were isolated from 16 subjects with a Braak NFT score of V/VI (B3) and 12 subjects with a Braak NFT score of 0/I/II (i.e., high neuropathology versus low neuropathology). Upregulation of senescence was inferred from elevated expression of Serpine1, Cxcl8, Cxcl1, Cxcl2, Csf2, and Cdkn1a; however, other markers of senescence were not evaluated [167]. Whether tauopathy causes endothelial senescence and induces a leaky BBB, and/or endothelial senescence affects the vascular microenvironment, will require further investigation [168].
Concluding Remarks
Most of the aforementioned studies examined brain endothelial cell senescence by analyzing the expression of senescence-associated genes [161,163,167]. Some studies also examined SASP genes [163,167]. Future studies are needed to evaluate cell cycle arrest, SCAPs, DNA-damage responses, and resistance to apoptosis to define and validate senescence in brain endothelial cells [169,170]. Of interest for future studies will be determining brain region-specific differences in endothelial senescence and better identifying their mechanistic impact on neighboring cells and the local environment.
Oligodendrocytes
Oligodendrocytes (OLs) are derived from oligodendrocyte precursor cells (OPCs) in a highly regulated process [171]. OPCs differentiate into pre-OLs, and later into mature, myelinating OLs in the presence of differentiation-promoting transcription factors [172]. The primary role of mature OLs is the myelination of neuronal axons in the CNS. Additionally, OLs play a role in providing metabolic support to myelinated axons, especially axons that spike at high frequencies [173]. OLs have also been implicated in information processing, and defects in OL maturation are linked with behavioral abnormalities [173,174]. OLs are highly vulnerable to oxidative stress and mitochondrial injury, and OL loss occurs upon exposure to inflammatory cytokines [171,175,176].
OLs are also highly susceptible to the accumulation of DNA damage during normal aging and have been indicated as a potential upstream cause of cellular aging leading to neurodegeneration, as illustrated by the involvement of myelin in several neurodegenerative disorders [175,176]. DNA damage is a known mediator of senescence, suggesting a potential relationship between senescence of oligodendrocytes and neurodegenerative disorders. Senescent OLs could result in defective myelination, as seen in several neurodegenerative disorders [177-179]. For instance, loss of OLs can lead to demyelination, as seen in MS. Only a few studies exist that try to validate senescence in OLs. A rodent model carrying a novel senescence reporter (ZsGreen driven by the p16 promoter), crossed with the established APP/PS1 AD model, was used to look for senescence in different cell types, including OLs [180]. OPCs showed upregulation of p21, p16, and SA β-gal activity [180]. However, no senescence was observed in OLs (immunohistochemically stained with the OL marker CNP and the ZsGreen p16 senescence reporter), while OPCs were senescent and unable to differentiate into OLs [180]. It is likely that an increased susceptibility of OPCs to the microenvironment increases the incidence of senescence in these cells compared to OLs. It may be possible that senescence in OLs occurs through p16-independent mechanisms. For example, a recent study reported an age-associated increase in p16-positive oligodendrocytes, but they were not cleared using senolytic approaches [181]. Brain cell type-specific responses to senolytic clearance [181] highlight the heterogeneity of senescent cells even within the same tissue, which may (in part) reflect cell type diversity in complex tissues such as the brain [59,182,183]. In a study of white matter lesions in frozen postmortem human brain tissue from patients who were over 65 years old, OLs exhibited elevated SA β-gal [184]. Immunohistochemistry was used to double-label white matter tissue with SA β-gal to identify cell types [i.e., astrocytes (GFAP+), microglia (CD68+), and oligodendrocytes (OSP+)]. Additionally, OLs also showed increased levels of 8-OHdG, a marker of oxidative stress, but did not display high levels of p16 [184]. Comparison of mRNA using qRT-PCR revealed a 1.5-fold increase in TP53, H2AX, and CDKN1B [184]. CDKN1B encodes p27Kip1, and its upregulation results in the induction of a senescent phenotype [185]. Elevated H2AX and TP53 are indicative of increased DNA damage and are also suggestive of a senescent phenotype. However, to confirm true senescence in these OLs, additional inspection of (1) SASP factors, (2) resistance to apoptosis through SCAPs, and (3) the presence of proliferation markers would be beneficial.
Concluding Remarks
Although there are several studies that examine OPC senescence (refer to our other review on senescence in mitotically active brain cells for literature regarding OPC senescence) in multiple disease processes, there are limited data regarding the senescence of OLs in natural aging and other disease models. The limited studies mentioned suggest that SA β-gal and gene expression analysis may be used to assess whether OLs are senescent, but the results are inconsistent. For example, SA β-gal activity was observed while high levels of p16 were not. Further study is required to establish the true senescence status of these cells and their potential role in aging and disease.
Summary
Cellular senescence has been best studied and characterized in mitotically competent cells [180]. However, most cells in the brain, including neurons, astrocytes, endothelial cells, and oligodendrocytes, have very low or no cell turnover and show mostly post-mitotic phenotypes. Most post-mitotic brain cells that survive brain development will remain throughout the lifespan. While this feature historically precluded the study of senescence in the brain, because the early definition required mitotic competency, brain cell types are highly susceptible to acquiring a lifetime's worth of damage known to drive senescence (Table 1). These include oxidative stress, DNA damage, and protein accumulation, which impact cell cycle and secretory phenotypes. While senescent cells continue to survive due to their apoptosis resistance, they tend to partially lose (or change) their function and increase expression of pro-inflammatory molecules. SASP from senescent cells can affect the microenvironment in the brain through paracrine effects, causing neighboring cells, regardless of cell type, to become senescent [133]. In the brain, these dysregulations manifest as increased neuroinflammation, increased BBB permeability, loss of neuronal synapses, demyelination, and dysregulated metabolism [179,186]. Collectively, these features have been associated with impaired cognition, and clearance of senescent cells as a therapeutic strategy has been shown to reduce pathology, inflammation, and neuronal dysfunction [49,65,181,187].
Table 1. Biomarkers previously described to verify senescent cells.
Although senescence was initially, and exclusively, studied in mitotic cells, the literature reviewed here (Table 1) provides evidence that post-mitotic cells also undergo senescence as a complex stress response. The emergence of this new field, senescence in the brain, requires clarity in its defining features. While it is tempting to label cells as "senescent" in many of these studies, a thorough evaluation of cell biology placed in the context of the cellular environment must also be considered. As a result, the field of neuroscience has pressured senescence biologists to clarify definitions and labels. As neuroscientists, we are in the early stages of applying methodologies and principles from senescence biology to the brain. In return, neuroscientists have over 60 years of lessons and principles of exceptional resistance to cell death to share with senescence biologists. A closer interaction and sharing of concepts between neuroscientists and senescent cell biologists will propel both fields. As these efforts progress, we will continue to clarify definitions and revisit interpretations of the foundational studies reviewed here.
2021-03-29T05:25:08.909Z
2021-03-01T00:00:00.000
{ "year": 2021, "sha1": "8ae8c7a667ada67f5aca6c7a0a25146666652f46", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-1729/11/3/229/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8ae8c7a667ada67f5aca6c7a0a25146666652f46", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
233803
pes2o/s2orc
v3-fos-license
Emergence of Respiratory Streptococcus agalactiae Isolates in Cystic Fibrosis Patients
Streptococcus agalactiae is a well-known pathogen for neonates and immunocompromised adults. Beyond the neonatal period, S. agalactiae is rarely found in the respiratory tract. During 2002–2008 we noticed S. agalactiae in respiratory secretions of 30/185 (16%) of cystic fibrosis (CF) patients. The median age of these patients was 3–6 years higher than that of CF patients not harboring S. agalactiae. To analyze whether the S. agalactiae isolates from CF patients were clonal, further characterization of the strains was achieved by capsular serotyping, surface protein determination and multilocus sequence typing (MLST). We found a variety of sequence types (ST) among the isolates, which did not substantially differ from the MLST patterns of colonizing strains from Germany. However, serotype III, which is often seen in colonizing strains and invasive infections, was rare among CF patients. The emergence of S. agalactiae in the respiratory tract of CF patients may represent the adaptation to a novel host environment, supported by the altered surfactant composition in older CF patients.
Introduction
Streptococcus agalactiae (group B streptococci, GBS) is an important human pathogen causing invasive disease in neonates and immunocompromised adult patients. While the microorganism is still the major cause of invasive bacterial infections in newborns, cases among newborns have declined during the last years due to effective peripartal antibiotic prophylaxis. Nowadays over 80% of invasive infections are observed in patients older than 18 years, and invasive disease in older patients appears to be becoming more prevalent. Interestingly, in the years 2003, 2004 and 2005, more adult patients died following invasive S. agalactiae infections than following invasive S. pyogenes disease (http://www.cdc.gov/ncidod/dbmd/abcs/survreports/gbs05.pdf). These data suggest that the epidemiology of S. agalactiae is changing and that the bacteria are adapting to novel environments within the human host. In contrast to closely related bacterial species like Streptococcus pyogenes, S. agalactiae is only rarely seen as a colonizer in respiratory secretions from pediatric patients beyond the neonatal period [1,2]. Investigations concerning the presence of S. agalactiae in the respiratory tract of cystic fibrosis patients have, to our knowledge, not been published. The microorganisms most frequently seen in the respiratory secretions of patients suffering from cystic fibrosis (CF) are Pseudomonas aeruginosa, Staphylococcus aureus and Haemophilus influenzae [3,4]. The occurrence of beta hemolytic streptococci is rarely reported in CF patients. One study recovered 6 isolates from the analysis of 258 CF respiratory tract specimens [5]. A more recent comprehensive investigation of 465 CF patients for colonizing gram-positive microorganisms isolated S. pyogenes in 4 samples but did not find a single patient colonized by S. agalactiae [4]. Traditional subtyping methods for S. agalactiae rely on antibody determination of the capsular serotypes and surface protein structures. During the last years the repertoire of subtyping methods for S. agalactiae has been expanded by DNA sequence-based methods like multilocus sequence typing (MLST) [6]. The technique has become one of the most powerful tools to characterize bacterial populations. It offers major advantages in comparison to older methods.
The results are unambiguous and stable, and the profiles of strains can be easily compared between different laboratories without the need to analyze the strains of interest in one experiment. Moreover, the data are comparable via the internet between different laboratories, which facilitates population analyses worldwide. During the last years hundreds of invasive and colonizing S. agalactiae strains have been characterized with this method, and today more than 380 MLST sequence types for S. agalactiae are recognized. Evolutionary relationships in the population structure of S. agalactiae can be analyzed with the eBURSTV3 program, which identifies clonal complexes based on variations in the allelic MLST profiles of analyzed strains and allows a graphic representation of genetic relatedness [7]. The detection of S. agalactiae isolates in 16% of the patients from a population of 185 CF patients during 2002-2008 was surprising, and to rule out a local outbreak, the molecular epidemiology of the strains was investigated. Furthermore, the CF strains were compared to the general population structure of German S. agalactiae strains. For this purpose, 72 colonizing strains that were collected during a recent German colonization study of healthy adults [8] were characterized by MLST.
Processing of CF respiratory tract specimens
Respiratory specimens from CF patients were collected during clinical visits between 2002 and 2008. Samples were sent in from two different clinics specialized in the treatment of CF patients in the region of Münster, Germany. Sputa were processed as follows: 500 µl of sputum was mixed with 500 µl of 0.5% acetylcysteine (Merck), vortexed, incubated for 30 min at 35 °C and homogenized by vortexing. 100 µl of the liquefied sputum was plated on Columbia blood and Endo agar for semiquantitative analysis. Primary cultures were performed on Columbia (Becton Dickinson, Heidelberg, Germany) sheep blood (Oxoid, Wesel, Germany) agar for Gram-positive cocci, on Endo agar (Merck, Darmstadt, Germany) for Gram-negative rods for 48 h at 35 °C, and on chocolate agar (Mast, Reinfeld, Germany) for Haemophilus influenzae for 24-48 h at 35 °C under 5% CO2. Additionally, specimens were cultured in dextrose broth to enrich bacterial growth and streaked on blood and Endo agar after 48 h. Species identification was performed by the respective standard procedures and the VITEK2 (bioMerieux, Marcy l'Etoile, France).
Molecular typing of capsular antigens and surface proteins
Capsular serotypes of the analyzed strains were determined by PCR of cps genes for serotypes Ia, Ib, III, IV, V and VI, and DNA sequencing for serotype II strains was carried out as described by Kong et al. [9]. To determine the surface proteins of the strains, a multiplex PCR reaction was carried out as described in [10]. For strains that did not yield any results in the multiplex PCR reaction, the PCR was repeated using the primer pairs specific for one surface protein in a single reaction mixture.
Multilocus sequence typing
Multilocus sequence typing (MLST) of the S. agalactiae strains from CF patients and the 72 colonizing isolates was carried out as described in [6]. The colonizing strains were collected in a previous study [8]; they originated from either vaginal or rectal specimens. Primers as listed in [6] were obtained from a commercial supplier (Thermo Hybaid, Ulm, Germany). Genomic bacterial DNA of S. agalactiae was isolated using the QiaAmp DNA kit (Qiagen, Hilden, Germany) as described by the manufacturer.
PCR conditions to amplify the adhP, pheS, atr, glnA, sdhA, glcK and tkt genes were set as described previously [6]. Sequencing of the generated PCR products was performed on an ABI Prism 310 Genetic Analyser (ABI Prism Biosystems, Warrington, UK) according to the instructions of the manufacturer. For the analysis of the obtained nucleotide sequences and the assignment of MLST profiles, the following website was used: http://pubmlst.org/sagalactiae/. Burst analysis to reveal the relationships of MLST sequence types and to analyze clonal complexes was carried out with the eBURSTV3 version that is accessible at http://eburst.mlst.net.
Antimicrobial susceptibility determination
For all CF S. agalactiae isolates, minimal inhibitory concentrations (MIC) of antibiotics were determined. The Merlin MICRO-NAUT system was used to determine MIC values for penicillin, ampicillin, cefuroxime, ceftriaxone, erythromycin, clarithromycin, clindamycin, doxycycline and moxifloxacin. Standard susceptibility testing was performed after overnight culture on blood agar plates as detailed in [11]. MIC values for gentamicin, vancomycin, rifampicin and levofloxacin were determined by E-test in accordance with the manufacturer's instructions (AB-Biodisk, Solna, Sweden).
Results
Clinical characteristics of S. agalactiae positive CF patients
Bacterial isolates for analysis by MLST were available from 16 of the 30 S. agalactiae positive CF patients. All of the strains displayed a typical S. agalactiae phenotype. From the 16 patients a total of 29 S. agalactiae strains were recovered and analyzed. Among these strains there were 19 unique S. agalactiae strains and 10 strains that turned out to be duplicates of previous isolates in patients that were colonized for a longer period of time. Clinical characteristics of these patients are shown in Table 1. The female-to-male ratio among the patients was 1:1. Only 1 of the 16 patients was younger than 10 years at the time S. agalactiae was first detected. While the median age of the S. agalactiae positive CF patients was 16.5 (range 10-22) and 22.5 (range 19-44) years, respectively, at the two different clinics at the end of 2008, the median age of the general CF population not harboring S. agalactiae at these clinics was 13.6 (range 1.3-35.8) and 16.22 (range 0.7-50) years. To assess the potential clinical implications of S. agalactiae in the respiratory tract of CF patients, we obtained information on whether S. agalactiae isolation occurred during a routine visit or during a visit due to an exacerbation of clinical symptoms. For two of the patients it was not possible to gather this information retrospectively. For the other patients, 24 clinical visits were recorded in connection with positive S. agalactiae isolation; 12 of these visits occurred due to an exacerbation of symptoms, while the others were scheduled routine visits. In most cases S. agalactiae was not the sole bacterial isolate from the sputum samples. The concomitant pathogenic bacteria that were isolated are listed in Table 1. Interestingly, in 10 of 16 patients Staphylococcus aureus was found, while only 4 patients harbored Pseudomonas aeruginosa in their respiratory secretions in parallel to S. agalactiae. Diabetes as a clinical presentation is not unusual in older CF patients, and S. agalactiae infections are more prevalent in diabetic patients [12]. However, among the CF patients with S. agalactiae, none had diabetes when S. agalactiae was first isolated, even though one patient became diabetic 4 years later.
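Before turning to the MLST results below, the logic of sequence-type assignment and eBURST grouping can be illustrated with a minimal, self-contained sketch. The allelic profiles used here are hypothetical placeholders, not real S. agalactiae profiles (those are defined in the pubmlst.org database), and by default eBURST links sequence types that differ at only one of the seven loci (single-locus variants):

```python
# Minimal sketch of MLST sequence-type assignment and eBURST-style grouping.
# The allelic profiles below are hypothetical; real S. agalactiae profiles
# are maintained in the pubmlst.org database for the seven loci used here.

LOCI = ("adhP", "pheS", "atr", "glnA", "sdhA", "glcK", "tkt")

# Hypothetical ST definitions: ST number -> allele numbers at the seven loci.
ST_PROFILES = {
    1:  (1, 1, 2, 1, 1, 2, 2),
    10: (1, 1, 2, 1, 1, 1, 2),
    17: (2, 1, 1, 2, 2, 2, 2),
}

def assign_st(profile):
    """Return the ST whose allelic profile matches exactly, or None."""
    for st, known in ST_PROFILES.items():
        if known == tuple(profile):
            return st
    return None  # a novel profile would be submitted for a new ST number

def single_locus_variants(st_profiles):
    """Yield pairs of STs differing at exactly one of the seven loci
    (the relationship eBURST uses to build clonal complexes)."""
    sts = sorted(st_profiles)
    for i, a in enumerate(sts):
        for b in sts[i + 1:]:
            diffs = sum(x != y for x, y in zip(st_profiles[a], st_profiles[b]))
            if diffs == 1:
                yield a, b

print(assign_st((1, 1, 2, 1, 1, 1, 2)))          # -> 10
print(list(single_locus_variants(ST_PROFILES)))  # -> [(1, 10)]
```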
MLST sequence types and burst analysis
Unequivocal MLST sequence types could be assigned to all of the 19 CF strains and to all of the 72 colonizing strains that were investigated in this study. Among the S. agalactiae strains from CF patients, ten different MLST sequence types were observed (Table 2). This is a clear indication that we did not encounter the situation of a local outbreak. To identify clonal complexes and to reveal relationships among strains, the colonizing strains as well as the S. agalactiae strains isolated from CF patients were subjected to an analysis by the eBURSTV3 program. Most of the colonizing strains (66%) and 13 of 19 CF isolates belonged to BURST group 1 (Table 2 and Table 3), which represents the biggest clonal complex in the S. agalactiae population. Strains belonging to the three largest of the so far recognized BURST groups of S. agalactiae were present in the CF isolates as well as in the colonizing strains. The pattern observed for the colonizing strains did not appear to differ substantially from the pattern and clonal complexes that are present in the S. agalactiae MLST database, even though some STs were present only in colonizing or CF strains (Table 2 and Table 3). A graphic representation of the data is shown in Fig. 1.
Capsular serotype and surface protein distribution
In addition to the MLST analysis, all of the S. agalactiae strains included in this study were characterized by capsular serotyping and surface protein determination. Capsular serotypes could be obtained for 17 of the 19 CF strains and all of the colonizing strains. With the exception of five strains (one CF and four colonizing isolates), all of the strains also harbored a gene for one of the following surface proteins: alpha C, Epsilon, Alp2/3 or Rib.
Association of surface protein antigens with MLST sequence types
It has repeatedly been shown in epidemiologic investigations that a specific capsular serotype of S. agalactiae can harbor different surface protein antigens [10,13]. To investigate whether this is also true for MLST sequence types, we determined the surface antigen profile of the analyzed strains and compared it with the respective MLST sequence types. While the number of strains belonging to a specific sequence type in this investigation was of course limited, the association between sequence type and surface protein antigen appeared to be closer for some sequence types than the association between capsular serotype and surface proteins (Fig. 2). Among the ST-10 isolates from CF patients (n = 2) and the colonizing strains (n = 7), we detected three different serotypes (Ia, Ib and II).
Antimicrobial susceptibility
CF patients are routinely and repeatedly treated with antibiotics for their respiratory infections. To investigate whether increased antibiotic resistance rates are observed in the CF patients that harbor S. agalactiae in respiratory secretions, MIC (minimal inhibitory concentration) values for a panel of different antibiotics were determined, and the MIC50 and MIC90 data are shown in Table 4. All of the strains were fully susceptible to penicillin. One of the CF isolates displayed high-level gentamicin resistance with a MIC of ≥1024 mg/l. In addition, macrolide resistance was observed in 4 of 19 unique strains, and 11 of 19 strains were resistant to doxycycline. No further relevant resistance patterns could be detected.
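As a point of reference for how MIC50 and MIC90 summary values such as those in Table 4 are conventionally derived (the lowest MIC inhibiting at least 50% and 90% of the tested isolates, respectively), here is a minimal sketch; the MIC values are hypothetical, not the study's data:

```python
import math

# MIC50/MIC90: the lowest MIC at which at least 50% (or 90%) of the tested
# isolates are inhibited. The MIC values below (mg/l) are hypothetical.
def mic_percentile(mics, fraction):
    ordered = sorted(mics)
    k = math.ceil(len(ordered) * fraction)  # rank of the required isolate
    return ordered[k - 1]

mics = [0.03, 0.03, 0.06, 0.06, 0.06, 0.12, 0.12, 0.25, 0.25, 0.5]
print("MIC50 =", mic_percentile(mics, 0.5))  # -> 0.06
print("MIC90 =", mic_percentile(mics, 0.9))  # -> 0.25
```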
Discussion
S. agalactiae is only rarely seen as a colonizing pathogen of the respiratory tract of healthy pediatric patients beyond the neonatal period. A comprehensive study screening for the presence of beta hemolytic streptococci found group B streptococci in less than 3% [1] of 184 healthy pediatric patients. However, varying colonization rates have been published for throat cultures of healthy patients. One older US study showed a rate of 9% in a control group of students with a mean age of 23 [14]. This study, however, employed selective enrichment broth to optimize S. agalactiae recovery and was not performed in CF patients, who harbor a multitude of different bacteria in respiratory secretions. Studies on throat colonization in Europe revealed much lower rates [1,2,15]. Investigations of the microbiological species isolated from the respiratory tract of CF patients hardly ever reveal the presence of S. agalactiae [4]. Therefore, we were surprised to detect S. agalactiae in 16% of a population of 185 CF patients that were regularly screened during 2002-2008 for the presence of potential pathogens in their respiratory tract samples, especially in view of the fact that the samples were processed on regular blood agar plates, which grow a multitude of different organisms in CF patients. The detection rate of S. agalactiae in swabs increases by about 100% if selective media for the cultivation of S. agalactiae are used [16]. Thus, it is likely that the true isolation rate of S. agalactiae from the CF patients would be much higher if antibiotic-supplemented media that inhibit the growth of other pathogens were used for cultivation. This question, however, could only be answered by a prospective investigation of respiratory tract samples from CF patients with a liquid enrichment broth that optimizes the detection rate of S. agalactiae. Increased numbers of S. agalactiae infections have also been linked to diabetes [12], and CF patients may become diabetic during the course of their disease. But since none of the CF patients was diabetic at the time S. agalactiae was first isolated from their sputum, there is no indication that the high S. agalactiae isolation rates are due to an altered sugar metabolism in these patients. S. agalactiae belongs to the group of beta hemolytic streptococci, which have a high virulence potential. In newborns, and especially premature infants with reduced pulmonary surfactant, it causes pneumonia and is a well-recognized respiratory pathogen [17]. Surfactant-associated protein A (SP-A) is the most abundant pulmonary surfactant protein [18], and its importance for S. agalactiae infections has been demonstrated in animal models [19]. In SP-A-deficient knockout mice, the clearance of intratracheally administered S. agalactiae is delayed. Interestingly, the concentration of SP-A can be reduced in bronchoalveolar lavage fluid from CF patients [20,21]. Older children and adult CF patients show consistently decreased levels of SP-A, in contrast to young children with CF. In our investigation, only one of the CF patients was younger than 10 years at the time S. agalactiae was first isolated from the sputum. Moreover, the median age of CF patients with S. agalactiae isolates was 3 and 6 years higher than that of the S. agalactiae negative CF patients at the respective clinics. This finding supports the hypothesis that altered surfactant properties in this age group could be responsible for the emergence of S. agalactiae in the respiratory tract. It could also explain why respiratory S. agalactiae isolates have previously not been reported in CF patients.
Due to improved therapeutic regimens for CF patients in recent years, more than 35% of CF patients are now older than 18 years, whereas in the 1960s the predicted mean survival was only about 10 years [22]. However, since we did not investigate the SP-A levels of the patients harboring S. agalactiae, this remains a speculation that may be the target of further investigations. To rule out a local outbreak among the CF patients in our area, 29 S. agalactiae strains from CF patients were collected and analyzed by molecular subtyping. Among these strains, 19 unique isolates were found and 10 strains were identical to previous isolates (Table 1). The presence of many different sequence types in the CF isolates (Table 2) shows that these strains are heterogeneous. We found no indication of the spread of a single S. agalactiae clone among the patients. Despite the limited number of strains from CF patients that we had for analysis, it is striking that serotype III was rarely isolated from CF patients. This is in contrast to other German studies, in which serotype III isolates were most prevalent in invasive neonatal strains (65%) [23] as well as in colonizing strains (28%) obtained from adult women [8]. Analysis of the distribution of surface proteins among the CF strains revealed a similar pattern. As expected, the surface protein Rib was present in the serotype III strains we analyzed. Especially striking was the absence of any ST-17 strains in the samples from the CF patients. ST-17 serotype III strains have been described as hypervirulent isolates in many different studies and have a strong association with neonatal invasive disease and meningitis [24,25]. In contrast to the total lack of ST-17 strains in CF patients, nine of the 72 German colonizing isolates were ST-17 strains (Table 3), representing the third most frequent ST, an indication that ST-17 strains are present in considerable numbers in the pool of colonizing strains. MLST characterization of S. agalactiae isolates from Germany has not been performed and published previously. Therefore, in addition to the MLST analysis of the CF strains, we determined the MLST profiles of 72 colonizing S. agalactiae isolates, which were collected recently, as a reference population [8]. The MLST pattern we found in our collection of colonizing strains compares well to the MLST profiles seen in colonizing strains from other countries [24,25]. The great majority of the MLST sequence types that we determined in the CF isolates were also present in the German colonizing strains, indicating that the respiratory CF strains and the urogenital colonizing strains belong to the same population. In order to correlate the MLST sequence types with known molecular markers of S. agalactiae, the CF isolates as well as the colonizing strains from the urogenital tract of healthy patients were characterized by serotyping and surface protein determination. While it is well known that a correlation between S. agalactiae serotypes and surface protein antigens exists [13], a detailed analysis of the association of MLST sequence types and surface proteins has not been published. Since there are many more MLST sequence types than recognized S. agalactiae serotypes, it was not surprising that, for most of the sequence types, only a single type of surface protein was found in our investigation. However, for sequence types with many isolates, like ST-1, ST-19, ST-23 and ST-28, we were able to detect strains with varying surface proteins.
For selected sequence types, like ST-10 and ST-12, the association between MLST profiles and surface proteins appeared to be closer than the association between sequence type and serotype (Tables 2 and 3). In ST-10 and ST-12 isolates, different serotypes were detected within one MLST sequence type, but all of the strains belonging to that specific sequence type harbored the same surface protein (Tables 2 and 3). This was, however, not true for all of the sequence types that we found, since in ST-28 only serotype II was observed, but the alpha C protein as well as Alp2/3 were detected in these strains. Overall, the number of isolates we had for analysis is too small to reach any definite conclusions about the association of sequence types and surface protein antigens. During the course of many years, CF patients are repeatedly treated with antibiotics to limit respiratory infections. Under these conditions, exposure of bacterial strains to various antibiotics for a prolonged time is quite common, in contrast to the pregnant and mostly healthy patients who are usually screened for the presence of S. agalactiae colonization. For other bacterial pathogens, high antibiotic resistance rates have been reported in CF patients [26]. In our collection of S. agalactiae isolates from CF patients we were also able to observe an unusual resistance pattern. One of the strains exhibited high-level gentamicin resistance. This type of resistance is not common in S. agalactiae but has previously been reported [27,28] in a few isolates. Resistance rates to macrolides were found to be 21%, which is comparable to the macrolide resistance rates of recent studies for S. agalactiae [29,30]; in these investigations, rates between 11% and 38% were published. In conclusion, we report the detection of S. agalactiae in a considerable proportion of respiratory samples of CF patients. Detailed molecular analysis of the strains did not reveal a local outbreak. S. agalactiae positive patients were several years older than a reference population of CF patients, which is consistent with the hypothesis that an altered surfactant composition in this age group supports S. agalactiae growth, a condition which may help the adaptation of S. agalactiae to a novel, not yet recognized host environment.
Figure 2. A: Association between surface proteins and sequence types. For each sequence type found in either respiratory strains from CF patients or colonizing strains, the number of isolates and the surface proteins of these strains are shown. Genes coding for alpha C, Epsilon, Rib or Alp2/3 were detected in the vast majority of strains; only five isolates failed to generate a PCR product with the specific primers. B: Association between serotypes and sequence types. For each sequence type found in either respiratory strains from CF patients or colonizing strains, the number of isolates and the serotypes of these strains are shown. doi:10.1371/journal.pone.0004650.g002
2016-05-04T20:20:58.661Z
2009-02-27T00:00:00.000
{ "year": 2009, "sha1": "d4e912f86d80caeb08b80e70cff350b7472deeb6", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0004650&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d4e912f86d80caeb08b80e70cff350b7472deeb6", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
6013462
pes2o/s2orc
v3-fos-license
Detection of the quantitative trait loci for α-amylase activity on a high-density genetic map of rye and comparison of their localization to loci controlling preharvest sprouting and earliness
The objectives of the research were to determine the position of quantitative trait loci (QTL) for α-amylase activity on the genetic map of a rye recombinant inbred line population—S120 × S76—and to compare them to known QTL for preharvest sprouting and heading earliness. Fourteen QTL for α-amylase activity on all seven chromosomes were identified. The detected QTL were responsible for 6.09–23.32% of α-amylase activity variation. The lowest LOD value (2.22) was achieved by locus QAa4R-M3 and the highest (7.79) by locus QAa7R-M1. Some QTL intervals for features of interest overlapped partially or completely. There were six overlapping QTL for α-amylase activity and preharvest sprouting (on 1R, 3R, 4R, 6R, 7R) and the same number for preharvest sprouting and heading earliness (on 1R, 2R, 6R, 7R). Furthermore, there was one interval partially common to all three traits, mapped on the long arm of chromosome 1R. Testing of lines originating from hybrid breeding programs, such as S120 and S76, may provide important information about the most significant genes and markers for selection in commercial breeding. Among the statistically significant markers selected in the Kruskal–Wallis test (P < 0.005), there were 55 common ones for preharvest sprouting and heading earliness (1R, 2R, 6R), 30 markers coinciding between α-amylase activity and preharvest sprouting (5R, 7R) and one marker for α-amylase activity and heading earliness (6R). Electronic supplementary material: The online version of this article (doi:10.1007/s11032-011-9627-1) contains supplementary material, which is available to authorized users.
Introduction
Recent advances in molecular marker technology and the development of high-density molecular marker linkage maps have provided powerful tools for elucidating the genetic basis of quantitatively inherited traits (Kunpu et al. 2009). The first high-resolution map of rye constructed with the use of Diversity Arrays Technology (DArT) (Jaccoud et al. 2001) was created in 2009 (Bolibok-Brągoszewska et al. 2009). The next high-density linkage map of rye, based on DArT and PCR markers, was recently used for detecting quantitative trait loci (QTL) for preharvest sprouting (PHS) and heading earliness (Myśków 2011). These agriculturally important traits are connected with two crucial events in the plant's life cycle. The first is the end of dormancy and seedling emergence, because until then there is no photosynthesis and plants cannot gain mass. The second event is anthesis, which is important because it determines the time of grain maturity during the season (Jamieson et al. 1998). The appropriate timing of flowering is a critical adaptive trait for the propagation and survival of a plant species. It should be late enough to avoid frequent exposure of the sensitive reproductive organs to freezing temperatures early in the season, but not so late as to expose the crop to damaging drought conditions and high temperatures during anthesis and grain filling (Chen et al. 2009). Premature breaking of dormancy can result in PHS of spikes in wet weather conditions. PHS is a major cause of excess α-amylase activity (AA), which impairs grain quality, since enzymatic hydrolysis of starch during food manufacture can lead to processing problems and unsatisfactory end products (Graybosch et al. 2000; Lunn et al.
2001; Mares et al. 2002; Tjin Wong Joe et al. 2005). Much research on wheat grains has proved that there are a number of additional causes of the deterioration of meal properties, which is reflected in a low falling number. These include late maturity α-amylase (LMA) (Mrva and Mares 1996a, b, 1999), also known as prematurity α-amylase (PMAA) in the UK (Lunn et al. 2001; Tjin Wong Joe et al. 2005), and retained pericarp α-amylase (RPAA) (Lunn et al. 2001). It is well established that the genetic mechanism of AA, PHS and heading/flowering in crops is complex. Earliness is controlled by photoperiodic response, chilling requirement and narrow-sense earliness. Each of these components is under the control of multiple loci, localized on most of the wheat and barley chromosomes (bibliography in: Cockram et al. 2007; Kunpu et al. 2009) and on all rye chromosomes (Plaschke et al. 1993; Masojć and Milczarski 1999; Börner et al. 2000; Korzun et al. 2001; Stojałowski and Łapiński 2002; Myśków 2011). All chromosomes of Triticeae are also involved in determining PHS and AA (Masojć and Milczarski 2005, 2009, and bibliography therein). Extensive research involving crop species has been conducted on the relationship between flowering time (heading date), plant adaptation and yield (Laurie 1997). Several data suggest that loci controlling photoperiod response, vernalization requirement and earliness per se may exert pleiotropic effects on yield and yield-related traits (Worland et al. 1998; Buck-Sorlin and Börner 2001; Lewis et al. 2008). Studies of QTL controlling PHS and AA in rye demonstrated that these two systems coincide in great part (Masojć and Milczarski 2005, 2009; Masojć et al. 2011). Partial overlap between QTL for heading earliness (HE) and PHS in rye has been found by Myśków (2011). However, a combined relationship between AA, PHS and earliness has not been defined yet. The objectives of our research were to determine the position of QTL for AA on the genetic map of a rye recombinant inbred line (RIL) population—S120 × S76—and to compare them to known QTL for PHS and HE.
Materials and methods
The mapping population of S120 × S76 consisted of 143 genotypes of the RIL-F8 generation. The pedigree of the parental inbred lines is described in Myśków et al. (2001). S120 and S76 differ in values of AA, PHS and time of ear emergence. The genetic map of the S120 × S76 RIL population was constructed by Myśków (2011) with the use of JoinMap 3.0 software (Van Ooijen and Vorrips 2001). All seven linkage groups together comprised 1285 DArT loci and 62 PCR-based loci. Individual chromosomes included from 123 (5R) to 261 (6R) loci and spanned distances of 76 cM (3R) to 233 cM (1R). The whole map length was 962 cM and the average density varied from 0.9 to 2.4 markers/cM (average distance 0.4-1.1 cM). AA was analyzed during the period 2008-2010, for self-pollinated plants of the RIL-F8 to RIL-F10 generations. Plant material was grown in an experimental field at the West Pomeranian University of Technology in Szczecin, Poland. Each RIL was represented by 1-8 spikes (4-5 for most genotypes) from different plants. Equal amounts of flour (250 mg) obtained from all seeds with no visible signs of sprouting were used for AA determination. The method of estimating the level of AA was described by Masojć and Larsson-Raźnikiewicz (1991). Additionally, PHS was measured in the year 2010 as the percentage of germinating seeds per total seeds in the ear, after watering of ten mature, harvested spikes, according to the method described by Masojć et al. (2007).
Statistical analysis was carried out using the STATISTICA version 9.0 package (http://www.statsoft.com). Broad-sense heritability (h²) was calculated using the formula h² = V_G/V_F, where V_G is the genetic variance, estimated as the difference between V_F and V_E; V_F is the RIL population variance; and V_E is the mean variance of the parental lines S120 and S76, estimated using at least ten plants of each. The significance of differences between the trait values of the parental lines was established by employing the Cochran and Cox test. The relationship between the segregations of single markers and traits was analyzed with the Kruskal-Wallis test using the MapQTL 5.0 package (Van Ooijen 2004). Linkage analysis was performed using the composite interval mapping (CIM) method (Zeng 1994) with Windows QTL Cartographer version 2.51 (http://www.statgen.ncsu.edu/qtlcart/WQTLCart.htm, Wang et al. 2007). The step size chosen for all QTL was 2 cM. Significance thresholds for declaring the presence of a QTL were estimated from 1,000 permutations of the data (Doerge and Churchill 1996). The results of composite interval mapping performed on the AA data and the new PHS data were compared with the localization of QTL for PHS and HE obtained previously by Myśków (2011) with the use of the same genetic map of the S120 × S76 RIL population.
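As a concrete illustration of the statistics described above, the following sketch computes the broad-sense heritability h² = V_G/V_F (with V_G = V_F − V_E) and runs a single marker-trait Kruskal-Wallis test. It is a minimal Python reconstruction with made-up data; the study itself used STATISTICA, MapQTL 5.0 and QTL Cartographer.

```python
# Made-up data; the actual analyses used STATISTICA, MapQTL 5.0 and
# Windows QTL Cartographer.
import numpy as np
from scipy.stats import kruskal

def broad_sense_heritability(ril, parent1, parent2):
    """h^2 = V_G / V_F, with V_G = V_F - V_E and V_E the mean parental variance."""
    v_f = np.var(ril, ddof=1)          # RIL population variance
    v_e = 0.5 * (np.var(parent1, ddof=1) + np.var(parent2, ddof=1))
    v_g = max(v_f - v_e, 0.0)          # genetic variance
    return v_g / v_f

rng = np.random.default_rng(0)
aa = rng.normal(10.0, 3.0, size=143)   # AA scores for the 143 RILs (synthetic)
s120 = rng.normal(10.0, 2.5, size=10)  # at least ten plants per parental line
s76 = rng.normal(10.0, 2.5, size=10)
print(f"h2 = {broad_sense_heritability(aa, s120, s76):.2f}")

# Kruskal-Wallis screen for one marker: compare trait values between the two
# marker-genotype classes segregating in the RILs.
genotype = rng.integers(0, 2, size=143)   # 0 = S120 allele, 1 = S76 allele
h_stat, p_value = kruskal(aa[genotype == 0], aa[genotype == 1])
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_value:.3f}")
```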
Results

Phenotypic variation and correlation analysis

Parental phenotypic variation and the distribution among RILs for AA, PHS and HE in different years are shown in Table 1. The parental lines differed significantly (P = 0.001) with respect to PHS and HE, but not significantly in the case of AA values. The measured target traits varied in the RIL population following a continuous distribution, representing a normal phenotypic segregation for QTL mapping. All trait value ranges were wider in the RIL population than in both parents, indicating that transgression occurred. The estimated heritability of AA, HE and PHS was 24.7, 34.5 and 46.7%, respectively. Pairwise correlation coefficients between the traits are given in Table 2. Significant correlations were observed between the data obtained in all years for PHS, for almost all variants of HE, and for AA assessed in 2008 and 2009. No correlation was found between different traits, except one, between AA and PHS in the 2009 season.

[Table 2. Phenotypic correlation between α-amylase activity (AA), preharvest sprouting (PHS) and heading earliness (HE) in the rye RIL population of the S120 × S76 cross.]

None of the QTL controlling AA were identified in 2 years. However, three of them were confirmed by detection on the basis of mean values (Table 3). Three of the QTL for PHS detected in 2010 had been mapped in an earlier study (Myśków 2011), and a fourth was found additionally using the mean score over all years (Table 3). Eighteen of the 20 segregations of markers most strongly linked to the QTL were significantly associated with the segregations of AA or PHS, as revealed by the Kruskal-Wallis test; six of them were significant at P < 0.001 (Table 3). The locations of all QTL for AA, PHS and HE in the RIL population of S120 × S76 are shown in Fig. 1 and in Electronic Supplementary Material 1. If a locus was found in different years, it was presented only as a single rectangle and counted as a single QTL. There were 14 QTL detected for AA, 33 for PHS and 17 for HE. Considering QTL revealed twice (in 2 years, or both in 1 year and on the basis of the average data), there were 4, 16 and 9 such loci for AA, PHS and HE, respectively.

Co-localization of QTL and common markers

Some QTL intervals for the measured features overlapped partially or completely (Fig. 1, Supplement 1). There were six overlapping QTL for AA and PHS (on 1R, 3R, 4R, 6R, 7R) and the same number for PHS and HE (on 1R, 2R, 6R, 7R). Furthermore, there was one interval partially common to all three traits, mapped on the long arm of chromosome 1R. Among the statistically significant markers selected using the Kruskal-Wallis test (P < 0.005), there were 55 common to PHS and heading earliness: 29 located on chromosome 1R, 10 from linkage group 2R and 16 from chromosome 6R. There were 30 coincident markers identified between AA and PHS (one from linkage group 5R, the rest from 7R). There was one significant marker for AA and HE (6R). Markers common to two traits of interest are listed in Supplement 2.

Discussion

In rye, QTL mapping for AA and PHS has been an area of intense research, which has led to the identification of many loci and to the discovery of the phenomenon of overlapping QTL intervals for these two traits (Masojć and Milczarski 2005, 2009). A large number of QTL showed relatively small effects, and only a few major QTL with large phenotypic effects were previously identified. This presents a serious challenge for using single QTL with small effects in marker-assisted selection (MAS); there is therefore a need to pay more attention to exploring those stable QTL detected in different environments/seasons and different genetic backgrounds, and those revealing pleiotropic effects (Wang et al. 2009). The present research is an extension of earlier studies on AA and PHS and a summary of the results of QTL analysis of AA, PHS and HE. This is the first attempt to use a high-density genetic map of rye for QTL analysis of these three important traits. It provides more opportunities for the identification of markers tightly linked to the target features. The study highlights the importance of some co-localized QTL, which could point the direction for studies on pleiotropic effects. This first application of an advanced RIL population in research on rye allowed verification and phenotypic assessment in different environments.

AA QTL

Since in our study seeds were harvested at the stage of full ripeness, the majority of the α-amylase developed should represent LMA, which was shown to belong to the α-AMY1 group of isozymes (Mrva and Mares 1999, 2002; Mares and Mrva 2008).

[Table 3. Characteristics of putative QTL and markers for α-amylase activity and preharvest sprouting in the rye RIL population of S120 × S76.]

In order to find out whether the AA QTL reported here correspond to any of the known QTL, the map used in the present study was aligned with the other published genetic maps used in studies of QTL for AA and PHS. Nine out of the 14 AA QTL reported here were probably mapped in regions previously indicated in studies performed in two other rye populations (Masojć and Milczarski 2005, 2009). QAa7R-M1 seems to be the most important, due to its highest LOD of 7.79 and because it explained as much as 23% of the AA variation. The DArT marker XrPt402657 linked to this QTL was statistically significantly related to the trait at P < 0.0001, and it separates the genotypes into two groups with differing values of AA. The most recent report on loci linked to AA, detected using the bidirectional selective genotyping (BSG) method coupled with molecular mapping (Masojć et al. 2011), pointed to seven markers distributed on the long arm of chromosome 7R. QAa7R-M1 may contain one or a few of them.
However, the loci revealed therein were considered to be ones of minor value for AA synthesis; the approach presented by that team did not allow estimation of the parameters obtained by QTL analysis. BSG gives no information about the relative magnitude of the QTL effects acting in favour of the valuable trait (Masojć et al. 2011).

[Fig. 1. Localization of α-amylase activity QTL (white rectangles) and QTL for preharvest sprouting (grey rectangles) revealed in 2010 on the linkage map of the S120 × S76 RIL population, and their alignment with QTL for preharvest sprouting (grey rectangles) and heading earliness (black rectangles) detected previously (Myśków 2011). QTL detected both in 1 year and on the basis of mean values are indicated by one asterisk; QTL detected in two seasons are indicated by two asterisks. The ruler on the left side shows map distances in cM. Markers situated on the right of each linkage group are those most strongly linked to the newly detected QTL. All markers from the map are listed in Supplement 1.]

In a recently published report on the localization of genes affecting AA (Tenhola-Roininen et al. 2011), only one QTL was detected in the breeding materials analyzed, on the long arm of chromosome 5R. Interestingly, the coinciding locus detected by Masojć et al. (2011) was considered to belong to the hypostatic, less important class of loci. This leads to the question whether research performed only on genotypes very different with respect to AA and PHS, such as lines 541 and Ot1-3 used by Masojć et al. (2011), properly reflects the genetic diversity of actual breeding materials. Testing of populations developed from crosses between cultivars like Amilo and Voima (Tenhola-Roininen et al. 2011), or lines originating from hybrid breeding programs, such as S120 and S76, may provide important information about the most significant genes and markers for selection in commercial breeding. In the past, it has been suggested that QTL identified in more than one environment, or those identified using data pooled over environments, are useful from the point of view of MAS. Furthermore, for use in MAS it is desirable to have one or a few QTL, each with a major effect on the trait (Kulwal et al. 2005). In the light of this observation, the QTL from chromosome 7R with the DArT marker XrPt402657, together with the next eight QTL detected in the present study and reported previously, should prove valuable in MAS aimed at improving the grain quality of rye in terms of decreased AA. Interestingly, QTL analysis performed on the F2-F3 populations of S120 × S76 (Myśków et al. 2010) allowed the detection of only one QTL for AA, not corresponding to any of the AA QTL revealed in the present study. The comparable or even higher number of AA QTL on the present map, relative to the results for other rye maps, suggests that significant differences between the parental trait values are not a prerequisite for QTL analysis. Both high- and low-activity QTL alleles were found in each parental line, which explains the similarity of the mean values of both lines and the appearance of transgressive recombinants in the segregating population. The lack of QTL repeatable through the years reflects the low heritability of AA. However, this does not diminish the importance of the QTL, since they are confirmed in different genetic backgrounds.

PHS QTL

Out of the seven QTL for PHS detected in 2010, three corresponded to loci identified before on the same map (Myśków 2011).
One of these three loci, QPhs3R-M4, was probably detected earlier in the F2 population of S120 × S76, as Phs1 (Myśków et al. 2010). QPhs5R-M3, however, was revealed in one season only; it probably coincided with Phs2 from the F2 population and with QTL for PHS or AA mapped on chromosome 5R, near the structural locus of α-amylase, in other populations (Masojć et al. 2007; Masojć and Milczarski 2009). A marker linked to this locus, XrPt398706, was statistically significant at P < 0.0005. The next apparently important QTL was QPhs6R-M7 from group 6R, owing to its identification using pooled data, its expression in different genetic backgrounds, and the overlap of its interval with the region containing a locus controlling AA, QAa6R-M3. In total, 33 QTL for PHS were detected on the present map, 17 of them identified in at least two seasons. As in the case of the AA QTL analysis, this number exceeds the number of loci known from earlier studies using the CIM method in the S120 × S76 F2 population (Myśków et al. 2010) and in two other populations (Masojć and Milczarski 2009). This could partially be a result of the different LOD threshold values obtained with the permutation test in the present study versus the threshold values used in previously published reports (LOD = 3.0). It could also be a consequence of using a high-density map and a larger RIL population. Even if the new QTL are of lesser importance, they add important information to the growing knowledge concerning PHS resistance and AA in rye.

HE QTL

Exploiting the high-density map of the S120 × S76 population resulted in the detection of 17 QTL for HE, falling on all chromosomes except 3R (Myśków 2011). Ten of them were identified in at least two seasons. This was the first report of QTL controlling earliness mapped on chromosome 1R in rye. There were therefore at least five new loci detected. QHe5R-M1 from chromosome 5R was probably homologous to the locus Vrn-R1, previously known as Sp1 (Plaschke et al. 1993), responsible for the plant's reaction to vernalization.

Relationship between AA, PHS and HE

Correlation analysis between pairs of the analyzed traits showed no significant relationships. AA and PHS also showed no correlation within two other mapping populations of rye (Twardowska et al. 2005), suggesting their independent genetic basis. However, it was found that the QTL systems controlling PHS and AA partially overlap. Nine common QTL were detected among 16 QTL for AA and 13 QTL for PHS (Masojć and Milczarski 2009). In the present study, six coinciding QTL for AA and PHS were found, which confirms the previous observations showing partial coincidence of the genomic regions controlling these two traits. Despite the higher density of our map, it still cannot be resolved whether the loci for both traits are merely linked or have pleiotropic effects. However, three perfectly overlapping QTL from chromosomes 1R, 3R and 6R, with the same markers linked to both the AA and PHS QTL, make the second explanation probable. The fourth coinciding QTL, from linkage group 6R, showed only a slight shift and shared the same linked marker. Such ideal or almost ideal matches of short intervals seem unlikely to be accidental, taking into account that there is often no or only weak correlation between QTL for one trait detected in different seasons. There were no reports on the relationship between the potentially related traits of PHS and HE in rye, except for the publication by Myśków (2011). The connection between PHS and HE in other crops is not well documented.
PHS resistance was associated with a later heading date (HD) in a white wheat population. The mean HD and PHS scores were significantly negatively correlated (r = -0.39), probably due to a major HD QTL found to be tightly linked to the PHS QTL on chromosome 2B (Munkvold et al. 2009). Despite the lack of correlation between PHS and HE, implying their independent genetic basis, there were as many as six coinciding QTL found for both traits. This shows that PHS is related to AA and to HE to a comparable degree. However, overlap as perfect as that between some QTL for AA and PHS was not observed, which suggests linkage rather than a pleiotropic effect of the revealed loci. Additionally, one interval partially overlapping for all three studied traits was detected, which is the first report of this kind. The reason for the co-localization of QTL for two or all three target features remains unknown and requires further study. Future research involving fine mapping of these coinciding QTL may resolve whether the co-localized QTL represent a single locus with a pleiotropic effect or two linked loci. The present results show that there is no rule that plants prone to PHS reveal high LMA activity. Similarly, earlier-heading plants are not always early sprouting, and hence are not necessarily susceptible to PHS in wet weather conditions. In any case, focusing on overlapping QTL should allow the choice of markers beneficial in the selection of plant materials with favorable values of PHS, AA and HE. This work reveals 55 markers linked to PHS and HE simultaneously, 30 to PHS and AA, and one to AA and HE. Knowledge of the DArT marker sequences enables their conversion to PCR markers, which are easy to use in MAS.
2016-05-04T20:20:58.661Z
2011-09-28T00:00:00.000
{ "year": 2011, "sha1": "140f2a625cdd50908c95bcbd44e27d0c1d263dd1", "oa_license": "CCBYNC", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11032-011-9627-1.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "9ce3c00174dc201231c937996bea7f05c174f082", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
14743554
pes2o/s2orc
v3-fos-license
ADONIS high contrast infrared imaging of Sirius-B

Sirius is the brightest star in the sky and a strong source of diffuse light for modern telescopes, so that the immediate surroundings of the star are still poorly known. We study the close surroundings of the star (2 to 25 arcsec) by means of adaptive optics and a coronographic device in the near-infrared, using the ESO/ADONIS system. The resulting high contrast images in the JHKs bands have a resolution of ~0.2 arcsec and limiting apparent magnitudes ranging from m_K = 9.5 at 3 arcsec from Sirius-A to m_K = 13.1 at 10 arcsec. These are the first and deepest images of the Sirius system in this infrared range. From these observations, accurate infrared photometry of the Sirius-B white dwarf companion is obtained. The JH magnitudes of Sirius-B are found to agree with expectations for a DA white dwarf of temperature (T = 25000 K) and gravity (log(g) = 8.5), consistent with the characteristics determined from optical observations. However, a small, significant excess is measurable in the K band, similar to that detected for "dusty" isolated white dwarfs harbouring suspected planetary debris. The possible existence of such circumstellar material around Sirius-B has still to be confirmed by further observations. These deep images allow us to search for small but as yet undetected companions to Sirius. Apart from Sirius-B, no other source is detected within the total 25 arcsec field. The minimum detectable mass is around 10 M_Jup, inside the planetary limit, indicating that an extrasolar planet at a projected distance of ~25 AU from Sirius would have been detected (abridged abstract).

Introduction

Although Sirius is the brightest star in the sky, it is by no means an easy target for modern astronomy. Its extreme brightness (m_V = -1.46) in fact presents significant problems for both observation and precise photometry in the immediate surroundings of the star. Sirius has been known to be a binary system since the prediction of a companion by Bessel in 1844 and the subsequent observation by Alvan Clark in 1862 of Sirius-B, which turned out to be the closest white dwarf (see Wesemael & Fontaine 1982). The system was also proposed to be triple, because a visual companion was reported consistently around 1930 (see Baize 1931) and persistent periodic (~6 yr) residuals were also noticed in the A-B binary orbit (Volet 1932; Benest & Duvent 1995). The existence of an interacting third star in an eccentric orbit was also proposed to explain the apparent historical change in color of Sirius (Gry & Bonnet-Bidaud 1990, 1991). Over many years, Sirius-B was monitored extensively in the optical (Gatewood & Gatewood 1978), although the stellar field around Sirius was virtually unknown until recently. Due to the high diffuse background produced by the bright Sirius-A, long exposures such as those of the Palomar plates generate a large, ~1° overexposed spot at the star position. The first catalogue of stars in a (2.5° × 4°) field around Sirius was provided in an effort to isolate possible companion candidates (Bonnet-Bidaud & Gry 1991; Bonnet-Bidaud, Colas & Lecacheux 2000). It enabled the identification of an unrelated m_g ~ 12 background star that was in close (~7″) conjunction with Sirius during the years around 1930, due to its high proper motion. This conjunction most likely explains the spurious companion reported at that time.
Modern techniques for data from space and ground-based observatories have allowed considerable progress to be achieved in the search for and study of faint companions around bright stars. Schroeder et al. (2000) imaged Sirius-A at 1.02 µm with the Hubble Space Telescope (HST) Planetary Camera and provided the first constraints within 17″ of the star.

[Fig. 1. ADONIS Ks-band image of the white dwarf companion of Sirius. The pixel scale is 0.1″/pixel, the total field is 25.6″ × 25.6″, and the linear flux scale is given on the right. Sirius is centered under a 3.9″-diameter coronographic mask, and an additive numerical mask has been added to provide a clearer display. The white dwarf companion, Sirius-B, is clearly seen in the lower-left quadrant at a separation of 5.0″.]

Kuchner & Brown (2000), using the HST-NICMOS camera in coronographic mode, covered the central 3.5″ at a similar 1.10 µm wavelength in a search for exozodiacal dust around Sirius-A. The HST-STIS spectrograph was also used to measure accurate UBVRI magnitudes of Sirius-B from its visual spectrum (Barstow et al. 2005). Since these observations, ground-based coronographs using adaptive optics in the near infrared have emerged as a powerful new tool for searching for faint companions to nearby stars. We present the first JHK infrared images of a 25″ field around Sirius-A acquired using such a device. The high contrast images allow the precise determination of the Sirius-B infrared colors and provide the strongest constraints in the region 3-10 arcseconds from Sirius-A on the existence of a small companion in the Sirius system, down to planetary size.

Observations

Sirius was observed during two epochs, from 2000 January 14 to 16 and on 2001 January 13, using the SHARP II+ camera coupled with the adaptive optics system ADONIS and mounted on the ESO 3.6 m telescope at La Silla, Chile (Rousset & Beuzit 1999). A pre-focal optics coronograph (Beuzit et al. 1997) was used to reject the direct starlight of Sirius-A and increase the integration time in each elementary exposure. Because of the high brightness of Sirius-A, we had to use a large mask (diameter of 3.92″). The SHARP camera was used with a pixel scale of 100 mas to increase the sensitivity to faint point-like companions and provide a total 25.6″ × 25.6″ field of view. J (1.25 µm), H (1.64 µm), and Ks (2.15 µm) broad-band exposures were obtained. The seeing was quite stable during these observations, at ~1″ on average (on a timescale of 1 night), but could reach a value of 1.7″ for the H band data of January 2000. We spent a total observing time of 300 s in the J band, 410 s in the H band and 800 s in the Ks band. The Point Spread Function (PSF) was monitored frequently by interlaced observations of two reference stars, 2 CMa (B1 II/III, m_V = 1.98) and γ CMa (B8 II, m_V = 4.097). In selecting suitable reference stars and minimizing the PSF differences, three criteria were considered, in order of importance: the distance on the sky to the target, the brightness match at the wavefront analysis wavelength (0.6 µm), and the spectral type match. The two reference stars are the most effective compromise and cover both the brightness and spectral type range. These reference stars were used later in the reduction process to subtract the wings of the stellar PSF and increase the sensitivity to a faint companion. Typical FWHM of the PSF were 0.2″ in the J band and 0.3″ in the H and Ks bands.
Observations of empty fields were also performed to estimate and remove the background flux, which can be significant in K band observations. The reference stars HR 3018 and HD 19904 were used for photometric calibration.

Reduction procedure

First, standard reduction techniques (including bias subtraction and flat-field correction) were applied to the data. For each filter, we obtained a set of Sirius observations and corresponding PSFs. In spite of the use of a coronograph mask, the image surface brightness was still dominated by the stellar emission at any distance from the star. To search for faint companions, we had to subtract the starlight wings numerically. The approximate subtraction of a scaled PSF from the Sirius images provides inaccurate results because of slight shifts (up to 1 pixel) in position on the array between the reference star and the object, uncertainties in the fluxes (given by the literature), and a residual background (ADONIS bench emission, different airmasses). We developed a specific method to achieve an optimum subtraction.

Subtraction of the PSF

For a pair consisting of a Sirius image (Obj) and the corresponding PSF (Psf), we attempt to estimate automatically three parameters: a shift (δx, δy) between the two images, a scaling factor R, and a residual background Bg. These parameters are estimated by minimizing an error functional of the form

J(δx, δy, R, Bg) = J_χ² + α_neg J_neg + α_bal J_bal,

where the sum in J_neg runs over the negative pixels, [quad_i] designates one of the 4 square quadrants around the star, the [S] function represents the image shift, and the [med] function is the median estimator. The sum in the χ² term is performed over a subframe located in a region of the images close to the star (typically between circles of 25 and 50 pixels in radius), from which unreliable pixel values are excluded (bad pixels, and pixels belonging to areas contaminated by diffracted light from the coronograph support/telescope spider). J_χ² expresses the fidelity of the shifted/rescaled PSF to the object image; J_neg prevents overestimation of the ratio parameter (R), which would produce a large, centered zone of negative pixels in the PSF-subtracted result; and J_bal prevents non-uniformities between the four quadrants, which are particularly high when the shift parameters are estimated incorrectly. We note that the central part of the images is not saturated and that bad pixels and the centermost region (where the coronographic mask is located) are excluded from the computation of the median value. α_neg and α_bal are two weight parameters with optimal values determined experimentally as 1.0 and 2.0, respectively, so that the χ² minimum value is of the order of the number of pixels. The functional minimum is found using a zeroth-order minimization algorithm called "simplex" (Press 1993). The method was first verified using simulated data, whose input parameters (shifts, scaling factor) were recovered with an accuracy better than 5%. To evaluate the errors in the subtraction process and test the stability of the PSF, the same subtraction process was applied using the two calibration stars. Figure 1 shows the resulting subtracted image in the Ks filter. The residual level can be seen from the random structures around the coronographic mask. A point-like object can easily be seen in this subtracted high contrast image. This is the first direct image of the white dwarf companion of Sirius (pinpointed by the arrow) in this energy range. The image has a sharper contrast than a comparable one produced using the WFPC2 camera on the HST telescope (Barstow et al. 2005), although the A-B magnitude difference does not differ significantly between the optical and infrared.
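The following is a minimal sketch, in Python with NumPy/SciPy, of the subtraction scheme just described. The composite functional follows the form above with the quoted weights, but the exact internal expressions for J_neg and J_bal are illustrative reconstructions rather than the authors' own, and all array inputs are synthetic.

```python
# Illustrative reconstruction of the PSF-wing subtraction; the penalty weights
# follow the values quoted in the text (alpha_neg = 1.0, alpha_bal = 2.0).
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

def error_functional(params, obj, psf, fit_mask, a_neg=1.0, a_bal=2.0):
    dx, dy, ratio, bg = params
    resid = obj - (ratio * nd_shift(psf, (dy, dx), order=1) + bg)
    # J_chi2: fidelity of the shifted/rescaled PSF to the object image,
    # summed over an annular subframe with unreliable pixels excluded.
    j_chi2 = np.sum(resid[fit_mask] ** 2)
    # J_neg: penalize over-subtraction (a zone of negative pixels).
    j_neg = np.sum(resid[resid < 0] ** 2)
    # J_bal: penalize median imbalance between the four quadrants.
    h, w = resid.shape
    quads = [resid[:h // 2, :w // 2], resid[:h // 2, w // 2:],
             resid[h // 2:, :w // 2], resid[h // 2:, w // 2:]]
    j_bal = np.sum((np.array([np.median(q) for q in quads])
                    - np.median(resid)) ** 2)
    return j_chi2 + a_neg * j_neg + a_bal * j_bal

def subtract_psf(obj, psf, fit_mask):
    """Fit (dx, dy, R, Bg) with the zeroth-order simplex, then subtract."""
    res = minimize(error_functional, x0=[0.0, 0.0, 1.0, 0.0],
                   args=(obj, psf, fit_mask), method="Nelder-Mead")
    dx, dy, ratio, bg = res.x
    return obj - (ratio * nd_shift(psf, (dy, dx), order=1) + bg)
```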
Infrared photometry of Sirius-B

In spite of the high image quality, photometric measurement of the companion seen in Fig. 1 is difficult. To obtain accurate photometric measurements, special care must be taken to remove systematic effects due to light contamination. The high level of surrounding residuals prevents the use of standard photometric estimation methods such as encircled energy summation. After assessing many methods using simulated data, the most accurate one proved to be a method based on PSF fitting of the source. It is unaffected by the surrounding residual structures produced by the PSF subtraction and depends only weakly on uncertainties in the estimate of the background in the image. The photometric measurements of the WD are given in Table 1 for the two epochs. The uncertainties given in this table were estimated by combining the actual surrounding residual uncertainties and the results of simulations based on Monte Carlo trials. Fluxes were converted into magnitudes using the filter responses and zero-points defined by Tokunaga (2000) in the MKO system, which do not differ significantly from the calibration given by Cohen et al. (2003b). The last column of Table 1 provides the average absolute magnitudes computed using the parallax distance determined by the Hipparcos satellite (ESA 1997). These are the first accurate JHK magnitudes for Sirius-B, and they can be compared to theoretical expectations. The optical photometry, as well as the temperature and gravity of the white dwarf (T = 25 190 K, log g = 8.556), were determined accurately using HST Balmer line spectroscopy (Barstow et al. 2005). Figure 2 shows the measured energy distribution of Sirius-B from optical to infrared, compared to the flux distribution of a pure blackbody at T = 25 190 K, scaled to the optical flux at 5500 Å. Also shown in the optical range is the synthetic WD spectrum interpolated in temperature and gravity from a grid of WD models with LTE atmospheres (Finley et al. 1997; Koester 2000, private communication). UBVRI fluxes were converted to the magnitude system of Cohen et al. (2003a). The flux measured at 1.104 µm from HST-NICMOS observations (Kuchner & Brown 2000) is also shown. The JHKs infrared fluxes are already in remarkable agreement with this simple blackbody extrapolation. In the infrared range, a comparison to more detailed synthetic colors of DA white dwarfs was performed by interpolating the grid of synthetic photometry provided by Holberg & Bergeron (2006) on the basis of LTE model atmospheres. In these new models, the JHKs magnitudes are computed in the filters and magnitude scale of Cohen et al. (2003b), which are equivalent to our measurements. Considering the remaining uncertainties in the WD mass and radius (Barstow et al. 2005), the model magnitudes for a pure H atmosphere white dwarf at the Sirius-B temperature and gravity were scaled to the V absolute magnitude (M_V = 11.422). The theoretical predictions for the Sirius white dwarf magnitudes are then M_J = 12.033, M_H = 12.120, and M_Ks = 12.213, which yield "observed minus theoretical" magnitude differences of 0.001 (+0.12/-0.09), -0.058 (+0.23/-0.16), and -0.309 (+0.18/-0.14) in the J, H, and Ks bands, respectively. The error bars were computed here not from an "assumed" normal distribution but from the true distribution of residual amplitudes, derived from the image statistics in the relevant region. They correspond to the exclusion of a false excess detection at a 99.64% confidence level. Whereas the J and H magnitudes accurately reproduce the predicted values, the K magnitude shows a small but significant excess of ~0.3 magnitude (see Figure 2).
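As a small aside on the magnitude bookkeeping used above, the sketch below converts a band flux into an apparent magnitude via a zero-point flux and then into an absolute magnitude using a parallax distance. The flux and zero-point values are placeholders, not the MKO calibration or the measured Sirius-B fluxes; only the parallax (~379 mas, Hipparcos) is the right order of magnitude for Sirius.

```python
# Placeholder values throughout; the parallax is approximately the Hipparcos
# measurement for Sirius (~379 mas), i.e. a distance of ~2.64 pc.
import math

def apparent_magnitude(flux, zeropoint_flux):
    """m = -2.5 log10(F / F0), with F0 the flux of a zero-magnitude source."""
    return -2.5 * math.log10(flux / zeropoint_flux)

def absolute_magnitude(m, parallax_mas):
    """M = m - 5 log10(d / 10 pc), with d = 1000 / parallax[mas] in parsecs."""
    d_pc = 1000.0 / parallax_mas
    return m - 5.0 * math.log10(d_pc / 10.0)

m_ks = apparent_magnitude(flux=2.0e-4, zeropoint_flux=1.0)  # placeholder flux
print(absolute_magnitude(m_ks, parallax_mas=379.2))
```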
Interestingly, a similarly small K excess was also recently measured for selected cool (T ≤ 12 000 K) isolated white dwarfs, for which an overall excess of flux at wavelengths longer than 2 µm, with characteristics consistent with circumstellar dust or debris, is found (von Hippel et al. 2007). In a Spitzer mid-infrared survey of 124 white dwarfs, four "dusty" white dwarfs were found whose dusty environments may represent the remains of planetary systems (Reach et al. 2005), and a metal-rich gas disk was discovered around a hotter (T = 22 500 K) WD, possibly associated with planetary debris material (Gaensicke et al. 2008). It is therefore possible that the small departure observed in the K band indicates similar circumstellar material around Sirius-B.

Looking for a third star

The high contrast images obtained here in the near infrared are useful for constraining the existence of a possible low-mass companion in the Sirius system. No point source other than Sirius-B can be detected in the field, down to a limiting sensitivity that depends on the filter used. We estimated our limiting magnitudes in each filter (J, H, Ks) by simulating a point source hidden in the residuals of the PSF-subtracted image. The minimum detectable magnitude in the different regions of the image was computed by evaluating the standard deviation and cumulative probability of the residuals in each sector. The detection limit was set from the true probability to correspond to a significance P ≥ 0.9, which in our case corresponds to a 10σ limit for the peak intensity detection and a 500σ limit for a PSF integrated intensity. Figure 3 shows the results for the Ks image. Significant azimuthal variations of up to ~1 mag are present in the image, with the variations decreasing toward the outer part of the image as the level of PSF subtraction residuals decreases with distance from Sirius. Typical limiting sensitivities as a function of distance to Sirius were derived by azimuthally averaging the residual levels of the PSF-subtracted image and are shown in Figure 4 for the different filters. The most robust constraints are obtained for the J and Ks filters. For the Ks filter, the upper limits range in absolute magnitude from M_Ks ~ 12.4 at 3″ from Sirius-A to M_Ks ~ 16.0 at 10″. At these levels, M dwarfs are already excluded. Using an extrapolation of the empirical mass-luminosity relations estimated by Delfosse et al. (2000), M_Ks ≥ 10, as in Figure 4, corresponds to a mass M ≤ 0.08 M_⊙. These magnitudes are only comparable with those observed for the faintest dwarfs known. In the last ten years, hundreds of L and T dwarfs have been discovered. We used the catalog of 71 L and T dwarfs of Knapp et al. (2004), in which 45 have known distances and therefore absolute K magnitudes (see their table 8). Figure 5 shows this selected observed sample together with our magnitude upper limits at the different distances from Sirius-A. Using the polynomial fit of Knapp et al. (2004, table 12), the upper limits at 5″ and 10″ correspond to spectral types later than T4.8 and T7.0, respectively.
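As a rough illustration of the sensitivity estimate described above, the sketch below measures residual fluctuations in annuli around the star and converts an n-sigma flux threshold into a limiting magnitude. It is a hypothetical Python reconstruction with a synthetic image and an assumed magnitude zero-point; the actual analysis also evaluated the cumulative probability of the residuals and used simulated point sources.

```python
# Hypothetical reconstruction; 'm_zero' is the magnitude of a unit-flux source.
import numpy as np

def limiting_magnitude_profile(residual, center, m_zero, n_sigma=10.0, dr=5):
    """Azimuthally averaged n-sigma limiting magnitude versus radius (pixels)."""
    y, x = np.indices(residual.shape)
    r = np.hypot(x - center[0], y - center[1])
    radii, limits = [], []
    for r_in in np.arange(0.0, r.max(), dr):
        ring = residual[(r >= r_in) & (r < r_in + dr)]
        if ring.size > 1:
            flux_limit = n_sigma * ring.std()        # n-sigma peak threshold
            radii.append(r_in + dr / 2.0)
            limits.append(m_zero - 2.5 * np.log10(flux_limit))
    return np.array(radii), np.array(limits)

rng = np.random.default_rng(2)
resid = rng.normal(0.0, 1e-3, (256, 256))            # synthetic residual map
r_pix, m_lim = limiting_magnitude_profile(resid, (128, 128), m_zero=20.0)
print(m_lim[:3])  # limiting magnitudes in the innermost annuli
```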
There is no simple mass-luminosity relation for L and T dwarfs, since many parameters (gravity, age, metallicity, ...) are involved (see Burrows et al. 2006), but an approximate estimation can be obtained from distributions computed from various models (Burgasser 2004). Independent estimations can also be obtained using theoretical L-T dwarf models. We used published spectral models for brown dwarf (Burrows et al. 2006) and planetary (Burrows et al. 2003) masses to compute the expected infrared magnitudes. We selected the models with solar abundances and an age of 250 Myr, appropriate for Sirius (Liebert et al. 2003), and derived the temperature and gravity corresponding to a given mass using the Burrows Brown Dwarf and Extra-Solar Giant Planet Calculator. The corresponding theoretical spectra were then convolved with the filter response to compute the magnitudes. Using the upper limit obtained at a shorter (1.1 µm) wavelength (Kuchner & Brown 2000), we applied the same method to derive limits of ~45 M_Jup and ~15 M_Jup closer to Sirius, at separations of 2″ and 3″, respectively. Together with the negative optical search in the wider (2.5° × 4°) field (Bonnet-Bidaud, Colas & Lecacheux 2000), this considerably weakens the possibility of a third star in the Sirius system. The high resolution achieved by adaptive optics also allows a search for a suspected faint star in a close orbit around Sirius-B. From an analysis of the orbit residuals, a ~6 yr periodicity is present, and a general three-body model indicates that possible stable orbits exist within a restricted range of masses M ≤ 0.038 M_⊙ (40 M_Jup) and semi-major axes a_0 = (1-2.5) AU (Benest & Duvent 1995). The reconstructed PSF of FWHM = 0.31″ ± 0.05″ in the Ks band is equivalent to a 0.8 AU separation from Sirius-B at the system distance; the companion could therefore be resolved in our image. At the position of Sirius-B (5″), the upper limit in our K image is M_Ks = 14.1. This corresponds to a theoretical mass of ~20 M_Jup (Table 2), lower than the predicted mass. Our limit therefore also excludes a suspected faint component to Sirius-B, unless the orientation is very unfavourable.

Conclusions

The infrared image of the Sirius field presented here is the first high contrast image in the JHK wavelength range. Despite the coronagraphic device associated with adaptive optics, light contamination from the bright Sirius-A dominates the field. A precise subtraction of the diffuse background and a careful calibration nevertheless enable us to derive very accurate constraints on the different objects in the field. The infrared absolute magnitudes of Sirius-B are determined to be M_J = 12.03 (+0.12/-0.09), M_H = 12.06 (+0.23/-0.16), and M_Ks = 11.90 (+0.18/-0.14). The J and H values are in excellent agreement with the theoretical model of a DA white dwarf at the temperature and gravity determined accurately from the HST observations. A small departure is visible in the K band, which may indicate possible circumstellar material around Sirius-B, similar to that observed around some selected "dusty" white dwarfs. This has yet to be confirmed by observations at longer (2-15 µm) wavelengths, where most of the "dust" emission is expected. The high quality image also allows a deep search for possible low-mass objects in the field. Although the residual background after subtraction shows significant azimuthal variations, the mean limiting magnitudes in the field reach the planetary limit for an object located at the Sirius distance. The deep field obtained here around Sirius provides a limit of (30-10) M_Jup in the (8-26) AU region, and complementary HST-NICMOS observations yield a similar limit down to 5 AU.
Since the most central part of the image (≤5 AU) has still not been covered, this does not fully eliminate the possibility of a third member in the system, but the probability of a triple system is now low.
2008-09-28T21:06:08.000Z
2008-09-28T00:00:00.000
{ "year": 2008, "sha1": "41c36f32069801698529ac99c828ad71976fffd3", "oa_license": null, "oa_url": "https://www.aanda.org/articles/aa/pdf/2008/38/aa8937-07.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "8c39592f56add0c1c594950770d48ff44bff626f", "s2fieldsofstudy": [ "Physics", "Geology" ], "extfieldsofstudy": [ "Physics" ] }
73441164
pes2o/s2orc
v3-fos-license
Condições pregressas e saúde no estudo "Saúde, Bem-Estar e Envelhecimento" (SABE)
Early life conditions and current health status as per the study "Health, Well-being and Aging" (SABE)
REV BRAS EPIDEMIOL 2018; 21(SUPPL 2): E180011.SUPL.2

ABSTRACT: Introduction: Childhood conditions can influence individual peculiarities of development and thus affect adult health. Objective: To evaluate associations between early life conditions and health, as reported in the SABE surveys of 2000, 2006, and 2010. Methods: Early life conditions refer to situations before the age of 15: economic status, hunger, health assessment, presence of diseases, and having lived in a rural environment for more than five years. The control variables were gender, schooling, and income. The outcome is self-rated health in the categories "Good" and "Bad". The analysis covered people between 60 and 65 years of age. Results: Bivariate analysis showed associations according to origin in the three cohorts. Economic status and having experienced hunger were also significant for those interviewed in 2006. In the multivariate analysis by Poisson regression, the element of comparison was the prevalence ratio. Rural origin was the only one of the early life conditions to show significance in the initial model. The control variables (gender, cohort, schooling) were also significant. The final model considered the variables significant in the initial one and an interaction between rural origin and the number of diseases. Cohort, gender, schooling, and, for individuals of rural origin, the number of diseases remained significant. This number was not associated with the outcome if the origin was urban. Conclusion: There are connections between early life conditions and the health of the elderly, which constitutes an important instrument for health care, both for the individual and for the community.

PREVIOUS HEALTH CONDITIONS

Diagnosing as early as possible, preventing diseases, and avoiding adverse health conditions in the elderly have stimulated studies and research on the history of diseases and their associations with consequent disabilities. The possible relationship between the context of early life and health conditions during aging has been studied in an attempt to at least identify hypotheses to be tested. Authors have confirmed a connection between childhood life conditions and functional performance in adult life, overall health, and mortality at older ages 1.
Barker and Bagby 2 indicated that a context of poverty in childhood, added to individual factors of development, can increase vulnerability to certain chronic diseases in old age. Blackwell, Hayward, and Crimmins 3 corroborated this theory by associating conditions in early life stages with diseases in the elderly. They suggested that individuals who were exposed to unfavorable social and economic conditions, as well as family conflicts and other situations in childhood, were at a greater risk of becoming ill from chronic diseases. Some diseases that can cause disability in the elderly, such as cancer, lung disease, cardiovascular disease, arthritis, and rheumatism, could result from problems in childhood. In addition, some authors wonder whether not considering childhood health when analyzing chronic diseases could lead to an overestimation of the effects of socioeconomic status in the analysis of health in adult life 3. One must consider that studies addressing the early periods of life present some challenges: they are retrospective, the information obtained is influenced by the memory of the informant, life conditions are reported only by those who are still alive, and some important events may not be mentioned.

SELF-REPORTED HEALTH

Self-reported health status can replace more expensive tests as a predictor of future disability, risk of hospitalization, and mortality, especially among older people 4,5. Even with its subjective connotations, information about actual health status has shown results similar to objective assessments and, therefore, is widely used in health research 6,7.
Self-reported health status is an important marker of overall life conditions, especially among the elderly. Thus, Lima-Costa, Firmo, and Uchoa 8 found associations between interviewees' reports and social support, effective health, and access to services. With regard to mortality, Maia, Duarte, and Lebrão 6 found that health self-rated as "bad" increases the risk of death by a factor of 2.69 compared with "good," "very good," or "excellent."

SELF-REPORTED HEALTH AND PREVIOUS HEALTH CONDITIONS IN THE STUDY HEALTH, WELL-BEING AND AGING (SABE)

In the SABE study, current health status questions were formulated in the classic format of five categories on a scale from "bad" to "excellent." Likewise, questions regarding the first 15 years of life of participants were applied to three samples of SABE. We can therefore evaluate the possible effects of previous health conditions on self-reported health in each sample interview. For example, with information from the sample of the year 2000, Santos, Oliveira, and Lebrão 8 concluded that having tuberculosis during the first 15 years of life was associated with elderly health self-rated as "bad," even when accounting for age and gender 9. Moreover, individuals aged 60 to 64 years in each survey can provide a clear picture of the different contexts in which they lived until 15 years of age, which is a rich opportunity to evaluate period effects. This study aimed at evaluating the possible effects of previous health conditions on the self-reported current health status of a group of elderly people interviewed in the three waves of the SABE study, which has been held in São Paulo, with phases completed in 2000, 2006, and 2010.

SABE AND COHORTS

SABE began as a multicenter study in seven cities of Latin America and the Caribbean. In Brazil, its first round was held in São Paulo in 2000, when 2,143 people aged 60 years and above were interviewed, representing the elderly population of the municipality. A second wave of interviews was held in 2006 and a third wave in 2010. At each stage, "survivors" of the previous sample were interviewed, and a new cohort aged 60 to 64 years was gathered, resulting in the object of analysis of this study. Three cohorts were thus studied (A, B, and C), all of whose members were born approximately in the following five-year periods: 1935-1940, 1940-1945, and 1945-1950. Each individual from each sample received the relative weighting of the sample design effects and poststratification, therefore being representative of the population of the municipality in the corresponding age range in the respective year. The samples totaled 426, 298, and 355 people in 2000, 2006, and 2010, respectively.
PREVIOUS CONDITIONS STUDIED

For the analysis of the context of the early stages of life of the elderly, SABE assessed the conditions in their first 15 years of life by means of the questions listed below. After each item, the variable used in data processing is noted:
• Economy - "How do you describe the economic situation of your family during most of the first 15 years of your life?";
• Health before 15 - "Would you describe your health as excellent, good, or bad in most of the first 15 years of your life?";
• Type of disease - "Before turning 15 years old, do you remember having had any of these diseases?: Nephritis, hepatitis, measles, tuberculosis, rheumatic fever, asthma, chronic bronchitis";
• Confined to bed - "Did you ever stay confined to bed for a month or more because of a health problem in the first 15 years of your life?";
• Famine - "Would you say that there was a time in the first 15 years of your life when you did not eat well enough or went through famine conditions?";
• Rural - "From your birth until 15 years old, did you live in the countryside for 5 years or more?".

DEPENDENT VARIABLE

The dependent variable, self-reported health, corresponds to the information obtained from question C01 of the SABE survey, that is, how the respondent assesses his/her current health status. Possible effects of the early stages of life on adult and elderly health status are certainly entangled with an individual's trajectory, including school history, income at the time of the interview, and gender. The very date of the interview may carry effects of the moments lived (present and past), hence the importance of considering the cohort as a variable in order to capture the so-called period effects.

INDEPENDENT VARIABLES AND COVARIABLES

Sociodemographic variables and the date of the interview (cohort) were taken as covariates; the independent variables were gender, education measured in years of study, and income measured as the position of the respondent in distribution tertiles. Independent variables related to previous conditions arose from the answers to the SABE questions:
• Economy - economic situation until turning 15 years old;
• Health 15 - health assessment until turning 15 years old;
• Famine - famine conditions until turning 15 years old;
• Nephritis, hepatitis, measles, tuberculosis, fever, asthma, and bronchitis - history of any of these diseases until turning 15 years old;
• Confined to bed - confined to bed for at least a month.
In addition to the direct responses, the variable "diseases" (number of diseases mentioned) was added.

ANALYSES

The analyses included a bivariate stage with a description of the samples by means of the relative distributions of each variable observed in all surveys, and the relative distribution of the dependent variable according to the covariates and the variables inherent to previous conditions. The distributions were obtained by expansion of the sample with relative weighting for the sample design and post-stratification. Thus, the results of each cohort represent estimates of the true population values. Rao-Scott 10 tests were applied to demonstrate possible associations, and results were considered significant when the p-value was lower than the significance level set at 0.05.
Multivariate analysis was performed with Poisson regression, which allows direct estimation of prevalence ratios (PR) 11, with self-reported health as the outcome. For these regressions, the variables indicating the presence or absence of each disease before the age of 15 years were not considered; instead, the variable "number of diseases" was used. This was mandatory to avoid large error ranges in the estimates, as the number of events was often low. Tuberculosis, for example, was reported by six participants: four in cohort A and one in each of the other two cohorts. This happens because early life events can truly be scarce, and only one age group was considered in our study. Two models were fitted: an initial one, with all variables (specific diseases replaced by the number of diseases), and a final one, which considered the variables that were significant in the first regression. Possible interactions of the independent variables with the variable "rural" were studied, and the significant ones were also included in the final model. Adjustments were also made for design and stratification weighting, with robust estimation of standard errors 12.
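As a sketch of the regression strategy described above, the following hypothetical Python example fits a Poisson model to a binary health outcome with robust (sandwich) standard errors, so that exponentiated coefficients can be read directly as prevalence ratios. Variable names and data are invented, and the survey design weights used in the study are omitted for brevity.

```python
# Invented data; the study additionally applied design and post-stratification
# weights. Exponentiated Poisson coefficients are prevalence ratios (PR).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "good_health": rng.integers(0, 2, n),      # 1 = self-rated health "good"
    "female": rng.integers(0, 2, n),
    "rural": rng.integers(0, 2, n),            # >= 5 rural years before age 15
    "n_diseases": rng.integers(0, 4, n),       # selected diseases before 15
    "school_years": rng.choice([0, 4, 8, 12], n),
})

# The rural * n_diseases term mirrors the interaction kept in the final model.
model = smf.glm("good_health ~ female + rural * n_diseases + school_years",
                data=df, family=sm.families.Poisson())
result = model.fit(cov_type="HC1")             # robust standard errors
print(np.exp(result.params))                   # prevalence ratios
```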
RESULTS

Table 1 shows the distributions of each variable in the three waves of the research. Noteworthy among the sociodemographic variables are the stability of the composition by gender, resulting from poststratification, and the significant improvement in schooling levels observed from the first to the last cohort. With regard to the context before 15 years of age, significantly worse economic conditions and a higher number of diseases were seen in the generation born during World War II. The percentages reflecting the rapid urbanization of that period were also relevant, as the samples showed a smaller proportion of people from the countryside in each phase of the study. The proportion of people who reported having had measles before the age of 15 years also decreased significantly, following the temporal trend in the country. The dependent variable likewise showed worse assessment of current health status among the generation born during the war, which suggests an association with the previous conditions reported. Table 2 shows the relations between the dependent and sociodemographic variables. The association with schooling was significant in all three surveys, reproducing the well-known result of a better health situation at higher levels of education. The difference between the genders was only significant in the last cohort, but males usually gave a better self-evaluation than females. Better self-assessments of health were also systematic in the highest tertiles of income, but the association was only significant in cohort A. The relations between health status and the previous conditions are shown in Table 3.

Rural origin was the only variable with a significant association in all three cohorts. The effect of this variable can be observed more easily because the distribution of its categories was more balanced. Furthermore, living in the countryside may entail a higher probability of the adverse conditions mentioned herein. Economic and famine conditions before the age of 15 years were significant for the cohort born during World War II. It is interesting to note that precisely in this cohort the proportion of people in good economic conditions and not exposed to famine before 15 years of age was higher, as seen in Table 1. Moreover, there were lower proportions of good health evaluations for the other category of each variable. In other words, the generation born during the war assessed their health as "bad" more often, and the difference between those who had and had not experienced adverse conditions was also greater. The interrelations between variables were addressed in the multivariate analysis. Table 4 contains the results of the initial model, with adjusted PR, standard errors with robust estimation, p-values, and the respective 95% confidence intervals. Considering only the significant PR, one can conclude that: (a) there was a period effect in cohort C, the latest, which showed better health assessment compared with cohort A; (b) males evaluated their health better than females; (c) highly educated individuals (12 years of school or more) also rated their health as "good" in greater proportion; and (d) health was more often rated "good" among those who did not spend at least 5 years of their lives in the countryside before 15 years of age. Living in a "rural" area is the only variable that presented a significant prevalence ratio among all the previous conditions reported. Again, it must be considered that this variable is the one with the least scarce cases and that it may carry the effects of other conditions on the outcome. Thus, it was convenient to study its possible interactions with the other variables and evaluate their significance. The only variables presenting a significant interaction were "rural" and "number of diseases" (p = 0.009). For the adjustment of the final model, the significant variables of the initial model were considered, and the variable "rural" was partitioned according to the number of diseases (Table 5). In the final model, cohort C (the latest) no longer presented a significant prevalence ratio, whereas cohort B did, with a lower prevalence of health rated "good" than in the reference cohort A. Being female was related to a lower prevalence of "good" health assessment compared with being male, and education also stands out. There is a gradient in the PR: people with more years of schooling had better self-rated health, with significant prevalence ratios in the groups with 4-11 and 12+ years of education. The interaction between the number of diseases reported before 15 years of age and people's origin was quite interesting. For those from urban areas, the number of selected diseases made no significant difference to current health. For those who came from the countryside, however, all categories differed significantly from the reference, and the PR increased with the number of diseases; that is, the more diseases reported, the worse the evaluation of current health.
DISCUSSION

This study has peculiarities that should be highlighted because of their unusual character and their relevance to the evidence found. Three groups of individuals of the same age were compared, all born in five-year periods that stand out in the history of the country: right before, during, and after World War II. This was when labor relations changed, trade unions became relevant and politically active 13, health care services were expanded 14, and public policies for education were established 15. But the locus of these transformations was mainly the city, and the echoes in the countryside were not immediate. Thus, the mechanisms that connect previous health conditions to the current health conditions of the elderly were expected to be more active and relevant in rural areas, as urban areas were constantly subjected to transformation and modernization. Five mechanisms connect the early life context to the health of the elderly: nutritional status, specific diseases, recurrent infections, chronic stress and stressful situations, and poor socioeconomic conditions 16. These mechanisms do not act homogeneously across the regions where individuals are raised; therefore, the place where a person lived during childhood has been identified as a predictor of diseases in adult life and old age 17. According to Poel, O'Donnell, and Van Doorslaer 18, children in urban areas enjoy better health conditions than children in rural areas of developing countries. As a result, and as predicted by the mechanisms mentioned, the self-reported health of people who spent their childhood in rural areas would be worse than that of people who had always lived in urban areas, as assessed and pointed out in Table 3, with significant differences in all three cohorts. It is important to note that the connections between early life conditions and the self-reported health of the elderly were established under the control of the main social variables, such as gender, education, and income. The adjusted regression therefore showed that, regardless of these variables, the presence of diseases in children raised in rural areas was associated with a poor evaluation of their current health. Some authors have pointed to the origin of the elderly as a possible marker of health and mortality 19,20, as found in this article: the number of selected diseases in the first 15 years of life was significantly associated with health assessed as "bad" at advanced ages if the individual had lived in the countryside. Access to care, education, information, food, and other conditions of the urban area are good potential predictors of this condition. Under the protection of the urban environment, compared with the rural environment, individuals had a better health status and reported it as better in old age; they therefore seemed to have a lower risk of negative health outcomes and even mortality, as shown by Van den Brink et al. 21.

CONCLUSIONS

The analyses have shown the influence of previous health conditions as reported by the elderly respondents in the three SABE surveys. Individuals who spent more than five of their first 15 years of life in rural areas rated their health as "bad" more often in all three cohorts. The occurrence of the selected diseases before 15 years of age in individuals who had lived for more than 5 years in the countryside during infancy was identified as an associated factor. For the others, this association was not significant.
As self-assessment is an important marker of health, function, and survival of the elderly, identifying the most remote conditions becomes relevant for the care of this population, either individually or in group care programs.

Table 1. Relative distribution of variables as per the cohorts studied.
Table 2. Percentages of elderly people who rated their health status as "good" in each cohort, according to sociodemographic variables.
Table 3. Percentages of elderly people who rated their health status as "good" in each cohort, according to previous health conditions.
Table 4. Poisson regression results for the variable self-reported health: initial model.
Table 5. Poisson regression results for the variable self-reported health: final model.
Alternative splicing of CCN mRNAs … it has been upon us

Variant CCN proteins have been identified over the past decade in several normal and pathological situations. The production of CCN truncated proteins has been reported in the case of CCN2 (ctgf), CCN3 (nov), CCN4 (wisp-1) and CCN6 (wisp-3). Furthermore, the natural CCN5 is known to lack the C-terminal domain that is present in all other members of the CCN family of proteins. In spite of compelling evidence assigning important biological activities to these truncated CCN variants, their potential regulatory functions have only recently begun to be widely accepted. The report of CCN1 (cyr61) intron 3 retention in breast cancer cells now confirms that, in addition to the well documented post-translational processing of full length CCN proteins, alternative splicing is to be regarded as another effective way to generate CCN variants. These observations add to a previous body of evidence supporting the existence of alternative splicing for other CCN genes. It has become clearly evident that we need to recognize these mechanisms as a means to increase the biological diversity of CCN proteins.

Introduction

The production of CCN proteins lacking one or more of the four structural modules (IGFBP, VWC, TSP and CT) that constitute the prototypic CCN protein (Bork 1993; Holbourn et al. 2008) has already been discussed (Perbal 2001, 2004; Planque and Perbal 2003). Aside from the natural CCN5 protein, which does not contain a CT domain (Perbal 2001), several other examples of truncated CCN proteins were identified in biological fluids, cell culture lysates, cell culture medium, and normal or tumor tissues (Perbal 2001, 2006; Leask and Abraham 2006; Holbourn et al. 2008). In the case of CCN2, the production of short variants in pig uterine flushings was hypothesized to result from proteolytic digestion of the full length protein (Brigstock et al. 1997). Indeed, inter-domain regions that show much less organization than the modules themselves (Holbourn et al. 2008) may be the target of proteolytic activity, as shown by the cleavage of CCN2 by the MMP2 metalloproteinase at the junction between the VWC and TSP1 modules (Dean et al. 2007). A truncated CCN3 variant deprived of both the IGFBP domain and the signal peptide that drives secretion of the CCN proteins (Joliot et al. 1992; Perbal 2001, 2004) was expressed in nephroblastoma tumor cells, because of myeloblastosis associated virus (MAV) insertional mutagenesis in the genome of the blastemal target chicken cells. CCN3 truncated isoforms that are deprived of the IGFBP and VWC modules and contain only the last two C-terminal modules (TSP1 and CT) were identified in most cultured cells and tissues that also express the full length protein (Perbal 1999, 2006; Su et al. 2001; Kyurkchiev et al. 2004; Bleau et al. 2007; Vallacchi et al. 2008). N-terminal sequencing of the truncated CCN3 protein contained in the cell culture medium of insect cells infected with a recombinant CCN3 baculoviral vector established that it was generated by a proteolytic cleavage occurring between domain II (VWC) and domain III (TSP1) of the full length CCN3 protein. Since the cleavage site was found to be identical to that used in the case of CCN2, we suggested that a common specific protease might be involved in the processing of CCN proteins that generates these variants.
Production of CCN variants via alternative splicing

The production of rearranged CCN variants as a result of alternative splicing has also been documented many times over several years. Indeed, a CCN4 protein lacking the VWC module II (WISP1v) was detected in scirrhous gastric carcinoma cells. This variant is encoded by an 840 nucleotide alternatively spliced mRNA species, missing the 260 nucleotides of exon 3 that encode the VWC module sequences in the wild type mRNA species (Tanaka et al. 2001). Both a similarly spliced message and a rearranged variant CCN4 protein were detected in invasive cholangiocarcinoma (Tanaka et al. 2003) and in the human chondrosarcoma-derived chondrocytic cell line HCS-2/8 (Yanagita et al. 2007). Interestingly, another spliced variant expressed in these cells was found to encode a single IGFBP module in which eight authentic amino acids at the C-terminus were replaced by 14 other residues (Yanagita et al. 2007). In addition to the expected full length transcript of WISP1/CCN4 (1,204 bp), two shorter transcripts of 943 bp and 750 bp were identified in the human hepatoma HuH-6 and HA22T/VGH cell lines (Cervello et al. 2004). Sequence analysis of the purified 943-bp fragment revealed that this variant lacks exon 3. Since the joining of exons 2 and 4 did not result in any reading frame shift, this variant mRNA species also encoded a CCN4 protein lacking the VWC module, as previously described by Tanaka et al. (2001, 2003). Exons 3 and 4 were not contained in the 750-bp CCN4 spliced variant, and the frameshift created by the joining of exons 2 and 5 in this RNA resulted in a premature translation arrest 38 residues downstream. As a consequence, the CCN4 variant protein expressed by this spliced message encoded only the IGFBP module. A longer CCN4 transcript containing an insertion of 64 bp between exons 3 and 4 was detected in HuH-6 and HuH-7 human hepatoma cells. The insertion of this short stretch caused a frameshift at residue 197 that resulted in a premature translation stop 22 residues downstream. As a consequence, a half CCN4 protein, containing only the two N-proximal IGFBP and VWC domains, was encoded by this spliced mRNA species.

The first evidence suggesting the existence of alternatively spliced CCN1 (cyr61) messages was obtained in my laboratory in the course of a study aimed at identifying the chromosomal localization and expression of CCN1 in human neuroblastoma and glioblastoma cell lines (Martinerie et al. 1997). Because the full length cDNA clone mapped to only one chromosomal location, at 1p22-p31, we proposed that the additional 3.5 kb CCN1 mRNA species detected in a few cell lines, in addition to the "canonical" 2.5 kb mRNA, likely corresponded to an alternatively spliced message. Since the focus of our studies was on the CCN3 gene and protein, we did not pursue the identification of this additional mRNA species. A few years later, another example of an alternatively spliced CCN1 mRNA species was documented in serum-induced normal human fibroblasts, which were shown to express a CCN1 message in which an in-frame deletion within exon 4 resulted in the production of a CCN1 protein deprived of the TSP1 module (Leng et al. 2002). The results now reported by Hirschfeld et al. (2009) confirm the existence of alternatively spliced CCN1 mRNA species.
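Before turning to that report, a side note on the frame arithmetic that recurs throughout these variants: whether a skipped exon, an insertion, or a retained intron preserves the downstream reading frame depends only on whether its length is a multiple of three. A toy check, using one hypothetical in-frame exon length alongside the 64-nt insertion mentioned above and the 131-nt intron retention discussed below (the 258-nt value is invented for illustration, not a CCN exon size):

```python
# Toy sketch: reading-frame consequences of a splice event.
# The 258-nt exon length is hypothetical; 64 nt and 131 nt come from
# the insertion and intron retention discussed in the text.

def frame_preserved(length_nt: int) -> bool:
    """A splice event keeps the downstream reading frame only if the
    number of nucleotides it removes or inserts is a multiple of 3."""
    return length_nt % 3 == 0

events = [("skip a 258-nt exon", 258),
          ("insert 64 nt between exons", 64),
          ("retain a 131-nt intron", 131)]
for name, length in events:
    verdict = "in frame" if frame_preserved(length) else "frameshift -> likely premature stop"
    print(f"{name}: {verdict}")
```

Note that a retained intron can also terminate translation without a frameshift if it carries in-frame stop codons, as reported for CCN1 intron 3.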
Interestingly, the new message identified in human breast cancer cell lines resulted from retention of the 131 nucleotide intron 3 that separates the exons encoding the VWC and TSP1 domains of CCN proteins. Since this intron contains two stop codons, the authors propose that the alternatively spliced message does not encode a full length protein. The organization of the various alternatively spliced CCN mRNAs identified thus far is depicted in Fig. 1.

In the case of CCN3, several observations suggested that variant CCN3 proteins might be produced in a regulated way in both normal and pathological conditions. Large amounts of a CCN3 32-38 kDa doublet were detected in the brain lysates of adult rats in which only small amounts of the full length CCN3 protein were detected (Su et al. 2001). Since these short variants were detected by the K19M antibody, which was raised against the C-terminal end of CCN3, we assumed that they were composed of the two last domains (TSP1 and CT) and that they might be generated through alternative splicing. In transfected glioma cells producing exogenous CCN3, a large amount of a half CCN3 protein was also detected in the cytoplasmic fraction, in addition to the truncated form that is usually secreted into the culture medium of cells producing CCN3 (Kyurkchiev et al. 2004). Although we did not investigate the mechanisms leading to the production of the intracellular variant CCN3 species, all these findings suggested the existence of two CCN3 variant species that might result from two different mechanisms: post-translational processing of the secreted full length protein, and alternative splicing leading to the intracellular short species. Aside from these two pieces of indirect evidence, we recently identified CCN3 variant proteins that most likely result from alternative splicing. In addition to the full length CCN3 protein, Wilms' tumors and normal human embryonic kidneys also expressed a CCN3 variant deprived of the TSP1 domain (Subramaniam et al. 2008). Also, 50% of Ewing's tumor cells were found to express a truncated CCN3 protein species lacking the VWC module (Perbal et al., submitted).

Biological significance of CCN alternative splicing

As previously discussed (Perbal 2001, 2004), the multimodular structure of the CCN proteins raises an interesting question as to the contribution of each module to the biological function(s) of the fully assembled protein. Either the activities of each module sum up, or they confer on the whole protein specific functions that might substitute for or add to the functions of the individual modules. The present consensus is to view the biological properties of the various CCN proteins as the result of both individual module activities and functional interactions between different modules (Leask and Abraham 2006; Yeger and Perbal 2007; Irvine et al. 2008). In view of the complex array of regulatory factors and receptors that physically interact with CCN proteins, the production of variants deprived of one or more elementary modules is expected to have profound biological effects. Not only can variant CCNs titrate receptors and other partners interacting with each individual module, and thereby interfere with the biological activity of the full length proteins, but the absence of a single module might also induce conformational changes that could modulate, either positively or negatively, the intrinsic biochemical functions of the resulting CCN protein. Various biological activities have been assigned to the CCN variants.
The amino-proximal half of CCN2 (which contains only the IGFBP and VWC domains) was reported to be an effective surrogate biomarker for fibrosis (Leask et al. 2009) and to mediate both myofibroblast differentiation and collagen synthesis, whereas the C-terminal half (composed of the TSP1 and CT domains) mediated fibroblast proliferation (Grotendorst and Duncan 2005). More recently, the recombinant IGFBP and VWC modules were reported to display stronger binding to aggrecan compared to the recombinant TSP1 and CT modules (Aoyama et al. 2009). In the case of CCN1, a mutant lacking the CT domain was unable to promote cell adhesion but conserved the chemotactic and growth-factor promoting activities of the full CCN1 protein (Grzeszkiewicz et al. 2001). The amino-truncated version of CCN3 (containing the first three modules) that was cloned from a MAV-induced nephroblastoma exhibited transforming properties in chicken embryo fibroblasts, whereas the full length CCN3 showed a growth inhibitory effect (Joliot et al. 1992; Planque et al. 2006).

Since half proteins show such specific biological properties, it is surprising that neither Hirschfeld and colleagues, nor the reviewers who evaluated their manuscript, considered the possibility that the truncated protein likely expressed from the CCN1 mRNA species that retains intron 3 (as stated by the authors themselves) might play a critical role in the breast cancer cells that contain this spliced variant. The use of the domain-specific CCN1 antibodies that are available would have permitted this important question to be addressed. This is especially critical in the context of this work, since the authors reported intron 3 skipping as the way to produce the full length active CCN1 protein (Hirschfeld et al. 2009). In other words, the strong correlation observed between intron 3 skipping and the invasive breast cancer phenotype might have resulted from the production of a full-length CCN1 in these cells, whereas in noncancerous tissues, only the amino-proximal half of CCN1 was expressed. These authors also report that the switch from intron retention to intron skipping was induced by hypoxia. This observation is in line with other results showing that alternative splicing of CCN mRNAs is tightly regulated and might affect the tumorigenic potential of cancer cells. Hence, the use of domain-specific antibodies allowed us to establish that CCN3 variants lacking the TSP1 domain were expressed in Wilms' tumors and that, in normal kidneys, the production of CCN3 lacking the TSP1 domain is developmentally regulated (Subramaniam et al. 2008). Furthermore, we established that in the case of Ewing's tumors, the increased level of variant CCN3 in tumor cells reduces their tumorigenic potential and results in a better outcome.

Conclusion

The existence of alternative splicing leading to the production of variant CCN proteins is well documented and should be regarded as a means of increasing the biological functions of this fascinating family of proteins. Whether the variant forms detected in normal and pathological conditions antagonize or synergize with the functions of full length CCN proteins remains to be clarified. In some cases, the biological activities of variant proteins were associated with particular phenotypes, especially in the tumor cells in which these variants were identified. However, in other cases the origin of the variants has not been clearly established, even though it is highly probable that alternative splicing is responsible for their production.
When we first established that CCN3 was involved in the development of the human brain (Su et al. 2001), we used the K19M polyclonal anti-peptide antibody directed against the C-terminal end of CCN3 (Chevalier et al. 1998). As a consequence, immunocytochemistry experiments performed with this antibody did not permit us to distinguish between positive staining due to the presence of a full-length protein and staining due to an amino-truncated CCN3 protein. Nor was the K19M antibody able to identify the variant forms lacking one particular domain. Only the detection of large quantities of a low molecular weight CCN3 variant in rat brain lysates (Su et al. 2001) raised the possibility that alternative splicing was involved and that the short amino-truncated CCN3 might play a critical role in this tissue at this particular developmental stage. The recent results that we have obtained in the course of studies performed on Ewing's tumors and Wilms' tumors established that domain-specific antibodies, such as those we derived for CCN3, are invaluable tools for identifying all the CCN isoforms contained in cells and tissues. The existence of variants generated by both post-translational processing and alternative splicing can no longer be ignored by the scientific community. Future progress in understanding the role of CCN proteins in normal and pathological conditions will rely on the thorough characterization of all isoforms, which requires the use of appropriate biological tools.

Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License, which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
Pharmacokinetics of Nintedanib in Subjects With Hepatic Impairment

Abstract

Nintedanib is an intracellular inhibitor of tyrosine kinases used in the treatment of non-small cell lung cancer and idiopathic pulmonary fibrosis (IPF). This phase 1 open-label study investigated the influence of mild and moderate hepatic impairment on the pharmacokinetics (PK), safety, and tolerability of nintedanib following oral administration of a single 100-mg dose. Subjects with hepatic impairment classified as Child-Pugh A (mild hepatic impairment) or Child-Pugh B (moderate hepatic impairment) were eligible. The control group comprised healthy matched subjects. Primary end points were Cmax and AUC0–∞ of nintedanib. Thirty-three subjects received nintedanib (8 in each of the Child-Pugh A and Child-Pugh B groups and 17 controls). The shape of the plasma concentration–time curve for nintedanib was similar between Child-Pugh A or B and healthy subjects. Nintedanib exposure was ∼2-fold higher in Child-Pugh A subjects and ∼8-fold higher in Child-Pugh B subjects than in healthy subjects. Adverse events were reported in 3 Child-Pugh B subjects (37.5%), no Child-Pugh A subjects, and 3 healthy subjects (17.6%). In conclusion, exposure to nintedanib was higher in Child-Pugh A and B subjects than in matched healthy subjects. A single dose of nintedanib 100 mg had an acceptable safety and tolerability profile in subjects with hepatic impairment. Results of this dedicated phase 1 study are in line with exploratory investigations into the PK of nintedanib in patients with advanced solid tumors or IPF and hepatic impairment.

Nintedanib (formerly known as BIBF 1120) is a potent intracellular inhibitor of tyrosine kinase receptors, including vascular endothelial growth factor receptors 1-3, fibroblast growth factor receptors 1-3, and platelet-derived growth factor receptors α and β, and of nonreceptor members of the Src family. 1 By binding competitively to the adenosine triphosphate sites of these receptors, nintedanib blocks autophosphorylation and so inhibits the downstream intracellular signaling cascades necessary for the proliferation, migration, and survival of endothelial cells, pericytes, and fibroblasts. 1,2 Nintedanib is approved in the European Union in combination with docetaxel for the treatment of non-small cell lung cancer (NSCLC) of adenocarcinoma histology after first-line chemotherapy, and for the treatment of idiopathic pulmonary fibrosis (IPF) in several countries and regions, including the European Union and the United States. 3-5 The recommended dose of nintedanib is 200 mg twice daily in combination with docetaxel in patients with NSCLC and 150 mg twice daily in patients with IPF, with each dose taken approximately 12 hours apart with food. 3-5 The pharmacokinetic (PK) properties of nintedanib are comparable in healthy volunteers, patients with IPF, and patients with advanced solid tumors. Following oral administration, nintedanib is rapidly absorbed and reaches maximum plasma concentration after approximately 2-4 hours; steady state is reached within 7 days of dosing. 3,4 Nintedanib undergoes extensive first-pass metabolism and displays at least biphasic disposition kinetics, with a terminal half-life of 10-15 hours. 3,4,6,7 Nintedanib is metabolized by methyl ester cleavage to form the carboxylate derivative BIBF 1202 as the predominant metabolite, which is glucuronidated by UGT enzymes to form BIBF 1202 glucuronide.
7 The absolute bioavailability of nintedanib 100 mg in healthy volunteers is approximately 5%. 6 Nintedanib exposure is approximately 20% higher when it is administered after food intake, and its absorption is delayed (from approximately 2 to 4 hours) compared with fasted conditions. 5 Nintedanib is eliminated primarily (>90%) through biliary/fecal excretion, with renal excretion playing a negligible role. It is a high-clearance drug, with total body clearance in healthy subjects of 1390 mL/min. 6 In patients with solid tumors 8-10 or IPF, 11 PK variables show moderate to high interindividual variability across dose groups. The PK properties of nintedanib are linear with respect to dose and time. 4,6,8,10

Hepatic impairment may increase plasma concentrations of nintedanib. 3,4 Patients with elevated aspartate aminotransferase (AST), alanine aminotransferase (ALT), or total bilirubin >1.5 × the upper limit of normal (ULN) at screening were excluded from clinical trials of nintedanib in patients with NSCLC or IPF, 10,12-15 except in the LUME-Lung 2 trial of nintedanib in patients with NSCLC, in which ALT or AST elevation up to 2.5 × ULN was permitted for patients with liver metastases. 16 This dedicated study investigated the influence of mild and moderate hepatic impairment (Child-Pugh A and B) on the PK, safety, and tolerability of nintedanib following oral administration of a single 100-mg dose.

Study Design

The study was approved by local ethics committees and was carried out in compliance with the protocol, the principles of the Declaration of Helsinki, International Conference on Harmonization Good Clinical Practice guidelines, and applicable regulatory requirements. All subjects provided written informed consent before study entry. The study was registered on clinicaltrials.gov (NCT02191865). This was a phase 1 open-label, single-dose, parallel-group, matched-group study. Subjects received a single dose of nintedanib 100 mg, administered as an oral soft-gelatin capsule with 240 mL of water under fed conditions. There was a 28-day posttreatment follow-up period.

Subjects

Individuals aged 18-79 years with a body mass index (BMI) of 18.5-34 kg/m² were eligible to participate. Subjects with hepatic impairment were classified as Child-Pugh A or Child-Pugh B. 17 The Child-Pugh criteria assess the severity of hepatic impairment taking into account measures beyond elevations in hepatic enzymes, such as prothrombin time, the degree of ascites, and the grade of hepatic encephalopathy. Each measure is assigned a score of 1-3, with a higher score indicating worse hepatic impairment. Child-Pugh A (total score, 5-6) indicates mild hepatic impairment, whereas Child-Pugh B (total score, 7-9) indicates moderate hepatic impairment. For safety reasons, Child-Pugh A subjects were dosed prior to Child-Pugh B subjects. Child-Pugh A or B subjects had to have hepatic insufficiency diagnosed ≥3 months before screening and an estimated glomerular filtration rate (eGFR) >40 mL/min/1.73 m², according to the Modification of Diet in Renal Disease (MDRD) formula, at screening. Healthy subjects were matched by sex, and by age (±10 years), weight (±10%), race, and smoking habits (current versus former or never smokers), as these factors were known to influence the PK of nintedanib. 4,5,18,19 Healthy subjects had to have an eGFR >70 mL/min/1.73 m² (MDRD) at screening.
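To make the two screening computations concrete, the sketch below scores the Child-Pugh class from the five pre-scored components and computes an MDRD eGFR. The helper names and example values are hypothetical, and the coefficients shown are the widely published 4-variable MDRD equation, which may differ from the study's exact version:

```python
# Hedged sketch of the Child-Pugh class and MDRD eGFR screening checks.
# Names and values are illustrative; coefficients are the common
# 4-variable MDRD equation, not necessarily the study's exact formula.

def child_pugh_class(bilirubin: int, albumin: int, coagulation: int,
                     ascites: int, encephalopathy: int) -> str:
    """Each component is pre-scored 1-3; total 5-6 = A (mild),
    7-9 = B (moderate), 10-15 = C (severe)."""
    total = bilirubin + albumin + coagulation + ascites + encephalopathy
    if total <= 6:
        return "A"
    return "B" if total <= 9 else "C"

def mdrd_egfr(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
    """4-variable MDRD estimate in mL/min/1.73 m^2 (serum creatinine in mg/dL)."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Example: component scores 2+1+1+2+1 = 7 -> Child-Pugh B; the eGFR must
# additionally exceed 40 mL/min/1.73 m^2 for a hepatically impaired subject.
print(child_pugh_class(2, 1, 1, 2, 1))          # B
print(round(mdrd_egfr(1.4, 62, False, False)))  # ~51
```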
Subjects with significant or recent acute gastrointestinal disorders with diarrhea as a major symptom, or a history of gastrointestinal bleeding within the past 3 months, were excluded from the study. Subjects who were moderate or heavy smokers (>10 cigarettes or 3 cigars or 3 pipes per day) were excluded. As nintedanib is a substrate of P-glycoprotein, 6 subjects who used potent P-glycoprotein inhibitors or inducers were excluded. Subjects with hepatic impairment who had significant diseases other than the underlying diagnosis causing hepatic impairment and diseases related to it were excluded, as were subjects with hepatic impairment and severe cerebrovascular or cardiac disorders (eg, myocardial infarction <6 months prior to administration of study drug, congestive heart failure of New York Heart Association grade III or IV, or severe arrhythmia).

End Points

The primary end points were Cmax (maximum concentration in plasma) and AUC0–∞ (area under the concentration–time curve in plasma from time 0 extrapolated to infinity) of nintedanib. Secondary end points were AUC0–tz (area under the concentration–time curve in plasma from time 0 to the last quantifiable plasma concentration) of nintedanib and the proportion of subjects with adverse events (AEs) between administration of nintedanib and the end of the 28-day posttreatment follow-up period. PK samples were taken until day 8 after the administration of nintedanib. Further end points included other PK parameters of nintedanib and its metabolites BIBF 1202 and BIBF 1202 glucuronide, including tmax (time from last dosing to maximum concentration in plasma), t1/2 (terminal half-life in plasma), and renal clearance. Safety was assessed via clinical laboratory tests, vital signs, 12-lead electrocardiogram (ECG), and physical examination. AEs were coded according to the Medical Dictionary for Regulatory Activities, version 17.1. Investigators reported the possible relationship between nintedanib and AEs based on their own judgment. In addition, the plasma protein binding of nintedanib and BIBF 1202 was determined in predose plasma samples after ex vivo spiking with 100 ng/mL [14C]-radiolabeled nintedanib or [14C]-radiolabeled BIBF 1202. Protein binding was measured in vitro using equilibrium dialysis and quantification of radioactivity by liquid scintillation counting. The mean protein-bound fractions of nintedanib and BIBF 1202 were calculated for each group of subjects.

Statistical and Pharmacokinetic Methodology

Pharmacokinetic analysis was performed using WinNonlin (Certara, Princeton, NJ). An analysis of variance model on the logarithmic scale was used for the analysis of the AUC0–∞, Cmax, and AUC0–tz of nintedanib in Child-Pugh A or B subjects compared with healthy controls. The model included hepatic status as a fixed effect and matched pair as a random effect. SAS version 9.2 (SAS Institute, Cary, NC) was used for statistical analysis. Other parameters were analyzed descriptively. The treated set comprised subjects documented to have taken the dose of study drug. The PK set comprised subjects documented to have taken the dose of study drug who provided at least 1 observation for at least 1 primary end point that was judged as evaluable and was not affected by important protocol violations relevant to the evaluation of PK.
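As a rough illustration of how these noncompartmental end points are derived from a concentration–time profile, the sketch below computes Cmax, tmax, AUC0–tz by the linear trapezoidal rule, the terminal rate constant λz by log-linear regression over the final sampling points, t1/2 = ln 2/λz, and AUC0–∞ = AUC0–tz + Clast/λz. The profile and the choice of three terminal points are invented; validated NCA software such as WinNonlin applies additional point-selection rules not reproduced here.

```python
import numpy as np

# Hypothetical concentration-time profile (h, ng/mL); not study data.
t = np.array([0.5, 1, 2, 3, 4, 8, 12, 24, 48])
c = np.array([4.0, 9.0, 14.0, 15.5, 13.0, 7.0, 4.2, 1.5, 0.2])

cmax, tmax = c.max(), t[c.argmax()]

# AUC from time 0 to the last quantifiable concentration,
# linear trapezoidal rule computed explicitly.
auc_0_tz = float(np.sum((c[1:] + c[:-1]) / 2 * np.diff(t)))

# Terminal rate constant: log-linear regression over the last 3 points
# (real NCA software selects these points by explicit rules).
slope, _ = np.polyfit(t[-3:], np.log(c[-3:]), 1)
lambda_z = -slope
t_half = np.log(2) / lambda_z

# Extrapolate to infinity using the last observed concentration.
auc_0_inf = auc_0_tz + c[-1] / lambda_z

print(f"Cmax={cmax} ng/mL at tmax={tmax} h")
print(f"AUC0-tz={auc_0_tz:.1f}, AUC0-inf={auc_0_inf:.1f} ng*h/mL, t1/2={t_half:.1f} h")
```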
A sample size of 24 to 32 subjects was regarded as adequate to fulfill the study objectives. This was not based on a statistical power calculation but was assessed as adequate to attain reliable results and to fulfill the objectives and requirements of the study, in line with regulatory guidance on PK studies in patients with hepatic impairment. 20

Baseline Characteristics and Subject Disposition

A total of 34 subjects were screened, and 33 received nintedanib (8 subjects in each of the Child-Pugh A and Child-Pugh B groups and 17 matched healthy subjects). Of the 33 subjects included in the treated set, 30 subjects (90.9%) were included in the PK set. One healthy subject vomited shortly after nintedanib administration, and 2 healthy subjects were excluded because of protocol violations. Baseline characteristics of the treated set are shown in Table 1. The majority of subjects were male (60.6%), mean age was 58.1 years, and mean BMI was 27.3 kg/m². All subjects were white. There were no relevant differences in demographic characteristics between the Child-Pugh A, Child-Pugh B, and healthy control groups. All subjects completed the planned observation time.

Pharmacokinetics

Median Cmax was reached 3-4 hours after nintedanib administration (range, 1-8 hours), with no clear difference between Child-Pugh A, Child-Pugh B, and healthy subjects. After reaching Cmax, plasma concentrations declined in an at least biphasic manner. The shape of the plasma concentration–time curve for nintedanib was similar between Child-Pugh A or B and healthy subjects (Figure 1). Other PK parameters are shown in Table 3. The fraction of the nintedanib dose excreted in urine was higher in Child-Pugh A and B subjects than in healthy subjects; however, renal clearance was comparable across groups. The t1/2 was slightly prolonged in Child-Pugh A and B subjects compared with healthy subjects, with no difference between the Child-Pugh A and Child-Pugh B groups. Similar to nintedanib, BIBF 1202 exposure was ≈2-fold higher in the Child-Pugh A group than in healthy subjects (Table 3). AUC0–∞ and Cmax were ≈17-fold and ≈9-fold higher, respectively, in Child-Pugh B subjects than in healthy subjects. For BIBF 1202 glucuronide, Cmax was comparable across groups, whereas AUC0–∞ was ≈1.5-fold and ≈3.5-fold higher in Child-Pugh A and B subjects, respectively, compared with healthy subjects (Table 3). The t1/2 values for BIBF 1202 and BIBF 1202 glucuronide were longer in Child-Pugh A and B subjects than in healthy subjects.

Safety

AEs were reported in 3 Child-Pugh B subjects (37.5%), no Child-Pugh A subjects, and 3 healthy subjects (17.6%); see Table 5. The most frequently reported AE was nausea, reported by 1 Child-Pugh B subject (12.5%) and 2 healthy subjects (11.8%). All AEs were assessed as possibly drug related by the investigator. No clinically relevant abnormal findings and no AEs related to laboratory data, ECG data, or vital signs were reported. No severe or serious AEs were reported.
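The fold-differences reported above are geometric mean ratios obtained on the log scale. A minimal sketch of that computation follows, with invented exposure values and a simple unpaired normal-theory interval standing in for the study's ANOVA with matched pair as a random effect:

```python
import numpy as np
from scipy import stats

# Invented AUC0-inf values (ng*h/mL); not the study's data.
auc_impaired = np.array([220, 310, 180, 260, 400, 290, 350, 240])
auc_healthy = np.array([120, 150, 100, 160, 140, 110, 170, 130])

log_diff = np.log(auc_impaired).mean() - np.log(auc_healthy).mean()
gmr = np.exp(log_diff)  # geometric mean ratio, e.g. "~2-fold higher"

# 90% CI from the pooled standard error on the log scale (unpaired
# approximation; the study modeled matched pairs as a random effect).
se = np.sqrt(np.log(auc_impaired).var(ddof=1) / len(auc_impaired)
             + np.log(auc_healthy).var(ddof=1) / len(auc_healthy))
df = len(auc_impaired) + len(auc_healthy) - 2
lo, hi = np.exp(log_diff + np.array([-1, 1]) * stats.t.ppf(0.95, df) * se)
print(f"GMR = {gmr:.2f} (90% CI {lo:.2f}-{hi:.2f})")
```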
Discussion

In this study, after a single dose of nintedanib 100 mg, the shape of the nintedanib plasma concentration–time curve was similar between subjects with mild or moderate hepatic impairment and healthy subjects, but exposure to nintedanib was ≈2-fold higher in Child-Pugh A subjects and ≈8-fold higher in Child-Pugh B subjects than in matched healthy controls. The protein-bound fraction of nintedanib was >99% across the groups. PK observations point toward an increase in the bioavailable fraction of nintedanib in subjects with hepatic impairment (ie, higher exposure in subjects with hepatic impairment than in healthy subjects, with comparable plasma concentration–time profiles). This is in line with nintedanib being a high-clearance drug with high first-pass metabolism. 7 Renal clearance of nintedanib was not influenced by impaired hepatic elimination. As observed for nintedanib, Cmax and AUC0–∞ for the metabolites BIBF 1202 and BIBF 1202 glucuronide demonstrated increased exposure with increasing hepatic impairment. The effect of hepatic impairment on exposure was more pronounced for BIBF 1202, but less pronounced for BIBF 1202 glucuronide, than for nintedanib. This can be interpreted as a consequence of first-pass metabolism and the different metabolic pathways of nintedanib (metabolized via ester cleavage), BIBF 1202 (metabolized via glucuronidation), and BIBF 1202 glucuronide (fecal excretion), as hepatic impairment may interfere mostly with the glucuronidation process. The design of this study was in line with regulatory guidance on PK studies in patients with hepatic impairment, which recommends that PK studies should be carried out when hepatic impairment is likely to significantly alter the PK of a drug and/or its active metabolites, and a posology adjustment may be needed to ensure the efficacy and safety of the drug in these patients. 18,20 A single-dose study is sufficient when a drug and its active metabolites exhibit linear and time-independent pharmacokinetics. 18,20 Categorization of hepatic impairment using the Child-Pugh classification is appropriate in this setting. 20 In this study, a single dose of 100 mg was used for tolerability reasons; however, the PK results are transferable to multiple doses of nintedanib because of its linear PK properties with respect to dose and time. 6 In all groups, the single dose of nintedanib 100 mg was well tolerated.

In addition to this dedicated study, supportive data on the PK of nintedanib in patients with hepatic impairment have been collected as part of the clinical development programs in oncology and IPF. Despite some differences in the classification of hepatic impairment, data on the PK of nintedanib in individuals with mild hepatic impairment are aligned across data sets. The PK data from this study are in accordance with 2 phase 1 dose-escalation studies of open-label nintedanib in Asian (n = 39; NCT00987935) and European (n = 32; NCT01004003) patients with impaired hepatic function and advanced hepatocellular carcinoma (HCC), in whom nintedanib was rapidly absorbed, with maximum plasma concentrations achieved ≈2-3 hours after administration, and with at least biphasic disposition kinetics (BI, data on file). Patients in these studies were stratified into groups according to ALT, AST, and Child-Pugh score at baseline: group I comprised patients with ALT and AST ≤2 × ULN and Child-Pugh score 5-6, whereas group II comprised patients with ALT or AST >2 to ≤5 × ULN or Child-Pugh score 7.
Group II criteria were chosen such that the group could comprise Child-Pugh A or B patients. The majority of patients recruited for group II were Child-Pugh A. In both studies, nintedanib exposure was ≈2-fold higher in patients in group II compared with group I (BI, data on file). When PK data from the study in European patients with advanced HCC and impaired hepatic function (Child-Pugh A; n = 32) were compared with PK data from patients with renal cell carcinoma and normal hepatic function (n = 64), 21 a 1.6- to 1.7-fold higher exposure to nintedanib was observed in patients with advanced HCC and impaired hepatic function (BI, data on file). Two population PK analyses were performed to characterize the PK of nintedanib and to evaluate the effect of intrinsic and extrinsic patient factors on the PK of nintedanib (BI, data on file). One analysis included 849 patients with NSCLC and 342 patients with IPF; the second included 933 patients with IPF. No Child-Pugh categorization was available for these patients. Therefore, in both analyses, patients were defined as having mild hepatic impairment if AST or ALT or bilirubin levels were >ULN, but AST and ALT were ≤10 × ULN and bilirubin was ≤1.5 × ULN, at the start of treatment. The number of patients with mild hepatic impairment was 116 in the first analysis and 44 in the second. A trend toward elevated nintedanib exposure of up to 1.4-fold was observed in patients with mild hepatic impairment compared with patients with normal hepatic function. Because of missing information on underlying hepatic disease, a robust assessment of the effect of hepatic impairment defined by elevation of transaminases or bilirubin on nintedanib exposure was not possible. Treatment with nintedanib is not recommended for patients with moderate or severe hepatic impairment (Child-Pugh B or C). 3-5 No adjustment of the starting dose of nintedanib is recommended for patients with mild hepatic impairment (Child-Pugh A) and NSCLC, 3 but the recommended dose of nintedanib for patients with IPF and mild hepatic impairment (Child-Pugh A) is 100 mg twice daily. 4,5 The key data from the current study are included in the latest EU and US prescribing information for nintedanib. 3-5

Conclusions

In conclusion, this dedicated study provides the most robust data on the PK of nintedanib in patients with mild or moderate hepatic impairment. In subjects with mild and moderate hepatic impairment (Child-Pugh A and B), exposure to a single dose of nintedanib 100 mg was higher in both groups than in matched healthy subjects. Nintedanib had an acceptable safety and tolerability profile in subjects with hepatic impairment. The PK of nintedanib has not been investigated in patients with severe hepatic impairment (Child-Pugh C).
The role of ncRNA regulatory mechanisms in diseases—case on gestational diabetes

Abstract

Non-coding RNAs (ncRNAs) are a class of RNA molecules that do not have the potential to encode proteins. Meanwhile, they can occupy a significant portion of the human genome and participate in gene expression regulation through various mechanisms. Gestational diabetes mellitus (GDM) is a pathologic condition of carbohydrate intolerance that begins or is first detected during pregnancy, making it one of the most common pregnancy complications. Although the exact pathogenesis of GDM remains unclear, several recent studies have shown that ncRNAs play a crucial regulatory role in GDM. Herein, we present a comprehensive review of the multiple mechanisms of ncRNAs in GDM along with their potential role as biomarkers. In addition, we investigate the contribution of deep learning-based models in discovering disease-specific ncRNA biomarkers and elucidating the underlying mechanisms of ncRNAs. This might assist community-wide efforts to obtain insights into the regulatory mechanisms of ncRNAs in disease and guide a novel approach for the early diagnosis and treatment of disease.

INTRODUCTION

Approximately 75% of human genes are transcribed into RNA, but only 3% of these transcripts are translated into protein-coding mRNAs [1]. The remaining non-coding RNAs (ncRNAs) are a class of RNA molecules that do not have the potential to code for proteins; instead, they play crucial roles in various biological processes directly through their transcripts [2]. Common types of ncRNAs include microRNAs (miRNAs), long non-coding RNAs (lncRNAs) and circular RNAs (circRNAs). With the development of high-throughput sequencing technology and bioinformatics, numerous studies have shown that ncRNAs play critical roles in the occurrence and development of various types of diseases, such as cardiovascular disease, cancers and diabetes. This has given ncRNAs significant potential as diagnostic biomarkers and therapeutic targets, particularly miRNAs, which have been demonstrated to be critical regulatory factors in cardiovascular risk and cellular responses. For instance, in heart failure, miRNAs may regulate pathways such as cardiac hypertrophy, inflammatory response, regeneration and angiogenesis through changes in their own expression or by binding to target mRNAs [3]. Recent studies have also indicated promising prospects for small, non-coding nucleic acid therapeutics in the clinical treatment of cardiovascular diseases [4]. These findings suggest that ncRNAs have great potential in diagnosing and treating diseases. Researching their expression and molecular mechanisms is beneficial for deepening our understanding of disease occurrence and development.
Gestational diabetes mellitus (GDM) is currently one of the most common complications during pregnancy, characterized by the development of chronic insulin resistance and high blood glucose levels occurring in the mid to late second trimester (weeks 13-26) or early third trimester (weeks 27-40) of pregnancy [5,6]. Although hyperglycemia subsides after delivery, GDM may cause various complications during pregnancy and delivery. These symptoms can have short-term and even long-term adverse effects on the mother, fetus and offspring. In the short term, women who develop GDM have a higher risk of adverse pregnancy outcomes including gestational hypertension, preeclampsia and premature delivery [7,8]. In the long term, women with a history of GDM and their offspring are prone to metabolic disorders and have an increased risk of developing conditions such as type 2 diabetes mellitus and cardiovascular disease [9-11]. The prevalence of GDM is on the rise worldwide [12]. However, due to variations in GDM screening and diagnostic criteria among countries, the recorded prevalence of GDM varies significantly, ranging from 1% to >30% [5]. Using the International Association of the Diabetes and Pregnancy Study Groups (IADPSG) criteria, the prevalence of GDM was 17.8% (range 9-26%) in a multinational cohort of women across 15 centers in the Hyperglycemia and Adverse Pregnancy Outcome (HAPO) study [13]. Similarly, when applying the IADPSG diagnostic criteria, a systematic review and meta-analysis that included 25 cross-sectional or retrospective studies conducted in mainland China, involving 79,064 Chinese participants, revealed a high overall incidence rate of GDM among Chinese women, reaching 14.8% [14]. Given the various adverse effects of GDM on the health outcomes of both mothers and offspring, as well as the increasing incidence rates, it is crucial to elucidate the pathogenesis of GDM and identify effective biomarkers for early prevention, risk assessment, disease diagnosis and targeted therapies.

Currently, treatment strategies for GDM mainly include insulin therapy, metformin therapy, probiotic supplementation and vitamin D supplementation [15]. It has been shown that metformin treatment alters the expression levels of ncRNAs in diabetes mellitus [16]. As ncRNAs are important participants in metabolic processes, there is a significant association between their altered expression and GDM. Du et al. reviewed the role of aberrantly expressed ncRNAs in the placenta in GDM and pregnancy-related complications. They found that these ncRNAs are associated with abnormal placental structures, metabolic disturbances and pathological features of GDM. In addition, placenta-specific ncRNAs may serve as potential diagnostic biomarkers and therapeutic targets for GDM [17]. However, the exact mechanisms of ncRNAs in the development of GDM and their potential as therapeutic targets are not entirely understood. Thus, identifying ncRNA biomarkers and understanding their functions might contribute to elucidating the complex pathophysiological mechanisms of GDM. This, in turn, could help improve the early prevention and diagnosis of GDM, enabling early intervention and personalized medicine for GDM patients. Such advancements are of significant importance in enhancing the health status of both pregnant women and their offspring.
In this review, we summarize recent research on the dysregulated regulatory mechanisms of ncRNAs in GDM and offer crucial suggestions for utilizing deep learning models to discover new biomarkers and predict potential mechanisms in the future.

Long non-coding RNAs

Long non-coding RNAs are transcripts longer than 200 nucleotides (nt) that do not encode proteins. They are mainly located in less conserved regions of the genome [18]. Most lncRNAs are transcribed by RNA Polymerase II (RNAP2) in the nucleus from intergenic regions, from within introns of protein-coding genes, or from the antisense strand of genes. To enhance their stability, primary transcripts are generally 3′-polyadenylated, 5′-capped and alternatively spliced [19,20], resulting in mRNA-like characteristics. In addition, they can also be transcribed from conserved genomic regions and by reverse splicing of exons [21]. The final step in lncRNA biogenesis is the formation of thermodynamically stable structures that allow them to interact with DNA, RNA and proteins to exert their functions [22]. Based on these interactions, lncRNAs can be classified into four categories: signal lncRNAs, decoy lncRNAs, guide lncRNAs and scaffold lncRNAs [23] (Figure 1). They can mediate various biological processes through a variety of mechanisms, i.e. transcriptional interference, induction of chromatin remodeling, regulation of alternative splicing patterns, modulation of protein activity and alteration of protein localization within the cell. For example, lncRNAs can act as cis-acting elements (cis) to regulate the expression of protein-coding genes in the proximity of their own expression sites [24]. Some lncRNAs can also function as miRNA sponges, known as competitive endogenous RNAs (ceRNAs). By binding to one or multiple miRNAs, they titrate miRNAs away from their target genes, thereby regulating miRNA-mediated post-transcriptional silencing [25]. These mechanisms highlight the vital role of lncRNAs in various biological processes, and dysregulation of lncRNA expression may result in the progression of various diseases, including GDM.
MicroRNAs

miRNAs are highly conserved non-coding RNA sequences consisting of ∼22 nt. In the nucleus, miRNAs are primarily transcribed by RNA polymerase II, producing a primary miRNA (pri-miRNA) of ∼500-3000 nt in length [26]. Then, the pri-miRNA is processed by the microprocessor complex, which contains the endoribonuclease DROSHA and its RNA-binding partner DGCR8, to generate an ∼70 nt precursor miRNA (pre-miRNA) [27]. The pre-miRNA is then exported to the cytoplasm via the nuclear export protein Exportin-5. In the cytoplasm, the pre-miRNA is further sheared into a 22 nt double-stranded miRNA duplex by Dicer. In the double-stranded structure, the guide RNA strand becomes the mature miRNA, while the passenger strand is degraded [28] (Figure 2). miRNAs typically regulate gene expression post-transcriptionally by directly interacting with partially or fully complementary target sites in the mRNA 3′ untranslated region (3′-UTR), 5′ untranslated region (5′-UTR) or open reading frames, inhibiting mRNA translation or causing its degradation [29,30]. It is worth noting that over 60% of human mRNAs contain target sites for miRNAs, and a single miRNA can regulate the expression of multiple target genes, while one gene can also be regulated by multiple miRNAs [31]. Substantial research has shown that miRNAs play a regulatory role in various biological processes, such as cell proliferation, differentiation and development [32]. Moreover, dysregulation of their expression is closely associated with diseases.

Circular RNAs

Unlike linear RNAs such as lncRNAs and miRNAs, a circRNA is a covalently closed, circular single-stranded RNA that typically lacks the ability to code for proteins. Most circRNAs arise from back-splicing, in which a downstream splice site is connected to a 3′ splice site upstream. This process leads to the formation of a circular RNA molecule with a 3′-5′ phosphodiester bond at the back-splicing junction site [33]. Owing to the absence of 5′ caps and 3′ poly(A) tails, circRNAs are immune to RNA exonucleases, making their expression more stable and less susceptible to degradation [34]. However, since the efficiency of reverse splicing is much lower than that of conventional splicing, the abundance of circRNAs is generally lower in cells and tissues. Based on their composition, circRNAs can be categorized into several types: exonic circRNAs (ecircRNAs), formed by exons; intronic circRNAs (ciRNAs), formed by introns; and exon-intron circRNAs (EIciRNAs), formed by both exons and introns [35] (Figure 3). They typically function at the transcriptional or post-transcriptional level. For example, circRNAs can act as miRNA sponges, inhibiting miRNA binding to the non-coding region of target genes to regulate gene expression. They also interact with transcription factors to influence gene transcription. In addition, circRNAs interfere with the normal splicing of mRNA precursors, regulate mRNA translation, form circular RNA-protein complexes and compete with mRNA-binding proteins, participating in biological processes such as immunity, metabolism and neural system development [33]. Some studies have shown that circRNAs play crucial roles in the initiation and development of diseases, especially in conditions like GDM, where they may affect maternal blood glucose levels by regulating beta-cell proliferation and glucose metabolism. Modulating circRNA expression levels to maintain normal cellular function could potentially offer a novel approach to treating GDM.
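The miRNA "target site" and "sponge" logic described above boils down to complementarity between the miRNA seed (nucleotides 2-8) and the RNA being bound. A toy sketch of a seed-match scan over a hypothetical 3′-UTR fragment (both sequences invented, not real transcripts):

```python
# Toy seed-match scan: find sites complementary to a miRNA's seed
# (nucleotides 2-8). Both sequences below are invented.

COMPLEMENT = str.maketrans("AUGC", "UACG")

def seed_sites(mirna: str, utr: str) -> list[int]:
    """Return 0-based UTR positions matching the reverse complement
    of the miRNA seed (positions 2-8, 1-based)."""
    seed = mirna[1:8]                         # nt 2-8 of the miRNA
    site = seed.translate(COMPLEMENT)[::-1]   # reverse complement on the mRNA
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

mirna = "UAGCUUAUCAGACUGAUGUUGA"    # invented ~22 nt miRNA
utr = "GGGAUAAGCUCCCCAUAAGCUGG"     # invented UTR with two seed-match sites
print(seed_sites(mirna, utr))       # [3, 14]
```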
Long non-coding RNAs

The abnormal expression of lncRNAs in the plasma, serum and placental tissues of GDM patients is closely associated with the occurrence of GDM and its complications. Therefore, studying the expression profiles of lncRNAs in different tissues of GDM patients is of crucial significance (Table 1). For instance, in the plasma of GDM women during the first and second trimesters, the expression levels of NONHSAT054669.2 and ENST00000525337 were significantly higher than in pregnant women with normal glucose tolerance (NGT). In addition, the expression level of NONHSAT054669.2 was positively correlated with oral glucose tolerance test levels [36]. This suggests that these lncRNAs have a higher diagnostic value for GDM in the first and second trimesters. In serum, the Pearson correlation coefficient showed that a high lncRNA HOTAIR expression level positively correlated with body mass index, fasting plasma glucose, 1 h plasma glucose and 2 h plasma glucose in GDM patients. Therefore, it can be used as a diagnostic marker for GDM [37]. lncRNA SOX2OT is also highly expressed in GDM and strongly associated with multiple adverse events [38]. Furthermore, expression levels of lncRNAs can predict the occurrence of GDM-related adverse effects, such as kidney injury, preterm labor and macrosomia. In a 6 year follow-up study of 400 women with planned pregnancies, it was found that patients with higher plasma levels of lncRNA MEG8 prior to pregnancy had a higher incidence of GDM during pregnancy. Furthermore, GDM patients with higher MEG8 levels at discharge also had a significantly increased risk of kidney injury [39]. The lncRNA SNHG17 is expressed at lower levels in the plasma of pregnant women with GDM. This lower expression may contribute to the occurrence of GDM by inhibiting the growth of INS-1 cells and reducing insulin secretion by these cells. Furthermore, these levels are highly correlated with adverse perinatal outcomes, such as preterm labor [40].

The alteration in the expression levels of lncRNAs has shown great potential for the discovery of early diagnostic and therapeutic biomarkers for GDM, providing new insights into the prevention and mitigation of adverse reactions associated with GDM. Therefore, exploring their regulatory mechanisms in the development of GDM is of great significance in guiding the early prevention, diagnosis and treatment of GDM (Table 1). The development of GDM involves various mechanisms, including β-cell dysfunction, insulin resistance (IR), adipose tissue dysfunction, gluconeogenesis and oxidative stress [41]. Several studies have shown that lncRNAs can be involved in the development of GDM by targeting genes or pathways, or by acting as ceRNAs. β-Cell dysfunction is a major pathophysiological feature of GDM. Consistent with the findings of a previous study [39], lncRNA MEG8 is highly expressed in patients with GDM. This high expression may lead to β-cell dysfunction through negative regulation of miR-296-3p, ultimately suppressing insulin secretion [42]. In a study of GDM mice, researchers found decreased expression levels of the lncRNA HOXA transcript at the distal tip (HOTTIP) and WNT7A, while the level of miR-423-5p was increased. Overexpression of HOTTIP could alleviate insulin resistance and hepatic gluconeogenesis in GDM mice by regulating the miR-423-5p/WNT7A axis [43].
Another study of insulin resistance in GDM mice has shown that lncRNA TUG1 could prevent IR after GDM by competitively binding to miR-328-3p and promoting SREBP-2-mediated inactivation of the ERK signaling pathway [44]. In umbilical vein endothelial cells (HUVECs) of GDM patients, the expression of lncRNA HCG27 is significantly decreased, while miR-378a-3p is significantly increased, and MAPK1 expression is reduced. lncRNA HCG27 might promote glucose uptake of HUVECs through the miR-378a-3p/MAPK1 pathway [45]. Trophoblast cells play a crucial role in maintaining pregnancy, supporting fetal growth and development, and modulating the immune response during pregnancy. Dysfunction of these cells may lead to GDM. Compared to healthy pregnant women, the expression of CCDC144NL-AS1 was significantly upregulated in the serum and placental tissues of patients with GDM. Also, the placental expression level was positively correlated with the index of insulin resistance, which is a major risk factor for the onset and progression of GDM. Furthermore, CCDC144NL-AS1 may regulate trophoblast cell proliferation, migration and invasion via miR-143-3p [46]. The expression of lncRNA XIST was increased in GDM patients and in high glucose (HG) HTR-8/SVneo cell models. Inhibition of XIST might alleviate the adverse effect of HG on cell viability by sponging miR-497-5p, which may target FOXO1 to mediate the occurrence of GDM [47]. In contrast to the findings of Li et al., lncRNA SNX17 was dramatically upregulated in placental tissues of patients with macrosomia and may promote trophoblast proliferation through the miR-517a/IGF-1 pathway [48]. Similarly, the expression of lncRNA MALAT1 was higher in the placental tissues of the GDM patient group. At the molecular level, downregulation of lncRNA MALAT1 may inhibit the secretion of inflammatory factors and inhibit the proliferation, invasion and migration of GDM placental trophoblasts. This might be mediated through the TGF-β and NF-κB signaling pathways [49]. In addition, decreased expression of OIP5-AS1 was demonstrated in women with GDM. Overexpression of OIP5-AS1 can ameliorate HG-induced HTR-8/SVneo cell injury in part by sponging miR-137-3p [50]. In addition to the above mechanisms, lncRNAs can interact with other RNAs to form co-expression networks and ceRNA networks involved in GDM pathogenesis. In the peripheral blood of GDM patients, lncRNA RPL13P5 forms co-expression networks with the TSC2 gene through the PI3K-AKT and insulin signaling pathways, both of which are involved in insulin resistance in GDM [51]. In another study, researchers found that the lncRNAs ERMP1, TSPAN32 and MRPL38 form a co-expression network with TPH1 in peripheral blood samples from pregnant women with the disease. TPH1 is primarily involved in the tryptophan metabolism pathway and the development of GDM [52]. lncRNAs can also interact with m6A through ceRNA networks to mediate the onset of GDM. One study identified four m6A-associated lncRNAs in such a network. Furthermore, the genes in the m6A-related subnetwork based on these four lncRNAs were enriched in GDM-related hormone signaling pathways [53].
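Co-expression networks of the kind invoked for RPL13P5/TSC2 and ERMP1/TSPAN32/MRPL38/TPH1 are, at their simplest, built by thresholding pairwise correlations between expression profiles. A minimal sketch with simulated expression values (the transcript names are reused only as labels, not real measurements):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Simulated expression matrix: 40 samples x 5 transcripts (invented data;
# names from the text are reused purely as labels).
base = rng.normal(size=40)
expr = pd.DataFrame({
    "RPL13P5": base + rng.normal(scale=0.5, size=40),
    "TSC2": base + rng.normal(scale=0.5, size=40),   # constructed co-expressed pair
    "ERMP1": rng.normal(size=40),
    "TSPAN32": rng.normal(size=40),
    "TPH1": rng.normal(size=40),
})

# Pairwise Pearson correlations; an edge joins transcripts whose
# absolute correlation exceeds a chosen threshold.
corr = expr.corr(method="pearson")
threshold = 0.7
edges = [(a, b, round(corr.loc[a, b], 2))
         for i, a in enumerate(corr.columns)
         for b in corr.columns[i + 1:]
         if abs(corr.loc[a, b]) >= threshold]
print(edges)  # e.g. [('RPL13P5', 'TSC2', 0.8)]
```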
MicroRNAs

Dysregulated miRNA expression can be used as a marker for the early diagnosis of GDM (Table 2). In a study using next-generation sequencing to identify plasma miRNA markers in patients with GDM in early pregnancy, 17 miRNAs associated with GDM development were identified, which may be involved in the regulation of lipid metabolism and insulin sensitivity (IS). Among them, three miRNAs, hsa-miR-517a-3p|hsa-miR-517b-3p, hsa-miR-218-5p and hsa-let-7a-3p, have predictive ability comparable to traditional GDM risk factors [54]. Another study observed that plasma miRNAs from 18 patients with early pregnancy GDM predicted IS in the late second trimester of pregnancy [55]. Similarly, the dysregulation of miRNA expression in serum samples may serve as a diagnostic marker for GDM. miR-16-5p, miR-142-3p and miR-144-3p were significantly upregulated, with a positive correlation between miR-142-3p and post-load plasma glucose in GDM patients [56]. In a study of microRNAs in early pregnancy serum from a Malay population, researchers found significantly elevated expression levels of hsa-miR-193a, hsa-miR-21, hsa-miR-23a and hsa-miR-361, whereas miR-130a was significantly downregulated. These miRNAs could potentially serve as GDM biomarkers and may be involved in the pathologic process of GDM by regulating common target genes [57]. A cross-sectional study identified a total of 157 dysregulated miRNAs in placental tissues of women with GDM. miRNA-125b and miRNA-144 are consistently dysregulated and have good diagnostic value for GDM. The results of functional enrichment analysis of their target genes suggest that these two miRNAs may be involved in energy metabolism and thus influence glucose metabolism [58]. Furthermore, in a nested case-control study of participants from the European multicenter 'Vitamin D and lifestyle intervention for GDM prevention (DALI)' trial, elevated expression levels of miR-16-5p, miR-29a-3p and miR-134-5p were found, and combining them into a 3-miRNA signature could effectively predict GDM [59]. Two circulating miRNA biomarkers, miR-222-3p and miR-409-3p, have also been discovered in exosomes; they improve GDM classification and may contribute to GDM through metabolic alterations [60]. In addition, miR-27a-3p expression levels in peripheral blood mononuclear cells (PBMCs) were significantly higher in GDM patients than in controls, correlated with lipid metabolism parameters, and could be used as a diagnostic marker for GDM and as a marker for assessing pregnancy-associated metabolic status [61].

miRNAs can regulate the function of tissues and cells in GDM patients by targeting mRNAs and genes and by activating or inhibiting relevant pathways, thus participating in the pathogenesis of GDM. For example, the expression of 48 miRNAs, including miR-574-5p and miR-3135b, was significantly reduced in plasma samples from GDM patients as compared to healthy pregnant women. The expression levels of these miRNAs and their target genes are associated with glucose and lipid metabolism, as well as insulin signaling pathways [62]. miR-518 is highly expressed in the serum of patients with GDM and with GDM complicated by HDCP (GDM&HDCP). This miRNA can inhibit the regulation of inflammatory factors by peroxisome proliferator-activated receptor α (PPARα) [63]. Indeed, the critical functions of trophoblast cells during pregnancy have been elucidated, and extensive research suggests that miRNAs can regulate trophoblast cell function through various mechanisms.
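Before turning to those mechanisms, a brief note on the signature-style biomarkers above: a "3-miRNA signature" of the DALI kind is typically the three miRNA levels combined by a simple classifier and evaluated by ROC AUC. A hedged sketch with simulated values (scikit-learn assumed; no real cohort data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Simulated expression of miR-16-5p, miR-29a-3p, miR-134-5p for
# 100 GDM cases (shifted upward) and 100 controls; not the DALI data.
cases = rng.normal(loc=1.0, scale=1.0, size=(100, 3))
controls = rng.normal(loc=0.0, scale=1.0, size=(100, 3))
X = np.vstack([cases, controls])
y = np.array([1] * 100 + [0] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)

# The "signature" is the fitted linear combination of the three miRNAs.
clf = LogisticRegression().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"ROC AUC of the 3-miRNA signature: {auc:.2f}")
```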
A study of miRNA dysregulation in placental exosomes found that miR-135a-5p expression was significantly upregulated in placenta-derived exosomes from pregnant women with GDM. miR-135a-5p activated the PI3K/AKT signaling pathway by targeting Sirtuin 1 (SIRT1) and thus increased the proliferation, invasion and migration of placental trophoblast cells [64]. Upregulation of miR-195-5p expression was found in an in vitro model of human placental microvascular endothelial cells (hPMECs) treated with high concentrations of glucose. miR-195-5p not only regulates the proliferation, apoptosis and angiogenesis of hPMECs, but also, by targeting VEGFA, regulates apoptosis and pathological changes of placental tissue in mouse models in vivo [65]. In contrast, miR-362-5p, which is expressed at a low level in placental tissues of GDM patients, affects the PI3K/AKT pathway and apoptosis-associated factors by negatively regulating the target gene GSR, thereby inhibiting proliferation and promoting apoptosis in HG-treated HTR-8/SVneo cells [66]. Similarly, miR-30d-5p expression is downregulated in the placenta of GDM patients as compared to normal controls; it binds to RAB8A mRNA and inhibits the expression of this gene, which impairs trophoblast cell function [67]. In placental tissues, miR-17-5p expression is likewise downregulated; miR-17-5p improves glucose uptake by HTR-8/SVneo cells by targeting TXNIP and NLRP3 [68]. In addition, Li et al. reported for the first time that miR-22 expression was significantly downregulated in placental tissues of GDM patients, along with miR-372, and the expression levels of both were negatively correlated with HG exposure. In vitro studies demonstrated that these two miRNAs could target SLC2A4, the gene encoding GLUT4, regulating its transcription or influencing the stability, translation and degradation of the GLUT4 transcript, which in turn affects the insulin signaling pathway in GDM [69]. Downregulation of miR-143-3p levels was found in plasma samples from 30 pairs of GDM patients and healthy women. Overexpression of miR-143-3p in HG-treated MIN6 cells showed that it inhibited the TAK1/NF-κB signaling pathway to promote cell viability and insulin secretion and to prevent pancreatic β-cell dysfunction [70]. Dysregulation of miRNAs in serum may also influence trophoblast function through genes and related pathways. Serum miR-134-5p was elevated in patients with GDM compared to healthy pregnant women and was found to exacerbate GDM by mediating trophoblast inflammation and apoptosis through regulation of FOXP2 transcription in HTR-8/SVneo cells [71]. Similarly, serum miR-377-3p was elevated in patients with GDM, and miR-377-3p was found to directly target FNDC5 in a cell model, promoting GDM by inhibiting cell growth and increasing the rate of apoptosis [72]. In addition, an elevated serum level of miR-1323 inhibits the expression of TP53INP1, which reduces trophoblast cell viability and leads to hyperglycemia [73]. Some studies have also focused on the expression changes and mediating pathways of miRNAs in HUVECs. For example, miR-34b-3p was upregulated in HUVECs from GDM patients and was found to impair HUVEC viability and migration in an in vitro simulated GDM model by targeting PDK1 [74]. In a study of exosomal miRNA levels in placenta-derived mesenchymal stem cells (PlaMSCs) from GDM patients (GDM-MSCs), elevated expression levels of miR-130b-3p were found. Subsequent studies in GDM mice confirmed that miR-130b-3p regulates ICAM-1
expression, inhibiting HUVEC proliferation, migration and angiogenesis [75]. miR-6869-5p was considerably downregulated in placenta-derived macrophages from GDM patients. By targeting PTPRO, it promotes macrophage polarization toward M2-type cells, thus preventing macrophage proliferation and inflammation and maintaining the balance of the placental microenvironment [76]. In addition, one study identified 22 dysregulated exosomal miRNAs in the plasma of pregnant women with GDM and verified that upregulated miR-423-5p and downregulated miR-122-5p, miR-148a-3p, miR-192-5p and miR-99a-5p could be used as early predictors of GDM. Among them, miR-122-5p may be involved in GDM metabolic regulation by targeting the G6PC3 and FDFT1 genes to regulate the insulin and AMPK signaling pathways [77]. Notably, miRNAs can also regulate GDM through other, less conventional mechanisms. For example, the expression level of miR-199a-5p is significantly upregulated in the placenta of patients with GDM compared to normal pregnant women. miR-199a-5p can regulate the glucose pathway by repressing methyl CpG binding protein 2 (MeCP2) and downregulating classical transient receptor potential 3 (Trpc3) expression, suggesting that miR-199a-5p may act by modulating methylation levels, leading to GDM [78]. Zeng et al. investigated the association between polymorphisms of miR-196a2 and miR-27a and susceptibility to gestational diabetes mellitus in a Chinese population and found that miR-196a2 rs11614913 and miR-27a variants may negatively regulate lipocalin gene expression and increase susceptibility to GDM [79].

In addition to studies based primarily on samples from GDM patients, analyses have also been conducted on the expression levels and mechanisms of miRNAs in mouse and rat models of GDM. In mouse C2C12 cells, miR-182-3p expression was significantly upregulated. Inhibition of miR-182-3p, which directly binds INSR1, a key regulator of the insulin-related pathway, increases the expression of INSR1 and its downstream signaling pathway in skeletal muscle, thereby promoting GLUT4 translocation as well as glucose uptake and utilization, and thus alleviating the development of GDM [80]. In mouse pancreatic islet tissues, miR-152 inhibits hepatic insulin resistance (HIR) in GDM mice by downregulating the expression of suppressor of cytokine signaling 3 (SOCS3) [81]. Also in pancreatic tissues, miR-210-3p is significantly overexpressed; it can directly target Dtx1 and negatively regulate its expression, damaging islet β-cell function and cell viability and thereby accelerating the development of GDM [82]. Studies on pregnant rats found that the miR-875-5p expression level was downregulated. miR-875-5p can regulate IR and inflammation through TXNRD1; meanwhile, silencing miR-875-5p significantly reduced fasting blood glucose and insulin resistance, lowered the expression levels of lipids and pro-inflammatory markers, and decreased oxidative stress levels [83].
Circular RNAs

Although research on the regulatory mechanisms of circRNAs is relatively limited at present, existing studies have already demonstrated their significant role in the pathogenesis of GDM (Table 3). Variations in circRNA expression levels can be used to predict GDM. A study of the expression levels of plasma exosomal hsa_circRNA_0039480 in early, mid and late pregnancy in patients with GDM found that this circRNA showed high expression at all three stages [84]. In plasma samples, Zhu et al. found that circACTR2, derived from the actin-related protein 2 homolog (ACTR2) gene, is overexpressed in GDM, and high plasma levels of circACTR2 are closely associated with adverse events such as preterm birth, miscarriage and fetal malformation [85]. Similarly, circVEGFC is upregulated in the plasma of GDM patients; receiver operating characteristic curve analysis shows that a high expression level of circVEGFC on the day of admission exhibits high sensitivity and specificity for the early diagnosis of GDM [86,87]. A retrospective case-control study found that the expression level of hsa_circ_102682 was lower in GDM patients than in the control group and was significantly correlated with triglycerides, apolipoprotein A1 (APOA1), apolipoprotein B (APOB) and 1-h blood glucose. These results suggest that hsa_circ_102682 may regulate lipid metabolism and thereby participate in the pathogenesis of GDM [88]. In addition, Zou et al. found that the expression of hsa_circ_0003218 was dramatically downregulated in the GDM group. hsa_circ_0003218 was significantly correlated with the GDM risk factor 25(OH)D3, and the two may be jointly involved in the metabolic processes of GDM; their combination can therefore be used as a predictive marker for the early stage of GDM [89]. Moreover, circRNAs can be involved in GDM-related biological processes in multiple ways, including binding to genes or proteins, activating related signaling pathways or acting as miRNA sponges. In studies exploring the effects of HG on trophoblast cells, downregulation of circ_0001578 may promote GDM by inducing chronic inflammation in the placenta via the NF-κB and JNK pathways [90]. Li et al.
first revealed that the decreased expression of circFOXP1 induces damage to trophoblast cells by regulating the expression of miR-508-3p and the downstream SMAD2 molecule in the pathological state of GDM in vitro [91]. In addition, high expression of circRNAs can also affect cellular functions. circSESN2 is overexpressed in patients with GDM and exacerbates HG-induced trophoblast cell injury by binding to IGF2BP2 and upregulating its protein expression [92]. Similarly, circDNMT1 is found to be overexpressed in the GDM group; it mainly inhibits trophoblast cell viability, migration and invasion, and induces apoptosis and cell-cycle arrest, by binding to p53 and activating the JAK/STAT signaling pathway [93]. In addition, circMAP3K4 can regulate the expression of PTPN1 by binding to miR-6795-5p, thereby modulating the insulin-PI3K/Akt signaling pathway and inhibiting glucose uptake in trophoblast cells; this regulatory mechanism may contribute to the IR associated with GDM [94]. Notably, circRNAs might affect the level of DNA methylation. In GDM, circHIPK3 binds to miR-1278, which targets DNM1. Through this molecular pathway, the high expression of circHIPK3 affects the methylation status of the GPX4 gene, leading to ferroptosis in HTR-8/SVneo cells under HG culture conditions [95]. Besides, circPNPT1 can directly sponge miR-889-3p, thereby promoting the expression of the miR-889-3p target PAK1; the high expression of circPNPT1 in GDM patients can thus promote cellular biological dysfunction through the miR-889-3p/PAK1 axis [96]. circCBLB, circITPR3 and circICAM1 may also serve as GDM-related miRNA sponges and regulate the expression of CBLB, ITPR3, NFKBIA and ICAM1 in cellular immune pathways [97]. Knockdown of circ_0074673 facilitated the proliferation, migration and angiogenesis of high-glucose-treated HUVECs via its role as a sponge for miR-1200; this finding may provide a potential target for the treatment of GDM. Notably, circRNAs in GDM exosomes may also serve as potential biomarkers and therapeutic targets for GDM: hsa_circ_0046060 in exosomes derived from hUMSCs regulates glucose homeostasis and induces insulin resistance in normal human liver L-02 cells and in GDM mice by targeting G6PC2 via hsa-miR-338-3p [98].
DISCUSSION

Most of the human genome is transcribed into ncRNAs, which can regulate numerous physiological, developmental and disease processes, holding significant potential as therapeutic targets for diseases. The development of RNA sequencing technologies has led to the discovery of a growing number of ncRNAs, laying a solid foundation for researchers to investigate the regulatory mechanisms of ncRNAs. To date, a substantial amount of research has been published on how dysregulated ncRNAs participate in the development of various diseases. This article provides a summary of the regulatory mechanisms of lncRNAs, miRNAs and circRNAs in GDM. ncRNAs may target genes, mRNAs or proteins and regulate their transcription and translation, thereby affecting downstream signaling pathways related to the development of GDM, such as glucose and lipid metabolism and insulin signaling. Furthermore, ncRNAs may function by modulating methylation levels and forming regulatory networks. These molecular mechanisms can lead to β-cell dysfunction, insulin resistance, adipose tissue dysfunction, gluconeogenesis and oxidative stress, resulting in the development of GDM. The literature described herein provides strong evidence that ncRNAs play important roles in GDM. However, most of these studies rely on biological experiments, which are typically time-consuming and costly, to analyze the regulatory mechanisms of ncRNAs. Therefore, there is an urgent need for more cost-effective bioinformatics methods to discover potential biomarkers and mechanisms, providing validation insights for biological experiments. This will also help reduce the use of invasive clinical detection methods and improve the diagnostic accuracy of various diseases. A large number of traditional machine learning (ML)-based models, such as random forest (RF), support vector machine (SVM) and logistic regression (LR), have been used to predict biomarkers for a variety of diseases, including GDM. For example, Yoffe et al. constructed RF, LR and AdaBoost models using the levels of the significantly upregulated miR-223 and miR-23a in GDM patients as features. These models were used to classify GDM and healthy women, with favorable classification outcomes [99].
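As a rough illustration of this kind of pipeline, the sketch below trains a random forest and a logistic regression on miRNA expression features to separate GDM from control samples. It is a minimal sketch, not the models of Yoffe et al.: the data are randomly generated stand-ins, and miR-223/miR-23a appear only as feature labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in data: expression of miR-223 and miR-23a for 60 GDM and 60 control
# samples; GDM samples get a shifted mean to mimic the reported upregulation.
X_ctrl = rng.normal(loc=0.0, scale=1.0, size=(60, 2))
X_gdm = rng.normal(loc=1.0, scale=1.0, size=(60, 2))
X = np.vstack([X_ctrl, X_gdm])
y = np.array([0] * 60 + [1] * 60)  # 0 = healthy, 1 = GDM

for name, model in [("random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
                    ("logistic regression", LogisticRegression())]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean 5-fold AUC = {auc.mean():.2f}")
```

Cross-validated AUC, as printed here, is one common way such classifiers are compared before moving to an independent validation cohort.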
Due to the exponential growth of biological data, deep learning methods have been widely used in various biological fields, such as protein structure and function prediction and disease marker prediction. Naseer et al. constructed a deep neural network with a generic pseudo amino acid composition, called iGluK-Deep, to identify lysine glutarylation sites. They applied a basic quantitative encoding of pseudo amino acid composition sequences to generate a baseline dataset consisting of strings of integers. The dataset was then fed into well-known deep neural network (DNN) architectures: fully connected neural networks (FCNs), convolutional neural networks (CNNs), and recurrent neural networks with simple units, gated recurrent units and long short-term memory units, respectively. The FCN model showed the highest performance of the proposed approach for lysine glutarylation site prediction [100]. Understanding antigen-antibody binding interactions can help in the design of antibodies, therapeutic drugs and vaccines. A novel deep learning model, called DeepBCE, was developed to predict immunostimulatory B-cell epitopes from protein sequences in order to understand the binding mechanisms between antigens and antibodies. This model combines deep CNNs with position- and amino-acid-composition-variant feature vectors and was able to accurately predict linear B-cell epitopes [101]. In addition, many other deep learning methods can be applied to biological data. Among the existing deep learning methods, attention mechanisms were first introduced to describe the mapping from a query and a set of key-value pairs to an output, where the query, keys, values and output are all vectors. Specifically, the output is computed as a weighted summation of the values, with the weights computed through a compatibility function of the query with the corresponding key. In simple terms, the attention mechanism calculates the correlations, or weights, between different parts of the input data, enabling the model to selectively focus on the relevant information in the input. By introducing the attention mechanism, deep learning models can better handle long sequences, resolve dependencies between inputs and weight different parts of the input more flexibly. Altogether, it is foreseeable that deep learning models based on attention mechanisms hold great potential for biological sequence analysis.
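The sketch below makes this description concrete with a minimal scaled dot-product attention in NumPy. It is a generic illustration of the mechanism, not code from any of the cited models, and the array shapes are arbitrary.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q: (n_q, d), K: (n_k, d), V: (n_k, d_v).

    Each output row is a weighted sum of the value rows, with weights
    given by the softmaxed compatibility of the query with each key.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # query-key compatibility
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # weighted sum of values

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 queries of dimension 8
K = rng.normal(size=(6, 8))   # 6 keys
V = rng.normal(size=(6, 16))  # 6 values of dimension 16
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 16)
```

The attention weights computed here are exactly the "correlations between different parts of the input" described above; inspecting them is what makes attention-based models comparatively interpretable.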
To date, attention mechanisms have been applied to predict ncRNA-disease associations and ncRNA interactions with proteins. For instance, HGATLDA is a novel heterogeneous graph attention network framework based on meta-paths for predicting lncRNA-disease associations. In HGATLDA, feature matrices are first extracted from the multi-view similarity graphs of lncRNAs and diseases with graph convolutional networks. Second, an attention mechanism is used to assign weights to the feature matrices. All representations are then extracted using CNNs and fed into a stacking ensemble classifier for prediction. Finally, in the case study, eight out of the top-10 lncRNAs predicted by HGATLDA to be associated with colon cancer had been experimentally validated [102]. This shows that deep learning models can effectively characterize the associations between lncRNAs and diseases, and that the biomarkers obtained from such predictions may be of high value for disease diagnosis. In addition, Han et al. constructed a computational model based on a line graph attention network framework, called ncRPI-LGAT, for predicting ncRNA-protein interactions. They transformed the link prediction task into a node classification task in the line graph and then introduced a line graph attention network framework as a means to predict ncRNA-protein interactions. ncRPI-LGAT performed well in predicting ncRNA-protein interactions across multiple test sets [103]. Moreover, this method is considered a useful tool that can provide new insights for subsequent experiments exploring the underlying mechanisms of these interactions.

All in all, future research on the regulatory mechanisms of ncRNAs in disease could involve the following aspects. First, potential ncRNA biomarkers can be identified from large-scale sequencing data with deep learning algorithms, including but not limited to attention mechanisms; the markers can then be assessed for disease relevance and for their interactions with genes, mRNAs or proteins. This process plays a crucial role in providing clues and guidance for the validation of biological experiments. Next, as the expression levels and mechanisms of action of ncRNAs may change before the onset of disease and at different stages of disease development, the clinical preventive and diagnostic potential of ncRNAs should be evaluated. As mentioned above, attention mechanisms can effectively capture the characteristics of different states, thus providing important information for screening ncRNAs with significant functions. Furthermore, there is a need to explore whether ncRNAs can be used for disease treatment. Using attention-based models, interaction networks between ncRNAs and drugs can be constructed to identify potential correlations, thus supporting the development of novel targeted therapeutic drugs.

Key Points
• ncRNAs are expected to serve as potential biomarkers for diseases, providing support for early diagnosis and treatment.
• We provided a comprehensive review of recent studies on the role of ncRNA regulatory mechanisms in GDM.
• We discussed the potential of deep learning approaches for the prediction of disease markers and pathogenic mechanisms.

Table 1: Summary and functions of aberrant lncRNAs in GDM
Table 2: Summary and functions of aberrant miRNAs in GDM
Table 3: Summary and functions of aberrant circRNAs in GDM
The Axial Anomaly and Large Pulsar Kicks

Topological vector currents have gained interest recently with their possible verification at RHIC through the Charge Separation Effect and the Chiral Magnetic Effect. Much work has been done in understanding the role of topological vector currents in astrophysics, specifically in the interiors of neutron stars and quark stars. We will discuss a recent aspect of this work regarding pulsar kicks. A significant percentage of the pulsar population is known to have velocities above 1000 km/s, but a suitable explanation for these velocities does not exist. We will detail how topological currents may be responsible for these large kicks and discuss why the mechanism is successful where others fail.

Introduction

A recent topic of much interest has been the P and CP-odd effects that arise from the axial anomaly. The most popular of these has been the Chiral Magnetic Effect [1], but this is part of a body of work investigating this phenomenon that starts with topological currents in condensed matter systems [2], and includes the study of anomalous axion interactions in QCD [3], the Charge Separation Effect [4], and the high density analogue of the Chiral Magnetic Effect in dense stars [5,6]. The Chiral Magnetic Effect is particularly exciting because it rests on the edge of observational science; the current may be responsible for the parity violating effects seen by the STAR collaboration at RHIC [7]. Here we will discuss how the existence of these currents in dense stars may be responsible for generating the large proper motion seen in some pulsars [8].

The goal of the paper [8] was to elaborate on a kick mechanism first discussed by [6] that may explain pulsar velocities greater than 1000 km s^{-1}. There have been a number of studies that have compiled and modelled the velocities of pulsars. Although they disagree on whether the distribution is indeed bimodal, they agree that a significant number of pulsars are travelling faster than can be attributed to neutrino kicks. The analysis of [9] favours a bimodal velocity distribution with peaks at 90 km s^{-1} and 500 km s^{-1}, with 15% of pulsars travelling at speeds greater than 1000 km s^{-1}. Alternatively, [10] and [11] both predict a single-peaked distribution with an average velocity of ∼ 400 km s^{-1}, but point out that the faster pulsars B2011+38 and B2224+64 have speeds of ∼ 1600 km s^{-1}. Large velocities are unambiguously confirmed by the model-independent measurement of pulsar B1508+55 moving at 1083^{+103}_{-90} km s^{-1} [12]. Currently no mechanism exists that can reliably kick the star hard enough to reach these velocities. Asymmetric explosions can only reach 200 km s^{-1} [13], and asymmetric neutrino emission is plagued by the problem that at temperatures high enough to produce the kick the neutrinos are trapped inside the star [14]. Alterations of the neutrino model that take into account only a thin shell of neutrinos require large temperatures and huge surface magnetic fields.

Generating Large Kicks

We will provide a sketch of how the kick is generated and direct those interested in the details to read [8]. The kick mechanism we will discuss relies on the existence of topological vector currents of the form described by [6], which some readers may recognize as the same current responsible for the Chiral Magnetic Effect [1] in QCD; the current is proportional to (n_L − n_R)Φ, where n_R and n_L are the one-dimensional number densities of the right- and left-handed electrons, and Φ is the magnetic flux.
There are three requirements for topological vector currents to be present: an imbalance in left- and right-handed particles (µ_L ≠ µ_R), degenerate matter (µ ≫ T), and the presence of a background magnetic field (B ≠ 0). All of these are present in neutron and quark stars. The weak interaction, by which the star attains equilibrium, violates parity; particles created in this environment are primarily left-handed. The interior of the star is very dense, µ_e ∼ 10 MeV, and cold, T ∼ 0.1 MeV, so that the degeneracy condition µ ≫ T is met, and neutron stars are known to have huge surface magnetic fields, B_s ∼ 10^{12} G. If the electrons carried by the current can transfer their momentum into space, either by being ejected or by radiating photons, the current could push the star like a rocket. In typical neutron stars this is unlikely because the envelope (the region where µ ∼ T) is thought to be about 100 m thick. Once the current reaches this thick crust, it will likely be reabsorbed into the bulk of the star. But if the crust is very thin, or nonexistent, the electrons may leave the system or emit photons that will carry their momentum to space. The electrosphere of a bare quark star is thought to be only about 1000 fm thick. With this in mind we conjecture that stars with very large kicks, v ≳ 200 km s^{-1}, are quark stars, and that slow moving stars, v ≤ 200 km s^{-1}, are kicked by some other means, such as asymmetric explosions or neutrino emission, and are typical neutron stars. Confirmation of this would provide an elegant way to discriminate between neutron stars and quark stars.

The total number current for electrons reaching the surface of the star is calculated in [8] in terms of the magnetic field, the core temperature and the baryon density, where B_c = 4.4 × 10^{13} G is the critical magnetic field, T_core is the core temperature of the star, and n_0 is nuclear density. The typical density for quark matter is n_b ∼ 10 n_0 but could easily be higher. Though many pulsars have a surface field of around 10^{12} G, the field in the bulk of the star is likely much stronger based on virial theorem arguments in [15], which yield possible core fields of B_max ∼ 10^{18} G. This is an extremely large field and is unlikely, as it is a strict upper bound. Based on this we choose a value of the core magnetic field of B_core = 10 B_c. The current, and thus the kick, is very sensitive to the cooling of the star. Unfortunately, kicks are likely to occur right after the birth of the star, during the most poorly understood stage of cooling. The initial cooling of the star is described in [16], which focuses on neutrino diffusion through the star and thermal cooling. The star then cools until the neutrinos can escape the quark star and the cooling moves into a purely radiative regime, as discussed in [17]. The part of the cooling curve between these two well-defined mechanisms constitutes the translucent regime, which we model with an exponential decay as shown in Figure 1a. The degeneracy of the electrons is responsible for powering the kick. Each electron carries a momentum equal to its Fermi momentum, which is quite large due to the extreme degeneracy in the star.

[Figure 1. a) Cooling curve of the star: the portion before the patch is taken from [16] and the portion after the patch from [17]; the black dot marks the start of the kick at t = 0. b) Time evolution of the kick for an internal magnetic field B = 10 B_c.]

As seen in Figure 1b, the star quickly reaches a speed of v_max ∼ 1600 km s^{-1}, which is big enough to account for the large kicks seen in many pulsars. As plotted, the entire kick seems
to happen very quickly, but the current keeps running throughout the star's life. With a large internal magnetic field the mechanism can account for the kicks seen in young pulsars. But because the kick is constantly running, pulsars with smaller internal magnetic fields will eventually attain very large speeds very late in life.

The Difference between Topological Kicks and Neutrino Kicks

Neutrino kicks and topological kicks seem very similar on the surface. The electrons and neutrinos that contribute towards their kicks are created at the same rate w, have nearly the same degree of helicity, and have the same occupation of the lowest Landau level, n_L. This means the flux of particles contributing to both the electron kick and the neutrino kick is about the same, ∼ n_L w. The difference between the two mechanisms comes from the momentum that the relevant particle carries. The neutrinos are created thermally, and the typical momentum of a neutrino is equal to the temperature of the star, T. The momentum of the electrons comes from the large chemical potential, µ_e ∼ 10 MeV. The momentum transfer per unit time for neutrinos is F_ν ∼ T n_L w and for electrons is F_e ∼ µ_e n_L w. When the kick starts the star has a temperature of only T ∼ 1 MeV. The electron kick is therefore stronger than the neutrino kick by a factor of F_e/F_ν ∼ µ_e/T. Initially, when the star is very hot, the electron kick is an order of magnitude stronger than the neutrino kick. Furthermore, as the star cools the neutrino kick gets even weaker, while the electrons continue to carry a momentum dictated by their chemical potential. This is how electrons generate larger kicks than neutrinos in a similar environment.

The Effect of the Current on the Cooling of the Star

At the beginning of the star's life the energy from the kick does not contribute to the cooling of the star, but later in life the current could overtake neutrino cooling as the dominant mechanism. This is because only a small fraction of the electrons created in the star actually escape, whereas all the neutrinos created in the star escape. The electrons only propagate because the asymmetry in the lowest Landau level and detailed balance allow the helicity states to propagate out of the star; those electrons that do not contribute toward the kick are trapped inside the star. Only the helicity states that reach the surface contribute to the cooling of the star. The neutrinos cool the star with a luminosity L_ν ∼ T w, whereas the electrons cool the star with an energy current (luminosity) of L_e ∼ µ_e n_L w. The ratio of electron cooling to neutrino cooling is L_e/L_ν ∼ µ_e n_L / T. At first the electrons cool the star at about 1/100 the rate of neutrino cooling. As the star cools, eventually L_e/L_ν > 1 and more energy is lost to the current than to the neutrinos. This transition occurs at a temperature reached well after the kick has occurred. The current may be an additional cooling mechanism to consider in stars that have cooled below 10^8 K.
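To see the scales involved, the sketch below evaluates these ratios numerically. It is a back-of-the-envelope illustration, not a calculation from the paper: the lowest-Landau-level occupation n_L = 10^{-3} is inferred only from the stated initial ratio L_e/L_ν ∼ 1/100 at T ∼ 1 MeV.

```python
import numpy as np

MEV_TO_KELVIN = 1.1605e10  # 1 MeV expressed in kelvin

mu_e = 10.0  # electron chemical potential [MeV] (from the text)
T0 = 1.0     # temperature at the start of the kick [MeV]
n_L = 1e-3   # assumed LLL occupation, so that L_e/L_nu ~ 1/100 at T0

def kick_ratio(T):
    """F_e / F_nu ~ mu_e / T: relative strength of electron vs neutrino kick."""
    return mu_e / T

def cooling_ratio(T):
    """L_e / L_nu ~ mu_e * n_L / T: relative cooling power of the current."""
    return mu_e * n_L / T

print("kick ratio at T0:", kick_ratio(T0))        # ~10, an order of magnitude
print("cooling ratio at T0:", cooling_ratio(T0))  # ~0.01, i.e. 1/100

# Temperature at which the current overtakes neutrino cooling (L_e/L_nu = 1)
T_cross = mu_e * n_L  # [MeV]
print("crossover temperature: %.0e MeV = %.1e K" % (T_cross, T_cross * MEV_TO_KELVIN))
# ~1e-2 MeV ~ 1e8 K, consistent with the closing statement about stars below 10^8 K
```

Under these assumptions the crossover temperature µ_e n_L ∼ 10^{-2} MeV indeed corresponds to roughly 10^8 K.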
Get With the (Developmental) Program

Impaired Regulation of KCC2 Phosphorylation Leads to Neuronal Network Dysfunction and Neurodevelopmental Pathology
Pisella LI, Gaiarsa JL, Diabira D, et al. Sci Signal. 2019;12(603):eaay0300. doi:10.1126/scisignal.aay0300.

KCC2 is a vital neuronal K+/Cl− cotransporter that is implicated in the etiology of numerous neurological diseases. In normal cells, KCC2 undergoes developmental dephosphorylation at Thr906 and Thr1007. We engineered mice with heterozygous phosphomimetic mutations T906E and T1007E (KCC2E/+) to prevent the normal developmental dephosphorylation of these sites. Immature (postnatal day 15) but not juvenile (postnatal day 30) KCC2E/+ mice exhibited altered GABAergic inhibition, an increased glutamate/GABA synaptic ratio, and greater susceptibility to seizure. KCC2E/+ mice also had abnormal ultrasonic vocalizations at postnatal days 10 to 12 and impaired social behavior at postnatal day 60. Postnatal bumetanide treatment restored network activity by postnatal day 15 but failed to restore social behavior by postnatal day 60. Our data indicate that posttranslational KCC2 regulation controls the GABAergic developmental sequence in vivo, indicating that deregulation of KCC2 could be a risk factor for the emergence of neurological pathology.

Developmental Regulation of KCC2 Phosphorylation Has Long-Term Impacts on Cognitive Function
Moore YE, Conway LC, Wobst HJ, et al. Front Mol Neurosci. 2019;12:173. doi:10.3389/fnmol.2019.00173.

The GABAA receptor-mediated currents shift from excitatory to inhibitory during postnatal brain development in rodents. A postnatal increase in KCC2 protein expression is considered to be the sole mechanism controlling the developmental onset of hyperpolarizing synaptic transmission, but here we identify a key role for KCC2 phosphorylation in the developmental EGABA shift. Preventing phosphorylation of KCC2 in vivo at either residue serine 940 (S940), or at residues threonine 906 and threonine 1007 (T906/T1007), delayed or accelerated the postnatal onset of KCC2 function, respectively. Several models of neurodevelopmental disorders including Rett syndrome, Fragile X and Down syndrome exhibit delayed postnatal onset of hyperpolarizing GABAergic inhibition, but whether the timing of the onset of hyperpolarizing synaptic inhibition during development plays a role in establishing adulthood cognitive function is unknown; we have used the distinct KCC2-S940A and KCC2-T906A/T1007A knock-in mouse models to address this issue. Altering KCC2 function resulted in long-term abnormalities in social behavior and memory retention. Tight regulation of KCC2 phosphorylation is therefore required for the typical timing of the developmental onset of hyperpolarizing synaptic inhibition, and it plays a fundamental role in the regulation of adulthood cognitive function.
Commentary

Several neurodevelopmental disorders (NDDs) are associated with disruptions in the balance between excitation and inhibition, including Fragile X syndrome, Rett syndrome, autism spectrum disorders (ASD), and schizophrenia. Important to this readership, epilepsy is commonly comorbid with these NDDs. Although thinking of NDDs as an imbalance between excitation and inhibition may be an oversimplification, there is abundant evidence for impaired GABAergic signaling as a feature of numerous neurological disorders and NDDs. 1,2 In fact, numerous clinical and basic science studies demonstrate alterations in GABAergic signaling resulting from dysregulation of chloride homeostasis, due largely to deficits in the function of the K+/Cl− cotransporter, KCC2, in NDDs and epilepsy. 1,2 The currently highlighted studies further our knowledge of the role of KCC2 in NDDs, demonstrating a critical role for posttranslational modifications in regulating KCC2 and contributing to the developmental switch from excitatory to inhibitory GABA. 3,4 These studies implicate these regulatory sites in the pathophysiology of NDDs and suggest novel therapeutic targets for the treatment of these disorders. Mutations in genes associated with NDDs and epilepsy 1,5,6 provide further evidence for a role for KCC2 in the underlying neuropathology of these disorders. Two functionally impairing mutations in KCC2 have been identified in association with ASD, and rare KCC2 variants affecting CpG sites are more likely to be associated with ASD cases. 6 Further, genetic mutations in KCC2 have been identified in patients with febrile seizures, idiopathic generalized epilepsy, and epilepsy of infancy with migrating focal seizures (see Duy et al 5 for review). The impact of KCC2 in NDDs and epilepsy is thought to involve the role of KCC2 in the developmental switch from excitatory to inhibitory GABAergic signaling. Chloride homeostasis and, therefore, GABAergic inhibition are controlled by the opposing actions of transporters, largely the Na+/K+/2Cl− cotransporter, NKCC1, which imports chloride, and KCC2, which exports chloride. There is a progressive increase in chloride extrusion during development, which has been largely attributed to the increased function of KCC2 that is required for inhibitory GABAergic signaling. KCC2 function is thought to be altered during development since the expression levels of KCC2 remain relatively unchanged, but the function of KCC2 is increased, a process which is tightly regulated by phosphorylation. Several phosphorylation sites have been identified on KCC2 which exert opposing regulation on the function of KCC2. 7
Phosphorylation of the S940 residue has been shown to increase during development and to increase the function of KCC2, whereas phosphorylation of T906 and T1007 impairs KCC2 function and is decreased during development. 7 However, few studies have focused on the impact of these posttranslational modifications on the developmental trajectory of GABAergic signaling in NDDs or epilepsy. In order to further explore the role of phosphorylation of KCC2 in the developmental trajectory of GABAergic signaling and NDDs, Moore et al utilized novel mouse models with mutations impairing phosphorylation at S940 (S940A) and at T906 and T1007 (T906A/T1007A) to investigate the role of the developmental switch from excitatory to inhibitory GABAergic signaling in regulating excitability and the impact on social behavior and cognitive function. 3 Similarly, Pisella et al developed a phosphomimetic mouse model at T906 and T1007 to study the role of KCC2 phosphorylation in neuronal excitability and NDDs. 4 These complementary studies demonstrate critical roles for the phosphorylation state of S940, T906, and T1007 in the developmental program of GABAergic signaling and their influence on phenotypes related to NDDs and epilepsy. For example, mice with mutations preventing phosphorylation at S940 (S940A) exhibit impaired social interaction, whereas T906A/T1007A mutant mice exhibit enhanced social interaction. 3 Conversely, mice with phosphomimetic mutations of KCC2 at residues T906 and T1007 exhibit an altered excitatory:inhibitory balance, increased seizure susceptibility, abnormal ultrasonic vocalizations, and deficits in social interaction. 4 Remarkably, mice that are homozygous for the phosphomimetic mutations of KCC2 at T906 and T1007 die hours after birth, highlighting how essential these sites are for the function of KCC2 during development. 4 Treatment with bumetanide, an NKCC1 antagonist which limits intracellular chloride accumulation, from P6 to P15 restored the excitatory:inhibitory balance and seizure susceptibility in mice with the phosphomimetic mutations of KCC2 at T906 and T1007. 4 It is important to note that the phosphomimetic mutations in KCC2 at T906 and T1007 do not prevent the developmental program, but rather delay the developmental switch from excitatory to inhibitory GABA. Interestingly, bumetanide treatment was unable to restore the deficits in social interaction, which may be due to the timing of treatment or to a developmental process that cannot be reversed; this requires further investigation. Based on basic science studies demonstrating the developmental switch from excitatory to inhibitory GABA, largely due to the developmental regulation of KCC2, bumetanide has been explored clinically for the treatment of schizophrenia, Fragile X, ASDs, and epilepsy. A case study demonstrated that bumetanide treatment decreased hallucinations in an adolescent with schizophrenia. 8,9 Treatment with bumetanide showed promise in reducing symptoms of autism in infants, 10 and spurred subsequent clinical trials to examine the therapeutic potential of bumetanide for the treatment of ASDs. Bumetanide reduced the severity of ASD symptoms in a phase 2 clinical trial 11 and in a case study of a patient with Fragile X. 12 A randomized controlled trial for ASD or Asperger syndrome demonstrated a reduction in autism symptoms when the most severe cases were removed. 13
In a parallel study, bumetanide treatment improved eye contact and emotion recognition and normalized the activation of brain regions involved in social and emotional perception. 14,15 Although bumetanide has demonstrated repeated success in clinical trials for ASD, the effects on neonatal seizures are surprisingly conflicting despite a wealth of preclinical evidence. Bumetanide was shown to be effective at reducing seizures in a case study of a neonate with intractable multifocal seizures, 16 but not in a clinical trial treating seizures in newborn babies with hypoxic ischemic encephalopathy. 17 The currently highlighted studies provide valuable experimental models with which to further examine the utility of targeting KCC2 for the treatment of NDDs and epilepsy. These data also provide evidence for shared mechanisms in the underlying neurobiology of these highly comorbid disorders (NDDs and epilepsy), involving KCC2 and disruption in the development of inhibitory GABAergic signaling. Thus, targeting KCC2 and restoring the normal developmental trajectory of GABAergic inhibition may be beneficial for the treatment of both NDDs and epilepsy. A strength of these studies is that they do not attempt to model a specific NDD, but rather assess phenotypes relevant to numerous NDDs and investigate KCC2 as a therapeutic target. These data demonstrate that posttranslational modifications of KCC2 are important factors in controlling development which may be critical to several NDDs and, thus, may be useful targets for treatment. Importantly, there are separate pathways for influencing KCC2 function developmentally versus in the adult. Phosphorylation of S940, which is important for facilitating KCC2 function in the adult, is mediated by protein kinase C (PKC), whereas the phosphorylation of T906 and T1007 is regulated by the with-no-lysine kinase/SPS1-related proline/alanine-rich kinase (WNK/SPAK) pathway. Thus, there are multiple sites and pathways for the regulation of KCC2 and, therefore, multiple targets for therapeutic intervention. Further, these currently reviewed papers demonstrate the importance of posttranslational modification and remind us that protein expression may not tell the full story. By Jamie L. Maguire

Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The author is supported by R01NS105628, R01NS102937, R01AA026256, and a sponsored research agreement with SAGE Therapeutics.
Prevalence of Mental Health Disorders among Elderly Diabetics and Associated Risk Factors in Indonesia

This cross-sectional study aimed to explore mental health disorders (MHD) prevalence among elderly diabetics in Indonesia. Data were extracted from the 2018 national basic health survey in Indonesia (abbreviated as RISKESDAS). The survey involved households randomly selected from 34 provinces, 416 districts, and 98 cities in Indonesia, with 1,017,290 respondents. The number of subjects selected in this study was 2818 elderly diabetic subjects. MHD was determined by self-reporting assessment. Secondary data acquired from RISKESDAS 2018 involved age, sex, urban–rural residence status, marital status, educational level, employment status, obesity, hypertension, heart disease, stroke, family history of MHD, and DM duration. Binary logistic regression with a backward stepwise method was used to analyze the risk factors related to MHD. MHD prevalence among elderly diabetics in Indonesia was 19.3%. Factors associated with MHD among elderly diabetics were being female (prevalence odds ratio (POR) = 1.64; 95% CI: 1.126–2.394), married (POR = 0.05; 95% CI: 0.031–0.084), less education (POR = 3.37; 95% CI: 1.598–10.355), and stroke (POR = 1.61; 95% CI: 1.183–2.269). MHD prevalence among elderly diabetics in Indonesia was 19.3%, suggesting that screening for psychological problems and educating elderly diabetic patients is essential. Unmarried female elderly diabetics with less education and stroke were altogether more likely to experience MHD.

Introduction

In 2019, it was reported that 463 million individuals globally suffered from diabetes mellitus (DM), a number that has increased from 382 million in 2013 [1]. The United States, China, India, and Indonesia are countries with a high prevalence of DM [2]. The prevalence of DM in Indonesia was 5.7% in 2007 and increased to 10.9% in 2018 [3,4], representing 157,500, or 6%, of total deaths [5]. In 2019, DM represented a catastrophic disease and a financial burden that cost USD 381.25 million in hospital treatment, based on national health insurance (Jaminan Kesehatan Nasional = JKN) [6]. Mental health disorder (MHD) is a common comorbidity in DM, with a prevalence of 28% globally; females tend to suffer more than males, i.e., 34% and 23%, respectively [7][8][9][10]. Mental disorders such as generalized anxiety disorder (GAD), major depressive disorder (MDD), bipolar disorder, and eating disorders are common in DM patients [11][12][13][14]. MHD in diabetics may decrease quality of life [15] and impair self-care management [16], as well as increase disability [17], cardiovascular mortality risk [18] and the risk of all-cause mortality [19]. On the other hand, diabetes is a risk factor for MHD [20]. In general population studies, younger diabetics are more likely to develop MHD [8]. Another study reported that elderly diabetics are more likely to suffer from MHD, with the risk increased by other factors [11]. A previous study also concluded that MHD is more likely to occur in females, those with no formal education, current alcohol abusers, those with type 1 DM, those with a longer duration of DM, and those with chronic complications of DM and other comorbidities common among elderly diabetic patients [21]. Previous studies concern the association of MHD diabetic comorbidity with genetic and family history [22][23][24][25] as well as obesity [26][27][28]. The frequent coexistence of mental health conditions in elderly diabetics should be of concern [29].
The mechanisms of psychiatric illness involving brain-derived neurotrophic factor, insulin resistance, and inflammatory cytokines could underlie the pathogenesis of DM and several psychiatric illnesses in the elderly [29]. Physical and psychosocial changes affect both mental health and diabetes in the elderly [30]. Diabetic complications such as retinopathy, nephropathy, neuropathy, coronary artery disease, and cerebrovascular disease were also associated with poor mental health status in elderly diabetics [31]. Another study also concluded that overweight status, poor physical capabilities, low activity level, and diabetic complications were risk factors for depression in elderly diabetic patients [32]. However, there is a lack of information regarding MHD risk factors among elderly diabetics in Indonesia. The five-annual national basic health survey (abbreviated as RISKESDAS: riset kesehatan dasar) 2018 [3] was the latest national survey conducted by the Ministry of Health, Republic of Indonesia. The present study aims to determine the prevalence and risk factors of MHD among elderly diabetics in Indonesia.

Design and Study Population

This cross-sectional study employed secondary data acquired from RISKESDAS 2018, which is the latest round of the survey. The survey involved households randomly selected from 34 provinces, 416 districts, and 98 cities in Indonesia, with 1,017,290 respondents [3]. The study population involved elderly diabetics older than 60 years. Diabetic status was determined by a fasting blood glucose level ≥ 126 mg/dL, a 2-h postprandial or random blood glucose level ≥ 200 mg/dL, or a previous diagnosis by a doctor. Blood glucose levels were measured using Accu-Check Performa (Roche, Basel, Switzerland). Subjects with incomplete data were excluded from the study. Details of data collection, ethical issues, and other related steps were published in the RISKESDAS 2018 report [3].

Data Collection

This study was approved by the Ethics Committee of the National Institute of Health Research and Development (NIHRD), Ministry of Health, Republic of Indonesia. MHD status was determined by the WHO self-reporting questionnaire-20 (SRQ-20) [33][34][35], as acquired from RISKESDAS 2018 data (Supplementary File S1). The SRQ-20 is a tool used to measure common mental disorder symptoms [33][34][35]. It consists of 20 questions regarding the presence of somatic, cognitive, and emotional symptoms over the past 30 days: 0 = No and 1 = Yes [33][34][35]. RISKESDAS 2018 refers to a previous study that validated the SRQ-20 in the Indonesian population [35]. That study determined MHD with a cut-off point ≥ 6, positive predictive value = 70%, and negative predictive value = 92% [35]. Secondary data were also acquired from RISKESDAS 2018 on age, sex, urban–rural residence status, marital status, educational level, employment status, obesity, hypertension, heart disease, stroke, family history of MHD, and duration of DM.

Statistical Analysis

Subjects' characteristics were presented as frequencies and proportions. The relationships between the determinants and MHD status were analyzed by a chi-square test. p-values < 0.05 were considered statistically significant. Binary logistic regression with a backward elimination (conditional) method was conducted to acquire the regression model, since the dependent variable scale was nominal. The dependent variable was MHD status, categorized as "Yes = 1" if it met the criterion and "No = 0" if not. We presented prevalence odds ratios (POR) for this cross-sectional study as formulated in a previous study [36]. All statistical analyses were performed using Statistical Package for the Social Sciences (SPSS) software (version 23.0 for Windows, IBM SPSS Inc., Chicago, IL, USA).
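A minimal sketch of this kind of analysis is shown below using Python's statsmodels rather than SPSS (a substitution, not the authors' software), with simulated data standing in for the RISKESDAS records; the variable names are illustrative only. Exponentiated logistic regression coefficients give the POR with 95% CI.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2818  # sample size matching the study

# Simulated stand-in covariates (binary-coded, as in the study)
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "married": rng.integers(0, 2, n),
    "less_education": rng.integers(0, 2, n),
    "stroke": rng.integers(0, 2, n),
})
# Simulated outcome loosely reflecting the reported direction of effects
logit = -1.6 + 0.5*df.female - 1.0*df.married + 1.2*df.less_education + 0.5*df.stroke
df["mhd"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(df[["female", "married", "less_education", "stroke"]])
fit = sm.Logit(df["mhd"].astype(int), X).fit(disp=0)

# Prevalence odds ratios with 95% confidence intervals
por = pd.concat([np.exp(fit.params), np.exp(fit.conf_int())], axis=1)
por.columns = ["POR", "2.5%", "97.5%"]
print(por.round(2))
```

Backward stepwise elimination, as used in the paper, would iteratively refit after dropping the least significant covariate; statsmodels has no built-in stepwise routine, so that loop would be written by hand.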
Results

Data extracted from RISKESDAS 2018 contained 2818 elderly diabetic subjects. Table 1 shows that the proportion of female elderly diabetics in the study population was higher, while the age categories were almost comparable. Most elderly diabetics had less education, lived in an urban area, were unemployed, and were married. A small number of the study population, around 6%, was obese and had a family history of MHD. Hypertension and duration of DM were almost comparable, while heart disease and stroke had lower proportions in the total population. The overall prevalence of MHD among elderly diabetics was 19.3% in the study population. Table 2 identifies variables related to MHD. Sex, residence type, educational level, employment status, obesity, hypertension, heart disease, stroke, and family history of MHD differed significantly between the MHD groups based on the chi-square test. However, age, marital status, and duration of DM were comparable between the groups. The proportions of several parameters were significantly higher in the MHD group, i.e., female sex, rural residence, lower educational level, unemployment, obesity, hypertension, heart disease, family history of MHD, and stroke.

Discussion

This cross-sectional study involved 2818 elderly diabetics in Indonesia. Of them, 545 experienced MHD, indicating that the prevalence of MHD among elderly diabetics in this study population was 19.3%. The current study updated the prevalence of MHD among elderly diabetics aged older than 60 years, especially in Indonesia. A systematic review involving 248 studies estimated that 28% of people with type 2 diabetes experienced depression globally, with 32% in Asia [8]. People with diabetes aged older than 65 years had a prevalence ratio of 21% [8], similar to that indicated in the current study, while those of a younger age (<65 years) had a greater prevalence ratio, i.e., 31% [8]. The female group had a higher prevalence than the male group, i.e., 34% and 24%, respectively [8]. Depression determination methods also influence the prevalence ratio; self-reported methods tend to yield a higher prevalence (30%) than clinical diagnostic assessment (22%) [8]. The current study utilized self-reported methods using the WHO SRQ-20 [33][34][35]; however, it found a lower prevalence than the previous review [8]. A previous systematic review of 26 studies involving all measurement assessments, conducted in 2011, concluded that the prevalence of major depressive disorder in type 2 DM was 14.5%, indicating a lower prevalence ratio [37]. Another study observed diabetic patients aged over 55 years in primary care and found an MHD prevalence of 19.1% [38]. Evidence shows that diabetes mellitus is reciprocally associated with MHD and coincides with it as a comorbidity [39]. Depression is a common MHD that is discussed as a risk factor for DM [20]; however, the underlying mechanism is still unclear. Chronic stress induces immune dysfunction through the hypothalamus-pituitary-adrenal axis and the sympathetic nervous system, causing hypercortisolemia, promoting insulin resistance and visceral obesity, and leading to metabolic syndrome and DM [39].
Furthermore, chronic stress increases the production of inflammatory cytokines. High levels of inflammatory cytokines interfere with the function of pancreatic β-cells, induce insulin resistance, and promote the appearance of type 2 diabetes mellitus [39]. On the other hand, pro-inflammatory cytokines have been reported to influence pathophysiological domains that characterize depression, including neurotransmitter metabolism, neuroendocrine function, synaptic plasticity, and behavior [40]. This association suggests that both stress and inflammation stimulate depression and diabetes mellitus [39]. Chronic stress and inflammation processes, as well as the physical and psychosocial changes that are common in the elderly population, affect both mental health and diabetes in the elderly [30]. The present study found that a lower educational level, being female, being unmarried, and stroke were all associated with MHD among elderly diabetics, with a pseudo-R-squared (Nagelkerke) value of 0.790. This indicates that 79.0% of the variation in MHD in this study population of elderly diabetics was accounted for by the mentioned factors; the remaining 21.0% can be explained by factors not observed in the study. The present study involved many determinants, but only those provided in the RISKESDAS 2018 data. It did not observe other pivotal determinants of MHD in people with diabetes, such as physical capability, insulin and drug usage, ethnicity, detailed civil status (married, single, divorced, widowed), residence status (living alone, nuclear family, joint/extended living), family size, family income, pensioner status, smoking, alcohol use, religion, glycemic control, and other sociodemographic and clinical health factors [8,9,11,32,41,42]. These unobserved determinants could potentially explain the remaining variation in MHD among elderly diabetics. A lower education level was a factor significantly associated with MHD in this study. Many other studies have reported a similar association of lower educational status with common mental disorders in the diabetic population as well as in the general population [43][44][45][46]. Diabetics with a lower education level have limitations in coping [47] with diabetes complications and other comorbidities, as well as with general psychosocial problems. This study showed that elderly diabetics with less education had more than three times the risk of experiencing MHD compared to those with more education. The present study defined a lower education level as having completed junior high school (secondary education) or a lower form of education; another study categorized education level in greater detail, i.e., no schooling, primary education, secondary education, and tertiary education [41]. This study also concluded that female elderly diabetics had a 64% higher risk of acquiring MHD than males. Females are more likely to acquire MHD in the general population, as well as among diabetics and chronic disease patients [8,48]. The current study also demonstrated a significant relationship between stroke and MHD: the risk of MHD was 61% higher in the presence of stroke than in its absence. The co-occurrence of stroke and diabetes is often accompanied by other multimorbidities, including MHD [49]. Some studies have also concluded that the presence of a comorbidity makes MHD more likely [11]. The more comorbidities and additional illnesses, the higher the risk of acquiring MHD [11].
This condition is also related to physical limitations and capabilities, as well as to the complicated clinical and health conditions involved in drug use [32]. A limitation of our study is the absence of data on diabetic medication status and glycemic control [29,32]; glycemic control and the use of certain oral medications are related to mental health conditions in elderly diabetics [29,32]. Previous studies have examined stress and epigenetics as predictors of MHD in the general population [23]. Candidate-gene studies revealed that APOE, BDNF, and SLC6A4 polymorphisms were related to MHD in the general population [24]. Other studies have revealed the contribution of inflammatory markers to mental health: interleukin (IL)-1β, IL-6, IL-10, monocyte chemoattractant protein-1, tumor necrosis factor-alpha, C-reactive protein, and phospholipase A2 contribute to depression [25], which is also associated with type 2 diabetes [50]. However, our study did not include biological or genetic markers that could elucidate these mechanisms. Furthermore, regarding the statistical analysis, backward stepwise binary logistic regression was chosen as an efficient method for the extensive data; however, this method has some known restrictions [51].

Conclusions

The prevalence of MHD among elderly diabetics in Indonesia was 19.3%. The risk factors for MHD among elderly diabetic subjects were being female, being unmarried, having a low educational level, and stroke. The high prevalence of MHD among elderly diabetics suggests that screening for psychological problems and educating elderly diabetic patients should be considered routine components of diabetes care. Further studies should be conducted using clinical diagnostic assessments in a large population, involving genetic factors, inflammatory markers, cardiometabolic traits, and other potential factors, in order to elucidate the relationship between risk factors and the occurrence of MHD among elderly diabetic subjects, as well as the underlying mechanisms.

Informed Consent Statement: All participants of this study signed an informed consent form.

Data Availability Statement: The data used to support the findings of this study are available from the corresponding author Mahalul Azam upon request through the email address mahalul.azam@mail.unnes.ac.id.
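As a concrete supplement to the statistical methods described above, the following is a minimal sketch of a prevalence odds ratio for a single 2 × 2 exposure table, with a standard Wald confidence interval on the log scale. The function name and all counts are hypothetical illustrations, not values from RISKESDAS 2018.

```python
import numpy as np
from scipy import stats

def prevalence_odds_ratio(a, b, c, d, alpha=0.05):
    """POR with a Wald confidence interval from a 2x2 table.

    a: exposed with MHD,    b: exposed without MHD
    c: unexposed with MHD,  d: unexposed without MHD
    """
    por = (a * d) / (b * c)
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of ln(POR)
    z = stats.norm.ppf(1 - alpha / 2)
    lo = np.exp(np.log(por) - z * se)
    hi = np.exp(np.log(por) + z * se)
    return por, (lo, hi)

# hypothetical counts: female (exposed) vs. male, MHD yes/no
print(prevalence_odds_ratio(a=380, b=1200, c=165, d=1073))
```

In a full analysis, the same measure would come out of the backward stepwise logistic regression as exponentiated coefficients, adjusted for the other covariates.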
Induced Connections in Field Theory: The Odd-Dimensional Yang-Mills Case

We consider $SU(N)$ Yang-Mills theories in $(2n+1)$-dimensional Euclidean spacetime, where $N\geq n+1$, coupled to an even flavour number of Dirac fermions. After integration over the fermionic degrees of freedom the wave functional for the gauge field inherits a non-trivial $U(1)$-connection which we compute in the limit of infinite fermion mass. Its Chern-class turns out to be just half the flavour number so that the wave functional now becomes a section in a non-trivial complex line bundle. The topological origin of this phenomenon is explained in both the Lagrangean and the Hamiltonian picture.

Introduction

Induced connections can arise in any theory whose classical configuration space displays a certain topological richness. If these connections have non-trivial curvature -- the case we are interested in -- the precise condition is that the second cohomology $H^2(Q,\mathbb{Z})$ of the classical configuration space $Q$ must have a non-trivial free part (i.e. factors of $\mathbb{Z}$). Rather than outlining the general theory (which is basically the classification of $U(1)$-principal bundles over $Q$, lucidly explained e.g. in Ref. 1), we try to develop some feeling for the underlying mechanism and assumptions by first discussing a simple finite-dimensional toy model which mimics exactly the essential features without the analytic complications. Similar toy models have been used extensively throughout the literature in explaining the geometric and topological origin of anomalies and Berry phases. But eventually we are interested in field theory. There we wish to understand in detail how the mere possibility of induced connections -- established by pure topological arguments -- is actualized by concretely given dynamical laws. The purpose of this article is to present such an example in which this process can be studied in detail. Similar to so many contributions to the understanding of anomalies, it will once more show a deep link between topology, geometry and dynamics.

Section 1: A Toy Model

Consider the 2-dimensional Hilbert space $\mathbb{C}^2$ and a Hamiltonian
$$H = \mathbf{x}\cdot\tau, \qquad (1.1)$$
parameterized by the 2-sphere $S^2=\{\mathbf{x}\in\mathbb{R}^3\,/\,|\mathbf{x}|=1\}$; $\tau=(\tau_1,\tau_2,\tau_3)$ are the Pauli matrices. We think of $S^2$ as the position-space of some particle. If we calculate the eigenvectors of $H$, we find that there is no global phase choice to make them well defined over the whole of the 2-sphere. To circumvent this, we think of $S^2$ as the quotient of $S^3\cong SU(2)$ via the action of $U(1)_R$, the group of right translations generated by $\tfrac{i}{2}\tau_3$. Let $g$ be a general element of $SU(2)$. The quotient map (the Hopf map) is then given by
$$g\ \mapsto\ \mathbf{x}(g), \qquad \mathbf{x}(g)\cdot\tau = g\tau_3 g^{-1}, \qquad (1.2)$$
such that the Hamiltonian is now given by $g\tau_3 g^{-1}$, which is clearly invariant under $U(1)_R$. Its eigenvalues are $\pm1$, with eigenvectors $|g,\pm\rangle$ given by
$$|g,+\rangle = g\begin{pmatrix}1\\0\end{pmatrix} =: g\,e_+,\qquad |g,-\rangle = g\begin{pmatrix}0\\1\end{pmatrix} =: g\,e_-.$$
That is, the positive eigenvector is given by the first column of the matrix $g$, the negative one by the second. We write this as
$$|g\,e^{\frac{i}{2}\tau_3\lambda},\pm\rangle = |g,\pm\rangle\,e^{\pm\frac{i}{2}\lambda} =: |g,\pm\rangle\,\rho_\pm\big(e^{\frac{i}{2}\tau_3\lambda}\big),$$
where the last equation specifies the representations $\rho_\pm$ of $U(1)_R$. Infinitesimally, the $g$-dependence of $|g,\pm\rangle$ can be written as $d|g,\pm\rangle = g\,\Theta\,e_\pm$, where $\Theta = g^{-1}dg$ is the matrix of left invariant 1-forms $\{\sigma_i\}$ on $SU(2)$. The eigenvectors $|+\rangle$ and $|-\rangle$ are equivariant functions on $S^3$ into the Hilbert-eigenspaces $\mathcal{H}_\pm$ (here 1-dimensional) carrying the representation $\rho_+$ and its complex conjugate $\rho_-$, respectively. This can be expressed by a commutative diagram.
If we now want to quantize the $S^2$-degree of freedom as well (i.e. the particle motion), we can instead use $S^3$ as an enlarged configuration space with one redundant (gauge) degree of freedom. We then have a wave function which, in the adiabatic approximation (i.e., under the hypothesis of a slowly moving particle), can be split into $\psi_\pm$, each mapping into the 1-dimensional $\mathcal{H}_\pm$ only. The redundant degree of freedom, which is not to be quantized, is then taken care of by imposing a "Gauss constraint", which just expresses the adiabaticity condition in the requirement that $\psi_\pm$ be an equivariant, $\mathcal{H}_\pm$-valued function on $S^3$, or equivalently, a section in a non-trivial line bundle over $S^2$, associated to the Hopf bundle (1.2) in the representation $\rho_\pm$. According to (1.7) the Gauss constraint thus reads
$$-iX_3\,\psi_\pm = \pm\tfrac{1}{2}\,\psi_\pm, \qquad (1.9)$$
with $X_3$ being the left invariant vector field dual to $\sigma_3$. The last step is now to implement (1.9) dynamically, i.e. to find a Lagrangean for the particle on $S^3$ which has $p_3=\pm\tfrac{1}{2}$ as a constraint. To do this, we write down the standard line element on $S^2$ in terms of the 1-forms $\{\sigma_i\}$. The unique term with the same symmetries that gives the desired constraint is $\pm\tfrac{1}{2}\sigma_3$. The "effective" Lagrangean can then be locally projected onto $S^2$ and reads, in the coordinate system that covers the 2-sphere except at the north pole, as the Lagrangean of an electrically charged particle of unit charge in the background of a magnetic monopole of strength $g=\tfrac{1}{2}$. Recall that the connection (gauge potential) has been deduced under the explicit assumption of slow motion (hypothesis of adiabaticity). It might well be called the adiabatic connection, and its holonomies are just the celebrated Berry phases. For the field theoretic model of the next section it will be useful to have the following correspondence in mind: $\mathcal{H}=\mathbb{C}^2 \longleftrightarrow$ fermionic Hilbert space. In this model massive fermions will induce a connection on the effective gauge theory which will be the adiabatic connection in the limit of infinite fermion mass. In the toy model we had $H^2(S^2,\mathbb{Z})=\mathbb{Z}$ and the "magnetic field of the monopole" represented a non-trivial class therein (using de Rham's construction). The same picture arises in the infinite dimensional model. (See Ref. 2 for a general discussion of how this topological class is also generally responsible for a specific type of anomalies.)
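As a numerical aside, not part of the original argument, the monopole structure of the toy model can be checked directly. The sketch below estimates the first Chern number of the lower band of $H=\mathbf{x}\cdot\tau$ over $S^2$ using a Fukui-Hatsugai-Suzuki-style lattice discretization of the Berry curvature; the grid resolution and the overall sign convention are choices of this illustration.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def lower_eigvec(theta, phi):
    """Normalized eigenvector of H = x.tau with eigenvalue -1."""
    x = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    _, v = np.linalg.eigh(x[0] * sx + x[1] * sy + x[2] * sz)
    return v[:, 0]   # eigh sorts ascending: column 0 <-> eigenvalue -1

def chern_number(n_theta=60, n_phi=60):
    """Lattice (Fukui-Hatsugai-Suzuki) Chern number of the lower band."""
    thetas = np.linspace(0.0, np.pi, n_theta + 1)
    phis = np.linspace(0.0, 2 * np.pi, n_phi, endpoint=False)
    psi = np.empty((n_theta + 1, n_phi, 2), dtype=complex)
    for i, t in enumerate(thetas):
        for j, p in enumerate(phis):
            psi[i, j] = lower_eigvec(t, p)
    flux = 0.0
    for i in range(n_theta):
        for j in range(n_phi):
            jp = (j + 1) % n_phi
            # gauge-invariant U(1) plaquette phase from link variables
            u = (np.vdot(psi[i, j], psi[i, jp])
                 * np.vdot(psi[i, jp], psi[i + 1, jp])
                 * np.vdot(psi[i + 1, jp], psi[i + 1, j])
                 * np.vdot(psi[i + 1, j], psi[i, j]))
            flux += np.angle(u)
    return flux / (2 * np.pi)

# prints +/-1 (the sign depends on orientation conventions);
# |C| = 1 matches the monopole of strength g = 1/2 via g = |C|/2
print(round(chern_number(), 6))
```

Each plaquette phase is gauge invariant, so the arbitrary per-point phases returned by the eigensolver drop out, which is exactly the problem the Hopf-bundle construction above addresses analytically.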
Section 2: The 2n+1-Dimensional Yang-Mills Case

In this section we consider $2n+1$ dimensional Euclidean Yang-Mills fields coupled to Dirac fermions. The gauge group is taken to be $G=SU(N)$ where $N\geq 2$. Since Euclidean space is contractible, the bundle is necessarily trivial. Elements of the group of gauge transformations are given by ordinary $SU(N)$-valued functions. By standard arguments gauge transformations are restricted to be the identity at infinity. This holds for space-time gauge transformations, as well as for purely spatial ones. In the Hamiltonian formulation we go into the $A_0=0$ gauge and consider instead a $G$-bundle over the $t=\mathrm{const.}$ slices (called $\Sigma$). In both cases, the group of gauge transformations may be identified with a function space of the form
$$\mathcal{G}^{\mathcal{N}} = \{\,g:\mathbb{R}^{\mathcal{N}}\to G\ \mid\ g(\infty)=e\,\},$$
where $\mathcal{N}$ is either $2n+1$ or $2n$ and $e$ is the identity in $G$. In the first case we call it $\mathcal{G}^{2n+1}$, in the second $\mathcal{G}^{2n}$. The symbol $\infty$ denotes the infinity in Euclidean space, and the gauge transformations map the point at infinity to the identity $e$. The spaces of gauge potentials are called $\mathcal{A}^{2n+1}$ and $\mathcal{A}^{2n}$ respectively. For statements which are true in either case, we shall omit the superscripts.

If $A\in\mathcal{A}$ is fixed by the action of $g\in\mathcal{G}$, it obeys $Dg=0$, where $D$ is the exterior covariant derivative. With the imposed boundary conditions it is easy to see that this implies $g\equiv e$. (The best way to see this is not to look at the gauge potentials but at the connection 1-forms on the principal bundle, the former being pull-backs of the latter by some local sections. On the total space $P$, gauge transformations are given by bundle automorphisms projecting to the identity, or, equivalently, by $G$-valued functions on $P$ which are Ad-equivariant under the right action of $G$ on $P$. Covariant constancy now means that the differential of this matrix-valued function is zero if restricted to horizontal subspaces. The boundary conditions at infinity then force it to be the unit matrix.) $\mathcal{G}$ therefore acts freely and $\mathcal{A}$ can be given the structure of a principal fibre bundle (see Ref. 3)
$$\mathcal{G}\ \hookrightarrow\ \mathcal{A}\ \longrightarrow\ \mathcal{Q}:=\mathcal{A}/\mathcal{G}, \qquad (2.3)$$
where we have in fact two spaces, $\mathcal{Q}^{2n+1}$ and $\mathcal{Q}^{2n}$. Only the latter acts as configuration space in the canonical formulation and we will simply call it the configuration space. Since $\mathcal{A}$ is an affine (hence contractible) space, we have from the associated exact homotopy sequence $\pi_k(\mathcal{Q})\cong\pi_{k-1}(\mathcal{G})$. In particular we have, now specializing to $N\geq n+1$,
$$\pi_1(\mathcal{Q}^{2n+1})\cong\pi_0(\mathcal{G}^{2n+1})\cong\mathbb{Z},\qquad \pi_2(\mathcal{Q}^{2n})\cong\pi_1(\mathcal{G}^{2n})\cong\mathbb{Z},$$
which also implies $H^2(\mathcal{Q}^{2n})=\mathbb{Z}$ and hence the possibility of monopoles in the configuration space $\mathcal{Q}^{2n}$. It is the purpose of the rest of this paper to demonstrate that the interaction with matter (here massive Dirac fermions) causes the wave function for the Yang-Mills field to actualize this topological possibility. In Ref. 4 the possibility of induced connections has been anticipated and their consequences for the equal-time commutation relations discussed. In our derivation we follow the spirit of Ref. 2 and make use of both the Lagrangean formulation, where gauge fields are defined over space-time $M$, and the Hamiltonian formulation, where via the gauge condition $A_0=0$ one has a gauge theory over the spatial sections $\Sigma$. Note that a gauge transformation in $\mathcal{A}^{2n+1}$ is given by a function $g:[0,1]\times S^{2n}\to G$, such that $g_1=g_0\equiv e$ and $g_t(\infty)=e\ \forall t$, which at the same time defines an element of $\pi_1(\mathcal{G}^{2n})$. Therefore, given a non-closed path in $\mathcal{A}^{2n+1}$ which connects two different components of $\mathcal{G}^{2n+1}$ in such a way that it projects to a loop in $\mathcal{Q}^{2n+1}$ which generates $\pi_1(\mathcal{Q}^{2n+1})$, one has at the same time found a generator of $\pi_1(\mathcal{G}^{2n})$. In $\mathcal{A}^{2n}$ this generator is the boundary of a 2-disk whose image (under the quotient map $\mathcal{A}^{2n}\to\mathcal{Q}^{2n}$) is a non-contractible 2-sphere generating $\pi_2(\mathcal{Q}^{2n})$. Finally, let us note that the bundle (2.3) with group $\mathcal{G}^{2n}$, total space $\mathcal{A}^{2n}$ and base $\mathcal{Q}^{2n}$ can be given a natural connection once the metric on the spatial slices has been specified. Tangent vectors at $A\in\mathcal{A}^{2n}$ are Lie algebra-valued one-forms which under gauge transformations transform with the inverse adjoint representation. We call this space $\Lambda^1(\mathrm{Lie}G)$. Let $T_A(\mathcal{A}^{2n})=V_A\oplus H_A$ be an orthogonal decomposition of the tangent space at $A$. $V_A$ is the vertical space spanned by vectors of the form $X^\omega_A=D_A\omega$, where $D_A$ is the covariant derivative at $A$ and $\omega$ is an element of $\Lambda^0(\mathrm{Lie}G)$, the Lie algebra of $\mathcal{G}^{2n}$. $H_A$ is by definition the orthogonal complement of $V_A$ using the metric $r_A$, defined by
$$r_A(\tau_1,\tau_2) = -\int_\Sigma \mathrm{Tr}\,(\tau_1\wedge *\,\tau_2) \qquad (2.8)$$
($*$ is the Hodge-duality map). $r$ is invariant under the action of $\mathcal{G}^{2n}$ and hence $H_A$ defines a connection. Locally $H_A$ can be expressed as the kernel of the operator $D^\dagger_A$, which is the adjoint of $D_A$ with respect to $r$. The connection 1-form can then be written as (see Ref. 5)
$$\mathcal{C}_A = (D^\dagger_A D_A)^{-1} D^\dagger_A. \qquad (2.9)$$
It annihilates elements of $H_A$, transforms in the appropriate form under gauge transformations in $\mathcal{G}^{2n}$, and, when acting upon vertical vector fields $X^\omega_A$, one has $\mathcal{C}_A(X^\omega_A)=\omega$, as required for connections. The metric (2.8) and the connection (2.9) have already been used in attempts to geometrically understand anomalies and also to formulate a Riemannian geometry of $\mathcal{Q}^{2n}$ (Refs. 5, 6, 7). The Euclidean action for Yang-Mills theory coupled to Dirac fermions is given by equation (2.11). We expand the connection in terms of Hermitean basis matrices $\{T^1,\ldots,T^k\}$, $k=\dim SU(N)$, so that $A=iT^pA^p_\mu dx^\mu$, and define the current $I^\mu_A$ by (2.14). By construction only the exponential of $W[A]$ is expected to give a well defined function on $\mathcal{Q}^{2n+1}$. A method to obtain local expressions for $W[A]$ is to calculate the one-form $\delta W[A]$ at a preferred point $A$ and then integrate this expression within a simply connected neighbourhood of $A$. We shall follow this strategy in the appendix. The obstruction to extending this to a globally defined function $W[A]$ is given by the cohomology class in $H^1(\mathcal{Q}^{2n+1})$ generated by the one-form $\delta W$. Using known techniques, we calculate $I^\mu_A$ in a $\frac{1}{m}$-expansion. The zeroth order term (i.e. the $m\to\infty$ limit) then gives us the adiabatic connection. The calculation, which we defer to the appendix, yields for the zeroth order term the expression in formula (A.14) of the appendix. Here $T$ is a basis element of $\mathrm{Lie}G$ and the trace is taken over the Lie algebra indices. It follows that $I^0_A$ transforms with the inverse co-adjoint representation under gauge transformations, which it should do, being an element of the dual of the Lie algebra of $\mathcal{G}^{2n}$. We shall use this fact later in the canonical picture. Here we shall follow the original plan and insert the result in (2.14) to obtain (2.15) and (2.16), where in the first equation we have defined a gauge invariant closed 1-form $\omega^1_{2n+1}$ in $\mathcal{A}^{2n+1}$, which defines therefore a closed 1-form in $\mathcal{Q}^{2n+1}$, and in the second equation we integrated this 1-form over a straight path from $0$ to $A$, which we denoted by $\gamma(0,A)$. $\Omega(0,A)$ is also known as the first Chern-Simons form. We now integrate the 1-form $\omega^1_{2n+1}$ along the edges of two different triangles in $\mathcal{A}^{2n+1}$. The first one has vertices $(0,A,A^g)$, the second $(0,g^{-1}dg,A^g)$. Since in $\mathcal{A}^{2n+1}$ closed forms are necessarily exact, the two integrals are zero. We thus arrive at the two relations (2.19) and (2.20). Invariance of $\Omega^1_{2n+1}$ under simultaneous gauge transformations in both arguments implies equality of the last terms in each line, and hence equality of the expressions on the left sides of (2.19) and (2.20). Since $\Omega^1_{2n+1}(A,A^g)$ is the line integral of $\omega^1_{2n+1}$ from $A$ to $A^g$, it also represents the loop integral of the projected 1-form on $\mathcal{Q}^{2n+1}$. This 1-form generates $H^1(\mathcal{Q}^{2n+1})$ if its integral along a generator of $\pi_1(\mathcal{Q}^{2n+1})$ gives the result 1. For this, $g$ has to be a gauge transformation of unit winding number. The desired expression for this integral is now seen to be given by the integral along the straight path $\gamma(0,g^{-1}dg)$ between $0$ and $g^{-1}dg$, which is independent of $A$, as required. Elementary integration along $\gamma(0,g^{-1}dg)$ yields the result (2.23). On the other hand, the integer-valued winding number $w(g)$ of $g$ is given by the standard expression (see Ref. 9). Let us now turn to the Hamiltonian picture. For this, we go back to (2.15). There we expect to find the same information encoded in $H^2(\mathcal{Q}^{2n})$, represented by some curvature 2-form. The infinitesimal holonomy enters the physical picture by anomalous commutators (Schwinger terms).
Let us try to explain this in the geometric picture developed so far. If the action (2.11) is put into canonical form, the first class constraint associated with the gauge freedom in $\mathcal{A}^{2n}$ appears as Gauss' law. In an effective theory for the gauge field its right hand side is replaced by the expectation value $I^0_A$. Quantizing the gauge field in the Schrödinger picture then involves the corresponding constraint (here the dot $\cdot$ represents summation and integration), in which the $X^\omega$ are the fundamental vector fields on the principal bundle $\mathcal{A}^{2n}$. It is easy to verify that the map $\omega\mapsto X^\omega$ furnishes a homomorphism from the Lie algebra of $\mathcal{G}^{2n}$ into the Lie algebra of vector fields on $\mathcal{A}^{2n}$. With the aid of the connection $\mathcal{C}_A$ from equation (2.9) and the charge density $I^0_A$ we can form a $\mathcal{G}^{2n}$-invariant 1-form $\Omega$ on $\mathcal{A}^{2n}$. In quantum field theory, an anomalous commutator is defined as the deficiency term that prevents the mapping $\omega\mapsto\nabla_{X^\omega}$ from the Lie algebra of $\mathcal{G}^{2n}$ to the commutator algebra of linear operators on quantum states from being a homomorphism of Lie algebras, where we used that $X^{[\omega,\eta]}=[X^\omega,X^\eta]$. But the right hand side of this deficiency relation is just the curvature two-form -- denoted by $K$ -- for the $U(1)$-connection $\Omega$, evaluated on the fundamental vector fields $X^\omega$ and $X^\eta$. It satisfies $K=d\Omega$. Using (2.15), we write the connection in the form (2.32). Integrating $K$ over a 2-sphere in $\mathcal{Q}^{2n}$ can be done by integrating our expression for $K$ over a disc in $\mathcal{A}^{2n}$ with boundary in a fibre $\mathcal{G}^{2n}$, or, equivalently, by integrating $\Omega$ over the boundary circle. To do this, let $g(t)$ be a loop in $\mathcal{G}^{2n}$ and $\gamma(t):=A^{g(t)}$ the associated loop through $A$ in $\mathcal{A}^{2n}$, with its generating vector field obtained by differentiating along the loop. We now integrate $\Omega$ along this vector field and obtain just $i$ times the expression for the integral (2.16), written in the (time dependent) gauge where $A_0=0$. The integration thus leads to $i$ times the right hand side of (2.23), where $w[g]$ is now the integer in $\pi_1(\mathcal{G}^{2n})$ represented by the loop along which we just integrated. For $-i\Omega$ to be a $U(1)$ connection, the result of the integration must be $2\pi$ times an integer (see e.g. Ref. 1), which again leads to the condition of $f$ being even. The integer is then known as the Chern-class of the $U(1)$ bundle, which here represents an element in $H^2(\mathcal{Q}^{2n},\mathbb{R})$, the second de Rham cohomology group.

Appendix

Here $\mathrm{TR}_{\phi\lambda\sigma}$ denotes the trace operation over the space-time functions ($\phi$), the Lie algebra ($\lambda$) and the spinor space ($\sigma$). The factor $f$ results from having already taken the trace over the $f$-dimensional flavour space. Negative powers of positive operators are defined via the heat-kernel representation
$$B^{-s} = \frac{1}{\Gamma(s)}\int_0^\infty dt\;t^{s-1}e^{-tB},$$
so that after some rearrangements we obtain for the expression in (A.4) a form in which the trace over the space-time functions is explicitly expressed in a plane wave basis $\exp(ik_\mu x^\mu)$. We write $k_0$ for the time-component and $\mathbf{k}$ for the collection of space-components $k_i$ of $k_\mu$. Now, the first two terms in the round bracket do not contribute, since
$$i\partial_0\,e^{-t(\cdots)}e^{ikx} = e^{ikx}\,i(\partial_0+ik_0)\,e^{-t(\cdots)}, \qquad (\mathrm{A.7})$$
which vanishes upon $k_0$-integration and our staticity requirement, and since there is an even number of spatial $\gamma_i$'s in $(\cdots)$. Further, we can thus write (A.1) in the form (A.10). In the $2n$-dimensional spatial space, $\gamma_{2n+1}$ ("gamma five") is given by $\gamma_{2n+1}=i^n\gamma_1\cdots\gamma_{2n}$, which is Hermitean and squares to one. Also,
$$\mathrm{TR}_\sigma\big(\gamma_{2n+1}\gamma_{i_1}\cdots\gamma_{i_{2n}}\big) = (-i)^n\,2^n\,\varepsilon_{i_1\ldots i_{2n}}.$$
So if we choose $\gamma_0=-i\gamma_{2n+1}$, we obtain
$$\mathrm{TR}_\sigma\big(\gamma_0\gamma_{i_1}\cdots\gamma_{i_{2n}}\big) = (-i)^{n+1}\,2^n\,\varepsilon_{i_1\ldots i_{2n}}, \qquad (\mathrm{A.11})$$
and have for the first non-vanishing contribution from the exponential in (A.10)
$$\frac{(-i)^{n+1}t^n}{2^n\,n!}\,\mathrm{TR}_{\lambda\sigma}\big(T^p\gamma_0\gamma_{i_1}\cdots\gamma_{i_{2n}}F_{i_1i_2}\cdots F_{i_{2n-1}i_{2n}}\big) = -\,\frac{i\,t^n}{n!\,2^n}\,2^n\,\varepsilon_{i_1\ldots i_{2n}}\,\mathrm{TR}_\lambda\big(T^p\,iF_{i_1i_2}\cdots iF_{i_{2n-1}i_{2n}}\big),$$
where from the first to the second line we have performed the trace over the $2^n$-dimensional spinor space. The last expression is meant to be the 0-component of the 1-form in curly brackets, where $*$ denotes the Hodge-duality operator with respect to the $2n+1$-dimensional metric $\delta_{\mu\nu}$. Performing the $d\mathbf{k}$-integration yields a factor of $(4\pi t)^{-n}$, so that expression (A.12) can be rewritten accordingly. Although we had selected the 0-th component to arrive at this expression, relativistic covariance tells us that the corresponding relations hold for any component (this is also apparent from the derivation, where any other component could have been preferred). If we now include higher powers $t^{n+r}$ from the exponential in (A.10), the $\mathbf{k}$-integration again deletes $n$ of them, so that we are left with an integral involving $s$ and $\Gamma(s+1)$ which, when acted upon by $\frac{d}{ds}\big|_{s=0}$, gives a term $\propto(k_0^2+m^2)^{-r-1}$, and after $k_0$-integration a term $\propto m^{-2r-1}$. So, finally, writing $\mathrm{tr}$ for $\mathrm{TR}_\lambda$, we arrive at the compact formula (A.14).
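As a quick numerical cross-check of the spinor-trace identity $\mathrm{TR}_\sigma(\gamma_{2n+1}\gamma_{i_1}\cdots\gamma_{i_{2n}})=(-i)^n 2^n\varepsilon_{i_1\ldots i_{2n}}$ used above, here is a minimal sketch for the lowest case $n=1$, with the two spatial gamma matrices realized as Pauli matrices -- a particular choice of representation for the illustration, not one fixed by the text.

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)

# n = 1: two spatial gammas; gamma_{2n+1} = i^n * g1 g2
g1, g2 = s1, s2
g5 = 1j * g1 @ g2
assert np.allclose(g5 @ g5, np.eye(2))      # squares to one
assert np.allclose(g5, g5.conj().T)         # Hermitean

eps = np.array([[0, 1], [-1, 0]])           # epsilon_{i1 i2}
gams = [g1, g2]
for i in range(2):
    for j in range(2):
        lhs = np.trace(g5 @ gams[i] @ gams[j])
        rhs = (-1j) ** 1 * 2 ** 1 * eps[i, j]
        assert np.isclose(lhs, rhs)
print("trace identity verified for n = 1")
```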
Usutu Virus Infection of Embryonated Chicken Eggs and a Chicken Embryo-Derived Primary Cell Line

Usutu virus (USUV) is a mosquito-borne flavivirus, closely related to the West Nile virus (WNV). Similar to WNV, USUV may cause infections in humans, with occasional, but sometimes severe, neurological complications. Further, USUV can be highly pathogenic in wild and captive birds and its circulation in Europe has given rise to substantial avian death. Adequate study models of this virus are still lacking but are critically needed to understand its pathogenesis and virulence spectrum. The chicken embryo is a low-cost, easy-to-manipulate and ethically acceptable model that closely reflects mammalian fetal development and allows immune response investigations, drug screening, and high-throughput virus production for vaccine development. While former studies suggested that this model was refractory to USUV infection, we unexpectedly found that high doses of four phylogenetically distinct USUV strains caused embryonic lethality. By employing immunohistochemistry and quantitative reverse transcriptase-polymerase chain reaction, we demonstrated that USUV was widely distributed in embryonic tissues, including the brain, retina, and feather follicles. We then successfully developed a primary cell line from the chorioallantoic membrane that was permissive to the virus without the need for viral adaptation. We believe the future use of these models would foster a significant understanding of USUV-induced neuropathogenesis and immune response and allow the future development of drugs and vaccines against USUV.

Introduction

Usutu virus (USUV) is a zoonotic arbovirus related to Japanese encephalitis (JEV) and West Nile (WNV) viruses (genus Flavivirus, family Flaviviridae) [1]. Initially restricted to Africa, it emerged in Europe in 1996 and managed to establish an endemic mosquito-bird life cycle and to co-circulate with WNV in many European countries [2,3]. Further, its rapid geographic spread across Europe led to a noteworthy recrudescence of infections in birds, recorded in over 96 species from 36 families [4][5][6], as well as substantial avian mortalities, especially in Eurasian blackbirds (Turdus merula) [7,8]. As for WNV, most human USUV infections are asymptomatic. In total, more than 80 cases of subclinical infection have been described to date in blood donors or persons at risk of exposure in Italy, Serbia, the Netherlands, and Germany during WNV surveillance surveys [9][10][11][12][13].

In Ovo Characterization of USU-BE-Seraing/2017

For the survival study, three different doses of the USU-BE-Seraing/2017 strain (10^4, 10^5, or 10^6 TCID50, dispersed in 100 µL of infected Vero cell culture supernatant diluted with DMEM) were each injected into nine 10-day-old ECE via the allantoic route. The eggs were subsequently incubated together with nine mock-infected controls at 37.5 °C and 55% relative air humidity. All eggs were checked daily by candling for embryonic vitality during 6 days post-infection (dpi). After the identification of embryonic death, the corresponding allantoic fluid was harvested and samples from the CAM, liver, skeletal muscle, heart, and brain were collected and examined by histology and immunohistochemistry (IHC) as in [38]. Virus isolation in 24-well plates containing a confluent monolayer of Vero cells was attempted from the allantoic fluid and liver tissues of each dead embryo [8].
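Doses throughout are expressed as TCID50 (median tissue culture infectious dose). For reference, below is a minimal sketch of the classical Reed-Muench endpoint computation of a TCID50 titer; the dilution series and well counts are a hypothetical illustration, not data from this study.

```python
def reed_muench_log10_tcid50(log10_dilutions, infected, total):
    """Reed-Muench endpoint titer from an endpoint-dilution assay.

    log10_dilutions: e.g. [-1, -2, -3, -4, -5], most concentrated first
    infected/total:  wells infected / wells inoculated at each dilution
    Returns the log10 of the 50% endpoint dilution; the titer is then
    10 ** (-result) TCID50 per inoculum volume.
    """
    n = len(log10_dilutions)
    uninfected = [t - i for i, t in zip(infected, total)]
    # cumulative infected: a well positive at a high dilution would also be
    # positive at any lower dilution, so sum from the most dilute row upward
    cum_inf = [sum(infected[i:]) for i in range(n)]
    # cumulative uninfected: sum from the most concentrated row downward
    cum_uninf = [sum(uninfected[: i + 1]) for i in range(n)]
    pct = [ci / (ci + cu) * 100 for ci, cu in zip(cum_inf, cum_uninf)]
    # locate the pair of dilutions bracketing 50% cumulative infectivity
    for i in range(n - 1):
        if pct[i] >= 50 > pct[i + 1]:
            prop = (pct[i] - 50) / (pct[i] - pct[i + 1])
            step = log10_dilutions[i] - log10_dilutions[i + 1]
            return log10_dilutions[i] - prop * step
    raise ValueError("50% endpoint not bracketed by the dilution series")

# textbook-style example: endpoint ~ 10^-3.68, i.e. titer ~ 10^3.7 TCID50
print(reed_muench_log10_tcid50([-1, -2, -3, -4, -5],
                               [5, 5, 4, 2, 0], [5, 5, 5, 5, 5]))
```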
To study the time-course of infection with the USU-BE-Seraing/2017 strain, a set of 62 ECE on the tenth day of development was incubated at 37.5 °C following allantoic cavity inoculation with 100 µL of infected Vero cell culture supernatant yielding an infectious dose of 10^5 TCID50. As negative controls, 30 eggs were injected via the allantoic route with 100 µL of virus-free DMEM. Over 5 dpi, dead embryos were opened and AFs were harvested to quantify RNA loads by RT-qPCR. In parallel, eight live infected and six uninfected age-matched embryos were randomly selected each day for euthanasia by decapitation. AF samples (200 µL) from the infected embryos were harvested to assess viral replication by RT-qPCR. Tissue samples from the CAMs, livers, hearts, and brains of five embryos were collected for RT-qPCR, histology, and IHC examination [38]. Viral RNA copies (VRC) in each tissue were calculated using a standard curve, which was constructed as described in [39]. The remaining embryos (three infected and one uninfected) were dissected as follows: for each embryo, the head, whole wings, and whole legs were separated from the trunk, which was transversely sectioned. All fragments were then immersed in 10% neutral buffered formalin for histopathological examination. On day 5 post-infection (pi), embryos were weighed to evaluate the impact of USUV infection on their growth.

Virulence of Other USUV Strains In Ovo

To compare the virulence of the USU-BE-Seraing/2017 strain in ovo with that of other USUV strains, three different doses of the USU-BE-Grivegnee/2017, Vienna 2001, and UR-10-Tm strains (10^4, 10^5, or 10^6 TCID50, dispersed in 100 µL of infected Vero cell culture supernatants diluted with DMEM) were each injected into nine 10-day-old ECE via the allantoic route. The ECE were kept at a controlled temperature of 37.5 °C and 55% relative air humidity. The eggs were then candled daily over 6 days. Upon detection of embryo mortality, the corresponding egg was opened and processed as previously described.

Preparation of Primary Chorioallantoic Membrane Cells

Primary chicken CAM cells were prepared from one 10-day-old embryo as follows: the CAM was carefully dissected, washed with phosphate-buffered saline (PBS, Gibco), and then minced into small fragments using a sterile blade. Next, the tissue was digested with 5 mL of TrypLE Select solution (Gibco, Life Technologies) at 37 °C for 10 min in a 15 mL sterile tube. The trypsinate was homogenized in the middle of the reaction by vigorous agitation of the tube. Digestion was stopped by adding 10 mL of DMEM supplemented with 10% fetal bovine serum and 1% penicillin/streptomycin. After centrifugation at 400× g for 5 min, the supernatant was removed and the CAM cells were re-suspended in 10 mL of the same cell culture medium. Next, the cells were filtered through a 100 µm filter and 10^7 cells were distributed into a 25 cm² flask. The cells were subsequently incubated at 37 °C with 5% CO2. The culture medium was renewed every three days and confluence was obtained within 7 days. The cells were passaged into a 75 cm² flask; every 10 days, subcultures were obtained with a split ratio of 1:3.

Characterization of USUV Strains Growth Kinetics in Chorioallantoic Membrane Cells

Chicken CAM cells (passage 4) were seeded in 24-well culture plates to a confluence of 80%. The four USUV strains were diluted in DMEM supplemented with 1% penicillin/streptomycin to three different multiplicities of infection (MOI: 0.1, 0.01, and 0.001).
Then, cells were rinsed once with PBS and each inoculum was added to 3 wells (1 mL per well). After 4 h of incubation at 37 °C, the inocula were removed and the cells were washed with PBS. Fresh DMEM supplemented with 1% penicillin/streptomycin was added to each well (2 mL per well) and the cells were incubated at 37 °C and 5% CO2 for the duration of the experiment. Mock-infected CAM cells incubated with an uninfected Vero cell culture supernatant were used as controls. For 6 days, 200 µL of supernatant was harvested daily from each well and held at −80 °C in cryotubes for absolute viral quantification by RT-qPCR, as previously described. Cell monolayers were visually monitored for the presence of cytopathic effects (CPE). At the end of the experiment, cells were rinsed with PBS, fixed with 1 mL of 4% paraformaldehyde and subsequently stained by IHC as in [37], but without the antigen retrieval step.

Statistical Analyses

Survival curves were plotted and compared using the log-rank and Gehan-Breslow Wilcoxon tests (GraphPad Software, La Jolla, CA, USA). To compare the RNA load per organ per day of infection, the Statistical Analysis System (SAS) Univariate procedure was used to test the normality of the data. Logarithmic transformation was performed to normalize the distribution of the data, which proved to be non-normally distributed. The general linear model (Proc GLM, SAS 2001) was used to test the effects of the day, organ, or strain and the day-organ interaction on the studied variables. The same procedure was used to compare viral load per strain per MOI in CAM cells. The comparison of the infected embryos' weights with those of age-matched uninfected ones was performed by analysis of variance (ANOVA). The GLM was used to compare the viral RNA loads in the AFs of infected euthanized embryos per day of infection. All tests used in the previous analyses were implemented in SAS (SAS Institute Inc., Cary, NC, USA). A p < 0.05 was considered statistically significant. All the data entered in GraphPad and SAS are provided in the Supplementary Materials.

Survival Study

Kaplan-Meier survival curves (Figure 1) showed significant differences in mortality according to the dose by both the log-rank (Mantel-Cox) test (χ² = 16.9, p = 0.0002) and the Gehan-Breslow Wilcoxon test (χ² = 16.03, p = 0.0003), plotted in GraphPad Software. Mock-inoculated embryos remained alive until the end of the experiment. The infected dead embryos were hemorrhagic and severely swollen with edema (Figure 2). Microscopically, the most relevant feature in all of the eggs was multifocal to diffuse areas of degeneration and necrosis in the CAM, with moderate to massive infiltration of heterophils and lymphocytes (Figure 3). Most slides showed absent or severely autolytic brain tissue. IHC revealed abundant USUV antigen in the CAM (epithelial and mesenchymal cells) and in developing myoblasts in the skeletal muscle and myocardium on day 5 pi (Figure 4A-D). A few hepatocytes were positive in a dead embryo on day 3 pi (not shown). Infectious viruses were successfully isolated on Vero cell cultures from the AFs and liver tissues of all infected dead embryos.
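The dose-wise survival comparison reported above can be reproduced in code. Below is a minimal sketch using the Python lifelines package on a hypothetical long-format table of embryo survival times (day of death, censored at day 6); it illustrates the type of analysis, not the study's actual data or software (the study used GraphPad).

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

# hypothetical per-embryo records: dose group, day of death or censoring,
# and whether death was observed (0 = alive at day 6, censored)
df = pd.DataFrame({
    "dose":  ["1e4"] * 9 + ["1e5"] * 9 + ["1e6"] * 9,
    "day":   [6, 6, 6, 6, 6, 5, 6, 6, 6,
              6, 4, 5, 6, 3, 6, 5, 6, 6,
              3, 4, 3, 5, 4, 6, 3, 4, 5],
    "event": [0, 0, 0, 0, 0, 1, 0, 0, 0,
              0, 1, 1, 0, 1, 0, 1, 0, 0,
              1, 1, 1, 1, 1, 0, 1, 1, 1],
})

# Kaplan-Meier curve per dose group
kmf = KaplanMeierFitter()
for dose, grp in df.groupby("dose"):
    kmf.fit(grp["day"], event_observed=grp["event"], label=f"{dose} TCID50")
    print(kmf.survival_function_)

# log-rank test across the three dose groups
res = multivariate_logrank_test(df["day"], df["dose"], df["event"])
print(res.test_statistic, res.p_value)
```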
Course of Infection

USUV RNA was detected in the AFs of all eggs infected with the USU-BE-Seraing/2017 strain (Figure 5). RNA loads in this region significantly varied over the infection time-course (p = 0.0049) and peaked on day 3 pi. Likewise, significantly higher RNA loads were found in AFs from dead embryos when compared to those from infected and euthanized ones (not shown). On day 5 pi, impaired growth (p = 0.002) was detected in the infected embryos compared to controls (Figure 6). The pathomorphological analysis revealed cutaneous hemorrhage without specific microscopic findings, except for the CAM, where cell necrosis and inflammation were marked. Varying amounts of viral antigens were demonstrated by IHC in the different tissues mentioned earlier, but also in the eye (retina), skin (epidermis and feather follicle pulp), and intestine (Figure 4D-G). USUV-antigen staining in the muscle bundles of the head, trunk, legs, and wings was mild but reproducible in the majority of the infected embryos. No USUV antigens were detected in the brain, kidney, or lung at any time of infection with this viral strain.
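Absolute quantification by RT-qPCR, as described in the Methods, converts a measured quantification cycle (Cq) into viral RNA copies via a standard curve fitted on serial dilutions of a quantified standard. A minimal sketch follows; the slope and intercept are hypothetical placeholders, not the calibration from [39].

```python
import numpy as np

def copies_from_cq(cq, slope=-3.32, intercept=38.0):
    """Convert Cq values to RNA copy numbers using a linear standard curve.

    The curve is Cq = slope * log10(copies) + intercept; a slope of -3.32
    corresponds to ~100% PCR efficiency. Values here are hypothetical.
    """
    return 10 ** ((np.asarray(cq) - intercept) / slope)

print(copies_from_cq([22.5, 28.1, 33.4]))   # copies per reaction
```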
The CAM, brain, heart, and liver samples all tested positive by USUV-specific RT-qPCR during the infection (Figure 7). A higher viral RNA load was found in the CAM compared to the other three tested tissues (p < 0.001). The heart and brain ranked second (p = 0.606), with higher amounts of RNA compared to those detected in the liver (p < 0.001 and p = 0.002, respectively).

Virulence of Other USUV Strains In Ovo

Kaplan-Meier survival curves showed no statistical differences in the embryonic mortality rates induced by the four USUV strains (Table 1). Similar findings were further observed with the European 3 lineage strains USU-BE-Villers aux Tours/2017 (Genbank: MK230890, passage 5) and USU-BE-Richelle/2017 (Genbank: MK230893, passage 5) [37] (data not shown). Moreover, no lethal effect was observed with doses of less than 10^4 TCID50 using all USUV strains available in our laboratory (data not shown). Table 1. Comparison of chicken embryo mortality rates following infection with three different doses of four Usutu virus strains, using the log-rank (Mantel-Cox) and Gehan-Breslow Wilcoxon tests. Gross and microscopic lesions, as well as IHC results, were similar to those observed after infection with the USU-BE-Seraing/2017 strain, with some new sites of virus replication. Embryos that died on day 5 pi with the USU-BE-Grivegnee/2017 and UR-10-Tm strains presented few antigen-positive cells in the brain (Figure 4H). An embryo infected with the USU-BE-Grivegnee/2017 strain showed abundant viral antigens in the pituitary gland on day 6 pi (Figure 4I). An overview of the IHC findings using the USUV strains is given in Table 2. As for the USU-BE-Seraing/2017 strain, infectious viruses from the AFs and liver tissues of the dead embryos infected with the three other USUV strains used in this study were successfully isolated on Vero cell cultures.

Characterization of USUV Strains Growth Kinetics in Chorioallantoic Membrane Cells

At the end of the experiment, CPEs were markedly pronounced in the wells infected with MOIs of 0.1 and 0.01 (not shown). The CPEs were characterized by the appearance of rounded, retractile cells followed by cellular death and destruction of the cell monolayer. Abundant antigen signals were seen in the cells remaining in the bottom of the wells, as seen by IHC staining (Figure 10).
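The strain-by-MOI comparison of viral loads described in the statistical methods can be sketched as a general linear model on log10-transformed loads. The snippet below uses Python's statsmodels as a stand-in for SAS Proc GLM, which the study itself used; the data frame, column names, and values are purely illustrative.

```python
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical long-format data: one row per well per sampling day
data = pd.DataFrame({
    "strain":     ["Seraing", "Seraing", "Grivegnee", "Grivegnee",
                   "Vienna2001", "Vienna2001", "UR-10-Tm", "UR-10-Tm"] * 3,
    "moi":        [0.1, 0.01, 0.1, 0.01, 0.1, 0.01, 0.1, 0.01] * 3,
    "day":        [1] * 8 + [3] * 8 + [6] * 8,
    "log10_load": [3.1, 2.4, 2.9, 2.2, 2.7, 2.0, 2.8, 2.1,
                   5.6, 4.9, 5.1, 4.4, 4.8, 4.0, 5.0, 4.2,
                   7.2, 6.8, 6.6, 6.1, 6.3, 5.7, 6.5, 5.9],
})

# fixed effects of strain, MOI and day on the log10 viral RNA load
model = smf.ols("log10_load ~ C(strain) + C(moi) + day", data=data).fit()
print(model.summary())
```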
Discussion

In this report, we showed that all four USUV strains injected at high doses into ECE via the allantoic route successfully replicated in the AF and caused the death of chicken embryos. These results contradict three previous studies that inoculated USUV into ECE. In the study carried out by Segura et al. [32], the authors infected 10-day-old ECE with high doses (10^4, 10^5, or 10^6 plaque-forming units, PFU) of USUV strain V18 (Genbank: KJ438730, lineage 3) via the allantoic route. Only low USUV titers were detected in the AFs of 14% of the eggs, and the chicken embryos developed normally [32]. In the study by Bakonyi et al. [31], the Vienna 2001 USUV strain was injected into the allantoic sac of 10-day-old ECE at a high dose (6 × 10^5 TCID50). The infected chicken embryos did not show death or lesions after four days of incubation and were negative according to IHC [31]. In contrast, the same strain in our study induced mortality in one embryo at a dose of 10^5 TCID50 and in three out of nine embryos at a dose of 10^6 TCID50 after four days of infection. In our hands, both live and dead embryos at this stage presented pathomorphological changes in the CAM and virus antigens in many tissues (typically in the CAM and skeletal muscle) that were highly indicative of USUV infection (data not shown). In the study by Bakonyi et al. [31], the original USUV isolate (before passaging) and USUV passaged twice in Vero cells both gave negative results. However, the strain we used for ECE inoculation was passaged 17 times in Vero cells, which may have induced specific genomic changes that increased its pathogenicity for ECE. Another possible explanation for the different infection outcomes with this USUV strain is that susceptibility to the virus might vary according to the chicken breed from which the embryonated eggs were obtained. Indeed, the immune response to a given pathogen can differ between chicken lines, contributing, at least in part, to these differences in the infection phenotype. For instance, the innate immune response to Newcastle disease virus infection was shown to be breed-dependent using chicken embryos [40] or hatched chicks [41] as infection models. Evidence of the role of the interferon response in the control of USUV infection was obtained using several in vitro [42,43] and murine models [32,[44][45][46]; thus, a breed-dependent innate immune response to USUV could be the underlying mechanism of the selective pathogenicity of USUV to chicken embryos. The immune response of the developing chicken embryo would be an excellent tool to evaluate the still-unexplored avian innate immune mechanisms in response to USUV infection. Likewise, the investigation of line-dependent chicken embryo immune responses would offer valuable answers to the question of the selective pathogenicity of USUV infection among avian species in general. The lethal effect of USUV was highly linked to the infective dose, as seen with other flaviviruses, such as ZIKV [28], WNV [33], and Japanese encephalitis virus [47], when injected into ECE. No lethal effect was observed with a dose of less than 10^4 TCID50, and USUV replicated poorly in the AFs and embryonic tissues at a dose of 10^3 TCID50 or less (data not shown).
Hence, ECE are likely to have limited efficiency for virus isolation from low-concentration field samples. This may explain why ECE resisted infection by USUV from dead bird samples in the study of Savini et al. [8], in contrast to the Vero cells used in the same study. In goose embryos, infection with the Vienna 2001 USUV strain caused neither mortality nor significant gross or microscopic lesions [48]. However, USUV replication was detected in the retina, some autonomic ganglia, skeletal muscle, renal tubular cells, and connective tissue cells [48]. In our report, following intra-allantoic injection of high doses of USUV, the infected chicken embryos showed stunted growth and cutaneous hemorrhage, which are common features of infection with some other mosquito-borne epornitic viruses, such as WNV [49] and Tembusu virus [50,51]. Microscopically, focal necrosis and non-suppurative inflammation were the hallmarks of infection in the CAM. High RNA loads and viral antigens were detected in other tissues, such as the brain, heart, and liver. The lack of inflammation in these organs is not yet well understood. This same feature was found after infection of ECE with the Yellow Fever 17DD vaccine virus [30]. Correspondingly, the liver showed very obvious macroscopic lesions and yielded infectious virus detectable by Vero cell culture; yet no spectacular histopathological changes were observed, RNA loads were lower compared to other tissues, and very few positive hepatocytes were detected by IHC. As a possible explanation, some of the virus revealed by RT-qPCR and Vero cell cultures may simply have been circulating in the blood [28]. The brain and pituitary gland tissues of embryos occasionally showed viral antigens. USUV was shown to infect several murine and human neuronal cell types and to replicate in mature human astrocytes more efficiently than ZIKV [52]. The impact of ZIKV on the development of the central nervous system of chicken embryos has already been assessed [27,28], and we estimate that our in ovo USUV model provides grounds for similar studies in the future. In our study, viral antigens were detected in the retinas of the chicken embryos on the second and third days of infection, consistent with the presence of viral antigens in the retina of experimentally USUV-infected goose embryos [31] and the dissemination of USUV to the eye demonstrated by RT-qPCR in experimentally infected canaries (Serinus canaria) [39]. Visual impairment and ocular lesions have been described in naturally WNV-infected raptors [53,54]. Another flavivirus, Bagaza virus (BAGV), was reported to cause blindness and ocular lesions in common pheasants (Phasianus colchicus) and partridges (Alectoris rufa [55] and Perdix perdix [56]). Further in vivo experiments in avian and murine models would be necessary to characterize the visual disorders potentially induced by USUV infection. Likewise, during embryonic development in chickens, we demonstrated for the first time the possibility of viral replication in feather follicles. This finding is in accordance with the excretion of USUV via the immature feathers of canaries during the early stages of experimental USUV infection [39]. These preliminary observations suggest that feathers may potentially play a role in the spread of the virus. Fully grown feathers from either dead or live birds of all ages and molt cycles could provide a simple method for the detection of WNV infection [57].
Further, the Israel turkey encephalitis virus, a deadly flavivirus for turkeys in Israel, could be amplified from feather pulps; virus detection from such samples was proposed to evaluate the proper administration of live vaccines [58]. More studies are needed to characterize the capacity of USUV to disseminate via the feathers in both naturally and experimentally infected birds [39]. The virus replicated in different regions of the egg, preferentially in the AF and CAM. In the AFs, the significantly higher RNA loads detected over the following days of infection compared to the first day could indeed rule out a simple detection of remnants of the viral inoculum by RT-qPCR. A peak in the RNA loads of the infected embryos was found on day 3, making it the most suitable day to collect AF for virus amplification. Infectious virus was systematically retrieved from the AFs of dead embryos using Vero cell culture, further indicating active replication of the virus in this region of the egg. Higher VRC were found in the AFs from dead embryos than in those from surviving ones, suggesting that higher replication at this site prompts fatal outcomes of USUV infection. The Yellow Fever 17D vaccine is considered to be among the most successful live-attenuated human vaccines and was used to develop other flavivirus vaccines by chimerization [29]. It was obtained by serial passages of the virus in chicken embryo tissues to remove its neurotropic properties [29]. Our ECE model could be beneficial for testing the protective effect of vaccine candidates, but its efficiency in amplifying virus particles in the large amounts needed by the vaccine industry is questionable, due to the high virus input needed to obtain viral replication in the AF. Evidence of strong viral replication was seen in the CAM. This result resembles that observed following infection of ECE with WNV [49], but it does not match that obtained with the Yellow Fever 17DD vaccine virus, which did not replicate in the CAM [30]. Consequently, CAM cells were isolated in vitro and showed susceptibility to USUV infection, as evidenced by the appearance of characteristic CPE and viral RNA production. To our knowledge, goose embryo fibroblasts were until now the only available in vitro avian model for the study of USUV [31]. Here, we developed the first cellular model from the domestic chicken (Gallus gallus domesticus) allowing the study of USUV. Virus quantities were directly related to seed virus input, which may limit the cost-effectiveness of this model in vaccine production. The yield of virus per cell [59] should be determined to characterize the production efficiency of this virus using this model. Primary chicken CAM cells were used to compare the replication ability of multiple phylogenetically distinct USUV strains, and differences in growth kinetics were observed. The USU-BE-Seraing/2017 strain showed the highest viral replication in this model, providing an interesting system for the evaluation of USUV sensitivity to antivirals, for instance. Whether the passage of virus in CAM cells led to the selection of genetic variants needs to be determined by nucleotide sequence analyses and in ovo pathogenicity assessment of CAM cell-derived strains.

Conclusions

In conclusion, this report is the first to use ECE and chicken embryo-derived cells as artificial models to study the histopathological lesions and virus tropism involved in the pathogenesis of USUV.
Our data suggested that USUV infection in Gallus gallus domesticus embryos is systemic and lethal in a dose-dependent manner. The CAM seems to be the main replication site of USUV, with severe histopathological changes and abundant cell staining by IHC. Cells from the CAM were highly permissive to USUV when cultured in vitro. We believe the use of this cell model, along with ECE, could further foster a significant understanding of USUV pathogenesis and provide grounds for the development of vaccines against USUV.