Behaviour and welfare assessment of autochthonous slow-growing rabbits: The role of housing systems
Understanding the impact of the farming system on animals is crucial for evaluating welfare. Rabbits exhibit distinct behaviours influenced by their surroundings. The conditions in which they are raised directly influence behaviour and stress responses, emphasizing the importance of providing an optimal environment for their overall well-being and growth. In this study, we assessed the behaviour and welfare of two Italian local rabbit populations, namely the grey rabbit of Carmagnola and the grey rabbit of Monferrato. These rabbits are not yet officially recognized as breeds, but they are commonly used in Italy for meat production and represent a distinctive phenotype and local heritage among farmers and consumers. We analysed the behavioural patterns, physiological responses, and blood parameters of the animals to assess the influence of both age and three distinct housing systems (traditional single cages, group farming, and a mixed system) on rabbit welfare. In this study, 294 weaned males, 35 days old, were divided into three housing systems with seven replicates each until reaching slaughtering age (100 days of age): a traditional single-cage system, group farming with 10 animals per replicate, and a mixed pilot system in which 10 rabbits were initially grouped and then transferred to single cages. The findings from the behavioural analysis and the evaluation of salivary and hair corticosterone levels demonstrate that both the housing system and the age of the rabbits exerted significant effects on their welfare. Rabbits in group housing displayed a wider range of behavioural patterns, including increased kinetic activities such as running, walking, and exploration. However, this housing system was associated with higher levels of both salivary and hair corticosterone, indicating high acute and chronic stress. The single-cage system was associated with higher levels of acute stress and a low frequency of kinetic activities and social interactions, with turning on themselves as the predominant behaviour. The age factor significantly influenced the occurrence of behaviours, with younger rabbits exhibiting higher levels of kinetic activities, while social behaviours such as attacks and dominance became more prevalent as the rabbits reached sexual maturity (around 80-85 days of age). Moreover, the attainment of sexual maturity coincided with an increase in salivary corticosterone levels. We found a significant association between attack behaviours, escape attempts, and elevated corticosterone levels, demonstrating that these behaviours can be used as indicators of decreased well-being. Our findings underscore the importance of considering both the housing environment and the temporal dimension in the study of behaviour and welfare. This enables a comprehensive assessment of appropriate rearing management techniques. By understanding the social dynamics and stress sources within housing systems, farmers can implement measures to enhance animal welfare and create a conducive environment for the health and behaviour of rabbits.
Introduction
The European Union (EU) is the world's second-largest producer of meat rabbits, following China, and it is responsible for 93% of the global imports and exports in this industry [1]. Germany, Belgium, and Portugal are the primary importing countries, while Spain, Hungary, France, and Belgium are the major exporting countries [2]. Professional rabbit farming for commercial meat production is concentrated in Spain, France, and Italy, which together account for 83% of EU production. Specifically, Spain produces 48.5 million rabbits, France produces 29 million rabbits, and Italy produces 24.5 million rabbits [1]. In Italy, commercially reared rabbits are predominantly raised in standard wire cages. These housing conditions have been associated with elevated stress levels in the animals, which can compromise their overall welfare [3]. However, currently there is no specific legislation at the EU level regarding rabbit housing, although some member states such as Italy, Germany, and Belgium have developed their own national legislation or recommendations. The Italian Ministry of Health has developed guidelines on welfare in rabbit breeding, which aim to standardize breeding practices and allow breeders to renew their cages in preparation for the adjustments required by the European Regulation. This Regulation will be established in accordance with the guidance provided by the EFSA (European Food Safety Authority) Scientific Opinion on the main critical points of cuniculture [4]. This report highlights the intensive single-cage breeding system as a critical factor affecting the welfare of rabbits. The main concerns are related to narrow environments, high breeding densities, and the inability of animals to express social behaviours. In addition, public opinion, influenced by the perception of rabbits as pets, strongly advocates for the abandonment of single cages in rabbit breeding. As a result, there is a pressing need to identify and implement the most suitable alternative housing system to replace the traditional single cages in rabbit breeding.
Alternative rearing systems for rabbits encompass a variety of approaches designed to improve animal welfare and optimize production efficiency. Central to these systems are considerations such as cage/pen dimensions and environmental enrichment, which play pivotal roles in promoting the physical and psychological welfare of group-housed rabbits [5]. Cage or pen dimensions in alternative systems often prioritize spaciousness to allow for increased mobility and social interaction among rabbits [6]. Additionally, environmental enrichment strategies such as providing tunnels, platforms, or chew toys offer opportunities for mental stimulation, physical exercise, and natural behaviours such as burrowing and exploring. These elements are essential for ensuring the overall health of group-housed rabbits, contributing to their quality of life and productivity in alternative rearing systems [7]. In light of this, some tests might be helpful to evaluate the stress and fear responses of prey animals like rabbits [8]. Tonic immobility, observed in rabbits, serves as one such test. When faced with perceived threats or stressors, rabbits may enter a state of immobility, indicating their level of fear. This behaviour, characterized by sudden stillness and rigid posture, offers insights into the mechanisms underlying fear and stress regulation in these prey animals [9]. However, alternative housing solutions proposed at the European Community level and subsequently incorporated into the Ministerial Guidelines for rabbit breeding are not always beneficial for both animal welfare and production performance [10-13], and the advantages of alternative breeding systems for slow-growing local rabbit breeds remain uncertain. Additionally, aggressive social behaviours emerge with the onset of puberty, which occurs around 70 days of age [14]. These conditions contribute to chronic stress, which negatively impacts the immune system and the overall performance of the animals; the consequences include reduced growth, an increased incidence of injuries among rabbits, and elevated mortality rates [13]. Given their genetic peculiarities, the preservation of autochthonous slow-growing breeds is advantageous due to their high capacity to adapt and to withstand the rising challenges posed by climate change [15-17].
In the present study, we assessed the effects of three different housing systems (traditional single cage, group farming, and mixed system) on the welfare and behaviour of two Italian local rabbit populations: the grey rabbit of Carmagnola (GC) and the grey rabbit of Monferrato (GM). Both are characterized as medium-slow-growing populations with low reproductive efficiency. The primary objective of this study was to identify the most suitable housing systems for the two local rabbit populations. To achieve this, we evaluated their behaviours and welfare, short-term and long-term physiological stress indicators (i.e., salivary and hair corticosterone), and blood stress indicators, and we performed the tonic immobility test.

Animals and housing
The research was conducted at the experimental farm of the Department of Veterinary Sciences, Turin University (Italy), from March to July 2022. All animals were handled in accordance with the recommendations of the Turin University Bioethics Committee (Protocol no. 0245520).
A total of 294 male weaned rabbits (35 days old) from two different grey rabbit populations, Carmagnola (GC, N = 147) and Monferrato (GM, N = 147), were randomly allocated to three breeding systems:
• Traditional single cage (Single): 7 rabbits per breed were housed individually in cages measuring 500 x 250 x 300 mm, with a stocking density of 24 kg/m². Each cage was considered an experimental unit (7 replicates).
• Group farming (Group): 70 rabbits were housed in collective cages measuring 2 m², at a density of 15 kg/m² (7 replicates).
• Mixed pilot system (Mixed): 70 rabbits were initially raised in groups of 7 per collective cage (15 kg/m²). Once they reached sexual maturity (80 days old), they were transferred to single cages measuring 500 x 250 x 330 mm, with a stocking density of 24 kg/m² (7 replicates).
All experimental groups were housed in the same artificially ventilated building with an airflow rate of 0.3 m/s. The environmental conditions, including temperature and relative humidity, were monitored and controlled daily within the ranges of +15/+28 °C and 60-75%, respectively. The lighting schedule followed a 12-hour light/12-hour dark cycle (12L/12D). During the trial, from weaning until commercial slaughtering age (100 days of age), the rabbits were provided with ad libitum access to feed and water. Daily health checks were performed to monitor the health status of the rabbits, and any deceased animals were removed.

Behavioural analysis
The behavioural patterns of the rabbits were recorded through direct observations conducted by two experienced operators who had undergone prior training together. The study used the Focal Animal Scan Sampling Method, as outlined by Lehner, which involves observing a designated individual (the focal animal) within a group and meticulously documenting its behaviour in real time. This method enables a comprehensive understanding of individual behaviours, social dynamics, and responses to environmental stimuli within animal groups [18]. Behavioural observations were conducted at four different ages: 55 (T1), 70 (T2), 85 (T3), and 100 (T4) days, as shown in Fig 1. The rabbits were observed during the daytime, between 9 a.m. and 11 a.m. and between 2 p.m. and 4 p.m.; their nocturnal activity was therefore not recorded. The observation times were chosen to be representative of a day and of the period around sexual maturity, and of how this hormonal change affected overall welfare and behaviour. Prior to each observation period, the operators allowed a 5-minute adaptation period for the animals to acclimate to their presence. Data were recorded on a designated form. To determine the end of an observed behaviour, the operator waited 10 seconds to see whether the same behaviour was repeated; after the 10 seconds, any new behaviour observed was recorded. To develop the ethogram (Table 1), the following behaviours were recorded: kinetic activities (walking, running, jumping, turning on itself, and exploratory behaviours); feeding behaviours (eating and drinking); static activities (lying down, crouching, sitting, staying, and standing); comfort behaviours (self-scratching and self-grooming); stereotypical behaviours (smelling and biting bars); and social behaviours (attack, smelling others, allo-grooming, dominance features, and escape attempts). The specific ethogram was compiled based on Mugnai et al. [11] and further validated through preliminary observations. The behaviours were observed and registered as a frequency measure, without considering duration. For each rabbit, the frequency of occurrence of each behaviour was calculated by dividing the number of times it was observed by the total number of observations; this frequency was then multiplied by 100 to obtain a percentage value, as sketched below.
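As an illustration of this computation, here is a minimal sketch in Python (not the authors' code; the behaviour labels and counts are hypothetical):

from collections import Counter

def behaviour_percentages(observations):
    """observations: list of behaviour labels recorded for one focal rabbit."""
    total = len(observations)
    # Each behaviour's count is divided by the total number of observations
    # for this rabbit and scaled to a percentage.
    return {b: 100.0 * n / total for b, n in Counter(observations).items()}

# Hypothetical records for one focal animal over eight scans
records = ["walking", "walking", "lying down", "eating",
           "turning on itself", "walking", "exploratory", "lying down"]
print(behaviour_percentages(records))
# e.g. {'walking': 37.5, 'lying down': 25.0, 'eating': 12.5, ...}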
Tonic immobility test
The Tonic Immobility (TI) test was conducted on the same observed animals at all four times (T1, T2, T3, and T4) (Fig 1). The rabbits were individually identified by a clipped area of fur on their right thigh. To perform the test, the operator gently removed the rabbit from its cage and induced immobility by turning the animal on its back while holding it in the operator's arms. The immobile rabbit was then placed on a plastic support surface, following the procedure described by Wilczyńska et al. [8]. A maximum of three attempts were made to induce immobility in each animal, and the animals were not kept in the immobility condition for more than 5 minutes. The number of attempts required to induce immobility and the total duration of the condition were recorded for each animal. The assessment was carried out 48 hours after the behavioural evaluations by a trained operator who was entirely unacquainted with the subjects and exclusively dedicated to this experimental protocol.

Inter-observer reliability
Inter-observer reliability (IOR) is crucial for ensuring reliable behavioural and welfare assessments, as these assessments can be influenced by subjectivity and potential biases related to the assessors' prior experience and level of empathy towards the animals [19]. To evaluate the reliability of the behavioural assessments conducted by the two observers involved in the study, the IOR was assessed. Three different methodologies were used to determine the extent to which these trained observers consistently observed and recorded data. Firstly, we conducted the Spearman correlation test to examine the correlation between the percentages of each observed behaviour. Secondly, we employed two agreement indexes: Bangdiwala's B [20] and Gwet's γAC1 [21]. These indices were chosen based on Giammarino et al. [22], as they have demonstrated superior performance in evaluating animal welfare indicators. Both indices measure the proportion of agreement between the two observers, taking into account the total number of observations made by each observer. They range from 0 to 1, with 0 indicating disagreement, 0.5 representing neutrality, and values closer to 1 indicating higher agreement between the observers. The Spearman correlation test was conducted using R software, Version 3.1.2 (R Core Team, 2014), with a significance level of p ≤ 0.05 considered statistically significant.
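For reference, the standard definitions of these two indices (our notation, not reproduced from the paper; assuming a K x K cross-classification of the two observers with cell counts n_{ij}, row totals n_{i+}, column totals n_{+j}, and grand total n) are:

\[ B = \frac{\sum_{i=1}^{K} n_{ii}^{2}}{\sum_{i=1}^{K} n_{i+}\, n_{+i}}, \qquad \gamma AC_{1} = \frac{p_a - p_e}{1 - p_e}, \]

\[ \text{with } p_a = \frac{1}{n}\sum_{i=1}^{K} n_{ii}, \qquad p_e = \frac{1}{K-1}\sum_{k=1}^{K} \pi_k (1 - \pi_k), \qquad \pi_k = \frac{n_{k+} + n_{+k}}{2n}. \]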
Corticosterone samples collection, hormonal evaluation, and blood stress indicators
To evaluate short-term and long-term physiological stress in rabbits, we assessed their saliva and hair corticosterone (CORT) levels, respectively. Samples were collected from the same rabbits that underwent the TI test at the four times (T1, T2, T3, T4). Saliva was collected in the morning (at 9 a.m.), immediately after the TI test, using a Salivette® with a polyethylene pad (Sarstedt AG & Co., Nümbrecht, Germany). A cut sampling swab was inserted into the corner of the rabbit's mouth with the use of a clamp and was chewed by the animal for 30-60 s. The amount of saliva obtained from individual rabbits ranged from 0.01 to 0.5 mL. The collected samples were immediately frozen and stored at -80 °C until analysis. Upon thawing, the stored samples were prepared for the hormonal assays: the thawed samples underwent a 10-minute centrifugation at 3,500 rpm at 4 °C, and the saliva was then transferred into Eppendorf® test tubes.
Hair samples were collected from the thigh; the area was first shaved as close as possible to the skin using previously cleaned scissors. The hair samples were then placed in labelled paper bags and stored under light-protected, dry conditions at room temperature until extraction. Hair CORT extraction was conducted following the method described by Meyer et al. [23], with modifications. 250 mg of hair were weighed and washed with 5 mL of isopropanol. After 3 minutes of mixing, the excess solvent was removed, and the samples were dried under a hood. The dried hair was cut into 1-3 mm fragments using scissors. Two 60 mg portions were then placed into a 5 mL glass vial, and 3 mL of methanol (Sigma Aldrich, IT) was added. The vials were incubated at 37 °C under an airstream suction hood for 18 hours and then centrifuged for 15 minutes at 2,500 rpm. The supernatant, collected in glass vials, was dried under an airstream suction hood at 37 °C. These extracts were stored at -20 °C until analysis. Before the quantification of CORT, the extracted samples were reconstituted with 2 mL of Assay Buffer (Arbor Assays™, Ann Arbor, MI, USA). Saliva and hair CORT levels were determined with a multi-format commercial ELISA kit (K014; Arbor Assays™, Ann Arbor, MI, USA) validated for saliva, hair, and other substrates. The inter- and intra-assay coefficients of variation were less than 10% for both saliva and hair. According to the manufacturer, the kit exhibited the following cross-reactivities: 100% with corticosterone, 18.9% with 1-dehydrocorticosterone, 12.3% with desoxycorticosterone, and 0.38% with cortisol. The results are reported as the amount of CORT in saliva (ng/mL) and in hair (ng/g).
Blood samples were collected from rabbits at 100 days of age to assess the heterophil/lymphocyte ratio (HLR) and oxidative stress parameters (i.e., U.CARR and µmol HClO/mL). The H/L ratio determination (CBC, complete blood count) was performed on EDTA blood samples with an automated laser analyser (ADVIA 120 Hematology System, Siemens Diagnostics). Automated differentials were validated by microscopic evaluation of blood smears stained with May-Grünwald-Giemsa. One hundred leukocytes, including granular (heterophils, eosinophils, and basophils) and non-granular (lymphocytes and monocytes) leukocytes, were counted on the slide, and the H/L ratio was calculated.

Statistical analysis
The average percentage of each behaviour, the average duration (seconds) of the TI test, the saliva and hair CORT concentrations, and the blood stress indicators (expressed as mean ± standard deviation) were calculated for each rabbit population (GC and GM), housing system (Single, Group, Mixed), and time (T1, T2, T3, T4). Normality of data distribution was assessed using the Shapiro-Wilk test. We used two-way analysis of variance (ANOVA) to evaluate the effects of the rabbit population, housing system, age, and their interactions. Multiple comparisons of the means were carried out by calculating the least significant difference with the Duncan test. Correlation analysis of the rabbits' behaviours was performed using Spearman's correlation coefficient rho, corrected according to Bonferroni. Afterwards, a Generalized Linear Model (GLM) (gamma distribution with a log link function) was employed to explore the relationships between the rabbits' behaviours and CORT levels, as in the sketch below. Statistical analyses were conducted using R software, Version 3.1.2 (R Core Team, 2014), with a significance level of p ≤ 0.05 considered statistically significant.
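The analyses were run in R; purely as an illustration, a gamma GLM with a log link of the kind described above can be sketched as follows in Python/statsmodels (not the authors' code; variable names and values are hypothetical toy data):

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Toy data: one row per rabbit; 'cort' in ng/mL, behaviours as % of observations
df = pd.DataFrame({
    "cort":    [1.2, 0.8, 2.5, 3.1, 0.9, 1.7, 2.2, 1.1],
    "attack":  [0.0, 0.0, 5.0, 7.5, 1.0, 2.5, 6.0, 0.5],
    "sitting": [12.0, 15.0, 4.0, 2.0, 10.0, 8.0, 3.0, 11.0],
})

# Gamma-distributed response with a log link, as described in the text
model = smf.glm(
    "cort ~ attack + sitting",
    data=df,
    family=sm.families.Gamma(link=sm.families.links.Log()),
)
print(model.fit().summary())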
Inter-observer reliability
The IOR results for the behavioural observers are presented in Table 2. The results demonstrated significant correlations between the percentages of all behaviours evaluated by the two observers. Additionally, the agreement indexes, Bangdiwala's B and Gwet's γAC1, provided further evidence of consistency in assessing these behaviours, with values close to 1 for all behaviours, indicating a high level of agreement between the observers. The B index ranged from 0.798 (smelling bars) to 0.988 (attack and escape attempts), and the γAC1 index ranged from 0.486 (smelling bars) to 0.977 (escape attempts).

Behavioural observations
The effects of population, housing system, age, and their interactions on the percentage of behaviours of the Carmagnola (GC) and Monferrato (GM) grey rabbits are presented in Table 3. Overall, the housing system and the age of the rabbits had a greater effect on behavioural patterns than the population to which the animals belonged. The housing system had an impact on the majority of behavioural patterns observed in rabbits (except for jumping, eating, sitting, staying, self-scratching, and escape attempts), and its effects differed depending on the specific behaviour examined. In particular, rabbits housed in Single exhibited higher frequencies of turning on itself (p < 0.001), lying down (p < 0.001), and drinking (p = 0.045) behaviours. Conversely, rabbits in Group displayed increased kinetic activity, including running (p = 0.0013), walking (p < 0.001), and exploratory (p = 0.009) behaviours, as well as standing behaviour (p = 0.03) and social interactions such as attacks (p < 0.001), smelling others (p < 0.001), and dominance displays (p < 0.001).
The age factor had an impact on the occurrence of certain behaviours, with the exceptions of jumping, eating, drinking, lying down, crouching, staying, self-scratching, self-grooming, smelling bars, smelling others, and escape attempts. Behaviours that showed variation over time were more prominently displayed during the early stages of the study (T1 and T2), including walking (p = 0.047), exploratory (p = 0.04), sitting (p < 0.001), standing (p < 0.001), self-grooming (p = 0.02), and allo-grooming (p < 0.001). On the other hand, social behaviours such as attack (p = 0.01) and dominance (p < 0.001) were more frequently observed during the final stages of the study (T3 and T4).
Although the population did not emerge as a significant factor for most behaviours when examined as a single factor, our findings revealed numerous effects of population in interaction with both the housing system and age. Our analysis of exploratory (p = 0.005), lying down (p = 0.01), crouching (p = 0.02), staying (p = 0.035), biting bars (p = 0.03), and attack (p = 0.04) behaviours revealed an interaction between population and housing system, indicating that the combination of population and specific housing system had distinct effects on these behaviours. Similarly, we found interactions between population and age for running (p = 0.007), walking (p = 0.007), turning on itself (p = 0.032), exploratory (p < 0.001), sitting (p = 0.042), staying (p = 0.011), biting bars (p = 0.03), and attack (p = 0.04) behaviours, indicating that the combined effects of population and age had a notable impact on these behaviours. Finally, the analysis of exploratory (p = 0.002), eating (p < 0.001), drinking (p < 0.001), lying down (p = 0.01), crouching (p = 0.005), self-grooming (p = 0.03), smelling bars (p < 0.001), biting bars (p < 0.001), attack (p < 0.001), smelling others (p < 0.001), allo-grooming (p < 0.001), and dominance (p < 0.001) behaviours revealed significant interactions between housing system and age. Furthermore, it is important to consider the timing of observations: in the case of the Mixed system, at T3 and T4 the rabbits were individually housed in single cages to mitigate potential negative effects, such as conspecific aggression and agonistic behaviours, typically associated with sexual maturity [24]. Although the observers did not directly observe rabbits plucking their fur, the presence of fur found under the collective cages of GC and GM rabbits at T2 suggested the occurrence of this behaviour.
Salivary and hair corticosterone and blood stress indicators
The effects of population, housing system, and age on salivary and hair CORT levels are presented in Table 3. The populations did not show significant effects on either salivary or hair CORT levels. However, the housing systems and the age of the animals, along with their interactions, had a significant impact on both short-term and long-term physiological stress in rabbits (Fig 2). Rabbits housed in Group exhibited higher levels of both salivary and hair CORT. Higher levels of salivary CORT were observed in Single compared to the Mixed system; conversely, in the Mixed system, rabbits displayed higher levels of hair CORT compared to the Single system. Regarding the age factor, salivary CORT levels increased at the end of the study (T3 and T4), indicating elevated short-term stress levels during those periods (Fig 2). On the other hand, hair CORT showed higher levels at T2 and T4, suggesting a different pattern of long-term physiological stress (Fig 2). Generalized linear models (GLM) were used to examine how the saliva and hair CORT levels varied in relation to the rabbits' behaviours. To avoid potential multicollinearity issues, behaviours that were highly and significantly correlated were excluded from the models. The results revealed a significant increase in both salivary and hair CORT levels as attack behaviours increased. Conversely, salivary CORT levels increased when both sitting and self-grooming behaviours decreased (Table 4). Additionally, hair CORT increased with escape attempts (Table 4). Overall, no significant differences were found in the blood stress indicators of rabbits when investigating the effects of population, housing system, and age (S2 Table in S1 File).

Tonic immobility
The analysis investigating the effects of population, housing system, and age on tonic immobility in rabbits demonstrated no statistically significant differences in TI for any attempt. All the data were therefore combined and pooled to calculate a mean value, which also showed no significant differences. Additionally, the interactions between population and housing system, population and age, and housing system and age were not significant (S1 Table and S2 Fig in S1 File).

Discussion
It is well known that animal behaviour is influenced by the environment, and understanding how the environment affects animals is crucial for assessing and quantifying welfare. In this study, we investigated the impact of three different housing systems (single cage, group farming, mixed system) on the welfare and behaviour of two Italian local rabbit populations, taking into account the age of the rabbits as well. We achieved a high level of inter-observer reliability in the behaviour assessment, which revealed significant agreement among the methods employed. This reinforces the validity and reliability of our findings and the accuracy of the behavioural data.
Regarding differences in the behaviours exhibited by the two rabbit populations (i.e., GM and GC), we observed that only specific behavioural patterns, particularly crouching behaviour and comfort activities, differed. We observed a significant population effect on comfort activities, with GC rabbits displaying a higher propensity for self-scratching and self-grooming compared to GM rabbits. These results suggest the possible presence of genetic differences between the two populations that contribute to their distinct behavioural patterns. We hypothesize that these differences may derive from selective breeding practices or environmental factors specific to each population. Further investigation into the genetic factors contributing to these behavioural differences could provide valuable insights into the underlying mechanisms shaping rabbit behaviour. Beyond population differences and possible genetic variations, environmental enrichment plays a key role in promoting natural behaviours, providing animals with a greater number of behavioural opportunities [25]. Rabbits, like other animals, have specific behavioural expectations in relation to their surroundings, and the environmental conditions in which they are reared have a direct impact on their behaviour. Accordingly, we found significant effects of the housing system on the behavioural patterns of both populations of grey rabbits. Rabbits in Group exhibited a broader range of behaviours, with a higher percentage of kinetic activities like running, walking, and exploratory behaviours. This finding aligns with previous research by Dal Bosco et al. [26], Lambertini and Formigoni [27], Princz et al. [28], and Trocino et al. [29], who also reported increased movement in rabbits housed in group systems, and a negative correlation between movement and eating activity. These findings indicate that group housing offers a more stimulating and dynamic environment for rabbits, leading to a broader range of physical activities. This was associated with a reduction in stereotypical behaviours, decreased time spent on feeding and resting, and an increase in social activities, exploration, and aggressiveness, in line with previous research [26,28]. On the other hand, rabbits housed in Single exhibited higher frequencies of turning on itself, lying down, and drinking behaviours, while rabbits in Mixed displayed more crouching, self-grooming, and stereotypic activities such as smelling and biting bars. These observations suggest that the Mixed system may not provide an optimal environment for rabbits, given the increased occurrence of stereotypical behaviours. In the case of rabbits housed in Single, the behavioural repertoire is limited by the spatial constraints of the environment. Social activities are restricted, as rabbits have limited opportunities to perform behaviours such as smelling others and allo-grooming, especially when neighbouring rabbits are housed in adjacent cages [30]. Research indicates that anxiety symptoms are often linked to restricted repetitive behaviours (RRBs), particularly when animals engage in repetitive behaviours consistently [26,27]. This may explain why rabbits in single cages exhibit anxious repetitive behaviours such as bar biting and bar sniffing, both classified as stereotyped behaviours [3].
The impact of age on behaviours, particularly kinetic activities such as walking and exploratory behaviours, was significant, in addition to the effect of the housing system. Rabbits housed in Single had limited opportunities for kinetic activities, except for turning on themselves, as previously discussed. Conversely, rabbits in collective cages, in both the Group and Mixed systems, had more space available for movement, particularly at younger ages (T1 and T2). The contrast in available space for kinetic activities among the three housing types underscores the importance of housing design in facilitating rabbit behaviour. Single housing, while offering individual space, may restrict movement due to spatial limitations, leading to predominantly stereotypic behaviours such as turning. This difference in space availability highlights the potential benefits of the Group and Mixed housing configurations, as they better accommodate the locomotor needs of rabbits, particularly during early developmental stages. Moreover, it is well known that locomotor activity in rabbits tends to decrease with age. Consistent with the findings of Trocino et al. [29], our study revealed that the occurrence of running behaviour was influenced by both the housing system and age, with higher frequencies observed in older rabbits (T3 and T4) within the Group system. The tonic immobility test did not show any significant differences among the housing systems, ages, and populations. This lack of significant differences could be attributed to the regular handling of the rabbits by the farmer during routine management practices; it is widely known that animals can gradually become habituated to human presence and contact, resulting in a decrease in fear responses over time. One limitation of our study is that our sampling method does not fully account for circadian variations in corticosterone levels.
Corticosterone levels naturally fluctuate throughout the day, and our fixed sampling time may not capture these dynamic changes accurately. As a result, while our morning saliva samples reflect the stress levels of the preceding night, they may not provide a comprehensive view of the fluctuations in corticosterone that occur over a 24-hour period. All observed animals were males; thus, there was no sex variability in our study. However, sex differences could have a significant impact on behaviour and may be an area of interest for future studies. Furthermore, the total number of animals observed was 294. While this number may seem substantial, increasing the sample size in future studies could enhance the representativeness of our results and provide a better understanding of the observed behavioural patterns.
Our results on corticosterone levels provide valuable insights into the physiological stress experienced by the rabbits in the different housing systems, consistent with previous studies [3]. Prior studies investigating the diurnal rhythm of salivary corticosterone concentration in rabbits have highlighted fluctuations in stress hormone levels over the course of a day; notably, research indicates that corticosterone levels peak between 12:00 and 15:00 [31]. These findings underscore the dynamic nature of stress regulation in rabbits and provide valuable insights into the temporal patterns of their physiological responses to environmental stimuli. Such understanding contributes significantly to our comprehension of the adaptive mechanisms employed by rabbits in coping with the varying stressors encountered in their natural habitat. Rabbits housed in the Group system exhibited higher levels of both salivary and hair CORT, indicating an increased stress response in this housing condition. This might be attributed to factors such as social dynamics, competition for resources, or other stressors associated with group housing. This finding is consistent with previous studies that have reported increased stress levels in group-housed animals [32], including rabbits [10,33], due to factors such as social hierarchy and environmental challenges. The Single system was associated with higher levels of salivary CORT compared to the Mixed system. This result suggests that individual housing might lead to acute stress responses in rabbits, possibly due to the limited opportunities for social interaction and environmental enrichment in single cages [34]. The lower hair CORT levels (i.e., lower chronic stress) observed in rabbits housed in Single could be attributed to a potential coping response, as suggested by Mugnai et al. [3]. The coping response refers to behaviours that appear to attenuate stressor-induced physiological responses [35] by exerting a calming effect [36]. Rabbits in single cages exhibited a higher frequency of stereotypical behaviours, notably turning on itself. It is plausible that this behaviour triggered a calming effect, contributing to the maintenance of lower CORT levels in these animals. We acknowledge that a limitation of using hair corticosterone as a biomarker for chronic stress is the presence of individual differences in hair coloration; even within a population of rabbits with uniform hair colour, there may be individual variations that require further investigation. Regarding the age factor, we observed that salivary CORT levels increased at the end of the study (T3 and T4), indicating elevated short-term stress levels during those periods. This observation could be attributed to factors such as the attainment of sexual maturity, which may have triggered acute stress responses in the rabbits. On the other hand, hair corticosterone levels showed higher values at T2 and T4, suggesting a different pattern of long-term physiological stress. This pattern could be influenced by the cumulative effects of chronic stressors experienced by the rabbits over time, which might result in a delayed impact on hair CORT levels. Moreover, the influence of age and its interaction with the housing system had a significant effect on allo-grooming behaviour. As the rabbits aged, the occurrence of this cohesive social behaviour decreased, particularly at T3 and T4 when the rabbits reached sexual maturity. Instead, aggressive behaviours such as attack, dominance features, and escape attempts became more prevalent. These findings align with previous studies conducted by Lambertini et al. [27], Dalle Zotte and Szendro [37], and Trocino et al. [38], which reported an increased risk of aggression among rabbits as they approached sexual maturity.
To investigate the potential connections between specific behaviours and physiological stress responses in rabbits, we used Generalized Linear Models (GLM). The significant increase in both salivary and hair CORT levels as aggressive behaviours increased suggests a potential link between aggressive interactions and both acute and chronic stress reactivity in rabbits. This finding aligns with previous research in other animal species indicating that aggressive behaviours can elicit physiological stress responses [39,40]. On the other hand, we found that decreases in self-grooming and sitting behaviours were associated with an increase in salivary CORT levels. These behaviours are often associated with relaxation and comfort, and their decrease may indicate higher acute stress levels in the rabbits. Additionally, the GLM revealing an increase in hair CORT levels in response to escape attempts highlights the potential long-term effects of stress on the rabbits' physiology. Escape attempts are indicative of aversive or challenging situations, and the observed association with hair CORT levels may suggest that these stressful experiences have a lasting impact on the animals' stress hormone levels. By considering both behavioural and physiological indicators, we can better assess the welfare and well-being of rabbits in various environments and identify areas where improvements can be made to enhance their living conditions.
Conclusions
Our research emphasizes the importance of observing both the behaviour and the physiological stress markers of rabbits over time to understand their well-being in different housing systems. We have highlighted that the type of housing significantly affects various behaviours in rabbits. For instance, group farming fosters social bonding but can also lead to increased levels of chronic and acute stress, while rabbits in solitary cages may experience acute stress due to loneliness and confinement. These differences arise from both social and physiological changes in rabbits, which should be considered when selecting the appropriate housing system. However, it is essential to acknowledge some limitations of our study: analysing rabbit behaviour during night-time, considering the animals' nocturnal nature, could offer a more complete picture of their behavioural patterns and stress responses. Furthermore, the timing of observations plays a crucial role in understanding how housing systems influence behaviour. Our statistical analyses provide deep insights into the complex relationship between behaviour and stress physiology in rabbits, uncovering underlying stressors and adaptive coping mechanisms across different farming conditions. The relationships we have identified between aggressive behaviours, escape tendencies, and corticosterone levels present promising avenues for identifying key behavioural indicators. Armed with a deeper understanding of social dynamics and stress factors within farming systems, farmers can implement targeted interventions to enhance animal welfare and create an environment conducive to optimal health and behaviour.

Fig 2. Salivary and hair CORT levels of two autochthonous slow-growing grey rabbit populations housed in three different farming systems, by population, housing system, and age. The box and whisker plots illustrate the interquartile range, and the black lines indicate the median. The error bars extend from the box to the highest and lowest values. The diamonds indicate outliers. https://doi.org/10.1371/journal.pone.0307456.g002

Table 1. Evaluated ethogram of two autochthonous slow-growing grey rabbit populations housed in three different farming systems.
Stereotypical - Biting bars: Licking or gnawing cage bars and scratching the cage floor insistently.
Social - Attack: Offensive moves, in which the rabbit attempts to bite its opponent.
Social - Smelling others: Smelling another rabbit.
Social - Allo-grooming: Licking, scratching, or nibbling another rabbit's body.
Social - Dominance features: A rabbit that mounts, bites, or scratches another rabbit, or that sits with a tense body posture, with erect ears and tail, near another rabbit.
Social - Escape attempts: A rabbit that attempts to escape from another rabbit's presence.
https://doi.org/10.1371/journal.pone.0307456.t001

Table 4. Generalized Linear Model for salivary corticosterone and hair corticosterone levels. The dependent variable is the salivary or hair CORT, and the independent variables (predictors) are the rabbit behaviours.
Curbing Fake News: A Qualitative Study of the Readiness of Academic Librarians in Ghana
Abstract
While fake news has been a common problem for well over a century, the emergence of social media and smartphones has escalated its spread. This study adopts a qualitative approach to explore the readiness of academic librarians in curbing fake news. Data were drawn from interviews with reference library staff and head librarians who were purposively selected from 12 academic libraries, and evaluated through the lens of the International Federation of Library Associations and Institutions' [IFLA] guide on 'how to spot fake news'. The study revealed that although academic librarians were aware of fake news, they do not grasp the complexity and intricacies of the phenomenon. The study therefore recommends regular on-the-job training for academic librarians in identifying fake news. The Library and Information Science departments of universities in Ghana should review their curricula to include training and education on problematic information. There should be collaboration between libraries and social media organizations on curbing fake news. We support the call for information literacy, critical thinking and media literacy instruction to be embedded in all subjects, with academic librarians as co-instructors.

Introduction
In recent years, fake news has become increasingly widespread, affecting all spheres of life. However, the phenomenon is not new and has always been with us (Andrejevic, 2019; Tandoc, Lim & Ling, 2018). It has gained notoriety primarily because of the convergence of communication technologies (Ahiabenu, Ofosu-Peasah & Sam, 2018; Rose-Wiles, 2018), the growth of social media platforms and its online virality (Venturini, 2019). It has become so complex that the factors information professionals use to determine trustworthy information are now questionable (Westerlund, 2019). National policies and legislation to curb fake news are seen as a window for censorship and human rights abuse (United Nations Human Rights, 2017). The Ghanaian public news space has witnessed several news articles and social media posts that have been purported to be false (Jamil & Appiah-Adjei, 2019). This phenomenon has become predominant in the political landscape, especially during electioneering periods (Allcott & Gentzkow, 2017; Ireton & Posetti, 2018). Information verification and source validation have been at the forefront of librarianship since its inception (Courtney, 2018). However, the changing convergence of technology and information-generating tools has made tracking and compliance very problematic. The concern of academic librarians has been how to help faculty and students decipher what is accurate from what is factually inaccurate. This is becoming increasingly difficult to control because of how easy it is to "create and disseminate inaccurate and misleading information" (Fallis, 2015, p. 402) as a result of the advancement of technology (Ngwainmbi, 2019; Westerlund, 2019). Loertscher (2017) laments that even search engines have been programmed to study each user's information preferences, so they feed users what they like to receive. While scholarly research on fake news is abundant (Tandoc, Lim & Ling, 2018), especially since the Arab uprising, the 2016 U.S. election and the recent COVID-19 pandemic, scholarly literature on what academic librarians in Ghana are doing to help minimize the effect of fake news is sparse.
Thus, this paper examines the readiness of academic librarians in Ghana to help minimize fake news using the IFLA guide on "how to spot fake news". We are convinced that academic librarians play a vital and mediating role in raising awareness of fake/bad news and should be aware of strategies and frameworks that can be deployed to support faculty and students' information needs.

Literature review
What is fake news?
Fake news is the same as misinformation (false and misleading information) and disinformation (false information intended to deceive) (Lazer et al., 2018). Fake news is also referred to as "fabricated information that mimics news media content in form but not in organizational process or intent" (Lazer et al., 2018, p. 1094). Fake news has elements of fiction and deception (Ngwainmbi, 2019). Its purpose is either to mislead, to damage an individual or an entity, to entertain, to gain readership, or to secure political or financial benefits (Ngwainmbi, 2019). By contrast, real news is an accurate account of a factual event. Some scholars are moving away from the term 'fake news' because of the notion that it does not adequately represent the spectrum of mis/disinformation or problematic information (Freelon & Wells, 2020; Habgood-Coote, 2019). The term fake news has often been used interchangeably with other concepts, namely misinformation, misrepresentation, false news, and problematic or bad news. Table 1 outlines a spectrum of different terms often used synonymously in place of fake news.

Table 1. Terms often used synonymously with fake news (Types; Meaning; Articles).
Misinformation: An honest mistake in spreading false or inaccurate information (Kumar & Geethakumari, 2014).
Misrepresentation: A statement made with conscious ignorance or a reckless disregard for the truth that can create liability (Young, 2021).
Disinformation and hoaxes: False information intended to mislead, such as propaganda (Kumar & Geethakumari, 2014; Young, 2021).
Deepfakes: Digitally manipulated videos that depict someone saying or doing something that in reality is not true (Westerlund, 2019).
Yellow journalism: An old term for fake news from the 1890s (Campbell, 2019).
Mal-information: Right information used in the wrong context to incite hatred against a particular group (Wardle & Derakhshan, 2017).
False connection: Where headlines, visuals or captions do not support the content.
False context: Genuine content shared with false contextual information.
Manipulated content: Genuine imagery/information manipulated to deceive.
Misleading content: Misleading use of information to frame an issue or individual.
Imposter content: Genuine sources that are impersonated.

Instances of fake news and the dangers
Fake news has become dangerous for human society and every country's democracy (Borges et al., 2019; Qayyum et al., 2019). For example, the recent COVID-19 pandemic has become the latest target for fake news on most social media outlets. In the view of Neto et al. (2020), the COVID-19 virus comes together with misinformation, causing harm to people. Several videos, audio recordings and texts about the virus have circulated on diverse social and traditional media. Some of these stories, as reported in the literature, include: the claim that the virus came from a failed laboratory experiment (Azim et al., 2020); the unwillingness of some people to take the vaccine based on false claims about its efficacy and safety, which ultimately led to anti-vaccine rallies and protests in countries like the United States and certain regions of Europe (Carrion-Alvarez & Tijerina-Salina, 2020); the shortages of medicines such as hydroxychloroquine, and of medical face masks, as a consequence of fake news on their usage, ranging from the well-known "Big Pharma" narratives (Neto et al., 2020); the nonexistence of the virus; claims of microchips in vaccines aimed at stealing personal information; and the implementation of 5G to reduce the population of some countries (Islam et al., 2020). Carrion-Alvarez and Tijerina-Salina (2020) aptly describe this phenomenon as "destructive beliefs" during the pandemic that have continued, if not increased.
Westerlund (2019), in an article on deepfakes, revealed that during the mass shooting in Christchurch, New Zealand, a circulating video depicted a suspect being shot dead by police, which was later discovered to be a different incident in the U.S.; the suspect in the Christchurch shooting was not killed. Westerlund (2019) warns that deepfakes can threaten national security and even cause wars. He reveals that such videos can be used to depict a politician taking a bribe, confessing to a crime or admitting a secret plan to carry out a crime. For example, in Malaysia, a deepfake in which a man admitted to having sex with a local cabinet minister caused political controversy. Stories like these can cause mistrust toward even genuine information provided by authorities, since they make people regard everything as a deception (Westerlund, 2019). One of the factors in determining trustworthy information is expert or authoritative sourcing; it is indeed a great danger if this too can be manipulated for malicious intent. The danger of misinformation is that the probability of people accepting whatever is consistent with their preexisting beliefs is very high and fast, whereas the likelihood of correcting that misinformation is very low and slow (Kumar & Geethakumari, 2014). Ngwainmbi (2019) suggests that people tend to trust negative rather than positive information. Accordingly, fake news is woven around these assumptions.

Why fake news?
Several reasons account for the phenomenon of fake news. There are many unscrupulous individuals who profit from disseminating false news. For instance, teenagers in North Macedonia earned thousands of dollars by getting many clicks through fake news they shared on Facebook about U.S. President Donald Trump during the 2016 election (Kirby, 2016). Ahiabenu, Ofosu-Peasah and Sam (2018) also suggest that the growing appetite for fast news and short news cycles accounts for an increase in the fake news phenomenon. Fake news has no restrictions and cuts across all spheres of discourse. One of the most famous and oldest playgrounds for spreading fake news has been, and still is, the political arena (Allcott & Gentzkow, 2017). During elections, propaganda takes a foothold where parties try to outdo each other to gain votes, even if it means misinforming or disinforming the electorate (Ireton & Posetti, 2018).
The phenomenon has become endemic and an increasing threat to the sustainability of democracy and human rights. The scourge of fake news in the African context is phenomenal and even more dangerous in the political arena (Wasserman & Madrid-Morales, 2018). Ngwainmbi (2019) suggests that the phenomenon is also prevalent in academia; for instance, text interpretation could be made to support one's bias. Some studies have also suggested that some researchers gather data to support their own beliefs and ideologies, thereby undermining the validity of their findings (Chiou & Tucker, 2018). It is difficult to stop the spread of fake news because, as Chopra et al. (2019) indicate, people have a demand for more biased news, and this demand is influenced by a craving to confirm preexisting beliefs; or, as Golman et al. (2017) suggest, people might rationally choose to avoid legitimate information as a means to maintain optimism. This is consistent with Young's (2021) argument that interventions aimed at curbing the menace ought to account for the psychological and emotional processes triggered by misinformation. However, the growing concern with fake news is how it affects those who genuinely do not know that what they are trusting is fake. Also, it is becoming extremely difficult to distinguish what is real and what is fake due to advances in technology, and the ease and speed with which items can be received and reposted is very worrying (Citron & Chesney, 2019; Rose-Wiles, 2018; Westerlund, 2019).

Call to action
Some individuals and organizations have called on authorities to curb the growing menace of fake news (Ireton & Posetti, 2018; Talwar et al., 2020). However, others have cautioned that, in the light of free speech, which is guaranteed under the constitutions of most countries, care must be exercised in order not to infringe on the rights of citizens (Levinson, 2017; United Nations Human Rights, 2017). Irrespective of the right to freedom of expression, measures must be instituted to curb this escalating threat of fake news.

The role of librarians in helping patrons recognize fake news
The role of librarians in managing the fake news menace is long standing. As far back as 1989, it became necessary for the American Library Association (ALA) presidential committee on information literacy to bring this issue to prominence (ALA, 1989). The report stressed the significance of the librarian's role in supporting patrons to select and evaluate information resources, and why creating an information-literate society was necessary. The report further highlighted the looming danger of biases and "expert opinions" for citizenship in a modern democracy. It concluded that information literacy is a survival skill needed to detect misinformation/disinformation spread for political and monetary gains, and is therefore essential to guarantee the survival of democratic institutions. With the growing concerns about misinformation/disinformation, librarians and information professionals are called on to manage the crisis (Jacobson, 2017; Rose-Wiles, 2018). However, as Ecker (2015) suggests, the issue is not only about recognizing which information is misinformation/disinformation but about how to change what misinformation/disinformation does to people's minds. Berry (2016) opines that the current misinformation situation is the most difficult challenge in library history.
Walsh (2010, p. 508) cautions that "we [librarians] teach our users how to acquire knowledge and not to decide what knowledge they should acquire". Thus, one wonders what role academic librarians can play to curb fake news (Dollinger, 2017). Sullivan (2019) reveals that in the United States, many workshops and conferences have been organized by library organizations to curb the menace. Some of these include "Libraries in a Post-Truth World" (Phillips Academy, Andover, MA) and "Developing a Metadata Community Response in the Post-Truth Information Age" (DCMI), along with webinars on "Post-Truth: Fake News and a New Era of Information Literacy" (ALA), "Don't Get Faked Out by the News" (AASL), and "Confronting Misinformation: How Librarians Can Assist Patrons in the Digital Information Age" (FDLP). With issues arising from the 2016 U.S. elections, some information schools in the U.S. established centers to help control fake news in the 2020 elections. For instance, the University of Washington Information School established the Center for an Informed Public (CIP) in 2019 to resist strategic misinformation and to promote an informed public. Since its establishment, it has collaborated with public libraries in Seattle to start the community labs in public libraries project. This project seeks to use public libraries as community halls where community members can gather to discuss fake news and learn to identify it. In 2020, the CIP organized lectures and radio sessions on misinformation. In the view of Sullivan (2019), the librarians' urge to join the campaign against fake news is not just a duty but an opportunity to prove their primary functions. However, on reflection, the librarians' role in curbing the fake news menace can seem like a mirage due to their lack of control or authority over the Internet (Sullivan, 2019). Also, the indicators of information reliability are difficult to recognize given unscrupulous persons' ability to mimic reputable and authoritative sources (Allcott & Gentzkow, 2017). Nonetheless, Sullivan (2019) reveals that information literacy is the proposed way to curb it. Ireland (2018) adds that when librarians teach information literacy in terms of the reliability of a publisher or an author, emphasis must also be placed on the authenticity of the sources of information that the author relied on. Batchelor (2017) also recommends teaching critical-thinking skills during information literacy sessions. For Rose-Wiles et al. (2017), information literacy goals should be embedded in every course, with librarians as co-instructors who will aid students to consciously practice information literacy concepts throughout their studies.

Conceptual framework
This study examines the readiness of academic librarians in Ghana to curb fake news using the IFLA guide on 'how to spot fake news'. Discussion about fake news has led to a new focus on media literacy skills and the role of libraries in providing these skills. According to IFLA (2017), a call to action was made for librarians to educate and advocate for critical thinking, which is a crucial skill when navigating the information society. With this in mind, IFLA created a simple eight (8) step infographic guide based on FactCheck.org's 2016 guide for understanding and identifying "fake news". The essence of this infographic by IFLA was to enable people to discover a given news piece's verifiability. In the view of IFLA (2017, p. 4), "the more we crowdsource our wisdom, the wiser the world becomes".
The first step of the infographic checklist stipulates that a person in search of information needs to consider the source. This sometimes requires clicking away from the story to investigate the website's mission, contact details and other valuable information. The second step postulates that consumers of information read beyond the headlines. The infographic suggests headlines can be outrageous and misleading in an effort to get clicks, likes and readership. As such, it is usually prudent to read the entire story. The third step of the infographic is to check for the author. In so doing, the consumer of the information must conduct a quick background check on the author to ascertain their genuineness and credibility. The fourth step requires that the user of the information searches for corroborating sources by clicking on the supporting links to determine whether the information provided actually supports the story. The fifth step is to check the date to ascertain the currency of the information. In the view of IFLA, reposting old news stories does not mean they are relevant to current events. The sixth step is to confirm that it is not a joke. IFLA admonishes that it might be satire if the story is too outlandish. It is, therefore, imperative to conduct a thorough search on the site and author to be certain. The seventh step admonishes consumers of information to check their biases, as these may make them fall for fake news. In so doing, an information user may need to consider their own beliefs, which affect their judgment. The eighth step advises that an individual can ask an expert such as a librarian or consult a fact-checking site when in doubt. Table 2 presents a summary of the guide. Our interest in using this infographic guide lies in the eighth step, which recommends that when the consumer of information doubts some information, they consult an expert such as a librarian. However, the ability of a librarian to assist patrons in verifying information largely depends on their knowledge and understanding of the seven other steps as outlined in the infographic guide. Using the IFLA infographic guide as a tool in this study, we seek to explore academic librarians' readiness to curb fake news. We believe that when librarians are armed with knowledge on identifying "fake news", they will be better placed to assist patrons in verifying the information they consume.

Table 2. Summary of IFLA's guide on 'how to spot fake news' (columns: Step, Action, Activities).

Methods

This section of the paper outlines the approach used in conducting the study. The research findings are based on interviews with staff of academic libraries in Ghana on their knowledge of fake news, how to identify it and how they can assist patrons in recognizing fake news. This research was exploratory and adopted a qualitative approach to explore the readiness of academic librarians in curbing fake news. According to Carlin (2008), qualitative research strives for a subjective understanding of a phenomenon from the point of view of the actor(s) directly involved. Qualitative methods were chosen to elicit depth and complexity, rather than generalizability. The study focuses on academic libraries in the Upper East Region of Ghana. The decision to recruit all the academic libraries in the region was to get a holistic picture of the topic under investigation and because the number was adequate for the study's purposes. We also believe that findings from these institutions would give an empirical view of the understanding of fake news and the readiness of librarians in other institutions in Ghana to curb its spread.
At the time of the study, there were 12 fully accredited tertiary institutions across the Upper East Region of Ghana, comprising two teacher training colleges, six health training institutions, one technical university, two private universities and one public university. Each of the 12 institutions has an academic library managed by one or more librarians. In this study, reference library staff and head librarians, comprising thirty-two (32) respondents across all the institutions, were the target population. Our decision to use the entire population was justified because the target population under study was too small to sample from. Given that this was a qualitative study, purposive sampling was adopted in recruiting all reference library staff and head librarians from the various academic libraries. Creswell (2016) suggests that purposive sampling is appropriate for collecting detailed participants' views when the informants are very few and the information is hardly quantifiable. Head librarians facilitated the recruitment of library staff in the reference section to be interviewed. Except for four (4) persons, all those interviewed had either a first degree or a Master's degree in Information Studies. The four (4) persons who did not have a certificate in Information or Library Studies had received some form of in-service training from the Ghana Library Authority (GhLA). Besides, they had worked for more than five years in their respective institutional libraries, bringing their experience to bear on the subject matter. Of all those interviewed, only one (1) person had less than one (1) year of working experience in the library. The rest had between three (3) and sixteen (16) years of working experience. The reference library staff of the various academic institutions were particularly targeted for this study because they play a pivotal role in guiding students and faculty at the reference desk. As the first point of call in the library, reference librarians apply critical-thinking skills, emotional intelligence, teaching ability and question analysis to connect patrons with appropriate information resources. Due to the COVID-19 pandemic, most interviews were conducted via telephone. Head librarians were contacted via phone or e-mail, and permission was sought from them before the interviews were conducted with reference staff. Where a head librarian consented to their institutional library participating in the study, they were called and asked to provide the contacts of reference library staff who agreed to participate in the study. Some e-mail contacts were also retrieved from the Ghana Library Association (GLA) mailing list, while phone numbers were obtained from known colleagues. The nature of the study was comprehensively explained to prospective respondents, and their consent to participate in the study was sought. Qualitative data obtained through in-depth interviews were analyzed using an interpretative approach. Recorded interviews were transcribed, and the transcripts were read and re-read. They were then uploaded to NVivo software (version 20) and analyzed thematically. Data analysis followed a deductive process that involved scrutinizing data collected through individual interviews in search of common meanings and patterns regarding the phenomenon under study. It began with the coding of data, sorting different codes into potential themes and collating all the relevant coded data extracts within the identified themes (Nowell et al., 2017).
The use of multiple researchers helped to enhance the trustworthiness of the findings. The emerging themes formed the basis of our findings and discussions. All the data gathered were analyzed to identify the emerging approaches that academic libraries employ to assist patrons in recognizing fake news. These themes were compared with IFLA's guide on 'how to spot fake news'.

Findings

This section presents the findings from interviews conducted with participants (see the Appendix for the interview guide). They are presented under three broad themes which capture the views of the library staff who participated in the study. Quotes from participants are used to illustrate the emerging themes. For anonymity and confidentiality purposes, the institutions from which the interviewees were drawn and the interviewees' names are not mentioned. The findings are presented under the following major themes:
Academic librarians' knowledge of fake news
Academic librarians' knowledge of how to identify fake news
The role of academic librarians in managing fake news

Academic librarians' knowledge of fake news

Academic librarians are supposed to be knowledgeable about issues of fake news since they have a responsibility to evaluate information resources for academic library collections and to guide patrons in evaluating the information they consume (IFLA, 2017). Extant literature has sought to espouse the role of librarians in curbing the growing menace of fake news (Batchelor, 2017; IFLA, 2017; Jacobson, 2017; Rose-Wiles, 2018; Sullivan, 2019; Young et al., 2021). Given this, we sought to find out academic librarians' views and general understanding of the fake news phenomenon, as well as their subjective experiences as reference librarians when dealing with patrons regarding fake news. We also sought to find out the types of fake news they were familiar with. Responses to these questions revealed that librarians across all the institutions studied had some understanding of fake news, although their views and perceptions varied in expression while remaining similar in substance. For instance, responding to the question "What is your understanding of fake news?", a participant intimated:

I understand fake news to mean spreading wrong information to achieve a particular aim or to saturate the media space/information market to ensure that people do not get to know the right information …

Other respondents shared similar views. For instance, one said:

Basically, fake news is about news that is put out in the public domain to mislead people. Usually, there is no evidence to back it up. Just news to deceive people.

And another added that:

Fake news is a piece of information that is meant to mislead the public. This type of information usually suits the vested interests of a specific group of people.

The views of other participants were not far from the above responses. These responses suggest that librarians had some understanding of fake news. Their conception of fake news is synonymous with disinformation, which is the deliberate spread of false information for an intended purpose. To get a deeper understanding of respondents' knowledge of fake news, we sought to find out if they knew the types, motivations or complexity of fake news that exist. This was imperative because knowledge of the types, motivations and complexity of fake news puts the librarian in a better position to educate patrons on how to identify it effectively.
From our findings, twenty-six (26) participants, the majority, did not know of any types or motivations of fake news other than the pursuit of political power or, in the case of musicians, the desire for fame. In answering this question, most participants recurrently cited social media as a popular source of fake news. Only three (3) respondents identified clickbait as a type of fake news. From these revelations, we believe that although librarians are aware of fake news, most are not very knowledgeable about the scope, intricacies and complexity of the fake news phenomenon. Our findings support the views of Lim (2020), who contends that librarians must further clarify the term fake news so that it reflects its multiple layers and complexity.

Knowledge on how to identify fake news

Librarians are responsible for managing a vast amount of information at their disposal. Discussions about fake news and the ubiquity of information technology have led to a new focus on information literacy, media literacy and critical thinking (Batchelor, 2017), and on the role of libraries in providing training in these fields. Stemming from the above, librarians, as the experts referred to in the IFLA (2017) guide on 'how to spot fake news', must have the requisite knowledge, expertise and skill in identifying fake news. This is to enable them to render support to their patrons. To this end, participants were asked a series of questions on how to identify fake news, with the IFLA guide used as a toolkit. We began by finding out from respondents whether they knew of or had a list of any known fake news sites or fact-checking sites. The idea here was to know whether reference librarians could identify fake news by knowing which sites are purveyors of fake news. We were guided by the knowledge that fake news can appear on sites that do not appear on any list of known fake news sources, and that some news sources may produce reliable as well as unreliable news stories. Strangely, participants' responses revealed that none of them were aware of any fake news sites or resources for identifying fake news. We probed further to determine how, as librarians, they evaluate news stories and sources for patrons. The IFLA guidelines suggest that an information user must consider the source of information. In so doing, one must click away from the story to investigate the site, its mission, and its contact information. Other information evaluation mechanisms and criteria include, but are not limited to, the following: the C.R.A.A.P. Test (Currency, Relevance, Authority, Accuracy, Purpose); the P.R.O.V.E.N. Test (Purpose, Relevance, Objectivity, Verifiability, Expertise, Newness); the R.A.D.A.R. Test (Rationale, Authority, Date, Accuracy, Relevance); the A.B.C.D. Test (Author, Bias, Content, Date); and the 5W's + 1H Test (Who, What, When, Where, Why and How). However, responses from interviewees revealed that while all participants were aware of criteria such as determining the source and the credibility of the author, none were aware of the evaluative guides or criteria stated above for identifying fake news, including the most common guide developed by IFLA. When the various checklists and evaluative criteria were mentioned to them, only six (6) out of the 32 participants confirmed that they had heard of the IFLA guide on identifying fake news; however, they could not describe its content. One respondent had this to say:

For online information, I usually identify fake news using the domain name of the website.
For instance, websites with the domain name ".edu" are educational sites and so are more credible than websites that end with ".com", which in my view are mostly commercial websites.

Another participant also pointed out:

I confirm whether other sources are reporting the same thing, in which case I can arrive at a conclusion as to whether it is fake or not.

A recurring view from twenty (20) respondents was that the print media was a more authentic source. In the words of one participant:

Most fake news comes from online sources because they can easily be deleted and manipulated.

Lastly, we sought to find out whether these institutional libraries had developed any standardized toolkits to help patrons identify fake news. This question revealed that none of the institutions studied had such an in-house toolkit to assist patrons. In most respondents' views (28 out of the 32 participants), students barely even visited the library to authenticate or verify news sources. For instance, one respondent had this to say:

This is something we have never even contemplated as a department. Possibly because we do not even have students coming into the library to verify news sources.

Another respondent intimated that:

The library per se does not have one, but where there is the need I use my discretion to help such patrons decipher if a particular news item is fake or not.

It also emerged from our interviews with the librarians that none of the institutions had software tools, such as browser plug-ins, that alert users to unreliable news sources. This is probably because the librarians interviewed did not know that such tools exist to help identify fake news.

Role of academic librarians in managing fake news

As trained experts in evaluating information, academic librarians are positioned to lead the campaign against fake news (Batchelor, 2017; IFLA, 2017; Jacobson, 2017; Rose-Wiles, 2018; Sullivan, 2019; Young et al., 2021). Accordingly, this paper was interested in finding out from participants the role academic librarians can play in curbing fake news at a time when such skills are urgently required. Specifically, we asked questions on how academic librarians could assist patrons in identifying fake news, the topics that should be of priority, and the current capacity of librarians and their libraries in terms of skills, resources and services. Table 3 captures the views of 5 respondents regarding strategies that academic librarians could adopt in curbing the fake news menace. These responses vividly capture the recurring suggestions proposed by all the participants of the study. Despite these submissions on the role librarians can play in curbing fake news, further enquiry on whether they organized any information literacy sessions for members of faculty and students revealed that only two (2) out of the 12 academic libraries studied carried out information literacy instruction as a means of educating their patrons on how to do scholarly research, and even then only for one semester and for first-year students alone. From a broader perspective, the lack of information literacy sessions in 10 out of the 12 libraries is a major source of concern. The concern is not just about fake news discovery and prevention but the wider scope of introducing patrons to the discovery and consumption of wholesome information. As a fundamental step, the library should, through periodic information literacy sessions, introduce students to library resources, services, and material organization.
The changing landscape of information delivery has made it even more compelling to integrate information literacy into higher education. The lack of information literacy instruction in these 10 institutions is, therefore, problematic. The integration of information literacy instruction into the formal curriculum has become an accepted practice in higher education (Tang, 2018; Torrell, 2020; White, 2021). If the fight against fake news is to succeed, then the investigated institutions must, of necessity, redesign their curricula to integrate information literacy instruction as a base course in all programmes. This call must be spearheaded by the library, in collaboration with faculty, by developing new approaches and learning resources to improve the information literacy competencies of students. As suggested by Franklin et al. (2021), teaching librarians and faculty must collaborate to develop instruction aimed at integrating information literacy into undergraduate teaching to improve the competencies of students.

Table 3. Some responses from participants on the role of librarians in curbing fake news:
"Library professionals should sit together to brainstorm on measures to adopt in fighting fake news."
"Library professionals should be proactive in dealing with patrons since most of them do not often visit the library to verify news. One way is by circulating tips on ways of identifying fake news via students' social media platforms like WhatsApp. The school's library website could also be used as a conduit for awareness creation, with librarians made to lead the charge."
"Patrons should be educated on how to evaluate information before spreading it through social media. The library should organize information literacy training programmes and workshops."
"A topic on fake news should be an essential part of the Information Literacy curriculum, with librarians made to teach students how to identify fake news. There must be collaboration among libraries and other institutions, like social media organizations, on the software and skills available for identifying fakes."

Discussion and conclusion

Research has shown that the phenomenon of fake news is a global crisis and of concern to many countries and institutions (Batchelor, 2017). Ghana has had its fair share of the devastating effects of fake news, with many myths and untruths propagated by a myriad of traditional and social media outlets, especially about the COVID-19 pandemic and during elections. This tends to make the impact of fake news deadlier and more widespread than the COVID-19 pandemic itself. It calls for a concerted effort by all relevant stakeholders, including those in the information profession, especially librarians, since they play a pivotal role in evaluating print and/or digital information. In an era where digital information abounds, many unresolved organizational, managerial and technical issues make the fight against fake news a daunting task for librarians globally. With the exponential increase in information (Barclay, 2017), careful consideration must be given to the issue of fake news within academic institutions since they serve as citadels for acquiring knowledge. Fake news is even more dangerous for academic institutions since it can compromise and reduce the integrity of scholarly work if care is not taken. Academics must, therefore, be taught the requisite skills and knowledge for evaluating information for teaching, learning, research and other scholarly work.
Against this backdrop, the present study sought to explore the readiness of academic librarians to curb fake news from the perspective of librarians in selected Ghanaian academic libraries, with the ultimate aim of identifying some of the prospects and challenges. The study revealed that although academic librarians are aware of fake news, they do not grasp the phenomenon's complexity. Some were also unaware of the evaluation mechanisms and criteria available for identifying fake news, as outlined in the IFLA guide on 'how to spot fake news'. Similarly, the study revealed that some librarians who participated were unaware of sites that are purveyors of fake news and of resources available for identifying fake news. This dearth of knowledge may be attributed to the fact that librarians have not yet risen to the challenge of curbing this menace. De Paor and Heravi (2020) argue that librarians have experienced some difficulty in advocating for information literacy instruction to be introduced in some institutions. Our findings confirm this assertion of De Paor and Heravi (2020), as it was revealed that most of the academic libraries studied do not provide curricular avenues for librarians to teach media and information literacy skills. It also emerged that there were no standardized evaluation toolkits across the various institutions to support faculty and students in identifying fake news. Despite the challenges identified in the study, it was encouraging to see the recommendations made by respondents regarding the role academic librarians can play in curbing fake news. Thus, it is our view that tremendous effort is required to impress upon academic librarians the need to be proactive in helping faculty and students identify fake news. This is, however, dependent on acquiring the requisite tools, knowledge and skills. Librarians must be creative in how they reimagine the types of literacies necessary to combat fake news. In view of the above, the following recommendations and strategies are proffered to better position academic librarians in Ghana to curb the menace of fake news. The study recommends that academic librarians be given adequate and regular on-the-job training on fake news. Managers of academic libraries should create opportunities for all library staff, especially reference librarians, to attend workshops and seminars on such themes. As a first step, librarians could be taken through a series of video tutorials that will enable them to identify fake news. In the same vein, the Library and Information Studies departments of universities in Ghana should review their curricula to include training and education on fake news. This could be teased out as a sub-topic in the information sources, literacy and retrieval curriculum. De Paor and Heravi (2020) and Rose-Wiles et al. (2017) assert that information literacy instruction, media literacy, and critical thinking should be regarded as essential functions addressed by librarians. This suggests that academic libraries have to intensify their information literacy lessons and look at creative and practical teaching methods. Therefore, the study recommends that librarians collaborate with faculty and their ICT departments to develop media and information literacy modules with well-established goals and objectives. These modules should set out criteria for instruction and evaluation by academic librarians.
The modules should have two interrelated goals and learning outcomes. The first is to equip students with the skills to become literate and responsible consumers of information. The second is for them to become responsible sharers of information, so as to minimize the spread of misinformation.

Implications of the study

Curbing fake news requires the efforts of all. However, academic librarians have a frontline duty in assisting faculty and students to identify fake news. With the wealth of information resources on the internet, the role of librarians keeps evolving. Library professionals who remain stuck in the traditional roles of recording, organizing, storing, preserving, retrieving, and disseminating information resources for patrons may soon be without jobs. The idea is about adding value to the profession and taking up new roles as watchdogs in this era of information explosion. The information market is very competitive, and this calls for aggressive education from information professionals regarding issues of concern such as fake news. Thus, the findings from this research should prompt academic librarians to conduct rigorous studies on the subject of fake news and to educate their patrons. The paper further underscores the critical role of librarians as experts in curbing the fake news menace. It contributes to the scholarly literature on the role of academic librarians in the campaign against fake news within the Ghanaian landscape and highlights the prospects and challenges in curbing this menace. Though our research concentrated on one region of Ghana, the findings and recommendations should be of great value to academic libraries in other parts of Ghana and to the larger research community.

Limitations and suggestions for further studies

This article's main limitation is inherent in its coverage: the study covered academic libraries in one region. Thus, we recommend that future studies consider a comparative study of academic libraries across the country to allow for an acceptable level of generalization of the findings. Also, this study adopted a qualitative approach to the collection and analysis of data. Future studies could adopt a sequential explanatory mixed-methods approach to get an in-depth view of the awareness and effect of fake news on academic work in tertiary institutions. Since students and faculty are directly affected by the consumption of fake news, they should be included in such studies to seek their views on the subject matter.
CXC-receptor 2 promotes extracellular matrix production and attenuates migration in peripapillary human scleral fibroblasts under mechanical strain

Abstract
As the main load-bearing tissue of the eye, the sclera plays an important role in the pathophysiology of glaucoma. Intraocular pressure (IOP) generates mechanical strain on the sclera. Recent studies have demonstrated that the sclera, especially the peripapillary sclera, undergoes complicated remodelling under mechanical strain. However, the mechanisms of hypertensive scleral remodelling in human eyes remain uncertain. In this study, peripapillary human scleral fibroblasts (ppHSFs) were subjected to cyclic mechanical strain using the Flexcell-5000™ tension system. We found that CXC-ligands and CXCR2 were differentially expressed after strain. Increased cell proliferation and inhibited cell motility were observed when CXCR2 was upregulated under the strain, whereas cell proliferation and motility did not change significantly when CXCR2 was knocked down. CXCR2 facilitated cell proliferation and modulated the mRNA and protein expression of type I collagen and matrix metalloproteinase 2 (MMP2) via the JAK1/2-STAT3 signalling pathway. In addition, CXCR2 might inhibit cell migration via the FAK/MLC2 pathway. Taken together, CXCR2 regulated protein production and affected the cell behaviours of ppHSFs. It might be a potential therapeutic target for hypertensive scleral remodelling.

| INTRODUCTION
Intraocular pressure (IOP) is the primary risk factor in the pathophysiology of glaucoma, affecting not only the apoptosis of retinal ganglion cells (RGCs), but also the biomechanical behaviour of the optic nerve head (ONH). The lamina cribrosa and peripapillary sclera (PPS) mainly constitute the ONH complex, which may have a great influence on the development and progression of glaucoma. [1][2][3][4] Ocular-hypertension-induced alterations in the lamina cribrosa have been extensively documented. [5][6][7] Recent findings have generated increased interest in the biomechanical properties of the hypertensive sclera and in exploring its relationship with glaucoma. [8][9][10] The sclera, especially the PPS, may play an even more important role than the lamina cribrosa in the pathophysiology of glaucoma. 4,8,9 The sclera undergoes dynamic alterations with the fluctuation of IOP, including increased scleral stiffness 11,12 and complex compositional changes, [13][14][15] which together shape how it responds to IOP. Some scholars have proposed that sclera-based therapy might become a promising approach for RGC preservation by intervening in scleral remodelling. However, studies of the remodelling of the hypertensive sclera, especially in human beings, are far from thorough. The extracellular matrix (ECM) is essential for the sclera to sustain the tension generated by IOP. Elevated IOP may increase fibrous components (such as collagens) and decrease non-fibrous components in mouse sclera. 15 We also confirmed increased expression of type I collagen in a chronic ocular hypertension model in rats in our preliminary work. 14 Hence, we may speculate that hypertensive scleral alterations probably indicate ECM synthesis rather than degradation, at least in the early stage. The responses of the cells residing in the sclera are the key. Studies of scleral remodelling in human beings have been limited to postmortem histology and biomechanical tests.
Therefore, mechanistic insight into the driving forces of scleral ECM remodelling demands detailed investigation of human scleral fibroblasts (HSFs), especially peripapillary HSFs (ppHSFs), under mechanical strain in vitro. To investigate the ECM remodelling of the hypertensive PPS, it is necessary to establish a proper strain model for ppHSFs in vitro that can mimic ocular hypertension in vivo. The Flexcell-5000™ tension system has been described previously. [16][17][18][19][20] It is a computer-controlled vacuum unit conventionally used to apply mechanical forces to different kinds of cells. [18][19][20] Under a proper strain, the mRNA and protein expression of the ppHSFs should coincide with the scleral ECM alterations detected in vivo. Mechanical strain can also alter cell behaviours. Therefore, we aimed to (1) explore a proper biaxial mechanical strain for ppHSFs in vitro; (2) screen for mechanically stimulated genes of ppHSFs under the strain; and (3) investigate the effect of CXCR2 on cell behaviours and ECM production under the strain. In the current study, we employed a precise measurement of the entire transcriptome by high-throughput sequencing. Our results showed that the CXC-motif chemokine ligands and receptors were among the most changed genes, and CXCR2 was validated as the most differentially expressed CXC-receptor. The CXC-ligand/receptor axis has been implicated in proinflammatory processes and exerts a crucial influence on cell proliferation, migration and angiogenesis. [21][22][23] In our study, we further explored the potential role of CXCR2 in ECM remodelling and cell behaviours under mechanical strain, including cell proliferation, apoptosis and migration.

| MATERIALS AND METHODS
The study was approved by the Institutional Review Board and Ethics Committee of the Eye and Ear, Nose, Throat Hospital of Fudan University. Consent for research use was obtained from human donors or their family members before conducting the experiments.

| Cell cultivation and treatment
Cell culture and identification have been described previously. 16 In brief, ppHSFs were isolated from the PPS of postmortem human eyes (a 2 mm scleral band from the ONH 17) according to a collagenase digestion protocol (Serva). The primary ppHSFs were grown in Dulbecco's modified Eagle's medium (DMEM) containing 20% fetal bovine serum (FBS) and 1% penicillin-streptomycin (Hyclone), in an atmosphere of 5% CO2/95% air in a humidified incubator. Cells after passage 1 were grown in medium containing 15% FBS. The cells used for experiments were between passages 4 and 8. In selected experiments with inhibitors, ppHSFs were incubated with the JAK1/2 inhibitor Ruxolitinib (20 μM) for 4 h or the FAK inhibitor PF-573228 (10 μM) for 24 h (both Selleck) before subsequent processing.

| Mechanical strain
To explore a strain parameter that represented the in vivo elevated IOP, we applied multiple mechanical strains using the Flexcell FX-5000™ tension system (Flexcell International Corporation). The ppHSFs were seeded into six-well collagen-I-coated Bioflex plates (Flexcell International Corporation) at 2.5 × 10⁵ cells per well. After the cells had attached to the plates during 24 h of culture in conventional serum-containing DMEM, the medium was changed to serum-free DMEM for 24 h and then to DMEM containing 1% FBS before the biaxial strain. In the present study, we applied strains of 0, 5%, 10% and 20% at 0.5 Hz for 8 and 24 h, respectively.
| Reverse transcription-quantitative polymerase chain reaction (RT-qPCR)
Samples were dissolved in TRIzol reagent (Invitrogen) to extract total RNA. cDNA was synthesized by reverse transcribing 1 μg of total RNA using the PrimeScript RT reagent kit (Takara). RT-qPCR was performed on an ABI ViiA7 Real-Time PCR system (Thermo Lifetech) using SYBR Premix Ex Taq™ (Takara) according to the manufacturer's protocol. The PCR parameters were as follows: 95°C for 30 s, then 50 cycles of 95°C for 5 s and 60°C for 30 s. The forward and reverse primers are described in Table 1. Data were analysed using the comparative Ct (2^−ΔΔCt) method, and relative mRNA expression was normalized to GAPDH.

| RNA sequencing (RNA-seq) and bioinformatics analysis
Three pairs of cell samples were collected from the mechanically stimulated ppHSFs (10% 0.5 Hz strain for 8 h) and the unstimulated normal control. Total RNA was extracted with TRIzol reagent (Invitrogen). After quality control of the samples, RNA-seq was carried out by the Beijing Genomics Institute (BGI) using the BGISEQ-500 platform.

| Lentivirus transfection
The lentiviral vector containing a short-hairpin RNA (shRNA) against human CXCR2 and the scramble lentiviral vector were purchased from Genomeditech Co. Ltd. After being seeded in six-well plates for 24 h, the ppHSFs were transfected with the lentiviral vectors for 24 h. The medium was then replenished with DMEM containing 15% FBS and puromycin (4 μg/ml; Genomeditech Co., Ltd.). After 48 h of growth, the cells were resuspended and seeded into plates for 24 h of culture before further experiments. After cell harvest, the mRNA and protein expression of the transfected cells was confirmed by RT-qPCR and Western blot, respectively.

| EdU assay
Cell proliferation was assessed with an EdU imaging kit (… Technologies). Briefly, the ppHSFs were resuspended and seeded into 96-well plates. After overnight cell attachment, 10 μM EdU was added to each well and incubated for 4 h, followed by cell fixation and permeabilization. The reaction cocktail was then incubated for 30 min to detect positive cells. Images were taken by fluorescence microscopy (Nikon Eclipse Ti-S).

| Cell cycle by flow cytometry
The cell cycle experiments by flow cytometry were conducted following the manufacturer's protocol. In brief, the resuspended cells were stained and analysed according to the kit instructions, and data were analysed using FlowJo software (BD).

| Enzyme-linked immunosorbent assay (ELISA)
The secreted level of type I collagen was detected by ELISA. Briefly, cell culture medium from the different groups was collected immediately, and the secreted level of type I collagen was quantitatively determined following the instructions of the ELISA kit (Abcam).

| Zymography
The activity of MMP2 was detected with an MMP Zymography Assay Kit (Xinfanbio, Shanghai, China). According to the procedure, we collected cell samples from the unstrained and strained groups.

| Migration assay
Cell migration ability was determined by the migration assay. After the required treatment of each subgroup, cells (2.5 × 10⁵/well) were transferred into six-well plates and cultured with DMEM containing 10% FBS and then serum-free DMEM, each for 24 h.

| Statistical analysis
Data are presented as mean ± SD, and every experiment was repeated at least three times. Statistical analyses were performed using Student's t test or one-way ANOVA in SPSS software (version 19.0; SPSS Inc.). p values < 0.05 were considered to indicate statistical significance.
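For readers who want to reproduce the quantification, the comparative Ct calculation described above reduces to a few lines of arithmetic. The following minimal Python sketch is illustrative only (it is not the authors' code, and all Ct values shown are hypothetical):

```python
import numpy as np

def relative_expression(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """Comparative Ct (2^-ddCt) method, normalized to GAPDH.

    Each argument is an array of qPCR cycle-threshold (Ct) values for
    the target gene or the GAPDH reference, in treated (strained) or
    control (unstrained) cells. Returns per-replicate fold changes.
    """
    d_ct = np.asarray(ct_target) - np.asarray(ct_gapdh)                 # dCt, treated
    d_ct_ctrl = np.asarray(ct_target_ctrl) - np.asarray(ct_gapdh_ctrl)  # dCt, control
    dd_ct = d_ct - d_ct_ctrl.mean()                                     # ddCt
    return 2.0 ** (-dd_ct)

# Hypothetical triplicate Ct values for a target gene vs. GAPDH:
fold = relative_expression([24.1, 24.3, 24.0], [18.2, 18.1, 18.3],
                           [26.5, 26.4, 26.6], [18.3, 18.2, 18.4])
print(fold)  # values > 1 indicate upregulation relative to control
```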
| RESULTS

| The mRNA and protein expression of ppHSFs under multiple mechanical strains
To explore a proper strain parameter, multiple biaxial mechanical strains were applied to the ppHSFs. According to the documented literature, the hypertensive sclera shows increased fibrous components and decreased non-fibrous components, 15 accompanied by increased stiffness 11,12 and reduced scleral permeability. 13 Our preliminary work also confirmed upregulated scleral production of type I collagen and elastin in a chronic ocular hypertension model in rats, at least in the early stage. 14

| RNA sequencing and mRNA expression profiles of the mechanically stimulated and unstimulated ppHSFs
Three independent cell samples were used for whole-transcriptome measurements by high-throughput sequencing. The mRNA expression profiles were compared between the mechanically stimulated (10% 0.5 Hz for 8 h) and unstimulated ppHSFs. The transcriptome was massively altered after mechanical stretching (Figure 1A; Table 2).

FIGURE 4. The mechanically stimulated CXCR2 increased scleral ECM production in ppHSFs under the mechanical strain. Data are expressed as mean ± SD of three replicates. *p < 0.05, **0.001 < p < 0.01, ***p < 0.001; ns, not significant.

To further investigate the function of CXCR2 in ppHSFs, lentiviral vectors containing shRNA against the human CXCR2 gene were established. shRNA1 was selected for subsequent experiments as it was more effective in knocking down CXCR2 (Figure 1E-G).

| The mechanically stimulated CXCR2 promoted cell proliferation of ppHSFs under the mechanical strain
Under the strain of 10% 0.5 Hz for 8 h, the expression of CXCR2 was upregulated in ppHSFs. However, the effect of CXCR2 on ppHSFs has not been well explored. EdU imaging and cell cycle analysis by flow cytometry were performed to detect cell proliferation ability. Compared with the unstimulated normal control, the proportion of EdU-positive cells (green) was increased in the stimulated group, indicating promoted cell proliferation (Figure 2A,B). Cell cycle analysis by flow cytometry also showed a significant increase in the G2 and S phases in ppHSFs (Figure 2C,D). Our data also revealed that when CXCR2 was upregulated, the phosphorylation of AKT, STAT3 and ERK1/2 was increased, which might explain the stimulated cell proliferation (Figure 2E,F). Immunofluorescent imaging showed nuclear translocation of phosphorylated STAT3 under the same circumstances (Figure 2G). When CXCR2 expression was knocked down by shRNA, EdU imaging showed no significant difference between the two groups (Figure 3). Therefore, CXCR2 might promote cell proliferation of ppHSFs under mechanical strain.

| The mechanically stimulated CXCR2 increased scleral ECM production in ppHSFs under the mechanical strain via the JAK1/2-STAT3 signalling pathway
To explore the effect of CXCR2 on the mRNA and protein production of scleral ECM, qPCR and Western blots were conducted. The results revealed that strain-stimulated CXCR2 upregulation increased the mRNA and protein production of type I collagen and reduced the production of MMP2 in the scramble group. When CXCR2 expression was inhibited, the changes in the mRNA and protein production of type I collagen and MMP2 after stimulation were reversed (Figure 4A-C). CXCR2 could also modulate the secreted level of type I collagen (Figure S3) and the activity of MMP2 (Figure S4A,B).
The translocation of phosphorylated STAT3 under the strain was also attenuated when CXCR2 was knocked down (Figure 4D,E). The increased type I collagen, reduced MMP2 and activated phosphorylation of STAT3 under the strain could also be abrogated by applying the JAK1/2 inhibitor Ruxolitinib (Figure 5). Thus, we cautiously speculate that the mechanically stimulated CXCR2 might modulate scleral ECM production in ppHSFs via the JAK1/2-STAT3 signalling pathway.

| The effect of CXCR2 on cell apoptosis of ppHSFs
Flow cytometry with annexin V/PI dual staining was used to analyse cell apoptosis. The apoptosis of ppHSFs was slightly reduced under the strain, although without statistical significance (p = 0.875). In the shRNA-interfered group, the expression of CXCR2 was inhibited and apoptotic cells were significantly reduced after strain (p = 0.042; Figure S5). These results implied that CXCR2 inhibition might reduce the apoptosis of ppHSFs.

FIGURE 5. GAPDH was used as the loading control. Data are expressed as mean ± SD of three replicates. *p < 0.05, **0.001 < p < 0.01, ***p < 0.001; ns, not significant.

| The mechanically stimulated CXCR2 attenuated cell migration of ppHSFs under the mechanical strain via the FAK/MLC2 pathway
Cell motility was assessed with the migration assay. In the scramble group, ppHSFs migrated 38.65% and 59.16% of the total distance 24 h after the scratch, with or without strain, respectively (p = 0.036). In the shRNA group, the cells migrated 34.06% and 37.12% of the total distance, with or without strain, respectively (p = 0.547; Figure 6A,B). Under the mechanical strain, the upregulation of CXCR2 increased phosphorylated FAK but inhibited phosphorylated MLC2, hence reducing the migration ability of ppHSFs. However, the phosphorylation of FAK and MLC2 did not change significantly after strain when CXCR2 was knocked down (Figure 6C,D). Therefore, CXCR2 may affect the migration ability of ppHSFs under mechanical strain.

| DISCUSSION
Zhou et al. 33 found that CXCR2 might induce neuralgia by activating NF-κB after exposure to vincristine. CXCR2 could also stimulate the expression of STAT in diabetic animal models, contributing to the activation of glomerular monocytes and the migration of macrophages. 38 By upregulating the β-catenin signalling pathway, CXCR2 increased the migration, invasion and epithelial-mesenchymal transition of papillary thyroid carcinoma cells. 36 A crucial role of CXCR2 has also been detected in eye diseases such as keratitis 34 and proliferative vitreoretinopathy. 39 To further understand how CXCR2 retarded the migration of ppHSFs, particular emphasis was given to FAK. 41 Our previous work has shown that cells in the peripheral sclera are distinct from those in the PPS and react differently even under the same mechanical stimulation, including in the expression of α-SMA. 16 Further investigations will be conducted to fully elucidate the CXCR2-FAK-MLC signalling pathway. Generally, the reduced migration was a well-coordinated process. Increased scleral ECM production and retarded cell motility may together contribute to the remodelling of the hypertensive sclera. Taken together, our study showed that a strain of 10% at 0.5 Hz for 8 h upregulated CXCR2, which modulated ECM production and cell behaviours in ppHSFs. Mechanical strain may also induce significant changes in cellular microRNAs, suggesting that microRNAs might be one of the mechanisms modulating ppHSFs under mechanical stimuli. 47,48 In conclusion, CXCR2 might be a potential therapeutic target for glaucoma from the standpoint of sclera-based therapy.
CONFLICT OF INTEREST
The authors declared no conflict of interests.

DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Load‐transfer in the human vertebral body following lumbar total disc arthroplasty: Effects of implant size and stiffness in axial compression and forward flexion

Abstract
Adverse clinical outcomes for total disc arthroplasty (TDA), including subsidence, heterotopic ossification, and adjacent‐level vertebral fracture, suggest problems with the underlying biomechanics. To gain insight, we investigated the role of size and stiffness of TDA implants on load‐transfer within a vertebral body. Uniquely, we accounted for the realistic multi‐scale geometric features of the trabecular micro‐architecture and cortical shell. Using voxel‐based finite element analysis derived from a micro‐computed tomography scan of one human L1 vertebral body (74‐μm‐sized elements), a series of generic elliptically shaped implants were analyzed. We parametrically modeled three implant sizes (small, medium [a typical clinical size], and large) and three implant materials (metallic, E = 100 GPa; polymeric, E = 1 GPa; and tissue‐engineered, E = 0.01 GPa). Analyses were run for two load cases: 800 N in uniform compression and flexion‐induced anterior impingement. Results were compared to those of an intact model without an implant and loaded instead via a disc‐like material. We found that TDA implantation increased stress in the bone tissue by over 50% in large portions of the vertebra. These changes depended more on implant size than material, and there was an interaction between implant size and loading condition. For the small implant, flexion increased the 98th‐percentile of stress by 32 ± 24% relative to compression, but the overall stress distribution and trabecular‐cortical load‐sharing were relatively insensitive to loading mode. In contrast, for the medium and large implants, flexion increased the 98th‐percentile of stress by 42 ± 9% and 87 ± 29%, respectively, and substantially re‐distributed stress within the vertebra; in particular overloading the anterior trabecular centrum and cortex. We conclude that TDA implants can substantially alter stress deep within the lumbar vertebra, depending primarily on implant size. For implants of typical clinical size, bending‐induced impingement can substantially increase stress in local regions and may therefore be one factor driving subsidence in vivo.
| INTRODUCTION
Almost 500 000 spinal fusions are performed annually in the United States to treat degenerative disc disease and other spinal pathologies. 1 While mostly successful, 2 evidence suggesting that reduced segmental mobility may accelerate degenerative changes at adjacent levels [3][4][5] has driven interest in motion-preserving approaches, such as total disc arthroplasty (TDA). 6 This class of implants can allow for some degree of flexion/extension, lateral bending, and axial rotation between adjacent vertebrae. [7][8][9] The underlying premise is that this mobility produces a more natural kinematic and biomechanical environment in the adjacent vertebrae, that is, motion and load-transfer patterns that are closer to those occurring without an implant. Clinical outcomes following TDA are mixed. Problems including heterotopic ossification, [10][11][12] adjacent-level vertebral fracture, [13][14][15] and implant subsidence [16][17][18] suggest problems with the resulting biomechanics. Reduced implant coverage (a smaller footprint of the implant on the vertebral endplate) is associated with elevated interfacial stresses 19 and a higher incidence of implant subsidence, 16 suggesting that small implants may cause high stresses and failure of the underlying bone. Implants that cover an equivalent percentage of the vertebral endplate but have different shapes can require different forces to subside into the bone because they recruit different regions of the endplate and underlying trabecular microstructure. 20 Despite those insights, the fundamental load-transfer behavior within a vertebral body supporting a TDA implant remains largely unknown. For example, it is not known whether implant-induced changes in stress occur only in local regions adjacent to the implant and then dissipate in deeper regions, or whether the full extent of the vertebral body is impacted. Similarly, it is not known how stresses within the trabecular microstructure change as a function of implant size or material. The etiology of subsidence also remains unclear. Data from Punt et al 16 show that for 60% (21/35) of clinically diagnosed cases of subsidence, the implant footprint did not subside in a parallel manner but rather rotated by at least 5° relative to the bony endplate. This suggests to us that bending could be involved in subsidence, though this link has not been previously established. In part, these uncertainties arise because of the structural complexity of the human vertebral body, including the spatially variable trabecular micro-architecture and the thin cortical shell and endplate. Addressing this issue, our goal was to elucidate the role of implant size and stiffness on load-transfer behavior within the vertebral body following TDA, accounting for the realistic multi-scale geometric features of human vertebral bone. To capture these features, we employed micro-computed tomography (μCT)-based finite element analysis. The high resolution and mechanistic nature of μCT-based finite element analysis has provided unique insight into the mechanisms of osteoporotic wedge-fracture, 21 the mechanical role of the trabecular microstructure, 22 in vivo structural changes to bone, 23 and fundamental properties of bone tissue 24,25 and is therefore well suited to investigate tissue-level mechanics following TDA.
Specifically, for both uniform compression and flexion-induced anterior impingement, we investigated the effects of implant size and stiffness on trabecular-cortical load-sharing behavior, stress and stress changes in the vertebral bone tissue, and the spatial distribution of tissue at the highest risk of failure. The resulting insight can help elucidate fundamental biomechanical behavior for this class of device, including how implant design may facilitate the replication of a natural biomechanical environment in adjacent vertebrae.

| METHODS

| Study design
Our study comprised parametric, high-resolution, μCT-based finite element analysis of a human vertebral body virtually implanted with generic elliptically shaped TDA implants of varying size and stiffness and loaded in compression and flexion-induced anterior impingement. We assumed that subtle details of the implant geometry have only a secondary effect on tissue-level stresses within the vertebral body (Appendix A). Thus, to simplify the modeling effort, generic implants were modeled as 3-mm-thick elliptical cylinders with varying major and minor diameters. Implant models were compared against an intact (no-implant) case, which simulated loading via a disc-like material covering the superior and inferior endplates.

| Specimen preparation and μCT scanning
We analyzed μCT data from a separate study of one human L1 vertebral body from a de-identified 80-year-old male cadaver with no history of metabolic bone disorder. The bone volume fraction (BV/TV) was 0.23 for the entire vertebral body (cortical shell included). This value is higher than has been reported for osteoporotic vertebrae 26 and is therefore typical of what would be expected for a TDA candidate. The μCT scan had an isotropic pixel size of 37 μm, and the posterior elements were removed to isolate the vertebral body. To reduce computational cost, the scan was coarsened to 74 μm before the hard tissue and marrow were segmented using a global threshold value. Bone tissue was then compartmentalized into trabecular, cortical, and endplate tissue using custom algorithms described elsewhere (Figure 1A). 27 A planar surface was virtually created superiorly to mimic surgical preparation 28,29 prior to TDA implantation. This required resection through parts, but not all, of the osseous endplate.

| Finite element analysis
Each 74-μm voxel in the coarsened scan was converted into an eight-noded hexahedral finite element. 27 A TDA implant, also modeled using voxels, was placed such that the implant center coincided with the anterior-posterior (A/P) and medial-lateral (M/L) midpoint of the vertebral body (the A/P dimension was measured from the vertebral foramen). To simulate compressive loading of an intact disc, a uniform compressive displacement boundary condition was applied to the superior disc (Figure 1A). Following calculation of the finite element solution, results were scaled linearly to produce a net reaction force of 800 N (approximately 1× body weight 30), a typical force at that spinal level for static standing. 31 To simulate flexion of an intact disc, a displacement boundary condition was used to rotate the disc in the mid-sagittal plane about the far posterior-superior point, simulating flexion over a single motion segment (Figure 1B). 21,32 Results were then scaled linearly to produce an overall reaction force of 800 N.
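Because all analyses are linearly elastic, the rescaling to an 800 N net reaction is a single multiplication applied to the whole solution field. A minimal sketch of this step (illustrative only; the variable names are assumptions, not the authors' code):

```python
import numpy as np

def scale_solution(field, net_reaction_force, target_force=800.0):
    """Rescale any field (stresses, strains, displacements) of a
    linear-elastic FE solution so that the net reaction force equals
    the target; linearity guarantees all quantities scale together.
    """
    return np.asarray(field) * (target_force / net_reaction_force)

# e.g., a solve whose applied displacement produced a 1240 N reaction:
scaled_stress = scale_solution(np.array([12.0, -35.5, 8.1]), 1240.0)
```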
While flexion can increase loads on the spine 2- to 3-fold compared to what was modeled here, 31,33 a reaction force of 800 N was maintained in order to facilitate comparison across models. Compression of an implanted segment was modeled by applying a uniform force of 800 N to the superior implant footplate (Figure 1C). Flexion of an implanted segment was modeled by assuming impingement between the footplate and the insert (Figure 1D). There is substantial evidence that impingement occurs in vivo in flexion/extension and lateral bending for both unconstrained (eg, Charité) and semi-constrained (eg, ProDisc-L, activL) devices. [34][35][36][37][38][39][40][41] Analysis of retrieved implants and in vitro experiments suggests that large loads are transmitted through the impinged regions during bending. 35,37,40 Therefore, flexion of an implanted segment was modeled by applying a net force of 800 N to a 2-mm-thick, 90° arc of the footplate, representing load-transfer through the footplate induced by impingement (Figure 2). The distance from the implant center to the impinged region (r) was set as 40% of the footplate A/P diameter and was chosen because it represents a typical impingement moment arm for devices used clinically.

FIGURE 1. Mid-sagittal cross-section (0.5 mm thick) showing (A) the differentiation of trabecular (light gray), cortical (blue), and endplate (red) tissue. Boundary conditions and displaced shapes are shown for the (A) intact disc in compression, (B) intact disc in flexion, (C) implant in compression, and (D) implant in flexion. The implant components depicted above the footplate in (D) were not explicitly modeled but are shown to illustrate the impingement that motivates our flexion boundary conditions.

FIGURE 2. Flexion of an implanted segment was modeled by applying a force through an arc (yellow) to simulate impingement. θ = 90°, t = 2 mm, r = 40% of the footplate anterior-posterior diameter.

For all models, an intervertebral disc-like material was modeled inferiorly using a roller-type (symmetry) boundary condition applied to the base of a 4-mm-thick disc, thereby simulating an 8-mm-thick disc with unconstrained bulging. All bone elements were assigned the same elastic material properties (E = 10.3 GPa, ν = 0.30 42). While absolute values of stress in the bone directly depend on the choice of tissue material properties, the relative outcomes are insensitive to uncertainties in tissue modulus over a realistic physiologic range (Appendix B). Disc elements were assigned material properties consistent with the measured effective (homogenized) modulus of the disc at a low loading rate (E = 8 MPa, ν = 0.45 43). Implant modulus was parametrically varied as described below (ν = 0.33). Perfect bonding was assumed at all interfaces, thereby modeling full footplate fixation in the bone. 44,45 As described below, a total of 20 analyses were run. Depending on implant size, individual models had 36 to 46 million elements and 141 to 174 million degrees of freedom. All analyses were linearly elastic and were solved on a supercomputing cluster (Stampede2, Austin, Texas) using a custom finite element code that included a parallel mesh partitioner and an algebraic multi-grid solver. 46 A typical analysis utilized 1100 processors, 3000 GB of memory, and required over 200 CPU hours.
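To make the impingement loading of Figure 2 concrete, the sketch below selects footplate surface points lying on the loaded arc. This is an assumed reconstruction for illustration, not the authors' code; the coordinate convention (+y anterior, origin at the implant center) is our own choice:

```python
import numpy as np

def impinged_points(xy, ap_diameter, theta_deg=90.0, band_width=2.0):
    """Mask of footplate points on the anterior impingement arc:
    a theta_deg arc of radial width band_width (mm), centered at
    radius r = 40% of the footplate A/P diameter (Figure 2).

    xy: (n, 2) array of in-plane coordinates (mm) relative to the
    implant center, with +y pointing anteriorly.
    """
    xy = np.asarray(xy, dtype=float)
    r = 0.40 * ap_diameter
    radius = np.linalg.norm(xy, axis=1)
    angle = np.degrees(np.arctan2(xy[:, 0], xy[:, 1]))  # 0 deg = anterior
    return (np.abs(radius - r) <= band_width / 2.0) & \
           (np.abs(angle) <= theta_deg / 2.0)

# The 800 N net force would then be spread over the selected points,
# e.g., uniformly: force_per_point = 800.0 / mask.sum()
```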
Outcomes

The primary outcomes were: (a) trabecular-cortical load-sharing behavior; (b) the spatial distribution of tissue at the highest risk of initial failure; and (c) stress and stress changes relative to the intact model in the bone tissue. Trabecular-cortical load-sharing was quantified using the cortical load fraction, which was calculated at each transverse slice as the ratio of axial force in the cortical bone to that in the whole vertebra; the trabecular load fraction equals unity minus the cortical load fraction. 27 High-risk tissue was defined as the 10% of bone tissue at the highest risk of initial failure. 21 This was quantified by taking the ratio of the maximum and minimum principal stresses of each bone element (calculated at the element centroids) to its tensile (61 MPa) or compressive (150 MPa) yield stress, 48 respectively, then taking the higher value of this ratio. After ranking values across all elements, high-risk tissue was defined as the top 10% of values. 21,43 To evaluate tissue-level stress in the bone, the minimum principal stress (calculated at the element centroids) was visually plotted and compared. To quantify changes in stress compared to normal physiologic loading, the von Mises stress (calculated at the element centroids) for each implanted model was subtracted, element-by-element, from the intact model.

RESULTS

In both axial compression and flexion-induced anterior impingement, the presence of an implant altered the trabecular-cortical load-sharing behavior and spatial distribution of high-risk tissue relative to the intact disc, both adjacent to the implant and also deep into the vertebral body (Figure 3). These alterations depended more on implant size than material. In compression, the cortical shell experienced less overall load relative to the intact disc for all implant sizes and materials (Figure 3A). Among the implant models, the large implant transferred the most load into the cortical shell and thus best replicated the intact disc in compression. In flexion, on the other hand, the cortical load fraction for the implant models could be either less than or greater than that of the intact disc, depending on implant size (Figure 3B). Small and medium implants decreased, while large implants increased, the cortical load fraction relative to the intact disc, regardless of implant material. For the large size, the cortical load fraction exceeded that of the intact disc by up to 23% (this occurred 9 mm away from the bone-implant interface, at an axial position of 70%). At most axial positions, the medium implant best replicated the load-sharing behavior of the intact disc in flexion. Flexion of medium and large implants shifted high-risk tissue anteriorly in a way that flexion with an intact disc did not (Figure 3D). For the intact disc, flexion skewed the high-risk tissue distribution.

In compression, large implants caused 34 ± 1% of bone tissue to experience von Mises stress changes greater than ±50% relative to the intact model, compared with 51 ± 2% and 58 ± 0% of bone tissue for medium and small implants, respectively (material median ± range). In flexion, on the other hand, large implants caused 57 ± 3% of bone tissue to experience von Mises stress changes greater than ±50% relative to the intact model, compared with 51 ± 1% and 53 ± 8% for medium and small implants, respectively (material median ± range).
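The outcome metrics defined above are straightforward element-wise computations. A hedged Python sketch under assumed array shapes (none of these names come from the study's code):

```python
import numpy as np

TENSILE_YIELD = 61.0       # MPa, tensile yield stress of bone tissue
COMPRESSIVE_YIELD = 150.0  # MPa, compressive yield stress

def high_risk_mask(sigma_max, sigma_min, fraction=0.10):
    """Flag the `fraction` of elements at the highest risk of initial
    failure: the larger of (max principal / tensile yield) and
    (|min principal| / compressive yield), ranked across all elements."""
    risk = np.maximum(sigma_max / TENSILE_YIELD,
                      -sigma_min / COMPRESSIVE_YIELD)
    return risk >= np.quantile(risk, 1.0 - fraction)

def cortical_load_fraction(axial_force, is_cortical, slice_id):
    """Per transverse slice: axial force carried by cortical elements
    divided by the axial force carried by the whole slice."""
    out = {}
    for s in np.unique(slice_id):
        in_slice = slice_id == s
        out[int(s)] = axial_force[in_slice & is_cortical].sum() / \
                      axial_force[in_slice].sum()
    return out

def fraction_changed(vm_implanted, vm_intact, threshold=0.50):
    """Fraction of bone tissue whose von Mises stress changed by more
    than +/-50% relative to the intact model (element-by-element)."""
    rel_change = (vm_implanted - vm_intact) / vm_intact
    return np.mean(np.abs(rel_change) > threshold)

# Toy demonstration with synthetic principal stresses
rng = np.random.default_rng(2)
mask = high_risk_mask(rng.uniform(0, 40, 10000), rng.uniform(-80, 0, 10000))
print(f"high-risk elements: {mask.sum()}")
```

Here `fraction_changed` mirrors the ±50% bookkeeping reported in the Results (e.g., the 34 ± 1% figure for large implants in compression).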
DISCUSSION

These results indicate that the presence of a TDA implant can substantially alter cortical-trabecular load-sharing, the spatial distribution of high-risk tissue, and stress in bone tissue throughout the vertebral body relative to an intact disc. Implant size has a larger effect on these alterations than implant material in both compression and flexion-induced anterior impingement. The differences in load-transfer behavior between the intact model and the implant models were much larger for flexion-induced anterior impingement than for compression. In other words, flexion to the point of impingement with an implant caused much larger deviations from the natural biomechanical environment compared to compression with an implant. Specifically, flexion with an implant caused local increases in stress anteriorly and shifted the tissue at the highest risk of failure to local anterior regions. This behavior was accentuated as implant size increased but did not depend much on implant material properties.

[Figure 6 caption: Mid-sagittal cross-section (0.25 mm thick) showing the percent difference in von Mises stress between the intact disc model and metallic implant models in compression (top) and flexion (bottom). Other implant materials were omitted for clarity. Positive differences denote higher stresses for the implant models compared to the intact disc.]

The medium implant in our study is of particular interest because it is most representative of devices used clinically: the dimensions of the implant and the dimensional mismatch between the implant and the underlying vertebra are both within the range found clinically. 49 Our results suggest that implants of this size recruit less overall cortical bone than the intact disc in compression, thereby overloading the trabecular bone. Impingement has been reported to occur in 9% to 66% of cases 34,51,52 and has been documented for nearly all major device designs (Charité, [52][53][54] ProDisc-L 34 ).

The primary limitations of this study are its use of just a single vertebra and its theoretical nature. Micro-architectural parameters, such as BV/TV, vary between specimens and can impact mechanical behavior. 48 However, studies on n = 22 21 and n = 13 27 non-osteoporotic human vertebrae show a consistent pattern of cortical-trabecular load-sharing, which was also exhibited by the vertebra studied here. Thus, our results should likely extend to most non-osteoporotic vertebrae, though a larger sample size is necessary to confirm their generality. A substantially lower BV/TV representative of osteoporosis might result in different behavior, since structural redundancy is lost with osteoporosis. 56 In part, this may help explain the contraindication of TDA for osteoporotic patients. A second limitation stems from the purely computational nature of our study. The finite element approach used here has been shown to accurately predict whole vertebral-body and trabecular-core strength compared to experimental values, implying that the dominant structural mechanisms in the bone are well-captured. 48,56 We also modeled the disc as a homogeneous isotropic elastic material, thereby neglecting the details of the gelatinous nucleus pulposus and lamellar annulus fibrosus. However, with degeneration, compressive loads are thought to transmit directly through the annulus 57 as the nucleus shifts from a fluid-like to a solid-like structure. 58
Thus, in terms of loads experienced by the vertebral body, the annulus-type material properties we assigned to the disc should reasonably simulate a state of disc degeneration associated with the aged nature of the vertebra. Further, our prediction of the high-risk tissue distribution for the intact model is consistent with the location of bone failure observed for cadaveric vertebrae loaded via degenerated discs. 59,60 Finally, our implant model omitted the protrusions (such as the spikes or teeth used for fixation) found on real implants. A sensitivity study (Appendix A) indicated that using a higher fidelity implant model that includes protrusions had a negligible effect on reported results and would not alter our conclusions. However, some implants utilize a keel instead of a series of teeth for fixation. Since these keels are much larger than the protrusions modeled here and can extend deep into the vertebral body, it is possible that keeled implants could exhibit fundamentally different behavior than that reported here. Therefore, interpretation of our results should be limited to non-keeled implants.

We created a planar surface superiorly to replicate a TDA procedure, 28 which included resection through parts of the osseous endplate. There is clinical agreement that the osseous endplate should be preserved and that only the disc and cartilaginous endplate should be resected during TDA. 28,29 However, we found it was not possible to create a planar surface without resecting parts of the osseous endplate due to its inherent irregularity. This raises the question of whether complete endplate preservation might have enabled the implants to better replicate the intact model. Results from a prior study (n = 5 L1 vertebrae) indicate that, compared with full endplate preservation, full endplate resection only minorly altered maximum cortical load fraction (decrease of 4%, P < .01) and had a similarly small effect on high-risk tissue distribution. 61

It has been suggested that the implant position changes following subsidence, which then increases its proclivity to impinge. 35,36,40,41 However, our data suggest the opposite is also feasible: that impingement causes subsidence. We found that flexion-induced anterior impingement substantially increased stress in the bone and concentrated the high-risk tissue to local anterior regions. The 800 N force we applied in both compression and bending (approximately 1× body weight 30 ) facilitated comparison between models since it enabled us to isolate the interaction between size and loading mode. However, the forces on the vertebral body generated in vivo during flexion can be two to three times body weight, 31,33 since the moment arm caused by the weight of the trunk must be balanced by increasing forces in the erector spinae muscles, 65 which increases the reaction force at the vertebra. Scaling the values of stress in flexion 2- to 3-fold to those better representing the in vivo environment (permitted by the linear elastic nature of our study) would generate tissue-level stresses high enough to be of concern for both monotonic and fatigue-related tissue failure. 66 The failure of the bone tissue supporting an implant may be a causal factor for implant subsidence. Therefore, if the magnitude and distribution of tissue-level stress reported here are similar to those which develop in vivo, implant designs which impinge may inherently be at risk of overloading the bone in the regions near the impingement.
We suggest that benchtop subsidence tests should incorporate bending-induced impingement to better replicate in vivo behavior.

In summary, our findings suggest that implant size has a larger effect on load-transfer behavior within the vertebral body than implant material in both compression and flexion. If impingement following flexion occurs in vivo, local stresses in the bone tissue can substantially increase anteriorly in the region adjacent to the impingement. This behavior is accentuated as implant size increases. For the medium implant, whose size is similar to those used clinically, these elevated stresses are sufficiently high to warrant concern for monotonic or fatigue-related bone failure, which may contribute to clinically observed implant subsidence.

The load-sharing outcomes were insensitive to our choice of bone tissue material properties over the range tested. The cortical load fraction varied by a maximum of 0.5% and the high-risk tissue volume varied by a maximum of 1% at any axial location (Figure S1). Thus, the error of our load-sharing estimates with respect to our choice of bone tissue elastic modulus is negligible.
Omni-channel integration: the matter of information and digital technology

Abstract

Purpose: This paper aims to explore how omni-channel data flows should be integrated by specifying what data, omni-channel agents, and information and digital technologies (IDTs) should be considered and connected.

Design/methodology/approach: A multiple case study method is employed with 17 British companies. The studies are supported by 68 interviews with the case companies and their consumers, five site visits, four focus group meetings, and the companies' archival data and documentation.

Findings: This paper provides novel frameworks for omni-channel data flow integration from consumer and business perspectives. The frameworks consist of omni-channel agents, their data transactions, and their supporting IDTs. Relatedly, this paper formalizes omni-channel data flow integration in the forms of horizontal, vertical and total integration, and explores their contributions to the adaptability of the omni-channel as a complex adaptive system (CAS). It also discusses how inter-organizational governance mechanisms can support data flow integration and the relevant IDT implementations.

Originality: This research's recommended frameworks provide a robust platform to formalize data flow integration as the omni-channel's core driver. Accordingly, it moves the literature beyond a basic description of "what omni-channel is", and provides a novel and significant debate on what specific data should be shared at what levels between which agents of the omni-channel, and with what type of relationship governance mechanism, to assure omni-channel horizontal, vertical and total integration.

Research implications: The breadth and depth of the required IDTs for omni-channel integration prove the necessity for omni-channel systems to move toward total integration. Therefore, supported by CAS and inter-organizational governance theories, this research indicates how data flow integration and IDT can transform the omni-channel through self-organization and autonomy capability enhancement.

Introduction

Retail operations management has increasingly become consumer-led, as retailers must respond to the diverse and varying market demand in an omni-channel environment (MacCarthy et al., 2016). Omni-channel retailing aims to achieve a seamless and uniform view of the physical and data flows across its agents (e.g. retail shop, delivery service, and data service provider), while providing various channels (e.g. online and offline) for consumers to find, buy, take delivery of and return the product (Saghiri et al., 2017). The existing body of knowledge has highlighted synchronization, visibility and integration as the main drivers of omni-channel management (Cai and Lo, 2020; Melacini et al., 2018). Numerous studies address specific needs for data management and integration between online and offline stores (Herhausen et al., 2015; Verhagen and van Dolen, 2009), across multiple firms' Enterprise Resource Planning (ERP) systems (Sousa and Voss, 2006), among different stock-keeping points (Barratt et al., 2018), and within warehouses (Onal et al., 2018). Empirical and sectoral studies, for example in grocery (Müller-Lankenau et al., 2006), clothing (Kembro and Norrman, 2019b) and banking (Patel, 2014), also underscore the crucial role of data integration in the omni-channel system. Notwithstanding the prevailing emphasis on omni-channel integration, the literature on "how" to make integration happen is still nascent.
Managerial and visionary monographs (e.g. Briedis et al., 2019) report a high level of data inconsistency and inaccuracy in omni-channels, and scholars call for further research in this area (Caro et al., 2020). To manage data flows and the required integrations allied to them, retail operations managers have already employed a wide range of information and digital technologies (IDTs) such as barcodes, radio frequency identification (RFID) and electronic data interchange (EDI) (Dai and Tseng, 2012; Vize et al., 2013). However, IDT has developed significantly in recent years, and the application of more advanced technologies such as cyber-physical systems (CPS), the Internet-of-Things (IoT), artificial intelligence (AI) and big data analytics (BDA) is yet to be explored in the omni-channel context. The outstanding and rapidly expanding capacity of IDT in capturing, analyzing, and synchronizing data fits very well with the increasing complexities around omni-channel integration (for details, see the literature review in Section 2). Nevertheless, the major relevant studies do not acknowledge the centrality of IDT for omni-channel management (Seyedghorban et al., 2020), and there is a need for more studies on omni-channel integration with a particular focus on advanced IDT and its impact on omni-channel performance (Cheng et al., 2015). This need was also stressed earlier by Oh et al. (2012), calling for studies on embedding mobile channels within delivery systems, and more recently by Chi et al. (2020) and Pereira and Frazzon (2021), highlighting the need for research on digital-physical retail integration and governance. This paper responds to these calls by providing a comprehensive exploration of how to enhance omni-channel data flow integration, using IDT, through extensive multiple case studies of 17 companies. The case studies are supported by 68 interviews, five site visits, four focus group meetings, and the companies' archival data and documentation.

IDT applications and omni-channel research gaps

To manage product and data flows, omni-channels have tried various technologies, specifically IDT. The literature review of this research also investigates a number of potential IDT applications in product identification, and Business-to-Business (B2B) and Business-to-Consumer (B2C) data transactions. Product data are core for omni-channels (Acquila-Natale and Iglesias-Pradas, 2020), and need to be retrieved, monitored, and updated from manufacturer to retailer (Sun and Tyagi, 2020). Barcodes and RFID are two well-established data recording and capturing technologies, used in various picking, storage, shipping and delivery operations. Barcodes, readable by scanners, and RFID tags, with the capability of being both readable and writable (in passive and active forms), help users identify and locate products swiftly and accurately (Fan et al., 2015). They facilitate product shelf-replenishment (Condea et al., 2012), and can save operational costs for omni-channels (Dai and Tseng, 2012). B2B transactions, mainly between buyer and seller, are facilitated by essential, well-established platforms such as EDI (Ettlie et al., 2005), supporting order placement and receipt, purchase orders, shipment notices, delivery notices, invoicing, and payment. The conventional EDI has been enhanced into e-business and Internet systems, such as cloud-based EDI, to support fast, accurate, and large volumes of data transactions, not only between one buyer and one seller, but also among a wider range of supply chain and logistics agents of omni-channel systems (Vize et al., 2013).
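As a rough illustration of the structured B2B transactions such platforms carry, the Python sketch below builds and validates a simplified shipment-notice message. The field names are hypothetical stand-ins, not an actual EDI/EDIFACT segment layout:

```python
import json
from datetime import date

# Illustrative shipment-notice payload; field names are invented
# placeholders for the structured segments an EDI message would carry
shipment_notice = {
    "message_type": "SHIPMENT_NOTICE",
    "order_id": "PO-2021-0042",
    "seller": "Manufacturer X",
    "buyer": "Retailer Y",
    "ship_date": date.today().isoformat(),
    "lines": [
        {"gtin": "05012345678900", "qty": 120},
        {"gtin": "05098765432109", "qty": 60},
    ],
}

REQUIRED = {"message_type", "order_id", "seller", "buyer",
            "ship_date", "lines"}

def validate(msg):
    """Reject messages with missing fields before they enter the ERP:
    the kind of automated check that removes manual intervention."""
    missing = REQUIRED - msg.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return json.dumps(msg)  # serialized for transmission

print(validate(shipment_notice)[:80], "...")
```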
These real-time data transmissions help omni-channel agents receive the data they need, make timely decisions, and move toward integrated physical flows (Hofmann and Rüsch, 2017). Common errors and delays, which typically happen due to manual intervention, are minimized by B2B IDTs. This reduces reworks, buffer times and buffer inventories, and makes omni-channels more efficient and less costly (Craig et al., 2015). With the expansion of data generation and capturing nodes, and data transactions among them, the new concept of IoT has emerged. IoT refers to a highly distributed network of devices, communicating with each other and their users (Ancarani et al., 2020). IoT, assisted by Wireless Sensor Networks (WSN) and the Internet, can widely support B2B integration schemes such as information sharing, collaborative warehousing, automated ordering and replenishment, and vendor managed inventory (Ivanov et al., 2020), which help the omni-channel system have greater visibility of its products, coordinate the products' inventories and make them available efficiently and flexibly in different channels on demand. B2C transactions are also supported by IDT applications, mostly around electronic-commerce systems (King et al., 2004), which have extended to highly advanced technologies such as mobile apps with location recognition, data capturing, and payment capabilities (McLean, 2020). The main purpose of e-commerce systems is to involve consumers in the omni-channel processes (e.g. order fulfillment, shipment, and returns), to not only facilitate their shopping experience, but also maintain the relationship with them in the long term. This may include updating consumers with the latest news about products which they have previously purchased (e.g. new features, warranty services, or recalls), informing consumers about new products, involving them in market surveys, and even monitoring their latest shopping behavior, status and needs (Li et al., 2020; Margetis et al., 2019). More advanced IDTs such as virtual reality and augmented reality try to enhance the consumer experience and engagement, and integrate the online and offline aspects of omni-channel retailing even further (Heller et al., 2019). On the production side of the omni-channel system, advanced technologies in manufacturing, assembly and packaging have significantly increased companies' abilities to make products in small volumes and more flexibly (Khanchanapong et al., 2014). In particular, robotics and CPS try to coordinate computer programs with physical objects, devices, and machines (Leitao et al., 2016), assist firms to manage and control various operations autonomously, and eventually enhance production performance in terms of time, quality and adaptability (Xu et al., 2018). In such a smart, integrated production environment, machines can interact with each other using WSN (Strozzi et al., 2017) to implement the production plans, where the ERP system can receive and process all shop floor data instantaneously, and update the plans accordingly (Hald and Mouritsen, 2013). This can give omni-channel systems real-time access to and uniform visibility of their operations (e.g. manufacturing, storage, and delivery), distributed among different channels, and help them plan their stock movements more efficiently. Moreover, advanced, smart manufacturing technologies can enable omni-channels to make more customized products for different channels based on those channels' specific needs (Zhang and Zheng, 2020).
Advanced IDTs such as BDA, AI, and IoT, as mentioned briefly earlier, may have wider applications across the omni-channel processes. Although there is no omni-channel-specific research on those applications, there are studies that address how advanced technologies are implemented in various functions of omni-channel systems (e.g. warehousing). The expanding usage of IDTs in the various agents and operations of omni-channel retailing has led to enormous amounts of data, generated and shared by sensors, scanners, and data exchange and transmission instruments. These data can be a valuable source of knowledge if mined and analyzed properly (Villalobos et al., 2020). BDA, using advanced analytical and computing techniques to synthesize large and heterogeneous data sets, can provide various descriptive, predictive and prescriptive analyses (Brinch, 2018). On the consumer side, these analyses help omni-channel retailers understand market trends, preferences, and behavior at the segment, channel, and even individual consumer level (Jocevski, 2020); on the business side, they support the omni-channel's complex decisions and actions around when and where to order, make, hold and move the products (Wamba and Akter, 2019). Closely coupled with BDA, AI has been increasingly used in market and consumer behavior analysis and e-commerce (Singh and Tucker, 2017), order management and locating the order status (Hofmann and Rüsch, 2017), order pick-up and delivery scheduling and routing planning with time, cost, air pollution, and consumer satisfaction optimization objectives (Kang et al., 2019), and operations improvement and lean supply chains (Liu et al., 2013). In summary, the rather disjointed use of advanced IDTs, such as IoT, AI and BDA, in different parts and processes of the retail business, as well as the great opportunities for those technologies to contribute to omni-channel retailing (Bradlow et al., 2017), highlight the demand for new studies on the specific applications of IDTs collectively (conventional and advanced ones) to omni-channel data integration, decisions and actions (as explained in sub-Section 2.1). These specific gaps have also been emphasized in very recent literature reviews by Cai and Lo (2020) and Seyedghorban et al. (2020), who call for explorative research on channel integration and the technology applications which transform omni-channel retailing.

Theoretical anchor

The explorative approach of this paper can synthesize and explain its outcomes in omni-channel data flow integration frameworks better as it comprehends the omni-channel phenomenon through theoretical lenses. In light of that, this research relates the omni-channel concept to the complex adaptive system (CAS) (Miller and Page, 2009) and inter-organizational relationship governance (Cao and Lumineau, 2015) theories. The use of two theories, found to be complementary, allows the research to understand the phenomenon from different perspectives, hence providing more in-depth evaluations and insights. Scholars and practitioners have already raised the need to manage the omni-channel as a system (Wollenburg et al., 2018a). However, its highly dynamic and complex nature, as discussed below, calls for a broader view of the omni-channel, for which CAS is an appropriate lens.
Greatly in line with the CAS attributes, the omni-channel's numerous agents generating, capturing and sharing data in multiple channels, and the countless amounts of data generated by and exchanged between them, while they change dynamically, make omni-channel retailing a truly complex system. Meanwhile, the omni-channel's competences in, for example, managing order fulfillment and switching products from one channel to another, to respond to constantly changing consumer demand, support its adaptability. This paper links omni-channel retailing to CAS and theorizes it through five properties:

(i) A large number of inter-connected agents, as the main feature of CAS (Nilsson and Darley, 2006), is very much applicable to omni-channel retailing. Following Haki et al. (2020), the several data generation/capturing/sharing entities of the omni-channel system are considered as agents, in the two forms of actors (e.g. retail store, online retailer, logistics service, and manufacturer) and IT applications (e.g. replenishment systems, and e-commerce packages), which should be connected and synchronized. This connectivity (e.g. the case of online-offline showrooming: Zhang et al., 2020a) is a core feature of the omni-channel system, which distinguishes it from conventional retail systems (e.g. bricks & mortar or multi-channel).

(ii) Diversity and heterogeneity apply to omni-channel agents and their multiplicity, while each agent may have various roles and objectives (Nilsson and Darley, 2006). They also refer to an extensive range of interactions among the agents. While conventional retail systems have limited interactions with suppliers and delivery services, omni-channel systems should manage a quite diverse range of physical/information/financial transactions, associated with payment, delivery, product exhibition, stocking, manufacturing and returns processes, in various channels synchronously (Verhoef et al., 2015).

(iii) Dynamism refers to frequent changes in the agents' activities and objectives (e.g. a warehouse may switch its role from a stock-keeping point to a solution-provider entity for retailers (Kembro and Norrman, 2019b), or logistics service providers may support the healthcare system to collect Covid-19 tests (Mooney, 2020)), which may cause changes in their interactions with other agents, and add to the system complexity.

(iv) Non-linearity is grounded in the disproportional effects of changes in one agent (and its connections) on other agents and interactions. Non-linearity is found to influence and be influenced by the interconnectedness, heterogeneity and dynamism of the system (Choi et al., 2001). Various cases of non-linearity are traceable in the retail sector which indicate how omni-channel retailing is different from conventional retail models: for example, negative feedback in the online channel might significantly and positively increase the demand in other channels (Berger et al., 2010), or major breakthrough changes in mobile shopping technology might have a minimal impact on cash buyers of the store channel.

(v) Adaptation through self-organization and emergence, as a key property of CAS to manage complexities, is of great significance for the omni-channel system. At the system level, omni-channel retailing needs to cope with and manage its changes internally (e.g. in data and physical flows among the agents) and externally (e.g. in demand, supply, business environment, and regulations).
At the entity level, omni-channel agents need to spontaneously modify their internal business processes and external interactions (with other agents) to respond to heterogeneous and dynamic changes imposed on them by other agents or the external environment (e.g. see the logistics model restructuring case proposed by Marchet et al., 2018). The agents' capabilities to structure/restructure their business processes and interactions define their self-organization property, highly required for the adaptability of CAS. The omni-channel agents' self-organization capabilities should collectively lead to the emergence of new flows, interactions or settings for the whole omni-channel system (e.g. see the case of omni-channel adaptation to the changes in the consumer shopping journey during Covid-19: Zhang et al., 2020b), which is also key for CAS adaptability. Drawing on the properties above, data flow integration and its assisting IDT are expected to enhance omni-channel adaptability. Managing and integrating diverse and large-scale data flows across the omni-channel system, and implementing IDTs to support them, need the engagement of omni-channel agents and their encouragement toward further integration and IDT application. Therefore, this research views omni-channel data flow integration in the inter-organizational relationship context and its relevant governance mechanisms (Akin Ates et al., 2015), which drive data flow integration and its supporting IDT implementation. Governance is defined as the coordination and control of economic exchange among organizations (Mahapatra et al., 2019), and is typically implemented through contractual and relational mechanisms (Um and Oh, 2020). Contractual governance focuses on determining the rights, responsibilities and control procedures in a relationship, and relational governance is mainly defined around collaboration, trust and joint problem solving (Cao and Lumineau, 2015; Mahapatra et al., 2010). In view of these, the paper discusses how data capturing instruments (e.g. smart sensors), data sharing platforms (e.g. WSN and WWW), horizontal and vertical data exchange technologies (e.g. IoT and CPS), autonomous planning units (supported by AI) and advanced data analysis methods (BDA) need one or both of the contractual and relational governance mechanisms across the omni-channel system.

Research setting

This study adopts a multiple case study design (Stake, 2013), which fits well with the explorative approach of this research, and leads to an in-depth understanding of the emerging phenomena of IDT, data flow integration and omni-channel retailing as a complex system. Following Miles et al.'s (2013) recommendations, the sampling frame in this multiple case study research is guided by the research questions. Purposive sampling (Ellram, 1996; Flick, 2009; Miles et al., 2013) in stratified form (Marshall and Rossman, 2006: p. 71) is used, enabling the development of a rich and comprehensive theoretical framework on the application of IDTs, by considering the perspectives of the key agents in omni-channels (sub-groups in stratified sampling), i.e. manufacturers, wholesalers, logistics companies, retailers, and data service/IDT providers. Based on this frame, the criteria defined by the researchers for selecting the cases are: first, their position as leading organizations that have a significant share and influence in their sectors; second, being part of an omni-channel system (i.e.
interacting with a number of businesses such as manufacturing firms, sales, logistics, and information services, which perform in an inter-connected way in multiple channels); third, having a strategy or plan for the digital transformation of their omni-channels and investing in IDT for their operations (e.g. by using technology and standards for capturing and sharing data on products and operations); and, fourth, providing the researchers with sufficient access to data illuminating the research questions (Yin, 2014). The sample includes 17 leading British companies, which are parts of omni-channel systems, with multiple manufacturers, distributors, logistics service providers, and data service/IDT providers, performing in multiple inter-connected channels. This number of cases ensures the generalizability of findings in this case study research (Eisenhardt, 1986). The definition of the 'unit of analysis' in case study research is related to the research questions of the study (Yin, 2014). The research questions of this paper (presented in the introduction section) concern IDT-enabled data flow integration (across omni-channel processes), its contribution to the omni-channel, and the inter-organizational relationship mechanisms to support it. Therefore, the unit of analysis of this research is the 'omni-channel system' of each of the studied companies, while the focus is on data flows among each company and the other agents of the omni-channel of which that company is a member. Information about the case companies is provided in Table I. Having an extended view of different companies, which perform as different agents (e.g. manufacturer, wholesaler, logistics provider, retailer, and IDT provider) in different omni-channels, supports the breadth and depth of this research in operationalizing omni-channel integration and understanding the opportunities and challenges of IDT to support it.

Data collection

Data for this research are collected via different sources including interviews, documentation, archival records, direct and participant observations, and focus group meetings. A database is created and all the data for the cases are stored in it, following Miles et al. (2013) and Stake (2013) guidelines.

Interviews

Interviews with companies are conducted with the informant person(s) in each company with appropriate knowledge of omni-channel systems and the IDTs used for managing product and data flows through them. Given that this research focuses on the data management and integration aspects of the omni-channel system (including their technical and inter-organizational issues), the interviewees are introduced by their companies as the most informed and experienced people, who are involved in decision-making processes regarding the design and setting of the omni-channels and implementing the IDTs (from both the IDT user and IDT provider companies). The interviewees are also knowledgeable about all the issues which their different departments are coping with in product and data flow management in their omni-channels. Prior to conducting the interviews, the list of questions is sent to the interviewees in a timely manner, to enable them to receive any required complementary information from other departments in their companies if needed. The interviews are in a semi-structured format, and each took 60-90 minutes. All the interviews are recorded and transcribed according to the guidelines by Yin (2014).
The interviews are done face-to-face, via telephone and via email (as shown in Table I), which are the established mediums of conducting interviews in qualitative studies (Bryman and Bell, 2007). A group of the interviews are conducted via telephone, due to its suitability to the respondents (e.g. when they work from home, or they are on business trips). The literature shows no significant difference between the validity and reliability of face-to-face and telephone interviews (Sturges and Hanrahan, 2004). There are even studies encouraging the use of telephone interviews more often in qualitative research (e.g. Novick, 2008), emphasizing their advantages over face-to-face interviews, including reducing the costs of research, being easier to administer, and eliminating the effects of the characteristics of the interviewer (e.g. class or ethnicity) on the interviewee (Bryman and Bell, 2007). To ensure the validity and reliability of telephone interviews, Bryman and Bell's (2007: p. 216) measures are adopted, including: interviewing targeted respondents, selected thoughtfully (not randomly) in each company; and asking the interviewees to email supporting documents which provide more information about the topics discussed during the interviews, to address limitations in illustrating visuals or figures by the respondent during the telephone interviews. Two respondents requested to answer the interview questions via email (by filling in the interview form), to make sure they had enough time to search for and provide accurate answers to all questions, and to make internal enquiries with their colleagues. After receiving their responses, follow-up emails are exchanged, if required, to ensure the answers are understood clearly by the researchers. Email interviews have a few advantages over face-to-face interviews, including providing more detail in written answers, providing a cleaner text, and eliminating the effect of developing personal relationships with the interviewees, which can affect the research (Bryman and Bell, 2007: p. 674; Murray and Sixsmith, 1998: p. 118). Following Bryman and Bell (2007) and Murray and Sixsmith (1998) guidelines for the validity and reliability of email interviews, prior agreements are made with the respondents for the email interview; detailed and structured records of the questions and answers and their time and date are documented; and all the answers are stored in the case studies' database. The interview questions aim at creating an in-depth understanding of the operations of the companies, the agents and channels in their omni-channel systems, and the information systems (including IDTs) used for managing their intra- and inter-organizational operations. The questions also address IDT applications and implementation issues around them. The list of interview questions is available in Appendix 2a. The interview transcripts are then coded using NVivo, leading to an appropriate and detailed analysis of the qualitative data. There have been very few cases of contradictory remarks stated by the interviewees (for example, on the choice of the right track-and-trace technology for product identification, considering the cost and accuracy tradeoffs). In those cases, following the recommendations by Power (2004), listening to the 'logic of practice' of the interviewees, and triangulation of data (e.g. by discussing these issues in the focus groups and by reviewing documents of the studied companies), have been used to maintain the validity of the findings.
The approach used for coping with divergent views and information provided by interviewees from the same company (Watson, 2006; Power, 2004) is to send follow-up emails to the respondents who provided those different views, copying all of them on the email, and asking them to clarify the ambiguity and contradiction in their answers. These follow-up emails are very useful, leading either to a consensus on the provided insights, or to the identification of different scenarios, i.e. solutions depending on the use case. For example, the use of a specific IDT can depend on the type of product, its value, or the physical conditions which the product goes through in its channel, which might explain the divergent answers from the respondents. When identifying divergent views from different sources of data from the same company (e.g. interviews and documents), data triangulation (Yin, 2014) is used by sending follow-up emails to different respondents from the company and asking for clarifications on those topics. These investigations lead to revising some of the propositions of the study. Key information about the company interviews and further details of them are provided in Table I and Table II respectively.

Table II here

Interviews with consumers of the case companies are conducted in a semi-structured format (Flick, 2009) to explore omni-channel data flow integration and the relevant IDT applications from the consumer's point of view. The interviewees are the consumers of companies 3 and 8-14 (listed in Table I). The initial consumer sample frame is recommended by the companies or extracted from the companies' databases of consumers who have given consent to be contacted for marketing and research purposes. Out of 126 consumers in the sample frame, 52 respond to the initial contacts and, finally, 38 interviews are conducted completely. The interviewees represent the case companies very well (3-6 interviewees from each company), with a diverse age, income and education range, and with experience of buying products through omni-channels (i.e. online and click-and-collect, besides in-store shopping). During the interview, verbal explanations are provided to the interviewees about the structure of the omni-channels and the agents who are involved in them. Explanations and examples are also provided about the types and flows of data that are collected from/provided by the consumers. The interviewees' answers are not limited to one specific company or omni-channel, and largely reflect their knowledge of and views on the omni-channel and its relevant data flows. The interviews, with the companies and the consumers, are very valuable for this research, as they lead to gaining a deep understanding, with explanations, of omni-channel data types, data flows, and the IDT applications for data capturing, sharing, and analysis throughout the omni-channel system, from both consumer and business perspectives. These explanations were not achievable via any other source of case study evidence.

Documentation

Different types of documentation, including information available on the websites of the companies, articles and white papers about the studied companies and their omni-channel systems, and their archival records and annual reports, are collected (Stake, 2013; Yin, 2014) for this research to complement the evidence gathered via the interviews.
Other important sources of information are the videos available about the operations of the companies and the way IDT enhances the relevant omni-channel operations. Documents are very important sources of data in this research, because a full understanding of the omni-channel structures, and more importantly the architecture and structure of the IDTs (e.g. the connections between different elements of their systems, such as auto-identification devices, data capturing, data exchange and data storage, at within- and inter-organizational levels), is possible only via studying the documents provided by the companies. Considering the explorative nature of this research, the technical information provided in the documents (e.g. technical figures and maps of the systems showing the interconnections of the IDTs and data flows) is essential to help formulate the findings of the study. Moreover, the documentation enabled validation through triangulation of the case studies' data (Yin, 2014). The documentation collected from the companies and its details are presented in Table II.

Observations

Visiting five sites of the studied companies, and observing their IDT-based solutions at work and demos of the advanced systems that they design, help develop the researchers' insights, by providing an understanding of the practical aspects of implementing the systems, and the technical considerations and limitations related to them. Pictures are taken and videos of the companies' IDTs are recorded (subject to receiving prior consent from the companies). Furthermore, notes are taken by the researchers during the visits. During and after making direct and participant observations, new questions are raised by the researchers, which are answered by the companies' representatives, improving the quality of the findings of the paper and increasing the level of their practical relevance. The site observations are complementary sources of data, as they provide new insights into the IDTs which could not be achieved with other data sources. Through the site visits, the way data are generated or shared using IDT is demonstrated by the companies. Observations are used for triangulation and validation of the case study data (Stake, 2013; Yin, 2014). Details about the participant and direct observations are provided in Table II.

Focus groups

Four focus group meetings with participants from the IDT provider companies (companies 15-17 shown in Table I) and the researchers take place for the purpose of improving the researchers' understanding and validating and triangulating the findings of the study (Eisenhardt, 1989; Yin, 2014). The meetings occur at different stages, in order to provide the required insights and feedback from the IDT experts' perspective, which guide the researchers through the entire research process, and ensure the rigor and robustness of the findings. The first and second meetings take place in the initial stage of the study, when the researchers present the scope of the research and the managers provide important insights regarding inter-organizational data management in omni-channels and the IDTs which are used for organizing the data flows. Also, the discussions in the first two meetings help identify suitable sectors and companies for the research.
In the third meeting, the findings generated from analyzing the interviews and documents of companies 1-14 are presented to the managers, and feedback and additional technical details are provided by them on the IDTs used by those companies and the opportunities and limitations related to implementing them. In the fourth focus group meeting, which takes place at the end of the study, the findings and initial propositions of the research are presented to the managers and feedback is received from them. The focus group meetings take place face-to-face. Several important insights in relation to the inter-organizational aspects of data management (e.g. issues related to the compatibility of companies' databases) are identified through these meetings, which contribute to the findings of the study. The researchers moderate the meetings by providing a clear agenda for the conversations. They record the conversations and the highlights of the discussions. After each meeting, the researchers meet to identify the main themes of conversation and to discuss the points of group consensus (Kitzinger, 1995). Details about the focus groups and their participants are provided in Table II.

Data analysis

For analyzing the case study data, after collecting the evidence, the transcripts are coded and analyzed using NVivo. The keywords used for data analysis include: 'provided data', 'gathered data', 'data management', 'integration' and 'information and digital technology'. Matrix displays are used within spreadsheets, displaying the codes on one dimension and quotes on the other (Kaufmann and Denk, 2011; Miles et al., 2013). This analysis procedure leads to identifying robust patterns within the case studies (Eisenhardt and Graebner, 2007). Via eight brainstorming sessions between the researchers, the patterns identified through the case study analysis process are refined, leading to 'sharpening' the propositions of the study (Kaufmann and Denk, 2011; Yin, 2014). Differences in the business models of the companies (e.g. having different roles or providing their products via different channels) and their system setups (e.g. using different types of IDTs and sharing data at different levels with different types of companies) are taken into account when the data are analyzed. Following the guidelines on cross-case analysis by Stake (2013), the research questions are used as a guide when applying the findings of each case for creating the overall findings of the research. Cross-case analysis enables greater generalizability of the research findings by identifying the applicability of the findings to other settings (Miles and Huberman, 2013: p. 173). Considering the research aim of exploring omni-channel data flows/integrations using IDTs from consumer and business perspectives, the differences in the business models and setups of the case companies of this research lead to more generalizable results. However, at the same time, identifying the similarities and differences between the business models and system setups of the studied cases demands a significant amount of synthesis by the researchers, to keep the focus on the areas of their businesses and IDTs which can provide answers to the research questions. The sampling frame of the study and the unit of analysis (explained in sub-Section 3.1) are used as guides in this process.
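As a rough illustration of the matrix displays described above, a small pandas sketch that arranges codes on one dimension and grouped quotes on the other. The code names follow the keywords listed in the text, while the companies and quotes are invented placeholders, not entries from the study's NVivo database:

```python
import pandas as pd

# Hypothetical coded interview segments (illustrative only)
coded_segments = [
    {"company": "Retailer A", "code": "data management",
     "quote": "We sync stock levels across channels every hour."},
    {"company": "Retailer A", "code": "integration",
     "quote": "Our ERP talks to the courier's system via EDI."},
    {"company": "IDT provider B", "code": "information and digital technology",
     "quote": "RFID tags are written at the packing station."},
]

df = pd.DataFrame(coded_segments)

# Matrix display: codes on one dimension, companies on the other,
# with the quotes for each cell concatenated, mirroring the
# spreadsheet-based displays described in the text
matrix = (df.groupby(["code", "company"])["quote"]
            .apply(" | ".join)
            .unstack(fill_value=""))
print(matrix)
```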
In the Findings section, following the guidelines of Pratt (2009), the research outcomes are organized in sub-Sections 4.1-4.3, supported by the power quotes, which are presented in Table III.

Table III here

Quality of research

Steps taken during the data collection process to ensure the robustness, validity and reliability of the research have been explained in each sub-section related to every source of data (sub-Sections 3.2.1-3.2.4). Following the recommendations by Yin (2014), four widely used tests, including construct validity, internal validity, external validity, and reliability, have been applied to ensure the rigor of this case study research (Gibbert et al., 2008). Table IV shows the tactics through which these tests are used for this research and the stages of the research in which the tactics are implemented.

Findings

The outcomes of the qualitative data analysis of this research indicate various channels of generating, capturing, receiving, sharing, and analyzing data throughout the omni-channel system, which can be viewed from the perspective of, first, the consumer and, second, the business/retailer, as presented in sub-Sections 4.1 and 4.2 respectively. The data exchanged on the consumer and business sides are then explored further, in terms of their types and the technologies used to capture, share and analyze them, in sub-Section 4.3. In each sub-section the research outcomes are summarized in the form of notions.

Data flows: consumer perspective

Consumer-side omni-channel data flows are organized around the consumer shopping journey, widely known in business research as a sequence of touchpoints or a process that a consumer follows to acquire and use a product (Følstad and Kvale, 2018; Harris et al., 2021). Reference textbooks such as Blackwell et al. (2011) recommend five steps of "need recognition", "information search", "evaluation", "purchase", and "post-purchase evaluation" for the consumer buying journey, and Harvard Business Review analyzes it based on "consider", "evaluate" and "buy" steps (Edelman, 2015). The recent marketing literature simplifies it into "pre-purchase", "purchase", and "post-purchase" stages (Grewal and Roggeveen, 2020; Lemon and Verhoef, 2016; Tueanrat et al., 2021), and omni-channel retailing research establishes the shopping journey in pre-purchase, payment, delivery, and return steps (Saghiri et al., 2017). Further synthesis of the consumer shopping journey is provided in Figure 1a, where the different steps identified by the literature are mapped and the features of different consumer shopping journey classifications are underlined.

Figure 1 here

Taking these classifications into consideration, analysis of the collected data of this research has identified the following themes (organized as the main steps) of the consumer's shopping journey, encoded as a data structure in the sketch after this list:

- "searching for product", "seeing the product", "trying the product", "touching the product", "consumers' review/feedback" → outlined as C1. Pre-Purchase: collecting product data
- "payment", "purchase terms" → outlined as C2. Purchase: selecting/buying the product
- "home delivery", "collection point", "delivery updates/status" → outlined as C3. Receiving and Using the Product
- "returns", "reverse logistics", "home collection", "drop-off point" → outlined as C4. Aftersales Service, including possible returns

C1-C4 form the center column of Figure 2a, and help organize the consumer shopping journey data flows chronologically.
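A hedged Python sketch of this coding scheme: the stage names follow C1-C4 above, and the flow labels mirror the CI/CO coding used in Figure 2a and described below, but the exact lists here are illustrative reconstructions, not the paper's full Figure 2a content:

```python
# Hypothetical encoding of the C1-C4 journey framework and its data
# flows; C3/C4 flows are left empty because they are not enumerated
# in this part of the text
JOURNEY_STAGES = {
    "C1": {"stage": "Pre-Purchase: collecting product data",
           "consumer_inputs": ["CI1.1 product data", "CI1.2 view/try product",
                               "CI1.3 product reviews", "CI1.4 stock level",
                               "CI1.5 price and promotion"],
           "consumer_outputs": ["CO1.1 consumer queries",
                                "CO1.2 shopping behavior"]},
    "C2": {"stage": "Purchase: selecting/buying the product",
           "consumer_inputs": ["CI2.1 sales T&Cs", "CI2.2 purchase record"],
           "consumer_outputs": ["CO2.1 consumer information",
                                "CO2.2 buying behavior",
                                "CO2.3 payment data"]},
    "C3": {"stage": "Receiving and Using the Product",
           "consumer_inputs": [], "consumer_outputs": []},
    "C4": {"stage": "Aftersales Service, including possible returns",
           "consumer_inputs": [], "consumer_outputs": []},
}

# Chronological walk over the journey, listing the data flows per stage
for key in sorted(JOURNEY_STAGES):
    s = JOURNEY_STAGES[key]
    print(f"{key}: {s['stage']}")
    for flow in s["consumer_inputs"] + s["consumer_outputs"]:
        print(f"  - {flow}")
```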
They reflect the consumer journey from the time he/she might become aware of the need for the product and start thinking about/searching for it, up to the point he/she receives it, and even after that, when (if) he/she decides to return the product. The various channels that provide/collect data to/from the consumer are respectively presented on the left-hand and right-hand side headers of Figure 2a. Figure 2a reflects the grouping done in the initial analysis of the case studies and interviews (e.g. CI1: Consumer Input data, relevant to consumer journey step 1, pre-purchase). The IDTs relevant to each data flow are also specified by bold capital letters in each box (e.g. B: Barcode; D: Data Analytics). The explanations of each data transaction, in the following sub-Sections, are supported by short power quotes, as described in sub-Section 3.3 and fully listed in Table III.

In the pre-purchase step, the consumer tries to learn about the product or expand his/her knowledge about it; hence he/she may need all or some of the following data. Product data, including specifications, material/ingredients, functions and features, are typically available from various sources including store and non-store channels (CI1.1). Various sales channels help the consumer to receive the product data not only from the physical store, but also from the store's website, other e-tailers, catalogues or tele-sales, pop-up stores, and even the manufacturer itself, as the original source of product core data:

"… click on the link in the ad [shared in social media] takes me directly to the online store, where I can find full product data." (Consumer 6)

The retail store has conventionally been the right place to view and try the product, and physically compare it with other available options there (CI1.2). Advanced technologies such as augmented reality have made viewing and trying the products (at least some products) more viable via online and virtual channels:

"I go to the store to see the product physically and eventually try it on …" (Consumer 1)

"Augmented reality [available online] is … the closest to physically trying the products." (Consumer 3)

Therefore, box CI1.2 in Figure 2a is extended beyond the physical retail store. In addition to the product description and features, product review reports, prepared by the sellers or third parties (CI1.3), can support the consumer's purchasing decision. Product data can also be acquired from other consumers, shared via social media, product review services, and the sellers' websites. Even though they are not typically well organized and standardized, other consumers' reviews and comments provide some good insights on the product, which are not easily available in formal reports:

"… other consumers' reviews [shared online are] affecting my choice." (Consumer 4)

The consumer purchase decision can be affected by the available stock level (CI1.4) as well as price and promotion (CI1.5). For some products, a reduced price, marketing promotion, or low stock availability may be good motivations for consumers to buy more and/or faster:

"[social media] influencers … are affecting my decision to buy …" (Consumer 5)

"… multi-brand websites offer great discounts and more product stock variety." (Consumer 7)

"Instagram adverts … suggest me products that I am looking for." (Consumer 8)

The consumer, in the pre-purchase step, usually has questions about the products, purchase terms, delivery options, and the like. These queries have conventionally been shared with the seller.
Pre-purchase: collecting product data (C1)

In the pre-purchase step, the consumer tries to learn about the product or expand his/her knowledge about it; hence he/she may need all or some of the following data. Product data including specifications, material/ingredients, functions and features are typically available from various sources, including store and non-store channels (CI1.1). Various sales channels help the consumer to receive the product data not only from the physical store, but also from the store's website, other e-tailers, catalogues or tele-sales, pop-up stores, and even the manufacturer itself, as the original source of product core data:

"… click on the link in the ad [shared in social media] takes me directly to the online store, where I can find full product data." (Consumer 6)

The retail store has conventionally been the right place to view and try the product, and physically compare it with other available options there (CI1.2). Advanced technologies such as augmented reality have made viewing and trying the products (at least some products) more viable via online and virtual channels:

"I go to the store to see the product physically and eventually try it on …" (Consumer 1)

"Augmented reality [available online] is … the closest to physically trying the products." (Consumer 3)

Therefore, box CI1.2 in Figure 2a is extended beyond the physical retail store. In addition to the product description and features, product review reports, prepared by the sellers or third parties (CI1.3), can support the consumer's purchasing decision. Product data can also be acquired from other consumers, shared via social media, product review services, and the sellers' websites. Even though they are not typically well organized and standardized, other consumers' reviews and comments provide some good insights on the product, which are not easily available in formal reports:

"… other consumers' reviews [shared online are] affecting my choice." (Consumer 4)

The consumer purchase decision can be affected by the available stock level (CI1.4) as well as price and promotion (CI1.5). For some products, a reduced price, marketing promotion, or low stock availability may be a good motivation for consumers to buy more and/or faster:

"[social media] influencers … are affecting my decision to buy …" (Consumer 5)

"… multi-brand websites offer great discounts and more product stock variety." (Consumer 7)

"Instagram adverts … suggest me products that I am looking for." (Consumer 8)

The consumer, in the pre-purchase step, usually has questions about the products, purchase terms, delivery options, and the like. These queries have conventionally been shared with the seller. But omni-channel retailing makes it possible for the consumer to approach non-sales channels and social media, and share his/her questions there (CO1.1).

"I can find answers to my questions about the product from the seller, in specialist websites or even from YouTube." (Consumer 21)

The consumer queries can then form a valuable knowledge base for sellers, market analysis services and manufacturers, indicating what consumers are looking for and what their main concerns are. These can inform those channels to work on their input data in the future, and make the answers available for any potential questions around them. Channels that are involved in the pre-purchase step can monitor the consumer's shopping behavior and record his/her moves and preferences (CO1.2). These data can be analyzed further and form a basis for future omni-channel market intelligence.

"I am surprised that companies have so much information about me." (Consumer 9)

The data provided through CI1.1-CI1.5 are dynamic and might be inter-dependent too (e.g. stock availability in CI1.4 may affect the pricing and promotion in CI1.5). To adapt to those changes and ensure an adequate, ideally real-time, synchronization among CI1.1-CI1.5, all data-providing agents (sales, non-sales, and social media - in the left-hand side header in Figure 2a) need to have good visibility of the data they generate and share. This visibility can be driven or instructed by a leading or influencing agent - for example, a manufacturer can instruct the retailers and other sales channels to use its original product data format. Moreover, the self-organization property of the data-providing agents, as expected from CAS agents, enables them to establish good connections with each other. The connections are not expected to be identical. For example, while the store and online channels of one retail brand can benefit from product, stock availability and price data sharing by integrating their databases, a similar data sharing among a retail store, a manufacturer, and an independent online retailer may happen via different agreements between every two agents (e.g. the store sharing its point-of-sales data with the manufacturer at the end of each day), or through data monitoring and tracking schemes run by each agent (e.g. the online retailer). Anyhow, due to the importance of consistency in data generation and sharing (in any form that the omni-channel agents agree on), it should be done through formal mechanisms to assure the proper commitment of all relevant parties. Similar to the data-providing side, data-gathering agents (the right-hand side header in Figure 2a) may track and collect consumer enquiries or shopping behavior data and share them with each other through mutual agreements. This self-organizing intelligence of the consumer behavior, preferences, and queries, at the agent level (extended across multiple channels in CO1.1-CO1.2), provides a high visibility of the market at the omni-channel system level, as a CAS. The thorough visibility and integration of the data-providing/gathering agents, as well as their self-organizing ability to capture, generate and share the product data at the pre-purchase stage (of the consumer shopping journey), fundamentally differentiate the omni-channel system from single or multi-channel systems - which do not intend, or are not capable of, managing such an exhaustive level of data flow integration.
In summary:

Notion 1: At the consumer journey's "pre-purchase" stage:
(1a) data flow integration among the self-organizing data-providing/gathering agents supports the omni-channel system's adaptability - as a CAS;
(1b) data flow integration instructed by one leading agent, or mutually agreed on by a group of agents, should go through formal mechanisms, which need contractual governance.

Purchase: selecting and buying the product (C2)

In the purchase step, the consumer decides about the specific product that he/she wants to buy, where he/she wants to buy it from (i.e. the sales channel) and how he/she is going to receive it (i.e. the delivery options). During the purchase and payment process, the consumer may receive more data, and the sales channels also gather some data from him/her, as follows. To finalize his/her decision, the consumer needs to understand and agree to the terms and conditions (T&Cs) of the sale (CI2.1). In practice, not many consumers are interested in reading the whole T&Cs in full, but usually some key points such as the delivery/collection date, warranty/guarantee, and return policy are reviewed in detail.

"I never read the terms and conditions. I just check the box with the agreement." (Consumer 10)

"Often I save the terms and conditions on my desktop, but then I delete them after a while without reading them." (Consumer 11)

Therefore, the sales T&Cs should be available in various formats. It is notable that although these data are mainly needed during the purchase step, and should be agreed on before payment, they should also be available for the foreseeable future, in case the consumer wants to go back and review them later. When the payment is done, the consumer should receive a full record of his/her purchase, a payment confirmation, and a tracking code to trace the order (CI2.2). These data are generated, stored, and shared at different levels of detail and in different formats, as investigated through the documentation and archival records in this research.
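As an illustration of the kind of record implied by CI2.2, the sketch below assembles a minimal purchase confirmation with a payment confirmation flag and an auto-generated tracking code. All field names are hypothetical; the actual records observed in the case companies differ in detail and format, as noted above.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import uuid

@dataclass
class PurchaseConfirmation:
    # Minimal record a consumer might receive after payment (CI2.2);
    # field names are illustrative only, not drawn from any case company.
    order_id: str
    items: list                 # (GTIN, quantity, unit price) tuples
    total_paid: float
    payment_confirmed: bool
    tracking_code: str = field(default_factory=lambda: uuid.uuid4().hex[:12])
    issued_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

receipt = PurchaseConfirmation(
    order_id="ORD-1001",
    items=[("05012345678900", 1, 59.90)],
    total_paid=59.90,
    payment_confirmed=True,
)
print(receipt.tracking_code)  # the code the consumer can use to trace the order
```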
Once the purchase is done, the consumer information (CO2.1) is passed to a number of channels to prepare the order for him/her. As the observation in all case companies (with sales facilities) of this research reveals, depending on the product and terms of purchase, these data may go beyond the name and address, and can be a good source of intelligence for future sales, market research services, manufacturers, and product and service development teams. During the purchase process, the sales channels, physical or virtual, observe, capture, and record the consumer's buying behavior (CO2.2) - for example, preferences toward promoted items, coupling a product with another one, group purchase of multiple sizes or colors, or preferences toward a particular mode of payment or delivery.

"I would prefer that one [i.e. PayPal], rather than putting my entire credit card details." (Consumer 12)

These data are of high value for future sales and the logistics services around them. CO2.3 keeps the payment data and shares them with relevant agents, if the consumer agrees, for future use. The adaptability of the omni-channel system in this step should meet variations and alterations in the sales and delivery T&Cs, consumer/payment details, and payment methods - while considering their wider operational and logistical implications. Payment methods (credit/debit card or cash) can shape the omni-channel structure and the logistics around it. For example, the click & collect model will take the form of buy-online-collect-from-store for consumers who pay by card, and the form of reserve-online-pay-in-store for cash payers, each of which has operational implications for the omni-channel system (e.g. in stock management), and should be regulated and communicated among the relevant agents properly. Similarly, any extended warranty promised by the retailer to the consumer should be formally communicated and shared with the manufacturer, to put proper plans and resources in place for it. Such an overarching view of the payment stage in omni-channel retailing is not really comparable with store-only or multi-channel retailing models, where the payment processes are handled entirely separately. Therefore, it can be stated that:

Notion 2: At the consumer journey's "purchase" stage:
(2a) payment and purchase choices affect (may change) the omni-channel arrangements and processes, where data flow integration among the relevant agents supports the omni-channel's adaptability to those changes;
(2b) omni-channel adaptability to change needs formal (contractual) governance to instruct all relevant agents of the omni-channel about the sales/purchase T&Cs and their responsibilities against the T&Cs.

Receiving and using the product (C3)

In this step, the product is handed to the consumer (e.g. in the store or at a collection point) or delivered to the address specified by the consumer. Although it is the logistics service that is largely involved in this step, there are many other agents of the omni-channel system which contribute to providing, gathering, and analyzing the input/output data of this step, as explained below. After placing the order, the consumer needs to know when the product is to be delivered or is ready for collection (CI3.1). Since there are different parties and channels involved in product delivery, it is the omni-channel's responsibility (as one unit) to communicate the delivery status to the consumer. Omni-channels have seen many examples of consumer perplexity, i.e. when he/she buys the product from company X, then receives a shipment update from logistics service Y, and a "ready for collection" message from collection point Z, separately and quite confusingly. Observing and reviewing the sales and logistics activities of this research's cases reveals similar procedures and data transactions in them. When the product is delivered/collected, the consumer should be informed via a product delivery confirmation (CI3.2). This is particularly important when the buyer is not the same person as the recipient of the product (e.g. if the item is sent as a gift to another person). Besides, attached to the product are the user guidelines (CI3.3) - in the form of booklets, e-manuals or even videos in the social media, shared by sales or non-sales sources, or by other consumers. Product delivery should be confirmed by a signature, or a photo of the consumer while he/she is receiving the product, as an approval and evidence, which the delivery service can then present to the sellers, or which can be recorded in case any dispute arises on the delivery later (CO3.1). Consumer experience and feedback (CO3.2) are then collected through various channels to maximize the feedback rate and collect further details.

"I answer the quick and easy ones [i.e. consumer review/feedback questions] but ignore the time-taking surveys." (Consumer 13)
Product delivery to a wide range of locations (e.g. home, store, and collection point), and its flexibility in responding to the consumer's heterogeneous and changing choices/instructions, indicate the omni-channel's adaptability. Product delivery operators need to be kept updated about the latest status of the consumer order and the delivery address or special requests, which needs sales-delivery channel integration - through formal platforms and arrangements around them. Such extended and integrated inter-connections in omni-channel product delivery are not traceable in bricks-and-mortar or multi-channel retailing - as the former do not expect many interconnections, and the latter cannot handle them. In view of that:

Notion 3: At the consumer journey's "receiving" stage:
(3a) integration of the delivery operations with the other relevant stages and agents of the omni-channel system is crucial for its adaptability against changes in the delivery requirements and conditions;
(3b) omni-channel adaptability to change needs formal (contractual) governance to instruct all relevant agents about their responsibility for sharing the delivery data - to facilitate a proper response to its change(s).

Aftersales services, including possible returns (C4)

For aftersales services (CI4.1), such as a warranty/guarantee, the consumer may deal with the retailer, the manufacturer or third-party service companies. Regardless of which specific company takes care of this service, the consumer expects a consistent service from the omni-channel system as a whole. In case a consumer decides to return a product, he/she needs full return instructions (CI4.2), including the return address, label, code, and so on. These should be provided by the retailer, or by the return service which specifically handles the reverse logistics part of the business. An electronic or physical receipt, including a confirmation or tracing code, would assure the consumer that the returned product is taken care of, and that the refund/exchange will be done according to the terms of the purchase (CI4.3). As a common practice observed by this research in the logistics and sales case studies, when the consumer returns the item, he/she should be sure to attach the return label, including the return barcode and address (usually provided by the return service), to the returned product (CO4.1). This helps the return service to handle the product and the refund quickly and correctly. To receive the refund, the consumer might need to re-confirm his/her payment card information (CO4.2), if the payment was not originally in cash. In some cases, the refund is not made directly, but via vouchers or coupons. To offer the aftersales operations smoothly, the associated agents need to coordinate and respond to the changing requirements of the product returns or warranty in a timely manner. The associated data flow integration, which requires the related agents' obligations, can close the adaptability loop of the omni-channel, as a CAS, at this stage, accompanying and supporting the consumer throughout his/her shopping journey - an advantage which store-only, online pure-play, or even multi-channel retailers lack.

Notion 4: At the consumer journey's "aftersales" stage:
(4a) aftersales service providers should be fully aware of the product, product data, delivery conditions and sales T&Cs; hence they need to be integrated into the rest of the omni-channel data system, to adapt to the consumer needs;
(4b) expansion of data flow integration up to the aftersales stage needs formal commitment (contractual governance) of all earlier stages of the omni-channel and its relevant agents, as well as of the aftersales service providers, to share the product, delivery, sales, aftersales, and return data.

Data flows: retail/business perspective

The omni-channel data flows on the retail/business side are organized around the main actions of the retailer in managing and responding to the market. Croxton's (2003) model organizes the retail operations, in a more conventional environment, through market review, defining and planning order fulfillment, and logistics network evaluation sub-processes. Moving toward online-offline retail environments, Leung et al. (2018) recommend a B2C e-commerce setting based on the key steps of launching sales platforms, receiving orders, internal order processing, and outbound delivery. Zhang et al. (2019) devise the retailer's online fulfillment in: (a) core order receipt, pick and pack, and shipment stages, and (b) supportive inventory management, warehousing logistics, and delivery and after-sales operations, to secure product availability and demand fulfillment. Similarly, Zhu et al. (2021) divide online order fulfillment into two parts: online operations (including market analysis and order processing) and offline operations (including stock management, picking, packing, and delivery). Further synthesis of the retailer's activities is provided in Figure 1b, where the different steps identified by the literature are mapped and matched against each other. In summary, the retail order fulfillment settings recommended by the literature can be organized in decisions and activities around the market, demand, orders, product availability, picking and preparing items for each order, delivery, and aftersales. These are consistent with the retailer's actions extracted from the collected data of this research, including:

- "having an understanding of customers", "learning about consumers' needs", "understanding the market through analysis of social media data" → outlined as R1. Market/Demand Analysis
- "pulling demand and supply capacity data together", "monitoring inventory availability", "logistics capacity" → outlined as R2. Securing Product Availability
- "product delivery", "home delivery", "click and collect" → outlined as R3. Demand Fulfillment
- "returns", "reverse logistics", "home collection", "drop-off point" → outlined as R4. Aftersales

R1-R4, forming the center column of Figure 2b, cover a broad range of activities that a business/retailer should undertake to meet the consumer demand. These start from understanding and analyzing the market, and move toward product delivery (order fulfillment). They also go further and include aftersales activities, mainly around gathering consumer feedback and handling returns. The header row of Figure 2b shows the various channels that provide data (on the left-hand side) and that collect data (on the right-hand side), to and from the retailer respectively. Details of the retailer data flows within each channel are provided in the main body of Figure 2b, and explained in sub-Sections 4.2.1-4.2.4. The code in each box (in Figure 2b) reflects the grouping conducted in the initial analysis of the case studies and interviews (e.g. RI1: Retailer Input data, relevant to retailer step 1 - market/demand analysis). The relevant IDT for each data flow is also indicated by bold capital letters in each box (e.g. B: Barcode; D: Data Analytics).
The explanations of each data transaction, in the following sub-sections, are also supported by short power quotes, as described in sub-Section 3.3 and fully listed in Table III.

Market/demand analysis (R1)

To manage its capacity, replenishment, inventory, and delivery plans, the retailer needs to have a good knowledge of the market and consumer demands, which are typically acquired from the market intelligence services (RI1.1) and/or directly from the consumer/market (RI1.2). At the same time, the retailer constantly analyzes the market, behavior, demand trends, and retail technologies. The outcomes of those analyses can be accessible to consumers (RO1.1) and market analysis services (RO1.2).

"… deep understanding of customers and market is a part of the success of omni-channel retailing." (Company 13)

"Analyzing data on Facebook and Twitter is one way of understanding the market …" (Company 13)

A diverse range of market data/intelligence sources, if shared/connected rigorously in the omni-channel system, significantly supports its individual agents in adjusting or even restructuring their processes and resources (e.g. warehouse relocation). Hence, they make the whole omni-channel system more adaptable to market uncertainties and fluctuations - compared to single or multi-channel retailers, which are quite disadvantaged in this area. In view of that:

Notion 5: At the business "market/demand analysis" stage:
(5a) to adapt to changes, the omni-channel needs rigorous demand market data flow integration, contributed to by all relevant agents;
(5b) the rigor of the market/demand data can be assured through formal data capturing and sharing mandates (i.e. contractual governance).

Securing product availability (dealing with the supply side - R2)

Based on its collective knowledge of demand (i.e. consumer confirmed orders and forecasted demand - RI2.1), the retailer plans to fulfill the demand for its products on a regular basis.

"… they manage to pull together overviews of demand data … from across different silos and Excel spreadsheets." (Company 10)

In addition to the estimated demand size, the retailer should have good visibility of stock availability with the manufacturer and wholesaler/distribution center (i.e. those who are required to replenish the retailer's order), as shown by RI2.3 in Figure 2b. Depending on the number of suppliers, managing such data can be demanding for retailers. In some of the studied companies, data exchange on ordering and product availability is done in a more integrated format, e.g. via electronic data interchange (EDI) systems.

"… replenishment is carried out through a B2B platform where … inventory availability [is visible]." (Company 11)

"… retailers are putting pressure on suppliers to take more responsibility for a master data set-up [with full visibility of stocks/orders]." (Company 1)

Moreover, the product's logistics considerations and requirements (RI2.2), such as a special storage temperature, may affect the time and condition of receiving the product, and the retailer should know about them in advance. These conditions and requirements can also be dictated by the retailer, as observed in a number of the case studies of this research (i.e. RO2.1).

"Having correct handling information will lead to better movement of products …" (Company 8)

When the order is placed by the retailer (RO2.2), the retailer expects to receive an advanced shipment notice (ASN) from the dispatch point (RI2.4). The ASN greatly helps the retailer to manage its product receiving, storage, and delivery plans and operations.
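For illustration, the sketch below shows the typical informational content of an ASN as a plain record. In practice ASNs are usually exchanged as standardized EDI messages (e.g. EDIFACT DESADV or ANSI X12 856); the field names here are simplified assumptions rather than any case company's message layout.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class AdvancedShipmentNotice:
    # Illustrative ASN content (RI2.4); field names are hypothetical.
    asn_number: str
    purchase_order: str           # the retailer order being fulfilled
    ship_from: str                # dispatch point (manufacturer or DC)
    ship_to: str                  # receiving retailer site
    expected_arrival: str         # ISO date
    lines: List[Tuple[str, int]]  # (GTIN, shipped quantity) per order line

asn = AdvancedShipmentNotice(
    asn_number="ASN-2042",
    purchase_order="PO-7781",
    ship_from="DC-North",
    ship_to="Store-014",
    expected_arrival="2022-03-01",
    lines=[("05012345678900", 240), ("05098765432109", 60)],
)
```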
To receive its orders, the retailer expects to have the latest status of the product shipment until it arrives (RI2.5). Those updates can be provided by the shipment origin (i.e. the manufacturer or wholesaler/distribution center) or by the logistics service provider.

"… there are process checks … to ensure that the right information is going to the customer at the right time … ASN is a process that we are looking at …"

"[after the order is placed] products go into a black hole until they are ready to ship." (Company 11)

If the product is supposed to be shipped to the consumer's address or a collection point, then the delivery status is to be shared with the consumer (RO2.3). Upon receiving the product, the retailer should check the product and confirm the delivery to the relevant parties. Any problem with the product or delivery is shared with those parties too (RO2.4).

"In track and trace [system] Logistics is working with the purchasing team much more closely on managing logistics data." (Company 11)

Finally, payment is made (RO2.5), according to the invoice (RI2.6).

"End to end integrated system [including point-of-sales and invoicing] will allow real time view of stock." (Company 10)

In this step, whereas RI2.1 contributes to the demand/market intelligence (adding to RI1.1 and RI1.2), RI2.2-RI2.6 provide omni-channel retailing with supply market intelligence and supply status, necessary for agents to adjust or restructure their processes and resources (e.g. stock tracking and stock availability) - which makes the whole system more adaptable to supply uncertainties and fluctuations. The RI2 and RO2 data at this stage are typically shared with/by the retailer via formal procedures (e.g. ASN and payment methods) that all relevant agents should follow.

Notion 6: At the business "securing product availability" stage:
(6a) to adapt to changes on the supply side, the omni-channel needs rigorous supply market data flow integration, contributed to by all relevant agents;
(6b) the rigor of the supply data can be assured through formal data capturing and sharing mandates (i.e. contractual governance).

Demand fulfillment (dealing with the consumer side) and aftersales (R3 & R4)

Most parts of these steps include data interchanges with the consumer, as already addressed in sub-Section 4.1. But, to deal with the consumer, the retailer should work with other agents of the omni-channel system too. Technical advice about the product, any required update or change in it, or product call-backs should originate from the manufacturer (RI4.1). The retailer should also pass the required aftersales services raised by the consumer (e.g. any technical or functional problem or difficulty in the product usage) to the manufacturer (RO4.1). General consumer feedback is also shared with other relevant parties (RO4.2). In the case of a product return, the return instructions should be provided by the relevant parties (RI4.2), and once the product is returned, its related data should be shared with those parties too (RO4.3). The retailer does not need to be involved in all steps of the return process, but it can be the interface with the consumer, managing the omni-channel's consistent sales and aftersales services toward the consumer.

"Returns is a huge challenge in omni-channel retail. … no [poor] visibility about returns leads to lack of control on our costs." (Company 9)
"… don't have visibility of returns until [they] turn up at the warehouse." (Company 11)

As addressed earlier on the consumer side (Section 4.1), retailers (as the main interface with consumers) and other omni-channel agents seek high visibility of the consumer's orders and preferences. This may include data flow integration or spontaneous tracking/searching for data via well-established mechanisms, which make the omni-channel order fulfillment and aftersales services more adaptable. This advanced capability of omni-channel systems, in managing the order fulfillment process seamlessly, cannot be found in single/multi-channel retailers, which are mainly limited to single or separate data flows.

Notion 7: At the business "demand fulfilment" and "aftersales" stages:
(7a) to adapt to the product suppliers' and service providers' changes, the omni-channel retailer needs supply/service data flow integration to assure visibility of the latest status of order fulfilment and aftersales service;
(7b) supply/service data flow integration needs formal mechanisms (i.e. contractual governance) to instruct the relevant agents in sharing the order fulfilment and aftersales data.

Omni-channel data types and IDTs

To be well systemized, the omni-channel data flows explained above need a more focused analysis to identify the omni-channel data types and the technologies employed to capture, share and analyze those data. Further synthesis of the research outcomes has grouped the omni-channel system data into five categories: product data, consumer data, business unit data, sales/delivery/return data and planning data. Details of each data category and their supporting technologies are explained as follows - each section is supported by summary power quote(s), which are fully listed in Table III.

Product data

An extensive range of data fields can be considered for the product, whose contents can be fixed/static (e.g. product name, size, and ingredients) or variable/dynamic (e.g. product freshness, price, or its location in the sales channel).

"… product data changes through different stages of the omni-channel." (Focus Group 3)

Labeling the product with a barcode and/or RFID tag can facilitate storing and capturing the product data. A Global Trade Item Number (GTIN) is usually generated by the manufacturer and can be encoded in a barcode or RFID tag, to provide a unique identity for the product (GS1, 2015).

"GTIN is fundamental to our scanning processes [and recognizing] individual units, … cartons, … [or any moving] product." (Company 9)
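The GTIN's final digit is a GS1 check digit, which lets any scanner in the chain verify a captured code. As a worked illustration, the function below implements the publicly documented GS1 check digit rule for the standard GTIN lengths (8, 12, 13 and 14 digits).

```python
def gtin_check_digit(body: str) -> int:
    # GS1 rule: weight the digits 3, 1, 3, 1, ... starting from the
    # rightmost position of the body (the GTIN without its check digit),
    # then pick the digit that brings the total up to a multiple of 10.
    total = sum(int(d) * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(reversed(body)))
    return (10 - total % 10) % 10

def is_valid_gtin(gtin: str) -> bool:
    # Accepts the standard GTIN lengths: 8, 12, 13, or 14 digits.
    if not gtin.isdigit() or len(gtin) not in (8, 12, 13, 14):
        return False
    return gtin_check_digit(gtin[:-1]) == int(gtin[-1])

print(is_valid_gtin("4006381333931"))  # True: a valid EAN-13/GTIN-13 example
```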
Basic barcodes/RFID tags can be enhanced into more advanced smart tags, which embed a wider range of dynamic product data and can be captured by various sensors or data readers across the distribution and sales channels, or even by the consumer's smartphone. In theory, it looks straightforward to keep the same unique data label for the product from the point it is manufactured until it reaches the consumer. In practice, however, intermediary companies may re-label the products to identify and capture them more easily within their own information systems - which is not an appropriate solution for the omni-channel integration as a whole.

"Barcodes are placed at point of origin, [but] for other brands … [we need to] relabel [products] with [a new] SKU number. Hierarchies of scanning from unit to box to pallet to truck [is done by barcode then]." (Company 11)

Among the more advanced technologies, augmented reality, as addressed earlier, is becoming prominent, sharing product data more thoroughly and tangibly. This is in line with the literature (Heller et al., 2019) and confirms the practical relevance of new technologies in omni-channels. Besides, the consumer's knowledge of the product is not limited to the data attached to the product. Product reviews and other consumers' comments on the product cannot be put in a structured data format (e.g. a barcode), and are typically shared in text format on world-wide-web (WWW) platforms. Similarly, consumer enquiries about the product are usually in an unstructured, text format, and can be shared via various WWW and extensible markup language (XML) platforms - where XML, for instance, annotates the text in a way that enhances its capturability by both machine and human.

Consumer data

Details of the consumer (e.g. name, age, address, and payment information) and his/her order history need to be protected by highly secured data transaction and sharing systems. Data encryption, Payment Card Industry (PCI) compliance, Secure Electronic Transaction (SET), Secure Socket Layer (SSL) technology, and Secure Hypertext Transfer Protocol (S-HTTP) are among the key technologies to be considered by the omni-channel system to protect both the consumer and the order. Basic consumer data can usually be gathered and stored in a structured format (e.g. a spreadsheet database). However, other data about the consumer, such as shopping behavior, preferences, and comments, are not well structured in the first place, and need further processing and analysis, for example by BDA.

"When collecting consumer data, [we] comply with applicable privacy legislation and regulations, [including] specific guidance on data collection from children." (Company 2)

"We take protecting consumer personal information seriously [using] firewalls, user verification, strong data encryption, and separation of roles, systems & data … Systems are proactively monitored through a "detect and respond" information security function." (Company 11)
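As one hedged illustration of the data encryption mentioned above, the sketch below encrypts a stored consumer record using symmetric encryption from the widely used third-party Python `cryptography` package. It shows protection of data at rest only; PCI compliance, SET, SSL and S-HTTP address the payment and transport layers and are not reproduced here, and the record content is invented.

```python
# Illustrative only: encrypting stored consumer details with the
# `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()    # the key itself belongs in a secure key store
cipher = Fernet(key)

record = b'{"name": "A. Consumer", "card_last4": "4242"}'
token = cipher.encrypt(record)  # ciphertext safe to persist in a database
assert cipher.decrypt(token) == record
```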
Business unit data

Key data (e.g. name, address, and products) of all business units involved in an omni-channel system should be shared among them. Satellite navigation (Sat-Nav) technologies help businesses and consumers to find, and plan journeys to, each other's locations faster and more accurately. Other business unit data, such as capacity, performance indicators, and potential capabilities, are addressed below in the planning data sub-section. In this category, the key business unit data should be stored and shared in a structured way. Available data standards such as ISO/IEC-6523 (International Organization for Standardization, 1998) or the Global Location Number (GLN) can be used to identify the business location.

Planning data

To assure product availability for the consumer, and to make the product flows smooth, the omni-channel agents need to plan various operations around delivery, inventory, manufacturing, returns, capacity, and so on. Those plans largely depend on consumer orders, market forecasts, market trends and consumer behavior, vehicle delivery capacity, warehouse capacity, human resource policies and regulations, transport infrastructure (e.g. road) capacity, manufacturing capacities, competitors' plans, demand and capacities, and the like. A more complex situation is where dynamic data (in addition to static data) are needed, for example the inventory status in terms of freshness, or the order arrival time based on the latest traffic conditions. Although quite crucial, these data are neither easily nor widely available. The available data in this category are not always in a structured format. Established systems such as ERP or a Warehouse Management System (WMS) can be helpful to generate and share more structured planning data for the omni-channel system.

Sales/delivery/return data

The process of product delivery needs to be visible to a number of parties, particularly the consumer. This visibility may include accurate product data (both static data, such as product specifications, and dynamic data, such as stock availability), shipment and arrival data, delivery and usage instructions, and returns data. To this end, RFID technology can share the latest location of the product, from the time when it is allocated to a consumer/order until it is delivered to him/her. Further details can be captured and shared by WSN and CPS. The WSN consists of sensor nodes in the distribution center, local warehouses and stock-keeping points, delivery vehicles, and pick-up points, which means that they can have fixed or moving positions. The sensor nodes go beyond RFID capacities, and measure, compute, and communicate the latest status of the product, order, and location on a real-time basis. Moving toward robotic systems in product pick-up, packaging, shipment, collection, and even home delivery, CPS links the delivery status data with the physical equipment of the omni-channel system, and ultimately integrates the product flows and data flows across the omni-channel system. Using CPS, order and product handling tools and technologies (e.g. operations planning systems and robotic tools) can lead to a smart omni-channel system, whose agents are capable of capturing and sharing data, and handling the product moves autonomously. This applies to the product forward flow, from manufacturer to consumer, as well as the product return flow from consumer to retailer, refurbishment center or manufacturer. At a more detailed level, data on transport items/equipment (e.g. pallets, cases and crates) can be standardized using the Global Returnable Asset Identifier (GRAI) coding system too (GS1, 2015).
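To ground this, the sketch below shows the kind of status event a sensor node might emit for a tracked order. Every field name is hypothetical; real WSN payloads depend on the sensors and middleware in use.

```python
from dataclasses import dataclass

@dataclass
class NodeStatusEvent:
    # Illustrative reading a WSN node might report for a tracked order;
    # field names are assumptions, not taken from any specific system.
    node_id: str          # fixed (warehouse) or moving (vehicle) sensor node
    order_id: str
    grai: str             # returnable asset (e.g. crate) identifier
    location: tuple       # (latitude, longitude)
    temperature_c: float  # dynamic product condition, e.g. for fresh food
    timestamp: str        # ISO 8601

event = NodeStatusEvent("vehicle-07", "ORD-1001", "GRAI-0123",
                        (57.70, 11.97), 3.8, "2022-03-01T09:30:00Z")
```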
IDT advancement and omni-channel adaptability

Adapting the new insights of Ivanov et al. (2020) into Industry 4.0 technologies, the IDTs identified above have been found to support omni-channel data capturing, sharing and analysis at different scales and paces. For example, where barcode technology is limited to an object's (e.g. individual product or box) static data, a smart tag can handle a much higher volume of dynamic data, ERP systems share and analyze enormous amounts of such data, and BDA deals with big data, defined by their extremely high volume, velocity and variety. Figure 3 organizes the IDT advancement based on its volume and speed in capturing, sharing, and analyzing data (shown on the y-axis), and its application to different data types (shown on the x-axis). IDTs toward the top part of the diagram are capable of dealing with a higher volume of data at a higher speed. Some IDTs have a limited scope (e.g. GLN applies to the business unit location), and some apply to multiple areas or data types (e.g. BDA).

Figure 3 here

Looking at the omni-channel's data types and their supporting IDTs, explored above, from the CAS perspective, it becomes evident that more advanced technologies, which can capture, analyze, and share the omni-channel's complex and unstructured data, enhance its adaptability. For the frequently changing product and delivery data, extending conventional barcode and RFID to more contemporary smart tags, WWW, XML, CPS, WSN, and augmented reality technologies enables omni-channels to have real-time visibility of the latest status of the products, their location and availability, and also their latest market positions (e.g. consumer feedback and ratings). Similarly, advanced consumer and business unit data management and analysis IDTs (e.g. PCI, SET and Sat-Nav) can help omni-channel retailing grasp the latest status, as well as past/future trend analysis, of the supply and demand markets. These can then boost the planning data and their relevant IDTs (e.g. ERP and WMS). For example, a business unit's ERP system, supported by real-time and more accurate product, consumer, supplier, and delivery data, can make more effective autonomous decisions/plans for its production and inventories. The contribution of IDT is not just limited to data capturing and sharing (among IDTs). Advanced IDTs are capable of transforming unstructured data (e.g. consumer behavior) into structured data (i.e. analytics and trends). Moreover, they are competent in comprehending complex data and making autonomous decisions based on them (e.g. a smart last-mile delivery system: Schwerdfeger and Boysen, 2020). The findings also imply that advanced IDTs, due to their technical and administrative complications, need more formal procedures for implementation, and any integration made by or among them requires official arrangements between the associated omni-channel agents. In view of these, it can be stated that:

Notion 8:
(8a) advanced IDTs enhance omni-channel data flow integration by capturing, analyzing, and sharing more complex and unstructured product, consumer, business unit, planning and sales/delivery/return data;
(8b) advanced IDTs, to be implemented and integrated in omni-channel systems, need contractual governance of the relevant omni-channel agents.

Further analysis and discussions

The main findings of this study, in Section 4, have illustrated the scope of data flows in omni-channels from the consumer, business, data type and IDT perspectives. Summing up notions 1a, 2a, 3a, 4a, 5a, 6a, and 7a, and considering the effects of the individual data flow integrations on the omni-channel system's adaptability collectively, leads to an inclusive proposition:

Proposition 1: Data flow integration across different agents and business processes of the omni-channel system supports omni-channel adaptability.

Notions 1b, 2b, 3b, 4b, 5b, 6b, 7b, and 8b can also sum up the impact of contractual governance on IDTs and omni-channel data flow integration, identified in different parts of the omni-channel. Therefore, it can be generalized that:

Proposition 2: Contractual governance of the relationships among omni-channel agents positively contributes to omni-channel data flow integration and IDT applications.

Beyond notions 1-7, further synthesis of the data flow integration of the consumer side and the business side, illustrated in Figures 2a and 2b, can be broken down into horizontal, vertical and total integration (Liao et al., 2017). Horizontal integration refers to communication and compatibility among the IT systems of the various omni-channel agents.
Vertical integration is about connection and synchronization among IT devices, tools, platforms and systems at the data capturing, sharing and analysis levels (e.g. connecting the equipment sensors at the shop floor level with the control machines in the operations monitoring room and the planning modules of the firm's production management system). Total integration includes the alignment of both horizontal and vertical integration elements across each other, linking and coordinating all relevant devices, business processes and decisions throughout the omni-channel system. As an example of horizontal and vertical integration, in CI1.4, on one hand the stock availability data should be shared among the consumer, manufacturer, wholesaler, distribution center, and retailer (horizontal integration); and on the other hand product identification (e.g. barcode and RFID), data capturing devices (e.g. scanners and sensors), data transmission instruments (e.g. WSN), the inventory database, and order fulfillment decisions should be communicating and integrated (vertical integration). In order to achieve total integration, the data-provider and data-collector agents of the omni-channel system (i.e. the header rows of Figures 2a and 2b) should be linked with each other to close the data flow loop - that is, the data collected on the right-hand side of Figures 2a and 2b need to be processed and fed back to the left-hand side of the figures. This is evidenced by a number of case companies of the current research, where the feedback data are transferred from/to the online retailer (addressed in Co11), physical retailer (addressed in Co14), manufacturer (addressed in Co1, 2 & 4), wholesalers (addressed in Co5), delivery service (addressed in Co6, 7 & 10), returns (addressed in Co9) and market and social media (addressed in Co13). Figures 4a and 4b illustrate the total integration (i.e. the feedback-loop data flows) by the triangle tables added on top of Figures 2a and 2b. The supporting examples of Figures 4a and 4b (borrowed from the case companies mentioned above) show how the data collected on the right-hand side of Figures 2a and 2b are processed and analyzed to be fed back to the data providers. The intended total integration will enable managing omni-channel complexities at a greater scale and scope than data capturing, sharing and analysis only within the consumer shopping journey and the business order fulfillment process. Data transactions and system synchronizations among the agents can form a strong base for the horizontal and vertical integrations, making omni-channel retailing, at the agent level and as a whole system, more adaptable to internal and external changes and uncertainties. In view of that, it can be stated that:

Proposition 3: Omni-channel data flow total integration is achievable through coordination of the horizontal integration and vertical integration of the relevant devices, business processes and decisions throughout the omni-channel system.

Figure 4 here

It has become evident that omni-channel vertical integration largely depends on IDT tools and devices for data definition, identification, capturing, and sharing (e.g. barcodes and RFID to embed data; sensors and scanners to identify and capture data; and CPS and WSN to collect and share the captured data with data management systems such as WMS or ERP). Omni-channel horizontal integration needs connections among the agents and channels, which is achievable through interconnected and communicating databases and information systems (e.g. ERP).
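Returning to the CI1.4 stock availability example, the toy sketch below illustrates the horizontal side of that integration: one stock update published once and consumed consistently by several subscribed agents. It is a deliberately minimal stand-in for the shared databases, EDI links, or messaging middleware that the case companies actually use.

```python
from collections import defaultdict

class StockBus:
    # Toy publish/subscribe hub: one stock update reaches every subscribed
    # agent (store display, website, manufacturer, ...) consistently.
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, gtin, agent):
        self.subscribers[gtin].append(agent)

    def publish(self, gtin, quantity):
        for agent in self.subscribers[gtin]:
            agent(gtin, quantity)

bus = StockBus()
bus.subscribe("05012345678900", lambda g, q: print(f"store display: {q} left"))
bus.subscribe("05012345678900", lambda g, q: print(f"website: {q} in stock"))
bus.publish("05012345678900", 12)  # both channels now show the same figure
```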
Total integration then needs major support from advanced IDTs, such as BDA and IoT. In this regard, the IDTs appropriate for omni-channel total integration are derived from the case companies of this research as follows:

- Social media is found to be a source of big data on consumers' preferences, behavior, and feedback, which, if analyzed properly with BDA, forms a source of knowledge for the omni-channel system. (Company 13)
- Demand data (e.g. demand size, location, and type) are collected from various sources (e.g. online sales websites, product reviews, retail shops, and delivery services). Shared sales databases and XML/WWW technologies are helpful, and should be coupled with advanced analytics to provide a good view of demand. (Company 10)
- Shared databases and XML/WWW platforms among the various parties involved in the demand fulfillment process (e.g. manufacturer, distribution center, and delivery) assure accurate and timely decisions and actions (e.g. on inventory availability) by the omni-channel system. (Company 11)
- Data visibility is expected to be omni-channel-wide, where real-time data are communicated among the relevant entities in an IoT system (e.g. the manufacturer can see the latest status and performance of its products at the retailer or consumer sites). (Company 1)
- Product returns/reverse logistics is a valuable source of market data, which contributes to market intelligence if recorded and analyzed adequately [using BDA]. (Company 9)
- Consumer/product data, collected by various omni-channel entities, need to be shared by the relevant parties (i.e. emphasizing the role of IoT and shared databases), and analyzed at an advanced level (i.e. emphasizing the role of BDA). (Companies 9, 10)

Accordingly, the IDTs required/implemented to achieve total integration are identified and proposed for the triangle tables of Figures 4a and 4b. The outcomes of total integration and its supporting IDTs (e.g. BDA) can go beyond the processed data, and in most cases can build a knowledge base and intelligence for the future decisions and actions of the omni-channel system. Therefore, while the earlier notion 8a refers to the relationship between IDT and omni-channel data flow integration, the further analysis above emphasizes the role of more advanced IDTs in achieving omni-channel total integration:

Proposition 4: IDTs support omni-channel data flow integration; specifically, more advanced IDTs (e.g. BDA) support the omni-channel system toward total integration.

Consistent with supply chain inter-organizational governance theory (Ashenbaum et al., 2009; Choi and Kim, 2008; Mahapatra et al., 2010), the recommended frameworks shown in Figures 2 and 4 indicate that the strength of the inter-agent and inter-channel connections, as well as their structures, can highly affect omni-channel integration. The omni-channel vertical and horizontal integrations, as devised earlier, can be achieved through contractual governance (Cao and Lumineau, 2015). Formal mechanisms of contractual governance (e.g. binding agreements and official instructions) can direct, enable and even enforce (if needed) data sharing and integration among agents (e.g. a major retailer demands and imposes on its delivery service providers the use of satellite-navigation systems). Over and above vertical and horizontal integration, total integration involves widely distributed and heterogeneous data exchanges, agents, and resources, where contractual governance is typically expensive and inefficient (Wacker et al., 2016).
Therefore, at the total integration level, patterns of inter-organizational connections need to emerge (Mahapatra et al., 2019) to assure data visibility and integration throughout the omni-channel system. This, in turn, makes both contractual and relational governance essential for omni-channel retailing, where formal agreements as well as informal cooperation among agents will lead to more complementary relationships (Formentini and Romano, 2016). Further investigation of total integration, and of the role of contractual governance in it, points out that at this level of integration contractual governance is not about enforcing requirements (e.g. how to use big data), but more about clarification and definability, albeit formally, of integration decisions and actions. This is in line with the definition of contractual governance based on definability and enforceability, where contractual definability refers to clarifying the roles, responsibilities and codes of conduct of the associated parties. In view of those, the more specific role of governance in omni-channel total integration can be stated as:

Proposition 5: Relational governance and contractual governance (in the form of contractual definability) support omni-channel total integration.

Conclusions

Drawing upon in-depth explorative research and substantial qualitative data, this study has developed a thorough empirical insight into omni-channel data flows and their supporting IDTs. Accordingly, data flow frameworks are developed to illustrate the omni-channel data flow integration, from both the consumer and the retailer/business perspectives (Figures 1 and 2). The recommended frameworks show how integrated data flows can be materialized by horizontal, vertical and total integration (addressing research question i). They also point out how inter-organizational relationships should be managed to support data flow integrations toward enhancing omni-channel adaptability (addressing research question iii). Relatedly, the IDTs required to support the data flow frameworks are identified, and their specific implications for omni-channel integration are discussed (addressing research question ii). The research outcomes above are formalized in five propositions (Figure 5). Overall, this research advances the existing general ideas around omni-channel retailing to a more specific, robust and pragmatic level, where the details of data transactions and integration, their supporting IDTs, their contribution to the omni-channel system, and their enablers, in terms of inter-organizational governance mechanisms, are specified.

Theoretical contribution

Given the lack of established theories in the omni-channel literature, this research goes beyond a basic description of "what omni-channel is" and "why it is important for the retail sector", and debates what specific data should be shared, at what levels and between which agents of the omni-channel system, to assure a well-integrated omni-channel retailing system. In view of that, and building upon the evident necessity of integration for omni-channel systems (Gallino and Moreno, 2014), this paper has shown how integrated data flows can be materialized at the horizontal, vertical and total integration levels. This paper posits that framing the data flows according to the sources and users of data helps the omni-channel system identify how to integrate them thoroughly.
The recommended detailed frameworks for horizontal, vertical and total integration directly respond to the needs for data exchange among channels, online and offline channel integration, interconnection of products and information records, and synchronization of supply and demand data, frequently raised by the literature (Gallino and Moreno, 2014; Herhausen et al., 2015; Onal et al., 2018; Verhagen and van Dolen, 2009). The recommended data flows and integrations of this research also minimize the data inconsistencies across different information silos, as labeled by Briedis et al. (2019), and, for instance, improve inventory data accuracy - a major concern in the omni-channel literature (Barratt et al., 2018). Moreover, the current research shows how the implicit propositions of Mirzabeiki and Saghiri (2020) on product data track and trace and automation can be implemented more explicitly and thoroughly, through omni-channel-wide, IDT-supported horizontal, vertical and total data flow integration. Besides the contributions above, this research emphasizes the specific role of IDTs in achieving data flow integration. In view of that, it goes beyond the conventional barcode, RFID and EDI tools and recommends the application of advanced IDTs including WSN, CPS, IoT, AI and BDA (Frank et al., 2019). Thanks to advanced sensors, data transmission technologies and the Internet, real-time data can be automatically captured and shared by connected things. However, it should be noted that automatic data production in a complex system such as the omni-channel, if not managed adequately, may just add to its complications. Although studies such as Xu et al. (2018) underline the necessity of proper analysis of the big data generated through advanced technologies, this paper points out that the omni-channel system should first look at data capturing and sharing more systematically and holistically, and generate, share and analyze the relevant data based on a well-established framework of data transactions, with well-defined horizontal, vertical, and total integrations among them. Determining specific IDTs for omni-channel horizontal, vertical and total integration also contributes to the literature by extending the recent research and propositions on the relationship between store/online data capturing technologies and data analytics (Jocevski, 2020); the need for more advanced information systems to support omni-channel data synchronization (Kembro and Norrman, 2019a); the necessity of an IDT infrastructure to achieve data consistency across the omni-channel system (Kazancoglu and Aydin, 2018; Larke et al., 2018); and the role of order track and trace technologies in the consumer's choice of channel (Xu and Jackson, 2019). In addition to the technical insights above, the contribution of this research is viewed from the perspectives of CAS and inter-organizational governance theories. The juxtaposition of the two theories helps this research to explain data flow integrations in omni-channel systems. In the recommended omni-channel data flow frameworks of this paper, the agents which share and gather data (i.e. the header rows of Figures 2a and 2b) represent the agents of the omni-channel system, and the proposed intra- and inter-connections among them form a common schema for the omni-channel complex system.
In effect, the interaction and connectivity required to manage a complex system such as the omni-channel are achieved through the threefold integrations (horizontal, vertical, and total) mapped against the data flow frameworks of this paper. The self-organization and autonomy element of CAS is largely supported by the recommended moves toward total integration. Well-developed data flows, when coupled with advanced IDTs, can enable self-organized agents to make timely, autonomous decisions about physical flows (e.g. order fulfillment, shipment, lot sizes, and stock-keeping points) and data flows (e.g. orders, ASNs, invoices, and recalls). The self-organization feature of a well-integrated omni-channel system can ideally be defined at all levels, such as warehouse robots (to make storage and filling decisions), CPS in the production line (to make production decisions), order-fulfillment systems (to make ordering and delivery decisions), and Sat-Nav-supported delivery vehicles (to make route planning decisions). Further synthesis of, and discussion on, the research outcomes also shows that contractual governance and relational governance do not necessarily substitute for each other (Cao and Lumineau, 2015), and while the former supports omni-channel horizontal and vertical integration, it needs the latter for total integration. This is in line with analytical research on omni-channel retailing (Giannikas and McFarlane, 2021), which reveals that the implementation of IDT is a matter of acceptance by omni-channel agents (i.e. relational) as well as of formal feasibility (i.e. contractual). The inter-organizational governance needed to enforce or facilitate data flow integration and IDT implementation by omni-channel agents is, nevertheless, open to further discussion and argument, as it largely relies on a central or leading agent (Mahapatra et al., 2019), which may not exist in omni-channel retailing. Moreover, while it has been emphasized that IDT implementation has major impacts on inter-organizational governance (Lumineau et al., 2020), this paper underlines another direction of that relationship, where both contractual and relational governance models are found supportive of omni-channel integration and IDT implementation. In combination, CAS and inter-organizational governance theories enhance and extend the understanding of omni-channel data flows and integrations, and their links with omni-channel adaptability and governance mechanisms.

Practical contribution

This paper offers four practical contributions. First, understanding the omni-channel data flows, and where they originate from and go to, will help omni-channel managers make informed decisions and take more timely actions - in sourcing, purchase, storage, shipment, sales, logistics, and return operations. The magnitude and scope of the data flows, and of the integrations among them, in the omni-channel system underline its essential difference from store-only, online pure-play, and multi-channel retailing. This provides single/multi-channel retail managers with a more explicit view of their route to omni-channel retailing, should they decide to move toward it. Second, given the cases of fragmented and disjointed processes reported in many omni-channels, as pointed out earlier, the recommended data flow integrations of this paper provide practical insights for omni-channel managers to materialize seamless flows of products across both the business and consumer sides.
Hence, omni-channel managers should be aware that data flow integration should occur (a) horizontally, among the relevant agents/processes which make, keep, sell, deliver and return the product across different channels; and (b) vertically, among data capturing, storing, sharing, and analyzing instruments, equipment, and technologies. These significantly affect the decisions on omni-channel structures and processes. IDTs have been found crucial to making these happen. This indicates the third practical contribution of this research, by emphasizing to omni-channel managers that IDTs do not only make data transactions faster or more accurate in their day-to-day business, but also connect and synchronize the horizontal and vertical integrations to form a totally integrated omni-channel. More advanced IDTs (e.g. CPS, IoT, AI, and BDA) then go beyond data flow management and integration, and facilitate autonomous decisions and actions. To achieve this level of autonomy at the total integration level, omni-channel managers need to prioritize training and investment in advanced IDTs. Fourth, this research highlights for managers that data flow integrations and their supporting IDTs have a direct effect on their business competitiveness, by making the omni-channel more adaptable (to its surrounding uncertainties and changes). In line with this, the implementation of data flow integrations and their supporting IDTs is found to depend on inter-organizational governance mechanisms. Hence, managers are advised to consider formal arrangements to enforce the execution of IDTs across the relevant omni-channel agents. Moving further toward total integration, omni-channel managers also need to work on relational governance mechanisms to promote and facilitate more advanced IDT applications.

Future research

The recommended frameworks and suggestions of this research can initiate a number of debates on their enablers and barriers, and on implementation challenges. Future research can expand the recommended framework of Figure 2b to other businesses (in addition to the retailer), and study the expansion of the omni-channel total integration (as suggested by Figure 4) accordingly. Future research also needs to address the uncertainty around advanced IDT implementations in omni-channels, as a complex system. The application of advanced IDTs and the achievement of total omni-channel integration also need data and data management standardization. This calls for further research on the standardization of data structures and types, as well as data capture and communication procedures, as major drivers of omni-channel systems. Inter-organizational relationship governance in omni-channel systems has also been found largely unexplored, and further research is needed to progress the pioneering understanding of this paper on omni-channel governance mechanisms and their effects.

Summary of the literature on omni-channel integration (necessity for and requirements of integration; opportunities and challenges of integration; impacts of integration):

- Marchet et al. (2018): Integration is a foundation of the omni-channel system.
- Melacini et al. (2018); Mirzabeiki and Saghiri (2020): Omni-channels need well-defined integration plans.
- Onal et al. (2018): Integration is needed within and among the warehouse systems.
- Shen et al. (2018): Channel integration (including service transparency, channel choice variety and content, and process consistency) significantly affects the users' experience.
Table. Key literature on omni-channel integration (necessity and requirements, opportunities and challenges, and impacts of integration):
- Marchet et al. (2018): Integration is a foundation of the omni-channel system.
- Melacini et al. (2018); Mirzabeiki and Saghiri (2020): Omni-channels need well-defined integration plans.
- Onal et al. (2018): Integration is needed within and among the warehouse systems.
- Shen et al. (2018): Channel integration (including service transparency, channel choice variety and content, and process consistency) significantly affects the users' experience.
- Sousa and Amorim (2018): Full integration among physical and data flows is needed; it provides an opportunity for omni-channel entities to see and share product, inventory, sales and/or logistics data throughout the omni-channel system.
- Sousa and Voss (2006): Information management platforms and systems (e.g. cross-firm ERP systems) are needed for online-offline integration.
- Verhagen and van Dolen (2009); Wollenburg et al. (2018b): Integrated management of inventory, delivery, and return is necessary for omni-channel.

Table. Data collection methods, details, and their aim and contribution to the findings of the study:

Interviews. In total 68 interviews with companies involved in omni-channels and with customers of retailers who sell their products via omni-channels. Companies: interviews with 30 representatives from participating companies, leading to more than 3,100 pages of transcripts and 1,350 minutes of recorded conversation. Consumers: interviews with 38 consumers buying apparel, food, and fast-moving consumer goods (FMCG) via omni-channels (in store, online, and click & collect), leading to more than 2,460 pages of transcripts and about 970 minutes of recorded conversation. They cover all age ranges from 20 to 70, and all education levels (high school to PhD). These interviewees are quoted in the manuscript as Customer 1 - Customer 38. Aim: to receive explanations about the omni-channels of the studied companies (e.g. their channels and selling methods), data capturing and sharing mechanisms, the level of integration and digitalization of their omni-channels, the IDTs used, and the challenges and opportunities associated with them; also, to receive information about the experience of consumers with omni-channels and the methods for receiving and providing data from/to them during their shopping experience.

Documentation and archival records. Websites of all participating companies (17 websites); annual reports of 17 companies; 97 internal reports of the companies about their omni-channels; 56 articles and white papers by the companies participating in this study and by the leading industrial press about the omni-channels of the studied companies and the digitalization of their operations; and films from the companies showing how IDTs, including IoT technology, BDA, and sensor-equipped RFID devices, are used in omni-channel operations. Aim: technical aspects of IDTs and the way they capture, share, and store data within one company or among several organizations in an omni-channel can be obtained via documents, including technical reports with figures and maps of the systems; these complete the qualitative data received via other sources. Details of the companies' performance are gathered via their annual reports; more details about the companies' omni-channels and their initiatives, projects, and investments in IDT are gathered via white papers, internal reports, and industrial articles. Documentation also enables data triangulation.

Direct and participant observations. Visiting five sites using IDTs for capturing information about products and cargo, including observation of: a port operating by using RFID tags on containers; RFID readers and sensors installed alongside rail tracks to receive verified information from every RFID-tagged rail wagon in transportation; an IoT lab which visualizes and tests different types of tags and sensors to be used in different kinds of environments for enabling connectivity of objects; smart trays, using temperature-sensor-equipped RFID tags, used for tracking and tracing fresh seafood among several companies in a retail network; and a distribution center in which RFID-tag-equipped pallets are handled by lift trucks equipped with RFID readers. Aim: observations lead to a much deeper understanding by the researchers of the IDTs used in omni-channels and the technical limitations associated with each of them, the price of the technology, and data sharing aspects including trust and sharing commercially sensitive data. Observations also enable data triangulation.

Focus groups. In total four focus group meetings with participants from the IDT providers (companies 15-17): the first meeting with two researchers and 11 managers, lasting four hours; the second with two researchers and 8 managers, taking five hours; the third with two researchers and 7 managers, lasting four hours; and the fourth with two researchers and 8 managers, taking four hours. Aim: providing important insights regarding inter-organizational aspects of data management and the IDTs used for it in omni-channels (e.g. inter-operability of the databases of companies when exchanging data) by presenting and discussing the perspectives of their companies. These meetings help to identify the suitable companies and sectors to focus on; they lead to improving the findings and propositions of the study, and they enable data triangulation.

Table III. Power quotes of the interviews, supporting the omni-channel data frameworks (Co: Company, Cu: Customer, FG: Focus Group). Quotes are grouped by data flow*, with the source in brackets.

C1:
- "I go to the store to see the product physically and eventually try it on. Then I prefer to buy online both because I can think about it and because it is very convenient to receive the product directly at home." [Cu1]
- "I believe the opening of pop up stores can bring us back to the public and enable businesses to get more public attention." [Cu2]
- "Augmented reality is a new way of trying products - closest to physically trying the products. Just select a pair of shoes of your choice from the digital catalog, point the iPhone camera at your right leg and a simulation of how the sneaker would look will be shown." [Cu3]
- "One of the things that I like the most about searching online is the possibility to read other consumers' reviews. I think that the enthusiasm or the disappointment expressed by another person is affecting my choice." [Cu4]
- "[social media] influencers try clothes and share it with us, which is affecting my decision to buy them." [Cu5]
- "I like seeing these advertisements on social media because I am looking for something that is kind of entertaining. So, if I see something with those characteristics, I will check out the product and click on the link in the ad that takes me directly to the online store ..." [Cu6]
- "I often purchase on multi-brand websites because they offer great discounts and more product stock variety." [Cu7]
- "I find some Instagram adverts useful, because they suggest me products that I am looking for." [Cu8]
- "I am surprised that companies have so much information about me." [Cu9]

C2:
- "I never read the terms and conditions. I just check the box with the agreement." [Cu10]
- "Often I save the terms and conditions on my desktop, but then I delete them after a while without reading them." [Cu11]
- "If you have different payment apps like PayPal, I would prefer that one, rather than putting my entire credit card details." [Cu12]

C3:
- "Often a questionnaire is sent asking about the shopping experience. I answer the quick and easy ones but ignore the time-taking surveys." [Cu13]

C4:
- "I prefer to buy from only one retailer because I know their terms and conditions. Because different online retailers have different return terms and conditions and I do not have time to go and review all of them." [Cu14]
- "I buy from companies who have an easy and straightforward return ... If a company gives vouchers for returned items or if I should pay for posting the product back then I don't buy from them unless their price is very low." [Cu15]

R1:
- "Having a deep understanding of your customers and market is a part of the success of omni-channel retailing." [Co13]
- "Analyzing data on Facebook and Twitter is one way of understanding the market ... there are [other] ways too. For example, the best way of learning about the needs of customers of running shoes and clothing is arranging group running events and working with coaches." [Co13]

R2:
- "Currently they manage to pull together overviews of demand data very manually from across different silos and Excel spreadsheets" ... "one of our DCs alone has 3,500 suppliers; some data consolidation is conducted but we need to do more [on data management]." [Co10]
- "For wholesale accounts replenishment is carried out through a B2B platform where they can see inventory availability and place orders." [Co11]
- "At the moment retailers are putting pressure on suppliers to take more responsibility for a master data set-up, ordering all of those transaction elements of the supply chain." [Co1]
- "Having correct handling [requirements] information will lead to better movement of products in our supply chain." [Co8]
- "[after the order is placed] products go into a black hole until when they are ready to ship." [Co11]
- "ASN is an area or process that we are looking at. The information on the pallet before it leaves the warehouse, [should be] the same as the information that has been sent through the ASN message. So, there are process checks that you can have in place to ensure that the right information is going to the customer at the right time." [Co11]
- "UPS use their own track and trace - they have a current pilot in place for own track and trace system." [Co3]
- "Customer home delivery orders go out by AAA Mail currently. They use an Access database from the warehouse to manage online orders - they wanted to switch carriers to [XYZ Logistics] but the Access database can't integrate with Hermes so stuck with [AAA Mail] (and [BBB Mail] for next day deliveries)." [Co10]
- "Track and trace is in place and is all managed through SAP [Enterprise Resource Planning system]. Logistics [Department] is working with the purchasing team much more closely on managing logistics data." [Co11]
- "We have a 2 year project to replace all systems - store EPOS and ERP and planning/warehousing systems ... Vendor selection has been made ... End to end integrated system will allow real time view of stock. Currently they manage to pull together overviews of data very manually from across different silos/Excel spreadsheets." [Co10]

R3:
- "Proposition and the requirements (e.g. costs involved in meeting two hour delivery slots from point of order) - customers are now more savvy, especially when they don't need it immediately; however this shifts in the week pre-Christmas where customers are now very demanding." [Co9]
- "Anything that can reduce the amount of [data] errors will only help the relationship." [Co1]

R4:
- "Returns is a huge challenge in omni-channel retail. ... no visibility about how many times a thing [item] was returned leads to lack of control on our [operations] costs." [Co9]
- "We don't have visibility of what is being returned until it turns up at the warehouse." [Co11]

Product Data:
- "... product data changes through different stages of the omni-channel." [FG3]
- "GTIN is fundamental to our many scanning processes; [it is] used on either individual units or cartons, whatever the product is moving." [Co9]
- "Barcodes are placed at point of origin, for other brands [we need to] often relabel [products] with the [Company 11] SKU number." [Co11]
- "Barcodes enable semi-automation ... [and] RFID would give more 'warehouse' style stock accuracy on the shop floor which will improve fulfillment." "Hierarchies of scanning from unit to box to pallet to truck [is done by barcode and] we would like to do the same with RFID where appropriate."

Consumer Data:
- "When collecting consumer data, [Company 2] complies with applicable privacy legislation and regulations, and applies [Company 2] standards where specific regulation is not yet in place. The [Company 2] Data Collection Guidelines also include specific guidance on data collection from children." [Co2]
- "We take protecting consumer personal information seriously and are continuously developing our security systems and processes. Some of the controls we have in place are: ... We use technology controls for our information systems, such as firewalls, user verification, strong data encryption, and separation of roles, systems & data ... Systems are proactively monitored through a 'detect and respond' information security function." [Co20]

Business Unit Data:
- "[Company 1] uses a single Global Location Number (GLN), a code that uniquely identifies [an omni-channel] member organization, for its entire product range." [Co1]
- "[Company 2] is participating in the United Nations' (UN) 'Blue Number' program ... that gives farmers an online presence, ... connects them to buyers ... also assigns a unique global location number (GLN) to farmers around the world." [Co2]

Planning Data:
- "The [planning] information exists, but not under one umbrella." [Co9]
- "Currently we manage to pull together overviews of [planning] data very manually from across different silos/Excel spreadsheets." [Co10]

Delivery Data:
- "We have EDI set up with 3PL to feed data into whatever WMS system they are using." [Co3]
- "Customer home delivery orders go out by Royal Mail currently. They use an Access database from the warehouse to manage online orders - they wanted to switch carriers to Hermes but the Access database can't integrate with Hermes so stuck with Royal Mail (and UK Mail for next day deliveries)." [Co10]
- "WMS different case by case - if not SAP then have EDI set up with 3PL so feed data into whatever WMS system they are using." [Co3]

* Associated with the consumer journey and retailer steps in Figures 1 and 2, and data types in Section 4.3.

Table IV. The case study research quality tests according to Yin (2014), and the way the tactics related to each of them are used in this research.

Construct validity (phase of research: data collection and composition):
- Multiple sources of evidence (interviews, documentation, focus groups, and observations) are used, enabling data triangulation.
- A chain of evidence is created and used when collecting data, by identifying the different agents of omni-channels, the types of data they provide and receive, the IDTs used for managing their data flows, and the governance mechanisms required to manage their relationships.
- Company representatives review and confirm the findings of the study before the outcomes are finalized and published.

Internal validity (phase of research: data analysis):
- Key theme-matching and coding, with support of the key literature, are done when analysing data.
- Explanations are built, highlighting the connections among the omni-channel data flows, agents, IDTs, and inter-organizational governance mechanisms.
- Theoretical associations are made both at within-case and cross-case levels, as a basis for pattern matching.

External validity (phase of research: research design):
- Well-defined criteria are used for case study selection, as explained in Section 3.1.
- Multiple case studies (17 in total), from a range of industries and from various omni-channel agents, as well as interviews with both businesses and consumers, strongly support the generalizability of the research.

Reliability (phase of research: data collection):
- A case study protocol is developed and used in a systematic way.
- A case database is created in which all the data (interview files and transcripts, documents, and meeting recordings and notes) are stored.

Table. Classifications of consumer shopping steps (Reference / Distinguished consumer shopping steps / Main feature(s) of the classification): Grewal and Roggeveen (2020); Lemon and Verhoef (2016); Figure 2a of this paper - channels/entities providing data and channels/entities gathering data.
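The GTIN mentioned in the power quotes above underpins the scanning hierarchies from unit to box to pallet to truck. As a small aside, the check-digit rule that lets scanners reject misread codes can be sketched as follows; this is the public GS1 modulo-10 algorithm, not code from any of the studied companies.

```python
def gtin_check_digit(body: str) -> int:
    """Compute the GS1 check digit for the digits of a GTIN
    (all digits except the final check digit)."""
    total = 0
    # Working right to left, digit weights alternate 3, 1, 3, 1, ...
    for i, ch in enumerate(reversed(body)):
        weight = 3 if i % 2 == 0 else 1
        total += int(ch) * weight
    return (10 - total % 10) % 10

def is_valid_gtin(gtin: str) -> bool:
    """Validate a GTIN-8/12/13/14 as scanned from a barcode."""
    if not gtin.isdigit() or len(gtin) not in (8, 12, 13, 14):
        return False
    return gtin_check_digit(gtin[:-1]) == int(gtin[-1])

print(is_valid_gtin("4006381333931"))  # True: a commonly cited valid EAN-13
```

The same routine covers the whole GS1 family (GTIN-8, -12, -13, -14), which is what allows one validation step to be reused at every level of the unit-to-truck scanning hierarchy.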
Job Training and Job Search Assistance Policies in Developing Countries

In countries across the developing world, headlines commonly warn of a combined jobs crisis and demographic time bomb, in which millions of jobs need to be created each year. For example, "'India is sitting on a time bomb': Jobs crisis looms as population soars" warns "The country needs to create at least 90 million new non-farm jobs by 2030 to absorb new workers" (NewsIn.Asia, 2023). "Africa's Youth Unemployment Crisis Is a Global Problem" notes "while 10 million to 12 million youth enter the workforce in Africa each year, only 3 million formal jobs are created annually" (Donkor, 2021). Along with the issues posed by new job seekers in the future, unemployment of existing workers is already high in some countries. Despite a perception that unemployment rates tend to be low in developing countries because people can always work in agriculture or are too poor not to work, the 20 countries with the highest unemployment rates in the world in 2022 were all in the developing world (Figure 1). Unemployment rates exceeded 15%, higher than the 13% in Spain and 12% in Greece, the European Union's two highest rates, and far in excess of the 3.6% unemployment rate in the United States. They were even higher for youth, exceeding 30% in most of these 20 countries.

Figure 1: The countries with the highest unemployment rates are all in the developing world. Source: World Bank's World Development Indicators, July 25, 2023, update. Note: Unemployment refers to the share of the total labor force that is without work but available for and seeking employment. Youth unemployment is similarly defined for the share of the total labor force ages 15-24.
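For readers who want to reproduce a ranking like the one behind Figure 1, the sketch below pulls the unemployment series from the World Bank's public API. Treat it as a rough sketch: we assume the indicator code SL.UEM.TOTL.ZS (unemployment, % of the total labor force) and the v2 API response layout, and the raw feed mixes regional aggregates in with countries, which a careful replication would filter out using the country metadata endpoint.

```python
import requests

# Assumed World Bank indicator: SL.UEM.TOTL.ZS = unemployment, total
# (% of labor force). The youth series would be SL.UEM.1524.ZS.
URL = "https://api.worldbank.org/v2/country/all/indicator/SL.UEM.TOTL.ZS"

resp = requests.get(URL, params={"format": "json", "date": "2022", "per_page": "400"})
meta, rows = resp.json()  # v2 responses are a [metadata, observations] pair

# Keep observations with a value and rank them; note this naive version
# does not yet drop regional aggregates from the list.
values = [(r["country"]["value"], r["value"]) for r in rows if r["value"] is not None]
for name, rate in sorted(values, key=lambda x: -x[1])[:20]:
    print(f"{name:<30} {rate:5.1f}%")
```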
Even among those with employment, many find themselves in low-paid, informal jobs, and would like to do better. Changes in the structure of the economy and job opportunities due to automation, climate change, and lasting impacts of the COVID-19 pandemic are thought to have made the challenge even harder, resulting in headlines such as "Robots Pose Big Threat to Jobs in Africa, Researchers Warn" (Ridgwell, 2018), "Rising heat stress could destroy 80 million jobs by 2030, UN says" (Taylor, 2019), and "This Chinese jobs crisis could be its worst" (Chen, 2023).

Faced with these current employment challenges and such forecasts, governments face pressure to help job-seekers. One viewpoint is that the problem is a skills mismatch, where many of those seeking jobs do not have the skills sought by employers. This may be due to poor education systems, as well as changes in the skills demanded by employers as economic growth, technological change, globalization, and the desire for a green transition change the structure of the economy. A popular solution is then for the government to provide job training so that jobseekers can acquire these skills. An alternative viewpoint is that even when workers have skills that employers want, they have difficulty finding the right job fit due to search and matching frictions. Fragmented and largely informal labor markets may make it difficult to identify job openings, and workers without much work experience may not know how best to look for jobs or signal their skills to potential employers. This results in governments providing a range of job search assistance policies designed to help jobseekers better use the skills they do have.

This paper asks whether, when, and how developing country governments should undertake job training and job search assistance policies. Job generation will typically require policies that boost the demand for labor by increasing firm productivity and overall economic growth. Supply-side interventions that train workers and help them look for jobs will not be very effective if there are few jobs for them to look for. An earlier generation of critical reviews and meta-analyses of the first generation of evaluations of these programs found the typical impacts were rather small, with typically only 2 or 3 people out of every 100 trained or assisted finding work as a result of these programs (McKenzie, 2017). As a result, Blattman and Ralston (2015) argued that "it is hard to find a skills training program that passes a simple cost-benefit test." However, there have been recent innovations in how job training and job search programs in developing countries are designed, targeted, and implemented.[1] We argue that there is a real, if more limited, role for government in providing these programs. For example, government action may be warranted when private firms are underinvesting in training of workers due to the possibility workers will leave, and when search and matching frictions slow down or prevent the reallocation of workers across sectors and geography that is critical for structural transformation. Equity concerns may also justify the use of these policies to help disadvantaged jobseekers find jobs. We begin with an overview of the worse and better reasons why governments in developing countries have become involved in job search and job training. We then outline lessons from recent experiences in developing economies with vocational training, policies designed to overcome spatial and informational search frictions, interventions aimed at overcoming psychological barriers to job search, and government efforts to encourage the development and use of online jobs platforms.

[1] We focus on job training and job search assistance as two of the most common types of active labor market policies undertaken by governments. There are several other types of active labor market policy that we do not cover. For example, another common form of skills training is to teach jobseekers business skills in order that they can start their own small businesses (for a review, see McKenzie et al. 2021). Governments can also directly provide jobs to workers through large public works programs (reviewed in Gehrke and Hartwig, 2018) or make it cheaper for firms to hire workers through wage subsidy programs (reviewed in McKenzie, 2017). We also restrict our focus to largely urban labor markets, reflecting government efforts to integrate workers into wage jobs, and do not discuss policies to increase agricultural employment.
Good and Not-So-Good Reasons for Governments to Provide Job Training and Search Assistance

Firms typically have strong incentives to find and retain good workers, and many workers likewise have strong incentives to look for jobs and seek training in the skills demanded by the market. As a result, the vast majority of jobs are filled without government intervention. Table 1 illustrates this point, using some of the relatively few available labor force surveys that directly ask employed workers in developing countries how they found their jobs. The vast majority of jobs are found by workers either learning about jobs through their social networks of friends and relatives, or through workers directly approaching employers in their business or worksite and asking if they have openings. Public (government) agencies and private employment agencies are the source of a tiny fraction of all jobs found by workers: 9 percent of jobs in Albania, 5 to 6 percent of jobs in Jordan and Morocco, and down to less than 1 percent of jobs obtained in Mexico. Job matches between workers and firms can be highly location-, sector-, time-, and firm-specific, so it seems unlikely in most cases that a central planner or government will do better than the private market in filling most job openings. In line with Table 1, throughout history and the growth of most countries, large increases in labor supply have been absorbed by the market without the need for the government to help millions of people find jobs. So why should the government get involved in providing job training and search assistance rather than leaving it to the private market?

Perhaps the most common reason that governments provide these services is based on political pressure for a government to show it is doing something about employment - and providing job training or search assistance is often politically easier than addressing other barriers to creating jobs. In political terms, the success of these programs can be based on visible inputs - like the number of people who finished a training program, or the number of towns with job-assistance centers - rather than the more difficult-to-observe effects, like the number of workers who obtain lasting good jobs that they would not have found without this help. When quantitative inputs are the measure of success, the consequence can be reliance on public sector training agencies that may have limited linkages to the private sector, and that offer training of variable quality that is not necessarily in skills that are in high demand.
For example, Maitra et al. (2022) note that, despite efforts at reform, India's system of vocational training is still characterized by "a bureaucracy-driven centralised and hierarchical framework" with a "glaring lack of involvement of industries," resulting in a system that "has not proved agile enough to quickly adapt skill-training provisions to contemporary technological innovations."

An additional not-so-good reason for providing these job training and search services is as a second-best response to distortions in the labor market created by other government actions that may be politically much more difficult to change. One example is the dominance of a large public sector that pays much higher wages than comparable private sector jobs, causing jobseekers to queue for these public jobs and not consider working in the private sector. For example, this pattern has historically defined the job market for relatively educated workers in many countries in the Middle East. Assaad (2014) notes that "by using labor markets as means to distribute rents and to buy political quiescence, Arab governments have essentially undermined the labor markets' primary function." Another example is the presence of high minimum wages and inflexible labor laws that make it expensive and burdensome for private employers to hire workers, leading the supply of jobseekers to greatly exceed demand at prevailing wages. South Africa introduced a minimum wage set roughly equal to the median wage, and soon had an unemployment rate over 30 percent (Bhorat et al. 2020). Many of these regulations are size-dependent, applying more strictly to firms that have more than a given threshold (e.g. 10, 50, or 100) number of workers, acting as an additional constraint on the expansion of more productive, larger firms. While training and search assistance may help some workers to overcome the constraints caused by these distortions and find jobs, the success stories may just crowd out other jobseekers, and it would be better for governments to concentrate their policy efforts on addressing the distortions directly.

In the absence of these government-introduced distortions, the simplest introductory model of labor supply and labor demand, in which the market works to equilibrate the supply and demand for jobs through changes in the wage, is actually not a bad approximation for many types of labor in less-skilled jobs in urban labor markets in developing countries. Many firms have no trouble filling the job openings that they have. For example, Groh et al. (2015) conducted a panel survey of employers in Jordan to track how long it took firms to fill vacancies. Most vacancies are filled fast, with only 6 percent requiring more than two weeks. In a field experiment conducted in Sri Lanka, De Mel et al. (2019) find that only 12 percent of microenterprises said they found it hard to find the right worker for routine, physical jobs. Many governments are particularly interested in boosting the workforce in formal jobs in large manufacturing firms. But several studies have found these manufacturing firms do not appear to be that discerning about who they hire. They find workers very quickly, and many workers voluntarily quit these jobs - suggesting that these jobs are neither that hard to fill, nor so rationed and sought after that no one leaves them after securing such a position (for example, Blattman and Dercon 2018).
So then what are some better reasons for government involvement in the provision of job training and search assistance? There are three main reasons why the private market allocation of labor may be inefficient or inequitable, thus providing a potential justification for the government to get involved.

First, search and matching frictions are likely to exist for some jobs, and they not only inhibit individual jobseekers from finding work, but can slow down or prevent the reallocation of workers across sectors and geography that is such an integral part of structural transformation. For example, China's rapid growth involved its urban population growing from 100 million in 1980 to over 500 million today. This enormous reallocation involves workers needing to learn about job opportunities in another place, and potentially also requires them to learn different skills. The government may be able to speed up this transition process through appropriate training and job search policies. Matching frictions can also mean inefficiently high rates of job turnover, which can be costly for both workers and firms, so efforts to improve match quality may increase labor productivity.

Second, the production of human capital can involve externalities that cause firms and individual workers to underinvest in training. For example, Caicedo et al. (2022) provide empirical evidence from Colombia for the idea that firms underinvest in the training of workers due to the possibility that they will leave and work for other firms. Likewise, the apprenticeship system common in West Africa may embed inefficiencies caused by the concern of master craftspeople that their apprentices will compete against them after they have been trained - which creates incentives for firm owners to provide slower and lower quality training to apprentices than they would if properly incentivized (Brown et al., 2022). Government support of training in general, and of sector-specific (rather than firm-specific) skills, may address these concerns. Concerted investment in certain sets of skills may also be needed as part of an industrial policy to attract large multinational plants that have spillover benefits for the rest of the economy. For example, the Costa Rican Investment Promotion Agency (CINDE) has worked with partners to provide training in technological fields demanded by multinational companies.

Finally, governments may wish to be involved for social mobility and equity reasons. As shown in Table 1, the main way many firms and jobseekers connect is through networks of connections formed by friends and relatives. But disadvantaged individuals with limited networks and a lack of funds to spend acquiring skills may end up segmented into different labor markets, without the knowledge or skills to approach firms directly and find jobs that way. Even if training and job search assistance to these groups does not generate additional jobs, providing an opportunity for some individuals from these disadvantaged groups to access better jobs may be desirable for equity reasons, and may help improve overall allocative efficiency in the economy.
While our focus is on low and middle-income economies, there is a wide range of labor market conditions prevailing across the developing world, and many of these same issues and rationales for government intervention apply to certain labor market segments in high-income countries. Similar concerns apply about whether workers displaced by trade or technology shocks, or stuck in cities with declining industries, are able to reskill and relocate, as well as equity reasons being used to focus programs on individuals from disadvantaged backgrounds. However, a larger share of the labor market may be subject to these frictions in many developing countries, which, in addition to greater demographic pressures and an ongoing structural transformation of the economy, can strengthen the rationale for policy action. Given these reasons that might justify government involvement, the key question is then how the government should get involved in order to ensure these policies have higher chances of being successful.

What Are the Most Promising Avenues for Government Involvement in Job Training and Job Search?

Jobseekers may struggle to find jobs for three main reasons. First, there may be a shortage of firms wishing to hire them. This may reflect a lack of overall labor demand in the economy, in which case the appropriate policy actions will be in the area of private sector development policy. In some cases, it may also reflect discrimination, with firms not willing to hire individuals with certain characteristics. Second, jobseekers may struggle because they lack the skills and experience needed for jobs. Job training, internships, and apprenticeships can then be used to help overcome this problem. Finally, even if jobs are available and individuals have the right skills, they may struggle to find and match with employers who want their labor, in which case job search assistance can be useful.

It is difficult to find systematic evidence to assess the relative importance of these three reasons, and the answer will almost surely vary with context. Our sense is that in many cases the largest issue is a lack of labor demand for jobs, and especially "good" jobs, and that policies to help firms grow and demand more labor need to be a primary part of any jobs policy solution (although that is not the topic of this paper). Data from the Mexican National Survey of Occupation and Employment (ENOE) provide one data point, asking unemployed individuals why they are not looking for work: 4.6 times as many individuals say it is because they think there are no jobs available near them than say it is because they lack the skills or experience needed for jobs. Lack of experience and knowledge of where to look for jobs may be bigger barriers for young jobseekers. A survey of high school graduates in one part of Mexico with a strong labor market found 33 percent of youth said lack of experience was their main obstacle to finding a job, 10 percent said lack of skills, and 14 percent said difficulty searching for jobs.[5]

[5] Data from México Piloto de Inclusion Laboral de Jóvenes Baseline Survey Phase 1 (2018), used in Abel et al. (2022). A further 21 percent said their age, gender, ethnicity, or socioeconomic background was the main obstacle, which could reflect employer discrimination, social norms, or difficulties they face paying for training or job search.
While employers can often fill jobs with "a" worker quickly, many employers say that difficulty finding workers with the right skills is an issue they face, and many employers appear to feel this is more the case today than in the past. Manpower Group has been surveying public and private employers in multiple countries annually, and reports that 77 percent of employers in 41 countries surveyed in 2023 report difficulty finding talent with the skills they need, up from only 31 percent in 2010. Figure 2 shows high levels of firms saying they struggle to find talent in all of the developing countries the survey covers, with the percentage of firms reporting difficulties similar to that in the U.S. and Germany. Employers report both difficulty finding workers with the right technical skills, such as information technology, data skills, and sales and marketing skills, as well as with the right soft skills, such as reliability, creativity, critical thinking, and resilience. Public policy efforts can then try to help jobseekers develop these skills, as well as helping workers with these skills to signal in a credible way that they have them as they search for jobs and match with employers.

Job Training

Job training programs are designed to provide new skills and experience, and are predominantly focused on youth and the unemployed. They typically try to teach technical vocational skills in fields such as hairdressing and beauty, carpentry, electrical work, tailoring, and plumbing, or information technology skills such as coding, data entry, office programs, and others. In addition to these "hard" skills, some programs also include "soft skills" components such as communication skills, teamwork, planning, self-efficacy, and financial literacy. These programs can be taught in classrooms and/or in the form of on-the-job training through an internship or apprenticeship. The most common programs offered by governments and studied in the literature tend to last three to six months, although there are shorter intensive "bootcamps" that can be three weeks to a month, as well as longer programs of two years that are more similar to the traditional Technical and Vocational Education and Training (TVET) programs that are sometimes offered as part of the formal education system.

The hope is that these programs will increase employment and earnings for jobseekers through at least three potential channels: 1) by increasing their human capital through teaching new skills, thereby making them more productive workers; 2) by alleviating employer uncertainty about the skills workers have by providing a signal in the form of certification or references; and 3) by providing jobseekers with new strategies (and potentially new networks) for helping to find jobs. Public funding of these programs is often justified by arguing that jobseekers are credit-constrained and unable to pay for the direct costs of training, as well as unable to bear the opportunity costs involved in needing to pay for living expenses and not earning money during training. In addition, due to information frictions, jobseekers may not know about the full range of training providers, or may find it difficult to ascertain their quality. These same arguments are also made in high-income countries, but credit constraints and informational frictions are likely to be larger issues in less developed economies. On the firm side, even though firms may have trouble finding workers with the right skills, firms also may be credit-constrained in paying for worker training, and reluctant to spend time and money training workers in general skills if these workers may then leave to work in other firms soon afterwards.

Returns to education and experience are among the strongest empirical regularities in labor economics, suggesting that training should affect earnings and employment. However, given that these courses are short in duration and returns to education and experience are typically on the order of around 10 percent per year, we might expect only three or six months of training to have relatively modest impacts. Indeed, most randomized experiments measuring the impact of vocational training programs have found effects of roughly this size. McKenzie (2017) reviews nine such studies and finds an average impact of a 2.3 percentage point increase in employment, which, given the costs of these programs, equates to approximately $17,000-$60,000 per additional person employed, and a median increase in earnings of 11 percent, or $19 per month. Agarwal and Mani (2023) include an additional 14 recent studies in a formal meta-analysis, and find an average impact on employment of 4 percentage points (with a 95 percent confidence interval of 2 to 6 percentage points) and on earnings of 8.2 percent (with a 95 percent confidence interval of 2 to 14 percent). These impacts on employment are similar in magnitude to the average impact of training in high-income countries, with Card et al. (2018) finding from a combination of experimental and non-experimental studies that training averages a 2.0 percentage point impact on employment in the first year after training, and 6.6 percentage points one to two years later.
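The cost-per-job arithmetic behind the range quoted above is simple enough to sketch. The per-trainee costs used below are illustrative assumptions, back-solved so that a 2.3 percentage point impact reproduces roughly the $17,000-$60,000 figures; they are not the actual cost data underlying the original review.

```python
def cost_per_additional_job(cost_per_trainee: float, employment_effect_pp: float) -> float:
    """Cost of generating one additional employed person, given the program
    cost per trainee and the employment impact in percentage points."""
    return cost_per_trainee / (employment_effect_pp / 100)

# Assumed per-trainee costs of roughly $400 and $1,400, combined with the
# 2.3pp average impact, span approximately $17,000 to $60,000 per job.
for cost in (400, 1_400):
    print(f"${cost} per trainee -> ${cost_per_additional_job(cost, 2.3):,.0f} per additional job")
```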
These effects appear particularly muted for at-scale programs operated by governments. Figure 3 shows evidence from five experimental evaluations of government vocational training programs that trained at least 5,000 people in a year. In this figure and the subsequent ones, we show both the point estimate and a 95 percent confidence interval from the evaluation. For example, the figure shows that vocational training in Türkiye resulted in an estimated 2.0 percentage point increase in employment, with a 95 percent confidence interval of -0.5 to 4.4 percentage points. With the exception of the impact for women of the Colombian Jóvenes en Acción program (Attanasio et al., 2011), the estimated effect on employment is 2 percentage points or lower in these programs, half that of the meta-analysis impact across all pilot, nongovernment organization, and government programs of 4.0 percentage points reported by Agarwal and Mani (2023). There is more variability in the impact on earnings, but five of the seven reported estimates are also below the meta-analysis average impact of 8.2 percent. One plausible reason for the limited impact of these large government programs is that they may not be creating the skills that the labor market is demanding. Some small and informal firms may have no demand for skilled labor at all, and lack the physical and managerial capital to benefit from it. For those firms that would like skilled workers, training programs may be slow to update courses and curricula to reflect the changing needs of firms, or perhaps training providers are of poor quality and are incentivized based on the number of people trained rather than on employment outcomes. Training may even backfire and cause a reduction in employment if it creates unrealistic expectations among jobseekers, causing them to raise their reservation wages and only search for jobs in the area in which they were trained (Acevedo et al., 2020).

In part due to the evidence from the first waves of rigorous training evaluations, policy efforts have aimed to improve the effectiveness of vocational training programs. The two approaches usually mentioned are to make training more demand-driven, and to link payments for the training program more clearly to results. Demand-driven programs aim to have private sector firms and providers, rather than the government, determine what courses are offered and how they are delivered, and to link on-the-job training explicitly to employer demand. An often-mentioned example is the Jóvenes programs in Latin America, including the Colombian program studied by Attanasio et al. (2011), which had the largest impacts among the government programs summarized in Figure 3. Attanasio et al. (2017) link participants in this program to social security records and find, up to a decade later, a lasting effect of 3.8 percentage points on being employed in the formal sector, with trained individuals earning US$13 more per month in formal earnings. But simply having private sector providers offer the training may be insufficient: Hirshleifer et al. (2016) find privately run courses in Türkiye have larger short-term impacts than government-run courses, but that this difference disappears in the three years after training.

Results-based contracting aims to increase the incentives for providers to deliver employment impacts by linking some of the payments to targets such as the percentage of trainees in jobs. World Bank (2020) discusses some of the practical issues and experiences with such an approach. It sounds promising in theory, but in practice many governments lack the administrative capacity to measure results and manage such a process. In addition, the share of the total payment linked to performance may in practice end up being relatively small, and incentivize only short-term, and not long-term, employment outcomes.

A more optimistic view of the potential impact of job training programs has emerged from impact evaluations of several programs implemented by nongovernment organizations.
Particularly influential here has been the work by Alfonsi et al. (2020), who evaluate the impact of vocational training and firm-provided training in the form of apprenticeships in programs operated in Uganda by the nongovernment organization BRAC. The program is much smaller in scale than government programs (697 youth get vocational training and 283 get apprenticeships), relatively intensive (six months duration), with training restricted to a narrow set of sectors identified as having substantial demand for skilled workers, and with a small set of training providers that were selected based on quality. They find the firm apprenticeships have positive short-run impacts that fade out, which they attribute to a lack of skill certification. In contrast, the vocational training has impacts that grow and then stabilize: those assigned to vocational training are 9 percentage points more likely to be employed and earn 25 percent more than the control group, averaged over three years. Shonchoy et al. (2018) work with the nongovernment organization Gana Unnayan Kendra in Bangladesh and highlight another way nongovernment organizations may help enhance the effectiveness of training programs: by alleviating other constraints that inhibit youth from using the skills learned. They find that one month of training to work in garment factories has much larger impacts when paired with assistance to migrate to the cities where these jobs are located.

However, while these programs led by nongovernment organizations do offer some lessons for public policy, there are reasons to be cautious in expecting them to be a cost-effective jobs solution at large scale for thousands of jobseekers. First, the impacts of programs tend to fall with scale, which List (2022) dubs the "voltage effect," in part because of the challenges of ensuring that the quality of training is maintained and that the topics provided continue to meet the needs of employers at scale. In addition, general equilibrium concerns may arise with scale: that is, training jobseekers en masse in a limited range of skills may result in them all competing with one another for a fixed supply of jobs. Second, the impressive-sounding percent increase in earnings in many of these studies often comes from a relatively small absolute increase in earnings divided by the small base income that disadvantaged individuals would be earning in the absence of training. As a consequence, the gain in income would typically have to last for many more years than studies typically measure in order to pass a cost-benefit test. For example, the 25 percent increase in income in Alfonsi et al. (2020) equates to an extra $6.10 per month, for a program that costs $470 per person to provide, and the 16.9 percent increase in income in Crépon and Premand (2021) equates to a $16.20 per month increase for a program that cost $2,045 per person. A combined vocational training and life skills program for adolescent girls in Uganda run by the nongovernment organization BRAC had an incredible 308 percent increase in earnings (Bandiera et al., 2020), but this still only equates to an additional $4.20 per month. Thus, in many cases the seemingly large percentage earnings gains do not reflect transformational absolute income gains, and will need to persist for five to ten years, or longer, to pass cost-benefit tests. Alternatively, such programs could target poor individuals who would otherwise be receiving even more expensive forms of government support through social assistance programs, which could help them to pass a cost-benefit test.
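A back-of-the-envelope way to see the persistence requirement is to ask how long the monthly gain must last before its discounted sum covers the program cost. The sketch below does this for the two cost/gain pairs quoted above; the 5 percent annual discount rate is our assumption, not a figure from the studies.

```python
def breakeven_months(cost: float, monthly_gain: float,
                     annual_discount_rate: float = 0.05,
                     max_months: int = 600) -> int | None:
    """Months the earnings gain must persist before its discounted sum
    covers the program cost; None if it never does within max_months."""
    r = (1 + annual_discount_rate) ** (1 / 12) - 1  # equivalent monthly rate
    npv = 0.0
    for m in range(1, max_months + 1):
        npv += monthly_gain / (1 + r) ** m
        if npv >= cost:
            return m
    return None

# Cost and monthly gain figures reported in the text for two NGO-run programs.
for cost, gain, label in [(470, 6.10, "Alfonsi et al. (2020)"),
                          (2045, 16.20, "Crepon and Premand (2021)")]:
    m = breakeven_months(cost, gain)
    if m is not None:
        print(f"{label}: gains must persist ~{m} months (~{m / 12:.1f} years)")
```

With these inputs, the Alfonsi et al. figures break even after roughly eight years and the Crépon and Premand figures after roughly fifteen, consistent with the five-to-ten-years-or-longer claim in the text.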
These modest average impacts of vocational training suggest that a lack of skills is unlikely to be the single binding constraint to finding employment or earning higher wages for the majority of jobseekers taking part in such programs. But modest average impacts may mask large effects for certain subsets of individuals. An unexplored area for research and policy is testing better ways of targeting the selection of participants into such programs based on those most likely to benefit.

Labor Market Intermediation

Based on the notion that workers may not know how or where to find available jobs, labor intermediation services seek to equip workers with the tools to improve their job search and to connect workers with jobs. Earlier evidence on traditional labor market intermediation programs, such as government intermediation and placement services, resume and interview preparation, job fairs, and simply sending information about job openings to jobseekers, found that they had only limited and short-term impacts (McKenzie 2017). One possible reason is that these programs may help workers learn to find one job, but given high job turnover, they are little help in finding subsequent work. Similarly, Card et al. (2018) report job search assistance programs having average impacts of only 1.1 to 2.0 percentage points in their meta-analysis of impacts in high-income countries.

Recent experimental evaluations in developing countries instead show somewhat more promise for interventions that get jobseekers to search in new locations, update biased beliefs, and better signal their skills. Figure 4 summarizes examples of the short- and longer-term impacts of two types of these interventions: transport subsidies to help overcome spatial frictions, and skill signaling interventions to help reduce information frictions. We discuss these types of interventions in turn, and also mention a relatively new research area: interventions to address behavioral and psychological factors that may be limiting employment.

In developing country labor markets, it is not uncommon to find a surplus of workers relative to available jobs in some locations coexisting with employers in other locations experiencing shortages of similar workers. Even when jobseekers living far from job centers have a job or can easily find one in their local labor markets, the higher quality jobs offering stability, protections, and higher salaries tend to remain out of their reach. For instance, comparing individuals with the same educational attainment, Franklin (2017) finds that the share of workers employed informally and in low-skilled occupations increases, and the share employed in high-skilled occupations decreases, with distance to larger cities. Such spatial mismatches are even more striking when considering search across international borders. Jobseekers within an urban area do not seem to search broadly enough in city centers, and jobseekers outside an urban area do not seem to search broadly enough in nearby urban areas, despite potentially high returns to searching for jobs over larger distances.

One approach to this issue used by researchers (but not typically by governments at scale) has been to subsidize search across space directly through transportation subsidies. In Ethiopia, Franklin (2017) found youth given these subsidies were more likely to find employment in the city center, and to find jobs of higher quality and permanent jobs, rather than the kinds of casual jobs available in their vicinity. However, as Figure 4 shows, the impacts of such assistance may not last. In Ethiopia, Abebe et al. (2021) find a modest impact of transport subsidies on permanent and formal employment, with little to no improvement in jobseekers' probability of having a job four years after this support is withdrawn.

Distance to jobs and high commuting costs could in part explain the declining impacts over time. Job quits may occur if jobseekers initially underestimate the disutility of commuting long distances and this offsets the wage premium paid in larger cities and urban labor markets (Banerjee and Sequeira, 2020). When poor matches between workers and firms result in high rates of job turnover, jobseekers recurrently have to search for jobs and may again find it difficult to access better opportunities in distant labor markets without a repeated subsidy.
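The commuting trade-off just described is easy to quantify. In the sketch below, all numbers are hypothetical: a 30 percent urban wage premium on a $100 monthly local wage survives a modest daily commuting cost, but is wiped out once fares and the money value of commute time rise a little.

```python
def net_monthly_gain(city_wage: float, local_wage: float,
                     commute_cost_per_day: float, work_days: int = 22) -> float:
    """Net gain from taking the distant job once commuting costs
    (fares plus a money value of commute time) are subtracted."""
    return (city_wage - local_wage) - commute_cost_per_day * work_days

# Hypothetical figures: the same $30 wage premium with two commute costs.
print(net_monthly_gain(city_wage=130, local_wage=100, commute_cost_per_day=1.0))  # 8.0
print(net_monthly_gain(city_wage=130, local_wage=100, commute_cost_per_day=1.6))  # -5.2
```

A jobseeker who discovers the second case only after taking the job is exactly the kind of worker who quits once a one-time transport subsidy runs out.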
A one-time transport subsidy can lead to longer-run effects on employment if searching more broadly allows jobseekers to learn about the spatial distribution of wages and build job connections in a wider labor market. Such repeated search is similarly possible if the spatial wage premium is large and jobseekers can build assets over time as a result of obtaining employment in the short term. An example comes from the work of Bryan et al. (2014), who provided financial assistance to help subsistence rural households in Bangladesh migrate for work in nearby urban areas during the agricultural lean season. Not only did this result in better jobs in the short run, but once workers learned about and experienced the benefits of jobs in urban areas, they returned for work in following years without further incentive.

Subsidies to address spatial frictions are likely to work best for the subset of jobseekers for whom cash constraints, lack of experience, and lack of networks are the most important and binding constraints to participating in jobs in a new location. Mitchell et al. (2022) report that efforts to scale the Bangladeshi program resulted in much lower impacts, because financial assistance ended up largely going to people who were inclined to migrate anyway. But the very poor and disadvantaged may find it harder to save and afford the costs of repeat travel, so that there will also be no long-term impacts for this group. Hence, while spatial frictions are important, transport subsidies to overcome these frictions need careful targeting.

An alternative set of interventions to improve labor market intermediation seeks to address information frictions, overcome biased beliefs, and signal skills. Remember, most jobs in developing countries are found through social networks and direct contact with employers. Thus, many workers may have inaccurate beliefs about the full range of job opportunities and wages available. This may be a particular concern for young workers, and for racial minority and low-educated workers who may be segregated from networks that provide information and contacts on many better jobs. The result of these biased beliefs can be that jobseekers may not search for jobs that could be a good match for them. Such workers may also have too high a reservation wage (Alfonsi et al. 2022), or reservation job prestige (Groh et al. 2015), causing them to turn down jobs they could get, choose poorly matched jobs, and quit soon after starting.

Can new, valuable, credible information cause an updating of beliefs, and in this way result in employment and earnings gains? In Uganda, Alfonsi et al. (2022) find trainees in a vocational education program overestimate how much they will earn in their first job, resulting in high reservation wages, but also underestimate the returns to experience and the salary growth potential possible after starting work. Mentors who had been through the same program several years earlier were able to credibly help jobseekers form more realistic expectations, causing them to revise reservation wages downwards, turn down fewer jobs, and earn 18 percent more a year later.

In developing countries, information about the skills of workers can pose frictions to hiring. Potential employers find it difficult to assess the ability of workers, especially those with low levels of education. Jobseekers may have limited ability to know their own job skills and how they compare to other candidates, which can affect their job search behavior. To the extent that employers are more uncertain about, or underestimate, the ability of lower-educated jobseekers, credible certification of their skills may improve hiring and matching. Workers may also not always show up on time or comply with workplace rules, leading to high turnover: a soft-skills training program focused on activating conscientiousness reduced job turnover among construction workers in Senegal (Allemand, 2023). Finally, high discount rates and impatience may mean that even if workers recognize the returns to experience, they may be unwilling to accept jobs with relatively low starting wages and high wage growth trajectories. In a study in Mexico, Abel et al. (2022) find that a temporary wage subsidy can help overcome this behavioral bias and increase formal employment rates as a result.
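The discounting logic in that last point can be made concrete. In the sketch below, all wage paths and discount rates are hypothetical: a patient worker (5 percent annual discount rate) prefers a job starting at 75 per month and growing 2 percent per month over a flat job at 100, while a very impatient worker (100 percent annual rate) ranks them the other way.

```python
def pv_of_job(start_wage: float, monthly_growth: float,
              annual_discount_rate: float, months: int = 36) -> float:
    """Present value of a monthly wage path over a fixed horizon."""
    r = (1 + annual_discount_rate) ** (1 / 12) - 1  # equivalent monthly rate
    return sum(start_wage * (1 + monthly_growth) ** t / (1 + r) ** t
               for t in range(months))

# Hypothetical comparison: a flat job at 100/month versus a growth job
# starting at 75/month and growing 2% per month, over three years.
for rate in (0.05, 1.00):
    flat = pv_of_job(100, 0.0, rate)
    growth = pv_of_job(75, 0.02, rate)
    print(f"annual discount {rate:.0%}: flat={flat:,.0f}, growth={growth:,.0f}")
```

A temporary wage subsidy effectively tops up the early low-wage months, which is one way to read the Abel et al. (2022) result.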
Online Job Platforms

The number of workers and firms using online jobs platforms has undoubtedly been rising as internet penetration has increased in developing countries and new platforms have been created. For example, the Nigerian platform Jobberman claims 2.6 million jobseekers and 75,000 employers on its platform (as reported in Ladipo 2022), while the Indian portal Naukri has an estimated database of 82 million job seekers and 5 million recruiters, with an estimate of over 7 million searches by recruiters conducted daily.[7] These online portals can lower the costs of search, enable search across space, and help alleviate information frictions, making it easier for job matches to occur. Apparently, many users believe these platforms are at least somewhat beneficial.

[7] Estimates here are from the business management software company Freshworks at https://www.freshworks.com/hrms/indeed-vs-naukri-choose-the-best-prescreening-tool/ [accessed 12 March 2023].

Government policy might also seek to improve the functioning of these job portals. One problem is fragmentation: that is, a proliferation of job portals can make it more difficult for jobseekers and employers to find one another. A potential solution is for government employment agencies to work as an aggregator of vacancy information from different platforms, as is done in Colombia. Another problem is trust: after all, firms have often relied on personal connections and networks for hiring, in part to overcome trust issues. In a study of small firms in India, Fernando et al. (2022) find that offering verification of skills along with an expanded pool of candidates makes these firms more likely to hire workers on an online platform. While platforms themselves can provide some verification services, government education and training programs can also do this, and government can also play a role through criminal background checks, credit records, and other reputation mechanisms.
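A vacancy aggregator of the kind just described can be caricatured in a few lines. Everything below is hypothetical - the portal names, record fields, and the naive exact-match deduplication rule - and a real system would need fuzzier matching of employer names and job titles.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Vacancy:
    employer: str
    title: str
    location: str
    source_portal: str

def aggregate(feeds: dict[str, list[Vacancy]]) -> list[Vacancy]:
    """Merge vacancy feeds from several portals, dropping duplicates that
    describe the same employer/title/location posting on multiple portals."""
    seen: set[tuple[str, str, str]] = set()
    merged = []
    for vacancies in feeds.values():
        for v in vacancies:
            key = (v.employer.lower(), v.title.lower(), v.location.lower())
            if key not in seen:
                seen.add(key)
                merged.append(v)
    return merged

# Hypothetical feeds from two portals listing one overlapping vacancy.
feeds = {
    "portal_a": [Vacancy("Acme Ltd", "Machinist", "Lagos", "portal_a")],
    "portal_b": [Vacancy("Acme Ltd", "Machinist", "Lagos", "portal_b"),
                 Vacancy("Beta Co", "Data entry clerk", "Abuja", "portal_b")],
}
print(len(aggregate(feeds)))  # 2 distinct vacancies after deduplication
```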
Figure 2 : Figure 2: The majority of firms in many developing countries say they struggle to find talent with the skills they need through at least three potential channels: 1) by increasing their human capital through teaching new skills, thereby making them more productive workers; 2) by alleviating employer uncertainty about the skills workers have by providing a signal in the form of certification or references; and 3) by providing jobseekers with new strategies (and potentially new networks) for helping to find jobs.Public funding of these programs is often justified by arguing that jobseekers are creditconstrained and unable to pay for the direct costs of training, as well as unable to bear the opportunity costs involved in needing to pay for living expenses and not earning money during training.In addition, due to information frictions, jobseekers may not know about the full range of training providers or find it difficult to ascertain their quality.These same arguments are also made in high-income countries, but credit constraints and informational frictions are likely to be larger issues in less developed economies.On the firm side, even though firms may have trouble finding workers with the right skills, firms also may be credit-constrained in paying for worker training, and reluctant to spend time and money training workers in general skills if these workers may then leave to work in other firms soon afterwards.Returns to education and experience are among the strongest empirical regularities in labor economics, suggesting that training should affect earnings and employment.However, given that these courses are short in duration and returns to education and experience are typically in the order of around 10 percent per year, we might expect only three or six months of training to have relatively modest impacts.Indeed, most randomized experiments measuring the impact of vocational training programs have found effects of roughly this size.McKenzie (2017) reviews Figure 3 : Figure 3: Limited Effects of Large-Scale Government Job Training Programs on Employment and Earnings part due to the evidence from the first waves of rigorous training evaluations, policy efforts have aimed to improve the effectiveness of vocational training programs.The two approaches usually mentioned are to make training more demand-driven, and to link payments for the training program more clearly to results.Demand-driven programs aim to have private sector firms and providers, rather than the government, determine what courses are offered and how they are delivered, and to link on-the-job training explicitly to employer demand.An often-mentioned example is the Jóvenes programs in Latin America, including the Colombian program studied by Attanasio et al. (2011), which had the largest impacts among the government programs summarized in Figure 1.Attanasio et al. (2017) link participants in this program to social security records and find, up to a decade later, a lasting effect of 3.8 percentage points on being employed in the formal sector, with trained individuals earning US$13 more per month in formal earnings.But simply having private sector providers offering the training may be insufficient: Hirschleifer et al. (2016) find privately run courses in Türkiye have larger short-term impacts than governmentrun courses, but that this difference disappears in the three years after training. 
A related experiment compares the impacts of vocational training and firm-provided training in the form of apprenticeships in programs operated in Uganda by the nongovernment organization BRAC. The program is much smaller in scale than government programs (697 youth received vocational training and 283 received apprenticeships), relatively intensive (six months in duration), with training restricted to a narrow set of sectors identified as having substantial demand for skilled workers, and with a small set of training providers that were selected based on quality. The authors find that the firm apprenticeships have positive short-run impacts that fade out, which they attribute to a lack of skill certification. In contrast, the vocational training has impacts that grow and then stabilize: those assigned to vocational training are 9 percentage points more likely to be employed and earn 25 percent more than the control group, averaged over the three years. Shonchoy et al. (2018) work with the nongovernment organization Gana Unnayan Kendra in Bangladesh and highlight another way nongovernment organizations may help enhance the effectiveness of training programs: by alleviating other constraints that inhibit youth from using the skills learned. They find that one month of training to work in garment factories has much larger impacts when paired with assistance to migrate to the cities where these jobs are located.

Figure 4 summarizes examples of the short- and longer-term impacts of two types of these interventions: transport subsidies to help overcome spatial frictions, and skill signaling interventions to help reduce information frictions. We discuss these types of interventions in turn, and also mention a relatively new research area: interventions to address behavioral and psychological factors that may be limiting employment.

Figure 4: Subsidizing search over distance has positive but often temporary impacts, whereas skill signaling interventions that improve match quality can have more lasting impact

Figure 5: Most Efforts Encouraging Job-Seekers to Use Online Job Portals Have Not Significantly Boosted Employment

Table 1: Main methods used by employed workers to find jobs (Carranza and McKenzie 2023). Sources: ... Labor Force Survey 2019; Jordan is from New Work Opportunities for Women Pilot Impact Evaluation 2010-2013; Mexico is from Trimester 1, 2014 National Survey of Occupation and Employment (ENOE); Morocco is from Household and Youth Survey 2009-2010; Romania is from Household Labor Force Survey 2021; Sierra Leone is from 2014 Labor Force Survey; and Turkey is from Vocational Training for the Unemployed Impact Evaluation 2010-2012. Note: The Morocco survey allowed multiple methods to be used, so responses add up to more than 100 percent. The Romania survey combines newspaper advertisements and internet advertisements into one response category. Carranza and McKenzie (2023) provide full details.
2023-10-07T15:15:25.804Z
2023-10-03T00:00:00.000
{ "year": 2024, "sha1": "46e256e66100df9f3996c8a0c2fbccb645735e1d", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1596/1813-9450-10576", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "fbc9fd678b33c2ac976efcf7112b55c08fb07d80", "s2fieldsofstudy": [ "Economics", "Political Science" ], "extfieldsofstudy": [] }
266344177
pes2o/s2orc
v3-fos-license
Evolution of Microstructure and Texture in Grain-Oriented 6.5% Si Steel Processed by Rolling with Intrinsic Inhibitors and Additional Inhibitors

A grain-oriented steel containing 6.5% Si, characterized by a notable Goss texture, was effectively manufactured through the rolling technique, incorporating both intrinsic inhibitors and additional inhibitors. This investigation focuses on tracking the development of texture and magnetic properties during the manufacturing process and delineates the mechanism underlying secondary recrystallization. The empirical findings clearly demonstrated the significant influence of the nitriding duration and quantity on the secondary recrystallization process. In instances where additional nitrogen is absent, the intrinsic inhibitors alone do not lead to secondary recrystallization. However, when the nitriding duration is 90 s and the nitriding amount is 185 ppm, a complete secondary recrystallization structure with a strong Goss texture enables the finished products to have excellent magnetic properties. The preferential growth of Goss grains is mainly governed by the enhanced mobility of high-energy (HE) grain boundaries. With the increase in annealing temperature, the occurrence of 20°-45° HE grain boundaries with Goss grains becomes progressively more frequent. At the secondary recrystallization temperature of 1000 °C, the frequency of 20°-45° HE grain boundaries with Goss grains reaches 62.7%, providing favorable conditions for the abnormal growth of Goss grains. This results in a secondary recrystallization structure predominantly characterized by a strong Goss texture. In light of these observations, the present study provides fundamental theoretical insights and serves as a valuable procedural guideline for the industrial manufacturing of 6.5% Si grain-oriented electrical steels.

Introduction

Fe-6.5 wt% Si alloys are ideal core materials for motors, transformers, and generators, with excellent soft magnetic properties such as high permeability and saturation magnetization, almost zero magnetostriction, and low eddy current and hysteresis losses, especially at high frequencies [1,2]. Forming and texture control are the key factors in obtaining high-quality high-silicon steel. Although the high-silicon steel strip rolling technique based on the plasticization and toughening process has made much progress [3-6], its texture control theory and magnetic properties remain to be significantly improved. Developing beneficial textures, such as Goss and cube textures, is a cost-effective way to achieve high magnetic properties for high-silicon steel. The Goss ({110}<001>) orientation, whose <001> crystal direction lies parallel to the rolling direction (RD), has better susceptibility to magnetization under directional magnetic fields, and therefore it mostly exists in grain-oriented silicon steel. The cube ({100}<001>) orientation, which has two <001> directions parallel to the rolling direction (RD) and the transverse direction (TD), respectively, is a favorable orientation for both grain-oriented and non-oriented silicon steel. However, the cube orientation is difficult to control in grain-oriented silicon steel and can easily rotate to other orientations during deformation [7,8].
So far, studies on high-silicon steel have mainly focused on non-oriented high-silicon steel [9-14], and there are few reports on producing grain-oriented 6.5 wt% silicon steel by a rolling process; this is because the preparation process for grain-oriented silicon steel is complex, and the size, distribution, and quantity of the inhibitors need to be accurately controlled, with the texture control in particular demanding an extreme level of precision. In addition, increasing the silicon content can delay or hinder the development of secondary recrystallization, and stronger inhibitors are needed to suppress grain growth, which undoubtedly increases the difficulty of preparing grain-oriented silicon steel. Patents concerning grain-oriented high-silicon steel for electrical purposes were already granted in Japan during the 1990s. However, the widespread industrial utilization of this material has been limited due to low production efficiency and specific technical challenges. The availability of theoretical studies on grain-oriented high-silicon steel has also been limited. In recent years, there have been pertinent literature publications concerning the preparation process of grain-oriented high-silicon steel [15-17]. These reports have involved an analysis and comparison of the magnetic properties of this steel with non-oriented high-silicon steel, as well as with non-oriented and grain-oriented silicon steel containing 3% Si of the same sheet thickness. Our laboratory also successfully prepared grain-oriented high-silicon steel by adding inhibitors at an early stage [18]. Nonetheless, the process stability is insufficient, and there is still significant room for improvement in the magnetic properties. In addition, the specific characteristics and behaviors of the secondary recrystallization are not well known and should be further investigated.

Given the substantial practical and theoretical importance associated with investigating the manufacturing process and attributes of secondary recrystallization in 6.5% Si grain-oriented electrical steels, this paper explores a novel method for producing such steel. The proposed procedure entails the utilization of both intrinsic inhibitors at the outset and subsequent supplementary inhibitors in the production of 6.5 wt% Si grain-oriented electrical steels.

Experimental Procedures

For this experiment, the standard approach used for Cu-containing 3% Si grain-oriented silicon steel was employed, with certain adjustments made to the Al content. The specific composition of the material is as follows (in mass fraction, %): C 0.01, Si 6.5, Mn 0.12, S 0.008, Al 0.02, N 0.003, and Cu 0.2. Initially, the sample is cast into a 30 mm thin slab in a vacuum induction furnace. Subsequently, it undergoes five passes of hot rolling, commencing at 1150 °C and ending at 880 °C, reducing the thickness to 1.8 mm. Next, the sample is warm rolled at 450 °C, yielding a final sheet thickness of 0.25 mm. Following a 4 min temperature hold during decarburization annealing at 850 °C, the oxidation layer is thinned through mechanical polishing. A nitriding step occurs next in an atmosphere containing 15% NH3 and 75% H2 at 750 °C for 60-120 s. The sample is then coated with MgO and prepared for high-temperature annealing, as illustrated in Figure 1. The process begins in a nitrogen atmosphere, with the temperature rapidly increasing at 200 °C/h until it reaches 400 °C.
A H2:N2 ratio of 1:1 is maintained as the temperature gradually increases up to 600 °C. At this point, the temperature is held for 4 h before being increased at a rate of 15 °C/h until it reaches 1200 °C. Subsequently, the temperature is maintained for 5 h in a pure hydrogen atmosphere until the finished sheet is obtained. The "interruption method" was employed to study the variations in sample structures, textures, and precipitation behaviors of second-phase particles during the high-temperature annealing process. In this interruption test, a sample subjected to 90 s of nitriding was utilized. Starting from 900 °C, the sample was periodically taken out of the furnace at 50 °C intervals, allowing for the examination of changes in the sample structure and texture. The preparation process of the 6.5 wt% Si grain-oriented electrical steels is illustrated in Figure 2.

In this investigation, field-emission scanning electron microscopy (FESEM) was employed to observe the precipitates, and the compositions of the precipitated phases were analyzed using energy-dispersive X-ray spectroscopy (EDS). The texture of the samples was quantified and assessed with the aid of an Oxford Instruments HKL-Channel 5 EBSD system. The magnetic properties were evaluated using an electrical steel tester (MPG200D).
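To make the annealing schedule easier to follow, it can be expressed as a simple time-temperature calculation. The Python sketch below is illustrative only: the ramp rate between 400 °C and 600 °C is not stated in the text, so it is an assumed parameter, as is the 25 °C starting temperature; the variable and function names are invented for the example.

    # Illustrative reconstruction of the high-temperature annealing schedule.
    # Values marked "assumed" are NOT given in the text.
    segments = [
        # (start_C, end_C, ramp_C_per_h, hold_h_at_end)
        (25, 400, 200.0, 0.0),    # N2 atmosphere; 200 C/h per the text (start temperature assumed)
        (400, 600, 100.0, 4.0),   # H2:N2 = 1:1; ramp rate assumed; 4 h hold at 600 C
        (600, 1200, 15.0, 5.0),   # 15 C/h ramp; 5 h hold at 1200 C in pure H2
    ]

    def total_duration_h(segments):
        """Sum ramp and hold times over all schedule segments (hours)."""
        return sum(abs(end - start) / rate + hold
                   for start, end, rate, hold in segments)

    print(f"Approximate cycle length: {total_duration_h(segments):.1f} h")

Under these assumptions, the 600-1200 °C stage alone takes roughly 45 h, which makes clear that most of the cycle is spent in the slow-heating window where, as the interrupted-annealing results below show, secondary recrystallization develops.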
Evolution of Microstructure and Texture

Figure 3 displays the electron backscatter diffraction (EBSD) orientation maps depicting the characteristics of the hot-rolled sheets in the lateral section. It is evident that the hot-rolled plate containing 6.5% Si displayed a noticeable gradient in both microstructure and texture, closely resembling the characteristic microstructural features commonly observed in conventional grain-oriented silicon steel. The surface and subsurface layers primarily consisted of dynamically recrystallized structures, with the predominant texture types being Goss, {112}<111>, and {110}<112>, as shown in Figure 3a-c,e. The Goss texture had fewer components than the {110}<112> texture, which is related to the low-temperature hot rolling process. The center layer, experiencing solely plane stress, exhibited a deformed elongated structure characterized by the presence of {100}<021>, {113}<361>, and {111}<112> grains, as shown in Figure 3b,d.
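In EBSD post-processing, texture components such as Goss are typically identified by computing the misorientation between each measured grain orientation and the ideal component, taking the 24 proper rotations of cubic crystal symmetry into account. The following sketch is a generic illustration of this classification step, not the authors' analysis code; the 15° tolerance mentioned in the usage note is a common but assumed choice.

    import numpy as np
    from itertools import permutations, product

    def cubic_symmetry_operators():
        """24 proper rotations of the cubic system: signed permutation
        matrices with determinant +1."""
        ops = []
        for perm in permutations(range(3)):
            for signs in product((1.0, -1.0), repeat=3):
                m = np.zeros((3, 3))
                for row, (col, sign) in enumerate(zip(perm, signs)):
                    m[row, col] = sign
                if np.isclose(np.linalg.det(m), 1.0):
                    ops.append(m)
        return ops

    def misorientation_deg(g1, g2, sym_ops):
        """Minimum misorientation angle (degrees) between two orientation
        matrices under cubic crystal symmetry."""
        dg = g2 @ g1.T
        best = 180.0
        for s in sym_ops:
            c = (np.trace(s @ dg) - 1.0) / 2.0
            best = min(best, np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))
        return best

    # Ideal Goss {110}<001>: rows are the crystal directions along RD, TD, ND.
    GOSS = np.array([[0.0, 0.0, 1.0],    # RD parallel to <001>
                     [1.0, -1.0, 0.0],   # TD
                     [1.0, 1.0, 0.0]])   # ND parallel to the {110} normal
    GOSS[1] /= np.linalg.norm(GOSS[1])
    GOSS[2] /= np.linalg.norm(GOSS[2])

    SYM = cubic_symmetry_operators()
    print(misorientation_deg(np.eye(3), GOSS, SYM))  # cube vs. Goss: 45.0 deg

A grain would then be counted as Goss-oriented when this angle falls below the chosen tolerance, for example misorientation_deg(g, GOSS, SYM) <= 15.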
The Goss-oriented grains generated during hot rolling served as the origin of the Goss texture during secondary recrystallization annealing. Furthermore, the hot rolling process also generated {110}<112> and {112}<111> textures. These three textures are recognized as typical shear textures in grain-oriented silicon steel. It is important to highlight that the laboratory rolling conditions, in contrast to the hydraulic transmission employed in industrial settings, resulted in a rapid temperature decrease during the rolling process, intensifying the shearing impact of hot rolling. As a result, hot-rolled silicon steel subjected to laboratory rolling conditions often displays a higher prevalence of {110}<112> grains.

Figure 4a-c displays the EBSD orientation maps of warm-rolled sheets in the lateral section, revealing prominent 20-45° shear bands and fragmented grains, as shown in Figure 4a,b. The warm-rolled sheet is dominated by γ textures and α textures, with {112}<110> textures showing the highest orientation density, as shown in Figure 4c. Furthermore, there were a few weakly present Goss-oriented grains, as shown by the red color in Figure 4b. The limited quantity of Goss grains was mostly distributed within the fragmented grain area of the {111}<112> orientation in the subsurface layer. According to high-energy boundary theory, {111}<112> grains with significant deformation energy storage are favorable for the coalescence and preferential growth of Goss grains. However, in this particular experiment, the strength of the {111}<112> texture was noticeably inferior to that observed in traditional 3% Si grain-oriented silicon steel. This discrepancy is related to the reduction in deformation energy storage caused by the warm rolling process.
Next, Figure 5a-c depicts the EBSD orientation maps of the decarburized sample. The main textures observed in the decarburized plate were {100}<011>, {111}<112>, and {113}<361>, as shown in Figure 5c. The orientation imaging map reveals a uniform microstructure in the decarburized plate, with an average grain size of 16.43 µm. Notably, the central layer exhibited a larger grain size ranging from 30 to 40 µm, accompanied by a dominant {100}<021> texture, as shown in Figure 5a,b.

Figure 6a-f displays the EBSD orientation maps obtained at various nitriding durations, while Table 1 presents the statistical data regarding the nitriding quantity and average grain size corresponding to the different nitriding durations. It is apparent that the grain size remained nearly unchanged as the nitriding time increased, with an average grain size ranging from 17 to 22 µm. This observation is ascribed to the short nitriding time and might also be influenced by the lower deformation energy storage resulting from the warm rolling of high-silicon steel strips.

Furthermore, through comparative analysis of the variations in nitriding time and texture, it is observed from Figure 6a-f that there was no discernible pattern of change in the microstructure and texture as the nitriding time increased. The prevailing texture was dominated by γ and {113}<361>. Notably, the {111}<112> texture component, which plays a favorable role in the subsequent abnormal growth of Goss grains, accounts for approximately 12%, lower than the 20% typically observed in traditional 3% Si nitrided steel. This indicates that the warm rolling system employed for high-silicon steel attenuates the {111}<112> texture component. The nitriding time has minimal effect on both grain size and texture, mainly affecting the nitrogen content within the sample.

Precipitation of Second-Phase Particles

The presence of second-phase particles in oriented silicon steel influences the development of strong Goss textures during secondary recrystallization. From the perspective of the metallurgical composition system, the intrinsic inhibitors for Cu-containing steel should include Cu and S precipitates. Figure 7 illustrates the precipitated particles within the hot-rolled plate. The findings indicate that the majority of particles present in the hot-rolled sheet consisted of Cu and S precipitates, with an average size of approximately 50 nm. Due to the adoption of a slab heating process at 1150 °C before hot rolling, the lower heating temperature caused a reduction in the amount of Cu and S atoms dissolved in the high-silicon matrix. Consequently, it greatly impaired the nucleation force during the subsequent cooling process, resulting in fewer precipitated particles.
Figure 8a-c presents the morphology of the precipitated particles observed at different nitriding times. The figure illustrates that, as the nitriding time increased, a greater number of nitrogen atoms entered the sample and reached deeper regions. After nitriding for 60 s, a quantity of second-phase particles precipitated on the surface of the test steel, as depicted in Figure 8a. With nitriding durations of 90 s and 120 s, the second-phase particles started to cluster together. The majority of these precipitated second-phase particles exhibited a regular square shape and a small size ranging from 20 to 50 nm, as evidenced in Figure 8b,c. Meanwhile, EDS spectrum analysis inferred that these small square particles consisted of Si3N4 and (Al,Si)N. However, Si3N4 has a tendency to change into (Al,Si)N under specific conditions due to its unstable nature. Moreover, the Si3N4 particles formed after nitriding are not evenly distributed through the thickness direction of the sample; instead, they predominantly concentrate in the surface layer. The conversion of Si3N4 into (Al,Si)N helps achieve a more uniform distribution of N atoms, and the resulting dispersed fine (Al,Si)N particles serve as inhibitors. This transformation occurs within the temperature range of 700-750 °C [19]. Since the nitriding temperature selected for this experiment was 750 °C, some Si3N4 particles had already undergone conversion to (Al,Si)N; however, a complete conversion can only be accomplished during the high-temperature annealing process.
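The inhibiting capacity of such fine, dispersed particles is commonly rationalized with the classical Zener pinning relation, Pz = 3fγ/(2r), where f is the particle volume fraction, γ the grain-boundary energy, and r the particle radius. The paper itself reports only particle sizes and nitrogen contents, so the numerical values in the sketch below are purely illustrative assumptions.

    def zener_pinning_pressure(f, gamma, r):
        """Classical Zener pinning pressure Pz = 3*f*gamma/(2*r), in Pa, from
        randomly distributed spherical particles of radius r (m) and volume
        fraction f acting on boundaries of energy gamma (J/m^2)."""
        return 3.0 * f * gamma / (2.0 * r)

    # Assumed values: f = 1e-3, gamma = 0.8 J/m^2, r = 15e-9 m (a 30 nm
    # particle diameter, within the 20-50 nm range observed in Figure 8).
    print(f"{zener_pinning_pressure(1e-3, 0.8, 15e-9):.2e} Pa")  # ~8e4 Pa

The relation at least makes the trends reported here plausible: longer nitriding produces more fine particles (larger f at small r), raising the pinning pressure and thereby delaying the onset of secondary recrystallization, as observed for the 120 s sample.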
Table 1 presents the nitrogen content of samples with different nitriding times, reflecting variations in the concentration of inhibitors. The nitrogen content fell within the range of 130 ppm to 246 ppm. The recommended N content in conventional nitrided steel typically ranges from 130 ppm to 240 ppm [20,21]. While the nitrogen content after nitriding for 60 s and 90 s fell within this range, the morphology of precipitates after nitriding for 60 s indicates that the particle density of precipitates was notably lower compared to that of conventional nitrided steel.

Microstructure and Magnetic Properties of the Final Annealed Plate

Figure 9a-d displays the final macrostructure of the annealed sheet. It is obvious from the figure that the samples not subjected to nitriding had an average grain size in the range of 2-3 mm, with texture types basically identical to those observed after primary recrystallization, as shown in Figure 9a. Based on these observations, it can be deduced that secondary recrystallization did not occur in the non-nitrided samples. In contrast, all samples subjected to nitriding exhibited varying degrees of secondary recrystallization. The sample exposed to 60 s of nitriding displayed a secondary recrystallization ratio exceeding 80%, with an average secondary grain size of 8 mm. However, fine grains were still discernible in specific localized regions of the sample, indicated by the green color in Figure 9b. The sample treated with 90 s of nitriding exhibited the most advanced secondary recrystallized structure and the most distinct Goss textures.
Moreover, the macroscopic images clearly reveal that all grains in this sample had undergone secondary recrystallization, with an average grain size ranging between 9 mm and 11 mm, as shown in Figure 9c. The secondary grains accounted for a lower proportion in the sample subjected to 120 s of nitriding. In addition, a significant number of fine grains existed in the finished sheet, as shown by the blue arrow in Figure 9d.

The magnetic properties at 50 Hz of the finished sheets made from grain-oriented high-silicon steel are presented in Table 2. The magnetic induction B8 after nitriding fell within the range of 1.462 T to 1.625 T, while B50 approaches the saturation induction Bs (1.80 T) of the Fe-6.5% Si alloy. Thus, it can be concluded that the samples subjected to 90 s of nitriding exhibited complete secondary recrystallization and the best magnetic properties.

Microstructure and Texture Evolution of Samples Extracted by Interrupted Annealing

Figure 10a-e shows the EBSD orientation maps of the sample that underwent 90 s of nitriding during interrupted high-temperature annealing. It can be seen from Figure 10a,b that no abnormal growth occurred during annealing at 850 °C or 900 °C. Under these temperature conditions, the sample was dominated by γ and {113}<361> grains, with a small number of Goss-oriented grains interspersed within the γ grains. At an annealing temperature of 950 °C, certain Goss grains situated in the upper and lower surface layers displayed a significantly accelerated growth rate compared to the neighboring grains, indicating the possibility of secondary recrystallization. At this juncture, the dimensions of these Goss grains exceeded 200 µm, as illustrated in Figure 10c. As the annealing temperature continued to rise, the abnormally grown Goss grains progressively extended through the thickness of the sheet, leading to the merging of adjacent smaller grains. Figure 10d illustrates that secondary recrystallization had already occurred. Upon reaching an annealing temperature of 1100 °C, secondary recrystallization was complete, forming a secondary recrystallized structure dominated by the Goss orientation, as shown in Figure 10e.
Figure 11a,b illustrates the misorientation distribution between Goss grains and neighboring grains during the interrupted annealing process at 900 °C and 950 °C, while Figure 12 displays the misorientation distribution at 1000 °C. It is evident that the frequency of grain boundary misorientations between Goss and adjacent grains falling within the 20° to 45° range was significantly higher compared to random grains. This situation creates favorable conditions for the subsequent abnormal growth of Goss grains. In comparison to the interrupted annealing at 900 °C, the sample annealed at 950 °C exhibited a greater number of grain boundaries falling within the 20° to 45° range, as depicted in Figure 11a,b. During the initial stages of secondary recrystallization, particularly at the annealing temperature of 1000 °C, the proportion of grain boundaries between Goss-oriented grains and their surroundings falling within the 20° to 45° range was notably high at 62.7%. This observation indicates the presence of more "free" grain boundaries surrounding the Goss grains, as demonstrated in Figure 12.
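The 62.7% figure quoted above is, in essence, a frequency statistic over the misorientation angles between Goss grains and their neighbors. Assuming such a list of boundary angles is available (for example, from a routine like misorientation_deg in the earlier sketch), the high-energy boundary fraction can be computed as below; the angle list is hypothetical example data, not the study's measurements.

    def he_boundary_fraction(angles_deg, lo=20.0, hi=45.0):
        """Fraction of boundary misorientation angles inside the
        high-energy (HE) 20-45 degree window."""
        if not angles_deg:
            raise ValueError("no boundary angles supplied")
        return sum(lo <= a <= hi for a in angles_deg) / len(angles_deg)

    # Hypothetical Goss-to-neighbor misorientation angles (degrees):
    angles = [12.0, 27.5, 33.1, 41.8, 52.0, 38.4, 22.9, 8.7]
    print(f"HE boundary fraction: {he_boundary_fraction(angles):.1%}")  # 62.5%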
Furthermore, the average grain size and percentage of {111} plane textures in the four different types of samples following interrupted annealing are shown in Figure 13. The variations in average grain size reveal that the primary recrystallization stage occurred during annealing in the range of 900-1000 °C, resulting in a slight increase in the average grain size for all four types of samples. During the subsequent secondary recrystallization phase within the 1000 °C to 1100 °C range, the average grain size in nitrided samples increased drastically, whereas the increase in samples without nitriding occurred at a much slower rate, corresponding to normal grain growth. When comparing the three types of nitrided samples, it can be seen that during the secondary recrystallization stage, the samples nitrided for 90 s showed the highest grain growth rate and the largest average grain size. Conversely, the sample nitrided for 120 s experienced a delayed onset of the secondary recrystallization temperature due to the strong pinning ability of the inhibitor, consequently leading to a slower grain growth rate. The samples nitrided for 60 s exhibited a grain growth rate that falls between the other two sample types, as shown in Figure 13a.
The secondary recrystallization in oriented silicon steel is achieved through the merging of Goss grains with the surrounding γ grains; thus, quantitative statistics on the percentage of {111} plane textures can also provide insights into why complete secondary recrystallization does not take place in all three sample types. The data analysis on the percentage of {111} plane textures suggests an upward trend in the {111} surface texture content of the sample without nitriding as the annealing temperature rises. Nonetheless, Goss-oriented grains did not exhibit a growth advantage, suggesting that achieving secondary recrystallization solely on the basis of the intrinsic inhibitors is challenging for the samples without nitriding. After annealing at 1000 °C, there was a declining trend in the {111} surface texture content in the three categories of nitrided samples. Therefore, it can be inferred that the inclusion of inhibitors played a pivotal role in stimulating the abnormal growth of Goss grains. The distinct rates of decline (indicated by the straight-line slopes) in the percentage of {111} plane textures among the three sample types signify differing growth rates of the Goss-oriented grains. These growth rates are associated with the varying capacity of the inhibitor to impede the surrounding boundaries, which is influenced by the quantity of the inhibitor present. In the samples subjected to 90 s of nitriding, the percentage of {111} plane textures exhibited a nearly linear decline, showing a better correlation between the number of inhibitors and the abnormal growth of Goss grains. This is beneficial for achieving complete secondary recrystallization of Goss grains. On the other hand, the decreasing trend of the {111} texture content in the sample nitrided for 60 s was less pronounced compared to the sample nitrided for 90 s, indicating an insufficient addition of inhibitors. Meanwhile, in the samples subjected to 120 s of nitriding, the percentage of {111} textures showed a minimal difference between 1000 °C and 1050 °C, exhibiting only a slight decrease. Hence, it is inferred that the sample nitrided for 120 s experienced a delay in the process of secondary recrystallization, which is attributed to the large number of inhibitors leading to excessive pinning at the grain boundaries.
Discussion

Achieving accurately Goss-oriented grains during high-temperature annealing is a crucial research objective, as this is the key characteristic of grain-oriented silicon steel, enabling the formation of a sharp {110}<001> texture through secondary recrystallization and ensuring favorable magnetic properties. The secondary recrystallization behavior of Goss textures in grain-oriented silicon steel has been a subject of debate for many years, leading to the proposal of two prominent theories: the high-energy (HE) grain boundary theory and the coincident site lattice (CSL) theory [22-26]. Both theories hold that special grain boundaries with enhanced mobility are the main drivers of the anomalous growth of Goss grains. However, while the HE grain boundary theory emphasizes the influence of grain boundaries with orientation differences between 20° and 45° on Goss secondary recrystallization, the CSL theory suggests that Goss grains exhibiting CSL grain boundary characteristics tend to possess fewer solute atoms clustered at the grain boundaries, resulting in weaker pinning forces. Therefore, these specific grain boundaries preferentially detach from pinning, causing abnormal growth of Goss grains. Notably, Σ9 grain boundaries with a 35° <110> orientation relationship exhibit enhanced mobility. Nevertheless, the presence of Σ9 grain boundaries was minimal, constituting less than 5% of the total boundaries in this experiment. This suggests that the influence of CSL boundaries on the abnormal growth behavior of Goss grains is limited. Conversely, HE grain boundaries emerge as the pivotal factor in facilitating the abnormal growth behavior of Goss grains. The frequency of distribution of Goss-oriented grains at HE grain boundaries in the range of 20°-45° was notably higher compared to that at random grain boundaries. This observation indicates that the preferential growth behavior of Goss-oriented grains is achieved by the high mobility provided
by HE grain boundaries.

Considering the primary recrystallization structure and texture, together with the impact of the inhibitors on secondary recrystallization behavior, it becomes apparent that the primary recrystallization stage governs the size of the secondary recrystallization seeds and grains. On the other hand, the inhibitors play a crucial role in determining both the inhibitory and driving forces of secondary recrystallization. Although the strength of the γ texture in grain-oriented high-silicon steel is weaker than that of traditional 3 wt% Si grain-oriented silicon steel, there is a higher distribution frequency of Goss-oriented grains at HE grain boundaries within the range of 20°-45°. In addition, the secondary recrystallization temperature exceeds 1000 °C, providing Goss-oriented grains with a favorable orientation environment and thermodynamic temperature environment, ultimately forming a secondary recrystallization structure with a sharp Goss texture. During the secondary recrystallization process of Goss-oriented grains, a preferential growth mechanism comes into play, whereby the high mobility of HE grain boundaries within the range of 20° to 45° enables the preferential growth of Goss-oriented grains.

Conclusions

This study comprehensively investigated the evolution of the microstructure, texture, and magnetic properties of grain-oriented 6.5% Si electrical steels formed by rolling with the incorporation of intrinsic inhibitors and additional inhibitors. This research highlights the crucial role of the nitriding quantity in achieving complete secondary recrystallization. In instances where additional nitrogen is absent, the intrinsic inhibitors alone do not lead to secondary recrystallization. However, when the nitriding duration is 90 s and the nitriding amount is 185 ppm, a complete secondary recrystallization structure with a strong Goss texture enables the finished products to have excellent magnetic properties. The preferential growth behavior of Goss-oriented grains primarily depends on the high mobility of HE grain boundaries. With the increase in annealing temperature, there is a gradual increase in the occurrence of HE grain boundaries within the range of 20°-45° that are associated with Goss grains. Furthermore, during the secondary recrystallization process at a temperature of 1000 °C, there is a substantial occurrence of HE grain boundaries within the 20°-45° range, accounting for 62.7%. This prevalence creates favorable conditions for the abnormal growth of Goss grains, ultimately leading to the formation of a secondary recrystallization structure dominated by a strong Goss texture. The present study's findings provide a novel and efficient way to optimize the recrystallization texture and improve the magnetic properties of 6.5% Si grain-oriented electrical steels.

Figure 2. The schematic diagram of the preparation process.

Figure 3. EBSD orientation maps of hot-rolled sheets in the lateral section. (a) EBSD IPF map; (b) several texture components of hot-rolled sheets; (c) the texture (φ2 = 45° section of ODFs) in the upper surface layer of hot-rolled sheets; (d) the texture (φ2 = 45° section of ODFs) in the center layer of hot-rolled sheets; (e) the texture (φ2 = 45° section of ODFs) in the lower surface layer of hot-rolled sheets.
Figure 4. EBSD orientation maps of cold-rolled sheets in the lateral section: (a) EBSD IPF map; (b) several texture components colored in the orientation maps of the cold-rolled sheet; (c) the texture (φ2 = 45° section of ODFs) of the cold-rolled sheet.
Figure 13. Statistics of average grain size and {111} texture percentage at different annealing times. (a) Average grain size at different annealing times; (b) {111} texture percentage at different annealing times.

Table 1. Statistics of nitriding amount and average grain size at different nitriding times.

Table 2. Magnetic properties of the final annealed sheet.
2023-10-20T15:27:06.698Z
2023-10-01T00:00:00.000
{ "year": 2023, "sha1": "2b9b8c0f67063c8639d97c09aa3f2360589a4fab", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1944/16/20/6731/pdf?version=1697544786", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "37612a0828b2d62808121446f83d840302f46cf4", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
246333672
pes2o/s2orc
v3-fos-license
Experiences of nurses with an innovative digital diary intervention in the intensive care unit: A qualitative exploration

Introduction: Diaries have been used regularly in various intensive care units (ICUs) in international settings. Hard-copy diaries written by relatives became impractical during the COVID-19 pandemic due to ICU visiting restrictions and infection control considerations. The implementation of a web-based application, named the "Post-ICU" diary, offered relatives the ability to write collaboratively in a digital diary, to easily upload photos, videos, and audio clips, and to feel engaged with the patient at a safe distance. In addition, it allowed nurses to easily provide up-to-date information. The aim of this pilot study was to explore the experiences of ICU nurses with the implementation process and application of the Post-ICU diary.

Methods: A multicentre qualitative design with focus group interviews with ICU nurses was used in November 2020. Interview data were audiotaped and transcribed verbatim, and a thematic analysis was then performed to categorize the data.

Results: Participants from three hospitals (n = 14), 57% of whom were women, with a mean age of 40.6 years, described their experiences with the Post-ICU diary. The following themes emerged: implementation process, COVID-19, integration, and motivation. The results showed that ICU nurses perceived the Post-ICU diary to be applicable in daily care and endorsed the added value of the digital Post-ICU diary as a new opportunity to improve interhuman connectedness. However, the nurses also experienced barriers, such as non-user-friendly access, lack of time, and hesitance about writing short messages.

Conclusion: ICU nurses reported that the Post-ICU diary had added value for patients and their relatives. However, in the beginning, they also experienced barriers such as lack of time, insufficient integration with their own work processes, and challenges regarding writing short messages themselves. For structural embedding of the intervention, tailored strategies are needed to support ICU nurses in using this innovative Post-ICU diary.

Introduction

The application of diaries in the clinical practice of intensive care units (ICUs) is a valuable intervention for the prevention of long-term mental health-related problems in patients and their relatives (Ullman et al., 2015; Nielsen and Angel, 2016). As a consequence of ICU admission, symptoms of post-intensive care syndrome (PICS), including physical, cognitive, psychological, and social problems, may occur in up to 50% of ICU survivors (Needham et al., 2012; Harvey and Davidson, 2016; Geense et al., 2021). In addition, relatives (partners, family, friends) can suffer from PICS-family (PICS-F), which includes symptoms of posttraumatic stress (experienced by 30-42% of respondents), anxiety (21-56%), and depression (20-34%) (van Beusekom et al., 2015; Inoue et al., 2019). These symptoms and percentages were expected to increase due to the undesirable physical distance caused by isolation and the adjusted ICU policies in family-centred care necessitated by the COVID-19 pandemic (Murthy et al., 2020; Robert et al., 2020; Hart et al., 2020; Hwang et al., 2021). These restrictions have affected COVID-19-positive patients as well as regular ICU patients from March 2020 to the present, resulting in a lack of face-to-face meetings and normal human contact (Wakam et al., 2020).
A mobile app, the digital "Post-ICU" diary, was developed and implemented through a fast-tracked process as paper diaries were no longer accessible due to visitation restrictions during COVID-19. It was expected that the digital diary would have the potential to ameliorate the increasing frequency and intensity of PICS and PICS-F. Diaries have been used regularly in various hospitals in international settings. In the Netherlands, it was found that 87% of ICUs provided diaries, which were mostly written by the patients' relatives (Hendriks et al., 2019). These were usually paper versions, sometimes accompanied by (outdated) brochures and other informational materials (Aitken et al., 2016; Garrouste-Orgeas et al., 2014). Reading a diary can have positive effects for ICU patients coping with a traumatic aftermath of the ICU period (Barreto et al., 2019; Nydahl et al., 2020). Additionally, keeping a diary might effectively support the mental health status of relatives by helping them to feel useful, as the process provides emotional support during the recovery of their loved one (McIlroy et al., 2019; Geense et al., 2019).

Visiting limitations and worries about infection control during COVID-19 limited the use of diaries written by relatives (Jones, 2021). The implementation of a digital ICU diary offers relatives the ability to stay engaged with the patient at a safe distance, to easily upload photos, videos and audio clips, and to write collaboratively with other relatives in the digital diary. In addition, it allows nurses to easily add up-to-date information. In this study, the patient's primary contact person was invited to start the Post-ICU diary, and he or she provided authorization of responsibility in line with the privacy regulations according to Dutch law. The Post-ICU diary was made available on any connected device with a display (e.g., smartphone, tablet). Relatives could contribute short messages similar to WhatsApp functionality or write longer stories about what happened in the personal situation of the patient. Fig. 1 provides an overview of the screen layouts of the Post-ICU diary.

Upon an invitation from the patient's primary contact, ICU nurses could also write short messages in the patient's Post-ICU diary. These voluntary messages, which most nurses wrote in addition to their regular daily activities, could be a valuable contribution to the relatives' description of the situation and medical circumstances each day for the ICU patient. Through this messaging and interaction, nurses could become more aware of the importance of preventing PICS and PICS-F (Holme et al., 2020). Furthermore, they could experience the use of a diary as a holistic intervention, leading to more personalized care (Johansson et al., 2019), as they could become aware of the importance of giving words and meaning to the period when the patient was critically ill.

For this study, applicability was defined as a combination of the usability, integration and appreciation of the Post-ICU diary. Usability could be described as the accessibility of the web based application and the ease of use, including the extent to which the Post-ICU diary was used and the extent to which the ICU nurses were prepared to maintain it as part of their work processes. Integration referred to the incorporation of the new intervention in ICU nursing care, while appreciation reflected the positive or negative judgements encountered in regard to the Post-ICU diary. Several strategies were used to implement the diary.
First, informational strategies were used, such as a weekly team newsletter to announce and encourage the use of the Post-ICU diary; posters in the nurse station and in the waiting room for relatives; handouts and user guidelines; and an online kick-off with the developers of the Post-ICU diary to provide the instructions for logging in. Second, educational strategies were used, such as clinical lessons to learn how to use the Post-ICU diary; video material to enhance the involvement of the nurses; and onsite explanations during working time to support the introduction of the Post-ICU diary. Third, motivational strategies, such as family support staff and champions, were used to inspire the ICU team to use the Post-ICU diary. In addition, persuasive prompts in the electronic patient dossier were used to persuade and inspire relatives and nurses to write regularly. Because the intervention was developed and implemented in a short time period, a pilot study was conducted to evaluate the experiences of the nurses. Although the needs and preferences of ICU survivors, relatives and ICU professionals were considered in the development of the Post-ICU diary, the applicability of this intervention from their perspectives was unknown. Therefore, the aim of this pilot study was to explore the experiences of ICU nurses with the implementation process and applicability of the Post-ICU diary.

Methods
This study was conducted as an explorative pilot study prior to national scale-up of the Post-ICU diary. It was intended to pre-test the study materials and procedures for broader inquiry in the near future. The study question was: 'What is the applicability of the Post-ICU diary in daily practice for ICU nurses?' The consolidated criteria for reporting qualitative studies (COREQ), a 32-item checklist (Tong et al., 2007), were used to finalize reporting of the study methods in detail (Supplemental file 1).

Study design
A qualitative multicentre design, including focus group and individual interviews with ICU nurses, was applied from November 2020 to January 2021, when the second wave of the COVID-19 pandemic was occurring. Focus groups were the preferred method of data collection for this study because moderated interaction helps participants to articulate their personal experiences, beliefs, perceptions and attitudes around the subject, which may be especially beneficial for those who have little experience with the Post-ICU diary (Nyumba et al., 2018). One advantage of the focus group data collection method is that the interaction between the participants can enrich discussions on the topic, which cannot be achieved with individual interviews. The study design ensured that all viewpoints of participants were included, with equal importance assigned to multiple perspectives (Hall, 2004).

Study setting
Three ICUs in the Netherlands, including an academic hospital and two tertiary teaching hospitals, actively participated in the study; thus, the study covered a variety of patients and medical treatments. Experiences with the Post-ICU diary differed across the three study settings, with the length of the use of the diary varying from three to eight months at study onset.

Study population
The population consisted of a nonrandom sample of nurses working in the included ICUs. They were invited in collaboration with the nurse managers, who had no role in the study preparation, aims and methods, nor in the analysis of the results.
One nurse manager participated in a focus group interview, and one project leader had an individual interview with the researcher. The meetings were announced among the team members, and the managers allowed their staff to join during work time. Participants were informed of the study objectives, the duration of the interviews, expectations for their contribution, and the background of the researchers. At the start of the interview, informed consent was given in written or oral form. All participants were offered a transcription of the interview to review so that they could provide their comments and member check the implications for practice.

Ethical considerations
The study was approved by the Daily Board of the Medical Ethics Committee Erasmus MC of Rotterdam, The Netherlands, as the coordinating centre institutional review board (MEC-2020-0640). The committee reviewed the research proposal and decided that the rules laid down in the Medical Research Involving Human Subjects Act (also known by its Dutch abbreviation WMO) do not apply to this research proposal. The study was conducted according to the principles of the Declaration of Helsinki (64th WMA General Assembly, Fortaleza, Brazil, October 2013) and in accordance with the Medical Research Involving Human Subjects Act. Participants could leave the study at any time for any reason if they wished to do so, without any consequences.

Study procedures
Qualitative data were collected through two focus group interviews in two study settings and three individual interviews with ICU nurses from a third setting. There was no mix of participants across the ICUs. These meetings would have ideally been held in person (to observe nonverbal attitudes and facial expressions); however, due to the COVID-19 measures and social distancing, it was not possible for all participants to be physically present or to organize themselves for a group interview. In those cases, the interviews were carried out via individual appointments using video calling technology. The inclusion criterion was to be an ICU nurse at one of the participating hospitals. The exclusion criterion was complete unfamiliarity with the Post-ICU diary. Two researchers with expertise in qualitative research and ICU care led the two focus group interviews in the workplace with five (by TH and BS) and six (by TH and MvM) participants. Three individual interviews were conducted online by TH, with the participants choosing to participate either from home or from the workplace. No one refused to participate. Prior to the interviews, the research group created a topic list and interview guide based on the literature and their own experiences to structure the meetings (Supplemental file 2). All interviews took approximately 45 minutes. Fieldnotes were used for analysis and reflection and to add more specific interview questions, in order to gain an in-depth understanding of the phenomenon in different contexts. Demographic data were collected with a two-minute survey at the beginning of the interview. In addition, all participants were asked to give a score from 1 (not at all) to 10 (excellent) on how relevant and how useful they assessed the Post-ICU diary to be in daily practice. This was a self-composed, non-validated numerical rating score.
None of the participants was interested in receiving a transcript or summary of the interviews.

Data management and analyses
Thematic analysis was used as a foundational method that provided clear steps to categorize and report the data that were found (Braun and Clarke, 2006). This method describes the data set in rich detail and investigates patterns of response or meaning within the data set. To explore predominant themes, an accurate reflection of the content of the entire data set was needed. As a consequence, some depth and complexity were necessarily lost (Braun and Clarke, 2006). An inductive analysis was applied to find emergent themes outside the pre-existing theory or the researchers' preconceptions. Finally, a semantic approach was used to identify themes within the explicit meanings of the data and without assigning implied meanings beyond the actual words used by the participants. Interview data were audiotaped and transcribed verbatim. Two researchers (TH and MvM) read the transcripts (Step 1: familiarise with the data). Each developed a structured analysis framework that consisted of preliminary codes (Step 2: generate initial codes). After that, they compared their frameworks to reach consensus on codes and themes. Next, one researcher (TH) coded the transcripts line by line according to this framework in the software programme NVivo12© (Step 3: search for themes). When coding was finished and the code 'Other' had been used, the two researchers discussed the coded texts and categorized them into a new or existing code best reflecting the contents of the otherwise uncategorized text fragment (Step 4: review themes). After coding was finished, the cohesion and interrelations between codes were analysed by the two researchers through mind mapping (Step 5: define and relate themes). The principal investigators had access to these data, and the data will be stored for fifteen years.

Results
Fourteen respondents, of whom 57% were women and with a mean age of 40.6 years (Table 1), participated in the interviews across three different study settings. All were familiar with the Post-ICU diary. Four participants had no hands-on experience with the Post-ICU diary; however, they had been exposed to the implementation strategies that were applied in the ICU. Six participants had used the Post-ICU diary 1 to 5 times, two had used it 10 to 15 times and two had used it more than 16 times. The diary was used in all shifts (day, evening and night). The mean scores for applicability and relevance of the Post-ICU diary were 7.3 and 8.4, respectively. The following four themes were found with thematic analyses: implementation process, COVID-19, integration, and motivation (Fig. 2).

Implementation process
The first theme encompassed the respondents' experiences with the implementation process; they reported how they became familiar with the Post-ICU diary and how it may have contributed to their professional work processes. Divergent opinions were reported, both between settings and among individual ICU nurses from the same setting. Although a clear introduction and educational materials were provided by project leaders and team managers, the initial use of the diary was experienced as difficult by the respondents. One respondent felt insufficiently supported during the implementation process, even though the supporting implementation strategies were applied. 'How nice that a written guideline was made, as it wasn't clear in the beginning [what to write in the diary].'
(Respondent 9)

The fast-tracked development and introduction of the digital Post-ICU diary resulted in misunderstandings that generated resistance to use of the diary. Consequently, some nurses felt apathetic to new information, leading to differences in the level of familiarity among the ICU nurses. 'I think I will speak for myself, […] I really am information tired.' (Respondent 9)

COVID-19
This second theme described the contextual particularities of implementing the Post-ICU diary during the COVID-19 pandemic. The pandemic profoundly affected the ICU work environment and daily routine of the ICU nurses. It had a dual effect on the implementation process of the Post-ICU diary. On the one hand, the pandemic led to a high workload in the ICU, which reduced the available time for adequate introduction and support. On the other hand, the diary offered added value when relatives were worried, waiting at home due to visiting restrictions and suffering from feelings of physical distance. 'And yes, we had the feeling that "the family cannot visit the patient at all, so we have to do something to capture what is going on".' (Respondent 13)

Integration
The integration of the Post-ICU diary referred to the extent to which, and how, the diary was used by all participants in all settings. This theme also included the facilitating and hindering factors concerning the provision of the diary and the writing of short messages by nurses for integration into daily practice. Integration included three subthemes: user friendliness, work process in offering the diary, and work process in writing in the diary.

User friendliness
Respondents indicated that logging in was not as easy and fast as they would have liked. This was mainly the result of privacy and legal data protections; however, it created a barrier in the ease of use of the Post-ICU diary. 'If it is just one click, that would be super motivating for me, because then I could just write [in the diary]. However, now it is not, so I believe it is too difficult.' (Respondent 1) The user friendliness of the Post-ICU diary was a 'work in progress', and the respondents suggested the need for technical adjustments to the login procedures. Creating an account, logging in to a separate programme and following several subsequent steps were barriers in the beginning. More specifically, if the Post-ICU web application had been connected to their hospital account, the extra step of logging in would have been redundant.

Work process in offering the diary
The ICU nurses' role in initiating and encouraging diary use was essential to ensure that relatives understood and used the Post-ICU diary. The final choice regarding whether to use the diary was up to the relatives; however, ICU nurses played an important role in encouraging them to use the diary. All respondents agreed that the Post-ICU diary should be offered and started immediately upon admission because the first days in the ICU are crucial and impactful for the patient and his or her relatives. Some ICU nurses were reluctant to offer the Post-ICU diary due to the time it might take to provide the corresponding explanation, and the possible questions they could get in return. Others reported that the time investment was minimal and that it had become a routine practice in their ICU.
'They sometimes get a lot of information that also needs to be processed […] then you actually have to inquire the very next day.'

The completion of consent forms delayed the diary initiation process somewhat, partly due to visitation restrictions. One of the respondents found an alternative way. 'Recently, I had a family from a transferred patient who lived far away, and then we had an oral agreement on the juridical responsibility via telephone. Thereafter we sent the form via post to get it signed, but the diary had already started by then. This was in collaboration with the team management, because officially we should have waited until we had the signature.' (Respondent 11)

Work process in writing in the diary
Respondents acknowledged that writing in the Post-ICU diary should be part of their own daily work, becoming care as usual. Thinking about the content of what they could write was a learning process, including consideration of relevant events, sensitive privacy aspects, and use of understandable language for the relatives. Appreciation and gratitude from relatives contributed positively to the nurses' willingness to write short messages. 'If I post a picture of a patient sitting in the chair for the first time and a few hours later I have the daughter on the phone saying: "How nice that he is sitting in a chair!", then you see an immediate effect.' (Respondent 12) At the beginning of the Post-ICU diary, many ICU nurses lacked confidence in their own writing skills; they did not always know what to write or how to write the messages, and they worried about others misunderstanding their messages. The respondents with little experience in writing mentioned that they would write about 'special events' such as 'you opened your eyes today after 14 days in the ICU', and they had difficulty finding something to write when 'nothing special' happened. 'Writing for the sake of writing has no added value. But when you give a short recap of the past four nightshifts for example, then you can leave a mark in my opinion.' (Respondent 9) Respondents with more experience also wrote about daily events such as 'I washed your hair today' and 'I sat at your bedside for a while'. Some also wrote more personal messages. 'We have a "get-to-know-me" poster with personal information about the patient that the relatives provide. This way, we can write more personal messages like: "We know you liked this type of music and today I played you a song by your favourite artist".' (Respondent 13)

Motivation
The fourth theme concerned nurses' personal motivation, which was partly shaped by the environment and organization. It included four subthemes: attitude, culture, feedback, and added value.

Attitude of the individual ICU nurse
All respondents believed that the diary should be distributed to the relatives immediately upon admission of the patient to the ICU. The respondents with little experience with the diary were positive about starting to use it. They felt that clear coordination and expectation management about writing in the Post-ICU diary, among themselves and with relatives, was important. 'And also not to make the expectation with the family too high that we are going to write in it daily.' (Respondent 1) Respondents also compared themselves to colleagues in that respect and felt pressure to write similar-length messages. '[…] But in general you do see very short to medium-length stories, but I mean the longer the story gets, the more that they start to expect from me.'
(Respondent 8)

Culture in the work environment
The respondents were not asked to be accountable for distributing or writing in the Post-ICU diary, partly because this was not directly linked to their predominant nursing tasks. The ICU nurses mainly focused on the treatment and care of the patient during admission and less on emotional recovery afterwards. '[…] for me it is important in the moment, how is it going at the bedside? And not, how will it go later? […] That mindset has to change.' (Respondent 5)

Feedback from relatives
Sometimes, the respondents received direct feedback from relatives when they had written a message in the Post-ICU diary. Some respondents mentioned that they found it motivating when they experienced interaction through the diary with relatives of the patient. 'With some patients you also have the children actively responding […] about what you wrote and then kind of interacting. I do like that.' (Respondent 11)

Added value
All respondents were convinced of the potential added value of a diary for the patient and relatives. However, not all were positive about their own contribution to the diary. 'It's actually a bit difficult now with the time you have, but family can do that, they [the relatives] just have all the time now.' (Respondent 2) The respondents were convinced that the digital Post-ICU diary offered enough added value that the organization could not return to the paper version when COVID-19 no longer played a role. 'Basically it does work so well and the responses are also so positive that we are actually not going to use the paper diary anymore.' (Respondent 14) All respondents recognized the added value in the short term for relatives and in the long term for preventing health-related impairments in ICU survivors. They felt that human interaction with relatives was stimulated by the use of the Post-ICU diary, specifically in situations of complete visitor restrictions. This situation inspired them to take leadership in advocating the introduction and use of the Post-ICU diary.

Discussion
This qualitative multicentre study explored ICU nurses' experiences with the Post-ICU diary and its applicability in their daily practice, and it highlighted their opinions on the added value for patients and relatives. Most nurses quickly embraced the intervention as a positive innovation. However, they also experienced barriers such as lack of time, insufficient integration with their own work processes, and challenges regarding writing short messages themselves. These hindering factors are similar to those observed in previous studies (Kiwanuka et al., 2019; van Mol et al., 2017). Offering the diary was considered more important than writing messages for the patients. An interesting feedback loop was identified, whereby nurses reviewed the entries of relatives to check whether messages from doctors and nurses were correctly understood and to ensure that relatives were coping sufficiently. Other researchers have reported that the ICU diary could stimulate contemplation and professional development, as nurses reflected on their thoughts, feelings, and actions while writing in the diary (Johansson et al., 2019). This kind of statement was not found in the current study. Collegial support and interaction with relatives were facilitating factors for use of the Post-ICU diary. The results of this pilot study provided an evaluation of the current implementation process and direction for the scalability of the Post-ICU diary.
Clear differences in implementation phases among the centres emerged; at one ICU, the implementation of the diary was in its infancy, while in another, it was already standard to offer the diary. The ICUs also differed in culture and in motivation to put effort into a policy of family-centred care. One of the centres assigned the handling of the Post-ICU diary with relatives to a special family guidance team, which motivated ICU nurses to write on a daily basis. This team regularly supported relatives with information about the facilities of the hospitals, organized appointments with the medical staff, and provided psychosocial guidance. Champions such as these family guidance team members were a valuable strategy for implementing the Post-ICU diary (Curtis et al., 2016). For structural embedding of the intervention, tailored implementation strategies are needed to support ICU nurses in using the innovative Post-ICU diary (Wensing et al., 2011).

Focus groups and individual interviews involve different interactions. It would have been ideal to conduct a third focus group; this was unfortunately prevented by a high workload due to an upcoming new COVID-19 surge. However, in the focus group interviews that were held, the participants were prompted through questioning. The participants reacted to each other, in agreement or disagreement, but mostly the participants answered the interviewer regarding their opinion on the proposed item. The same technique, questions and sequence were followed in the individual interviews. Following this method and using thematic analyses supported the reporting of the results. Because of the explorative character of this study, we aimed to learn whether the participants were aware of the Post-ICU diary. Even if they had little experience themselves in using the diary, it was important to determine why they did not make use of it. Since the aim was to evaluate the implementation process and usability of the Post-ICU diary, inexperienced participants made valuable contributions.

In general, the psychological impact of quarantine measures might include posttraumatic stress symptoms, confusion, and anger (Brooks et al., 2020). The presence of relatives could be supported by nonphysical methods to reduce the negative impact of the pandemic for ICU patients. In previous studies, video calling was introduced to facilitate contact between relatives and patients as well as communication with professionals (Hart et al., 2020; Negro et al., 2020). Although appreciated by all stakeholders, it seemed insufficient or difficult to carry out because patients were often physically and cognitively incapable of participation. Privacy and functionality considerations were reported to limit the utility of commercially available video communication tools (Montauk and Kuhl, 2020). In addition, health care professionals lacked materials and time to support online connections. The development of the Post-ICU diary addressed such problems in collaboration with privacy officers and legal support from all participating hospitals. The overall process proved robust and safe. Traditionally, healthcare entities have been reluctant to embrace innovation, probably due to the need for safety and excellent quality. Digital developments with the patient and their self-care in mind are accelerating in the 21st century. This necessitates a cultural transformation to a technological/scientific approach.
Therefore, considering the needs and perceptions of the professionals involved and supporting their adaptation to new methods, interventions or features is essential to progress in quality ICU care.

Strengths and limitations
The strength of the study was the qualitative design implemented in several hospitals, with broad exploration of opinions and experiences until data saturation was reached. Thematic analysis provided in-depth insights into ICU nurses' perspectives on the applicability and implementation process of a digital diary intervention in the ICU, which was a timely innovation in response to the challenges of the COVID-19 pandemic. There were several limitations to this study. First, there was likely a response bias in the overall evaluation, due to a high proportion of early adopters of the Post-ICU diary who participated. Second, related to the difficulty of doing research during the COVID-19 pandemic, a convenience sample of participants was included instead of the planned sampling of early and late adopters of the innovation. Third, no triangulation of data was performed, and conclusions were drawn based on the literature and qualitative results. However, data saturation was reached, and the results resonated with those of similar studies, suggesting the accuracy of the results reported. On the topics of technique and writing in the diary, the last two interviews did not reveal new information. Fourth, using both focus groups and individual interviews could have influenced the findings. Finally, patients and their relatives were not included in this study. Thus, this study was not an overall holistic evaluation of the Post-ICU diary; further research addressing these perspectives is ongoing.

Conclusion
ICU nurses endorsed the added value of the digital Post-ICU diary for patients and their relatives. However, they also experienced barriers to diary use, such as a lack of time, insufficient integration with their own work processes, and challenges regarding writing short messages themselves. Although most of the experienced barriers were resolved during the first half year after introduction, tailored strategies are needed to support ICU nurses in using the innovative Post-ICU diary.

Consent for publication
Not applicable.

Availability of data and materials
Anonymized data gathered and analysed during the current study are not publicly available due to legal and ethical restrictions. These, as well as text and photo material of the developed intervention, can be requested from the corresponding author. Materials described in the manuscript, including all relevant raw data, will be freely available upon reasonable request to any scientist wishing to use them for noncommercial purposes.

Authors' contributions
MvM and MB jointly designed the study, raised funding and established the development of the study protocol. MvM, TH, BS and RT prepared the study materials and gathered the data of both sub-studies. TH and MvM produced the first draft of the article. All authors (TH, BS, RT, MB and MvM) critically revised the content of the manuscript and have read and approved the final version.

Declarations of interest
The authors declare no conflict of interest.
Comparison among some tenderization processing on Maremmana meat

Abstract
Tenderness is the most important characteristic in defining the acceptability of products by consumers. In this paper, three different methods, calcium infusion (Ca), very fast chilling (VFC), and pelvic suspension, with carcass sides hung from the aitch bone (PS) or from the Achilles tendon (AT), were applied in the pre-rigor phase to Maremmana meat. Ca injection gave the lowest shear force on cooked meat compared to the control group (5.09 vs. 16.33 kg), while the VFC group showed intermediate values. Pelvic hanging showed significant differences in shear force and sarcomere length in the Longissimus thoracis muscle (4.13 vs. 8.11 kg for WBS and 2.04 vs. 1.63 µm for sarcomere length, in PS and AT respectively); furthermore, in the Biceps femoralis we found a lower collagen content (4.33 vs. 4.79 g/100 g for PS and AT). In conclusion, these methods could influence tenderization processes, even if some processes that could lower the effect of these practices are still unclear.

Introduction
Tenderness is the most important characteristic of meat quality, as reported in many studies. This characteristic is modified by several factors such as age, livestock management and breed (Dransfield, 1994). Moreover, in many studies extensive rearing was observed to produce tougher meat than intensive rearing; hence that meat is generally less accepted by consumers. Some Italian breeds such as the Maremmana have high meat nutritional quality, but their meat is penalized because it is usually rather tough (Gigli et al., 2000). In order to improve this aspect, and exploiting the knowledge on proteolytic activity during ageing, different techniques have been tested on carcasses during the pre-rigor mortis phase in the last ten years. These techniques have a mechanical and/or biochemical action. A particular mechanical practice is suspension of beef carcasses by the pelvic girdle (tenderstretch), which during rigor produces a decrease in myofibril shortening and a structural change in the connective tissue (Ahnström et al., 2006). Another one, with biochemical action, is post mortem CaCl2 injection, which leads to an advanced and increased activity of Ca-dependent enzymes (Gerelt et al., 2002); and, finally, very fast chilling (VFC). Fast cooling produces a strong muscle contraction, with release of calcium into the myofibrils and a consequent activation of proteolytic enzymes (Van Moeseke et al., 2001), which exerts a mechanical and biochemical action on the myofibrillar structure. The aim of this paper is to compare different tenderization methods during ageing to decrease hardness in Maremmana breed meat.

Material and methods
Six Maremmana young bulls were slaughtered at about 590 days old with a carcass weight of 313 ± 11.40 kg. After slaughtering, the left side was suspended from the aitch bone (Pelvic Suspension, PS) for 24 hours, while the right side was suspended from the Achilles tendon (AT). Thirty minutes after slaughter, the Longissimus thoracis (Lt) was taken from the right side (between the 8th and 13th rib) and subdivided into three portions for three different treatments: "C" (Control), traditional ageing; "Ca" (CaCl2 infusion), injecting into the muscle 9% of meat weight (wt/wt) of 300 mM CaCl2; "VFC" (Very Fast Chilling), obtained by putting samples into a freezer at -70°C and storing them until the core reached a temperature of 1°C.
Both carcass sides (PS and AT) were aged for 8 days and stored at 2°C ± 1°C. At dissection, the Longissimus thoracis (Lt) at the 7th rib and the Biceps femoralis (Bf) were removed, and physical analyses were determined at 8 days only; the three treatments were instead studied at three different times: 24 hours, 5 days and 8 days. On each sample, the following analyses for physical and chemical characteristics were performed: temperature and pH values, measured by penetration into the meat (monitored for 8 hours after slaughtering at 1-hour steps); drip loss, by the gravimetric method on raw meat stored at 4°C for 8 days (Chrystall et al., 1994; Barton-Gade et al., 1993); cooking loss, obtained by cooking vacuum-packed samples in polyethylene bags in a water bath at 75°C for 50 minutes; shear force on cooked meat (WBS) on 6 samples (1x1 cm cross section and 2 cm long), using a Warner-Bratzler shear apparatus on an Instron 5543; sarcomere length, by measuring ten fibres of raw meat under an immersion optical microscope; total and insoluble collagen, by hydroxyproline quantification (Kolar, 1990). The statistical analysis of variance was performed with the GLM procedure of SAS software (SAS, 1985) using a bifactorial model (treatment and muscle for the tenderstretch test, treatment and time for the other tests).

Results and conclusions
The Ca group showed a more rapid pH fall than the other groups (Figure 1) during the first 2 h post mortem, even if at 8 h this group reached a value similar to that of the others. In fact, at 24 h the ultimate pH did not show significant differences between groups, as also reported by Jaturasitha et al. (2004). The rapid chilling (VFC) system decreased the sample temperature from 35°C to 7.5°C during the first hour of treatment, whilst the other groups reached this temperature in 8 h. A similar trend was found by King et al. (2003). Shear force on cooked meat (Table 1) showed an evident calcium infusion effect during ageing, with the lowest values at every time point (5.09 kg on average over the three times vs. 16.33 kg and 9.37 kg for the C and VFC groups respectively), because the greater amount of Ca produced a greater activation of calcium-dependent enzymes during the first day of ageing (Gerelt et al., 2002). No differences between groups were found in cooking loss, whilst drip loss at 24 h was highest in Ca samples, because the meat lost the liquid introduced by the CaCl2 injection; the VFC group always showed an intermediate value compared to the other groups. Total collagen did not show significant differences between the three treatments during ageing, whilst insoluble collagen differed significantly during ageing between the VFC group and the control, with the Ca treatment showing intermediate values. The highest value of insoluble collagen probably depends on the cold contraction that occurred during the first hours, producing more links in the collagen. Sarcomere length in fact showed a higher value at 24 h in the C group than in the others (1.62 vs. 1.43 µm), but no differences remained at 8 days from slaughtering. Pelvic suspension (Table 2) produced significant differences for WBS in cooked meat and for sarcomere length in the Lt muscle, which showed the lowest value for WBS (4.13 vs. 8.11 kg) and the highest value for sarcomere length (2.04 vs. 1.63 µm), as reported in Ahnström et al. (2006). In the Bf muscle, significant differences were found in total and insoluble collagen, where the PS treatment showed the lowest values.
This trend could be explained by the mechanical action of stretching, which probably breaks the collagen links. In conclusion, all these methods, through different actions, have positive effects on tenderization during short ageing, but there are still many unclear processes that might lower the efficiency of these procedures during longer ageing periods.
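As a minimal sketch of the bifactorial analysis of variance described above: the original analysis used the GLM procedure of SAS; the Python translation below, with hypothetical column names and invented values loosely matching the reported group means, is an assumption, not the authors' code.

```python
# Hypothetical illustration of a two-factor ANOVA (treatment x ageing time)
# on Warner-Bratzler shear force; values are invented for demonstration only.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "treatment": ["C", "Ca", "VFC"] * 6,
    "ageing_h": [24, 24, 24, 120, 120, 120, 192, 192, 192] * 2,
    "wbs_kg": [16.1, 5.3, 9.8, 16.5, 5.0, 9.2, 16.4, 4.9, 9.1,
               16.2, 5.2, 9.5, 16.6, 5.1, 9.4, 16.3, 5.0, 9.3],
})

# Fit the bifactorial linear model with interaction, analogous to SAS PROC GLM,
# and print the type II ANOVA table.
model = ols("wbs_kg ~ C(treatment) * C(ageing_h)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

The interaction term lets the model test whether the treatment effect changes across ageing times, which is what the comparison of the three sampling points (24 h, 5 d, 8 d) requires.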
Characterization and Modification of Activated Carbon Generated from Annogeissus Leiocarpus

Abstract
Activated carbon (AC) is a versatile adsorbent that is used in the treatment of wastewater, colour and odour removal, and CO2 capture. Annogeissus leiocarpus is one of the abundant agricultural precursors that can be used for the production of activated carbon. Characterization was done to investigate some proximate parameters. The modifications were made by soaking AC in 40% H2SO4 and 40% NaHCO3 for 24 hours in the ratio of 1:3 w/v. FT-IR and SEM analyses were conducted for surface functional groups and morphology, respectively. The results of this study revealed that the activated carbon produced possessed a high yield, low ash content, low burn off, low moisture content, average bulk density and large pore volume. The results from the FT-IR analysis identified the appearance and disappearance of carbonyl and hydroxyl groups, which contributed to the creation of more adsorptive sites for the adsorption process. The SEM results indicated the development of pores all over the surface of the adsorbents, with acid modified activated carbon (AMAC/H2SO4) having the highest pore distribution, followed by base modified activated carbon (BMAC/NaHCO3) and finally ordinary activated carbon (AC). The results suggest that the modification of AC using acid and base can significantly enhance the surface properties, which improves the adsorptive properties of the activated carbon produced and enhances its adsorption potential for wastewater treatment.

Introduction
Activated carbons are the most powerful adsorbents known (Zauro et al., 2018). Activated carbon is basically a solid material consisting mainly of pure carbon. A characteristic feature is its porous structure and the resulting immense surface area (Meena et al., 2005). Due to its exceptional adsorption qualities, activated carbon is widely used in solution purification, decolorization, and removal of odour at low cost and with superior efficiency (Onawumi et al., 2021). Activated carbons work on the principle of adsorption, which is an interfacial process involving the collection of gaseous or solute components on the surface of adsorbent solids. This phenomenon is associated with the physical attractive forces that bind gaseous and solute molecules, commonly known as van der Waals forces. Adsorption is thus a physical process, and the substances adsorbed on the solid do not undergo any chemical reaction with the latter. The adsorbing solid is referred to as the adsorbent, and the substance to be adsorbed from the liquid or gas phase as the solute (Chawdhury et al., 2020). The adsorption power and rate are determined by the kind of activated carbon, the particle size, the pore size and its distribution (Vikash et al., 2017). The importance of porous materials has been recognized since antiquity, when porous charcoal was used for its medicinal properties. The worldwide interest in environmental protection and energy conservation has revived the research on porous materials, which have numerous applications such as in catalysis, separation, insulation, sensors and chromatography, among others (Borislav et al., 2007). Activated carbon is being generated from different agricultural precursors, but there is less emphasis on the modification of the existing surface morphology. This study emphasizes the preparation, modification and characterization of activated carbon to ascertain the disparities in the surface morphology among the adsorbents generated.
Equipment and Reagents
Functional groups were determined using FT-IR (Cary 630), surface morphology was investigated using a scanning electron microscope, SEM (JSM-7900F), a pH meter (ST-926) was used for pH measurements, a furnace (Sx-2.5-10) was used for carbonization and activation, and an oven (DHG-9101.ISA) was used for drying. The materials used in this study were zinc chloride from BDH, England; sodium bicarbonate from Sigma Aldrich, Germany; sulphuric acid (98%) from M&B; and hydrochloric acid (36%) from Sigma Aldrich, Germany.

2.2 Plant Sample
The Annogeissus leiocarpus plant, known as the chewing stick tree, is readily available and possesses remarkable hardness, which makes it a suitable precursor for the production of activated carbon. The Annogeissus leiocarpus stem was obtained from Kardi Area, Birnin Kebbi Local Government, using a cutlass, placed in a polyethene bag and then brought to the herbarium of the UDUS botany laboratory unit for identification by Malam Abdulazeez Salihu. The voucher number UDUS/ANS/0180 was given.

Sample Treatment
The method of sample treatment adopted by Itodo (2010) was used. The materials were washed with tap water at least three times and then with distilled water to remove any adhering dirt or impurity. The sample was further sun dried for one week, lightly crushed by hand to remove the unwanted sticks and then placed in an oven at 105 °C for 24 h (Kra et al., 2019). The dried sample was ground and sieved to obtain particles in the size range from 0.106 to 0.250 mm. Approximately 100 g of the powdered material was placed and stored in an airtight container.

Carbonization
Powdered material (5 g) obtained from the Annogeissus leiocarpus stem was placed in six different clean crucibles before introduction into the furnace at 500 °C for five minutes, after which the crucibles were placed in an ice bath. The water from the bath was drained and the samples were sun dried. This process was repeated until a substantial amount of carbonized sample was obtained (Itodo, 2010). The carbonized sample was washed with 10% HCl to remove the surface ash, followed by hot water and finally distilled water to remove the residual acid. The solid particles were sun dried and later placed in an oven at 100 °C for one hour (Rahman et al., 2002; Itodo, 2010). The yield of the carbonization was calculated from the weight before carbonization (Wbc) and after carbonization (Wac). The percentage yield was calculated using Equation 1 (Yoshiyuki and Yutaka, 2001).

% Yield = (Wac / Wbc) x 100 ……… (1)

where Wac is the weight after carbonization and Wbc is the weight before carbonization.

Activation Process
The carbonized sample (5 g) was mixed with 5 cm3 of 1 M ZnCl2 solution. The sample was introduced into the furnace at 800 °C for five minutes, after which the activated sample was cooled with cold water. Excess water was drained and the activated carbon was allowed to dry at room temperature (Gimba et al., 2002; Itodo, 2010). It was then washed with 10% HCl acid to remove surface ash, followed by hot water and then distilled water to remove the residual acid until a pH of 6-8 was attained (Rahman et al., 2002; Itodo, 2010). The generated activated carbon was placed in an oven at 110 °C overnight and stored in an airtight container.

Modification with Sulfuric Acid
Annogeissus leiocarpus stem activated carbon was washed with deionized water until any leachable impurities due to free acid and adherent powder were removed.
Wet activated carbon (5 g) was treated with 40% H2SO4 (v/v) in the ratio 1:3 in an incubator at 110 °C for 24 h and later soaked with deionized water until the solution pH was 7. Finally, the sample was dried overnight in an oven at 110 °C, cooled at room temperature, and stored in a desiccator (Kadirvelu et al., 2001).

Modification with Sodium Bicarbonate
The prepared activated carbon was washed with deionized water until any leachable impurities due to free acid and adherent powder were removed. Wet activated carbon (5 g) was treated with 40% NaHCO3 (w/v) in the ratio 1:3 in an incubator at 110 °C for 24 h and later soaked with deionized water until the solution pH was 7. Finally, the sample was dried overnight in an oven at 110 °C, cooled at room temperature, and stored in a desiccator (Kadirvelu et al., 2001).

Determination of Moisture Content
A dried, sieved AC sample (3 g) was placed on a dry, clean and pre-weighed Petri dish, dried in an oven at 105 °C overnight, then held in a desiccator for 30 minutes and weighed. This process was repeated three times, the average was taken, and the percentage moisture content was calculated as in Equation 2 (AOAC, 1990).

Determination of Ash Content
A copper crucible was heated in a furnace at 500 °C for 2 minutes, cooled in a desiccator and weighed. AC (3 g) was placed in the crucible and then introduced into a muffle furnace at a temperature of 500 °C for three hours. It was removed, cooled to room temperature and then placed in the desiccator before weighing. The process was repeated three times, and the percentage ash content was determined using Equation 3.

Determination of Bulk Density
Density was measured on an activated sample of < 2 mm. It was estimated by placing the product into a graduated cylinder and compacting it by tapping on the bench top until an expected volume, v (cm3), was occupied by mass, m (g). The cylinder was tapped on the bench top until the volume of the sample stopped decreasing. The mass and volume were recorded and the density was calculated using Equation 4 (Yoshiyuki and Yukata, 2005).

ρ = Mass / Volume occupied ……… (4)

Determination of Burn Off
Burn off refers to the weight difference between the original char and the AC divided by the weight of the original char, with both weights on a dry basis (Onawumi et al., 2021).

% Burn off = ((Wo - Wi) / Wo) x 100 ……… (5)

where Wo = weight of char after pyrolysis, washing and drying, and Wi = weight of carbon after activation, washing and drying.

Determination of Porosity Based on Swelling
Activated carbon (0.5 g) was dispersed in 20 cm3 of water in a graduated tube with the aid of a shaker. This was centrifuged for 10 minutes at 4000 rpm. The resulting volume was read as the final volume VT and recorded. Equation 6 was used to calculate the porosity (Ekpete et al., 2017).

Determination of Volatile Matter
An empty crucible was weighed, and then the sample (1 g) was added into the crucible with a lid and weighed. This was kept in a muffle furnace at a temperature of 910 °C for seven minutes, after which it was taken to a desiccator for 30 minutes to cool down (Kra, 2019). The % volatile matter was calculated using Equation 7.

Determination of pH
AC (3 g) was mashed and soaked in 10 cm3 of distilled water, boiled for 5 minutes and allowed to cool (Yoshiyuki and Yukata, 2003). A 1% solution (w/v) of each sample was made using distilled water. The pH of the supernatant was obtained after 1 hour; the pH electrode was dipped into the solution and the value was read from the meter.
Samples with an undesirable pH were washed continuously until a pH of 6-8 was reached (Ahmedna et al., 2000).

Determination of Fixed Carbon (% Carbon Content)
The fixed carbon content of the AC was determined using the procedure employed by Kra (2019). The relation for obtaining the carbon content is given in Equation 8.

Fixed Carbon = 100 - (% Moisture Content + % Volatile Matter + % Ash Content) ……… (8)

Results
Table 1: Proximate analysis of the adsorbents produced from Annogeissus leiocarpus.

Discussion
The results of the physicochemical analysis presented in Table 1 were subjected to statistical analysis using one-way ANOVA. The p-value (0.001) for the different adsorbents prepared indicates that there were significant differences among the parameters investigated on the adsorbents, with the alpha value set at the 95% confidence level. Physicochemical parameters are very important in activated carbon production for a particular chemical process (Chowdhury et al., 2012). Yield (%), ash content, moisture content and porosity were among the physicochemical parameters investigated. The high yield for AC (67.3%) could be due to complete degradation of lignin, hemicellulose and cellulose during the pyrolysis process (Ajala and Ali, 2020). Porosity decreased in the order AC > BMAC/NaHCO3 > AMAC/H2SO4; a higher percentage of micropore volume might result in a higher adsorptive capacity for small molecules, and AMAC/H2SO4 appeared to have a good micropore volume. These results are supported by the findings of Hai (2018). The pH values of all the adsorbents observed in this study were in line with the pH value of 7.5 reported by Boadu et al. (2018). Ekpete et al. (2017) stated that a pH value within the range of 6-8 is usually acceptable for the adsorption process. The pH value affects the surface and binding capacity of the adsorbent due to the exchange of hydrogen ions (H+) with metal ions. Therefore, the pH value of the activated carbon shows that the washing process was completed; hence it will have an immense impact on the adsorptive capacity of the activated carbon. Ash content refers to non-carbon materials that do not combine chemically with the carbon surface. Good activated carbon is expected to have a low ash content. The values obtained in Table 1 were 6.1%, 7.6% and 8.72% for AMAC/H2SO4, BMAC/NaHCO3 and AC respectively, which are in agreement with the findings of Sabino et al. (2016), who reported that an ash content of less than 15% is more efficient; hence the lower the ash value, the better the activated carbon for the adsorption process. Another study conducted by Sanni et al. (2017) obtained an ash content of 7.2%, and Ajala and Ali (2020) reported 11.8%, which indicates the presence of a high inorganic content in the raw material used in the production of the activated carbon. Another study conducted by Ahmaruzzaman et al. (2010) obtained closely related values for different activated carbon samples. Therefore, the lower ash content obtained in this study shows that the activated carbon could be very good for adsorption studies. A low moisture content is said to be good for activated carbon production. In this study, it was found to be 2.1%, 2.2% and 3.32%, which is relatively low and within the range of 1-5% reported by Sanni et al. (2017). A higher moisture content may dilute the adsorbent and increase its weight in the adsorption process, which in turn has an immense effect in reducing the efficiency of the activated carbon.
Therefore, the low values indicate that the activated carbon greatly developed an adequate porosity through the activation process (Meena et al., 2005). Studies have shown that a moisture content of less than 5% allows higher adsorption of pollutants (Hock and Zain, 2015). The FT-IR spectra for the three different categories of activated carbon, AC, AMAC/H2SO4 and BMAC/NaHCO3, are shown in Figures 1, 2 and 3 respectively, indicating the appearance of several functional groups. This indicates that the surface modification increased the functionality on the surface of the activated carbon by creating more adsorption sites. The broad OH absorption of AC disappeared for both AMAC/H2SO4 and BMAC/NaHCO3. This could be due to excessive drying during the modification process. Modification induced chemical reactions which cause bond making and bond breaking; this is the reason why some bonds that were originally present have disappeared, while new ones are now seen (Shivakumar et al., 2012). It can be noticed that the activated carbons shared some common peaks and values, which justifies the fact that they have the same source. The new peaks that appeared after oxidation for AMAC/H2SO4 and BMAC/NaHCO3 were between 1700 and 1890 cm-1 for AMAC/H2SO4 and around 1690 to 1740 cm-1 for BMAC/NaHCO3. In AMAC/H2SO4, these absorptions correspond to the C=O stretching vibration of acid halides, carboxylic acids and transition metal carbonyls, while in BMAC/NaHCO3 the C=O stretching vibration corresponds to primary amides, conjugated aldehydes, aliphatic ketones and aldehydes, functionalities also obtained by Ekpete et al. (2017). Acid and base modifications of activated carbon increase its adsorptive capacity. This is supported by a study conducted by Lasaona et al. (2019), which investigated the influence of mineral acid modification and observed that each acid produces an activated carbon surface with unique functional groups and properties. The scanning electron micrographs for AC, AMAC/H2SO4 and BMAC/NaHCO3 are shown in Figures 4, 5 and 6 respectively. The SEM enables the direct observation of the changes in the surface microstructures of the carbons due to the modifications. Studies are available which have reported the utilization of SEM analysis to show the surface modification changes in a developed adsorbent (Borislav et al., 2007). Looking at the images, there is a clear demarcation in the surface morphology of the three different adsorbent modifications. Large numbers of pore spaces are seen in AMAC/H2SO4 and the fewest are seen in AC, with BMAC/NaHCO3 showing an average number of pore spaces. Although both AMAC/H2SO4 and BMAC/NaHCO3 are good adsorbents, AMAC/H2SO4 appears to be the best, with a better surface morphology. The finding of this research work is in line with the result obtained by Boadu et al. (2018). Another similar work conducted by Tan et al. (2008) reported that the enhanced pore formation might be due to the diffusion of the modification agent into the rudimentary pores created during pyrolysis and the consequent acceleration of its reaction with the activated carbon surface. Pore development in a char during pyrolysis plays a crucial role in improving the total pore volume. Therefore, it can be inferred that the surface morphology of activated carbon strongly depends on the preparation method. Sanou et al.
(2020) revealed that modification agents such as acids and bases have the ability to promote the formation of new pores on the surface, thereby increasing the adsorptive capacity of the activated carbon. The results from the FT-IR analysis clearly identified the appearance and disappearance of carbonyl and hydroxyl groups. The SEM results also clearly described the surface morphology of the adsorbents generated. AMAC/H2SO4 appeared to have favourable characteristics over BMAC/NaHCO3 and AC, as validated by the statistical analysis, which indicated that there were significant differences (P < 0.05) in the physicochemical parameters investigated among the adsorbents. On the basis of these good characteristics, Annogeissus leiocarpus can be considered an excellent precursor for the production of activated carbon.
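As a minimal sketch of the proximate-analysis arithmetic defined in Equations 1, 4, 5 and 8 above: the Python illustration below uses hypothetical weights, and the function names and example values are assumptions, not part of the original work.

```python
# Hypothetical illustration of the proximate-analysis calculations;
# variable names follow the equation definitions given in the text.

def percent_yield(wac: float, wbc: float) -> float:
    """Equation 1: carbonization yield (%); Wac = weight after, Wbc = weight before."""
    return wac / wbc * 100

def bulk_density(mass_g: float, volume_cm3: float) -> float:
    """Equation 4: mass over tapped volume, in g/cm3."""
    return mass_g / volume_cm3

def percent_burn_off(wo: float, wi: float) -> float:
    """Equation 5: weight lost on activation relative to the original char (%)."""
    return (wo - wi) / wo * 100

def fixed_carbon(moisture_pct: float, volatile_pct: float, ash_pct: float) -> float:
    """Equation 8: fixed carbon (%) obtained by difference."""
    return 100 - (moisture_pct + volatile_pct + ash_pct)

# Example with invented weights: 3.37 g of char recovered from 5.00 g of powder
# gives a yield close to the 67.3% reported for AC in Table 1.
print(round(percent_yield(3.37, 5.00), 1))     # 67.4
print(round(percent_burn_off(5.00, 4.10), 1))  # 18.0 (hypothetical weights)
```

Note that fixed carbon is obtained by difference, so any error in the moisture, volatile-matter or ash determinations propagates directly into the fixed-carbon value.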
Smart Health Prediction Using Machine Learning The "Smart Health Prediction Using Machine Learning" system, based on predictive modelling, predicts the disease of patients/users on the basis of the symptoms that the user provides as input to the system. The application has three login options: user/patient login, doctor login, and admin login. The system analyses the symptoms given by the user/patient as input and provides the likelihood of the disease as output based on the prediction made by the algorithm. Smart health predictions are made by the implementation of the Naïve Bayes Classifier. The Naïve Bayes Classifier estimates the disease probability by considering all of the features learned during the training phase. Accurate interpretation of disease data enables early disease prediction for the patient/user and gives the user a clear picture of the disease. After a prediction, the user/patient can consult a specialist doctor using a chat consulting window. The system uses machine learning algorithms and database management techniques to extract new patterns from historical data. Forecast accuracy can improve with the use of a machine learning algorithm, and the user/patient will get fast and easy access to the application. Introduction Machine learning is a method of producing predictive models from example instances. It is a branch of AI built on the idea that machines can learn from data, recognise patterns, and make decisions with minimal human intervention. A machine learning algorithm uses sample data or previously collected data to optimise results with high accuracy. There are two stages of a machine learning algorithm: training and testing. The signs and symptom logs of the user/patient are used to predict the illness. Machine learning technology offers a strong application platform in the medical sector to address disease prediction concerns based on the user/patient experience. We use machine learning to keep track of all signs and diseases. Machine learning technology helps predictive models analyse data rapidly and produce meaningful results more quickly. With the aid of this technology, the user/patient can make an informed decision about seeing a doctor for their particular symptoms, resulting in improved patient health services. The Naïve Bayes Classifier technique is used to analyse the large amount of data obtained. For each sub-field of disease prediction, we also demonstrate how symptom data storage combined with data classification can assist the administrative, clinical, academic and educational aspects of predicting disease from symptoms. There are also a host of data collection issues that can be discussed in terms of health prediction. [1][2][3][4][5] Project Analysis 2.1. Objective Some resources are already available for smart health prediction. However, they have mainly studied particular chronic diseases and identified a level of risk, and these methods are not widely used for general disease prediction. Smart health prediction helps in the diagnosis of multiple diseases by analysing patient symptoms using a well-fitted machine learning algorithm. Existing Method The existing framework predicts chronic diseases for a specific area and population, and disease prediction covers specific diseases only. In this method, Big Data and Convolutional Neural Network algorithms are used to predict disease risk.
The method uses machine learning algorithms for structured data, such as K-nearest neighbours and Decision Tree. The system achieves an accuracy value of 94.8 percent for some diseases. In the previous paper, we simplified machine learning algorithms to predict effective chronic disease outbreaks in disease-prone populations. We are testing updated prediction models using real-world hospital data from certain specific regions/areas. Using structured and unstructured patient/user data, we suggest a new multimodal disease risk prediction algorithm based on Convolutional Neural Networks [6][7][8][9][10]. Proposed Method If someone develops some sort of disease, they normally need to see a doctor/physician, which is both time-consuming and expensive. It can also be difficult for the user to reach doctors and hospitals, so the disease may go undetected. If the above procedure can instead be carried out by an electronic software application, it saves time and resources and lets the process run more smoothly for the patient. Smart health prediction is a web-based programme that predicts a user's illness based on the symptoms that the user/patient reports. Data sets for the Smart Health Prediction Framework have been compiled from various health-related websites. The consumer will be able to assess the likelihood of a disease on the basis of the symptoms entered in the web application. The aim of this project is to create a web platform that can predict disease events based on a range of symptoms. Users can choose from a range of symptoms and find diseases with probabilistic estimates and conditions. Table 1. Efficiency comparison (NB - Naïve Bayes, LR - Linear Regression, K* - Kth Nearest Neighbour, DT - Decision Tree). Based on a machine learning algorithm, we proposed a general method of disease prediction. We used Naïve Bayes algorithms to classify patient data because medical data are increasing at an exponential rate, requiring the processing of existing data in order to predict the exact disease based on symptoms. With a patient record as the input, we were able to obtain accurate general disease risk prediction as the output, which helped us understand the degree of disease risk. With this method, disease prediction and risk prediction can be achieved over a short period of time and at a low cost. In terms of accuracy and time, the results of Naïve Bayes and other algorithms are compared, and the accuracy of the Naïve Bayes algorithm is higher than that of the other algorithms, as shown in Figure 1. Algorithm and Architecture 3.1. Naïve Bayes Algorithm The Naïve Bayes algorithm is a simple probabilistic method for creating models that assign class labels to problem instances, where class labels are chosen from a finite set. It is a family of algorithms based on a common principle rather than a single algorithm: the value of each feature is assumed to be independent of the value of every other feature, given the class. For example, if a fruit is orange in colour, round, and around 10-15 cm in diameter, we might call it an orange; a Naïve Bayes classifier takes each of these features into account independently when determining whether the fruit is an orange. Fig. 1: Algorithm Flow Diagram There are many varieties of probability models, and for some of them the Naïve Bayes algorithm performs best in a supervised learning setting.
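To make the classification step concrete, the sketch below shows how a Naïve Bayes classifier of this kind can be trained on binary symptom vectors and queried for disease probabilities. It is a minimal Python illustration using scikit-learn's BernoulliNB; the choice of library is our assumption (the paper does not name one), and the symptom list, toy training data, and disease labels are invented for the example rather than taken from the paper's dataset.

```python
# Minimal sketch of symptom-based disease prediction with Naive Bayes.
# All symptom/disease data below is hypothetical, not the paper's dataset.
from sklearn.naive_bayes import BernoulliNB

SYMPTOMS = ["fever", "cough", "headache", "rash"]  # hypothetical feature set

# Each row is a binary symptom vector; labels are the diagnosed diseases.
X_train = [
    [1, 1, 0, 0],  # fever + cough    -> flu
    [1, 0, 1, 0],  # fever + headache -> flu
    [0, 0, 0, 1],  # rash only        -> allergy
    [1, 0, 0, 1],  # fever + rash     -> measles
]
y_train = ["flu", "flu", "allergy", "measles"]

model = BernoulliNB()   # P(disease) * product of P(symptom | disease)
model.fit(X_train, y_train)

# A new patient reports fever and cough: output disease probabilities.
patient = [[1, 1, 0, 0]]
for disease, prob in zip(model.classes_, model.predict_proba(patient)[0]):
    print(f"{disease}: {prob:.2%}")
```

BernoulliNB matches the description above: it multiplies the class prior P(disease) by per-symptom likelihoods P(symptom | disease), treating every symptom as conditionally independent given the disease.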
Architecture The goal of this project is to produce a web application platform for predicting disease manifestations on the basis of different symptoms and conditions. The user will pick different symptoms and find the diseases, with their probabilistic data, from the collected set of datasets. Conclusion In the suggested methodology, the required clinical symptom-related information can be obtained from historical knowledge by preparing datasets and applying the Naïve Bayes algorithm. Smart health prediction can only be achieved if the system responds in this way. These datasets will be compared with incoming queries and an Association Rule Mining report will be generated. Given that this new solution is based on real historical data, it should provide accurate and prompt results that allow patients to get an urgent diagnosis. Web-application features, such as a remote chat session with a doctor, are also provided so that patients can speak directly with physicians. As a result, this web system will be genuinely predictive and will also produce high accuracy with fairness.
Thymopentin ameliorates dextran sulfate sodium-induced colitis by triggering the production of IL-22 in both innate and adaptive lymphocytes Background: Ulcerative colitis (UC) is a chronic inflammatory gastrointestinal disease that is notoriously challenging to treat. Previous studies have found a positive correlation between thymic atrophy and colitis severity. It was, therefore, worthwhile to investigate the effect of thymopentin (TP5), a synthetic pentapeptide corresponding to the active domain of thymopoietin, on colitis. Methods: Dextran sulfate sodium (DSS)-induced colitis mice were treated with TP5 by subcutaneous injection. Body weight, colon length, colon weight, immune organ index, disease activity index (DAI) score, and the peripheral blood profile were examined. The immune cells of the spleen and colon were analyzed by flow cytometry. Histology was performed on isolated colon tissues for cytokine analysis. Bacterial DNA was extracted from mouse colonic feces to assess the intestinal microbiota. Intestinal lamina propria mononuclear cells (LPMCs), HCT116, CT26, and splenocytes were cultured and treated with TP5. Results: TP5 treatment increased the body weight and colon length, decreased the DAI score, and restored the colon architecture of colitic mice. TP5 also decreased the infiltration of immune cells and the expression levels of pro-inflammatory cytokines such as IL-6. Importantly, the damaged thymus and compromised lymphocytes in peripheral blood were significantly restored by TP5. Also, the production of IL-22, in both innate and adaptive lymphoid cells, was triggered by TP5. Given the critical role of IL-22 in mucosal host defense, we tested the effect of TP5 on the mucus barrier and gut microbiota and found that the number of goblet cells and the level of Mucin-2 expression were restored, and the composition of the gut microbiome was normalized, after TP5 treatment. The critical role of IL-22 in the protective effect of TP5 on colitis was further confirmed by administering an anti-IL-22 antibody (αIL-22), which completely abolished the effect of TP5. Furthermore, TP5 significantly increased the expression level of retinoic acid receptor-related orphan receptor γt (RORγt), a transcription factor for IL-22. Consistent with this, a RORγt inhibitor abrogated the upregulation of IL-22 induced by TP5. Conclusion: TP5 exerts a protective effect on DSS-induced colitis by triggering the production of IL-22 in both innate and adaptive lymphocytes. This study delineates TP5 as an immunomodulator that may be a potential drug for the treatment of UC. Introduction Ulcerative colitis (UC) is a chronic inflammatory gastrointestinal disorder most commonly afflicting adults aged 30-40 years and impacting their quality of life [1,2]. The incidence and prevalence of UC have increased worldwide, with most cases reported in northern Europe, the United Kingdom, and North America [2]. However, UC has also been increasing in the East [3]. UC places a heavy burden on health-care systems; the annual direct and indirect costs related to UC are estimated to be as high as $8.1-14.9 billion in the USA [4]. As UC remains incurable so far, clinical treatment has focused mainly on anti-inflammatory or immune-suppressing drugs, including 5-aminosalicylic acid, glucocorticosteroids, and immunosuppressants [5]. Despite their widespread use, there is little evidence to support the use of drugs such as glucocorticosteroids in UC.
Moreover, although these treatments may reduce disease symptoms temporarily, they do not prevent future bouts of the disease and have a wide array of side effects [6]. New therapeutic strategies to treat UC are, therefore, urgently needed. Since UC is thought to be driven by a seemingly aberrant immune system attacking the gastrointestinal tract [7], immune-suppressive drugs are the mainstay of UC treatment. However, although the colon locally presents with a hyperactive immune response, the pathology of UC is far more complicated than an abnormal immune response. Thymic atrophy is found in many kinds of colitis models, including the Gαi2-/- colitis model [8,9] and the dextran sulfate sodium (DSS)-induced colitis model [10]. Our recent results also showed a marked thymic involution in severe colitis [11]. The thymus, the primary lymphoid organ for T lymphocytes, is a common target organ that easily undergoes atrophy [12,13]. Thymic atrophy, caused by several endogenous and exogenous factors, results in an impaired release of thymus-derived T cells and in impaired development and maturation of host immunity. Considering the positive relationship between thymic atrophy and colitis severity [9], it is alarming that the widely used drugs for UC are immune-suppressive drugs, which are known to induce thymic atrophy; moreover, not much effort has been directed towards restoring or replacing thymic function in these diseases. Thymopentin (TP5), a synthetic pentapeptide corresponding to positions 32-36 of thymopoietin, exhibits biological activity similar to that of thymopoietin, which is responsible for the phenotypic differentiation of T cells and the regulation of the immune system [14]. TP5 has been clinically used for the treatment of patients with immunodeficiency and immune-related diseases, such as rheumatoid arthritis, cancers, hepatitis B virus infection, and acquired immunodeficiency syndrome (AIDS) [15,16]. Despite its extensive use in the clinic, the pharmacological effects and mechanisms of action of TP5 have scarcely been studied in the past 10 years. Also, the possible effect of TP5 on colitis has not been investigated. Thus, the biological activity of TP5 on the thymus prompted us to investigate its effect on colitis. Herein, we discovered that TP5 ameliorates DSS-induced colitis, accompanied by a markedly increased lymphocyte fraction in peripheral blood. TP5 increased the expression of IL-22 and normalized the composition of the gut flora, which contributed to its therapeutic effect on colitis. Our findings highlight a new therapeutic strategy for UC. TP5 alleviated DSS-induced colitis The thymus can be considered a barometer of health, readily undergoing atrophy in a variety of infectious diseases [12]. We observed thymic involution in DSS-induced colitis, which was exacerbated as the severity of colitis progressed (Figure 1A), and the thymus coefficient was negatively correlated with the disease activity index (DAI) score (Figure 1B). These results support the notion that promoting thymus restitution might be considered a therapeutic approach for colitis. Remarkably, TP5, a synthetic alternative to thymopoietin, dramatically blocked DSS-induced colitis, prevented body weight loss, increased the length of the colon, and decreased the DAI score (Figure 1C-E). DSS-induced colitis was characterized by severe pathology throughout the proximal and distal colon, with extensive epithelial damage and inflammatory infiltrate.
TP5 administration significantly decreased DSS-induced epithelial damage and inflammatory infiltrate and protected the integrity of the colon structure (Figure 1F). To monitor the effects of TP5 on physiological functions in normal mice, we treated C57BL/6N mice with saline or TP5 for 7 days. TP5 had no noticeable effect on the mice, including on body weight, colon length, and the histological structure of the colon (Figure S1A-C). Body weight also remained unchanged in mice treated with TP5 for 14 days (Figure S1D). Collectively, these results reveal that TP5 ameliorates the development of DSS-induced colitis. TP5 had no direct effect on colon epithelial cells The epithelial layer in the gastrointestinal tract represents the first line of defense against potential enteric pathogens. Epithelial regeneration is especially important for epithelial repair [17]. Consistent with the protective effect of TP5 on colitis, a significant increase in Ki-67-positive cells was observed in the colon of TP5-treated mice (Figure 2A). Intercellular junctions include the tight junctions, which contribute to the epithelial barrier [18]. TP5 significantly increased the mRNA level of tight junction protein 1 (Tjp1), but not the mRNA levels of other tight junction proteins like Claudin-2 (Cldn2) and Occludin (Ocln) (Figure 2B). To examine whether TP5 could directly promote the proliferation of epithelial cells, we treated two colon epithelial cell lines with TP5. Unlike the Ki-67 staining results in vivo, TP5 did not affect the proliferation of HCT116 or CT26 cells (Figure 2C). Also, no differences were found in Tjp1 expression after TP5 treatment in either HCT116 or CT26 cells (Figure 2D). These results indicate that TP5 has no direct effect on colon epithelial cells. [Figure legend: n=4-5; *P < 0.05, **P < 0.01, ***P < 0.001, compared with the normal group; #P < 0.05, ##P < 0.01, ###P < 0.001, compared with the DSS group. DAI: disease activity index; DSS: dextran sulfate sodium; TP5: thymopentin. Three independent experiments were performed.] TP5 restored the number of peripheral lymphocytes Given the negative correlation between the thymus coefficient and the severity of colitis, and the critical role of TP5 in the immune system, we set out to detect the effect of TP5 on circulating peripheral immune cells. We first analyzed the number and influx of circulating blood cells from the peripheral blood using an automated hematology analyzer (Siemens ADVIA2120i, Germany). DSS or TP5 treatment caused no significant changes in the number of circulating platelets (PLT), but TP5 significantly abrogated the downregulation of red blood cell (RBC) numbers induced by DSS, which is consistent with the protective effect of TP5 on colitic bleeding (Figure 3A). Although no significant differences were found for total white blood cells (WBC) in any group, the subgroups of WBC were changed. DSS caused a significant increase in the percentage of neutrophils (NEU%), while TP5 significantly decreased the proportion of NEU. Notably, the percentage of lymphocytes (LYM) was diminished in the DSS group, while TP5 significantly restored the percentage of LYM. TP5 also restored the percentages of eosinophils (EOS) and basophils (BASO). For monocyte (MONO)% and large unstained cell (LUC)%, no differences were found in any group (Figure 3A). When mice were treated with saline or TP5 for 7 days, there were no differences in LYM%, NEU%, and MONO% (Figure S1E). However, the percentage of LYM was substantially increased in mice treated with TP5 for 14 days (Figure S1F).
Thymus involution was observed in the DSS group, which was consistent with our previous study as well as with those reported by others [11,19]. As expected, TP5 significantly increased the thymus coefficient and saved the thymus from involution (Figure 3B). To examine whether TP5 alone could promote enlargement of the thymus, we determined the thymus coefficient in TP5-treated mice. We observed that TP5 increased the thymus coefficient after 14 days of treatment but not in mice treated for only 7 days (Figure S1G-H). It is possible that short-term use of TP5 has no apparent effect on the thymus, while long-term use can promote thymus regeneration in normal mice. We investigated the effect of TP5 on T lymphocytes in the thymus using flow cytometry (Figure S2A). The results showed a significant decrease in the percentage of immature double positive (DP) thymocytes in the DSS group and a significant increase after TP5 administration (Figure 3C). The absolute number of total thymocytes was in line with the percentage of DP thymocytes (Figure S2B). Simultaneously, a substantial decrease of the CD4 single positive (SP) thymocytes and a slight decrease of the CD8 SP thymocytes were observed in the TP5-treated group compared with the DSS group (Figure S3B). As previously reported, thymic atrophy was found in DSS-induced colitis [10] and was accompanied by cortical and medullary disorder in the thymus [20]. In our study, hematoxylin & eosin (H&E) staining of the thymus indicated that DSS induced large intercellular spaces, cortical and medullary structural disorder, and cell necrosis, while TP5 could alleviate these detrimental effects of DSS (Figure 3D). Also, the splenomegaly induced by DSS was reduced by TP5, though no significant differences were found in the lymph node coefficient (Figure 3B). Furthermore, TP5 had no effect on the spleen coefficient of normal mice (Figure S1I-J). Similar to the results in blood, TP5 increased the percentages of T and B lymphocytes in the spleen (Figure 3E and S2E). There were no significant changes in NEU%, NK cell%, or the composition of CD4 and CD8 lymphocytes in the spleen (Figure S2D). All these results indicate that TP5 restores the number of peripheral lymphocytes. TP5 diminished inflammation in the colon and increased IL-22 expression A hyperactive immune response in the colon is the hallmark of colitis. We, therefore, evaluated the effect of TP5 on inflammation-related cytokines and markers. Among the pro-inflammatory markers, DSS induced a significant increase in IL-6 mRNA and protein levels in the colon, as previously described [21], while TP5 reversed the increased expression of IL-6 almost back to normal levels (Figure 4A and S3A). For IL-1β, IL-1α, TNF-α, and IL-18, TP5 had no apparent effect. However, TP5 significantly increased the mRNA and protein levels of IFN-γ, a cytotoxic cytokine promoting not only immunomodulation but also antimicrobial activity (Figure 4A and S3A). Infiltration of immune cells into the colon is a hallmark of UC. We observed an upsurge of the chemokine CCL2 and an influx of macrophages into the colon, both of which were significantly decreased by TP5 (Figure 4A, 4B and S3B). Infiltration of T cells and NEU was also increased in the DSS group and compromised by TP5 (Figure S3C-D). Among some other cytokines, we identified a noticeable increase of IL-22 mRNA, and also a significant increase of IL-10 and IL-12 mRNAs, in TP5-treated samples (Figure 4C). No differences were found in IL-23 and TGF-β mRNA expression.
Protein levels of IL-22 were also substantially increased by TP5, not only in the colon but also in the blood (Figure 4D). IL-22, a potent cytokine involved in tissue recovery and maintenance of barrier function, is mainly produced by T cells and group 3 innate lymphoid cells (ILC3s) [22,23]. We first isolated splenocytes and treated them with TP5 in vitro. The results showed a significant increase in the percentage of IL-22+ CD4+ T cells and increased expression of IL-22 mRNA (Figure 4E-F). Also, ILC3s from murine colon lamina propria mononuclear cells, which are capable of producing IL-22 [17], were isolated and treated with TP5 in vitro. The results showed increased production of IL-22+ ILC3s (CD45+ CD4− RORγt+ IL-22+) in the TP5 group (Figure 4G). Besides, upregulation of IL-22 was observed in the thymus (Figure S3E), together with an increase in the thymus coefficient, in the group treated with TP5 for 14 days (Figure S1H). No differences in the IL-22 concentration were found in the serum and colon (Figure S3E). These observations emphasize that TP5 increases the expression of IL-22, which may protect mice from colitis. TP5 maintained the mucus barrier and normalized gut microbiota Since IL-22 is primarily associated with the maintenance of mucus barrier function, we used Periodic Acid-Schiff (PAS) staining to assess the number of goblet cells, which produce mucin. The results clearly showed restoration of the goblet cell number after TP5 treatment in colitic mice (Figure 5A). Consistent with this observation, TP5 also significantly increased the expression levels of Mucin-2 (MUC2) mRNA, which is the building block of colonic mucus (Figure 5B). IL-22 is known to enhance the antibacterial defense of mucosal epithelial cells through different mechanisms [24], and UC is inextricably linked to the gut microbiome [25,26]. Also, the expression level of the antibacterial protein lysozyme was increased by TP5 (Figure 5C). We, therefore, assessed the composition of the gut microbiota in the colonic feces of each group. TP5 maintained the homeostasis of the gut microbiota, completely protecting it from DSS, as shown by the principal component analysis (PCA) data (Figure 5D). The composition of the microbiota at the phylum level, as displayed in Figure 5E, indicates that the percentages of Verrucomicrobia, Bacteroidetes, and Actinobacteria decreased sharply, whereas the percentages of Proteobacteria and Deferribacteres increased in DSS-induced colitis. Consistent with the results of the PCA, TP5 treatment abrogated the severe intestinal flora disturbance induced by DSS. To further confirm this result, we analyzed the composition of the gut microbiota at the genus level. The results revealed that TP5 restored the greatly diminished composition of Akkermansia, Acinetobacter, Barnesiella, and Lactobacillus, and inhibited the markedly upregulated presence of Escherichia/Shigella, Bacteroides, Klebsiella, Staphylococcus, Enterococcus, and Clostridium sensu stricto in DSS-induced colitis (Figure 5F). Collectively, these results indicate that TP5 maintains the mucus barrier and normalizes the gut microbiome, which may contribute to its protective effect on colitis. IL-22 mediated the protective effect of TP5 To investigate whether the protective effect of TP5 was mediated by IL-22, we treated the DSS-treated mice with saline, TP5 plus anti-IL-22 antibody (αIL-22), or TP5 plus isotype control antibody (Iso).
Interestingly, the protective effect of TP5 was abolished by αIL-22, which compromised the increased body weight and colon length in colitic mice treated with TP5 (Figure 6A-B). Also, αIL-22 abrogated the downregulation of the DAI score in TP5-treated mice (Figure 6C). H&E staining revealed that αIL-22 blocked the protective effect of TP5 on colon inflammation and crypt damage in colitic mice (Figure 6E). Furthermore, in contrast to TP5+Iso, TP5+αIL-22 no longer upregulated LYM% or downregulated NEU% in the peripheral blood (Figure 6D). These results confirm that IL-22 mediates the protective effect of TP5 in DSS-induced colitis. TP5 upregulated IL-22 through RORγt IL-22 can be produced by activated T cells and some subsets of innate lymphoid cells, as previously described [24]. Also, transcription factors such as Stat3, aryl hydrocarbon receptor (Ahr), RORγt, and nuclear factor of activated T cells (NF-AT) have been reported to regulate IL-22 expression [27,28]. To determine the mechanism by which TP5 promotes IL-22 expression, we examined the regulatory factors of IL-22 in colon samples from mice with or without TP5 treatment. We detected a significantly increased expression of RORγt in the DSS+TP5 group compared with the DSS-only group (Figure 7A). Similarly, we detected a higher number of CD45+ CD4− RORγt+ cells (ILC3s) and CD45+ CD4+ RORγt+ cells in the colon lamina propria mononuclear cells (LPMCs) in the DSS+TP5 group than in the DSS-only group (Figure 7B-C). Ursolic acid (UA), a RORγt inhibitor [29], prevented the TP5-induced production of IL-22 in spleen T cells (Figure 7D). These results suggest that RORγt plays an important role in the upregulation of IL-22 following TP5 treatment. Discussion This study explored a new therapeutic strategy to combat UC and found that TP5, a synthetic pentapeptide corresponding to the active site of thymopoietin, produced a profound improvement in DSS-induced colitis. The immune-regulating reagent TP5 prevented thymic dysfunction in experimental colitis and restored the peripheral supply of lymphocytes. TP5 increased the expression level of IL-22, maintained the mucus barrier, and promoted normalization of the gut microbiota. In particular, TP5 promoted the production of IL-22 in both innate and adaptive lymphoid cells in vitro. Though UC is known as a chronic inflammatory disease affecting the colon, it is accompanied by thymic involution [11,19,30]. The thymus is known to be associated with colitis, as TgƐ26 transgenic mice [31] and T-cell receptor α chain-deficient mice [32] develop colitis. In our study, we found a negative correlation between the thymus coefficient and the DAI score in colitic mice. Thymic involution causes suppression of the immune system, thereby increasing the incidence and severity of infections. TP5 promotes the differentiation of thymocytes, restores cyclophosphamide-induced suppression of the immune system, and has been clinically used for the treatment of immunodeficiency diseases [33,34]. Consistent with previous studies, we found that TP5 significantly abolished the thymic involution induced by DSS and restored the proportion of lymphocytes in blood and spleen, accompanied by remission of colitis. Interestingly, although TP5 increased the proportion of peripheral lymphocytes, it decreased the percentage of inflammatory cells such as NEU in the peripheral blood. In the colon, the local inflammatory site, infiltration of macrophages and NEU was also decreased by TP5.
Accordingly, pro-inflammatory cytokines, especially IL-6, were significantly diminished by TP5. The impaired inflammatory response was consistent with the protective effect of TP5 on the colon. These results negated concerns that immune enhancers like TP5 might increase inflammation. Although immune-suppressive drugs are widely used in the clinic for the treatment of UC, it is generally believed that the benefit of immunosuppressants is likely to be small while their toxicity is high [35,36]. On the other hand, immune regulators such as TP5 might enhance the self-defensive ability of the body and thus mitigate the local inflammation of colitis. To determine the underlying molecular mechanisms of the protective effect of TP5 on colitis, we focused on cytokines produced by lymphocytes, which play a central role in cell proliferation and differentiation, and in defense against pathogens. Among several protective cytokines, we observed a marked increase of IL-22 expression by TP5 in the colon and blood. IL-22 is a member of the IL-10 cytokine family and has emerged in recent years as a key effector molecule in host defense and in the pathogenesis of autoimmune diseases such as colitis [24,37]. The clinical relevance of the IL-22/IL-22 receptor subunit 1 (IL-22R1) signaling system is being increasingly recognized in diseases such as psoriasis and UC [24]. The upregulation of IL-22 induced by TP5 is consistent with the idea that it may serve as a promising therapeutic agent for inflammatory bowel disease (IBD) [38]. Reports have shown that IL-22 can be produced by activated T cells, including T helper 22 (TH22) cells, TH17 cells, and TH1 cells, as well as by subsets of innate lymphoid cells [39][40][41][42]. To confirm the increase of IL-22 induced by TP5 in vivo, we analyzed the production of IL-22 induced by TP5 in vitro. The results showed a significant increase in IL-22 expression in both innate and adaptive lymphoid cells. Most of these cells rely on specific transcription factors to promote the secretion of IL-22, for example, Stat3, Ahr, and RORγt [27]. Our results also indicated a critical role of RORγt in the TP5-induced increase of IL-22. Notably, neutralization of IL-22 completely blocked the protective effect of TP5 on colitis. These data emphasize that the increase of IL-22 by TP5 promotes recovery of mice from DSS-induced injury. Furthermore, IL-22 is reported to drive endogenous thymic regeneration [43], and TP5 alleviated DSS-induced thymic atrophy; thus, the thymus could supply more T cells to peripheral tissues to maintain immune homeostasis after TP5 treatment, forming a positive feedback loop. Whether the effect of TP5 on the thymus or its direct effect on the lamina propria lymphocytes is more important, however, needs to be further investigated using thymectomized or nude mice. We have demonstrated that TP5 protected against colitis by inducing the production of IL-22 through the RORγt/IL-22 signaling pathway, although how RORγt is induced by TP5 is not known. The receptor for TP5 has not yet been identified, and the downstream signaling pathway that directly interacts with TP5 is also not clear. It has been reported that the T-cell receptor (TCR) [44] and Toll-like receptor 2 (TLR2) [45] are possible receptors of TP5. TP5 may bind directly to membrane receptors on T cells and ILC3s to activate intracellular signaling pathways. We found that the expression level of Ahr in the DSS+TP5 group was somewhat increased compared with the DSS group. Ahr is an important regulator of intestinal RORγt+ ILCs and intraepithelial lymphocytes [46].
Interestingly, both the Ahr and RORγt transcription factors can directly bind to the promoter of IL-22. Ahr expression alone only marginally increased IL-22 expression, while RORγt alone induced IL-22 transcription, and strong synergism was observed in cell lines transduced with both RORγt and Ahr [46]. Therefore, TP5 may directly bind to membrane receptors on T cells and ILC3s and stimulate the Ahr/RORγt/IL-22 signaling pathway. Also, we found that the expression level of Stat3 in the DSS+TP5 group was higher than that in the DSS group. RORγt expression and TH17 differentiation are Stat3-dependent [47]; thus, we speculated that TP5 may directly bind to membrane receptors on T cells and ILC3s and activate the Stat3/RORγt/IL-22 signaling pathway. However, how TP5 actually regulates RORγt needs further investigation. We are trying to attach reporters to TP5 and to use methods such as flow cytometry, Co-IP, and mass spectrometry for further exploration. Target cells of IL-22 are found in organs that mainly constitute the outer-body barriers, such as the gastrointestinal system, but not in organs orchestrating immunity. IL-22 is known to act on mucosal tissues to regulate host defense responses. Mucus and antibacterial proteins play essential roles in preventing and limiting bacteria, and both can be induced by IL-22 [48,49]. Therefore, the increase of IL-22 by TP5 may explain its remarkable effect in restoring the number of goblet cells and MUC2 expression. The antibacterial protein lysozyme was also upregulated by TP5. It is of note that the markedly changed composition of the gut microbiome was abolished by TP5, which restored the composition of probiotics such as Lactobacillus and Akkermansia, known for their potential anti-inflammatory properties. Reduced levels of Akkermansia have been observed in patients with inflammatory bowel diseases (mainly UC) and metabolic disorders [50,51]. TP5 also decreased the composition of some proinflammatory and harmful bacteria, such as Escherichia/Shigella, Bacteroides, Klebsiella, Staphylococcus, and Enterococcus. These promising effects of TP5 on the gut microbiome makeup further confirmed its therapeutic effect on colitis. However, the modulation of the gut microbiome is highly complex and needs further characterization. Collectively, our results showed that TP5 increases the output of lymphocytes, elevates the expression of IL-22, normalizes the composition of the gut microbiome, and thus alleviates DSS-induced colitis (Figure 8). Since the thymus is a frequent target organ in a variety of diseases, this study not only posits the interesting concept that restoring an impaired thymus might be a useful therapeutic strategy for UC but also offers a therapeutic strategy for patients with an involuted thymus. Materials and Methods Mice. Male C57BL/6N mice aged 6-7 weeks were purchased from Charles River (Beijing, China). Mice were housed with a 12 h/12 h light/dark cycle and habituated in the room for 3 days before experiments. Animal experiments were conducted according to the National Institutes of Health Guide for the Care and Use of Laboratory Animals, with the approval of the Center for New Drug Safety Evaluation and Research, China Pharmaceutical University. Colitis induction with DSS. Colitis was induced by 2.5% DSS (36-50 kD, MP Biomedicals, Canada) in the drinking water for 7 days. TP5 (HAINAN ZHONGHE PHARMACEUTICAL CO., LTD., Hainan, China. No: 20170906, 20180402, 20 mg/kg) was administered by subcutaneous (s.c.)
injection once every day throughout the treatment schedule, and mice were weighed daily. On day 8, as previously reported [52], mice were sacrificed, and the length and weight of the whole colon were recorded after opening it longitudinally and flushing with PBS. DAI scores were determined as previously described [53,54]. In brief, the DAI was determined by scoring changes in weight, gross bleeding, and stool consistency. We used four grades of weight loss (0, no loss or weight gain; 1, 1% to 5% weight loss; 2, 5% to 15% weight loss; 3, more than 15% weight loss), four grades of stool consistency (0, normal; 1, slightly loose; 2, loose; and 3, diarrhea), and four grades of gross bleeding (0, normal; 1, slight; 2, modest; and 3, severe). The combined scores constituted the final DAI. One half of the mouse colons was used for flow cytometry analysis, and the other half was divided into three sections (proximal, middle, and distal). The proximal and distal colon sections were fixed in 4% phosphate-buffered formaldehyde for histological analyses. The middle colon was snap-frozen for subsequent molecular analyses. Mice treated with TP5 or saline for 7 or 14 days were sacrificed on day 8 and day 15, respectively. For αIL-22 treatment, the mice were administered one intraperitoneal (i.p.) dose of 400 µg anti-IL-22 (clone: IL22JOP, eBioscience, San Diego, USA) or 400 µg rat IgG2a κ isotype control (eBioscience, San Diego, USA), as previously described [55], and the mice were sacrificed on day 6 as some mice were dying. Histology and immunohistochemistry. Proximal colon, distal colon, and thymus tissues were fixed in 4% phosphate-buffered formaldehyde solution for 24 h and embedded in paraffin. Sections of 4 μm were stained with H&E. Colon inflammation and tissue damage were scored based on the degree of epithelial damage and inflammatory infiltrate in the mucosa, submucosa, and muscularis/serosa, as previously described [11,56]. Each of the four scores was multiplied by 1 if the change was focal, 2 if it was patchy, and 3 if it was diffuse. The H&E sections of the thymus were evaluated and graded for structural disorder and necrosis using a semiquantitative scale from 0 to 3. Cortical and medullary disorders and lymphocytic necrosis were scored as follows: mild=1, moderate=2, and severe=3, and the sum of the two scores was the total pathological score. For immunohistochemistry, after dewaxing and rehydration, the colon sections were soaked in sodium citrate buffer for heat-induced epitope retrieval and incubated with 10% goat serum for 1 h to block nonspecific binding sites. Subsequently, sections were incubated with anti-F4/80 antibody (1:200, CST, USA) or anti-Ki-67 antibody (1:100, CST, USA) overnight at 4℃, followed by incubation with horseradish peroxidase-conjugated secondary antibodies for 20 min (MXB Biotechnologies, Fuzhou, China). The sections were stained using a Diaminobenzidine Substrate Kit (TIANGEN, Beijing, China) and counterstained with hematoxylin. Images were obtained with an Olympus BX41 microscope (Olympus, Japan). Cell isolation and flow cytometry. Colonic LPMCs were isolated following a previously established method [57]. In brief, luminal content, extraintestinal fat tissue, and blood vessels were removed, and the colons were then cut into 0.5 cm pieces. For the experiments with colonic LPMCs in vitro, we isolated fresh colons from normal C57BL/6N mice and used the methods mentioned above to obtain sterile LPMCs, as previously reported [58,59].
The LPMC fractions were stimulated with TP5 (1 μg/mL) and Brefeldin A (4 μg/mL, MedChemExpress, Monmouth, USA) in RPMI-1640 supplemented with 10% fetal bovine serum (FBS, Biological Industries, Israel), 1% GlutaMAX (Gibco, 100x, China), penicillin (100 U/mL, HyClone, USA), streptomycin (100 μg/mL, HyClone, USA), and 50 µM 2-mercaptoethanol (Sigma, USA) for 12 h. For the experiments with splenocytes in vitro, we isolated fresh spleens from normal C57BL/6N mice. The tissues were ground and dispersed into single cells with a sterile cell strainer (Corning, Durham, USA). The cell suspension was then lysed with red blood cell lysis buffer (eBioscience, San Diego, USA) for 5 min. The remaining cells were counted and cultured after washing, as reported before [58,59]. The cells were cultured in a medium similar to that used for the LPMCs for 24 h, except that Brefeldin A was added only for the last 4 h of culture. For some experiments, ursolic acid (2 μM, MedChemExpress, Monmouth, USA), a RORγt inhibitor, was added to the spleen T cells. The cells were stained with surface antibodies, including the LIVE/DEAD™ fixable far red dead cell stain kit (633 or 635 nm excitation). Real-Time PCR analysis. Total RNA from colon tissues was extracted using the RNeasy Plus Mini Kit (Qiagen, Germany), while RNA from cell cultures was extracted using TRIzol (Invitrogen, San Diego, USA). Reverse transcription was performed with a cDNA synthesis kit (Takara, China). Quantitative PCR was performed using SYBR Green QPCR Master Mix (Takara, China) with a StepOnePlus Real-Time PCR system (Applied Biosystems, USA). Relative amounts of mRNA were calculated by the ΔΔCt method with β-actin as the housekeeping gene. Some of the primer sequences were from PrimerBank (https://pga.mgh.harvard.edu/primerbank); others were designed with the Beacon Designer software. All primers were custom-made by Genscript. The primer sequences are shown in the Supplementary material. Inflammatory mediator measurement. Colon tissues were weighed and homogenized using a tissue mixer (PRO Scientific Inc., USA) with 15 volumes of PBS. The tissue samples were then centrifuged at 3000 rpm for 20 min. Tissue supernatants were collected for the assays. IL-22 (Lianke Biotech Co., Ltd., Hangzhou, China), IL-1β, IFN-γ, TNF-α, and IL-6 (Dakewe Biotech Co., Ltd., Shenzhen, China) concentrations were measured by ELISA. Immunostaining of goblet cells. For goblet cell immunostaining, tissue sections were dewaxed, hydrated, and stained with Periodic Acid-Schiff (PAS)/hematoxylin (SenBeiJia BioTech Co., Ltd., Nanjing, China). PAS+ goblet cells were counted in 5 different areas of each section, and in at least 10 sections per mouse. Measurement and observation were performed with an Olympus BX41 microscope. 16S rRNA sequencing and analysis. Colon content homogenates in PBS were immediately frozen (-80℃), and total community genomic DNA extraction was performed using an E.Z.N.A. Soil DNA Kit (Omega, USA), following the manufacturer's instructions. Next-generation sequencing library preparations and Illumina MiSeq sequencing were conducted at Sangon Biotech (Shanghai, China). The 16S rRNA V3-V4 amplicon was amplified using KAPA HiFi Hot Start Ready Mix (2×) (TaKaRa Bio Inc., Japan). Two universal bacterial 16S rRNA gene amplicon PCR primers (PAGE purified) were used: the amplicon forward PCR primer was 5'-CCTACGGGNGGCWGCAG-3' and the amplicon reverse PCR primer was 5'-GACTACHVGGGTATCTAATCC-3'.
Sequencing was performed using the Illumina MiSeq system (Illumina, USA), according to the manufacturer's instructions. The effective sequences of each sample were submitted to the RDP Classifier to identify archaeal and bacterial sequences. Statistical Analysis. Data are presented as mean ± SEM. Statistical significance was determined by a two-tailed Student's t-test between two groups, and by one-way ANOVA followed by Dunnett's post tests when there were more than two groups. P < 0.05 was considered statistically significant.
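Two of the quantitative steps described in the methods above lend themselves to short worked examples. The first sketch implements the three-component DAI rubric (weight loss, stool consistency, and gross bleeding, each graded 0-3 and summed); the second illustrates the 2^-ΔΔCt relative-quantification rule used in the Real-Time PCR analysis. Both are minimal Python illustrations on our part: the function names and all numeric inputs are invented for demonstration and are not data from this study.

```python
# Sketch of the DAI rubric described above; grade boundaries follow the text.
def dai_score(weight_loss_pct: float, stool: int, bleeding: int) -> int:
    """Sum of three 0-3 grades: weight loss, stool consistency, gross bleeding."""
    if weight_loss_pct <= 0:      # no loss or weight gain
        weight_grade = 0
    elif weight_loss_pct <= 5:    # 1% to 5% loss
        weight_grade = 1
    elif weight_loss_pct <= 15:   # 5% to 15% loss
        weight_grade = 2
    else:                         # more than 15% loss
        weight_grade = 3
    # stool: 0 normal, 1 slightly loose, 2 loose, 3 diarrhea
    # bleeding: 0 normal, 1 slight, 2 modest, 3 severe
    return weight_grade + stool + bleeding

print(dai_score(weight_loss_pct=8.0, stool=2, bleeding=1))  # -> 5

# Sketch of 2^-ΔΔCt relative quantification; Ct values below are invented.
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalize to β-actin
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# e.g. a target Ct drops from 30 to 28 cycles while β-actin stays at 18:
print(fold_change(28.0, 18.0, 30.0, 18.0))  # -> 4.0 (four-fold upregulation)
```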
Prevalence and associated factors of active trachoma among 1–9 years of age children in Andabet district, northwest Ethiopia, 2023: A multi-level mixed-effect analysis Background Trachoma is the chief cause of preventable blindness worldwide and has been earmarked for elimination as a public health problem by 2030. Despite the five-year Surgery, Antibiotics, Facial cleanliness, and Environmental improvement (SAFE)-based interventions in the Andabet district, the prevalence of trachomatous follicular (TF) was 37%. Despite such a high prevalence of TF, the determinant factors had not been revealed. Besides, there were no reports on the overall prevalence of active trachoma (i.e., TF and/or trachomatous intense (TI)). Objective To determine the prevalence and associated factors of active trachoma among children aged 1–9 years in the Andabet district. Method A community-based cross-sectional study was conducted among children aged under nine years from March 1–30, 2023 in Andabet district, Northwest Ethiopia. Multi-stage systematic random sampling was employed to reach 540 children. A multilevel mixed-effect logistic regression analysis was employed to assess factors associated with active trachoma. We fitted both random effect and fixed effect analyses. Finally, variables with p<0.05 in the multivariable multilevel analysis were claimed to be significantly associated with active trachoma. Result In this study, the overall prevalence of active trachoma was 35.37% (95% CI: 31.32%, 39.41%). The prevalence of TF and TI was 31.3% and 4.07%, respectively. In the multilevel logistic regression analysis, ocular discharge, fly-eye contact, latrine utilization, and source of water were significantly associated with the prevalence of active trachoma. Conclusion In this study, the prevalence of active trachoma was much higher than the World Health Organization (WHO) threshold prevalence. Ocular discharge, fly-eye contact, latrine utilization, and source of water were independent determinants of active trachoma among children (1–9 years). Therefore, paying special attention to these high-risk groups could decrease the prevalence of a neglected hyperendemic disease, active trachoma.
The datasets used during the current study are available from the corresponding author upon request. Introduction Trachoma is the chief cause of preventable blindness worldwide and has been earmarked for elimination as a public health problem by 2030 (1,2). Children aged under nine years are the most likely to suffer from active trachoma, with prevalence rates ranging from 60-90% (3,4). Globally, 1.3 million people have lost their sight and 1.8 million have become visually impaired from the disease (5,6), while in Ethiopia, 1.2 million people have visual loss and 2.8 million have visual impairment caused by the disease (7). It is estimated that blindness and visual impairment cost US$ 2.9-5.3 billion annually in lost productivity, rising to US$ 8 billion when trichiasis is included (8,9). Those living with visual impairment or blindness have a deteriorated quality of life (9). The WHO reported that 157.7 million people live in districts where active trachoma is a public health problem, 88 percent of them in Africa and half of them in Ethiopia (69,802,693) (2). In Ethiopia, trachoma is the second most common cause of blindness and the third most common cause of low vision (10). The prevalence of active trachoma among children aged 1-9 years was 40.1%, and the disease is ubiquitous across the country, with the Amhara region bearing the highest prevalence (62.6%) (8,11,12). Despite continued efforts to halt the problem, it is still a public health concern in Ethiopia, particularly in the Amhara region (11).
Evidence has revealed that different sociodemographic, behavioral, and environmental factors are associated with the prevalence of under-nine trachoma, although they differ between settings (13)(14)(15)(16). Of the socio-demographic factors, age, sex, family size, and educational status were reported to affect the prevalence of trachoma (13,14). Regarding children's hygienic behavior, factors such as ocular discharge, nasal discharge, flies on the face, use of soap for face washing, and fomite-sharing practices have an impact on the prevalence (13,16). Moreover, environmental factors such as the availability and utilization of latrines, waste disposal pit utilization, and scarcity of water have also been associated with the prevalence of trachoma (13,14). Likewise, limited access to latrines favors fecal contamination of the environment, which supports fly breeding, another mechanical vector for trachoma transmission (13). Despite the five-year SAFE (Surgery, Antibiotics, Facial cleanliness, and Environmental improvement)-based interventions in the Andabet district, the prevalence of TF (trachomatous follicular) was 37% (12). Unlike three similar settings in which SAFE was equally implemented, the district remained hyperendemic, and the reason was an enigma. The previous study only examined TF prevalence; there were no reports on the prevalence of TI (trachomatous intense), and, with such a high prevalence of TF, the determinant factors were not revealed. To our knowledge, this is the first study in the district to examine the overall prevalence of active trachoma (TF and/or TI) and its associated factors. Moreover, most of the previous studies done outside the district did not consider the community-level factors that could affect the prevalence of the disease. It is imperative to consider factors at both the individual and community levels in preventing the disease, as well as in implementing policies and programs to reduce trachoma. Thus, this study aimed to determine the prevalence and associated factors of active trachoma among children aged 1-9 years using a multi-level mixed-effect analysis. Study design and settings A community-based cross-sectional study was conducted among children aged under nine years from March 1-30, 2023. Andabet district, the study area, is sited 150 km from Bahir Dar, the capital city of Amhara National Regional State, and 717 km from Addis Ababa, the capital of Ethiopia. The district of Andabet covers a large geographical area and has a high population density. Based on the 2019 regional population census, the district's projected total population is 152,683, with 34,765 households distributed across 26 kebeles. A primary health care center and two health posts are located there. There was a high prevalence of TF in the district after 8 to 11 years of implementation of SAFE. Study population and eligibility criteria All children whose age was in the range of 1-9 years in the Andabet district were the source population, and we included children aged 1-9 years who had lived in the study area for at least 6 months. Conversely, children who were unable to undergo physical examination due to medical illness were excluded from the study.
Sample size determination and sampling procedure We estimated the required sample size using the single population proportion formula. We assumed, based on a previous similar study, an observed prevalence of active trachoma in Ebinat, Ethiopia (36.1%) (17), which we sought to estimate with 95% confidence within a ±5% margin of error. We used a design effect of 1.5 and allowed for a 10% non-response rate, so the final sample size for this study was determined to be 585 (a worked check of this calculation is sketched after this section). A multistage sampling technique was used during the sampling process. Based on a list of kebeles provided by the Andabet district administration bureau, six kebeles out of 26 were selected using a simple random sampling method. To determine the required sample size for each randomly selected kebele, population-proportional allocation was employed. In the selected kebeles, there were 4,785 households with at least one child between the ages of 1 and 9. Systematic random sampling with an interval of 8 was used to select households with children between the ages of 1 and 9. Before starting the sampling, a pen was spun to mark the starting point within the village. Where there was more than one child aged 1-9 years per household, one child was selected using the lottery method. Ethics statement The study adhered to the tenets of the Declaration of Helsinki, and approval was sought and obtained from the Institutional Review Board of Debre Tabor University, Health Science College. A permission letter was procured from the Andabet district administrative office. The guardians were informed that the study would not cause harm to the children. There were no personal identifiers, and the confidentiality of the study participants was maintained at all stages of data processing. Written informed consent was obtained, and confidentiality was preserved by using codes and avoiding personal identifiers. Trachoma-infected children were referred to the closest health facility. Operational definition Active trachoma: the presence of trachomatous inflammation, follicles/TF (the appearance of five or more follicles with a diameter of greater than 0.5 mm in the central part of the upper tarsal conjunctiva), and/or trachomatous inflammation intense/TI (pronounced inflammatory thickening of the tarsal conjunctiva that obscures more than half of the normal deep tarsal vessels) in one or both eyes (18). Community level of women illiteracy: the aggregated community-level variable derived from maternal educational level and rated as the proportion of women with no formal education at the kebele/community level. Based on the median value, it was then divided into low (mothers from communities with lower illiteracy levels) and high (mothers from communities with higher illiteracy levels) categories (19,20). Latrine utilization: latrines with at least two of the following: the presence of a splash of urine, fresh excreta inside the latrine, a footpath to the latrine, and the absence of a spider web over the squat hole were considered utilized (21). Waste disposal pit utilization: pits with at least one of the following: domestic products, discarded unwanted agricultural products, or ashes (a burned sign of waste) were considered utilized (22).
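As flagged above, the sample-size arithmetic can be verified directly. The short Python sketch below reproduces the single population proportion formula with the inputs stated in the text (p = 0.361, 95% confidence, ±5% margin of error, design effect 1.5, 10% non-response); the variable names are ours, chosen for readability.

```python
# Worked check of the sample-size calculation described above.
import math

z = 1.96           # z-score for 95% confidence
p = 0.361          # prior prevalence from Ebinat (17)
d = 0.05           # margin of error
deff = 1.5         # design effect
nonresponse = 0.10 # allowance for non-response

n0 = (z ** 2) * p * (1 - p) / d ** 2        # base sample, ~354.5
n = math.ceil(n0 * deff * (1 + nonresponse))
print(n)  # -> 585, matching the final sample size reported above
```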
Data collection tools and procedures
The data collection tool was developed after reviewing the available literature. A pretested structured questionnaire, observational checklists, and a physical examination were used to collect data. The questionnaire had four parts: sociodemographic variables, child behavioral variables, environment-related variables, and observation checklists. The environmental and household data were collected by three trained ophthalmic nurses. Using 2.5x loupes, two trachomatous trichiasis (TT)-trained surgeon nurses certified in trachoma grading assessed each child for signs of active trachoma. In accordance with the WHO simplified grading system, the certified TT-trained graders examined both eyes for active trachoma. Aseptic eyelid eversion was performed on the children using cotton-tip applicators, with alcohol used for hand disinfection.

Data quality control
To ensure consistency, the data collection tool was first developed in English, translated into the local language (Amharic), and then translated back into English. Two days of training were then provided to familiarize data collectors and supervisors with the data collection procedures. Finally, a pretest was conducted on 5% of the total sample size in a kebele not included in the study. Unclear questions were edited and modified based on the analysis of the pretest. Supervisors and investigators checked the data for completeness, accuracy, and clarity.

Data processing and analysis
Epi-Data version 4.6 was used for data entry, followed by STATA 16 for cleaning, coding, and analysis. Descriptive statistics were reported using text, tables, and figures. The prevalence of active trachoma was reported with its 95% confidence interval (CI). A multilevel logistic regression analysis was employed to assess factors associated with active trachoma, in order to account for the hierarchical nature of the data: children were nested within clusters, and children within the same cluster are more likely to share similar characteristics than children in other clusters, which violates the independence and equal-variance assumptions of the standard logistic regression model.
First, a bivariable multilevel logistic regression analysis was executed, and variables with a p-value < 0.20 were considered for the multivariable multilevel analysis. While performing the multilevel binary logistic regression analysis, we fitted both random-effect and fixed-effect analyses. The random-effect parameter, the intraclass correlation coefficient (ICC), measures the degree of heterogeneity in the prevalence of active trachoma between clusters; an ICC of more than 10% indicates that accounting for the cluster-level variability of active trachoma using multilevel analysis is warranted. In addition, the proportional change in variance (PCV) and the median odds ratio (MOR) were computed. Moreover, multicollinearity was checked using the variance inflation factor (VIF): we obtained a VIF of less than five for each independent variable, with a mean VIF of 1.90, indicating no significant multicollinearity between the independent variables. In the fixed-effect analysis, four models were fitted: the null model (without explanatory variables), model 1 (containing only individual-level factors), model 2 (examining the effect of community-level factors), and model 3 (incorporating both individual- and community-level factors simultaneously). Among the four models fitted, the last one (model 3) was selected as the best-fitting model, given that it had the lowest deviance and the highest PCV. The adjusted odds ratio (AOR) with its 95% CI was reported for all fitted models. Interpretations, however, are based on the final, best-fitting model. Finally, variables with p < 0.05 in the multivariable multilevel analysis were considered significantly associated with active trachoma.

Socio-demographic characteristics of study participants
A total of 540 children aged 1-9 years were included in the study, giving a response rate of 92.3%. Of these, about three-quarters (75.74%) were between the ages of 4 and 9, with an overall mean age of 6.5±1 years. Almost half (50.93%) of the children were male, and 252 (46.67%) were from rural areas. Regarding household size, more than half (62.22%) of the children were from households of 4 members or fewer (Table 1).

Random effect and model fitness
Table 4 shows that, in the null model, about 35.7% of the total variation in the prevalence of active trachoma occurred at the cluster (kebele) level and is attributable to community-level factors. The null model also had the highest MOR value (3.59), indicating that, when randomly selecting children from a kebele at higher risk of active trachoma and from a kebele at lower risk, children in the higher-risk kebele had 3.59 times higher odds of having active trachoma than their counterparts. Furthermore, the highest PCV (85%) was observed in the full model (model 3), indicating that 85% of the community-level variation in the prevalence of active trachoma was explained by the combined factors at both the individual and community levels. Model fitness was assessed using deviance, and the final model (model 3) was the best-fitting model since it had the lowest deviance.
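The random-effect quantities reported above follow from standard multilevel formulas; the sketch below back-calculates the cluster-level variance from the reported null-model ICC (the back-calculation from a rounded ICC is our assumption, so the MOR agrees with the reported 3.59 only up to rounding):

import math

icc = 0.357                          # null-model ICC reported above
level1_var = math.pi**2 / 3          # latent residual variance of the logistic model

# ICC = sigma2 / (sigma2 + pi^2/3)  =>  solve for the cluster-level variance
sigma2 = icc * level1_var / (1 - icc)

# Median odds ratio: MOR = exp(sqrt(2 * sigma2) * 0.6745), 0.6745 = Phi^-1(0.75)
mor = math.exp(math.sqrt(2 * sigma2) * 0.6745)

print(round(sigma2, 2), round(mor, 2))   # 1.83 3.63 (reported: 3.59)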
Factors associated with active trachoma
In the multivariable multilevel logistic regression analysis, where both the individual- and community-level factors were fitted simultaneously, ocular discharge, fly-eye contact, latrine utilization, and source of water were significantly associated with the prevalence of active trachoma. Children who had ocular discharge had 2.01 (AOR=2.01; 95% CI: 1.11, 3.64) times higher odds of developing active trachoma than children who had no ocular discharge. Regarding fly-eye contact, children who had fly-eye contact had 1.96 (AOR=1.96; 95% CI: 1.09, 3.53) times higher odds of developing active trachoma than children who had no fly-eye contact. In the same manner, the odds of developing active trachoma among children aged 1-9 years from families that did not utilize a latrine were 5.28 (AOR=5.28; 95% CI: 2.88, 9.70) times higher than among children from families that utilized a latrine. Regarding the source of water, the odds of developing active trachoma among children aged 1-9 years from households that obtained water from a river were 3.89 (AOR=3.89; 95% CI: 1.30, 11.67) times higher than among children from households with a household tap (Table 5).

Discussion
The study sought to assess the prevalence and associated factors of active trachoma in the Andabet district, Northwest Ethiopia. This study revealed that the prevalence of active trachoma among children aged 1-9 years was 35.37%. This finding is consistent with studies done in Areka and the Zala district in Ethiopia, and in Nigeria, Chad, Uganda, the Central African Republic, and Senegal (23)(24)(25)(26)(27)(28). However, this prevalence of active trachoma was lower than in studies conducted in Ankober, Amhara, Amaro, Burji, and Horo Guduru (16,(29)(30)(31) and higher than in studies conducted in other countries (11,13,14,32). The discrepancy might be due to differences in the status of trachoma prevention practices, study settings, periods, and interventions. Besides, the availability and accessibility of health facilities, as well as water, sanitation, and hygiene capacity, differ between countries (33,34). Moreover, the discrepancy between this finding and those of studies conducted outside Ethiopia might be due to sociodemographic and cultural differences.

In this study, we found that children with ocular discharge were more likely to develop active trachoma than children without ocular discharge. This finding is supported by studies done in the southern and northern Wollo zone districts, Dangila, the Gambia, and Tanzania (13,(35)(36)(37), which similarly showed that children with ocular discharge were more prone to active trachoma infection. This might be because discharge from an infected eye facilitates transmission of the infection by direct contact or via fingers, flies, or fomites (38).

Fly-eye contact was another factor associated with the prevalence of active trachoma among children aged 1-9 years: children with fly-eye contact were more likely to develop active trachoma than those without it. This finding is similar to those of studies done in Dangila, rural Ethiopia, and Ankober (13,29,37), which likewise identified fly-eye contact as a risk factor for trachoma. This might be because the eye-seeking fly Musca sorbens and other domestic Muscidae act as vectors of Chlamydia trachomatis and open a transmission route for trachoma (39).
Consistent with other studies conducted in Baso-Liben, the Ankober district, and Dangila (13,29,32), in this study children aged 1-9 years living in households that did not utilize latrines were more likely to develop active trachoma than children living in households that did. A possible reason is that Musca sorbens, a vector of the causative agent Chlamydia trachomatis, has been shown to preferentially breed in human excreta (13). Hence, open defecation beside the house provides a favorable breeding environment for Musca sorbens and is an important contributor to disease transmission.

The fourth important finding of this study concerns the source of water, a community-level factor associated with the prevalence of active trachoma among children aged 1-9 years: children living in households that obtained water from a river were more likely to develop active trachoma than children from households with a household tap. This finding is supported by studies done in Waghemera and Madda Walabu (40,41). This might be because a river, or any other unprotected water source, can serve as a reservoir of infection, being a breeding ground for flies as well as a habitat for Chlamydia trachomatis (42).

Strengths and limitations of the study
This study has both strengths and limitations. To begin with the strengths, it investigated a neglected tropical disease in children aged 1-9 years following WHO guidelines. Besides, the study used multilevel modeling, which takes the clustering effect into account in order to draw valid conclusions and inferences. Moreover, a sufficient sample size was used to ensure representativeness. The study, however, has limitations owing to its cross-sectional nature: it may not show a true temporal relationship between the outcome and the independent variables. Besides, social desirability bias and potential recall bias cannot be ruled out when assessing sensitive variables.

Conclusion
In this study, the prevalence of active trachoma was much higher than the WHO threshold prevalence. Trachoma is still a severe public health problem in this community, and its elimination as a public health problem remains far off. Ocular discharge, fly-eye contact, latrine utilization, and source of water were independent determinants of active trachoma among children aged 1-9 years. Therefore, interventions need to be refined for personal hygiene-related activities, such as washing children's faces thoroughly to remove dirt and ocular discharge and to reduce fly-eye contact. Significant emphasis should be placed on the construction of household taps and the provision of their service. Besides, the building and use of latrines need to be prioritized.
Funding: The author(s) received no specific funding for this work.

Table 1: Socio-demographic characteristics of study participants in Andabet, northwest Ethiopia, 2023.

The majority (85.16%) of the households had access to a covered pit latrine, and 433 (80.19%) utilized the latrine. Almost all (97.07%) had no handwashing facility near the latrine, and 255 (47.22%) had no separate place for animal dwellings (Table 2).

Table 2: Environmental characteristics of the households in Andabet, northwest Ethiopia, 2023.

Table 4: Random effects and model fitness for the prevalence and associated factors of active trachoma.

Table 5: Multilevel analysis of the determinants of active trachoma among children aged 1-9 years in Andabet.
Reflow: Zero Knowledge Multi Party Signatures with Application to Distributed Authentication

Reflow is a novel signature scheme supporting unlinkable signatures by multiple parties authenticated by means of zero-knowledge credentials. Reflow integrates with blockchains and graph databases to ensure the confidentiality and authenticity of signatures made by disposable identities that can be verified even when credential-issuing authorities are offline. We implement and evaluate Reflow smart contracts for Zenroom and present an application producing authenticated material passports for resource-event-agent accounting systems based on graph data structures. Reflow uses short and computationally efficient authentication credentials and can easily scale signatures to include thousands of participants.

Introduction
Multi-party computation applied to the signing process allows the issuance of signatures without requiring any of the participating parties to disclose secret signing keys to each other, and without requiring the presence of a trusted third party to receive them and compose the signatures. However, established schemes have shortcomings. Existing protocols do not provide the efficiency, re-randomization or blind issuance properties necessary for application to trustless distributed systems. Those managing to implement such privacy-preserving features are prone to rogue-key attacks [Boneh et al., 2020], since they cannot guarantee that signatures are produced by legitimate key holders. The lack of efficient, scalable and privacy-preserving signature schemes impacts distributed ledger technologies that support 'smart contracts' as decentralized or federated architectures where trust is not shared among all participants but granted by one or more authorities through credential issuance for the generation of non-interactive and unlinkable proofs.

Reflow applies to the signature process a mechanism of credential issuance by one or more authorities for the generation of non-interactive and unlinkable proofs, resulting in short and computationally efficient signatures composed of exactly two group elements that are linked to each other. The size of the signature remains constant regardless of the number of signing parties, while the credential is verified and discarded after signature aggregation. During signing, duplicates, which would invalidate the final result, may be avoided by collecting unlinkable fingerprints of the signing parties. Before being able to sign, a one-time setup phase is required, in which the signing party collects and aggregates a signed credential from one or more authorities.

Our evaluation of the Reflow functions shows very promising results: basic session creation takes about 20ms, while signing takes 73ms and verification 40ms on average consumer hardware.

Overview
Reflow provides a production-ready implementation that is easy to embed in end-to-end encryption applications. By making it possible for multiple parties to anonymously authenticate and produce untraceable signatures, its goal is to leverage privacy-by-design scenarios that minimize the information exchange needed for document authentication. Participation in a signature is governed by one or more issuers holding keys for the one-time setup of signature credentials.
The steps outlined below are represented in figure 1:

1. Issuer and Participant each create their own keypair;
2. Participant sends a credential request to Issuer;
3. Issuer signs the credential request and sends it back;
4. Participant can create anonymous credential proofs.

Following this setup, any participant will be able to produce a zero-knowledge proof of possession of the credential, which can be verified by anyone, anonymously, on the blockchain. The base application of Reflow is the privacy-preserving collective signature of digital documents, for which the signed credential proof is a requirement to participate in any signature process. A Reflow signature process is best described in 3 main steps: session creation, signature and verification.

1. Anyone creates a session
A session may be created by anyone; no credentials are required, only information that should be public: the public keys of the participants holding a credential to sign, the public verifier (public signature key) of the issuer who has signed the credentials and, finally, a document to be signed. The steps below are represented in figure 2 and illustrate how a signature session is created:

1. Participants publish their public keys, available to anyone;
2. Anyone may initiate a signature session by selecting a document, the public keys of the signing participants and the issuer;
3. The signature session is then published without disclosing the identity of any participant.

A Reflow signature session can then be published for verification, and its existence may be confidentially communicated to the participants elected to sign it. Possession of the session does not allow the disclosure of the participants' identities, but only of that of the Issuer and of the document (or its hash) being signed.

2. Participants sign the session
Only the elected participants that were initially chosen to sign the session may sign it; this is enforced through credential authentication. Whenever they learn about the existence of a session requesting their signature, they may choose to sign it. The steps below are illustrated in figure 3:

1. Participants may be informed about the signature session and may create an anonymous signature to be added to the session;
2. Anyone may check that the anonymous signature is authentic and not a duplicate, then add it to the session;
3. Anyone may be informed about the signature session and be able to verify whether the document is signed by all, and only, the elected participants.

A delicate aspect of BLS signatures is avoiding double-signing: if a participant signs twice, the whole signature will never be valid. Relying on stateless credential authentication alone does not avoid this case in Reflow; therefore we use a list of anonymous "fingerprints" of the signatures related to the document being signed. Each signature produces a participant fingerprint, saved in a list that is checked for duplicates before a new signature is added. This procedure adds significant computational overhead for sessions with a large number of participants, but it can be switched off in a system whose architecture avoids double-signing by itself.

3. Anyone verifies signatures
Until the session has collected all the signatures of the participants, its verification will not be valid. It is also impossible to know whether all participants have signed the session or how many are missing. Anyone can verify the state of the signature session at any moment, just by having the document and the session, as illustrated in figure 3 along with the signature process.
Configurable features may be introduced in the Reflow signature flow that may or may not disclose more information, for instance who initiated the signature session, which documents are linked to signatures and how many participants were called to sign: this depends on the implementation, on the metadata it may add to signature sessions and on the communication protocols adopted. In any case, the basic signature and verification flow of Reflow requires that only one identity is really made public, that of the issuer.

Applications
Moving further in envisioning the possibilities opened by Reflow, it is important to note that signatures can be aggregated (summed) into compact multi-signatures, a core feature of our BLS-based signature scheme [Boneh et al., 2018a].

Need to Know. The base implementation exploiting this feature is a signature scheme for a single document split into separate sections to be signed by different participants: all the signatures can later be aggregated into a single one, proving that the whole document has been signed by all participants without it being disclosed to all of them in its entirety. This application helps to enforce the principles of need-to-know and least privilege in access to information [Saltzer and Schroeder, 1975] and is useful for the realization of privacy-aware applications in various sectors, for instance medical and risk-mitigation analysis.

Disposable Identities
A Disposable Identity is based on four properties common to other authentication and identification systems: verifiability, privacy, transparency and trustworthiness; in addition, it introduces a fifth property: disposability. Disposability permits purpose-specific and context-driven authentication, avoiding the linkage of the same identity across different authentication contexts [Goulden et al., 2021]. Reflow can be adopted by such an application to remove the need for context-free identifiers and implement authentication functions through a disposable identity whose traceability is bound to a context UID. Furthermore, being a signature scheme, Reflow can add the feature of context-free verifiable signatures (and multi-signatures) that are untraceable in public, but can be traced and even revoked within their context. This scenario is relevant for the implementation of privacy-preserving public sector applications that allow authentication and signatures through disposable identity systems [Kranenburg and Gars, 2021].

Material Passport. Drawing on the feature of multiple signature aggregation, Reflow can be used to implement a material passport for circular economy applications [Luscuere, 2017], maintaining the genealogy of a specific product and providing authenticated information about the whole set of actors, tools, collaborations, agreements, efforts and energy involved in its production, transportation and disposal [Dyne.org, 2020]. The provision of the information that forms the content of the material passport should be done by every actor in the supply chain, and among the most important technical necessities for such an application are the confidentiality issues regarding access to information and the guarantees of the quality of information [Damen, 2012]. As an ideally simple and effective ontology we adopt a Resource-Event-Agent model [Laurier et al., 2018] and the ValueFlows vocabulary [Foster et al., 2017]. We then consider Resources as material passports made of the track and trace of all the nodes (Events, Agents and Processes) they descend from.
The material passport is an authenticated graph structure: in figure 4, the Resources on the right side have a UID which is the aggregation of all the UIDs of the elements leading to their existence. The integrity of the material passport can be verified by recalculating the UID aggregation and checking that it matches the signed one attached to the Resource. In case one or more UIDs are wrong or missing, the Resource will not verify as valid. In brief, it is possible that:

1. one or more Agents may interact to create material passports;
2. a material passport is the aggregation of all parent nodes;
3. anyone may verify the integrity and validity of any material passport;
4. the verification of a material passport does not reveal the identity of the Agents contained in it;
5. one may export and import a material passport as a graph query.

Reflow's unlinkability of credentials and signatures satisfies the privacy requirement of the material passport, while the possibility to aggregate and link all the elements of its graph allows the grouping of multiple signatures into a single compact one, without requiring any interaction with the previous signers. The material passport signature will then be the sum of all the Agents, Processes and Events involved, created or consumed for it. The Reflow material passport is the authenticated, immutable and portable track record of all the nodes connected in its graph: material passports can be signed on export and verified on import, which makes them reliably portable from one graph to another in a federated environment.

This paper makes four key contributions:

• We describe the signature scheme underlying Reflow, including how key generation, signing and verification operate (Section 2). The scheme is an application of the BLS signature scheme [Boneh et al., 2018b], fitted with features to grant the unlinkability of signatures and to secure it against rogue-key attacks.

• We describe the credential scheme underlying Reflow, including how key generation, issuance, aggregation and verification of credentials operate (Section 3). The scheme is an application of the Coconut credential scheme [Sonnino et al., 2018], which is general purpose and can be scaled to a fully distributed issuance that is re-randomizable.

• We implement a Zencode scenario of Reflow to be executed on- and off-chain by the Zenroom VM, complete with functions for public credential issuance, signature session creation and multi-party non-interactive signing (Section 4). We evaluate the performance and cost of this implementation on on-site and on-line platforms leveraging end-to-end encryption (Section 5).

• We implement an efficient end-to-end encryption scheme for the authentication of graph data structures, as observed in the material passport use-case. The scheme simplifies the complexity of track-and-trace implementations by making it sufficient to proceed one level of depth at a time and verify the integrity of aggregated signatures at each step.

Notations and assumptions
We will adopt the following notations:

• F_p is the prime finite field with p elements (i.e. of prime order p);
• E denotes the (additive) group of points of the curve BLS-383 [Scott, 2017], which can be described with the Weierstrass form y² = x³ + 16;
• E_T represents instead the group of points of the twisted curve of BLS-383, with embedding degree k = 12. The order of this group is the same as that of E.

We also require defining the notion of a cryptographic pairing.
Basically, it is a function e : G1 × G2 → GT, where G1, G2 and GT are all groups of the same order n, satisfying the following properties:

i. Bilinearity: given P1, Q1 ∈ G1 and P2, Q2 ∈ G2, we have
e(P1 + Q1, P2) = e(P1, P2) · e(Q1, P2) and e(P1, P2 + Q2) = e(P1, P2) · e(P1, Q2);
ii. Non-degeneracy: for generators g1 ∈ G1 and g2 ∈ G2, e(g1, g2) ≠ 1_GT, the identity element of the group GT;
iii. Efficiency: the map e is easy to compute;
iv. G1 ≠ G2, and moreover there exists no efficient homomorphism between G1 and G2.

For the purpose of our protocol we will have G1 = E_T and G2 = E, and GT ⊂ F_{p^12} is the subgroup containing the n-th roots of unity, where n is the order of the groups E and E_T. Here e : E_T × E → GT is the Miller pairing, which in our work is encoded as the method miller(ECP2 P, ECP Q).

To conclude, the credential scheme in section 3 uses non-interactive zero-knowledge proofs (NIZK proofs for short) to assert knowledge of, and relations over, discrete logarithm values. They will be represented using the notation introduced by Camenisch and Stadler [1997] as

NIZK{(x, y, ...) : statements about x, y, ...}

Signature
A BLS signature is a signature scheme whose design exploits a cryptographic pairing. As with other well-known algorithms such as ECDSA, it works in three main steps:

• Key Generation phase. For a user who wants to sign a message m, a secret key sk is chosen uniformly at random in F_n, where n is the order of the groups G1, G2, GT. The corresponding public key pk is the element sk·G2 ∈ E_T;
• Signing phase. The message m is first hashed into the point U ∈ E, which in our scheme is done by the method hashtopoint; the related signature is then given by σ = sk·U;
• Verification phase. Another user who wants to verify the authenticity and integrity of the message m needs to
1. parse m, pk and σ;
2. hash the message m into the point U and then check whether the following identity holds:
e(pk, U) = e(G2, σ).
If verification passes, σ is a valid signature for m and the protocol ends without errors.

Proof of the verification algorithm: by using the definitions of the elements involved and exploiting the bilinearity of the pairing e, we have

e(pk, U) = e(sk·G2, U) = e(G2, U)^sk = e(G2, sk·U) = e(G2, σ).

BLS signatures present some interesting features. For instance, the length of the output σ competes with those obtained by ECDSA and similar algorithms; in our specific case, using BLS-383 [Scott, 2017], it will be 32 bytes long, which is a typical standard nowadays. Moreover, since this curve is pairing-friendly, signing and verification (under the assumptions made on e) are performed in very short time.

BLS also supports aggregation, that is, the ability to aggregate a collection of multiple signatures σ_i (each one related to a different message m_i) into a single new object σ that can be validated using the respective public keys pk_i in a suitable way. This is possible thanks to the fact that all the σ_i lie in the same group, giving the algorithm a homomorphic property. We now show how this last feature can be attained in the context of a multi-party computation using the same message m but different participants.
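A minimal, runnable sketch of the three phases follows. BLS12-381 (via the py_ecc package) stands in for BLS-383, whose parameters are not commonly packaged, and a naive hash-to-scalar replaces the hashtopoint method; this is sufficient for illustration but is not a proper hash-to-curve:

import hashlib
import secrets

from py_ecc.bls12_381 import G1, G2, multiply, pairing, curve_order

def hash_to_point(msg: bytes):
    # Toy stand-in for hashtopoint: hash to a scalar, then multiply the
    # generator. NOT a real hash-to-curve (the discrete log of U is known).
    return multiply(G1, int.from_bytes(hashlib.sha256(msg).digest(), "big") % curve_order)

# Key generation: sk uniform in F_n, pk = sk * G2
sk = secrets.randbelow(curve_order - 1) + 1
pk = multiply(G2, sk)

# Signing: U = H(m), sigma = sk * U
U = hash_to_point(b"a message to sign")
sigma = multiply(U, sk)

# Verification: e(pk, U) == e(G2, sigma), which holds by bilinearity
assert pairing(pk, U) == pairing(G2, sigma)
print("BLS signature verified")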
Session Generation
After the key generation step we introduce a new phase called session generation, where the signature is initialized; anyone willing to start a signing session on a message m will create:

1. a random r and its corresponding point R = r·G2;
2. the sum of R and all the public keys pk supposed to participate in the signature, such that P = R + Σ_i pk_i;
3. the unique identifier UID of the session, calculated as the hash-to-point of the message m, such that U = H(m) ∈ E, where H is a combination of a cryptographic hash function (treated as a random oracle) together with an encoding into elliptic curve points;
4. the first layer of the signature, σ ← r·U, later to be summed with all the other signatures in a multi-party computation setup, resulting in the final signature σ ← r·U + Σ_i sk_i·U;
5. the array of unique fingerprints ζ_i of each signature, resulting from the credential authentication (see section 3).

After this phase is terminated, every participant involved in the session starts their own signing phase during the session, producing (from the same message m) their respective σ_i. The final signature σ is then computed as follows: let σ_0 = r·U and suppose that k participants have already aggregated their σ_i, obtaining a partial signature S_k; the (k+1)-th participant will then compute

S_{k+1} = S_k + σ_{k+1} = S_k + sk_{k+1}·U.

Finally, the resulting output will be σ = S_N, where N is the total number of signers of the session. In order to verify that σ is valid, we compute P = R + Σ_{i=1}^{N} pk_i, where R = r·G2 works as a public key with respect to the nonce r, which instead is kept secret. Verification is then performed by checking whether the following identity holds:

e(P, U) = e(G2, σ).

If verification passes without errors, σ is a valid aggregated signature of m.

Proof. Recalling that P = R + Σ_i pk_i = (r + Σ_i sk_i)·G2 and σ = (r + Σ_i sk_i)·U, by bilinearity

e(P, U) = e(G2, U)^{r + Σ_i sk_i} = e(G2, (r + Σ_i sk_i)·U) = e(G2, σ).

We conclude this section with a final consideration on this feature. We recall that in the generation of the aggregated signature σ we used as a starting point the value σ_0 = r·U, whereas in the literature it is also common to find simply the base point G2 instead. The choice of randomizing it (provided that the random number generator acts as an oracle) helps in preventing replay attacks, since the signature generated by the process is linked to the session in which it is produced: if an attacker managed to extract some information from σ, it would be difficult to use it to forge new signatures.
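The session flow above can be sketched the same way (same py_ecc and hash-to-point stand-ins as before; the fingerprint list of step 5 is omitted here):

import hashlib
import secrets

from py_ecc.bls12_381 import G1, G2, add, multiply, pairing, curve_order

def hash_to_point(msg: bytes):  # toy stand-in for hashtopoint, as above
    return multiply(G1, int.from_bytes(hashlib.sha256(msg).digest(), "big") % curve_order)

U = hash_to_point(b"session document")                    # session UID
sks = [secrets.randbelow(curve_order - 1) + 1 for _ in range(3)]
pks = [multiply(G2, sk) for sk in sks]

r = secrets.randbelow(curve_order - 1) + 1                # session nonce (kept secret)
R = multiply(G2, r)                                       # published with the seal
sigma = multiply(U, r)                                    # sigma_0 = r * U

for sk in sks:                                            # non-interactive signing:
    sigma = add(sigma, multiply(U, sk))                   # S_{k+1} = S_k + sk_i * U

P = R
for pk in pks:                                            # P = R + sum(pk_i)
    P = add(P, pk)

assert pairing(P, U) == pairing(G2, sigma)                # e(P, U) = e(G2, sigma)
print("aggregated session signature verified")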
Credential
Following the guidelines of Coconut, the credential issuing scheme works as follows:

1. the issuer generates its own keypair (sk, vk), where sk = (x, y) ∈ Z² is the pair of secret scalars (the signing key) and vk = (α, β) = (x·G2, y·G2) is the verifying key, made of the related pair of public points over E_T;
2. user i, with its respective keys (sk_i, PK_i), makes a credential request on its secret attribute ck_i ∈ Z to the issuer; the request is represented by λ, which contains a zero-knowledge proof π_s of the authenticity of user i;
3. the issuer, after having received λ, verifies the proof π_s contained in it and, if it passes, releases to user i a credential σ̃ signed using its own key sk.

Step 1 is self-explanatory. Steps 2 and 3 require a bit more effort. In order to build a valid request λ, and hence also a valid proof π_s, the user must first produce a hash digest of the attribute ck_i, which we call h; it then computes a commitment c to h, built from a fresh randomly generated integer r and the hard-coded point HS on the curve E, together with the El-Gamal encryption

s = (a, b) = (k·G1, k·γ + h·c)

of h·c under the public point γ = ck_i·G1, where k is a second fresh randomly generated integer.

These two values are included in the credential request λ produced by prepare_blind_sign and are needed by the verifier to establish the authenticity of the user through the proof π_s, which takes as input h, k, r, c. The non-interactive zero-knowledge proof π_s generated by the function blind_sign is computed as follows:

• Randomization phase. Three new nonces w_h, w_k, w_r ∈ Z are generated, each one related to the input values h, k, r respectively, as we show below;
• Challenge phase. The protocol creates three commitment values A_w, B_w, C_w from the nonces, mirroring the construction of a, b and c. These values are then used as input to a function ϕ producing an integer c_h = ϕ({c, A_w, B_w, C_w});
• Response phase. So that the proof can be verified, the protocol generates three more values which are included inside the proof itself and link the nonces w_h, w_k, w_r with h, k, r, i.e.

r_h = w_h − c_h·h,  r_k = w_k − c_h·k,  r_r = w_r − c_h·r.

So basically the proof π_s contains the three response values r_h, r_k, r_r and also the challenge value c_h, which can be used for a predicate φ that is true when computed on h. Once the verifier receives the request λ, in order to check whether the proof is valid it reconstructs A_w, B_w, C_w from the responses, the challenge and the public values c, γ and s. If the request is correct, then

c_h = ϕ({c, A_w, B_w, C_w})   (1)

and verification is thus complete, meaning that the verifier is entitled to believe that the prover actually owns the secret attribute ck_i associated with the public value γ, and that it has consequently produced a valid commitment c and (El-Gamal) encryption s; in other words,

π_s = NIZK{(h, k, r) : a = k·G1 ∧ b = k·γ + h·c ∧ c is well formed from (h, r) ∧ φ(h)}.

At this point the user will have a blind credential σ̃ = (c, ã, b̃) issued by the authority, where ã = y·a and b̃ = x·c + y·b. The user will then un-blind it using its secret credential key, obtaining σ_ck = (c, s) = (c, b̃ − ck_i·ã), which it will use to prove its identity when signing a message. The procedure is similar to the one seen before, with some extra details:

• Setup. As for the BLS signature, an elliptic curve point U, associated with the hash of the message to sign, is required as the unique identifier (UID) of the signing session;
• Credential proving. The user produces two cryptographic objects θ (containing a new proof π_v) and ζ (which is unequivocally associated with U) through prove_cred_uid, taking as input its own credential σ, the related secret attribute ck, the authority public key vk = (α, β) and the session point U. The new objects θ and ζ are derived as follows:
– as before, the user hashes ck into h, and this time generates two random values r and r′;
– next, it randomizes its credential σ_ck into σ′_ck = (c′, s′) = (r·c, r·s) and then computes two elliptic curve points κ and ν as

κ = α + h·β + r′·G2,  ν = r′·c′,

where π_v is a valid zero-knowledge proof of the following form:

π_v = NIZK{(h, r′) : κ = α + h·β + r′·G2 ∧ ν = r′·c′ ∧ ζ = h·U ∧ φ(h)},

with φ being a predicate which is true on h;
– ζ is instead the elliptic curve point obtained as h·U ∈ E.

Building the proof π_v requires steps similar to those seen for π_s: we create three commitment values A_w, B_w, C_w from two freshly generated nonces w_h, w_r, mirroring the construction of κ, ν and ζ; we then set the challenge as c_h = ϕ({α, β, A_w, B_w, C_w}) with the related responses

r_h = w_h − c_h·h,  r_r = w_r − c_h·r′.

The values of r_h and r_r are stored inside π_v, which is then sent through θ (together with ζ) from the prover to the verifier.
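For reference, and before turning to verification, the showing protocol can be restated compactly (a reconstruction from the relations above; note that after un-blinding, s = b̃ − ck_i·ã = (x + y·h)·c, which is exactly the form exploited by the pairing check):

\begin{align*}
\text{credential:}\quad & \sigma_{ck} = (c, s), \qquad s = (x + y\,h)\,c \\
\text{randomize:}\quad  & \sigma'_{ck} = (c', s') = (r\,c,\ r\,s) \\
\text{prove:}\quad      & \kappa = \alpha + h\,\beta + r'\,G_2, \qquad
                          \nu = r'\,c', \qquad \zeta = h\,U \\
\text{verify:}\quad     & c' \neq \mathcal{O} \quad\text{and}\quad
                          e(\kappa, c') = e(G_2,\ s' + \nu)
\end{align*}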
In order to check that the user has legitimately generated the proof and, at the same time, is the owner of the credential, the following steps must be performed:

1. verify the proof π_v, reconstructing the commitment values from the responses r_h, r_r, the challenge c_h and the public values κ and ν, and checking that they hash back to the same challenge (identity (2));
2. check that c′ ≠ O, the point at infinity;
3. check that

e(κ, c′) = e(G2, s′ + ν).   (3)

Actually, the predicate φ in the definition of π_v can be thought of as performing steps 2 and 3; if any of these fails, the protocol aborts returning a failure, otherwise verification passes and the user can finally produce the signature.

Proof of the verification algorithms. For the proof π_s, using the responses r_h, r_k and r_r, together with c_h and the other parameters inside λ, i.e. s = (a, b), c and γ, and also the hard-coded point HS, the verifier can reconstruct the commitments A_w, B_w, C_w: each response cancels the challenge term against the corresponding public value (for instance, r_k·G1 + c_h·a = (w_k − c_h·k)·G1 + c_h·k·G1 = w_k·G1 = A_w), so that identity (1) holds.

Regarding the second proof π_v, we have to prove that both identities (2) and (3) hold. We will focus only on the latter, since the former requires an approach similar to the one used for π_s, but with different parameters involved (κ, ν, etc.). Writing d = ck_i for the user's credential key, the left-hand side of the relation can be expressed as

e(κ, c′) = e(α + h·β + r′·G2, c′) = e(x·G2 + h·y·G2 + r′·G2, r̃·G1) = e((x + h·y + r′)·G2, r̃·G1) = e(G2, G1)^{(x + h·y + r′)·r̃}

using the substitution c′ = r̃·G1, with r̃ ∈ F_p, since we know that c′ ∈ E. For the right-hand side we have instead

e(G2, s′ + ν) = e(G2, r·s + r′·c′).

The second argument of the pairing can be rewritten as

s′ + ν = r·(x·c + y·b − d·y·a) + r′·(r·c) = r·(x·c + y·b − d·y·a + r′·c).

So, at the end,

e(G2, s′ + ν) = e(G2, r·(x·c + y·b − d·y·a + r′·c)) = e(G2, r·(x + y·h + r′)·c) = e(G2, (x + h·y + r′)·r̃·G1) = e(G2, G1)^{(x + h·y + r′)·r̃},

and (3) is finally proved.

Security considerations. As mentioned in Coconut [Sonnino et al., 2018], BLS signatures and the proof system obtained with credentials are considered secure assuming the existence of random oracles [Koblitz and Menezes, 2015], together with the decisional Diffie-Hellman problem (DDH) [Boneh, 1998], the external Diffie-Hellman problem (XDH) and the Lysyanskaya-Rivest-Sahai-Wolf problem (LRSW) [Lysyanskaya et al., 1999], all of which are connected to the discrete logarithm. In fact, under these assumptions, our protocol satisfies unforgeability, blindness, and unlinkability.

Reservations can be made about the maturity of pairing-based elliptic curve cryptography: despite various efforts to measure its security and to design curve parameters that raise it, it is reasonable to consider this a pioneering field of cryptography, in contrast to well-tested standards. In addition to considerations on the maturity of EC, the future growth of quantum-computing technologies may be able to overcome the discrete-logarithm assumptions by qualitatively different computational means. Reflow may then be vulnerable to quantum-computing attacks, as well as hard to patch, because the pairing-based design sits at its core with the adoption of the ATE / Miller loop pairing of curves in twisted space, a practice that is not covered by research on quantum-proof algorithms and will eventually need more time to be addressed; however, this is all speculative reasoning about what we can expect from the future.

The Reflow implementation we present in this paper, published as a ready-to-use Zenroom scenario, is based on the BLS383 curve [Scott, 2017], which in the current implementation provided by the AMCL library has passed all our lab tests regarding pairing properties, a positive result that is not shared by the slightly different BLS381-12 curve adopted by ETH2.0.
Debating the choice of BLS381 is well beyond the scope of this paper, but it is worth mentioning that our lab tests have also shown the BLS461 curve to work in Reflow: it is based on a 461-bit prime and hence upgrades our implementation to 128-bit security [Barbulescu and Duquesne, 2019] against attacks looking for discrete logs on elliptic curves [Lim and Lee, 1997].

Finally, the complexity and flexibility of Reflow in its different applications, its optional use of fingerprint lists, the multiple UIDs saved from aggregation and the other features covering the different applications also represent a security risk in the technical integration phase. We believe that the adoption of Zenroom and the creation of a Zencode scenario address this vulnerability well by providing an easy-to-use integrated development environment (apiroom.net) and a test-bed for the design of different scenarios of application for Reflow that can be deployed correctly, granting end-to-end encryption and data minimization according to privacy-by-design guidelines [Hoepman et al., 2019].

Implementation
In this section we illustrate our implementation of the Reflow keygen, sign and verify operations, outlining for each:
• the communication sequence diagram, when necessary;
• the Zencode statements that operate the transformation.

Zencode is actual code executed inside the Zenroom VM; behind its implementation, the algorithms follow very closely the mathematical formulation explained in this article and can be reviewed in the free and open-source code published at Zenroom.org. The Reflow Zencode scenario implementation is contained inside the Zenroom source code. To execute an example flow on one's own computer, it is enough to visit the website ApiRoom.net and select the Reflow scenario examples; pressing play will execute the Zencode and show inputs and outputs in JSON format.

Credential Setup
The Credential Setup sequence has to be executed only once to enable participants to produce signatures. This sequence is briefly illustrated with a diagram and consists in the creation of keypairs for both the Issuer, who will sign the participant credentials, and the participant, who will request an issuer credential.

Credential signature: generate a credential request and have it signed by an Issuer, as well as generate all the keys used to sign documents. This procedure generates private keys that should not be communicated, as well as BLS keys whose public part should be communicated and can later be aggregated to create a seal (signature session) and to verify one. The public key of the Issuer, which is used to sign the credential, should be public and known by the Signer at the beginning of the keygen process. This code is executed in multiple steps by the following Zencode utterances:

1. When I create the reflow key and I create the credential key will create, inside the keys data structure, two new secret keys: the credential and the bls keys, both consisting of BIG integers. Secret keys are chosen uniformly at random in F_n, where n is the order of the group G1.

2. When I create the credential request will use the secret keys to create a new credential request, consisting of a λ containing a zero-knowledge proof π_s of the authenticity of Signer i owning keys (sk_i, PK_i) and requesting the Issuer to sign its secret attribute ck_i ∈ Z. The Issuer may request additional communications, held on a side channel, in order to establish proof-of-possession and the authenticity of the Signer, for instance a challenge session in which a random message is demonstrably signed by the secret credential key using an El-Gamal signature [ElGamal, 1985] and verified by the public key.

3. Given I have a 'credential request' When I create the credential signature and I create the issuer public key will be executed by the Issuer to sign the credential after the proof-of-possession challenge succeeds. It consists of a σ̃ signed using the Issuer's own secret key sk after verifying the proof π_s inside the received λ.

4. Given I have a 'credential signature' When I create the credentials will be executed by the Signer to aggregate one or more credential signatures into the secret keys structure. The credentials consist of a σ_ck that should be stored locally and secretly, as it will later be required to create the zero-knowledge proof components θ and ζ that authenticate the Reflow signature against rogue-key attacks.
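Putting the utterances above together, the participant's side of the setup can be sketched as a single contract. This is an illustrative sketch: the When statements are the ones quoted above, while the Given/Then scaffolding and the identity name are our assumptions and may differ across Zenroom versions:

Scenario reflow
Given I am 'Alice'
When I create the reflow key
and I create the credential key
and I create the credential request
Then print the 'credential request'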
The Issuer may request additional communications held on a side channel in order to establish proof-of-possession and the authenticity of the Signer, for instance a challenge session in which a random message is demonstrably signed by the secret credential key using an El-Gamal signature [ElGamal, 1985] and verified by the public key. 3 Given I have a 'credential request' When I create the credential signature and I create the issuer public key will be executed by the Issuer to sign the credential after the proof-of-possession challenge is positive. It consists of aσ signed used the Issuer's own secret key sk after verifying the proof π s inside the received λ. 4 Given I have a 'credential signature' When I create the credentials will be executed by the Signer to aggregate one or more credential signature into the secret keys structure. The credentials consist of a σ ck that should be stored locally and secretly as it will be later required to create the zero-knowledge proof components θ and ζ that authenticate the Reflow signature against rogue-key attacks. Signature The Reflow signature consists into a multi-signature that can be indicted by anyone who selects multiple participants. It requires no interaction between participants and it results into what we call a Seal collecting all the signatures. Both the credential to sign and the signatures in the reflow seal are non traceable. The credentials are zero-knowledge proof of possessions bound to the seal identifier; the public keys of participants are blinded and known only by the one who indicts the signature, who may or may not be among participants. The signatures are non-interactive and made in a multi-party computation setup: any participant with a credential to sign can do so without interacting with others, just by adding the reflow signature to the reflow seal. Every signature in a seal produces a fingerprint unique to the seal and the signer consisting in a ζ that is also non traceable. Using fingerprints a check may be enforced to avoid double signing and invalidate the reflow seal. When a seal is created a session is opened for multiple participants to add their signature, anyone opening such a process needs to have an array of public keys of all desired participants. When a seal is signed there are two different steps to be performed: one by the participant publishing the signature and one by anyone aggregating the signature to the seal. This is necessary since we are in a multi-party computation session where each step forward in the process can be performed independently and without requiring anyone to disclose any secret. At this point the seal may be marked as closed as the session is completed succesfully and all correct signatures have been collected without duplicates. In case this optional status will be attributed to the seal then is also possible to remove the fingerprints array from it by reducing its size. Material Passport The Reflow material passport consists in a REA model track-and-trace [Laurier and Poels, 2012] listing. It aggregates Reflow seals by means of collecting their signed UID (unique identifier). The list of fingerprints in a material passport is not to be confused with that of a seal: it is the list of all UID being traced in one seal, which adds its own UID resulting from the hash to point of the new message. 
Material Passport
The Reflow material passport consists of a REA-model track-and-trace [Laurier and Poels, 2012] listing. It aggregates Reflow seals by collecting their signed UIDs (unique identifiers). The list of fingerprints in a material passport is not to be confused with that of a seal: it is the list of all the UIDs being traced in one seal, to which the seal adds its own UID, resulting from the hash-to-point of the new message.

The complexity of this implementation requires the adoption of a small ontology, the ValueFlows vocabulary [Foster et al., 2017], which briefly distinguishes the roles of the nodes represented and organizes their interaction. A seal may still be signed by one or more participants, which are the Agents in the ValueFlows REA model. Agents may sign a new Event or Process alone or together with more Agents. All signatures remain blinded and untraceable, satisfying Reflow's privacy requirements, while the UIDs composing the material passport may be listed in a fingerprint section to authenticate the track-and-trace graph and perform deeper verification of all nodes.

What follows is a representation of the output, showing how a material passport is formatted, keeping the same variable names used in the mathematical formulas and indicating the type of the members ("ecp" stands for elliptic curve point, and a 2 is added to indicate those on a twisted curve) and their size in bytes. As hinted by the optional presence of a list of fingerprints, as well as the optional record of the Agent's credential proof inside the material passport, this implementation is extremely flexible and opens up a range of possible solutions in the field of privacy-enhanced graph authentication, allowing import and export operations between federated graph databases, also considering that each data silo may be an Issuer, publicly identified and granting agent credentials.

The verification of a material passport implicitly verifies that the hash-to-point of the object's contents results in the same Reflow identity, then executes a cryptographic verification that the given material passport is a valid signature of that identity:

Listing 10: Verify a material passport
Scenario reflow
Given I have a string dictionary named EconomicEvent
and I have a issuer public key in The Authority
and I have a material passport
When I verify the material passport of EconomicEvent
Then print the string Valid Event material passport

To avoid confusion, it is important to keep in mind that identities are UIDs and fingerprints are zetas. The simple signature and verification of a single object is not enough to implement a material passport: we need to authenticate its track and trace, eventually validating the whole complex history of transformations that led to the current object. In order to do so, our implementation exploits the aggregation property of elliptic curve points by summing all the object identities present in the track and trace. Following the ValueFlows logic, every Event or Process identity then results from the sum of all the identities of the previous Agents, Events and Processes. By recalculating this sum and checking its signature, one can authenticate the integrity of the track and trace. For example, given an array of seal objects found in the material passports of a track-and-trace result of one level of depth, we can calculate a new identity using aggregation (sum value) with the following Zencode statements:

Listing 12: Aggregate reflow seal identity
Given I have a reflow seal array named Seals
When I create the sum value identity for dictionaries in Seals
Then print the sum value

This outputs an elliptic curve point corresponding to the sum of all the identity elements found in the "Seals" array. The fact that the verification applies to a sum that cascades down, aggregating further depth levels of parent objects, allows the track-and-trace implementation to be modularized, making its depth configurable.
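The cascading sum described above can also be sketched directly (same py_ecc and hash-to-point stand-ins as in Section 2; a passport is consistent if the recomputed sum of the parent UIDs plus the node's own UID matches the stored identity; all node names are hypothetical):

import hashlib

from py_ecc.bls12_381 import G1, add, multiply, curve_order

def hash_to_point(msg: bytes):  # toy stand-in for hashtopoint, as above
    return multiply(G1, int.from_bytes(hashlib.sha256(msg).digest(), "big") % curve_order)

def passport_identity(parent_uids, contents: bytes):
    """Identity of a node: its own UID summed with all parent UIDs."""
    acc = hash_to_point(contents)       # UID of the new Event/Process
    for uid in parent_uids:
        acc = add(acc, uid)             # aggregate one level of the graph
    return acc

# One level of depth: two parents feed one Event; verification recomputes
# the sum and compares it with the identity stored in the material passport.
parents = [hash_to_point(b"Agent:alice"), hash_to_point(b"Process:milling")]
stored = passport_identity(parents, b"Event:delivery")
assert stored == passport_identity(parents, b"Event:delivery")
print("material passport identity verified")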
Evaluation
The goal of this section is to produce benchmarks of the implementation of the cryptographic flow in realistic conditions of use, that is, in a way similar to how a software solution would use the software. Rather than testing single algorithms in a sandbox or a profiler, we tested Zenroom scripts, written in Zencode, that also include loading and parsing input data from streams or from the file system, as well as producing output as a deterministically formed JSON object.

Platforms
The three target platforms used for the benchmarks were:
• X86-64 on an Intel(R) Core(TM) i5-5300U CPU @ 2.30GHz, running Ubuntu 18.04 (64 bit);
• ARM 32bit on a Raspberry Pi 4 board, running Raspberry Pi OS (32bit, kernel version 5.10);
• ARM 32bit on a Raspberry Pi 0 board, running Raspberry Pi OS (32bit, kernel version 5.10).

The X86-64 machine is a Lenovo X250 laptop with 8GB of RAM. The Raspberry Pi 0 and 4 were chosen as benchmarking platforms because of their similarity to, respectively, very low-cost IoT devices (5 USD) and very low-cost mobile phones (sub 100 USD).

Builds
Being a self-contained application written in C, Zenroom can be built for several CPU architectures and operating systems, both as a command line interface (CLI) application and as a library. The configurations chosen for this benchmark are:
• Two Zenroom CLI binaries, compiled using the GCC toolchain into native ELF binaries, once for the X86-64 platform and once for ARMv7 32bit. We refer to these builds as Zenroom CLI.
• One mixed library, trans-compiled to WASM using the Emscripten toolchain [Zakai, 2011], then built into an NPM package along with a JavaScript wrapper. After trans-compilation, the WASM library is converted to base64 and embedded inside the JavaScript wrapper, which unpacks it at run-time. The library runs in browsers (using the native WASM support currently present in Chromium/Chrome, Firefox and Safari) as well as in Node.js-based applications. We refer to this build as Zenroom WASM.

CLI and WASM use cases
The use case for Zenroom CLI is typically a server-side application or micro-service. The WASM library can run in the browser, in the front end of a web application, in a server-side application (running on Node.js) or in a mobile application. Zenroom can also be built as a native library for Android and iOS (on ARM 32bit or X86), but no benchmarks of the Reflow cryptographic flow have been performed using Zenroom as a native library. In previous benchmarks we noted similar performances for the native CLI and native library versions of Zenroom; we therefore assume that performances would be comparable between the CLI builds we are using and the native library builds.

Testing
Benchmarking Zenroom CLI and Zenroom WASM required two different tools. Both tools performed chained execution of the Zencode scripts simulating the whole cryptographic flow. Furthermore, the tools allowed configuring the amount of participants in the flow and repeating the run of selected scripts in order to gather more data.
• The testing of Zenroom CLI was executed using a single, self-contained bash script. The script outputs the duration of the execution of each Zenroom script, along with the memory usage and the size of the output data, in a CSV-formatted summary. The script also saves both the data and the scripts to files for quality-control reference.
• The testing of Zenroom WASM required building a JavaScript application running on Node.js.
The application outputs the duration of the execution of each Zenroom script along with the memory usage; it calculates averages and returns the output in a JSON-formatted summary.

Benchmarks for the cryptographic flow were run on both the CLI and the WASM builds of Zenroom, for each platform, for a total of six different data collections.

Scripts description
Our benchmarks provide measurements for all the steps in the signature flow: for each of them we report the time of execution (expressed in seconds) and the size of the output (expressed in bytes) in different conditions, with a progressive number of participants (5, 10, 50, 100, 1000 signatures). The Zencode scripts used in the flow and for benchmarking can be divided into three categories:
• Scripts with variable execution times that can be executed by anyone, marked with (A): they include the creation of the Reflow seal (session start), the aggregation of the signatures (collect sign) and the verification of the signatures on the Reflow seal. Their duration is proportional to the amount of participants and can differ greatly. These scripts are the most calculation-intensive and are typically expected to run server-side. The (A) stands for "Anyone", as they don't need an identity or a keypair to run.
• Scripts executed by each participant, marked with (P): they include the participant's setup and signing; their duration is not correlated with the amount of participants. These scripts are typically expected to run in the browser or on mobile devices.
• Scripts executed by the issuer, marked with (I): they include the issuer's setup and signing; their duration is not correlated with the amount of participants. These scripts are typically expected to run server-side.

Each script was executed 50 times, on each platform and in each configuration, and the average execution time was extracted.

Findings
• The execution time of the scripts in the (A) group grows linearly with the amount of participants.
• On an X86-64 machine, the benchmarks executed on Zenroom CLI have a duration comparable with their counterparts run on WASM. On the other hand, the execution times differ greatly on the ARM 32bit based Raspberry Pi 0 and 4 machines when comparing Zenroom CLI and Zenroom WASM executions. While investigating the cause of the differences is beyond the purpose of this paper, we speculate that this difference signals different levels of optimization between the WASM interpreters built for X86-64 on the one hand and for ARM 32bit on the other.
• The most numerically outstanding benchmark is the execution time of the issuer's keygen script on the Raspberry Pi 0, in comparison with the other platforms. Once more, investigating the reason for this discrepancy is beyond the scope of this paper; we nevertheless speculate that the ECP2 pairing performed in the script may justify the large difference.

Benchmark conclusions
The ultimate goal of these benchmarks is to assess which parts of the cryptographic flow are suitable for execution on each platform, both in terms of execution times and in terms of RAM usage. Concerning RAM usage, the highest value recorded is in the 5-megabyte range, which we consider to be well within the acceptable range for any of the platforms used in the benchmarks. Future analysis will investigate the possibility of executing the less resource-intensive parts of the flow on ultra-low-power chips such as the Cortex M4, for which a port of Zenroom is available.
Concerning the execution time, if we set the longest acceptable execution time for any script of the flow at 1 second, we find that:
• Every script of the (P) and (I) groups has acceptable performance on any platform when using Zenroom compiled as CLI.
• With a few exceptions, we find a similar result when running scripts of the (P) and (I) groups with Zenroom compiled to WASM. With this setup, only three scripts run over the 1 second limit, and only on the Raspberry Pi 0.
• The scripts in the (A) group, when using Zenroom CLI, can be executed within one second for a number of participants between 100 and 500 on the X86 machine and for a similar number on a Raspberry Pi 4, while on a Raspberry Pi 0 the limit is between 10 and 50 participants.
• The scripts in the (A) group, when using Zenroom WASM, can be executed within one second for a number of participants between 100 and 500 on the X86 machine and up to 100 on a Raspberry Pi 4, while on a Raspberry Pi 0 the limit is between 2 and 5 participants.

Benchmarks

Following are the benchmark results for the (A) group of scripts (whose execution times change with the number of participants), running as CLI binaries, grouped by platform. [Tables: Zenroom CLI X86-64; Zenroom CLI - all platforms.] Following are the benchmark results for the (P) and (I) groups of scripts (whose execution times don't change with the number of participants), with a comparison of all platforms, running as CLI binaries. Following are the benchmark results for the (A) group of scripts, running as WASM libraries, grouped by platform. [Tables: Zenroom WASM - all platforms.] Following are the benchmark results for the (P) and (I) groups of scripts, with a comparison of all platforms, running as WASM libraries.

RAM usage

The benchmarking of RAM was performed on Zenroom CLI only. RAM usage showed negligible differences between the X86 and the ARM builds; the tables show the results of the X86 benchmarks.

Conclusion

This article and the referenced free and open source implementation made in Zencode provide an easy, flexible and performant way to realize anonymous signatures made by zero-knowledge-proof credential holders and aggregated in multi-party-computation decentralized environments. The privacy-preserving features of Reflow signatures are protected from rogue-key attacks by enforcing credential issuance, at the price of requiring one or more authorities. From this approach comes the feature of generating signature "fingerprints" that are contextual and become traceable only with the consensus of participants (for instance by replicating the signature). These fingerprints are also useful to avoid double-signing and to strengthen the track-and-trace features of the most advanced material-passport use-case scenario. The use cases we observed have not required the adoption of multiple authorities; however, it is possible to enhance our scheme by porting a threshold credential issuance mechanism based on Lagrange interpolation, as described in the Coconut paper [Sonnino et al., 2018]. Generally speaking, the Reflow cryptographic system can be adapted to authenticate many sorts of graph data sets; this aspect goes well beyond the state of the art, and its application may prove useful to an upcoming generation of non-linear distributed ledger technologies.
Future directions

This cryptographic scheme is named after the Reflow project and currently follows its path to serve the use case of circular-economy projects, for which the integrity of accounting, the portability of data and the privacy of users are of paramount importance. We are proud of the priority this cryptography research gives to use cases dealing with environmental responsibility, and we will carry this ethos through its future developments. Standardisation is an important step towards establishing federated and decentralised use of Reflow across a variety of contexts, so we have strong incentives to integrate this signature into W3C standards, as well as to submit it to the attention of the Object Management Group. As hinted in the overview, the discourse around disposable identities is also very interesting and deserves further interaction, as it may open up more privacy-preserving possibilities of development in the fields of health care and transportation, as well as in public services and in accounting traceability for KYC/AML practices. The possibility to aggregate Reflow signatures may also be exploited to serve need-to-know schemes, where different actors may be required to sign different parts of a document without knowing the whole until its final publication, a scenario present in peer-review processes that may improve their impartiality. Since at the core of the Reflow project is the development of a federated database of information serving the piloted circular-economy use cases, the main deployment of the Reflow crypto model is inside the Zenpub software component, which will refine its interaction with GUI and dashboard components through a GraphQL interface and the ActivityPub event protocol, as well as leverage the federating capabilities of this scheme into a stable and replicable implementation. The developments of next-generation DLT applications such as Holochain [Harris-Braun et al., 2018] and Hashgraph [Green, 2019] are also very interesting; in particular, Holochain is known to have the ValueFlows REA approach in course of implementation and may soon provide a platform on which to further distribute the graph data beyond the federated server provision currently implemented in ZenPub.

Acknowledgments

The development and demonstration of the Reflow signature scheme and the material passport has been partially funded by the European project "REFLOW", referenced as H2020-EU.3.5.4 with grant nr. 820937; the multi-party signature implementation has been partially funded by Riddle & Code GmbH as a private commission. We are grateful to all our colleagues at Dyne.org for the help and insights they have shared in contributing to this work, in particular to Puria Nafisi Azizi, Danilo Spinella, Adam Burns and Ivan Minutillo; to Thomas Fuerstner for sharing his visions and passion for crypto development with us; and to prof. Massimiliano Sala and Giancarlo Rinaldo for facilitating Alberto's stage program with the dept. of Mathematics at the University of Trento. Our gratitude goes also to our former colleagues in the DECODE project, George Danezis and Alberto Sonnino, for creating Coconut, a wonderful crypto scheme on which Reflow is grafted. Last but not least, we are grateful to the whole REFLOW project consortium, represented by prof. Cristiana Parisi as principal investigator.
Superconducting UCN Polarizer for a New EDM Spectrometer. A test experiment has shown that the number of ultracold neutrons (UCN) of one polarization state transmitted through a 100 μm Al foil placed in a 5 T magnetic field is greater by a factor of 3.8. The increased transmission is due to the higher velocity of the UCN passing through the foil.

Introduction

There are two possible ways to obtain polarized UCN for the electric dipole moment (EDM) experiment: either one uses magnetized ferromagnetic layers on thin foils [1], or one uses strong magnetic fields of the order of 5 T. We propose to use a superconducting (SC) solenoid polarizer on the fill lines of the EDM spectrometer. The advantages of such a choice are: 1) the possibility to obtain fully polarized UCN, as compared with about 85% polarization using magnetized foils; 2) the possibility to place the vacuum separation foil in the high-magnetic-field region. Separation foils are needed between the UCN source volume containing the solid deuterium and the EDM volume with the high voltage. A test experiment has recently shown that the number of UCN of one polarization state transmitted through a 100 µm Al foil placed in a 5 T magnetic field is greater by a factor of 3.8. The increased transmission is due to the higher velocity of the UCN passing through the foil. The superconducting solenoids will be equipped with ARMCO return yokes in order to suppress the stray fields that might influence the EDM measurements. Because the magnetic field of the SC magnets, once switched on, will be very stable over typical measurement times, its influence on the EDM experiment is only static and can be compensated for. In our test experiment on foil transmission at the Institut Max von Laue-Paul Langevin (ILL), the possibility to carry out the present RAL-Sussex-ILL EDM experiment with an unshielded SC solenoid nearby (about 4 m distance) has been demonstrated. As calculations show, a cylindrical magnetic shield made of ARMCO with a diameter of 700 mm and a thickness of 100 mm can suppress the magnetic field in the EDM spectrometer down to 50 µT.

The Scheme of the Experiment

The scheme of the experiment, which was recently performed at ILL, Grenoble, is shown in Fig. 1. UCN from the ILL turbine filled a Be-coated gravitational spectrometer volume. After closing shutter 1, a well-defined UCN spectrum was formed in the spectrometer over 100 s by means of a moveable absorber. The spectrum was very similar to the one that will be obtained in the UCN storage vessel of the PSI UCN source, which is presently under construction. When shutter 2 was opened, UCN were counted in the UCN detector. Various cases were studied: with and without an Al foil, and with and without a magnetic field. All four possible cases were studied with the UCN absorber at various positions in order to obtain the energy dependence of the transmission. Figure 2 shows the differential UCN spectrum, as seen by the UCN detector, with the magnetic field switched on to 5 T, together with the differential UCN spectrum with the magnetic field switched off, divided by a factor of two because the beam is unpolarized, with two spin components. For a 100 µm thick Al foil, the integral number of polarized UCN with the SC magnet switched on is 3.8 times greater than the integral number of UCN with one spin component with the magnet switched off.
UCN of one polarization component are accelerated in the magnetic field gradient and have a longitudinal velocity of more than 7.6 m/s at the foil position for the 5 T field (a back-of-the-envelope estimate follows the figure captions below). As a result, these neutrons penetrate the Al potential barrier more easily and pass through the foil with considerably smaller losses. Figure 3 shows the transmission probability as a function of UCN energy outside the solenoid. This probability is determined by the ratio of count rates with and without the foil. In the case where the magnet is switched on, the transmission depends only weakly on the spectrum. For the 100 µm Al foil, the transmission is larger than 80%. For a 50 µm Zr foil (which can be used for separating the vacuum in the same way as 100 µm Al), the transmission is about 90%.

Experimental Results

Thus the use of a SC solenoid allows one to obtain polarized UCN and to increase the density of polarized UCN by a factor of 3.8. In cases where polarization of the UCN is not needed in the experiment, the gain in UCN density is about a factor of two; this is also important for experiments with unpolarized UCN.

[Fig. 2 legend: one curve is the case with the magnetic field switched on to 5 T; the other is the case with the magnetic field switched off (the counting rate is scaled by a factor of two to account for the fact that without the field both polarization components are present, while with the solenoid switched on only one polarization component is transmitted). Fig. 3. Absolute transmission of foils with and without the magnetic field as a function of UCN energy (height): Al foil (100 µm) with the field switched on; Al foil (100 µm) with the field switched off; Zr foil (50 µm) with the field switched on.]
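As a plausibility check on the quoted velocity (not part of the original text; standard constants are assumed: neutron magnetic moment μ_n ≈ 60.3 neV/T and neutron mass m_n ≈ 1.675 × 10⁻²⁷ kg), the Zeeman energy gained by the favoured spin component in the 5 T field converts to just the quoted longitudinal velocity:

$$\Delta E = \mu_n B \approx 60.3\ \mathrm{neV/T} \times 5\ \mathrm{T} \approx 302\ \mathrm{neV}, \qquad v = \sqrt{\frac{2\,\Delta E}{m_n}} \approx 7.6\ \mathrm{m/s}.$$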
Clinical and Behavioral Correlates of Achieving and Maintaining Glycemic Targets in an Underserved Population With Type 2 Diabetes

OBJECTIVE—In an underserved Latino area, we established a disease-management program and proved its effectiveness. However, many patients still remained above target. This study was designed to evaluate which factors are associated with reaching program goals. RESEARCH DESIGN AND METHODS—This was a randomized, prospective, observational study in which patients enrolled in our program were followed for 2 years, with outcomes, measures, and questionnaires assessed at baseline and at 6, 12, and 24 months. RESULTS—Overall, A1C fell by 1%. Adherence to medication was the strongest predictor of reaching the target A1C of <8%; baseline A1C was also predictive. Knowledge scores increased in those who reached target, but the measures of self-efficacy and empowerment did not change for either group. CONCLUSIONS—Diabetes management is effective in a lower-income Latino population. However, adherence was suboptimal even when medications were provided on-site for free. Further research into barriers associated with medication adherence is needed.

In the U.S., Latino individuals have a high prevalence of diabetes and are often poor and uninsured (1). Research needs to be done to develop cost-effective, ethnically appropriate diabetes programs for these vulnerable individuals. We implemented a diabetes-management program in a comprehensive health center serving low-income Latino patients in east Los Angeles. A previous study indicated that our program improves short-term outcomes but that the improvement is often not sustained (2). This study was conducted to identify the correlates of success in our program.

RESEARCH DESIGN AND METHODS - This study was a 2-year, one-center, randomized comparative trial. Consenting patients entering our program were randomized to either an episodic model of care or a continuous model of care. Each patient completed an Institutional Review Board-approved informed consent form. All subjects underwent the same first 6 months of care in accordance with our protocols. After 6 months, those in the episodic group were to be discharged and returned annually for an evaluation. Those in the continuous model were seen at least every 3 months for the duration of 2 years. Patients were randomized and frequency matched based on age, diabetes duration, sex, BMI, and A1C. Routine diabetes clinical and laboratory measurements and questionnaires (the Diabetes Knowledge Test [3], the Summary of Diabetes Self-Care Activities questionnaire [4], the Diabetes Empowerment Scale [5], and the Problem Areas in Diabetes questionnaire [6]) were completed at baseline and at 6, 12, and 24 months. Adherence to prescribed drugs was measured as a medication possession ratio (MPR), which represents the proportion of days on which the patient had the medication available (7); a computational sketch of this ratio follows this subsection. The design of our program has previously been described (2). Most patients have no health insurance and live below the federal poverty level. Care managers (nurses and nurse practitioners) provide care following protocols and are supervised by a diabetologist. The program attempts global risk reduction in a culturally appropriate context. Our A1C target is <8%, with recommendations for reducing the A1C further in primary care. Initially, we followed patients indefinitely, but resources mandated shortening the program's duration to 6 months, with a possible extension based on the judgment of the care manager.
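The MPR defined above lends itself to a simple computation from pharmacy dispensing records. The sketch below is illustrative only: the record layout and dates are assumptions, not the study's data; the resulting ratio is the key regressor in the logistic model described under Statistical analysis below.

```python
# Hedged sketch of a medication possession ratio (MPR): the fraction of
# days in an observation window on which dispensed supply was available.
# Record layout and dates are hypothetical.
from datetime import date

def mpr(fills: list[tuple[date, int]], start: date, end: date) -> float:
    """Return the fraction of days in [start, end] covered by supply."""
    covered = set()
    for fill_date, days_supply in fills:
        for offset in range(days_supply):
            day = fill_date.toordinal() + offset
            if start.toordinal() <= day <= end.toordinal():
                covered.add(day)
    period_days = (end - start).days + 1
    return len(covered) / period_days

# Two 30-day metformin fills over a 90-day window -> MPR = 60/90 = 0.67.
fills = [(date(2007, 1, 1), 30), (date(2007, 2, 15), 30)]
print(round(mpr(fills, date(2007, 1, 1), date(2007, 3, 31)), 2))
```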
Statistical analysis

Data are presented as means ± SEM. Changes in each variable from baseline at each follow-up time were compared between the two groups using a two-way Student's t test. For all behavioral response measures, the scores of responses for each questionnaire were compared between the two groups at baseline and at each visit using Wilcoxon's rank-sum test. Data were analyzed using GraphPad Instat 3, version 3.0b. We conducted a multivariate logistic regression analysis on patients treated with metformin (the most frequently used medication) to determine the factors associated with a patient's probability of achieving an A1C <8% at the last follow-up evaluation. The key regressor, adherence to metformin, was defined as the fraction of time metformin was available to the patient. Other covariates included a set of demographics (sex, age, education, and country of origin) and pretreatment conditions (duration of diabetes at baseline, baseline A1C, and drug treatment at baseline). The model was analyzed using Stata Statistical Software, version 10.1.

RESULTS - Of 211 eligible patients, 162 were enrolled, with 79 in the control group and 83 in the episodic group. In the first year, 129 patients (79.7%) completed the program, and 100 (61.7%) had data available during the second year. Baseline mean A1C values were 7.9 ± 1.8% (control) and 7.7 ± 1.6% (episodic) and fell by ~1% at 6 and 12 months in both groups. However, the total number of visits was the same in the control and episodic groups, erasing the separation between the models. No relationship was found between the number of visits and the change in A1C. Subsequent analyses compared those who reached the A1C target of <8% with those who did not. Ninety-six subjects (62%) achieved the target of <8%, and 59 (38%) were above the target. Patients in the group with <8% A1C showed significant improvements on the Summary of Diabetes Self-Care Activities questionnaire in diet and foot care, as well as improvements on the Diabetes Knowledge Test. These variables did not improve in the ≥8% A1C group. No significant changes were seen on the Diabetes Empowerment Scale or the Problem Areas in Diabetes questionnaire in either group. Regression results are presented in Table 1. Factors that are statistically significant include adherence to metformin and A1C at baseline, with odds ratios of 19.31 (95% CI 2.16-172.60; P = 0.01) and 0.51 (0.36-0.73; P = 0.00), respectively. The results suggest that the probability of reaching the <8% A1C target increases with higher adherence to metformin and lower baseline A1C levels. Based on both pharmacy data and patient self-report, rates of self-monitoring of blood glucose did not differ between groups (~30 strips obtained and reportedly used per month). No differences were seen in visits to walk-in clinics or the emergency department, or in rates of inpatient hospitalization.

CONCLUSIONS - This study showed a 1% sustained reduction in A1C through diabetes management. Adherence is a statistically significant (P = 0.01) and robust predictor of reaching the <8% target at follow-up. Other studies have found a similar relationship between medication adherence and outcomes in individuals treated for diabetes (8,9). Unlike most diabetic patients in most settings, however, our patients are able to obtain their prescriptions for free in the same building where they are seen for their diabetes care.
Limitations of our study include the failure of the initial episodic model to be implemented, the use of medication attainment as a measure of medication adherence (less accurate than actual pill counts but more accurate than patient self-report) (10), and a high dropout rate in the second year. The surveys used did not capture the psychosocial stressors of our patients. The need to study barriers to self-management has been discussed by others (11). Additionally, many of our patients experience food insecurity (12) and lack access to the recommended healthy foods (13). These considerations make more traditional questions about lifestyle and stress more difficult to interpret. In conclusion, medication adherence was a strong predictor of maintaining an A1C level below target. Incorporating new approaches to enhance adherence (14,15) may further improve outcomes in this population.
The Qa-1 antigenic system. Relation of Qa-1 phenotypes to lymphocyte sets, mitogen responses, and immune functions. The antiserum (B6 × A-Tlab) anti-A (Tlaa) defines several TL antigens expressed exclusively on thymocytes. When reacted with peripheral lymphocytes, the same antiserum defines another antigenic system, provisionally termed Qa-1. The genotypic disparity distinguishing the recipients and donors in this immunization comprises a section of chromosome 17 extending from a crossover point between H-2D and Tla to a presently unmarked point beyond Tla. Therefore, although Qa-1 may constitute a single cell surface component, it is equally probable that the Qa-1 system defines two or more cell surface components determined by genes in this region, each of which may be expressed on a different cell set. Cytotoxicity assays indicate that Qa-1 antigen is expressed on Lyt-1 cells and Lyt-123 cells, and may serve to subclassify these two cell sets; it is not known whether Qa-1+ cells may occur within the small Lyt-23 set. There may also be a cell set with the phenotype Thy-1−:Qa-1+. Another distinctive feature of the Qa-1 system is the characteristic profile of responses to mitogens exhibited by spleen cell populations from which Qa-1+ cells have been eliminated; in a conventional assay of [3H]thymidine incorporation, the response to lipopolysaccharide was essentially unchanged, the response to phytohemagglutinin M (PHA-M) was virtually abolished, and the response to concanavalin A (Con A) was reduced by 40%. The third distinctive feature of the Qa-1 system is the characteristic profile of changes which elimination of Qa-1+ cells produces in tests of immune function in vitro: (a) proliferation, measured by [3H]thymidine incorporation, in mixed lymphocyte culture (MLC) with major histocompatibility complex (MHC)-incompatible stimulator cells, was not affected. (b) In tests of cell-mediated cytotoxicity (CMC) of MHC-incompatible target cells, neither the generation nor the effector functions of cytotoxic lymphocytes was affected, implying that Lyt-23 prekiller and killer cells are Qa-1−. (c) Primary and secondary responses to SRBC were considerably augmented, suggesting that Qa-1+ cells may be responsible for suppression in this test system. (d) Accordingly, the suppression of the anti-sheep erythrocyte (SRBC) response normally engendered in spleen cells by culture with SRBC was profoundly reduced by elimination of Qa-1+ cells, either before or after culture. (e) The suppression of the anti-SRBC response normally engendered in spleen cells cultured with Con A was reduced by removal of Qa-1+ cells before but not after culture with Con A. Although analysis is as yet far from complete, the Qa-1 system should already be of considerable value because it distinguishes a population of lymphocytes that is not defined by any other antigenic system, according to three criteria: (a) representation of Qa-1 cells among T-cell sets defined by Lyt phenotypes, (b) the profile of responses to mitogens exhibited by lymphocyte populations depleted of Qa-1+ cells, and (c) the profile of immune responses of lymphocyte populations depleted of Qa-1+ cells.

Complement. This study was conducted with pools of rabbit serum that had been rigorously screened and selected for high complement levels combined with low inherent cytotoxicity; usually four to five rabbits out of 30 were chosen, as described in reference (2), which also gives details of the cytotoxicity assays for Qa-1 and the calculation of cytotoxicity indices.
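The cytotoxicity indices mentioned here follow the standard specific-release form that is restated in the Methods below; a minimal computational sketch (the cpm values are hypothetical) is:

```python
# Hedged sketch of the percent specific release index:
# (experimental - spontaneous) / (maximum - spontaneous) * 100.
def specific_release(experimental: float, spontaneous: float,
                     maximum: float) -> float:
    """Percent specific release from mean cpm counts."""
    return (experimental - spontaneous) / (maximum - spontaneous) * 100.0

# Hypothetical counts giving 50% specific lysis.
print(specific_release(experimental=1500.0, spontaneous=400.0, maximum=2600.0))
```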
Antibody Responses In Vitro

PRIMARY RESPONSE. After treatment with α-Qa-1 (+C) or normal mouse serum (NMS) (+C), 10 × 10^6 viable spleen cells were incubated for 5 days with 3 × 10^6 sheep erythrocytes (SRBC) (Colorado Serum Co., Denver, Colo.) according to Mishell and Dutton (5). In some cases, a modification of this procedure was used, allowing efficient stimulation of 5 × 10^6 cells (6). Duplicate cultures were harvested on days 4 and 5 and assayed for direct plaque-forming cells by a slide modification of the Jerne plaque assay (6).

SECONDARY RESPONSE. Mice were primed by intraperitoneal injection of 2 × 10^8 SRBC and pertussis vaccine containing 3 × 10~ bacteria (Eli Lilly and Co., Indianapolis, Ind.). Eight days later, spleen cell suspensions were treated with either α-Qa-1 (+C) or NMS (+C), and 4 × 10~ remaining viable cells were incubated for 5 days with 3 × 10^6 SRBC. Duplicate cultures were harvested on days 3, 4, and 5 and assayed for direct and indirect plaque-forming cells (PFC) as described above. Peak responses were obtained on day 4. Cell-mediated cytotoxicity (CMC) was measured on day 5 by a 4-h ⁵¹Cr release assay.

[Abbreviations used in this paper: C, complement; CMC, cell-mediated cytotoxicity; Con A, concanavalin A; CTL, cytotoxic lymphocyte; FCS, fetal calf serum; LNC, lymph node cell; LPS, lipopolysaccharide from Escherichia coli 0127:B8; MHC, major histocompatibility complex; MLC, mixed lymphocyte culture; NMS, normal mouse serum; PFC, plaque-forming cell; PHA-M, phytohemagglutinin M; SRBC, sheep erythrocytes; Ts, T-suppressive activity.]

Percent specific release = (mean cpm experimental − mean cpm spontaneous release) / (mean cpm maximum release − mean cpm spontaneous release) × 100. Spontaneous release was determined by incubating target cells in medium alone; maximum release was determined by detergent lysis (5% Triton, New England Nuclear, Boston, Mass.) of target cells.

Distribution of Qa-1+ Cells within the Lyt Sets. Whether or not each Lyt set contained Qa-1+ cells was determined by the following procedure: Lyt and Qa-1 antisera were used in combination with each other (e.g., α-Qa-1 + α-Lyt-1.2 or α-Qa-1 + α-Lyt-2.2 or α-Lyt-1.2 + α-Lyt-2.2) in two-stage cytotoxic tests. α-Qa-1 was used at a final dilution of 1:50, α-Lyt-1.2 at 1:40, and α-Lyt-2.2 at 1:10. The Ly antisera were prepared as described previously (3,4). To increase the proportion of Ly-positive cells, lymph node cells were treated with α-Ig antisera (provided by Dr. U. Hammerling, Sloan-Kettering Institute) and complement as described (2), resulting in 70-80% Thy-1+ cells. Cytotoxic indices in Table V are calculated as a percent of Thy-1+ cells.

Qa-1 Phenotypes of Cells Involved in Proliferative Responses to Mitogens In Vitro. For this purpose, the proliferative responses of B6-Tlaa spleen cells preselected with α-Qa-1 plus C (the Qa-1− population) were compared with the responses of equal numbers of unselected B6-Tlaa spleen cells (NMS + C). Fig. 1 shows that the Qa-1− population has a distinctive profile of responses to the three mitogens Con A, PHA-M, and LPS, implying that α-Qa-1 defines a population of cells not identified by any other antigenic system. Evidently Qa-1+ cells are not essential for the response to LPS (unaffected by α-Qa-1 preselection) but are mainly responsible for the response to PHA-M (virtually abolished by α-Qa-1 preselection), either by proliferating themselves or possibly by inducing proliferation of Qa-1− cells. Preselection with α-Qa-1 reduced the Con A response by 40%.
This is in keeping with the broader reactivity of Con A as compared with PHA-M and suggests that the Con A-responsive population includes both Qa-1+ and Qa-1− members. An indication that the Qa-1+ cells which contribute to the Con A response do so directly, by proliferating, comes from other tests with lymph node cells (LNC). Before stimulation, roughly 30% of B6-Tlaa LNC are Qa-1+; after activation by Con A, roughly 50% of the blast cells are Qa-1+ (and 80% Thy-1+).

Qa-1 Phenotypes of Cells Involved in Reaction to MHC-Incompatible Cells

PROLIFERATION (MLC). Removal of Thy-1+ cells markedly affected the tempo of the MLC response to irradiated MHC-incompatible stimulator cells, and their [3H]thymidine incorporation was substantially reduced at the time when control (NMS + C) responder cells showed maximal incorporation (48-72 h after initiation of culture). This suggested that elimination of a proportion of T cells could best be assessed in the MLC assay by a delay in the tempo of the response. Fig. 2 shows that the tempo and degree of MLC activation in cultures lacking Qa-1+ cells were virtually identical to those of cultures of control unselected responder cells (NMS + C). The experiment shown in Fig. 2 is one of six such experiments performed, all of which gave essentially similar results. Evidently Qa-1+ cells need not be present for the proliferative MLC response to MHC-incompatible cells. In fact, in one test elimination of Qa-1+ cells enhanced incorporation during the 3rd day of stimulation. Although Qa-1+ cells are not necessary for proliferation, measured by MLC, they may nonetheless proliferate. We tested this by counting Qa-1+ cells present after MLC with unselected spleen cells. (The stimulating cells were Qa-1− and therefore could not contribute to the count of Qa-1+ cells.) After 72 h in culture, MLC-stimulated cell populations contained 41% Qa-1+ cells and 51% Thy-1+ cells, as compared with 20% Qa-1+ cells and 30% Thy-1+ cells in the unstimulated (control) populations. Thus during MLC either Qa-1+ cells normally proliferate or the phenotype of some Qa-1− cells is converted to Qa-1+.

CMC. The following experiments were designed to indicate the Qa-1 phenotypes of cells responsible for the generation and effector function of cytotoxic lymphocytes (CTL). Representative data are shown in Table I: elimination of Thy-1+ cells before MLC abolished the cytotoxic activity of the stimulated population in subsequent CMC assays, but elimination of Qa-1+ cells before MLC did not, indicating that Qa-1+ cells are not required for the generation of CTL. Elimination of Thy-1+ cells after MLC, immediately before the CMC assay, greatly reduced lytic activity. CTL generated in MLC seem relatively resistant to α-Thy-1 + C, and complete elimination of cytolytic function was not achieved in all experiments despite two cycles of exposure to α-Thy-1 + C. Elimination of Qa-1+ cells (two cycles of exposure to α-Qa-1 + C) after MLC did not reduce CMC activity, indicating that CTL, in this experimental system, are Qa-1−. In fact, these data indicate that Qa-1+ LNC may exert an inhibitory effect upon the generation of CTL. Thus major histocompatibility complex (MHC)-alloreactive Lyt-23 prekiller and killer effector cells are apparently Qa-1−.
No conclusions, however, should be drawn concerning the Qa-1 phenotype of Lyt-1+ TH cells that amplify the response of Lyt-23 allogeneic prekiller cells, because help and suppression are probably generated concomitantly (12), and thus a reduction in helper activity could be balanced by a reduction of suppression.

INFLUENCE OF Qa-1+ CELLS ON PRIMARY AND SECONDARY RESPONSES TO SRBC IN VITRO. Table II shows that the primary anti-SRBC PFC response of the Qa-1− spleen population (selected with α-Qa-1 + C), on day 5 of culture, was more than six times that of equal numbers of the unselected population (NMS + C). The response of Qa-1− spleen cells from SRBC-primed mice, on day 4 of culture, was more than double that of the unselected population. These increments in PFC values for the Qa-1− population were sufficiently high to suggest that they might be due to elimination of suppression by the missing Qa-1+ cells. This proposal was tested in the experiments reported below. (a) SRBC-induced suppressive activity: suppressor populations were added, together with SRBC, to cultures of 10^7 unprimed B6 spleen cells. As indicated in Table III, elimination of Qa-1+ cells before induction of anti-SRBC suppressor activity resulted in a substantial (>10-fold) reduction in the ability of the residual Qa-1− population to generate anti-SRBC suppression. Elimination of Qa-1+ cells after induction of anti-SRBC suppressor activity also resulted in a considerable (~10-fold) reduction of T-suppressive activity. The Qa-1 specificity of elimination in this study was confirmed in control experiments in which suppression was not abrogated in parallel tests in which B6 (Qa-1−) spleen cells were subjected to the same selection procedure with α-Qa-1 + C. (b) Con A-induced suppressive activity: under certain conditions, inclusion of Con A in SRBC-stimulated spleen cell cultures results in a substantially reduced PFC response, and this is due to suppressive effects of Thy-1+ cells (6,8,12). Such suppression was virtually eliminated if Qa-1+ cells were removed from spleen cells before culture (Table IV, A). To determine whether Qa-1+ cells were essential both for generation of T-suppressive (Ts) activity and for suppressor-effector activity, T cells (nylon wool purified) were selected with α-Qa-1 + C either before or after stimulation by Con A and were then added in graded numbers to 10^7 spleen cells.

[Table IV footnotes: * Cells of the indicated T-cell population + 5 × 10^5 B cells (selected with α-Thy-1.2 + C × 2) were incubated with 5 µg/ml Con A + 3 × 10^6 SRBC and assayed for PFC on day 5. ‡ Graded numbers of the indicated T-cell population + 10^7 unprimed spleen cells + 3 × 10^6 SRBC were cultured for 4 days. § Suppression (%) = 100 − 100 × (mean PFC count per culture / mean PFC count per control culture).]

Table IV (B) shows that although Qa-1+ cells contribute to the initiation of Ts activity by Con A (line 1 compared with line 2), Ts activity, once generated, was no longer susceptible to elimination by α-Qa-1 + C (line 1 compared with line 3). These data, taken together, indicate that Qa-1+ cells are required for generation of antigen-induced and, to a lesser extent, for optimal Con A-induced Ts activity: once generated, Con A-induced suppressive activity does not require the continuous presence of Qa-1+ cells, while SRBC-induced suppressive activity does.

Representation of Qa-1 Phenotypes Within Various T-Cell Sets (Table V). Approximately 70% of Thy-1+ cells are Qa-1+.
Since this Thy-1+:Qa-1+ set is larger than any of the Lyt sets estimated for B6 spleen and lymph node cells (33% Lyt-1; 5-10% Lyt-23; 50% Lyt-123; expressed as proportions of the Thy-1+ population [9]), Qa-1 must be expressed on more than one Lyt-defined T-cell set. To determine which sets contain Qa-1+ cells, we enumerated B6-Tlaa peripheral LNC by cytotoxicity assays with α-Qa-1, α-Lyt-1.2, and α-Lyt-2.2 sera, either alone or in combination. The combined-antiserum method indicates a small proportion of Lyt−:Qa-1+ cells, because more cells were lysed by the α-Lyt plus α-Qa-1 combinations than by any α-Lyt serum alone. It is acknowledged that lysis with two antisera may be more efficient than lysis with either antiserum alone, and that this would simulate a proportion of cells expressing only one of the two antigens. But this is unlikely to be the main explanation of the apparent Lyt−:Qa-1+ cell set, because a small proportion of Thy-1−:Qa-1+ cells is detectable in normal (2) and in LPS-stimulated populations. That issue aside, the data are shown in Table V.

Discussion

The genetic disparity involved in the immunization (B6 × A-Tlab) anti-A (Tlaa), which defines the Qa-1 system, concerns a region of chromosome 17 reaching from a crossover point between H-2D and Tla and extending to a presently unmarked point beyond Tla. It is already clear that genes in this region define several cell surface components, including the TL antigens of thymocytes, histocompatibility antigens responsible for skin graft incompatibility, and the group of serologically defined antigens collectively termed Qa. Qa itself has already been shown to encompass at least three systems, Qa-1, Qa-2, and Qa-3 (10,11), by serological analysis of Flaherty's B6.K1 and B6.K2 strains, derived from different recombinants between H-2D and Tla. Qa-2 and Qa-3 do not enter the picture as far as the present study is concerned, because B6 and A (the allele donor of the recombinant congenic B6-Tlaa strain) have so-far indistinguishable Qa-2 and Qa-3 types. It is impossible at the moment to say how many cell surface components Qa-1 may comprise, nor whether each belongs to a different program that subclassifies the Lyt and perhaps other lymphocyte sets. Although the immunization of (B6 × A-Tlab) mice is performed with the TL+ leukemia ASL1, it does not follow that ASL1 expresses all Qa-1 antigens that may be represented in the antiserum, because splenic ASL1 cell suspensions include a variety of normal A-strain cells. Although analysis is as yet far from complete, the Qa-1 system is already of considerable value because it distinguishes a population of lymphocytes that is not defined by any other antigenic system, according to three criteria: (a) representation of Qa-1 cells among T-cell sets defined by Lyt phenotypes, (b) the profile of responses to mitogens exhibited by lymphocyte populations depleted of Qa-1+ cells, and (c) the profile of immune responses of lymphocyte populations depleted of Qa-1+ cells. Mitogens, as polyclonal activators of lymphocytes, have been a valuable aid in the study of lymphocyte function and the dissection of lymphocyte sets. In this respect it is noteworthy that Qa-1+ cells, which compose part of the Lyt-1 and Lyt-123 sets (their representation in the Lyt-23 set is unknown), are essential for significant responses to PHA and are required for optimal responses to Con A.
By the criterion of SRBC PFC production by spleen cells, Con A induces both helper and suppressor functions (6,12) and has been shown to stimulate all three Lyt T-cell sets (6); whereas PHA appears to induce mainly suppression but not help, and other studies have indicated that an optimal response to PHA, unlike Con A, may require Lyt-123 cells (13). It may be that Con A can induce Lyt-123+:Qa-1+ cells to become Lyt-23+:Qa-1− suppressor cells; in the absence of Lyt-123 cells, Con A can activate populations enriched for Lyt-23 cells to express suppressive activity. Thus Con A-activated suppression of the anti-SRBC PFC response was reduced by elimination of Qa-1+ cells during the generative phase, at a time when amplifiers or precursors (or both) would most likely be needed. This was particularly evident when Con A-induced inhibition depended upon rapid generation of suppressor cells during a 5-day in vitro primary response (Table IV, A). Once generated, Con A-induced suppressor cells were insensitive to elimination of Qa-1+ cells. By contrast, Qa-1+ cells were evidently required for both generation and effector function of SRBC-induced suppression of the anti-SRBC response. This may imply that SRBC-specific Qa-1+ cells generate additional suppression during restimulation by SRBC in the assay cultures. In any event, the participation of Qa-1+ cells in the regulation of the SRBC antibody response is a remarkable feature of the Qa-1 system and is primarily reflected in the generation of a T-cell suppressor (Ts) population (14)(15)(16). This is the evident reason why elimination of Qa-1+ cells so greatly augments the primary response to SRBC in vitro. Recent experiments have shown that cells of Lyt-123:Qa-1+ phenotype can be induced by antigen-stimulated Lyt-1:Qa-1+ cells to develop substantial feedback suppressive activity (15,16). We do not know (a) whether Lyt-123:Qa-1+ cells are accessories needed to assist in optimal generation of suppressor-effectors, or give rise directly to suppressor-effectors after induction by Lyt-1:Qa-1+ cells; nor (b) whether Lyt-1 inducer cells and resting Lyt-123 cells express identical Qa-1 region gene products. The prominence of Qa-1 cells in suppression of the SRBC antibody response in PFC assays contrasts with the minimal influence of Qa-1+ cells in MLC assays, and in both the generation and effector function of cytotoxic lymphocytes active in CMC assays. Evidently alloreactive prekiller and killer cells are Lyt-23+:Qa-1−. Whether the same applies to the generation and effector function of cytotoxic lymphocytes generated against virally or chemically modified cells remains to be seen; this is of special interest because in that case Lyt-123 precursors are necessary for generation of optimal numbers of Lyt-23 cytotoxic effector cells (17).
Comparative transcriptomic profile analysis of fed-batch cultures expressing different recombinant proteins in Escherichia coli

There is a need to elucidate the product-specific features of the metabolic stress response of the host cell to the induction of recombinant protein synthesis. For this, the method of choice is transcriptomic profiling, which provides a better insight into the changes taking place in complex global metabolic networks. The transcriptomic profiles of three fed-batch cultures expressing different proteins, viz. recombinant human interferon-beta (rhIFN-β), xylanase and Green Fluorescent Protein (GFP), were compared post induction. We observed a depression in the nutrient uptake and utilization pathways which was common to all three expressed proteins. Thus glycerol transporters and genes involved in ATP synthesis as well as aerobic respiration were severely down-regulated. On the other hand, the amino acid uptake and biosynthesis genes were significantly repressed only when soluble proteins were expressed under different promoters, but not when the product was expressed as an inclusion body (IB). High-level expression under the T7 promoter (rhIFN-β and xylanase) triggered the cellular degradation machinery: the osmoprotectant, protease and mRNA degradation genes were highly up-regulated, while this trend did not hold for GFP expression under the comparatively weaker ara promoter. The design of a better host platform for recombinant protein production thus needs to take into account the specific nature of the cellular response to protein expression.

Introduction

The wide variability in the expression levels of recombinant proteins in Escherichia coli remains a major challenge for biotechnologists. While some proteins are routinely expressed at 30-40% of total cellular protein (TCP) (Joly and Swartz 1997; Kim et al. 2003; Suzuki et al. 2006), others may reach a maximum of only 5% of TCP (Kiefer et al. 2000). The use of strong promoters, removal of codon bias and media design are favored strategies for improving recombinant protein yield (Acosta-Rivero et al. 2002; Hale and Thompson 1998). It is important to note that most scale-up strategies involving high cell density cultures tend to increase biomass concentrations, and hence volumetric product concentrations, rather than the specific product yield in terms of product formed per unit biomass (Y_p/x). This yield remains an intrinsic property of the host-vector-gene combination used for expression. Improvements in host-vector systems have tended to focus on developing high-copy-number plasmids with strong, tightly regulatable promoters (Bowers et al. 2004; Jones et al. 2000; Wild and Szybalski 2004), along with protease-free and recombination-deficient strains (Meerman and Georgiou 1994; Ratelade et al. 2009). The focus has thus primarily been on enhancing the metabolic flux of the recombinant protein expression pathway, with few studies analyzing how the gene products interact with the host cell machinery to depress their own expression. It has been routinely observed that the specific growth rate of recombinant cultures declines post induction. Earlier authors interpreted this decline as a measure of the metabolic burden associated with recombinant production (Bentley et al. 1990; Seo and Bailey 1985). It was postulated that the availability of critical metabolites was reduced since they were diverted to product formation, leading to a concomitant decline in the specific growth rate (Babaeipour et al.
2007). It is therefore to be expected that the decline in growth should be most severe when expression levels are at a maximum. However, in most cases there seems to be no such correlation, since severe growth retardation is observed when some proteins are expressed in fairly low amounts (Bhattacharya et al. 2005), whereas high-level expression of other proteins causes little or no growth retardation (Vaiphei et al. 2009). The metabolic burden hypothesis is also unable to explain the large variability observed in the levels of recombinant protein yield. Recent studies on the transcriptomic profiling of recombinant cultures have improved our understanding of the nature of the cellular stress associated with over-expression of recombinant proteins (Haddadin and Harcum 2005). Global regulators are triggered in response to induction, and these in turn up- or down-regulate sets of genes involved in a range of cellular functions (Perez-Rueda and Collado-Vides 2000; Perrenoud and Sauer 2005). These include genes for central carbon metabolism (glycolysis, the Entner-Doudoroff pathway, the pentose phosphate pathway (PPP), the tricarboxylic acid (TCA) cycle, and the glyoxylate shunt (GS)), respiration, transport, anabolism, catabolism and macromolecular degradation, protein biosynthesis, cell division, stress response, and the flagellar and chemotaxis systems. This coordinated response of the host mimics many features of the heat shock, osmotic shock, oxidative stress and stringent responses (Gill et al. 2000; Kurland and Dong 1996), and it results in the decline of both growth and product formation rates. Thus transcriptomic data reveal a more complex picture of the host response, in which the cell dynamically reacts to the stress associated with recombinant protein expression. In this work we have tried to extend this analysis in two ways. Firstly, we have mimicked industrial-scale fermentation, where complex media are used to obtain a combination of high cell densities and high specific growth rates. The latter allows high specific product formation rates, and thus product yields are significantly higher in complex media. The transcriptomic profiling of such cultures could provide a more meaningful picture of cellular physiology under conditions of hyper-expression. We have also attempted to overcome the problems of monitoring cultures grown in complex media by online measurement of metabolic activity, such as OUR and CER. Secondly, we have looked at the variability in cellular stress responses as a function of the nature of the expressed protein. For this we chose three proteins, viz. rhIFN-β, xylanase and GFP, for which the bioprocess parameters for high-level expression had been previously optimized in our lab. A primary reason for choosing these three proteins was to analyse the difference in the transcriptomic profile when two soluble proteins are expressed under different expression systems, and also to see the variability in the cellular response when expression is in the form of inclusion bodies (rhIFN-β) or as a soluble protein (xylanase). In all these cases there is a large diversion of the metabolic flux towards recombinant protein synthesis, and thus, according to the 'metabolic burden' hypothesis, the cellular stress response should be similar. However, we observed significant differences in the up/down-regulation of genes, demonstrating that the cellular response is a function of the gene product and the expression system used.
Chemicals and reagents

Media and bulk chemicals were purchased from local manufacturers, Himedia, Qualigens, and Merck. Media used were LB (Luria-Bertani medium containing yeast extract 5 g, tryptone 10 g, and NaCl 10 g/L, pH 7.2) and TB (Terrific broth containing yeast extract 24 g, tryptone 12 g/L, and 0.4% glycerol, pH 7.2). IPTG (1 mM), ampicillin and chloramphenicol were from Sigma, USA. Restriction and modifying enzymes were purchased from MBI Fermentas. All other chemicals were of analytical grade and obtained from local manufacturers.

Cloning & expression of representative proteins

The rhIFN-β gene was inserted downstream of the T7 promoter in a pET22b expression vector and transformed into E.coli BL-21(DE3) cells. The rhIFN-β gene was synthesized using SOEing PCR, in which all the non-optimal codons were replaced with optimal codons. The complete xylanase gene fragment was amplified using M13 forward and XylR primers, and a hexahistidine-fused xylanase was cloned into the pRSET B vector. This construct was named pRSX and showed soluble cytoplasmic expression. Cloning of the GFP gene into pBAD33 was done by digesting pET14b-GFP (obtained from ICGEB, India) with the enzymes XbaI and HindIII and ligating it into plasmid pBAD33 (which does not contain any ribosome binding site). GFP was cloned under the ara promoter, which is a tightly regulated promoter.

High cell density cultivation

A freshly transformed single colony of each clone was inoculated in 10 ml Terrific Broth (TB) containing 100 μg/ml (1×) ampicillin and grown overnight. This culture was used to inoculate 200 ml TB with the same antibiotic concentration and grown further for 8 h (OD ~7). This was used as the inoculum for the fermenter (Sartorius Biostat B Plus) containing TB medium and 1× antibiotic. Temperature, pH and initial Dissolved Oxygen (DO) were set at 37°C, 7.0 and 100%, respectively, with the stirrer initially at 250 rpm. DO was cascaded with the stirrer and maintained at 40%. The airflow rate was kept at 2 l/min. The medium pH was set at 7.0 and controlled by automatic addition of 1 N HCl or NaOH. Sigma Antifoam 289 was added when required. The feeding solution, comprising 12% peptone, 12% yeast extract and 18% glycerol, was fed so as to maintain the pre-induction μ at 0.3 h⁻¹. The culture was initially grown in batch mode till 10-12 OD, and then the feed was attached. In order to support growth at a constant specific growth rate of 0.3 h⁻¹, the feed rate was increased exponentially using the equation F = F₀e^(μt), where F₀ is the initial flow rate, F is the flow rate at any given time, μ is the specific growth rate and t is the time in hours. Simultaneously, the metabolic activity of the cultures was estimated indirectly by observing the Oxygen Uptake Rate (OUR) and the carbon dioxide evolution rate (CER), which were measured by an exit gas analyser (FerMac 368, Electrolab Ltd, Tewkesbury, UK). RPM is also a useful online indicator of the oxygen transfer rate, which matches the oxygen uptake rate (OUR) when dissolved oxygen is at steady state. Since throughout the experiment dissolved oxygen was maintained at 40% by cascading RPM with dissolved oxygen, we could correlate these parameters with the metabolic activity of the culture (Gupta et al. 1999). Thus a plot of OUR versus RPM² gave a straight line (Additional File 1), and this provided us with a cross-check on the measured values of OUR. This cross-check was used to estimate the online metabolic activity of the culture post induction, which allowed us to design the post-induction feeding strategy without allowing substrate build-up in the medium. From the pH profile it was ensured that there was no acetate accumulation, and both acetate and glycerol levels were monitored using the Megazyme Acetic Acid kit (K-ACETRM; Megazyme International Ireland Limited) and the Megazyme Glycerol kit (K-GCROL; Megazyme International Ireland Limited), respectively, to confirm that there was no overflow metabolism.
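A minimal numerical sketch of the exponential feeding profile F = F₀e^(μt) described above follows; the initial flow rate is an assumed value for illustration, not the run's actual setting.

```python
# Hedged sketch of the exponential feed profile F = F0 * exp(mu * t) used to
# hold the pre-induction specific growth rate at 0.3 1/h. F0 is assumed.
import math

MU = 0.3   # specific growth rate (1/h), as in the cultivation protocol
F0 = 10.0  # initial feed rate (ml/h), illustrative assumption

def feed_rate(t_hours: float) -> float:
    """Feed rate in ml/h after t_hours of exponential feeding."""
    return F0 * math.exp(MU * t_hours)

for t in range(0, 9, 2):
    print(f"t = {t} h: F = {feed_rate(t):.1f} ml/h")
```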
Transcriptomic Profiling

Samples from the fed-batch fermentations of rhIFN-β, xylanase and GFP were collected at four time points (0 h, 2 h, 4 h, and 6 h) after induction. The 0 h (uninduced) samples were taken as a control for every run. cDNA synthesis, labelling (biotin) and hybridization (Affymetrix GeneChip E.coli genome 2.0 array) were performed according to the Affymetrix GeneChip expression analysis protocols. Washing, staining and amplification were carried out in an Affymetrix GeneChip® Fluidics Station 450. An Affymetrix GeneChip® scanner 3000 was used to scan the microarrays. Quantification and acquisition of array images were done using Affymetrix GeneChip Operating Software (GCOS) version 1.4. Three types of detection call (i.e., present, absent, or marginal) were calculated using the statistical expression algorithm, and average normalization was performed. Hybridization and spike controls were used. Subsequent data analysis was performed using GeneSpring GX11.5 software (Agilent Technologies, USA). The RMA algorithm was used for data summarization (Bolstad et al. 2003), and quality control of samples was assessed by principal component analysis (PCA). Fold change was calculated as time point/uninduced control (0 h). Normalized signal intensities of each gene on the chips were converted to log2 values and compared between experiments.

Experimental design for data analysis

The data set was filtered and genes with ≥2-fold change were selected for further analysis. The comparison was done across all time points for all three sets of recombinant proteins, and the common set of up/down-regulated genes was used for further analysis. The comparison set is shown as a Venn diagram in Additional file 2a. To analyze the similarities in the response to rhIFN-β, xylanase and GFP production, the genes common to all three gene sets were extracted and are shown in Additional file 2b, e and Additional file 3. Next, to analyse the effect of hyper-expression of recombinant protein under a strong promoter, the genes that were exclusively up/down-regulated in the time course profiles of rhIFN-β and xylanase, but not in GFP, were extracted from the Venn diagram, as shown in Additional file 2c, f and Additional file 4. Similarly, to analyse the effect of heterologous soluble protein expression on host cells, the time course expression profiles of xylanase and GFP were analysed, and the genes that were solely up/down-regulated in these two sets and not in rhIFN-β (expressed as an inclusion body) were picked (Additional file 2d, g and Additional file 5) for further studies. Gene expression values of the above three sets are represented in the form of a heat map in Figure 1.
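The ≥2-fold selection and cross-product intersection described above amount to a simple filter on log2 ratios followed by a set intersection. The sketch below is illustrative: the signal values are invented, and only the gene names glpK and glpD are taken from the results that follow.

```python
# Hedged sketch of the gene selection logic: keep genes whose induced /
# uninduced signal ratio is at least two-fold (|log2 FC| >= 1), then
# intersect the selections across the three products. Values are invented.
import math

def fold_changed(signals: dict[str, tuple[float, float]]) -> set[str]:
    """Genes changing >= 2-fold between the 0 h control and a time point."""
    return {
        gene
        for gene, (control, induced) in signals.items()
        if abs(math.log2(induced / control)) >= 1.0
    }

ifn = fold_changed({"glpK": (800, 140), "glpD": (100, 1040), "rrsA": (500, 520)})
xyl = fold_changed({"glpK": (760, 150), "glpD": (90, 480), "rrsA": (480, 500)})
gfp = fold_changed({"glpK": (700, 200), "glpD": (110, 610), "rrsA": (510, 530)})
print(sorted(ifn & xyl & gfp))  # genes responding in all three cultures
```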
Cells were grown exponentially in the bioreactor at a specific growth rate of 0.3 h⁻¹ by using an exponential feed of complex medium, and induction was done at an OD of 20-25. At this point the feed rate was ~40 ml/h and the OUR was 0.27 mol/l/h, with a Respiratory Quotient (RQ) of 1.1. The biomass yield (Y_x/s) on glycerol while using complex nitrogen sources had previously been determined to be between 1 and 1.1 g/g; the above results matched this stoichiometrically and demonstrated complete consumption of the substrate feed. A continuous fall in the specific growth rate was observed, which dropped to zero within 4 hours of induction. In the post-induction phase a continuous increase in the OUR was observed, which necessitated oxygen supplementation of the inlet air after 1 h of induction. From the online metabolic activity measurement we could identify three phases in the metabolic activity of the culture. In the first phase, from the point of induction until 2 hours, the activity as measured by OUR, CER and RPM² kept increasing, even though there was a continuous decline in specific growth rate. Clearly a large part of this metabolic activity was diverted towards maintenance (Russell and Cook 1995). The specific product formation rate was high during this period. Since the metabolic activity doubled in this period, the post-induction feed was also increased concomitantly (Ramalingam et al. 2007). In the second phase, between 2 and 4 hours, the feed was kept constant since the online measurement indicated a constant metabolic activity. Finally, after 4 hours there was a decline in metabolic activity, and the specific product formation rate declined to reach zero by 6 hours. Samples were collected at 2, 4 and 6 hours post-induction to represent these three phases. Figure 2 shows the SDS-PAGE gel picture of the rhIFN-β, xylanase and GFP expression profiles post induction. Identifying the similarities in the cellular stress response The transcriptomic profiles of three different fermenter runs with rhIFN-β/BL21(DE3), xylanase/BL21(DE3) and GFP/DH5α were analyzed post induction, and genes with an expression fold change ≥2 with respect to the point of induction were chosen for further analysis. From these, the common list of genes with a high fold change across all time points and across all three fermenter runs was identified (Additional file 3). We observed that in all three cases, the genes associated with metabolic activity in terms of carbon utilization and energy generation pathways were severely down-regulated. This was similar to earlier reports, where the expression of plasmid-based proteins caused a down-regulation of genes involved in biosynthetic pathways, energy metabolism and central carbon metabolism (Ow et al. 2010). Among the existing transport systems involved in nutrient uptake in E. coli, two major components of the glycerol uptake system are glpT (glycerol-3-phosphate transporter) and glpK (glycerol kinase). These were down-regulated 3.7- and 5.6-fold, respectively. Oh and Liao (2000) have also reported that when glycerol was used as a carbon source under nutrient limitation, genes involved in glycerol catabolism were down-regulated. We also observed that the maltose transporter genes malT, malE and malK were repressed, with a concomitant up-regulation of mlc, which negatively regulates the ATP-binding component of the maltose ABC transporter (Plumbridge 2002), similar to the observations of Lemuth et al. (2008); this indicates that transport of carbon sources was significantly affected.
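A short note on the fold changes quoted here and in the rest of the results: they are ratios of normalized signal intensities, reported on the linear scale in the text and on the log₂ scale in the additional files. The sketch below shows the conversion; the intensity values are hypothetical, not study data.

```python
import math

# How a quoted linear fold change relates to a log2 value: a gene reported
# as "down-regulated 5.6-fold" corresponds to log2(FC) of about -2.49.
# Intensities below are assumed normalized signal values for illustration.
uninduced, induced = 1200.0, 214.0   # hypothetical glpK-like intensities
ratio = induced / uninduced          # fold change (time point / 0 h)
print(f"{1/ratio:.1f}-fold down, log2FC = {math.log2(ratio):+.2f}")
```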
The transcript levels of a number of aerobic respiration proteins involved in ATP synthesis were found to be relatively lower. The genes of the nuo operon, encoding components of NADH dehydrogenase-I, were down-regulated. NADH:ubiquinone oxidoreductase-I (NDH-1) is an NADH dehydrogenase which is part of both the aerobic and anaerobic respiratory chains of the cell (Hua et al. 2004). It was found that ndh and the genes of the atp operon were down-regulated, in line with previous observations (Durrschmid et al. 2008; Haddadin and Harcum 2005). In addition, expression of the two main aerobic terminal oxidases, cytochrome bd (cydAB) and cytochrome bo (cyoABCD genes), was also reduced (Oh and Liao 2000). Concomitantly, we observed a severe down-regulation of genes involved in the TCA cycle (icdA, aceBAK, acs) and amino acid synthesis, which can be attributed to the cellular stress associated with the over-expression of recombinant proteins. The sucABCD operon of the TCA cycle was down-regulated, and this may be due to the repressor activity of ArcA/ArcB, which is known to act on the aerobic central metabolism pathway during oxidative stress (Vemuri et al. 2005). Both glpD, which catalyses the conversion of glycerol-3-phosphate to dihydroxyacetone phosphate, and prpE, a key enzyme in propionate degradation, were up-regulated 10.4- and 5.4-fold, respectively. This indicates that alternative pathways for substrate utilization are active during stress and act as anaplerotic reactions to replenish TCA cycle metabolites. gatZ is involved in galactitol degradation and catalyses the dissociation of D-tagatose 1,6-bisphosphate to glycolytic intermediates (Nobelmann and Lengeler 1996). This gene was observed to be down-regulated, indicating that potential anaplerotic pathways which are energy consuming are switched off in order to conserve energy. Interestingly, there was also down-regulation of tnaA, which breaks down L-tryptophan and L-cysteine to pyruvate. This shows that while the overall flux in the glycolytic pathway is decreased, a cascade of events also takes place to maintain the pool of critical intermediates inside the cell. We can therefore hypothesize that the cell ensures its supply of nodal metabolites while it reprogrammes its machinery upon induction of metabolic stress. A schematic of the processes and reactions catalyzed by this common set of differentially expressed genes is given in Figure 3. Analysis of differential expression due to hyper-expression The set of genes which were found to be up/down-regulated (fold change ≥2) during high-level expression of rhIFN-β and xylanase under the T7 promoter, but not in the relatively lower 'ara'-based expression of GFP, was analysed to understand the host response towards hyper-expression of proteins (Figure 4, Additional file 4). The processes of cell growth and expression of foreign gene products compete for various intracellular resources for the biosynthesis of amino acids and nucleotides, as well as for metabolic energy. When recombinant proteins are over-expressed under strong promoters, a major chunk of the precursor flux is diverted towards heterologous gene expression (Chou 2007). This gross imbalance in resource distribution degrades cellular health, and the cellular physiology is significantly reprogrammed. We thus observed that this list contained the maximum number of up/down-regulated genes.
This included the major channels of precursor molecules such as transporters (artJ, mglB, hisJ, ybeJ, ptsH, sufC, ycdO, gatA, gatB, gatC, fepA, ompA, actP and mrdB), central intermediary metabolism (pdhR, aceE, aceF, lpdA, and gltA), amino acid metabolism (argE, argH, entA, entB, entE, entF, aspA and ubiF) and energy generation pathway genes, which were down-regulated. glpF, the glycerol facilitator, which helps in facilitated diffusion of glycerol across the inner membrane of the cell, was found to be down-regulated 3-fold. Down-regulation of the glycerol transport and utilization pathway is a major bottleneck in achieving high yields of recombinant protein, and co-expression of glpF with the target protein has been reported to increase productivity (Choi et al. 2003). This is in agreement with the hypothesis that the cell restricts the supply of precursor molecules in order to slow down metabolic fluxes and thus restrict foreign protein expression. We observed that the whole atp operon was down-regulated, supporting the fact that energy generation pathways are repressed during metabolic stress. Simultaneously, the flagellar motility genes (fliL, fliN, fliS, fliT) were also found to be down-regulated. A steep proton gradient between the periplasmic space and the cytoplasm is required for flagellar motility; decreased motility could indicate energy deficiency. Probably, the cell strategically down-regulates genes related to flagellar motility to minimize energy expenditure, which is in agreement with earlier data (Jozefczuk et al. 2010). The genes proW and proP help in maintaining osmotic homeostasis, prevent cell dehydration and restore membrane turgor (Gunasekera et al. 2008; Mellies et al. 1995). These were found to be 6.0- and 5.3-fold up-regulated, respectively, which is in agreement with the fact that hyper-expression of recombinant proteins not only affects the biosynthetic pathways but also leads to the disruption of cellular integrity. Similarly, yaeL, which is activated in response to unfolded protein stress, was up-regulated (Alba et al. 2002; Betton et al. 1996; Jones et al. 1997; Mecsas et al. 1993; Missiakas et al. 1996). The pnp gene, which encodes PNPase and has a role in mRNA degradation during carbon starvation (Apirion 1974, 1975), was observed to be up-regulated. Interestingly, these proteases and mRNA degradation genes were not differentially expressed in the case of GFP expression, indicating that these stringent responses were not generated at lower levels of recombinant protein expression. Comparing soluble and insoluble forms of expression An interesting comparison of the transcriptomic profiles could be made by looking at those genes which were up- or down-regulated when xylanase and GFP were expressed as soluble proteins but not during the expression of rhIFN-β (as IBs). In both cases there is a metabolic flux diversion towards product formation. However, with soluble protein expression an additional stress is imposed by the interaction of the soluble protein with the cellular constituents, which is absent when the product gets sequestered as IBs. This list of genes is given in Additional file 5, and a schematic representing the reactions and processes which are up/down-regulated is shown in Figure 5.
[Figure 3 caption] Schematic diagram showing the common genes which were up/down-regulated (fold change ≥2) during rhIFN-β, xylanase and GFP production, along with the processes and reactions in which they are involved. Red and green letters represent up-regulated and down-regulated genes, respectively.
The amino acid biosynthetic genes aroC, coding for chorismate synthase, which produces the key branch-point intermediate in aromatic biosynthesis, leuB and ileS were among the significantly down-regulated group. Genes involved in the anaplerotic pathways of TCA cycle intermediates (astD), as well as the glycerol degradation genes encoded by the glpABC operon, which provide intermediates to the glycolytic pathway, were also down-regulated. The rate-limiting steps of both glycolysis and the TCA cycle were down-regulated, which would retard substrate utilization and the energy generation pathways. sapA is well known as a peptide transporter which is part of the defence degradation system in E. coli (Parra-Lopez et al. 1993). Along with this, ATP binding to SapD has also been shown to be sufficient for restoring K⁺ uptake in E. coli via its two Trk potassium transporters (Harms et al. 2001). There was a significant down-regulation of sapA, involved in potassium uptake in E. coli, indicating that there is a decline in the nutrient uptake and oxygen consumption rate of the cell (Harms et al. 2001). Similarly, the fadJ gene, which is a part of the anaerobic β-oxidation of fatty acids, was also down-regulated, suggesting that the cells were not able to use fatty acids as a carbon and energy source (Campbell et al. 2003). In E. coli, fpr participates in the synthesis of methionine, dissimilation of pyruvate, and synthesis of deoxyribonucleotides; the latter two reactions are anaerobic processes. In all cases, fpr functions together with flavodoxin in the transfer of electrons from NADPH to an acceptor (Bianchi et al. 1995; Ow et al. 2006), and this was also found to be down-regulated. The atpC component of the ATP synthase F1 complex was down-regulated. These results indicate that the expression of a soluble protein leads to an enhanced suppression of key metabolic pathways, adversely affecting the cellular health and productivity of the host. Discussion It was observed that the cellular response to the diversion of metabolites for product formation operates at multiple levels, directed both at growth rate and at protein production. Since growth rate and protein synthesis share common pathways, this stress response hits both processes simultaneously, affirming previous reports on the growth-associated nature of recombinant protein production (Bentley et al. 1990; Shin et al. 1998). The stress response first affects carbon uptake by down-regulating various transporters, and this phenomenon was observed for all the conditions irrespective of the nature and level of recombinant protein expression. Simultaneously, the carbon utilization and energy generation pathways, from glycolysis and the TCA cycle to the electron transport chain, were severely repressed, resulting in decreased growth yield, product formation and viability of the cell population, as has been shown by Hardiman et al. (2007). Interestingly, there was a significant time lag between this transcriptomic down-regulation and its resultant phenotype. Thus the metabolic activity, which is linked to the substrate uptake rate, fell only after 4 hours post-induction. The down-regulation of energy generating pathways also leads to a drop in growth rate (Kasimoglu et al. 1996; Troein et al. 2007), which was also observed in the present case.
It has been previously reported that in complex medium several genes of energy generating pathways, such as hycB, cyoA, cydA, and ndh, were down-regulated, along with the ATP synthase gene (Oh and Liao 2000), which is similar to our observations. The additional targets of this metabolic stress response were amino acid uptake, peptide uptake and the amino acid biosynthetic pathways. Interestingly, amino acid uptake and biosynthesis were significantly repressed only when soluble proteins were expressed under the different promoters, whereas these pathways were not significantly affected when the recombinant protein was expressed as an inclusion body. We observed that hyper-expression of recombinant protein tends to generate a very strong response in which several pathways are affected, most importantly the transporters, the osmoprotectant systems (proP and proW), proteases (yaeL) and mRNA degradation (pnp). All these genes were highly up-regulated during protein production with the T7 promoter (rhIFN-β and xylanase), whereas they were not significantly affected during protein production with the weaker ara promoter. The large fold changes in the genes associated with transport are an indication of cellular shutdown. Simultaneously, the cell loses its osmotolerant property, along with showing an increase in protease and mRNA degradation activity. We can therefore conclude that both the nature and the level of recombinant protein expression lead to the generation of a common as well as a differential stress response. Host cell engineering should take into account the nature of the protein to be expressed when designing improved platforms for over-expression. Additional material Additional file 1: Pre-induction graphs for fed-batch fermentation of GFP. OD₆₀₀ vs time; OUR (mol/l/h) vs time; CER (mol/l/h) vs time; OUR (mol/l/h) vs RPM²; CER (mol/l/h) vs RPM². Additional file 2: Experimental design for data analysis. a) Sets of up/down-regulated genes across different time points (2 h, 4 h and 6 h). b) Set of genes up-regulated in rhIFN-β, xylanase and GFP. e) Set of genes down-regulated in rhIFN-β, xylanase and GFP. c) Set of genes up-regulated in rhIFN-β and xylanase but not in GFP. f) Set of genes down-regulated in rhIFN-β and xylanase but not in GFP. d) Set of genes up-regulated in xylanase and GFP but not in rhIFN-β. g) Set of genes down-regulated in xylanase and GFP but not in rhIFN-β. Additional file 3: List of common genes present during expression of rhIFN-β, xylanase and GFP, with their log₂ fold change values (fold change ≥2). Additional file 4: List of common genes present during expression of rhIFN-β and xylanase but not in GFP, along with their log₂ fold change values (fold change ≥2). Additional file 5: List of common genes present during expression of GFP and xylanase but not in rhIFN-β, along with their log₂ fold change values (fold change ≥2).
2016-05-12T22:15:10.714Z
2011-10-22T00:00:00.000
{ "year": 2011, "sha1": "db9e603fb9d0a4e2a9b8a6b64125dc3083575967", "oa_license": "CCBY", "oa_url": "https://amb-express.springeropen.com/track/pdf/10.1186/2191-0855-1-33", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "705028041ddc967feb7651ff4e541b1fcb8c2321", "s2fieldsofstudy": [ "Biology", "Engineering" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
6831690
pes2o/s2orc
v3-fos-license
Conservation of artists' acrylic emulsion paints: XPS, NEXAFS and ATR-FTIR studies of wet cleaning methods† Works of art prepared with acrylic emulsion paints became commercially available in the 1960s. It is increasingly necessary to undertake and optimise cleaning and preventative conservation treatments to ensure their longevity. Model artists' acrylic paint films covered with artificial soiling were thus prepared on a canvas support and exposed to a variety of wet cleaning treatments based on aqueous or hydrocarbon solvent systems. This included some with additives such as chelating agents and/or surfactants, and microemulsion systems made specifically for conservation practice. The impact of cleaning (soiling removal) on the paint film surface was examined visually and correlated with results of attenuated total reflection Fourier transform infrared, XPS and near-edge X-ray absorption fine structure analyses, three spectroscopic techniques of increasing surface sensitivity (sampling depths of approximately 1000, 10 and 5 nm, respectively). Visual analysis established the relative cleaning efficacy of the wet cleaning treatments, in line with previous results. X-ray spectroscopy analysis provided significant additional findings, including evidence for (i) surfactant extraction following aqueous swabbing, (ii) modifications to pigment following cleaning and (iii) cleaning system residues. © 2014 The Authors. Surface and Interface Analysis published by John Wiley & Sons, Ltd. Introduction There is an increasing need for fundamental research informing the conservation and restoration of 20th century acrylic paintings. In contrast to oil paintings, for which conservator-restorers can draw on an extensive body of detailed previous research, there is much less extant knowledge that permits informed choices of particular cleaning strategies for works of art made from acrylic emulsion paints. To address this knowledge gap, the impact of wet cleaning agents on the bulk film and surface properties of artists' acrylic paint films has been investigated over the last decade [1][2][3] using several types of Fourier transform infrared (FTIR) spectroscopy [1,2] alongside mass spectrometry, [4] AFM, [3] SEM-EDX analysis and visual inspection. Also relevant for the present study is recent work on acrylic glass, for example by atomic force microscopy. [5] While all of these techniques probe the surface of the paint film, information on the chemistry of the crucial uppermost surface (<10 nm) is nonexistent. In light of this, the analytical value of XPS, which has not been applied in this context before, in combination with near-edge X-ray absorption fine structure (NEXAFS), has been explored in the present work by characterising azo yellow (PY3) artists' acrylic emulsion paint films from two manufacturers before and after wet cleaning treatments. The results were examined in conjunction with those from attenuated total reflection-FTIR (ATR-FTIR) spectroscopy and visual inspection, which are routinely used to assess the surfaces of these paint films. A selection of these results is presented here to highlight the benefit of applying X-ray surface analysis in improving our understanding of the impact of wet cleaning treatments on artists' acrylic emulsion paint films. Experimental Artists' acrylic paint films Acrylic emulsion paint films were prepared on triple-primed 100% cotton duck canvas (Russel and Chapel, London).
Artists' paints, Golden Heavy Body Acrylics Hansa Yellow Light and Talens Rembrandt Azo Yellow Lemon, both containing the PY3 azo yellow organic synthetic pigment, were applied to the canvas support using a draw-down technique on a Sheen Instruments film caster, to a wet thickness of approximately 800 μm and a dry thickness of 200-250 μm, as measured with a digital calliper. The resin for the Golden paint was a p(butyl acrylate/methyl methacrylate) [p(BA/MMA)] copolymer, while the resin for the Talens paint was a p(ethyl acrylate/methyl methacrylate) [p(EA/MMA)] copolymer with detectable amounts of a chalk (CaCO₃) extender. ATR-FTIR spectra indicated that the surfactant used as a pigment dispersant in the bulk paint formulation had not migrated to the surface of either dry film. Because migrated surfactant is suggested to play a role in soiling adhesion and pigment mobility, and is known to change macroscopic properties of the paint film such as gloss, [6] knowledge of its presence or absence may be relevant to interpreting the relationship between surface chemistry and cleaning efficacy. Films were divided into squares (~1 cm²) and cleaned with one of 25 cleaning agents in each square. Four of these areas were selected for spectroscopic analysis, and details of their cleaning treatments are presented below. Artificial soiling To simulate the effect of many years of passive soiling, the model films were allowed to dry in ambient conditions for 5 months before brushing on an artificial soiling mixture [7] approximating typical indoor particulate soiling. Before cleaning treatments, the soiling was allowed to dry for 2 weeks on the Golden paint film, while the Talens film underwent cleaning studies after 2 days. Although longer drying is ideal, access arrangements to experimental techniques made this impossible. Wet cleaning agents The water (W) used was deionized (DI) (Purite, D700 deionizer). A 100% aliphatic petroleum spirit (PS) (VWR International) with a boiling point of 120-160°C was used as received. 'Ecosurf + triammonium citrate (TAC)' (ET) consisted of a solution of 1% v/v ECOSURF™ EH-9 (The Dow Chemical Company) and 1% w/v TAC chelating agent in deionized water. The microemulsion (ME) was a water-in-oil microemulsion composed of lauryl ammonium sulphate (LAS), low-molecular-weight-alcohol-based cosolvents, a Shellsol D38 mineral spirits solvent continuous phase and deionized water. [3] Cleaning treatment simulation Each cleaning agent was applied to an approximately 1 cm² square area of the paint film by dipping a pre-rolled cotton swab on a wooden applicator (Puritan) into the solution and then rolling the swab back and forth (one roll) across the paint film to a total of 20 rolls; the area was then dried in ambient conditions. In standard conservation practice, the cleaning step would have been immediately followed by a clearance step, i.e. swabbing of the cleaned area with a liquid that likely removes any cleaning residues. This was avoided in these preliminary studies so that possible residues from the cleaning agents could be identified; clearance will be addressed in subsequent publications. Samples for spectroscopic analysis were prepared using a single-hole punch, resulting in circular discs with a diameter of 6 mm. Fourier Transform Infrared Spectroscopy Attenuated total reflection-FTIR spectra were collected on a Nicolet iS10 system (Thermo Scientific) with a Smart Omni ATR sampler containing a single-bounce germanium crystal. The penetration depth of the beam is 0.66 μm at 2000 cm⁻¹.
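As background on where a figure like 0.66 μm comes from, the standard ATR penetration-depth estimate can be evaluated directly. The sketch below uses typical assumed parameters (a Ge crystal with n ≈ 4.0 at 45° incidence and an organic film with n ≈ 1.5), which are not values reported in the text; the result is of the same order as the quoted depth, with the exact figure depending on the sampling-depth convention used (effective sampling depths are often quoted as two to three times d_p).

```python
import math

# Standard ATR penetration depth:
# d_p = lambda / (2*pi*n1*sqrt(sin^2(theta) - (n2/n1)^2))
# All parameters below are typical assumptions, not values from the study.
wavenumber_cm = 2000.0
lam_um = 1e4 / wavenumber_cm            # wavelength in micrometres (5.0 um)
n1, n2 = 4.0, 1.5                       # assumed Ge crystal and film indices
theta = math.radians(45)                # assumed angle of incidence

dp = lam_um / (2 * math.pi * n1 * math.sqrt(math.sin(theta)**2 - (n2 / n1)**2))
print(f"d_p ~ {dp:.2f} um at 2000 cm^-1")   # ~0.33 um with these assumptions
```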
Background and sample spectra were acquired for each sample on two to three different spots in 64 scans with a resolution of 4 cm⁻¹. Averages of these spectra are presented here. The cleanest part of a sample (judged visually) was analysed to explore the surface chemistry of the most effectively cleaned area. X-ray photoelectron spectroscopy X-ray photoelectron spectra were collected on a Kratos Axis Ultra spectrometer operating with (i) a monochromatic Al Kα X-ray anode (1486.69 eV) at 180 W (15 kV, 12 mA), (ii) a hemispherical analyser in electrostatic mode (p < 10⁻⁷ mbar) and (iii) charge neutralisation. Survey XP spectra were acquired in a single sweep with a pass energy of 80 eV, in steps of 0.35 eV and a dwell time of 150 ms, giving collection times of approximately 9 min per spectrum. High-resolution XP spectra were acquired in a single sweep with a pass energy of 20 eV, in steps of 0.1 eV with a dwell time of 200 ms, giving a collection time of 1 min per spectrum. The circular punch-outs from the paint film were mounted on the sample bar with double-sided tape. As with FTIR, the visually cleanest part of each sample was analysed, by aligning it in the focal point of the electron analyser. Data analysis was carried out with CasaXPS. Binding energies were referenced to the primary hydrocarbon peak set to 285.0 eV, which required a correction of approximately +3 to +3.5 eV to the experimental data. XP spectra acquired over 0-200 eV binding energy (BE) of elements at low concentration were noisy and were smoothed for better clarity. Near-edge X-ray absorption fine structure NEXAFS measurements were performed at the U7a beamline of the National Synchrotron Light Source at Brookhaven National Laboratory, NY. [7,8] Partial electron yield spectra for the Ca L-edge were collected via a channeltron electron multiplier and with an entrance grid bias (EGB) of −50 and −200 V. Visual inspection Macroscopic effects of cleaning treatments on the paint films were initially assessed by eye. The observations confirmed previous reports of the paint brand (formulation) dependency of these effects. [3] The relative cleaning efficacies of the systems were immediately apparent. Soiled paint areas swabbed with effective cleaning systems were very similar in colour to the yellow as-prepared unsoiled areas, while those soiled areas swabbed with less effective cleaning systems remained partially grey from residual soiling. On the basis of these criteria, the cleaning systems were ranked as follows, from least to most effective: PS ≤ DI water (W) < Ecosurf + TAC (ET) = microemulsion (ME). Furthermore, it was also apparent from the transfer of yellow pigment to the cotton swab that more PY3 pigment was removed from the Talens films by the cleaning action than from the Golden films. Fourier Transform Infrared Spectroscopy The ATR-FTIR (Fig. 1) readily identified surface soiling from a group of bands between 3600 and 3700 cm⁻¹ and several bands in the fingerprint region at ca 1030 and 1000 cm⁻¹ associated with vibrations of aluminosilicate materials in the soiling mixture, e.g. lattice water and Si-O stretches, respectively. Further, bands associated with residues from the hydrocarbon make-up solvent of the soiling mixture were observed at ca 2920 and 2950 cm⁻¹ (C-H stretch). The spectra of the soiled paint films after cleaning (Fig. 1) showed that,
whereas soiling bands were completely gone after cleaning with the ME and ET, some soiling was still visible after cleaning with PS on both films, and also after cleaning with water on the Talens film. Other modifications to components of the paint film, such as surfactant (which was not detected on either paint film before or after cleaning) or pigment, were not observed. X-ray photoelectron spectroscopy More in-depth analysis of the paint film surfaces by X-ray spectroscopy confirmed the trends in relative cleaning efficacy. However, this analysis also revealed the presence of cleaning residues (without a clearance step) in addition to other subtle changes to the paint film surface, some of which are highlighted in the following. The survey XP spectra (of the unsoiled paint films) are dominated by C and O, which originate from the acrylic binder (Fig. 2). Additionally, strong Na and S emissions are visible on the sample cleaned with the microemulsion. These residues stem from the anionic LAS surfactant in the cleaning agent. The presence of LAS residues was evident also from subtle changes in the C 1s and O 1s XP spectra (not shown here). Identification of the LAS residues was not possible by visual inspection or ATR-FTIR. Work is underway to determine how these residues are affected by a clearance step (a final cleaning step that is standard conservation practice, aimed at removing cleaning system residues and usually carried out with deionised water or mineral spirits), as well as how to identify residues (if any) from similar microemulsions without LAS or other components with inorganic functional groups. Pigment transfer onto the cotton swabs was repeatedly observed during cleaning of the Talens film, but to varying degrees depending on the cleaning agent used. While ATR-FTIR did not detect associated modifications to the surface of the paint film, XP spectra of chlorine (an element of the azo yellow pigment and the primary contributor to the chlorine signal in these measurements) proved sensitive (Fig. 3). In the Cl 2p emission from the Talens films, the Cl signal was more intense after cleaning with solvent-based systems (PS and ME) than after cleaning with water-based agents (W and ET). In contrast, the Cl 2p signal after the same cleaning treatments on the corresponding Golden paint film was virtually indistinguishable from the baseline before and after cleaning treatments. These manufacturer-dependent and treatment-dependent changes in the Cl 2p XP signal, associated most likely with the azo yellow pigment, could be explained by greater mobility of unbound or poorly bound pigment in the Talens paints, perhaps mediated by increased solubility in the hydrocarbon solvents. The lack of similar trends in the Cl 2p XP spectra on the Golden film suggests that the pigment in the Golden formulation is more resistant towards the particular chemistry of the wet cleaning agents, possibly because the film is more medium-rich at the film surface, i.e. covered by acrylic resin. In comparison with the Cl 2p XP spectra of the unsoiled cleaned Talens paint film, the Cl 2p XP spectra of the soiled film after cleaning treatments were much weaker in intensity and hardly distinguishable from the background, suggesting that while some pigment is transferred to the swab rolls during cleaning, the soiling layer appears to have provided some protection to the underlying paint film. This supports the standard conservation practice of only cleaning until an area appears visually clean, i.e. until the soiling layer has been sufficiently reduced.
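As a side note on the data reduction behind these XP spectra: the charge-referencing step described in the methods (shifting the binding-energy axis so the primary hydrocarbon C 1s component sits at 285.0 eV) amounts to a simple axis offset. The sketch below illustrates it; the "measured" peak position is a hypothetical example, chosen to be consistent with the reported correction of roughly +3 to +3.5 eV.

```python
import numpy as np

# Sketch of C 1s charge referencing: shift the binding-energy axis so the
# primary hydrocarbon component sits at 285.0 eV. The measured position
# below is an assumed example, not a value reported in the study.
REFERENCE_C1S = 285.0   # eV, target position for the C-H component
measured_c1s = 281.7    # eV, assumed apparent (charge-shifted) position

be_raw = np.linspace(278.0, 295.0, 500)                  # raw BE axis, eV
counts = np.exp(-((be_raw - measured_c1s) ** 2) / 0.8)   # toy C 1s peak

shift = REFERENCE_C1S - measured_c1s   # ~ +3.3 eV for this example
be_cal = be_raw + shift                # calibrated binding-energy axis

print(f"Applied charge correction: {shift:+.2f} eV")
print(f"Peak now at {be_cal[np.argmax(counts)]:.2f} eV")
```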
Evidence for surfactant at the surface of the unsoiled Talens yellow paint was not identified by any technique before cleaning. However, in the C 1s XP spectrum (Fig. 4) of the unsoiled Talens yellow paint film, a slight increase in the C-O ether component at +1.5 eV (relative to the primary C-H XP component at 285 eV) was observed after aqueous swabbing. The type of surfactant in these paints is non-ionic, of the Triton type and hydrophilic, making it amenable to aqueous extraction from the bulk paint film. Talens paints in general have previously been observed to be more surfactant-rich than Golden paints. [8,9] The subtle increase in the C-O ether component of the C 1s XP spectrum suggests that the concentration of this PEO Triton-type surfactant has been increased slightly by aqueous swabbing. This change was not observed on the soiled Talens yellow film, suggesting that surfactant in the underlying paint film was unaffected by the cleaning treatment. This is reminiscent of the apparent protective function of the applied soiling layer in preventing pigment disruption at the paint film, discussed previously in the context of the Cl 2p XP spectra. Near-edge X-ray absorption fine structure NEXAFS provided additional insight. One example is evidence for the stratification of paint film components in the uppermost surface, which was obtained by variation of the surface sensitivity through the use of a biased grid at the detector entrance. [9,10] For example, Ca L-edge NEXAFS spectra (Fig. 5) of soiled Talens yellow films were acquired at two different EGBs. Applying an EGB of −50 V results in a greater signal sampling depth than an EGB of −200 V. The Ca L-edge signal is much less intense in the more bulk-sensitive spectra taken with an EGB of −50 V than in the more surface-sensitive spectra taken with an EGB of −200 V. This indicates stratification, in the uppermost surface of the paint film, of calcium originating from the applied soiling layer and/or the chalk extender present in the bulk paint. Moreover, data acquired with an EGB of −50 V were virtually unaffected by cleaning treatments, while the surface-sensitive spectra acquired at an EGB of −200 V were very responsive to cleaning treatments, indicating a decrease in Ca L-edge intensity broadly in proportion to the visually observed cleaning efficacy of the surface treatment. Conclusions Two yellow (PY3) paint films prepared from artists' acrylic emulsion paints from different manufacturers were exposed to four cleaning treatments. Films were assessed by eye for the impact of cleaning, and the results were correlated with those of ATR-FTIR, XPS and NEXAFS analyses. All techniques established the relative cleaning efficacy of the wet cleaning treatments, in order from least to most effective, to be PS ≤ DI water (W) < Ecosurf + TAC (ET) = microemulsion (ME), in line with previous results. However, significant additional findings arose from the X-ray spectroscopic analysis, including (i) indications of surfactant extraction following aqueous swabbing, (ii) modifications to pigment at the paint film surface and (iii) the identification of cleaning residues. These subtle modifications at the very surface of the paint film may have consequences for the preservation and appearance of works of art made with these paints. The potential application of NEXAFS as a 'depth profiling' tool for these materials should also be examined further.
Currently, more systematic investigations on these paint films are being carried out employing a wider variety of surface treatments (e.g. ageing and other microemulsions) and paint materials (e.g. paint films from different manufacturers with different pigments). This information will inform ongoing research into the most appropriate, minimal-risk way to conserve and preserve this growing proportion of modern and contemporary works of art.
2018-04-03T05:35:25.061Z
2014-02-17T00:00:00.000
{ "year": 2014, "sha1": "cb97f95952eb15f1836e5e10e15ef8b48bb7adfb", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/sia.5376", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "cb97f95952eb15f1836e5e10e15ef8b48bb7adfb", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Materials Science", "Chemistry", "Medicine" ] }
258604329
pes2o/s2orc
v3-fos-license
Disparities in Women With Endometriosis Regarding Access to Care, Diagnosis, Treatment, and Management in the United States: A Scoping Review Endometriosis is a benign gynecological condition that elicits chronic pain in 2-10% of reproductive-age women in the United States and exists in approximately 50% of women with infertility. It creates complications such as hemorrhage and uterine rupture. Historically, the gynecologic symptoms of endometriosis have been associated with economic strain and inferior quality of life. It is suspected that endometriosis diagnosis and treatment are affected by health disparities throughout gynecological care. The goal of this review was to collate and report the current evidence on potential healthcare disparities related to endometriosis diagnosis, treatment, and care across race, ethnicity, and socioeconomic status. This scoping review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and searched the Excerpta Medica Database (EMBASE), Medline Ovid, Cumulative Index to Nursing and Allied Health Literature (CINAHL), Web of Science, and PsycInfo databases for relevant articles on the topic. Eligibility was established a priori to include articles written in English, published between 2015-2022, and reporting on cohort, cross-sectional, or experimental studies conducted in the United States. Initially, 328 articles were found, and after screening and quality assessment, four articles were retained for the final review. Results indicated that White women had higher rates of minimally invasive procedures, versus open abdominal surgeries, than non-White women. White women also had fewer surgical complications compared to all other races and ethnicities. Black women had higher rates of perioperative complications, higher mortality, and spent more time in the perioperative stage than any other race or ethnicity. In the management of endometriosis, the limited research available showed that all non-White women encountered an increased risk of perioperative and postoperative complications compared to White women. More research is needed to explore diagnostic and treatment disparities beyond surgical management, socioeconomic barriers, and improved representation of racial and ethnic minority women. Introduction And Background Endometriosis is a chronic inflammatory disease that globally affects up to 10% of reproductive-aged women [1]. Defined as extra-uterine lesions composed of endometrial glands and stroma, the estrogen-dependent disease process is often associated with chronic pelvic pain, infertility, pregnancy complications, and a decreased quality of life [2,3,4,5,6]. Due to its dependence on ovarian cyclicity, endometriosis commonly affects women between menarche and menopause [2,7]. Although clinically benign, the harmful impacts on quality of life are significant. Pain and associated dysfunctions are detrimental to patients' economic status and professional, educational, social, and family lives, proving endometriosis to be a significant medical, social, and economic issue [8,9]. While a clinical history of dyspareunia and dysmenorrhea may raise suspicion of endometriosis, these symptoms are nonspecific and mimicked by several other potential etiologies. Additionally, endometriosis lacks pathognomonic features and biomarkers specific to the disease process [10].
Instead, diagnosis relies upon surgical visualization and histological confirmation of extra-uterine lesions consisting of endometrial glands and stroma [11]. However, the presence of these lesions cannot exclude other potential etiologies for the patient's symptoms, nor does the absence of lesions preclude endometriosis. The lack of sensitive, specific, non-invasive diagnostic methods contributes to a delay in the diagnosis of endometriosis that can last between four and eleven years, leading to unnecessary, prolonged suffering and decreased quality of life in patients [10,12]. Endometriosis is a complex disease with a multifactorial etiology, and several theories surrounding its origin exist. The most widely accepted theory of what causes endometriosis is retrograde menstruation, where endometrial cells flow backward through the fallopian tubes. The cells move into the pelvic cavity, where they can implant and grow. Metaplasia of extra-uterine cells into endometrium-like cells has also been proposed, as has the theory that stem cells can give rise to the disease [13,14]. Other etiological factors may include altered or impaired immunity, complex hormonal influences and fluctuations, genetics, and environmental contaminants [15,16]. On average, globally, endometriosis is diagnosed seven years after the onset of symptoms, delaying treatment [17]. More than four million women of reproductive age have an endometriosis diagnosis in the United States; six out of ten cases go undiagnosed [12]. Historically, endometriosis was commonly known to affect White women while being seen as a "rare condition" in other races, leading to misdiagnosis of chronic pelvic pain in non-White women [18]. This review explored the literature on endometriosis diagnosis, management, and treatment as it pertains to health inequities among women. Materials and methods This scoping review followed Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Eligibility was established a priori to include articles written in English, published between 2015-2022, and reporting on cohort, cross-sectional, or experimental studies conducted in the United States. Initially, 328 articles were found, and after screening four were retained for the final review. Search Strategy This scoping review searched the Excerpta Medica Database (EMBASE), Medline Ovid, Cumulative Index to Nursing and Allied Health Literature (CINAHL), Web of Science, and PsycInfo, with searches completed on September 26, 2022. The search strategy was created in EMBASE by the first author and reviewed by an expert librarian. The goal of the search was to uncover what evidence is available on disparities in endometriosis care in the United States. The EMBASE search strategy was adapted for Medline Ovid, CINAHL, Web of Science, and PsycInfo. The search terms used were "endometriosis," "endometrioma," "social inequality," "health care disparity," "health care access," "economic," "income," "gender," "sexual," "bias," "disparities," "minority," and "poverty." Multiple combinations of search terms were used to obtain maximum results. The citations and bibliography sections of the included articles were reviewed; no articles were found by other means (e.g., hand searching) that met the search criteria. The four articles that met the inclusion criteria were evaluated for methodological quality and risk of bias using the Joanna Briggs Institute (JBI) critical appraisal tools.
All articles were evaluated using the JBI critical appraisal checklist for cohort studies. All four articles were found to have a minimal risk of bias, with scores above 70%. No articles were excluded based on the JBI checklists. Results A total of four articles were retained for this review. All identified articles were retrospective cohort studies that addressed disparities in the surgical management and outcomes of women with endometriosis. The two main concepts that appeared were ethnic and racial inequalities in 1) surgical treatment and 2) surgical complications. Three articles demonstrated that White women are more likely to have minimally invasive surgeries, versus open abdominal surgeries, for endometriosis diagnosis or treatment compared to other racial and ethnic groups [19][20][21][22]. All four articles found that White patients had lower rates of surgical complications compared to other racial groups [19][20][21][22]. Each article included different racial and ethnic categories, with the most limited racial consideration only evaluating Black and White patients [19], while the most inclusive study used American Indian or Alaska Native, Asian, Black or African American, Hispanic, Native Hawaiian or Pacific Islander, White, and unknown race to categorize patients [22]. Disparities in Endometriosis Surgical Treatments Hysterectomies can be used in endometriosis treatment and diagnosis, and there are many variations of this procedure, from open abdominal laparotomy to video-assisted laparoscopic surgery. Although minimally invasive hysterectomies shorten recovery time, decrease the length of hospital admissions, and decrease the risk of complications, there is still a disproportionate use of minimally invasive hysterectomies in White patients compared to minority racial and ethnic groups [19]. An article that specifically evaluated Black and White patients found that Black patients experienced a significantly higher rate of open hysterectomies compared to White women [19]. Another study investigating endometriosis treatment used a broader range of race and ethnicity categories and found that the highest rates of open hysterectomies occurred in Asian patients, followed by Black and Hispanic patients, while White patients again had the lowest rates of open abdominal hysterectomies [21]. A third study found comparable results, showing that all racial and ethnic groups except American Indian or Alaskan Natives were more likely to have an open abdominal hysterectomy instead of a laparoscopic hysterectomy for endometriosis compared to White women [22]. The study with the broadest race and ethnicity categories explored the outcomes of surgical procedures for endometriosis beyond hysterectomies [22]. It evaluated eight surgical procedures used to treat and diagnose endometriosis, including hysterectomies, and determined the rates of minimally invasive routes versus laparotomy. When the eight alternative endometriosis surgeries were considered, all racial and ethnic groups, excluding Asians, were more likely than White patients to have an open surgery instead of the minimally invasive alternative. When hysterectomies were excluded from the surgical types, Hispanic, Black, Asian, and unknown race/ethnicity patients were less likely to receive the minimally invasive surgical alternative. Black and Hispanic women had the highest rates of oophorectomy, while overall hysterectomy rates were highest in White and Native American patients [22].
Disparities in Endometriosis Surgical Complications The disproportionate use of open hysterectomies in non-White patients is accompanied by an increased risk of both perioperative and postoperative surgical complications [19][20][21][22]. In the studies evaluating the prevalence of major and minor postoperative complications, the Clavien-Dindo classification was used to distinguish major from minor complications [19,21]. The Clavien-Dindo classification ranks surgical complication severity based on the type of therapy needed to manage the complication. The scale consists of grades I, II, IIIa, IIIb, IVa, IVb, and V. Grade III and higher are considered major complications and include cardiopulmonary arrest, myocardial infarction, venous thromboembolism, cerebrovascular accidents, pneumonia, renal failure, deep or organ surgical site infection, sepsis, unplanned reoperations, and death. Grade II and lower are considered minor complications and include blood transfusions, urinary tract infections, and superficial wound infections [19,21]. After accounting for confounding variables such as comorbidities and surgical approach, it was found that Black women had a significantly increased likelihood of experiencing major and minor postoperative complications compared to their White counterparts [19,21]. An article specifically evaluating the prevalence of lower urinary tract infections post-hysterectomy found that Black patients had higher rates of this complication than White patients [20]. This study also found that Black women had more extended hospital stays, unplanned readmissions, reoperations, and other adverse events in the 30 days following surgery. A study including Hispanic, Black, Native Hawaiian or Pacific Islander, and American Indian or Alaska Native patients found that non-White patients were more likely than White patients to experience complications similar to those above postoperatively, with Black patients having the most postoperative complications [22]. In addition to the increased risk of postoperative complications, Black patients had higher rates of perioperative complications compared to other racial groups [20,22]. Black women experienced higher rates of morbidity and mortality during surgery than their White, Hispanic, Native Hawaiian or Pacific Islander, American Indian, or Alaska Native counterparts [22]. Black women also had longer perioperative times than White women after accounting for confounding variables. Discussion This scoping review was conducted to evaluate the current literature addressing disparities associated with endometriosis diagnosis and treatment. A search of five scientific databases yielded four articles that met the inclusion criteria. All four articles showed racial and ethnic disparities in the care of women with endometriosis. Two common themes across the studies were that White women were more likely to receive minimally invasive surgical options than ethnic and racial minorities [19,20,22] and that White women were least likely to have surgical complications from their endometriosis-related surgeries compared with their racial and ethnic minority counterparts [19][20][21][22]. The disproportionate use of open surgical intervention in minority groups is significant because the minimally invasive options decrease the morbidity of the surgery compared to the open surgical options [23]. The lower rates of minimally invasive procedures among minorities likely predispose them to more surgical complications and worse outcomes.
The higher rates of complications expected with decreased usage of the minimally invasive options were shown in all four articles. Minority women are more likely to have minor and major surgical complications, including urinary tract infections, urinary tract injuries, pulmonary embolism, and the need for reoperation [19][20][21][22]. Previously published research supported the existence of racial and ethnic disparities throughout healthcare, even after controlling for socioeconomic factors and disease characterization [24]. This reinforces the current finding that non-White women receive inadequate endometriosis treatment in comparison to their White counterparts. Included Studies A limitation of the included studies is that they are all cohort studies that used the American College of Surgeons National Surgical Quality Improvement Program (NSQIP) database. The cohort studies depended on a database that might carry bias in data collection, along with other factors that cannot be controlled, including the location of the facility where the procedure was performed or the experience level of the surgeon [19]. The data evaluated are from the hospitals that report their outcomes to the NSQIP. Since all articles used the NSQIP database to obtain their cohort data, the information might not be representative of the United States as a whole [19][20][21][22]. Another limitation is that the articles did not address the socioeconomic characteristics of the patients and only addressed surgical diagnosis and treatment of endometriosis [19][20][21][22]. Due to this lack of socioeconomic evaluation, these studies failed to address whether the financial status or the presence, absence, or type of health insurance held by the patient had an impact on health outcomes, such as when the patient sought medical evaluation, whether financial status prevented a surgery that was otherwise indicated, or the type of surgery selected for each patient. The available studies focused on the surgical aspects of endometriosis, despite the search strategy including all diagnostic and treatment options. The absence of studies that discuss disparities in non-invasive medical management and outcomes of endometriosis care highlights the need for further investigation into the disparities in endometriosis management. Review Process The review process yielded four studies that fit the inclusion criteria, which is a small number considering that up to 10% of females have endometriosis, a chronic condition that is associated with chronic pain, decreased quality of life, and infertility [2]. The search was restricted to studies published in 2015 or later with data collected within the United States, which excluded older studies, studies completed in other countries, and potentially other relevant articles. Further, while this review included a small number of articles, the search strategy through five databases was robust, verified by an expert librarian, and conducted based on PRISMA guidelines. The articles were evaluated by multiple authors to determine if they fit the inclusion criteria, and all included articles were evaluated using the JBI critical appraisal tools. The reference lists of all four studies were manually searched without revealing any additional relevant articles.
Implications for practice While the research studies contained in this review did not determine the cause of the disparity in outcomes for endometriosis treatment, the findings suggest that disparities exist for non-White minority groups. Until further research and conclusions are made, healthcare professionals and physicians should strive to ensure they provide consistently high-quality and appropriate care to all women seeking endometriosis treatment, regardless of race or ethnicity. Each patient should be apprised of all the treatment options and encouraged to work with their physician to find the best treatment to limit the risks of complications and adverse outcomes. Future research considerations The consensus of the four articles is that racial and ethnic minority females experience more complications from endometriosis-related surgeries and are less likely to receive the laparoscopic surgical option. However, the causes of these findings were not well addressed. Future research should be conducted that aims to address whether there are racial or ethnic biases from physicians and healthcare systems toward minority patients. Furthermore, studies should evaluate whether there are confounding variables, such as socioeconomic or cultural factors, that are affecting the outcomes. Understanding the basis of why disparities exist is necessary to implement changes to correct the imbalance. Future studies should aim to be inclusive of the racial, ethnic, and socioeconomic diversity of females with endometriosis. Studies that included only White and Black or White, Black, and Hispanic patients were not inclusive of the United States population. Excluding minority groups limits the ability to understand, address, and improve disparities within healthcare. In addition to the inclusivity of race and ethnicity, the socioeconomic factors that affect endometriosis care need further investigation. Exploration of how a lack of access to healthcare and the ability to afford treatment impacts those with endometriosis is necessary to address the disparity and decrease the suffering from this chronic condition. This review only found articles about short-term surgical diagnosis and management related to endometriosis care. Since endometriosis is a chronic condition that impacts the quality of life of the affected women, further evaluation should be conducted to determine if disparities exist in the long-term medical management of endometriosis [2]. Finally, to implement lasting strategies to improve and assure sustainable healthcare equity, there needs to be a prioritization of and commitment to conducting extensive, representative, and comprehensive research on the factors impeding health equity. Among these factors are minority attitudes towards the healthcare system, barriers faced in accessing quality care, and specific factors influencing provider decisions about the route of care, including provider bias. Conclusions Regarding the diagnosis and surgical management of endometriosis in the United States, the research summarized in this review suggests that Black women face a higher risk of perioperative and postoperative complications in comparison to White women. Although one study found that American Indian or Alaska Native, Asian, Hispanic, and Native Hawaiian or Pacific Islander patients also face an increased risk of perioperative and postoperative complications, these racial groups are often overlooked in the current body of research.
The preventable burden of suboptimal care experienced by non-White women must be further explored, along with contributing systemic and socioeconomic factors (e.g., income, insurance, and access to care), to implement best practices to ensure healthcare equity for all patients. Economic, educational, occupational, geographic, and sociocultural disparities are both embedded within and partially responsible for the systemic failure to provide minority patients with exceptional care. Non-White women face an unnecessary and problematic burden of risk when it comes to receiving quality endometriosis care, despite controlling for confounding variables. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Association of Alpha B-Crystallin Genotypes with Oral Cancer Susceptibility, Survival, and Recurrence in Taiwan

Background: Alpha B-crystallin (CRYAB) is a protein that functions as a "molecular chaperone" in preserving intracellular architecture and the cell membrane, and it is highly antiapoptotic. Abnormal CRYAB expression is a prognostic biomarker for oral cancer, whereas its genomic variants and their association with carcinogenesis have never been studied.

Methodology/Findings: We therefore hypothesized that CRYAB single nucleotide polymorphisms may be associated with oral cancer risk. In this hospital-based study, the association of the CRYAB A-1215G (rs2228387), C-802G (rs14133), and intron 2 (rs2070894) polymorphisms with oral cancer in a Taiwanese population was investigated. In total, 496 oral cancer patients and 992 age- and gender-matched healthy controls were genotyped and analyzed. A significantly different frequency distribution was found for the CRYAB C-802G genotypes, but not for the A-1215G or intron 2 genotypes, between the oral cancer and control groups. The CRYAB C-802G G allele conferred an increased risk of oral cancer (P = 1.49×10−5). Patients carrying CG/GG at CRYAB C-802G had lower 5-year survival and a higher recurrence rate than those carrying CC (P < 0.05).

Conclusion/Significance: Our results provide the first evidence that the G allele of CRYAB C-802G is correlated with oral cancer risk, and this polymorphism may be a useful marker for predicting oral cancer recurrence and survival in clinical practice.

Introduction

Oral cancer, a leading cause of death and disfigurement around the world [1][2][3][4], ranks as the fourth most common cancer in the Taiwanese male population [5]. There is an urgent need to develop routine preoperative markers that identify patients with a poor prognosis after surgery or other treatment and, on the other hand, patients at risk of early recurrence, for whom prophylactic neck dissection and adjuvant concurrent chemoradiotherapy would be justified, as well as those who could benefit from various treatments regardless of their tumor size or staging. With the development of useful prognostic markers, patients at higher risk of oral cancer recurrence and/or metastasis could be detected earlier and followed up more frequently, prolonging survival.

Alpha B-crystallin (CRYAB) is a member of the small heat shock protein (sHSP) family and a molecular chaperone expressed in various tissues [6,7]. Recent evidence has established that CRYAB is present not only in the eye but also in heart, skin, brain, spinal cord, and lung tissues [6,8]. In mammals, there are three classes of crystallins: alpha, beta, and gamma, each contributing equally to the total mass of the lens. Proteomic and protein-level studies have recently suggested that CRYAB may play a role in cancer development. In 2005, CRYAB was reported to be down-regulated at the mRNA level in samples from oral cancer patients compared with normal oral mucosa [9]. However, in contrast to its high expression in normal oral mucosa, patients with negative or low CRYAB staining at their tumor sites had better disease-free survival rates than patients whose tumors stained strongly. On the contrary, a proteomic screening in Taiwan reported that CRYAB was significantly upregulated in primary tissue from oral cancer patients [10]. In 2010, similar results were reported in a mouse oral cancer model induced by concomitant 8-week treatment with 4-NQO (200 mg/mL) and arecoline (500 mg/mL) [11]. 
Despite the disagreements among the different ethnicities and populations investigated, the genomic status of CRYAB and the linkage between its genotype and clinical outcome are largely unknown. In order to understand the genomic role of CRYAB in oral cancer, we chose three single nucleotide polymorphisms (SNPs) of CRYAB, A-1215G (rs2228387), C-802G (rs14133), and intron 2 (rs2070894), and investigated their genotypic distribution in a large Taiwanese oral cancer population. In addition, the two clinical outcomes that contribute most to the death rate of oral cancer, metastasis and recurrence, were analyzed for their associations with CRYAB genotypes.

Results

The clinical characteristics of the oral cancer patients and controls are shown in Table 1. There were no significant differences between the two groups in age or sex, while the patients were much more heavily exposed to the environmental risk factors for oral cancer in Taiwan: smoking, alcohol drinking, and betel quid chewing habits (Table 1). The frequencies of the genotypes and alleles of CRYAB A-1215G, C-802G, and intron 2 for the participants are shown in Table 2. The genotype distribution of CRYAB C-802G was significantly different between the oral cancer and control groups (P < 0.05), while those of A-1215G and intron 2 were not (P > 0.05) (Table 2). Likewise, the allele distribution of CRYAB C-802G (P = 1.49×10−5, OR = 1.51, 95% CI = 1.25-1.83), but not that of A-1215G (P = 0.8593, OR = 0.91, 95% CI = 0.31-2.62) or intron 2 (P = 0.1366, OR = 1.16, 95% CI = 0.95-1.41), was found to be associated with susceptibility to oral cancer (Table 2). To sum up, the G allele and the GG or CG genotypes of CRYAB C-802G are associated with oral cancer risk and may serve as biomarkers for oral cancer detection. Representative PCR-based restriction analyses for the CRYAB C-802G polymorphism are shown in Figure 1.

To evaluate the prognostic value of CRYAB genotypes, the relationships among disease-free survival, recurrence, metastasis, and CRYAB C-802G genotypes were analyzed. First, the oral cancer patients carrying CRYAB C-802G CG showed a significant trend toward decreased disease-free survival, and the patients carrying CRYAB C-802G GG had the shortest disease-free survival period (Figure 2). The short disease-free survival may mainly reflect local recurrence. More than 80% (30 of 37) of the patients carrying CRYAB C-802G GG had nodal recurrence without an advanced N stage (N0-1) at first diagnosis. Interestingly, these patients went on to have frequent recurrences and high second primary tumor rates within the following five years. Second, compared with those with the CC genotype, the patients carrying the CRYAB C-802G CG or GG genotype had a higher recurrence rate within the following five years (P = 0.228, OR = 2.08, 95% CI = 1.11-3.92), but not a higher metastasis rate (Table 3).

Discussion

This study aimed to investigate the association of CRYAB genotypes with clinicopathological variations in Taiwanese oral cancer patients. It has recently been recognized that the CRYAB protein may play a role in oral cancer development. In previous literature, it was reported that CRYAB was significantly overexpressed in primary tissue from oral cancer patients in Taiwan [10]. 
In 2010, in a tongue cancer mouse model produced by concomitant 8-week treatment with 4-NQO (200 mg/mL) and arecoline (500 mg/mL) followed by withdrawal for the following 20 weeks, the cells of the tumor sites showed higher expression of CRYAB than the counterpart cells of the sham-treated mice [11]. However, some findings have challenged this upregulation association [9,12]. This may be because the populations investigated differed in ethnicity, genetic background, culture, and environmental exposure. From the viewpoint of cell-line-based studies, it was demonstrated that transformed immortalized human mammary epithelial cells overexpressing CRYAB displayed neoplastic features and luminal growth, and these changes were inhibited when CRYAB expression was silenced using RNA interference [13]. Overexpression of CRYAB in human mammary epithelial cells also formed invasive mammary carcinomas in nude mice, induced epidermal growth factor- and anchorage-independent growth, increased cell migration and invasion, and activated the mitogen-activated protein kinase/extracellular signal-regulated kinase (MEK/ERK) pathway, suggesting that CRYAB could be considered an oncoprotein [14]. However, no study has yet investigated the role of CRYAB in carcinogenesis at the DNA level.

Based on the previous differential-expression evidence, we chose three SNPs of CRYAB, two in the promoter region (A-1215G and C-802G) and one in intron 2, and investigated their associations with oral cancer risk and prognosis. We found that the CRYAB C-802G polymorphism, but not A-1215G or intron 2, was associated with an increased risk of oral cancer (Table 2) and with the local recurrence rate (Table 3). Also, the oral cancer patients carrying GG or CG at the polymorphic site had a lower 5-year survival rate than those carrying homozygous CC (Figure 2). Interestingly, the patients carrying GG at CRYAB C-802G were recorded to have much more frequent recurrences and second primary tumors. This may indicate that CRYAB C-802G could be a predictor of the direction of oral cancer progression. Possibly the genetic polymorphism directly affects the differential patterns of the CRYAB protein, at the expression and/or functional levels, and indirectly unbalances the normal functions of other CRYAB-related genes and proteins, which may result in oral carcinogenesis. At the same time, altered CRYAB protein expression in the extracellular matrix may cause subtle changes in the microenvironment near the primary oral tumor that favor recurrence, but not metastasis. This interpretation is supported by the role of CRYAB in tyrosine kinase signaling, which can easily be altered in cancer cells. Reduced expression of CRYAB was first reported to be associated with a negative prognosis in 2003 [15].

Approximately 10% of early-stage head and neck squamous cell carcinoma patients develop locoregional recurrence, and 15% to 25% develop second primary tumors within 5 years of the initial diagnosis [16,17]. As diagnostic and therapeutic approaches continue to develop, the ability to accurately predict second primary tumors/recurrence in early-stage oral cancer patients would facilitate intensive surveillance or targeted interventions for high-risk patients and thereby reduce mortality and morbidity. 
In this study, the patients carrying the CRYAB C-802G CG or GG genotype were found to have a higher recurrence rate within the following five years, but not a higher metastasis rate (Table 3). The occurrence of second primary tumors may be due to subtle alterations of the microenvironment that accumulate until they reach the threshold of tumorigenesis in patients with risky genotypes, such as GG at CRYAB C-802G. The functional consequences of this SNP, and how the CRYAB protein interacts with extracellular matrix proteins in oral carcinogenesis, also need further investigation. In the future, collective evidence from larger and different cohorts using this SNP may aid oral cancer staging, outcome prediction, and more effective and integrative treatment strategies. This is the first report that the SNP in the promoter region of CRYAB, C-802G, is associated with oral cancer susceptibility, recurrence, and 5-year disease-free survival, but not with metastasis. Since poor locoregional control and frequent recurrence are the main causes of treatment failure in oral cancer therapy, the results of this study may provide predictive guidance not only for prevention but also for the care, therapy, and follow-up of patients at higher risk of cancer recurrence and lower 5-year survival.

Study population and sample collection

Four hundred and ninety-six patients diagnosed with oral cancer were recruited at the outpatient clinics of general surgery between 2005 and 2008 at the China Medical University Hospital, Taichung, Taiwan, Republic of China. The clinical characteristics of the patients, including histological details, were graded and defined by expert surgeons. All patients participated voluntarily, completed a self-administered questionnaire, and provided peripheral blood samples. Twice as many non-cancer healthy volunteers were selected as controls, matched for age, gender, and certain habits, after initial random sampling from the Health Examination Cohort of the hospital. The exclusion criteria for the control group included previous malignancy, metastasized cancer from another or unknown origin, and any familial or genetic disease. Both groups completed a short questionnaire in which these habits were recorded. Our study was approved by the Institutional Review Board of the China Medical University Hospital, and written informed consent was obtained from all participants.

Statistical analyses

Only matched subjects with complete data for all SNPs (case/control = 496/992) were included in the final analysis. To ensure that the controls were representative of the general population and to exclude the possibility of genotyping error, the deviation of the genotype frequencies of the CRYAB SNPs in the control subjects from those expected under Hardy-Weinberg equilibrium was assessed using a goodness-of-fit test. Pearson's two-sided chi-square test or Fisher's exact test (when the expected number in any cell was less than five) was used to compare the distribution of CRYAB genotypes between cases and controls. The primary outcome was disease-free survival; the endpoints included local cancer recurrence and metastasis. Follow-up information was available for all patients at the 5-year time point. Disease-free survival time was calculated from the date of treatment until the time of recurrence, defined as disease recurrence at the same site or the detection of metastases, including recurrence in the neck lymph nodes. 
The genotypes were coded assuming an allele dose effect (CC wild type = 0, CG heterozygous carrier of the mutated allele = 1, GG homozygous carrier of the mutated allele = 2). Disease-free survival curves were generated by the Kaplan-Meier method and verified by the log-rank test. The significance level was set at P < 0.05.
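To make the statistical pipeline concrete, the following is a minimal sketch in Python of the analyses described above: the genotype-distribution comparison, the Hardy-Weinberg goodness-of-fit check, the allele-level odds ratio, and a log-rank comparison of disease-free survival. It assumes scipy and lifelines are installed, and all counts and survival times are illustrative placeholders rather than the study data.

import numpy as np
from scipy.stats import chi2, chi2_contingency
from lifelines.statistics import logrank_test

# Genotype counts (CC, CG, GG) in cases vs. controls -- placeholder values.
cases    = np.array([300, 160, 36])
controls = np.array([700, 260, 32])

# 1) Compare genotype distributions with Pearson's chi-square test.
chi2_stat, p_geno, dof, _ = chi2_contingency(np.vstack([cases, controls]))
print(f"genotype distribution: chi2 = {chi2_stat:.2f}, p = {p_geno:.4g}")

# 2) Hardy-Weinberg goodness-of-fit in controls (1 degree of freedom).
n = controls.sum()
p_c = (2 * controls[0] + controls[1]) / (2 * n)        # C allele frequency
expected = n * np.array([p_c**2, 2 * p_c * (1 - p_c), (1 - p_c)**2])
hwe_stat = ((controls - expected) ** 2 / expected).sum()
print(f"HWE: chi2 = {hwe_stat:.2f}, p = {chi2.sf(hwe_stat, df=1):.4g}")

# 3) Allele-level odds ratio for the G allele with a Woolf-type 95% CI.
g_case, c_case = cases[1] + 2 * cases[2], cases[1] + 2 * cases[0]
g_ctrl, c_ctrl = controls[1] + 2 * controls[2], controls[1] + 2 * controls[0]
or_g = (g_case * c_ctrl) / (c_case * g_ctrl)
se = np.sqrt(1 / g_case + 1 / c_case + 1 / g_ctrl + 1 / c_ctrl)
ci_lo, ci_hi = np.exp(np.log(or_g) + np.array([-1.96, 1.96]) * se)
print(f"G allele: OR = {or_g:.2f}, 95% CI = {ci_lo:.2f}-{ci_hi:.2f}")

# 4) Disease-free survival: log-rank test between two genotype groups.
rng = np.random.default_rng(0)
t_cc  = rng.exponential(50.0, size=100)   # months to event, placeholder
t_var = rng.exponential(30.0, size=60)    # CG/GG carriers, placeholder
result = logrank_test(t_cc, t_var)        # all events observed by default
print(f"log-rank: p = {result.p_value:.4g}")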
Effect of combination therapy in the treatment of auricular keloid

Keloids are benign dermal fibroproliferative disorders that develop at the site of cutaneous injury due to an imbalance of the mechanisms that control tissue repair and regeneration. The pathogenesis is not well understood; however, it is postulated that an imbalance between anabolic and catabolic factors occurs in the healing process, resulting in overproduction of collagen. There are several treatment modalities; nevertheless, recurrence rates are high. The objective of the present paper is to describe five cases of keloid in the ear lobe that showed a good response to a combination therapy: excisional surgery, corticosteroid injection, and a pressure device.

INTRODUCTION

Keloid is an abnormal proliferation of cicatricial tissue formed during the healing process, usually at the location of cutaneous injuries. It does not regress spontaneously, grows beyond the original edges of the scar, and should not be mistaken for hypertrophic scars, which are elevated, do not grow beyond the original margins, and can regress over time.1,2 It occurs in 5% to 15% of scars, only in humans, at a mean onset age of between 10 and 30 years. In general, it arises within one year after a cutaneous injury and is 15 times more common in individuals with more intensely pigmented skin than in those with less pigmented skin.3 Prolonged inflammation is one of the major risk factors for the development of keloid.4 Some regions of the body are more susceptible to the formation of keloids, such as the anterior thorax, the anterior surface of the neck, the shoulders, arms, and ears, and wounds perpendicular to the skin's tension lines.4,5 Growth factors and cytokines are intimately involved in this process. There is an increase in transforming growth factor β (TGF-β), which regulates the proliferation of fibroblasts and collagen synthesis, promoting the differentiation of fibroblasts into myofibroblasts. Myofibroblasts play an important role in the contraction and remodeling of granulation tissue due to their ability to contract actin filaments and to increase the synthesis of collagen. Other factors involved include an increase in mast cells, elastin, glycosaminoglycans, tumor necrosis factor β (TNF-β), interferon β (IFN-β), platelet-derived growth factor (PDGF), insulin-like growth factor 1 (IGF-1), and interleukin 6 (IL-6), in addition to decreased apoptosis of fibroblasts and decreased levels of factors that reduce the synthesis of collagen types I, III, and possibly IV (TNF-β, IFN-β and -γ, and metalloproteinase 9).4,5 There is also an alteration of the immune response, with a predominance of Th2 lymphocytes, which promote fibrogenesis, at the expense of Th1 lymphocytes, which attenuate tissue fibrosis.4,5

There are no guidelines for the treatment of keloids, and among the diverse treatment modalities there is an effort aimed at achieving the best treatment, one that offers the lowest recurrence rate, given the aesthetic and functional alterations as well as the impact on the patient's quality of life that these lesions cause.5,6 The objective of the present paper is to report five cases of keloid in the ear lobe that did not recur after being treated with a combination therapy: excisional surgery, corticosteroid injection, and the use of a pressure device. 
CASE REPORTS

CASE 1: VJS, 14 years old, phototype III, described the appearance of a keloid in the lobe of the left ear (Figure 1) two years earlier, after having the ear pierced. Partial removal of the keloid was performed 10 months before the present paper was approved for publication, with subsequent local monthly injections of 40 mg/mL triamcinolone acetonide and the use of a pressure device (Delasco®, Council Bluffs, Iowa, USA) for 8 to 12 hours daily (Figure 2). The outcome nine months after the beginning of the treatment is shown in Figure 3.

CASE 2: LPR, 46 years old, phototype III, developed keloids in the earlobes following the piercing of the ears (Figure 4). The patient had undergone four previous surgeries and serial injections of corticosteroids without success. Three months before the present paper was approved for publication, partial removal of the keloid was carried out in the right ear, associated with monthly injections of triamcinolone acetonide and the use of pressure devices (Figure 5).

CASE 3: AGP, 21 years old, phototype V, reported having keloids in both earlobes for six years, with two previous surgeries followed by recurrence. Partial removal of the keloid was performed three months before the present paper was approved for publication. The patient is using a pressure device and receiving monthly triamcinolone acetonide injections (Figure 6).

CASE 4: ACSS, 40 years old, phototype V, described the emergence of a keloid in the right ear lobe two years before the present paper was approved for publication. The patient had previously undergone six injections of corticosteroids without success. One year before the present paper was approved for publication, partial removal of the keloid was performed, associated with local injection of 40 mg/mL triamcinolone acetonide and the use of a pressure device for 8 to 12 hours daily (Figure 7).

CASE 5: DC, 25 years old, phototype III, reported the onset of bilateral auricular keloids five years earlier. Surgery to partially remove the keloids was performed three years earlier. Injections of corticosteroids have been applied since then, combined with the continuous use of a pressure device, without recurrence (Figure 8).

All cases underwent intralesional excision of the keloid with almost complete removal, leaving in place a thin portion of the affected tissue, where primary suture was performed.

It is believed that the pressure devices promote hypoxia, collagen degradation, and increased collagenase activity due to the reduction of α-macroglobulin activity.1 In addition to reducing the time of scar formation, they redirect the collagen fibers and increase the levels of hyaluronic acid.6 As a result, pressure earrings (ear pressure devices) were developed with the following characteristics: non-flammability, ease of placement and removal by the patient, the capability of applying adequate pressure, ease of cleaning, and aesthetic acceptability.9 There are different shapes for these devices, some of them with adaptations for the ear helix region.

Treatment recommendations for pressure earrings are not well defined; however, it is postulated that the exerted pressure should be between 25-40 mmHg and should be applied for 12-24 hours a day for periods ranging from months to years.1,6 Ban et al. report that the combination of surgery and postoperative pressure treatment leads to a good response in 90-100% of cases, especially in the treatment of auricular keloids.6 
In line with the literature,1-10 the cases reported in the present study showed a good response to the combined therapy, with no reports of recurrence to date. However, long-term monitoring is necessary for the detection of possible recurrences and early re-operation, so that the desired therapeutic success can be achieved.

Conclusion

Keloids are a frequent complaint in dermatology practices, especially because of the impact on quality of life caused by aesthetic changes. The authors believe that the combined treatment, with the use of aesthetically acceptable devices, is more effective, with lower recurrence rates than monotherapy, and is a good therapeutic choice for keloids of the earlobes.

Figure 4: Patient with keloid in the right earlobe.
Figure 6: A - Keloid in the left earlobe. B - Three months after surgery. C - Pressure device used by the patient.
Figure 8: Outcome 3 years after the beginning of combined therapy.
Performance and Safety of Amino-Acid- and Hydroxyapatite-Enriched Hyaluronic Acid Intradermal Gel in Facial Skin Defects

Background and Objectives: The facial skin defects associated with aging are common concerns in the aging population. Hyaluronic-acid-based intradermal gels have established themselves as safe and effective treatments for addressing these concerns. Recently developed enriched products aim to enhance the efficacy of these gels, yet their effectiveness lacks thorough validation in the existing literature. Materials and Methods: In this retrospective analysis, we investigated the outcomes of intradermal gel treatments in 103 patients with soft tissue defects. This study included three groups: 35 patients received amino-acid-enriched hyaluronic acid gel, another 35 were treated with hydroxyapatite-enriched hyaluronic acid gel, and the remaining 33 underwent treatment with hyaluronic acid only. The efficacy of the treatments was assessed using the Global Aesthetic Improvement Scale (GAIS) score, while patient satisfaction was gauged through a detailed questionnaire. Any adverse event was monitored. Results: The treatments demonstrated remarkable efficacy, as evidenced by mean GAIS scores of 1.714 points for those treated with amino-acid-enriched hyaluronic acid gel, 1.886 points for individuals receiving hydroxyapatite-enriched hyaluronic acid gel, and 1.697 for those treated with hyaluronic acid alone, all showing statistical significance (p < 0.0001). Patient satisfaction was very high. Significantly, there were no recorded instances of major adverse events. Conclusions: Hyaluronic gels, particularly those enriched with amino acids and hydroxyapatite, are effective and safe interventions for addressing facial skin aging defects. They serve as valuable tools in mitigating age-related blemishes and contribute to the overall improvement of skin aesthetics.

Introduction

Skin aging is intricately influenced by the interplay of extrinsic and intrinsic factors, with environmental aggressors such as solar ultraviolet (UV) radiation and genetically influenced intrinsic changes being prominent contributors to this multifaceted process [1,2]. These mechanisms converge on common molecular and cellular pathways, notably generating reactive oxygen species (ROS) [3][4][5]. Additionally, aging profoundly impacts the extracellular matrix (ECM), impairing its ability to synthesize and catabolize essential components such as collagen, elastin, and glycosaminoglycans (GAGs). Consequently, these processes culminate in a diminished volume and richness of the ECM, rendering it more susceptible to heightened enzymatic degradation by metalloproteinases and collagenases [5,6]. Moreover, during the natural aging process, a decline in endogenous hyaluronic acid (HA) levels leads to diminished skin hydration and, subsequently, reduced skin elasticity. This phenomenon manifests itself as soft tissue deficits, notably influencing the skin's physiological characteristics and aesthetic appearance [7,8]. 
Injectable gels have emerged as longstanding interventions to address age-related soft tissue defects stemming from these molecular and cellular alterations. Predominantly formulated with HA, these gels have been employed for decades to correct age-associated soft tissue deficiencies. Currently recognized as the gold standard for such treatments, HA-based injectable gels have demonstrated a commendable track record of safety, efficacy, and user-friendliness [9][10][11]. The unique combination of biocompatibility, biodegradability, hygroscopicity, viscoelasticity, and reversibility establishes HA as a prominent choice for soft tissue augmentation [12][13][14][15][16][17]. These distinctive properties enhance the safety profile of HA-based interventions and contribute to their efficacy and versatility in achieving enduring aesthetic results.

In addressing skin aging, HA's properties can be enhanced through supplementation with synergistic elements, such as amino acids and hydroxyapatite. Amino acids are pivotal in supporting collagen synthesis, a crucial aspect of maintaining skin elasticity and firmness [18,19]. Furthermore, amino acids possess moisturizing properties, fostering improved hydration and promoting a balanced skin texture. Hydroxyapatite is a mineral that constitutes a primary component of bones and teeth and finds utility in cosmetic medicine [20][21][22]. Hydroxyapatite's biocompatibility, osteoconductive properties, durability, controlled biodegradation, and versatility contribute to its utility, especially in correcting facial defects.

While promising, the efficacy of treatments involving HA supplemented with amino acids or hydroxyapatite must be systematically evaluated in clinical practice to establish their effectiveness in addressing age-related skin defects. This study therefore holds significant importance, as it aims to fill this gap by evaluating the performance of two medical treatments, one containing HA enriched with amino acids (Peptidyal 2) and the other containing HA enriched with hydroxyapatite (Peptidyal HX), in the treatment of soft tissue deficits. To enhance the impact of the results and ensure comprehensive information, we have included another medical treatment containing HA exclusively (Doublyx Evo) in this analysis.

Study Design

This monocentric, investigator-initiated, retrospective, observational study aimed to assess the safety and efficacy of three medical treatments, Peptidyal 2, Peptidyal HX, and Doublyx Evo, in treating soft tissue deficits. The study involved a comprehensive analysis of patients treated between December 2022 and April 2023 at the University of Campania Luigi Vanvitelli, Naples, Italy. Ethical approval was obtained from the Ethical Committee of the University of Campania Luigi Vanvitelli, Prot. 0038554/i of 22 December 2022. All procedures adhered to the ethical standards set forth by the responsible committee on human experimentation (both institutional and national) and the Helsinki Declaration of 1975, as revised in 2008.

Study Population

This study comprised 103 adult subjects. Of that total, 35 were treated with Peptidyal 2, 35 with Peptidyal HX, and 33 with Doublyx Evo. Patients with incomplete medical records were excluded. The inclusion criteria were adult patients with soft tissue deficiency treated with one of the medical treatments studied, treatment carried out within a time window of 3-9 months before the start of the study, and written privacy policy consent. 
The exclusion criteria were as follows: age over 65; severe obesity; previous aesthetic or surgical treatments at the same anatomical site; or concomitant diseases that may have affected the treatment outcome.

Data Collection

Data were collected retrospectively from medical records, focusing on the treatment regimen for each treatment. Eligible subjects underwent a single injection of one of the investigated treatments in areas requiring correction, such as the nasolabial folds, lips, midface, and perioral and periocular regions. The investigators estimated the injection volume; commercially available hyaluronidase was used in cases of overcorrection and was administered at the investigators' discretion. Post-injection, subjects were instructed to refrain from applying makeup for 12 h, avoid prolonged exposure to sunlight and UV light for 36 h, and abstain from saunas or Turkish baths for one week. Additionally, subjects were advised against massaging the treatment site or applying pressure to the area for one week post-injection. For all medical treatments, patients underwent treatment within a 30- to 90-day window before the study began.

The treatment protocol involved using the Global Aesthetic Improvement Scale (GAIS) and a satisfaction questionnaire. GAIS, a 5-point scale, assessed global aesthetic improvement in appearance as perceived by the investigator, categorized as "worse", "no change", "improved", "much improved", and "very much improved" [23,24]. Subject satisfaction was gauged through a series of questions on 3- or 5-point scales (Table 1).

Clinical Investigation Endpoints

All complications and outcomes were reviewed from patients' medical records. The primary efficacy endpoint was treatment efficacy evaluated with the GAIS (Table 2), with a median GAIS score < 4 considered successful. The secondary endpoint, patient satisfaction, was assessed using specific questionnaires. Safety endpoint: for the safety analysis, adverse events (AEs) were coded with the Medical Dictionary for Regulatory Activities (MedDRA) version 16.0 terminology and summarized by system organ class and preferred term. Adverse treatment effects and serious adverse events (SADEs), as well as adverse events leading to withdrawal, were summarized separately.

Medical Treatments

Doublyx Evo (Aerazenlab, Milan, Italy) uses highly purified, non-cross-linked, 20 mg/mL sodium hyaluronate derived from bacterial fermentation. Hyaluronic acid is known for its biocompatibility, biodegradability, hygroscopicity, viscoelasticity, and reversibility. It effectively addresses soft tissue deficits by providing volume and hydration to the skin, thus improving skin elasticity and reducing the appearance of wrinkles.

Peptidyal 2 (Aerazenlab, Milan, Italy) is made using highly purified, non-cross-linked, 20 mg/mL sodium hyaluronate derived from bacterial fermentation, together with the amino acids L-hydroxyproline, L-proline, and glycine. Amino acids play a crucial role in supporting collagen synthesis, which is essential for maintaining skin elasticity and firmness. Additionally, they possess moisturizing properties that foster improved hydration and promote a balanced skin texture. 
Peptidyal HX (Aerazenlab, Milan, Italy) is made using highly purified, non-cross-linked, 18 mg/mL sodium hyaluronate derived from bacterial fermentation and hydroxyapatite (up to 0.01%), with glycine and L-proline amino acids. Hydroxyapatite is a mineral that is a primary component of bones and teeth. Its biocompatibility and durability make it useful in cosmetic medicine, particularly for correcting facial defects. It contributes to enhanced collagen production and provides a scaffold for new tissue formation.

All the hydrogels are transparent, colorless, and of low viscosity. All three treatments are supplied in 2.5 mL pre-filled disposable syringes with a luer-lock. These products bear the European conformity mark and have been available on the European market since 2016. These treatments were used separately in this study. Images of all three gels and their appearance are provided in Figure S1: Gel Images.

Data Analysis

The sample size was calculated based on estimates of the possible improvement of the GAIS score after treatment, using information from the literature. The sample size calculation was based on a continuous response variable without assuming a normal distribution, specifically the change in the median GAIS score after treatment. The calculations considered an effect size (dz) of 0.50, aiming for a power of 80%. The analysis of the GAIS score utilized a Wilcoxon signed-rank test with a two-sided significance level set at 5%. This approach was chosen due to the non-normal distribution assumption. For all other endpoints, a descriptive analysis was employed. This included the assessment of patient satisfaction questionnaire responses and safety endpoints, where adverse events (AEs) were descriptively evaluated.

Patients

A total of 103 subjects underwent screening and were enrolled in the study, with 35 individuals receiving treatment with Peptidyal 2, 35 with Peptidyal HX, and 33 with Doublyx Evo. Subjects treated with Peptidyal 2 were evaluated between 36 and 78 days (median = 41) after treatment (mean = 49.2; STD = 10.3), subjects treated with Peptidyal HX were evaluated between 32 and 77 days (median = 43) after treatment (mean = 51.1; STD = 14.1), and patients treated with Doublyx Evo were evaluated between 33 and 71 days (median = 39) after treatment (mean = 42.5; STD = 12.7). Demographic details are summarized in Table 3. This study exclusively comprised women participants. For those treated with Peptidyal 2, ages ranged from 32 to 60 years, with a median age of 45 years. Body mass index (BMI) varied from 16.46 to 24.01, with a median of 20.35. Similarly, subjects treated with Peptidyal HX had an age range of 26 to 60 years, with a median age of 46 years, and a BMI range of 16.76 to 21.35, with a median of 18.79. Subjects treated with Doublyx Evo had an age range of 29 to 57 years, with a median age of 43.5 years, and a BMI range of 16.06 to 20.39, with a median of 18.22. All subjects were of Italian and Caucasian descent. None of the participants had a significant medical history. Consistent with standard clinical practice, no antiplatelet agents were taken for two days before treatments. Additionally, no rescue medication (hyaluronidase) was administered throughout the study.

Patient Response

The primary performance endpoint, measured by the GAIS score, showed significant improvement of the patients' defects in each treatment group (Table 4). Patients treated with Peptidyal 2 exhibited a median GAIS score of 2, with a mean of 1.714 (p-value < 0.0001, STD = 0.51). Similarly, those treated with Peptidyal HX and Doublyx Evo showed median scores of 2, with means of 1.886 (p-value < 0.0001, STD = 0.75) and 1.697 (p-value < 0.0001, STD = 0.75), respectively. Subject satisfaction was notably high across all groups (Table 5). The satisfaction levels of patients treated with Peptidyal 2, Peptidyal HX, and Doublyx Evo were assessed across multiple parameters. For Peptidyal 2, most subjects reported a significant improvement in appearance, with 37.14% stating it was "very much improved" and 48.57% indicating "much improved". Furthermore, 88.57% of subjects expressed satisfaction with the treatment, and 97.2% would recommend it. Peptidyal HX recipients similarly experienced positive 
outcomes, with 31.43% reporting a "very much improved" appearance and 48.57% noting a "much improved" status. A high satisfaction rate of 71.43% was observed, along with a 100% recommendation rate. Doublyx Evo recipients experienced substantial improvements, with 27.27% reporting a "very much improved" appearance and 45.45% indicating "much improved". Additionally, 60.61% expressed satisfaction, and all subjects (100%) would recommend the treatment. These results attest to a notable level of patient satisfaction across all three treatments, highlighting favorable outcomes and a willingness to recommend them to others.

Safety

The safety analysis provides an overview of adverse events (AEs) categorized by system organ class and preferred term for each investigated medical treatment. For Peptidyal 2, nine events were reported among the 35 subjects, with injection site hematoma and injection site swelling being the most frequent, constituting 14.2% and 11.4% of the total events, respectively (Table 6). Peptidyal HX exhibited a total of seven events among 35 subjects, with injection site swelling, and general disorders and administration site conditions, being the predominant AEs, representing 8.5% and 25.7% of the total events, respectively (Table 7). In the case of Doublyx Evo, seven events were reported among 33 subjects, with injection site swelling, and general disorders and administration site conditions, being the most significant AEs, constituting 12.1% and 21.2% of the total events, respectively (Table 8). Most AEs were of mild intensity and did not require specific treatments, reflecting the overall safety profile of the investigated medical treatments. Notably, no significant adverse events or treatment deficiencies were recorded throughout the study, confirming the safety of Peptidyal 2, Peptidyal HX, and Doublyx Evo in treating soft tissue deficits.

Discussion

This retrospective study enrolled 103 participants, exclusively women aged 40 to 50, with mild to severe face volume deficiency, presumably influenced by age-related and environmental factors. This study aimed to evaluate the safety and efficacy of three medical treatments, Peptidyal 2, Peptidyal HX, and Doublyx Evo, for addressing soft tissue deficits. The study design allowed for a comprehensive assessment, incorporating demographic details, treatment timelines, and stringent inclusion criteria. A validated assessment instrument, the GAIS, was used in this study. The GAIS was completed independently by the subjects and the physician investigators. A 1-grade or better improvement from baseline was considered to be a clinically meaningful outcome for this assessment scale.

The primary efficacy endpoint, measured by the GAIS, demonstrated significant improvement in the patients' defects across all treatment groups. Patients treated with Peptidyal 2, Peptidyal HX, and Doublyx Evo exhibited median GAIS scores of 2, with corresponding means and p-values indicating statistically significant improvements. The results were consistent with a notable level of patient satisfaction, as evidenced by high percentages of subjects reporting an improved appearance and expressing satisfaction with the treatment. Recommendations for the treatments were very positive, underscoring the perceived efficacy and patient endorsement of Peptidyal 2, Peptidyal HX, and Doublyx Evo. 
The observed improvements in the GAIS scores of the three medical treatments align with those in previous studies evaluating HA-based gels [25][26][27]. The inclusion of amino acids in the Peptidyal 2 formulation and of hydroxyapatite in the Peptidyal HX formulation may enhance collagen synthesis and fibroblast stimulation, potentially amplifying the volumizing effects and overall aesthetic improvements. Doublyx Evo, although containing HA exclusively, exhibits comparable efficacy, emphasizing the fundamental role of HA in achieving the desired outcomes. These formulations synergistically address soft tissue deficits by leveraging distinct mechanisms, showcasing versatility in achieving aesthetic goals.

The safety analysis revealed adverse events categorized by system organ class and preferred term for each treatment. Adverse effects included injection site swelling, an anticipated short-term ADE described in previous studies of related products and a common ADE for intradermal gels [25,26,28]. The most common ADE in this investigation was injection site hematoma. However, the injection of any intradermal gel can be associated with hematoma formation, among other injection site reactions such as burning, itching, or pain (secondary to stretching of cutaneous nerves), erythema, edema, or bruising, even with excellent injection technique [25,29,30]. Notably, most adverse events observed were of mild intensity, consistent with an overall favorable safety profile. No significant adverse events or treatment deficiencies were recorded, confirming the safety of Peptidyal 2, Peptidyal HX, and Doublyx Evo in treating soft tissue deficits within the studied timeframe.

As with any retrospective study, the inherent limitations include the relatively short follow-up period and potential biases. Notwithstanding, while this study did not have an extensive duration, the provided timeline (evaluations between 36 and 78 days) offers insights into the initial durability of the treatments. This short- to medium-term follow-up provides insights into immediate and intermediate outcomes but lacks a long-term perspective. Moreover, the absence of a placebo group limits contextualization and the attribution of causation. These limitations emphasize the need for cautious interpretation and careful consideration in designing future research on soft tissue augmentation.

Conclusions

In conclusion, this retrospective study provides valuable insights into the safety and efficacy of Peptidyal 2, Peptidyal HX, and Doublyx Evo in addressing soft tissue deficits. The observed improvements in GAIS scores, high patient satisfaction, and favorable safety profiles suggest that these treatments hold promise in aesthetic medicine. Further research with extended follow-up periods and larger cohorts can offer a more comprehensive understanding of the durability and long-term effects of these interventions. The favorable results highlighted in this study contribute to advancing the field of soft tissue augmentation, positioning Peptidyal 2, Peptidyal HX, and Doublyx Evo as viable options for practitioners and patients seeking safe and effective aesthetic treatments. This study has limitations, and the data highlight the need for further research with larger sample sizes, longer follow-up periods, and additional quantitative measures. 
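For readers who wish to reproduce the kind of primary analysis described in the Data Analysis section, a minimal sketch of a two-sided Wilcoxon signed-rank test on per-subject GAIS improvement scores is given below, assuming Python with scipy; the scores are simulated placeholders, not the study data.

import numpy as np
from scipy.stats import wilcoxon

# Per-subject GAIS improvement from baseline (0 = no change ... 3 = very
# much improved) -- simulated placeholder values, not the study data.
rng = np.random.default_rng(1)
gais_change = rng.choice([0, 1, 2, 3], size=35, p=[0.05, 0.25, 0.50, 0.20])

# Two-sided Wilcoxon signed-rank test of the null hypothesis that the
# median change is zero; zero differences are dropped by scipy's default.
stat, p_value = wilcoxon(gais_change, alternative="two-sided")
print(f"median change = {np.median(gais_change)}, W = {stat}, p = {p_value:.4g}")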
N = number of subjects, STD = standard deviation, BMI = body mass index.
Table 4. Global Aesthetic Improvement Score.
Table 6. Adverse events by system organ class and preferred term with Peptidyal 2.
Table 7. Adverse events by system organ class and preferred term with Peptidyal HX. n = number of events, N = number of subjects in the data set, N' = number of subjects with events.
Table 8. Adverse events by system organ class and preferred term with Doublyx Evo.
PROCESS MANAGEMENT IN LOCAL GOVERNMENT SHARED SERVICES CENTRES – FROM AN INVENTORY OF SHARED SERVICE PROCESSES TO SLA DESIGNING

The efficiency and quality of performed tasks are among the key indicators of an organisation's functioning, in both the public and the private sector. The article presents the experience of the Shared Services Centre (SSC) in Toruń in managing the processes conducted as part of the shared services it provides. The process management presented by the authors includes the inventory of taken-over processes, their standardisation and optimisation, and the principles of constructing the service level agreements (SLAs) concluded by the SSC with the served units.

INTRODUCTION

The concept of shared services is quite simple, and it has been adapted from other areas of business activity, for instance from production. A few decades ago, enterprises operating on a global level supplied the market with goods through branches placed in different regions of a country or the world, which in turn supplied local, regional and national markets. After some time it was discovered that if products were mass produced in specialised companies and then delivered to customers (local, regional and national ones), it would be more efficient and more profitable (a classic example of specialisation and returns to scale). SSCs used the same concept of business organisation and operation, i.e. they concentrated the provision of shared services to entities located in different places in one area [Bangemann 2016]. Such a solution enabled the achievement of the basic objectives assigned to such organisations: limiting expenses while improving quality through the standardisation of processes and their repeatability [Tomasino et al. 2014].

These fundamental objectives underlying SSCs in the private sector have been carried over to the public sector, which has been going through important changes consisting of the evolution of the public governance paradigm since the beginning of the 1980s. The process started with the implementation of the concept of New Public Management in Great Britain [Sandford 2015]. It consisted of the implementation in public organisations of management tools typical of the private sector [Samberg 2017]. The main objective of such an approach was to increase the efficiency of public sector operations, in response to a growing deficit of the public finance sector [Holzer and Fry 2011]. The new public management reforms were based on such principles as the promotion of competitiveness in the provision of services; empowerment through transferring control functions from the bureaucratic sphere to citizens (communities); and measuring activity and concentrating on outputs and financial results [Local Government Association 2003] instead of on expenditure [Local Government Association 2010]. Although this approach to public governance has its critics, such concepts as the quality of provided services, flexibility of management, evaluation of implemented tasks, and a shift of interest from observing legal procedures to the effects of operations seem to remain central to the discussion on how the public sector should be organised [Branda 2006]. 
The thoughtless use of private sector tools in public organisations has been criticised and has resulted in a constant search for new public governance solutions [Hall 2017]. However, in the new paradigms of public governance currently proposed, i.e. new public governance or the neo-Weberian state, the efficiency issues mentioned above are still indicated as an important element of public sector organisation [Henderson 2015].

LOCAL GOVERNMENT SHARED SERVICES CENTRES IN POLAND

The process of implementing business solutions from the area of SSCs in the public sector in Poland was started with the 2015 amendment to the Local Government Act, in force from 2016. The amendment enabled municipalities to perform tasks under a shared service. The legislator left an open catalogue of tasks performed under a shared service, indicating only its basic scope of administrative, financial and organisational tasks. As regards the personal scope, the legislator strictly defines the catalogue of entities that can be placed under a shared service. This catalogue includes: (1) organisational units of a municipality, (2) community cultural institutions and (3) other community legal entities established under separate legislation for performing public tasks and included in the public finance sector. The group does not include enterprises, research institutes, banks and commercial law companies established by self-governments.

The biggest municipalities took the initiative in creating new organisational structures supporting the management of budget entities. When the local authorities decided to establish SSCs within their structures, entities performing the same statutory tasks and having unified IT systems, including financial and accounting systems, were placed under a shared service. The two characteristics mentioned above, i.e. performing the same statutory tasks and using unified IT systems, were the key points in designing shared services in self-governments, which were designed for education units. Out of the 18 analysed municipalities 2 where voivodship and local authorities are seated, SSCs were established as organisational units by resolutions of the City Councils in 8 municipalities [XLI.498.2016]. This constitutes over 44% of all the municipalities analysed. In all of the municipalities that decided to establish local government SSCs, except for the Łódź municipality, only education units were placed under a shared service (Fig. 1).

2 The research to diagnose the interest of local authorities in creating Shared Services Centres and to determine the scope of entrusted tasks was conducted in municipalities where the authorities of self-governmental or governmental administration are located. The study prepared by the authors was based on the relevant resolutions of the resolution-passing body (the city council) by which the shared services centres were established and the material and personal scope of the provided services was determined. The research was conducted between June and December 2017. 
Financial and accounting services are common to all the analysed CSSs, thus it is safe to assume that they constitute basic services in the catalogue of shared services. The article presents the model of accounting processes management within the provided services from the moment they are taken from served units, through the process of their modelling, until the moment of their standardisation and drafting service level agreements (SLAs). The authors illustrate the abovementioned process with the example of the Shared Servi ces Centre in Toruń -a unit providing a shared service in the area of accounting, reporting, payroll, settlements and centralisation of VAT settlements for organisational entities in Toruń 3 . The Standardisation of accounting processes is illustrated with the example of Toruń Shared Services Centre. When a shared service is received by a certain group of municipal organisational units, e.g. by education units, a unification (standardisation) of processes within provided services (accounting, payroll and other) should be performed 4 . The process of standardisation is analysed and presented in detail in the article with the example of an accounting service. The process was preceded with an inventory of processes performed by the served units. All possible processes within a provided financial and accounting service were listed and their determinants, which influence their frequency, were specified. It resulted in creating a matrix of processes occurring in particular served units and their frequency and/or number. It was assumed that a given process manifested the same labour intensiveness in particular units and only the frequency/number was the factor differentiating a given 3 Toruń Shared Services Centre (TSSC) provides a shared service for all the education units in the municipality (68 units in total) in the area of accounting, reporting, payroll and settlements and it provides the service of VAT centralisation for all the organisational units of the municipality (95 units in total). TSSC has been providing a shared service since 1 January 2017. The standardisation of accounting processes has been performed on the basis of the data collected between January and September 2017. 4 Before it is included in a shared service, each education unit that is now served by TSSC used its own accounting policy with a chart of accounts, interpreted budget classification for particular economic events individually, it used a different terminology of payroll and settlements, etc. process in a unit. 
Moreover, the processes were classified by the following topics: (1) accounting processes, which included (a) drawing up annual plans (planning resources for the following year, drawing up unit plans of budget revenue and expenditure for a given year, planning budgetary needs until the end of a given year, drafting requests for resources 5), (b) accounting processes proper (accounting of proofs of sale, proofs of purchase, bank statements, cash-desk reports, cash register income, payroll, loans, benefits, payments of benefits from the company social benefits fund on the basis of drawn-up lists, and EU project proofs of sale and purchase) and payments (entering bank transfers), and (c) settlement processes (clearing of tuition fees of preschool children, payments for meals for preschool and school children, and payments for boarding houses; reconciliation of balances of receivables and commitments with counterparties, of stocks, and of stock books with the accounting records); and (2) reporting, including among others (a) monthly, quarterly and six-month reports (overtime settlement, settlements of payments, budgetary accounts, EU project reports, settlement of liabilities, preparing accounting data for the education information system) and (b) annual reports (the balance sheet, the profit and loss account, the statement of changes in the fund with a description, a trial balance, the pre-numbered form and cash register report, an inventory and settlements statement, information on liabilities, statements on awarded contracts and municipal assets, and preparing information for the consolidated financial statement).

In the analysed case, TSSC placed 68 education units of the municipality under a shared financial and accounting service. Each of the units conducted the aforementioned processes, but their construction, i.e. the number and scope of performed activities, their accounting treatment and their level of detail, was not unified. In the period of shared service provision, i.e. in the first, second and third quarters of 2017, the number of accounting documents amounted to 69,085 in total, of which purchase invoices constituted the majority (45%). The standardisation of accounting processes was preceded by an inventory that included the analysis of accounting documents in terms of the form and date of payment of liabilities and receivables. Particular units did not apply an integrated and unified policy in this area; the form and mode of payment depended on the current decisions of the units' managers and did not constitute an element of the financial policy of a given unit. Cash payment (24%) and a 14-day payment deadline (56% for sales invoices and 46% for purchase invoices) were the most frequently used arrangements for settling both liabilities and receivables.

At the inventory stage the processes were not yet standardised; instead, each unit was described by the number of processes it implemented. Mapping the processes of a given unit enabled comparing units in accounting terms (Fig. 2), which was used in scaling the workload of individual employees according to the number of implemented processes allocated to them, and enabled achieving returns to scale when work productivity was calculated 6. The significant differences in the number of processes conducted by the served units resulted from the scope and scale of their activity.
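To make the inventory step concrete, the sketch below shows how raw activity records can be pivoted into the unit-by-process frequency matrix described above. This is a minimal illustration in Python with pandas, not the TSSC tooling; the unit and process names are hypothetical.

import pandas as pd

# Raw activity records from the served units; names are hypothetical.
records = pd.DataFrame({
    "unit":    ["School_1", "School_1", "School_2", "School_2", "School_2",
                "Boarding_House_1", "Boarding_House_1"],
    "process": ["purchase_invoice", "bank_statement", "purchase_invoice",
                "payroll", "purchase_invoice", "catering_settlement",
                "purchase_invoice"],
})

# Unit-by-process frequency matrix (the basis of the comparison in Fig. 2).
matrix = records.pivot_table(index="unit", columns="process",
                             aggfunc="size", fill_value=0)
matrix["total"] = matrix.sum(axis=1)     # total process count per unit
print(matrix.sort_values("total", ascending=False))

The "total" column corresponds to the per-unit process counts used to compare units and, later, to allocate them to employees.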
Among the main determinants differentiating the number of accounting processes, the following can be listed: (1) the number of sales invoices, (2) the number of purchase invoices, (3) the number of students/children in the unit and its boarding house, (4) the number of payments due for catering, accommodation, tuition and other services, (5) the number of implemented EU projects and (6) the number of chapters, paragraphs and tasks included in the unit budget. The number of budget paragraphs and chapters resulting from the tasks of a given unit is correlated with the invoices, which significantly multiplies the processes: when a unit carries out several tasks, the same invoice is distributed over many chapters and paragraphs and is then treated as separate processes. Bank statement accounting, catering payment settlements and entering bank transfers are accounting processes whose repetition rate and number follow from the determinants presented above (e.g. the number of invoices, of students, etc.) and shape the number of processes completed in a unit in a given period of time. Although the number of processes conducted by particular units under the shared service shows high variability (a coefficient of variation of 63.32%), and the number of processes of a particular served unit differs on average by 8,805 from the mean for the studied group, the determinants specifying the volume of processes for a served unit are considerably more stable. This shows that although the served units differ from one another, the same units display similar labour intensity in consecutive months, which is of key importance for managing the processes under a shared service: for the use of human resources in serving the units, for budgeting costs and, above all, for creating unified standards for the conducted processes (Fig. 3, the table). Purchase invoices are the basic documents processed by TSSCs and constitute about 70% of all accounting documents.
5 A request for resources in public sector units constitutes the stage initiating expenditure. It includes verification of the funds available in a unit's financial plan and their reservation.
6 The disproportion among the units served by TSSC is significant: in the smallest units (in accounting terms) over a thousand processes are implemented annually, and in the biggest ones over 40 thousand. The inventory of processes made it possible to compare units in terms of accounting complexity. In TSSC, as in the other analysed self-government SSCs, the taking over of employees performing activities under the shared service (e.g. accounting staff) was implemented in accordance with art. 23¹ of the labour code. Assigning two facilities to each accounting post improved work productivity and thus resulted in financial savings, an economic effect. The inventory of processes was a necessary condition here, because the allocation of units to particular employees was made on the basis of the character of an education unit (e.g. a boarding house, a swimming pool, external service provision) and, above all, on the basis of the total number of processes of the assigned units, so that the workload of each employee would be standardised.
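As a consistency check on the dispersion figures quoted above, and treating the 8,805 figure as the standard deviation of the number of processes per unit (our reading, not stated explicitly in the source), the coefficient of variation ties the two numbers to an implied group mean:

\[
V = \frac{s}{\bar{x}} \cdot 100\% \quad\Longrightarrow\quad \bar{x} = \frac{s}{V} = \frac{8{,}805}{0.6332} \approx 13{,}900 \text{ processes per unit in the studied period,}
\]

which is consistent with the footnoted range of roughly one thousand processes in the smallest units to over 40 thousand in the biggest.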
Considering the character of the served units (education units), seasonality can be observed in their activity: in the holiday season (July and August) the number of processed accounting documents of all kinds is about half that of the other months. The number of documents generated by the served units is highly predictable, and after eliminating the holiday months (when the entities are much less active) a corrected coefficient of variation of the number of processed documents can be obtained; depending on the document type, it ranges between 8.09 and 16.41%. Purchase and sales invoices, the most numerous documents, can be estimated from their average monthly volumes. This is of key importance for optimising process management and for specifying labour intensity (in the standardisation process). With the particular groups of accounting processes conducted in the served units specified, the process of their standardisation, which constitutes the basis for the rules of the provided shared service, can begin. It is necessary for designing an SLA 7 between a served unit and an SSC [Cordall 2018]. The lack of standardisation of particular accounting processes had several causes. Firstly, each unit had its own accounting policy and its own rules of accounting and of reporting expenditure and income, which in effect made, and still makes, it impossible, for instance, to compare budget expenditure by paragraph. Secondly, each facility defined differently the scope of activities performed by the Chief Accountant, the circulation and description of accounting documents, and the activities performed by other employees participating in a process. Thirdly, the timeliness of the accounting processes followed only from external regulations, for instance deadlines for periodical reports or payments of liabilities. Fourthly, no unit quantified the number of processes it carried out, their frequency or their labour intensity, nor did any unit standardise them internally. A consequence of the above conditions is the need to standardise each accounting process provided within the shared service with respect to its material and personal scope, the time of its realisation, and the competences and responsibilities of the participating parties. The changes connected with the introduction of uniform accounting treatment in the units concerned, first of all, the elimination of differences in the recording of events belonging to the same category, which had resulted in divergent accounting records and stemmed, among other things, from vague interpretations of the rules and the lack of uniform enforcement and application of regulations across the served units. As a consequence, differences occurred, such as non-compliance of accounting records with regulations in some units, non-uniformity of data in financial reports, inconsistencies in the additional financial statements required by the municipal authorities, and the impossibility of a reliable analysis of financial data; hence data were presented differently in the units' financial reports (the balance sheet and the income statement) and, consequently, a reliable consolidated balance of the municipality could not be drawn up.
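The holiday correction described in this section amounts to recomputing the coefficient of variation after dropping July and August. A minimal sketch with hypothetical monthly counts (not TSSC figures):

```python
import pandas as pd

# Hypothetical monthly counts of purchase invoices processed by the centre;
# July and August are the low-activity holiday months.
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep"]
counts = pd.Series([5200, 5100, 5400, 5300, 5000, 5250, 2600, 2500, 5350],
                   index=months)

def cv(series: pd.Series) -> float:
    """Coefficient of variation as a percentage of the mean."""
    return 100 * series.std() / series.mean()

print(f"raw CV: {cv(counts):.2f}%")
print(f"corrected CV (holidays excluded): {cv(counts.drop(['Jul', 'Aug'])):.2f}%")
```

The corrected figure is the one relevant for capacity planning, since it describes the predictability of the workload in the months when the served units are fully active.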
Unification of accounting records and of the interpretation of regulations will result in a reliable presentation of data and an appropriate (i.e. compliant with the provisions) recording of economic events in the units. A uniform recording of economic events in all units, in both legal and accounting terms, will unify the forms of financial reports (for analysis), and the resulting homogeneous data, when analysed, will make it possible to detect anomalies among units (within a group of units with a similar profile, e.g. schools with swimming pools) and to establish their causes. The most frequent differences encountered in the standardisation process concerned the classification of the same accounting event; for instance, expenses incurred on repairs, conservation or reviews of fixed assets were recorded, depending on their description, under different budget paragraphs and accounts. The standardisation of book-keeping procedures was implemented not only in TSSC but also in the served units. On the part of the served units, not only was the scope of information describing an economic event subject to accounting unified, but the competences and duties of the particular employees participating in the circulation of accounting documents were also defined. Moreover, payment dates and the number of purchase documents from counterparties were standardised: an individual supplier was required to issue from one to four invoices a month, depending on the character of the supply (cf. Fig. 4) [Peel et al. 2011]. Once the processes of standardisation and optimisation are finished, the stage of creating the procedures for providing shared services can begin [Department for Communities and Local Government 2006]. The existing self-government organisational units operate in traditional, hierarchical organisational and managerial structures, where vertical dependence is the basic management relation. Creating an SSC and entrusting it with the provision of supporting processes (e.g. accounting, payroll, personnel or IT) changes the relationship between the serving unit and the served unit entirely. Firstly, an SSC is based on horizontal and thus flexible organisational structures. Secondly, the relations between an SSC and the served units are based not on reporting lines but on cooperation, and for this reason the formalisation of the rules of cooperation is essential. Separating the tasks performed so far by particular self-government/public organisational units and relocating them to a specialised unit, an SSC, makes it necessary to define precisely the responsibilities of the individual parties participating in the implementation of the process, i.e. the provision of the service.
Fig. 4. The scope and effects of accounting processes standardisation in self-government shared services centres illustrated with the example of the SSC in Toruń. Source: own study based on TSSC data.
In the traditional model of management, without SSCs, the scope of competences and responsibilities of the particular employees engaged in processes matters, but the whole responsibility for their organisation, implementation and supervision rests with the manager of the unit. The introduction of the model described above, an SSC, divides competences and responsibilities between the served unit and the SSC. In this scenario it is of key importance to define the risk matrix, the responsibilities, and the time frame for implementing particular processes. For the needs of TSSC, the Business Process Model and Notation (BPMN) developed by the Object Management Group was used; its accuracy and its usefulness for describing the processes of enterprise resource planning (ERP) systems are its great advantages. Within the activity of TSSC, three process areas, in which the procedures specified for this entity take place, have been identified: (1) management processes, i.e. activities of a managerial character (consistent with the scope of internal control), such as creating the vision of the unit, strategy planning, setting objectives, and identifying and analysing risks; it must be remembered, though, that a budget unit such as TSSC largely depends on management decisions made at a higher level, i.e. in the Municipality of Toruń; (2) operational processes, the biggest process area, concerning the provision of services to the served units in the substantive scope resulting from the unit's statute; operational processes therefore concern fulfilling the basic duties for which the unit was created (cf. Fig. 5); and (3) supporting processes, whose basic aim is to support the operational processes. Each operational process is assigned sub-processes and tasks, together with the roles and responsibilities of the particular people and units participating in its flow. The next stage of SLA creation is building a RACI matrix for each process, in which the owners of the process and the subjects/people responsible for its achievement on each side of the process are specified [Dollery et al. 2016]. The main advantage of a RACI matrix is the unambiguous assignment of tasks to each unit and to each of its employees participating in a process. While a procedural description and standardisation may still leave room for interpretation, mainly in conflict situations or when task implementation is delayed, the RACI matrix is free from such faults. If a collection (manual) of processes is to constitute the basis for designing IT systems supporting the delivered shared services, the standardisation process should include a graphic notation of the business processes, for instance according to version 2.0 of the OMG standard 8.
8 Created within the Business Process Management Initiative and currently owned by the Object Management Group (OMG); the current version of the standard is 2.0. The objective of OMG, created in 1989 by members including IBM, Apple Computer and Sun Microsystems, was to specify standards for cross-platform, distributed, object-oriented programming.
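The single-Accountable rule that gives the RACI matrix its unambiguity is easy to make machine-checkable, which is useful precisely when the matrix is meant to feed the design of supporting IT systems. The sketch below is a minimal illustration with hypothetical process steps and roles, not an excerpt from the TSSC documentation:

```python
# RACI matrix for one accounting process, mapping each process step to role
# assignments: R = Responsible, A = Accountable, C = Consulted, I = Informed.
raci = {
    "register purchase invoice": {"SSC accountant": "R", "SSC team leader": "A",
                                  "unit manager": "I"},
    "approve expenditure":       {"unit manager": "A", "SSC accountant": "C"},
    "enter bank transfer":       {"SSC accountant": "R", "SSC team leader": "A",
                                  "unit manager": "I"},
}

def check_raci(matrix: dict) -> None:
    """Reject matrices that are ambiguous: every step needs exactly one
    Accountable, and someone must actually carry out the work."""
    for step, roles in matrix.items():
        accountable = [r for r, v in roles.items() if v == "A"]
        if len(accountable) != 1:
            raise ValueError(f"step '{step}': expected exactly one Accountable, "
                             f"found {len(accountable)}")
        if not any(v in ("R", "A") for v in roles.values()):
            raise ValueError(f"step '{step}': no Responsible or Accountable role")

check_raci(raci)  # raises ValueError if any step is ambiguous
```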
CONCLUSIONS
Creating the rules and principles of shared services in self-government shared services centres is a time-consuming and labour-intensive process that requires detailed knowledge of the processes in the units under the shared service. The standardisation of processes, which constitutes the basis for the rules of the shared service policy, must be preceded by an inventory of the processes, followed by their remodelling (optimisation). Many process elements, for instance in the area of accounting or payroll services, must be unified and verified in terms of the necessity of their implementation (avoiding the duplication of unnecessary activities and processes). Standardisation is the starting point for work on creating the rules of the shared service policy, i.e. a document that in self-governments is adopted by an order of the municipal executive body (the president or mayor) or of the legislative body (the City Council). Using BPMN or a RACI matrix is obviously not essential here, but it makes the design of IT systems supporting the implementation of those processes much easier and it unifies the risk maps and the responsibility for their implementation. Establishing SSCs and entrusting those units with the conduct of supporting processes is without doubt a favourable solution for self-governments, and, as British experience shows [Department for Communities and Local Government 2006], it requires the involvement of management in the process of creation. SSCs introduce new, flexible organisational forms, review procedures and standardise them, which constitutes a milestone in improving the quality of the provided services. Summing up, it should be emphasised that accounting, payroll and tax services belong to the canon of services forming the basic subject matter of SSCs operating in both the private and the public sector [Deloitte 2018]. It should be underlined that there are basically no significant differences in the scope of the shared services provided in the public and the private sector. Differences appear at the level of the implementation and management of SSCs, because public sector SSCs deliver shared services to local government units whose statutory objectives are not profit oriented (whereas private sector SSCs are basically profit oriented); at the same time, the qualifications and managerial competences of the staff employed in the served units differ considerably from those in private sector entities [Modrzyński and Gawłowski 2018]. After all, SSCs build on the experience of the private sector, and the implementation of business processes in the public sector will certainly have a positive influence on mutual learning and on the relations between those two areas.
Effects of Benzyladenine and Light on Post-harvest Calamondin (x Citrofortunella microcarpa) Fruit Color and Quality
In harvested calamondin fruit, the effects of benzyladenine (BA) and light on rind color and fruit quality were investigated. BA application delayed degreening of the calamondin fruit in both light and dark conditions. At 5 days, BA application had no influence on total soluble solids (TSS), titratable acid (TA), sugar contents, ascorbic acid (AA), or organic acid in the fruit juice, and at 9 days, only AA content had decreased in BA-treated fruits. Light promoted quick degreening of calamondin fruit, whereas in the dark, degreening had proceeded very little in untreated fruit at 9 days. Light did not influence fruit quality at 5 days either. However, light influenced the sugar content at 9 days, especially increasing glucose and total sugar, as well as the AA content of the fruit. Concerning the AA content of calamondin fruit, BA decreased it and light increased it. These results indicated that BA treatment after harvest and keeping fruit in the dark are sufficient to retain the green rind color of harvested calamondin fruit without affecting fruit quality, except for AA content.
Introduction
Calamondin is an acid citrus which is indigenous to the Philippines and widely cultivated throughout the country. In this citrus fruit, the rind color is an important factor in customer satisfaction. Mature green calamondin fruit of high quality sell for a high price in fresh food markets, and yellowing of the rind results in quality loss, which gives rise to economic loss in the market. During marketing and storage, it is therefore desirable to maintain the green rind color of calamondin. Plant hormones are involved in controlling the color change of citrus rind and fruit ripening. Although citrus fruit is non-climacteric (Kusunose and Sawamura, 1980), exogenous ethylene promotes the change of rind color from green to yellow and ripens the fruit (Porat, 2008). On the other hand, cytokinin and gibberellin retain the green rind color and inhibit the ripening and senescence of citrus fruit, and both blocked the ethylene-induced change in rind color and ripening of detached citrus fruit (García-Luis et al., 1986). Cytokinins inhibit the destruction of chlorophyll in detached leaves and delay their senescence through physiological processes (Thimann, 1980). It has been reported that cytokinin prevents chlorophyll destruction in the rind of green harvested oranges (Eilati et al., 1969), but not in attached satsuma mandarins (García-Luis et al., 1986). Development of a simple method to keep the peel green is needed because this is the most important quality attribute of calamondin fruit. Because the effects of cytokinin on the rind color of detached calamondin fruit have not been investigated, it was worth testing this approach as a possible means of retaining the green rind. Light is one of the major environmental factors affecting citrus rind color and is closely involved in the production and decomposition of photosynthetic pigments such as chlorophyll and carotenoids, which give the fruit their color. In satsuma mandarins, Izumi et al. (1992) reported that high-intensity sunlight promoted the coloration of fruit. Yamaga et al.
(2016) indicated that exposure to low-intensity red light-emitting diode (LED) irradiation led to the development of a degree of rind color in post-harvest satsuma mandarin fruit. However, in red grapefruit, light avoidance accelerated chlorophyll breakdown and induced carotenoid accumulation (Lado et al., 2015). On the other hand, it is well known that light induces regreening in citrus fruits (Goldschmidt, 1988). In this study, we investigated the effect of treatment with a cytokinin (benzyladenine, BA) under light or dark conditions on the rind color and fruit quality of calamondin, and thus its effectiveness in delaying the degreening of the rind during a short period of shipping and temporary storage.
Materials and Methods
Mature green calamondin (x Citrofortunella microcarpa (Bunge) Wijnands) fruit were obtained from the Citrus Research Station, NARO Institute of Fruit Tree Science, Japan. The 255 individual fruits used were selected for uniform maturity and size, and were graded as having a green rind. They were divided into groups consisting of 15 fruit. At the start of the experiments, the 15 fruit in each group were assessed for rind color and fruit quality. The remaining groups were treated with 0, 1, 10, or 100 ppm BA in either light or dark conditions. Fruit were put into low-density polyethylene bags (230 × 340 × 0.03 mm) and sprayed with BA solution. The bags were then closed and stored in a Biotron LH-220N growth chamber (Nippon Medical & Chemical Instrument Co. Ltd., Osaka, Japan) either under the light condition (constant temperature of 25°C, fluorescent light, 16-h photoperiod) or in the dark condition, in which the bags of fruit were placed into a light-tight steel box before storage in the same growth chamber. At 5 and 9 days after treatment, the rind color (a*) of the fruit was measured with a ZE2000 color meter (Nippon Denshoku Industries Co., Ltd., Tokyo, Japan). The juice extracted from the fruit was analyzed for total soluble solids (TSS), titratable acidity (TA), sugars (fructose, glucose, sucrose), ascorbic acid (AA), and organic acids (citric acid, malic acid). TSS was determined with a PAL-1 refractometer (Atago Co., Ltd., Tokyo, Japan). TA was measured by neutralization titration of the juice with 0.1 N NaOH and was expressed as grams of citric acid in 100 mL of juice.
Analysis of sugars, AA, and organic acids
The extracted fruit juice was passed through a membrane filter (0.45 μm) and analyzed for sugars and organic acids. For AA analysis, an aliquot of the fruit juice was mixed with 5% metaphosphoric acid (1:1 v/v) immediately after extraction. Sugars were analyzed by high-performance liquid chromatography (HPLC) on a Shimadzu liquid system equipped with an LC-6A pump, an RID-6A refractive index detector (Shimadzu Corporation, Kyoto, Japan) and a personal computer running ChromatoPro software (Run Time Corporation, Hachioji, Japan). A UK-Amino column (4.6 × 250 mm) coupled to a UK-Amino guard column (2 × 5 mm) (Imtakt Corporation, Kyoto, Japan) was used, and the mobile phase was acetonitrile:water (75:25). Analytical conditions were as follows: flow rate 1 mL·min⁻¹; column temperature 40°C. AA was analyzed by HPLC with a Shimadzu liquid system equipped with an SPD-20A UV-VIS detector (Shimadzu Corporation) and ChromatoPro software.
An ODS 2551-P column (6 × 200 mm) (Senshu Scientific, Tokyo, Japan) was used with isocratic elution by a mobile phase of 1.5% ammonium dihydrogen phosphate (NH₄H₂PO₄) (pH 3.8). Analytical conditions were as follows: flow rate 1 mL·min⁻¹; detection wavelength 254 nm; column temperature 40°C. Organic acids were analyzed by HPLC with a Shimadzu liquid system equipped with an LC-10A pump, an SPD-20A UV-VIS detector (Shimadzu Corporation) and ChromatoPro software. A Supelcogel H column (7.8 × 300 mm) coupled to a Supelguard C610H guard column (Supelco, Bellefonte, USA) was used with isocratic elution by a mobile phase of 0.1% phosphoric acid. Analytical conditions were as follows: flow rate 0.5 mL·min⁻¹; detection wavelength 210 nm; column temperature 30°C.
Statistical analysis
Data on peel color were analyzed for statistical significance using Statcel3 software (OMS Publishing Inc., Tokorozawa, Japan) with Dunnett's multiple range test. All other statistical analyses were performed using Excel statistics 2010 (Social Survey Research Information, Tokyo, Japan) with a two-way ANOVA and Tukey's test.
Results
Fruit color
Under light conditions, the a* value of the rind of calamondin fruit increased with decreasing concentrations of BA at both 5 and 9 days after treatment (Fig. 1). Rind color in the 10 ppm and 100 ppm BA treatments showed no changes at 5 days when stored in the light, but at 9 days after treatment, untreated rind showed the maximum a* value and degreening had mostly proceeded. Under dark conditions, the a* values at 5 days in all BA treatments were similar to the initial 0-day values, and all fruits kept their green color. On the other hand, at 9 days, only the a* value of untreated fruit had increased, whereas the peel color in the 1, 10, and 100 ppm BA treatments had not changed. Although light promoted peel degreening of the harvested calamondin fruit, BA delayed it.
Fruit quality
As for fruit quality, TSS and TA in calamondin juice did not differ significantly among BA treatments or between light and dark conditions at 5 and 9 days after treatment (Table 1). However, TSS at 9 days after treatment was influenced by the interaction between BA and exposure to light. The TA of the fruit decreased gradually from the start of the experiment to 5 and 9 days. Sugar content in 5-day fruit did not differ among BA treatments or between light and dark conditions (Table 2). In 9-day fruit in the light, the glucose, sucrose, and total sugar contents of the treatments other than 100 ppm BA were significantly higher than in 9-day fruit in the dark. On the other hand, fructose, glucose, and total sugar increased, and sucrose decreased, in 9-day fruit stored in the light relative to 0-day fruit. There was no significant difference in AA content in 5-day fruit between light and dark conditions or among BA treatments (Table 3). AA content in fruit stored for 9 days under light conditions was higher than under dark conditions and was decreased significantly by BA treatment. BA and light had no effect on citric acid or malic acid contents in fruit juice at either 5 or 9 days after treatment (Table 3).
Discussion
The cytokinin BA, which is known to delay degreening and senescence of leaves (Eilati et al., 1969), inhibited chlorophyll decomposition in the rind of calamondin. The inhibitory effect of BA increased at higher concentrations and was maximal at 100 ppm.
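For readers who want to reproduce this kind of analysis outside the packages named above, the two-way ANOVA (BA concentration × light condition) followed by Tukey's test could be run roughly as follows. This is a sketch with hypothetical column and file names, not the authors' actual workflow:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Long-format data assumed: one row per replicate, with the BA dose (ppm),
# the light condition, and a measured response such as ascorbic acid content.
df = pd.read_csv("calamondin_quality.csv")  # hypothetical file

# Two-way ANOVA with interaction: response ~ BA * light
model = ols("aa_content ~ C(ba_ppm) * C(light)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey's HSD across the combined treatment groups
groups = df["ba_ppm"].astype(str) + "_" + df["light"]
print(pairwise_tukeyhsd(df["aa_content"], groups, alpha=0.05))
```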
Barmore (1975) reported that chlorophyllase activity and chlorophyll decomposition in calamondin rind are increased by ethylene treatment, with a concomitant decrease in chlorophyll. It was also shown that ethylene-induced degreening of citrus fruit is inhibited by BA. Although citrus is a fruit without a climacteric rise, the endogenous ethylene level at the ripening stage reportedly reaches a maximum in seedless yuzu and wase satsuma mandarins when the color changes from green to yellow (Kusunose and Sawamura, 1980). Isshiki et al. (2005) reported that postharvest treatment with 1-methylcyclopropene, an inhibitor of ethylene action, delayed the degreening of the peel in sudachi fruit. The delay of degreening in BA-treated fruit may have occurred through repression of the ethylene effect on chlorophyllase activity and chlorophyll decomposition. In the light, degreening of harvested calamondin fruit proceeded quickly, as shown by the a* values (Fig. 1). Light is important to the development of the photosynthetic apparatus and is involved in both degreening and regreening of the peel of citrus fruit. Yamaga et al. (2016) reported that low-intensity red LEDs increased the a* value of the peel in early-harvest satsuma mandarin fruit. In red grapefruit, high-intensity light inhibited chlorophyll breakdown and carotenoid accumulation in the peel of fruit located on the branch (Lado et al., 2015). We infer that light accelerates chlorophyll decomposition and the transformation of chloroplasts into chromoplasts in the rind of detached citrus fruit. In terms of fruit quality, BA did not influence the TSS, TA, sugar, or organic acid content of calamondin juice, with the exception of AA. Cooper and Rasmussen (1968) concluded that degreening of calamondin fruit receiving AA treatment was caused by their release of ethylene. BA treatment may decrease AA in calamondin juice directly. Although there were no significant differences in TSS, TA, or organic acid content of the fruit juice between light and dark conditions, the contents of sugars and AA were higher in the light than in the dark. Huff (1984) reported that calamondin fruit accumulated soluble sugars in the pericarp as degreening was initiated. We suggest that light accelerates both degreening and the accumulation of sugars. The accumulation of sugar is an important quality factor in some citrus fruits, but it is less important for calamondin because the fruit has a low sugar content. Consequently, storage in the dark is better at preserving the green rind color after harvest. This study showed that BA-treated fruit stored under dark conditions retained a green rind color similar to that of newly harvested fruit for at least 9 days, which should allow short-term shipment and storage.
Table 2. The effects of BA treatment and light conditions on sugar content in calamondin fruit at 5 days and 9 days after treatment. Significance: NS, non-significant; *, P < 0.05; **, P < 0.01 in two-way ANOVA (n = 5); different letters show significant differences by Tukey's test at the 5% level.
Table 3. The effects of BA treatment and light conditions on AA, citric acid, and malic acid in calamondin fruit at 5 days and 9 days after treatment. Significance and letters as in Table 2.
Analysis of the incidence and influencing factors of abdominal distension in postoperative lung cancer patients in ICU based on real-world data: a retrospective cohort study
Background: Abdominal distension is a relatively common complication in postoperative lung cancer patients, which affects patients' early postoperative recovery to varying degrees. However, the incidence of abdominal distension in postoperative lung cancer patients and its influencing factors are not well understood. This study aims to explore the incidence of abdominal distension in postoperative lung cancer patients in the ICU based on real-world data and to analyze its influencing factors.
Methods: A retrospective cohort study was conducted, encompassing patients who underwent lung cancer resection in the Lung Cancer Center of West China Hospital of Sichuan University from April 2020 to April 2021. Patients younger than 18 years and those with limited information in their medical records were excluded. All data were obtained from the hospital HIS system. The influencing factors of abdominal distension were analyzed by univariate analysis and multiple logistic regression.
Results: A total of 1317 patients met the eligibility criteria and were divided into an abdominal distended group and a non-distended group according to whether abdominal distension occurred after surgery. Abdominal distension occurred in 182 cases (13.8%). The univariate analysis showed that, compared with the non-distended group, the abdominal distended group included more women (P = 0.021) and was older (P = 0.000), with lower BMI (P = 0.000), longer operation duration (P = 0.031), more patients with open thoracotomy (P = 0.000), more patients with pneumonectomy (P = 0.002), more patients with neoadjuvant chemotherapy (P = 0.000), more days of hospitalization on average (P = 0.000), and higher costs of hospitalization on average (P = 0.032). Multivariate logistic regression analysis showed that sex (OR = 0.526; 95% CI = 0.378–0.731), age (OR = 1.154; 95% CI = 1.022–1.304) and surgical approach (OR = 4.010; 95% CI = 2.781–5.781) were independent influencing factors for the occurrence of abdominal distension in patients after lung cancer surgery in the ICU.
Conclusions: The incidence of abdominal distension was high in postoperative lung cancer patients in the ICU, and female patients, older patients and patients with open thoracotomy were more likely to experience abdominal distension.
Trial registration: The study was registered with the Chinese Clinical Trials Registry (registration number ChiCTR2200061370).
Keywords: Lung cancer, Surgical resection, Abdominal distension
Analysis of the incidence and influencing factors of abdominal distension in postoperative lung cancer patients in ICU based on real-world data: a retrospective cohort study
Yan Liu 1, Tingting Tang 1, Chunyan Wang 1, Chunmei Wang 1 and Daxing Zhu 2,3*
Background
Lung cancer is the leading cause of cancer deaths worldwide [1]. In China, lung cancer has become the fastest growing malignancy in terms of incidence rate and mortality in the last 30 years [2].
Available treatments for lung cancer include surgery, chemotherapy, immunotherapy, and radiotherapy [3]. Surgery is the primary treatment for early-stage lung cancer patients, and it has also shown efficacy in advanced-stage disease after induction therapy in the context of multidisciplinary treatment models [4][5][6][7][8]. Studies have shown that a series of complications may occur after lung cancer surgery, including pulmonary infection, respiratory failure, cardiac arrhythmia and acute heart failure [9,10]. In clinical work, we found that surgical trauma, anesthetic agents, and mechanical ventilation predispose patients to abdominal distension after lung cancer surgery. Abdominal distension is defined as a measurable increase in abdominal girth, mainly manifested by gas retention, increased intra-abdominal pressure, and a feeling of fullness [11], and it affects patients' early postoperative recovery. However, compared with other serious complications, abdominal distension after lung cancer surgery is often overlooked; there are few studies on abdominal distension in postoperative lung cancer patients, and its course and influencing factors are not well understood.
A hospital information system (HIS) integrates patient, financial and material information to provide timely, comprehensive and accurate data to hospital staff in all departments, through the collection, storage, transmission, statistical analysis, comprehensive querying, report output and sharing of information. The objective of this study is to report the demographic and clinical characteristics of postoperative lung cancer patients during 2020-2021 based on real-world data from the HIS system. The focus is to report the incidence of abdominal distension in postoperative lung cancer patients in the ICU and to analyze its influencing factors.
Patients
Included in this study were patients who underwent surgical resection in the lung cancer center of West China Hospital of Sichuan University between April 1, 2020 and April 30, 2021. The inclusion criteria were: (1) patients with pathologically confirmed lung cancer requiring surgery; (2) patients aged 18 years and older; (3) patients whose complete case information was available; (4) patients who were transferred to the ICU for treatment after lung cancer surgery. The exclusion criteria were: (1) patients with comorbid gastrointestinal diseases or other serious diseases such as liver cirrhosis; (2) patients whose data were incomplete.
Research design
The study was a retrospective cohort study. The retrospective data were obtained from the HIS system. The study was approved by the hospital ethics committee (approval number: 2022 review No. 1046) and registered with the Chinese Clinical Trials Registry (registration number: ChiCTR2200061370). The requirement for informed consent was waived with the approval of the Bioethics Review Committee of West China Hospital, Sichuan University.
Data collection
Two investigators looked up medical records through the HIS system to collect demographic and clinically relevant information on the patients, including sex, age, BMI, surgical approach, basic medical history, chemotherapy, postoperative pain level, use of an analgesic pump after the operation, the times of tracheal intubation and extubation, blood potassium, blood calcium, length of ICU stay, days of hospitalization on average, and costs of hospitalization on average.
Judgment criteria for the degree of abdominal distension [12]
Where necessary, clinical assessment was combined with the X-ray/CT report to comprehensively determine whether a patient had abdominal distension and its degree.
Statistical analysis
Patients were grouped according to whether abdominal distension occurred, based on their symptoms and abdominal X-ray/CT reports, and SPSS 26.0 software was used for data processing and statistical analysis. The following 17 risk factors were analyzed by univariate analysis: sex, age, BMI, TNM stage, operation duration, sedation and curarization time, postoperative extubation time, surgical approach, type of surgical resection, basic medical history, neoadjuvant chemotherapy, postoperative pain level, postoperative opioid analgesic use, time to start oral intake, whether gas or stool was passed during the ICU stay, blood potassium, and blood calcium. Factors showing significant differences in the univariate analysis (sex, age, BMI, operation duration, surgical approach, type of surgical resection, neoadjuvant chemotherapy) were entered into a multivariate logistic regression analysis. Univariate analysis was also performed on three outcome indicators (length of ICU stay, mean number of days in hospital, and mean hospital costs) and on common postoperative complications of lung cancer surgery (lung infection, cardiac arrhythmia and hemorrhage), to further explore the factors influencing patients' total days and costs of hospitalization. Statistical significance was set at P < 0.05.
Results
The findings showed that among the 1317 postoperative lung cancer patients in the ICU who met the eligibility criteria, 182 cases (13.8%) had different degrees of abdominal distension. Compared with the non-distended group, the abdominal distended group included more women (P < 0.05) and was older (P < 0.01), with lower BMI (P < 0.01) and longer operation duration (P < 0.05). A higher proportion of open thoracotomy patients experienced abdominal distension (21.44%) compared with thoracoscopic surgery patients (6.94%) (P < 0.01). The incidence of abdominal distension was higher in patients with pneumonectomy (23.08%) than with other types of surgical resection (P < 0.01), and higher in patients with neoadjuvant chemotherapy (25%) than in those without (12.62%) (P < 0.01), as detailed in Table 1. It is worth noting that the use of perioperative anesthesia and analgesics has been standardized for all lung cancer patients in our hospital: analgesic pumps were used in all postoperative patients, and additional opioid analgesics were used in some patients with severe postoperative pain.
Firstly, the indicators that were statistically significant in the univariate analysis were selected as independent variables: sex (man 1, woman 0), age (18–20 as 1, 21–30 as 2, 31–40 as 3, 41–50 as 4, 51–60 as 5, 61–70 as 6, and 71–80 as 7), BMI (raw value), operation duration (raw value), surgical approach (open thoracotomy 1, VATS 0), type of surgical resection (wedge resection 1, segmentectomy 2, lobectomy 3, pneumonectomy 4, and others), and neoadjuvant chemotherapy or not (no 0, yes 1). Whether abdominal distension occurred after surgery was used as the dependent variable, and a multifactorial logistic regression analysis was performed. The results showed that sex, age and surgical approach were independent risk factors for the occurrence of abdominal distension in postoperative patients with lung cancer, as detailed in Table 2. Compared with the non-distended group, patients in the abdominal distended group had more days of hospitalization on average (P < 0.01) and higher costs of hospitalization on average (P < 0.05), but there was no statistical difference in the length of ICU admission (P > 0.05), as detailed in Table 3. Since a series of complications may occur after lung cancer surgery, possibly prolonging hospitalization and increasing its cost, we further analyzed the incidence of common complications after lung cancer surgery and their impact on patients' days and costs of hospitalization. The results showed that patients in the abdominal distended group had a higher incidence of complications such as lung infection, cardiac arrhythmia, and hemorrhage than the non-distended group, but none of the differences were statistically significant (P > 0.05), as detailed in Table 4. Patients with lung infection had more hospitalization days and higher hospitalization costs (P < 0.05). Patients with cardiac arrhythmia and hemorrhage had more hospitalization days and higher costs than patients in the normal group, but the differences were not statistically significant (P > 0.05). Pneumonectomy patients had more hospitalization days and higher hospitalization costs (P < 0.05), as detailed in Table 5.
Discussion
The results of the current study showed that the incidence of abdominal distension in postoperative lung cancer patients in the ICU was 13.8%, which is close to the incidence of cardiopulmonary complications (18.4%) [9]. The results of the multifactorial logistic regression analysis showed that women, older patients and patients undergoing open thoracotomy were at higher risk of abdominal distension; these were independent risk factors for postoperative abdominal distension in lung cancer.
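Outside SPSS, the multivariate step could be reproduced along the following lines; the file and column names are hypothetical, and the coding mirrors the scheme listed above:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("lung_icu_cohort.csv")  # hypothetical one-row-per-patient file

# Code age into the decade bands 1-7 used in the text (18-20, 21-30, ..., 71-80);
# sex, surgical approach and neoadjuvant chemotherapy are assumed pre-coded 0/1.
df["age_band"] = pd.cut(df["age"], bins=[17, 20, 30, 40, 50, 60, 70, 80],
                        labels=False) + 1

predictors = ["sex", "age_band", "bmi", "op_duration_min",
              "open_thoracotomy", "resection_type", "neoadjuvant_chemo"]
X = sm.add_constant(df[predictors])
y = df["abdominal_distension"]  # 1 = distended, 0 = not distended

model = sm.Logit(y, X).fit()

# Odds ratios with 95% confidence intervals, the form reported in Table 2.
ci = np.exp(model.conf_int())
print(pd.DataFrame({"OR": np.exp(model.params),
                    "CI_low": ci[0], "CI_high": ci[1]}))
```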
Abdominal distension is a subjective sensation of abdominal swelling accompanied by a visible increase in abdominal girth; it can occur in any part of the abdomen (upper, middle, lower or whole abdomen) and has a complex etiology and pathogenesis that is usually the result of multiple, incompletely understood factors [13,14]. Abdominal distension may lead to delayed feeding, anxiety and sleep disturbance, which may impair patients' recovery [15]. Previous studies have shown that abdominal distension may be associated with pathophysiological factors such as altered microbiota, abnormal gastrointestinal motility, abdominophrenic dyssynergia (APD) and visceral hypersensitivity reactions [11]. It has also been shown that abdominal distension may be associated with operative stress, postural immobilization, and slowed gastrointestinal motility due to perioperative use of opioids and anesthetics [12]. The study results showed that the incidence of abdominal distension was significantly higher in women (19%) than in men (13.11%) (P < 0.01), similar to the findings of Jiang [16]. The study of Xu [17] suggested that the postoperative gastrointestinal tract recovers relatively slowly in women; moreover, women are more sensitive than men in terms of emotional expression and more prone to negative emotions such as anxiety and depression [18]. As reported in the literature [19], the prevalence of clinical anxiety and depression in female lung cancer patients was as high as 30% and 24.7%, respectively, and female cancer patients were almost twice as likely as males (24.0% versus 12.9%) to report clinical levels of anxiety, which may to some extent increase the incidence of abdominal distension. In addition, influenced by traditional culture, male patients in China show a higher tolerance of their symptoms and a relatively low reporting rate of abdominal distension [17]. The above factors may therefore explain the higher incidence of abdominal distension in women than in men. This study showed that the risk of abdominal distension in postoperative lung cancer patients increases with age (OR = 1.154, 95% CI = 1.022–1.304). It is possible that, as age increases, physical function declines, metabolism slows down, and gastrointestinal peristalsis slows down, which can easily lead to gastric retention and abdominal distension. The incidence of abdominal distension was significantly higher in patients undergoing open thoracotomy (21.44%) than in patients undergoing thoracoscopic surgery (6.94%) (P < 0.01). With the advancement of surgical techniques, video-assisted thoracoscopic surgery (VATS) is gradually replacing open thoracotomy; it has a smaller incision and fewer postoperative complications than traditional open thoracotomy, is more conducive to reducing postoperative pain and the inflammatory immune response and to promoting rapid postoperative recovery, thereby shortening hospital stay, improving patients' quality of life, and possibly even improving long-term survival [20][21][22][23][24][25].
The increase in hospitalization days and costs may be the result of multiple factors, including abdominal distension, postoperative complications (lung infection), and the type of surgical resection (pneumonectomy). The results of the current study showed that hospitalization days and costs for patients with abdominal distension and postoperative complications were higher than for the normal group (P < 0.01), similar to the findings of Schulze [15] and Li [26]. This indicates that postoperative abdominal distension and lung infection may prolong patients' hospital stay and increase medical costs [27]. Recent advances in the early detection and screening of lung cancer have decreased the number of patients with advanced disease [28,29]. However, pneumonectomy is still necessary for 10% of patients undergoing surgical resection [30]. Pneumonectomy greatly increases the risk of postoperative complications and mortality [31,32]. The results of the current study showed that the length of ICU stay was longer in patients with pneumonectomy (approximately 1.69 days) than in patients with other types of surgical resection (approximately 1.16 days); in addition, pneumonectomy patients had a higher incidence of abdominal distension, which further contributed to their higher hospitalization costs and days compared with other types of surgical resection. Therefore, medical staff should fully evaluate female patients before surgery, understand the psychological changes of patients at different ages and provide them with targeted health education, choose the surgical method reasonably, and strengthen the understanding, prevention and management of perioperative abdominal distension in lung cancer patients. This is important for preventing postoperative abdominal distension and reducing patients' length of hospitalization and economic burden. The study had several limitations. First, as a single-center retrospective cohort study, it included data obtained from secondary data sources (the HIS system); consequently, information bias could be present. Second, the study was conducted at a single hospital and is therefore not representative of the whole region or country. Third, only selected influencing factors were collected and analyzed, and the included indicators may not be comprehensive. Subsequent prospective, multicenter, large-sample studies should be conducted.
Conclusions
In conclusion, the incidence of abdominal distension was high in postoperative lung cancer patients in the ICU; it is closely related to sex, age and surgical approach, and may extend patients' hospital stay and increase medical costs.
Table 1. Comparison of risk factors between the abdominal distension and non-distended groups (n = 1317). BMI: body mass index; normal range of blood potassium: 3.5–5.5 mmol/L; normal range of blood calcium: 2.25–2.75 mmol/L; * statistically significant.
Table 2. Logistic regression analysis of risk factors for abdominal distension. * Statistically significant.
Table 5. Comparison of length of stay and cost between patients with different complications and types of surgical resection [M(Q1, Q3)] (n = 1317).
Dual surgeon operating to improve patient safety
The COVID-19 pandemic resulted in an unprecedented reduction in the delivery of surgical services worldwide, especially in non-urgent, non-cancer procedures. A prolonged period without operating (or 'layoff period') can result in surgeons experiencing skill fade (both technical and non-technical) and a loss of confidence. While senior surgeons in the UK may be General Medical Council (GMC) validated and capable of performing a procedure, a loss of 'currency' may increase the risk of error and intraoperative patient harm, particularly if unexpected or adverse events are encountered. Dual surgeon operating may mitigate risks to patient safety as surgeons regain currency while returning to non-urgent operating, and may also be beneficial after the greatly reduced activity observed during the COVID-19 pandemic for low-volume complex operations. In addition, it could be a useful tool for annual appraisal, sharing updated surgical techniques and helping team cohesion. This paper explores lessons from aviation, a leading industry in human factors principles, for regaining surgical skills currency. We discuss real and perceived barriers to dual surgeon operating including finance, training, substantial patient waiting lists, and intraoperative power dynamics.
Reduced operating
The COVID-19 pandemic has had a considerable impact on the provision of surgical services around the world. Redeployment of staff and the need to maximise bed capacity to deal with the increasing demands of the pandemic have resulted in elective surgery being postponed in many hospitals. 1 Some hospitals have created COVID-light sites, often utilising the independent (private) sector, which has enabled urgent operations and cancer work to continue, but many have been unable to achieve this goal amid continued surges of COVID-19 cases. In a 2020 Royal College of Surgeons of England (RCSEng) survey, 33% of respondents were unable to undertake elective or planned procedures within the preceding four weeks, and 38% had been redeployed to alternative, usually non-surgical roles. [2][3][4] The Federation of Surgical Specialty Associations (FSSA) has provided helpful guidance on surgical prioritisation during the pandemic. 5 The RCSEng issued guidance on how teams can optimise surgical pathways to manage the backlog of operations as efficiently as possible. 6 The impact of the pandemic on surgical services is likely to be felt for a long time as teams battle the backlog of cases. In November 2020, a record number of patients were awaiting hospital treatment in England, with 192,169 patients having waited more than 52 weeks. 7
Reduced 'currency'
While high-throughput surgical services work through the backlog of postponed or cancelled operations, patient safety must not be compromised. Some operations may be expedited by the increased use of local and regional anaesthesia, day-case units and extended operating hours; others, including more complex surgery, must be approached with more caution. 6 Many surgeons will have experienced a prolonged period without performing complex non-urgent surgery (a 'layoff' period) and may experience skill fade or loss of confidence. 8,9 An extensive systematic review by the General Medical Council (GMC) highlights the deterioration of both technical and non-technical skills after an extended time without practice.
10 In aviation, this is known as being 'out of currency' or 'not current'. 11 Healthcare owes many of its developments in human factors (HF) research and improvements in patient safety to the pioneering work conducted by the airline industry. Despite these being different high-reliability organisations (HROs), parallels can be drawn between the individual and organisational factors that can lead to catastrophic error (Fig. 1). As a result, lessons and developments in one industry are often applicable to the other, which can lead to valuable change and innovation. Layoff periods occur more frequently in pilots than in surgeons due to aircraft, financial, and organisational factors. After a period of not flying, pilots must be deemed current as well as legal and medically fit to fly. Similarly, surgeons returning to normal operating after the restrictions imposed by COVID-19 may have a valid licence to practise and be on the specialist register through GMC revalidation and appraisal, but may have lost 'currency'. 11
Lessons from aviation for 'getting current'
Pilots are subject to a formal system of skills currency, mandated by regulators such as the Civil Aviation Authority (CAA) in the UK. When pilots do not meet currency requirements, they must undergo a period of supervision, flying with a safety pilot/training captain. The minimum legal requirements for private pilots are less strict than those in military or commercial airline environments, reflecting the variety of activity. Pilots and flying organisations must therefore take more responsibility for maintaining their own currency. A surgical skills currency barometer, adapted from general aviation (GA), prompts surgeons to reflect on their experience over the past 12 months to assess their currency when planning a return to complex non-urgent surgery. 11 Given the skill fade that occurs over prolonged periods without adequate practice and rehearsal of skills, 10 the currency barometer is a useful reflective exercise that could help to highlight areas of currency deficit. Unlike in aviation, maintaining currency in surgery is not currently mandatory for practice. The NHS Improvement Getting It Right First Time (GIRFT) initiative advocates the introduction of minimum annual operating numbers for consultant and SAS surgeons for each procedure. 12 This remains controversial, but few would deny the benefit of working with an experienced, and current, peer (the equivalent of a safety pilot) when undertaking a procedure after months without practice.
Dual surgeon operating
Working closely with experienced peers is not a new concept in healthcare. For many years colleagues have utilised the power of working within multidisciplinary teams (MDTs), with good evidence that this improves patient outcomes. 13 Interaction with other clinicians enables shared decision making, supervision, mentorship, and the combination of skills and experience. 14 These attributes are particularly important when performing unfamiliar, difficult, or less common procedures. Operating with an experienced colleague offers a degree of peer-to-peer support, boosting confidence and reducing cognitive load. It may also prevent fatigue when performing long operations, reducing the risk of error. 15 Just as pilots benefit from a brief period of flying with an experienced training pilot, operating alongside an experienced peer might be judicious when returning to complex surgery after a layoff period.
The need for dual surgeon operating is both procedure and operator dependent, but can be guided by the surgical skills currency barometer and personal reflection. 11 It is unlikely to be required for all procedures but can be highly valuable when planning for long, difficult, complex, less familiar or infrequently performed operations. The decision to start dual surgeon operating may also be department-led and should be discussed as part of the team's plan to return to normal operating capacity. This is already a standard operating procedure (SOP) in some cases; it is not uncommon practice in challenging reconstructive or re-do procedures, transplant surgery, and complex spinal or orthopaedic surgery. In complex knee revision surgery, 40.5% of trusts report that complex work is already undertaken by two operating surgeons, with a further 55% of trusts aiming to introduce this. 12 Importantly, low currency does not necessarily mean an individual is not capable of performing an operation under routine circumstances, especially after years of experience. Just as a pilot is unlikely to forget how to fly, surgeons are unlikely to forget how to perform procedures. Surgeons and pilots can, however, become deskilled ('rusty'), resulting in slower decision making and reduced technical precision. It follows that if workload increases as a result of adverse or unexpected events, the risk of errors occurring with associated harm would likely increase. In HROs, reduced currency may have considerable consequences for all stakeholders. Just as one would expect pilots to be supervised after a prolonged layoff, most patients would wish the same for their surgeons. There is currently a paucity of publications assessing the impact of two-surgeon operating on outcomes and safety measures. Such studies are made difficult by the relative rarity of some complications and the heterogeneity of complex procedures. Evidence (mostly from spinal and breast surgery) suggests that dual surgeon operating can reduce the duration of surgery, length of hospital stay, and blood loss, and may prevent some postoperative complications. 16-22 Dual surgeon operating is useful for sharing updates in technical and non-technical surgical practice, as well as being a potential annual appraisal tool. In commercial aviation, all pilots including senior training captains undergo regular simulation assessment to update skills in line with the latest regulations. Single-pilot operators, for example bush cargo pilots in austere environments, undergo instructor appraisal, even if they are legal, fit, and current to fly. Senior surgeons working in NHS organisations operate frequently with senior trainees or post-CCT (certificate of completion of training) fellows. These colleagues may bring knowledge from other surgical practices to allow updating of skills. This may be more difficult to achieve for surgeons working in smaller practices, or predominantly in non-NHS sectors.

Implications for training

Changes to healthcare services in response to the COVID-19 pandemic have had an unprecedented impact on surgical training. 23 Redeployment of trainees to non-surgical roles is common, with up to 57% of trainees being redeployed in some locations. 23,24 Many trainees will require extensions to their specialty training programmes due to lost training opportunities and operative experience. A recent audit of surgical trainees in the UK found a 50% reduction in trainee logged operations across surgical specialties during the COVID-19 pandemic. 25
Care must be taken to ensure that dual surgeon operating remains a valuable training experience for the entire surgical team. Complex and challenging operations present an opportunity for multiple trainees to learn from the combined experience and expertise of two senior surgeons. Before a procedure, trainers should discuss learning objectives, as well as the operative plan, possible perioperative challenges, and how these will be mitigated or managed. Following the procedure, after debriefing and thanking the team, a further debrief should take place with trainees. This should include an overview of operative events, the postoperative plan, discussion of learning objectives and suggestions for further education. Trainees must also take advantage of dual surgeon operating scenarios to observe and develop non-technical skills including communication, team-working, and leadership. Dual surgeon operating is not synonymous with two-consultant operating. Over-learning, or the amount a skill has been practised beyond initial mastery, can reduce skill fade over time. 26 Trainees may therefore experience greater skill fade, having consolidated less operative experience than more senior colleagues prior to a prolonged layoff. 27 Conversely, in some situations a consultant may be less current than a senior trainee, for example if they have experienced a longer layoff period. Trainers must therefore consider the currency and capability of both operating and assisting surgeons when planning operative lists. Dual surgeon operating in high-volume procedures will be needed more initially, but this need is likely to reduce rapidly for many procedures as surgeons become more practised on their return to normal operating. While some may worry that this could impact training case numbers, it is anticipated that there will be a steady return to normal operating, except for the most complex and challenging cases. For low-volume, complex operating, routine dual surgeon operating allows surgeons to remain current for cases they would otherwise rarely encounter.

Cost

The potential benefits of dual surgeon operating on return to work from a period of layoff could justify both the financial cost of two senior clinicians' time and the impact on theatre throughput when trying to manage backlogs. Dual surgeon operating is a temporary investment in time and money, to potentially improve patient safety, outcomes, and surgeon confidence. The need for dual surgeon operating is surgeon, unit, and operation dependent, being most applicable to larger, more challenging operations, and is unlikely to be required for less complex procedures. Additionally, the pooled expertise of two experienced surgeons decreases operating time. 16,17,19-22 For low-volume complex procedures, routine dual surgeon operating may reduce costs over the entire duration of the patient's treatment by reducing the number of subsequent operations or non-operative treatment for surgical complications, as well as the length of hospital stay. 16-19,22

Implementation

Loss of surgical currency is not synonymous with loss of capability, but surgeons are human, fallible, and can experience skill fade after a prolonged layoff. All stakeholders should recognise the value of dual surgeon operating in the acute phase of returning to normal surgical capacity, and how its implications for patient safety justify the temporary investment.
Although some studies could be used to guide the definition of a 'prolonged' layoff period using skill fade over time, this evidence is often based on simulation or on operation- or surgeon-specific practice, and does not adequately reflect the heterogeneity and complexity of surgical skills. The period of time required to experience skill fade and loss of confidence will differ between each team, individual, and operation. Tools such as the surgical skills currency barometer can help surgeons reflect on their level of currency after any period of time away from operating, or between low-volume complex procedures. Dual surgeon operating requires departmental preparation and planning to allow for rota changes and optimal utilisation of theatre time. Potential barriers include anticipated difficulties in intraoperative communication between operating surgeons, and power dynamics. To avoid these pitfalls, communication must be prioritised throughout the procedure, starting at the briefing. In aviation, if two captains fly together one must be designated as the legal Pilot in Command of the flight, therefore being legally responsible for its safe conduct. Likewise, one peer operator should be chosen as the 'responsible or lead surgeon', to avoid confusion. To ensure synchronous operating, the briefing should include a detailed account of the operative plan, anticipated challenges and strategies to overcome these. Unlike in aviation, advanced high-fidelity simulation is less accessible in healthcare, but the merits of simulation training as a supplement to experience in theatre have been widely reported. 28,29 Virtual reality surgical simulation software is becoming increasingly accessible on mobile phones, tablets, and home computers. Simulation technology and bench-top training models are also available at most hospitals. Simulation training should be utilised by surgeons during and following a layoff period to minimise attrition of surgical skill, which may support a faster return to currency and confidence.

Wider implications

These recommendations are not restricted to operating following the COVID-19 pandemic, though this represents the largest simultaneous reduction in activity for surgeons worldwide. Many surgeons will experience a prolonged layoff during their career and some may experience several. A few examples include time out for research or out-of-programme experiences, parental leave, and sickness. On return to work, surgeons who have experienced a loss of currency may also have experienced skill fade and a loss of confidence. 30 Any surgeon returning to work from a prolonged layoff should benefit from a collaborative and supportive team-working environment. A period of dual surgeon operating is likely to improve confidence and patient safety while surgeons regain currency.
Multi-Objective Design Optimization of Flexible Manufacturing Systems Using Design of Simulation Experiments: A Comparative Study

One of the basic components of Industry 4.0 is the design of a flexible manufacturing system (FMS), which involves the choice of parameters to optimize its performance. Discrete event simulation (DES) models allow the user to understand the performance of dynamic and stochastic systems and to support FMS diagnostics and design. In combination with DES models, optimization methods are often used to search for optimal designs, which, above all, involve more than one objective function to be optimized simultaneously. These methods are called multi-objective simulation-optimization (MOSO) methods. Numerous MOSO methods have been developed in the literature, which has spawned many proposed classifications of MOSO methods. However, the performance of these methods is not guaranteed because there is an absence of comparative studies. Moreover, previous classifications have been focused on general MOSO methods and are rarely related to the specific area of manufacturing design. For this reason, a new conceptual classification of MOSO used in FMS design is proposed. After that, four MOSO methods are selected, according to this classification, and compared through a detailed case study related to the FMS design problem. All of the studied methods are based on Design of Experiments (DoE). Two of them are metamodel-based approaches that integrate Goal Programming (GP) and the Desirability Function (DF), respectively. The other two methods are not metamodel-based approaches; they integrate Grey Relational Analysis (GRA) and the VIKOR method, respectively. The comparative results show that the GP and VIKOR methods can result in better optimization than the DF and GRA methods. Thus, the use of the simulation metamodel cannot prove its superiority in all situations.

Introduction

The fourth industrial revolution, known as Industry 4.0, is considered the upcoming significant technology development, as it allows customers to receive products based on their expectations in terms of product varieties and quantities [1]. Industry 4.0 can be characterized by its broadening focus on automation, decentralization, system integration, cyber-physical systems, etc. [2]. One of the basic components of Industry 4.0 is the Flexible Manufacturing System (FMS), which is an advanced production system that interconnects production resources such as machines and material handling devices. MOSO methods can be classified according to several criteria:

1. According to the articulation of the preferences of the Decision Maker (DM). This first classification criterion was proposed by Rosen et al. [8]. Four groups of methods are possible: (1) a priori MOSO methods, in which the DM expresses their preferences before the optimization is conducted; (2) a posteriori MOSO methods, in which the DM selects a solution at the end of the search (although this approach avoids the disadvantage of the a priori approach by taking preference information into account only at the end of the optimization process, it can lead to extremely high computational costs); and (3) progressive articulation of DM preferences (also named interactive MOSO methods), in which the approaches repeatedly solicit preference information from the DM to guide the optimization process. These methods enable the DM to change his preferences during the optimization process by incorporating knowledge that only becomes available during the search.
Interactive methods may be useful when simulation runs are expensive and the DM is readily available to provide input. Finally, the fourth group involves (4) non-preference MOSO methods, which operate without regard to the preferences of the DM.

2. According to the search set and the nature of the variables. This second classification criterion was proposed by Hunter et al. [7]. Three groups of methods are possible: (1) MOSO on finite sets, called Multi-Objective Ranking and Selection (MORS); (2) MOSO with integer-ordered decision variables; and (3) MOSO with continuous decision variables. In the context of integer-ordered and continuous decision variables, the focus is on methods that provably converge to a local efficient set under natural ordering. Furthermore, the methods of these three groups can also be divided into two groups according to the type of the final solution: global solution versus local solution [7]. The MORS methods provide a global solution, in which simulation replications are usually obtained from every point in the finite feasible set, and the estimated solution is the global estimated best. In addition, metaheuristic methods (also named random search), such as simulated annealing, Genetic Algorithms (GA), Tabu Search (TS), etc., also provide global solutions. Metaheuristic methods are efficient because they appropriately control stochastic error. However, the task is more challenging as it results in a number of solutions with different trade-offs among criteria, also known as Pareto optimal or efficient solutions.

3. According to the use or non-use of metamodels. This third classification is proposed implicitly in many research studies, such as in Barton and Meckesheimer [9], do Amaral et al. [10], etc. A metamodel, or model of the simulation model, simplifies the SO in two ways: the metamodel response is deterministic rather than stochastic, and the run times are generally much shorter than those of the original simulation. The metamodel is used to identify and estimate the relationship between the inputs and outputs of the simulation model, forming a mathematical function that is used to evaluate possible solutions in the optimization process. For example, Hassannayebi et al. [11] highlight that the adoption of metamodel-based SO in industry and service problems has grown due to its potential to reduce the number of simulation rounds necessary in the optimization process. Note that MOSO methods based on a metamodel also provide a global solution, as discussed in the second classification criterion.

FMS Design Literature Review

The study of Diaz et al. [12] presents a MOSO approach for reconfigurable production lines subject to scalable capacities. The production line produces two product families and is composed of 18 workstations. The authors utilized a Non-Dominated Sorting Genetic Algorithm II (NSGA-II), a variant of GA, to address the assignment of tasks to workstations and buffer allocation for simultaneously maximizing the Throughput Rate (TR) and minimizing the total buffer capacity. Červeňanská et al. [13] explored MOSO of an FMS via a scalar simulation-based optimization method. The authors integrated simulation with Design of Experiments (DoE) and the Weighted Sum and Weighted Product multi-objective methods to optimize the total number of products, the Mean Flow Time (MFT), the Machine UTILization (MUTIL), and the average cost per unit of part. The modeled FMS produces two different products with eight workstations using parallel automated work machines.
The paper of Hussain and Ali [14] studied the impact of four design and control factors, namely control architectures, sequencing flexibility, buffer capacity, and scheduling rule, on the performance of an FMS. The studied FMS is composed of six Computer Numerical Control (CNC) machines producing six different types of parts. The system is evaluated on the basis of make-span, average MUTIL, and the average Waiting Time (WT) of parts in the queue using the Taguchi-Grey multi-objective method. Apornak et al. [15] considered a multi-objective optimization of five performance measures in an FMS. The authors addressed the optimal set of queue capacities, queue disciplines, conveyor and transporter speeds, and operational setup times in an FMS, with the objectives of minimizing the average WT of raw materials and two average Process Times (PT), as well as the transporter and assembler product outputs. The studied FMS is composed of three work stations producing various kinds of seats for freight cars. Using DoE, the authors simulated and collected the performance measures of 36 random scenarios. Regression analysis was then used to describe the metamodel of each performance measure. Consequently, the Response Surface Methodology (RSM) was applied to optimize the five objective functions. Ahmadi et al. [16] applied and compared two Evolutionary Algorithms (EA), NSGA-II and NRGA, to simultaneously improve the make-span and the stability of the schedule. This stability is evaluated by measuring the deviation of the start and completion times of each job between the prescheduled and realized schedules. The simulation is used to evaluate the state and condition of machine breakdowns on a variety of manufacturing systems. Freitag and Hildebrandt [17] used multi-objective simulation-based optimization to create a control strategy for an FMS by considering earliness and tardiness performance measures. This paper investigates the effect of 10 different attributes, which are the PT, the average PT of all waiting jobs, the Setup Time (ST), the average ST of all waiting jobs, the number of remaining operations, the time in system, the time in queue, the batch family size, the time until the operational due date, and the average time until the operational due date. The authors used a GA coupled with the simulation to solve the scheduling rule choice problem for a complex FMS. Ammar et al. [18] investigated the number of workers to be assigned to an FMS, as well as the skills that each worker must have, in a multi-objective optimization problem. The two objectives considered are minimizing the expected labor cost associated with the manufacturing team and minimizing the expected average task TR. The proposed multi-objective simulation optimization approach is applied to the design of teams of a manufacturing system, using the EA NSGA-II connected to a simulation model developed using Arena. Dengiz et al. [19] implemented a multi-objective optimization method for an FMS based on simulation through DoE, a regression metamodel, and the Goal Programming (GP) method. The authors modeled and simulated an FMS with four workstations using the ARENA simulation software. Then, they applied the multi-objective optimization method to optimize the TR and MFT in the system by taking into consideration the number of operators, the velocity of material handling, the number of tools, the scheduling rules, and the number of pallets as design and control parameters. Using simulation results, Bouslah et al.
[20] developed and solved a mathematical model based on RSM. The main objectives of the authors were to determine the optimal batch size, the optimal hedging level, and the economic sampling plan design, which minimized the average total holding cost (including the storage of the Work In Process (WIP) and the final inventory stock), the average backlog cost, the average cost of sampling, the average costs of 100% inspection and rectification of the rejected batches, the average cost of transportation, and the average cost of replacement of non-conforming items sold to the consumer. However, the authors did not mention any details on the structure of the simulated manufacturing system. Iç et al. [21] considered a case study of simulation-based multi-objective optimization using the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) method hybridized with the Taguchi design technique. The studied production system is an FMS department composed of four CNC machining centers producing three part types. The authors based their optimization case on the cycle time, TR, and work in queue as performance measures. In addition, they used five factors as decision variables: the number of cutting tools, the number of operators, the number of pallets, the velocity of the transporter robots, and the pallet selection strategy. The paper of Wang et al. [22] applies an MOSO method to a flexible shop scheduling problem. The two investigated objective functions are the minimum of the maximum PT and the minimum of the maximum machine load. The main constraints considered are the production resources and the technological process. The scheduling model of an FMS is established using simulation software and integrated with the NSGA-II EA. In Zhang et al. [23], a hybrid method based on a hybrid GA and TS is used to address a multi-objective FMS scheduling problem. Two objectives, the make-span and the starting time deviations, are considered to improve schedule efficiency and stability. A case study of a six-machine FMS was conducted with four different job arrival rates and six different numbers of job arrivals. Azadeh et al. [24] integrated simulation with the GP method and the DoE technique to address a multi-objective scheduling problem of an FMS. The proposed method was applied to a real textile shop floor to minimize make-span and tardiness. The authors determined the decision parameters by using the DoE technique, estimating through meta-modeling the effects of the dyeing machine type, the temperature of the printing, the temperature and the number of center machines, and the scheduling rules. Then, they used GP to find the optimal values of these decision variables, which are subject to a set of technical and managerial constraints. Um et al. [25] presented the simulation-based multi-objective optimization of the design of an FMS with Automated Guided Vehicles (AGVs). Their principal objectives were to minimize congestion and utilization and to maximize TR based on many parameters, including the number, velocity, and dispatch rule of the AGVs, part types, scheduling, and buffer sizes. In this paper, the authors considered a nonlinear programming method combined with an evolution strategy. Nonlinear programming was used to determine the design parameters of the system through multi-factorial and regression analyses, and an evolution strategy was used to verify each parameter for simulation-based optimization. Syberfeldt et al.
[26] described the use of Artificial Neural Networks (ANN) and EAs as MOSO methods applied to the manufacturing cell at Volvo Aero. The two investigated objectives were the maximization of cell utilization and the minimization of overdue components, considering the component inter-arrival times and due date as decision criteria. Kuo et al. [27] proposed a practical case of the Grey-based Taguchi method as a MOSO method for a company that provides integrated circuit packaging services. The authors aimed to optimize the TR and cycle time performance of ink marking machines: to avoid a backlog of orders or lost customers, the TR of the system must be increased. They based their methodology on five three-level control factors, which are the PT, the machine buffer size, the time between adjustments, the ratio of the adjusted PT to the original PT, and the mean time between failures. Oyarbide-Zubillaga et al. [28] focused on the determination of the optimal preventive maintenance frequencies for multi-equipment systems. The authors applied simulation and NSGA-II to the multi-objective optimization problem of preventive maintenance activities to minimize the system's cost and to maximize profit, considering the production speed, the percentage of unavailability of a machine due to corrective maintenance, and the fraction of time before and after the last maintenance as control factors. The system cost was defined as the sum of the preventive and corrective maintenance, the production speed lost, and the quality costs for each of the machines. Profit is the result of selling non-defective products. Park et al. [29] presented a method for determining the design and control parameters of an FMS with multi-objective performance via a full factorial DoE, regression analysis, and trade-off programming. A hypothetical FMS with six workstations was modeled and simulated. The number, speed, and dispatching rules of the AGVs, in addition to the number of pallets, the buffer sizes, and the loading and routing scheduling rules, were considered as control parameters. These eight parameters were simultaneously determined by compromising among the performance measures of TR, delay, MUTIL, and WIP, which were formulated using regression analysis.

The Proposed Conceptual Classification of MOSO for FMS Design

There are many MOSO methods applied to FMS design. According to the previous literature review, it is better to classify them into three main groups: Group A, Group B, and Group C, as detailed in Table 1. This classification is applicable regardless of the articulation of the DM's preferences. It should be noted that all of the previous MOSO methods applied in the design of FMSs provide global solutions, and a priori DM preferences are generally applied. Group C (which uses neither DoE nor a metamodel) covers iterative simulation and optimization using principally metaheuristics for random design search, such as simulated annealing, genetic algorithms, etc. Only in this group is the articulation of the preferences of the DM important. Table 2 summarizes the methods and techniques used in the MOSO methods applied to FMS design. The presence of a cross "X" at a row and column intersection means that the research study stated in the row uses the method mentioned in the column. It shows that all of the previous studies have applied a global solution method. These methods can be classified easily, according to the proposed classification, into three groups (A, B, and C). Group C contains complex optimization techniques using metaheuristics, such as GA, TS, EA, etc.
The performance of MOSO methods is not guaranteed because there is an absence of comparative studies. None of the previous studies has compared different MOSO methods.

The Objective of the Case Study

Our main contribution is to fill these gaps in the literature and to conduct a study of several relatively straightforward simulation-based FMS optimization methodologies that cover almost all categories of the optimization methods classification. Our study investigates and compares the applicability and performance of the Goal Programming (GP) method, the Desirability Function (DF) method, Grey Relational Analysis (GRA), and the VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) method. All of these methods are based on the DoE technique: they must be preceded by a design of experiments to program and sometimes analyze the simulation results. Moreover, these four multi-objective optimization methods have in common the type of preferences of the DM; indeed, they are all based on an a priori decision of the DM for the choice of the objectives. On the other hand, the two methods GP and DF use the simulation-based metamodel technique and combine continuous and integer decision variables to solve the multi-objective optimization problem, while the two other methods, GRA and VIKOR, are based on the RS technique and use exclusively integer decision variables. The solutions reached by the GP and DF methods are then global, and those reached by the GRA and VIKOR methods are local. In this study, we are interested in four multi-objective optimization methods in the context of FMS. An application to an FMS will be used as a basis to compare the performances of these methods. It is mainly a matter of comparing the deviations between their results and the expected target values. Figure 1 describes in detail the adopted MOSO methodologies applied to the FMS. These methodologies are essentially made up of three stages, each consisting of various steps. In the first stage, the primary step starts with the selection and definition of the FMS factor levels and performance measures. Next, the DoE is constructed, and the corresponding simulation models are developed using the ARENA 14 discrete event simulation software. In the final step of this first stage, simulations are run to collect data for every studied performance measure. These simulation results are then analyzed in the second stage by one of the four adopted multi-objective optimization methods. The steps of this stage are discussed in detail in the following paragraphs. Finally, the optimum factor levels are adopted in the last stage of the multi-objective optimization method. A sketch of this three-stage flow is given below.
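As referenced above, the following is a minimal Python-style sketch of the three-stage flow of Figure 1. It is an illustration only: the function names and the returned measures are hypothetical stand-ins, since the actual simulations in this study were run in ARENA 14 and the optimizers are the four methods described below.

```python
from itertools import product
import random

def build_full_factorial(factors):
    """Stage 1a: enumerate every combination of factor levels (the DoE)."""
    names = list(factors)
    return [dict(zip(names, combo)) for combo in product(*factors.values())]

def simulate_fms(setting):
    """Stage 1b: stand-in for one ARENA run; returns made-up measures."""
    rng = random.Random(str(sorted(setting.items())))
    return {"MFT": rng.uniform(50, 200), "TR": rng.uniform(5, 20)}

def run_moso_pipeline(factors, optimizer):
    design = build_full_factorial(factors)        # Stage 1: DoE
    results = [simulate_fms(s) for s in design]   # Stage 1: simulate
    # Stage 2: analyze with one of GP, DF, GRA, or VIKOR.
    # Stage 3: adopt the returned optimum factor levels.
    return optimizer(design, results)

# toy usage: the 'optimizer' here simply picks the setting with the largest TR
best = run_moso_pipeline(
    {"LAYOUT": ["FL", "CL"], "BS": [5, 10]},
    lambda d, r: d[max(range(len(r)), key=lambda i: r[i]["TR"])])
print(best)
```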
Materials and Methods

The FMS investigated in this study is inspired by Pitchuka et al. [30]. An FMS is a manufacturing system characterized by a certain flexibility that allows the system to react in the case of changes. This flexibility is considered to fall into two categories. The first one, called routing flexibility, generally covers the system's ability to be changed to produce new product types. The second category is called machine flexibility, which consists of the ability to use various machines to perform the same manufacturing operation on a part.

• To capture the effect of FMS flexibility on its performance, this research adopts different machine layouts (LAYOUT) for the studied FMS. Indeed, the Functional Layout (FL) and the Cellular Layout (CL) are the two machine layouts most used in FMSs. In FL, functionally similar machines are grouped into departments, and all machines of every department can perform production operations for any incoming part [31]. CL, however, is made up of independent manufacturing cells. Each of these cells is made up of different machine types dedicated to the treatment of similar parts grouped into families. In addition, this work also aimed to measure the effect of part Batch Size (BS), part Inter-Arrival Time (IAT), and scheduling RULE (RULE) on FMS performance (Table 3). The IAT, defined as the difference between the arrival times to the FMS of two consecutive parts, is generally generated by common probabilistic laws (see the sketch after this list). In addition, parts are grouped into batches to reduce the machine's setup repetitions and the transport times between work stations [31]. Furthermore, parts arriving at any work station are made to wait in a queue until the required machine becomes available. Once this required machine is idle, parts must be selected from the waiting queue based on scheduling rules [32][33][34]. As shown in Table 3, each of the considered FMS factors is studied at 2 levels.

• The FMS considered is composed of 8 machines grouped into 3 departments in FL and 2 cells in CL. Two of the departments are composed of 3 machines each, while the third comprises only 2 machines. This FMS is also characterized by two part families, each composed of 2 part types. Each type of part requires 2 to 5 manufacturing operations (Table 4).

• The setup and processing times for each type of part are provided in Table 5. Setup times on every machine can be reduced or cancelled by the setup factor (δ) depending on the similarity of the successive parts' family or type. Indeed, if successive parts belong to the same family, the subsequent part's setup time must be reduced by a factor of δ = 0.5. On the other hand, if these successive parts have the same type, no machine setup is needed, and the subsequent part's setup time must be cancelled by a factor of δ = 0. Transfer times in the two layouts follow a statistical uniform law between 10 and 16 min.

• To characterize the fluidity of the parts flow in an FMS, different optimization studies have used WIP and MFT as major performance measures [31]. WIP has mainly been measured as the number of parts in the system, and MFT is simply obtained by averaging all durations between every part's exit time and entry time in the FMS. The TR of production was adopted as the third performance measure. To evaluate TR, it is normal to measure the number of processed parts per unit of time. The maximization of such a performance measure reflects the best use of material and human resources. To enhance the efficiency of FMS piloting, various optimization studies have used the waiting and transfer times (WT and TT) as performance indicators, and they essentially aimed to minimize these two indicators.
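As referenced in the first bullet above, the following is a minimal Python sketch of how part arrivals and batch completions could be generated. The exponential IAT law and all function names are assumptions for illustration, since the text only states that the IAT follows common probabilistic laws.

```python
import random

def part_arrival_times(n_parts, mean_iat):
    """Successive part arrival times; an exponential law is assumed here."""
    t = 0.0
    times = []
    for _ in range(n_parts):
        t += random.expovariate(1.0 / mean_iat)  # one IAT per arriving part
        times.append(t)
    return times

def batch_release_times(arrivals, bs):
    """A batch of size bs is complete when its bs-th part has arrived."""
    return [arrivals[i + bs - 1] for i in range(0, len(arrivals) - bs + 1, bs)]

arrivals = part_arrival_times(n_parts=20, mean_iat=25.0)
print(batch_release_times(arrivals, bs=5))  # four batch completion times
```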
The Simulation Model

FMS simulation models were built using Arena 14.0 software. The FL and CL models are composed of three parts: "Parts arriving", "Departments" or "Cells", and "System exit":

• Parts enter the system through a "Create" module named "Parts Arrival", in which the BS and IAT times are specified. Then, they are grouped into batches by a "Batch" module named "Arrival Parts Grouping", to assign them their corresponding types through an "Assign" module named "Part Type". Due to the stochastic nature of their PT and ST, these batches are separated into unit products through a "Separate" module called "Parts Separation" to assign each of them their execution times through one of the four "Assign" modules named "Attribute Part i". However, a preliminary step must be performed through a "Decide" module called "Parts Sorting" to direct each type of product to the corresponding "Assign" module. The products then proceed through a "Batch" module named "Parts Grouping" before proceeding through the "Route" module named "Transfer to System" (Figure 2).

• As soon as a products batch arrives in one of the departments, it is separated into unit products and put on hold in the department queue via the "Hold" module named "Waiting Queue Department i". This queue is governed by a "Queue" module in which the scheduling rule must be specified. Once one of the department machines becomes free, the selected waiting product is released from the "Hold" module. It then passes through a test, represented by the "Decide" module named "Machine Selection", which directs it toward this free machine.
The processed products of the machines are grouped again into batches by the "Batch" module named "Grouping of Processed Parts Department i", which succeeds these machines. Finally, each batch of products is transferred to the next step in its production sequence through the module called "Route Department i" (Figure 3).

• In the case of the CL simulation model, as soon as a batch of products arrives in one of the cells, it is directed to the first machine in its production sequence. This batch is then separated into unit products by a "Separate" module called "Separation Parts Machine i". These products are then placed on hold in the queue of the machine via a "Hold" module named "Waiting Queue Machine i" until this machine becomes available. The choice of products from the machine queue is made according to the priority rule defined in the "Queue" module corresponding to this "Hold" module. The products processed by one of the machines are grouped into batches via the "Batch" module called "Grouping Parts Machine i". This batch is transferred to the next machine in its production sequence via the "Intracellular Route Cell i" module. By using this module, the transfer is performed within the cells, and the transfer time in this case is equal to zero. Each product with a completed production sequence must be evacuated to the system's output section. Hence, the "Cell i Output Route" module is used with a non-zero transfer time (Figure 4).

• In the two FL and CL simulation models, the machines are modeled by "Process" modules. In these modules, the transformation times are defined, which are a function of PT and ST weighted by the factor δ. Thus, Transformation time = PT + δ × ST. The value of factor δ depends on the similarity of the types of products entering and leaving the machine. In fact, a module called "Selection Delta Value Machine i" applies a test on all products incoming to the machine to look for the value of this factor (see the sketch after this list). For this, it compares two variables named "Part Type" and "Part Family" defined in the two "Assign" modules named "Part Type In Machine i" and "Part Type Out Machine i". If the two "Part Type" variables are identical, the module directs the incoming product to the "Assign" module named "Delta Equal 0 Machine i", corresponding to the value of factor δ = 0. If the two "Part Type" variables are different but the two "Part Family" variables are identical, the module directs the incoming product to the "Assign" module named "Delta Equal 0.5 Machine i", corresponding to the value of factor δ = 0.5. Otherwise, the module directs the incoming product to the "Assign" module named "Delta Equal 1 Machine i", corresponding to the value of factor δ = 1 (Figure 5).

• The leaving products batch proceeds through an "Assign" module called "Output Performance Measures", for computing and updating all variables defined as performance measures. The acquired data are then stored in an Excel file using a "ReadWrite" module for further treatment and analysis. Finally, the batches of products are evacuated from the simulation model via the "Dispose" module named "System Exit" (Figure 6).
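The setup-factor test described above can be summarized, outside Arena, by the following minimal Python sketch; the function names are hypothetical.

```python
def setup_factor(part_type_in, part_family_in, part_type_out, part_family_out):
    """Delta selection logic of the 'Selection Delta Value Machine i' test:
    0 for the same part type, 0.5 for the same family, 1 otherwise."""
    if part_type_in == part_type_out:
        return 0.0
    if part_family_in == part_family_out:
        return 0.5
    return 1.0

def transformation_time(pt, st, delta):
    """Transformation time = PT + delta * ST, as defined above."""
    return pt + delta * st

# example: successive parts of the same family but of different types
print(transformation_time(pt=12.0, st=6.0,
                          delta=setup_factor("P1", "F1", "P2", "F1")))  # 15.0
```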
Design of Experiments (DoE)

In this phase, we determine the number of distinct model settings to be run and the specific values of the factors for each of these simulation runs. There are many strategies for selecting the number of runs and the factor settings for each run, including the following: random designs, combinatorial designs, sequential designs, factorial designs, etc. Factorial designs are based on a grid, with each factor tested in combination with every level of every other factor. Factorial designs are attractive for three reasons: (i) the number of levels required for each factor is one greater than the highest-order power of that variable in the model, and the resulting design permits the estimation of coefficients for all cross-product terms; (ii) they are probably the most commonly used class of designs; and (iii) the resulting set of run conditions is easy to visualize graphically for as many as nine factors [35]. The case study is about an FMS design with four factors, and each factor has two levels, as mentioned in Table 3. Therefore, a 2^4 full factorial design was used to collect the simulation results.
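A minimal sketch of enumerating the 2^4 full factorial design follows. The factor names match Table 3 as described in the text; some level values (e.g., the second BS level) are illustrative assumptions, since only a subset of the levels can be read off the results reported below.

```python
from itertools import product

# LAYOUT and RULE levels follow the text; IAT levels appear in the reported
# optima; the second BS level is an assumed placeholder for illustration.
levels = {
    "LAYOUT": ["FL", "CL"],
    "RULE": ["FCFS", "SPT"],
    "IAT": [5, 25],
    "BS": [5, 10],
}

design = [dict(zip(levels, combo)) for combo in product(*levels.values())]
print(len(design))   # 16 distinct settings, each replicated 10 times
print(design[0])     # e.g. {'LAYOUT': 'FL', 'RULE': 'FCFS', 'IAT': 5, 'BS': 5}
```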
The GP Method

GP is an optimization technique for solving problems with a variety of objectives, which are generally incommensurable and often conflict with each other in a decision-making horizon. The standard version of GP was first introduced by Charnes and Cooper [31]. The GP model is based on an objective function formulated to find the most satisfactory solution, which minimizes the total sum of the positive and negative deviations from the attainment levels of the objectives (goals) set by the decision maker. This objective function is subject to the physical and operating constraints of the system. The first type of constraint represents the operating physical limits of the studied system. The second type is generally described by mathematical connections between the FMS factors and interactions and the performance measures to optimize.
Hence, the principal purpose of the first two steps of the second stage of the DoE-GP hybridization method is to build mathematical connections between the FMS factors and the responses. Statistical analyses are applied to the obtained simulation results to identify significant factors and interactions, and the relationships between the identified significant factors and interactions and the performance measures are translated into mathematical models by using the regression technique. In the third step of this stage, the GP model is developed, setting the performance measures as goals and including the other FMS constraints. Finally, this model is resolved using LINGO 18.0 software. The aim of this GP model is to find the most suitable levels of the FMS factors that lower the total deviation of each performance measure from its respective target level obtained in the DoE. The GP model takes the standard form

min Z = Σ_{i=1..p} (δ_i^+ + δ_i^-),

subject to

Σ_{j=1..n} a_ij x_j + δ_i^- - δ_i^+ = g_i (i = 1 . . . p),
ρx ≤ C (the operating physical constraints of the system),
x_j ≥ 0 (j = 1 . . . n),

where the following is the case:
1. g_i: the goal set for the ith objective (i = 1 . . . p); the objectives here are the performance measures;
2. x_j: the jth decision variable (j = 1 . . . n); the decision variables here are the significant FMS factors and interactions;
3. a_ij: the technological parameters; these parameters are the coefficients of the developed mathematical models relating the performance measures to the significant FMS factors and interactions;
4. ρ: the matrix of coefficients related to the physical FMS constraints;
5. C: the vector of available physical FMS resources;
6. δ_i^+, δ_i^-: the positive and negative deviations from the goal values.

The DF Method

The DF method is based on two steps. The first defines a desirability function by assigning values to responses that reflect their desirability. This involves transforming each value y_ij of the performance measure j of experiment i into a partial dimensionless desirability function d_i, where 0 ≤ d_i ≤ 1. This function includes the choices of the decision maker when constructing the optimization procedure. A one-sided desirability transformation arises when the goal is to maximize or minimize the response, and two values A and B must be specified as the lower and upper limits. Equations (6) and (7) present the one-sided transformation equations used for minimization and maximization goals, respectively:

d = ((B - y) / (B - A))^{ω_j} for minimization (with d = 1 if y ≤ A and d = 0 if y ≥ B), (6)
d = ((y - A) / (B - A))^{ω_j} for maximization (with d = 0 if y ≤ A and d = 1 if y ≥ B). (7)

The parameter ω_j can be described as a power value or weight allocated according to the researcher's subjective impression of the role of the response in the total desirability of the product. A value of ω_j equal to 1 implies that a linear desirability function is applied. If the value of ω_j is less than 1, the obtained desirability function means that the performance does not have to be close to the lower or upper limit, depending on the optimization goal, to have a high desirability value. In contrast, if the value of ω_j is greater than 1, the desirability function implies that the performance has to be close to the lower or upper limit, depending on the optimization goal, to have a high desirability value. To simultaneously optimize multiple performance measures, the individual desirabilities are combined using a geometric mean into the composite desirability:

DF = (d_1 × d_2 × . . . × d_n)^{1/n}. (8)

A value of DF different from zero implies that all performances are simultaneously in a desirable range. In addition, a value of DF close to 1 means that the combination of the different criteria is globally optimal and the performance values are near the target values.
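The following is a minimal Python sketch of the one-sided transforms and the composite DF described above (Derringer-style forms matching the descriptions of Equations (6)-(8); the function names are ours).

```python
import math

def d_minimize(y, A, B, w=1.0):
    """One-sided desirability for a smaller-the-better response (Eq. (6))."""
    if y <= A:
        return 1.0
    if y >= B:
        return 0.0
    return ((B - y) / (B - A)) ** w

def d_maximize(y, A, B, w=1.0):
    """One-sided desirability for a larger-the-better response (Eq. (7))."""
    if y <= A:
        return 0.0
    if y >= B:
        return 1.0
    return ((y - A) / (B - A)) ** w

def composite_df(ds):
    """Composite desirability: geometric mean of the individual d values."""
    return math.prod(ds) ** (1.0 / len(ds))

# toy example: one response to minimize, one to maximize
print(composite_df([d_minimize(120, A=100, B=200), d_maximize(15, A=5, B=20)]))
```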
The GRA Method

Units of performance measurement are often different, so the influence of some of them may be neglected. This can also happen if some performance measures have a very wide range compared to others. In addition, if the expected optimization goals are contradictory, this will result in incorrect results in the analysis [36]. It is, therefore, necessary to normalize all performance values for each experiment in the first step of the second stage of the multi-objective GRA-based optimization method. In the developed DoE, for each of the m simulation experiments, n performance measures are measured. The ith experimental trial can be expressed as Y_i = (y_i1, y_i2, . . . , y_ij, . . . , y_in), where y_ij is the value of performance measure j of experiment i. The term Y_i can be translated into the comparability sequence X_i = (x_i1, x_i2, . . . , x_ij, . . . , x_in) using one of Equations (9) and (10), which are used for larger-the-better and smaller-the-better objective values, respectively:

x_ij = (y_ij - min_i y_ij) / (max_i y_ij - min_i y_ij), i = 1, 2, . . . , m; j = 1, 2, . . . , n, (9)
x_ij = (max_i y_ij - y_ij) / (max_i y_ij - min_i y_ij), i = 1, 2, . . . , m; j = 1, 2, . . . , n. (10)

After the normalization procedure, all x_ij values relative to the performance measures will be scaled in [0, 1]. The Grey Relational Coefficient (GRC) is then computed to determine how close x_ij is to x_0j = Max{x_ij, i = 1, 2, . . . , m}. The larger the grey relational coefficient, the closer x_ij and x_0j are. The grey relational coefficient ξ_ij can be calculated in the second step by the following:

ξ_ij = (Δ_min + ζ Δ_max) / (Δ_ij + ζ Δ_max),

where Δ_ij = |x_0j - x_ij|, Δ_min and Δ_max are the smallest and largest of all the Δ_ij, and ζ ∈ [0, 1] is the distinguishing coefficient. Once the entire GRC is computed, the Grey Relational Grade (GRG) is calculated in the third step, based on the comparability sequence X_i = (x_i1, x_i2, . . . , x_ij, . . . , x_in) and the reference sequence X_0 = (x_01, x_02, . . . , x_0j, . . . , x_0n), using the following:

GRG_i = Σ_{j=1..n} ω_j ξ_ij,

where ω_j is the weight for the jth response, chosen by the decision makers. Of course, the sum of the ω_j is equal to 1. In the final step of the GRA method, the GRG values are ranked in decreasing order. The optimal trial corresponds to the maximum GRG value.

The VIKOR Method

As in the case of the GRA method, which is based on the GRG ranking, the VIKOR method is based on the computation of the VIKOR index and its ranking. In the first step of the VIKOR method, the ideal solution (A*) and the negative-ideal solution (A-) are determined. A* and A- contain, respectively, the maximum and minimum performance measure values over all experimental trials, and they are described as follows:

A* = {Max_i y_ij} = (y*_1, y*_2, . . . , y*_j, . . . , y*_n),
A- = {Min_i y_ij} = (y-_1, y-_2, . . . , y-_j, . . . , y-_n).

In the two following steps of the VIKOR method, the utility and regret measures for the ith experimental trial, S_i and R_i respectively, are computed as follows:

S_i = Σ_{j=1..n} ω_j (y*_j - y_ij) / (y*_j - y-_j),
R_i = Max_j [ ω_j (y*_j - y_ij) / (y*_j - y-_j) ],

where ω_j is the weight for the jth response, chosen by the decision makers; the sum of the ω_j is equal to 1. In the fourth step, the VIKOR index Q_i of the ith experimental trial is computed as follows:

Q_i = ν (S_i - S*) / (S- - S*) + (1 - ν)(R_i - R*) / (R- - R*),

where S* = Min_i S_i, S- = Max_i S_i, R* = Min_i R_i, and R- = Max_i R_i. Note that ν is the weight of the maximum group utility. It is usually set to 0.5. In the final step of the VIKOR method application, the VIKOR index values are ranked in decreasing order, and the optimal trials correspond to the maximum value.
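A compact, NumPy-based Python sketch of the GRA and VIKOR computations described above follows. The toy data stand in for the paper's Tables A1-A5, ζ = 0.5 is the usual distinguishing coefficient, and the VIKOR function assumes all measures have already been oriented larger-the-better.

```python
import numpy as np

def grey_relational_grade(Y, larger_is_better, weights, zeta=0.5):
    """Y: m trials x n measures. Normalize (Eqs. (9)-(10)), compute the GRC,
    and return one weighted GRG per trial, as described above."""
    Y = np.asarray(Y, dtype=float)
    y_min, y_max = Y.min(axis=0), Y.max(axis=0)
    X = np.where(larger_is_better,
                 (Y - y_min) / (y_max - y_min),     # larger-the-better
                 (y_max - Y) / (y_max - y_min))     # smaller-the-better
    delta = 1.0 - X                                 # distance to x_0j = 1
    g, l = delta.min(), delta.max()
    grc = (g + zeta * l) / (delta + zeta * l)       # grey relational coefficients
    return grc @ np.asarray(weights)                # GRG: weighted sum per trial

def vikor_index(Y, weights, nu=0.5):
    """S, R, and Q of the VIKOR method, assuming benefit-oriented measures."""
    Y = np.asarray(Y, dtype=float)
    y_star, y_minus = Y.max(axis=0), Y.min(axis=0)
    D = (y_star - Y) / (y_star - y_minus)
    S = D @ np.asarray(weights)
    R = (D * np.asarray(weights)).max(axis=1)
    return (nu * (S - S.min()) / (S.max() - S.min())
            + (1 - nu) * (R - R.min()) / (R.max() - R.min()))

# toy data: 3 trials, 2 measures (TR to maximize, MFT to minimize)
Y = [[10.0, 130.0], [12.0, 150.0], [11.0, 120.0]]
grg = grey_relational_grade(Y, larger_is_better=[True, False], weights=[0.5, 0.5])
print(int(grg.argmax()) + 1)  # the trial ranked first by GRA
```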
Simulation Results

The case study is about an FMS design with four factors, and each factor has two levels, as mentioned in Table 3. Therefore, a 2^4 full factorial design was used to collect the simulation results. Each of the 16 simulation experiments was replicated 10 times. Simulation results show that a warm-up period of 10,000 min is needed, and the models can then be run for 90,000 min. All final simulation results are provided in Appendix A: the MFT simulation results are stated in Table A1, the WIP simulation results in Table A2, the TR simulation results in Table A3, the WT simulation results in Table A4, and the TT simulation results in Table A5.

The GP Method

The use of GP as an MOSO method contains mainly four phases. The first phase concerns the selection of the significant coefficients of the metamodel using Student's t-test. The second phase provides the final metamodel of each performance measure. The third and fourth phases concern the application of GP optimization.

1. Determination of statistically significant FMS parameters: the main effects of the studied factors and interactions were analyzed at the α = 0.05 significance level using the MINITAB statistical package (Table 6). Significant factors and interactions (p ≤ 0.05) are shown in bold.

2. Formulation of the regression metamodels: a regression metamodel was fitted for each performance measure, with fits such as R² = 99.88% (R²(adj) = 99.87%) and R² = 97.86% (R²(adj) = 97.64%). Every constant in each of these equations corresponds to the average response for each performance measure, and the coefficients assigned to the factors and interactions correspond to their respective effects.

3. GP model formulation and resolution: we propose a GP model in which the selected performance measures are considered. The optimal configuration of the decision variables minimizes the sum of penalties (d_j), where the parameters d_j are the deviations from the desired levels of the goals and are subject to a series of constraints. With the regression equations presented previously, the above-mentioned goal programming model can be stated as shown in Equations (32)-(40), subject to the fitted regression constraints; LAYOUT and RULE are binary (1 or 2). The goal values G_MFT, G_WIP, G_TR, G_TT, and G_WT were fixed based on the experimental design results.

4. The GP model was solved using the mathematical software LINGO 18.0. The best value of the objective function was found to be equal to 136.99 and was obtained for the following levels of the studied factors: LAYOUT = CL, RULE = FCFS, IAT = 25, and BS = 5.

The DF Method

Applying Equations (6) and (7) to the studied performance measures, the individual desirability functions d are very close to 1.0, as shown in Figure 7. Furthermore, Figure 7 illustrates the effect of each factor (columns) on the FMS performance measures and the composite desirability (rows). The red vertical lines and the corresponding numbers in red indicate the optimal factor levels. The blue horizontal lines and the corresponding numbers in blue represent the values of the performance measures corresponding to the optimal factor levels. Each of the performance measures is accompanied by the corresponding desirability function value d_i. In addition, the first row provides the value of the composite desirability DF, as presented in Equation (8), corresponding to the optimal factor levels. The obtained DF is equal to 0.984, which represents a near-ideal case of optimization. To obtain this desirability, the factor levels must be set to the values shown below the global solution in Figure 7; that is, BS = 5, IAT = 5, LAYOUT = CL, and RULE = SPT.

The GRA Method

Based on Equations (9)-(17), the simulation results were normalized, and the GRC and GRG were calculated (Table 7). Once the GRG was ranked, it appears that the optimum performance measures were obtained for the factor levels LAYOUT = CL, RULE = SPT, IAT = 5, and BS = 5. The row in bold in Table 7 indicates the optimal solution obtained using the GRA method, which has a rank equal to 1.

The VIKOR Method

Based on Equations (18)-(26), the utility and regret measures, as well as the VIKOR index, were computed (Table 8). Once the VIKOR index was ranked, it appears that the optimum performance measures were obtained for the factor levels LAYOUT = CL, RULE = SPT, IAT = 25, and BS = 5. The row in bold in Table 8 indicates the optimal solution obtained using the VIKOR method, which has a rank equal to 1.
The GRA Method

Based on Equations (9)-(17), the simulation results were normalized, and the GRC and GRG were calculated (Table 7). Once the GRG values were ranked, the optimum performance measures were obtained for the factor levels LAYOUT = CL, RULE = SPT, IAT = 5, and BS = 5. The row in bold in Table 7 indicates the optimal solution obtained using the GRA method, which has a rank equal to 1.

The VIKOR Method

Based on Equations (18)-(26), the utility and regret measures as well as the VIKOR index were computed (Table 8). Once the VIKOR index was ranked, the optimum performance measures were obtained for the factor levels LAYOUT = CL, RULE = SPT, IAT = 25, and BS = 5. The row in bold in Table 8 indicates the optimal solution obtained using the VIKOR method, which has a rank equal to 1.

Discussion

The application of the four optimization methods in the context of FMS shows good results for four of the five performance measures in the case of the GP and VIKOR methods, and for only two performance measures in the case of the DF and GRA methods (Table 9). The results show that the MFT, WIP, TT, and WT performance measures met their targets for the GP and VIKOR methods; they all show relatively minor deviations from their target values. Only the deviation of TR reaches −63.53% and −81.87%, respectively, in the case of these two methods. On the other hand, in the case of the DF and GRA methods, only the optimal values of TT and TR were close to their corresponding targets, while the deviations between the achieved and objective values for WIP, WT, and MFT reach +242.529%, +107.377%, and +83.293%, respectively. Hence, the optimization results can be considered satisfactory for the GP and VIKOR methods, but not for the DF and GRA methods.

The GP and DF methods require a higher level of analysis effort than the GRA and VIKOR methods. Indeed, in addition to the modeling and development of the simulation models, which is common to all four compared optimization methods, as well as the planning of experiments with the DoE method, the GP and DF methods require relatively high levels of expertise in the use of the analysis software MINITAB and LINGO. In contrast, the GRA and VIKOR methods only need the development of the equations in Excel, which is within the reach of the majority of DMs. This has an impact on the applicability of the MOSO methods. Table 10 summarizes the performance of the four MOSO methods compared in this study. Signs "+" and "−" are assigned to the optimization methods based on their achieved optimization results and their applicability. A "+" is assigned to each method resulting in a good optimization result, expressed by reasonable or small deviations; a "−" is assigned to each method that leads to an optimization result characterized by high deviations.
For applicability, a "−" is assigned to each method that requires a high level of analysis and expertise; in the opposite case, a "+" is assigned. The methods are then classified according to the assigned signs. Any method obtaining two "+" signs is considered the most efficient; any method obtaining two "−" signs is considered the poorest. When an optimization method obtains both a "+" and a "−" sign, the classification gives priority to the obtained optimization result. The best method is VIKOR, which belongs to group B in the proposed classification. It is followed by the GP method, from group A, since it reaches good optimization results although it requires considerable analysis effort. The GRA method, from group B, comes third, and the DF method, from group A, closes the classification in last place. This ranking shows that the use of optimization methods based on a metamodel does not always produce the best results.

Conclusions

Various MOSO methods have been presented, developed, and used in the literature, and they have been the subject of numerous classifications. However, the performance of these methods is not guaranteed, owing to the lack of comparative studies. Moreover, these classifications have been very diverse and are rarely related to the specific domain of manufacturing systems. The objective of this research is two-fold. First, we proposed a new conceptual classification of MOSO methods applied to the context of FMS design. Second, four MOSO methods were selected according to this classification and compared through a case study related to an FMS design problem inspired by the literature. This comparison is based on the quality of the optimal solutions obtained by these methods as well as on the difficulty of their application, through the necessary analysis effort and the degree of user expertise they require. All the studied methods are based on DoE. Two of them are metamodel-based approaches that incorporate GP and DF, respectively; the other two are not metamodel-based and incorporate GRA and VIKOR, respectively. The comparative results show that the VIKOR method can yield better optimization than the GP, GRA, and DF methods, in that order. It is thus clear that the use of MOSO methods based on metamodels does not produce the best solution in all situations.

This research compares four MOSO methods applied in the context of FMS design. Some future research perspectives should be addressed:

• In this study, four MOSO methods are compared: two belong to group A of the proposed new classification, while the other two belong to group B. Extending the current comparison to other MOSO methods belonging to group C is the first of these perspectives.

• The studied MOSO methods have been applied to a model of an FMS inspired by the literature. This model has six machines grouped in two cells in the CL configuration and three departments in the FL configuration, and the FMS processes only four products grouped into two families. Extending the comparison performed in this study to real and more complex FMSs, to evaluate the reliability of the MOSO methods, is the second perspective.

• The application of the compared MOSO methods proceeds through different steps to generate optimization solutions.
These steps usually require the intervention of a user to transfer results from one step to another. Integrating these analysis and optimization steps into the simulation software, as is done with the OptQuest tool in several simulation packages, would be a very interesting perspective.
Describing the steganalysis tool: BBD15 "OurSecret detector"

Digital steganography is the art of hiding secret messages and data in innocent cover files, most often images and videos. Images and videos offer the best cover files for steganography, given their high capacity, innocent appearance, and ease of exchange without raising the suspicion of a third party. Many free tools embed data in images and videos, among them OurSecret. In this paper we aim to detect the stego images and videos created by OurSecret by developing the steganalysis tool BBD15.

INTRODUCTION

Images and videos are innocent and excellent cover files for steganography. Many free embedding tools exist, such as OpenStego [1], StegHide [2], OpenPuff [3] and OurSecret [4]. Detecting these stego files is a challenging task for steganalysts; it requires knowledge of the embedding algorithm and access to the cover file, the stego file, or both. There are various steganalysis methods to detect and/or extract hidden data in/from images and videos. Some rely on visual and audible detection of the noise generated by the embedding algorithms. Other methods are based on statistical and histogram analysis, on changes in image properties and header fields, or on searching for the signature of the embedding (steganography) program [5]. It is worth noting that blind (general) steganalysis may be less accurate than specific steganalysis attacks, where the embedding algorithm is known, and to the best of our knowledge no steganalysis tool has been developed specifically to detect stego files created by the tool OurSecret. In this paper, BBD15 is developed to detect stego files created by OurSecret: Section 2 examines the embedding algorithm of the steganography tool OurSecret, Section 3 explains the detection process, and Section 4 contains a performance study of our tool BBD15.

2. OURSECRET, HOW DOES IT WORK?

OurSecret is a free steganography tool that embeds secret data in multimedia files such as images and videos. The algorithm of the tool was not officially described by its developer. To understand how the embedding works, we ran tests using a hex editor tool [6], comparing stego files containing secret data with their original cover files (without data). We tested different kinds of files: for images, JPEG, JP2, BMP, TIFF, GIF and PNG; for videos, AVI, MPEG and MOV. We found that the tool compresses the data (the compression algorithm used is unknown to us), optionally encrypts the data (if a passphrase is set), and embeds the compressed data at the end of the cover file without changing a single bit of the original cover data.

3. DETECTING OURSECRET STEGO FILES

Since we were able to understand how OurSecret embeds the data, we used this knowledge to program a steganalysis tool that detects the stego files generated by OurSecret. The application was developed in Java to avoid portability problems, and it detects 9 file formats: JPEG, JP2, BMP, PNG, TIFF, GIF, AVI, MOV and MPEG, with a detection rate of 100%. The application takes advantage of the fact that most file formats have beginning and end markers, and that OurSecret embeds data at the end of the image/video file (after the end marker) without changing any other bit.
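BBD15 itself is a Java application; the following Python sketch only illustrates the end-of-file-marker principle it exploits. The marker table is deliberately short (the actual tool supports nine formats), and since some legitimate files carry benign trailing bytes, a production detector would need additional checks.

```python
import sys

# Trailer bytes a clean file of each format should end with; OurSecret appends
# its (compressed, optionally encrypted) payload after this marker, so any
# bytes following it are suspicious.
EOF_MARKERS = {
    "jpg": b"\xff\xd9",         # JPEG end-of-image (EOI) marker
    "jpeg": b"\xff\xd9",
    "png": b"IEND\xaeB`\x82",   # PNG IEND chunk type + its CRC
    "gif": b"\x3b",             # GIF trailer byte
}

def looks_like_oursecret_stego(path: str) -> bool:
    """True if the file does not end at its format's end marker."""
    ext = path.rsplit(".", 1)[-1].lower()
    marker = EOF_MARKERS.get(ext)
    if marker is None:
        raise ValueError(f"unsupported format: {ext}")
    with open(path, "rb") as f:
        data = f.read()
    return not data.endswith(marker)   # trailing bytes => possible stego file

if __name__ == "__main__":
    for name in sys.argv[1:]:
        verdict = "suspicious" if looks_like_oursecret_stego(name) else "clean"
        print(f"{name}: {verdict}")
```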
4. A PERFORMANCE STUDY OF BBD15

To judge the performance of our tool, we compared it with other steganalysis tools such as StegSpy [7], StegSecret [8] and Hidden Data Detector [9]. We did the comparison on the accepted formats only and on the same stego files. Our application showed the best results in terms of detection rate; it also offers search both by file and by folder, and it is easy to use. The tables and figures below show the obtained results. We tested the four tools on the same sample of images and videos. We found that StegSecret detects BMP, JPEG and GIF stego images at a 100% rate. Hidden Data Detector could not detect stego images of types JPEG2000 (JP2) and PNG, while it detects JPEG images and AVI videos at a 100% rate. StegSpy detects MPEG videos best (rate = 60%). Our detector BBD15 detects nine types of files at a 100% rate: BMP, JPEG, JPEG2000, PNG, GIF, TIFF, AVI, MPEG and MOV. The summary of the results is presented in Table 2 and Figure 2.

CONCLUSION

OurSecret is a free steganography tool for embedding data in multimedia files (images and videos). It allows the embedding of multiple files at once, imposes no limit on the size of the secret data, and allows its encryption. Its algorithm was not described by the developer. Having used the tool and tested its capabilities, we decided to work out its embedding algorithm and use that knowledge to develop a steganalysis tool specialized in detecting the stego files produced by OurSecret. We developed the Java application BBD15 to detect stego files of types BMP, JPEG, JPEG2000, GIF, PNG, TIFF, AVI, MPEG and MOV. It has a detection rate of 100% and exploits the fact that OurSecret embeds the data at the end of the file.
UniDS: A Unified Dialogue System for Chit-Chat and Task-oriented Dialogues

With the advances in deep learning, tremendous progress has been made with chit-chat dialogue systems and task-oriented dialogue systems. However, these two kinds of systems are often tackled separately in current methods. To achieve more natural interaction with humans, dialogue systems need to be capable of both chatting and accomplishing tasks. To this end, we propose a unified dialogue system (UniDS) with the two aforementioned skills. In particular, we design a unified dialogue data schema compatible with both chit-chat and task-oriented dialogues. Besides, we propose a two-stage training method to train UniDS based on the unified dialogue data schema. UniDS does not require adding extra parameters to existing chit-chat dialogue systems. Experimental results demonstrate that the proposed UniDS performs comparably to state-of-the-art chit-chat dialogue systems and task-oriented dialogue systems. More importantly, UniDS achieves better robustness than pure dialogue systems and a satisfactory ability to switch between the two types of dialogues.

Introduction

Dialogue systems are an important tool for achieving intelligent user interaction, and they are actively studied by the NLP and other communities.* Current research on dialogue systems focuses on task-oriented dialogue (TOD) systems (Hosseini-Asl et al., 2020; Peng et al., 2020; Yang et al., 2021), which achieve functional goals, and chit-chat dialogue systems aimed at entertainment (Zhou et al., 2018; Zhang et al., 2020; Zhao et al., 2020). Different methods are devised for these two types of dialogue systems separately. However, a more suitable arrangement for users would be one dialogue agent able to handle both chit-chat and TOD in one conversation. As illustrated in Figure 1, users may have communication-oriented needs (e.g. chatting about money and happiness) and task-oriented needs (e.g. hotel reservation) when interacting with a dialogue agent. Furthermore, the inputs of dialogue systems are often interfered with by background noise, such as voices from other people or devices, collected by the preceding automatic speech recognition (ASR) module. Therefore, chit-chat ability may also improve the robustness of a task-oriented dialogue system (Zhao et al., 2017).

*This work was done during an internship at Huawei Noah's Ark Lab.

As shown in Table 1, there are many differences between chit-chat and task-oriented dialogues. Creating a single model for different tasks without performance degradation is challenging (Kaiser et al., 2017). Some works attempt to model different dialogue skills via different experts or adapters (Madotto et al., 2020; Lin et al., 2021). However, these methods increase the number of parameters and find it hard to achieve satisfactory performance on both types of dialogues. Besides, previous works lack an exploration of the ability to switch between different types of dialogues. This work proposes an auto-regressive language-model-based dialogue system (UniDS) to handle chit-chat and TOD in a unified framework.

Table 1: Differences between chit-chat and task-oriented dialogues.

                           Diversity   Purpose            Turns   Mainstream method
  Chit-chat                Strong      Entertainment      Long    End-to-end method
  Task-oriented dialogue   Weak        Completing tasks   Short   Pipeline method*

*: The model predicts the belief state and system act before giving a response; to this end, the training set needs to be annotated with belief state and system act.
Specifically, since chit-chat data do not have explicit belief states and agent actions, to unify the formats of chit-chat and task-oriented dialogues we devise a belief state and agent act for chit-chat dialogues, as in task-oriented dialogues. On the other hand, because of the diversity of chit-chat, chit-chat dialogue systems need more training data than task-oriented dialogue systems, e.g., 147,116,725 dialogues for DialoGPT (Zhang et al., 2020) versus 8,438 dialogues for UBAR (Yang et al., 2021). To overcome this difference, we propose to train UniDS in a two-stage way: a chit-chat model is first trained on a huge set of chit-chat dialogues, and UniDS is then trained from that chit-chat dialogue system on mixed dialogues based on our proposed unified dialogue data schema.

We evaluate UniDS using the public task-oriented dialogue dataset MultiWOZ and a chit-chat dataset extracted from Reddit (https://www.reddit.com/) through both automatic and human evaluations. UniDS achieves comparable performance to the state-of-the-art chit-chat dialogue system DialoGPT and the TOD system UBAR. In addition, we empirically show that UniDS is more robust to noise in task-oriented dialogues and shows a desirable ability to switch between the two types of dialogues.

The contributions of this work are summarised as follows:

• To the best of our knowledge, this is the first work presenting a unified dialogue system that jointly handles chit-chat and task-oriented dialogues in an end-to-end way.

• We design a unified dialogue data schema for chit-chat and TOD, allowing the training and inference of dialogue systems to be performed in a unified manner.

• To bridge the gap between chit-chat dialogue systems and task-oriented dialogue systems in their training data requirements, a two-stage training method is proposed to train UniDS.

• Extensive empirical results show that UniDS performs comparably to state-of-the-art chit-chat dialogue systems and task-oriented dialogue systems. Moreover, UniDS achieves better robustness to dialogue noise and a satisfactory ability to switch between the two types of dialogues.

Related Work

With the development of large-scale language models, chit-chat dialogue systems have achieved remarkable success. Based on GPT-2 (Radford et al., 2019), DialoGPT (Zhang et al., 2020) is further trained on large-scale dialogues extracted from Reddit. DialoGPT can generate more relevant, contentful, and fluent responses than previous methods. Afterwards, larger pre-trained-LM-based chit-chat dialogue systems (Adiwardana et al., 2020; Bao et al., 2020) were proposed and achieved even better performance. In the area of task-oriented dialogue systems, recent research (Hosseini-Asl et al., 2020; Peng et al., 2020; Yang et al., 2021) concatenated the elements of a dialogue into one sequence and utilized pre-trained LMs to generate the belief state, system act, and response in an end-to-end way, achieving promising results.

There are several works related to unified dialogue systems. Zhao et al. (2017) inserted one-turn chit-chat utterances into task-oriented dialogues to train a model with better out-of-domain recovery ability. Attention over Parameters (AoP) (Madotto et al., 2020) utilizes different decoders for different dialogue skills (e.g., hotel booking, restaurant booking, chit-chat). However, the performance of AoP leaves room for improvement, and it largely increases the number of parameters compared with models that handle a single type of dialogue.
ACCENTOR (Sun et al., 2021) adds chit-chat utterances at the beginning or end of task-oriented responses to make the conversation more engaging, but ACCENTOR is unable to hold a chit-chat conversation with users. Unlike the above works, UniDS does not add extra parameters to existing dialogue models, and UniDS can alternately handle chit-chat and task-oriented dialogues in a seamless way.

Architecture of UniDS

As illustrated in Figure 2, we formulate the unified dialogue system as an auto-regressive language model. A dialogue session at turn t has the following components: user input U_t, belief state B_t, database search result D_t, system act A_t, and response R_t. Each component consists of tokens from a fixed vocabulary. For turn t, the dialogue context C_t is the concatenation of all the components of the previous turns as well as the user input at turn t:

C_t = [U_0, B_0, D_0, A_0, R_0, ..., U_{t−1}, B_{t−1}, D_{t−1}, A_{t−1}, R_{t−1}, U_t].   (1)

Given the dialogue context C_t, UniDS first generates the belief state B_t:

B_t = UniDS(C_t),   (2)

and uses it to search the database to get the search result D_t. Then, UniDS generates the system act A_t conditioned on the context extended with B_t and D_t:

A_t = UniDS([C_t, B_t, D_t]).   (3)

Lastly, the response R_t is generated conditioned on the concatenation of all previous components:

R_t = UniDS([C_t, B_t, D_t, A_t]).

Unified Dialogue Data Schema

In the widely adopted task-oriented dialogue system pipeline, a dialogue session consists of a user input utterance, a belief state that represents the user intention, a database search result, a system act, and a system response (Young et al., 2013; Yang et al., 2021). However, owing to the diversity of chit-chat and the cost of manual annotation, chit-chat dialogue systems do not assume the existence of a belief state or system act (Bao et al., 2020; Zhang et al., 2020). This inconsistency of data format between chit-chat and TOD hinders the implementation of a unified model. To tackle this problem, we design a data schema with a belief state, database result representation, and system act for chit-chat. Table 2 illustrates this unified data schema with examples. The following sections explain each component in detail.

Belief state. The unified belief state is represented in the form "<domain> slot [value]". A belief state may span several domains, each containing several slot-value pairs. Extracting the belief state in TOD may require copying words from the user utterance; to make UniDS keep this copy mechanism, for chit-chat, nouns in the user utterance U_t are extracted as the slots or values of the belief state.

DB result. We use a special token to represent the number of matched entities under the constraints of the belief state in the current turn.

System act. System acts are represented as "<domain> <act> [slot]" for TOD. The meaning of "<domain>" is the same as in belief states, "<act>" denotes the type of action the system needs to perform, and the slots following the "domain-act" pair are optional. For chit-chat, the token "<chit_act>" denotes that the dialogue system will chat with the user.

Therefore, a processed dialogue sequence X_t at turn t, for either TOD or chit-chat, can be represented in the same way as:

X_t = [U_t, B_t, D_t, A_t, R_t].   (4)
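The unified schema just described can be illustrated with a toy serializer. The delimiters and database tokens below are illustrative stand-ins rather than the paper's actual vocabulary (only <chit_act> is named in the text):

```python
def serialize_turn(user, belief, db_count, act, response):
    """Flatten one turn into the unified sequence X_t = (U_t, B_t, D_t, A_t, R_t)."""
    return " ".join([
        "<sos_u>", user, "<eos_u>",
        "<sos_b>", belief, "<eos_b>",                # "<domain> slot [value]"
        "<sos_db>", f"<db_{db_count}>", "<eos_db>",  # matched-entity count token
        "<sos_a>", act, "<eos_a>",                   # "<domain> <act> [slot]"
        "<sos_r>", response, "<eos_r>",
    ])

# A task-oriented turn and a chit-chat turn share exactly the same layout:
tod = serialize_turn("i need a cheap hotel in the north",
                     "<hotel> price cheap area north", 3,
                     "<hotel> <recommend> name",
                     "i recommend the acorn guest house .")
chat = serialize_turn("money cannot buy happiness",
                      "<chit> money happiness",      # nouns copied as slots
                      0, "<chit_act>",
                      "true , but it can buy a lot of other things .")
```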
Two-stage training method

Given the diversity of chit-chat in topics and terms, chit-chat dialogue systems need much more training data than task-oriented dialogue systems. If UniDS were trained directly on unified dialogue data containing many more chit-chat dialogues than task-oriented dialogues, the trained model might lose the ability to complete task-oriented dialogues. Therefore, this work proposes a two-stage method for training UniDS. As illustrated in Figure 3, we first train a chit-chat dialogue model on a huge set of chit-chat dialogues, and then train UniDS from this chit-chat dialogue system on mixed dialogues. The mixed dialogue data are obtained by mixing chit-chat and TOD data, both pre-processed with the proposed unified data schema, at a ratio of 1:1. Motivated by the recent success of applying GPT-2 to task-oriented dialogue systems (Hosseini-Asl et al., 2020; Peng et al., 2020; Yang et al., 2021) and chit-chat dialogue systems (Zhang et al., 2020), we use DialoGPT (Zhang et al., 2020) in an auto-regressive manner:

L = Σ_i log P(x_i | x_<i),

where x_i is a token of X_t and x_<i are the preceding tokens.

Chit-chat Dataset

We derived open-domain chit-chat dialogues from a Reddit dump. To avoid overlap, the chit-chat training set and test set were extracted from Reddit posts from 2017 and 2018, respectively. To ensure generation quality, we conducted careful data cleaning. A conversation is filtered out when (1) there is a URL in an utterance; (2) there is an utterance longer than 200 words or shorter than 2 words; (3) the dialogue contains "[removed]" or "[deleted]" tokens; (4) the number of utterances in the dialogue is less than 4; or (5) the dialogue contains offensive words. Finally, we sampled 8,438 dialogues for training, the same size as the training set of MultiWOZ. The validation set and test set contain 6,000 and 8,320 dialogues, respectively.

Baselines

For chit-chat dialogue, we compare UniDS with DialoGPT (Zhang et al., 2020). For fair comparison, we further fine-tune a 12-layer DialoGPT and a 24-layer DialoGPT on our chit-chat dialogue training set, which we refer to as DialoGPT-12L and DialoGPT-24L, respectively. For TOD, we consider the state-of-the-art end-to-end TOD systems UBAR (Yang et al., 2021) and PPTOD (Su et al., 2021). For a fair comparison with UniDS, we also fine-tune UBAR from 12-layer and 24-layer DialoGPT on the MultiWOZ dataset; the fine-tuned models are denoted UBAR-12L and UBAR-24L, respectively.

Implementation Details

UniDS and the other baselines are implemented based on HuggingFace's Transformers (Wolf et al., 2019). The maximum sequence length is 1024, and longer sequences are truncated from the head. We use the AdamW optimizer (Loshchilov and Hutter, 2019) and greedy decoding for inference. All models are trained on a single Tesla V100, and we perform a hyper-parameter search over batch size and learning rate. The best model and hyper-parameters are selected based on performance on the MultiWOZ validation set only.

As shown in Table 1, chit-chat dialogues need to attract users to talk more, while TOD needs to complete tasks as soon as possible. Therefore, a model trained on the mixed dialogue data tends to talk for long turns instead of efficiently completing the task. Since entity recommendation acts are important for a dialogue system to complete tasks efficiently, we use a weighted cross-entropy loss as the training objective of UniDS, assigning larger weights to tokens belonging to entity recommendation actions. We empirically set the weight of entity recommendation actions in the loss function to 2 (the appendix discusses other weight values, which do not affect the overall conclusion); the weights of all other tokens are set to 1 by default.
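A minimal PyTorch sketch of the weighted cross-entropy objective just described. Which token ids count as entity recommendation actions is an assumption here; the excerpt does not spell out the exact token set:

```python
import torch
import torch.nn.functional as F

def weighted_lm_loss(logits, targets, entity_act_ids, act_weight=2.0):
    """Cross-entropy over next-token predictions where tokens belonging to
    entity-recommendation acts get weight 2 (as in the paper) and all other
    tokens get weight 1.

    logits: (batch, seq, vocab); targets: (batch, seq) gold token ids;
    entity_act_ids: 1-D tensor of token ids treated as entity recommendations.
    """
    flat_targets = targets.reshape(-1)
    per_token = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                                flat_targets, reduction="none")
    weights = torch.ones_like(per_token)
    weights[torch.isin(flat_targets, entity_act_ids)] = act_weight
    return (per_token * weights).sum() / weights.sum()
```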
Evaluation Metrics

For chit-chat dialogues, the BLEU score (Papineni et al., 2002) and the average length of the generated responses are reported. Because of the diversity of chit-chat, BLEU may be insufficient to reflect the quality of chit-chat responses, so we also report distinct-1 and distinct-2 (Li et al., 2016) of the generated dialogues, defined as the rate of distinct uni- and bi-grams in the generated sentences. We also conduct a human evaluation on 50 randomly sampled test dialogues for the two 24-layer models. Three judges evaluate them in terms of relevance, informativeness, and how human-like the response is, on a 3-point Likert-like scale (Joshi et al., 2015). For TOD, we follow UBAR in using the following automatic metrics: Inform refers to the rate at which the entities provided by a model are correct; Success measures the rate at which a model has answered all the requested information; and BLEU measures the fluency of the generated responses. A combined score is computed as (Inform + Success) × 0.5 + BLEU to measure overall response quality.

Overall results

i) For chit-chat, UniDS keeps its chit-chat ability even after training with the mixed dialogue data. ii) For the TOD task, UniDS achieves better performance than UBAR at the same parameter size. For both 12L and 24L DialoGPT, UniDS improves the BLEU score and the combined score compared with UBAR. We believe this is because combining chit-chat dialogues in training helps the model to generate more fluent responses. Furthermore, we also provide the human evaluation results in Table 5, where UniDS is compared to DialoGPT on three dimensions for chit-chat dialogues. UniDS consistently wins the majority of cases on all three aspects: relevance, informativeness, and human-likeness.

Ablation Study

In this experiment (cf. Table 4), we compare two simplified versions of UniDS to understand the effects of its components: 1) removing the slots in the belief state of chit-chat, denoted "UniDS w/o chit-chat BS", and 2) replacing the weighted cross-entropy loss with a standard cross-entropy loss, denoted "UniDS w/o weighted loss". Next, we elaborate our observations w.r.t. these two components.

w/o chit-chat BS: When the belief state of chit-chat dialogues is removed, the performance of both UniDS-12L and UniDS-24L drops w.r.t. inform, success, and combined score for TOD. We believe the reason is that extracting the belief state requires copying keywords from the user utterance, and extracting nouns as the belief state even for chit-chat helps UniDS learn this copy mechanism for the TOD task. Taking the case in Figure 4 as an example, UniDS w/o chit-chat BS (left) fails to extract the user's interest in searching for restaurants, while UniDS (right) extracts the restaurant slot successfully; as a result, UniDS can recommend the right entities. Furthermore, removing the chit-chat BS does not degrade chit-chat performance.

Table 7: Switching performance of UniDS when prepending 2 turns of task-oriented dialogue before chit-chat.

w/o weighted loss: When the weighted cross-entropy loss in UniDS is replaced with a standard cross-entropy loss, we observe a notable drop in inform, success, and combined score among the task-oriented metrics. These results demonstrate that giving more attention to entity recommendation acts helps task-completion capability. Moreover, dropping the weighted loss does not much affect chit-chat performance. Overall, we contend that both the chit-chat BS and the weighted loss are beneficial for task-oriented dialogues without degrading chit-chat capability.
Analysis of Switching Ability

In real-world scenarios, it is common and natural for users to switch between chit-chat and task-oriented dialogues, so we investigate the switching ability of UniDS in this subsection. To simulate dialogue switching, we consider two setups: (1) two turns of chit-chat dialogue before the start of a task-oriented dialogue, and (2) two turns of task-oriented dialogue prepended at the beginning of a chit-chat dialogue. To evaluate the model's ability to switch between the two types of dialogue, we propose a metric, Switch-n, defined as the rate at which a model switches its response type within the first n turns after the user switches the type of input. Additionally, we report the model's performance after the switch.

Tables 6 and 7 present the results of the two switching setups, and we make the following observations. (i) It is not surprising that adding switching tasks degrades the performance of UniDS for both chit-chat and TOD, as the added two turns of switching utterances introduce irrelevant content, which distracts the model. However, focusing on the switching task itself, we observe that in almost 98% of cases UniDS succeeds in switching dialogue task, from chit-chat to TOD and vice versa, within the first two turns (Switch-1 and Switch-2). This demonstrates that UniDS has a good ability to switch between the two types of dialogue task. (ii) When switching from task-oriented to chit-chat dialogues, the value of Switch-1 is relatively low; this may be because our model tends to confirm user intents or give a transitional response rather than switch to chit-chat mode immediately. In the case shown in Table 8, when the user switches from TOD to chit-chat, UniDS gives a chatty response and thanks the user for using its services.
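For concreteness, one plausible reading of the Switch-n metric can be sketched as follows; the dialogue representation (one user/system type pair per turn, with a single user switch) is hypothetical:

```python
def switch_n(dialogues, n):
    """Rate of dialogues where the system changes its response type within the
    first n turns after the user switches input type.

    Each dialogue is a list of (user_type, system_type) pairs with types
    "chit" or "tod"; the user is assumed to switch exactly once.
    """
    hits = 0
    for turns in dialogues:
        # first turn at which the user's input type changes
        idx = next(i for i in range(1, len(turns))
                   if turns[i][0] != turns[i - 1][0])
        new_type = turns[idx][0]
        # count a success if any of the next n system replies follows suit
        if any(sys_type == new_type for _, sys_type in turns[idx:idx + n]):
            hits += 1
    return hits / len(dialogues)

# e.g. a TOD -> chit-chat switch answered immediately on the switching turn:
dialogue = [("tod", "tod"), ("tod", "tod"), ("chit", "chit")]
print(switch_n([dialogue], 1))  # 1.0
```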
Robustness Study

Many real-world dialogue systems need real-time speech recognition to interact with users, which is easily interfered with by noise from the background environment (e.g. other people and devices). Therefore, we analyze the robustness of UniDS and UBAR by inserting several turns of task-irrelevant chit-chat utterances into the TOD and evaluating model performance against such noise.

Figure 5: Examples of UBAR-DialoGPT-24L and UniDS-24L when a task-irrelevant utterance is inserted into a task-oriented dialogue. UBAR-DialoGPT reserves a train for the user at random, which makes the task fail because the user intent is incomplete, while UniDS keeps the previous belief state and gives a chatty response. When the user returns to the TOD, UniDS can continue with the task.

As observed in Table 9, both UniDS and UBAR drop in combined score when only one turn of chit-chat is inserted, but UniDS drops less than UBAR (about 4 vs. 6 points). Similarly, when two turns of chit-chat are inserted into the TOD, UniDS drops about 8 points and UBAR about 11 points in combined score. These results demonstrate that UniDS is more robust to such task-irrelevant noise than UBAR. We present an interesting case in Figure 5: given a task-irrelevant utterance, UBAR-24L reserves a train for the user at random, which makes the task fail because the user intent is incomplete, while UniDS keeps the previous belief state and gives a chatty response. When the user returns to the TOD, UniDS can continue with the task.

Conclusion

This paper proposes a unified dialogue system (UniDS) to jointly handle both chit-chat and task-oriented dialogues in an end-to-end framework. Specifically, we propose a unified dialogue data schema for both chit-chat and task-oriented dialogues, and a two-stage method to train UniDS. To the best of our knowledge, this is the first study towards an end-to-end unified dialogue system. Experiments show that UniDS performs comparably with state-of-the-art chit-chat dialogue systems and task-oriented dialogue systems without adding extra parameters to current chit-chat dialogue systems. More importantly, the proposed UniDS achieves good switching ability and shows better robustness than pure task-oriented dialogue systems. Although question answering (QA) is not considered in the proposed UniDS, as an initial attempt our explorations may inspire future studies towards building a general dialogue system.

Ethical Considerations

We notice that some chit-chat utterances generated by the proposed UniDS may be unethical, biased, or offensive. Toxic output is one of the main issues of current state-of-the-art dialogue models trained on large naturally occurring datasets. We look forward to further progress in the detection and control of toxic outputs.
Discovery of a stable expression hot spot in the genome of Chinese hamster ovary cells using lentivirus-based random integration

Abstract

The conventional method for constructing stable expression cell lines is based mainly on random integration. One drawback of random integration is that the target gene may be integrated into a heterochromatin region or an unstable region of chromatin, requiring multiple rounds of selection to obtain desirable expressing cell lines. Rational cell line construction can overcome this shortcoming by integrating transgenes specifically into a stable hot spot within the genome. As such, the discovery of novel effective hot spots becomes critical for this new method of cell line construction. Here we report a practical method for discovering new stable hot spots through random integration of lentivirus, and we describe a thorough study of a hot spot located on scaffold NW_006880285.1. The expression stability of this hot spot was verified by detecting Zsgreen1 reporter gene expression over more than 50 passages. When the cells were adapted to suspension culture, they continued to express the Zsgreen1 reporter gene, and the suspension culture stably expressed the reporter gene for an additional 50 passages. In addition, cells with the NGGH gene inserted into the same hot spot were also able to stably express the respective protein over 50 passages. In summary, this research offers an easy, new method for researchers to identify stable hot spots within the Chinese hamster ovary (CHO) genome on their own, thus contributing to the development of site-specific integration studies in the future.

Introduction

For over three decades, Chinese hamster ovary (CHO) cells have been the main workhorse of the biopharmaceutical industry [1]. There are many reasons why CHO cells have been chosen as the dominant commercial protein manufacturer, including safety considerations, the easy transfer of heterologous genes into CHO genomes, adaptation to serum-free media, fast and robust cell growth, and the ability to express recombinant proteins with human-like post-translational modifications [2-4].

Establishing a CHO expression cell line through conventional methods is time-consuming, mainly because of the unwanted phenotypic heterogeneity caused by the position effect [5,6]. The position effect, which refers to the influence of a gene's chromosomal location on its activity, has been recognized for over 90 years [7] and has been influential in the field of genomic engineering [5]: previous research revealed that certain genomic domains can exert a general activating or attenuating influence on the protein expression level of embedded genes [8]. Indeed, the position effect plays a key role in triggering the instability often observed in the conventional construction of cell lines [6]. Site-specific integration (SSI) of transgenes into stable hot spots, however, is generally believed to overcome the phenotypic heterogeneity caused by the position effect and to maintain long-term expression stability [1,3,6,9,10]. Although targeting a gene of interest (GOI) into a stable hot spot appears promising, discovering such a hot spot can be challenging: considering its potential commercial value, information pertaining to stable hot spots has not been easily accessible to the general public.
On the one hand, if researchers have access to highly expressing cell lines, targeted locus amplification (TLA) can be applied to identify new integration sites. TLA selectively amplifies and sequences entire genes on the basis of the crosslinking of physically proximal sequences, and as such it can be applied to discovering the insertion sites of transgenes [11]. After identifying hot spots, researchers still need to screen out the final hit by targeting a GOI into the different spots. On the other hand, if researchers have no access to stably expressing cell lines, a random integration screening method can be applied; according to previous research, lentivirus is a good candidate for discovering new hot spots [12].

In this study, we used a new method to identify potential targeting sites based on Zsgreen1 protein screening derived from lentivirus infection. We describe one hot spot in detail and passaged the corresponding model cell line extensively to test the stability of the insert. Finally, the model cell line was adapted to suspension culture, and its potential for industrial application was carefully explored. The method described here provides a new way for researchers to discover stable hot spots of their own for SSI-related studies.

The titer calculation method was as follows: HEK-293T cells were seeded in a 96-well plate, and the cells in each well were infected with serially diluted lentivirus. Three days later, we chose the well with a 10-30% rate of green-fluorescent cells to determine the lentiviral titer, calculated as:

Titer (TU/mL) = (cell number × fluorescence rate × 1000) / original lentivirus volume used (µL).

Cell culture, lentivirus infection and stable cell line construction

CHO-K1 cells were obtained from ATCC and cultured in Ham's F12K medium (Thermo Fisher Scientific, Waltham, MA) supplemented with 10% FBS (Thermo Fisher Scientific) at 37 °C in a 5% CO2 incubator. Cells were seeded on 6-well plates one day before lentivirus infection. The next day, lentivirus was thawed on ice and mixed with 1 mL of fresh medium, and the old medium in each well was replaced by this lentivirus suspension. Cells were incubated with the lentivirus mix for 4 hours, after which another 1 mL of medium was added to each plate. The following day, all media were replaced with fresh medium. After 72 hours of infection, pools were single-cell-sorted by FACS (fluorescence-activated cell sorting) based on fluorescence intensity and seeded onto 96-well plates.

Genome walking

Genomic DNA was isolated and purified using NucleoSpin Tissue and NucleoSpin Gel and PCR Clean-Up kits (Clontech, Mountain View, CA), respectively. The prepared genomic DNA samples were then separately digested with three different restriction endonucleases, DraI, SspI, and HpaI (Clontech), at 37 °C overnight. The digested products were purified with NucleoSpin Gel and PCR Clean-Up and ligated to a genome walker adaptor (Clontech) at 16 °C overnight to make libraries. All three libraries were amplified using the Advantage 2 PCR Kit (Clontech). The primary PCR was carried out using a two-step method (5× 94 °C for 25 s, 72 °C for 3 min; 20× 94 °C for 25 s, 67 °C for 3 min; 1× 67 °C for 7 min) with the primer set AP1 and LSP1 (Supplemental Table S2). The primary PCR products were used as template for the secondary PCR, which was run under the same conditions except for the use of another primer set, AP2 and LSP2 (Supplemental Table S2). The secondary PCR products were all sequenced.
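As an aside to the titer calculation described earlier in this section, the formula translates directly into code; the example numbers below are illustrative and merely reproduce the ~10^8 TU/mL order of magnitude reported later.

```python
def lentiviral_titer(cell_number, fluorescence_rate, virus_volume_ul):
    """Titer (TU/mL) from the well showing a 10-30% rate of fluorescent cells:
    TU/mL = cell number x fluorescence rate x 1000 / virus volume (uL)."""
    return cell_number * fluorescence_rate * 1000 / virus_volume_ul

# e.g. 1e4 cells per well, 20% fluorescent, infected with 0.02 uL of virus:
print(f"{lentiviral_titer(1e4, 0.20, 0.02):.1e} TU/mL")  # 1.0e+08 TU/mL
```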
Adaptation to suspension culture

The culture medium of the original 2C3 model cell line was gradually replaced by the serum-free media M2 + M4 (1:1) (Kangju, Suzhou, China). When the FBS percentage reached 0, cells were transferred to a bottle on a rotary shaker and cultured for another 3-4 weeks at 100 rpm. If dead cells appeared and the cell concentration fell below 10^6 cells/mL, adherent cells were added to the bottle to maintain a minimum concentration of 10^6 cells/mL.

Knock-in cell line construction and PCR verification

Cells were transfected with three expression plasmids (Supplemental Table S1): Cas9, an sgRNA targeting the integration site, and a corresponding donor plasmid (molar ratio 1:1:1). The donor plasmid (Supplemental Figure S1A and Table S2) was designed to harbour 600 bp homology arms lying immediately adjacent to the 23 bp Cas9-cleavable sgRNA sequence (Supplemental Table S3). A puromycin resistance gene cassette together with a glucagon-like peptide 1 (GLP-1)-human serum albumin fusion protein (NGGH) gene cassette were placed within the homology arms, and a copGFP gene cassette was placed outside the homology arms to detect any random integration event. Once the NGGH gene in the donor plasmid was precisely integrated into the hot spot locus, cells would lose green fluorescence and express the NGGH gene together with the puromycin resistance gene (Supplementary Figure S1A). Thus, only cells with no green fluorescence were single-cell-sorted by FACS and seeded in 96-well plates.

For each sample, 4 × 10^5 cells were transfected with a total of 3 µg of DNA using Lipofectamine 3000 (Thermo Fisher Scientific). Stable cell pools were generated by adding 5 µg/mL puromycin to each well as selection pressure on day 3. After 14 days of selection, cells were detached with TrypLE (Thermo Fisher Scientific) and resuspended in PBS. The cell pool was sorted on a MoFlo XDP FACS cell sorter (Beckman Coulter, Brea, CA), with one cell seeded per well in 200 µL of medium in 96-well plates. After 10 days of growth, monoclonal cell lines were passaged to 12-well plates. When the cells in each well approached 100% confluence, they were detached with TrypLE, and genomic DNA was extracted from the pellets and used as template for the subsequent PCR reactions. All PCR reactions were conducted with Phanta Max Super-Fidelity DNA Polymerase (Vazyme, Nanjing, China). The reaction conditions for the 5′/3′ junction PCR were: 95 °C for 3 min; 30× (95 °C for 15 s, 66 °C for 15 s, 72 °C for 2 min); 72 °C for 5 min. The out-out PCR was carried out under the following conditions: 95 °C for 3 min; 30× (95 °C for 15 s, 66 °C for 15 s, 72 °C for 6 min); 72 °C for 5 min. The 5′/3′ junction PCRs were used to verify whether the target gene cassette was inserted site-specifically, and the out-out PCR was used to test whether the cell line was homozygous or heterozygous. For the nested PCR, which used the out-out PCR products as template, the reaction conditions were 95 °C for 3 min; 30× (95 °C for 15 s, 65 °C for 15 s, 72 °C for 2 min); 72 °C for 5 min. The 5′/3′ junction PCR products and the nested PCR products were sequenced.

Reporting lentivirus construction and titer detection

Normally, a lentivirus expresses its reporter gene when integrated into the genome. Here we used Zsgreen1 as the reporter gene for further research.
Once the lentivirus was successfully constructed, its titer was estimated based on Supplementary Figure S2, where the fluorescence rate was approximately 20%; the titer was ~10^8 TU/mL according to the formula given above.

Highly expressing model cell line construction and identification of the insertion site

To identify potential hot spots within the CHO-K1 genome, we set up a high-throughput screening method. CHO-K1 cells were infected with the lentiviral construct harbouring the Zsgreen1 gene, driven by the ubiquitously expressed cytomegalovirus (CMV) promoter, at a low multiplicity of infection (MOI = 0.3) to favour single integration. FACS was applied to isolate single Zsgreen1-positive cells with high fluorescence intensity. The rate of Zsgreen1-positive cells was 4.2%, and only cells ranking within the top 10% of fluorescence intensity were single-cell-sorted and seeded in 96-well plates for further expansion. The monoclonal cells were monitored under a fluorescence microscope to eliminate non-stably positive cells and cells with a slow growth rate, and the qualified colonies were expanded step by step. Genomic DNA containing the viral constructs was extracted and further analyzed by genome walking to identify all potential viral integration sites within the genome [14-16]. The flow diagram of the overall high-throughput screening process is illustrated in Figure 1.

Here we describe the specific integration site of one monoclonal cell line (2C3). The 2C3 colony image (Figure 2A) was captured 6 days after FACS sorting: the colony was round with a strong fluorescence signal, and massive numbers of cells were observed within it. This cell line was therefore chosen for further study to identify the specific viral integration site. The lentivirus integration site of the 2C3 cell line was identified by genome walking. After the secondary PCR, three samples from the different libraries were run on a 1% agarose gel (Figure 2B). There was only one band in each of the three lanes, indicating that only one lentivirus copy was inserted in the genome of the 2C3 cell line. The PCR products were further sequenced with primer AP2 (Supplemental Table S2), and the sequencing results from the three libraries matched one another (Supplemental Table S4). The hot spot was located at position 1235357 within the scaffold NW_006880285.1, as determined by BLAST analysis at NCBI. In addition, the hot spot lies in a copy number variation (CNV)-stable region, with a CNV value of 2.0 according to the results of Kaas et al. [17]. Hence, this hot spot could be considered stable from a CNV point of view and was worth further investigation.

Identifying new stable hot spots using a lentivirus with a fluorescence tag has many advantages. First, the integrated form of HIV-1 DNA is traditionally considered to be responsible for viral gene expression [18], which provides a good way to link chromatin position with the expression level of inserts; plasmid-based screening, by contrast, can be confounded by transient expression and was therefore not chosen here. Second, the Zsgreen1 reporter gene allows high-throughput screening by FACS, unlike other common reporter genes such as β-galactosidase. Third, this hot spot identification method may be preferred by researchers over the TLA method, which is complex and expensive.
Finally, the fluorescent model cell line itself could be a good tool for other research, such as media optimization or CRISPR/Cas9-based genetic screening to further improve the expression level [19-21]. By applying this lentivirus-based screening method, we successfully discovered a number of integration sites, which attests to its practicability. Moreover, we tested the stability of the selected 2C3 cell line in order to evaluate its potential for prospective industrial application.

Stability assay of the adherent model cell line

Stability is a critical issue for the CHO industry. Based on previous research [22], the unstable expression of CHO cells has been attributed to both genetic factors, such as gene copy loss in the proliferating CHO cell population, and epigenetic factors, such as promoter methylation. To further verify the site's potential for industrial application, we tested the stability of the fluorescence signal in the model cell line over passages. The model cell line was cultured for over 50 passages, and the fluorescence signal at different passages was detected by flow cytometry. The fluorescence rates of cells at both passage 1 and passage 50 were 100% compared with the parallel control sample (Figure 3A-C). Hence, the site identified in the model cell line can be considered a stable integration site, and it was worth further exploring its potential for industrial application.

The parallel control sample was obtained by integrating another, non-Zsgreen1 gene into the same spot as in 2C3 via CRISPR/Cas9 technology; here we chose NGGH [13] as the targeting gene. A total of three hits were obtained. All hits could be amplified by both 5′ junction and 3′ junction PCR (Supplementary Figure S1B). The molecular weight of all the 5′ junction amplicons was ~1.7 kb, matching the design (Supplementary Figure S1A), and the molecular weight of all the 3′ junction amplicons was ~1.5 kb, likewise matching the design. The 5′/3′ genome-donor boundaries were sequenced to verify the precise integration of the donor plasmid into the genome; indeed, the sequencing results confirmed the precise integration of the targeting cassette into the hot spot locus (Supplementary Figure S1C). Out-out PCR revealed that all three cell lines were heterozygous and correctly targeted with the intact target integration unit, generating amplicons of the expected size (wild-type amplicon of 1.2 kb plus the 4.7 kb target integration unit ≈ 5.9 kb; Supplementary Figure S1D). The out-out PCR products (~5.9 kb) were purified and used as templates for a series of nested PCRs to verify that the correct NGGH sequence had been targeted into the genome. The sequencing results confirmed the complete and correct integration of the NGGH gene sequence into the hot spot within the genome (Supplemental Table S5). Cell line 2 was used as the parallel control sample mentioned above.

Adaptation to suspension culture and stability assay of suspension cells

CHO cells are normally adapted to serum-free suspension culture for potential scale-up in industry. We therefore tested the model cell line's expression performance after its adaptation to serum-free suspension culture to further confirm its potential for industrial application. We first adapted the cells to suspension.
When the cell density could double within a day, the cells were considered successfully adapted to suspension culture. In our experiments, the cell density reached 1.98 × 10^6 cells/mL on day 2 from an original density of 10^6 cells/mL on day 1. We then diluted the cell suspension back to 10^6 cells/mL, and the density reached 2.08 × 10^6 cells/mL again by day 3. Continuous observation over a longer period verified the successful adaptation to suspension culture (Supplementary Figure S3A). The parallel control sample underwent the same adaptation process, and its cell density likewise doubled every day (Supplementary Figure S3B).

The fluorescence rates at three different passages of the suspension model cell line, together with the parallel control, were detected with the FACS cell sorter. The fluorescence rate at passage 1 after the adaptation process was close to 98% compared with the parallel control sample (Figure 4A), and the samples from passage 25 and passage 50 both maintained fluorescence rates of around 93-94% (Figure 4B and C). Thus, the process of adaptation to suspension culture did not significantly affect the fluorescence rate of the model cell line. These results further verified the stability of this hot spot and revealed its potential for future industrial application. The NGGH expression level of the suspended parallel control was also measured: the NGGH protein concentration in the supernatant remained stable at around 15-17 mg/L over 50 passages (Figure 5). This supports the conclusion that a heterologous gene can be stably expressed once integrated into this stable hot spot.

Stability is critical in the biopharmaceutical industry, and the conventional method of cell line construction based on random integration cannot guarantee stable cell lines every time [22], because some GOIs become inserted into unstable regions of the genome. The stability of the hot spot we discovered was demonstrated in several ways: first, the model cell line 2C3 displayed great stability for over 50 passages (Figure 3); second, when the model cell line was adapted to suspension culture, which brought about significant changes to the extracellular environment, almost all cells maintained the green fluorescence signal (Figure 4B); third, these suspension cells did not lose any fluorescence over another 50 passages (Figure 4C-D); and finally, the heterologous gene (i.e. NGGH) integrated into the stable hot spot was also stably expressed for over 50 passages (Figure 5). Interestingly, all these laboratory data corresponded well with the stability predictions based on the CNV value [17].

As mentioned above, the heterologous gene was stably expressed when inserted into the hot spot, whether the cells were in adherent or suspension culture. As a next step, it is therefore worth using CRISPR/Cas9-based technology to target more transgenes into the hot spot to test whether corresponding stable expression cell lines can be obtained. Genes of different sizes, from the insulin gene to antibody light- and heavy-chain genes, should all be tested. Once expression stability is verified, other optimizations, such as improving gene expression levels, should be considered in the future.

Conclusions

In summary, we established a simple and efficient screening method to identify a new hot spot within the CHO genome and further verified the stability of this hot spot.
This method can serve as a guide for identifying more stable hot spots and for applying SSI to construct new expression cell lines in the future.
Systematic evidence maps as a novel tool to support evidence-based decision-making in chemicals policy and risk management

Background: While systematic review (SR) methods are gaining traction as a means of providing a reliable summary of existing evidence for health risks posed by exposure to chemical substances, it is becoming clear that their value is restricted to a specific range of risk management scenarios, in particular those which can be addressed with tightly focused questions and can accommodate the time and resource requirements of a systematic evidence synthesis.

Methods: The concept of a systematic evidence map (SEM) is defined and contrasted with the function and limitations of systematic review in the context of risk management decision-making. The potential for SEMs to facilitate evidence-based decision-making is explored using a hypothetical example in risk management priority-setting. The potential role of SEMs in reference to broader risk management workflows is characterised.

Results: SEMs are databases of systematically gathered research which characterise broad features of the evidence base. Although not intended to substitute for the evidence synthesis element of systematic reviews, SEMs provide a comprehensive, queryable summary of a large body of policy-relevant research. They provide an evidence-based approach to characterising the extent of available evidence and support forward-looking predictions or trendspotting in the chemical risk sciences. In particular, SEMs facilitate the identification of related bodies of decision-critical chemical risk information which could be further analysed using SR methods, and highlight gaps in the evidence which could be addressed with additional primary studies to reduce uncertainties in decision-making.

Conclusions: SEMs have strong and growing potential as a high-value tool for the resource-efficient use of existing research in chemical risk management. They can be used as a critical precursor to the efficient deployment of high-quality SR methods for characterising chemical health risks. Furthermore, SEMs have the potential, at a large scale, to support the sort of evidence summarisation and surveillance methods which would greatly increase the resource efficiency, transparency and effectiveness of regulatory initiatives such as EU REACH and US TSCA.

Introduction

Systematic review is the epitome of the evidence-based approaches that have revolutionized clinical decision-making. The methodology was developed in response to medical practitioners' need to distill clear and reliable conclusions about the efficacy of clinical interventions from an evidence base seemingly full of contradiction, heterogeneity and bias (Chalmers et al., 2002; Garg et al., 2008; Higgins and Green, 2011). This need parallels that of chemicals policy, where conclusions regarding the safety of exposure to a chemical substance must be synthesised from a significantly more disparate evidence base (Whaley et al., 2016). Consequently, interest in the application of systematic review to regulatory decision-making contexts within chemicals policy and wider environmental health is growing.
This is evidenced by the increasing number of systematic reviews published in the field (Whaley and Halsall, 2016), the establishment of collaborations and workgroups dedicated to the development and dissemination of environmental health systematic review methodology (Morgan et al., 2016; NTP, 2015; Woodruff and Sutton, 2014), and the adoption and use of systematic review by regulatory bodies such as the United States Environmental Protection Agency (US EPA) (EPA, 2018; The National Academies of Sciences, 2017) and the World Health Organization (Mandrioli et al., 2018).

Growing interest in systematic review approaches is indicative of the evolutionary journey chemicals regulation follows as it attempts to reconcile past oversights with present-day knowledge and mounting future challenges. A number of legacy chemicals released to market under past regulatory workflows persist on the market without risk assessment. Meanwhile, an overwhelming number of new chemicals are presented for assessment each year while awaiting release to market under modern regulatory workflows (European Commission, 2007; Pool and Rusch, 2014). This amounts to increasing strain on regulatory processes, which must operate without a proportionate increase in resource availability.

While providing and/or gathering relevant data for new chemicals now forms a vital part of risk assessment, advances in analytical techniques and scientific understanding continue to broaden the scope of these data beyond the realms of traditional in vivo toxicity testing. The broad scope and increasing availability of such data present challenges for decision-makers tasked with handling, appraising and interpreting them for risk assessment. Failure to have a transparent structure for considering all relevant data appropriate to risk assessment (e.g. a stepwise approach for addressing in vitro data following evidence from in vivo studies, or comprehensive assessment of all in vitro data) reduces stakeholder confidence and has the potential to bias regulatory decisions. Studies reporting results amenable to the observer bias of independent assessors, or to the vested interests of non-independent assessors, may be cherry-picked from the wider evidence base. Even where all relevant studies are considered, the role that scientific judgement plays in the process of appraisal and interpretation of data can lead to conflicting conclusions between different regulatory bodies (Whaley et al., 2016). Transparency in identifying both the evidence and the scientific judgement applied is critical to establishing trust in decision-making.

Systematic review offers a framework for piecing together these varied data in a transparent and resource-efficient manner, such that a more complete picture of toxicity can inform regulatory decision-making. It details methodology for ensuring all such data are identified, gathered and considered, preventing the cherry-picking of studies that only provide part of the complete toxicity profile for a chemical, or that present biased or unrepresentative results. As well as reducing bias, all steps of the methodology are designed to maximise transparency. A well-conducted and well-reported systematic review effectively outlines the research question, the approach taken to address the question, the evidence considered, and the scientific judgement applied in reaching conclusions. Thus, differences across reviews or regulatory bodies can be effectively identified and explained.
Considering the results of all relevant studies makes maximum use of existing data and increases the precision of a systematic review's conclusions. This allows reliable decisions to be made without the commissioning of redundant and repetitive primary research, or conversely identifies specific knowledge gaps at which smart testing strategies can be focused.

Although the aim of systematic review (i.e. to transparently and robustly synthesise all available data in answer to a research question) aligns well with the needs of chemicals policy, conflicts between the practicalities associated with the methodology and those associated with regulatory frameworks hinder its wider uptake and/or the production of reviews that are of sufficient quality to produce trustworthy results (Kelly et al., 2016; Marshall et al., 2018; Reynen et al., 2018). Key areas of conflict include the time and resource intensity of the systematic review process, the scope of the research questions addressed by the methodology, and the ease with which the output of a systematic review can be accessed, interpreted and updated. Further, the fluid and rapidly expanding nature of scientific research and the chemicals industry creates a constant and pressing need for evidence surveillance, such that regulators can keep apace of the growing body of scientific literature and update regulation accordingly. This challenge demands a responsive and living solution beyond the reach of current systematic review practice.

In this manuscript, we briefly outline systematic review methodology to illustrate its strengths and highlight the transferable barriers which have been suggested as preventing its wider uptake in other fields (Oliver and Dickson, 2016). We discuss how these difficulties may be addressed through the novel implementation of systematic evidence mapping in environmental health. Systematic evidence maps (SEMs) provide a broad and comprehensive overview of an evidence base (Haddaway, Bernes, Jonsson, & Hedlund, 2016; James et al., 2016). They facilitate the identification of trends which can be used to inform more efficient systematic review, or more targeted primary research. The methodology behind SEMs, and how this might be adapted to suit the demands and limitations of regulatory decision-making in chemicals policy, is discussed, along with the advantages and future potential of SEMs as a fundamental tool for evidence-informed risk management and decision-making.

The application of systematic review methods in chemical risk management

The utility and advantages of systematic review methods for advancing chemical risk assessment have been extensively documented elsewhere (Aiassa et al., 2015; Hoffmann et al., 2017; Hooijmans et al., 2012; Rooney et al., 2014; Vandenberg et al., 2016; Whaley et al., 2016; Woodruff and Sutton, 2014). Systematic review provides a transparent and reproducible approach to summarising and critically assessing existing evidence on potential health risks associated with exposure to a chemical substance. These transparent methods serve to document the basis of scientific judgments, minimising the potential for bias and error presented by more traditional narrative approaches in which opinion is not clearly distinguished from evidence. The key features of a systematic review, and their primary advantages, are summarised in Table 1. Specific methodological decisions concerning each of these key features, from the definition of the PECO statement to the chosen synthesis approach, are specified in a pre-published protocol.
However, with the methodology's pursuit of rigor and comprehensiveness comes a significant demand for time and resources. Evidence from medical systematic reviews indicates that it takes on average approximately 70 weeks to progress a systematic review from protocol registration in the PROSPERO registry (National Institute for Health Research, 2018) to publication of the final systematic review (Borah et al., 2017). Variance around this average is wide (from 6 to 186 weeks), and the person-hours and planning time invested prior to protocol registration are not considered in these estimates. More recent analysis of environmental science systematic reviews estimates an average of 164 (full-time-equivalent) person-days for completion of a systematic review (Haddaway and Westgate, 2018). In the absence of comparable evidence in the field of chemical risk assessment, these figures agree with anecdotal reports of the average systematic review taking around 12 to 18 months to progress from inception to publication.

A significant factor contributing to the length of the systematic review process is the manual way in which each step of the methodology is conducted. All studies returned by a systematic search strategy are generally screened by human reviewers, in duplicate, one by one, before included studies undergo a similarly manual data extraction and critical appraisal step. Automation has the potential to deliver significantly reduced workloads and subsequent demands for time and resources (O'Mara-Eves et al., 2015). Pending further advances, the time and resource demands of systematic review are at conflict with the intense time/resource pressure under which regulatory processes must operate (Innvaer et al., 2002; Oliver and Dickson, 2016).

Also at conflict with the demands of regulatory decision-making is the narrow scope of systematic reviews, which are designed to address a specific and clearly defined objective or research question. To ensure a manageable, relevant and focused review, suitable research questions are typically closed-framed, such that the review can synthesise a single, coherent answer. These closed-framed questions are well suited to the decision-making contexts of medicine (the field from which systematic reviews originate), but may be difficult to apply to chemical risk assessment. The web of interlinked endpoints, potential variation in sensitive populations, uncharacterised low-dose effects, and unknown behaviour of a chemical in the environment or in contact with other chemicals can mean that the decision-critical information which can be supplied by a tightly focused research question is often not readily apparent in chemical risk assessment contexts. Even where such a question can be devised, and the answer reached through systematic review, the specificity of the research problem and its resolution are likely to comprise only part of the much broader range of unaddressed decisions and information requirements faced by risk managers.

Systematic evidence maps for chemical risk management

In light of the time and resource intensity of current systematic review practice, identifying the most informative research questions is important for maximising the value and efficiency of systematic reviews in regulatory decision-making. Investing resources in systematic review as a means of addressing specific research questions is inefficient if there is a lack of data available for answering those questions.
Devising specific research questions therefore becomes a reactive process, rather than a proactive one. This is at odds with the goals of chemicals policy, which aims to predict and prevent harm as a result of exposure to chemical substances. Decision-makers therefore need to monitor and understand the evidence base as a whole, such that emerging trends or issues of potential concern can be identified and investigated in a timely manner. Identifying trends in the evidence base, including evidence clusters and evidence gaps, facilitates the formulation of proactive research questions by relevant stakeholders. Reviewers need not rely on environmental health outcomes becoming infamous or epidemic as an indicator of sufficient evidence for an efficient and valuable synthesis. Instead, trends in the availability of evidence ensure prevention of synthesis attempts for which there is insufficient data (or for which syntheses already exist) and promote the targeting of primary research efforts at evidence gaps.

This kind of evidence surveillance has traditionally been the domain of scoping reviews. These reviews are often narrowly focused precursors to systematic reviews. Thus, a specific systematic review question has already begun to be framed, and the literature scoped for sufficient data to address/focus it, rather than vice versa (e.g. Bolden et al., 2017). Scoping reviews also typically present their findings in tabular format. This compromises the accessibility of the evidence they scope, and makes them ill-suited for applications beyond determining whether there is sufficient literature to merit a systematic review (Grant and Booth, 2009). Instead, the introduction of systematic evidence mapping, a methodology recently adapted from the social sciences (Clapton et al., 2009) for environmental management (James et al., 2016), has the potential to facilitate evidence surveillance in a transparent and reproducible manner, providing a broader understanding of the extant evidence base through interactive outputs.

The methodological steps involved in constructing a systematic evidence map are similar to those involved in the initial stages of producing a systematic review (see Table 2, adapted from James et al., 2016), whereby a systematic search strategy is employed to collate evidence, which is subsequently screened for relevance before undergoing data extraction. The key difference between the methodologies comes in the form of their aims and subsequent outputs. Systematic reviews collate a relatively narrow subset of the evidence base to answer a specific research question. Conversely, SEMs do not attempt to answer a specific, closed-framed research question, and are instead guided by much broader research objectives. SEMs collate a sufficiently broad subset of evidence such that many different specific research questions might be formulated from, and addressed with, a single systematic evidence map. SEMs are concerned with characterising the evidence base within a given research area, such that the availability, type and features of the evidence can be clearly mapped and explored through data visualization. To facilitate this exploration, the output of a SEM takes the form of a queryable database (Clapton et al., 2009; James et al., 2016) as opposed to the lengthy and technical documents which form the main output of a systematic review.
The database format allows users to query the evidence base according to their research interests, providing functionality which is absent from systematic review documents and their associated static data tables. This format addresses the inability of systematic evidence mappers to predict what the specific research interests of users might be, by providing the option to search for, and select, the specific subsets of data relevant to a particular use case. Whereas systematic reviews present users with select information from included studies (i.e. data relevant to addressing the research question), SEMs aim to extract a broader range of data from included studies and aim to maintain the native format of these data. In this sense, the search and screening processes are the steps of SEM methodology most affected by its research objective or context, as the focus of data extraction remains broad regardless. This is in contrast to systematic review, where all steps are heavily influenced by the research question.

The data extracted for inclusion in a SEM database can then be flexibly categorised, or "coded", to facilitate comparison of an otherwise heterogeneous evidence base. The resolution of coding can be adapted to suit the needs of regulators. For example, coding the species under investigation in a study might use categories such as "Sprague-Dawley", "Rat", "Rodent" or "Mammal"; or may use all of these categories such that the data can be interrogated in successively deeper levels of detail. As well as facilitating variably resolved interrogation of the evidence base, coding plays a significant role in systematic mapping's amenability to updating. Use of universal, standardised ontologies for coding, such as the Unified Medical Language System (UMLS) (U.S. National Library of Medicine, 2016), offers a degree of consistency that future users can readily exploit when updating a map (Baker et al., 2018). These ontologies also offer interoperability between SEMs, creating the potential to expand and merge evidence maps, a feature likely to become increasingly attractive as the scope of evidence relevant to assessing toxicity grows along with our understanding of its interconnectedness.

In current practice it is common to present users with SEMs that house only coded information, for simplicity and ease of access (e.g. Papathanasopoulou et al., 2016). However, this conflates data extraction with coding. Maintaining the native format of extracted data and applying coding on top of it therefore ensures maximum transparency in SEMs. This additionally promotes the ease with which a map can be updated as advancing scientific understanding calls for coding categories to be redefined.
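To make the layering of codes over native data concrete, the following sketch shows one possible record structure in which the raw extracted value is kept verbatim and coded at several resolutions, so the same record answers queries posed at the strain, species or class level. The field names and the record itself are hypothetical illustrations, not a published SEM schema.

```python
# A hypothetical SEM record: native data preserved, codes layered on top.
record = {
    "study_id": "S-0001",
    "species_raw": "male Sprague-Dawley rats, 8 weeks old",  # native format, kept verbatim
    "species_codes": ["Sprague-Dawley", "Rat", "Rodent", "Mammal"],  # variably resolved coding
}

def matches(rec: dict, term: str) -> bool:
    """True if the record is coded with the query term at any resolution."""
    return term in rec["species_codes"]

for term in ("Sprague-Dawley", "Rodent", "Fish"):
    print(term, matches(record, term))  # True, True, False
```

Because the raw string is retained alongside the codes, the coding layer can be regenerated if categories are later redefined, which is the transparency and updatability point made above.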
As with systematic reviews, the data extraction and coding steps of a SEM represent a manual workload. Presenting only coded data may offer a saving in the resource intensity of the process. However, in maintaining a transparent link between raw extracted data and the code used to categorise it, SEMs offer a gateway to automation, whereby controlled vocabulary ontologies can be used to train machine learning algorithms to automatically identify, extract and code data from the literature. Pending such advances, the time required to conduct a fit-for-purpose systematic map in environmental health is uncharacterised. Evidence from the wider environmental sciences (Haddaway and Westgate, 2018) suggests that, on average, systematic maps take longer to complete than systematic reviews. This is due to the generally larger number of studies they manually collate, screen and extract data from. While maps might present a larger upfront cost in terms of time, their multipurpose nature has the potential to offer more long-term resource savings compared with exclusively conducting systematic reviews. This is because a single systematic evidence map may continue to be useful to several different aspects of the regulatory workflow (see Sections 4 and 5 below).

As the purpose of a SEM is to characterise the evidence base, there is no risk of allocating resources to the production of an inconclusive output, as is the case for "empty" systematic reviews (systematic reviews which ask research questions for which there is too little included evidence for them to reach a conclusion or be supportive of a decision). In fact, systematic evidence maps may reduce the resource strain associated with systematic reviews. A SEM's broad overview of the evidence base allows fast identification of topics for which there is sufficient data to warrant a full systematic review. The SEM itself, if conducted to sufficiently rigorous standards, can even replace the literature search and screening process of a systematic review. As SEMs present all available relevant evidence on a broader topic, such as the "health effects of bisphenol-A" (obtained through a systematic but less specific search strategy), filtering this information according to the PECO statement of a systematic review may act in an equivalent manner to approaching the literature with a more focused search strategy in the first instance. The pre-screened nature of this subset is likely to reduce the number of false positive results, facilitating faster syntheses. As advances in machine learning facilitate more highly resolved data extraction processes, future SEMs may even store enough detail for them to form the basis of meta-analytical syntheses. If all data contained within study reports were extracted and indexed within a SEM, there would be no data required specifically for syntheses which could not be found in the SEM. This would allow SEMs to form the dataset on which meta-analytical and predictive toxicological models are based, the results of which may additionally be incorporated into the SEM itself, facilitating more transparent, resource-efficient and easily updated syntheses.

Exploring the evidence base with SEMs

Systematic evidence mapping facilitates the identification of trends which are informative for many risk management scenarios. To illustrate the flexibility and potential utility of SEMs' trendspotting capacity, this section highlights the type of data visualization and exploration possible through querying subsets of information in a SEM database. Specifically, "priority setting" (National Academy of Sciences, 1983; Pool and Rusch, 2014), the process by which regulators identify the most pressing chemical substances for assessment and regulation (e.g. from a pool of unassessed legacy chemicals), is presented as context for the exploration of a hypothetical SEM. Several factors are relevant to prioritizing individual chemicals for assessment, broadly ranging from recorded levels of exposure to evidence for toxicity. Underlying these broad considerations are several more specific factors, such as the bio-accessibility of the chemical, the relevance of its toxicity evidence for predicting health risks in human populations, etc.
In order to make the most efficient use of resources and of the systematic review process, decision-makers require access to a means of comparing these features to justify prioritization of a particular chemical for review/risk assessment. This is the role of a SEM, which may be constructed with the aim of identifying and characterising the risk-assessment-relevant evidence for a broader group of legacy chemicals, e.g. flame retardants. Once data have been extracted and coded from the literature, the SEM can be explored with a succession of queries of increasingly narrow focus, each considering a narrower subset of the evidence base than the last, such that a research question appropriate for more detailed synthesis is resolved at the end of a process which begins with a very broad research objective. This is illustrated in Fig. 1 using the hypothetical context of priority setting with a group of arbitrary chemicals, in this case flame retardants (FRs) A-F. Queries 1 and 2 depicted in Fig. 1 explore the frequency with which the literature observes a flame retardant in a coded location category (e.g. human blood, human breast milk, house dust, etc.) and the frequency with which the literature observes an association between a flame retardant and a coded toxicity category (e.g. reproductive toxicity, neurotoxicity, etc.).

However, it is important to distinguish the results of SEM queries from synthesis. SEMs only present what has been studied in the literature; they cannot present what has not been studied, and do not always assess the risk of bias of the findings they report. Thus, while a high number of observations of flame retardants A and B in human-relevant locations is a valid trend to explore further, it does not necessarily mean that fewer of the other flame retardants are present in human-relevant locations, but rather that fewer of these flame retardants may simply have been studied at all. Identification of such evidence gaps is equally valid for focusing primary research. For example, the relatively high number of observations of reproductive toxicity for FR F, but comparatively low number of observations of this flame retardant in any exposure locations, might warrant re-analysis of samples or new exposure studies to verify whether exposure to this substance is of concern. The SEM is also sufficiently flexible that different trends can be investigated, and different research questions formulated, based on the priorities of regulators. For example, the number of observations in the literature which found FR D in aquatic environments might spur further investigation into the ecotoxicity of this compound. A single SEM exercise therefore makes efficient use of resources in its potential to meet the varied needs of several end users.
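The narrowing queries described above can be sketched in a few lines over a toy evidence base for the hypothetical flame retardants. The rows, categories and counts below are invented for illustration and do not reproduce the artificially generated data behind Fig. 1.

```python
# Toy SEM: each row is one coded observation extracted from the literature.
observations = [
    {"chemical": "FR A", "location": "human blood",       "toxicity": None},
    {"chemical": "FR A", "location": "human breast milk", "toxicity": None},
    {"chemical": "FR B", "location": "house dust",        "toxicity": "neurotoxicity"},
    {"chemical": "FR D", "location": "river sediment",    "toxicity": None},
    {"chemical": "FR F", "location": None,                "toxicity": "reproductive toxicity"},
    {"chemical": "FR F", "location": None,                "toxicity": "reproductive toxicity"},
]

def count_by(rows, field):
    """Query: frequency of observations per (chemical, coded category)."""
    counts = {}
    for row in rows:
        if row[field] is not None:
            key = (row["chemical"], row[field])
            counts[key] = counts.get(key, 0) + 1
    return counts

# Broad query 1: where has each flame retardant been observed?
print(count_by(observations, "location"))
# Broad query 2: which toxicity categories have been reported?
print(count_by(observations, "toxicity"))
# Note the evidence gap: FR F has toxicity findings but no exposure locations.
```

The point of the sketch is that the same coded rows answer both queries, and that an absence of rows (FR F's missing exposure locations) reads as an evidence gap rather than as evidence of absence.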
The role of SEMs in wider risk management workflows

In addition to priority setting, SEMs have the potential to fill several roles within wider workflows.

Data gathering

Although evidence synthesis methodology can be considered costly in terms of time and resources, this cost can be dwarfed by the equivalent resource demands associated with conducting primary research relevant to assessing the hazards associated with exposure to a chemical, as illustrated with more established examples in the field of medicine (Glasziou et al., 2006). In an effort to manage these demands, reduce the production of research waste, and comply with principles such as the three Rs (European Chemicals Agency, 2018a, 2018b; National Centre for the Replacement Refinement and Reduction of Animals in Research, 2018), a key first step in many regulatory workflows is the identification and gathering of all pre-existing evidence relevant to a specific risk management decision. This can be illustrated in regulatory frameworks such as the European Union's REACH (Registration, Evaluation, Authorisation and Restriction of Chemicals) initiative, which requires registrants to make an attempt to identify all available, pre-existing evidence on the hazards associated with the chemical substance under registration (European Chemicals Agency, 2018a, 2018b). Similarly, REACH imposes a "one substance, one registration" policy, whereby all parties with an interest in registration of a substance must share data, minimising repeat testing. Although promoted in guidance documents (European Chemicals Agency, 2016), the lack of a sufficiently robust methodology for finding, collating, housing and reporting these data leads to poor transparency, and therefore does not remove the potential for cherry-picking of key studies which may not be representative of the evidence base as a whole.

SEMs have the potential to provide this much-needed transparency. The nature of a SEM's output, being a collection of relevant search results and specific information coded from those results, introduces a greater level of accountability for registrants. Studies are identified by registrants as "key", "supporting", etc. based on the perceived relevance, adequacy and reliability of the evidence they provide for a specific endpoint, assessed using "sound scientific judgement" (European Chemicals Agency, 2011). These assignments are aided by application of the Klimisch criteria (Klimisch et al., 1997), a rating methodology criticised for its lack of transparency and failure to consider non-industry sources of evidence (Ingre-Khans et al., 2019). This poor transparency hinders the appraisal of registrants' choices (e.g. of key study), and of the degree to which those choices can be considered representative of the wider evidence base. Using SEM methodology alleviates this issue by requiring registrants to clearly document the efforts of their search and screening process, constructing a database of the pool of evidence considered in their evaluations. Additionally, applying code to the specific extracted study features which influence a decision to assign a study as "key", "supporting", "weight-of-evidence", etc. serves to document the basis for these decisions in a structured and queryable way. As registrants submit SEMs at the level of single substances, these efforts can be merged to build a SEM that spans all registered substances. This facilitates appraisal of registrants' choices of key study in the context of the wider evidence base. The ability to explore trends in the features influencing assignment of key studies may even assist in refining and improving the registration process, as emerging issues or shortcomings can be quickly evidenced.

Problem formulation

Beyond offering improvements in transparency during the data gathering phase, SEMs may be of particular value to the problem formulation stage of regulatory decision-making.
Problem formulation is a prerequisite to conducting a chemical risk assessment, identifying an issue of regulatory relevance around which the assessment will be focused (Solomon et al., 2016). These issues can be subtle and difficult to identify at a sufficiently early stage in the field of environmental health, putting the problem formulation process at risk of focusing on issues of lower severity or significance. By implementing a SEM with a broad (lower-resolution) coding process, but with a key focus on the hierarchy of coded data and the manner in which these data are related, trends in the evidence base can be effectively and efficiently identified. This allows risk assessors to use these broad, coded parameters to reliably identify problems in need of further assessment, either through secondary syntheses (if the SEM presents a sufficiently large evidence cluster) or primary research (if the SEM indicates an evidence gap).

Read-across

Identifying trends in the evidence base may also play a significant role in read-across applications. Read-across allows the toxicologically relevant properties of a chemical to be inferred by comparison with a structurally similar chemical of known toxicological behaviour (European Chemicals Agency, 2017a). Read-across aligns well with the need to make best use of existing evidence (van Leeuwen et al., 2009), and the storage of data in a related manner within a SEM could allow the identification of appropriate read-across scenarios. By filtering an evidence map by outcome features, exposures which behave in a similar manner can be identified and investigated further for chemical similarity and/or shared modes of action. This information can be used to group substances, such that data-rich members of the group can be used to make predictions about data-poor members, without pursuing further primary research (Vink et al., 2010). Conversely, filtering an evidence map by chemical group or structural similarity may allow identification of shared outcomes, of similar relevance to read-across applications.
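As a rough sketch of the outcome-based grouping just described, the snippet below filters a toy evidence map by a shared coded outcome to propose a candidate read-across group. The chemicals and outcomes are invented, and any real grouping would still require assessment of structural similarity and mode of action, as the text above notes.

```python
# Toy evidence map rows: (chemical, coded outcome)
evidence = [
    ("Chemical X", "thyroid hormone disruption"),
    ("Chemical Y", "thyroid hormone disruption"),
    ("Chemical Y", "liver hypertrophy"),
    ("Chemical Z", "liver hypertrophy"),
]

def candidate_group(rows, outcome):
    """Chemicals sharing a coded outcome: a starting point for read-across."""
    return sorted({chem for chem, out in rows if out == outcome})

print(candidate_group(evidence, "thyroid hormone disruption"))
# ['Chemical X', 'Chemical Y'] -> next step: check structural similarity
```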
Evidence surveillance

Once regulation is in place, it is vital that it is kept up to date. Such is the role of the ongoing evidence surveillance phase of regulatory decision-making. Within REACH, registrants are required to update their registration dossiers "whenever new information is available" (European Chemicals Agency, 2017b), such that dossiers are living products. However, a report commissioned by the European Chemicals Agency (ECHA) found that 64% of REACH registration dossiers submitted to ECHA since 2008 have never been updated (Amec Foster Wheeler Environment and Infrastructure UK Limited, 2017). The report details several obstacles experienced by registrants faced with updating dossiers, including technical difficulties, issues of ownership or responsibility for updates among co- and lead registrants, the potentially labour-intensive nature of updating dossiers, and a perception of REACH registration being the "end of a process". Openly accessible and easily updated SEMs may serve to address such obstacles. As the population of a SEM database does not require detailed analysis or complex interpretation of the raw data, SEMs could be amenable to automation. Technological advances in text mining and artificial intelligence might assist the automatic screening, extraction and coding of new information as it is published, based on the data fields and coding ontologies used to populate the original SEM. Although some years away from implementation, application of SEM methodology in the interim will promote fast uptake of such technological advances.

Conclusion

Systematic evidence mapping presents a transparent and robust methodological framework with which to assess the evidence landscape, from the level of individual chemical risk management and innovation to regulatory decision-making in chemicals policy. The broad scope of SEMs lowers the barrier to evidence synthesis in chemical risk assessment through more efficient use of resources. Future developments in text mining and machine learning are likely to further reduce the resource intensity of the methodology, and of chemical risk assessment in general. These advances will enable the automatic production of highly resolved SEMs capable of synthesising evidence or feeding predictive models. In the interim pursuit of a more evidence-based approach to chemicals policy, the resource strain associated with producing a SEM can be managed through adaptation of the methodology to present-day limitations. Depending on the needs of the user and the constraints of their use case, SEM methodology is sufficiently flexible that it may be adapted (e.g. by searching fewer databases, extracting data based on only title/abstract, etc.) without compromising the utility of the end product in the way that the results of a synthesis might be adversely affected by modification of systematic review methodology. By working closely with stakeholders to define objectives, the scope of the SEM (i.e. bibliographic databases covered, types of studies included, etc.) can be adjusted as appropriate to those objectives. For example, critical appraisal of studies may not be imperative to the aim of the SEM and may therefore be omitted, or might be planned as part of a stepwise approach after the SEM identifies pockets of evidence of interest to stakeholders. Although designed to reduce the resource strain of SEM exercises, such flexible adaptation of the methodology does not compromise the fitness-for-purpose of SEMs as a means of identifying and comparing trends in the availability of evidence in a vast and heterogeneous information landscape. Consequently, examples of research activities producing fit-for-purpose SEM outputs and/or developing aspects of SEM methodology specific to chemicals policy contexts are beginning to emerge (Beverly, 2019), with research institutes such as NTP-OHAT and The Endocrine Disruption Exchange (TEDX) conducting evidence mapping activities (NTP-OHAT, 2019; The Endocrine Disruption Exchange, 2019). A key consideration for these emerging efforts is the accessibility of SEMs' queryable output for non-technical audiences. To this end, researchers have made use of a variety of readily available and user-friendly tools (e.g. Datawrapper GmbH, 2019; IBM, 2019; QlikTech International AB, 2019; Tableau Software, 2019) to facilitate visualization of, and promote interaction with, the data collated in evidence surveillance exercises (e.g. Pelch et al., 2019; Walker et al., 2018). These tools may similarly serve to lower the barrier to accessing (as well as producing) SEMs, provided the underlying database is made available for more specialist users. Although future technological advances will have significant implications for the production and use of SEMs, these efforts indicate how SEM methodology can be effectively applied in the present day, highlighting how SEMs can be adapted for engaging with a variety of stakeholders.
More immediate establishment of (adapted) SEM infrastructure in current regulatory workflows will therefore not only lower resource barriers to evidence-based decision-making, but will ensure that technological advances in automation, and in SEM methodology itself, can be readily exploited by regulatory decision-makers in chemicals risk management.

Fig. 1. The process of identifying trends and exploring the evidence landscape involves querying the SEM database and visualizing the results of the query. Queries may start by asking broader questions which consider a wider range and volume of data (e.g. Queries 1 and 2). Users may then further explore any trends of interest discovered in the results of these broad queries by running narrower queries which consider a more specific subset of data (e.g. Queries 3 and 4). Data displayed in this Figure have been artificially generated for illustration.

Table 1. The key features of systematic reviews and their primary advantages.

Pre-published protocol: Reduces risk that expectation bias will influence reviewers' choice of methods and approaches for analysis mid-review; if formally published, external peer review can reduce the risk of limitations in planned methods compromising final results.

Statement of objectives: Provides a structured framework for the aims of the review (including specific statement of the research question and PECO criteria) against which appropriate review methods can be defined.

Comprehensive search: Reduces risk of only partial retrieval of the overall body of evidence that is relevant to answering the research question.

Screening against eligibility criteria (study inclusion): Reduces risk of only partial retrieval of the overall body of evidence that is relevant to answering the research question, in particular the risk of selection bias when reviewers are deciding which evidence to include in the review.

Data extraction using appropriate extraction tools: Reduces risk of inconsistent or partial retrieval of data from studies included in the review, reducing the risk of selective use of data from studies deemed relevant to answering the research question.

Critical appraisal of included studies: Encourages consistent assessment of the validity of included studies according to factors internal to study design, reducing the risk of expectation bias or other factors causing studies to be inappropriately weighted, and helping ensure that bias in the findings of the included studies is not transmitted through to the findings of the review.

Synthesis of included studies: Pooling or integration of sufficiently comparable studies increases the power of an analysis, whether quantitative or qualitative, allowing overall trends in results to be more reliably identified.

Characterisation of confidence in the evidence: Encourages consistent assessment of the validity of the results of the synthesis according to features which manifest at the level of the body of evidence as a whole rather than the individual study. Outlining the scientific judgement applied in rating confidence is key to the transparency of subsequent conclusions.

Drawing conclusions/key review output: Qualitative and/or quantitative summary effect estimates help direct policy decisions based on permissible exposure levels and related controls; assessment of limitations in the review methods helps ensure that any residual potential biases in the review are made clear to the reader and can additionally be accounted for in uncertainty assessment and consequent risk management action.
Table 2. A comparison of systematic review and systematic evidence mapping methodology and their respective roles in risk management decision-making (adapted from James et al., 2016). For each step, the table lists how the step is conducted in SRs related to assessing chemical health risks, how it is conducted in SEMs, and how SR and SEM compare for responding to risk management needs.

Step: Screening against eligibility criteria (study inclusion)
In SRs: Inclusion criteria specified in detail for all key elements of the objective.
In SEMs: Inclusion criteria defined in terms of topic rather than key elements of the objective.
SR vs SEM: SR: As for search, specific inclusion criteria ensure SRs efficiently service a specific research question. SEM: Broad objectives ensure inclusion of evidence relating to multiple decision scenarios.

Step: Data extraction using tested extraction sheets
In SRs: Complete extraction of meta-data and study findings.
In SEMs: Extraction of meta-data; optional extraction of study findings and other study characteristics depending on SEM objectives.
SR vs SEM: SR: Data extraction determined by objectives. SEM: Data extraction more flexible and can respond to the needs of the risk management process to develop fit-for-purpose maps of varying degrees of comprehensiveness.

Step: Coding of extracted data using controlled vocabularies
In SRs: Coding facilitates grouping of included studies for synthesis/integration according to review objectives. Coding is closely related to the review objectives and data extraction process, whereby the narrow research question and PECO statement inherently define the specific code applicable to raw extracted data.
In SEMs: Coding facilitates broad comparison of heterogeneous data across an evidence base. Broad map objectives necessitate an extensive coding process, whereby specific code must be defined in a step distinct from the formulation of end-users' specific research questions.
SR vs SEM: SR: Tight review objectives pre-specify the applied code (e.g. considering ages 0-18 as 'Child' for reviews focusing on a population of 'Children'); narrower range, or greater specificity, of controlled vocabulary terms applicable per item of extracted data. SEM: Code pre-specified where possible, but addition of new terms (which could not be accounted for a priori) considered flexibly; any one item of extracted data may be coded by multiple and variably resolved terms; openly accessible ontologies may be used for coding to promote consistency and interoperability.

Step: Critical appraisal of included studies
In SRs: Assessment of internal validity (risk of bias) conducted for all included studies.
In SEMs: Study validity assessment is optional and to some extent restricted if outcome is not a defined aspect of the SEM; study characteristics relevant to risk of bias assessment can be extracted.
SR vs SEM: SR: Describes the internal validity of the evidence base, which is an essential step of characterising confidence in the evidence. SEM: Flexible; the critical appraisal step can be omitted, study methods can be mapped or methodological quality assessed according to goals, and appraisal can be part of a stepwise approach where quality is only assessed for studies addressing key outcomes, etc.

Step: Synthesis of included studies
In SRs: Quantitative synthesis where possible to produce characterisation of hazard from exposure; qualitative synthesis where pooling studies is not possible.
In SEMs: Reports of systematic maps can provide narrative synthesis of characteristics of the evidence key to a given decision-making context.
SR vs SEM: SR: Synthesis supports a specific type of decision context. SEM: Primary output is a more context-agnostic database which can be used by risk managers to support multiple decisions in the RM workflow, or to aid in a stepwise approach.
Step: Characterisation of confidence in the evidence
In SRs: Assessment of confidence or certainty in the results of the synthesis, according to characteristics of the evidence base taken as a whole.
In SEMs: SEMs do not synthesise included studies. SEMs help identify regions of evidence with characteristics indicative of being worth further, detailed analysis in support of a prospective decision.
SR vs SEM: SR: Provides detailed conclusions on certainty of evidence in hazard characterisation or to support risk assessments. SEM: Supports a range of decisions, particularly decisions to focus research and review, e.g. indicating clusters where evidence may be strong enough to warrant SR (e.g. have a reasonable likelihood of changing a TDI), filling in gaps to reduce uncertainty, and surveillance.

Step: Drawing conclusions/key review outputs
In SRs: SRs primarily provide a summary effect estimate and surrounding uncertainty based on the strength of the evidence and review methods.
In SEMs: SEMs primarily provide a searchable database of the characteristics of the evidence base, making the knowledge base locked away in manuscripts accessible to decision-makers.
SR vs SEM: SR: Provides a qualitative and/or quantitative summary effect estimate in answer to a narrow and specific decision-making question. SEM: Identifies evidence gluts for synthesis; when combined with an understanding of RM needs, transparent criteria for prioritization of gluts for synthesis and of gaps for commissioning primary research can be presented.

SR = systematic review, SEM = systematic evidence map, RM = risk management, TDI = tolerable daily intake.
Information Processing and Decision-Making in Pathological Worriers and their Potential Role in Mechanisms of Generalized Anxiety Disorder

Systematic information processing and decision-making under uncertainty are key constructs of new conceptions explaining the severity of pathological worry. The current study attempted to analyze their usefulness in subclinical and clinical groups. In the first phase of the study (N = 251), participants were examined with the Penn State Worry Questionnaire (PSWQ), a GP consultation-related survey, and a screening survey for generalized anxiety disorder (GAD). In the second phase (N = 220), the State-Trait Anxiety Inventory, the PSWQ, and tasks measuring systematic information processing (SIP) versus heuristic reasoning (HR) were applied. In the third phase (N = 60), GAD (n = 30) and healthy control (n = 30) groups were examined with the above methods and the Iowa Gambling Task (IGT). In the low-risk group, a relationship between mood and the representativeness heuristic (ρ = 0.50), as well as the anchoring and adjustment heuristic (anxiety-related stimuli), was found (ρ = −0.53). In the GAD group, significant correlations between the PSWQ score, the IGT loss avoidance score (ρ = 0.40), and the total IGT score (ρ = 0.48) were found. The results did not confirm a particular usefulness of the systematic/heuristic information processing construct in subclinical and clinical groups. Theory-consistent results were found mainly in the nonclinical groups. Nevertheless, the data revealed some interesting findings supporting the potential explanatory power of some theoretical models.

INTRODUCTION

There have been some interesting reinterpretations of the mentioned data. Newman and Llera (2011; see also Llera & Newman, 2010), the authors of the contrast avoidance model, undertook a critical analysis of this trend. The authors assumed that persons with generalized anxiety disorder (GAD) are excessively sensitive to negative emotional shifts and deploy worry to decrease the difference between baseline negativity and a shifting state. According to the researchers, worry triggers a whole range of negative emotions. It is thus difficult to treat it as a useful tool for avoidance, as the other concepts suggest. This would mean that emotion avoidance and emotion processing avoidance are two completely different processes characterized by different traits and functions. As mentioned previously, Newman and Llera's studies suggest that worry does not enable negative emotion avoidance per se. Emotional processing should take place on two levels: subjective and physiological. If the process does not take place on each of these levels, or if it is hindered on either of them, emotional processing cannot be successful, and habituation is impossible. As such, worry prevents flexible responsiveness to sadness- and anxiety-inducing stimuli, suggesting a less adaptive response of the autonomic nervous system to emotional stimuli in people with strong worry patterns. Uncontrolled concerns can thus cause prolonged anxiety and depressive states.

Worry as Repetitive Information Processing Resulting from an Increased Certainty Threshold

The second research trend mostly refers to information processing and decision theories. Early studies (Vasey & Borkovec, 1992) suggested that the difference in catastrophizing between people who do and do not display worry thoughts may reflect an increased ability of worriers to draw from memory when attempting to answer the question: "What if…?".
Researchers became increasingly interested in people's difficulties in stopping those attempts. Problematic severity of worry has been associated with uncontrolled processing and, when people are not satisfied with a single answer to the "What if…?" question, with perpetuating the process. The study by Martin et al. (1993) on mood as an input to stopping initiated tasks revealed that the intermediary factors between a specific mood and stopping a task are the so-called "stop rules". People with a lowered mood using the "I feel I do not want to continue" rule would stop their tasks earlier than subjects with a lowered mood using the "I will do as much as possible" rule. People in a good mood using the "I feel I do not want to continue" rule would stop their tasks later than subjects in a good mood using the "I will do as much as possible" rule. This can be explained according to the previous example: the participants' moods informed them about their level of satisfaction with a task. The rule that was then discovered was used to explain the perseverance of worry. According to the mood-as-input hypotheses, people with a lowered mood using the "I will do as much as possible" rule tend to persevere, similarly to people in a good mood using the "I feel I do not want to continue" rule. Davey (2006) applied the results of the study to his own research and assumed that subjects who worry often experience a lowered mood and use the "I will do as much as possible" rule rather strictly when deciding to stop catastrophizing.
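A toy formalisation of the mood-as-input pattern described above is sketched below. The binary moods and rules are a deliberate simplification of the experimental findings, intended only to show how the same mood supports opposite stopping decisions under different stop rules.

```python
def continue_task(mood: str, stop_rule: str) -> bool:
    """Mood-as-input: mood is read as feedback about the currently active stop rule."""
    if stop_rule == "as_many_as_can":
        # Low mood is read as "not enough done yet" -> keep going (perseverate).
        return mood == "low"
    if stop_rule == "feel_like_continuing":
        # Low mood is read as "I no longer want to continue" -> stop early.
        return mood == "good"
    raise ValueError(f"unknown stop rule: {stop_rule}")

for mood in ("low", "good"):
    for rule in ("as_many_as_can", "feel_like_continuing"):
        print(mood, rule, "-> continue" if continue_task(mood, rule) else "-> stop")
```

The "low mood + as-many-as-can" cell of this 2x2 is the combination Davey associates with perseverative catastrophizing.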
Researchers began to associate stopping worry and its characteristics with systematic information processing, which in turn is linked to decision and information processing theories. The question of systematic and heuristic information processing was covered holistically by Chaiken's (1980) theory. According to this theory, there are two basic, competing types of information processing: systematic and heuristic. Whenever low mood appears, people tend to use systematic information processing, which is understood as an "analytical orientation, where the receiver verifies and evaluates data according to its relevance and significance integrating all the useful information through formulating a judgment" (Chaiken, 1980; Martin et al., 1993). Heuristic processing appears to be based on pre-existing knowledge structures (stored in long-term memory) rather than on analysis of current data. Such processing requires less cognitive effort and results in quicker decision making; however, it is more prone to distortion and cognitive errors (Kahneman, 2012). In the reference literature (Chaiken, 1980; Todorov et al., 2002), this is called analytical orientation, in which a person evaluates and analyzes the received information as a whole in terms of its (a) meaning and (b) significance, and integrates all useful information in the formulated judgments. Personal significance is crucial in this concept, since algorithmic thinking depends on the way the subject assesses the level of certainty needed for deduction. If the subject says "I do not need high certainty", heuristic thinking is triggered, while when high certainty is required, algorithmic thinking (systematic information processing) is triggered. Thus, easy tasks trigger heuristic thinking, while difficult tasks trigger algorithmic thinking. In the heuristic-systematic model, the construct of a certainty threshold that determines the exchange between cognitive control and a task's goals is also important. People become engaged in cognitive effort until they reach the threshold level of certainty regarding task completion (Chaiken, 1980). One cannot be entirely sure that a judgment is correct; nevertheless, some level of certainty can still be achieved.

Among heuristic thinking types, Kahneman (2012) mentions the following: (a) availability, (b) representativeness, and (c) anchoring and adjustment heuristics. The availability heuristic describes the tendency to attach greater likelihood to events that are more available to awareness and more emotionally charged. For example, if we read two lists of people with men's and women's names mixed in equal proportions, but one of the lists includes the names of well-known women, participants will have the impression that this list contains more women's names. This impression is caused by the greater availability of these memory traces. The representativeness heuristic relies on a shortened way of deduction in which events are classified based on their partial similarity to a typical or well-known case. A typical task that demonstrates this heuristic involves presenting participants with a certain feature that matches a stereotype, for example, "John is an eloquent, well-educated, and competitive man with two children. His hobby is collecting rare books. What is the likelihood of John being a lawyer rather than an engineer?" It is easy to notice that people will tend to ignore sociological statistics based on some rather stereotypical information included in the task. The anchoring and adjustment heuristic is a tendency to rely on some information (anchoring) and then modify it in order to formulate a judgment. An example of this heuristic is giving a distorted answer to a request to estimate some size. When an experimenter (more or less consciously) provides participants with some reference framework, they tend to "anchor" their estimations according to this framework. For example, if we ask participants how many African countries belong to the United Nations and then ask one group if it is more or less than 20 and the second group if it is more or less than 30, the estimations in the first group will be close to 20, and those in the second one will be close to 30. All these ways of thinking lead to quicker decision making; however, they carry a risk of cognitive error.

Inspired by the heuristic-systematic model, Dash and Davey (2012) developed an interesting, simple cognitive model explaining the rule of initiating and sustaining the process of pathological thinking. According to this model, there is a factor that precedes and sustains worry in people who worry pathologically: their previously lowered mood. Mood can trigger worry through the previously mentioned stop rules, as well as directly through shifting the threshold of accepted uncertainty. The authors quote the results of their own study, which found that lowered mood prompted participants to use systematic information processing while neglecting heuristic processing. The data they obtained showed that systematic information processing fully explains the relation between mood and the intensity of pathological worry (Dash & Davey, 2012).
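Dash and Davey's threshold-shift idea can likewise be caricatured in a few lines: if lowered mood raises the certainty required before the "What if…?" chain may stop, perseveration follows from the threshold alone. The thresholds, increments and step cap below are arbitrary illustration values, not parameters estimated from their data.

```python
import random

def worry_steps(certainty_threshold: float, seed: int = 0, max_steps: int = 50) -> int:
    """Count 'What if...?' iterations until felt certainty reaches the threshold."""
    rng = random.Random(seed)
    certainty, steps = 0.0, 0
    while certainty < certainty_threshold and steps < max_steps:
        certainty += rng.uniform(0.05, 0.15)  # each answer adds a little felt certainty
        steps += 1
    return steps

# Lowered mood is proposed to raise the certainty required before stopping:
print("neutral mood:", worry_steps(certainty_threshold=0.5))   # stops after a few steps
print("lowered mood:", worry_steps(certainty_threshold=0.95))  # roughly twice as many steps
```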
Some studies based on processing efficiency theory (Eysenck & Calvo, 1992) have attempted to establish whether worry impairs processing efficiency or whether efficiency in completing a memory task remains undisturbed. The results were the opposite of what was expected: worry can increase the level of task efficiency in people with a high level of anxiety and in those in whom a state of increased worry had been induced by tasks requiring the engagement of verbal and spatial working memory. In other words, there is a possibility that worry may foster adaptation in the case of specific tasks. However, a person should either be used to worry and anxiety, or the worry should be interim and connected to a given task. Can we then conclude that such interim benefits translate into any permanent, demonstrable neuropsychological characteristics? The results of the studies on the role of inhibitory control in GAD are not clear. Price and Mohlman (2007), who examined GAD patients, concluded that better results in inhibitory control were associated with a higher level of symptoms, including worry and trait anxiety. No relationship between inhibitory control and the levels of anxiety and depression was found, and the discovered relationship was not found in the control group. At the same time, patients and participants from the control group did not differ significantly in terms of the results on the Stroop test. The researchers concluded that the positive relations between inhibitory control and the level of symptoms result from a nonadaptive use of executive functions. The conclusions obtained in the mentioned studies were partially confirmed by Eldreth's (2008) research one year later. The researcher, who examined GAD patients, came to the conclusion that the prefrontal cortex (PFC) is more engaged in inhibiting the emotional processing areas (amygdala, hippocampus). However, more severe symptoms and greater inhibitory control did not influence the brain's activity during worry (examined by magnetic resonance imaging). Only elderly patients participated in these studies. In later studies by Price et al. (2011), who used the Stroop test and functional magnetic resonance imaging, among other tests, elderly GAD patients displayed attention deficits in top-down processing, while inhibitory control deficits were not a generalized GAD trait in elderly patients; they occurred only when negative emotional content "competes" for information processing resources. This could suggest that the core of potential attention deficits in some patients is the "taking over" of cognitive resources by uncontrolled concerns. Some of the abovementioned conclusions do not stand in conflict with selected processing efficiency theory (PET) studies (Eysenck & Derakshan, 2011). To date, in light of this model, it has been assumed that anxiety causes inhibitory and switching function deficits. Some authors (see Eysenck & Derakshan, 2011) think differently: there are some special conditions in which this assumption does not reflect reality. After reviewing various studies, researchers have suggested that there are two possible ways that anxiety influences attention control. Anxiety can be related to a lowered deployment of attention control resources or to a noticeable (yet ineffective) deployment of these resources. The first way is more likely in cases of lower motivation (nondemanding tasks, lack of relevant goals), while the second one is possible under highly motivational conditions (demanding tasks, clear goals).
It appears that the particular function's efficiency is less important than the way in which it is used. The abovementioned concepts create a relatively coherent yet questionable image. A person dissatisfied with the outcome of their own processing of a threatening topic begins to perpetuate their thinking process. Most likely, this happens either because their mood has been lowered or because they are using a perfectionist strategy to stop a task. The data analysis process that they use is precise and can be associated with systematic information processing. This brings to mind evolutionary aspects of environmental adaptation: using executive functions to solve a problem. However, this process is not successful, because the process itself begins to serve significant functions for the organism: it allows the organism to obtain tangible benefits and avoid a certain type of arousal. The consistency of the above image does not, however, change the fact that we do not know whether these mechanisms apply to the general population and to clinical groups with generalized anxiety. The current study attempts to answer the following questions: (a) Do people with GAD differ from healthy controls in their preferences for using systematic information processing? (b) Do information processing styles depend on decision making under uncertainty and on the use of executive functions? (c) What are the relations among information processing styles, decision-making styles, and anxiety and worry? It was hypothesized that anxiety (both state and trait) and mood would be related to the deployment of systematic information processing (SIP). We can also expect a significant relationship of SIP with loss avoidance and inhibitory control. However, it remains unclear how these relationships express themselves in clinical and nonclinical contexts. To answer these questions, a three-phase study was designed.

Participants

Participants were recruited on the researchonline.pl Internet service. The service provides an opportunity to conduct research on 2000 demographically verified Polish participants from different age groups. This fact, as well as the low costs, was among the key reasons for the researchers' choice. The tasks were administered in the experimental environment for online research described by Stoet (2010, 2017). The local ethics committee approved the study.

PENN STATE WORRY QUESTIONNAIRE

The Penn State Worry Questionnaire (PSWQ; Meyer et al., 1990) is a 16-item measure of worry that has been shown to have adequate internal consistency and convergent validity in patients with GAD, and it is a widely used screening instrument for GAD. The Polish adaptation, authored by Janowski (2007), has shown satisfactory psychometric properties (Solarz & Janowski, 2013). A cutoff point of 45 points is commonly used to identify pathological worry, and a cutoff of 62 points is used to differentiate GAD from other anxiety disorders (Clark & Beck, 2009).

STATE-TRAIT ANXIETY INVENTORY

The State-Trait Anxiety Inventory (STAI) was originally developed by Spielberger et al. (1968). It consists of 20 items each for state and trait anxiety. The Polish version of the STAI revealed satisfactory reliability (Cronbach's α ranged from .83 to .92) and validity (Sosnowski et al., 2011).

SIP/HR TASKS

The tasks used to measure the use of systematic information processing versus heuristic reasoning were inspired by the experiments constructed by Tversky and Kahneman (Chaiken & Trope, 1999; Epstein, 1994; Kahneman, 2012). The subjects were asked hypothetical questions with uncertain answers.
Three types of heuristic reasoning were measured: (a) the availability heuristic, (b) the representativeness heuristic, and (c) the anchoring and adjustment heuristic. In the described study, the SIP/HR measures were implemented as described below.

Availability heuristic measurement, "neutral" version. The participants were presented with a recording of a list of 42 names of women and men that was read out loud. Hidden within the list were some names of well-known people. The participants' task was to estimate the number of men and women on the list. In fact, every time, the sex ratio was 50:50, but there were more well-known women than well-known men on the list. Estimation according to the number of well-known people, despite the facts, indicates that a person is using the availability heuristic and thus using implicit memory in an unconscious way.

Availability heuristic measurement, "modified" version. Next, the participants were presented with another list (also with a 50:50 sex ratio). However, instead of well-known people, some of the names were preceded by an additional description. Among men, there were far more fear-inducing descriptions (e.g., "murderer," "stabber," etc.). The participants' task was to estimate the number of men and women. The deviation from 50 (in favor of men) was the measure of the availability heuristic for anxiety-related memory traces (thus, the use of implicit memory).

Representativeness heuristic measurement, "neutral" version. The participants had to solve a specific task: "There are 70 doctors and 30 psychologists working at a hospital. Karolina is a married, hard-working person with a PhD. What is the likelihood that Karolina is a doctor?" The measure of the representativeness heuristic was the deviation of the estimated likelihood from the actual likelihood, influenced by the provided information. The actual likelihood was 70%. The more the participant's answer deviated from the actual likelihood, the stronger the use of the representativeness heuristic.

Representativeness heuristic measurement, "modified" version. The participants were asked to solve a task: "You are about to undergo a surgery. The surgeon has informed you that 80 out of 100 patients fully recover, while 20 may experience complications. You find out that the surgeon is ill, and the intern will be conducting your surgery. What are the chances you will fully recover?" The measure of the representativeness heuristic was the deviation of the estimated likelihood from the actual likelihood, influenced by the "anxiety" information. The value of the deviation was the indicator of the use of the representativeness heuristic.

Anchoring and adjustment heuristic measurement, "neutral" version. The participants were given a task: "Please estimate what percentage of African countries belong to the United Nations. Do you think it is more than 20? How many exactly?" The result of the task was the average difference between 20 and the provided values. The tendency to be influenced by the number 20 in a given answer was the measure of the use of the anchoring and adjustment heuristic.

Anchoring and adjustment heuristic measurement, "modified" version. The participants were given a task: "What is the likelihood that some of your closest family members become chronically ill? Is it more or less than 20%? How much exactly?" The absolute value of the difference between the number 20 and the values given by participants was the measure of the anchoring and adjustment heuristic.
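To make the scoring described above concrete, the following minimal Python sketch computes the three heuristic indices from raw answers. The function names, the default reference values, and the convention that larger scores indicate stronger reliance on a heuristic are illustrative assumptions rather than the study's actual scoring procedure; in particular, for the anchoring task the original text leaves the direction of interpretation ambiguous.

```python
# Illustrative scoring of the SIP/HR heuristic indices described above.
# Function names, defaults, and sign conventions are assumptions made
# for illustration; they are not the study's actual scoring scripts.

def availability_index(estimated_women: int, actual_women: int = 21) -> int:
    """Deviation of the estimated count from the true 50:50 split
    (42 names, so 21 women)."""
    return abs(estimated_women - actual_women)

def representativeness_index(estimated_prob: float, actual_prob: float) -> float:
    """Deviation of the estimated likelihood from the actual likelihood
    (70 in the neutral task, 80 in the modified surgery task)."""
    return abs(estimated_prob - actual_prob)

def anchoring_index(answer: float, anchor: float = 20.0) -> float:
    """Absolute difference between the answer and the anchor; whether a
    small or a large difference indicates stronger anchoring is left
    ambiguous in the task description, so interpret with care."""
    return abs(answer - anchor)

# Example: a participant estimates 26 women on the list, judges Karolina
# 90% likely to be a doctor, and answers 22% after being anchored at 20.
print(availability_index(26))            # 5
print(representativeness_index(90, 70))  # 20
print(anchoring_index(22))               # 2.0
```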
STATISTICS

The data were analyzed with Statsoft STATISTICA 13 and SPSS. Nonparametric Spearman's ρ correlation coefficient analyses and Mann-Whitney U tests of intergroup differences were performed.
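As a rough illustration of the nonparametric analyses just described, here is a minimal Python sketch using SciPy; the arrays are made-up placeholder data, not the study's data, and the variable names are assumptions.

```python
# Minimal sketch of the nonparametric tests reported in this study,
# run on made-up placeholder data (not the actual study data).
import numpy as np
from scipy.stats import mannwhitneyu, spearmanr

rng = np.random.default_rng(0)
pswq_hr = rng.normal(65, 6, 30)  # hypothetical PSWQ scores, HR group
pswq_lr = rng.normal(35, 6, 30)  # hypothetical PSWQ scores, LR group

u_stat, p_val = mannwhitneyu(pswq_hr, pswq_lr, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_val:.4f}")

mood = rng.normal(4.0, 1.0, 30)  # hypothetical mood ratings, HR group
rho, p_rho = spearmanr(pswq_hr, mood)
print(f"Spearman's rho = {rho:.2f}, p = {p_rho:.4f}")
```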
The studied group did not differ significantly from the normalization groups in terms of the parameters of the used methods. Two extreme groups were selected (each including 30 participants). Subjects meeting the GAD criteria (based on self-assessment) and those with the highest level of worry were included in the high-risk (HR) group, while participants who did not match the GAD criteria and those with the lowest scores on the PSWQ were included in the low-risk (LR) group. The two subgroups did not differ in terms of age. While the sex ratio was rather equal in the LR group, there were far more females in the HR group (n = 22), which does not seem surprising in light of the subject literature.

RESULTS

There were, however, some expected and obvious (considering the created subgroups) differences in terms of worry intensity (U = 0.00, p = .00), state anxiety (U = 234, p = .00), and trait anxiety (U = 198, p = .00). These values were higher in the HR group, which was a consequence of the method of subdivision of the sample. Additionally, there was a small yet significant intergroup difference in terms of the mood declared at the time of the study (U = 288, p = .01). It is not surprising that participants in the HR group assessed their mood as lower than those in the LR group. The sex ratio in the sample in the third phase was 28 to 32 (46.7% to 53.3%). Similar to the second phase, after dividing the extreme subgroups, the proportion of men and women changed significantly. In the GAD group, there were significantly more women (70% to 30%; 21 to 9), while in the control group, there were more men (63.3% to 36.6%; 19 to 11). The subgroups did not differ significantly in terms of age (U = 416, p = .61). However, some significant differences were discovered in terms of state anxiety (U = 124, p = .00), trait anxiety (U = 127.5, p = .00), and mood (U = 211.5, p = .00), and there were borderline significant differences in terms of worry intensity (U = 329.5, p = .07). Because the difference between the average PSWQ scores was 6 raw points, it can be argued that in larger groups this difference would have been unambiguously significant. The subgroup creation criteria caused an unsurprising trend in the results in terms of intergroup differences. The GAD group was characterized by a higher tendency to have anxiety reactions and a higher level of anxiety during the study. Participants in this group reported a significantly lower mood and worried more than those in the control group.

Phase 1

The first phase of the study revealed that 12% of the sample (N = 251) reported symptoms meeting the GAD criteria during the study survey, while 30% declared meeting the GAD criteria over their lifetime. After a detailed ICD-10-based phone interview, the percentage of subjects who were qualified for inclusion in the confirmed GAD group decreased to 4.7% (n = 12). Among subjects with confirmed GAD, only 30% were psychometrically high worriers (subjects who scored > 62 points on the PSWQ). In the studied group, the majority of participants (71%) were people who saw their GPs between once a month and once a year. A total of 192 participants from the group saw their GPs with such frequency.

Phase 2

To further verify the hypotheses, data from the second sample (N = 220) were analyzed. The researchers also selected two subgroups, denoted as the LR and HR groups. The LR group (n = 30) was created by selecting 30 subjects with the lowest PSWQ scores who also did not meet the GAD criteria. The HR group (n = 30) was selected from among subjects who met the GAD criteria and obtained the highest scores on the PSWQ. Among the LR group subjects, 53% were males, while the HR group mainly consisted of females (73.3%). The results of the correlational analysis of the whole sample (N = 220) revealed only small relationships between anxiety and SIP/HR. In the LR group, state anxiety was significantly related to the anchoring and adjustment heuristic (ρ = .42), while mood displayed a relationship with the representativeness heuristic (ρ = .50) and the anchoring and adjustment heuristic (anxiety-related stimuli; ρ = −.53, p < .05). The HR and LR groups did not differ significantly in terms of SIP/HR. However, the LR group presented a significantly better (U = 288, p = .02) mood than the HR group.

Phase 3

In the third phase of the study, data from the GAD group (n = 30) and the control group (controls, n = 30) were analyzed. The GAD group did not differ from the control group in terms of SIP/HR or Iowa Gambling Task (IGT) scores. In the control group, a significant correlation (ρ = −.37, p < .05) between mood and the anchoring and adjustment heuristic was found. In the GAD group, correlations were found between the PSWQ score and the IGT, including the IGT loss avoidance score (ρ = .40, p < .05) and the total IGT (decision effectiveness) score (ρ = .48, p < .05). The relationship was visually more linear in the GAD group, as presented in Figure 1. Men and women did not differ in terms of IGT loss avoidance and IGT decision effectiveness. However, the observations in men were more scattered, while in women, they tended to be more linear. No significant differences between the clinical and control groups were found in terms of the Stroop B task.

DISCUSSION

The results of the presented study appear far more complex than the research questions would suggest. The answer to the question of whether people with generalized anxiety differ from healthy individuals in their preference for using systematic information processing is not clear (cf. Mueller et al., 2010). The current study can support the outcomes of Eldreth (2008) and Price et al. (2011), which connected intergroup differences more with the way executive functions are "used" than with their level per se. In the nonclinical groups, some interesting relationships have also been found. Subjects experiencing state anxiety are more likely to trigger SIP. However, this cannot be said about people who are generally more anxious (high trait anxiety). These participants, in turn, tend to overestimate the likelihood of negative events, and they use heuristic reasoning, especially the representativeness heuristic. This happens when they face anxiety-inducing content. When confronted with anxiety, their cognitive system tends to shorten the reasoning process rather than make an analytical effort. Healthy controls in a good mood were less likely to make a systematic information processing effort; they would rather use the anchoring and adjustment heuristic. In other words, they were more suggestible. Lowered moods, in turn, caused a greater tendency to use the availability heuristic, that is, unconscious use of one's memory. In the HR group, worry was strongly connected only with anxiety intensity and mood.
The results obtained in the SIP/HR tasks suggest, then, that while anxiety intensity and mood are related to the use of algorithms versus heuristics, these results cannot be directly extrapolated to subclinical and clinical groups. Nevertheless, the issue of decision making is still important in these groups. However, it is likely that the older concept of mood as a trigger (Davey, 1983) would be more useful in explaining the processes happening in these groups than the model that treats worry as SIP (Dash & Davey, 2012). The question of whether information processing styles are related to decision making under uncertainty and to the use of executive functions has already been partially answered: there appears to be no relation. The only significant correlation was found in the second and the third phases of the study. People who used the representativeness heuristic also scored higher on the Stroop B task. This might suggest that, when participants from the subclinical and clinical groups are part of the study, some of these participants might have a tendency to fall back on heuristic processing.

Figure 1. Relationship between Iowa Gambling Task loss avoidance and Penn State Worry Questionnaire score in the generalized anxiety disorder and healthy control groups.

However, one could cautiously make a hypothesis that SIP and pathological worry are not necessarily the same thing: attention deficits may also lead to using heuristics and neglecting SIP. When answering the third research question, it can be said that anxiety and worry show some correlations with SIP/HR and decision making under uncertainty. First, worry is related to better results and more cautious decisions in gambling, but this has been proven only for the clinical GAD group. Second, anxiety revealed correlations with the anchoring and adjustment heuristic, but only in the LR group. In the HR and clinical groups, those factors correlated mainly with mood. The third phase of the study brought data about the clinical context of the postulated questions. First, GAD subjects do not differ from healthy controls in the use of systematic information processing and heuristics. Second, the level of pathological worry in this group is significantly related to the deployment of loss avoidance, measured with a method that is actually a test of executive function. To sum up, it can be hypothesized that in the clinical context, pathological worry is related to the usage of executive functions in a very specific way: to avoid anticipated losses. The presented studies' results are not free from a number of flaws. First of all, the predominance of women in the GAD population, which could have been anticipated, was not taken into account when selecting the sample. Thus, the results of the third phase cannot be generalized to men. Women make more cautious decisions, but they also suffer from anxiety and depressive disorders more often. This can be connected with the fact that men are more easily excused for their impulsiveness, while women are brought up to make careful decisions (Braverman, 2006). Second, the completion of some tasks remotely, on the participants' own computers, remains debatable. Despite controls on data quality, there could have been some significant distortions, especially in the Stroop B task. Third, the correlational model limits the ways of data analysis. Thus, in the future, it would be worthwhile to carry out a study with a similar methodology on a group several times larger, as this would enable the use of multivariate analysis.
To summarize, some interesting relationships have been discovered in the current study, which, however, suggest that only some of the mentioned theoretical approaches may potentially be applied to explain the repetitiveness of worry in subclinical and clinical groups. The SIP/HR construct does not seem to be particularly useful in these groups.
Long-term polarization of alveolar macrophages to a profibrotic phenotype after inhalation exposure to multi-wall carbon nanotubes

Background

Nanomaterials are widely used in various fields. Although the toxicity of carbon nanotubes (CNTs) in pulmonary tissues has been demonstrated, the toxicological effect of CNTs on the immune system in the lung remains unclear.

Methods and findings

In this study, exposure to Taquann-treated multi-walled CNTs (T-CNTs) was performed using aerosols generated in an inhalation chamber. At 12 months after T-CNT exposure, alveolar inflammation with macrophage accumulation and hypertrophy of the alveolar walls were observed. In addition, fibrotic lesions were enhanced by T-CNT exposure. The macrophages in the bronchoalveolar lavage fluid of T-CNT-exposed mice were not largely shifted to any particular population and displayed a mixed phenotype with M1 and M2 polarization. Moreover, the alveolar macrophages of T-CNT-exposed mice produced matrix metalloproteinase-12.

Conclusions

These results suggest that T-CNT exposure promoted chronic inflammation and fibrotic lesion formation through profibrotic macrophages for prolonged periods.

Introduction

Nanomaterials are manufactured chemical substances that are widely used in a variety of fields and exhibit novel characteristics, such as increased strength, chemical reactivity, and conductivity, compared with the same materials without nanoscale features [1,2]. Nanomaterials developed using nanotechnology have numerous potential applications in the fields of engineering, electronics, physics, chemistry, industry, biosciences, and medicine [3][4][5]. By contrast, changes to the environment caused by human activities, such as air pollution due to nanomaterial production, can adversely affect human health through the induction of various diseases [6][7][8]. However, the relationship between the specific features of nanomaterials and their pathogenesis remains unclear. Nanomaterials are foreign substances to the human body, which induces an immune response to clear foreign particles and protect health [9][10][11]. Many reports have demonstrated the toxicity of carbon nanotubes (CNTs) in the respiratory organs, especially the lungs [12][13][14][15]. The risk to human health has also been shown by the development of pulmonary inflammation and fibrosis in mice following inhalation of CNTs [16][17][18][19]. Alveolar macrophages play a central role in the engulfment and phagocytosis of CNTs in the lung [20,21]. Alveolar macrophages no longer function after phagocytosis of CNTs, and activated macrophages cause injury to the alveolar epithelial cells via enhanced production of reactive oxygen species [22]. In addition, it has been reported that alveolar macrophages undergo cell death after engulfing CNTs, and this functional failure makes it difficult for the lungs to clear CNTs [23]. Moreover, the risk of pulmonary fibrosis is increased by long-term exposure to CNTs in mice [24]. Also, alveolar macrophages continue to accumulate in the lungs at 12 months after exposure to CNTs [24]. However, the phenotypic changes and the function of alveolar macrophages in mice exposed to CNTs remain unclear. A report demonstrated that pulmonary exposure to CNTs exacerbated inflammatory lesions in the lungs of mice infected with bacteria [25]. Because of the decrease in cell number and the functional failure of alveolar macrophages, the clearance of bacteria in the lung becomes impaired [25].
Pharyngeal aspiration and intratracheal spray methods have been widely used in studies of CNT inhalation [19,26]. However, changes in the particle size and/or shape of CNTs affect the nature and extent of toxicity in lung tissues [12,27]. Taquahashi et al. recently reported a new method, named the "Taquann method," and an apparatus to improve toxicological experiments using multi-wall CNTs (MWCNTs) [28]. This method and the chamber with a direct injection system are expected to render inhalation toxicity studies of MWCNTs more relevant [28]. In this study, phenotypic changes and features of alveolar macrophages were analyzed in mice following long-term exposure to Taquann-treated MWCNTs (T-CNTs), using a direct injection system in which a well-dispersed aerosol is generated in an inhalation chamber. The findings of this research will be useful for further elucidating the relationship between alveolar macrophages and the toxicity of nanomaterials.

Ethics

This study was conducted according to the Fundamental Guidelines for Proper Conduct of Animal Experiments and Related Activities in Academic Research Institutions under the jurisdiction of the Ministry of Education, Culture, Sports, Science and Technology of the Japanese Government. The protocol was approved by the Committee on Animal Experiments of the University of Tokushima and the Biological Safety Research Center, National Institute of Health Sciences (Permit Numbers: T27-7 and 601). All experiments were performed under anesthesia, and all efforts were made to minimize suffering.

Mice

The protocol of this animal study was approved by the institutional ethics committee and conducted in accordance with the Guidance for Animal Studies of the National Institute of Health Sciences. Thirty 8-week-old female C57BL/6NCrSlc mice (SLC, Inc., Shizuoka, Japan) were exposed to Taquann-treated MWCNTs (T-CNTs). At 12 months after exposure, tissues were collected from the mice for analysis.

Taquann-treated multi-walled CNTs (T-CNTs) and whole-body inhalation exposure

MWCNTs (Mitsui MWNT-7) were donated by Mitsui & Co., Ltd. (Tokyo, Japan). The Mitsui MWNT-7 is a mixture of dispersed single fibers of various lengths and widths, together with their agglomerates and aggregates. In order to obtain highly dispersed fibers free of aggregates/agglomerates, pristine MWCNTs were treated by the Taquann method as described previously [28]. In brief, the method involves two processes: liquid-phase fine filtration, and critical point drying to avoid re-aggregation by surface tension. MWCNTs were suspended in tert-butyl alcohol, frozen and thawed, filtered through a vibrating 25 μm mesh metallic sieve (Seishin Enterprise Co., Ltd., Tokyo, Japan), snap-frozen in liquid nitrogen, and vacuum-sublimated. The average fiber length of T-CNT was identical to that of the pristine MWNT-7 (7.1 ± 6.0 μm in T-CNT, 7.1 ± 5.7 μm in pristine MWNT-7). Mice were exposed to the T-CNT aerosol for 2 h per day, one day per week, for 5 weeks (10 h in total) using the Taquann direct-injection whole-body inhalation system (version 2.0, manufactured by Sibata Scientific Technology Ltd., Saitama, Japan) [28]. The originally designed direct injection system is able to generate a well-dispersed aerosol in an inhalation chamber. Measured amounts of dispersed T-CNTs were preloaded into cartridges; compressed air was then injected into the cartridges and blew the T-CNTs out through four small outlets of each cartridge into the subchamber, where the main air flow from a mass flowmeter mixes in.
The air carrying the aerosol goes down the connection pipe to the main chamber. The actual average mass concentrations of the aerosol were 0, 1.42, and 3.12 mg/m³ in the control, low-dose, and high-dose groups, respectively.

Histological analysis

All organs were removed from T-CNT-exposed mice, fixed with 10% phosphate-buffered formaldehyde (pH 7.2), and prepared for histological examination. Sections were stained with hematoxylin and eosin (H&E). Connective tissues of lung sections were detected by Azan staining. The area of connective tissue was measured using Adobe Photoshop CS6 (Adobe Systems Incorporated, San Jose, CA, USA).

Scanning electron microscopy (SEM)

Lung lobes were collected and treated with a lysis solution composed of 5% potassium hydroxide, 0.1% sodium dodecyl sulfate, 0.1% ethylenediamine-N,N,N′,N′-tetraacetic acid disodium salt dihydrate, and 2% ascorbic acid in ultra-pure water, dissolved at 80°C, and centrifuged at 20,000 g for 1 h at 25°C. The pellet containing T-CNTs was recovered. In order to remove debris covering the fibers, 1.8 ml of 70% ethanol was added to the tube, incubated at 80°C for 30 min, and centrifuged at 20,000 g for 1 h at 25°C. Then, 100 μl of 1% Triton X-100 was added to the pellet, which was dispersed by pipetting. One microliter of the suspension was placed on an inorganic aluminum oxide membrane filter and filtered on a funnel-shaped glass filter. The filter was dried at room temperature and osmium-coated for SEM. An SEM (VE-98, Keyence Co., Ltd., Osaka, Japan) was used for detection of the ultrastructure of the samples.

Confocal microscopic analysis

Free-floating fluoroimmunohistochemistry of the lung was performed with 60 μm-thick sections floating in solution in a 48-well plate. The sections were fixed with 4% paraformaldehyde phosphate buffer, permeabilized with 1% Triton, blocked with 10% goat serum (DAKO, Carpinteria, CA), and then stained with a rabbit anti-MMP-12 antibody (Abcam plc, Cambridge, UK) and an FITC-conjugated rat anti-F4/80 antibody (eBioscience). After washing three times with 0.2% Triton, the sections were stained with Alexa 568 goat anti-rabbit immunoglobulin (Ig)G and Alexa 488 goat anti-FITC-IgG (Invitrogen Corporation, Carlsbad, CA). Nuclear DNA was stained with 4′,6-diamidino-2-phenylindole dihydrochloride (DAPI) (Invitrogen Corporation). Sections were observed using a PASCAL confocal laser-scanning microscope (LSM; Carl Zeiss, Jena, Germany) at 400× magnification. LSM image browser version 3.5 (Carl Zeiss) was used for image acquisition. The number of positive cells per square millimeter was calculated.

Immunohistochemistry

For the immunohistochemical (IHC) analysis of lung tissues, paraffin-embedded sections were deparaffinized and subsequently subjected to heat-induced antigen retrieval in HistoVT One (Nacalai Tesque). The sections were incubated with a rabbit anti-collagen IV antibody (Abcam). Protein binding was detected with a Vectastain Elite ABC kit (Vector Laboratories Ltd, Peterborough, UK) and 3,3′-diaminobenzidine tetrahydrochloride (DAB) as a substrate, and counterstained with hematoxylin.

Statistical analysis

Differences between individual groups were determined using one-way ANOVA. p < 0.05 was considered statistically significant. Power calculations were performed before the beginning of the experiments to determine the sample size for experiments using animals.
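To put the aerosol concentrations above into perspective, the following back-of-the-envelope Python sketch estimates the inhaled and deposited T-CNT dose per mouse. The minute ventilation (about 25 mL/min for a mouse) and the alveolar deposition fraction (about 0.1) are assumed, literature-typical values and are not reported in this study.

```python
# Back-of-the-envelope estimate of the inhaled and deposited T-CNT dose
# per mouse. The minute ventilation (~25 mL/min) and the alveolar
# deposition fraction (~0.1) are assumed, literature-typical values;
# they are not measurements from this study.

concentration = 3.12          # mg/m^3, high-dose aerosol
minute_ventilation = 25e-6    # m^3/min (25 mL/min, assumed)
exposure_minutes = 10 * 60    # 2 h/day, one day/week, 5 weeks = 10 h
deposition_fraction = 0.1     # assumed alveolar deposition fraction

inhaled_volume = minute_ventilation * exposure_minutes  # m^3
inhaled_mass = concentration * inhaled_volume           # mg
deposited_mass = inhaled_mass * deposition_fraction     # mg

print(f"Inhaled volume:  {inhaled_volume * 1e3:.1f} L")
print(f"Inhaled mass:    {inhaled_mass * 1e3:.1f} micrograms")
print(f"Deposited mass:  {deposited_mass * 1e3:.1f} micrograms")
```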
Histological findings of lung tissues from T-CNT-exposed mice

Normal female C57BL/6 mice were exposed to Taquann-treated multi-walled CNTs (T-CNTs) for 2 h a day, one day per week, for 5 weeks using a whole-body inhalation system as described previously [28]. The average mass concentrations of the aerosol were 0, 1.42, and 3.12 mg/m³ in the control, low-dose, and high-dose groups, respectively. At 12 months after the last exposure, all mice were analyzed. Histological analysis of lung tissues collected from the T-CNT-exposed (low- and high-dose T-CNT) mice showed thickening of the alveolar wall, as compared with control mice (Fig 1A, 1C and 1E). In addition, there was accumulation of monocytes in the alveolar space of T-CNT-exposed and control mice (Fig 1B, 1D and 1F). The pulmonary structure of mice exposed to high-dose T-CNT was unclear in some areas due to hypertrophy of the alveolar wall and monocyte accumulation in the alveolar space (Fig 1C–1F). Single fibers of T-CNT were diffusely detected within the alveolar wall and phagocytes (Fig 1G). Aggregation of T-CNT fibers was hardly observed in the lungs of T-CNT-exposed mice. In addition, T-CNT fibers were recovered from the lung tissue of T-CNT-exposed (high-dose) mice and detected by scanning electron microscopy (SEM). Dispersed single fibers were observed (Fig 1H). These findings demonstrate that phagocytosis of T-CNTs by alveolar macrophages continues for long periods after exposure to T-CNTs, suggesting that the alveolar immune system may fail to clear CNTs for prolonged periods after exposure.

Fibrotic change in lung tissues by T-CNT exposure

In addition to inflammatory lesions of the alveolar space and wall, marked interstitial fibrosis was observed in the lung tissues of T-CNT-exposed mice (Fig 2A). Histological analysis by Azan staining showed that proliferation of collagen fibers was promoted by T-CNT exposure (Fig 2A). There was a significant increase in fibrosis around the bronchi and blood vessels within the interstitial area in the lungs of T-CNT-exposed mice, as compared with control mice (Fig 2B). These findings suggest that, besides prolonging alveolar inflammation, T-CNT exposure also induces chronic inflammation in the interstitial area for long periods after exposure.

Alveolar macrophages in T-CNT-exposed mice

Next, flow cytometric analysis of alveolar macrophages was performed using mononuclear cells in bronchoalveolar lavage fluid (BALF) collected from control and T-CNT-exposed mice. The surface phenotype of most alveolar macrophages in normal mice is considered to be F4/80+CD11b(low). Upon exposure to T-CNT (high dose), both the F4/80+CD11b(low) and F4/80+CD11b(high) populations were significantly increased, as compared with those of control mice (Fig 3A and 3B). These findings demonstrate that long-term exposure to T-CNT induces accumulation of alveolar macrophages.

Phenotype of alveolar macrophages in T-CNT-exposed mice

Furthermore, the phenotype of alveolar macrophages in BALF was analyzed by detection of CD192, a marker of M1 macrophages, and CD206, a marker of M2 macrophages, among F4/80+CD11b+ macrophages. Compared with control mice, the proportion of CD192+CD206− M1-like macrophages was significantly increased in BALF collected from mice exposed to high-dose T-CNTs (Fig 4A and 4B). By contrast, the proportion of CD192−CD206+ M2-like macrophages was significantly decreased by exposure to high-dose T-CNTs, as compared with that of control mice (Fig 4A and 4B). However, the populations of CD192+CD206+ cells were similar between control and T-CNT-exposed mice (Fig 4A and 4B).
Although M1-like macrophages increased in T-CNT (high-dose)-exposed mice, there was no clear shift toward M1 or M2 macrophage differentiation of BALF cells in the mice exposed to T-CNT. These results demonstrate that T-CNT exposure sustains alveolar inflammation and does not largely change the M1 and M2 polarization of the macrophage phenotype, which shows a mixed type including M1, M2, and other phenotypes as a whole. To determine the systemic effects of T-CNT exposure on the immune system, phenotypic changes of F4/80+CD11b+ macrophages in the spleen and lymph nodes (LNs) were analyzed using CD192 and CD206 as markers of M1 and M2 macrophages. The proportion of CD192−CD206+ M2-like macrophages in the spleen of mice exposed to low-dose T-CNTs was significantly higher than that of control mice (Fig 4A and 4B). There were no changes in the proportions of the other populations in the spleen and LNs (Fig 4A and 4B). Thus, this finding suggests that pulmonary exposure to T-CNTs may influence the differentiation or migration of macrophages in the spleen.

Expression of macrophage-associated genes in the lungs of T-CNT-exposed mice

To further clarify the characteristics of alveolar macrophages in T-CNT-exposed mice, the mRNA expression levels of M1 and M2 macrophage-associated genes in the lung tissues were analyzed by real-time RT-PCR. The expression levels of monocyte chemotactic protein-1 (MCP-1), inducible nitric oxide synthase (iNOS), and CD192 mRNA, as M1 macrophage-associated genes, were not changed by T-CNT exposure (Fig 5A). Also, there was no change in the expression levels of arginase-1, resistin-like alpha (Retnla), macrophage galactose-type lectin-1 (MGL-1), MGL-2, and chitinase-3-like protein-1 (CHI3L1) mRNA, as M2 macrophage-associated genes, between control and T-CNT-exposed mice (Fig 5A). In addition, the mRNA expression of cytokines, including interleukin (IL)-1β, interferon-γ (IFN-γ), tumor necrosis factor-α (TNF-α), and IL-12, which are derived from M1 macrophages, was analyzed by real-time RT-PCR using lung tissues. There were no significant differences in cytokine mRNA expression levels between control and T-CNT-exposed mice (Fig 5B). Moreover, there were no changes in the mRNA expression levels of IL-10 and IL-13 in the lung tissues between control and T-CNT-exposed mice (Fig 5B). These findings support the conclusion that T-CNT exposure does not drive any particular polarization, including the M1 and M2 phenotypes, in alveolar macrophages.

Profibrotic phenotype of alveolar macrophages in T-CNT-exposed mice

Next, the unique characteristics of alveolar macrophages following T-CNT exposure were determined in T-CNT-exposed mice with enhanced fibrosis (Fig 2). Pulmonary fibrosis occurs through excess deposition of collagen-rich extracellular matrix, mediated by a variety of molecules produced by immune cells including monocytes/macrophages. Among them, collagen IV (Col IV), matrix metalloproteinase-12 (MMP-12), tissue inhibitor of metalloproteinases-2 (TIMP-2), and TIMP-3 mRNA expression levels were significantly increased in the lung tissues of mice exposed to high-dose T-CNTs (Fig 6A and 6B). In particular, the expression level of MMP-12 mRNA was markedly and dose-dependently increased by T-CNT exposure (Fig 6B). By contrast, there were no significant changes in IL-5, transforming growth factor-β1 (TGF-β1), ColA2, and Col3A mRNA expression between control and T-CNT-exposed mice (Fig 6A).
In addition, immunohistochemical analysis showed that collagen type IV expression in the stromal area around the bronchi and vessels, and in the alveolar wall, was enhanced by T-CNT exposure (Fig 6C). Therefore, the alveolar macrophages in T-CNT-exposed mice may account for the profibrotic phenotype.

MMP-12 expression in alveolar macrophages of T-CNT-exposed mice

To confirm the presence of MMP-12 protein in alveolar macrophages of T-CNT-exposed mice, confocal microscopic analysis was performed using frozen lung tissues. A significant portion of F4/80+ alveolar macrophages in T-CNT-exposed mice expressed MMP-12, but only rarely in control mice (Fig 7A). In addition, the number of F4/80+MMP-12+ macrophages in lung tissues of T-CNT-exposed mice was significantly increased, as compared with that of control mice, in a dose-dependent manner (Fig 7B). Therefore, alveolar macrophages may acquire the profibrotic phenotype through T-CNT exposure.

Discussion

Exposure to nanomaterials is known to induce various diseases, including the formation of pulmonary lesions [12,21]. Many studies of CNTs have demonstrated the inability of alveolar macrophages to engulf CNT fibers, which promotes the formation of pulmonary lesions in which reactive oxygen species derived from activated alveolar macrophages cause injury to alveolar epithelial cells, ultimately resulting in cell death [13,14,22,23]. However, it remains unclear whether aggregation/agglomeration of CNT fibers influences the activity of alveolar macrophages and the formation of pulmonary lesions. In the present study, MWCNT fibers were treated by the Taquann method in order to remove aggregates/agglomerates and enrich well-dispersed single fibers in a dry state without any dispersants. Furthermore, the mice were exposed to T-CNTs using a whole-body inhalation system that enables mice to inhale well-dispersed MWCNT fibers. In fact, we observed single MWCNT fibers in the alveolar regions; therefore, the direct effect of CNT exposure on alveolar immune cells could be evaluated in this study. Single fibers were diffusely observed in the lungs of T-CNT-exposed mice, and the size of the granulomatous lesions consisting of macrophage aggregates was relatively smaller than that in untreated-MWCNT-exposed mice in a previous report [19]. The direct effect of CNT exposure on alveolar immune cells was evaluated by the application of the T-CNT injection system [28]. In actual human exposure, nanomaterials cause alveolar lesions through single fibers, because aggregates/agglomerates sediment quickly in the ambient air and are effectively filtered out in the human upper respiratory tract. By contrast, in practical inhalation studies with experimental animals, the chamber air is rigorously agitated in order to ensure the homogeneity of the aerosol. When given as a mixture, the likelihood of aggregates and agglomerates reaching the animal's nose is high, and they would disturb the inhalation of single fibers and induce bronchitis/bronchiolitis with granulomas. Thus, we concluded that it is essential to prepare a dispersed single-fiber aerosol without aggregates and agglomerates, and, further, without changes in the size and shape of the single-fiber components. Macrophages play key roles in various immune responses during inflammation in a variety of tissues [29].
In addition to functions in innate immunity, such as antigen phagocytosis and cytokine production, antigen presentation by macrophages represents a link between innate and acquired immunity [30]. During inflammatory processes, naïve monocytes differentiate into pro-inflammatory M1 and anti-inflammatory M2 macrophages [29,30]. Macrophages originate from at least three sources, including the yolk sac, fetal liver, and bone marrow, while alveolar macrophages are derived from the yolk sac and bone marrow [30]. In fact, variously differentiated macrophages migrate into and reside within the alveolar space in response to inflammatory stimuli. In this study, although the proportion of M1-like macrophages was significantly increased by high-dose T-CNT exposure, we concluded that long-term T-CNT exposure sustains alveolar inflammation and a mixed-type macrophage differentiation, including the M1 and M2 phenotypes, as a whole. In addition, there was no increase in the expression levels of M1 and M2 macrophage-related genes in T-CNT-exposed mice. A previous report described that MWCNTs can induce in macrophages a mixed phenotype, part M1 and part M2 [31]. Therefore, the unique population of alveolar macrophages induced by T-CNT exposure might promote chronic pulmonary inflammation. On the other hand, the concentration dependency of the T-CNT effect on the alveolar immune response was only partial in this study. It is possible that a threshold in the functional capacity of pulmonary macrophages affects the pulmonary immune response in T-CNT-exposed mice. By contrast, the proportion of CD11b(high) macrophages and the expression of MMP-12 were dependent on the concentration of T-CNT. Macrophages are involved in various functions, including tissue repair, fibrosis formation, and angiogenesis [32]. Alveolar macrophages play a resolution-promoting role during the reversible phase of bleomycin-induced pulmonary fibrosis [33,34]. In addition, macrophages contribute to both the induction and resolution phases of acute lung injury [35]. Thus, there has been much controversy regarding the role of monocytes and macrophages in the pathogenesis of pulmonary fibrosis. When a house dust mite allergy model was exposed to MWCNTs, the allergic response was prevented through suppression of IL-1β and pro-caspase-1 in alveolar macrophages [36]. In addition, MWCNT-induced airway fibrosis was enhanced by allergen challenge [36]. These results suggest that the MWCNT-induced inflammasome, regulated in an allergic inflammatory microenvironment, could play an important role in increased airway fibrogenesis. A recent DNA microarray study demonstrated the expression of a wide range of cytokine and chemokine genes in the lung tissues of mice at 1 year after MWCNT exposure [24]. Fibrotic lesion formation in the lung is caused by excessive deposition of interstitial collagens [37]. Maintenance of the extracellular matrix and tissue repair are controlled by MMPs [37]. Various MMPs and cytokines, including MMP-2, -3, -7, -8, -9, -12, and -13, TIMPs, IL-5, and TGF-β1, play profibrotic roles in lung injury and inflammation [37,38]. Among these molecules, MMP-12 is known as a macrophage metalloelastase that is produced by activated macrophages and contributes to fibrotic lesion formation in the lungs [39][40][41]. In this study, MMP-12 mRNA in the lung tissues of T-CNT-exposed mice was markedly increased, as compared with that of control mice. Further, MMP-12 production by F4/80+ alveolar macrophages was confirmed in T-CNT-exposed mice.
These findings suggest that MMP-12-producing alveolar macrophages are involved in the pathogenesis of the fibrotic lesions for long periods after T-CNT exposure.

Conclusions

In conclusion, use of the newly established Taquann method and aerosol generation system showed that the formation of chronic inflammatory lesions in mice continues for long periods after CNT exposure. The alveolar macrophages in T-CNT-exposed mice were sustained in a state of mixed M1/M2 macrophage phenotype. In addition, the profibrotic character of alveolar macrophages was demonstrated in T-CNT-exposed mice. The findings of this research should prove helpful for further elucidating the toxicological effects of nanomaterials on the pulmonary immune system.
Equitable coloring of k-uniform hypergraphs

Let $H$ be a $k$-uniform hypergraph with $n$ vertices. A {\em strong $r$-coloring} is a partition of the vertices into $r$ parts, such that each edge of $H$ intersects each part. A strong $r$-coloring is called {\em equitable} if the size of each part is $\lceil n/r \rceil$ or $\lfloor n/r \rfloor$. We prove that for all $a \geq 1$, if the maximum degree of $H$ satisfies $\Delta(H) \leq k^a$ then $H$ has an equitable coloring with $\frac{k}{a \ln k}(1-o_k(1))$ parts. In particular, every $k$-uniform hypergraph with maximum degree $O(k)$ has an equitable coloring with $\frac{k}{\ln k}(1-o_k(1))$ parts. The result is asymptotically tight. The proof uses a double application of the non-symmetric version of the Lov\'asz Local Lemma.

Introduction

All hypergraphs considered here are finite. For standard terminology the reader is referred to [4]. Let $H$ be a $k$-uniform hypergraph with $n$ vertices. A strong $r$-coloring is a partition of the vertices of $H$ into $r$ parts, such that each edge of $H$ intersects each part. (A weak $r$-coloring is a coloring where no edge is monochromatic.) A strong $r$-coloring is called equitable if the size of each part is $\lceil n/r \rceil$ or $\lfloor n/r \rfloor$. Let $c(H)$ denote the maximum possible number of parts in a strong coloring of $H$. Let $ec(H)$ denote the maximum possible number of parts in an equitable coloring of $H$. Trivially, $1 \leq ec(H) \leq c(H) \leq k$. In general, $k$ could be large and still $ec(H) = c(H) = 1$ if we do not impose upper bounds on the maximum degree. Consider the complete $k$-uniform hypergraph on $2k$ vertices. Trivially, it has $c(H) = 1$, and the maximum degree is less than $4^k$. In this paper we prove that $c(H)$ and $ec(H)$ are quite large if the maximum degree is bounded by a polynomial in $k$. In fact, we get the following tight result:

Theorem 1.1. For all $a \geq 1$, if $H$ is a $k$-uniform hypergraph with $\Delta(H) \leq k^a$, then $H$ has an equitable coloring with $\frac{k}{a \ln k}(1-o_k(1))$ parts.

The tightness is shown by a construction of a random hypergraph with appropriate parameters. Alon [1] has shown that there exist $k$-uniform hypergraphs with $n$ vertices and maximum degree at most $k$ that do not have a vertex cover of size less than $\frac{n \ln k}{k}(1-o_k(1))$. In particular, no strong coloring (let alone an equitable one) could have more than $\frac{k}{\ln k}(1+o_k(1))$ parts. For completeness, we show a general construction valid for all $a \geq 1$ in Section 3. The proof of the main result appears in Section 2. The final section contains some concluding remarks.

Proof of the main result

In the proof of Theorem 1.1 we need to use the Lovász Local Lemma [5] in its strongest form, known as the nonsymmetric version. Here it is, following the notations in [2] (which also contains a simple proof of the lemma).

Lemma 2.1 (Lovász Local Lemma, nonsymmetric version). Let $A_1, \ldots, A_m$ be events in an arbitrary probability space, and let $D = (V, E)$ be a digraph on $\{1, \ldots, m\}$ such that each $A_i$ is mutually independent of all the events $\{A_j : (i,j) \notin E\}$. If there are reals $x_1, \ldots, x_m \in [0,1)$ such that $\Pr[A_i] \leq x_i \prod_{(i,j) \in E}(1-x_j)$ for all $i$, then $\Pr[\bigwedge_{i=1}^{m} \overline{A_i}] \geq \prod_{i=1}^{m}(1-x_i) > 0$.

Corollary 2.2 (symmetric version). If each of the events $A_1, \ldots, A_m$ has probability at most $p$, each is mutually independent of all but at most $d$ of the others, and $ep(d+1) \leq 1$, then $\Pr[\bigwedge_{i=1}^{m} \overline{A_i}] > 0$.

Proof of Theorem 1.1: Let $a \geq 1$ be any real number, and let $\epsilon > 0$ be small. Throughout the proof we assume $k$ is sufficiently large as a function of $a$ and $\epsilon$. Let $k$ be sufficiently large such that there is an integer between $\frac{k}{(1+\epsilon^2/4)a \ln k}$ and $\frac{k}{(1+\epsilon^2/8)a \ln k}$. Thus, for some $\epsilon^2/8 \leq \gamma \leq \epsilon^2/4$, the number $t = \frac{k}{(1+\gamma)a \ln k}$ is an integer. Now, let $H = (V, E)$ be a hypergraph with $n$ vertices and $\Delta(H) \leq k^a$. We will show that there exists an equitable coloring of $H$ with more than $(1-\epsilon)\frac{k}{a \ln k}$ colors. Assume that we have the set of colors $\{1, \ldots, t\}$. It will be convenient to deal with the hypergraphs having $n < 2k \ln k$ separately. We begin with the general case.

The general case: $n > 2k \ln k$

In the first phase of the proof we color most of the vertices (that is, we obtain a partial coloring) such that certain very specific properties hold.
In the second phase we color the vertices that were not colored in the first phase and show that we can do it carefully enough to obtain a proper strong $t$-coloring. In the third phase we show how to modify our coloring and obtain an equitable coloring.

First Phase

Our goal in this phase is to achieve a partial coloring with the following properties:

Lemma 2.3. There exists a partial coloring of $V$ with the colors $\{1, \ldots, t\}$ such that: (i) every edge contains at least $k\gamma/5$ uncolored vertices; (ii) every edge has at most $\lceil 10/\gamma \rceil$ colors missing from its vertex set; (iii) no vertex $v$ is contained in $z$ distinct edges $f_1, \ldots, f_z$ for which there exist $z$ distinct colors $c_1, \ldots, c_z$ with $c_i$ missing from $f_i$ for each $i$; (iv) every color appears in at least $\frac{n(1+\gamma/4)a \ln k}{k}$ vertices.

Proof: We let each vertex $v \in V$ choose a color from $\{1, \ldots, t\}$ randomly. The probability of choosing color $i$ is $p = \frac{(1+\gamma/2)a \ln k}{k}$ for $i = 1, \ldots, t$, and the probability of remaining uncolored is, therefore, $q = 1 - pt = \frac{\gamma}{2(1+\gamma)}$. For an edge $f$, let $A_f$ denote the event that $f$ contains less than $k\gamma/5$ uncolored vertices. Let $B_f$ denote the event that $f$ has more than $\lceil 10/\gamma \rceil$ colors missing from its vertex set. For a vertex $v$, let $C_v$ denote the event that there exist $z$ distinct edges $f_1, \ldots, f_z$, each containing $v$, and $z$ distinct colors $c_1, \ldots, c_z$, such that $c_i$ is missing from $f_i$ for each $i = 1, \ldots, z$. For a color $c$, let $D_c$ denote the event that the color $c$ appears in less than $\frac{n(1+\gamma/4)a \ln k}{k}$ vertices. We must show that with positive probability, none of the above $2|E| + |V| + t$ events hold. The following four claims provide upper bounds for the probabilities of these events.

Claim 2.4. $\Pr[A_f] < 1/k^{5a}$.

Proof: Let $X_f$ denote the random variable counting the uncolored elements of $f$. The expectation is $\mathbb{E}[X_f] = qk = \frac{\gamma k}{2(1+\gamma)} \geq \frac{2k\gamma}{5}$. Since each vertex chooses its color independently, we have by a common Chernoff inequality (cf. [2]) $\Pr[A_f] \leq \Pr[X_f < \mathbb{E}[X_f]/2] \leq e^{-\mathbb{E}[X_f]/8} \leq e^{-k\gamma/20} < 1/k^{5a}$ for $k$ sufficiently large.

Claim 2.5. $\Pr[B_f] < 1/k^{5a}$.

Proof: Fix $s = \lceil 10/\gamma \rceil$ distinct colors. The probability that none of them appear in $f$ is precisely $(1-sp)^k \leq e^{-spk} = k^{-s(1+\gamma/2)a}$. As there are $\binom{t}{s} < k^s$ possible sets of $s$ distinct colors we get that $\Pr[B_f] < k^s \cdot k^{-s(1+\gamma/2)a} \leq k^{-sa\gamma/2} \leq 1/k^{5a}$.

Claim 2.6. $\Pr[C_v] \leq \left(\frac{e\, k^{1-a\gamma/2}}{z \ln k}\right)^z$.

Proof: If the degree of $v$ is less than $z$ there is nothing to prove. Otherwise, fix a set of $z$ distinct colors $\{c_1, \ldots, c_z\}$ and $z$ distinct edges containing $v$, denoted $\{f_1, \ldots, f_z\}$. We begin by computing the probability that for each $i = 1, \ldots, z$, $c_i$ does not appear in an element of $f_i$; denote this probability by $\rho$. For a vector $x \in \{0,1\}^z$, let $V_x$ denote the subset of vertices that belong to the edge $f_i$ for each coordinate $x_i$ that is positive in $x$, and that do not belong to the edge $f_i$ for each coordinate $x_i$ that is zero in $x$. This partitions the vertex set into $2^z$ parts. Let $w_x$ denote the number of positive coordinates in $x$. Clearly, $\rho = \prod_x (1 - w_x p)^{|V_x|} \leq e^{-p \sum_x w_x |V_x|} = e^{-pzk} = k^{-a(1+\gamma/2)z}$. There are at most $t^z < (k/\ln k)^z$ ordered sets of $z$ distinct colors. Thus, the probability that $f_1, \ldots, f_z$ each miss a distinct color is less than $(k/\ln k)^z / k^{a(1+\gamma/2)z}$. There are at most $\lfloor k^a \rfloor^z / z!$ distinct subsets of $z$ edges containing $v$. This, together with Stirling's formula ($z! > (z/e)^z$), gives $\Pr[C_v] \leq \frac{k^{az}}{z!} \cdot \frac{(k/\ln k)^z}{k^{a(1+\gamma/2)z}} \leq \left(\frac{e\, k^{1-a\gamma/2}}{z \ln k}\right)^z$.

Claim 2.7. $\Pr[D_c] < k^{-(n/k)(\gamma^2/33)}$.

Proof: Let $X_c$ denote the number of vertices that received the color $c$. Clearly, $\mathbb{E}[X_c] = pn = \frac{n(1+\gamma/2)a \ln k}{k}$. Put $\beta = \frac{n a \gamma \ln k}{4k}$. We shall use the Chernoff inequality (cf. [2]) $\Pr[X_c < \mathbb{E}[X_c] - \beta] \leq e^{-\beta^2/(2\mathbb{E}[X_c])}$. In our case, $\frac{\beta^2}{2\mathbb{E}[X_c]} = \frac{n a \gamma^2 \ln k}{32k(1+\gamma/2)} \geq \frac{n}{k} \cdot \frac{\gamma^2}{33}\ln k$, so $\Pr[D_c] < k^{-(n/k)(\gamma^2/33)}$.

We now construct a dependency graph for all the events of the form $A_f$, $B_f$, $C_v$, $D_c$ (we refer to the events as "type A", "type B", "type C", and "type D", respectively). Consider an event $A_f$. Let $E(f)$ denote the set of edges of $H$ that are disjoint from $f$. Let $V(f)$ denote the set of vertices of $H$ that do not appear in any edge that intersects $f$. Clearly $A_f$ is mutually independent of all the $2|E(f)| + |V(f)|$ events of the form $A_g$, $B_g$, or $C_v$ which correspond to the elements of $E(f)$ and $V(f)$. Since there are at most $k^{a+1}$ edges intersecting $f$, and since there are at most $k^{a+2}$ vertices in these edges, the outdegree in the dependency graph from $A_f$ to other events of type A is at most $k^{a+1}$.
Similarly, the outdegree in the dependency graph from $A_f$ to other events of type B is at most $k^{a+1}$, and to events of type C it is at most $k^{a+2}$. $A_f$ depends on all events of type D, so that outdegree is $t$. This explains the first line of Table 1 (the dependency table). The other entries of the table are figured out similarly. Note that events of type D depend on all other events (the fourth line of Table 1). In order to apply Lemma 2.1 we need to assign a coefficient $x_i \in [0,1)$ to each event and verify the condition of the lemma along each line of Table 1. Indeed, recall that $n > 2k \ln k$, so $(1 - 1/e^{n/(2k)})^{k-1} > e^{-1}$. Since $t < k - 1$ we get, together with Claim 2.4, the required inequality for the events of type A. The analogous inequalities hold for events of type B and C, where we use Claim 2.5 and Claim 2.6, respectively. Finally, consider events of type D. In any $k$-uniform hypergraph, $|E| \leq n\Delta/k$. Thus, in our case, $2|E| + n \leq 3k^{a-1}n$. Using again the fact that $(1 - 1/e^{n/(2k)})^{k-1} > e^{-1}$ we have, together with Claim 2.7, the required inequality for the events of type D as well. According to Lemma 2.1, with positive probability, none of the events in the dependency graph hold. We have completed the proof of Lemma 2.3.

Second Phase

Fix a partial coloring satisfying the four conditions in Lemma 2.3. For an edge $f$, let $M(f)$ denote the set of colors missing from $f$. By Lemma 2.3 we know that $|M(f)| \leq \lceil 10/\gamma \rceil$. For a vertex $v$, let $S(v) = \bigcup_{v \in f} M(f)$. We claim that $|S(v)| \leq \lceil 10/\gamma \rceil (z-1) \leq 11z/\gamma$. To see this, notice that if $|S(v)| > \lceil 10/\gamma \rceil (z-1)$ then there must be at least $z$ distinct edges containing $v$, say $f_1, \ldots, f_z$, and $z$ distinct colors $c_1, \ldots, c_z$ such that $c_i$ does not appear in $f_i$ for $i = 1, \ldots, z$. However, this is impossible by the third requirement in Lemma 2.3. In the second phase we only color the vertices that are uncolored after the first phase. Let $v$ be such a vertex. We let $v$ choose a random color from $S(v)$ with uniform distribution. The choices made by distinct vertices are independent. (In case $S(v) = \emptyset$ we can assign an arbitrary color to $v$.) Let $f \in E$ be any edge, and let $c \in M(f)$. Let $A_{f,c}$ denote the event that after the second phase, $c$ still does not appear as a color of a vertex of $f$. Our goal is to show that with positive probability, none of the events $A_{f,c}$ for $f \in E$ and $c \in M(f)$ hold. This will give a proper strong $t$-coloring of $H$ (although not necessarily an equitable one). Let $T(f)$ be the subset of vertices of $f$ that are uncolored after the first phase. By Lemma 2.3 we have $|T(f)| \geq k\gamma/5$. If $c \in M(f)$ we have that for each $u \in T(f)$, the color $c$ appears in $S(u)$. Hence, $\Pr[A_{f,c}] \leq \left(1 - \frac{\gamma}{11z}\right)^{k\gamma/5} \leq e^{-k\gamma^2/(55z)} \leq \frac{1}{k^{a+2}}$, where the last inequality holds for $k$ sufficiently large by the choice of $z$. Since each event $A_{f,c}$ is mutually independent of all other events but those that correspond to edges that intersect $f$, the dependency graph of the events has maximum outdegree at most $\lceil 10/\gamma \rceil k^{a+1} < k^{a+2}/e - 1$. Since $\frac{1}{k^{a+2}}\left(\left(\frac{k^{a+2}}{e} - 1\right) + 1\right) = \frac{1}{e}$, we have, by Corollary 2.2, that with positive probability none of the events of the form $A_{f,c}$ hold. In particular, there exists a strong $t$-coloring of $H$.
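As a numeric illustration of this application of Corollary 2.2, the following Python sketch checks the symmetric Local Lemma condition $ep(d+1) \leq 1$ for the events $A_{f,c}$. The probability bound and the outdegree bound follow the estimates above; the parameter $z$ is left free, since its exact choice belongs to the proof, and the sample values of $k$, $a$, and $\gamma$ are arbitrary.

```python
# Numeric sanity check of the symmetric Local Lemma condition
# e * p * (d + 1) <= 1 for the second-phase events A_{f,c}.
# The bounds follow the estimates in the text; z is a free parameter
# (its exact choice is fixed elsewhere in the proof), and the sample
# values of k, a, gamma below are arbitrary.
import math

def lll_condition_holds(k: float, a: float, gamma: float, z: float) -> bool:
    # Pr[A_{f,c}] <= (1 - gamma/(11z))^(k * gamma / 5)
    p = (1.0 - gamma / (11.0 * z)) ** (k * gamma / 5.0)
    # Each A_{f,c} depends on at most ceil(10/gamma) * k^(a+1) others.
    d = math.ceil(10.0 / gamma) * k ** (a + 1.0)
    return math.e * p * (d + 1.0) <= 1.0

print(lll_condition_holds(k=1e6, a=1.0, gamma=0.1, z=5.0))   # True
print(lll_condition_holds(k=1e6, a=1.0, gamma=0.1, z=50.0))  # False: z too large
```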
The finite case: n < 2k ln k

As in the proof for the general case, let each vertex choose a color randomly and independently, each color with probability p = (1+γ/2)a ln k / k for i = 1, ..., t, the probability of remaining uncolored being q = 1 − pt = γ/(2(1+γ)). As in the proof of Claim 2.4, the probability that an edge contains fewer than kγ/5 uncolored vertices is less than 1/k^{5a}. There are |E| ≤ nk^a/k ≤ 2k^a ln k edges. Hence, the expected number of edges with fewer than kγ/5 uncolored vertices is less than 1/k^3. Thus, with probability at least 1 − 1/k^3, all edges have at least kγ/5 uncolored vertices. As in the proof of Claim 2.7, the probability that a color appears in fewer than na ln k(1 + γ/4)/k vertices is less than 1/k^{(n/k)(γ²/33)}. Unlike Claim 2.7, we cannot bound this number from above by e^{−n/k}; instead, since n ≥ k (otherwise there are no edges at all), we can bound it with k^{−γ²/33}. Since there are t < k colors, the expected number of colors that appear in fewer than na ln k(1 + γ/4)/k vertices is less than k^{1−γ²/33}. Thus, with probability at least 2/3, there are fewer than 3k^{1−γ²/33} such colors. Finally, let X count the number of pairs (e, c) where e ∈ E and c is a color that is missing from e. Clearly, E[X] ≤ |E| · t · (1 − p)^k ≤ 2k^{a+1} ln k · k^{−(1+γ/2)a} = 2k^{1−γa/2} ln k, which is less than kγ/15 for k sufficiently large. Hence, with probability at least 2/3, X < kγ/5. We have proved that with probability at least 1 − 1/k^3 − 1/3 − 1/3 > 0 all the following occur simultaneously:

1. All edges have at least kγ/5 uncolored vertices.
2. Fewer than 3k^{1−γ²/33} colors appear each in fewer than na ln k(1 + γ/4)/k vertices.
3. The number of pairs (e, c) of edges e and colors c such that c is missing from e is less than kγ/5.

Fix a partial coloring with all these properties. Trivially, we can make it a proper strong coloring by assigning a color c that is missing from an edge e to one of the uncolored vertices of e, and we can do it greedily for all such (e, c) pairs. We therefore obtain a proper strong t-coloring of H where, in addition, at least t − 3k^{1−γ²/33} colors appear each in at least na ln k(1 + γ/4)/k vertices. We can now use the same arguments as in the third phase of the general case and obtain an equitable coloring. The only difference is that instead of t we only use t − r colors, where r is the number of color classes having fewer than na ln k(1 + γ/4)/k vertices. Thus, t − r ≥ t − 3k^{1−γ²/33} > t(1 − γ/33), and it is easily seen that all computations in the third phase hold when replacing t with t(1 − γ/33).

A random hypergraph construction

Let a ≥ 1 and let ε > 0. Let n = k^{2a}. For simplicity we assume n is an integer in order to ignore floors and ceilings; k will be selected sufficiently large to justify this assumption and the assumptions that follow. Let m = (1−ε)k^{3a−1} (again, assume m is an integer). Consider the random k-uniform hypergraph on the vertex set [n] with m randomly selected edges f_1, ..., f_m. Each edge f_i is chosen uniformly from all C(n, k) possible edges. The m choices are independent (thus, the same edge can be selected more than once). The expected degree of a vertex v (including multiplicities) is mk/n = (1−ε)k^a. Notice that for k sufficiently large we have, using a Chernoff inequality, that the degree of v is greater than k^a with probability less than 1/(2k^{2a}) = 1/(2n). Hence, with probability greater than 0.5 the maximum degree is at most k^a. Put t = (1−2ε)na ln k / k. Again, we assume t is an integer. We show that with probability greater than 0.5, no t-subset of vertices is a vertex cover. This proves the existence of hypergraphs H with ∆(H) ≤ k^a and c(H) ≤ (1 + o_k(1))k/(a ln k). Fix X ⊂ [n] with |X| = t. For each edge f_i we have, assuming k is sufficiently large, Pr[f_i ∩ X = ∅] = C(n−t, k)/C(n, k) ≥ (1 − t/(n−k))^k ≥ k^{−(1−ε)a}. Since each edge is selected independently, we have Pr[every f_i intersects X] ≤ (1 − k^{−(1−ε)a})^m ≤ exp(−(1−ε)k^{2a−1+εa}). There are C(n, t) possible choices for X. It suffices to show that C(n, t)(1 − k^{−(1−ε)a})^m < 1/2. Indeed, for k sufficiently large, C(n, t) ≤ (en/t)^t = exp(O(k^{2a−1} ln²k)), and since k^{εa} ≫ ln²k the product is o(1).
Concluding remarks

• In the proof of Theorem 1.1 we require that ∆(H) ≤ k^a for some fixed a ≥ 1. It is possible (although the computations get somewhat more complicated) to prove Theorem 1.1 when a is not necessarily a constant but satisfies a = a(k) = o(k/ln k). In other words, ∆(H) is allowed to be any subexponential function of k.

• The proof of Theorem 1.1 is not algorithmic. It is, however, possible to obtain a polynomial time (in the number of vertices of the hypergraph, not in its uniformity) algorithm that yields an equitable partition with (1 − o_k(1))ck/(a ln k) parts, where c is a fixed small constant (depending only on a). This can be done by using the method of Beck for the two-coloring of hypergraphs [3] and generalizing it to more colors. We also need to take care that the coloring obtained is equitable (Beck's algorithm does not guarantee this). However, Beck's algorithm can be modified so as to guarantee that all colors are used on roughly the same number of vertices, and then we can use the approach from the third phase of our proof to show that, by sacrificing only a small fraction of the colors, we can make the partition equitable using the remaining colors. Notice that the third phase can easily be implemented in polynomial time.

• A special case of Theorem 1.1 yields an interesting result about graphs. Let G be a k-regular graph. Then G has an equitable coloring with (1 − o_k(1))(k/ln k) colors such that each color class is a total dominating set (a total dominating set D is a subset of the vertices with the property that each vertex v ∈ G has a neighbor in D). To see this, we can construct a hypergraph H from the graph G as follows: for every vertex v of G, take the neighborhood N(v) as an edge of H. Since G is k-regular, H is k-uniform and ∆(H) ≤ k, so Theorem 1.1 applies with a = 1; in a strong coloring of H, every color class intersects every neighborhood N(v), and hence every color class is a total dominating set of G.
Introduction: EU External Relations Law—Shared Competences and Shared Values in Agreements with the EU and Its Eastern Neighbourhood

The Treaty of Lisbon not only codified to a large extent the evolutionary and pre-existing case law of the European Court of Justice regarding the Union's external competences in its art. 3 and 216 Treaty on the Functioning of the European Union (TFEU). It also added new layers to its external action by referring to the Union's common values in art. 21 Treaty on European Union (TEU) and art. 206 TFEU, which the EU shall uphold in its dealings with third countries and international organizations. Thus, external and internal issues determine the Union's external action, and the question arises whether these additions enable the Union to set sail for expanding to new horizons in its external actions or whether it is bound to the old, already known shores for its international activities.

New Horizons or Old Shores?

The Treaty of Lisbon 1 not only codified to a large extent the evolutionary and preexisting case law of the European Court of Justice 2 regarding the Union's external competences in its art. 3 and 216 Treaty on the Functioning of the European Union (TFEU). It also added new layers to its external action by referring to the Union's common values in art. 21 Treaty on European Union (TEU) and art. 206 TFEU, which the EU shall uphold in its dealings with third countries and international organizations. Thus, external and internal issues determine the Union's external action, and the question arises whether these additions enable the Union to set sail for expanding to new horizons in its external actions or whether it is bound to the old, already known shores for its international activities. This volume tries to explore some of the remaining legal and practical challenges for the EU posed by these two additions to the text of the EU founding treaties. First, the Union's external treaty-making power will be analysed. According to the principle of conferral as embodied in art. 4 (1), 5 (1) TEU, the EU is only entitled to act if the Member States have entrusted it with a competence to do so. If not, the Member States remain competent to act internationally. Over the years, and with the aim of avoiding conflicts of competence between the Union and its Member States, this led to the conclusion of mixed agreements, which became the foremost legal tool for the EU's exercise of its external trade power. In its already famous Opinion 2/15, 3 the Court of Justice of the EU (CJEU) examined the external competences of the Union post-Lisbon but was unable to solve all open matters. A side look will be given to the very special relations of the EU with Switzerland, which deserve further scrutiny. Secondly, a closer look will be given to the EU's actions in regard to its Eastern Neighbourhood, because the relations with the countries of the Eastern Partnership were largely shaped after the entry into force of the Treaty of Lisbon, are driven by the EU's new legal portfolio, and serve as a perfect example of the exercise of its newly acquired powers in the domain of external relations. The post-Soviet states of the Eastern Neighbourhood are culturally deeply linked to the EU but face peculiar difficulties in transforming their societies. The third chapter of the book is devoted to the Union's far less developed relations with the countries of the Eurasian Economic Union, the second bloc of post-Soviet countries.
Evolution and Current Challenges of the Union's External Action

Therefore, the first part of this volume is devoted to the evolution and current challenges of the EU's external actions by scrutinizing the issues associated with shared competences and shared values. The chapter is opened by Peter-Christian Müller-Graff's contribution on "New Challenges for the Union's Treaty Making Powers and Common Values in Implementing its Agreements". 4 He deals with the political and legal issues stemming from the role of the EU in the world as laid down in the 2017 Rome Declaration of the Union. There, the leaders of 27 Member States pledged to work towards "a stronger Europe on the global scene: a Union further developing existing partnerships, building new ones and promoting stability and prosperity in its immediate neighbourhood to the east and south but also in the Middle East and across Africa and globally". The underlying politically relevant question is whether the Union has the clout to fulfil this pledge, whereas the legally relevant question is how far the Union's competences match this intention. One of the analysed overarching questions is directed at exploring how far the Union's treaty-making powers reach in the face of the new challenges of international relations. The following works by Lorenzmeier, Vedder, Kumin and van Elsuwege shed light on the Union's external powers and mixed agreements, one of the main legal tools for solving the allocation of competence between the EU and its Member States. Christoph Vedder explores the Union's implied competences in his contribution "From ERTA to Singapore -Two Landmark Decisions on the Road to the Union's Powerful Foreign Policy". 5 By providing an overview of the historical evolution of the Union's external powers, he puts the 2017 Singapore Opinion 2/15 of the CJEU 6 in context. In particular, Christoph Vedder delineates the evolution of the treaty-making powers with special emphasis on the exclusivity of the common commercial policy and, in a broad range of situations, the implied powers after their codification through the Treaty of Lisbon. The efforts of the European Commission towards the conclusion of "EU-only" instead of mixed agreements with the participation of the Member States led, in 2017, to the Singapore opinion of the CJEU, another landmark in the row of CJEU judgments which elucidated the scope of the external powers of the Union. This opinion and some of its implications are also analysed by Stefan Lorenzmeier. In "Exclusive and Shared Competences After the Singapore Opinion of the CJEU: 2/15 revisited", 7 he scrutinizes the landmark decision of the CJEU in respect of the already established case law of the CJEU and the EU treaties. Therein, especially in light of new-generation EU trade agreements, he shows the impact of the decision on these agreements and how they have to be shaped after the decision, especially to avoid the rather burdensome process of concluding mixed agreements. Additionally, by turning to the German national legal order, a further layer, the impact of the decision on the principle of democracy, will be shown as well. Andreas J. Kumin takes a closer look at mixed agreements after Opinion 2/15 in his contribution "Mixed Agreements After ECJ Opinion 2/15 on the EU-Singapore Free Trade Agreement".
8 The questions addressed by the CJEU in this Opinion have implications for the appropriate handling of concrete present and future comprehensive free trade agreements of the EU, including provisions on trade in services, transport services, establishment, investment protection and investment dispute resolution, such as CETA with Canada, the EU-Japan Economic Partnership Agreement or TTIP with the United States, which are looked at by the author. A final layer of mixed agreements is their ratification in the Member States. Using the problematic ratification of the association agreement between the EU and Ukraine in the Netherlands, Peter Van Elsuwege elaborates on the lack of, or delayed, ratification in an EU Member State and the provisional entry into force of an agreement in his work entitled "The Ratification Saga of the EU-Ukraine Association Agreement: Some Lessons for the Practice of Mixed Agreements". 9 In particular, the author emphasizes how the domestic political agenda in one of the EU Member States may jeopardize the complicated process of ratifying an EU framework agreement with a third country. The first part of the volume concludes with Christa Tobler's elaboration of the very peculiar EU-Swiss relationship. Her contribution "The EU-Swiss Sectoral Approach Under Pressure -Not Least Because of Brexit" 10 states that Switzerland's unique legal relationship with the European Union experiences constant political pressure, both from the inside and the outside. This concerns notably the debate in Switzerland around the issue of migration from the EU to Switzerland and the demand of the EU for a renewed institutional framework for certain market agreements with the EU, which has led to negotiations on this matter. Whilst Switzerland is seeking special solutions in both respects, the EU's rhetoric increasingly emphasizes the need for homogeneity in the internal market. Brexit has not made matters simpler and is itself influenced by the situation in relation to Switzerland.

The EU and Its Eastern Neighbourhood

The second part aims to take a closer look at the EU's cooperation with its Eastern Neighbourhood as a case study that illuminates the impact of the EU's regional policies on its external bilateral relations with third countries that pursue different geopolitical objectives. An emphasis of the second part is the different assessments of the debate between shared values on the one hand and the type of integration of the states of the Eastern Neighbourhood into the EU system on the other. First, the association agreements with Ukraine, Georgia and Moldova will be looked at. This group of association agreements is distinguished by a deep level of political and economic cooperation and the profound desire of Ukraine, Georgia and Moldova for close integration with the EU. Then the second group, the enhanced partnership agreements with Armenia and Kazakhstan, will be scrutinized. These agreements mirror the association agreements with Ukraine, Georgia and Moldova but lack deep trade cooperation and comprehensive legislative approximation due to the participation of Armenia and Kazakhstan in the Eurasian Economic Union. Roman Petrov analyzes the challenges of the effective implementation of the EU-Ukrainian Association Agreement in his work. 11 He looks at the progress of the implementation and application of the EU-Ukraine Association Agreement (AA), which triggered unprecedented political, economic and legal reforms in Ukraine.
In particular, the paper focuses on the constitutional challenges that have arisen for Ukraine in the course of implementing the AA in its legal system. Two issues are considered in the chapter. The first issue is the effective implementation and application of the AA within the Ukrainian legal order. The second issue is the compatibility between the AA and the Constitution of Ukraine. The latest political and legal developments in Ukraine are examined through the prism of the effective implementation of the EU-Ukraine Association Agreement and the promotion of EU common values. In conclusion, it is argued that the EU-Ukraine AA enhanced the adaptability of the national constitutional order to the European integration project and EU common values. This is followed by Gaga Gabrichidze's work on "National and Bilateral Normative Framework for Legislative Impact of the EU Law on the Georgian Legal System". 12 He explores the Association Agreement concluded between Georgia and the European Union in 2014, which raised the relevance of EU law for Georgian legislation to a new level. However, long before the conclusion of the AA, the Georgian legislator had expressed its fascination with EU law in the form of many self-imposed commitments. Gabrichidze's chapter deals with those obligations that Georgia has taken upon itself, whether on the basis of unilateral actions or under an international arrangement, which form a normative framework for the legislative impact of EU law on the Georgian legal system. A third strand concerning the EU's association agreements with its Eastern Neighbourhood is examined by Kseniia Smyrnova. Her contribution "Principles and Values of Fair Competition in the EU and Its Association Agreements with Ukraine, Moldova and Georgia" 13 deals with the principles and values of fair competition that the EU shares through the AAs with these countries. The preferential trade relations established by the AAs include rules on fair competition. However, the competition chapters are very diverse, and the provisions on competition rules include some important differences, as the Moldovan and Georgian DCFTAs are less ambitious than the Ukrainian DCFTA. The chapter delves into these differences by analysing the legislative enforcement and judicial practice in the implementation of the AAs' competition rules in Moldova, Georgia and Ukraine. The EU's Enhanced Partnership Agreements (EPAs) with Armenia and Kazakhstan do not establish as close a political and economic cooperation with the EU regulatory space as the AAs with Ukraine, Georgia and Moldova. Nevertheless, the EPAs play the role of an almost equivalent "substitution" for the AAs for those post-Soviet countries that opted instead to transfer part of their sovereignty to the Russia-led Eurasian Economic Union (EAEU). The EU-Kazakhstan Enhanced Partnership is depicted by Zhenis Kembayev in his contribution "The EU-Kazakhstan Enhanced Partnership: An Overview and Evaluation". 14 The author examines the development of the EU-Kazakhstan partnership, states its major problems and identifies the prospects of its future progress by discussing the applicable provisions of the EPA and comparing them with the PCA and the AAs, in particular the one concluded between the EU and Ukraine. Anna Khvorostiankina looks into the EPA with Armenia, another post-Soviet state, and its partnership with the EU.
Her contribution is entitled "EU-Armenia Comprehensive and Enhanced Partnership Agreement: A New Instrument of Promoting EU's Values and General Principles of EU Law". 15 It deals with the EU-Armenia Comprehensive and Enhanced Partnership Agreement (CEPA) as an instrument of promoting EU common values and general principles of EU law. The contribution stresses that Armenia is a unique case of a state which is a member of the EAEU and, at the same time, is eager to strengthen its ties with the EU within the framework of the Eastern Partnership and to implement the required reforms. The author analyses the objectives and legal basis of the Agreement and assesses the potential influence of the CEPA on the Armenian legal order.

The EU and the Eurasian Economic Union

We thereby turn to the third strand of the volume, the relationship of the EU with the EAEU and its Member States. The EAEU partly replicates the EU legal order by establishing a common customs zone but is also very different from it. For instance, it is not built on shared values and stresses the untouchability of the sovereignty of its Member States. 16 Rilka Dragneva-Lewers explores the said relations in her chapter called "Pork, Peace and Principles: the Relations Between the EU and the Eurasian Economic Union" 17 by assessing Eurasian integration against the dimensions of the EU's external policy. The analysis starts with a discussion of the status quo of the EU's relations with the Eurasian region and the tensions already observed, before exploring the institutional nature and practice of the EAEU. Paul Kalinichenko focuses on the interesting but challenging relationship between Russia and the EU. His contribution on "The EU and Russia: Old Legal Grounds for New 'Selected Engagement' Relations" 18 analyses the modern legal aspects and the political and legal circumstances surrounding EU-Russia relations in the light of recent events and the deterioration of relations between Russia and the EU in general. In 2019, the EU and Russia celebrated the 25th anniversary of the EU-Russia Partnership and Cooperation Agreement (PCA), but most of the agreement's provisions are no longer in force and have become largely obsolete. Unfortunately, the negotiations on a new basic agreement between the EU and Russia have stagnated. At best, this situation has led to an increase in soft law instruments of mutual cooperation. Interestingly, along another strand, a certain Europeanization of Russian law can be detected. Finally, the challenges of Belarus-EU relations are explored by Maksym Karliuk. His chapter on "The EU and Belarus -Current and Future Contractual Relations" 19 scrutinizes the contractual relations between the EU and Belarus as they stand today and the future possibilities, given the rocky history of the bilateral relations. The main international agreement between the parties still dates from the Soviet era. Nevertheless, more engagement between the parties has been happening, which has already led to new frameworks being established, and interest in some continuation seems to be present. The author analyses the effect of international contractual obligations in Belarus, the peculiar case of WTO law being applicable in the country without its membership in the organization, the way the EAEU constrains possible deeper engagement of the country with the EU, and the role of values.

16 See http://www.eaeunion.org/?lang=en. 17 Chap. 13. 18 Kalinichenko, Chap. 14. 19 Karliuk, Chap. 15.

New Shores?
The analyses collected in this volume have shown that the EU will remain a very active player on the international stage. Externally, its relations with the countries of the Eastern Partnership are structured differently, depending on the intention of the other party to integrate in, or to accept parts of, the Union's acquis in its domestic legal order. The range runs from a rather deep approximation of laws, as achieved by the AAs with Ukraine, Georgia and Moldova, to rather loose contacts with Belarus and even more limited and strained relations with Russia. However, countries willing to establish closer relations with the EU have to agree to a shared set of values determined by the Union, followed by close monitoring and conditionality by the EU institutions. Internally, the EU's relation with its Member States is determined by the allocation and nature of powers enshrined in the founding treaties. These powers are a common battleground because questions of competence are barometers of power. 20 Thus, special attention has to be paid by the EU institutions and its Member States. Issues of exclusive or shared competence and the challenge of mixity in all its forms will remain problematic and can only be solved in the course of time by the acting persons, if not the "society" of the Union and its Member States, and the judiciary. The judicial organs of the Union and the Member States should use their entrusted internal powers carefully and in the spirit of cooperation, because unresolvable conflicts and unexpected natural and health emergencies would hamper the concept of European integration and its promotion externally, 21 maybe even permanently. Thus, and all in all, the "post-Treaty of Lisbon" EU may not have been put in a position to explore new horizons of its internal and external competences, but it still may explore new shores and develop its policies gradually and incrementally.

Post Scriptum

In the course of working on and editing this volume, we learned of the sudden passing of our friend and colleague Prof. Zhenis Kembayev in early 2019. This is a sad and irrevocable loss for the international and Kazakh academic legal communities. Professor Kembayev was one of the pioneers of the promotion of EU law and EU studies in the entire post-Soviet area. He was the first Kazakh academic to be awarded the prestigious Jean Monnet Chair in EU Law and to be published in leading international books and journals. Zhenis will be remembered as a competent and prolific contributor on various issues of legal reform in Kazakhstan, the Europeanization of post-Soviet countries, the evolution of the EAEU and the future of the Silk Road. This volume is dedicated to Prof. Zhenis Kembayev's memory.

Stefan Lorenzmeier works at the University of Augsburg's (Germany) Law Faculty, where he researches and teaches in the areas of Public International Law and European Law. He is a lecturer at various universities and has authored numerous works on European Union law. Roman Petrov is a Jean Monnet Chair in EU Law and the Head of the Jean Monnet Centre of Excellence at the National University "Kyiv-Mohyla Academy" in Ukraine. Areas of Prof. Dr. Petrov's research and teaching include: EU Law; EU External Relations Law; Approximation and Harmonization of Legislation in the EU; Rights of Third Country Nationals in the EU; and Legal Aspects of Regional Integration in the Post-Soviet Area.
Christoph Vedder is a professor emeritus who previously held the Chair of Public Law, Public International Law and European Law as well as Sports Law, a Jean Monnet Chair of European Law ad personam, at the University of Augsburg, Germany. He studied law and history in Göttingen, Geneva, and Nice, and graduated and earned his doctoral degree from the University of Göttingen. He was appointed assistant professor at the Institute for Public International Law of the University of Munich, where he also received his habilitation. He has been a visiting scholar at several universities, is a member of the Conference of the States Parties of the OPCW and is the author of numerous works on European Union law and its external relations.
Identification of Phishing URLs Using Machine Learning

Phishing is a common attack on unsuspecting individuals, tricking them into revealing their personal data through fake websites. The objective of phishing website URLs is to steal personal information such as usernames, passwords and online banking transactions. Phishers use websites which are visually and semantically similar to genuine websites. As technology continues to develop, phishing techniques have started to advance rapidly, and this must be prevented by using anti-phishing mechanisms to detect phishing. Machine learning is a powerful tool used to counter phishing attacks. Many real-time attacks occur because of phishing URLs, and no automatic procedure has so far been established to counter multiple phishing-URL attacks in a coordinated way. In the proposed framework for finding phishing attacks/URLs, the system will detect multiple phishing attacks in parallel sequence and warn ordinary users about phishing URLs.

Introduction

Nowadays, many people use the internet to perform various activities such as online shopping, online bill payment, online mobile recharge and banking transactions. Because of this wide usage, users face various security threats such as cybercrime. Many kinds of cybercrime are commonly committed, for example spam, fraud, cyber terrorism and phishing. Among these, phishing is a newer and very widespread cybercrime. Phishing is a fraud attempt performed to obtain sensitive information from users. A phisher designs a website which looks the same as a legitimate website and spoofs users into giving up their private information, such as usernames, passwords and banking details, for various purposes. Phishing is among the most dangerous criminal activities on the internet. Since most users go online to access services provided by government and financial institutions, there has been a significant increase in phishing attacks over the past few years. Phishers have started to earn money from this and run it as a successful business. Various techniques are used by phishers to attack vulnerable users, such as messaging, VOIP, spoofed links and fake websites. It is easy to create fake websites which look like genuine websites in terms of layout and content. Indeed, the content of these websites can be indistinguishable from that of the legitimate originals. The purpose of creating these websites is to obtain private data from users, such as account numbers, login IDs, and passwords of debit and credit cards. Moreover, attackers pose security questions to answer, masquerading as a high-level security measure provided to users. When users respond to those questions, they are easily trapped by phishing attacks. Much research has been ongoing to prevent phishing attacks in different communities around the world. Phishing attacks can be prevented by detecting the websites and making users aware of how to identify phishing websites. Machine learning algorithms have been among the most powerful techniques for detecting phishing websites. Phishing is a serious information security problem. It can happen in two ways: either by the victim receiving suspicious documents that lead to a fraudulent destination, or by users accessing links that go directly to a phishing website.
In any case, the two methods have one thing in common: the attacker targets human vulnerabilities rather than software vulnerabilities. Phishing can be characterized as fraudsters attempting to manipulate the customer into sending them personal information, for example a username, password or credit card number. These scams lead to financial crises for users [4]. In the mid-90s, phishers created false accounts with fake identities at AOL, a company that provided a web portal and was an online service provider. In this way, phishers could exploit its services at no cost to themselves. Unfortunately, the phishers then used another technique, stealing valid accounts by acting as AOL employees and requesting that users provide their passwords for security reasons. This happened either by email or via text-messaging services [6]. Recently, there have been several studies attempting to solve the phishing problem. They can be classified into four categories: blacklist-based, heuristic-based, content-analysis-based and machine-learning-based approaches. In light of the rapid increase in phishing websites, the blacklist technique has become inefficient at deciding whether traffic originates from new phishing sites [4]. The heuristic approach uses signature databases of known attacks, matching a signature against a heuristic pattern. The trade-off of using heuristics is that they fail to recognize novel attacks, as it is easy to bypass signatures via obfuscation. Also, updating the signature database is slow considering the growth of attacks, especially new attacks [7]. The content-analysis approach identifies websites using well-known algorithms; it analyses the content of the website itself to decide whether or not the site is phishing. The accuracy of phishing detection varies from one algorithm to another.

Related Work

T. Peng et al. [3] present an approach to detect phishing email attacks using natural language processing and machine learning. This is used to perform semantic analysis of the text to recognize malicious intent. A natural language processing (NLP) framework is used to parse each sentence and determine the semantic positions of words in the sentence in relation to the predicate. Considering the role of each word in the sentence, this framework recognizes whether the sentence is a question or a command. Supervised machine learning [3] is used to generate the blacklist of malicious pairs. The authors define the algorithm SEAHound [3] for detecting phishing emails, and the Netcraft Anti-Phishing Toolbar is used to check the legitimacy of a URL. This algorithm is implemented with Python scripts, and the Nazario phishing email set is used as the dataset. The results of Netcraft and SEAHound [3] were compared, obtaining accuracies of 98% and 95% respectively. [5] proposed a model to recognize phishing sites using a URL identification approach based on the Random Forest algorithm. The model has three phases, namely parsing, heuristic classification of data, and performance analysis [5]. Parsing is used to examine the feature set. The dataset was gathered from PhishTank. Out of 31 features, only 8 features are considered for parsing. The Random Forest method obtained an accuracy level of 95%.
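The random forest pipeline of [5] can be illustrated with a short sketch. This is a minimal illustration under assumptions, not the authors' code: the eight features actually selected in [5] are not enumerated in the text, so the lexical features below are a plausible stand-in, and the function names are hypothetical.

```python
from urllib.parse import urlparse

from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def url_features(url: str) -> list:
    """Extract simple lexical features from a URL (illustrative set only)."""
    parsed = urlparse(url)
    host = parsed.netloc
    return [
        len(url),                        # total URL length
        url.count("."),                  # dots proliferate in phishing hosts
        sum(c.isdigit() for c in url),   # digit count
        int("@" in url),                 # '@' can hide the real destination
        int("-" in host),                # hyphenated host is a weak signal
        int(parsed.scheme == "https"),   # scheme
        max(0, host.count(".") - 1),     # rough sub-domain count
        len(parsed.path),                # path length
    ]


def train_detector(urls, labels):
    """urls: list of URL strings; labels: 1 = phishing, 0 = legitimate."""
    X = [url_features(u) for u in urls]
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
    return clf
```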
[6] proposed an adaptive filtering decision module to extract features automatically, without any specific expert knowledge of the URL domain, using a neural network model. They used all the characters contained in the URL strings and counted byte values. They not only count byte values but also cover parts of neighbouring characters by shifting 4 bits. They embed combination information of two characters appearing consecutively, count how many times each value appears in the original URL string, and obtain a 512-dimension vector. The neural network model was tested with three optimizers: Adam, AdaDelta and SGD. Adam was the best optimizer, with an accuracy of 94.18%. The authors also conclude that this model's accuracy is higher than that of previously proposed complex neural network topologies. [7] made a comparative study to detect malicious URLs with a conventional machine learning technique, logistic regression using bigrams, and deep learning techniques such as a convolutional neural network (CNN) and CNN long short-term memory (CNN-LSTM) [7] as the architecture. [12] provided a survey of empirical research contributing to URL classification techniques for phishing detection. They use 4500 URLs as a dataset to classify features into four classes: lexical features, URL-related features, network-related features, and domain-specific features. Several machine learning methods have been studied. Marchal et al. [13] present a system named PhishStorm that can differentiate phishing URLs based on lexical analysis of the link. The system extracts 12 features, for example the popularity of the registered domain, Alexa rank, the number of words found in search engine queries, and information based on these words in the URL. The classification resulted in 94% precision accuracy with a low false positive rate of 1.4%; with precision at that rate, the framework could calculate the threat score of URLs on the study dataset with 92.22% accuracy for genuine URLs and 83.97% accuracy for phishing URLs. Syrageldin et al. [14] presented a tool to detect websites based on two classes: lexical URL analysis and website content analysis. The drawback of this system with respect to feature collection is the partial rendering strategy.

Existing System

Recently, there have been several studies that tried to solve the phishing problem. Some researchers compared URLs with existing blacklists that hold collections of malicious websites which they have compiled, and others have used the URL in a specific way, namely comparing the URL with a whitelist of legitimate websites [15]. The latter approach uses heuristics, which employ a signature dataset of each particular attack, matching a signature against the heuristic pattern to decide whether it is a phishing website [16]. Moreover, measuring website traffic using Alexa is another method that researchers have adapted to detect phishing pages.

Figure 1. Overview of the proposed system

Protection is based on static features of a web page, ranging from the number of iframes to the presence of known fraudulent phone pages. The main features which can be checked by the server are listed as follows: 1. phishing URL, 2. iframes, 3. forward slash or question mark in the URL link, 4. internal sub-domains in the HTML page, 5. redirected web URLs. The application will check the dataset held on the main server.
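A minimal sketch of the static URL checks listed above (items 3 and 4, plus a raw-IP test commonly used alongside them) is shown below. The thresholds are illustrative assumptions, not values given in the paper, and the function name is hypothetical.

```python
import re
from urllib.parse import urlparse


def suspicious_url(url: str) -> bool:
    """Apply simple static URL checks; thresholds are illustrative only."""
    parsed = urlparse(url)
    host = parsed.netloc.split(":")[0]  # strip any port
    # Item 3: extra forward slashes or query markers in the URL
    if url.count("//") > 1 or url.count("?") > 1:
        return True
    # Item 4: an unusual number of internal sub-domains
    if host.count(".") > 3:
        return True
    # Raw IP address instead of a domain name
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        return True
    return False
```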
If any new URLs are detected, the malicious page link is automatically added to the dataset, so that next time our application will check the web page against the dataset. Advantages of the proposed system: identify the malicious site; block the redirecting site; identify iframe elements.

Web Deployment

A web application is deployed for the user to click and browse any web URL at their convenience. Through this web application the user performs browsing activities. The user must register their details, such as name, mobile number, e-mail ID and other credentials.

Server

In this module, the dataset is stored, consisting of the list of phishing URLs. Whenever a user sends a request to browse a URL, that URL is compared with the dataset stored on the server to verify the URL status. If the URL is listed in the dataset, then the URL is not allowed to open on the client end, i.e. the Android application. New lists of blocked URLs can also be added for comparison.

Detection of malicious phishing holistic web links and sub-links

In this module, the approach used for detection of phishing URLs is comparison with the dataset. Once we find that the requested URL is present in the phishing URL set, the requested URL is blocked by the server, so that the client is never allowed to open it. This module focuses on web links and sub-links: the URL is the standard URL link, and the sub-links are the vulnerable keywords.

Detection of malicious holistic redirect web links

We implement this approach by comparing against the dataset. Once we find that the requested URL is present in the phishing URL set, the requested URL is blocked by the server, so that the client can never open it. In this module, redirect URLs are compared with the dataset. Usually only the URL link provided from the client end is compared with the dataset, but attackers can set a benign URL in the initial request while hiding the malicious web link in the redirecting page. So we check the redirect URL as well.

Iframe Detection

In this module, the approach used for detection of phishing URLs likewise prevents the client from opening them. The technical purpose of this module is to verify whether any iframe links are embedded in the page behind the URL. Some of the vulnerable URLs may include attractive images; users tend to click on the image, which would be a viral activity capturing all the user's credentials through the application. Several features are compared using various data mining algorithms. The results point to the efficiency that can be achieved using lexical features. To protect end users from visiting these sites, we can try to identify phishing URLs by analysing their lexical and host-based features. A particular challenge in this domain is that criminals are constantly creating new approaches to counter our defence measures. To succeed, we need algorithms that continually adapt to new patterns and features of phishing URLs. Therefore, the paper concludes that through this framework we can restrict redirecting and malicious websites on client devices.
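The three detection modules described above (dataset comparison, redirect-chain inspection and iframe detection) could be combined on the server roughly as follows. This is a sketch under assumptions: the paper does not specify an implementation, and a real deployment would need sandboxed fetching, proper error handling and rate limiting.

```python
import requests
from bs4 import BeautifulSoup


def check_url(url: str, blacklist: set) -> str:
    """Server-side check combining blacklist, redirect chain and iframe scan."""
    # Module 1: direct dataset comparison
    if url in blacklist:
        return "blocked: listed phishing URL"
    try:
        resp = requests.get(url, timeout=5, allow_redirects=True)
    except requests.RequestException:
        return "blocked: unreachable or malformed URL"
    # Module 2: inspect every hop of the redirect chain, not just the typed URL
    for hop in resp.history + [resp]:
        if hop.url in blacklist:
            return "blocked: redirect chain passes through a listed URL"
    # Module 3: flag pages embedding blacklisted or zero-size iframes
    soup = BeautifulSoup(resp.text, "html.parser")
    for frame in soup.find_all("iframe"):
        if frame.get("src") in blacklist or frame.get("width") in ("0", "0px"):
            return "blocked: suspicious iframe detected"
    return "allowed"
```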
Genetic diversity and structure of Elymus tangutorum accessions from western China as unraveled by AFLP markers

Background: Understanding the genetic diversity of wild plant germplasm and the relationships between ecogeographic and genetic characteristics may provide insights for better utilizing and conserving genetic resources. Elymus tangutorum (Nevski) Hand.-Mazz., a cool-season hexaploid perennial, is an important pasture bunchgrass species used for forage and grassland restoration on the Qinghai-Tibet Plateau and in northwest China. In this study, 27 E. tangutorum accessions from diverse origins in western China were evaluated using AFLP markers in an effort to delve into the genetic relationships among them. The effects of eco-environmental factors and geographical isolation on genetic diversity and population structure were also elucidated.

Results: Based on 554 fragments (509 polymorphic) amplified with 14 primer combinations, the mean values of the marker parameters polymorphic information content, resolving power and marker index were 0.2504, 14.10 and 23.07, respectively, validating the high efficiency and reliability of the selected markers. Genetic dissimilarity index values among accessions ranged from 0.1024 to 0.7137 with a mean of 0.2773. STRUCTURE, UPGMA clustering and PCoA analyses showed that all accessions could be divided into three main clusters; however, these results do not exactly coincide with the geographic groups. We found medium differentiation (FST = 0.162) between the Qinghai-Tibet Plateau (QTP) and Xinjiang (XJC) groups, and high differentiation (FST = 0.188) among the three Bayesian subgroups. A significant correlation (r = 0.312) between genetic and geographical distance was observed by Mantel test at the species level, while only a weak correlation was detected between genetic and environmental distance for all accessions and most geographical groups. In addition, a significant ecological influence of average annual precipitation on genetic distance was revealed in the XJC group and the Bayesian subgroup A.

Conclusion: This study indicates that the AFLP technique is a useful tool to measure genetic diversity in E. tangutorum, showing that geographical and environmental factors (especially precipitation), together, play a crucial role in genetic differentiation patterns. These findings underline the importance of local adaptation in shaping patterns of genetic variability and population structure in E. tangutorum germplasm collected in western China.

Electronic supplementary material: The online version of this article (10.1186/s41065-019-0082-z) contains supplementary material, which is available to authorized users.

Background

Elymus Linn. is the largest and most widely distributed genus in the tribe Triticeae, with 150 species grown in temperate regions of the world [1]. This genus has a close phylogenetic relationship with some of the important cereal crops, such as wheat, barley, rye and triticale [1]. Therefore, it may serve as a valuable natural gene pool of desirable traits for the improvement of these crops [2,3]. Besides, many Elymus species are also used as fodder grasses and for ecological protection [1]. Elymus tangutorum (Nevski) Hand.-Mazz., a perennial hexaploid species with the StYH genome (2n = 6x = 42), together with E. dahuricus Turcz. ex Griseb., E. excelsus Turcz. ex Griseb. and E. ivoroschilowii Probat., constitutes the E. dahuricus complex [4,5]. E. tangutorum differs morphologically from E. dahuricus Turcz.
on the basis of its short, upward awns and its distribution on the Qinghai-Tibet Plateau (QTP), in Xinjiang province of China and in the alpine regions of Central Asia. On the QTP, E. tangutorum is widely used for the restoration of degraded grassland, owing to its high productivity, drought resistance, cold tolerance and adaptability [6]. As one of the more important forage grasses on the QTP, E. tangutorum has been widely studied; most studies have focused on domestication, cultivation, phylogenesis and genome constitution [4,6]. Understanding the genetic diversity of wild plant germplasm and the relationships between ecogeographic and genetic characteristics may provide insights for better utilizing and conserving genetic resources [7]. Investigation of germplasm diversity can be implemented via morphological and molecular means. Previous analyses based on agro-morphological characters and geographical origin indicated that a wide range of phenotypic divergence occurs among Elymus ecotypes and/or cultivars [8,9]. DNA-based markers have been regarded as practical tools with high efficiency and wide genome coverage for illuminating patterns of genetic diversity and phylogenetic relationships in plant germplasm resources, with various advantages over phenotypic traits, such as insensitivity to environmental influences, potentially unlimited numbers, and assayability at any developmental stage. In Elymus species, diverse PCR-based markers have been employed in diversity and phylogeny studies, such as RAPD [10], ISSR [11], SRAP [12], SCoT [9,13] and AFLP [14-16], and have become prevalent because no prior sequence information is required. Among the different marker systems available at present, the AFLP method can explore variation throughout the entire genome, including both coding and non-coding DNA regions, and therefore allows genome-wide variation to be surveyed. Moreover, AFLP offers high reproducibility and identifies multilocus polymorphisms simultaneously in a single assay. These characteristics make AFLPs extremely appropriate for the molecular characterization of germplasm collections. Although previous work has provided preliminary data characterizing genetic diversity between and within two E. tangutorum populations from Tibet province of China by AFLP analyses [4,5], no detailed study of molecular characterization using a larger germplasm collection has been conducted to date. Genetic diversity in natural plant populations is shaped especially by spatio-temporal environmental heterogeneity; that is, genetic differentiation is strongly influenced by isolation by distance or local adaptation [17-19]. Estimating the impact of environmental factors on genetic diversity is indispensable for gaining deeper insight into such evolutionary forces [20-22]. Thus, the combined analysis of molecular markers and eco-geographical data can provide beneficial information for adopting suitable strategies for the utilization and conservation of wild plant germplasm [19,23,24]. Here, AFLP markers were used to analyze, on a regional scale, the genetic diversity and structure of 27 wild E. tangutorum germplasm accessions indigenous to two contrasting climatic zones of western China, namely the Qinghai-Tibet Plateau and Xinjiang province (Fig. 1). The goals of the current study were (1) to evaluate and compare the genetic diversity and population structure of E.
tangutorum germplasm from different areas, and (2) to verify the potential influence of spatial and environmental factors on the patterns of detected population structure.

Plant materials and DNA extraction

Twenty seeds of each accession of E. tangutorum were germinated in vermiculite under identical conditions (25 °C, 300 μmol·m^-2·s^-1, 16-h photoperiod) in a plant growth chamber. Genomic DNA was extracted from bulked young leaves of five plants per accession employing a CTAB procedure [25]. The quality and quantity of the extracted DNA were assayed by 1% agarose gel electrophoresis and a NanoDrop® spectrophotometer, and the DNA was diluted with sterile distilled water to a final concentration of 100 ng/μL.

AFLP analysis

AFLP fingerprinting was carried out following Vos et al. (1995) with minor modifications [26,27], except that the primers were labeled with 6-FAM fluorescent dye at the 5' end. Briefly, 300 ng of total genomic DNA was digested with the two restriction enzymes EcoRI and MseI, followed by ligation of the appropriate adapters to the restricted DNA fragments. After the ligation reactions, pre-selective amplification was performed with (EcoRI + 1)/(MseI + 1) primers. The products of this step were then subjected to selective amplification using (EcoRI + 3)/(MseI + 3) selective primers. Initially, 48 selective primer combinations were screened on six accessions, and 14 pairs were chosen based on the total number of fragments detected, fragment robustness and polymorphism (Additional file 1: Table S1). Automated separation and detection were carried out on an ABI 3730xl sequencer (Applied Biosystems) with GeneScan 500 ROX as the internal size standard. GeneMarker 2.2 (SoftGenetics) was used to score fragments of 60-500 bp with a peak height above or equal to 100 relative fluorescence units (RFUs). The AFLP data from all primers were then transformed into a binary matrix, scored as "1" (presence of a fragment) or "0" (absence of a fragment).

Data analysis

The discriminatory power of each AFLP primer combination was evaluated by calculating the polymorphic information content (PIC), marker index (MI) and resolving power (RP) with polymorphic bands only. The PIC value of each primer combination was calculated as PIC_i = 2f_i(1 - f_i), where PIC_i is the PIC of the ith marker, f_i is the frequency of the present fragments and 1 - f_i is the frequency of the absent fragments [28]. The marker index (MI) was calculated as MI = PIC × EMR, where EMR (effective multiplex ratio) is defined as the product of the proportion of polymorphic loci (PP) and the number of polymorphic loci (NPF), namely the total number of polymorphic fragments per primer [29]. The resolving power (RP) of each primer was estimated as RP = ΣIb, where Ib (fragment informativeness) is defined as Ib = 1 - 2 × |0.5 - p|, with p the proportion of accessions in which the fragment is present [30]. To understand the genetic relationships among all 27 accessions, pairwise genetic distances (GD) were calculated on the basis of Dice's coefficient using the NTSYS-pc v2.10 software. To obtain a more detailed view of the distribution of genetic variation within and between the different geographical groups (XJ vs QTP), the mean GD among accessions belonging to the same and/or different groups was also calculated. Based on the GD matrix, principal coordinate analysis (PCoA) and Unweighted Pair Group Method with Arithmetic Mean (UPGMA) clustering analysis were performed. The robustness of the dendrograms was assessed with 1000 bootstrap iterations using the Freetree software [31].
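The marker statistics defined above are straightforward to compute directly from the binary AFLP matrix. The sketch below assumes the 2f(1 - f) form of PIC for dominant markers given in the text; the function name is hypothetical and not part of the authors' pipeline.

```python
import numpy as np


def marker_stats(band_matrix: np.ndarray):
    """band_matrix: accessions x fragments 0/1 scores for one primer combination.
    Returns (PIC, MI, RP) as defined in the text, using polymorphic bands only."""
    f = band_matrix.mean(axis=0)                         # frequency of each present fragment
    poly = (f > 0) & (f < 1)                             # polymorphic fragments only
    pic = float(np.mean(2 * f[poly] * (1 - f[poly])))    # mean PIC_i = 2 f_i (1 - f_i)
    npf = int(poly.sum())                                # number of polymorphic fragments
    pp = npf / band_matrix.shape[1]                      # proportion of polymorphic loci
    mi = pic * pp * npf                                  # MI = PIC x EMR, EMR = PP x NPF
    rp = float(np.sum(1 - 2 * np.abs(0.5 - f[poly])))    # RP = sum of Ib over fragments
    return pic, mi, rp
```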
The Bayesian model-based clustering method was applied to explore the population structure within the studied accessions or genetically diverse populations (or groups) using the STRUCTURE software v.2.3.4 (http://web.stanford.edu/group/pritchardlab/structure.html). Following the suggestion by Forsberg et al. [32] for outcrossing species, the admixture model was adopted and the heterozygous loci were treated as missing data. Population structure analysis was performed on the AFLP data of all 27 accessions without any prior classification, based on the Bayesian model-based method. Evanno's ad hoc ΔK statistic was used to determine the most probable number of subgroups [33]. We set the number of clusters (K) from 1 to 5 (10 runs for each K) for accurate assignment of accessions. For each run, a burn-in period of 50,000 iterations and a run length of 500,000 MCMC replications were implemented. Considering that the indication of the most likely number of groups, namely LnP(D), usually plateaus or increases slightly after the "right K" is reached [25], we used the Structure Harvester online software (http://taylor0.biology.ucla.edu/struct_harvest/) to detect the optimum value of K based on the ΔK method [33]. The output files for the selected K based on ten independent runs were then processed in CLUMPP [34] using the LargeKGreedy algorithm and the G' pairwise similarity statistic. The results were exported to DISTRUCT [35] for better graphical presentation. We also executed spatial population analysis using TESS 2.3.1 [36], which has been reported to achieve a higher percentage of correct inference than other Bayesian clustering algorithms under certain conditions [37]. The geographical coordinates of each accession were included in the AFLP dataset as prior information, and the MCMC algorithm was run under the BYM admixture model with 50,000 sweeps and 10,000 burn-in sweeps. Fifty independent runs were performed at each value of K (2-5). The value of K with the highest likelihood was determined according to the deviance information criterion (DIC), and the runs with the 20% lowest DIC values were processed using CLUMPP and DISTRUCT, as in the STRUCTURE analysis. The TESS output was further visualized from a geographical perspective using an R script (POPSutilities) written by Flora Jay [38]. To validate the genetic variation among and within inferred subgroups according to geographic origins or STRUCTURE results, an analysis of molecular variance (AMOVA) based on the PhiPT value (an analogue of FST) was performed using GenAlEx 6.5. The observed (Na) and effective (Ne) numbers of alleles, expected heterozygosity (HE), Shannon's information index (I) [39], and Nei's genetic distance among geographic groups were also calculated with GenAlEx. Based on the geographical location of each accession, the corresponding environmental data (precipitation and temperature parameters) were obtained with DIVA-GIS (version 5.2.0.2, http://www.diva-gis.org/). The geographic and environmental distance matrices were calculated based on the Euclidean distance between pairs of collection sites using NTSYS-pc v2.10. The correlation between geographical and genetic distances (Nei-Li coefficients) between accessions was measured by Mantel test with 9999 permutations using the IBD (Isolation by Distance) software v1.53 [40]. A similar analysis was conducted to compare the genetic distance and environmental distance (altitude, precipitation and temperature) for accessions in the different geographic groups.
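For readers without access to the IBD software, the Mantel test with 9999 permutations described above can be reproduced with a short permutation routine. This is an illustrative equivalent under assumptions (a one-tailed test with Pearson correlation on the upper triangle), not the software actually used in the study.

```python
import numpy as np


def mantel(d1: np.ndarray, d2: np.ndarray, n_perm: int = 9999, seed: int = 0):
    """Permutation Mantel test between two square distance matrices.
    Returns (observed r, one-tailed p-value)."""
    idx = np.triu_indices_from(d1, k=1)          # compare upper triangles only
    x, y = d1[idx], d2[idx]
    r_obs = np.corrcoef(x, y)[0, 1]
    rng = np.random.default_rng(seed)
    count = 0
    n = d1.shape[0]
    for _ in range(n_perm):
        p = rng.permutation(n)                   # jointly permute rows/cols of one matrix
        r = np.corrcoef(d1[np.ix_(p, p)][idx], y)[0, 1]
        if r >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)
```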
Marker informativeness

The AFLP fingerprinting was performed on 27 wild E. tangutorum accessions using 14 primer combinations. A total of 554 scorable fragments were detected, with a mean of 40 per assay unit, of which 509 (91.7%) were polymorphic and 70 (12.6%) were unique (Tables 1 and 2).

Population structure analysis

The results of population structure analysis using the STRUCTURE software showed a clear peak of the ΔK values when K was set at 2 (Fig. 2), which indicates that the sampled accessions belonged to two inferred genome fractions (Fig. 3). Samples with an inferred genome fraction value (membership proportion) of 0.8 or more for a subgroup are considered pure, and those below 0.80 admixed [32]. Considering each individual's membership proportion (Qi), all 27 accessions were assembled into three Bayesian subgroups, designated subgroup A (pure, blue fraction ≥ 0.8), subgroup B (admixed, 0.2 ≤ blue fraction ≤ 0.8) and subgroup C (red fraction ≥ 0.2). The three subgroups included 10, 8 and 9 accessions, respectively. Geographically, most of the accessions from Xinjiang (XJC) (10, 71.43%) belonged to subgroup A, and the rest resided in subgroups B (Et20) and C (Et04 and Et13). Six accessions from Sichuan (SCC) were part of subgroup B, and the five remaining accessions were part of subgroup C. Of the three accessions from Gansu (GSC), Et18 fell in subgroup B and the remaining two belonged to subgroup C. The Bayesian clustering results from TESS 2.3.1 had the lowest DIC values (highest probability) at Kmax = 4, indicating that the sampled accessions belonged to four inferred genome fractions (Additional file 2: Figure S1). This pattern is slightly different from the result of the STRUCTURE runs, in which the optimal K = 2. However, considering the proportion of each individual's membership (Qi), the Bayesian subgroups based on TESS were almost exactly equivalent to the STRUCTURE result, both of which have two dominant ancestry coefficients (Fig. 3). On the other hand, the geographic map displaying the Q matrix spatially with the R script [38] also exhibited a general trend that the samples from XJC are obviously different from the samples from QTP, although the number of collection sites was limited and the spatial mapping for each accession is modeled over the whole of Asia rather than just at the actual sampling sites (Additional file 3: Figure S2).

Cluster and principal coordinate analysis

Based on the 509 polymorphic fragments, estimates of the genetic distances (Dice's coefficient) ranged from 0.1024 to 0.7137 with an average of 0.2773, implying a wide range of genetic variation among the accessions. These results were also confirmed by the STRUCTURE analysis, in which most Xinjiang accessions had a pure type of genome fraction. The genetic distance matrix was used to build a hierarchical dendrogram of genetic relationships based on the UPGMA clustering method (Fig. 3), in which three major clades were clearly categorized, with further internal subgroupings. These three clades were identified at the 0.259 dissimilarity level. The grouping of most accessions was congruent with their respective geographical origins. However, the accessions Et20 and Et13 from Xinjiang did not separate clearly from the QTP accessions. The bootstrap support of all major branches was higher than 50%, and a majority ranged from 70 to 99%, revealing the reliability of the data and the clustering results. Despite some differences, there was coherence between the Bayesian method and the hierarchical clustering analysis (Fig. 3).
The UPGMA dendrogram separated the three Bayesian subgroups well, except for the accessions Et27, Et19 and Et23, which were revealed as outliers. A similar clustering pattern of the studied germplasm was demonstrated by the principal coordinate analysis (PCoA) (Fig. 4), illustrating that the scatter-plot depiction of the PCoA is comparable to both the hierarchical and the Bayesian cluster analyses. That is, most of the studied accessions from the same geographical groups and/or collection sites were placed in the same cluster. The PCoA was performed on Nei-Li distances and confirmed the division of the 27 accessions into three major clusters: Clusters I, II and III. The first two principal coordinates together represented 69.1% of the total molecular variation (60.25 and 8.65%, respectively). The first axis (PC1), which contributed 60.25% of the variance, separated most of the accessions that had been assigned to the three UPGMA clusters. The second axis (PC2), which explained 8.65% of the variation, further distinguished the accessions of Clusters I and III.

Genetic diversity estimates

Geographically, Gansu and Sichuan Provinces are both part of the QTP region. Hence, we merged the GSC and SCC groups into a new group named the QTP group. The total gene diversity across the different ecogeographical regions was estimated based on the 509 polymorphic fragments (Additional file 4: Table S2). The percentage of polymorphic loci (PP) was higher in accessions from the QTP (73.29%) than in those from Xinjiang (70.76%). The Xinjiang group (XJC) showed high Nei's gene diversity (He = 0.24) and Shannon's diversity (I = 0.36), whereas the QTP group displayed moderate levels of diversity (He = 0.21 and I = 0.32). For the three Bayesian subgroups, the PP value was highest among accessions from subgroup A (67.69%), followed by subgroup C (65.16%). The markers were least informative for assessing the genetic variability of accessions from subgroup B, with PP restricted to 53.25%. Similarly, subgroup A showed the highest intra-group diversity (I = 0.36 and He = 0.24).

AMOVA analysis

Two independent AMOVA analyses were implemented, relying on geographic origins and on the STRUCTURE clustering results, and highly significant variation (P < 0.01) was detected in both (Table 3). For the regional groups, 16.24% of the total variation was due to differences among groups and 83.76% to divergence within groups. The pairwise FST values between the regional groups, tested with GenAlEx 6.5, were significant (P ≤ 0.01). The total FST = 0.162 indicated that the genetic variability between the QTP and XJC groups is moderate. The three subgroups inferred from STRUCTURE showed 81 and 19% of the total variance within and among groups, respectively. This means that the total FST was 0.188 (P ≤ 0.01) and that the three derived subgroups are highly structured in the Bayesian inference panel. Significant differences in FST were observed for all pairwise comparisons of the three subgroups inferred from the STRUCTURE analysis (Table 4). The maximum pairwise FST (0.274), found between subgroup A and subgroup C, was consistent with the great difference in their inferred genome fractions (Table 4). In addition, the AMOVA analysis indicates that the level of gene flow, either between geographic areas or between Bayesian clusters, was low and insufficient to homogenize the E. tangutorum collections of western China.
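For illustration, the per-group diversity estimates reported above can be approximated directly from the band matrix. The sketch below uses simple band-frequency formulas; GenAlEx applies additional corrections for dominant markers, so treat this as a rough approximation, with all object names hypothetical.

```r
# Approximate PP, Shannon's I and He per group from a 0/1 AFLP matrix.
# Simplified phenotype-frequency formulas; GenAlEx's dominant-marker
# corrections are not reproduced here.
diversity_stats <- function(band_matrix) {
  p    <- colMeans(band_matrix)          # band frequency per locus
  poly <- p > 0 & p < 1                  # polymorphic loci
  q    <- p[poly]
  c(PP = mean(poly),                                 # % polymorphic loci
    I  = mean(-(q * log(q) + (1 - q) * log(1 - q))), # Shannon's index
    He = mean(2 * q * (1 - q)))                      # expected heterozygosity
}

# geo_group: factor assigning each accession to "XJC" or "QTP"
t(sapply(split(as.data.frame(aflp), geo_group),
         function(g) diversity_stats(as.matrix(g))))
```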
Mantel test

To verify which eco-geographical factors influence the genetic structuring, Mantel's test was used to estimate matrix correlations between genetic distance and geographical, elevation, average annual temperature, and average annual precipitation distances. A weak but significant pattern of isolation by geographic distance was detected across all accessions (r = 0.312; P = 0.001). A similar result was found in each group except GSC (Table 5). This pattern derives largely from the measurable differentiation among geographic groups seen in the STRUCTURE and UPGMA clustering analyses. Furthermore, there was only a weak or non-significant correlation between the genetic and environmental distance matrices for all accessions. However, a significantly positive correlation was demonstrated between genetic distance and average annual precipitation distance in XJC (r = 0.685; P = 0.01) and subgroup A (r = 0.594; P = 0.05) (Table 5). In addition, relatively high Mantel r-values were observed between environmental factors and genetic distance for the Gansu accessions, all of which were non-significant owing to the small group size (only 3 accessions).

AFLP polymorphism and discriminating capacity of the assays

The practical utility of molecular markers in plant germplasm characterization depends on their power to discriminate the genotypically different accessions analyzed. In the present paper, the efficiency of AFLPs was assessed by recording different marker parameters concerning polymorphism: PIC (polymorphic information content), MI (marker index) and RP (resolving power) [14]. The primer combinations used in this study successfully established high levels of genetic distinctness among the studied E. tangutorum accessions using AFLP fingerprints. The level of polymorphism obtained here with AFLP markers is also higher than that reported for other Elymus germplasm in western China with other types of dominant markers, such as E. sibiricus with SCoT (89%) [13] and E. nutans with ISSR (91.4%) [41]. Overall, the proportion of polymorphic fragments largely depends on the species, the ecogeographic area of origin, the degree of differentiation of the sampled accessions, the number of primers and even the type of molecular marker used [42,43]. Moreover, as all primer combinations produced unique fragments (on average, 5 per primer), those primers could be used to differentiate particular accessions [44]. The polymorphic information content (PIC) value of 0.25 in the current study demonstrated good marker informativeness [45,46]. (Table footnote: the P values for estimated FST were calculated using 10,000 permutations; *P < 0.05; **P < 0.01.) These values also agree with AFLP studies of other forage grasses such as Dactylis glomerata [47] and Phalaris aquatica [48]. The marker index (MI) and resolving power (RP), two important alternative parameters for selecting informative markers in diversity studies [49,50], also demonstrated the high efficiency of the AFLP markers in unveiling polymorphisms in E. tangutorum germplasm. The very strong and positive correlation observed among the PIC, RP and MI values in the present work indicates that any of the three parameters can be used to screen for informative primer combinations [38], although the relationship between these marker parameters and the informativeness of molecular markers for identifying germplasm is not fully clear [46]. The primer combinations E-ATG/M-CTC and E-AAC/M-GAC should be the most informative, having the highest PIC, RP and MI values.
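The marker parameters discussed above can be computed per primer combination with commonly cited formulas for dominant markers: PIC_i = 2f_i(1 − f_i), band informativeness I_b = 1 − 2|0.5 − f| summed into RP, and MI taken here as mean PIC times the number of polymorphic bands. Definitions of MI vary across papers, so this sketch is an approximation, not the authors' exact procedure.

```r
# Sketch of marker-informativeness parameters for one primer combination;
# `bands` is a 0/1 matrix of the fragments scored by that primer pair.
marker_stats <- function(bands) {
  f      <- colMeans(bands)               # band frequency per fragment
  pic    <- mean(2 * f * (1 - f))         # average PIC (dominant markers)
  rp     <- sum(1 - 2 * abs(0.5 - f))     # resolving power (Prevost & Wilkinson)
  n_poly <- sum(f > 0 & f < 1)            # number of polymorphic fragments
  c(PIC = pic, RP = rp, MI = pic * n_poly, polymorphic = n_poly)
}

# primer_list: named list of per-primer band matrices (hypothetical)
t(sapply(primer_list, marker_stats))
```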
Therefore, they are recommended for use in germplasm diversity analyses of other E. tangutorum collections. The Shannon index is a useful alternative measure of diversity because it does not require allele frequencies to be estimated under Hardy-Weinberg equilibrium. The Shannon diversity detected in this study (0.34) was higher than that of E. sibiricus germplasm from western China assessed with SCoT (I = 0.285) and EST-SSR (I = 0.237) markers [13,51], also indicating a high level of genetic diversity in the accessions of E. tangutorum (Table 2).

Clustering pattern and genetic structure

It is attractive for breeders and germplasm curators to know the breadth of genetic diversity within a plant species of great importance [52]. The spatial genetic diversity analysis was conducted to examine the relationship between Elymus tangutorum accessions and biogeographical patterns using different clustering approaches. The UPGMA tree, the PCoA scatter plot, and the STRUCTURE analyses revealed a similar or identical membership among the 27 wild E. tangutorum accessions, with a general grouping pattern comprising three genetically distinct and consistent groups. When the accessions are grouped according to the clusters distinguished (Fig. 3), a greater range of Nei-Li distances is observed, comparable to previous studies of other Elymus germplasm with similar collection areas [13,53,54]. The first cluster derived from the UPGMA tree consisted of ten accessions from Xinjiang Province in the northwest of China, which has a typical temperate continental climate. Meanwhile, Cluster I had a distinctly pure membership based on the Bayesian clustering. The second cluster was primarily formed by accessions from the southeast of the Qinghai-Tibet Plateau (QTP), a region far from Xinjiang with a typical plateau mountain climate. Cluster II corresponded well to the admixed subgroup identified by the STRUCTURE analysis. The third cluster also chiefly comprised accessions from the southeast of the QTP, with another distinctly pure membership. In short, the above results were in agreement with the geographical proximity of the studied accessions and their genetic relationships. Exceptions to this pattern of clustering were three accessions (Et04, Et13 and Et20) from Xinjiang, which were grouped with QTP accessions in Clades II and III, respectively. Since E. tangutorum is used in natural grassland restoration projects [6], it is possible that these three accessions are not native to their collection sites and might have been introduced through domestic seed trade. By contrasting the UPGMA, PCoA and STRUCTURE figures, two accessions (Et19 and Et27) assigned to Bayesian subgroup B fall into outgroups of the UPGMA analysis. A possible reason for such discrepancies is that STRUCTURE assumes the loci within a population to be under Hardy-Weinberg equilibrium (HWE), an assumption that is sensitive to many factors such as non-random mating, genetic drift, mutation, gene flow and selection [55]. Nonetheless, across the different clustering analyses, separate clusters were largely obtained in accordance with the respective geographic origins of the accessions. Therefore, the cluster pattern in the present work might be due, in part, to differences in the adaptation of ecotypes to the various selective forces of local conditions [56]. These results are in agreement with earlier studies showing that geographical separation of other Elymus species germplasm did not always result in greater genetic differentiation [12,57].
Indeed, genetic diversity and population structure are determined by the joint effects of many factors, including geographic distribution, life cycle, mating system, selection and adaptation [58]. Considering the genetic differences among the studied regions, the accessions from Xinjiang Province showed higher diversity (He = 0.24, I = 0.36) than those from the QTP (He = 0.21, I = 0.32). With regard to the Bayesian subgroups, higher diversity was found among accessions in subgroup A, which corresponds to the accessions from Xinjiang Province. Based on a membership probability threshold of 0.80, most accessions (66.67%), namely those in subgroups A and C, appeared to have a pure population ancestry in their genome component, which might result from limited gene flow among wild accessions with distant geographical origins [59]. This explains why we observed high diversity within geographical groups. However, more than half of the accessions from the QTP were composed of an admixture of genome fractions. This could be supported by the fact that E. tangutorum has frequently been used for grassland restoration on the Qinghai-Tibetan Plateau, as mentioned above [6,60]. Therefore, human activities may frequently lead to natural crossing via wind pollination between adjacent stands. The results of the AMOVA analysis showed that genetic variation among regions accounted for only 16.2% of the total variability in the germplasm collection, whereas within-region variation accounted for 83.8%. Similarly, only 19% of the variation was detected among the Bayesian groups. These results on variation among accessions within geographical groups are consistent with previous studies of E. nutans and E. sibiricus collections with similar origins [12,57]. However, our results are more extreme. This could be a consequence of high gene flow resulting from grassland restoration projects, or of the limited number of accessions studied for each region. The low among-group variance components indicate that the genetic background attributable to geographical origin contributes only slightly to the observed genetic diversity. Besides, the high gene flow and differentiation within E. tangutorum, supported by the clustering and AMOVA results, was probably also driven by its mating system, one of the important life-history features that strongly influence genetic diversity and population structure [61]. As reported in a genetic diversity study of the Elymus dahuricus complex [5], E. tangutorum is probably a predominantly self-pollinating species, with an FST value of 0.9. In their review, Hamrick and Godt [62] claimed that short-lived, self-pollinating species have most of their genetic diversity partitioned among populations rather than within populations. Similarly, Nybom [58] assumed that short-lived self-pollinating species allocate most of their genetic variability among rather than within populations. Each E. tangutorum accession in the present investigation can be treated as a population consisting of seeds of neighbouring individual plants collected at locations various distances apart [63]. This situation may limit gene flow between accessions through geographic isolation barriers (i.e. mountains, rivers); thus, the variation will maintain its characteristics within each accession. Although within-accession variability was not determined by individual-based analysis, we did observe a high degree of differentiation within this species according to the UPGMA and PCoA analyses based on bulked DNA samples. Hence it is not surprising that the vast proportion of the AFLP variation detected in E.
tangutorum was present within rather than among groups, since each large-scale geographical group contained numerous accessions with distinct genetic architectures. This is supported by the observation that the overall average genetic distance among all accessions (0.277) was higher than the average distance (0.261) between accessions from the two main geographical groups (QTP and Xinjiang).

Correlation of genetic diversity and environmental variables

Among the evolutionary forces, decisive factors influencing the maintenance of genetic variability are the spatial variation of the environment and the ecological differences between habitats [24]. Remarkably, the Mantel test between genetic distance and geographical distance showed only weak correlation for all E. tangutorum accessions and for most groups (Table 5), and similar results were reported in previous studies of Elymus species [2,57]. Within accessions from the QTP group, we found a lower r-value for geographical distance (r = 0.131) than for the XJC group (r = 0.313); the former was not significantly different from zero and might be related to the highly heterogeneous topography of the QTP collection area. Complex topographic features and climate work together to form various niches that influence genetic variability [64]. Thus, the spatial distribution cannot be accounted for solely by a simple isolation-by-distance model and requires additional factors that influence the observed genetic population structure, such as environmental factors [24]. In the Mantel test between genetic distance and environmental distance, we found a significantly positive correlation between genetic distance and average annual precipitation (r = 0.685; P = 0.01 and r = 0.594; P = 0.05) in the XJC group and Bayesian subgroup A, respectively. A possible explanation for these results is that in Xinjiang, with its arid and/or semi-arid temperate climate, low precipitation hinders seed germination and leads to decreased genetic diversity in plants [17]. In a diversity study of E. nutans germplasm from western China, environmental divergence such as elevation was also related to genetic distance (r = 0.695; P = 0.01) [41]. On the other hand, the relationship between environmental factors and genetic distance is difficult to resolve fully in complex environments [18]. This could be responsible for the weak and non-significant correlation between genetic distance and environmental distance found in the QTP group. Besides, although the accessions of the GSC group had high Mantel r-values between genetic distance and environmental distance (Table 5), except for average annual temperature (r = 0.309), the correlation between genetic diversity and environmental factors remained unresolved owing to the small group size.

Conclusions

This study indicates that AFLP markers are a powerful tool for measuring genetic diversity in E. tangutorum, and that geographical and environmental factors (especially precipitation) together play a crucial role in shaping genetic differentiation patterns. These findings highlight the importance of local adaptation in shaping the patterns of genetic structure inferred in E. tangutorum accessions from western China. Therefore, collecting and assessing E. tangutorum germplasm from major geographic regions and from special ecogeographic environments such as the Qinghai-Tibet Plateau will help to expand the genetic base and to sample the full extent of the available variation.
Candidate prioritization for low-abundant differentially expressed proteins in 2D-DIGE datasets

Background: Two-dimensional differential gel electrophoresis (2D-DIGE) provides a powerful technique to separate proteins by their isoelectric point and apparent molecular mass and to quantify changes in protein expression. Abundantly present proteins in spots can be identified using mass spectrometry-based approaches. However, identification is often not possible for low-abundant proteins.

Results: We present a novel computational approach to prioritize candidate proteins for unidentified spots. Our approach exploits noisy information on the isoelectric point and apparent molecular mass of a protein spot, in combination with functional similarities of candidate proteins to already identified proteins, to select and rank candidates. We evaluated our method on a 2D-DIGE dataset comparing protein expression in uninfected and HIV-1 infected T-cells. Using leave-one-out cross-validation, we show that the true-positive rate for the top-5 ranked proteins is 43.8%.

Conclusions: Our approach shows good performance on a 2D-DIGE dataset comparing protein expression in uninfected and HIV-1 infected T-cells. We expect our method to be highly useful in (re-)mining other 2D-DIGE experiments in which especially the low-abundant protein spots remain to be identified.

Electronic supplementary material: The online version of this article (doi:10.1186/s12859-015-0455-x) contains supplementary material, which is available to authorized users.

Background

Identification of proteins and their posttranslational modifications, and quantification of their abundance, is essential for understanding cellular processes, such as the cellular response to virus infection [1-3]. A frequently used technique for measuring protein abundance is 2D gel electrophoresis (2DE). In 2DE a complex protein mixture is separated both by isoelectric point (pI), using isoelectric focusing, and by apparent molecular mass (Mw). Based on these two properties, proteins migrate to different locations on a gel and their abundance can be estimated from staining or, upon prior labeling, from the amount of fluorescence. 2DE is very often combined with mass spectrometry (MS) to identify proteins excised from spots on the gel. To decrease gel-to-gel variation and increase sensitivity, two-dimensional differential gel electrophoresis (2D-DIGE) was developed. 2D-DIGE enables quantification of changes in protein abundance by fluorescently labeling samples with Cy3 or Cy5 and running them on the same gel. Quantification is improved even further by repeating experiments (using biological replicates) and by using a Cy2-labeled internal standard consisting of a pool of equal amounts of all samples investigated in the experiment [4]. However, reliable identification of low-abundant proteins after 2D-DIGE is still challenging. Crucial in this respect is that fluorescent labeling with Cy3 or Cy5 is over 40-fold more sensitive than the most sensitive silver stain [5]. As a consequence, low-abundant differentially expressed proteins do not become available for follow-up mass-spectrometric analysis upon colloidal Coomassie restaining of the gel. Indeed, often more than half of the differentially expressed protein spots cannot be identified using peptide mass fingerprinting, in combination with either matrix-assisted laser-desorption time-of-flight (MALDI-TOF) or liquid chromatography (LC)-MS/MS analysis, due to the scarcity of the protein they contain [6,7].
Recently, several computational approaches have been proposed that enhance protein identification by exploiting information about the biological context relevant to the performed experiment [8]. Gwinner et al. [9] developed a method to generate a list of candidate proteins that might have remained undetected in a 2D-DIGE experiment. Their approach involves the construction of a Steiner tree on a protein-protein interaction network, which connects already identified, differentially expressed proteins. The nodes of the Steiner tree form a set of suitable candidate proteins that can be validated using, for example, Western blotting. Protein differences in the low-abundant range are also difficult to detect using the newest (gel-free) shotgun LC-MS/MS techniques. All proteomic MS analyses are hampered by well-known dynamic range problems, in which the most abundant protein around 'sets' the limit of detection for the experiment. Network-based approaches have therefore also been proposed to (re-)mine MS/MS experiments in order to increase protein identification. Ramakrishnan et al. [10] used a diffusion algorithm to propagate the evidence from an MS experiment along the edges of a yeast gene functional network. Proteins that did not pass the confidence threshold for identification can be rescued if proteins in their network neighbourhood were reliably identified. Li and colleagues [11] used protein interaction networks to search for cliques, that is, completely connected subnetworks. A low-confidence protein is rescued if it is a member of a clique that is enriched for reliably identified proteins. Whereas these two approaches do not use quantitative information for protein identification, such information can also be exploited. SNIPE [12] uses a network-based approach in which the spectral counts of a protein and its direct neighbours in a functional network are combined. In a case-control experiment the resulting counts can then be used to highlight proteins that are likely to be active but not detectable in a shotgun proteomic experiment. In this paper, we present a novel computational approach to prioritize candidate proteins for unidentified low-abundant (non-stainable) spots in 2D-DIGE experiments. A limitation of the Steiner tree approach of Gwinner et al. [9] mentioned above is that additional information available for each unidentified spot, namely the pI and Mw of the protein(s) that migrated there, is completely ignored. Our prioritization approach specifically exploits this information in order to propose a list of candidate proteins for each unidentified spot. Functional similarities of candidate proteins to already identified proteins are then used to rank candidates. We applied our prioritization approach to protein spots differentially expressed at the peak of HIV-1 infection of CD4+ T-cells [6]. The procedure developed here shows promise for (re-)mining 2D-DIGE datasets and for obtaining insights regarding expression differences of low-abundant proteins that cannot (yet) be found using alternative methods.

2D-DIGE data

The dataset used in this study was generated in a 2D-DIGE proteomic experiment comparing uninfected and HIV-1 infected PM1 T-cells [6]. First-dimension isoelectric focusing (IEF) of the samples was performed using 24-cm precast immobilized pH gradient (IPG) strips (pH 3 to 11, nonlinear [NL]; GE Healthcare). Next, second-dimension sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) was performed.
After image acquisition and analysis, 296 significantly differentially expressed protein spots were detected at 7-10 days post infection. By performing peptide mass fingerprinting (PMF) with a MALDI-TOF mass spectrometer, 93 unique proteins were identified from 108 spots. UniProt IDs of the identified proteins were updated and three spots corresponding to protein fragments were left out, leading to 105 spots corresponding to 92 unique proteins. See Additional file 1 for the complete list of PMF-identified proteins and their characteristics. The remaining 188 spots did not contain enough protein to allow identification by PMF.

Candidate protein prioritization

The objective of our method is to identify the most likely protein candidates for low-abundant differentially expressed spots. Our method uses proteins identified by PMF and their (x, y) coordinates on the gel to prioritize candidate proteins for unidentified spots. In this section we present the different steps of our prioritization approach (Figure 1).

Figure 1. Prioritization of candidate proteins based on pI and Mw. Step 1: pI and Mw (Da) of the mature forms of the proteins identified by PMF are determined using the ExPASy tool "Compute pI/Mw" [13]. Step 2: The (x, y) coordinates of the identified spots and their corresponding pI and Mw (on log10 scale) are used as training data for fitting two cubic smoothing splines. Step 3: For an unidentified test spot u, a candidate list of proteins is generated using the ExPASy tool TagIdent [14] by specifying ranges ε and δ (%) around the pI and Mw predicted by the smoothing splines, respectively. Step 4: Proteins in the candidate list are ranked by calculating their similarities with the PMF-identified 'seed' proteins using STRING association scores. Step 5 (optional): The ranked candidate list can be further filtered using presence (black) and absence (white) calls from the Gene Expression Barcode 3.0 [15]. A protein is excluded from the ranked list if the corresponding gene is expressed on none of the selected microarrays.

Calculation of pI and Mw. In most cases the specific isoform detected on a gel represents the most abundant, mature protein form. We computed the theoretical pI and average Mw for the mature forms of the PMF-identified proteins using "Compute pI/Mw" [16], given their UniProt protein accession numbers. We added 42 Da to the Mw of those proteins for which N-terminal acetylation was detected by mass-spectrometric analysis of the digested protein spot.

Fit calibration curves. Both for first-dimension IEF using IPG strips and for SDS-PAGE, standard protocols generate gels with well-characterized profiles. IPG strips are available that show either linear or smooth non-linear profiles across a specified pH range. SDS-PAGE separation is characterized by an approximately linear relationship between the logarithm of Mw and the migration distance. In most cases the extremes of the pI and Mw ranges are less well-defined and show smooth non-linear gradients. We therefore fitted two cubic smoothing splines for pI and Mw (on log10 scale), respectively, using the R function smooth.spline. The pI fit is estimated from the x-coordinates of the identified spots and the corresponding theoretical pI as determined in the previous step. The Mw fit is estimated from the y-coordinates of the identified spots and the corresponding average Mw (on log10 scale). Optimal values for the smoothing parameter are determined using generalized leave-one-out cross-validation.
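A minimal sketch of this calibration step follows, assuming a data frame spots with columns x, y (gel coordinates), pI and Mw for the PMF-identified spots; all names are illustrative. In smooth.spline, cv = FALSE selects generalized cross-validation of the smoothing parameter, as described above.

```r
# Step 2 sketch: two cubic smoothing splines as calibration curves.
fit_pI <- smooth.spline(spots$x, spots$pI,        cv = FALSE)  # GCV
fit_Mw <- smooth.spline(spots$y, log10(spots$Mw), cv = FALSE)  # Mw on log10 scale

# Predicted pI and Mw for an unidentified spot at gel coordinates (xu, yu)
pred_pI <- predict(fit_pI, xu)$y
pred_Mw <- 10^predict(fit_Mw, yu)$y
```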
Generation of candidate list. We use the two calibration curves to predict the pI and Mw of unidentified protein spots from their (x, y) coordinates. From the estimated pI and Mw, a list of candidate proteins is then generated using TagIdent [16] by specifying the pI range (ε) and Mw range (δ) within which the search has to take place. Candidate proteins are retrieved from all human proteins contained in UniProtKB/Swiss-Prot.

Prioritization of candidate proteins. Candidate lists generated in the previous step often contain hundreds of proteins. We prioritize candidate proteins based on their similarity to the set of PMF-identified 'seed' proteins P, using the principle of guilt by association [17]. In our prioritization approach we use the functional protein-protein associations provided by the Search Tool for the Retrieval of Interacting Genes (STRING) database (version 9.1) [18]. For a given pair of proteins (p, q), STRING integrates evidence from multiple sources, such as co-occurrence in pathways, physical protein-protein interaction, and co-occurrence in the abstracts of scientific reports, and provides a probabilistic score 0 ≤ S_{p,q} < 1 for the strength of association. For each candidate protein q, we determine an overall score by combining its association scores with the PMF-identified proteins p ∈ P as follows:

score(q) = 1 − ∏_{p ∈ P} (1 − S_{p,q}),

and then rank the candidate proteins according to their scores. This corresponds to a scoring model in which the contributions of the individual proteins are assumed to be independent, implying that the probability of association of protein q with at least one protein in P can be written as 1 − ∏_{p ∈ P} (1 − S_{p,q}). The STRING protein alias file was used to map UniProt accession numbers to Ensembl protein IDs. Results were compared with those obtained using the Endeavour prioritization software [19,20]. Endeavour also ranks a given candidate based on its similarity to a training set. However, in this case ranks are determined for each data source separately and then fused into a global ranking using order statistics. Since Endeavour is gene-based, we first mapped UniProt accession numbers to Ensembl gene IDs using the Bioconductor package biomaRt. Endeavour was used in batch mode.

Gene expression-based filtering. As an optional step we filter ranked candidate lists via the Gene Expression Barcode 3.0 [15], which dichotomizes gene expression data into expressed and unexpressed genes on a per-sample basis. For this purpose we selected a microarray experiment with a setup similar to the 2D-DIGE experiment, comparing CD4+ T-cells of 11 HIV+ individuals and 9 HIV− control individuals (GEO accession number: GSE9927, platform: Affymetrix Human Genome U133 Plus 2.0). We used the functions frma and barcode from the Bioconductor package frma to determine the gene expression barcode for this dataset. Probeset identifiers were mapped to UniProt accession numbers using the Bioconductor packages biomaRt and hgu133plus2.db. A protein was excluded from the candidate list if and only if all corresponding probesets were expressed on none of the selected microarrays. Proteins without a corresponding probeset identifier were not excluded.
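The scoring step (Step 4) described above reduces to one line once the association scores are arranged in a matrix. A minimal sketch, assuming S is a numeric matrix with one row per candidate and one column per seed protein, and absent associations encoded as 0:

```r
# score(q) = 1 - prod_{p in P} (1 - S[q, p]): under the independence
# assumption, the probability that candidate q is associated with at
# least one PMF-identified seed protein.
score_candidates <- function(S) {
  scores <- 1 - apply(1 - S, 1, prod)
  sort(scores, decreasing = TRUE)       # ranked candidate list, best first
}
```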
Evaluation

We evaluated the performance of our prioritization method by leave-one-out cross-validation (LOOCV). This involves repeatedly leaving out a single spot from the seed list of 105 PMF-identified spots and considering it as unidentified. From the in-gel (x, y) coordinates of the excluded spot, a list of candidate proteins was prioritized using the approach described above. This means in particular that the smoothing splines were refitted for each cross-validation fold. We then determined the rank of the protein corresponding to each excluded spot and report the true-positive rate (TPR), that is, the fraction of spots for which the correct protein appeared among the top n candidates, for n ∈ {5, 10, 15, 25}. True-positive rates were determined for all combinations of values ε ∈ {0.04, 0.08, 0.12, ..., 1} for the absolute difference from the estimated pI and δ ∈ {1, 2, 3, ..., 30%} for the range around the estimated Mw.

Implementation

The prioritization approach has been implemented in the statistical software package R (v3.0.2). STRING version 9.1 protein links and protein aliases files were downloaded from the STRING website [21]. Files were loaded into an in-house PostgreSQL database. The STRING payload mechanism was accessed using the Bioconductor package STRINGdb. R scripts and data files are available as Additional file 2.

Influence of pI and Mw range

An important ingredient of our prioritization approach is the information provided by the (x, y) coordinates of an unidentified spot on the pI and Mw of the protein(s) that migrated there. However, this information is noisy and can lead to considerable differences between observed and predicted pI and Mw values (Figure 1, Step 2). Such differences can, for example, be caused by undetected posttranslational modifications (PTMs) leading to changes in migration behaviour in both dimensions, as PTMs can alter both overall apparent molecular mass and charge. All SDS-PAGE separation techniques also have hydrophobic proteins showing anomalous migration due to extra SDS binding [22]. These factors and others can lead to errors of more than 10% when using SDS-PAGE to determine the Mw of a protein [23]. Our method takes the uncertainty of the predicted pI and Mw values into account and generates a list of candidate proteins for an unidentified spot using TagIdent, by specifying the pI and Mw ranges around the estimated pI and Mw values (Figure 1, Step 3). The size of the chosen pI and Mw range has a large influence on the performance of the prioritization method. When the chosen ranges are too narrow, candidate lists become short and the probability that the correct protein is included is small (Figure 2). For the smallest pI range (ε = 0.04) and Mw range (δ = 1%), the average number of proteins in a candidate list was 6, with a recall of 6%. When the chosen ranges are too large, candidate lists in general contain the correct protein but become very long. For the largest pI range (ε = 1) and Mw range (δ = 30%), the average number of proteins in a candidate list was 2,626, and 96.2% of the seed proteins appeared in their own candidate list. However, long candidate lists will likely lead to the correct protein being ranked low after prioritization. With a more moderate choice of pI and Mw range, for example ε = 0.2 and δ = 8%, 60.9% of the seed proteins were contained in their own candidate list, with an average candidate list length of 199.
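The grid evaluation can be organized as below. Here prioritize() is a placeholder for the full pipeline (spline refit on the training spots, TagIdent-style pI/Mw filtering, STRING-based ranking) returning the rank of the left-out spot's true protein; it is not spelled out here.

```r
# LOOCV over the (epsilon, delta) grid; `spots` holds the 105 seed spots.
eps_grid   <- seq(0.04, 1, by = 0.04)   # pI range
delta_grid <- 1:30                      # Mw range, percent

tpr5 <- matrix(NA, length(eps_grid), length(delta_grid))
for (i in seq_along(eps_grid)) {
  for (j in seq_along(delta_grid)) {
    ranks <- sapply(seq_len(nrow(spots)), function(k) {
      # leave spot k out, refit the calibration curves on the rest,
      # then rank candidates for spot k (placeholder function)
      prioritize(train = spots[-k, ], test = spots[k, ],
                 eps = eps_grid[i], delta = delta_grid[j])
    })
    tpr5[i, j] <- mean(ranks <= 5, na.rm = TRUE)   # top-5 true-positive rate
  }
}
```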
Prioritization performance

To prioritize candidate lists that can contain hundreds of proteins, we assumed that the proteins differentially expressed in the 2D-DIGE dataset were functionally related and thus not distributed randomly over a protein association network. The list of 92 PMF-identified proteins was indeed enriched in STRING interactions, with 377 observed interactions as compared to 66.2 expected interactions (P < 0.0001, [18]). We therefore prioritized candidate proteins based on their similarity to the set of PMF-identified proteins (Figure 1, Step 4). We applied our prioritization method to the seed proteins using LOOCV. For each combination of predefined values for the pI and Mw range, the true-positive rate was calculated as a measure of performance. The TPRn for the top n = 5, 10, 15, 25 ranked proteins of the candidate list is shown in Figure 3A. The maximal value of TPR5 equaled 0.438, for ranges ε = 0.2 and δ ∈ {10, 11}%. This means that 43.8% of the seed proteins were ranked in the top-5 using our approach. For higher values of n, the TPR increased, with a maximal TPR25 = 0.6. As hypothesized in the previous section, large pI and Mw ranges led to inferior performance, with a low TPR in the upper right corner of the contour plots (Figure 3A). Performance increased when either of the ranges was reduced, and then decreased again for even narrower ranges. The results in Figure 3A were based on the combined STRING association scores, computed by integrating the probabilities from seven different evidence types. We assessed the contribution of each individual evidence type by calculating type-specific TPR values when ranking the candidate proteins. 'Gene coexpression' and 'textmining' contributed most to the overall ranking, with maximal TPR5 values of 0.41 and 0.371, respectively (Table 1). 'Gene fusion' had only a very minor contribution, with TPR5 = 0.076. The superior performance of 'gene coexpression' and 'textmining' is explained by the fact that these evidence types have a large coverage, whereas events such as gene fusion are relatively rare. Also for higher values of n, the coexpression-based and the combined association-based TPR were highly similar (Table 1). We also assessed the contribution of low-confidence associations to the overall ranking by comparing the TPR obtained using our current strategy, i.e. no cut-off on the STRING association score, with that obtained using a required score of 0.15, 0.4, and 0.7, respectively. Low-to-medium confidence scores contribute positively to the overall ranking, with a considerable decrease in TPR for cut-offs of 0.4 and 0.7; the results are presented in Table 1.

In human cells, transcription has been reported to explain only 30% of the variation in protein abundance levels, with translation and protein degradation contributing up to 40%. However, mRNA abundance is often a very good indicator of whether or not the corresponding protein is detectable [24]. Thus, pruning a candidate list by filtering out proteins for which the corresponding gene is not expressed in a microarray experiment performed under similar conditions might eliminate unlikely candidates. We used the Gene Expression Barcode [15] to determine presence and absence calls from a microarray experiment comparing gene expression in CD4+ T-cells of HIV+ individuals and HIV− control individuals (Figure 1, Step 5). Of the 91 PMF-identified proteins that could be mapped to a probeset identifier, 81 showed evidence of expression at the mRNA level. Thus, the seed proteins were indeed strongly enriched for expression at the mRNA level; of the 23,366 human proteins that could be mapped to a probeset ID, only 7,335 showed evidence of mRNA expression (P < 2.2 · 10^−16, Fisher's exact test).
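A sketch of the barcode filter itself follows, using the Bioconductor package named in the Methods. Here cel_batch (the raw arrays of GSE9927) and the one-probeset-per-candidate annotation are simplifying assumptions for illustration.

```r
# Step 5 sketch: presence/absence filtering with frozen RMA barcodes.
library(frma)

eset  <- frma(cel_batch)     # frozen RMA preprocessing of the raw arrays
calls <- barcode(eset)       # binary matrix: 1 = expressed on that array

# Drop a candidate only if its probeset is silent on every selected array;
# candidates without a probeset identifier are kept
expressed <- rownames(calls)[rowSums(calls) > 0]
keep <- is.na(candidates$probeset) | candidates$probeset %in% expressed
candidates_filtered <- candidates[keep, ]
```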
Using gene expression-based filtering, we observed no increase in the maximal TPR5 and only a slight improvement in the maximal TPR25, with a 5% increase from 0.6 to 0.629 (Table 1 and Additional file 3). Note, however, that for almost all combinations of predefined values for the pI and Mw range, the TPR after filtering is at least as high as before filtering (Figure 3B). We also compared our results with those obtained using Endeavour [19] to prioritize candidate proteins (Step 4). Endeavour is a popular prioritization tool that compared favorably to most other tools in a recent benchmarking study [25]. Endeavour uses multiple heterogeneous data sources to rank the proteins in the candidate list. We selected GeneOntology, Kegg, IntAct, String, Text and Blast as data sources. Maximal TPR values using Endeavour were considerably lower, with TPR5 = 0.324 and TPR25 = 0.581 (Additional file 4). Possibly, the drop in performance using Endeavour is related to its use of an older STRING version.

Table 1. True-positive rates TPRn estimated using LOOCV for the top n = 5, 10, 15, 25 ranked candidates, using our prioritization approach with single-evidence-type association scores and combined association scores. STRING association scores with a value less than the cut-off value were not taken into account. With a cut-off value of zero all associations contribute to the overall ranking score. The maximal TPR across all combinations of predefined values for the pI and Mw range is reported. For each value of n the highest TPR is indicated in bold.

Prioritization of unidentified spots

We applied our prioritization method to the 188 unidentified, differentially expressed spots from the 2D-DIGE dataset (Additional file 5). Based on the LOOCV results mentioned earlier, ε = 0.2 and δ = 11% were chosen as the pI and Mw ranges for TagIdent, leading to an average candidate list length of 242. Using gene expression-based filtering, we obtained a total of 393 unique proteins that were included in at least one top-5. As an in silico validation, we examined whether these top-5 candidate proteins had a documented relationship with HIV-1 infection. For this purpose we used the NIAID HIV database of (HIV-1)-human protein interactions [26]. Of the 389 unique top-5 candidate proteins that could be mapped to an Entrez Gene ID, 213 had documented evidence of interactions with HIV-1 proteins. Thus, the top-5 candidates were strongly enriched for such interactions; of the 12,544 proteins found in at least one candidate list, only 1,659 showed evidence of interactions with HIV-1 proteins (P < 2.2 · 10^−16, Fisher's exact test). The efficacy of our strategy is also illustrated by a clear decreasing trend in the occurrence of HIV-1 interacting proteins at lower ranks (Additional file 6). This provides strong evidence that our prioritization method proposes candidate proteins that are plausible in terms of their in-gel migration behavior and functional relevance.
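The enrichment test can be reproduced from the reported counts. One plausible construction of the 2x2 table (the paper does not spell out its exact layout) contrasts the top-5 candidates with the remaining candidate proteins:

```r
# Fisher's exact test for enrichment of documented HIV-1 interactors
# among the top-5 candidates, using the counts given in the text.
tab <- matrix(c(213,         389 - 213,                      # top-5: yes / no
                1659 - 213, (12544 - 389) - (1659 - 213)),   # others: yes / no
              nrow = 2, byrow = TRUE,
              dimnames = list(c("top5", "other"),
                              c("HIV1_interactor", "no_interaction")))
fisher.test(tab, alternative = "greater")
```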
Discussion

Although the human proteome can now be probed at an unprecedented scale [27], the identification and quantification of low-abundant proteins remains a formidable challenge. We presented a prioritization method that generates ranked lists of candidate proteins for unidentified low-abundant (i.e. only visible using fluorescence) spots from a 2D-DIGE experiment. Candidate proteins are proposed based on the in-gel location of a spot, and the resulting candidate lists are ranked based on the strength of association of the candidates with the PMF-identified proteins, using STRING functional association scores. We assessed the performance of our approach on proteins differentially expressed at the peak of HIV-1 infection of T-cells [6]. Evaluation by LOOCV showed that our method ranked 43.8% of the proteins in the top-5 of their respective candidate lists. Several other approaches have been developed to prioritize genes and, to a lesser extent, proteins from a list of candidates based on text mining, similarity profiling and network analysis [28]. Existing tools have several limitations for our purpose. First, most tools are web-based and do not provide a programmatic interface or allow for batch queries. They are therefore not suited to our experimental setup, which involves leave-one-out cross-validation on 92 proteins for 25 · 30 = 750 combinations of possible values for the pI and Mw range. Second, prioritization methods often integrate multiple data sources and are therefore difficult to keep up-to-date. Third, surprisingly, most prioritization tools do not provide detailed information about the evidence on which the candidate ranking is based [28]. Such information is invaluable for making an informed decision on which top candidates to validate experimentally. We decided to base our prioritization method on the functional protein-protein associations provided by STRING. Over the past 10 years STRING has established itself as a high-quality resource of functional links between proteins. Moreover, the data content of STRING is frequently updated and all information regarding the interactions and the interacting proteins themselves can be downloaded. This enabled us to develop a computational pipeline that can be easily updated to a new version of STRING. One of the main strengths of STRING is its interactive and intuitive user interface, which provides a detailed overview of the STRING network and the evidence for each protein-protein association. We employed the payload mechanism, which enables projecting external information onto STRING [18], to visualize networks consisting of seed proteins and top-5 candidate proteins for unidentified spots (for a typical example, see Figure 4; for the full list, see Additional file 5). In addition, we integrated information on the rank of a candidate protein and on evidence of (HIV-1)-human protein interactions for each of the top-5 candidates to further enhance interpretation (Additional file 7). We also demonstrated that our prioritization method clearly outperformed Endeavour. However, prioritization could possibly be further improved by also incorporating association scores of indirect interactions, for example via prioritization based on random walks or diffusion kernels [29]. Despite the capability of our approach to rank the correct protein among the top candidates, it has several potential limitations. First, we could not take the extensive diversification of the human proteome due to different isoforms, posttranslational modifications and processing (e.g. of signal peptides) into account. In order to determine the theoretical pI and Mw of the identified proteins, we assumed that their spots corresponded to the most abundant, mature form. Clearly, the estimated calibration curves are affected by the validity of this assumption.
This problem is illustrated by the fact that multiple spots corresponding to different forms of the same protein had to be assigned identical pI and Mw values; see, for example, the four spots for heat shock protein 60 (P10809; Additional file 1). This partly explains the relative inaccuracy of the predicted pI and Mw values and the rather wide optimal pI and Mw ranges ε = 0.2 and δ ∈ {10, 11}%. These wide ranges presumably also lead to several proteins being highly ranked for multiple unidentified spots (Additional file 5). For example, catenin beta-1 (P35222; pI = 5.53, Mw = 85365) was ranked first for 8 spots with predicted pI values in the range 5.34-5.42 and predicted Mw values in the range 82351-95847. Whether these spots really represent (modified forms of) catenin beta-1 awaits further experimental validation. Note that even if CTNB1 is not present in any of these spots, it can still be differentially expressed in our experiment, as it is strongly associated with the already identified differentially expressed proteins [9]. Isoforms are also not taken into account by STRING; only the mature form is used in the prioritization step. However, TagIdent contains all isoforms listed by UniProt, even though some of them must be really rare. This implies that for spots corresponding to non-canonical isoforms, the correct proteins can still end up in the candidate list. For example, cellular tumour antigen p53 (P04637; pI = 6.33, Mw = 43653), for which UniProt lists 9 isoforms, was ranked first for 11 unidentified spots with predicted pI values in the range 5.34-7.91 and predicted Mw values in the range 31284-45252. One should also remember that the optimal ranges for pI and Mw are determined by a trade-off between specificity and sensitivity. With a pI deviation of +/- 0.2 and an Mw deviation of +/- 11%, the average number of proteins in a candidate list is 266 and 67.6% of the seed proteins appear in their own candidate list. Thus, even with optimal ranges, almost one-third of the seed proteins could not be prioritized, since the differences between observed and predicted pI or Mw values were too large (Additional file 1). Some deviations are probably caused by a lack of data at extreme values of the x or y coordinates. For example, pI deviations are large for such highly basic proteins as 40S ribosomal protein S5 (P46783) and Histone H2B type 1-L (Q99980). However, large deviations can also be observed for intermediate values of the x or y coordinates. For example, triosephosphate isomerase (P60174) displayed both a pI deviation of -1.113 and an Mw deviation of 22.9%. Differences such as these occur if the corresponding spot did not contain the mature form of the protein or, more likely, was posttranslationally modified. With 188 unidentified spots, a lot of candidate lists are generated. Even looking at the top-5 lists only, 940 candidates could in principle come up. The actual number is much smaller, 393 unique proteins, as many candidates come up multiple times (e.g. P53, 11 times, or HIF1A (Q16665), 17 times). Still, it is easy to cherry-pick some proteins from these lists for a discussion of their possible roles in the context of HIV-T-cell interaction. Checking whether a candidate is indeed differentially expressed should be performed first, e.g. by Western blotting. Working out whether the change in expression pattern is due to PTMs is harder, and working out the biological relevance of the change is harder still. Keeping in mind these caveats, the chances that real changes are occurring in the 2D patterns of, for example,
the two predicted proteins mentioned, P53 and HIF1A, are rather good. P53 is a very central player that strongly reacts to cellular stresses, both in its amounts and in its PTMs, and is involved in regulating the choice between cell growth, cell arrest for repair, and apoptosis. HIF1A, also a central switch protein, is, amongst others, involved in the choice between more pronounced glycolysis with less oxidative phosphorylation and 'normal' glycolysis with more pronounced mitochondrial oxidative processes. This so-called Warburg shift can occur in T-cells [30] and is heavily influenced by HIF1A. Interestingly, the identified proteins in our dataset (Additional file 1) showed a clear down-regulation of proteins involved in glycolysis [6]. An overall down-regulation of HIF1A, as it stimulates glycolysis, would thus be expected. However, the difficulty of making sense of PTM patterns is nicely illustrated in this case: HIF1A pops up 17 times in the candidate top-5 lists, 9 times in spots that are up-regulated in response to infection and 8 times in down-regulated ones. How many of these spots really represent HIF1A, and in what forms, remains to be investigated, but HIF1A is clearly one of the candidates that deserve further study, nicely illustrating the power of our approach. Information regarding the up- and down-regulation of specifically modified forms of such important proteins can be obtained much more efficiently from large-scale 2D-DIGE experiments with the aid of our computational method. Small amounts of protein from both control and infected T-cells could be run on small IEF strips with the appropriate (restricted) pI range, followed by standard SDS-PAGE separation. Upon Western blot analysis, specific protein patterns will be obtained. Combining these with the more reliable quantitative information in the original 2D-DIGE experiments could illuminate how the protein of interest has been modified in response to the stimulus under investigation, in this case viral infection. In conclusion, though using 2D-DIGE datasets in combination with our algorithm to analyse changes in PTMs upon a biological stimulus is not straightforward, it represents a promising alternative for studying this crucial way of responding to changes in the environment. The applicability of our method could be extended in several ways. First, in 2D-DIGE experiments with only a limited number of differentially expressed spots, one could identify additional, non-differentially expressed spots using mass spectrometry in order to fit more reliable calibration curves. Secondly, it is conceivable that certain affected cellular pathways are not represented in the set of abundant identified seed proteins used in our prioritization approach. Although more difficult, one could look at pathway clustering in the ranked candidate lists directly. This might lead to the unbiased identification of interrelated low-abundant protein changes. Finally, iteratively improving the analysis should be straightforward: whenever further analysis based on the respective candidate lists yields additional identified proteins, both the calibration curves and the prioritization approach for candidate ranking can be further optimized.
Does the Age Affect the Outcomes of Cardiac Resynchronization Therapy in Elderly Patients?

Background: More and more heart failure (HF) patients aged ≥ 75 years undergo cardiac resynchronization therapy (CRT) device implantation; however, the data regarding the outcomes and their predictors are scant. We investigated the mid- to long-term outcomes and their predictors in CRT patients aged ≥ 75 years. Methods: Patients in the Cardiac Resynchronization Therapy Modular (CRT MORE) Registry were divided into three age groups: <65 (group A), 65-74 (group B) and ≥75 years (group C). Mortality, hospitalization, and composite event rates were evaluated at 1 year and during long-term follow-up. Results: Patients (n = 934) were distributed as follows: group A, 242; group B, 347; group C, 345. On 12-month follow-up examination, 63% of patients ≥ 75 years displayed a positive clinical response. Mortality was significantly higher in patients ≥ 75 years than in the other two groups, although the rate of hospitalizations for HF worsening was similar to that of patients aged 65-74 (7 vs. 9.5%, respectively; p = 0.15). Independent predictors of death and of a negative clinical response were age > 80 years, chronic obstructive pulmonary disease (COPD) and chronic kidney disease (CKD). Over long-term follow-up (1020 days (IQR 680-1362)), mortality was higher in patients ≥ 75 years than in the other two groups. Hospitalization and composite event rates were similar in patients ≥ 75 years and those aged 65-74 (9 vs. 11.8%; p = 0.26, and 26.7 vs. 20.5%; p = 0.06). Conclusion: Positive clinical response and hospitalization rates do not differ between CRT recipients ≥ 75 years and those aged 65-74. However, age > 80 years, COPD and CKD are predictors of worse outcomes.

Introduction

Cardiac resynchronization therapy (CRT) is a validated strategy for improving cardiac pump function through biventricular pacing in heart failure (HF) patients with interventricular conduction delay and mechanical dyssynchrony [1]. The incidence of HF increases with aging; indeed, a survey conducted by the European Society of Cardiology (ESC) revealed that, in current European practice, about 32% of CRT devices are implanted in patients aged ≥ 75 years [2]. However, as only a minority of the patients included in clinical trials belong to this age group, whether CRT is still of benefit in these patients is debated. Previous studies have shown that CRT device implantation improves symptoms, quality of life and functional class in elderly people [3,4]. However, data regarding the outcomes and their predictors are scant and limited to older studies [5-10]. In this study, we analyzed the large database of the CRT MORE registry in order to investigate the clinical response, mortality and hospitalization rates in elderly CRT recipients (≥75 years).

Study Population

The Cardiac Resynchronization Therapy Modular (CRT MORE) Registry (clinicaltrials.gov identifier: NCT01573091) was a prospective, single-arm, multi-center cohort study designed to evaluate the association between baseline and implantation variables and the outcomes of patients in whom a CRT device had been implanted in accordance with current guidelines [11]. Enrollment started in December 2011 and ended in November 2013 [12].
In the present analysis, the population of the CRT MORE registry was stratified into three groups according to age: <65 years (young, group A), 65-74 years (young old, group B), and ≥75 years (old, group C) [13], and comparisons were made among the groups. The study complied with the Declaration of Helsinki, the local ethics committee approved the research protocol, and informed consent for data collection was obtained from the subjects.

CRT Implantation

In all patients enrolled in the Registry, devices and pacing leads were implanted by means of standard techniques, and all devices were programmed in accordance with the clinical practice of each center. Procedural details have been described previously [12].

Clinical Response and Long-Term Outcomes

Clinical response was evaluated at 12 months; death from any cause and HF hospitalization, whichever occurred first after CRT implantation, were also evaluated. For the clinical evaluation, we used both the Clinical Response (CR) [14] and the Clinical Composite Score (CCS) [15]. The CR was assessed in accordance with a hierarchical composite criterion comprising survival status, hospitalization for HF, and variations in NYHA functional class. Specifically, a positive response was attributed to patients who remained alive without any episode of HF hospitalization after 12 months of CRT delivery and who showed an improvement in NYHA class or remained in NYHA class I or II. A negative clinical response was attributed to patients who died or were hospitalized for signs of HF, showed worsening of their NYHA class, or remained in NYHA class III or IV. The CCS has been used as an intermediate-term primary endpoint in several trials of new interventions for the treatment of chronic heart failure. The CCS classifies each randomized patient as improved, unchanged, or worsened, according to the clinical response during the study and the clinical status at the end of the study. Patients are considered to have worsened if they have experienced a major clinical event or reported worsening of their NYHA class or global assessment. Furthermore, left ventricular (LV) reverse remodeling was evaluated by measuring the effect of CRT on LV end-systolic volume (LVESV) and on left ventricular ejection fraction (LVEF), by comparing the baseline value with that recorded at the 12-month follow-up examination of surviving patients, and by calculating the proportion of patients who displayed a relative reduction of 15% or more in LVESV [14].

Statistical Analysis

Descriptive statistics are reported as means ± SD for normally distributed continuous variables, or as medians with 25th to 75th percentiles in the case of skewed distributions. Normality of distribution was tested by means of the nonparametric Kolmogorov-Smirnov test. Differences between means were compared by means of a t-test for Gaussian variables, and the F-test was used to check the hypothesis of equality of variance. The Mann-Whitney non-parametric test was used to compare non-Gaussian variables. Differences in proportions were compared by applying χ2 analysis or Fisher's exact test, as appropriate. Hazard ratios (HR) and their 95% confidence intervals (CI) were computed by means of a Cox regression model, in which baseline variables were fixed covariates and deaths or cardiovascular hospitalizations were time-dependent covariates. The cumulative probability of death or HF hospitalization was displayed by means of the Kaplan-Meier method, and the log-rank test was used to compare cumulative events.
A p value < 0.017 was considered significant after Bonferroni correction. All statistical analyses were performed by means of STATISTICA software, version 7.1 (StatSoft, Inc., Tulsa, OK, USA).

Patient and Public Involvement Statement
Patients were not involved in the research study.

Clinical Characteristics
According to age, the 934 patients included in the CRT MORE registry were divided into three groups: 242 in group A (<65 years), 347 in group B (65–74 years) and 345 in group C (≥75 years). Baseline clinical characteristics are reported in Table 1. In every group, the majority of patients were male; the mean ages were 57 ± 7, 70 ± 3 and 80 ± 4 years in groups A, B and C, respectively (A vs. C, p < 0.0001; B vs. C, p < 0.0001). The prevalence of patients in NYHA classes III-IV increased with age. Patients aged ≥75 years had more comorbidities (renal disease, atrial fibrillation and hypertension) than those <65 years. In patients <65 years, an ischemic etiology of HF was more common (49.6% in group A vs. 36% in group C; p = 0.0014). The implantation of an implantable cardioverter defibrillator (ICD) combined with CRT decreased with age.

Clinical and Echocardiographic Response of Elderly Recipients of CRT Devices
At 12 months, the rate of positive CR was similar in patients aged 65–74 years and those ≥75 years (68 vs. 63%; p = 0.18), whereas it was significantly higher in the youngest patient group (83.1%; A vs. C, p = 0.0001; A vs. B, p = 0.0001) (Table 2). On the other hand, 19.7% of patients aged 65–74 years and 20% of patients aged ≥75 years experienced a worsened CCS, almost twice the rate recorded in patients <65 years. By contrast, we did not find any difference among the groups in terms of echocardiographic response on 12-month follow-up examination (63.6% in group A, 59.6% in group B, 61.3% in group C; A vs. C, p = 0.94; B vs. C, p = 0.58).

One-Year and Long-Term Outcomes
Mortality at 12 months was higher in patients aged ≥75 years than in the other two groups, although the rate of hospitalization for worsening of HF was similar in patients ≥75 years and those aged 65–74 years. The 1-year composite event rate was 15.7% in patients ≥75 years, which was significantly higher only than that of patients aged <65 years (15.7% in group C vs. 5.8% in group A; p = 0.0002) (Table 3). Likewise, during a median follow-up of 1020 (680–1362) days, only the mortality rate was significantly higher in the oldest group, whereas hospitalization and composite event rates were similar to those of patients aged 65–74 years (Figure 1). There were no differences in outcomes between CRT-D and CRT-P patients in the three groups (Supplementary Table S1 and Supplementary Figure S1).

Predictors of Death and Association with CR and CCS at 1 Year in the Elderly Group (Age ≥ 75 Years)
Death occurred more frequently in patients aged >80 years, in those with atrial fibrillation (AF) at implantation, chronic obstructive pulmonary disease (COPD) or chronic kidney disease (CKD), and in those in NYHA class III-IV (Table 4). On multivariate Cox regression analysis, adjusted for baseline confounders, only age >80 years, COPD and CKD remained associated with death (age: HR 2.32, 95% CI 1.1139–4.8319, p = 0.0253; COPD: HR 2.78, 95% CI 1.3763–5.6030, p = 0.0046; CKD: HR 2.70, 95% CI 1.3141–5.5463, p = 0.0071). On plotting mean survival according to the number of risk factors, a clear separation of curves emerged between patients who had more than one predictor and those with one or no predictor (p < 0.0001) (Figure 2).
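The stratification in Figure 2 amounts to counting, per patient, how many of the three predictors (age > 80 years, COPD, CKD) are present and comparing the survival curves of the resulting strata. A minimal sketch, continuing the hypothetical dataframe and column names from the previous snippet:

```python
# Stratify elderly patients by the number of risk factors (>1 vs. 0-1)
# and compare Kaplan-Meier curves, as in Figure 2. Names are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter

df = pd.read_csv("crt_more_cohort.csv")
elderly = df[df["age_group"] == "C"].copy()          # patients aged >= 75
elderly["n_risk"] = elderly[["age_gt80", "copd", "ckd"]].sum(axis=1)
elderly["stratum"] = elderly["n_risk"].apply(
    lambda n: ">1 predictor" if n > 1 else "0-1 predictors")

kmf = KaplanMeierFitter()
for label, sub in elderly.groupby("stratum"):
    kmf.fit(sub["time"], sub["death"], label=label)
    kmf.plot_survival_function()
plt.show()
```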
Furthermore, these risk factors were also associated with the CCS and the CR: when patients had more than one risk factor, the rate of positive CR progressively decreased, whereas the rate of patients with a worsened CCS significantly increased (Figure 3).

Main Findings
In the present study, we analyzed the clinical response, the mid- to long-term clinical outcomes and their predictors in a large elderly population included in the CRT MORE Registry. We found that: (1) at 1 year (mid-term follow-up), the rate of positive clinical response was similar in patients aged ≥ 75 years and in patients aged 65–74 years; likewise, the hospitalization and composite event rates were not significantly different; (2) age > 80 years, COPD and CKD were independent predictors of death at 1 year in elderly patients, and the risk of death rose with the number of these factors present, the highest risk being observed in patients with all three; (3) these risk factors were also associated with a negative clinical response: patients with no risk factors had a >80% probability of a positive clinical response, whereas with two or more risk factors the probability was <30%; (4) over long-term follow-up, patients aged ≥ 75 years had HF hospitalization and composite event rates similar to those of patients aged 65–74 years, despite their higher mortality rate. Our findings suggest that patients aged ≥ 75 years are good candidates for CRT, as the benefits are seen over both mid- and long-term follow-up. However, age > 80 years, CKD and COPD reduce the probability of a positive clinical response and survival.

Elderly Patients and Medium- to Long-Term Clinical Outcomes after CRT Implantation
Nowadays, many people aged > 65 years are very active and, as suggested by Orimo et al., only patients aged 75 years or above should be defined as "elderly" [16]. The European CRT survey provided important information on current European practice and revealed that about 32% of CRT devices are implanted in patients ≥ 75 years [2]. Whether these patients benefit from CRT has been investigated by previous studies, but their small sample sizes, short follow-up periods and differing cut-off values used to categorize elderly patients have prevented this issue from being fully addressed. Bleeker et al. [3] and Verbrugge et al. [4] reported that elderly recipients of CRT devices displayed improvements similar to those observed in younger patients in terms of clinical symptoms, NYHA class, quality-of-life scores and 6-min walking distance. Furthermore, no differences were found in the number of responders, the magnitude of LV ejection fraction improvement or the extent of LV remodeling. In the large InSync/InSync ICD Italian registry, Fumagalli et al. [6] divided patients into three groups (<65 years, 65–74 and ≥75 years), as we did, and investigated their echocardiographic responses to CRT and long-term outcomes. However, those patients were enrolled between 1999 and 2005; since then, CRT has improved considerably, and the echocardiographic cut-off used to define responders has changed. Therefore, new data on medium- and long-term outcomes were needed. The CRT MORE Registry holds data on patients who underwent CRT implantation from 2011 to 2013. Although previous studies have analyzed outcome and its predictors in this large population [14,17,18], these outcomes have never been analyzed in relation to age.
In the present sub-study, we found that patients aged ≥ 75 years, despite their higher mortality, had a rate of positive clinical response at 1 year similar to that of patients aged 65–74 years (63 and 68%, respectively). As expected, on long-term follow-up their mortality remained higher, although their hospitalization rate was similar. Interestingly, however, age > 80 years, like CKD and COPD, identified elderly patients with a lower probability of a positive clinical response and a higher rate of death.

Clinical Perspectives
In recent decades, the prognosis of patients with cardiovascular disease has greatly improved, especially in terms of mortality. For instance, patients with acute myocardial infarction now very often survive, thanks to the widespread use of early invasive coronary revascularization. As a consequence, and together with the aging of the population, the number of heart failure patients is continuously increasing. We currently have several pharmacological and invasive strategies to ameliorate the clinical status, as well as the prognosis, of these patients. It is therefore becoming increasingly important to identify the right patient for a specific procedure. Our findings have important clinical implications, as they demonstrate that elderly patients can still benefit from CRT device implantation. However, the presence of comorbidities such as CKD and COPD, especially in those aged > 80 years, should be a warning sign, as these conditions may interact synergistically and reduce the benefits of this approach. The VALID-CRT prognostic score has recently been demonstrated to predict both mortality and clinical response [14]. Of the three aforementioned predictors, however, only age is taken into account in this score. We therefore identified two novel variables for risk stratification and tailored treatment. As for the influence of the ICD on outcomes, we found no difference between CRT-P and CRT-D patients. However, the goal of our study was to understand the clinical response to CRT in the elderly; a specific evaluation of the difference between CRT-P and CRT-D cannot therefore be made in our population. We believe that only a randomized trial could address this question; in fact, CRT-P is usually implanted in patients with more comorbidities, and without a randomized controlled trial this would introduce a very significant bias into the results. Current ESC guidelines recommend implanting a CRT-D if life expectancy is >1 year [11]. However, as the procedure is associated with more complications, a longer in-hospital stay and a higher risk of infections, it may prove cost-effective only in patients who are expected to live 5–7 years after implantation [19]. As the prevalence of CKD and COPD is especially high in octogenarians, every physician implanting a CRT device should be aware that the probability of clinical improvement in these patients is lower.

Limitations
(1) The data used in this study were taken from a registry; we cannot therefore exclude the presence of some selection bias. (2) As the CRT MORE is a multicenter registry, we cannot guarantee that data collection was homogeneous, although all centers followed a pre-specified protocol. (3) We did not assess the frailty of the patients; thus, it is possible that the results would have been different in a very frail population. (4) Pharmacological therapy on enrollment was not optimal, especially in terms of β-blockers and angiotensin-converting enzyme inhibitors, as treatment was based on clinical evaluation by the attending physicians.
However, this observational prospective study may provide a representative picture of the real-life scenario of pharmacological therapy in patients undergoing CRT implantation. (5) Echocardiographic data at follow-up were available only for 589 (63%) patients; therefore, an ad hoc analysis was not performed.

Conclusions
Elderly patients aged ≥ 75 years still benefit from CRT implantation, as similar rates of positive clinical response are seen in patients ≥ 75 years and those aged 65–74 years. Although elderly patients have higher mortality, this is driven by age itself; indeed, hospitalization rates do not differ from those of patients aged 65–74 years. Predictors of worse outcomes are age > 80 years, CKD and COPD. A proper characterization of baseline parameters can help to estimate upfront the probability of response to CRT.
2021-04-12T17:22:41.404Z
2021-04-01T00:00:00.000
{ "year": 2021, "sha1": "b69d4b487f2b92a0b73bc99764a98c2faf58e12a", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/jcm10071451", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b69d4b487f2b92a0b73bc99764a98c2faf58e12a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
13925970
pes2o/s2orc
v3-fos-license
Current-sheet Evolution near a Hyperbolic Magnetic Neutral Line in Hall Magnetohydrodynamics: An Exact Solution

Abstract. Unsteady Hall magnetohydrodynamics (MHD) near a hyperbolic magnetic neutral line is investigated. An exact analytical solution describing a self-similar evolution is given. This solution shows a negligible impact of the Hall effect on the current-sheet formation process near the hyperbolic magnetic neutral line at small times but, subsequently, a quenching by the Hall effect of the finite-time singularity exhibited in ideal MHD and, hence, a prevention of the current-density blow-up at large times. The asymptotic result given by this time-dependent solution is in full quantitative agreement with the formulation of steady Hall MHD near an X-type magnetic neutral line (Shivamoggi [23]). The latter formulation showed that this asymptotic result indeed corresponds to a hyperbolic configuration of the magnetic field lines in the steady case.

Introduction
When a plasma collapses near the neutral line of the applied magnetic field, the continual accumulation of magnetic flux in the region of the neutral sheet puts the current sheet in a non-stationary state (Syrovatskii [1]). An exact self-similar solution of the MHD equations for a time-dependent, two-dimensional (2D) flow of an incompressible plasma in a hyperbolic magnetic field was given by Uberoi [2] and Chapman and Kendall [3]. This solution had an initially current-free magnetic field, so it is not appropriate for the reconnection problem. The solution was modified (Shivamoggi [4]) so as to relax this restriction and hence make it suitable for the reconnection problem. It was then generalized to incorporate a uniform shear-strain rate in the plasma flow (Shivamoggi [5], Rollins and Shivamoggi [6]), so that the magnetic field lines undergo not only sweeping but also shearing by the plasma flow. The above solution predicted a sequence of events associated with the evolution of a current sheet in a hyperbolic magnetic field, in agreement with laboratory experiments (Frank [7]) on the collapse of a plasma near a hyperbolic neutral line. In recognition of the numerical results (Brunnel et al. [11]) showing the significant effect that plasma density variations near the magnetic neutral point have on the magnetic reconnection processes taking place there, the above solution was generalized further to incorporate density variations in the plasma (Rollins and Shivamoggi [12]). The current-sheet formation process was found to speed up in the presence of a plasma density build-up near the current sheet. Fast magnetic reconnection processes in the laboratory (e.g., sawtooth collapse in tokamak discharges) and in space (e.g., solar flares and magnetospheric substorms) can be described using collisionless plasma models (Yamada et al. [13], Shibata [14]). In a high-β collisionless plasma, on length scales shorter than the ion skin depth d_i, the electrons decouple from the ions and the electron dynamics is governed by Hall currents (Sonnerup [15]). The decoupling of ions and electrons in a narrow region around the magnetic neutral point allows for rapid electron flows in the ion dissipation region and hence a faster magnetic reconnection process (Mandt et al. [16] and Biskamp et al. [17]). It may be mentioned that, even in the absence of Hall currents, fast collisionless magnetic reconnection has been shown to be feasible (Scholer et al. [18], Jaroschek et al.
[19], Bessho and Bhattacharjee [20], Ishizawa and Horiuchi [21]). This is caused by:
• inductive electric fields generated by the lower-hybrid drift instability, or
• non-gyrotropic pressure tensor effects caused by the ion-meandering motion near the magnetic neutral point.
In recognition of the important role played by the Hall effect in fast magnetic reconnection processes, an investigation of unsteady Hall MHD near a hyperbolic magnetic neutral line is therefore in order; this is the objective of this paper. The asymptotic result given by the time-dependent solution in question turns out to be in full quantitative agreement with the formulation of steady Hall MHD near an X-type magnetic neutral line (Shivamoggi [23]).

Governing Equations for Hall MHD
Consider an incompressible, two-fluid, quasi-neutral plasma, whose dynamics is governed (in the usual notation) by the ion- and electron-fluid equations (1) and (2). Neglecting electron inertia (m_e ⇒ 0), equations (1) and (2) can be combined to give an ion-fluid equation of motion and a generalized Ohm's law. Distance is non-dimensionalized with respect to a typical length scale a, the magnetic field with respect to a typical field strength B_0, and time with respect to the reference Alfvén time. The magnetic stream function ψ is introduced, the ion-fluid velocity is written in terms of the components (v_x, v_y, w), and the physical quantities of interest are assumed to have no variation along the z-direction.

Hall MHD Near a Hyperbolic Magnetic Neutral Line
Consider the initial-value problem near a hyperbolic magnetic neutral line in Hall MHD, with initial conditions in which γ_0 and k are externally determined parameters, with γ_0 > 0 and C > 0. This initial condition describes a stagnation-point plasma flow impinging transversely onto the x = 0 plane and incorporates equations (4) and (5). The spatial structure of the out-of-plane magnetic field described by this initial condition is in recognition of the quadrupolar out-of-plane magnetic field b pattern characterizing the Hall effect (Terasawa [24]). Laboratory experiments (Ren et al. [25]) and in situ measurements in the magnetotail (Fujimoto et al. [26], Nagai et al. [27], Oieroset et al. [28]) have also confirmed the latter signature of the Hall effect. The Hall magnetic field b is believed to be produced by the dragging of the in-plane magnetic field in the out-of-plane direction by the electrons near the X-type magnetic neutral line ([16]). We take k > 1 so that the Lorentz force due to the initial magnetic field is directed so as to maintain the prescribed initial stagnation-point flow.

Let us assume that the solution, for t > 0, of equations (4), (5) and (13)–(16) with the above initial conditions is of the self-similar form (19). For the solution (19), ∇²ψ = f(t) and ∇²b = 0, so the effect of resistivity in this case is to add a function of t to ψ (which leaves the magnetic field unaltered) and hence to introduce an electric field along the z-axis. We therefore drop the resistivity in the following. Further, for an incompressible plasma, the pressure does not have a dynamical role; it is forced to be an enslaved variable in the sense that its form is chosen so as to be compatible with equations (13)–(16), given the ansätze for v_x, v_y, w, ψ and b. Substituting (19) into equation (13), we obtain one set of amplitude equations, while equation (14) gives another; together these constitute equations (21)–(23). Equations (15) and (16) are identically satisfied by the solution (19). We have from equations (21)–(23), with t = 0: γ = 0.
Equation (26), along with (20) and (27), yields (28). It may be noted that (24) and (25) are consistent with the ion-fluid incompressibility condition, which is derivable from equation (14) on substituting (19). For small t, equation (26) gives (30), while for large t it gives (31). (30) shows that the Hall effect (σ ≠ 0) does not materialize to O(t²). However, (31) shows that the finite-time singularity exhibited in ideal MHD (Shivamoggi [29]) is quenched by the Hall effect. Thus, though the Hall effect does not impact the current-sheet formation process for small times, it prevents the current-density blow-up at large times. This result may be further appreciated by noting that equations (24)–(26) lead to the exact invariant (32), which clearly shows the suppression of the plasma collapse process near a hyperbolic magnetic neutral line in Hall MHD for large times (when (31b) becomes valid). Physically, the suppression of the plasma collapse process near a hyperbolic magnetic neutral point in Hall MHD appears to be caused by the dispersive activity of whistler waves, which is known to lead to current-sheet broadening, as confirmed by laboratory experiments (Urrutia et al. [30]) and satellite observations at the magnetopause and in the magnetotail plasma sheet (Sonnerup et al. [31], Fairfield et al. [32]), as well as by numerical simulations (Shay et al. [33] and [34]). (31b), in conjunction with (19), also shows that, for large t, the level curves of the out-of-plane magnetic field are also the streamlines of the in-plane ion flow. It is pertinent to note that the asymptotic result (31b) is in full quantitative agreement with the formulation of steady Hall MHD near an X-type magnetic neutral line ([23]). The latter formulation showed that (31b) indeed corresponds to a hyperbolic configuration of the magnetic field lines in the steady state.

Discussion
In recognition of the important role played by the Hall effect in fast magnetic reconnection processes, this paper investigates unsteady Hall MHD near a hyperbolic magnetic neutral line. The Hall effect is found not to impact the current-sheet evolution process near the hyperbolic magnetic neutral line for small times. However, subsequently, the Hall effect is found to quench the finite-time singularity exhibited in ideal MHD and hence to prevent the current-density blow-up at large times. The asymptotic result (31b) is in full quantitative agreement with the recent formulation of steady Hall MHD near an X-type magnetic neutral line ([23]), which showed that (31b) indeed corresponds to a hyperbolic configuration of the magnetic field lines in the steady case. Besides, in this range of time, the level curves of the out-of-plane magnetic field are also the streamlines of the in-plane ion flow.

Acknowledgements
I acknowledge with gratitude helpful communications and discussions with Drs. Luis Chacon, Michael Shay and Michael Johnson.
2008-08-29T19:34:03.000Z
2008-01-22T00:00:00.000
{ "year": 2008, "sha1": "0c59ffd0a4c8e08e449e85a8c1ab777abb337ff2", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "af7ce951a396d7df53053e99b016d83711f38175", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
238751561
pes2o/s2orc
v3-fos-license
Extending the Theory of Planned Behavior to Explore the Influence of Residents' Dependence on Public Transport

The accurate depiction and understanding of the influence mechanism of residents' dependence on public transport (RDPT) is an important foundation for increasing the proportion of green trips and alleviating urban traffic congestion. To explore the influence mechanism of RDPT, this paper extends the theory of planned behavior (TPB) by introducing the objective factors of attributes, environment and travel characteristics. The agglomerative nesting clustering algorithm and multiple structural equation models (SEMs) are then developed to identify the RDPT levels and analyze the influence relationships between the integrated influencing factors and RDPT, based on travel survey data. The results indicate that the objective variables have indirect impacts on RDPT by influencing the psychological variables of attitude, subjective norms and perceived control, as well as travel intention. Residential self-selection (RSS) effects are detected in all clusters under normal conditions, and the influence of the environment on RDPT remains significant after controlling for this effect. The findings reflect that the influence mechanisms of the three SEMs for the different clusters are distinct from that of the baseline model, and that different observed variables have dissimilar explanatory abilities for the travel intention of different residents. Therefore, based on the significant findings, some beneficial policy implications are proposed to increase public transport usage while reducing car usage among residents, especially those with low and relatively low public transport dependence levels.

I. INTRODUCTION
With the rapid development of the social economy, the ways urban residents live and travel are undergoing great changes in China. Currently, the most frequently used mode of transportation for all types of travel is private transportation [1], [2], which has led to various urban problems, including traffic congestion and air and noise pollution. As an intensive, wide-coverage and high-capacity transport mode, public transport (PT) is regarded as an alternative to private car travel [3], which can effectively alleviate various urban transport problems and promote the development of sustainable transport systems. Therefore, with the continuous promotion of the National Transit Metropolis program and PT priority in China, traffic management departments have focused in recent years on the construction and optimization of urban PT systems to restrict car use and guide transit travel behavior. However, transit ridership has not increased significantly in China, showing only minor growth from 16.3% in 2004 to 19.2% in 2017 [4]. Furthermore, due to the diversity of urban traffic modes and the spatial imbalance of infrastructure construction, residents' mode selection intentions and travel behavior are complex, exhibit heterogeneity and are difficult to predict. Thus, travelers with different individual attributes have different PT dependence levels, which are influenced by internal and external factors [5]. To better understand travel choice decision-making and improve PT service quality, it is essential to explore the causal relationship between the multidimensional influencing factors and residents' dependence on PT (RDPT). There have been many studies on behavioral theory applications and travel behavior modeling in different scenarios.
In terms of the theoretical analysis of travel behavior influence mechanisms, the theory of planned behavior (TPB) is one of the most widely used theories. Based on the modified technology acceptance model and the TPB, Chen found that perceived pleasure and subjective norms have the strongest influence on loyalty among both users and nonusers [6]. Within the TPB paradigm, researchers developed a four-step analysis to investigate the psychosocial determinants of PT usage behavior [7]. Simsekoglu and Klöckner collected data through an online survey and utilized a structural equation model and the TPB to examine the role of normative and environmental beliefs, perceived attributes, innovativeness and demographic factors related to e-bike usage [8]. To study the expected usage of future metro services when they become available, Shaaban and Maher used the TPB to predict residents' intentions to use the forthcoming metro [9]. Borhan et al. proposed three new constructs and found that attitude, subjective norms and perceived behavior control in the theoretical framework can directly and positively influence the behavioral intention of Libyan car users for intercity travel [10]. These studies provide a new perspective for expanding the TPB to analyze residents' travel dependence behavior. In addition, a few researchers have also proposed new constructs for the TPB to strengthen its theoretical explanations. Conner and Abraham explored the impacts of past behavior and personality traits on intentions and behavior, as well as the relationship between health protection, exercise, predictions of intentions and self-reported behavior [11]. According to [12], four predictors, including situational factors, trust, novelty seeking and external influence, were added to the TPB to understand travelers' willingness to use the train in Petaling Jaya. To further understand commuting travel mode choice among office workers, Lo et al. extended the contents of attitude, descriptive norm and perceived control in the TPB [13]. The results indicate that these three indicators were consistently associated with intentions. Analogously, Jing et al. proposed an extended TPB to delve into the psychological factors shaped by adults' cognition and behavioral habits, and explored the factors' relationship paradigm [14]. Research on the determinants of residents' travel behavior has also produced findings under different situational conditions. On the one hand, some studies take normal conditions as the research background. Sonmez and Graefe investigated the influence of past international travel experience, types of risk associated with travel, and the overall degree of safety, using information integration theory and protection motivation theory [15]. According to [16], an integrated path analysis-discrete choice model was proposed to examine the influences on commuting mode choice. The results indicated that parking availability and the built environment at residences and workplaces both have significant effects on car use for commuting. Rahimi, Azimi and Jin presented a comprehensive analysis of people's attitudes, including preferences, perceptions, reasons and motivations toward shared mobility options and autonomous vehicles, focusing on underlying patterns and determinants [17]. On the other hand, previous studies also show that adverse weather conditions are observably associated with residents' travel behavior [18], [19].
Zanni and Ryley collected data from more than 2,000 residents and adopted two improved logit models to capture the impact of adverse weather, including heavy snow and volcanic ash, on long-distance travel behavior [20]. Wu, Liao and Rose used a survey of residents and subway ridership data to examine the interrelation between weather and travel behavior in Beijing [21]. They found that extreme weather events affect recreational travel, reduce travel demand and change travel modes. The keyword network diagram shown in Figure 1, based on the literature analyzed above, depicts how the relationships among the keywords concerning the influences on residents' travel behavior are identified and understood. Through the analysis of the keywords' network relationships, it is clear that PT is usually regarded as an alternative to cars, and that residents' selection of transport mode is influenced by many factors, such as weather, land use, travel distance and the residential self-selection (RSS) effect, as well as the psychological elements of attitude, perception, satisfaction and loyalty. Most previous studies regarding residents' travel dependence focus on the analysis of residents' car dependence, while few findings related to RDPT and its potentially modifiable determinants are available. Examples include automobile dependence, expressed through comparative levels of car ownership and usage, and transit service and usage, which vary widely and systematically across a large sample of international cities [22]. In an effort to explore residents' car dependence in a PT-dominated city, a survey of 401 car owners was implemented; the collected data were utilized to analyze why people owned cars and how dependent car owners were on their vehicles [23]. Naess investigated how urban structure matters by capturing the relations between residential location, car dependence and travel behavior [24]. In [25], household travel survey data were obtained to analyze the connection between residents' dependence on cars and physical inactivity; the results showed a large variation in physical activity among auto-dependent residents. Wang et al. used descriptive statistics and ordered logistic regressions to analyze the car usage behavior of urban residents [26]. The results showed that the purpose of automobile usage, the built environment and various socioeconomic characteristics have important effects on the intensity of, and dependence on, automobile usage. Overall, although the TPB is relatively mature in the field of travel behavior, limited efforts have been made to expand the TPB theoretical paradigm from a multi-dimensional perspective. Few studies have examined travel dependence, and the influence relationships shaping it, from the perspective of PT systems. In addition, a large number of previous studies on the influences on travel behavior have been carried out, but only from the single perspective of either subjective psychology or the objective environment; a comprehensive analysis that combines both sets of factors is lacking. In this context, the RSS effect, which remains a significant contributory factor for certain populations when sustainable modes are considered [27], should be of greater concern, because the travel behavior, and especially the transit preferences, of Chinese residents are dissimilar to those of Europeans and Americans [28].
Compared with previous research, the present study examines objective and psychological factors, including individual attributes, travel environment, travel characteristics, attitude, subjective norms and perceived behavior control. Although some of these variables have been examined separately in previous studies, they have not been inspected jointly and comprehensively. In addition, the TPB is expanded to analyze more comprehensively the causal relationship between the influencing factors and travel behavior, and a conceptual framework is constructed to evaluate these relationships quantitatively. Moreover, this study contributes to a comprehensive understanding of what determines residents' PT usage behavior and how residents make their travel choices, which supports transportation planning and design, operation management and policy formulation for decision-makers. The remainder of this paper is organized as follows. The next section proposes the theoretical framework of the extended TPB. Section 3 describes the survey design scheme, the individual discrimination model and the causal model of RDPT. Section 4 presents the relevant results from the model estimation. Section 5 discusses the implications of the findings, proposes policy implications, concludes the main results and presents ideas for future related research.

II. THE THEORETICAL ANALYSIS OF TPB
The TPB is a well-known theoretical approach to the cognitive determinants of behavior and has been successfully used to evaluate the relationship between cognitive and behavioral determinants [29], [30]. Many applications of the TPB in the field of traffic behavior also demonstrate the applicability and validity of the theory in explaining perceived decision behavior [7], [31]. The core idea of the TPB is that behavioral intention directly determines actual behavior and is jointly affected by attitudes, subjective norms and perceived behavior control [32]. The theoretical framework shows that attitudes, subjective norms and perceived behavior control have a direct influence on travel intention but only an indirect influence on travel behavior. However, the TPB focuses on behavioral intention analysis from the subjective psychological dimension and lacks an account of the objective variables influencing actual behavior. To enhance the interpretability and theoretical coverage of the TPB, many previous studies have added additional constructs to their theoretical frameworks, such as individual attributes, travel characteristics, built environment and situational factors [12], [28], [33]. First, individual attributes capture population diversity and basic personal information. Residents can be divided into groups according to their attributes, and different residents consider different self-factors when making travel decisions. Thus, individual attributes are considered influencing factors of travel behavior [34], [35], such as the education level [16], [36]. Second, in terms of travel characteristics, there is a two-way effect of influencing and being influenced in the study of travel behavior. On the one hand, the travel characteristics of distance, mode and time are treated as travel behaviors that are influenced by external variables [28], [37]. On the other hand, some research has considered travel distance, time and purpose as factors that influence travel choice behavior [35], [38]. As a result, travel characteristics are introduced as influencing factors of RDPT in this study.
Last, considering the diversity of residential communities with regard to travel behavior, studies usually focus on the residential built environment. Residential environmental factors, including residential density, mixed land use, distance to transit and street network connectivity, have been proven to have a significant positive influence on PT travel choice behavior [36], [39]. Therefore, the environment is also chosen to enhance the effectiveness of the prediction of RDPT. In terms of the relationships between these variables, the hypothesis is that the objective variables, including individual attributes, travel characteristics and environment, have an indirect influence on residents' travel behavior and a direct effect on their psychology [10], [35], while the psychological variables in the TPB have a direct effect. Additionally, RSS implies that individual attributes or attitude preferences can affect residents' choice of a residence with certain built environment characteristics, and the influence of the built environment on travel behavior could be overestimated if the RSS effect were ignored [28], [40]. Therefore, we take the RSS effect into account and attempt to control for it when the individual attributes have a significant influence on the environment and PT attitude observably influences RDPT. Thus, this paper develops an innovative and extended model framework based on the TPB, shown in Figure 2, which focuses on identifying the influence mechanism of RDPT.

III. METHODOLOGY
Based on travel survey data, this study uses the agglomerative nesting (AGNES) clustering algorithm to identify RDPT levels, and multiple structural equation models (SEMs) are then adopted to explore the influence mechanism of RDPT. The specific research process is shown in Figure 3.

A. SURVEY DESCRIPTION AND DATA COLLECTION
In this study, Beijing, which consists of 16 administrative regions, was chosen as the research case. Beijing's PT system is expanding rapidly as a result of the city's recent economic development and population growth. By the end of 2020, Beijing had formed a relatively well-developed PT infrastructure system: 24 metro lines with 727 kilometers of operating mileage had been implemented, and there were more than 1,100 bus lines in the urban area and approximately 1,000 kilometers of bus lanes. The 750-meter coverage rate of rail transit stations in the central urban area of Beijing reached 90% in 2020, and the 500-meter coverage rate of bus stations is expected to reach 99.5% in 2022. Beijing was severely disrupted by the COVID-19 epidemic in 2020; the annual total numbers of bus and subway passenger trips were 183 million and 229 million, respectively, approximately 45% lower than in 2019. In addition to traditional bus services, the city also has diversified bus services, including business shuttle buses, rapid transit buses and customized buses. In the context of the expanded TPB paradigm, a stated preference (SP) and revealed preference (RP) survey questionnaire, covering individual attributes, travel environment, travel characteristics, PT attitude, subjective norms, perceived behavior control and travel intention, was designed and implemented online in July 2020. An IP address restriction was applied so that survey information was collected only from Beijing residents. Through this survey, information related to the trips and the psychological determinants of travel, such as origin, destination, purpose, travel mode, length of the trip and mode preference, was collected.
Data on socioeconomic characteristics and travel environments, such as age, gender, level of education, income, vehicle ownership, mixed land use and distance to transit, were also collected during the survey. The items related to the TPB factors were measured using a five-point Likert scale, ranging from 1 = very low to 5 = very high; higher scores indicate a higher level of the measured construct. The items for the objective factors were measured according to the actual situation of the respondents. A total of 408 household samples were collected. Several sample control approaches, including prior-knowledge information verification and sample proportion control, were adopted to eliminate invalid questionnaires. A final dataset of 307 questionnaires was used in the study, representing an overall response rate of 75.2%. Table 1 shows the statistics of the sample data; a considerable share of the respondents are younger people with higher education. This may be because Beijing is an attractive metropolis in China and attracts many young and highly educated people to work there every year. By 2020, people from other cities accounted for 38.5% of the permanent resident population of Beijing. The data from this web-based survey were input into the Statistical Package for the Social Sciences (SPSS), version 23, to test the reliability and validity of the questionnaire data. Statistical analysis was conducted by calculating Cronbach's alpha to determine the internal consistency of the measurement instrument [41], and the Kaiser-Meyer-Olkin (KMO) and Bartlett sphericity tests were used for validity testing [34]. The Cronbach's alpha values and KMO values of the latent variables for this survey are all above 0.764 and 0.791, respectively, which exceeds the cutoff value of 0.7 recommended by [42]. Hence, all constructs of this questionnaire are deemed reliable and valid.

B. THE PT DEPENDENCE IDENTIFICATION MODEL
To accurately explore the influences on RDPT, it is essential to first divide the residents into different groups according to their PT dependence levels. The identification results provide the dependent variable of the causal analysis model for the whole sample, as well as the subgroup datasets used to analyze the PT dependence influence mechanism of the residents in each cluster in the following sections. The AGNES clustering algorithm is a widely used and well-researched unsupervised hierarchical clustering method [43]. It has been successfully applied in many research fields, such as network user behavior [44], [45], communication signals [46] and medicine [47]. A hierarchical clustering algorithm repeatedly merges the two most similar data points (or clusters) by calculating their similarity in the dataset, creating a hierarchically nested clustering tree; the dataset is then divided into clusters at different layers of the tree. The calculation flow of the algorithm is shown in Figure 4. The algorithm can reveal the hierarchical relationships between clusters and determine the optimal number of clusters. There are three methods for calculating the distance between two composite data points in hierarchical clustering: single linkage, complete linkage and average linkage.
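The clustering workflow just described — building the hierarchical tree, choosing one of the three linkage rules, and scoring candidate partitions with the contour coefficient defined next — can be sketched with standard libraries. The snippet below is an illustration, not the authors' code; the file name and feature matrix are hypothetical.

```python
# Sketch of AGNES-style hierarchical clustering with average linkage.
# X holds one row per respondent; file/column choices are hypothetical.
import numpy as np
from scipy.cluster.hierarchy import dendrogram, fcluster, linkage
from sklearn.metrics import silhouette_score

X = np.loadtxt("respondent_indicators.csv", delimiter=",")

# 'method' selects among the three linkage rules: 'single', 'complete', 'average'.
Z = linkage(X, method="average")
dendrogram(Z)  # the hierarchical nested clustering tree (cf. Figure 6)

# Score candidate cluster numbers with the silhouette (contour) coefficient.
for k in range(2, 7):
    labels = fcluster(Z, t=k, criterion="maxclust")
    print(k, silhouette_score(X, labels))
```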
Among them, average linkage calculates the distance between each data point in one set and all data points in the other set, and the mean of all these distances is taken as the distance between the two sets. Although the computational load of this method is large, the result is more reasonable than that of the other two methods. Considering the small sample size in this paper, the average linkage method, expressed as Eq. (1), is used to measure data similarity:

$$d(u, v) = \frac{1}{|u|\,|v|} \sum_{i=1}^{|u|} \sum_{j=1}^{|v|} \operatorname{dist}(u_i, v_j) \qquad (1)$$

where u and v are different data point sets, i = 1, 2, ..., |u| and j = 1, 2, ..., |v|, and |u| and |v| indicate the numbers of data points in the two sets. The contour (silhouette) coefficient is a classical cluster performance evaluation index based on sample distances, which can effectively measure the degree of dissimilarity within and between clusters. Therefore, the contour coefficient is introduced to measure the quality of the model classification, and it can be defined as follows:

$$s(i) = \frac{b(i) - a(i)}{\max\{a(i), b(i)\}} \qquad (2)$$

where a(i) represents the average distance between sample i and the other samples in its cluster, and b(i) indicates the average distance between sample i and the samples of other clusters.

C. PT DEPENDENCE INFLUENCING VARIABLE SELECTION AND MODELING
1) MODEL VARIABLE SELECTION
Effective selection of model variables is the basis of successful model construction. The extended TPB theory in Section 2 is adopted to accurately select the model variables for analyzing the influence mechanism of RDPT. Specifically, 3 objective factors and 4 subjective factors, comprising a total of 23 measurement variables, were selected; the statistical results for the variables are shown in Table 2. The intensity of mixed land use (S1), which assesses the diversity of land use, is calculated by classifying the points of interest (POIs) in Beijing into six categories: residence, employment, commerce, scenic spots, transportation and education. The indicator S1 is defined in expression (3), where N_ij represents the number of POIs of category i in region j, and A_j indicates the area of region j.

2) STRUCTURAL EQUATION MODELING FOR PT DEPENDENCE INFLUENCE
Discrete choice models can effectively describe the direct relationship between PT dependence and its influencing variables, but they lack the ability to express the structural relationships among the influencing factors, as well as between the influencing factors and PT dependence. SEM is a multivariate statistical method that analyzes the relationships between variables based on their covariance matrix [48]. SEM can be used to analyze complex multivariate data and to explore and test the causal relationships between the influencing variables and actual behavior. Additionally, compared with traditional multivariate statistical methods, SEM can explore the relationships among multiple dependent variables and break through the "black-box" expression paradigm of behavioral influence, so the approach is more suitable for analyzing the influence of multiple factors simultaneously [49]. Therefore, SEM is adopted in this paper to investigate the direct and indirect effects of internal and external factors on RDPT. To specify both the travel choice behavior and the latent variables, SEM contains two kinds of equations: a) the measurement equation, which links the latent variables to the indicator variables, and b) the structural equation, which relates the latent variables to the explanatory variables [35].
In terms of the measurement equations, the combined model can be written as

$$x = \Lambda_x \xi + \delta \qquad (4)$$
$$y = \Lambda_y \eta + \varepsilon \qquad (5)$$

where x is the vector formed by the 12 observed indicators of the exogenous latent variables, ξ is the vector composed of the 3 exogenous latent variables, Λ_x indicates the factor loading matrix of x on ξ, δ represents the measurement error vector of the 12 exogenous indicators, y is the vector composed of the 12 observed indicators of the endogenous latent variables, η indicates the vector composed of the 5 endogenous latent variables, Λ_y refers to the factor loading matrix of y on η, and ε denotes the measurement error vector of the 12 endogenous indicators. The structural equation is expressed as

$$\eta = B\eta + \Gamma\xi + \zeta \qquad (6)$$

where B represents the structural coefficient matrix of the endogenous latent variables η, Γ indicates the structural coefficient matrix of the exogenous latent variables ξ, and ζ is the error vector of the endogenous latent variables.

A. TRAVELER GROUP CLASSIFICATION
Taking residents living in Beijing, a city with an advanced PT system, as the main research objects, the three indicators of the ratio of PT travel days (TD), the ratio of PT travel number (TN) and the ratio of PT round trips were considered representative indicators of RDPT [5]. Figure 5 shows the results of the statistical analysis of these three measurement indicators. The results indicate significant heterogeneity in the travel intensity and travel mode selection of the respondents. This also reflects that different urban residents have distinct levels of PT dependence, which is the core issue whose influence mechanism we attempt to analyze and investigate. The identification of the RDPT levels and the traveler group classification were conducted using the AGNES clustering algorithm. The values of the three ratio indicators (TD, TN and round trips) were input into the initial model. The model was then run multiple times, and its parameters were adjusted accordingly; the final parameters were the distance threshold t = 1 and the decision tree depth threshold depth = 2. Figure 6 shows the hierarchical clustering results. With different cluster numbers set, the corresponding contour coefficients were calculated; the results are shown in Figure 7. The contour coefficient achieves its largest value when the cutting height is set at 60, and the optimal cluster number k is 4. The resulting clusters exhibit significant heterogeneity, which is suitable for analyzing the influence mechanism of RDPT. Thus, the respondents are divided into four clusters, ranked in descending order of RDPT. The proportions of respondents in the different clusters were 14.1%, 30.4%, 23.2% and 32.3%, respectively. The clustering results provide the dependent variable values of the PT dependence influence model for the whole sample.

B. THE INFLUENCE ANALYSIS OF RDPT
To further analyze the PT dependence influence mechanisms of different PT passenger groups, this section explores the influence relationships between the influencing factors and RDPT for the entire sample and for residents with different PT dependence levels.

1) SEM FOR THE WHOLE SAMPLE
The values of the 23 observed variables presented in Table 2 were input into the SEM to estimate the influence effects on RDPT.
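As a concrete illustration of how such a measurement-plus-structural specification can be estimated, the sketch below uses the Python semopy package with lavaan-style syntax. It is a simplified stand-in for the authors' model: the latent and observed variable names loosely abbreviate those in Table 2, only the main paths are written out, and the data file is hypothetical.

```python
# Simplified extended-TPB SEM sketch in semopy (lavaan-style syntax).
# Variable names mirror Table 2 loosely; this is not the authors' exact model.
import pandas as pd
import semopy

MODEL_DESC = """
attributes =~ A1 + A2 + A3 + A4 + A5 + A6 + A7
environment =~ S1 + S2 + S3
characteristics =~ C1 + C2
attitude =~ AT1 + AT2 + AT3
norms =~ N1 + N2
control =~ P1 + P2
intention =~ I1 + I2 + I3 + I4
attitude ~ attributes + environment + characteristics
norms ~ attributes + environment + characteristics
control ~ attributes + environment + characteristics
intention ~ attitude + norms + control
pt_dependence ~ intention
"""

data = pd.read_csv("survey_items.csv")  # 23 items plus the AGNES cluster label
model = semopy.Model(MODEL_DESC)
model.fit(data)
print(model.inspect())            # path estimates, standard errors, p-values
print(semopy.calc_stats(model))   # fit indices such as CFI and RMSEA
```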
An assessment of the structural coefficients of the overall model was conducted after verifying the measurement model, to provide a basis for testing the proposed hypotheses and the extended theoretical paradigm. To modify the model, various approaches, including adding covariation relationships between influence paths and error variables, deleting paths with insignificant influence and adjusting path weight coefficients, were employed until the modification indices (M.I.) no longer prompted modification of the model. The fit indices of the baseline model shown in Table 3 meet the cutoff criteria, which means the model fits the data well, including the classification results achieved by the AGNES clustering algorithm. Thus, the rationality and reliability of our research design in Figure 3 are indirectly verified. The direct and indirect influence effects of the factors on the PT dependence of the whole sample are presented in Figure 8, and the unstandardized and standardized model estimates are shown in Table 4. The modified model is regarded as the baseline model relative to the following three models for the different clusters of respondents. The model results are twofold. First, there is a significant relationship between individual attributes and the environment, and PT attitude has a salient impact on RDPT. This demonstrates that residents with different attributes choose to live in communities with different environments and have particular travel behavior characteristics. Therefore, the RSS effect is detected in these results. The environment still has a significant influence on RDPT after controlling for the RSS effect. Second, the psychological variables, as mediators of the objective variables, have a direct influence on RDPT, and there are significant influence relationships between the influencing variables. Thus, the validity of the extended TPB theoretical framework and the rationality of the hypothesized relationships in Section 2 are verified. The structural equation results shown in Figure 8 and Table 4 indicate that only the attributes are negatively associated with RDPT, which is consistent with previous findings [36], while the environment and travel characteristics have the strongest and weakest indirect effects on RDPT, respectively. Additionally, the direct effects of PT attitude, subjective norms and perceived control on PT dependence are estimated; the total effects are 0.183, 0.021 and 0.044, respectively. The results suggest that PT attitude significantly affects RDPT, which confirms the expectation that intentions are consistent with actual behavior. The measurement equation reveals the relationships between the observed variables and the latent variables, and several significant results are found. Bike availability (A2) and occupation (A5) are negatively associated with attributes but positively associated with RDPT. The other five attributes have the opposite effect, and income (A7) has the highest explanatory ability. Housing price (S3) has a relatively high explanatory ability for the environment, which reflects that housing prices contribute to the job-residence separation of residents in Beijing [50], exerting a strong influence on PT choice behavior. In terms of the psychological variables, the observed variables all have a high positive explanatory ability, above 0.52, and exert a positive promotion effect on RDPT.
According to the explanatory coefficients of the observed variables, residents focus more on convenience (AT2) and overall satisfaction (AT3) when making travel decisions. The degree of convenience and freedom in traveling by PT (P2) has a higher effect on residents' perceived control (0.87). In addition, cycling preference (I3) and walking preference (I4) have a positive impact on PT dependence (0.2 and 0.36, respectively). This is because 78% of the respondents who prefer cycling and walking also prefer PT. Noticeably, no effect of car travel preference (I2) on RDPT was found in Beijing, which differs from previous findings [51]. This is because a number of the respondents who maintain a preference for cars also prefer other travel modes or do not have the right to buy a car, which neutralizes the influence of the car travel preference.

2) SEM FOR RESIDENTS WITH A HIGH PT DEPENDENCE LEVEL (PTDL)
Model 1 (Figure 9), which is adjusted according to the M.I. and whose goodness-of-fit indices are reported in Table 5, focuses on the interrelationships between the influencing variables and travel intention rather than RDPT, because travel intention has a significantly positive and constant effect on RDPT. After evaluating the influence effects of the objective and psychological variables, several significant results are found. According to the results of the structural equation, the overall relationship structure of Model 1 is similar to that of the baseline model. However, the attributes and environment have a negative interactive relationship, opposite to that in the baseline model. This may be because the majority of residents with high-attribute characteristics (such as education level and income) tend to choose to live in the suburbs, with a less developed built environment, due to the high housing prices in Beijing. Noticeably, the direction of the influence of attributes on residents' travel intention is also different from that in the baseline model but is consistent with previous findings [16]. This result may arise because nearly two-thirds of the respondents with high-attribute characteristics have low car accessibility, which increases the probability of traveling by PT. In addition, the influence of the PT attitude of residents with high PTDL decreases saliently due to their limited travel choices; thus, the influence of perceived control on travel intention appears more important. The measurement equation indicates that bicycle availability (A2), taking children to school (A3) and income (A7) have negative explanatory effects, and the observed variables of attributes, apart from bicycle availability (A2), have a lower explanatory ability than in the baseline model. However, the explanatory ability of the intensity of mixed land use (S1) and housing price (S3) is higher in Model 1. Likewise, the trip purpose (C1) also has a higher explanatory ability (0.92) in Model 1. This may be because the majority of the respondents with high PTDL are commuters who have frequent travel demands [52]. Moreover, the explanatory ability of the observed variables of the psychological factors is similar to that in the baseline model. As expected, the preference for car travel shows an obvious negative effect on travel intention, in accord with previous findings [53].

3) SEM FOR THE RESIDENTS WITH RELATIVELY HIGH PTDL
Model 2 (Figure 10) has an influence structure between the latent variables similar to that of the baseline model.
The fit indices in Table 6 demonstrate that the fit of Model 2 is acceptable. Considering the effects of the objective and psychological variables on RDPT, several significant relations are found in Model 2. According to the structural equation, the relationship between attributes and environment differs from the negative effect in Model 1. This may be determined by the different social attribute characteristics of the residents in clusters 1 and 2, as only one-third of the latter have a higher level of education and income. In addition, the direction of the influence of attributes on subjective norms and perceived control is different from that in the baseline model, while the total influence effect of attributes on RDPT is consistent. Moreover, a weaker positive association is found between the environment and the psychological variables. Notably, among the psychological factors, only PT attitude has a significant positive effect on travel intention. This reflects that individually generated PT attitude occupies a dominant position in the PT dependence of residents with relatively high PTDL. Regarding the results of the measurement equation, the observed variables of attributes retain a positive explanatory level, with only occupation (A5) having no significant effect on RDPT, which is different from the result in the baseline model but consistent with previous findings [53]. The corresponding explanatory ability of the observed variables of environment and characteristics is similar to that in the baseline model, while the travel environment shows a weaker influence on the psychological factors. In addition, the degree of support for PT use (N1) and the degree of convenience and freedom of PT travel (P2) show a lower explanatory ability for their latent variables than under the whole-sample conditions, while the degree of familiarity with PT networks (P1) has a slightly higher explanatory ability. Furthermore, the explanatory ability of the observed variables of travel intention is consistent with that in the baseline model, and PT travel preference (I1) has a better explanatory effect. In particular, PT travel preference largely determines the PT dependence of the residents in cluster 2; therefore, improving PT service quality is an effective measure to increase residents' willingness to travel by PT.

4) SEM FOR RESIDENTS WITH LOW AND RELATIVELY LOW PTDL
Although residents with low and relatively low PTDL make few trips by PT in their daily lives, they commonly do not rule out the use of PT. Therefore, these residents are important groups for PT operators and managers seeking to further improve the PT sharing rate. The influence mechanisms of the PT dependence of these two clusters are next investigated jointly. The goodness of fit shown in Table 7 meets the cutoff criteria; thus, the results of Model 3 are also valid. Model 3 (Figure 11) indicates that the relationships between the endogenous and exogenous latent variables are consistent with those of the above models. As expected, the RSS effect is also found among the residents with low and relatively low PTDL. The relation between attributes and the environment coincides with that of Model 1. This may be because the residents in clusters 3 and 4 have socioeconomic attributes similar to those of the residents in cluster 1. Regarding the results of the structural equation, several compelling results are found.
The attributes of the residents are negatively associated with the psychological variables, while having a weaker influence than in the baseline model. In addition, the environment also shows a weaker positive effect on the psychological variables, while travel characteristics have a more significant positive effect on the subjective norm (0.59). PT attitude and perceived control show a similar influence on travel intention, indicating that the PT dependence of residents with low and relatively low PTDL is mainly affected by their PT attitudes. Surprisingly, the subjective norm has a negative effect on travel intention, and the degree of support for PT use from relatives and friends (N1) has a higher explanatory ability toward the subjective norm than the degree of influence (N2). According to the travel survey, although the relatives and friends of 78.7% of the respondents in cluster 3 demonstrate a high level of support for PT, almost two-thirds of these respondents have a relatively short trip distance, within 15 kilometers, which is suitable for cycling [54], [55]. Namely, although their relatives and friends are supportive of PT, most of them adopt active modes such as cycling for their relatively short trips. Regarding the results of the measurement equation, it is noticeable that only the variables of taking children to school (A3), age (A4) and education level (A6) have a positive relationship with attributes, which is heterogeneous with respect to the results in the other models. This result may be related to the attitude preferences and the different socioeconomic attributes of the respondents in clusters 3 and 4. In addition, trip distance (C2) has a higher explanatory ability than trip purpose (C1) and shows a total negative effect on RDPT. This is mainly related to the inconvenience of taking PT for 46.2% of the respondents in clusters 3 and 4 and their relatively short daily trip distances. In addition, the degree of support for PT use from relatives and friends (N1) and the degree of familiarity with PT networks (P1) show a more significant explanatory ability. This may be because of the complex PT network around the respondents' residences and their relatively young age profile. Moreover, car travel preference (I2) is positively correlated with travel intention. This may be explained by the fact that only 13.6% of the respondents in clusters 3 and 4 who have a positive preference for car travel have high availability of a vehicle, so they have to choose PT.

V. DISCUSSION AND POLICY IMPLICATIONS

In terms of the overall structural influence relations of the residents with different PT dependence levels, the findings demonstrate that residents' travel dependence behavior is directly affected by psychological factors. These factors are significantly affected by external and objective conditions, which have an indirect mediating effect on RDPT. Thus, the innovative and extended model framework proposed in this paper has been verified. In addition, the RSS effect was found in different clusters in our research context, which is consistent with previous findings [28]. The heterogeneity of the RSS effects across clusters is mainly related to the diverse socioeconomic attributes and PT attitudes. Specifically, the model results indicate that the multivariate determinants of RDPT show similar or different influences across the clusters.
Unlike in the other models, the attributes variable in Model 1 has a positive direct effect on the psychological variables and a total positive effect on travel intention. This conclusion is consistent with previous findings [7]. The environment has a significant effect on travel intention in the baseline model, while its effect in Models 1, 2 and 3 is relatively low. The reason may be that the influence of the environment is weakened through the group classification and by controlling for the RSS effect. In addition, the travel characteristics of the residents in clusters 3 and 4 have a more significant positive effect on the subjective norm and result in a total negative effect on RDPT. This finding is mainly related to the inconvenience of taking PT and the relatively short trip distances. Moreover, it is an interesting finding that attitudes toward PT have positive direct effects on travel intention in the different group models and have the highest effects on RDPT at the psychological level in Models 1 and 2 and the baseline model, which is consistent with many previous findings [8], [28]. However, perceived PT control has the strongest influence on travel intention only in Model 1. This reflects the fact that PT is the optimal travel choice for residents with high PTDL, which results in a weaker effect of PT attitudes and subjective norms on travel intention. It is worth noting that PT travel preference has the highest variable explanatory ability for travel intention, which indicates that most residents consistently recognize PT as a green and sustainable travel mode in the current social environment. The results showed that the influence mechanism of PT dependence differed significantly among the clusters. Thus, some efficient measures and policies are proposed to guide or influence travel choices for residents in different clusters according to the influences of the observed variables on RDPT. Once a strong habit is developed through policy incentives, its effect is expected to result in more frequent usage of PT in the future [30], which is beneficial for RDPT. The findings show that car availability retains a general negative influence on PT dependence. Policies restraining the use of private cars, such as reducing parking bays in congested areas, imposing high toll and parking fees [52], [56], promoting the 'park and ride' (P+R) model, and implementing the odd-even car use scheme and car travel restrictions [57], should be implemented. However, it is worth noting that low car availability may incentivize residents with low and relatively low PTDL to use cars to travel in Beijing. Thus, the intensity with which private car restriction policies are implemented, covering approximately 46.4% of residents, should be analyzed further. In addition, expanding the bike-sharing network [58] and providing diversified payment methods for bike rentals are conducive to boosting bicycle availability to improve the RDPT of these residents, who account for 44.5% of the entire sample. Furthermore, the changeable variables of the intensity of mixed land use and distance to transit have a significant positive effect on RDPT for different residents. Improving the diversity of land use to build a multigroup city [59] and optimizing the layout of PT stations could result in 38.2% of the total sample improving their PTDL.
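The effect decompositions discussed above follow directly from the estimated path coefficients: a mediated effect is the product of the paths along the chain, and a total effect is the sum of the direct and mediated components. As a rough illustration of this arithmetic only (all coefficients below are hypothetical placeholders, not the estimates reported in Tables 5-7):

```python
# Illustrative decomposition of direct, indirect and total effects in a
# path model of the kind estimated above. Coefficients are hypothetical.

# Standardized path coefficients (hypothetical):
a_att   = -0.25   # attributes -> PT attitude
att_ti  =  0.55   # PT attitude -> travel intention
ti_rdpt =  0.70   # travel intention -> RDPT
a_ti    =  0.15   # attributes -> travel intention (direct path)

indirect_on_ti = a_att * att_ti          # mediated via PT attitude
total_on_ti    = a_ti + indirect_on_ti   # total effect on travel intention
total_on_rdpt  = total_on_ti * ti_rdpt   # carried through to RDPT

print(f"indirect effect on intention: {indirect_on_ti:+.3f}")
print(f"total effect on intention:    {total_on_ti:+.3f}")
print(f"total effect on RDPT:         {total_on_rdpt:+.3f}")
```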
Considering the significant variables of PT attitudes for the residents in clusters 2, 3 and 4, other policies could also improve residents' subjective cognition and willingness to use PT. Convenience and overall satisfaction are of greater concern to residents when they travel by PT. In terms of convenience, the government should develop applicable policies and measures, such as providing real-time information on PT services [60], increasing the frequency of high-frequency bus lines, adding bus stops at places with high travel demand and encouraging multimode integration (P+R and MaaS). Additionally, these policies are conducive to improving residents' overall satisfaction with PT. In addition, adjusting PT service schedules (frequencies, operating times, number of routes) and providing congestion levels inside PT vehicles [61], improving the carriage environments and the comfort of the seats [62], and opening exclusive bus lanes in areas or periods with heavy traffic [63] are also key factors for further promoting residents' positive attitudes and affective satisfaction. For example, providing ramps on buses, popularizing low-floor buses, and increasing the number of seats reserved for senior citizens could be adopted to improve PT carriage environments [64]. Moreover, the significant impacts of subjective norms and perceived control on travel intention and RDPT indicate the importance of creating an overall pro-public transit social atmosphere and providing a highly user-friendly PT service system. In such situations, it is expected that residents might be more satisfied with PT and be more willing to travel by PT voluntarily. Meanwhile, employing the above measures to increase PT ridership can also reduce the private car share on city roads [57].

VI. CONCLUSION

This study aims to investigate the influence mechanism of RDPT to better understand how the influencing variables affect residents' PT selection behavior, based on which policies and strategies could be developed in an attempt to manage their behavior. By taking a more comprehensive perspective than previous studies, this study has four objectives: (1) to analyze residents' transport dependence behavior from the perspective of PT instead of private cars, (2) to extend the TPB framework and include both objective and psychological variables in measuring the influences on RDPT, (3) to consider the RSS effect, and (4) to develop multigroup SEMs for different clusters of respondents to specifically explore the influencing mechanism of RDPT. A better understanding of the interaction relations between the objective and psychological variables and RDPT can be used to obtain optimal operation strategies and thus achieve better PT service quality. The research results indicate that the proposed theoretical concept, hypotheses of variable relationships, and models are effective in measuring the influence mechanism of RDPT. The findings show that the indirect effects of the objective variables, the direct effects of the psychological variables on RDPT, and the RSS effect were detected in the different travel clusters. Moreover, some management and operational policies are proposed from the perspectives of objective conditions and social psychology based on their different influences on RDPT. It is hoped that these implications will work effectively in the current context, at least for certain groups of individuals.
Admittedly, this study has three limitations that call for further investigation to develop a greater understanding of overall travel mode choice behavior. First, the study focuses on RDPT in a normal environment but lacks a comparative analysis of the influences under special conditions, especially public health events. Future studies should attempt to analyze the influencing mechanism of RDPT under different research conditions to better understand and guide residents' PT usage behavior. Second, future research on trip-based behavior should consider the relationships between companions and respondents in each trip, to detect how the heterogeneity of the companions chosen for different travel purposes influences the respondents. Third, this study was carried out only in the case of Beijing, exploring horizontally the impact of different resident attributes within a single city on PT dependence behavior. A vertical comparison of the influencing factors of RDPT across different cities and countries will be explored in future work.
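For readers reproducing the cutoff checks applied to the fit statistics in Tables 5-7, the two indices most often used reduce to short formulas. A minimal sketch using the standard RMSEA and CFI definitions; the chi-square values below are hypothetical, not those of the fitted models:

```python
# Standard SEM fit indices from chi-square statistics. All input values
# here are hypothetical placeholders for illustration.
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation (< 0.08 commonly acceptable)."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2, df, chi2_base, df_base):
    """Comparative fit index (> 0.90 commonly acceptable)."""
    d_m = max(chi2 - df, 0.0)
    d_b = max(chi2_base - df_base, d_m)
    return 1.0 - d_m / d_b if d_b > 0 else 1.0

chi2, df, n = 812.4, 367, 900          # hypothetical model values
chi2_b, df_b = 5230.0, 406             # hypothetical baseline (null) model
print(f"RMSEA = {rmsea(chi2, df, n):.3f}, CFI = {cfi(chi2, df, chi2_b, df_b):.3f}")
```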
2021-10-14T13:31:45.394Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "61c41df70e94bc04620d9577c671cae036b67942", "oa_license": "CCBY", "oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/6514899/09557286.pdf", "oa_status": "GOLD", "pdf_src": "IEEE", "pdf_hash": "b92a8fa1c860e554dd126f1bf8f769500a327bc2", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Computer Science" ] }
14985487
pes2o/s2orc
v3-fos-license
Validation and Recommendation of Methods to Measure Biogas Production Potential of Animal Manure

In developing countries, biogas energy production is seen as a technology that can provide clean energy in poor regions and reduce pollution caused by animal manure. Laboratories in these countries have little access to advanced gas measuring equipment, which may limit research aimed at improving locally adapted biogas production. They may also be unable to produce valid estimates of an international standard that can be used for articles published in international peer-reviewed science journals. This study tested and validated methods for measuring total biogas and methane (CH4) production using batch fermentation and for characterizing the biomass. The biochemical methane potential (BMP) (CH4 NL kg−1 VS) of pig manure, cow manure and cellulose determined with the Moller and VDI methods was not significantly different in this test (p>0.05). The biodegradability, expressed as the ratio of BMP to theoretical BMP (TBMP), was slightly higher using the Hansen method, but the differences were not significant. The degradation rate, assessed by the methane formation rate, showed wide variation among the batch methods tested. The first-order kinetic constant k for the cumulative methane production curve was highest when the two animal manures were fermented using the VDI 4630 method, indicating that this method was able to reach steady conditions in a shorter time, reducing fermentation duration. In precision tests, the repeatability relative standard deviation (RSDr) for all batch methods was very low (4.8 to 8.1%), while the reproducibility relative standard deviation (RSDR) varied widely, from 7.3 to 19.8%. In determination of biomethane concentration, the values obtained using the liquid replacement method (LRM) were comparable to those obtained using gas chromatography (GC). This indicates that the LRM could be used to determine biomethane concentration in biogas in laboratories with limited access to GC.

INTRODUCTION

In China, about 30 million small biogas plants, and in India about 4 million biogas plants, are planned or already in operation (Bhattacharya and Jana, 2009; Jiang et al., 2011). Due to the many benefits of biogas digestion, it is anticipated that this technology will also be promoted in other developing countries, such as the Philippines, Thailand, Nepal and Brazil. However, to support efficient use of the technology, there must be strong local competence to assess the biogas production potential and to develop appropriate management schemes. At present, end-users in Vietnam, China and India often fail to control the technology efficiently, due to poor management competence. This leads to production being inadequate in periods of high demand in low-temperature regions during winter, and excessive during periods of high temperature and high production of excreta (Cu et al., 2012). There is thus a need to improve knowledge about biogas production potential using local biomass, in order to develop digesters adapted to the local environment and individual management schemes, thus ensuring production of the gas needed for cooking, heating and light (Vu et al., 2007; Cu et al., 2012). Hence, there is an associated need to review, develop and validate methods to assess biogas production which can be used in laboratories with limited access to analytical instruments.
Research carried out at laboratories in regions with limited access to high-tech instruments must be of international standard, so as to ensure useful results and contribute to progress in development of the technology. Biochemical methane potential (BMP), the maximum methane production capacity of each feedstock, is a key parameter in designing and operating a successful real-scale biogas plant. A recent study using data from different laboratories indicated that results may vary between laboratories (Triolo et al., 2011), confirming observations by Angelidaki et al. (2009). Thus BMP values determined by different researchers and institutes cannot usually be compared, due to differences in the experimental design and equipment used and variations in temperature and experimental conditions (Hansen et al., 2006; Kiilholma, 2009; Raposo et al., 2011). The aim of the present study was therefore to test and validate methods and analytical procedures suitable for use in simple laboratories. The specific objective was to determine and compare the analytical precision of the most widely used BMP and gas volume measurement methods. Determination of methane concentration in biogas by gas chromatography (GC) and by absorption of CO2 in alkaline liquid was compared in order to test the precision of an alternative method for determining methane concentration in laboratories with limited access to analytical equipment.

Overview of methods tested

The fermentation procedures, gas volume measurement methods and precision tests evaluated are summarised in Table 1. Regarding the fermentation procedures, the three most widely used were tested: the German standard procedure VDI 4630 (VDI, 2006) ('VDI method'), the BMP procedure used by Møller et al. (2004) ('Moller method') and the procedure proposed by Hansen et al. (2004) ('Hansen method'). Regarding the gas volume measurement methods, two liquid replacement tests were compared with the large-syringe method. The precision of gas concentrations determined by absorbing CO2 in alkaline liquid was tested by comparison with the results obtained by GC; the gas-tightness of liquid replacement was tested using different tubes; and the analytical precision in the determination of dry matter (DM) and volatile solids (VS) was tested.

Comparison of BMP by different batch protocols

Substrate and inoculum used: The fattening pig manure ('pig manure') and dairy cow manure ('cow manure') used as substrates were collected from Fangel biogas plant. Microcrystalline cellulose (Sigma Aldrich) was used as a standard substrate for all three methods according to VDI 4630 (VDI, 2006); microcrystalline cellulose is a commonly used reference substrate for assessing the quality of batch experiments. The two most widely used digestion temperatures were chosen: thermophilic (55°C) and mesophilic (35°C). Since the Hansen and Moller methods describe anaerobic digestion under thermophilic and mesophilic conditions, respectively, thermophilic digestions were carried out for the Hansen method and mesophilic digestions for the Moller method. VDI 4630 (VDI, 2006) describes both thermophilic and mesophilic conditions, hence the mesophilic condition was chosen for VDI 4630. Two different inocula were used: mesophilic inoculum from Fangel biogas plant for the VDI and Moller methods, and thermophilic inoculum from Linko biogas plant for the Hansen method. Prior to the BMP test, biochemical and physiochemical analyses of the pig manure and cow manure were carried out (Table 2).
DM, VS, crude lipid, total ammoniacal nitrogen (TAN = NH3 + NH4+) and total Kjeldahl nitrogen (TKN) were determined according to standard procedures (APHA, 2005). "The protein content was determined by multiplying the difference between TAN and TKN with factor 6.25" (Triolo et al., 2011). Volatile fatty acids (VFA) were determined according to the method of Lahav et al. (2002). Ash-free acid detergent lignin (ADL) was determined by acid detergent extraction, as described in ISO Standard 13906 (ISO13906, 2009).

BMP assays: Each of the three methods was monitored by triplicate measurements of gas production from each of the substrates (pig manure, cow manure and cellulose). Twelve 1-litre digester glass bottles (reactors) were used for each of the batch fermentation tests.

VDI 4630: The preparations for fermentation were carried out according to VDI 4630 (VDI, 2006). A test medium was prepared to ensure sufficient nutrients for bacterial growth and standard pH buffer capacity, following the recommendations of VDI 4630 and ISO Standard 11734 (ISO11734, 1995). The composition of the medium used was as follows: anhydrous potassium dihydrogen phosphate (KH2PO4) 0.27 g; disodium hydrogen phosphate dodecahydrate (Na2HPO4·12H2O) 1.12 g; ammonium chloride (NH4Cl) 0.53 g; calcium chloride dihydrate (CaCl2·2H2O) 0.075 g; magnesium chloride hexahydrate (MgCl2·6H2O) 0.10 g; iron(II) chloride tetrahydrate (FeCl2·4H2O) 0.02 g; sodium sulphide nonahydrate (Na2S·9H2O) 0.1 g. The constituents were added to 1 litre of distilled water containing less than 1 mg/L dissolved oxygen. The test medium prepared was flushed with nitrogen before the batch test to remove oxygen, and then 150 ml of the prepared test medium was added to the reactors. The inoculum was degassed for two weeks. The inoculum and substrate (i.e. manure and test substrate) were added at an inoculum:substrate ratio (I:SR) of 2:1 (VS basis), giving a total volume of the inoculum-substrate mixture of 620 ml. Three reactors containing 620 ml inoculum were used to measure gas production from the inoculum.

Moller method: The preparations for fermentation were carried out according to Møller et al. (2004). The same inoculum as used for VDI 4630 was applied. The difference between the Moller and VDI methods lies in the I:SR and the buffer solution. The inoculum and substrate were added at a ratio of 1:1 (VS basis) and no buffer solution was added. The reactors were each filled with 770 ml inoculum, which was gently homogenised.

Hansen method: The preparations for fermentation were carried out according to Hansen et al. (2004). Thermophilic inoculum was degassed for 3 days at 55°C prior to the batch test. Then 50 ml of each pig and cow manure sample was mixed with 200 ml inoculum and the mixture was homogenised by mixing carefully. No buffer solution was added.

After mixing the substrates with inoculum according to the three different methods, all reactors were closed carefully with butyl rubber bungs. The headspace of all reactors was flushed with nitrogen gas to ensure anaerobic conditions. The reactors were placed in a climate chamber at 37°C for the Moller method and at 55°C for the Hansen method, and in a thermostat-controlled water bath at 37°C for the VDI method. In the Moller and Hansen methods, the volume of biogas was measured using a 1,000-ml syringe (Hamilton Super Syringe) supplied with a tube with a needle at the open end. Gas volume produced using the VDI method was measured by continuous connection to a liquid replacement system (CLRS).
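The dosing arithmetic behind these inoculum-to-substrate ratios is simple to script. A minimal sketch of finding the substrate mass for a target I:SR on a VS basis; the VS fractions are illustrative assumptions, not the measured values in Table 2:

```python
# Inoculum:substrate dosing on a volatile solids (VS) basis.
def substrate_mass(inoculum_mass_g, inoculum_vs_frac, substrate_vs_frac, isr):
    """Substrate mass (g) so that inoculum VS / substrate VS equals `isr`."""
    inoculum_vs = inoculum_mass_g * inoculum_vs_frac
    return inoculum_vs / (isr * substrate_vs_frac)

# e.g. 400 g inoculum at 2.5% VS, manure at 6% VS, I:SR = 2:1 (VDI 4630)
m = substrate_mass(400.0, 0.025, 0.06, 2.0)
print(f"add {m:.1f} g substrate")   # 400*0.025 / (2*0.06) = 83.3 g
```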
For all three methods, the gas yield was measured daily during the first week of incubation, every 2 or 3 days in the second week, and then weekly or every 2 weeks during the following incubation period. Incubation was stopped when the gas production rate was less than 1% of the accumulated gas produced. The methane concentration in the collected biogas was measured weekly over the whole experimental period for all three methods. The gas samples were stored in 10-ml vials with butyl bungs, which were filled by flushing gas through the vial using a 1,000-ml syringe. The CH4 and CO2 concentrations were measured using a gas chromatograph (HP 6890 series) equipped with a thermal conductivity detector.

Methods to measure volume of biogas produced

Three techniques were used to measure the biogas produced. For each technique, the VDI batch fermentation method was used to ferment pig manure, cow manure, cellulose and inoculum samples in triplicate. The measurements of biogas produced were carried out using a syringe, a liquid replacement system used intermittently (LRS), and a liquid replacement system continuously connected to the reactors (CLRS).

Intermittent measurements with syringe: Biogas volume was measured using a 1,000-ml syringe. The syringe was connected to the reactors by injecting the needle through the butyl bung, then drawing the plunger out until the pressure in the headspace dropped to ambient pressure. The volume of gas in the syringe was taken as a measurement of the gas produced.

Intermittent measurements with liquid replacement system (LRS): Biogas volume was measured using an LRS, which was connected to the reactors with a needle at each measurement time. The volume of gas produced was measured by replacing liquid, i.e. the headspace of the batch fermentation flask was connected to a cylindrical flask filled with the liquid, with the opening connected by a hanging tube to a container of the same liquid; the biogas produced flowed from the headspace up into the cylindrical flask and replaced the liquid. The hanging tube prevented the gas from flowing from the cylindrical measuring flask to the liquid container. The volume of gas was taken as the volume of released water. The LRS was connected to the reactors at the same time intervals as when measuring biogas volume production with a syringe.

Continuous measurements with liquid replacement system (CLRS): Biogas volume was measured using a CLRS, which was permanently connected to the reactors for the entire experimental period.

Wet chemistry CH4 measuring method

The concentration of CH4 in biogas is often measured by absorbing CO2 in an alkaline liquid (Guwy, 2004; Rozzi and Remigi, 2004; Raposo et al., 2011). A cylindrical flask was filled with liquid and placed with the opening in the same liquid in a container (Figure 1), so that the flask remains full of liquid. To the inside of the cylindrical flask was attached a tube closed with a clamp and with a syringe at the other end. The syringe was injected through the butyl bung of the reactor, the clamp was opened, and the gas produced flowed into the cylindrical flask and replaced the liquid. The amount of liquid replaced corresponds to the volume of gas produced. If the liquid is acid, the volume of biogas produced is measured, while if the liquid is basic, the CH4 production is measured. In the test, a 50-ml graduated measuring cylinder and a 1-litre container were used, as described above.
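Before completing the wet-chemistry procedure below, it is worth noting how the reading schedule above is turned into a methane yield: cumulate the reactor readings, subtract the inoculum blank, normalise by the VS added, and stop the incubation once the daily increment falls below 1% of the accumulated gas. A minimal sketch with invented numbers; the blank scaling factor is an assumption standing in for the inoculum share in each reactor:

```python
# Cumulative net gas bookkeeping for a batch BMP run. All readings are
# illustrative, not measured values.
def net_specific_gas(sample_ml, blank_ml, vs_added_g, blank_scale=1.0):
    """Cumulative net gas (ml per g VS) and a 'stop incubation' flag."""
    cum_sample, cum_blank, series, stop = 0.0, 0.0, [], False
    for s, b in zip(sample_ml, blank_ml):
        cum_sample += s
        cum_blank += b
        net = cum_sample - blank_scale * cum_blank
        series.append(net / vs_added_g)
        if cum_sample > 0 and s < 0.01 * cum_sample:
            stop = True   # daily rate below 1% of accumulated production
    return series, stop

daily_sample = [120, 210, 150, 80, 40, 15, 6, 2]   # ml/day, hypothetical
daily_blank  = [30, 40, 25, 15, 8, 4, 2, 1]        # inoculum-only reactor
curve, done = net_specific_gas(daily_sample, daily_blank, vs_added_g=12.0)
print(f"final yield: {curve[-1]:.1f} ml per g VS, stop={done}")
```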
About 700 ml of 0.5 M hydrochloric acid (HCl) was used to fill the cylinder and the container. On injecting the needle into the bung of a reactor, biogas bubbles through the liquid and fills the cylinder, replacing the liquid, and the gas volume can then be read (V1, ml). Thereafter, KOH was added to raise the pH above 9 and absorb the CO2 and H2S. This absorption reduces the volume of gas in the measuring cylinder (V2, ml). The volume V2 is an estimate of the CH4 in the gas; the difference between the initial and final volumes corresponds to the CO2 content of the biogas, the H2S concentration being taken as negligible compared with the CO2 concentration.

Statistical analysis

Data were evaluated using analysis of variance (ANOVA) followed by the Ryan-Einot-Gabriel-Welsch multiple-range test where appropriate (SAS 9.2 TS Level 2M0). In all cases, a significance level of α = 0.05 was used. When necessary, data were transformed to obtain normality and homogeneity of variances.

Calculating the methane potential in terms of standard temperature and pressure (STP): The ultimate methane and biogas production at STP, in litres of CH4 and biogas per kg of organic matter expressed as volatile solids (VS), is presented in Table 3 and was calculated as (eq. 1):

V0dr = V × (P − Pw) × T0 / (p0 × T)    (eq. 1)

where V0dr is the volume of the dry gas in the normal state (NL); V is the volume of the gas as read off (ml); P is the pressure of the gas phase at the time of reading (hPa); Pw is the vapour pressure of the water as a function of the temperature of the ambient space (hPa); T0 is the normal temperature (= 273 K); p0 is the normal pressure (= 1,013 hPa); and T is the temperature of the fermentation gas or of the ambient space (K).

Biodegradability and the rate of methane production: Anaerobic biodegradability can be determined from the ratio of the BMP obtained to the theoretical BMP (TBMP), i.e. (BMP/TBMP) × 100 (%) (Triolo et al., 2011; Triolo et al., 2012). In the present study, TBMP was determined according to Triolo et al. (2011) as a linear combination of the lipid, protein, carbohydrate and lignin contents (eq. 2), with TBMP as CH4 NL (kg VS)−1 and lipid, protein, carbohydrate and lignin as g (kg VS)−1. The coefficients in equation (2) are the unit methane formations derived from Buswell's anaerobic degradation equation (Symons and Buswell, 1933) for each organic compound, using an average formula. The empirical formula of lignin was C10H13O3 according to Triolo et al. (2011). The anaerobic degradation rate of each BMP method was compared by employing a nonlinear regression test using Sigma Plot 5 (GraphPad Software, USA). Cumulative methane production curves were fitted to the first-order kinetic model and first-order kinetic constants (k) were obtained, assuming hydrolysis to be the rate-limiting step (eq. 3):

Bt = B0 × (1 − e^(−kt))    (eq. 3)

where Bt (CH4 NL (kg VS)−1) is the cumulative methane yield at time t, B0 (CH4 NL (kg VS)−1) is the maximum value of BMP, k (d−1) is the first-order kinetic constant, and t is the time (days).

Evaluation of precision: A method for assessing the detection limit, repeatability and reproducibility of biogas measurements has been developed by Hansen et al. (2004). This method can be used by researchers who have access to Excel data treatment programs.

Repeatability: The repeatability and reproducibility of the batch fermentation methods can be determined using the series of triplicate measurements of the methane potential of cellulose, which was the most homogeneous biomass, i.e. biomass characteristics do not contribute to variation in the measurements.
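Before the precision formulas, note that the STP normalisation in eq. (1) is a one-line conversion in code. A minimal sketch with illustrative readings (the pressure, vapour pressure and temperature values are assumptions):

```python
# Dry gas volume at STP from a raw volume reading, per eq. (1).
def v0_dry_ml(v_ml, p_hpa, pw_hpa, t_kelvin, t0=273.0, p0=1013.0):
    """Dry gas volume at standard temperature and pressure (ml)."""
    return v_ml * (p_hpa - pw_hpa) * t0 / (p0 * t_kelvin)

# e.g. 250 ml read at 1005 hPa, 31.7 hPa water vapour, gas at 298 K
print(f"{v0_dry_ml(250.0, 1005.0, 31.7, 298.0):.1f} ml at STP")
```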
The repeatability (r) was defined as 'the uncertainty of repeated measurements of the same sample within the same analytical series' (ISO5725-2, 1994; Hansen et al., 2004). The repeatability of two measurements of one sample was (eq. 4):

r1 = 1.96 × √2 × sr    (eq. 4)

where sr is the standard deviation of the average of the cellulose measurements from the series of measurements carried out using a specific method. This repeatability represents the interval r1 within which two measurements of the same sample are considered similar.

Reproducibility: The reproducibility of the measurements can also be estimated using the standard deviation sR of the measured gas production between series of measurements of biogas production from fermentation of cellulose. Hansen et al. (2004) recommend the use of the ISO 5725 formula (ISO5725-2, 1994), R = 1.96 × √2 × sR, where R is the reproducibility, which reflects variation in measurements due to differences caused by the data not coming from measurements carried out in the same series (period), adding effects such as variation in inoculum, variation in environment (e.g. incubators performing slightly differently between series) and effects of management. The R value indicates the interval within which two average values from two series are considered equal. In the present study, the precision of the BMP results was estimated by employing the repeatability relative standard deviation (RSDr) and the reproducibility relative standard deviation (RSDR) according to the practical guide for ISO 2005 (ISO/TR22971, 2005). RSDr was calculated using equation (7):

RSDr = (SDr / x̄) × 100    (eq. 7)

where x̄ is the mean of the triplicate values and SDr is the repeatability standard deviation from the triplicate results. RSDR was calculated using equation (8):

RSDR = (SDR / x̄overall) × 100    (eq. 8)

where x̄overall is the overall mean and SDR is the reproducibility standard deviation between triplicates from the different groups.

BMP results using different fermentation protocols

The results for BMP showed large variations depending on the procedure used (Table 3). These large differences in BMP measured by different procedures are in agreement with Raposo et al. (2011), who reported that the relative reproducibility standard deviation of BMP measured by different laboratories was large, ranging from 15 to 34% with outliers, and from 8% to 11% excluding outliers. Overall results for the cumulative methane production of pig manure and dairy cow manure using the three different batch fermentation techniques are presented in Figure 2; they fit the first-order kinetic curves very well, as can be seen in Figure 3. The BMP (CH4 NL kg−1 VS) of pig manure, cow manure and cellulose determined with the Moller and VDI methods was not significantly different in this test (p>0.05) (Table 3). However, the CH4 production from cow manure and cellulose measured with the Hansen method differed significantly from that measured with the Moller or VDI methods. All methods gave similar estimates of biogas production from pig manure. The BMP measured with the Hansen method was higher for cow manure and lower for cellulose than the estimates using the VDI or Moller methods. Some recent studies have indicated that high-temperature incubation increases gas production from organic matter with a high concentration of slowly digestible organic matter (Alvarez and Lidén, 2008; Ferrer et al., 2008). Thus the results obtained here with the Hansen method can be expected, as the cow manure, with its higher concentration of slowly digestible organic material, produced more gas than the two other substrates.
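As an aside, the precision statistics defined in eqs. (4)-(8) reduce to a few lines of code. The sketch below uses invented cellulose triplicates and, for RSDR, the simplification of taking the standard deviation of the series means rather than the full ISO pooling:

```python
# Repeatability and reproducibility RSDs for triplicate BMP series.
# All values are illustrative placeholders.
import statistics as st

def rsd_r(triplicate):
    """Repeatability RSD (%) within one series of triplicates (eq. 7)."""
    return 100.0 * st.stdev(triplicate) / st.mean(triplicate)

def rsd_R(series):
    """Reproducibility RSD (%) across series (simplified form of eq. 8)."""
    means = [st.mean(s) for s in series]
    return 100.0 * st.stdev(means) / st.mean(means)

series = [[352, 341, 360], [318, 330, 322], [365, 349, 371]]  # CH4 NL/kg VS
print(f"RSDr per series: {[round(rsd_r(s), 1) for s in series]}")
print(f"RSDR across series: {rsd_R(series):.1f} %")
```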
The low gas production from pig manure measured with the Hansen method was probably due to NH3 inhibition, as can be seen in Table 3. Similarly, Angelidaki and Ahring (1994) reported increasing NH3 inhibition with increasing temperature. Hence, biogas digesters fed with pig manure are often run at mesophilic temperatures (35 to 37°C).

Precision: The precision of the batch methods tested was evaluated employing RSDr, RSDR and the ratio between BMP and TBMP, using cellulose as the standard substrate (Table 4). RSDr for all the batch methods was very low (4.8 to 8.1%). On the other hand, RSDR showed large variation, ranging from 7.3 to 19.8%. Nevertheless, BMP/TBMP was close to 0.90 using the Moller and VDI methods. Thus a fraction of the VS is not transformed into biogas, due to the use of carbon for growth of the microorganisms. VDI 4630 (VDI, 2006) states that BMP/TBMP must reach at least 80% in control batches. BMP/TBMP was only 0.61 with the Hansen method, and this, together with a very low RSDR, shows that the BMP obtained was consistently low. The low BMP results for cellulose obtained using the Hansen method could have been caused by a large amount of gas production at the start of the fermentation period. This gas production results in high pressure in the headspace of the batch reactors at high temperature and with active inocula, as rising pressure creates a back-pressure resulting in higher gas losses (VDI, 2006). Such gas production probably occurred for complex reasons, including the thermophilic conditions, gas production from the inoculum having been subjected to only 3 days of degassing, and fast degradation of the cellulose.

Biodegradability and degradation rate: The biodegradability according to the BMP/TBMP ratio was slightly higher using the Hansen method, but there was no clear difference between methods. The values obtained were 34.2 (1.14)% for the Hansen method, 32.9 (7.9)% for the Moller method and 33.8 (9.1)% for the VDI method. On the other hand, biodegradability clearly varied between the two manures tested, ranging from 37.4 (3.5)% for pig manure to 29.9 (4.4)% for cow manure. The degradation rate, assessed by the methane formation rate, showed wide variation between the three batch methods. The curves of cumulative methane production of pig and cow manure using each batch method and the best-fitted first-order kinetic curves are presented in Figure 3. As can be seen from the diagram, the curve of cumulative methane production using the Moller method fitted the first-order kinetic curves best. However, all cumulative methane production curves also showed good agreement with the first-order kinetic curves for the Hansen and VDI methods. The coefficient of determination (R²) between the cumulative methane production curve and the first-order kinetic curves was highest for the Moller method, i.e. 0.9713 for pig manure and 0.9827 for cow manure. For the Hansen method, the values obtained were similar, 0.9654 for pig manure and 0.9815 for cow manure, while for the VDI method, R² was comparatively low, 0.9256 for pig manure and 0.9591 for cow manure. There was a tendency for cow manure to have a slightly higher R². The results may indicate that hydrolysis was the dominant rate-limiting step for the degradation of cow manure, while the lower R² of pig manure suggests that methanogenesis from a high content of hydrolyzed components, i.e. VFA, could be the dominant rate-limiting step at the beginning of the fermentation procedure.
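The nonlinear regression described above (performed in SigmaPlot in this study) can be reproduced with any least-squares fitter. A sketch of fitting eq. (3) with SciPy to hypothetical data points:

```python
# Fit the first-order kinetic model Bt = B0 * (1 - exp(-k*t)) to a
# cumulative methane curve. Data points below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, b0, k):
    return b0 * (1.0 - np.exp(-k * t))

t = np.array([1, 2, 4, 7, 10, 14, 21, 28, 35], dtype=float)     # days
b = np.array([60, 105, 165, 215, 245, 268, 285, 292, 295.0])    # NL CH4/kg VS

(b0, k), _ = curve_fit(first_order, t, b, p0=[300.0, 0.1])
resid = b - first_order(t, b0, k)
r2 = 1.0 - np.sum(resid**2) / np.sum((b - b.mean())**2)
print(f"B0 = {b0:.0f} NL/kg VS, k = {k:.3f} 1/d, R^2 = {r2:.4f}")
```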
The first-order kinetic constant k was highest when the two animal manures were fermented using the VDI method, showing rapid degradation of the substrate. The results indicate that the VDI method reached steady conditions in a shorter time, reducing the fermentation duration. The reason for the higher degradation rate with the VDI method is probably the higher I:SR, giving a kinetic advantage due to a larger bacterial population within the substrate (Raposo et al., 2006). This study showed that increasing the I:SR has, to some extent, a positive influence on shortening the duration of the fermentation. In addition, the medium added as buffer solution in the VDI method could accelerate microbial activity, resulting in an increased degradation rate. Surprisingly, the Hansen method had a slightly lower k value than the VDI method, even though thermophilic conditions provide a kinetic advantage for the degradation rate. This could be because the degassing period of 3 days was short, leaving a considerably higher residual methane potential in the inoculum itself, which consequently delayed degradation of the substrate.

Comparison of biogas volume determination with three different measurement techniques

CLRS and syringe extraction are two widely used methods for measuring biogas volume in studies related to biogas research (Abu-Dahrieh et al., 2011), but LRS was used for this purpose for the first time in the present study. The differences in gas volumes obtained using the three measurement techniques were much smaller than the differences caused by the different fermentation procedures (Figure 4). Nevertheless, LRS showed a tendency towards higher gas volume measurements than the syringe and CLRS methods. The reason could be that the syringe plunger was not withdrawn far enough to capture the total production in each test, leaving a higher pressure in the headspace. The CLRS method is more subject to small leaks in the set-up, as the biogas is contained not only in the digester but also throughout the whole water replacement system.

Test of gas concentration measuring techniques: Most researchers measure the CH4 concentration in biogas by GC, which is precise at the concentration levels in the gas (Shahriari et al., 2012). Alternatively, the CH4 concentration can be determined by the liquid replacement method (Demirer et al., 2000), whereby the biogas volume produced is determined by replacement of an acid liquid, after which a base is added to the liquid and the CO2 is absorbed. This wet chemistry method can be used in all laboratories where scientists have access to acids and bases, and it is simple and cheap compared with the GC method. Therefore, the accuracy of the method was tested here. The results showed that when raw data from the liquid replacement method (LRM) were used to compare the methods, the CH4 concentration (%) measured with LRM was linearly related to that measured with GC (Figure 4). This was the case both for measurements of CH4 gas production from each substrate and for pooled estimates. There are gases other than CH4 and CO2 in the biogas, i.e. H2O, NH3, H2S and N2O (VDI, 2006; Chen et al., 2008). H2S will also be absorbed in the basic liquid and, of the remaining gases, H2O is without doubt the most abundant. After accounting for the water vapour concentration of the gas, the CH4 concentration determined was slightly higher using the LRM (68.00%) than the GC method (64.94%).
The standard deviation and relative standard deviation between the two methods were 3.15% and 4.70%, respectively, showing that the differences were not very great. Nevertheless, there was a tendency for higher CH4 concentrations to be measured when using LRM than when using GC. This could be due to a small amount of CO2 (less than 5%) not being dissolved in the base liquid. In addition, trace gases such as NH3 and N2O could affect the results to some extent. However, the very small differences indicate that in laboratories with limited access to expensive equipment such as GC, the simple, cheap and affordable LRM could be used to measure biogas composition.

CONCLUSIONS

Biodegradability was slightly higher using the Hansen method, but the differences were not significant. The higher degradation rate, combined with no apparent system instability, suggests that the VDI method could be the most suitable batch method for determining the BMP of pig slurry, with the shortest fermentation duration. However, the Hansen method could be preferable for determining the BMP of cow slurry, which contains highly resistant organic compounds and little TAN. With regard to the determination of biomethane concentrations, the LRM differed only slightly from GC and could thus be used to determine biomethane concentrations in biogas in laboratories with limited access to GC.
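As a worked illustration of the LRM-GC comparison underlying these conclusions, the sketch below computes a CH4 fraction from the acid/base volumes (V1, V2) and compares paired LRM and GC readings by linear regression and by the spread of their differences; all readings are invented for illustration:

```python
# Wet-chemistry CH4 fraction and LRM-vs-GC agreement check.
# All readings are hypothetical.
import numpy as np

v1, v2 = 48.0, 32.5                     # ml biogas, ml left after CO2 absorption
print(f"LRM CH4 fraction: {100.0 * v2 / v1:.1f} %")

lrm = np.array([66.5, 68.2, 69.0, 67.1, 68.8])   # % CH4 by LRM
gc  = np.array([63.9, 65.1, 66.2, 64.0, 65.5])   # % CH4 by GC
slope, intercept = np.polyfit(gc, lrm, 1)
diff = lrm - gc
print(f"LRM = {slope:.2f} * GC + {intercept:.2f}")
print(f"mean bias {diff.mean():.2f} %, SD {diff.std(ddof=1):.2f} %")
```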
2017-04-03T19:22:18.927Z
2013-06-01T00:00:00.000
{ "year": 2013, "sha1": "36f0bf9c6fab6e394f099a7a0470c2dfecacfe28", "oa_license": "CCBYNC", "oa_url": "http://www.ajas.info/upload/pdf/ajas-26-6-864-15.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d59cc5669385960efa843cb4a05ada1206758569", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science", "Medicine" ] }
23690449
pes2o/s2orc
v3-fos-license
Purification of a 31,000-Dalton Insulin-like Growth Factor Binding Protein from Human Amniotic Fluid: ISOLATION OF TWO FORMS WITH DIFFERENT BIOLOGIC ACTIONS*

Human amniotic fluid has been shown to contain a protein that binds insulin-like growth factors I and II (IGF-I and IGF-II). Partially purified preparations of this protein have been reported to inhibit the biologic actions of the IGFs. In these studies our laboratory has used a modified purification procedure to obtain a homogeneous preparation of this protein, as determined by polyacrylamide gel electrophoresis and amino acid sequence analysis. During purification, the ion exchange chromatography step resulted in two peaks of material with IGF binding activity, termed peaks B and C. Each peak was purified separately to homogeneity. Both peaks were estimated to be 31,000 daltons by polyacrylamide gel electrophoresis and their amino acid compositions were nearly identical. Amino acid sequence analysis showed that both peaks had identical N-terminal sequences through the first 28 residues. Neither protein had detectable carbohydrate side chains and each had a similar affinity for radiolabeled IGF-I (1.7-2.2 × 10" liters/mol). In contrast, these two forms had marked differences in bioactivity. Concentrations of peak C material between 2 and 20 ng/ml inhibited IGF-I stimulation of [3H]thymidine incorporation into smooth muscle cell DNA. In contrast, when peak B (100 ng/ml) was incubated with IGF-I there was a 4.4-fold enhancement of the stimulation of DNA synthesis. Additionally, pure peak B was shown to adhere to cell surfaces, whereas peak C did not.

The insulin-like growth factors stimulate the growth of many types of cultured cells (1, 2). Although many cell types and tissues secrete IGF-I (3, 4), it is uncertain whether this locally produced IGF-I stimulates growth in the regional microenvironment or is transported through blood to stimulate growth at sites distant from its site of synthesis (5). Compounding this difficulty in understanding the mechanisms by which the IGFs stimulate growth is the observation that IGF-I and IGF-II circulate in blood bound to binding proteins (6, 7). Extracellular fluids (8) and cell culture supernatants (9) also contain IGF binding proteins, suggesting that the IGFs are present in a bound form in the extracellular microenvironment. Since these proteins bind IGF-I and IGF-II, they are believed to inactivate these substances (10). There are two major classes of IGF binding proteins. One is a glycoprotein (~53 kDa) that is synthesized by hepatocytes (11) and in plasma forms a stable 150-kDa complex with IGF-I (12). This protein is growth hormone-dependent (13). In contrast, extracellular fluids such as ascites (14), spinal (15), follicular (16), and amniotic (17) fluids contain a protein whose molecular size has been estimated to be between 30 and 38 kDa and which is not growth hormone-dependent. Recently our laboratory has shown that human fibroblasts secrete this protein and that it adheres to the fibroblast surface (18). The surface-adherent protein directly alters IGF-I binding such that the binding of radiolabeled IGF-I is paradoxically increased when low concentrations of unlabeled IGF-I are added (18). Since this form of binding protein is present in many types of extracellular fluids, it has the potential to alter the cellular responses to IGF-I. These studies were undertaken to determine the physiochemical properties of a pure preparation of the human amniotic fluid-derived IGF binding protein and to determine whether it could alter the biologic effects of IGF-I.
The pellet was discarded and the supernatant was adjusted to 50% saturation with ammonium sulfate and stirred for 30 min, and the centrifugation step was repeated. This pellet (33-50%) was resuspended in 50 ml of 0.05 M Tris, pH 7.4, and 1.2 ml of saturated ammonium sulfate was added to achieve a final concentration of 0.14 M. This solution was applied to a phenyl-Sepharose column (2.2 × 15.0 cm) that had been previously equilibrated with 0.05 M Tris, pH 7.4, in 10% ammonium sulfate. Following sample loading, the column was washed with the loading buffer until the absorbance (280 nm) returned to baseline. The column was eluted with step gradients of the following composition: 1) 0.05 M Tris, 0.5 M sodium thiocyanate, pH 7.4; 2) 0.05 M Tris, pH 7.4; 3) 0.02 M Tris, pH 9.0; and 4) H2O. Each fraction was assayed for IGF-I binding activity (see below). The active fractions were pooled, the pH was adjusted to 7.2 with 1.0 M acetic acid, and the solution was applied directly to a DEAE-cellulose column that had been equilibrated with 0.01 M (NH4)2CO3, 0.01 M NaCl, pH 7.2. After sample application the column was washed extensively with the equilibration buffer until the absorbance (280 nm) returned to baseline. The column was eluted with step salt gradients containing 0.1, 0.25, and 1.0 M NaCl in the equilibration buffer. The fractions were assayed for IGF binding as described. Greater than 80% of the activity eluted with 100 or 250 mM NaCl. These two peaks, termed B and C, were purified separately. 1.5 ml of the peak B pool was applied to a C-4 Vydac reverse-phase HPLC column (0.46 × 25 cm) that had been equilibrated with 0.04% trifluoroacetic acid. The mobile phase was run isocratically for 5 min and then a linear gradient from 0 to 100% acetonitrile plus 0.04% trifluoroacetic acid was run over 25 min. The IGF binding protein activity of each fraction was determined and the active fractions were pooled and stored at -20 °C. Pool C from the ion exchange column was first purified by Sephadex G-100 column chromatography. 10 ml of pool C was applied to a 2.2 × 90 cm column that had been equilibrated with 0.01 M (NH4)2CO3, 0.05 M NaCl, pH 7.2. The column was eluted using a flow rate of 30 ml/h and approximately 9-ml fractions were collected. The IGF-I binding activity was determined (as described below). The active fractions were pooled and applied directly to the reverse-phase C-4 column, using elution conditions identical to those stated previously.

¹²⁵I-IGF-I Binding Capacity: IGF-I binding activity of the column fractions was determined as follows, using a polyethylene glycol precipitation method (18). 10 µl of each fraction was incubated with ¹²⁵I-IGF-I (340 µCi/µg) (final concentration of 0.27 ng/ml) for 60 min at 22 °C in 0.1 M HEPES, 0.1% BSA, 0.01% Triton X-100, 44 mM NaHCO3, 0.02% NaN3, pH 6.0 (250 µl total volume). The IGF-I was iodinated by a published method (19). Bound and free ¹²⁵I-IGF-I were separated by adding 250 µl of 1% human γ-globulin and 500 µl of 25% polyethylene glycol (Mr 8,000) (final concentration of 12.5%). The mixture was centrifuged at 1,000 × g for 15 min. The pellet was washed with 1 ml of 6.25% polyethylene glycol and the final pellet was counted in a γ spectrometer. Nonspecific binding was determined by measuring the amount of ¹²⁵I-IGF-I that could be precipitated in the presence of 1.0 µg/ml unlabeled IGF-I. It was consistently <5% and was subtracted from the total radioactivity that was precipitated.
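A sketch of the arithmetic behind this assay: specific binding is total minus nonspecific counts, expressed as a fraction of the added tracer. All counts below are hypothetical placeholders:

```python
# Specific binding from a precipitation assay: total counts minus the
# counts precipitated in the presence of excess cold IGF-I (nonspecific),
# expressed as percent of the tracer added. Values are hypothetical.
def specific_binding(total_cpm, nonspecific_cpm, tracer_cpm):
    """Return specifically bound cpm and percent of added tracer bound."""
    bound = total_cpm - nonspecific_cpm
    return bound, 100.0 * bound / tracer_cpm

bound, pct = specific_binding(total_cpm=9200, nonspecific_cpm=410,
                              tracer_cpm=52000)
print(f"{bound} cpm specifically bound ({pct:.1f} % of tracer)")
```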
To determine overall recovery, each pool of active fractions was reassayed at several concentrations and the binding capacity of the pools was compared to a human amniotic fluid standard. The data were used to assign a unit value to each pool. One unit was the quantity of the binding protein in human amniotic fluid necessary to achieve half-maximal IGF-I binding in that assay. This corresponds to 250-300 pg of pure IGF binding protein, which binds approximately 83-100 pg of IGF-I. To determine the binding capacity and affinity of the pure IGF binding protein for IGF-I, radiolabeled IGF-I (340 µCi/µg; 0.27 ng/ml) was incubated with 14 ng/ml of each binding protein and increasing concentrations of unlabeled IGF-I in 0.25 ml of 0.03 M NaH2PO4, 0.01 M EDTA, 0.05% Tween 20, pH 7.4. After 48 h at 4 °C, the bound and free ¹²⁵I-IGF-I were separated by adding a 1:250 dilution of a rabbit anti-binding protein antibody, which had been prepared using a mixture of peak B and C binding proteins, and the incubation was continued for 24 h. At that time 8 µl of goat anti-rabbit serum was added and the mixture incubated for 1 h at 22 °C. 2 µl of normal rabbit serum was then added and the mixture incubated for an additional 1 h. The bound and free growth factors were separated by centrifugation at 8,000 × g for 10 min.

Physiochemical Analysis: The purity of both peaks B and C was determined by SDS-polyacrylamide gel electrophoresis. The running gel was 12% acrylamide containing 0.375 M Tris, 0.1% SDS, pH 8.8, and the stacking gel was 4% acrylamide in 0.125 M Tris, 0.1% SDS, pH 6.8. 0.1-10 µg of sample was diluted to 75 µl in 0.1 M Tris, pH 6.8, 10% glycerol, 5% SDS, and 0.02% bromphenol blue, and the samples were heated to 100 °C for 5 min. The supernatants were clarified, the gel lanes loaded, and the proteins separated for 14 h at 65 V. Silver staining was performed using a GelCode silver staining kit. The lower limit of detection of the technique was 25 ng, as determined using known protein standards.

Amino Acid Composition and Sequence Analysis: Amino acid analysis was performed by the PICO-TAG method (19), described briefly as follows. 500 ng of each protein was hydrolyzed in an evacuated, sealed vessel containing fumes of 6 N HCl and 0.1% phenol at 150 °C for 1 h. The hydrolysate was derivatized with phenylisothiocyanate to generate the phenylthiocarbamyl derivative of each amino acid, and the mixture was applied to a reverse-phase HPLC column (20). To determine the amino acid sequence of peak B, an aliquot (12 µg) was extensively reduced and alkylated to modify cysteine residues to their more stable carboxymethyl derivatives. Briefly, the aliquot was reduced in the presence of 5 M guanidine HCl at pH 8.6 for 1 h at 37 °C. The reducing agent was 0.05 M dithiothreitol. Alkylation was performed at room temperature by adding iodoacetic acid to 0.11 M and incubating the mixture in the dark for 1 h. The alkylated protein was separated from reagents by rechromatography using HPLC (C-4 column) as previously described. Guanidine and other reagents are not retained by this reverse-phase column. 6 µg of the eluted, modified protein was placed on a polybrene (21)-treated glass fiber filter in an Applied Biosystems model 470A gas-phase sequenator and subjected to repetitive Edman degradation. Phenylthiohydantoin derivatives were identified by comparing their HPLC elution profiles, obtained using a Waters gradient HPLC and a NOVA-PAK C-18 reverse-phase column, with the elution profiles of known mixtures of phenylthiohydantoin derivatives.
Determination of Carbohydrate Content: To determine whether either peak B or C contained carbohydrate, 20.0 µg of each peak was loaded on a 12% SDS-polyacrylamide gel and separated for 14 h as described previously. Fetuin was run in parallel as a standard. The gel was fixed with 10% acetic acid/25% isopropyl alcohol. The gel was then washed sequentially with 1) 0.5% periodic acid, 2) 0.5% sodium arsenite/5% acetic acid, 3) 0.1% sodium arsenite/5% acetic acid, 4) 5% acetic acid, 5) Schiff's reagent (overnight), and 6) 0.6% sodium metabisulfite/0.01 M HCl. To further determine whether either peak B or peak C contained carbohydrate, 1.5 µg of each protein was applied to a concanavalin A-Sepharose column that had been equilibrated in 0.02 M Tris, pH 7.5, 2 mM CaCl2, and 2 mM MgCl2. The column was slowly loaded over 2 h and allowed to stand for 45 min at 22 °C. The column was further washed with 20 ml of starting buffer and then eluted with 10 ml of 0.02 M Tris, pH 7.5, containing 0.5 M α-methyl-D-mannoside and 0.1 M NaCl. After standing for 1 h, the column was re-eluted with the same buffer. The fractions were tested for IGF binding activity as described previously.

Determination of [3H]Thymidine Incorporation into DNA: The biologic activity of pure peak B and C material was assessed by determining the capacity of each to stimulate DNA synthesis in porcine aortic smooth muscle cells. The smooth muscle cells were isolated and maintained in stock cultures using previously described methods (22). The cells from stock cultures were subcultured in microtest 96-well plates (Falcon 3004) by plating at 8,000 cells/well in DMEM (GIBCO) containing 10% fetal bovine serum. 5 days after plating, the wells were washed once with serum-free DMEM, and then test factors were added to each well in 0.2 ml of DMEM supplemented with 1% platelet-poor plasma (PPP) and 0.5 µCi of [3H]thymidine. PPP was prepared by a previously described method (23). After 36 h of incubation the wells were washed twice with Ringer's bicarbonate and twice with 5% trichloroacetic acid (4 °C), and the DNA was extracted twice with 0.4 ml of 0.3 N NaOH. [3H]Thymidine incorporation was determined by liquid scintillation counting.

Isoelectric Focusing: To determine their isoelectric points, 2.5 µg of the peak B and C proteins was loaded onto precast isoelectric focusing plates, pH 3-10 (Servalyt Precotes). 20 µg of known standards was run in a parallel lane. The proteins were electrofocused for 1.5 h at 200 V, and then for 1.5 h at 1,000 V. The gel was divided into two sections; one half was fixed in 10% trichloroacetic acid and then stained with Serva Blue according to directions. The other half was cut into 0.5-cm sections and eluted with 0.04% trifluoroacetic acid. The eluates were analyzed for IGF binding activity as described previously.

Determination of ¹²⁵I-IGF-I and ¹²⁵I-IGF Binding Protein Binding to Cell Monolayers and Affinity Labeling: Prior to conducting the binding experiments, pig smooth muscle cells were grown to confluency in 24-well plates (Falcon 3003) and washed three times in PBS. The cultures were then incubated for 14 h at 37 °C with varying concentrations of pure peak B or C in 0.5 ml of minimum essential medium. The plates were washed twice with PBS and fresh peak B or C was added, with ¹²⁵I-IGF-I (0.27 ng), to 0.25 ml of minimum essential medium containing 20 mM HEPES and 0.1% BSA.
After 2 h at 4 °C the medium was aspirated, the monolayers were washed four times in PBS, and the cell-associated ¹²⁵I-IGF-I was determined as previously described (18). Nonspecific binding was determined in the presence of 500 ng/ml unlabeled IGF-I and that value was subtracted from all points. The IGF binding protein was iodinated using a modification of the method that was used to prepare ¹²⁵I-IGF-I (19). 0.5 mCi of Na-¹²⁵I was added to 0.1 ml of 0.5 M NaPO4, pH 7.5, containing 2.0-4.0 µg of protein. Chloramine T (50 µM) was added. After 3 min, the percentage precipitability in 20% trichloroacetic acid was determined and further chloramine T was added until the iodinated protein was 70% precipitable. The mixture was purified by Sephadex G-100 chromatography. The relative specific activities of peaks B and C were 81 and 152 µCi/µg, respectively. Direct measurements of ¹²⁵I-binding protein binding were made as described for ¹²⁵I-IGF-I, except that the concentration of radiolabeled protein in the incubation mixture was 2.0 ng/ml. Affinity labeling was performed using a previously described method (18). The cells were grown to confluency in 35-mm dishes (Falcon 3002). The preincubation step with peaks B and C and the binding reaction were carried out as described above, except that a 1.0-ml incubation volume and 2.0 ng/ml of ¹²⁵I-IGF-I were used. Following the binding experiment, the monolayers were washed and disuccinimidyl suberate was added at a final concentration of 0.1 mM in 1.0 ml of binding buffer without BSA (18). After 10 min at 22 °C, the reaction was quenched with 10 mM Tris, pH 7.0. The cell monolayers were extracted with 1% SDS and boiled for 5 min, and the supernatant was clarified by centrifugation at 10,000 × g for 3 min. The supernatants were loaded onto 10% SDS-polyacrylamide gels and the proteins were separated as described previously (18). The gels were fixed with 10% acetic acid and 30% methanol, washed, dried, and exposed to Kodak X-Omat film.

RESULTS

Ammonium sulfate precipitation of 230 ml of amniotic fluid resulted in recovery of IGF binding activity in both the 33 and 50% pellets. The majority of the activity was present in the 50% pellet and this was chosen for further purification. During phenyl-Sepharose chromatography, the majority of the contaminating protein eluted with 0.5 M sodium thiocyanate, as described previously (24) (Fig. 1). The peak containing the IGF binding protein eluted with 0.02 M Tris, pH 9.0, and had been purified 9.5-fold (Table I). Further purification by ion exchange chromatography resulted in separation of two major peaks of binding activity, which eluted at 100 and 250 mM salt (Fig. 2). These peaks (termed peaks B and C) were pooled and further purified separately. Peak C material was purified by Sephadex G-100 chromatography. The binding protein activity eluted over a broad peak but was separated from larger molecular weight contaminants (Fig. 3). 60 µg of the G-100-purified material was further purified by reverse-phase HPLC using a C-4 column. The active material eluted as a single peak at 50% acetonitrile and was stable during storage at -20 °C for periods of up to 3 months (Fig. 4A). Peak B was purified by reverse-phase HPLC and this step resulted in a 9.4-fold purification (Fig. 4B).
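The fold-purification and yield figures quoted here and in Table I follow from specific activities. A sketch of the bookkeeping, with hypothetical placeholder values rather than the Table I entries:

```python
# Purification-table arithmetic: specific activity (units/mg of protein),
# fold purification relative to the starting material, and percent yield.
# All numbers are hypothetical placeholders.
steps = [
    # (step name, total units of binding activity, total protein in mg)
    ("amniotic fluid",   920.0, 4100.0),
    ("ammonium sulfate", 780.0, 1500.0),
    ("phenyl-Sepharose", 610.0,  285.0),
    ("DEAE + HPLC",      240.0,    3.6),
]

start_units, start_protein = steps[0][1], steps[0][2]
base_sa = start_units / start_protein
for name, units, protein in steps:
    sa = units / protein
    print(f"{name:16s} SA {sa:7.2f} U/mg, "
          f"{sa / base_sa:6.1f}-fold, yield {100 * units / start_units:5.1f} %")
```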
To determine the purity, estimate the molecular size of each protein, and determine the efficacy of each separation method in removing contaminants, the protein at each stage of purification of peak C was subjected to polyacrylamide gel electrophoresis under nonreducing conditions followed by silver staining (Fig. 5). The pure product has a molecular mass estimate of 31 kDa and appears as a single band after the final purification step (panel F). The phenyl-Sepharose step appeared to be the most effective procedure for removing the contaminating proteins. Comparison of pure peaks B and C on SDS-PAGE showed that they both had identical RF values (Fig. 6). The molecular mass estimates were 31 kDa under nonreducing conditions but the estimate of each increased to 36 kDa if the proteins were reduced prior to electrophoresis (data not shown). This gel was deliberately overloaded (10 µg of each protein) to detect contaminants. The 69-kDa band is a dimer. Isoelectric focusing of each protein showed peak B had a pI value of 5.4, whereas peak C was 5.3 (Fig. 6). When the amino acid compositions of peaks B and C were determined, nearly identical amino acid ratios were obtained (Table II). The actual composition is in close agreement with previously published data (24). Reduction and alkylation of peak B and peak C followed by N-terminal sequence determination is shown in Table III. The result for amino acids 1-10 agrees with that published by two groups (24, 25) and further confirms that the protein that was purified was the IGF binding protein. Positions 11 and 12 differ from the published sequence for placental protein 12 (25). However, these differences are only one base substitution in the codons coding for each amino acid. This suggests that the placental protein 12 sequence may be distinct and that the difference is not artifactual. When compared with the sequence of the rat IGF binding protein, the cysteine positions at 5, 8, and 16 appear to be conserved. Both proteins were stable after heating to 100 °C for 10 min and were stable to pH 2.5. Further physiochemical analysis was performed to determine whether carbohydrate side chains were present. Although one group had reported that the protein contained no carbohydrate (24), we noted that pure preparations of peak B or C adhered to concanavalin A. When 2 µg of each protein was applied to the concanavalin A column, 51% of peak C adhered and was eluted with 0.5 M alpha-methyl-D-mannoside, whereas only 24% of peak B was adherent. When each peak was treated with N-glycanase prior to concanavalin A chromatography, no change in the elution pattern was noted (data not shown). This suggested that the binding to concanavalin A was nonspecific. This result was confirmed by SDS gel electrophoresis of 20 µg of either peak B or C followed by staining with Schiff's base, which showed that neither protein contained detectable carbohydrate. [Table I footnotes, displaced into the text: (a) one unit of activity is the quantity of fluid necessary to stimulate one-half of maximal binding activity in the IGF-I binding capacity assay; based on binding capacity assay protein determination; based on amino acid composition; (e) based on absorbance at 280 nm.] Based on the staining intensity of a fetuin standard it could be determined that each protein contained less than 0.5% of its weight as carbohydrate. To determine the affinity of each protein for IGF-I, increasing concentrations of unlabeled IGF-I and 125I-IGF were incubated with peak B or C and the bound complexes immunoprecipitated.
The data were analyzed using Scatchard plots. Both proteins have binding characteristics that are consistent with either a two-site model with high and low affinity binding sites or a one-site model with negative cooperativity. The relative affinities of the high affinity sites of the peak B and C proteins are very similar: 1.7 and 2.2 × 10^9 liters/mol, respectively (Fig. 7). In spite of their physiochemical similarity, the peak B and C materials were found to have markedly distinct biologic properties. Pure peak B material greatly potentiated the smooth muscle cell DNA synthesis response to IGF-I, but had no effect alone or with an equivalent concentration of human insulin (Fig. 8). In contrast, peak C material inhibited basal and IGF-I-stimulated [3H]thymidine incorporation and markedly inhibited the response to peak B plus IGF-I (Fig. 8). This effect was detectable at concentrations as low as 2.0 ng/ml peak C and was maximal at 20 ng/ml. To exclude the possibility that these changes in [3H]thymidine incorporation were [text lost in extraction]. To further characterize potential differences in the cellular response to the peak B and peak C proteins, 125I-IGF-I binding was determined in the presence of both forms of the binding protein. Addition of 100 ng/ml peak B to the cultures for 14 h prior to and during the binding reaction resulted in a 72% increase in IGF-I binding (Fig. 9). In contrast, addition of 25 ng/ml peak C protein resulted in a 36% decrease in the amount of IGF-I that was specifically bound. To determine whether the differences were due to differences in the capacity of each form of the binding protein to adhere to cell surfaces, smooth muscle cell cultures were exposed to 50 ng/ml peak B binding protein for 14 h at 37 °C and during the binding reaction. Following binding and affinity labeling, a band was detected at 42 kDa (Fig. 10, lane B). 125I-IGF-I binding to this band was specific since it was inhibited by excess unlabeled IGF-I but not by insulin. [Fig. 4 legend, displaced into the text: A, HPLC of the G-100 pool of peak C IGF-I binding activity. One ml was injected onto a Vydac C-4 reverse-phase column (4.6 mm × 25 cm). The sample was eluted isocratically for 5 min with 100% solvent A (0.04% trifluoroacetic acid in dH2O) followed by a linear gradient to 100% solvent B (0.04% trifluoroacetic acid in acetonitrile) over 25 min. The flow rate was 1.5 ml/min and absorbance was monitored at 214 nm. IGF-I binding activity (shaded area) eluted at 51% solvent B. B, HPLC of the DEAE-pool B IGF-I binding activity. 1.5 ml of DEAE-pool B was injected onto the same column and eluted with the same gradient; IGF-I binding activity (shaded area) eluted at 51% solvent B.] When peak C was added, no labeled band was detected in the 42-kDa region of the gel (lane E) and binding to the type I receptor appeared to be reduced. To determine whether the differences in the 42-kDa band intensity were due to differences in the adherence properties of peaks B and C, we determined the capacity of radiolabeled forms of each protein to bind to smooth muscle cell cultures. [Fig. 5 legend, displaced into the text: lane D, peak C after DEAE-cellulose chromatography; lane E, peak C further purified by reverse-phase HPLC on a C-4 column; lane F, peak C further purified by G-100 and HPLC.]
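The Scatchard analysis mentioned at the start of this passage can be illustrated with a short numerical sketch. The equilibrium data below are invented for illustration; for a single class of sites the plot of bound/free against bound is linear with slope -Ka, while the curvature reported for peaks B and C would require a two-site fit rather than the crude line used here:

```python
# Illustrative Scatchard linearization with hypothetical equilibrium data.
# A curved Scatchard plot (as seen for peaks B and C) suggests two site
# classes or negative cooperativity; a straight line implies one site class.

import numpy as np

bound = np.array([0.02, 0.05, 0.09, 0.14, 0.18])  # hypothetical, nM
free  = np.array([0.05, 0.15, 0.40, 1.00, 2.50])  # hypothetical, nM

ratio = bound / free                               # B/F (dimensionless)
slope, intercept = np.polyfit(bound, ratio, 1)     # crude one-site fit
print(f"apparent Ka ~ {-slope:.2f} nM^-1, Bmax ~ {intercept / -slope:.2f} nM")
```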
[Fig. 5 legend, continued: BSA (69,000), ovalbumin (43,000), and myoglobin (18,500) were run as standards as indicated. Silver staining was performed as described under "Experimental Procedures."] The addition of radiolabeled peak B resulted in 8% of the total counts per minute added being specifically bound to the cell surface, whereas incubation with an equal amount of peak C showed no specific binding (Fig. 11). Addition of 50 ng/ml non-radiolabeled peak B resulted in significant competition, whereas an equal amount of peak C showed no competition.

DISCUSSION

The insulin-like growth factor binding proteins are known to circulate in blood and to be present in extracellular fluids. The extracellular fluid form(s) of the protein are usually unsaturated; therefore, they have the potential to bind free IGF-I and IGF-II. It has been assumed that this large pool of carrier protein can act as a storage reservoir for IGF-I and that bound IGF-I is in an inactive form. These studies demonstrate that this model is too simplistic. The results show that human amniotic fluid contains two forms of the IGF binding protein that have similar physiochemical properties but differ in their capacity to bind to cell surfaces and in their capacity to enhance the cellular DNA synthesis response to IGF-I. Following separation on DEAE-cellulose the two proteins were purified to homogeneity and the homogeneous preparations had markedly different biologic actions. [Fig. 6 legend, displaced into the text: For isoelectric focusing of peaks B and C, 2.5 µg of each protein were separated on precast isoelectric focusing plates as described under "Experimental Procedures." The gel was stained with Serva Blue or cut into 0.5-cm slices and eluted with 0.04% trifluoroacetic acid. IGF binding activity of the eluates was determined and was detected in the slice corresponding to the stained protein.] The stimulation of DNA synthesis obtained with peak B plus IGF-I was greater than that achieved with IGF-I and PPP alone (26). This result is not accounted for by a contaminant since contaminants composed less than 0.5% of the sample and the addition of the peak B protein without IGF-I or in the presence of insulin has no stimulatory activity (26). In contrast, the peak C form of the protein inhibited the effect of IGF-I alone or the combined stimulatory effect of peak B plus IGF-I. Therefore, the peak C form appears to be able to negate the effect that the peak B form exerts on IGF-I action. These effects are not due to changes in cell cycle kinetics since addition of peak B did not alter the time course of DNA synthesis by smooth muscle cells. Likewise, peak C did not simply delay the onset of [3H]thymidine incorporation by a mechanism similar to the effect of transforming growth factor-beta on AKR-2B cells (27). Since the net effect of the two proteins appears to determine the cellular response to IGF-I, it will be important to determine the relative abundance of these two forms in extracellular fluids. In contrast to these findings, several investigators have reported that partially purified preparations of IGF binding protein inhibit either the insulin-like (28) or growth-promoting actions (29) of IGF-I. Furthermore, one group used a homogeneous preparation of the rat MSA binding protein and showed that it blocked the DNA synthesis response of chick embryo fibroblasts to MSA (10). Since many of the purification schemes that were used to purify these proteins did not include DEAE-cellulose chromatography, it is possible that these partially purified preparations contained both the peak B and C forms of the binding protein.
Since peak C is capable of inhibiting the cellular response to peak B plus IGF-I, failure to separate these two forms during purification could lead to these results. It is also possible that species differences could account for these discrepancies since the rat homologue of the extracellular binding protein has a different N-terminal sequence (30) and therefore it might not be capable of eliciting the same biologic response. The exact molecular property that accounts for the differences in the cellular response to peaks B and C was not identified. Although we found that the two components had slightly different elution profiles from DEAE-cellulose, they had nearly identical isoelectric point determinations. This discrepancy could be due to preferential association of the peak B form of the protein with other proteins that elute at lower salt concentrations. A second possibility is that peak B aggregates into multimeric forms during the ion exchange step as a result of concentration and that such aggregation alters the exposed charge groups, but that aggregation does not occur during isoelectric focusing. In addition, both forms had identical molecular weight estimates, very similar amino acid compositions, and identical N-terminal sequences, and both had no detectable carbohydrate content by Schiff's staining. The binding affinity estimates of each form of the protein for IGF-I showed complex kinetics that were consistent with a two-site model of competition for each form, but the affinity estimates for each form were not substantially different. The affinity of these proteins for IGF-II was not determined and, therefore, we cannot directly compare our binding results to those of Binoux et al. (15), who found two binding proteins with different affinities for IGF-I and IGF-II in a crude preparation of human spinal fluid. A major difference in the membrane adherence properties of these two proteins was noted. Direct measurements of the binding of radioiodinated forms of each protein showed that peak B attached to the cell surfaces, whereas peak C did not. Likewise, non-radiolabeled peak B was shown to both adhere to smooth muscle cell surfaces and to increase the total amount of 125I-IGF-I that was bound. [Table III residue, displaced into the text: positions 10-20, Thr-Arg-..., placental protein 12.] [FIG. 8 legend: Effects of peak B and C IGF-binding proteins on IGF-I-stimulated DNA synthesis. Quiescent porcine smooth muscle cell cultures were exposed to a basal medium containing 0.2 ml DMEM and 1% PPP. Additional cultures received 20 ng/ml IGF-I or 10 µg/ml insulin. Other cultures were exposed to pure peak B or C with or without IGF-I. After a 36-h incubation, [3H]thymidine incorporation into DNA was determined. The values plotted are the means of triplicate determinations.] In contrast, non-radiolabeled peak C did not attach to cell surfaces and exposure of cells to peak C did not result in an enhancement of 125I-IGF-I binding. Since peak C inhibits the DNA synthesis response to IGF-I, it is possible that the presence of peak C in the incubation medium competitively inhibits 125I-IGF binding, and thereby reduces the amount of IGF-I that is available to attach to the type I IGF receptor. These findings also suggest that attachment of peak B to the cell surface and the subsequent increase in IGF-I binding that is detected are linked to potentiation of the DNA synthesis response.
Since at present this difference in the membrane adherence of these two forms of IGF binding protein is the only identifiable distinctive feature that has been linked to the differences in the cellular DNA synthesis response, it is critical to determine how this increase in the amount of IGF-I that is bound affects the type I receptor signaling mechanism. Potential binding protein-type I receptor interactions that might be modified by adherence of the IGF-I-binding protein complex to cell surfaces and subsequently enhance the transmembrane mitogenic signaling would include acceleration of receptor clustering, retardation of the rate of type I receptor internalization, blocking of IGF-I degradation, a change in receptor conformation that results in enhanced affinity for IGF-I, or direct binding of the IGF-I-binding protein complex to a site on the receptor that is distinct from the IGF-I binding site. Direct evidence supporting one of these mechanisms is not available, but since the IGF-I mitogenic signal is believed to be type I receptor-mediated, each of these potential mechanisms is worthy of consideration. [Table III residue, displaced into the text: Ala-Pro-Trp-Gln-Cys-Ala-Pro-Cys-Ser-Ala-Asp-Glu-Leu-Ala-Leu.] The structural difference between peak B and C that accounts for the differences in membrane adherence and biologic response was not identified. Although the amino acid compositions were very similar and the first 28 residues of each protein are identical, it is possible that there are other as yet unidentified minor sequence differences. Likewise, other post-translational modifications such as fatty acid addition (31), carboxylation (32), phosphorylation, or internal disulfide bond rearrangements have not been excluded. It is likely that one of these modifications exists and that it accounts for the observed differences in biologic activity. Addition of fatty acids such as palmitate can account for the membrane adherence properties of proteins. Specifically, the p21 RAS protein will not adhere to the cytoplasmic surface of the plasma membrane unless palmitate has been added (33). Since the membrane adherence properties of the peak B binding protein correlate with its capacity to stimulate DNA synthesis, it is possible that such a modification could explain both of the observed differences between peaks B and C. Identification of this specific difference would be of major importance in understanding the control of IGF-I action at the cell surface.
$D^*D\pi$ and $B^*B\pi$ form factors from QCD Sum Rules

The $H^*H\pi$ form factor for H = B and D mesons is evaluated in a QCD sum rule calculation. We study the Borel sum rule for the three point function of two pseudoscalar and one vector meson currents up to order four in the operator product expansion. The double Borel transform is performed with respect to the heavy meson momenta. We discuss the momentum dependence of the form factors and two different approaches to extract the $H^*H\pi$ coupling constant.

The coupling of the pion to the heavy mesons ($g_{B^*B\pi}$ and $g_{D^*D\pi}$) is related to the form factor at zero pionic momentum, and its precise value has often been needed in phenomenology. In particular, the $g_{D^*D\pi}$ coupling is needed in the context of quark gluon plasma (QGP) physics. Suppression of charmonium production in heavy ion collisions is one of the signatures of QGP formation [1]. Therefore a precise evaluation of the background, i.e., conventional J/ψ absorption by co-moving pions and ρ mesons [2], is of fundamental importance. Since pions are so abundant in a dense nuclear environment, the reactions π + J/ψ → D + D* (and consequently the coupling $g_{D^*D\pi}$) are of special relevance [3]. In the case of $g_{D^*D\pi}$, the D*+ → D0 π+ decay is observed experimentally. However, present data provide only an upper bound: $g_{D^*D\pi} \leq 21$ [4]. For $g_{B^*B\pi}$, there cannot be a direct experimental indication because there is no phase space for the B* → Bπ decay. Recently, a direct preliminary determination of $g_{B^*B\pi}$ on the lattice has been attempted [5].

The D*Dπ and B*Bπ couplings have been studied by several authors using different approaches of the QCD sum rules (QCDSR): the two point function combined with soft pion techniques [6,7], light cone sum rules [8,9], light cone sum rules including perturbative corrections [10], sum rules in an external field [11], and double momentum sum rules [12]. Unfortunately, the numerical results from these calculations may differ by almost a factor of two. In this work we use the three-point function approach to evaluate the D*Dπ and B*Bπ form factors and coupling constants. The advantage of using the three-point function approach with a double Borel transformation, compared with the two-point function with a single Borel transformation, is the elimination of the terms associated with the pole-continuum transitions [8,13].

The three-point function associated with a $H^*H\pi$ vertex, where H and H* are respectively the lowest pseudoscalar and vector heavy mesons, is given by Eq. (1) [displayed equation not recovered; a schematic form is given below], where $j = i\bar{Q}\gamma_5 u$, $j_5 = i\bar{u}\gamma_5 d$ and $j^\dagger_\mu = \bar{d}\gamma_\mu Q$ are the interpolating fields for H, π− and H* respectively, with u, d and Q being the up, down, and heavy quark fields. The phenomenological side of the vertex function, $\Gamma_\mu(p, p')$, is obtained by the consideration of the H and H* state contributions to the matrix element in Eq. (1) [Eq. (2), not recovered]. The matrix element of the pseudoscalar current, $j_5$, defines the vertex form factor $g_{H^*H\pi}(q^2)$ [Eq. (3), not recovered], where $q = p' - p$, $f_\pi$ is the pion decay constant and $\epsilon_\nu$ is the polarization of the vector meson. The vacuum to meson transition amplitudes appearing in Eq. (2) are given in terms of the corresponding meson decay constants $f_H$ and $f_{H^*}$ by Eqs. (4) and (5) [not recovered]. Therefore, using Eqs. (3), (4) and (5) in Eq. (2) we get Eq. (6) [not recovered], where the constant $C_{H^*H}$ is defined by Eq. (7) [not recovered]. The contribution of higher resonances and continuum in Eq. (6) will be taken into account as usual in the standard form of ref. [14].
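The displayed definition of the correlator did not survive extraction. A schematic reconstruction with the standard structure of such three-point functions is given below; the momentum assignments follow the text ($q = p' - p$), but the exponent signs and overall conventions are our assumption, not taken from the paper:

```latex
% Schematic three-point correlator for the H* H pi vertex
% (conventions assumed, not recovered from the source):
\Gamma_\mu(p,p') \;=\; \int d^4x \, d^4y \;
  e^{\,i p' \cdot x} \, e^{-\,i q \cdot y} \,
  \langle 0 \,|\, T\{\, j(x)\, j_5(y)\, j_\mu^\dagger(0) \,\}\, |\, 0 \rangle ,
\qquad q = p' - p .
```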
The QCD side, or theoretical side, of the vertex function is evaluated by performing Wilson's operator product expansion (OPE) of the operator in Eq. (1). Writing $\Gamma_\mu$ in terms of the invariant amplitudes, $\Gamma_\mu = \Gamma_1(p^2, p'^2, q^2)\, p_\mu + \Gamma_2(p^2, p'^2, q^2)\, p'_\mu$, we can write a double dispersion relation for each one of the invariant amplitudes $\Gamma_i$ (i = 1, 2) over the virtualities $p^2$ and $p'^2$, holding $Q^2 = -q^2$ fixed [Eq. (9), not recovered], where $\rho_i(s, u, Q^2)$ equals the double discontinuity of the amplitude $\Gamma_i(p^2, p'^2, Q^2)$ on the cuts $m_Q^2 \leq s \leq \infty$, $m_Q^2 \leq u \leq \infty$, which can be evaluated using Cutkosky's rules [14,15]. Finally we perform a double Borel transformation [14] in both variables $P^2 = -p^2$ and $P'^2 = -p'^2$ (the definition is recalled below) and equate the two representations described above. We get one sum rule for each invariant function: one in the $p_\mu$ structure [Eq. (10), not recovered] and one in the $p'_\mu$ structure [Eq. (11), not recovered], where $s_0$ and $u_0$ are the continuum thresholds for the H* and H mesons respectively, which are, in general, taken from the mass sum rules. The two Borel masses $M^2$ and $M'^2$ are, in principle, independent and they should vary in the vicinity of the corresponding meson masses, $m_{H^*}^2$ and $m_H^2$ respectively. Since for heavy mesons $m_H$ and $m_{H^*}$ are very close, many authors use $M^2 = M'^2$ [8,10,11]. To allow for different values of $M^2$ and $M'^2$ we take them proportional to the respective meson masses, which leads us to study the sum rule as a function of $M^2$ at the fixed ratio $M^2/M'^2 = m_{H^*}^2/m_H^2$.

We will consider diagrams up to dimension four, which include the perturbative diagram and the gluon condensate. The quark condensate term does not contribute since it depends only on one external momentum and, therefore, it is eliminated by the double Borel transformation. Higher dimension condensates are strongly suppressed in the case of heavy quarks [6-9,11,12]. The double discontinuity of the perturbative contribution reads as Eq. (12) [not recovered], and the integration limit condition is given by Eq. (13) [not recovered]. In this paper we focus on the structure $p_\mu$, which we found to be the more stable one. For consistency we use in our analysis the QCDSR expressions for the decay constants up to dimension four in lowest order of $\alpha_s$ [Eqs. (16) and (17), not recovered], where we have omitted the numerically insignificant contribution of the gluon condensate.

The values of $u_0$ and $s_0$ are, in general, extracted from the two-point function sum rules for $f_H$ and $f_{H^*}$ in Eqs. (16) and (17). Using the Borel region $2 \leq M^2 \leq 5$ GeV² (for the D* and D mesons) and $10 \leq M^2 \leq 25$ GeV² (for the B* and B mesons) we found good stability for $f_H$ and $f_{H^*}$ with $\Delta_s = \Delta_u \sim 0.5$ GeV, in agreement with the results in ref. [8]. We have checked that bigger values of $\Delta_{s(u)}$, of order of 1 GeV, lead to unstable results for $f_H$ and $f_{H^*}$ in the case of the sum rules of Eqs. (16) and (17). In our study we will allow for a small variation in $\Delta_s$ and $\Delta_u$ to test the sensitivity of our results to the continuum contribution.

We first discuss the D*Dπ form factor. In Fig. 1 we show the behavior of the perturbative and gluon condensate contributions to the form factor $g_{D^*D\pi}(Q^2)$ at $Q^2 = 1$ GeV² as a function of the Borel mass $M^2$, using $\Delta_s$ and $\Delta_u$ in Eqs. (18) and (19) equal to 0.5 GeV. We can see that, in the case of the form factor, the gluon condensate is not negligible and it helps the stability of the curve as a function of $M^2$, providing a rather stable plateau for $M^2 \geq 3$ GeV². The behavior of the curve for other $Q^2$ and continuum threshold values is similar. Fixing $M^2 = 3.5$ GeV² we show, in Fig. 2, the momentum dependence of the form factor (dots).
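For reference, the Borel transform invoked above has the standard textbook definition below; this is general QCD-sum-rule machinery, not an expression recovered from this particular paper. The double Borel transform applies the operator independently in $P^2 \to M^2$ and in $P'^2 \to M'^2$:

```latex
% Standard Borel transform in one virtuality; the double transform
% applies this operator in P^2 -> M^2 and again in P'^2 -> M'^2.
\mathcal{B}_{P^2 \to M^2}\big[f(P^2)\big] \;=\;
  \lim_{\substack{P^2,\,n \to \infty \\ P^2/n \,=\, M^2 \ \text{fixed}}}
  \frac{(P^2)^{\,n+1}}{n!} \left(-\frac{d}{dP^2}\right)^{\!n} f(P^2) ,
\qquad
\mathcal{B}\!\left[\frac{1}{s+P^2}\right] = e^{-s/M^2}.
```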
Since the present approach cannot be used at $Q^2 = 0$, to extract the $g_{D^*D\pi}$ coupling from the form factor we need to extrapolate the curve to $Q^2 = 0$ (in the approximation $m_\pi^2 = 0$). In order to do this extrapolation we fit the QCD sum rule results (dots) with an analytical expression. We tried to fit our results with a monopole form, since this is very often used for form factors, but the fit is very poor. We obtained good fits using both a gaussian form, Eq. (20), and a curve of the form of Eq. (21) [both displayed expressions not recovered]. In Fig. 2 the fits through Eqs. (20) and (21) are also shown, and the corresponding parameters are given in Table I for two different values of the continuum threshold. [Table I caption: values of the parameters in Eqs. (20) and (21) which reproduce the QCDSR results for $g_{D^*D\pi}(Q^2)$, for two different values of the continuum thresholds in Eqs. (18) and (19).] In view of the uncertainties involved, the results obtained with the two parametrizations are consistent with each other, the systematic error being of the order of 10%.

In refs. [8,16] it was found that the form factor in the semileptonic decay H → πlν, which is also normalized by the $H^*H\pi$ coupling constant, can be well approximated by a monopole form factor. In the case of the H → πlν form factor, a vector dominance approximation gives a phenomenological explanation for a pole fit at $q^2 = m_{H^*}^2$, which is not the case of the form factor studied here. It is important to notice that here the dispersion relation is written in terms of the two heavy meson momenta, while in the case of the semileptonic decay the dispersion relation is a function of the H and π momenta. Therefore, our form factor is a function of the pion momentum, exhibiting a peak at the pion pole $Q^2 = 0$.

To test if our fit gives a good extrapolation to $Q^2 = 0$, we can write a sum rule, based on the three-point function of Eq. (1) but valid only at $Q^2 = 0$, as suggested in [17] for the pion-nucleon coupling constant. This method was also applied to the nucleon-hyperon-kaon coupling constant [18,19] and to the nucleon-$\Lambda_c$-D coupling constant [20]. It consists in neglecting the pion mass in the denominator of Eq. (6) and working at $Q^2 = 0$, making a single Borel transformation with both $P^2 = P'^2 \to M^2$. As discussed in the introduction, the problem with doing a single Borel transformation is the fact that the single pole contribution, associated with the N → N* transition, is not suppressed [6,8,13]. In ref. [13] it was explicitly shown that the pole-continuum transition has a different behavior as a function of the Borel mass compared with the double pole and continuum contributions: it grows with $M^2$ relative to the double pole contribution. Therefore, the single pole contribution can be taken into account through the introduction of a parameter A in the phenomenological side of the sum rule [8,13,19]. Thus, neglecting $m_\pi^2$ in the denominator of Eq. (6) and doing a single Borel transform in $P^2 = P'^2$, we get for the structure $p_\mu$ the expression $\Gamma_1^{(phen)}$ of Eq. (22) [not recovered], where $C_{H^*H}$ is given in Eq. (7) with $f_H$ and $f_{H^*}$ given by Eqs. (16) and (17). On the OPE side only terms proportional to $1/Q^2$ will contribute to the sum rule. Therefore, up to dimension four the only diagram that contributes is the quark condensate, given by Eq. (23) [not recovered]. Equating Eqs. (22) and (23) and taking $Q^2 = 0$ we obtain the sum rule for $g_{H^*H\pi} + AM^2$, where A denotes the contribution from the unknown single pole terms. It is interesting to point out that in the limit $m_H^2 + m_{H^*}^2 = 2m_{H^*}^2$, the sum rule obtained in the $p'_\mu$ structure coincides with the sum rule in the $p_\mu$ structure.
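The extrapolation procedure can be illustrated with a short numerical sketch. The sampled points and the gaussian trial function below are placeholders (the actual Eqs. (20) and (21) were lost in extraction), so this shows only the mechanics of fitting the QCDSR output and reading off g(0):

```python
# Sketch of the extrapolation to Q^2 = 0: fit the QCDSR form-factor points
# with a smooth trial function and evaluate it at Q^2 = 0. Both the data
# and the trial form are hypothetical stand-ins for Eqs. (20)/(21).

import numpy as np
from scipy.optimize import curve_fit

q2 = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])   # GeV^2, hypothetical
g  = np.array([4.9, 4.1, 3.4, 2.9, 2.5, 2.2])   # hypothetical QCDSR output

def gaussian(q2, g0, a):          # one possible smooth parametrization
    return g0 * np.exp(-q2**2 / a**2)

(g0, a), _ = curve_fit(gaussian, q2, g, p0=[5.5, 3.0])
print(f"extrapolated coupling g(0) ~ {g0:.2f}")
```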
In Fig. 3 we show, for $\Delta_s = \Delta_u = 0.5$ GeV, the QCDSR results for $g_{D^*D\pi} + AM^2$ as a function of $M^2$ (dots), from which we see that, in the Borel region $2 \leq M^2 \leq 5$ GeV², they follow a straight line. The value of the coupling constant is obtained by the extrapolation of the line to $M^2 = 0$ [13]. Fitting the QCDSR results to a straight line we get the value of $g_{D^*D\pi}$ quoted in Eq. (24) [not recovered], in excellent agreement with the values obtained with the extrapolation of the form factor to $Q^2 = 0$, given in Table I. It is reassuring that both methods, with completely different OPE sides and Borel transformation approaches, give the same value for the coupling constant.

In the case of the B*Bπ vertex, we show in Fig. 4, for $\Delta_s = \Delta_u = 0.5$ GeV, the $Q^2 = 0$ sum rule results for $g_{B^*B\pi} + AM^2$ (dots) as a function of $M^2$. They also follow a straight line in the Borel region $10 \leq M^2 \leq 25$ GeV², and the extrapolation to $M^2 = 0$ gives the value quoted in Eq. (25) [not recovered]. In Fig. 5 we show the QCDSR result for the perturbative and gluon condensate contributions to the form factor $g_{B^*B\pi}(Q^2)$ at $Q^2 = 2$ GeV² as a function of $M^2$, using $\Delta_s = \Delta_u = 0.5$ GeV. In this case the gluon condensate is very small but it still goes in the right direction of providing a stable plateau for $M^2 \geq 15$ GeV². Fixing $M^2 = 17$ GeV² we show, in Fig. 6, the $Q^2$ behavior of the form factor (dots). The dots can still be well fitted by Eq. (21) (solid line). However, the fit with Eq. (20) is not so good, as can be seen from the dashed line in Fig. 6. In Table II we give the values of the parameters in Eqs. (20) and (21) that reproduce our results for two different choices of the continuum thresholds. In this case the agreement of the two different approaches to extract the coupling constant is not so good, but the numbers are still compatible. One possible reason for that is the fact that for heavier quarks the perturbative contribution (or hard physics) becomes more important, as can be observed from the decrease of the importance of the gluon condensate in Fig. 5 as compared with Fig. 1. Since the sum rule given by Eqs. (22) and (23) contains only soft physics information, we expect $\alpha_s$ corrections to the sum rule to be more important in the case of $g_{B^*B\pi}(Q^2)$ than for $g_{D^*D\pi}(Q^2)$. [Table II caption: values of the parameters in Eqs. (20) and (21) which reproduce the QCDSR results for $g_{B^*B\pi}(Q^2)$, for two different values of the continuum thresholds in Eqs. (18) and (19).]

Comparing Table I with Table II we see that the cut-offs are of the same order in the two vertices and are very hard. Concerning the parameter a, it is smaller in the case of the B*Bπ vertex. This is because the form factor $g_{B^*B\pi}(Q^2)$ has a flatter peak around $Q^2 = 0$ than $g_{D^*D\pi}(Q^2)$. This can be interpreted as an indication that the spatial extension of the vertex is smaller for B*Bπ than for D*Dπ. This is also the reason why the gaussian fit is not so good in the case of the B*Bπ vertex, and leads to bigger values for the coupling. It is interesting to notice that our results for the coupling constants are completely consistent with the QCDSR calculation of ref. [12]. As a final exercise, we use our result for $g_{B^*B\pi}$ to extract the coupling constant g which controls the interaction of the pion with infinitely heavy fields in effective lagrangian approaches [21,22]. They are related by $g_{B^*B\pi} = (2 m_B / f_\pi)\, g$ [6-9,11,12,21,22]. The knowledge of g is of great phenomenological value, since its strength is required in the analyses of many electroweak processes [21].
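The Q² = 0 method described above reduces to a linear fit: the sum rule output equals g + A M², so the intercept at M² = 0 isolates the coupling while the slope is the pole-continuum parameter A. A minimal sketch with hypothetical numbers:

```python
# Sketch of the Q^2 = 0 extraction: fit g + A*M^2 linearly in the Borel
# mass squared and extrapolate to M^2 = 0. All data are hypothetical.

import numpy as np

m2  = np.array([2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0])   # Borel mass^2, GeV^2
rhs = np.array([6.9, 7.5, 8.1, 8.7, 9.3, 9.9, 10.5])  # hypothetical g + A*M^2

A, g = np.polyfit(m2, rhs, 1)   # slope = A (single-pole term), intercept = g
print(f"g ~ {g:.2f}, single-pole slope A ~ {A:.2f} GeV^-2")
```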
Therefore, during the last years, a large number of theoretical papers have been devoted to the calculation of g. However, the variation of the value obtained for g, even within a single class of models, turns out to be quite large. For instance, using different quark models one obtains $1/3 \leq g \leq 1$ [22,23], while QCDSR calculations point in the direction of a small g, with a typical value in the range $g \simeq 0.13 - 0.35$ [6-9,11,12]. Using the values for $g_{B^*B\pi}$ given in Table II we get, at order $\alpha_s = 0$, the value of g quoted in Eq. (29) [not recovered]; therefore, we corroborate the overall conclusion drawn from different QCDSR calculations that the coupling g is small.

In conclusion, we extracted the $H^*H\pi$ coupling constant using two different approaches of the QCDSR based on the three-point function. We have obtained for the coupling constants the values quoted in Eq. (28) [not recovered], where the errors reflect variations in the continuum thresholds, different parametrizations of the form factors and the use of two different sum rules. There are still sources of errors in the values of the condensates and in the choice of the Borel mass used to extract the form factor, which were not considered here. Therefore, the errors quoted are probably underestimated. The D*Dπ coupling is directly related with the D* → Dπ decay width through Eq. (30) [not recovered; the standard expression is recalled below]. Using Eq. (28) we get a width [numerical value not recovered] which is much smaller than the current upper limit [4], $\Gamma(D^{*-} \to D^0\pi^-) < 89$ keV.

Acknowledgements: This work has been supported by CNPq and FAPESP (under project number 1998/2249-4). C.L.S. thanks FAPERJ for financial support.

FIG. 6. Momentum dependence of the B*Bπ form factor for $\Delta_s = \Delta_u = 0.5$ GeV (dots). The solid and dashed lines give the parametrization of the QCDSR results through Eqs. (21) and (20) respectively.
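Since the displayed width formula (Eq. (30)) was lost, we recall the textbook relation for a vector meson decaying to two pseudoscalars; this is presented as the standard expression, not as recovered from the paper:

```latex
% Standard relation between the coupling and the D* -> D pi width;
% p_pi is the pion three-momentum in the D* rest frame.
\Gamma(D^{*+} \to D^0 \pi^+) \;=\;
  \frac{g_{D^* D \pi}^{\,2}}{24 \pi \, m_{D^*}^{2}} \; |\vec{p}_\pi|^{\,3} .
```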
Effect of Parenteral Antioxidant Supplementation During the Dry Period on Postpartum Glucose Tolerance in Dairy Cows

Background: Exacerbated postparturient insulin resistance (IR) has been associated with several pathologic conditions in dairy cattle. Oxidative stress (OS) plays a causative role in IR in humans, and an association, but not a direct relationship, between OS and IR recently has been reported in transition dairy cattle.

Hypothesis: Supplementation with antioxidants shortly before calving improves glucose tolerance after parturition in dairy cattle.

Animals: Ten late-pregnant Holstein cows entering their 2nd to 5th lactation.

Methods: Randomized placebo-controlled trial: 15 ± 2 days before expected calving, the treatment group received an injection of DL-alpha-tocopheryl acetate at a dosage of 6 mg/kg body weight (BW) and 0.06 mg/kg BW of sodium selenite, and the control group was injected with isotonic saline. During the first week after calving, both groups underwent glucose tolerance testing (0.25 g glucose/kg BW). Commercial assays were used to quantify the concentrations of glucose, insulin, nonesterified fatty acids (NEFA), beta-hydroxybutyrate, and markers of redox status in blood. Data were analyzed using the Mann-Whitney U-test (α = 0.05).

Results: Supplemented cows showed a lower risk for OS, as reflected by a lower OS index (P = .036), different areas under the curve for the concentrations of glucose (P < .01), insulin (P = .043), and NEFA (P = .041), more rapid elimination rates (P = .080, <.01 and .047, respectively), and shorter half-lives (P = .040, <.01 and .032) of these metabolites.

Conclusions and Clinical Importance: Supplementation with antioxidants before calving resulted in greater insulin sensitivity after calving, thereby suggesting a role of OS in the development of IR in cattle and the potential benefits of antioxidant supplementation in minimizing the consequences of negative energy balance.

As dairy cows transition from late pregnancy to the onset of lactation, they are faced with marked and sudden metabolic and endocrine changes that negatively impact their performance and health status. Insulin plays a pivotal role in the partitioning processes that take place to support lactation. Cows undergo a period of insulin resistance (IR) before calving to support fetal glucose needs as well as after calving to prioritize the insulin-independent uptake of glucose by the mammary gland. 1 A prolonged IR state has been related to several pathologic processes, including economically important postpartum conditions such as displaced abomasum 2 or decreased fertility as a consequence of enhanced lipolysis. 3 The mechanisms causing IR are not fully understood in dairy cattle, but this period of IR has physiologic similarities to human type I and type II diabetes, 4 with the major difference being that cows have low glucose concentrations. 5 In human type II diabetes, strong evidence supports that oxidative stress (OS), the imbalance between pro-oxidant production and antioxidant capacity, plays a causative role in the development of IR, 6,7 and antioxidant supplementation can be used to decrease the consequences of IR. 7-10 It is now well known that dairy cattle experience OS after calving, 11 and antioxidant supplementation can diminish the harmful effects of excessive pro-oxidant production. 13
We recently found a significant association between oxidant status and whole-body insulin sensitivity, measured by means of surrogate indices, in periparturient dairy cattle. 14 We therefore hypothesized that antioxidant supplementation before calving may impact glucose homeostasis (assessed by means of intravenous glucose tolerance testing [IVGTT]) after calving. Hence, our study aimed to establish a causal relationship between oxidant status and insulin sensitivity in dairy cattle during the transition period.

Material and Methods

A randomized placebo-controlled study was used. The protocols of this study were approved by the Bioethical Committee of the University of Santiago de Compostela (Spain), and the animals were enrolled with owner consent.

Animals, Nutrition and Husbandry

Ten nonlactating, late-pregnant Holstein cows from the same commercial herd, located in Meira (northwest Spain), were used in this study. Selection criteria included: parity (entering their 2nd or greater lactation), milk production in the preceding lactation (9000 to 9500 kg), body condition score (3 to 3.5, on a 1 [lean] to 5 [obese] scale as previously described 15), and proximity in their expected calving date. Cows in both groups were maintained under identical conditions throughout the study. Animals were kept in a free-stall barn with concrete stalls and fed a total mixed ration (Table 1), delivered once daily at 9:00 AM and formulated according to the National Research Council (NRC) 16 to meet or exceed their requirements. Lactating animals were milked twice daily and cows were dried-off 60 days before their expected calving date.

Treatment Allocation

Animals were randomly allocated to treatment or control groups using the random function of Excel. a A blood sample was collected 15 ± 2 days before expected calving by coccygeal venipuncture into evacuated tubes without anticoagulant, b and animals in the supplementation group subsequently received an IM injection of a commercial product c at a dosage of 6 mg/kg body weight (BW) of DL-alpha-tocopheryl acetate (equivalent to 6 IU/kg BW of vitamin E) and 0.06 mg/kg BW of sodium selenite, whereas cows in the control group were injected with isotonic sterile saline solution. d BWs for dose calculation were adjusted by estimating the weight of the conceptus according to NRC 16 using an estimated calf birth weight of 40 kg. The farm personnel, but not the investigators, were blinded to group allocation. Because of longer gestation lengths than expected, the interval between treatment and calving ranged from 9 to 19 days (mean ± SD: 16 ± 4.76).

Intravenous Glucose Tolerance Test

Between days 3 to 7 after calving, animals in both groups were subjected to IVGTT around 3:00 PM, thereby allowing 6 hours between when the ration was offered and the infusion of glucose to decrease any potential interference in blood metabolite clearance patterns. Cows were restrained in the feedbunk headlocks and the feed was removed from their access. A 14-gauge × 8 cm catheter with a 250 mL/min capacity e was inserted in either the right or the left jugular vein. Cows were allowed to rest for 15 minutes after insertion of the catheter until blood sampling started. Stress was avoided as much as possible and cows generally appeared relaxed and continued to ruminate during the test. Blood samples were collected at −10, −5, 5, 10, 20, 30, 45, 60 and 90 minutes relative to the infusion of 0.25 g/kg BW of glucose. f The infusion of glucose was completed in 3 to 4 minutes.
After infusion, the catheters were irrigated with 10 mL of sterile saline d and the first 5 mL of blood discarded from the first collection. Samples were collected into tubes without anticoagulant and tubes containing fluoride heparin. g

Laboratory Analysis

Samples were transported under refrigeration to the laboratory, where they were centrifuged at 2000 × g for 20 minutes within 2 hours after collection, and the supernatant serum or plasma was harvested, aliquoted into 1.7 mL microcentrifuge tubes h and stored at −80°C pending analysis within 3 months of collection. Commercially available kits were used for analysis. Plasma was analyzed for glucose concentration, i whereas serum was used to measure the concentration of nonesterified fatty acids j (NEFA) and beta-hydroxybutyrate k (BHBA). Biomarkers of oxidant status were measured at enrollment into the study and in the basal IVGTT samples. Reactive oxygen species l (ROS) were quantified in serum samples as markers of pro-oxidants. The assay employed determines hydroperoxides (breakdown products of lipids and other organic substrates generated by oxidative attack of ROS) through their reaction with the chromogen N,N-diethylparaphenylenediamine. This assay previously has been validated against electron spin resonance. 17 Results are expressed in arbitrary 'Carratelli units' (Carr.U), with 1 Carr.U corresponding to the oxidizing power of 0.08 mg H2O2/dL. Total serum antioxidant capacity (SAC) also was quantified using a commercial assay. m This test exploits the capacity of a concentrated solution of hypochlorous acid (HClO) to oxidize the complete pool of antioxidants in serum (albumin, bilirubin, uric acid, thiol groups, vitamins, glutathione, glutathione peroxidase, superoxide dismutase, catalase, and other compounds). Thus, SAC considers the cumulative action of all the antioxidants present in serum, rather than simply the sum of measurable antioxidants. Results are expressed as µmol HClO/mL. The oxidative stress index (OSi) was calculated as ROS/SAC (a small sketch of these indices follows below). 11 Thus, an increase in the ratio indicates a higher risk for OS because of an increase in ROS production, defensive antioxidant consumption, or both. These analytical determinations were performed in duplicate on a biochemistry autoanalyzer n calibrated against a multipoint calibrator. o Physiologic p and pathologic q control sera, as well as an in-house reference sample, were analyzed alongside the samples for quality control. Duplicated serum samples also were analyzed for insulin using a bovine-specific ELISA kit, r which has a limit of detection of 0.025 µg/L. Two samples fell below this limit and were assigned a concentration of 0.025 µg/L. The intra-assay coefficients of variation for all the determinations were below 5%, with all samples analyzed in the same run.

IVGTT Data Processing

Basal concentrations for the studied analytes were determined as the mean concentration of the 2 blood samples taken before glucose infusion (−10 and −5 minute samples). The area under the curve (AUC) of glucose, insulin, NEFA, and BHBA was computed with the trapezoidal method as the total increment of these metabolites above (below for NEFA and BHBA) basal concentrations during the 90 minutes after infusion. Peak and nadir concentrations of these analytes also were determined.
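A tiny helper mirroring the redox indices defined in the Laboratory Analysis paragraph above (ROS in Carratelli units, SAC in µmol HClO/mL, OSi = ROS/SAC); the cow values below are hypothetical, not data from this study:

```python
# Redox indices as defined in the text: 1 Carr.U equals the oxidizing
# power of 0.08 mg H2O2/dL, and OSi = ROS / SAC. Inputs are hypothetical.

def carr_to_mg_h2o2_per_dl(carr_units):
    return 0.08 * carr_units

def oxidative_stress_index(ros_carr, sac_umol_hclo_per_ml):
    return ros_carr / sac_umol_hclo_per_ml

ros, sac = 140.0, 350.0                  # hypothetical ROS (Carr.U) and SAC
print(carr_to_mg_h2o2_per_dl(ros))       # 11.2 mg H2O2/dL equivalent
print(oxidative_stress_index(ros, sac))  # 0.4
```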
Elimination rates and times to reach half-maximal (T1/2) and basal (Tbasal) concentrations for glucose, insulin, NEFA, and BHBA were computed with previously described formulas 18 [the displayed formulas were not recovered; a standard version is sketched below]. In these formulas, [ta] is the concentration of the metabolite at time a (ta) and [tb] is the concentration of the metabolite at time b (tb).

Statistical Analyses

No assumptions for normality of data were made because of the small sample size. All variable concentrations were analyzed with the Mann-Whitney U-test using SPSS software s and expressed as medians. Statistical significance was declared at P < .05, and values of P between .05 and .10 were considered a trend toward significance.

Results

No statistically significant differences between the control and supplemented groups were observed for the distribution of parity (mean ± SD: 2. [...]) (Table 2), whereas during the IVGTT basal measurements, only the SAC and OSi differed, being higher and lower, respectively, in the supplemented group (Table 2), thereby indicating a decreased risk for OS. Responses to the IVGTTs are quantified in Table 3. Cows supplemented with vitamin E and selenium showed a smaller glucose AUC, lower nadir concentration, and shorter half-life. There was no difference between groups in the maximum glucose concentration during IVGTT (Fig 1A, P = .64) or in glucose Tbasal (P = .77). Similar to the changes observed in glucose, the insulin AUC, insulin minimum concentration, and insulin half-life were decreased in supplemented cows. Insulin secretion in response to glucose infusion (peak concentration) was not affected by treatment (Fig 1B, P = .29), but supplemented cows had more rapid insulin clearance (elimination rate) and a shorter Tbasal, requiring only 44% of the time required by nonsupplemented animals to reach basal insulin concentration after glucose infusion. Differences in fatty acid metabolism also were observed between groups (Fig 1C). Supplemented cows had a larger NEFA AUC, a faster NEFA elimination rate, and a decreased NEFA half-life. However, neither the peak nor nadir concentrations of NEFA were different between groups. In addition, the metabolism of ketones was similar between the 2 groups (Fig 1D), where only the nadir concentration of BHBA tended to be lower in supplemented animals (P = .086).

Discussion

In humans suffering from diabetes, OS plays a causal role in the development of IR, 7,19 decreasing insulin biosynthesis and release. 8 However, this direct relationship has hitherto not been proven in periparturient cattle, although from epidemiologic data we recently reported a significant association between markers of OS and IR in these animals. 14 OS is well known in cattle as an underlying cause of dysfunctional inflammatory and host immune responses around the time of calving, thereby increasing cows' susceptibility to health disorders. 20 Indeed, antioxidant supplementation has shown an overall beneficial effect on the health status and performance of cows. 13 OS links nutrient metabolism with inflammatory responses in transition cattle 12 and, therefore, supplementation with vitamin E and selenium precalving has the potential to alter the metabolic response of the animals to an IV infusion of glucose. Vitamin E (α-tocopherol) is a potent lipid-soluble, chain-breaking antioxidant, 21 and selenium also exerts antioxidant functions both directly and as a cofactor for selenoproteins. 22
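A sketch of these IVGTT summary measures: AUC above baseline by the trapezoidal rule, plus a first-order clearance rate and half-life (k = (ln Ca - ln Cb)/(tb - ta), T1/2 = ln 2 / k). The exact formulas of the cited method were lost in extraction, so this is the standard first-order version, with hypothetical data:

```python
# IVGTT summary measures: trapezoidal AUC above basal, a first-order
# elimination rate, and the corresponding half-life. Data are hypothetical.

import numpy as np

t = np.array([5, 10, 20, 30, 45, 60, 90], dtype=float)      # min
glucose = np.array([14.0, 11.5, 8.0, 6.0, 4.5, 3.8, 3.4])   # mmol/L, invented
baseline = 3.3                                               # mean of -10/-5 min

auc = np.trapz(glucose - baseline, t)                        # increment above basal
k = (np.log(glucose[0]) - np.log(glucose[3])) / (t[3] - t[0])  # per min
print(f"AUC = {auc:.1f} mmol/L*min, k = {100*k:.1f} %/min, "
      f"T1/2 = {np.log(2)/k:.1f} min")
```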
Hence, the parenteral administration of these 2 compounds increased the SAC of the animals (Table 2), thereby decreasing the risk for OS in the supplemented animals when they underwent IVGTT, as shown by the lower OSi values. Also, more individual variability was observed in SAC before treatment application than at IVGTT. Cows managed under identical conditions show high individual variability in their physiologic adaptation to metabolic stress around calving. 23 Yet, cows at the onset of lactation typically show decreased antioxidant capacity, 13 which could explain the decreased variability in control cows in the first week of lactation. On the other hand, supplemented cows all received the same dose of vitamin E and selenium at a similar time point, which contributes to a similar total antioxidant potential. The dose of glucose administered during the IVGTT differed from some previous studies, which employed larger 24 and smaller 25 doses than used in this study. We selected a dosage of 0.25 g glucose per kg BW to facilitate the comparison of results, because this was the same or a similar dosage to that used in the majority of previous studies. 26-28 Higher glucose tolerance was found in the supplemented animals, with lower glucose AUC and T1/2. Increased glucose elimination rates, decreased half-life and decreased AUC are thought to involve increased insulin sensitivity. 29 This assumption is further supported by the smaller insulin AUC, quicker elimination rate, and shorter T1/2 and Tbasal for insulin found in supplemented animals. Similarly, a higher insulin AUC in control animals clearing the same dose of glucose indicates a higher degree of IR. 27 In addition, differences in fatty acid metabolism were observed in the response to IVGTT in this study. In accordance with previous studies, 27,28 NEFA concentrations reached their nadir at approximately 45 minutes, representing rapid inhibition of lipolysis by insulin. 30 Supplemented cows had a more rapid NEFA elimination rate after glucose infusion (Table 3), a higher NEFA AUC, and a shorter NEFA half-life, thereby suggesting that supplemented cows had lower IR related to lipid metabolism than did nonsupplemented cows. In contrast to the NEFA response to the IVGTT, no differences in the metabolism of BHBA after glucose infusion were observed between the 2 groups. However, concentrations of NEFA and BHBA do not correlate well, 31 because the synthesis of ketone bodies does not depend only on energy balance. Therefore, the greater decrease in serum NEFA may not directly translate to a greater decrease in the concentration of BHBA. To the best of the authors' knowledge, ours is the first study to investigate the effect of supplementation with vitamin E and selenium, the most widely used antioxidants included in the diets of dairy cows, 13 on glucose tolerance during early lactation. However, 2 previous reports investigated the effect of chromium supplementation, which has some antioxidant effects in cattle, 32 on the response to IVGTT in cows. 33,34 These studies found differences in glucose elimination rates, but not in the clearance of NEFA. However, despite the limited antioxidant potential of chromium, its role in metabolism is believed to act through the glucose tolerance factor, 35 enhancing glucose uptake by cells. Therefore, it is not surprising that these studies reported improved glucose clearance, but no changes in the NEFA response to the IVGTT.
Inflammation around the time of calving has gained much attention in recent years. 36 The nuclear factor kappa B (NF-κB) pathway is a pro-inflammatory signaling pathway responsible for provoking IR. 37 This pathway can be activated by OS in cattle during times of negative energy balance. 38,39 In addition, endoplasmic reticulum stress, present in the liver of high-yielding dairy cows, 40 also activates inflammation via the NF-κB pathway. 41 Hence, the lower IR observed in supplemented cows may be a consequence of the downregulation of these pathways because of increased antioxidant capacity. However, as a consequence of the tight interplay among nutrient metabolism, OS, and inflammation in dairy cattle, 12 several other factors may play key roles in the development of IR in dairy cows, which must also be taken into consideration when designing nutritional interventions to control IR and the associated enhanced lipolysis. The use of the IVGTT to assess insulin sensitivity implies normal insulin secretion after glucose administration and assumes similar insulin secretion among animals, which may not always be the case. 1 The IVGTT, however, is still considered a good method for assessing IR in cattle given its practicality and agreement with the hyperinsulinemic euglycemic clamp, the gold standard test. 26 The major limitation of this study was the small sample size, as there were only 5 animals per study group. Nevertheless, this number was sufficient for showing statistical differences in the response to IVGTT, although the basal metabolic status of the animals was not affected by the supplementation. Animals in this study were not supplemented with any dietary antioxidants aside from the limited amount contained in preserved forages, 16 and therefore the improved responses observed in this study might also be in part because of some antioxidant deficiency during the dry period. Hence, further studies should investigate whether antioxidant therapy ameliorates the degree of IR beyond the first week postcalving, as well as the impact that antioxidant supplementation can have on the metabolic and health status of cows.

Conclusions

Cows supplemented parenterally with antioxidants (vitamin E and selenium) before calving showed improved insulin sensitivity during the first week of lactation, thereby supporting an effect of OS on the development of IR in dairy cows. Further studies should investigate the effects of different supplementation strategies as adjunct therapies to ameliorate the consequences of prolonged IR and its impact on metabolic stress in cows.
Strong CP problem and axion dark matter with small instantons

The axion mass receives a large correction from small instantons if QCD gets strongly coupled at high energies. We discuss the size of the new CP violating phases caused by the fact that the small instantons are sensitive to the UV physics. We also discuss the effects of the mass correction on the axion abundance of the Universe. Taking the small-instanton contributions into account, we propose a natural scenario of axion dark matter where the axion decay constant is as large as $10^{15\text{-}16}$ GeV. The scenario works in high-scale inflation models.

Introduction

The axion, a hypothetical light particle that couples to QCD and QED, drastically modifies physics at long distance scales. Especially, when its mass is dominated by the contributions from QCD, the vacuum is chosen to eliminate the strong CP phase, thereby solving the strong CP problem in the standard model of particle physics [1-4]. The existence of such a new degree of freedom is also motivated as a candidate for the dark matter of the Universe [5-7]. The axion can originate from various microscopic models, for example, as the Nambu-Goldstone particle associated with the Peccei-Quinn (PQ) symmetry [1-4] and also as a part of theories of quantum gravity [8-15] (see Refs. [16-22] for reviews).

An essential feature of the axion in order to solve the strong CP problem is the shift symmetry, a → a + c, which is only broken by the non-perturbative dynamics of QCD through the coupling to the instanton density in the Lagrangian. The vacuum is automatically CP conserving once this system is realized as the low energy effective theory. The shift symmetry is naturally realized through the Nambu-Goldstone mechanism of the spontaneously broken PQ symmetry. Although it sounds like a good solution to the strong CP problem, the requirement of the shift symmetry poses another question: why or how is the PQ symmetry, which is anomalous under QCD, maintained with great accuracy in the fundamental theories that UV complete the effective theory? Indeed, violation of the PQ symmetry is generally present in theories of quantum gravity [23-26]. Field theoretic model building to resolve this "quality problem" has been discussed in the literature. One approach is to regard the PQ symmetry as an accidental symmetry protected by some gauge symmetries [27-32, 59-65]. (See also Ref. [33] and Refs. [34,35].) Another interesting approach is to identify the axion as a part of gauge fields in larger space-time dimensions [12, 36-38].

Once the effective theory with approximate shift symmetry is realized, there is a relation between the axion mass $m_a$ and the decay constant $f_a$: $m_a f_a \sim (100\ {\rm MeV})^2$. The axion scenarios with this relation carry a particular name, the QCD axions, and have been distinguished from other axion-like particles which do not solve the strong CP problem. Recently, however, there have been extensive discussions on the possibility of axions heavier than the QCD ones while the strong CP problem is still solved. Such a scenario is possible if there is some UV dynamics which makes QCD get strong again at high energy. In this case, instanton configurations with small sizes can give a large contribution to the axion mass which is aligned with the low energy contributions, since they are still QCD effects.
Examples of ways to realize such UV-strong QCD are to embed the SU(3) gauge group into a larger gauge group at a high energy scale [39-42] and also to let the gluons propagate into a small extra dimension [43,44]. The enlarged parameter space of the QCD axions will be quite important for low energy axion phenomenology, axion cosmology, and also for axion astrophysics. In this paper, we consider how the small instantons affect the axion potential in the current Universe, in the early Universe, and during inflation. In particular, if the contributions from small instantons are important, CP phases in dimension six operators in the standard model effective theory cause a misalignment of the vacuum from the CP preserving one, and thus reintroduce the strong CP problem. Taking these effects into account, we discuss how large the UV contributions can be in general setups. Note that the misalignment caused by the small instantons is independent of the quality problem of the PQ symmetry, as the dimension six operators causing the problem are invariant under the PQ symmetry.

In the presence of the small instanton contributions to the axion mass, the axion abundance generated by the misalignment mechanism is modified. There are parameter regions where the axion abundance is reduced, which allows a larger axion decay constant.

The small instantons are particularly important in models where the axion arises from a gauge field in an extra dimension, such as string axions. The small instantons which stretch over the extra dimension are considered in Ref. [44], and it is found that the axion mass can be heavier than that in the conventional scenario by many orders of magnitude. However, by considering the CP phases in the higher dimensional operators, such an enhancement of the axion mass should be avoided. We find that a consistent scenario requires the size of the extra dimension, i.e., the axion decay constant, to be larger than about $10^{15\text{-}16}$ GeV, where the UV contribution to the axion mass is much smaller than the conventional low energy contribution. Although the strong CP problem can be avoided, such a light axion is cosmologically severely constrained by the isocurvature component of the density perturbation and by the overproduction of the axion via the misalignment mechanism. The small instanton effects, however, provide us with an interesting cosmological scenario in the extra-dimensional model. One can consider the possibility that the radius of the extra dimension during inflation is smaller than the current size, i.e., the inverse of the grand unification scale. Such a situation can easily be realized when the volume modulus (radion) is sufficiently light or if the radion is the inflaton itself. In this case, the QCD scale gets higher during inflation and the axion field can be stabilized near the CP preserving point without introducing the isocurvature perturbations. A small displacement caused by the CP violating small instanton effects can explain the correct abundance for the axion dark matter.

Small instantons and CP problem

In this section, we study the small instanton contribution to the axion potential by taking into account various higher dimensional terms with CP phases. We will see that such contributions generically shift the minimum of the QCD axion potential to a CP-violating position and thus reintroduce the strong CP problem. We also discuss the relation to the quality problem of the PQ symmetry.

Aligned axion potential from small instantons

We discuss contributions to the axion potential from small instantons in general models. We are particularly interested in contributions with instanton sizes much smaller than the electroweak scale, so that the vacuum expectation value of the Higgs field can be ignored. The chirality flips required to close the 't Hooft vertex can be obtained by using the Yukawa interactions, with the coupling constants $Y_u$ and $Y_d$, and loops of the Higgs lines as in Fig. 1. By using the dilute instanton gas approximation [45], one can evaluate the UV contribution to the axion potential [46-48]: $V(a) = V_{\rm QCD}(a) + V_{\rm UV}(a)$, with the second term given by a dilute-gas integral over the instanton size ρ [Eq. (1), not fully recovered; a schematic form is given below]. Here, we separated the contribution from the infrared QCD dynamics, $V_{\rm QCD}$. The integration over ρ represents that over the size modulus of the instanton solution. The dependence of the effective action on ρ arises from the quantum corrections, which are captured by the running coupling constant, $S_{\rm eff}[1/\rho] \approx 2\pi/\alpha_s[1/\rho]$. The normalization of the instanton density is evaluated as $F[\rho] \approx 10^{-3}\,(2\pi/\alpha_s(1/\rho))^6$ for a single instanton [49]. Depending on the behavior of the running gauge coupling, there can be a large contribution from the second term. The IR cut-off, $\Lambda_{\rm SM}$, can be arbitrary, as the dependence on $\Lambda_{\rm SM}$ is formally absorbed in $V_{\rm QCD}$. The UV cut-off, $\Lambda_{\rm cutoff}$, represents the scale above which we do not know the effective field theoretic description. If the integral is dominated by the region $\rho \sim 1/\Lambda_{\rm cutoff}$, the axion potential is not calculable within the effective theory, although the integral still provides an estimate. We denote the typical instanton size which provides the largest contribution to the integral as ρ in the following discussions. It is important to note that at this stage V(a) and $V_{\rm QCD}(a)$ have their minimum at the same value of a, since there is no additional CP phase in the discussion. These aligned contributions can enhance the axion mass while the strong CP problem is solved. We denote the aligned contribution, i.e., the second term, by Eq. (2) [not recovered], where $\chi_0$ is the topological susceptibility in QCD; the parameter $\epsilon$ introduced there represents the relative size of the UV contribution to the axion mass.
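The displayed dilute-gas integral did not survive extraction. A schematic form consistent with the quantities named in the text ($F[\rho]$, $S_{\rm eff}$, and the two cutoffs) would be the following; the factor $\kappa[\rho]$ standing for the Yukawa couplings and Higgs loops that close the 't Hooft vertex is our shorthand, not the paper's notation, and the normalization is only indicative:

```latex
% Schematic dilute-instanton-gas estimate of the UV part of the potential;
% kappa[rho] is a shorthand (introduced here) for the Yukawa/Higgs-loop
% insertions that close the light-quark zero modes.
V_{\rm UV}(a) \;\sim\; -\int_{1/\Lambda_{\rm cutoff}}^{1/\Lambda_{\rm SM}}
  \frac{d\rho}{\rho^{5}} \; F[\rho]\;
  e^{-S_{\rm eff}[1/\rho]} \; \kappa[\rho]\,
  \cos\!\left(\frac{a}{f_a}\right).
```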
Aligned axion potential from small instantons We discuss contributions to the axion potential from small instantons in general models. We are particularly interested in contributions with instanton sizes much smaller than the inverse of the electroweak scale, so that the vacuum expectation value of the Higgs field can be ignored. The chirality flips required to close the 't Hooft vertex can then be obtained by using the Yukawa interactions, with the coupling constants $Y_u$ and $Y_d$, and loops of Higgs lines as in Fig. 1. By using the dilute instanton gas approximation [45], one can evaluate the UV contribution to the axion potential [46][47][48],
$$V(a) \simeq V_{\rm QCD}(a) - \int_{1/\Lambda_{\rm cutoff}}^{1/\Lambda_{\rm SM}} \frac{d\rho}{\rho^5}\, F[\rho]\, \det(Y_u)\det(Y_d)\, e^{-S_{\rm eff}[1/\rho]}\, \cos\!\left(\frac{a}{f_a}\right).$$
Here, we separated out the contribution from the infrared QCD dynamics, $V_{\rm QCD}$. The integration over $\rho$ represents that over the size modulus of the instanton solution. The dependence of the effective action on $\rho$ arises from the quantum corrections, which are captured by the running coupling constant, $S_{\rm eff}[1/\rho] \approx 2\pi/\alpha_s[1/\rho]$. The normalization of the instanton density is evaluated as $F[\rho] \approx 10^{-3}\,(2\pi/\alpha_s(1/\rho))^6$ for a single instanton [49]. Depending on the behavior of the running gauge coupling, there can be a large contribution from the second term. The IR cut-off, $\Lambda_{\rm SM}$, can be arbitrary, as the dependence on $\Lambda_{\rm SM}$ is formally absorbed in $V_{\rm QCD}$. The UV cut-off, $\Lambda_{\rm cutoff}$, represents the scale above which we do not know the effective field theoretic description. If the integral is dominated by the $\rho \sim 1/\Lambda_{\rm cutoff}$ region, the axion potential is not calculable within the effective theory, although the integral still provides an estimate. We denote the typical instanton size which provides the largest contribution to the integral as $\tilde\rho$ in the following discussion. It is important to note that at this stage $V(a)$ and $V_{\rm QCD}(a)$ have their minimum at the same value of $a$, since there is no additional CP phase in the discussion. This aligned contribution can enhance the axion mass while the strong CP problem remains solved. We denote this aligned contribution, i.e., the second term, as $V_{\rm UV}(a) \equiv -\,\epsilon\,\chi_0 \cos(a/f_a)$, where $\chi_0$ is the topological susceptibility in QCD. The parameter $\epsilon$ represents the relative size of the UV contribution to the axion mass. CP violation from small instantons Any field theory involving gravity should be UV completed at a UV scale $\Lambda$, and there should be many higher-dimensional terms (c.f. [23][24][25][26]). (Footnote 1: If there were no higher-dimensional terms, a small strong CP phase would be natural, since it is rarely generated via radiative corrections. If there are, on the other hand, an O(1) phase is easily generated at the loop level, which makes the strong CP problem really a problem.) Thus, in general, we expect CP-violating terms originating from the UV physics, e.g.
$$\mathcal{L} \supset \frac{C^{ijkl}_{ud}}{\Lambda^2}\,(\bar{Q}_i u_j)(\bar{Q}_k d_l) + {\rm h.c.},$$
where $\Lambda \gtrsim \Lambda_{\rm cutoff}$ is the energy scale generating the operator, and $C^{ijkl}_{ud}$ a dimensionless coefficient. The fields $Q$, $u$, and $d$ are quark fields in the standard model with generation indices. Note that this term does not include the axion and does not violate the PQ symmetry. We expect $C^{ijkl}_{ud} = \mathcal{O}(1)$, since gravity is argued to break any global symmetry. (We discuss the case where $C$ has chirality suppressions, i.e., $C_{ud} \sim Y_u Y_d$, later.) Here (and hereafter) for concreteness we have implicitly assumed a KSVZ-like axion model [50,51], in which the quarks are not charged under the PQ symmetry, and thus (3) is allowed by the PQ symmetry.
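To see where the benchmark value $C \sim 3\times10^{9}$ used below comes from, note that without chirality suppression the operator above replaces a pair of first-generation Yukawa insertions in the 't Hooft vertex. A rough numerical check, assuming illustrative running Yukawa values (these numbers are assumptions for orientation, not taken from the paper):
$$C \sim \frac{1}{y_u\, y_d} \sim \frac{1}{(6\times10^{-6})\,(5\times10^{-5})} \sim 3\times10^{9},$$
so the absence of chirality suppression enhances the CP-violating vertex by this factor, while $C_{ud}\sim Y_u Y_d$ brings it back to $C = \mathcal{O}(1)$.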
In the case of the DFSZ axion [52,53], on the other hand, this higher-dimensional term is forbidden. Instead we can consider similar terms dressed with $H_u$, $H_d$ and $\Phi$, the up-type, down-type and PQ Higgs fields, respectively. Our conclusions do not change qualitatively in these cases. Let us consider the small-instanton contribution involving this term via, e.g., the diagram in Fig. 2. The contribution is in general not aligned with $V_{\rm QCD}(a)$. Here we neglect the CKM matrix for simplicity of notation, and we use the fact that $C^{1111}_{ud}$ is dominant, since the SM Yukawa couplings satisfy $y_u, y_d \ll y_s, y_c, y_b, y_t$. This integral has an extra $(\rho\Lambda)^{-2}$ factor compared to $V_{\rm UV}$ by dimensional analysis, which makes the UV contributions more important. Since the important integration region shifts to smaller $\rho$ than that for $V_{\rm UV}$, evaluating the misaligned contribution at $\tilde\rho$ gives a conservative estimate of the CP-violating effects. The parameter $C$ can be very large, but it can also be of $\mathcal{O}(1)$ if there are chirality suppressions in the higher-dimensional operator. On the other hand, the phase $\theta_{\rm UV}$ is $\mathcal{O}(1)$ in general. Throughout this paper we assume $\theta_{\rm UV} = \mathcal{O}(1)$. The experimental constraints from the neutron electric dipole moment (EDM), $|d_n| \lesssim 3\times10^{-26}\ e\,$cm [54,55], put an upper bound on the effective $\theta$ angle of $\bar\theta_{\rm QCD} \lesssim 2\times10^{-10}$, where $\bar\theta_{\rm QCD} = \langle a\rangle/f_a$, through the theoretical estimates $d_n = (1.52 \pm 0.71)\times10^{-16}\,\bar\theta_{\rm QCD}\ e\,$cm and $d_p = (-1.1 \pm 1.0)\times10^{-16}\,\bar\theta_{\rm QCD}\ e\,$cm [56,57]. Future storage ring experiments have sensitivities of $|d_p| \approx 10^{-29}\ e\,$cm [58], which translates to $|\bar\theta_{\rm QCD}| \sim 10^{-13}$. The condition for solving the strong CP problem can be obtained by calculating the VEV of the axion with the full potential. In particular, if $V_{\rm UV}$ dominates over the QCD potential, i.e. in the heavy axion scenario, we need roughly $C\,\theta_{\rm UV}\,(\tilde\rho\,\Lambda)^{-2} \lesssim 2\times10^{-10}$. For example, for $C \sim 3\times10^{9}$ and $\Lambda \sim M_{\rm pl}$, the instanton scale $\tilde\rho^{-1}$ should be smaller than $\mathcal{O}(10^{9})$ GeV. This condition does not depend on the size of the decay constant of the axion. The parameter region is shown in Fig. 3 in the $\epsilon$-$\tilde\rho^{-1}$ plane with $\Lambda = M_{\rm pl}$. The regions above the light gray and gray ranges, with $C\theta_{\rm UV} = 3\times10^{9}$ and $1$, respectively, are excluded by the EDM bound. The purple region may be searched for in the future. The constraints and lines relevant to the axion dark matter will be discussed in Sec. 3. From this figure, we can conclude that a heavy QCD axion, $\epsilon > 1$, must be accompanied by a large enough instanton size, $\tilde\rho \gtrsim 10^{-10}\ {\rm GeV}^{-1}$, in the case of no chirality suppressions in the higher-dimensional operators, $C \sim 3\times10^{9}$. The constraints are milder for $C \sim 1$, but it is still important to note that heavy axions cannot be realized by UV dynamics at scales higher than $10^{14}$ GeV. This fact is important in the discussion of the axion in the extra-dimensional model, or in general string axion models. Relation to the quality problem of the PQ symmetry The CP violation we discussed is present even when the PQ symmetry is exact (but anomalous), and thus known solutions to the quality problem may not work for the UV instantons. The discussion is qualitatively different depending on whether $1/\tilde\rho$ is smaller or larger than the decay constant $f_a$. In the case of $f_a \gtrsim 1/\tilde\rho$, the discussion is similar to the ordinary scenarios. In this regime, one should consider the UV contributions to the PQ breaking dynamics, including the quark fields which make the PQ symmetry anomalous. For example, let us consider a model where a VEV of a scalar field $\Phi_{\rm PQ}$ breaks the PQ symmetry spontaneously.
In the presence of a single pair of vector-like PQ quarks $Q_{\rm PQ}$ with an interaction $y_\Phi\,\Phi_{\rm PQ}\bar{Q}_{\rm PQ}Q_{\rm PQ}$, the potential for $\Phi_{\rm PQ}$ receives contributions from small instantons that are linear in $\Phi_{\rm PQ}$. The axion appears as the pseudo Nambu-Goldstone boson associated with the PQ breaking, $\langle\Phi_{\rm PQ}\rangle \sim f_a \gtrsim 1/\tilde\rho$. The linear term in $\Phi_{\rm PQ}$ from the UV instanton contributes to the axion mass. The appearance of the linear term can be avoided if $\Phi_{\rm PQ}$ is charged under some gauge symmetry, like a gauged $Z_N$ symmetry (and non-trivial PQ quark contents are also needed). For $\langle\Phi_{\rm PQ}\rangle \sim f_a \ll 1/\tilde\rho$, the problem is more serious. The reason is that the PQ field $\Phi_{\rm PQ}$ then appears in the instanton through the $f_a$ dependence of the running gauge coupling, which means the dependence is a singular one involving $\log\Phi_{\rm PQ}$, rather than the polynomial $\Phi^N_{\rm PQ}$ allowed by the $Z_N$ gauge symmetry. Such a term is no longer suppressed by the scales of higher-dimensional operators of larger dimension for large $N$, so the approach based on the $Z_N$ symmetry does not work in this case. In appendix A, a heavy axion model that avoids this UV problem is discussed. Although we have focused on the new CP violation in axion models, there should be a similar CP-violating effect in the case of the massless up quark solution discussed in Refs. [42,66]. If we take into account various CP-violating higher-dimensional terms, the up quark mass would receive an additional phase $\propto (\tilde\rho\Lambda)^{-2}$. Therefore, to generate the mass while preserving the vanishing strong CP phase, we need the instanton size to be large enough. One also needs to check whether a sufficiently large up-quark mass is realized at low energy in this case. Axion dark matter with UV instantons In this section, we study the cosmological abundance of the QCD axion, taking into account the small instantons. The decay constant of the QCD axion has a window in which the axion can have a consistent cosmological history (without fine-tuning): $10^{8}\ {\rm GeV} \lesssim f_a \lesssim f_a^{\rm max} \sim 10^{12}\ {\rm GeV}$. The lower bound comes from the duration of the neutrino burst in SN1987A [67][68][69][70]; the upper bound, $f_a^{\rm max}$, is obtained from overproduction of the QCD axion in the early Universe, i.e. from requiring that the axion dark matter not be overabundant. The axion dark matter is realized around $f_a \sim f_a^{\rm max}$ with an initial misalignment angle $|\theta_i| \sim 1$. The estimation of the abundance relies on the temperature dependence of the topological susceptibility [5][6][7]. (See Refs. [71][72][73][74][75] for recent lattice computations.) When the QCD axion mass becomes comparable to the Hubble parameter, the axion starts to oscillate around the potential minimum from its initial misalignment angle $\theta_i$. The axion number is conserved, and later the axion energy density behaves as dark matter. The temperature dependence of the axion mass determines when the axion starts to oscillate. The inclusion of the aligned small-instanton contribution changes this temperature dependence, and the estimation of the abundance becomes non-trivial. In the following, we carefully estimate the abundance of the axion in the presence of the small instantons. We assume $\bar\theta_{\rm CP} \ll 1$ and neglect the contribution of the CP-violating term for a while, i.e. we use the approximation $V(a) \simeq V_{\rm QCD}(a) - \epsilon\,\chi_0\cos(a/f_a)$. Then, the axion physical mass at the vacuum can be obtained as $m_a^2 = (1+\epsilon)\,\chi_0/f_a^2$. The abundance formula is known for $\epsilon \gg 1$ or $\epsilon \ll 1$, since we can then neglect $V_{\rm QCD}$ or $V_{\rm UV}$, respectively. With only $V_{\rm UV}$, i.e. a temperature-independent mass, we obtain the corresponding abundance formula (we use the fit in Ref. [76]); here $g_{*,{\rm osc}}$ is the relativistic degrees of freedom at the onset of oscillation. With only $V_{\rm QCD}$ we obtain [77] $\Omega_a h^2 \approx 0.35\,(\cdots)$, where the omitted factors depend on $\theta_i$ and $f_a$. The difference between the two cases comes from the temperature dependence of the potential.
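As a reference for both limiting formulas, the standard misalignment estimate proceeds as follows (a textbook sketch; numerical prefactors and the fit of Ref. [76] are omitted):
$$3H(T_{\rm osc}) \simeq m_a(T_{\rm osc}), \qquad n_a(T_{\rm osc}) \simeq \tfrac{1}{2}\, m_a(T_{\rm osc})\, f_a^2\, \theta_i^2,$$
$$\Omega_a h^2 = \frac{m_a\, n_a(T_{\rm osc})}{s(T_{\rm osc})}\,\frac{s_0\, h^2}{\rho_{\rm crit}},$$
so a temperature-independent mass (the pure $V_{\rm UV}$ case) and the steeply temperature-dependent QCD mass (the pure $V_{\rm QCD}$ case) give different $T_{\rm osc}$ and hence different scalings of the abundance with $f_a$.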
For $\epsilon \sim 1$ (more precisely, $10^{-13}\text{-}10^{-6} \lesssim \epsilon \lesssim 1$ for $f_a = 10^{8}\text{-}10^{14}$ GeV), on the other hand, the abundance gets suppressed compared with the $\epsilon \ll 1$ case, because in the presence of both terms the onset of oscillation is earlier than in the case of the QCD potential alone. To check this we solve $\ddot{a} + 3H\dot{a} + V'(a) = 0$, with $H$ the Hubble parameter in the radiation-dominated Universe, $s = \frac{2\pi^2 g_{*s}}{45}T^3$ the entropy density, and $g_*$ ($g_{*s}$) the relativistic degrees of freedom (for entropy) at the cosmic temperature $T$. We evaluate the abundance parameter defined by $\Omega_a h^2 = \frac{\rho_a[t]}{s[t]}\,\frac{s_0}{\rho_{\rm crit}}\,h^2$, with $\rho_a[t] = \frac{\dot{a}^2}{2} + V(a)$, and $s_0$ and $\rho_{\rm crit}$ the current entropy density and critical density, respectively. When $T \ll T_c$, we can easily show that this quantity is conserved. We should also compare this with the observed dark matter abundance, $\Omega_{\rm DM}h^2 \approx 0.12$ [78]. The numerical result (where we approximate the potential by the quadratic term), obtained by varying $\epsilon$, is shown in Fig. 4 by the blue, green and red points, with $f_a = 10^{16,15,14}$ GeV from top to bottom. For later convenience, we find a fitting formula for the abundance, where $T_{\rm fit,osc}$ is obtained by equating $m_{\rm fit} = H$, with $m_{\rm fit}^2 \equiv (C_{{\rm fit},3}\,\epsilon\,\chi(T) + \chi_0)/f_a^2$. Here $C_{{\rm fit},i} \approx \{1.03, 46.2, 5.35\}$ is obtained by fitting the numerical data (red points in Fig. 4) at $f_a = 10^{14}$ GeV. This formula also agrees well with the numerical results for $f_a = 10^{15,16}$ GeV. We display in Fig. 4 the analytical results for smaller $f_a$, for which solving the equation of motion directly is computationally costly. Importantly, for $10^{-13} \lesssim \epsilon \lesssim 1$, the abundance is suppressed. We overlay the maximal value of the decay constant allowed by the axion abundance as a function of $\epsilon$ in Fig. 3 (red solid line). We took $\theta_i = 1$ in the figure. The constraints from X-ray observations are translated from Ref. [79], assuming dominant axion dark matter (whose photon coupling is induced through the meson-axion mixing) from the misalignment mechanism, in the blue shaded region. The mass corresponds to $m_a \sim 5$ keV around the boundary. The upper bound of the axion window, or the natural decay constant for the axion dark matter, is thus modified by varying $\epsilon$. In concrete models, $\epsilon$, $\tilde\rho^{-1}$ and $f_a^{\rm max}$ are related. For example, in the model of Appendix A, $\tilde\rho^{-1} \sim f_a$. In this model, the heavy axion ($\epsilon > 1$) is difficult to realize as the dominant dark matter from misalignment without reintroducing the strong CP problem, unless $C$ has chirality suppressions. For heavy axion dark matter, where we should have $f_a \sim 1/\tilde\rho \lesssim 10^{9}$ GeV, we need to consider other mechanisms to produce the correct abundance, e.g. [80][81][82][83][84][85][86][87][88][89]. This scenario needs to evade the SN1987A constraint, $f_a \sim 1/\tilde\rho \gtrsim 10^{8}$ GeV, and so heavy axion dark matter is likely to be fully tested in future EDM experiments. Before ending this section, let us comment on $-1 < \epsilon < 0$. In this case the axion starts to oscillate towards $a/f_a \approx \pi$ in the early Universe, and when the IR contribution dominates, the oscillation is around $a \approx 0$. We do not consider this possibility because, from the phenomenological side, it would cause a serious domain wall problem. If $\epsilon < -1$, it predicts $\bar\theta_{\rm CP} = -\pi$, which is excluded by observation. In a wide class of models, we expect the sign of the UV contribution to be the same as the IR one, since this is ensured by the positivity of the path-integral measure of QCD [90]. Small instantons in the extra-dimension scenarios In certain models, the integral (1) is dominated around the UV cutoff, i.e. $\tilde\rho \sim 1/\Lambda_{\rm cutoff}$. Once this is the case, the CP-violating contribution (5) is also UV dominated and, assuming $\Lambda_{\rm cutoff} \ll \Lambda$, its relative size is controlled by $(\Lambda_{\rm cutoff}/\Lambda)^2$.
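Concretely, combining the $(\tilde\rho\Lambda)^{-2}$ scaling found in Sec. 2 with $\tilde\rho \sim 1/\Lambda_{\rm cutoff}$, the induced strong CP phase may be estimated as (a hedged sketch; the precise prefactor requires the full expression (5), which was lost in extraction):
$$\bar\theta_{\rm CP} \sim \frac{\epsilon}{1+\epsilon}\, C\,\theta_{\rm UV}\left(\frac{\Lambda_{\rm cutoff}}{\Lambda}\right)^{2} \lesssim 2\times10^{-10},$$
where the factor $\epsilon/(1+\epsilon)$ accounts for the dilution of the misalignment by the total axion mass and approaches unity in the heavy axion regime.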
Let us first consider the case where the higher-dimensional operators have no chirality suppression. In this case, $C \sim 3\times10^{9}$ gives a large enhancement. For $\Lambda_{\rm cutoff}/\Lambda \gtrsim y_i$, the dominant contribution comes from diagrams with multiple insertions of the higher-dimensional operators. In particular, with $\Lambda_{\rm cutoff} \sim \Lambda$ we now have $C \sim 1/(y_u y_d y_s y_c y_t y_b) \sim 10^{18}$ and $\theta_{\rm UV} = \mathcal{O}(1)$, and to evade the EDM constraint we need $\epsilon\, C\,\theta_{\rm UV} \lesssim 2\times10^{-10}$. To be more concrete, let us study the model where the axion originates from a gauge field in an extra dimension. The model has the nice feature of protecting the PQ symmetry by gauge invariance and locality, and it captures the essential features of axions in string theories. A simple realization is possible in a five-dimensional model with the fifth dimension compactified on an orbifold $S^1/Z_2$ [91]. The gluon lives in the five-dimensional bulk of size $R$, and the axion is identified with the Wilson line along the fifth direction of a $U(1)$ gauge field. The cut-off scale of the theory is identified with the 5d Planck scale, if there are no other interactions which get strong below this scale. We consider the case where the quarks and the Higgs field live on an orbifold fixed point. The PQ symmetry is identified as the shift symmetry of the $A_5$ component of the $U(1)_{\rm PQ}$ gauge field on the boundary. While it is protected by the gauge symmetry and locality, the mixed Chern-Simons term between $U(1)_{\rm PQ}$ and the gluons can give the anomalous coupling between the axion and the gluons. This setup provides a very simple origin of the axion (i.e. a string-like axion). In this model, the axion has a decay constant $f_a \sim 1/R$. On the brane, we have "Planck scale" suppressed terms similar to (3). The aligned contribution is studied in Ref. [44], where the 5d instanton solution is found to contribute with an effective action involving a function $\xi$, which increases towards $\sim 0.35$ as $R/\rho \to R\Lambda_{\rm cutoff}$; its form can be found in [44]. The beta function coefficient, $b_0$, and the coupling constant, $\alpha_s^{\rm SM}$, are given by $b_0 = 7$ and the one-loop running of $1/\alpha_s^{\rm SM}$ with $\alpha_{s,\rm EW} = 0.118$ and $v_{\rm EW} = 100$ GeV. The suppression of $S_{\rm eff}$ at large $1/\rho$ makes the integral dominated around $\rho \sim 1/\Lambda_{\rm cutoff}$. Here, the fundamental 5d gauge coupling, $g_5^2$ (of mass dimension $-1$), is matched to the SM gauge coupling at the compactification scale. Since the gauge interaction comes from a higher-dimensional term, the theory gets strong at a scale $\sim 24\pi^3/g_5^2$. Considering a diagram as in Fig. 2, we obtain the corresponding CP-violating contribution as an integral over the instanton size. By performing this integral numerically, we confirm the previous general argument to good precision with $C \sim 10^{9}$, by taking the ratio of Eq. (33) to Eq. (29) with $\Lambda \sim \Lambda_{\rm cutoff}$. In this model, however, the dominant contribution is from the term corresponding to (23). This is again consistent with our general argument. In the case of $\Lambda \sim \Lambda_{\rm cutoff}$, however, it is natural to assume that there are chirality suppressions in the higher-dimensional terms, so that the small coupling constants, such as $y_u$ and $y_d$, are stable under radiative corrections. Then (3) takes the chirality-suppressed form, $C^{ijkl}_{ud} \sim (Y_u)^{ij}(Y_d)^{kl}$, and we obtain a correspondingly suppressed CP-violating contribution. Here we introduce $\Lambda_{\rm eff}$ for later convenience; it satisfies $\Lambda_{\rm eff} \sim \Lambda \gtrsim \Lambda_{\rm cutoff}$ and includes $|C_{ud}|$ as well as the relative numerical uncertainty compared with the integral in Eq. (29). This contribution may be suppressed compared with the aligned instanton contribution by $(\Lambda_{\rm cutoff}/\Lambda_{\rm eff})^2$. (Figure caption: the parameter space assuming $\Lambda_{\rm eff} = \Lambda = \Lambda_{\rm cutoff}$; the gray (purple) region is the bound (future reach) from nucleon EDM experiments, assuming $\theta_{\rm UV} \sim 1$.) Interestingly, with $1/R \gtrsim 10^{15\text{-}16}$ GeV we have an allowed region.
This is because in this setup $g_5^2\,\Lambda_{\rm cutoff} \propto (M_{\rm pl}^2 R^2)^{1/3}\,\alpha_s^{\rm SM}[1/R]$, which makes the gauge interaction around the cut-off weaker for smaller $R$. In this regime, the axion mass is dominated by the IR QCD contribution, and the low-energy axion physics is the same as the conventional one. Cosmology of extra-dimensional axions The scenario of the axion from an extra dimension is a natural possibility for the solution of the strong CP problem. We have seen in the previous section that the inverse radius of the extra dimension is required to be above the scale of grand unified theories (GUT), $\sim 10^{15}$ GeV, in order to avoid yet another strong CP problem from the UV instantons. The axion decay constant $f_a$ is thus also required to be above the GUT scale. In other words, we need the gauge interaction to be weakly coupled up to the 5d Planck scale also for solving the strong CP problem. This sounds quite natural in string theories, or in general in theories of quantum gravity, where the typical scale is the GUT or the Planck scale. Cosmologically, however, it implies that the axion has an overproduction problem if we do not allow a fine-tuning of the initial misalignment angle $\theta_i$. In the extra-dimensional scenario, we find that there is a simple possibility to avoid the overproduction and realize the axion dark matter naturally with the help of the UV instantons. We mainly consider the case where the higher-dimensional operators have chirality suppressions. The case with no chirality suppression will also be discussed later. Natural axion dark matter In the extra-dimensional scenario, there is a modulus field, the radion, representing the size of the extra dimension. In the current Universe, the radion field $r$ is stabilized at $r = R$. By saying that the radion is a dynamical field, we implicitly assume that its mass, $m_r^2 \equiv V''[r]$, is not extremely heavy. We will come back to the dynamics of the radion in Sec. 5.2. It is then possible that during inflation $r$ is displaced from its current location. This changes the coefficient of the 4d Einstein-Hilbert term, which scales as $\Lambda^3 r$. In the following we perform a Weyl transformation to move to the Einstein frame, so that the coefficient of the Ricci scalar is always $M_{\rm pl}^2$. Instead, the various zero-temperature potentials obtain an additional factor of $M_{\rm pl}^4/(\Lambda^3 r)^2$. For instance, the axion potential from QCD becomes $\tilde V_{\rm QCD}$, with a correspondingly rescaled susceptibility $\tilde\chi_{\rm QCD}[r]$; here we use $\Lambda_{\rm QCD}[R] \sim 400$ MeV. We emphasize that $g_5^2$ is fixed by (32) once $r = R$ is given. This form is a good approximation when the QCD scale, $\Lambda_{\rm QCD}[r]$, is higher than the electroweak scale. The chirality flips are supplied by the Yukawa interactions, whereas the VEV of the Higgs field is induced by the chiral symmetry breaking of $SU(6)\times SU(6) \to SU(6)$. Due to the top quark condensation, the Higgs field obtains an expectation value $\langle H\rangle_r \sim \Lambda_{\rm QCD}[r]$, where $\langle\,\cdot\,\rangle_r$ denotes the expectation value in a spatially homogeneous radion background $r$. More precisely, there could be CP-violating effects in the IR QCD contribution not only via the higher-dimensional operators but also from the SM interactions through the CKM phase, e.g. [56]. The latter contribution is estimated to be at most comparable to the QCD potential in Eq. (39). Although this could be important in some cases, we are particularly interested in the case where $\tilde V_{\rm QCD}$ is not the dominant contribution, and thus these effects can be ignored in the most successful parameter region. One should keep in mind that there can be significant corrections from weak interactions when $\tilde V_{\rm QCD}$ is important.
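The rescaling logic can be made explicit as follows (a sketch under the assumption, suggested by the $M_{\rm pl}^4/(\Lambda^3 r)^2$ factor above, that the 4d Planck mass scales linearly with the radius, $M_{\rm pl}^2(r) \sim \Lambda^3 r$):
$$g^{E}_{\mu\nu} = \frac{M_{\rm pl}^2(r)}{M_{\rm pl}^2}\, g_{\mu\nu} \quad\Longrightarrow\quad \tilde V = \left(\frac{M_{\rm pl}^2}{M_{\rm pl}^2(r)}\right)^{2} V = \left(\frac{M_{\rm pl}^2}{\Lambda^3 r}\right)^{2} V,$$
so shrinking the radius during inflation ($r < R$) enhances every zero-temperature potential in the Einstein frame, on top of the change of the instanton actions through $\alpha_s[1/r]$.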
On the contrary, the CP-violating small-instanton contribution is estimated analogously, with a rescaled susceptibility $\tilde\chi_{\rm CPV}[r]$, and similarly we get the aligned contribution, $\tilde\chi_{\rm aligned}[r]$. These contributions are shown in Fig. 6, where we fix $1/R = 3\times10^{15}$ GeV. First, we can see that all the contributions increase with increasing $1/r$. This is because the coupling $\alpha_s[1/r] \propto 1/r$ increases, so the exponential suppressions in all the instanton contributions are alleviated (see Refs. [83,[91][92][93][94][95] for stronger QCD, and Ref. [96] for stronger small instantons from enhanced Yukawa couplings). Second, the CPV contribution as well as the aligned contribution increase faster than the IR QCD contribution does. This is because the exponential suppression in $\tilde\chi_{\rm aligned}$ or $\tilde\chi_{\rm CPV}$ is controlled by $\alpha_s[1/r]$ at the cutoff scale. (Here we neglect the second and third terms in (30) for the analytic argument, since they are irrelevant in the weakly coupled region required to satisfy the EDM bound.) At larger $1/r$, the former increases faster. This feature is important in our mechanism: if $r = r_{\rm inf}$ during inflation is smaller than $R$, one may obtain a UV instanton contribution dominating over the IR contribution. Furthermore, the axion mass can easily satisfy $m_a[r_{\rm inf}] \gtrsim H_{\rm inf}$, with $H_{\rm inf}$ being the Hubble parameter during inflation. We find that the axion is then stabilized at a CP-violating position, $\theta_{r_{\rm inf}}$. After the inflation, the radion either rapidly or slowly settles into the minimum of the potential [97,98]. In any case, since the IR QCD contribution is absent due to the finite-temperature effects from inflaton decays (assuming the SM radiation temperature is higher than the QCD scale), $\theta_{r_{\rm inf}}$ is kept intact after inflation until the temperature decreases to around the QCD scale. This means that the initial misalignment angle is $\theta_i \simeq \theta_{r_{\rm inf}}$. If the chirality suppression were absent in the higher-dimensional operators, this mechanism would give an initial misalignment of order $\theta_{\rm UV} \sim 1$, and the axion would be overproduced. On the other hand, with chirality suppressions, we find $\theta_i$ suppressed by $(\Lambda_{\rm cutoff}/\Lambda_{\rm eff})^2$. Consequently, the axion dark matter can be explained by using Eq. (16) in a wide range of parameters satisfying $H_{\rm inf} \lesssim m_a[r_{\rm inf}]$ and UV-instanton dominance. Although we need a mild tuning of $\Lambda_{\rm cutoff}/\Lambda_{\rm eff}$, $\Lambda_{\rm cutoff}/\Lambda_{\rm eff} < 1$ may anyway be needed for a weakly coupled theory. The allowed region in Fig. 6 is consistent with the upper bound $H_{\rm inf} \lesssim 6\times10^{13}$ GeV (50) from the constraint on the tensor-to-scalar ratio [99] for $1/r_{\rm inf} \lesssim 10^{16}$ GeV. On the other hand, the region $1/r_{\rm inf} \gtrsim 10^{17}$ GeV should be avoided, since there the potential energy from QCD would exceed that of the inflaton during inflation. Dynamics of r and parameter regions Let us consider the radion mass during inflation. We may take the radion to be nearly massless in 5d by assuming that the brane position is not strongly fixed. The radion kinetic term in the Einstein frame is of the standard logarithmic form (e.g. [100]), so the radion couples to other fields through Planck-suppressed operators once the kinetic term is canonically normalized. In any case, there is a radiatively induced mass after the compactification. This mass may be dominated by the bulk gauge/gravity interactions [100], $m_r^2 \sim 1/(16\pi^2 R^4 M_{\rm pl}^2)$, which sets the lower bound of the radion mass in the absence of tuning. In addition, a brane-localized potential may give a heavier mass to the radion. In the following we take $m_r^2$ to be arbitrary, to check which mass range of the radion is consistent with the previous discussion. While we implicitly assume that the inflaton field driving inflation is not the radion itself, it is possible to identify these two fields, as we briefly mention in Sec. 5.3. During inflation, the radion field value may be different from $R$.
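For orientation, evaluating this radiative lower bound at the benchmark radius used above is plain arithmetic on the quoted formula (taking the reduced Planck mass $M_{\rm pl} \simeq 2.4\times10^{18}$ GeV; no additional model input):
$$m_r \gtrsim \frac{1}{4\pi R^2 M_{\rm pl}} \simeq 3\times10^{11}\ {\rm GeV}\times\left(\frac{1/R}{3\times10^{15}\ {\rm GeV}}\right)^{2},$$
and a heavier brane-localized contribution can raise this further.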
In general, the radion acquires a run-away contribution to its potential during inflation, $V_{\rm runaway} \sim \left(M_{\rm pl}^2/(r\Lambda^3)\right)^2 H_{\rm inf}^2 M_{\rm pl}^2$, from the Weyl rescaling of the inflaton energy density. This biases $r$ to a larger value. The localized radion potential on the brane may involve both the inflaton and the radion. This can give an interaction between the radion and the inflaton. We note that there is also a contribution from the QCD instantons to the radion mass squared, $\delta m_r^2[r] \sim (\tilde\chi_{\rm aligned} + \tilde\chi_{\rm QCD})/M_{\rm pl}^2$, which is always subdominant to the Hubble-induced mass if the QCD potential is negligible compared with the inflaton energy density. To be generic, we consider a radion-inflaton potential involving a Hubble-induced interaction for the canonically normalized radion, $A \equiv \sqrt{3/2}\,M_{\rm pl}\log(r/R)$, where $C_{\rm inf}$ denotes the interaction strength between the inflaton and $r$ in 4d Planck units. This should be a good description when $r_{\rm inf}$ is not deviated too much from $R$. Then we obtain the displaced minimum; this gives $r_{\rm inf} < R$ if $C_{\rm inf} > 0$. Comparing with Eq. (52), the inflation scale turns out to be in the range $H_{\rm inf} \gtrsim 10^{10}$ GeV, which is consistent with Eq. (50). For $r_{\rm inf}$ much larger or smaller than $R$, the expansion in (53) is invalid; one should instead define $A$ with $R$ replaced by $r_{\rm inf}$ and conduct a similar discussion. The same lower bound on $H_{\rm inf}$ is obtained with $1/R$ replaced by $1/r_{\rm inf}$. Notice that we can only have $1/R \gtrsim 10^{15}$ GeV to avoid the strong CP problem. From Fig. 6, we find that with $1/r_{\rm inf} \sim [1.5\text{-}3]\times10^{16}$ GeV we can satisfy (46). For larger $1/r_{\rm inf}$, the UV instanton contribution requires an inflation scale above the upper bound (50). Consequently, we find that our mechanism works for high-scale inflation. To explain the parameter region, we display $\theta_i$; these values are calculated by using Eqs. (29) and (37), multiplied by $M_{\rm pl}^4/(r\Lambda_{\rm cutoff}^3)^2$, and Eq. (40). We also colored the range above 1%, which may account for the axion dark matter with $\Lambda_{\rm cutoff}/\Lambda_{\rm eff} \lesssim \mathcal{O}(10)\%$. Above a critical point, $\theta_i$ approaches a constant independent of $r_{\rm inf}$ and $1/R$ when $1/R \lesssim 10^{16}$ GeV. This $r_{\rm inf}$-insensitive region is what we have been focusing on. Below the critical point there is also a tiny viable parameter region. In the lower panel we present the corresponding $\tilde\chi_{\rm aligned}$ and $\tilde\chi_{\rm QCD}$ by red solid and dashed lines, respectively. We can see that $m_a^{\rm CS}[r_{\rm inf}]$ can be larger than $\sim 10^{10}$ GeV and satisfy both (46) and (56). This region may be searched for in future EDM experiments and in measurements of the tensor-to-scalar ratio. If the scale is linked to the proton decay operators, we may further have implications for the experiments searching for proton decay (see the next section). When $1/R \gtrsim 2\times10^{16}$ GeV, on the other hand, $\theta_i$ and $m_a^{\rm CS}[r_{\rm inf}]$ are too small for our mechanism to work. However, we may have another mechanism to explain the axion dark matter. It was shown that if inflation lasts long enough, $\theta_i$ approaches the Bunch-Davies distribution [101,102]. In particular, when $(\tilde\chi_{\rm QCD} + \tilde\chi_{\rm aligned})^{1/4} \sim H_{\rm inf}$, we may again obtain the dominant axion dark matter. However, since $m_a^{\rm CS}[r_{\rm inf}] \ll H_{\rm inf}$ in this case, the axion is almost massless during inflation. There is then a constraint on the axion quantum fluctuations from the isocurvature component of the density perturbation [99], which sets an upper bound on the inflation scale; this can be satisfied with $C_{\rm inf} \gtrsim 100$. We have checked that when $1/R \lesssim 10^{17}$ GeV there exists $r_{\rm inf}$ satisfying $(\tilde\chi_{\rm QCD} + \tilde\chi_{\rm aligned})^{1/4} \sim H_{\rm inf} \sim 10^{9}$ GeV. If $C_{\rm inf}$ is not extremely large, the scenario may be tested by future CMB observations of isocurvature perturbations. Lastly, let us comment on the case without chirality suppressions.
In this case, we can still obtain the axion dark matter similarly to the discussion here. However, the inflation scale is extremely low, $H_{\rm inf} \lesssim 1$ GeV (see Fig. 6), and the radion mass needs to be very small, which may require a certain tuning. More unified pictures More minimally, the radion may be the inflaton itself. In this case it must be away from the potential minimum, $r = R$, during inflation. We need to make the radion potential flat enough around $r \sim 1/\Lambda_{\rm cutoff}$ to drive hilltop inflation. This is always possible if we can write down a general localized potential on a brane. In this case, if we can take the potential arbitrarily flat, in principle we can have an arbitrarily small $H_{\rm inf}$. An interesting question is whether we can build a GUT model broken at a scale of $1/R$. Let us consider a simple GUT gauge group in the bulk, with the matter fields localized on a brane. We expect a not too different size of the small-instanton contribution from that discussed in the main part, since the exponential term has the exponent $S_{\rm eff} \sim 2\pi/\alpha_s \approx 2\pi/\alpha_{\rm GUT} \approx 8\pi^3 R/g_5^2$; in GUTs there can also be CP violation feeding into the axion potential even in the renormalizable theory (c.f. [47]). Whether the axion dark matter scenario works would depend on the details of the GUT models. Conclusions The axion window, $10^{8}\ {\rm GeV} \lesssim f_a \lesssim 10^{12}\ {\rm GeV}$, has been discussed as the allowed region for QCD axion models. The upper bound, $10^{12}$ GeV, is set by cosmological considerations: the misalignment of the axion value in the early Universe produces axion energy density at later times in the form of oscillations about the true minimum of the potential. This upper bound has posed a theoretical challenge, namely to lower the decay constant much below the scale of grand unification or quantum gravity. The challenge should, however, be considered carefully. In the UV physics, there can be significant unknown contributions to the axion mass which may be aligned with the low-energy QCD contributions, so that the strong CP problem is still solved. Also, the UV physics may modify the axion potential in the early Universe, so that the upper bound itself is not reliable. We discussed that the UV contribution, if it is dominated by high-scale physics such as the Planck scale, is severely constrained by a new CP violation caused by the combination of instantons and higher-dimensional operators. Especially in models where the axion arises from a gauge field in an extra dimension, such as string axions, the new CP problem excludes the possibility that the UV contributions overwhelm the ordinary low-energy QCD contributions. A consistent scenario is only possible if the inverse size of the extra dimension, i.e., the axion decay constant, is larger than about $10^{15\text{-}16}$ GeV, which is beyond the axion window. It is, however, important to realize that the UV contribution can be much larger during cosmological inflation, as the size of the extra dimension can be different from the current one. By considering the enhancement of the QCD coupling during inflation, the minimum of the axion potential is located near the current minimum in a wide range of parameters, so that a small misalignment angle is realized. Even for a large decay constant, such as $f_a \sim 10^{15\text{-}16}$ GeV, the axion abundance can naturally be that of the dark matter of the Universe while the strong CP problem is still solved. This discussion opens up the possibility of natural string or GUT axion scenarios in which the inflationary dynamics is tied to the size of the compactified directions.
The upper bound on the axion window should thus not be taken too seriously in string models. A A model of heavy axion from accidental PQ symmetry To have a closer look at the quality problem raised here, let us build a new kind of heavy axion model and discuss the CP violation. We consider again a $Z_N$ (gauge) symmetry to solve the ordinary quality problem and "define" the PQ symmetry. We consider $N$ fundamental quark pairs, $\bar{Q}^a_{\rm PQ}, Q^a_{\rm PQ}$, with the interaction $y^{(a)}_\Phi\,\Phi_{\rm PQ}\bar{Q}^a_{\rm PQ}Q^a_{\rm PQ}$. Under $Z_N$, $\Phi^*_{\rm PQ}$ and $\bar{Q}^a_{\rm PQ}$ transform with a phase of $-2\pi/N$, while $Q_{\rm PQ}$ is a singlet. The quark masses are assumed to be around the PQ scale $f_a$, i.e. $y^{(a)}_\Phi \sim 1$. In addition, we introduce $N_\phi$ fundamental colored scalars, $\phi_i$, around the mass scale $m_\phi$. Then the integral in Eq. (1) can be split into parts according to the instanton size. If the number of scalars, $N_\phi$, is large enough, the gauge coupling is no longer asymptotically free. Then we obtain $\exp[-S^{(1,2)}_{\rm eff}] \propto \rho^{b_{1,2}}$, with $b_1 = b_0 - N_\phi/6$ and $b_2 = b_0 - N_\phi/6 - 2N/3$. Neglecting the $\rho$ dependence in $F^{(i)}[\rho]$, the integral may be dominated around $\rho \sim 1/\Lambda_{\rm SM}$, $\rho \sim 1/f_a$, or $\rho \sim 1/\Lambda_{\rm cutoff}$, depending on $N_\phi$ and $N$. The first case is not at all problematic, because $S_{\rm eff}[1/\rho] \approx 2\pi/\alpha_s^{\rm SM}[1/\rho]$, which gives just the suppressed SM contribution. The third case may induce an additional potential for the axion, but it can be suppressed by assuming a large $N$, as in the ordinary solution to the quality problem, i.e. in this case the term is suppressed by $f_a^N/\Lambda^{N-4}_{\rm cutoff}$. This also means that with increasing $N$, case 3 approaches case 2, since the integral in (61) tends to be dominated in the IR. In case 2, where the integral is dominated at $\rho \sim 1/f_a$, an additional axion mass may be generated. We then get the small-instanton contribution, which is enhanced if $-b_1 + 4 \geq 0$. For instance, with $N_\phi = 60$ and $f_a = 8\times10^{8}$ GeV, we get $\epsilon \sim 10^{5}$ and obtain the heavy axion. Here $\alpha_s[f_a] \sim 0.12$. We do not need to worry about the UV contribution, since we can take $N$ large enough to suppress it. But we should make sure that some UV completion appears before the coupling blows up (integrating it out may induce another small-instanton effect, but again it is suppressed as long as $N$ is large). Then (8) suggests $1/\tilde\rho \sim f_a \lesssim 10^{10}$ GeV when $\epsilon \gtrsim 1$. The parameter region of the axion dark matter in this model is shown by the red line in Fig. 3.
2021-03-17T01:16:28.968Z
2021-03-16T00:00:00.000
{ "year": 2021, "sha1": "40111f5a656c234f8017217a105c1664a32912b3", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP07(2021)078.pdf", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "40111f5a656c234f8017217a105c1664a32912b3", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
104372278
pes2o/s2orc
v3-fos-license
Determination of Dissolved Iron Redox Species in Freshwater Sediment using DGT Technique Coupled to BDS In this work we have developed a novel method for the determination of iron redox species by the use of the diffusive gradients in thin-film (DGT) technique coupled to photothermal beam deflection spectroscopy (BDS). The combination of both methods achieved a low limit of detection (LOD) of 0.14 μM for Fe(II) ions. The total Fe concentration determined in the Vrtojbica river sediment (Slovenia, Rožna Dolina, 5000 Nova Gorica) was 49.3 μg L–1. The Fe(II) and Fe(III) concentrations amounted to 12.8 μg L–1 and 39.9 μg L–1, respectively. Such an approach opens new opportunities for monitoring the content of iron species in natural waters and sediments, and provides highly sensitive chemical analysis and an accurate qualitative and quantitative characterization of the materials under study. Introduction Metals in trace amounts are natural components of the environment, but at high concentrations they can become toxic to living organisms, since they act as conservative pollutants. All trace elements (including iron, Fe) that are essential for supporting various life processes have a fairly narrow "concentration window" between their biogenic and toxic levels. Iron is a vital constituent of plant life, since it is essential for photosynthetic and respiratory electron transport, nitrate reduction, chlorophyll synthesis, and detoxification of reactive oxygen species. 1 At low concentrations, Fe plays an important role in metabolic and fermentation processes, as an enzyme activator, stabilizer and functional component of proteins, and may be limiting for the growth of organisms. Its redox state also influences its availability for uptake. Human populations in areas contaminated by iron and other heavy metals could be significantly exposed to these contaminants due to their bioaccumulation properties. They can accumulate in bone, hair and in some soft tissues, such as the liver, kidney and lungs. Prolonged exposure and high concentration levels can lead to heart disease, the development of cancer, as well as other complications such as arthritis, diabetes or liver disease. 2 As a result of these health concerns, various methods have been developed for the determination of iron concentration in environmental samples, including UV-Vis spectrophotometry, 3 atomic absorption spectrometry, 4 ion chromatography 5 and high-performance liquid chromatography. 6 Unfortunately, information about the bioavailable fraction of its redox species is very difficult to measure and is in most cases lacking, although it is very important for understanding Fe toxicity. 7,8 This is partly due to complex Fe geochemistry; either of the two redox states (Fe(II), Fe(III)) may be present in various complexes and size fractions (e.g. as truly dissolved, in soluble coordination complexes with inorganic and organic ligands, or in a variety of colloidal and/or particulate forms). Investigation of the fractions accessible to biota (bioavailable) is often hampered by their extremely low environmental concentrations, which requires the use of contamination-prone detection methods (e.g. voltammetry and potentiometry). [9][10][11] Studying Fe cycling in the environment is further complicated because the distribution of its chemical species often changes during sampling and storage.
Since the above methods are not sensitive enough to satisfy the requirements associated with the detection of ultra-trace amounts of Fe, there is a need to develop new sensitive techniques that provide reliable measurement of Fe redox species in natural environments. The diffusive gradients in thin-film (DGT) technique has been increasingly used for monitoring of environmental pollution due to its robustness, versatility, precision and capacity for pre-concentrating trace-level metal pollutants. In the uptake process, metals diffuse from natural waters through the diffusive layer to the binding layer (commonly Chelex-100 resin), which is selective for transition metals and their species, 12,13 such as Fe(II) and Fe(III). It is important to point out that an advantage of the DGT technique is its capability of pre-concentrating Fe species from the dissolved phase. It samples the labile fraction passively, i.e. without external pressures, sample manipulation, transport, derivatization, etc. It also provides a time-averaged measure of environmental species concentrations over the deployment time. In contrast to the above enumerated methods, optothermal methods provide high-sensitivity measurements for spectroscopic characterization and detection of low-absorption transparent samples. [14][15][16][17] The high sensitivity of the optothermal methods has already been repeatedly improved by combining them with other methods. [18][19][20][21][22] In this work, the detection of iron species strongly bound in the resin gel was performed by photothermal beam deflection spectroscopy (BDS). In the theoretical picture of the coupled DGT-BDS method, an intensity-modulated beam of light (the excitation beam, EB) illuminates the absorbing sample containing iron species. As a result of nonradiative deexcitation processes, thermal waves are generated. They diffuse into the sample and the adjacent medium, inducing thermal oscillations (TOs), the so-called temperature field, which causes an intensity change of another light beam (the probe beam, PB) passing through the medium adjacent to the sample and grazing its surface. 23,24 The intensity changes are correlated with the concentration of iron bound in the examined gel. 25 The BDS technique is thus expected to provide highly sensitive chemical analysis, to be non-invasive, and to preserve the optical and structural characteristics of the sample, thereby offering new possibilities for the determination of iron species in natural water environments. The goal of this work was therefore to couple the DGT and BDS methods, and to determine the concentrations of dissolved Fe redox species, as well as the amount of dissolved total Fe, in river sediments. 1. Solutions and Reagents The solution of 3 mM 1,10-phenanthroline (PHN) was prepared by adding 2.61 g of PHN (Merck) to 5.0 mL of 6 M HCl and dissolving both in 500 mL of double-deionized water (18 MΩ cm, NANOPURE), then diluting 10-fold in 100 mL flasks. The 6 M hydrochloric acid (HCl) solution was prepared by diluting 5.9 mL of 32% HCl (Sigma-Aldrich) to 10 mL with double-deionized water. The solution of 5.1 mM L-ascorbic acid (Sigma-Aldrich) was prepared in a 100 mL flask by dissolving 9 mg of solid L-ascorbic acid in 0.1 M acetic acid. The 0.1 M acetic acid solution was prepared by diluting 0.6 mL of 99.8% acetic acid (Merck) to 100 mL with double-deionized water. Working solutions of Fe(II) and Fe(III) for constructing the calibration curves were prepared at a series of concentrations. All reagents and solvents were used as purchased without further purification.
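As a quick consistency check of the stated preparations (assuming the anhydrous molar mass of 1,10-phenanthroline, $M \approx 198.2$ g mol$^{-1}$, and a stock concentration of about 10.2 M for 32% HCl; both values are assumptions, not taken from this paper):
$$c_{\rm PHN} = \frac{2.61\ {\rm g}/198.2\ {\rm g\,mol^{-1}}}{0.500\ {\rm L}} \approx 26\ {\rm mM} \ \xrightarrow{\ 1:10\ }\ \approx 2.6\ {\rm mM},$$
$$c_{\rm HCl} = \frac{5.9\ {\rm mL}\times 10.2\ {\rm M}}{10\ {\rm mL}} \approx 6\ {\rm M},$$
consistent with the nominal 3 mM PHN and 6 M HCl concentrations quoted above.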
Preparation of DGT The procedure for DGT probe preparation is described in depth elsewhere. 26 Briefly, the probe base was overlaid in the following order with 1) ground Chelex-100 resin gel, 2) polyacrylamide diffusive gel (APA, 0.8 mm final thickness) and 3) an acid-precleaned 0.45 µm HVLP filter (Millipore). The layers were secured by the probe cover with a fixed-area window for diffusion (Figure 1). To avoid contamination of the samplers, all used equipment, as well as the DGT sediment probes, was precleaned in 5% HNO3. Before assembling the DGT sampler, its parts were thoroughly washed with double-deionized water (18 MΩ cm, NANOPURE) to prevent acid from coming into contact with the gels. Both the polyacrylamide gel and the Chelex-100 resin were stored in closed plastic vials in double-deionized water before use to prevent them from drying. The gels were cut with a Teflon-covered blade to fit the sediment probe. One day before field work, the assembled DGT probe was inserted into a bottle filled with double-deionized water and purged with nitrogen to expel oxygen, which could affect redox speciation. Immediately before sampling, the bottle was tightly closed and transferred to the sampling site, where the sampler was inserted into the sediment. 3. Field Work DGT sediment probes were used for the accumulation and pre-concentration of the Fe redox species in situ in the Vrtojbica River sediment. Vrtojbica flows through the anthropogenically-impacted environment of the city of Nova Gorica, and its sedimentary Fe content is expected to be sufficiently high for reliable analysis. Two assembled DGT sediment probes were placed back-to-back in the river sediment for 5 days (from 18.07.2018 to 23.07.2018), reaching approximately 7.5 cm deep. The water temperature recorded at the beginning and the end of the experiment ranged between 23.5 °C and 24.5 °C. After sampling, the DGT probes were carefully retrieved from the sediment, rinsed with double-deionized water, inserted into a plastic bag and transferred to the laboratory. 4. Laboratory Analysis In the laboratory, the diffusive layer and filter were discarded, and the resin layer was transferred into a clean vial with double-deionized water until analysis. One probe gel was used to determine the dissolved Fe(II) concentration, whereas the second one was used to determine the total amount of dissolved iron. Fe(III) was calculated as the difference between the dissolved total Fe and Fe(II) values. The procedure for total Fe determination was the same as described below (see 2.3). After determination of the Fe redox values in the gel, the DGT equation was applied to calculate the concentration of Fe species (C) in the sediment pore waters: C = MΔd/(D t A) (1), where M is the mass of accumulated Fe species, t is the time of exposure, A is the area of the exposed surface (A = 0.15 × 10–3 m2), Δd is the diffusive layer thickness and D is the diffusion coefficient of the labile Fe species (D = 5.9 × 10–6 cm2/s). The determination of Fe redox species is based on the colorimetric reaction of Fe(II) with PHN, accompanied by the formation of a stable orange complex named Ferroin ([Fe(phen)3]2+), with high absorptivity at 508 nm (Figure 2). For the total iron content, Fe(III) was reduced to Fe(II) with L-ascorbic acid, followed by determination of total Fe as Fe(II).
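To illustrate how Eq. (1) converts a measured gel load into a pore-water concentration, here is a minimal sketch with hypothetical input values: the accumulated mass is an assumed number, not data from this study, and Δd is taken as the 0.8 mm diffusive gel alone (in practice the filter thickness would be added).

```python
# Hypothetical worked example of the DGT equation, C = M * dd / (D * t * A).
M  = 100e-9      # accumulated mass of Fe(II) in the resin gel [g] (assumed value)
dd = 0.08        # diffusive layer thickness [cm] (0.8 mm APA gel)
D  = 5.9e-6      # diffusion coefficient of the labile Fe species [cm^2/s]
t  = 5 * 86400   # deployment time [s] (5 days)
A  = 1.5         # exposed window area [cm^2] (0.15e-3 m^2)

C = M * dd / (D * t * A)          # time-averaged pore-water concentration [g/cm^3]
print(f"C = {C * 1e9:.2f} ug/L")  # 1 g/cm^3 = 1e9 ug/L -> about 2.1 ug/L here
```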
For a comprehensive understanding of how Fe ions are distributed in the aqueous phase of the sediments, using the sampler described above, the binding gel was cut into smaller pieces vertically and horizontally with a Teflon-covered blade. They were then separately, piece by piece, immersed directly in the 3 mM solution of PHN for the formation of the coloured complex. Before immersion in the PHN solution, the pieces of gel from the second sampler were treated with 5.1 mM L-ascorbic acid to reduce Fe(III) to Fe(II) for the determination of the total Fe content. After 24 hours of soaking in the PHN solution, the gels were dried between clean glass layers for another 24 hours before performing the BDS measurements. 5. Experimental Setup for BDS Method For the determination of Fe concentration, the dried gel on its glass support was placed in the sample holder of the BDS system (Figures 3 and 4). A solid-state laser with 532 nm output wavelength and 30 mW output power (CST-H-532nm-1000 MW) was chosen as the EB (excitation beam) source, because the absorption maximum of the Ferroin complex (508 nm) is close to this wavelength. A He-Ne laser (Uniphase, Model 1103P) with 633 nm output wavelength and 3 mW output power was used as the PB (probe beam) source, since this wavelength is not absorbed by the Ferroin complex. Both beams were focused by a set of lenses (Bi-Convex, AR Coated: 350-700 nm, EDMUND OPTICS). A variable-speed mechanical chopper (SCIENTIC INSTRUMENTS, control unit model 300C, chopping head model 300CD, chopping disks model 300H) at a frequency of 3.0 Hz was used to modulate the EB. The frequency was chosen to ensure that the TOs penetrate only within the sample (the thickness of the dried gel is 0.04 mm), so as to obtain information only from the gel, without the influence of the support. The sensitivity of the BDS system was improved by using additional mirrors (400-750 nm, Thorlabs) that directed the PB through the TOs to increase its intensity change and thus enhance the BDS signal. The intensity change of the PB was measured by a quadrant photodiode (RBM-R. Braumann GmbH, Model C30846E) equipped with an interference filter (633 nm, Edmund Optics) and connected to a lock-in amplifier (Stanford Research Systems, Model SR830 DSP). The examined sample was placed on a 3D translation stage (CVI, Model 2480M/2488) to vary its position in the x, y and z directions and optimize the experimental configuration. 1. Determination of Fe(II) Content The calibration curve obtained for the Chelex-100 resin spiked with different concentrations of Fe(II), including the best-fit equation, is shown in Figure 5. After immersing the gels in the Fe(II) solution for 5 days, the gels were transferred to the PHN solution for 1 day to form the coloured Ferroin complex, and then transferred to the glass layers for drying. The achieved limit of detection was 0.14 μmol L–1. A linear relationship between Fe concentration and BDS signal was obtained between 0 and 20 µM of Fe(II). All our samples fit in this concentration range.
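The quantification described here rests on a linear calibration; a minimal sketch of how such a calibration and a 3σ detection limit could be computed follows (all numbers are hypothetical placeholders, not the study's data):

```python
import numpy as np

# Hypothetical calibration data: Fe(II) standards [uM] vs. BDS signal [a.u.]
conc   = np.array([0.0, 2.0, 5.0, 10.0, 15.0, 20.0])
signal = np.array([0.02, 0.11, 0.26, 0.50, 0.73, 0.98])

slope, intercept = np.polyfit(conc, signal, 1)   # linear best fit

# LOD from the 3*sigma criterion using the blank's standard deviation
sigma_blank = 0.0023                             # assumed value [a.u.]
lod = 3 * sigma_blank / slope                    # [uM]

# Quantify an unknown gel piece from its BDS signal
c_unknown = (0.31 - intercept) / slope
print(f"LOD = {lod:.2f} uM, unknown = {c_unknown:.1f} uM")
```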
To determine the 2D distribution of Fe redox species in the gels, the binding gel was cut into 4 parts horizontally and into 3 parts vertically, and the Fe(II) concentration was determined in each part. The gel concentrations from the Vrtojbica River sediment are presented in Table 1 and Figure 7a. The lower horizontal part was damaged during the deployment, so the data for Fe(II) (and hence also Fe(III)) in this layer, at a depth of 7.5 cm, are not available. The obtained data indicate that the concentrations of Fe(II) do not vary much in the sediment pore waters. Generally, the absence of Fe(II) indicates an oxidative environment, and its presence implies reducing conditions. There is an increase on the left side of the investigated area, which could imply more reducing localized conditions during the time of the sampling. Considering that DGT binds the dissolved and labile fractions of total Fe(II), our data suggest there was a constant amount of Fe(II) available for geochemical transformations, and for organisms as well. Surprisingly, no decrease in Fe(II) concentrations was observed at the sediment-water interface (SWI), suggesting that there might be a loss of this species to the water. Also interestingly, and somewhat contrary to the general behaviour of reduced metal species, the values around –2.5 and –5 cm were below the LOD on the right side of the gel. Usually, in sediments, anoxia begins somewhere below the SWI and extends into the interior of the sediment, where the reduced species dominate. 26 The Vrtojbica River, however, is a quickly flowing stream of water, hence the absence of Fe(II) in part of the sediment might represent a well-aerated sediment. 2. Determination of Total Fe Content Dissolved total Fe was determined by conversion of Fe(III) to Fe(II) with L-ascorbic acid as a reducing agent, using the method previously described in the literature for photothermal techniques. [18][19] The calibration curve obtained for the Chelex-100 resin spiked with different concentrations of Fe(III) reduced to Fe(II), including the best-fit equation, is shown in Figure 6. The achieved limit of detection in this case was 0.21 µmol L–1. Using the linear equation of the calibration curve, the total Fe concentrations in the river water and sediment were calculated. The results are presented in Table 2, and their distribution in the gels in Figure 7b. The distribution of the total dissolved Fe in the sediment of the Vrtojbica River is quite uniform. This suggests stable conditions during the deployment, and also a stable pool of labile, dissolved Fe species that was continuously present in the sediment during the time of the sampling. At the SWI, the concentration of total dissolved Fe species is lower than inside the sediment, clearly indicating a general loss of both redox species to the water. The increase of dissolved species in the sediment interior is associated with an increase in dissolved Fe(III) (see 3.3). Determination of Fe(III) Content One of the goals of the study was to determine the concentration of Fe(III) ions in the sediment, since it is one of the species in which iron, an essential metal, occurs in nature. Numerous factors contribute to the environmental ratios of Fe(II) and Fe(III), e.g. pH, temperature and reductive-oxidative environmental conditions, and the presence of sulphide ions, ammonia and oxygen. Furthermore, the ratio is also dependent on the geological features of the river sediment. The content of Fe(III) was calculated as the difference between the total Fe content (Table 2) and Fe(II) (Table 1). The results are given in Table 3 and presented in Figure 7c. Generally, the distribution of Fe(III) in the sediment follows the distribution of total dissolved Fe. There is an increase of Fe(III) below the SWI at a depth of approximately 2.5-5 cm in the centre of the gel. This might indicate a geological source dissolving and releasing Fe(III) into the pore waters, or a local oxidation hotspot that would oxidize any Fe(II) to Fe(III). The oxidation source might be geochemical or microbial. Very likely this feature indicates sediment heterogeneity, which we were able to observe as a result of the newly developed method.
The coupling of the DGT and BDS methods enables the determination of the distribution of Fe redox species in two dimensions. While Fe(III) is present over the whole investigated area and mostly occurs simultaneously with Fe(II), on the right side of the gel only Fe(III) is present. Combined with the Fe(II) results, this part of the sediment appears to be fully oxygenated and/or to exclude the formation of Fe(II), at least during the sampling period. As the DGT technique reports time-averaged values, this indicates very stable conditions during the time of the sampling. The DGT-BDS method does not require intensive manipulation after the sampling, which renders transport or storage artefacts that could influence Fe speciation less likely and increases the reliability of the obtained results. Therefore, the observed patterns of Fe redox species likely accurately represent the sedimentary conditions. To summarize the data obtained, Figure 7 presents the Fe(II), total Fe and Fe(III) distributions in the Chelex-100 resin, respectively. Although our preliminary results are not accompanied by a suite of other geochemical parameters, they clearly demonstrate the potential and applicability of the newly developed method for two-dimensional imaging of dissolved, bioavailable Fe redox species in natural environments. 4. Concentrations of the Fe Redox Species in the River Sediment Using equation (1), we calculated the concentrations of Fe species in the sediment pore waters (Table 4). Although not much data exist for comparison of DGT-derived Fe redox species concentrations, our data fit in the range of published results for pristine environments. 18 Total concentrations obtained from polluted or strongly impacted river sediments are higher by a factor of 10 to 100. [28][29][30] Nonetheless, the distribution of the dissolved, labile and potentially bioavailable fraction of Fe redox species in the Vrtojbica River indicates a dynamic sediment system. As observed before, the low values at the SWI indicate that the sediments are a source of both Fe redox species to the river water. The observed increases in dissolved Fe(II) and dissolved total Fe in the sediment interior could be attributable to geochemical interactions with Fe-rich minerals, to microbial interactions, or to both. Since these are the first data on iron speciation in this system, it cannot be said with certainty whether the concentrations are representative of this particular environment. However, despite its city location, the Vrtojbica river sediment is far from polluted and is not likely to increase the river water Fe concentration to above the WHO guideline for Fe in drinking water (0.3 mg/L). Together with the low SWI concentrations, our data suggest that anthropogenic activity might not have affected the river and that the sediment does not act as a sink for Fe. Simultaneously, however, it is as yet unclear which factors have the greatest influence on the redox state of dissolved Fe in the sediment. Conclusions We report for the first time the dissolved Fe redox species distribution in freshwater sediments measured by the coupled DGT technique and BDS method. The average total iron concentration in the Vrtojbica River sediment was found to be 49.3 µg L–1. The average amount of Fe(III) was 3 times higher than the average Fe(II) concentration; the values were 39.9 µg L–1 and 12.8 µg L–1, respectively.
The obtained results show the potential of using the DGT method coupled to the BDS technique for monitoring biologically relevant Fe species at environmental concentrations in natural waters and sediments. The information obtained from the newly coupled method will advance our understanding of the basic biogeochemical processes governing trace metal behaviour in pristine and anthropogenically-impacted environments. Its results may well be incorporated into existing mathematical models, which are currently based primarily on one-dimensional profiles, thereby increasing the reliability of the model predictions. Further development, application and testing of the DGT-BDS method is warranted, since it is a reliable, precise, contamination-resistant, inexpensive and time-saving analytical method.
2019-04-10T13:13:10.491Z
2019-02-01T00:00:00.000
{ "year": 2019, "sha1": "760076e6a8f07eb43dc356416610697d660cca9e", "oa_license": "CCBY", "oa_url": "https://journals.matheo.si/index.php/ACSi/article/download/4848/2062", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "37f844f4eed02e5fa444b6e9f06d2f9dfef0f40a", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
220650833
pes2o/s2orc
v3-fos-license
Lack of redundancy between electrophysiological measures of long-range neuronal communication Background Communication between brain areas has been implicated in a wide range of cognitive and emotive functions and is impaired in numerous mental disorders. In rodent models, various metrics have been used to quantify inter-regional neuronal communication. However, individual studies typically report only very few measures of coupling, and hence redundancy across such indicators is implicitly assumed. Results In order to test this assumption, we here comparatively assessed a broad range of directional and non-directional metrics, such as coherence, Weighted Phase Lag Index (wPLI), phase-locking value (PLV), pairwise phase consistency (PPC), parametric and non-parametric Granger causality (GC), partial directed coherence (PDC), directed transfer function (DTF), spike-phase coupling (SPC), cross-regional phase-amplitude coupling, amplitude cross-correlations and others. We applied these analyses to simultaneous field recordings from the prefrontal cortex and the ventral and dorsal hippocampus in the schizophrenia-related Gria1-knockout mouse model, which displays a robust novelty-induced hyperconnectivity phenotype. Using the detectability of coupling deficits in Gria1−/− mice and bivariate correlations within animals as criteria, we found that across such measures there is a considerable lack of functional redundancy. Except for three pairwise correlations (PLV with PPC, PDC with DTF, and parametric with non-parametric Granger causality), almost none of the analysed metrics consistently co-varied with any of the other measures across the three connections and two genotypes analysed. Notable exceptions to this were the correlation of coherence with PPC and PLV, which was found in most cases, and a partial correspondence between these three measures and Granger causality. Perhaps most surprisingly, partial directed coherence and Granger causality, sometimes regarded as equivalent measures of directed influence, diverged profoundly. Also, amplitude cross-correlation, spike-phase coupling and theta-gamma phase-amplitude coupling each yielded distinct results compared to all other metrics. Conclusions Our analysis highlights the difficulty of quantifying real correlates of inter-regional information transfer, underscores the need to assess multiple coupling measures, and provides some guidelines on which metrics to choose for a comprehensive, yet non-redundant, characterization of functional connectivity. Supplementary Information The online version contains supplementary material available at 10.1186/s12915-021-00950-4. Background Communication between different brain regions is vital for cognition and emotion and is impaired in a variety of neurological and psychiatric disorders, including schizophrenia and depression. In order to better understand interregional communication in health and disease at the electrophysiological level in rodent models, local field potentials (LFPs) and sometimes action potentials (spikes) are typically recorded from two or more brain areas simultaneously in awake subjects. Subsequently, some measure of interdependency of the signals from the two regions is computed (see Table 1 for an overview). For example, an influential hypothesis known as communication through coherence (CTC) states that information exchange between two connected brain areas depends on the timing of the arrival of incoming activity within a specific phase of a certain network oscillation [25][26][27][28].
Coherence therefore measures the synchrony of oscillations in a certain frequency range and with a certain phase shift that may allow the activity generated in one region to optimally affect the activity of another region. In general, measures of phase synchronization aim to determine if two signals have a consistent phase relationship with each other. Despite being widely used, coherence is prone to confounding by volume conduction [22,29]. Therefore, alternative measures of phase synchronization have been suggested. Nolte et al. demonstrated that using only the imaginary component of coherence (ImC) effectively reduces the influence of a volume-conducted signal originating from a common source [3]. Alternatively, the Phase Lag Index (PLI) may be used, which disregards the magnitude of the phase lag between signals from two brain regions but evaluates if the phase differences deviate from a symmetrical distribution [4]. The weighted PLI (wPLI), in turn, combines the advantages of the ImC and PLI by taking the detected phase lead or lag and weighing it by the magnitude of the ImC [5]. A constraint of measures of phase synchronization like the ImC, PLI and wPLI is sample size bias, i.e. the observation of spurious non-zero synchrony even in the absence of real connections, which increases with a lower number of samples [5]. Therefore, Vinck et al. additionally introduced a debiased estimator of the wPLI which is less dependent on sample size and thus has a higher statistical power than previous measures [5]. For the sake of clarity, the debiased wPLI will be referred to as wPLI throughout this study, and the stated older measures are not used. It should be noted that several other metrics for phase synchronization exist, such as the phase-locking value (PLV) [6] and pairwise phase consistency (PPC) [7].
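To make these definitions concrete, the following MATLAB sketch estimates coherence, ImC, PLI, the standard (non-debiased) wPLI and PLV from two simultaneously recorded channels. It is an illustrative from-scratch implementation under assumed names and parameters (column vectors x and y, a 1 kHz sampling rate, 1-s Hann-tapered epochs) and not the FieldTrip code used in this study, which computes the debiased wPLI.

% Phase-synchronization metrics from per-epoch cross-spectra (illustrative).
fs   = 1000;                                 % assumed sampling rate (Hz)
nEp  = floor(numel(x)/fs);                   % non-overlapping 1-s epochs
nfft = fs;                                   % 1 Hz frequency resolution
f    = (0:nfft/2)*fs/nfft;                   % frequency axis (Hz), for plotting
w    = hann(fs);                             % Hann taper (Signal Processing Toolbox)
Sxy  = zeros(nEp, nfft/2+1);                 % per-epoch cross-spectra
Sxx  = zeros(nEp, nfft/2+1);
Syy  = zeros(nEp, nfft/2+1);
for k = 1:nEp
    idx = (k-1)*fs + (1:fs);
    X = fft(w .* detrend(x(idx))); X = X(1:nfft/2+1).';
    Y = fft(w .* detrend(y(idx))); Y = Y(1:nfft/2+1).';
    Sxy(k,:) = X .* conj(Y);
    Sxx(k,:) = abs(X).^2;
    Syy(k,:) = abs(Y).^2;
end
coh  = abs(mean(Sxy)) ./ sqrt(mean(Sxx) .* mean(Syy));  % coherence magnitude
imc  = imag(mean(Sxy)) ./ sqrt(mean(Sxx) .* mean(Syy)); % imaginary coherence
pli  = abs(mean(sign(imag(Sxy))));                      % Phase Lag Index
wpli = abs(mean(imag(Sxy))) ./ mean(abs(imag(Sxy)));    % weighted PLI (standard)
plv  = abs(mean(Sxy ./ abs(Sxy)));                      % phase-locking value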
PLV and PPC measure the constancy of the difference between the instantaneous phases of two signals, obtained either by applying a Hilbert, wavelet or Fourier transformation, and quantify the distribution of phase differences either by taking the vector average or by determining the distribution of phase differences across observations, respectively. While PPC and PLV are very similar measures, the main advantage of the PPC metric is that it is not biased by sample size and therefore more suitable for comparing datasets with varying sample size, as reviewed in [30].

Table 1 Common measures of synchrony and directionality in neuronal communication

Non-directed coupling, synchrony:
- Coherence (magnitude): magnitude of the complex cross-spectrum [1,2]
- ImC (imaginary part of coherence): discards the real component of the cross-spectrum [3]
- PLI (Phase Lag Index): disregards the magnitude of the cross-spectrum and averages the sign of phase differences [4]
- wPLI (weighted phase lag index): phase lags are weighed by the magnitude of the imaginary component of the cross-spectrum [5]
- PLV (phase-locking value): circular resultant vector length of the phase differences [6]
- PPC (pairwise phase consistency): computed based on the distribution of phase differences [7]

Directed (lead/lag, LFP-based):
- Coherence phase angle: angle of the complex cross-spectrum [8,9]
- CC (cross-amplitude coupling, amplitude cross-correlation): instantaneous amplitudes of two filtered LFPs are cross-correlated and the lag at which the peak occurs is determined [10]

Directed (causal influence):
- GC (Granger causality): quantifies if the past of one time series can predict the future of another time series using autoregressive modelling [11][12][13][14]
- npGC (non-parametric Granger causality): Granger causality based on spectral matrix factorization [15]
- PDC (partial directed coherence): normalized metric based on GC that measures direct influence from one time series to another [16]
- DTF (directed transfer function): adaptation to multiple input variables, closely related to PDC [17,18]

Directed (phase-locking of local activity):
- SPC, MRL (spike-phase coupling, mean resultant vector length): circular concentration of the phase distribution at which spikes occurred [2,19,20,21]
- PAC, CFC, MI (phase-amplitude coupling, cross-frequency coupling, modulation index): modulation of the amplitude of high-frequency oscillations in one area by the phase of low-frequency oscillations from another area [22][23][24]

Directed (lead/lag, spike-based):
- Phase angle of MRL: mean phase at which spikes occurred [20]
- Phase-shifted MRL: calculation of the MRL based on phases at shifted lags [2,19]

The stated measures of phase synchronization are attempts to quantify non-directed connectivity. This means that the quantification of coupling is essentially based on correlation analysis, ignores the temporal structure of the signals and assumes no direction of the influence from one region to another [30,31]. However, LFP data can also be used to measure effective or directed connectivity [31]. These are parameters that quantify the potentially causal influence that the activity in one region exerts on another region by taking recurring pairwise patterns in the time series obtained from both regions into account. A computationally simple measure to detect directionality between two time series is cross-correlation.
That means that correlations are calculated as the LFP signals are incrementally shifted against one another to obtain a cross-correlation as a function of temporal shifts (lags). Adhikari et al. developed a method termed amplitude cross-correlation or cross-amplitude coupling in which the instantaneous amplitudes of two oscillatory signals filtered in a certain frequency range are cross-correlated to determine if one is leading or lagging the other [10]. If the lags at which the peak of the amplitude cross-correlation function occurs are significantly different from 0 ms, this indicates that one region leads the other with a certain consistency, which could be due to a directional influence from the leading onto the lagging region. This method was able to identify directional connectivity in the brain related to working memory and fear processing [32][33][34]. A different measure of directed influence is Granger causality (GC). It aims to infer causation based on the notion that one signal is helpful in predicting the other. In parametric GC, two separate autoregressive models (ARMs) are calculated and statistically compared: a univariate ARM, where the signal is predicted by a weighted combination of its own past values, and a bivariate ARM, where the signal is additionally predicted by the second signal. If the inclusion of the bivariate ARM leads to a reduction of the variance of the predicted signal, one signal is said to Granger-cause the other [11]. GC can also be computed with non-parametric methods where the same information is obtained by first calculating the cross-spectral density matrix and then applying Wilson's spectral matrix factorization as input to the GC algorithm; this approach has been demonstrated to be equivalent to parametric GC [35]. The mathematical foundations of GC and its application to neuroscience have been reviewed extensively elsewhere [11][12][13][14]. Related measures that can either be based on multivariate autoregressive models or on non-parametric methods for directionality estimation and that allow analysis of more than two channels include the directed transfer function (DTF) [17] and partial directed coherence (PDC) [16]; see [18] for a review.
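As an illustration of the parametric approach, the following minimal sketch computes time-domain bivariate GC by comparing the residual variances of a restricted and a full autoregressive model fitted by least squares. Variable names and the model order are illustrative assumptions; this is not the MVGC toolbox routine used in this study, which additionally provides spectral decomposition and permutation-based significance testing.

% Time-domain bivariate Granger causality y -> x (illustrative sketch).
% x, y: column vectors of equal length; p: assumed model order.
p = 10;
n = numel(x);
X = zeros(n-p, p);                       % lagged past of x
Y = zeros(n-p, p);                       % lagged past of y
for k = 1:p
    X(:,k) = x(p+1-k : n-k);
    Y(:,k) = y(p+1-k : n-k);
end
target = x(p+1:n);
bR = X \ target;                         % restricted model: own past only
eR = target - X*bR;                      % residuals of the restricted model
bF = [X Y] \ target;                     % full model: adds the past of y
eF = target - [X Y]*bF;                  % residuals of the full model
gc_y_to_x = log(var(eR) / var(eF));      % > 0 indicates y Granger-causes x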
Other indicators of inter-regional communication that partly circumvent problems caused by volume conduction and are typically interpreted as indicating a causal directional influence include those that measure different types of neuronal activity in the different regions, i.e. a low-frequency LFP oscillation (usually in the theta range) in the presumed dominating region and a local, high-frequency activity at the receiving end. In contrast to the metrics introduced before, historically, such measures were introduced by way of an actual biological discovery of such coupling phenomena, rather than by a priori mathematical considerations on how to best assess inter-regional communication. One option is to quantify the extent to which oscillations of distinct frequencies are coupled to each other, a phenomenon called cross-frequency coupling (CFC, [36]). Particularly, local phase-amplitude coupling (PAC), the statistical relationship between the phase of a low-frequency and the amplitude of a high-frequency component, plays an important role in memory processing in the hippocampus of rats [37,38] and humans [39]. However, cross-regional PAC between the hippocampus and prefrontal cortex has also been used and was associated with directed information flow and cognitive functions [22][23][24][40]. Since high-frequency brain oscillations mainly reflect local aspects of information processing and low-frequency brain rhythms are relevant for inter-regional communication, CFC might represent a mechanism of transferring information from large-scale neuronal networks to local processes [36,41]. Another widely used measure is based on the recording of spikes in one (potentially the influenced) region alongside the LFP in another (potentially the influencing) region. Spikes are generally not considered to be confounded by volume conduction or referencing, and they represent a more direct readout of the actual neuronal activity of a region. Phase-locking of neuronal firing to theta-frequency hippocampal oscillations was shown for example in the prefrontal cortex (PFC) [1,19], entorhinal cortex [42] and the amygdala [43]. For example, action potentials in these brain regions occurred rhythmically at the same phase of the hippocampal theta rhythm. Such spike-phase coupling (SPC) was observed to correlate with performance in multiple cognitive tasks [1,19,44] and has been used to evaluate coupling deficits in mouse models related to schizophrenia [2,20,45]. The above-mentioned measures have been widely used for two decades to assess inter-regional neuronal communication in rodents during a variety of cognitive tasks and disease-related manipulations, mostly involving recordings from the hippocampus and prefrontal cortex [1,2,20,21], but also increasingly from the thalamus [46] and the amygdala [47]. However, typically, only one or two measures of coupling are calculated and interpreted as a sufficient surrogate to quantify task- or manipulation-related differences in actual information exchange between the analysed regions. In this analytical set-up, the contingency of the achieved conclusions on the choice of the coupling measure is usually not evaluated, but the redundancy of the various measures is implicitly assumed. This assumption is not justified, however, given the mathematical and partly biological differences between these constructs. Likewise, the dependence of the conclusions on the exact placements of electrodes within the analysed regions and the choice of reference are often not evaluated either. This presents a problem especially when interpreting negative data, i.e. the supposed absence of differences in coupling. We therefore sought to evaluate the redundancy and contingencies of such coupling metrics. To this end, we recorded data during a simple behavioural assay, novelty-induced locomotion and its habituation over time, in Gria1−/− (KO, knockout) mice and their littermate controls. We have recently shown that the Gria1-KO model, which recapitulates some behavioural deficits relevant to schizophrenia, shows profound and state-dependent aberrations of hippocampal-prefrontal coupling in this task [48]. We focused on the most widely used connectivity measures, namely coherence magnitude and phase angle, wPLI, PPC, PLV, cross-amplitude coupling, parametric and non-parametric GC, PDC, DTF, cross-regional PAC, SPC and SPC-related directionality, with respect to three 'litmus tests' for redundancy: (a) detection of KO-related alterations of coupling across the 10-min test, (b) detection of KO-related changes of a measure over time and (c) bivariate within-animal correlation of the analysed measures. We investigated connectivity between the medial prefrontal cortex (PFC) and the hippocampus, both its dorsal (dHC) and its ventral (vHC) partition.
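Before turning to the results, the spike-phase coupling measure introduced above can be sketched in a few lines of MATLAB. Input names are hypothetical (hippocampal LFP lfp, sampling rate fs, prefrontal spike sample indices spkIdx), and a Hilbert-based theta phase is used for brevity, whereas the Methods define the phase by interpolation between consecutive troughs and apply additional amplitude and spike-count criteria.

% Spike-phase coupling: mean resultant vector length (MRL) of spike phases.
fs = 1000;                                      % assumed sampling rate (Hz)
[b, a] = butter(2, [5 12]/(fs/2), 'bandpass');  % theta band-pass filter
theta  = filtfilt(b, a, lfp);                   % zero-phase filtering
phase  = angle(hilbert(theta));                 % instantaneous theta phase
phi    = phase(spkIdx);                         % theta phase of each spike
mrl     = abs(mean(exp(1i*phi)));               % coupling strength (0..1)
prefPhi = angle(mean(exp(1i*phi)));             % preferred mean phase angle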
For the majority of the analysis, four commonly used frequency bands, delta (δ, 1-4 Hz), theta (θ, 5-12 Hz), beta (β, 15-30 Hz) and low gamma (γ, 30-48 Hz), are distinguished, whereby the analysis of theta and gamma may be regarded as particularly informative due to the existence of spectral peaks indicating real underlying oscillatory processes.

Results

Elevated locomotor activity in Gria1-KO mice during measurement of inter-regional communication
In order to measure inter-regional coupling, we implanted 15 adult Gria1−/− mice and 12 littermate controls unilaterally with LFP electrodes in 4 regions, PFC (2 electrodes), mediodorsal thalamus (MD, 1 electrode), dHC (1 electrode) and vHC (2 electrodes), and inserted screws for ground and reference above the cerebellum and frontal cortex, respectively (Fig. 1a). Recordings from all sites were made during a 10-min test of novelty-induced locomotor activity which confirmed the strongly elevated behavioural activity and the failure of its short-term habituation over time in Gria1−/− mice, as observed before (Fig. 1b, c [48]). After the experiments were completed, the placement of electrodes was evaluated through electrolytic lesion sites, and misplaced electrodes were excluded from the dataset; data from the MD was disregarded for most of the subsequent analysis because of the low number of animals with accurate placements. In accordance with our previous study in this mouse line [48], we recorded and analysed all data as referenced to the ground screw above the cerebellum by default and used the data from the frontal reference screw for a separate analysis (displayed in Fig. 7). We extracted LFP signals (Fig. 1d, e) from all depth electrodes and multi-unit activity (MUA) spikes from the prefrontal wires. For PAC, amplitude cross-correlations and SPC, the theta phase angle was extracted using a Hilbert transform or linear interpolation between consecutive cycles (Fig. 1f, g). Additionally, we sorted the LFP power values obtained from each electrode in distinct frequency bands according to the placement of electrodes in different subdivisions of the PFC (PrL, Cg1 and Cg2), dHC (apical dendritic layers of CA1, CA1 pyramidal cells, CA1 stratum oriens) and vHC (apical dendritic layers of CA1/CA3, CA1 pyramidal cells, dentate gyrus). While we did not conduct statistical analysis given the much smaller number of sites outside the target region (PrL in the PFC and apical dendritic layers, including the fissure, in the hippocampus), a qualitative inspection suggested that the placements inferred from lesion sites did not noticeably alter the obtained spectral LFP power (Fig. 1h-j).

Differences in detecting delta and gamma-range coupling in Gria1-KO mice across measures of synchrony
We first analysed phase synchronization along the two prefrontal-hippocampal connections (PFC-dHC and PFC-vHC) and within the hippocampus (vHC-dHC) using coherence, wPLI, PLV and PPC (Fig. 2a-r). We confirmed our previous observation [48] that PFC-dHC theta coherence is strongly elevated in Gria1-knockouts in a novel environment and further increases with time, mirroring the spatial exploration behaviour of this genotype (Fig. 1b, c, Fig. 2a, d).
However, this phenotype was by no means specific to the PFC-dHC coupling, but also re-appeared in the PFC-vHC and vHC-dHC connections, suggesting a broader phenotype of excessive theta-range synchronization. While all indicators revealed a reduced gamma-range PFC-dHC coupling in knockouts, a sole analysis with wPLI suggested further differences in the delta (PFC-dHC, vHC-dHC) and gamma (PFC-vHC) ranges that would have gone undetected if using the other metrics (Fig. 2d-f, j-r). Also, qualitatively, wPLI resulted in spectra with a quite different shape compared to the other ones.

Differences in detecting elevated inter-regional theta-range coupling in Gria1-KO mice across measures of directional communication
An analysis of directional connectivity with parametric GC revealed a confirmatory but much more fine-grained picture with KO-induced aberrations in all four frequency bands depending on the connection and direction (Fig. 3a-c). Most prominently, we found strongly elevated theta-range GC in knockouts for all projections departing in either subdivision of the hippocampus. This confirms the hippocampal (as opposed to prefrontal) origin of the theta hyperconnectivity phenotype in Gria1-knockout mice that we had postulated before based on the normalization of this phenotype in mice with hippocampal rescue of GluA1 expression [48]. Likewise, beta/gamma dHC➔PFC GC was strongly reduced in knockouts (Fig. 3a), in line with reduced phase synchronization measures (Fig. 2d, j, m, p), while PFC➔dHC beta and gamma GC were even mildly elevated. This again suggests a hippocampal origin of the observed reduced synchrony in this frequency range. The most prominent GC was found in the delta range, with PFC➔d/vHC GC being significantly larger than the delta GC in the opposite direction in both genotypes. Further, genotype-related differences in vHC➔PFC and dHC➔vHC delta GC were found that do not match the results from the non-directional synchrony metrics (Fig. 2). In contrast to GC, significantly elevated theta PDC in knockouts was only detected in the dHC➔PFC/vHC connections, but not in the vHC➔PFC/dHC projections. And in the beta/gamma ranges, there were virtually no matches between PDC and GC at all regarding genotype-related differences (except for a minority of null results and trends; Fig. 3a-f). Assessing SPC using the mean resultant vector length (MRL) of the vector representing average spike occurrence in theta phase space [21], we found the opposite of what would have been assumed from the PDC metric: locking of PFC spikes to vHC theta was higher in Gria1-knockouts, but phase-locking of PFC spikes to dHC theta showed no difference between genotypes (the latter also contrasts with GC and all synchrony measures; Fig. 3g). Further discrepancies appeared when analysing consistent phase differences (leads and lags) between potentially coupled oscillations in different regions to assess directionality. We investigated two directional measures obtainable from the SPC: the average theta phase of the MRL [20] and analysis of the effect of incremental shifts of the MUA relative to the theta cycle on the MRL [19]. The MRLs of PFC spikes relative to the dHC (but not vHC) theta phase were significantly shifted between genotypes: while they occurred during the rising phase of theta in knockouts, they occurred at its trough in wild-type mice (Fig. 3h).
Leading of PFC spikes relative to dHC and vHC theta was seen with phase-shifted MRL analysis in knockouts, but no significant difference between genotypes was detectable in this metric (Fig. 3i). The equivalent analysis, but conducted with PFC LFP (instead of spikes) using cross-amplitude coupling, showed the opposite, namely a lead of dHC and vHC theta relative to prefrontal theta in knockouts, and differences between genotypes in both connections (Fig. 3j).

[Figure 1 legend, panels c-j: total distance moved in 10 min and slope of the interpolated line (***p < 0.001, t test); examples of unfiltered LFP traces from the four brain regions; illustrations of LFP band-pass filtering, cross-regional θ-γ PAC (modulation index) and SPC (theta-phase assignment of extracted PFC spikes, mean resultant vector); LFP power per frequency band and electrode, colour-coded by subdivision (hippocampal layers: pyramidal (Pyr), stratum oriens (Or), lacunosum-moleculare (LM), radiatum (Rad) and fissure (Fis)); no statistical analysis was done given the rare placements outside the target regions.]

In reverse, in the gamma range, the PFC led both hippocampal regions exclusively in the knockouts (Fig. 3j), which is not consistent with GC, but, at least for the PFC-vHC connection, with PDC. Lastly, we examined the coherence phase angle. This showed a characteristic 90° shift between the theta, beta and gamma oscillations of the PFC vis-à-vis the dHC, particularly in wild-type mice. In contrast to the other directional metrics, significant genotype-related differences were only seen in the gamma range, and they were prominent in the two HC-PFC connections (Fig. 3k). Finally, dHC- and vHC-gamma oscillations were coupled more strongly to theta oscillations in the PFC and the mutually coupled part of the hippocampus in knockouts (gamma-theta cross-regional PAC; Fig. 3l). However, PFC-gamma to hippocampal theta coupling was even reduced in knockouts (Fig. 3l), which contrasts sharply with the results from all other measures. In summary, while the identification of genotype-related differences in coupling was similar between some measures (especially coherence, PLV, PPC and GC), there was also a considerable lack of redundancy across the different measures of inter-regional connectivity (see overview in Table 2).
Differences in detecting increases of inter-regional coupling over time in Gria1-knockouts across measures
As a second indicator of redundancy between connectivity measures, we investigated the potential physiological correlates of the characteristic divergence of exploratory drive between the two genotypes over time (Fig. 1b, c). This divergence is likely induced by a failure of spatial short-term habituation in Gria1-knockout mice, resulting in increasing exploration as opposed to the decreasing activity seen in controls [48,49]. To allow for an efficient analysis, we captured the change of a given parameter over time in a single number, namely the slope of the linear interpolation across the time series over the 10-min test. We previously found that both local theta power in the dHC and dHC-PFC theta coherence displayed a characteristic divergence between the groups that mirrored exploratory behaviour [48]. In this novel dataset and analysis, this pattern emerged much more broadly, namely across multiple power and coherence measures in all three connections (compare Fig. 1b, c with Fig. 4a-d). This included local PFC power in all analysed frequency bands and gamma and (at trend level) theta peak power in the hippocampal regions (Fig. 4a, b). For coherence, the KO-related increase in slopes was limited to the delta and theta range and was apparent in the hippocampal-prefrontal connections (confirming our earlier results) and marginally for intra-hippocampal coupling (Fig. 4c, d). In the beta and gamma range, either no group difference occurred or, for PFC-dHC beta coherence, it was even inverted with a higher slope in wild-type mice. Strikingly, this pattern was not reproduced by the wPLI analysis (Fig. 4e, f): even in the one case where the coupling slope was increased in knockouts in both metrics (PFC-vHC, theta), the metrics differed in the respect that, in wild-type controls, theta-wPLI remained constant, while theta coherence decreased over time. GC remained largely constant or slightly decreased over time in wild-type mice, irrespective of connection or frequency band (Fig. 4g-i). In Gria1−/− mice, in contrast, GC increased over time in the delta and theta range in most connections, leading to genotype-related differences in the vHC➔PFC (δ, θ), vHC➔dHC (θ), dHC➔vHC (δ, θ), PFC➔vHC (δ, γ) and PFC➔dHC (δ, θ) projections. Thus, except for an isolated match in the vHC➔PFC theta connectivity, the GC metric did not align with the wPLI-based slope assessment but provided a near-perfect match to the coherence slope pattern (Table 2). The latter observation even extends to the one instance of PFC-dHC beta coupling where the slope is higher in wild-type than in KO mice (Fig. 4g-i). The slope of the gamma-theta PAC also showed the expected divergence between genotypes in coupling strength along vHC connections, but not in the PFC-dHC connections (Fig. 4j). This pattern matched neither coherence and GC (as they detected temporal changes in the PFC-dHC connection) nor wPLI (which detected no changes in the vHC-dHC connection). Likewise, cross-correlational lags did not change in any pattern that resembled the other measures (Fig. 4k). The slopes of MUA-related metrics were not determined because SPC analysis requires a considerable and equal number of spikes (not suitable for short intervals), and PDC and other lag metrics were not further regarded given that they already differed from the other metrics in the first comparison (Fig. 3).
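The slope metric used throughout this section reduces each per-bin time series to a single number via a least-squares line fit; a minimal sketch with hypothetical variable names:

% Slope of the linear interpolation across a 10-min time series.
tBin = (1:10)';                   % bin centres in minutes (assumed binning)
vals = thetaCohPerMin(:);         % hypothetical per-minute theta coherence
pfit = polyfit(tBin, vals, 1);    % least-squares line: vals ~ pfit(1)*t + pfit(2)
slope = pfit(1);                  % change of the metric per minute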
[Figure 2 legend: non-directional measures of synchrony in Gria1−/− and wild-type controls across 10 min of novelty-induced activity; spectrogrammes and frequency spectra of coherence, wPLI, PLV and PPC along the PFC-dHC, PFC-vHC and vHC-dHC connections; dotted red lines mark the analysed frequency bands; stars indicate genotype differences (t test) in mean (black) or peak (grey) synchrony; lines display mean ± SEM; *p < 0.05; **p < 0.01; ***p ≤ 0.001.]

[Figure 3 legend: directional metrics of inter-regional coupling in Gria1−/− and wild-type controls; parametric GC and PDC per frequency band and direction; MRL-based SPC of prefrontal spikes to hippocampal theta, its mean phase angle and its dependence on the lag between MUA and LFP; amplitude cross-correlation functions with peak lags (Wilcoxon's signed rank test against 0 ms); coherence phase angle spectra; theta-gamma cross-regional PAC; genotype differences assessed by Sidak post hoc tests, t tests or Watson-Williams tests as applicable; #p < 0.1; *p < 0.05; **p < 0.01; ***p ≤ 0.001.]

Lack of redundancy between most coupling measures revealed by bivariate correlation analysis
Given that the above analysis of comparing genotype-related differences across measures ultimately allows only a qualitative judgement about the epistemological redundancy of inter-regional coupling metrics, we supplemented it with a more quantitative analysis in the form of bivariate Spearman correlations between pairs of parameters, within genotypes and connections, using the average value of each parameter in each electrode pair as the dependent variable. We included all metrics analysed in Figs. 2 and 3 and also partial directed coherence (PDC) and non-parametric Granger causality (npGC). This revealed multiple levels of complexity when analysing the relation between the metrics.
On the one hand, at the level of isolated observations, the correlations supported the commonalities between measures already seen with the two prior analyses. For example, PFC-dHC theta coherence correlated strongly with dHC➔PFC theta-GC in wild-types (Fig. 5a). However, this correlation existed neither in the knockouts in the same connection (Fig. 5a) nor in the same genotype in the PFC-vHC connection (Fig. 5b). Indeed, PFC-vHC theta coherence did correlate highly with GC in the opposite, i.e. PFC➔vHC, direction but not in the vHC➔PFC direction, and it did so across all frequency bands (Fig. 5b; see Additional file 1: Table S1 and S2 for the full correlation tables in wild-type mice including all metrics and four frequency bands), which was not the case in the other two connections (Figs. 5a and 6a). In general, when carefully examining each pair of metrics, it became apparent that a correlation seen in one genotype and connection would rarely reappear in another one (Fig. 5a, b, 6a). In order to evaluate this systematically, we calculated the average correlation coefficient for each pair across the three connections and indicated its significance only if it was given in all of them (Fig. 6b). Reassuringly, the three pairs of mathematically closely related metrics showed consistent correlations in each connection and frequency band: PPC and PLV, parametric and non-parametric GC, and PDC and DTF. Beyond that, however, there was not a single pair of distinct metrics that achieved a significant correlation in all three connections in wild-type mice in the theta band, and only two (coherence correlating with PPC and PLV) in the gamma band (Fig. 6b, Additional file 1: Table S4). In Gria1-knockouts, the picture was similar, except that, here, coherence correlated significantly with PLV and PPC in both the theta and the gamma bands, and additionally, gamma wPLI correlated with coherence, PPC and PLV across connections. The latter result contrasts sharply with the absence of such wPLI correlations in wild-type mice, illustrating that some observed correlations may depend on the genotype and are hence not reflecting a priori redundancies. We further examined the correlations that were not significant in all three connections but yet achieved a high correlation coefficient on average. In the theta range, coherence also correlated strongly with PPC and PLV (average rho ≥ 0.8), in alignment with our first analysis (Figs. 2 and 3), the correlation result in knockouts and the gamma band in both genotypes (Fig. 6b), and with coherence phase angle (average rho > 0.7); further correlations yielded a medium (0.6-0.7) average rho: (a) coherence phase angle with PPC, PLV, PDC, DTF, GC and npGC and (b) PPC/PLV with wPLI, PDC and DTF. In the gamma range, coherence phase angle also showed the largest number of medium average correlations with other measures, namely with wPLI (average rho = 0.77) and with coherence magnitude, PPC, PLV, PDC, DTF, GC and npGC (average rho 0.6-0.7); the only remaining medium average correlations (0.6-0.7) in the gamma range were wPLI with PDC and DTF (Fig. 6b, Additional file 1: Table S4). Also in knockouts, the coherence phase angle showed medium average correlations with most other LFP-based metrics in the theta and gamma range (Fig. 6b). It should be noted that this combined analysis may overlook correlations with directional metrics in case they occur in only one direction.
For example, theta GC (and npGC) did actually correlate with theta PDC (and DTF) in each of the three connections but only in one direction each, PFC➔vHC, dHC➔PFC and vHC➔dHC, which is difficult to interpret given that we always recorded significant GC and PDC in both directions. Results from the SPC (MRL), PAC and amplitude cross-correlation (lag) analyses did not correlate with any other measure consistently in any genotype. This synopsis largely aligns with the redundancy patterns seen with the two former analyses (Table 2).

[Figure 4 legend, panels g-k: slopes of GC per frequency band and directional connection; slope of theta-gamma PAC; slope of cross-correlation lags indicating putative changes of temporal shifts in the named frequency bands; black stars indicate significant genotype differences (t test); error bars or shaded regions indicate SEM; #p < 0.1; *p < 0.05; **p < 0.01; ***p ≤ 0.001.]

Sensitivity of measures to reference location
The choice of placement site for the reference electrode varies considerably between studies, and both referencing to the ground screw above the cerebellum (as done for all above analyses) and to the anterior part of the frontal cortex are widely used. In order to investigate the effect of this difference, we recorded a separate reference signal from a frontal reference screw [2,45] and used it to digitally re-reference all recorded data by subtracting this signal from the recorded LFP traces before re-calculating the local power, coherence, wPLI and GC. Using repeated-measures ANOVAs with the within-subject factor of re-referencing and the between-subject factor of genotype, we found that the location of the reference has quite a substantial influence on the results. There were significant effects of re-referencing on delta power, coherence and GC in all brain regions (except for the MD) and connections, while the effect on wPLI was comparatively minor (but note that delta wPLI is generally very low and entirely different from delta-coherence and GC; Fig. 7a-m). In the theta range, in contrast, re-referencing affected power only in the dHC but strongly impacted coherence, wPLI and GC alike along both hippocampal-prefrontal connections, not only in terms of significant effects of re-referencing, but also in terms of genotype-reference interactions, which indicate that the prior conclusions on theta-range connectivity are partly dependent on the position of the reference. In the GC measure, interactions were apparent in the d/vHC➔PFC direction but not in the reverse (Fig. 7k, l). Nevertheless, there were also significant effects of genotype in those connections and measures, suggesting that the fundamental observation of elevated hippocampal-prefrontal theta connectivity in knockouts still holds, especially for the PFC-dHC connection and the GC measure in general (Fig. 7e, f, h, i, k, l). Intra-hippocampal theta connectivity was not much affected by the reference placement, irrespective of measure (Fig. 7g, j, m). In the higher frequency ranges, the effects were more mixed. Beta power in the dHC and coherence, but only partly wPLI and GC, along its connections were affected by reference placement.
In the gamma range, re-referencing impacted power in the PFC and dHC, wPLI in the PFC-d/vHC connections and coherence along all three connections (Fig. 7a-j). In fact, the formerly observed lower PFC-dHC gamma coherence and wPLI in knockouts (Fig. 2d, j) were dependent on the reference placement for detection (interaction effect only for coherence and wPLI, Fig. 7e, h). A similar observation holds for the PFC-vHC gamma connectivity, which was increased in KOs in the wPLI, but not the coherence, measure (Fig. 2e, k). Here again, an interaction indicated that the absence or presence of this difference in the coherence measure depends on the reference location (Fig. 7f), while an effect of genotype is maintained when using wPLI even though an interaction is found in addition (Fig. 7i). The impact of referencing on gamma-GC, in contrast, was limited to the dHC➔PFC projection (Fig. 7k-m). In summary, a frontal reference screw, as often used when studying prefrontal-hippocampal connectivity [2,45], may considerably alter the results obtained for LFP-based measurements of connectivity between the PFC and the hippocampus. Somewhat surprisingly, the wPLI measure does not eliminate this contingency but only reduces it, especially in the beta-gamma range. Referencing effects on GC are particularly visible in the low (delta/theta) frequency range and (as interactions) in the direction from the hippocampus to the PFC.

Discussion
We here examined the level of redundancy and the experimental contingencies of the most widely applied measures of inter-regional directed and non-directed neuronal connectivity that are obtainable with chronically implanted field electrodes in awake rodents. This analysis revealed a surprisingly large absence of redundancies between such metrics and a worrying contingency with respect to the location of the reference electrode. Both findings suggest that the implicitly held belief that experimental results obtained with one metric of connectivity and one configuration for referencing would allow general conclusions about aberrations in inter-regional functional connectivity is problematic. Intriguingly, a similar conclusion has been reached by a recent study on connectivity measures applied to human EEG data [50]. While this finding was somewhat expected a priori when regarding metrics of distinct conceptual foundation, e.g. non-directional synchrony vs. measures of causation, the lack of similarity even within the same analytical category is unexpected. From a conceptual perspective, the results reveal the absence of a concrete empirical counterpart of the rather interchangeably used terms of inter-regional communication, coupling, information transfer or functional connectivity. Given these contingencies of a result obtained with any single metric, it is difficult to equate it with the overly generic notion of neuronal communication. A particular analytical problem appears to be the lack of benchmarking of the sensitivity, specificity and robustness of the individual measures against a ground truth of actual physiological trans-synaptic activity along anatomically verified connections.
Notably, we here, like previous studies, found evidence for significant causal influence not only along the direct anatomical projections, dHC➔vHC [51,52], vHC➔dHC [52] and vHC➔PFC [52][53][54][55], but also along the PFC-dHC connection that is mediated only indirectly via the nucleus reuniens [56,57], and even in the direction for which no obvious anatomical correlate has been described yet to our knowledge (PFC➔vHC [52]), which complicates the validation and interpretation of the functional connectivity measurements. In the absence of such benchmarking and while facing considerable logistical limits in applying multiple referencing schemes and metrics for every experiment, our analysis at least qualitatively implies some guidelines for choosing the set of coupling metrics suited for a rather comprehensive, yet non-redundant analysis of inter-regional communication. Firstly, we demonstrate that some mathematically related measures do actually show a pairwise redundancy and hence do not need to be included in the same analysis, namely PPC and PLV [30], parametric and non-parametric GC (allowing for considerably faster computation by using the non-parametric approach [15]) and PDC and DTF [16,18]. Secondly, beyond these reliable redundancies, we found further partial redundancies across connections, genotypes and frequencies helping to narrow the list of metrics to include in an analysis further. Most importantly, PPC and PLV also showed considerable overlap with both the magnitude and (to a lesser degree) the phase angle of coherence, and medium average correlations with wPLI, PDC and DTF. In addition, coherence phase angle correlated broadly at a medium average level with coherence amplitude, PDC, DTF, GC and npGC in addition to PLV and PPC. For practical purposes, this suggests that an assessment of two metrics, PPC and coherence phase angle, would be a useful first-pass approach to survey LFP data for possible aberrations in functional connectivity, which can then be followed up with mutually non-redundant directional metrics. Thirdly, while in such further analysis, GC and PDC (or DTF) may seem particularly attractive metrics given that they deliver a more fine-grained picture of coupling in distinct directions and may be interpreted as indicators of causal influence between two brain regions, it is important to note that they do not yield similar results even though they are sometimes (erroneously [58]) equated. Despite some correlations between PDC (and DTF) and PLV, PPC and coherence phase angle in the correlation analysis (Figs. 5 and 6), there were actually considerable and irresolvable discrepancies between these measures in the genotype comparison (compare Fig. 2 with Fig. 3d-f); for example, genotype-related differences in PFC-dHC gamma-range coupling seen across all measures of synchrony and coherence phase angle were not detected by PDC, while the reverse was true for the vHC-dHC connection. GC, in contrast, did mostly reflect aberrations seen with the synchrony measures and could clarify their directional underpinning (Table 2). Therefore, PDC/DTF and GC may serve as complementary metrics rather than surrogates. Fourthly, spike-phase and phase-amplitude coupling cannot be expected to be equivalent to any of the other parameters and are therefore very useful to include to deliver a different perspective on functional connectivity. While this may have been expected given their distinct biological nature, the degree of absence of redundancy is nevertheless astonishing.
It should be noted, however, that the presented SPC analysis using MUA [21] is likely far from optimal given that units cannot be selected by moving the electrodes and are not properly sorted. The recording of single-unit activity from moveable electrode bundles or arrays [2,45] will certainly improve the assessment of SPC and its related directional measures. Finally, for LFP-based measures, the reference electrode should be placed in a brain structure that is largely separate from the brain regions between which connectivity is studied. A frontal screw may easily obscure phenotypes in prefrontal connectivity as it may pick up field potential signals from the PFC [29,59].

[Figure 6 legend: Spearman's coefficient (rho) and significance of bivariate correlations between individual measures of connectivity in the vHC-dHC connection within KO and WT mice, plus average correlation coefficients across the three connections with significance indicated only if present in every connection; white stars, p < 0.01; purple stars, p < 0.001; see Additional file 1: Table S3 and S4 for all pairwise correlations.]

[Figure 7 legend: impact of the reference electrode placement; spectra of power, coherence and wPLI and band-wise GC under standard referencing to the cerebellar ground screw vs. digital re-referencing to the frontal screw; stars indicate RM-ANOVA results: black, effect of genotype; green, effect of chosen reference; grey, genotype-reference interaction; *p < 0.05; **p < 0.01; ***p ≤ 0.001.]

Conclusions
In summary, our analysis calls for a more cautious interpretation of previous findings in the rodent literature on inter-regional coupling (especially when regarding negative results), the need for better benchmarking of individual measures and the necessity to report multiple measures of connectivity in future studies.

Methods

Animals
Male and female Gria1 knockout (Gria1−/−, Gria1tm1Rsp; MGI:2178057) [60] mice (N = 15, 9 males) and wild-type littermate controls (N = 12, 8 males) were bred from heterozygous parents. Animals were group-housed in type II long individually ventilated cages (Greenline, Tecniplast, G), enriched with sawdust, sizzle-nest™ and cardboard houses (Datesand, UK) and subjected to a 13-h light/11-h dark cycle. The mice were implanted with electrodes at ca. 9 months of age and were tested in the open-field test ca. 3-5 weeks later, allowing recovery from surgery in the interim.
Surgery
Electrode implantation surgeries under general isoflurane anaesthesia and a broad peri-operative analgesic regime were conducted similarly to those previously described for a similar dataset from a distinct cohort [48]. Briefly, single polyimide-insulated tungsten wires of 50 μm diameter (WireTronic Inc., CA, USA) were implanted, with reference to bregma (in mm), into the PFC (AP + 1.8-1.9, ML 0.3-0.35; 1.8-1.9 below pia), MD (AP − 1.2, ML 0.3, 2.7 below pia), dHC (AP − 1.9-2.0, ML 1.5, 1.4 below pia) and vHC (AP − 3.1-3.2, ML 2.9-3.0, 3.4 mm for single and 3.8-3.9 mm for dual electrodes below pia). In a majority of mice, dual electrodes were used for the PFC and vHC, whereby the second electrode was placed about 0.5 mm higher than the stated distance from pia. In the later analysis, the data from each electrode was regarded as the unit of observation (N), so that a single mouse could contribute up to an N = 4 for vHC-PFC connections and up to an N = 2 for dHC-vHC, PFC-vHC, MD-PFC and MD-vHC connections. Both hemispheres were implanted in roughly equal proportion. Stainless steel screws (1.2 mm diameter, Precision Technologies, UK) were implanted in the contralateral hemisphere ca. 1 mm from the midline above the cerebellum (AP − 5.5) for ground and above the anterior frontal cortex (AP + 4.0) for additional reference, and were connected with a 120-μm PTFE-insulated stainless steel wire (Advent Research Materials Ltd., UK; Fig. 1a). All electrode wires were connected to pins in a dual-row 6-pin or 8-pin connector (Mill-Max, UK). To later determine electrode placements post-mortem, electrolytic lesions were made after breathing had ceased under terminal ketamine/medetomidine anaesthesia. Immediately afterwards, animals were transcardially perfused with PBS followed by 4% paraformaldehyde (PFA)/PBS, and the brains were post-fixed for 24 h in PFA/PBS. Coronal sections of 60 μm were cut on a vibratome in PBS and then washed 3 times in PBS, stained with DAPI and mounted for inspection of lesion sites on an epifluorescence microscope (DM6, Leica).

Novelty-induced locomotion and recording
Animals were tethered to enable electrophysiological recordings and then placed into a novel environment consisting of a clear type III plastic cage (length 43 cm, width 22 cm, height 20 cm; Tecniplast) containing clean sawdust. Animals were allowed to explore for 10 min. The animals' location in the open field was video-tracked with ANY-maze (Stoelting, UK), and the distance travelled was calculated in 20-s time bins. Prior to testing, a 32-channel RHD2132 headstage (Intan Technologies, CA, USA) was plugged into the implanted connector via a custom-built adaptor that interfaced a 36-pin Omnetics connector (A79022-001, MSA components, G) with another 6-pin or 8-pin Mill-Max connector. The headstage was wired to an Open-Ephys acquisition board (https://open-ephys.org, USA; obtained through the Open-Ephys store at Champalimaud, Portugal) via two light-weight flexible SPI cables (Intan Technologies), daisy-chained through a custom-connected miniature slip-ring (Adafruit, NY, USA). The adaptor was wired so that all signals were referenced to the ground signal obtained from above the contralateral cerebellum, while the signal from the additional frontal reference screw was recorded separately (for later offline re-referencing) like the LFP channels, i.e. also referenced to ground.
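This configuration permits a simple offline re-referencing, as used for the analysis in Fig. 7. A minimal sketch (illustrative variable and channel names, not those of the actual analysis scripts; implicit expansion requires MATLAB R2016b or later):

% Offline digital re-referencing: all channels were acquired against the
% cerebellar ground, so subtracting the frontal-screw channel re-references
% the LFPs to the frontal screw.
lfpGround  = data(:, lfpChans);          % LFPs vs. cerebellar ground
frontalRef = data(:, refChan);           % frontal screw vs. the same ground
lfpFrontal = lfpGround - frontalRef;     % LFPs re-referenced to frontal screw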
Using the RHD2132 headstage, the Open-Ephys acquisition board and the Open-Ephys acquisition software, data were amplified and digitized, sampled at 10 kHz and digitally high-pass filtered at 0.1 Hz for the acquisition of raw data (for MUA and GC analysis) and simultaneously band-pass filtered at 0.1-250 Hz (for all remaining analysis of LFP signals).

Data processing and analysis
All signal analyses were done in MatLab (MathWorks). Data were exported to MatLab and, for all LFP analyses, down-sampled to 1 kHz and analysed with custom-written scripts. To reduce low-frequency drift, signals were first detrended using the locdetrend function of the Chronux signal processing toolbox (http://chronux.org/) with 1 s of data and a sliding window of 0.5 s.

Spectral analysis
Power and coherence spectra as well as the phase angles were calculated with routines implemented in the Chronux toolbox using the multi-taper method [61]. Power values were expressed as 10*log10 values for all analyses, and the range of frequencies was set from 0.1 to 48 Hz. A bandwidth of 0.2 Hz and a total of 220 tapers were used to calculate power and coherence over the course of the 10-min exploration time. To analyse the temporal development, power and coherence were also calculated in 10-s bins using a bandwidth of 1 Hz and 19 tapers.

Weighted phase lag index
To address the issue of volume conduction, we calculated the weighted Phase Lag Index (wPLI) [5] using the routines implemented in the FieldTrip toolbox [62]. The 10-min exploration time was divided into non-overlapping 1-s bins and padded to the next power of two. The complex cross-spectrum was computed using a Hann taper with a spectral smoothing of 0.5 Hz. For temporal analysis, wPLI was averaged for each minute of the 10-min period using the same spectral parameters.

Phase-locking value and pairwise phase consistency
Phase-locking was assessed using two of the most widely used metrics, namely the phase-locking value (PLV) [6] and pairwise phase consistency (PPC) [7]. Both were calculated using routines implemented in the FieldTrip toolbox [62]. The 10-min exploration time was divided into non-overlapping 1-s bins and padded to the next power of two. The complex cross-spectrum was computed using a Hann taper with a spectral smoothing of 0.5 Hz.

Phase-amplitude coupling
Cross-frequency coupling (CFC, [36]) was assessed using the measure of phase-amplitude coupling (PAC), the statistical relationship between the phase of a low-frequency and the amplitude of a high-frequency component, in a cross-regional analysis [22,23]. The 10-min recording was split into 1-min bins during which the PAC was calculated using the Modulation Index (MI, [23,63]). Briefly, time-series data were first band-pass filtered in the desired frequency range, followed by a Hilbert transform using the MatLab function hilbert, which yields the real and imaginary parts of the analytic signal and thereby the instantaneous amplitude and phase at any given time point. Theta phases were binned into eighteen 20° intervals, and the mean gamma amplitude was calculated in each phase bin. The distribution across bins was assessed using the Kullback-Leibler divergence [64] and normalized between 0 and 1. The MI is close to zero if the mean gamma amplitude is uniformly distributed over the theta phases and close to one if the mean gamma amplitude is exceptionally higher within one phase bin [23].
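A minimal MATLAB sketch of this cross-regional Modulation Index, assuming column-vector inputs lfpA (phase-providing region) and lfpB (amplitude-providing region) and a 1 kHz sampling rate; the filter design and variable names are illustrative, not the exact analysis code:

% Cross-regional theta-gamma PAC via the Modulation Index.
fs = 1000;
[bT, aT] = butter(2, [5 12]/(fs/2), 'bandpass');   % theta band
[bG, aG] = butter(2, [30 48]/(fs/2), 'bandpass');  % low-gamma band
phi = angle(hilbert(filtfilt(bT, aT, lfpA)));      % theta phase, region A
amp = abs(hilbert(filtfilt(bG, aG, lfpB)));        % gamma amplitude, region B
edges = -pi : (2*pi/18) : pi;                      % eighteen 20-degree bins
[~, ~, bin] = histcounts(phi, edges);              % phase bin of each sample
meanAmp = accumarray(bin, amp, [18 1], @mean);     % mean amplitude per phase bin
P  = meanAmp / sum(meanAmp);                       % normalized distribution
MI = (log(18) + sum(P .* log(P + eps))) / log(18); % KL-based Modulation Index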
Cross-correlation of instantaneous LFP amplitudes
To determine whether one signal was leading or lagging the other, amplitude cross-correlations of the instantaneous amplitudes of LFP oscillations between all brain regions were performed [10]. The 10-min period was divided into 1-s bins with a 95% overlap. First, the two signals were band-pass filtered in the respective frequency range; the Hilbert transform was computed using the MatLab function hilbert to calculate the instantaneous amplitude, i.e. the envelope of the signal. The mean amplitude was subtracted, and the cross-correlation between the amplitudes of the two signals was calculated with the MatLab function xcorr over lags ranging from − 100 to + 100 ms; the lag at which the cross-correlation peaked was determined [10]. While lags below − 100 ms or above 100 ms would have led to the exclusion of the respective data point [65], no instances of such lags were found in our dataset. To determine if the obtained lags or leads significantly differed from zero, Wilcoxon's signed rank tests were performed.
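A minimal sketch of this lead/lag analysis on a pair of signals (illustrative names lfpA and lfpB; theta band; 1 kHz sampling rate), omitting the 1-s binning of the actual analysis:

% Amplitude cross-correlation between two band-limited LFP envelopes.
fs = 1000;
[b, a] = butter(2, [5 12]/(fs/2), 'bandpass');
envA = abs(hilbert(filtfilt(b, a, lfpA)));     % instantaneous theta amplitude
envB = abs(hilbert(filtfilt(b, a, lfpB)));
envA = envA - mean(envA);                      % subtract mean amplitude
envB = envB - mean(envB);
maxLag = round(0.1*fs);                        % +/- 100 ms window
[cc, lags] = xcorr(envA, envB, maxLag, 'coeff');
[~, iPk] = max(cc);
peakLagMs = 1000 * lags(iPk) / fs;             % a peak lag consistently away
                                               % from 0 ms indicates a lead/lag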
Granger causality
Parametric Granger causality (GC) was calculated using the MVGC toolbox [66]. GC mainly applies to stationary signals, which means that the variances are not excessively changing over time [13,67]. Therefore, the 10-min period was divided into 1-min bins and the in-built trial-averaging function was used to calculate GC in non-overlapping 10-s sections to ensure reasonable stationarity [68][69][70]. The 1-min bins were used for the analysis of GC over time and then averaged to obtain a GC value for the whole 10-min testing period. Raw LFP data were down-sampled to 250 Hz to ensure a reasonable model order for autoregressive modelling [14,66,71]. The model order was obtained using the Bayesian Information Criterion (BIC, [72]) as it was shown to provide the best fit to electrophysiological data [66]. The model order was fixed to 27 across all animals and trials to obtain comparable results [73]. Non-prefiltered data were used because empirical analyses have shown that filtering time-series data increases the VAR model order and leads to high variances, making it unsuitable for GC analysis [71]. To obtain GC values for specific frequency bands, we first computed GC up to the Nyquist frequency and then integrated over the desired frequency range [71]. A permutation procedure implemented in the MVGC toolbox was performed to test the null hypothesis that values obtained by GC estimation occurred by chance [13,66]. Non-parametric Granger causality (npGC), the directed transfer function (DTF) and partial directed coherence (PDC) were calculated using the FieldTrip toolbox [62]. The same temporal configurations were used as described above for parametric GC, and raw LFP data were down-sampled to 250 Hz as well. Instead of deriving the noise covariance matrix and transfer function by autoregressive modelling (as done for parametric GC), these were obtained by applying Wilson's spectral matrix factorization to complex Fourier spectra. This non-parametric approach was shown to be better at capturing all spectral features, less error-prone because no model order had to be chosen and computationally faster [15,35].

Spike-phase coupling
Multi-unit activity was extracted by high-pass filtering the raw signal above 800 Hz and applying a threshold at 3.5 standard deviations from the mean. Spikes were excluded if the threshold crossing lasted longer than 2 ms or if spikes occurred within 1 ms of each other. The LFP of the second brain region was filtered between 5 and 12 Hz using the eegfilt function of the EEGLAB toolbox [74]. To account for speed-dependent waveform asymmetry in the theta oscillation, the theta phase was defined by linear interpolation between troughs of consecutive cycles [75,76]. Only periods in which the theta amplitude was above 0.25 standard deviations of its mean were included to ensure sufficient theta oscillations and prevent spurious phase determination. The number of spikes was fixed to 1000 for each recording to prevent spuriously high MRL values and fluctuations in the firing rate. Each spike was assigned a theta phase, and the mean resultant vector length (MRL) was calculated as an indicator of the strength of coupling using the CircStat toolbox [2,77]. The MRL gets close to one when the spikes are concentrated around a certain phase of the theta oscillation and approaches zero when they are uniformly distributed. Additionally, the phase angles of the mean resultant vector were used to quantify the differences in phase angles between genotypes, which were statistically assessed with the Watson-Williams test for two samples [20,77]. To determine the directionality between multi-unit activity and theta oscillations, phase-locking was calculated for 50 different temporal offsets ranging from − 100 to + 100 ms in steps of 4 ms. If the MRL peaked at a positive offset, spikes were most strongly locked to the next theta cycle, suggesting that spiking activity drives theta [19]. Wilcoxon's signed rank test was applied to determine if the lag or lead was significantly different from zero.

Statistics
Genotype-related differences within the same metric and frequency range were assessed by independent-sample t tests or, in the case of GC (Fig. 3), by Sidak paired post hoc tests conducted after a significant effect of genotype or interaction in the prior repeated-measures (RM) ANOVA. For circular data (spike and coherence phase angles), the Watson-Williams two-sample test was used to assess genotype-related differences. A p value < 0.05 was used as an indicator of statistical significance; no further corrections for multiple comparisons were applied, given that we aimed to emulate the situation that only a single measure is used to characterize connectivity, and false negatives were to be avoided given the analytical goal of detecting redundancies between metrics. Bivariate correlations were calculated using Spearman's rho. To detect correlations between two circular variables and between circular and linear data, we used circular-circular correlation and circular-linear correlation as implemented in [77]. Variability in the data is displayed as the standard error of the mean (SEM) throughout.

Additional file 1: Contains four tables, Table S1-S4, that state the results of pairwise Spearman correlations between all metrics analysed in this study for a given connection. The first number in each cell states Spearman's rho, the second number its p-value. Significant correlations are highlighted in green. See the 'Info' sheet of the file for further information, including n-numbers. Table S1. Correlations of connectivity metrics along the PFC-dHC connection, corresponding to main Figure 5a. Table S2. Correlations of connectivity metrics along the PFC-vHC connection, corresponding to main Figure 5b. Table S3. Correlations of connectivity metrics along the vHC-dHC connection, corresponding to main Figure 6a. Table S4.
Statistics

Genotype-related differences within the same metric and frequency range were assessed by independent-sample t test or, in the case of GC (Fig. 3), by Sidak paired post hoc tests conducted after a significant effect of genotype or interaction in the prior repeated-measures (RM) ANOVA. For circular data (spike and coherence phase angles), the Watson-Williams two-sample test was used to assess genotype-related differences. A p value < 0.05 was used as an indicator of statistical significance; no further correction for multiple comparisons was applied, given that we aimed to emulate the situation in which only a single measure is used to characterize connectivity, and false negatives were to be avoided given the analytical goal of detecting redundancies between metrics. Bivariate correlations were calculated using Spearman's rho. To detect correlations between circular and circular and between circular and linear data, we used circular-circular correlation and circular-linear correlation as implemented in [77]. Variability in the data is displayed as standard error of the mean (SEM) throughout.

Additional file 1: Contains four tables, Tables S1-S4, stating the results of pairwise Spearman correlations between all metrics analysed in this study for a given connection. The first number in each cell states Spearman's rho, the second its p-value. Significant correlations are highlighted in green. See the 'Info' sheet of the file for further information, including n-numbers. Table S1. Correlations of connectivity metrics along the PFC-dHC connection, corresponding to main Figure 5a. Table S2. Correlations of connectivity metrics along the PFC-vHC connection, corresponding to main Figure 5b. Table S3. Correlations of connectivity metrics along the vHC-dHC connection, corresponding to main Figure 6a. Table S4. Average correlations of connectivity metrics along all three connections, corresponding to main Figure 6b. Each cell lists the average Spearman's rho, followed by an indicator of a significant correlation in all three connections (0) or the lack of it (1).
Chronic Administration of Baicalein Decreases Depression-Like Behavior Induced by Repeated Restraint Stress in Rats

Baicalein (BA), a plant-derived active flavonoid present in the root of Scutellaria baicalensis, has been widely used for the treatment of stress-related neuropsychiatric disorders including depression. Previous studies have demonstrated that repeated restraint stress disrupts the activity of the hypothalamic-pituitary-adrenal (HPA) axis, resulting in depression. The behavioral and neurochemical bases of the BA effect on depression remain unclear. The present study used the forced swimming test (FST) and changes in brain neurotransmitter levels to confirm the impact of BA on repeated restraint stress-induced behavioral and neurochemical changes in rats. Male rats received 10, 20, or 40 mg/kg BA (i.p.) 30 min prior to daily exposure to repeated restraint stress (2 h/day) for 14 days. Activation of the HPA axis in response to repeated restraint stress was confirmed by measuring serum corticosterone levels and the expression of corticotrophin-releasing factor in the hypothalamus. Daily BA administration significantly decreased the duration of immobility in the FST, increased sucrose consumption, and restored the stress-related decreases in dopamine concentrations in the hippocampus to near-normal levels. BA significantly inhibited the stress-induced decrease in neuronal tyrosine hydroxylase immunoreactivity in the ventral tegmental area and in the expression of brain-derived neurotrophic factor (BDNF) mRNA in the hippocampus. Taken together, these findings indicate that administration of BA prior to repeated restraint stress significantly improves helpless behaviors and depressive symptoms, possibly by preventing the decreases in dopamine and BDNF expression. Thus, BA may be a useful agent for the treatment or alleviation of the complex symptoms associated with depression.

INTRODUCTION

Baicalein (BA; 5,6,7-trihydroxyflavone), one of the most active natural plant flavonoids, is found in the dry roots of Scutellaria baicalensis Georgi [1]. This compound exerts multiple physiological actions and produces a variety of biological effects in the central nervous and immune systems; several studies have investigated the anti-inflammatory, antioxidant, anti-proliferative, anti-apoptotic, and anti-tumor properties of BA [2][3][4]. BA has been shown to cross the blood-brain barrier and may act directly in brain nuclei to produce pharmacological effects [5]. Several studies in experimental animal models have shown that BA has antidepressant-like effects, and the underlying mechanisms may be related to modulation of the extracellular signal-regulated kinase (ERK) signaling pathway in the hippocampus [1]. In addition, BA has been reported to attenuate irradiation-induced impairment of hippocampal neurogenesis by modulating oxidative stress and elevating brain-derived neurotrophic factor (BDNF) signaling [6], and to attenuate memory impairment in beta-amyloid peptide (25-35)-induced amnesia in rats [7]. BA may improve cognitive deficits and reduce apoptosis following transient global cerebral ischemia/reperfusion injury-induced hippocampal neuronal damage in mice via phosphorylation of ERK (pERK) and stimulation of BDNF expression in vivo [8,9].
A limited amount of information is available on the clinical effects of BA on depression and morbid forgetfulness [10], and the effect of BA treatment on depression-like symptoms induced by repeated restraint stress in rats is not known. Chronic exposure to stressful life events is a well-established and significant risk factor for the development and maintenance of several neuropsychological conditions and helplessness, including major depression [11,12]. Chronic stress can trigger or exacerbate a disruption in the activity of the hypothalamic-pituitary-adrenal (HPA) axis, as evidenced by observations that elevated circulating corticosterone (CORT) levels disrupt the circadian regulation of CORT secretion as well as the glucocorticoid (GC) receptor negative-feedback circuit [13,14]. Elevated GC levels cause changes in brain function that impair the regulation of physiological and behavioral responses to stressors and are closely associated with psychosomatic disorders and affective behaviors that are indicative of or consistent with depressive-like symptoms [15,16]. Furthermore, several animal studies have shown that chronic stress disrupts HPA axis activity, leading to morphological changes in the hypothalamus, hippocampus, and prefrontal cortex [17,18], as well as changes in a variety of neurotransmitters [19,20], reductions in body weight, and altered behaviors [21][22][23][24]. A reduction in brain dopamine (DA) and serotonin (5-HT) levels has been reported to disrupt HPA axis activity and cause depressive-like symptoms in rats [25], mimicking the symptoms of human depression [19]. Several antidepressant medicines currently in use were developed several decades ago based on evidence from basic and clinical studies suggesting that low levels of monoamines cause depression [26]. Thus, current treatment for depression primarily relies on traditional therapeutic strategies that modulate the serotonergic and noradrenergic systems with the goal of restoring 5-HT and DA levels in the brain [24]. Therefore, the present study used repeated restraint-induced stress to investigate the effect of BA on the symptoms of chronic stress-induced depression in an animal model. We used the forced swimming test (FST) as a behavioral measure, and brain concentrations of DA and 5-HT and BDNF mRNA expression in the hippocampus as neurobiological measures of BA action and its potential underlying mechanisms.

Animals

Adult male Sprague-Dawley (SD) rats weighing 260-280 g were obtained from Samtako Animal Co. (Seoul, Korea). Animals were maintained on a 12-hour light/dark cycle (lights on at 7:00 a.m., lights off at 7:00 p.m.) under controlled temperature (22±2 °C) and humidity (55±15%), and were given a standard diet and water during the experiments. The rats were housed in a limited-access rodent facility with up to five rats per polycarbonate cage. The animal experiments were conducted in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals (NIH Publications No. 80-23), revised in 1996, and were approved by the Kyung Hee University Institutional Animal Care and Use Committee. All animal experiments began at least 7 days after the animals arrived. Efforts were made to minimize the number and suffering of animals.

Experimental groups

This study was designed to explore the efficacy of BA administration for alleviating repeated restraint stress-induced depression-like behavior in an animal model using behavioral and neurobiological methodologies.
The rats were randomly divided into six groups of six to seven individuals each, as follows: an unstressed group treated daily with saline instead of BA (0.9% NaCl, i.p.; SAL group, n=6), a restraint-stressed group treated daily with saline instead of BA (STR group, negative control, n=7), a restraint-stressed plus 10 mg/kg BA-treated group (STR+BA10 group, n=6), a restraint-stressed plus 20 mg/kg BA-treated group (STR+BA20 group, n=6), a restraint-stressed plus 40 mg/kg BA-treated group (STR+BA40 group, n=6), and a restraint-stressed plus 10 mg/kg fluoxetine-treated group (STR+FLX group, positive control, n=6). BA and fluoxetine (FLX) were purchased from Sigma-Aldrich Chemical Co. (St. Louis, MO, USA). BA and FLX were administered intraperitoneally (i.p.) 30 min prior to the daily restraint stress for 14 days, and both were dissolved in 0.9% physiological saline solution before use. All drugs were freshly prepared right before every experiment. The restraint stress procedure was carried out once daily for 2 h, from 10:00 a.m. to 12:00 p.m., for 14 consecutive days in rodent immobilization bags. In brief, rats were placed in transparent plastic tubes (20×7 cm), one end of which is cone-shaped with several 3-mm holes for breathing while the other end is open, as described in our previous study [27]. The animals had ample air but were unable to move within the tubes. The following parameters were measured to monitor the development of psychosomatic disorders caused by repeated restraint stress: changes in body weight gain (from the start of restraint stress) and serum CORT levels (after repeated restraint stress-induced depression-like symptoms). Behavioral testing for depression-like behavior was done 24 h after the end of the chronic physiological stress protocol. All rats underwent the FST on the 15th day, after the repeated restraint stress. After behavioral testing and body weighing, rats were sacrificed and brain tissues were immediately collected for experiments or stored at −70 °C for later use. All groups except the SAL group received the same restraint stress. The entire experimental schedule of drug administration and behavioral examinations is shown in Fig. 1.

Measurement of sucrose intake

The sucrose intake test was performed as described previously, with minor modifications [28]. For this test, rats were trained to consume 1% sucrose solution prior to the start of the experiment. Briefly, 48 hours before the test, the rats were adapted to 1% sucrose solution (w/v): two bottles of 1% sucrose solution were placed in each cage, and 24 hours later the 1% sucrose solution in one bottle was replaced with tap water for 24 hours. After the adaptation, rats were deprived of water and food for 10 hours. The sucrose preference test was conducted at 9:00 a.m.; rats were housed in individual cages with free access to two bottles containing 100 ml of sucrose solution (1%, w/v) and 100 ml of water, respectively. After 3 hours, the volumes of consumed sucrose solution and water were measured, and the sucrose preference was calculated by the following formula: sucrose preference = sucrose consumption/(water consumption + sucrose consumption)×100% [24,28].
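As a concrete illustration of the sucrose-preference formula above, here is a minimal sketch in Python (the function name and example volumes are ours, purely illustrative):

```python
def sucrose_preference(sucrose_ml: float, water_ml: float) -> float:
    """Sucrose preference (%) = sucrose / (water + sucrose) * 100."""
    total = sucrose_ml + water_ml
    if total <= 0:
        raise ValueError("no fluid consumed")
    return 100.0 * sucrose_ml / total

# Example: 12 ml sucrose solution and 4 ml water -> 75.0% preference
print(sucrose_preference(12.0, 4.0))
```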
CORT, DA and 5-HT analysis

After restraint stress for 14 days, the CORT concentration in blood and the DA and 5-HT concentrations in brain tissue were determined. Animals were killed by decapitation one day after behavioral measurement. For this, the unanesthetized rats were rapidly decapitated, and blood was quickly collected via the abdominal aorta. The hippocampus or medial prefrontal cortex was rapidly removed from the rat brains in randomized order. Special care was taken to avoid pre-decapitation stress; while each rat was decapitated, the other animals were kept outside the room and handled for a few minutes prior to sampling. The blood samples were centrifuged at 4,000 g for 10 min, and serum was collected and stored at −20 °C until use. The CORT concentration was measured by a competitive enzyme-linked immunosorbent assay (ELISA) using a rabbit polyclonal CORT antibody (Novus Biologicals Corticosterone kit; Novus Biologicals, LLC., Littleton, CO, USA) according to the manufacturer's protocol. The brain tissue samples were stored at −80 °C until use. Hippocampus or medial prefrontal cortex was homogenized in a lysis buffer containing 137 mM NaCl, 20 mM Tris (pH 8.0), 1% NP40, 10% glycerol, 1 mM PMSF, 10 mg/ml aprotinin, 1 mg/ml leupeptin and 0.5 mM sodium vanadate. Homogenization was carried out on ice using a tissue homogenizer, and homogenates were incubated for 1 min at 4 °C with shaking. Homogenates were centrifuged and supernatants were collected. Protein concentrations were estimated by the procedure of Gmeiner and Seelos [29] with BSA as the standard. The DA and 5-HT concentrations were measured by competitive ELISA using mouse monoclonal DA and 5-HT antibodies (Novus Biologicals DA and 5-HT kits; Novus Biologicals, LLC., Littleton, CO, USA) according to the manufacturer's protocol. Samples (or standards) and conjugate were added to each well, and the plate was incubated for 1 h at room temperature without blocking. After the wells were washed several times with buffer and proper color had developed, the optical density was measured at 450 nm using an ELISA reader (MutiRead 400; Authos Co., Vienna, Austria).

Forced swimming test (FST)

The forced swimming test, a representative behavioral test for depression, is frequently used to evaluate the activity of potential antidepressant drugs in rodent models. Forced immersion of rats in water for an extended period produces a characteristic immobility behavior. Antidepressant treatments decrease immobility, accompanied by an increase in escape responses such as climbing and swimming. A transparent Plexiglas cylinder (20 cm diameter × 50 cm height) was filled to a depth of 30 cm with water at 25 °C. At this depth, rats could not touch the bottom of the cylinder with their tails or hind limbs. On day 14, the rats in all groups were trained for 5 min by placing them in the water-filled cylinder. On day 15, animals were subjected to 5 min of forced swimming, and escape behaviors (climbing and swimming) were recorded. The duration of immobility was scored over the 5-min test period. Climbing behavior was defined as upward-directed movements of the forepaws along the side of the swim chamber, and swimming behavior as movements throughout the swim chamber, including crossing into another quadrant. Immobility was calculated as the length of time in which the animal did not show escape responses (i.e., total test time minus time spent climbing and swimming). The animals' behavior was continuously recorded throughout the testing session with an overhead video camera. After the test, the rat was removed from the tank, dried with a towel and placed back in its home cage. The water in the swim tank was changed between rats.
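Since immobility is scored as total test time minus time spent in escape behaviors, the calculation can be sketched as follows (names and values are illustrative, not from the study's scoring software):

```python
def immobility_time(total_s: float, climbing_s: float, swimming_s: float) -> float:
    """Immobility (s) = total test time minus climbing and swimming time."""
    immobile = total_s - (climbing_s + swimming_s)
    if immobile < 0:
        raise ValueError("escape times exceed total test time")
    return immobile

# Example: 5-min (300 s) FST with 80 s climbing and 100 s swimming
print(immobility_time(300.0, 80.0, 100.0))  # -> 120.0 s immobile
```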
Open field test

Prior to the forced swimming test, the rats were individually placed in a rectangular container made of dark polyethylene (60×60×30 cm), which provided the best contrast to the white rats, in a dimly lit room equipped with a video camera above the center of the arena, and their locomotor activity (the animal's movements) was measured. Locomotor activity, indicated by the speed and distance of movements, was monitored by a computerized video-tracking system using the S-MART program (Panlab Co., Barcelona, Spain). After 5 min of adaptation, the distance travelled in the container was recorded for another 5 min. Locomotor activity was measured in centimeters. The floor surface of each chamber was thoroughly cleaned with 70% ethanol between tests.

Immunohistochemistry of corticotrophin-releasing factor (CRF) and tyrosine hydroxylase (TH)

For immunohistochemical studies, three rats in each group were deeply anesthetized with sodium pentobarbital (80 mg/kg, by intraperitoneal injection) and perfused through the ascending aorta with normal saline (0.9%) followed by 300 ml (per rat) of 4% paraformaldehyde in 0.1 M phosphate-buffered saline (PBS). The brains were removed in randomized order, post-fixed overnight, and cryoprotected with 20% sucrose in 0.1 M PBS at 4 °C. Coronal sections 30 μm thick were cut through the hypothalamus and ventral tegmental area (VTA) using a cryostat (Leica CM1850; Leica Microsystems Ltd., Nussloch, Germany). The sections were obtained according to the rat atlas of Paxinos and Watson (1986). The sections were immunostained for CRF and TH expression using the avidin-biotin-peroxidase complex (ABC) method. Briefly, the sections were incubated with primary goat anti-CRF antibody (1:500 dilution; Santa Cruz Biotechnology Inc., California, CA, USA) and sheep anti-TH antibody (1:2,000 dilution; Chemicon International Inc., Temecula, CA, USA) in PBST (PBS plus 0.3% Triton X-100) for 72 h at 4 °C. The sections were then incubated for 120 min at room temperature with secondary antibody. The secondary antibodies were obtained from Vector Laboratories Co. (Burlingame, CA, USA) and diluted 1:200 in PBST containing 2% normal serum. To visualize immunoreactivity, the sections were incubated for 90 min in ABC reagent (Vectastain Elite ABC kit; Vector Labs. Co., Burlingame, CA, USA), then incubated in a solution containing 3,3'-diaminobenzidine (DAB; Sigma-Aldrich Chemical Co., St. Louis, MO, USA) and 0.01% H2O2 for 1 min. Finally, the tissues were washed in PBS, briefly rinsed in distilled water, and mounted individually onto slides. Images were captured using the AxioVision 3.0 imaging system (Carl Zeiss, Inc., Oberkochen, Germany) and processed using Adobe Photoshop (Adobe Systems, Inc., San Jose, CA, USA). The sections were viewed at 200× magnification, and the numbers of CRF- and TH-labeled cells were quantified in the hypothalamus and VTA. CRF- and TH-labeled cells were counted by an observer blinded to the experimental groups. Counting of immunopositive cells was performed within squares (100×100 μm²) anatomically localized in at least three different hypothalamus and VTA sections per rat brain, according to the stereotactic rat brain atlas of Paxinos and Watson [30]. The counted sections were randomly chosen from equal levels of serial sections along the rostral-caudal axis.
Only stained cells whose intensity reached a defined value above background were considered immunopositive. Distinct brown spots indicating CRF- and TH-immunopositive cells were observed in the hypothalamus and VTA. Brightness and contrast of the raw images were not adjusted, to exclude any possibility of subjective selection of immunoreactive cells.

Total RNA preparation and RT-PCR analysis

The expression levels of BDNF mRNA were determined by reverse transcription-polymerase chain reaction (RT-PCR). The hippocampus was isolated from four rats per group. After decapitation, the brain was quickly removed and stored at −80 °C until use. Total RNA was prepared from the brain tissues using TRIzol reagent (Invitrogen Co., Carlsbad, CA, USA) according to the supplier's instructions. Complementary DNA was first synthesized from total RNA using a reverse transcriptase (Takara Co., Shiga, Japan). PCR was performed using a PTC-100 programmable thermal controller (MJ Research, Inc., Watertown, MA, USA). The operating conditions were as follows: The PCR products were separated on 1.2% agarose gels and stained with ethidium bromide. The density of each band was quantified using an image-analyzing system (i-Max, CoreBio System Co., Seoul, Korea). Expression levels were compared by calculating the relative density of the target band, such as BDNF, to that of GAPDH.

Statistical analysis

All measurements were performed by an independent investigator blinded to the experimental conditions. Results in figures are expressed as mean ± standard error of the mean (SE). Differences within or between normally distributed data were analyzed by analysis of variance (ANOVA) using SPSS (Version 13.0; SPSS, Inc., Chicago, IL, USA) followed by Tukey's post hoc test. Statistical significance was set at p<0.05.
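The ANOVA-plus-Tukey pipeline just described can be sketched in Python as follows; the original analysis used SPSS, and the group values below are made up purely to illustrate the procedure:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Illustrative serum CORT values (arbitrary units) for three of the groups
sal = np.array([45.0, 52.0, 48.0, 50.0, 47.0, 51.0])
stress = np.array([98.0, 105.0, 110.0, 95.0, 102.0, 99.0, 101.0])
ba40 = np.array([70.0, 75.0, 68.0, 72.0, 74.0, 71.0])

f_stat, p_val = f_oneway(sal, stress, ba40)  # one-way ANOVA across groups
print(f"ANOVA: F={f_stat:.2f}, p={p_val:.4f}")

values = np.concatenate([sal, stress, ba40])
groups = ["SAL"] * len(sal) + ["STR"] * len(stress) + ["STR+BA40"] * len(ba40)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # Tukey's post hoc test
```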
Effect of BA on repeated restraint stress-induced body weight loss, increase in serum CORT levels and reduction in sucrose intake

Rats exposed to repeated restraint stress begin to lose body weight on the first day of restraint stress, and this initial reduction is sustained for some time without recovery, or is even exacerbated in some cases [27]. In the present study, we likewise examined body weights daily for 14 days to identify whether repeated restraint stress (STR group) caused body weight loss (the difference between daily weights and starting weight) (Fig. 2A). Analysis of the body weight values revealed a significant gradual reduction in body weight gain over the 14 days in the STR group, as compared to the normal control (SAL) group. During this period, 40 mg/kg BA-treated rats showed significantly attenuated reductions in body weight gain, as compared to the STR group (p<0.05 on days 12 and 14). Acute restraint stress induces a large increase in the serum CORT level, which gradually decreases as the restraint stress is repeatedly applied, probably due to adrenal habituation [31]. The serum CORT levels were measured in each group after exposure to repeated restraint stress for 14 days (Fig. 2B). The ELISA analysis demonstrated that repeated restraint stress for 14 days significantly increased the serum CORT concentration in the rats, by 203.88% as compared to the SAL group (p<0.05). This indicates that the repeated restraint stress remained sufficiently stressful, even though the evoked CORT response (physiological response) to repeated restraint stress was significantly greater than the response to a single restraint session (data not shown). Daily administration of BA slightly inhibited the repeated restraint stress-induced increase in serum CORT level as compared to the STR group, although without statistical significance (p=0.199). In the present study, we also examined sucrose intake once every two days for 14 days to identify whether repeated restraint stress (STR group) caused the rats to consume much less sucrose solution than the SAL group, as seen in Fig. 2C. Analysis of the sucrose intake values revealed a significant gradual reduction in sucrose intake over the 14 days in the STR group, as compared to the normal control (SAL) group (p<0.01 on days 11 and 14; p<0.001 on day 13). During this period, 40 mg/kg BA-treated rats showed significantly attenuated reductions in sucrose intake, as compared to the STR group (p<0.05 on days 11, 13 and 14). The results also showed that the recovery of sucrose intake in the STR+BA40 group was almost comparable to that in the STR+FLX group.

Effect of BA on repeated restraint stress-induced depression-like behavior

Rats subjected to repeated restraint stress for 14 days exhibited a significant depression phenotype, characterized by increased immobility time during the FST, as compared to saline-treated controls (SAL group) (Fig. 3A; p<0.05). However, the rats in the STR+BA40 group showed a significant decrease in immobility time during the 5-min FST, as compared to those in the STR group (p<0.05), indicating that administration of 40 mg/kg BA decreased depression-like behavior. Similarly, we next focused on another key behavior, climbing. The rats in the STR group showed a significant decrease in climbing behavior during the FST, as compared to the SAL group (Fig. 3B; p<0.05). The rats in the STR+BA40 group showed a slight restoration of climbing time during the 5-min FST, as compared to the STR group, although without statistical significance (p=0.697). Also, repeated restraint stress for 14 days did not induce significant differences in swimming behavior among the groups during the FST (Fig. 3C; p=0.853). These results also showed that the reduction of immobility in the STR+BA40 group was almost comparable to that in the STR+FLX group.

Effect of BA on repeated restraint stress-induced motor functions

Open-field activity was used to evaluate locomotor activity among the rats receiving repeated restraint stress for 14 days (Fig. 4). No significant differences in locomotor activity were observed between groups (p=0.432), showing that administration of BA did not affect the rats' psychomotor performance.

Effect of BA on repeated restraint stress-induced CRF- and TH-like immunoreactivity

Following the behavioral tasks, CRF-like immunoreactivity was analyzed in the cell bodies of various hypothalamic regions, including the paraventricular nucleus (PVN; Fig. 5A). In the brains of STR-group rats, the number of CRF-immunoreactive neurons in the PVN was increased by 233.47%. Analysis of the numbers of CRF-immunoreactive neurons revealed that rats receiving repeated restraint stress exhibited a significant increase in CRF expression compared to the SAL group (p<0.01; Fig. 5B).
The number of CRF-immunoreactive neurons was significantly decreased in the hypothalamic PVN of the STR+BA40 group compared to the STR group (p<0.05). This indicates that the increased CRF immunoreactivity induced by repeated restraint stress was significantly restored by BA administration, and the number of CRF-immunopositive neurons in the STR+BA40 group was similar to that in the STR+FLX group. TH-like immunoreactivity was analyzed in the cell bodies of the VTA (Fig. 5A). In the brains of STR-group rats, the number of TH-immunoreactive neurons in the VTA was decreased by 64.84%. Analysis of the numbers of TH-immunoreactive neurons revealed that rats receiving repeated restraint stress exhibited a significant decrease in TH expression compared to the SAL group (p<0.01; Fig. 5C). The number of TH-immunoreactive neurons was significantly increased in the VTA of the STR+BA40 group compared to the STR group (p<0.05). This indicates that the decreased TH immunoreactivity induced by repeated restraint stress was significantly restored by BA administration, and the number of TH-immunopositive neurons in the STR+BA40 group was similar to that in the STR+FLX group.

Effect of BA on repeated restraint stress-induced decreases of dopamine and serotonin concentrations in the hippocampus and medial prefrontal cortex

The ELISA analysis demonstrated that repeated restraint stress for 14 days significantly decreased the DA concentration in the hippocampus and medial prefrontal cortex, by 31.12% and 40.85%, respectively, compared with rats in the non-treated SAL group. The concentration of DA in the hippocampus was markedly decreased in the STR group, as compared to the SAL group (p<0.01; Fig. 6A). Daily administration of BA significantly reversed the repeated restraint stress-induced decrease of DA concentration in the hippocampus, as compared to the STR group (p<0.05). The concentration of DA in the hippocampus of rats receiving 40 mg/kg BA was almost comparable to that of rats receiving 10 mg/kg FLX. However, there was no significant difference in DA concentration among the six groups in the medial prefrontal cortex: the DA concentration in the STR+BA40 group was numerically higher than in the STR group, but not significantly so (p=0.868), and the STR group was numerically lower than the SAL group, again without significance (p=0.430). The ELISA analysis demonstrated that repeated restraint stress for 14 days significantly decreased the 5-HT concentration in the hippocampus and medial prefrontal cortex, by 15.78% and 55.04%, respectively, compared with rats in the non-treated SAL group (p<0.05; Fig. 6B). Daily administration of BA slightly reversed the repeated restraint stress-induced decrease of 5-HT concentration in the hippocampus, as compared to the STR group, although without statistical significance (p=0.169). However, there was no significant difference in 5-HT concentration among the six groups in the medial prefrontal cortex: the 5-HT concentration in the STR+BA40 group was numerically higher than in the STR group (p=0.973), and the STR group numerically lower than the SAL group (p=0.883), without significance in either case.

Effect of BA on repeated restraint stress-induced expression of BDNF mRNA in the hippocampus

The effect of BA administration on the expression level of BDNF mRNA in rats with repeated restraint stress-induced hippocampal lesions was investigated using RT-PCR analysis (Fig. 7).
The BDNF mRNA expression levels were normalized against glyceraldehyde-3-phosphate dehydrogenase (GAPDH) mRNA as an internal control. BDNF mRNA expression in the hippocampus of the STR group was significantly decreased compared with that in the SAL group (p<0.01). The decreased expression of BDNF mRNA in the STR group was significantly restored in the STR+BA40 group (p<0.05), and the restored level was similar to that of normal rats in the SAL group. This also indicates that the expression of BDNF mRNA in the hippocampus of rats receiving 40 mg/kg BA was similar to that in rats receiving 10 mg/kg FLX.

DISCUSSION

We found that the plant-derived flavonoid BA has antidepressant-like effects in a rat model of depression, and identified potential mechanisms underlying the effect. BA decreased the duration of immobility at doses of 10, 20, and 40 mg/kg in the FST following repeated restraint stress; however, the effect was significant only at 40 mg/kg. This behavioral effect was likely the result of BA-induced modulation of hypothalamic CRF activity, preventing the reduction in brain DA that underlies the development of major depressive disorder. BA reduced stress-related depression-like behaviors in the FST in a dose-dependent manner; 40 mg/kg was the most effective dose, which is consistent with the findings of a previous study [32]. Our findings are consistent with those of previous studies showing that repeated restraint stress disrupts HPA axis activity, increasing the probability of depression-like behavior [27,33]. Furthermore, the gradual decrease in body weight and increase in serum CORT levels found immediately before behavioral testing indicate that the repeated restraint stress procedure was valid [27,34]. The restraint stress model used in our study is a well-established method of inducing stress that has several advantages. Animals placed in a restraint stress chamber for a specified period of time over 14 days undergo physiological changes in body weight and serum CORT levels. BA restored body weight and serum CORT levels to near-normal levels toward the end of the 2-week treatment period. Furthermore, the disruption of HPA axis activity induced by restraint stress was the likely cause of the increased immobility duration during the FST and the reduced sucrose preference, both common symptoms of depression [35,36]. This hypothesis is supported by several studies in which elevated levels of CORT altered HPA axis activity and affected behavioral activity and sucrose preference [35,36]. By the end of the treatment period, animals administered BA prior to restraint stress had an increased preference for sucrose compared to rats in the restraint stress control group, suggesting that BA counteracts chronic stress-induced depressive symptoms or psychological disorders [37]. These results are consistent with previous studies in which repeated restraint stress produced dramatic effects during the FST, indicating that BA has a powerful effect on the systems disrupted during the FST [27]. It may take several weeks before the mood-elevating effects of antidepressants are felt in humans, and several restraint stress trials in rats; thus, we administered BA for 14 consecutive days to investigate its antidepressant-like effects in the FST [38]. The decrease in immobility duration in the STR+BA40 group was similar to that of the STR+FLX group, further supporting the antidepressant activity of BA.
The present data suggest that CRF circuits in the PVN of the hypothalamus are activated by repeated restraint stress, which causes HPA axis hyperactivity resulting in depressive-like activity in behavioral tests [39]. This animal model of stress focuses on the PVN region of the hypothalamus because the PVN sends projections to the limbic system and several points in the hypothalamus [40,41]. Our results show that BA significantly blocked the stress-induced increase in CRF immunoreactivity in the PVN, suggesting that the antidepressive effects of BA are closely associated with CRF modulation in the PVN. The enzyme TH is also involved in stress-induced activation of the central nervous system and in stress-related psychopathological conditions such as depression [42,43]. These results are consistent with previous studies indicating that depression-like behavior induced by chronic stress results from changes in the dopaminergic system [44]. Moreover, we demonstrated that administration of BA significantly increased TH-like immunoreactivity in the VTA of rats subjected to repeated restraint stress. Together, these findings indicate that BA attenuates behaviors and neurochemical responses associated with depression by modulating the HPA axis and the dopaminergic system, suggesting that administration of BA, like FLX, may indirectly alter catecholamine synthesis in the brain to produce physiological effects. Thus, our results indicate that BA acts by stimulating dopamine synthesis in the rat brain; it has also been suggested that an overactive dopaminergic system may contribute to depressive symptomatology and that the therapeutic action of antidepressants reverses this activity by decreasing TH expression in the VTA [45]. Several studies have focused on the role of monoamines, specifically DA and 5-HT overflow in the hippocampus, and have shown that changes in monoamines are strongly correlated with depressive-like behaviors in the FST [46]. Repeated exposure to restraint stress decreases the release and turnover of DA and 5-HT in areas of the brain implicated in behavioral and physiological responses to stress, such as the medial prefrontal cortex and hippocampus [47]. We suggest that the repeated restraint stress-induced impairment of FST performance is caused by a reduction in DA and 5-HT in the brain. In the present study, BA inhibited the decrease in hippocampal DA attributed to repeated exposure to restraint stress, but did not alter release in the medial prefrontal cortex; the reduced levels were restored to near-control values. However, our values differ from those reported in previous studies [25]. This disparity may be attributed to differences in protocol, immobilization schedule and the brain regions analyzed. DA-producing neurons in the medial prefrontal cortex and hippocampus, which directly innervate CRF-secreting neurons in the hypothalamus, constitute a major stimulatory pathway in the stress-induced activation of the HPA axis [48]. Moreover, the finding that the secretion of CRF, which plays a pivotal role in basal and stress-induced CORT secretion, is controlled by a variety of brain monoamines and peptides, such as DA and neuropeptide Y, is consistent with the findings of our previous studies [16]. Thus, CRF may play an important role in the neurobiological and behavioral mechanisms mediated by the dopaminergic system and HPA axis.
Recent studies have shown that chronic social defeat stress and chronic restraint stress are associated with a long-lasting downregulation of BDNF in the brain, which can be reversed by treatment with an antidepressant [49,50]. Thus, an increase in BDNF expression may have a role in the treatment of depression. These results suggest that a decrease in the expression of BDNF in the hippocampus may be related to the pathogenesis of depression-like symptoms [51,52]. In the present study, repeated restraint stress was associated with decreased expression of BDNF mRNA in the rat hippocampus and with depression-like behavior. However, administration of BA restored the level of BDNF mRNA in the hippocampus of rats subjected to repeated restraint stress, suggesting that BDNF may play a role in the antidepressant effect of BA [53,54]. Our data provide further support for the hypothesis that the antidepressant effect of BA is at least in part correlated with the CREB or ERK signaling pathways. Administration of BA normalized the stress-induced decrease in DA concentrations and reduced the hypersecretion of CORT, and thus should be considered a potential therapeutic agent for reducing stress. Furthermore, ample experimental and clinical evidence suggests that BA has no adverse effects following use for 2 weeks. Patients suffering from stress need special care and protection against the risk of iatrogenic stress, and using a safe natural product is the obvious choice in such cases [55]. Our results, showing an association between depression-like behavior, disruption of HPA axis activity, and neurochemical interactions between hypothalamic CRF and the dopaminergic pathway, suggest a novel hypothesis for the mechanisms mediating the antidepressant effect of BA. In summary, the present study demonstrates that repeated restraint stress significantly increases the duration of immobility in the FST compared to unstressed normal controls. Furthermore, the administration of BA significantly alleviates depression-like symptoms following repeated restraint stress, possibly by modulating the hypothalamic CRF and dopaminergic systems. Together, these findings indicate that BA is capable of ameliorating the complex behaviors and neurochemical responses involved in depression by modulating BDNF mRNA expression. Accordingly, BA may be a useful alternative therapeutic agent for stress-related disorders such as depression.
EVALUATION OF LECTURE AS A LARGE GROUP TEACHING METHOD IN UNDERGRADUATE MEDICAL CURRICULUM: STUDENT'S PERSPECTIVE

To evaluate lecture as a large group teaching method from the students' perspective. METHODS: The present study was undertaken in the department of Microbiology, KIMS, Amalapuram. A total of 60 second-year MBBS students were taken as study subjects. A questionnaire was designed, and students were asked to fill it in and give suggestions as part of feedback about the lectures conducted in the department of Microbiology. RESULTS: A total of 83.4% of students found the chalkboard method + PowerPoint presentation the best way of delivering a lecture. 56.6% of students opined that the ideal duration for a class should be 40-50 minutes. Long lecture duration was a major disadvantage according to 66.6% of students. 90% of students felt that some period of the lecture should be reserved for an interactive session. A majority of students also preferred a class on e-learning. 70% of students felt that tutorials or seminars are needed along with theory classes for better understanding of the subject. CONCLUSION: Lectures should be efficiently delivered by the instructor, giving a conceptual understanding of the subject instead of merely reading the content. Lectures should be supplemented with tutorials and group discussion to improve learning. Class duration should be restricted to 40-50 minutes, as the traditional long-duration class makes it difficult to hold students' attention for an entire class period. Brief interaction with students will promote active learning. E-learning should be encouraged.

INTRODUCTION: A lecture is an effective, traditional and the most dominant instructional teaching element commonly used in academic institutions or universities for teaching a large group. The concept of the lecture is derived from the French word 'lecture', meaning 'reading', and refers to an oral presentation intended to present information or teach people, given by an expert in a particular subject. Though lectures are much criticized as a one-way teaching method, educators have not yet found practical alternative teaching methods for the large majority of their courses. 1 A lecture can be an immensely effective tool in the classroom, allowing an instructor to provide a theme that organizes material in an interesting way. It is essential to see lectures as a means of helping students learn the key concepts of a particular subject, rather than primarily as a means of transferring facts from instructor to student. The objective of the lecture is not only to help students acquire knowledge but also to make students think, change their attitude, and inspire further interest in the subject. 2 The modalities of conducting a lecture have changed from the traditional chalkboard, transparency and overhead projector (TOHP) to the use of electronic media and web-based learning. It is essential to consider the views of students regarding the present lecture methodology in order to make necessary reforms in the traditional pattern of conducting a lecture, followed for many years. Reviewing the teaching program will have a better impact on students' comprehension and learning. So the present study was undertaken to evaluate lecture as a large group teaching method from the students' perspective.

MATERIAL AND METHODS: The present study was undertaken in the department of Microbiology, KIMS, Amalapuram. A total of 60 second-year MBBS students were taken as study subjects.
A questionnaire was designed, and students were asked to fill it in as part of feedback about the lectures conducted in the department of Microbiology. They were also asked to express any comments or suggestions. Student identity was not revealed.

QUESTIONNAIRE:
1. Which is the best method of delivering a lecture?
2. What should be the ideal duration of a lecture?
3. Should some portion of the lecture be reserved for interaction with students?
4. Should there be a class on e-learning?
5. What is the main disadvantage of lecture?
6. Are tutorials, seminars or discussions needed apart from theory class for understanding the subject?

OBSERVATION AND RESULTS: In the present study, a total of 83.4% of students found the chalkboard method + PowerPoint presentation (PPT) to be the best way of delivering a lecture, better than either individual method (Figure 1).

Fig. 1: Lecture delivery method

56.6% of students opined that the ideal duration for a class is 40-50 minutes, while none chose 50-60 minutes, which is actually the universal practice currently followed (Figure 2). In the same vein, long lecture duration was a major disadvantage according to 66.6% of students (Figure 3). A total of 90% of students felt that some period of the lecture should be reserved for interaction with students, and that there should be a class on e-learning. A total of 70% of students felt that tutorials or seminars are needed along with theory classes for better understanding of the subject.

DISCUSSION: Lecturing as a method of large group (more than 30) teaching is an instructor-centered method. As students are the beneficiaries of a lecture, their feedback contributes to improving the teaching strategy. Analysis of the questionnaire in the present study revealed that 83.4% of students preferred the chalkboard method + PowerPoint presentation (PPT) over either individual method. Learning with audiovisual aids has a great impact on students compared with traditional blackboard teaching, 3 but misuse or overuse of technology degrades the quality of the presentation. 4 The reason for this is the mere reading of transparencies on OHP or PPT slides loaded with material, without explaining the content. Similarly, Meo et al. reported that when contents such as figures and flow charts were discussed on PowerPoint and then elaborated on the chalkboard, the students were more active. This combined teaching with PowerPoint and chalkboard keeps the students engaged; therefore the combination of PPT and chalkboard, called "integrated teaching", is more suitable than any individual lecture delivery method. 5 David et al. reported that PowerPoint serves more as a means of directing the flow of a topic than of presenting the entire material. 6 In the opinion of 66.6% of students, the long duration of classes (50-60 min) is a major disadvantage of the lecture. Studies on attention span shed light on why students find it difficult to maintain attention in the traditional lecture format, conducted for nearly one hour, which exceeds the attention span of an average student. Research studies have demonstrated that the level of concentration varies during a lecture: it is high in the initial phase (30-40 minutes), then declines and stays flat for the rest of the lecture. 7,8 Sustained attention varies widely depending upon time of day, 9 motivation and enjoyment, 10 and emotion. 11
The implication of this finding is that the duration of the lecture should be limited, and that the instructor needs to hold students' attention by refreshing them periodically: summarizing or emphasizing an important point, using humor appropriately, or building interaction with the students. 7,8 The majority of the students in this study preferred student-teacher interaction, which can make a topic clearer and break the didactic nature of the lecture. This finding is in accordance with the guidelines put forth by MacGregor et al. that interaction with students, in the form of buzz group discussion, student-generated questions, peer questioning, or just a few questions about the topic in the form of a short quiz, should be incorporated into the lecture segment in order to promote more active learning. 12 Researchers also conclude that deep approaches to learning are enhanced by the explanatory ability and communication skills of the instructor. Communication skills include the instructor's way of interacting with students to encourage involvement and interest, focusing on a shift to student-centered learning. 13 Nowadays most teaching methods, including lectures and demonstrations, can be translated into an electronic format (e-learning). Computer technologies, including the Internet, support a wide range of learning activities, from dissemination of lectures and access to live or recorded presentations to real-time discussions and self-instruction modules. 14,15 A positive approach towards e-learning was exhibited in the present study, where 90% of students supported an e-learning module. 70% of students felt that tutorials or seminars are needed along with theory classes for better understanding of the subject. Studies have proved that tutorials or seminars provide an ideal opportunity to clarify concepts, link theory to practice, enhance active learning and improve communication skills. 16

CONCLUSION: Teaching techniques have a profound effect on learning. From the results of the present study and the opinion of the students, we conclude that lectures are necessary for introducing the basic background of a topic or subject to a comparatively large number of students. It is important that the lecture be efficiently delivered by the instructor, giving a conceptual understanding of the subject. The best way of teaching is combining the blackboard method with PowerPoint presentation. Although the lecture is an effective teaching method, learning is restricted to the cognitive domain level. So the lecture should be supplemented with tutorials and group discussion to improve the levels of learning.
Associations of social media and health content use with sexual risk behaviours among adolescents in South Africa

Abstract Increasing rates of mobile phone access present potential new opportunities and risks for adolescents' sexual and reproductive health in resource-poor settings. We investigated associations between mobile phone access/use and sexual risks in a cohort of 10-24-year-olds in South Africa. 1563 adolescents (69% living with HIV) were interviewed in three waves between 2014 and 2018. We assessed mobile phone access and use to search for health content and social media. Self-reported sexual risks included: sex after substance use, unprotected sex, multiple sexual partnerships and inequitable sexual partnerships in the past 12 months. We examined associations between mobile phone access/use and sexual risks using covariate-adjusted mixed-effects logistic regression models. Mobile phone access alone was not associated with any sexual risks. Social media use alone (vs. no mobile phone access) was associated with a significantly increased probability of unprotected sex (adjusted average marginal effects [AMEs] +4.7 percentage points [ppts], 95% CI 1.6-7.8). However, health content use (vs. no mobile phone access) was associated with significantly decreased probabilities of sex after substance use (AMEs -5.3 ppts, 95% CI -7.4 to -3.2) and unprotected sex (AMEs -7.5 ppts, 95% CI -10.6 to -4.4). Moreover, mobile phone access and health content use were associated with increased risks of multiple sexual partnerships in boys. Health content use was associated with increased risks of inequitable sexual partnerships in adolescents not living with HIV. Results suggest an urgent need for strategies to harness mobile phone use for protection from growing risks due to social media exposure.

Introduction

Sub-Saharan Africa has the largest and fastest-growing youth population, with multiple overlapping sexual and reproductive health (SRH) needs. 1 In South Africa, the youth population is projected to reach more than 11 million by 2030. 2 South African youth experience heightened sexual health risk: most new HIV (human immunodeficiency virus) infections occur during adolescence, 3,4 and one in three girls become pregnant before age 20. 5 Adolescents and young people have been early and enthusiastic adopters of digital technologies. In 2019, 71% of South African households had a mobile phone user, and 64% had access to the internet, 6 with youth aged 15-24 years comprising 71% of internet users. 7,10,11 Mobile health (m-Health) initiatives may be a key pathway to improve SRH knowledge and HIV prevention among young people in resource-limited settings. 9,12 Previous studies have shown that m-Health might help prevent adolescents from engaging in risky sexual behaviours as a result of improved SRH knowledge. 9,13 Findings from a cluster-randomised control trial among 756 females aged 14-24 years in Accra, Ghana, indicated that text messaging improved their SRH knowledge, which in turn led to decreased risks of pregnancy. 14 Most m-Health interventions were implemented taking into account the intersections with adolescents' sexual and reproductive health and rights in both policy and practice. However, m-Health interventions remain limited in scope and coverage, without robust evidence of large-scale implementation.
15 A study on how 4500 young people in Ghana, Malawi and South Africa used mobile phones revealed that the majority of them had never heard of m-Health interventions, let alone participated in them. 16 Searches via social media and websites have proven to be an innovative way to engage adolescents. 11,17 Qualitative evidence supports user-driven health content use (rather than campaign-driven health content use) as an effective means of improving health behaviours, [16][17][18] but quantitative research, especially related to sexual risk behaviours, is lacking. This study attempted to close this gap by focusing on the "informal" uses of m-Health, namely those guided by creative and strategic use of mobile phones, for safer sexual behaviours. 16 The paper does not overlook the fact that adolescents are also susceptible to exposure to risks in the online space. A recent meta-analysis showed that frequent use of social media among adolescents is associated with increased risks of drug use, risky sexual practices and violent behaviours. 19 We aimed to examine the associations of sexual risk behaviours with access to and use of mobile phones among a cohort of adolescents in South Africa, and to assess these associations by sex and HIV status.

Study design

We conducted a prospective cohort study, "Mzantsi Wakho", amongst 1563 adolescents living with and without HIV in South Africa. This study is reported in accordance with the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement for cohort studies (Appendix 1). 20

Study setting and participants

Research was conducted in 180 communities in a health sub-district in the Eastern Cape Province, South Africa. The Eastern Cape has high levels of poverty and HIV. 21 The study recruited adolescents living with HIV from 52 primary healthcare clinics in 2014-2015 and 15 additional clinics in 2016-2018, and then recruited neighbouring adolescents not living with HIV in local communities. In the first wave of Mzantsi Wakho, which commenced in 2014-2015, 1519 adolescents were successfully interviewed, including 1046 living with HIV and 473 not living with HIV (Figure 1). Wave 2 (1454 interviewed, 1030 of whom were living with HIV; 1410 complete cases) was conducted in 2015-2017 and wave 3 (1429 interviewed, 1010 of whom were living with HIV; 1353 complete cases) in 2017-2018. In waves 2 and 3, participants resided in the Eastern Cape, Free State, Gauteng, KwaZulu-Natal, North-West and Western Cape, due to high levels of migration. Ethical protocols were approved by the University of Cape Town (Cape Town, South Africa; CSSR 2013/4, approved 14 April 2013), Oxford University (Oxford, UK; CUREC2/12-21, approved 20 December 2012), the Provincial Departments of Health and Education, and all participating healthcare facilities. All adolescents and their primary caregivers provided written informed consent in all survey rounds, in Xhosa or English, and POPIA-compliant data management instruments were used across the data lifecycle to protect the personal information of participants.
Sexual risk behaviours

All sexual risk behaviours 22,24-26 were self-reported for the past 12 months: (1) sex after substance use, defined as sexual intercourse when the participant was drunk or used drugs (available in waves 2 and 3); (2) unprotected sex, defined as no or infrequent condom use with partners (available in all waves); (3) multiple sexual partnerships, defined as two or more sexual partners (available in all waves); and (4) inequitable sexual partnerships, defined as sex in exchange for material support of any kind or a sexual partner at least 5 years older than the participant (available in all waves).

Mobile phone access and use

Self-reported data on current access to and use of mobile phones were available in waves 2 and 3. Adolescents were asked if they had a mobile phone (including a smartphone, Apple iPhone, Blackberry, basic phone and SIM card) and whether it was their own or shared with someone. In this study, mobile phone access refers to self-reported ownership of a mobile phone with a functional SIM card.

Figure 1. Mzantsi Wakho cohort study flow chart. Abbreviations: T1, wave 1; T2, wave 2; T3, wave 3

Adolescents who owned a mobile phone reported what they used it for and the frequency of use. Mobile phone use options included SMS, WhatsApp, Facebook, Mixit, health information, information about sexual health and HIV-related information. Frequency of use was reported in the questionnaire as "1. Never", "2. Once a month", "3. Once a week", "4. Once a day" and "5. Two or more times a day". Hereafter, we refer to daily or multiple-times-daily use as (frequent) use of a mobile phone (recoded as binary). The mobile phone use measure was then classified as "0. no access", "1. social media alone" (referring to frequent mobile phone use for SMS, Facebook, Mixit or WhatsApp only) and "2. health content" (referring to frequent mobile phone use for both social media and sexual health or HIV-related information). Access to and use of mobile phones were measured in waves 2 and 3.

Covariates

We included eight covariates in our models based on associations identified in the existing literature: 27 rural residence; informal housing; household poverty, measured as access to the eight highest socially perceived necessities in the nationally representative South African Social Attitudes Survey (enough food, money for school fees, to see a doctor when needed, school uniform, basic clothing, soap, school books and shoes); 28 being married or in a consensual relationship; age; sex; living with HIV; school enrolment and survey wave. All covariates were categorical variables, except for the participant's age, which was continuous and mean-centred before analysis to facilitate interpretation. Covariates changed for very few individuals in the data. Therefore, all covariates, except for the survey wave, were measured at baseline (between March 2014 and September 2015).
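The three-level exposure coding described under "Mobile phone access and use" can be sketched as follows; the dictionary keys, frequency threshold and the fallback for owners without frequent use are our assumptions, not the study's actual codebook:

```python
def code_mobile_use(owns_phone: bool, freq: dict) -> int:
    """Classify mobile phone use as 0 = no access, 1 = social media alone,
    2 = health content. `freq` maps activity -> frequency code
    (1 = never ... 5 = two or more times a day); 'frequent' means
    daily or more often (code >= 4)."""
    if not owns_phone:
        return 0
    social = ("sms", "whatsapp", "facebook", "mixit")
    health = ("health_info", "sexual_health_info", "hiv_info")

    def frequent(keys):
        return any(freq.get(k, 1) >= 4 for k in keys)

    if frequent(social) and frequent(health):
        return 2
    if frequent(social):
        return 1
    # Owners with no frequent use fall back to 0 here (our simplification)
    return 0

# Example: daily WhatsApp plus daily HIV information -> health content (2)
print(code_mobile_use(True, {"whatsapp": 4, "hiv_info": 5}))
```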
The proportion of missing data on sex after substance use was 13.5% in wave 2 and 6.5% in wave 3; the proportion of missing data on inequitable sexual partnerships ranged from 3.8% in wave 2 to 6.5% in wave 3. There were no missing data on mobile phone access and use (neither in wave 2 nor in wave 3). The amount of missing data for covariates at baseline was 0.1% for household poverty and informal housing. There were no missing data on sex, age, rural residence, relationship status, school enrolment and HIV status.34,35 Imputation models included all aforementioned covariates and auxiliary variables that were predictive of sexual risk behaviours.

Statistical methods

We first described sociodemographic characteristics, mobile phone access and use, and individual sexual risk behaviours in the full sample, then by adolescent sex and HIV status. The correlations between individual outcomes were weak or moderate. We reported Spearman's bivariate correlation coefficients between outcome variables in Appendix 4. We reported the descriptive statistics for the original sample (i.e. data with missing values). Second, we used mixed-effects logistic regression models on imputed data to account for repeated measures.36 A summary of the models is reported in Appendix 5. We estimated the overall association of individual sexual risk behaviours with mobile phone access (Model 1) and mobile phone use for social media alone and health content (Model 2), adjusting for baseline covariates. We reported average marginal effects from the models to estimate associations of individual sexual risk behaviours with mobile access and use. We also estimated the separate associations of individual sexual risk behaviours with mobile phone access (Appendix 5, Model 3) and mobile phone use (Appendix 5, Model 4) for two sub-groups: boys and girls, and adolescents living with and without HIV. To prevent underpowered (stratified) analyses, we estimated average marginal effects (AMEs) in all four combinations (of boys or girls, living with HIV or not) using three-way interaction terms (between participant's sex, HIV status and mobile phone access/use). Third, we conducted sensitivity analyses, comparing the results of an analysis that included only complete cases (from data with no imputations) to results from imputed data. All analyses were done using Stata 17.37 Statistical significance was defined a priori as p < 0.05.
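To make the modelling strategy concrete, the sketch below reproduces the general shape of Model 2 in Python with the statsmodels library. It is a simplified illustration, not the study's code: the published analysis was run in Stata 17 and used mixed-effects logistic regression (a random intercept per participant) fitted to ten imputed datasets, whereas this sketch fits a single-level logistic model to one complete dataset. The file name and all column names are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant per wave, with a
# binary outcome and a 3-level phone-use variable (0 = no access,
# 1 = social media alone, 2 = health content), as defined above.
df = pd.read_csv("mzantsi_wakho_long.csv")  # placeholder file name

# Single-level approximation of Model 2, adjusting for the eight baseline
# covariates and survey wave described in the Covariates section.
model = smf.logit(
    "unprotected_sex ~ C(phone_use) + age_centred + C(sex) + C(hiv_status)"
    " + C(rural) + C(informal_housing) + C(poverty) + C(in_relationship)"
    " + C(school_enrolled) + C(wave)",
    data=df,
).fit()

# Average marginal effects, analogous to the AMEs reported for Model 2.
print(model.get_margeff(at="overall").summary())

In the full analysis, this model would be refitted on each of the ten imputed datasets and the AMEs pooled using Rubin's rules, with random intercepts accounting for repeated measures within participants.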
Results

Table 1 shows baseline socio-demographic characteristics of participants (n = 1410), access to and use of mobile phones (in waves 2 and 3), and sexual risk behaviours (in waves 2 and 3). 69% of participants recruited at baseline were living with HIV. The baseline mean age of respondents was 13.8 years. 57% were female, 27% resided in a rural area, 18% were living in informal houses and 67% were in poor households. Nearly one-third of the sample was in a relationship and 94% were enrolled in school. In wave 2, about half of participants (50%) had access to a mobile phone (including 43% for boys, 56% for girls, 57% for adolescents not living with HIV and 48% for adolescents living with HIV), 44% used mobile phones for social media alone (including 24% for boys, 41% for girls, 49% for adolescents not living with HIV and 36% for adolescents living with HIV) and only 17% for health content (including 20% for boys, 15% for girls, 8% for adolescents not living with HIV and 21% for adolescents living with HIV). In wave 3, there was a slight increase in the proportion with mobile phone access (53%), including 47% for boys, 65% for girls, 63% for adolescents not living with HIV and 55% for adolescents living with HIV; 34% used mobile phones for social media alone (including 25% for boys, 43% for girls, 55% for adolescents not living with HIV and 45% for adolescents living with HIV) and 23% used mobile phones for health content (including 22% for boys, 24% for girls, 7% for adolescents not living with HIV and 30% for adolescents living with HIV).

Sexual risk behaviours were similar across waves (Figure 2), except for unprotected sex, which increased from 15% in wave 2 to 22% in wave 3 (Figure 2B). For boys, there was no change in the prevalence of any of the sexual risk behaviours between waves 2 and 3. For girls, the prevalence of unprotected sex increased from 19% in wave 2 to 29% in wave 3 (Figure 2B). The proportion of self-reported unprotected sex and sex after substance use was significantly higher among adolescents not living with HIV than among adolescents living with HIV in waves 2 and 3 (Figures 3A and 3B). There was a significant increase in the proportion of self-reported unprotected sex between waves 2 and 3 among adolescents not living with HIV (from 21% in wave 2 to 32% in wave 3) and adolescents living with HIV (from 12% in wave 2 to 17% in wave 3) (Figure 3B). In wave 3, the proportion of self-reported multiple sexual partnerships was higher among adolescents not living with HIV (20%) than among adolescents living with HIV (14%) (Figure 3C).

We fit two multivariable models to determine the association of individual sexual risk behaviours with mobile phone access (Table 2, Model 1) and mobile phone use (Table 2, Model 2). We found no evidence of an association between mobile phone access and sexual risk behaviours after adjusting for covariates. Social media use alone (as compared to no mobile phone access) was associated with a significantly increased probability of unprotected sex (AMEs +4.7 percentage points [ppts], 95% CI 1.6 to 7.8, p = 0.003). Health content use (as compared to no mobile phone access) was associated with a significantly decreased probability of sex after substance use (AMEs -5.3 ppts, 95% CI -7.4 to -3.2, p < 0.001) and a decreased probability of unprotected sex (AMEs -7.5 ppts, 95% CI -10.6 to -4.4, p < 0.001).
Then we estimated separate effects of mobile phone access (Table 3, Model 3) and mobile phone use (Table 3, Model 4) for boys, girls and adolescents living with and without HIV. Mobile phone access was associated with a significantly increased probability of self-reported multiple sexual partnerships among boys only (AMEs 5.5 ppts, 95% CI 1.1-9.6, p = 0.013). In all four categories, adolescents who used mobile phones for health content had significantly lower probabilities of reporting unprotected sex (for boys: AMEs -6.8 ppts, 95% CI -11.4 to -2.2, p = 0.004; for girls: AMEs -7.9 ppts, 95% CI -12.5 to -3.2, p = 0.001; for adolescents not living with HIV: AMEs -10.0 ppts, 95% CI -17.0 to -3.1, p = 0.005; for adolescents living with HIV: AMEs -6.0 ppts, 95% CI -9.4 to -2.6, p = 0.001) than adolescents with no access to a mobile phone. In all four categories except adolescents not living with HIV, health content use (vs. no mobile phone access) was associated with significantly decreased probabilities of reporting sex after substance use (for boys: AMEs -6.8 ppts, 95% CI -11.5 to -2.2, p = 0.004; for girls: AMEs -3.4 ppts, 95% CI -6.2 to -0.6, p = 0.018; for adolescents living with HIV:

We conducted a sensitivity analysis with complete cases and compared it to the imputed dataset. The results from the sensitivity analysis and the main analysis were similar, confirming the overall results (Appendix 6a and 6b).

Discussion

This study provides valuable insight into the rates of mobile phone use alongside sexual risk behaviours of young people who are taking the initiative to "use m-Health" informally, in a high-poverty context in South Africa. We found that 57% of adolescents in our study had access to a mobile phone in 2018; this is consistent with the national average of 55% mobile phone access in 2019.7 Social media use alone was almost ubiquitous, but less than a quarter of adolescent mobile phone owners had accessed health content in the past 12 months.

Our findings showed no association between mobile phone access and self-reported sexual risk behaviours. These findings support and advance the existing literature. Two recent reviews found no associations between access to a mobile phone and adolescent sexual risk, suggesting that access alone to mobile phones is insufficient to improve adolescents' health outcomes.11,38 Our study supports these conclusions, and additionally examines the relationship amongst a large group of adolescents living with HIV, finding that mobile phone access in itself neither reduces nor increases sexual risks for this group.

Our study found that exclusive use of social media (without concurrent use of health content) is associated with increased unprotected sex.40-42 In this South African sample, the increase in unprotected sex amongst adolescents living with HIV and using mobile phones for social media alone suggests that this group may be particularly vulnerable to the damaging effects of social media (alone and without access to any health-related information).

Two meta-analyses and a recent systematic review of mobile health interventions report overall benefits for adolescent health behaviours, including sexual health.13,43,44

Figure 3. Sexual risk behaviours by participant's HIV status across waves. Abbreviations: CI, confidence interval; HIV, human immunodeficiency virus; HIV-, adolescents not living with HIV; HIV+, adolescents living with HIV.
This study adds to the evidence base by demonstrating beneficial associations of real-world use of health content, in almost all cases alongside social media, amongst adolescents living with and without HIV in South Africa. However, across both waves and in all subgroups, less than a quarter of adolescents had accessed any health content at all on their mobile phones in the past year. This suggests that while health content may be valuable, its uptake amongst this very high-risk group remains low, with a consequent need for large-scale interventions that increase access to and use of age-appropriate, up-to-date sexual and reproductive health interventions, including access to health content via social media.

Table 3 note: Data are average marginal effects (and 95% CI) from models with interaction between participant's sex, HIV status and mobile phone access (Model 3) or mobile phone use (Model 4), adjusting for rural residence, informal housing, household poverty and marital status. Data were obtained from mixed-effects logistic regression models on ten imputed datasets (n = 1353, including 587 boys, 766 girls, 933 living with HIV and 420 not living with HIV).

We note several limitations. First, the use of retrospective self-report measures of sexual risk behaviours may increase bias related to social desirability and recall. However, self-report is currently the only feasible way to measure most adolescent sexual risk behaviours. To mitigate measurement errors, the study used measures widely validated in previous adolescent sexual health research in South Africa. Second, despite the longitudinal design, causality between access to and use of mobile phones and sexual risk behaviours cannot be confirmed. Third, although we found no systematic differences between participants who completed both survey rounds and those who dropped out, we cannot fully rule out possible biases from unmeasured sources of confounding and attrition. Fourth, there is potential bias due to data not missing completely at random; however, we used multiple imputation to account for biases in missing data, and we repeated the analysis in a sensitivity analysis using complete cases and again found similar results. Fifth, we fit multivariable mixed-effects logistic regression for each outcome separately, therefore assuming that the covariances among random effects across all sexual risk behaviour outcomes and the covariances among the residuals equal zero. Nevertheless, our findings from sensitivity analyses do not deviate from estimates from the multivariable models.
The study also has a number of strengths. It adds to evidence from formal m-Health interventions by examining a real-life sample of adolescents who use (and do not use) mobile phones informally to improve their sexual and reproductive health and rights (SRHR) knowledge and how they engage in safer sexual practices. In the absence of large-scale formal m-Health programmes in resource-limited settings, data from informal m-Health can inform the development of adolescents' SRHR and HIV prevention programmes. However, the measure of informal m-Health used in this study is insufficiently detailed to assess the complexity of its associations with adolescents' sexual risk behaviours. Future surveys should attempt to collect data on mobile phone use in relation to sexual risk behaviours. We collected longitudinal data from adolescents living with and without HIV and analysed a sample of adolescent girls and boys, living with HIV or not, adding to our understanding of informal use of social media and health content amongst, and comparing between, these important groups.

This study suggests that mobile phones can be a medium of both risk and resilience for adolescents in Southern Africa. Social media use only, without concurrent health content use, was associated with increased sexual risk. Health content use was protective, even in the context of concurrent social media use, but underutilised. This highlights important next steps for programming: to identify approaches that increase informal m-Health use amongst adolescents and to deliver and assess these in real-world settings. For example, UNICEF and partners have recently proposed a set of toolkits to address sexual and reproductive health and HIV prevention needs.45 As mobile and internet access increases exponentially in Africa over the next decade, it is essential that we minimise associated risks and capitalise on the potential of mobile phones to improve adolescent sexual and reproductive health.

Data sharing and data availability statement

Prospective users, policymakers/government agencies/researchers (internal/external) will be required to contact the study team to discuss and plan the use of data. Research data will be available on request, subject to participant consent and completion of all necessary documentation. All data requests should be sent to Elona Toska (elona.toska@uct.ac.za) or William Rudgard (william.rudgard@spi.ox.uk).

Résumé (excerpt, translated from French): Moreover, mobile phone access and use of health-related content were associated with increased risks of multiple sexual partnerships among boys. Use of health-related content was associated with increased risks of inequitable sexual partnerships among adolescents not living with HIV. The findings suggest an urgent need for strategies to harness mobile phone use to protect against the growing risks from social media exposure.

Figure 2. Sexual risk behaviours by participant's sex across waves. Abbreviations: CI, confidence interval.

Table 2. Multivariable association between mobile phone access and use and self-reported sexual risk behaviours. Results from mixed-effects logistic regression models on ten imputed datasets (n = 1353).
Green Infrastructure Incentives to Mitigate Flooding in Madison, WI

Sarah Alexander1, Laura Borth2, Jennifer Bratburd3, Marie Fiori4

1 University of Wisconsin-Madison, Department of Civil and Environmental Engineering, Madison, WI
2 University of Wisconsin-Madison, Department of Nutritional Sciences, Madison, WI
3 University of Wisconsin-Madison, Department of Bacteriology, Madison, WI
4 University of Wisconsin-Madison, Department of Chemistry, Madison, WI

http://doi.org/10.38126/JSPG170201
Corresponding author: salexander6@wisc.edu

Implementation of green infrastructure has proven to be economically and environmentally advantageous in similar locations (i.e. cities located directly along water bodies with heightened flood risk). Milwaukee recently amended its regional green infrastructure plan to store 36 million gallons of stormwater and add 143 acres of green space by 2030 (City of Milwaukee Green Infrastructure Plan 2019). In developing the plan, the Milwaukee Metropolitan Sewerage District (MMSD) found that its improvements would recharge up to 4 billion gallons of groundwater per year and save 16,500 megawatt-hours per year (equating to a cost savings of $1.5 to $2.1 million). Further, the plan will reduce emissions, leading to improved health worth $9.1 million in annual health care savings (MMSD 2013).

III. Current policy and legal practice in Madison

Both Madison's urban environment and the artificially raised water level of Lake Mendota worsened the 100-year flood event in Madison (Wright 2018). In fact, the August 2018 flood resulted from a rainstorm predicted to occur every 20-50 years. Lake Mendota is currently five feet higher than its natural level, which greatly increases the risk that additional rains will overwhelm Tenney Lock's ability to regulate levels (Fenneman 1910, 42). Flash floods due to rising lake levels are most likely to occur on the East and South sides of the city (City of Madison Engineering 2019). Many of Madison's Black and Latino residents live in these neighborhoods, so it is essential that the City take steps to prevent future floods in order to be consistent with Madison's Racial Equity & Social Justice Initiative (Blank 2016; City of Madison n.d.).

Conservationists believe that the water level in Lake Mendota should be lowered to mitigate flooding. Most proposals suggest lowering the lake level by 1-2 feet, which would expose large amounts of wetlands (Arthur 2019; Mesch 2019). Though these wetlands would greatly improve Madison's flood resilience, homeowners on the lake oppose this change. Lowering the lakes would change the shoreline, making it necessary to extend boat piers and repair boats damaged in shallower water. Recently, some local experts have argued that controlling lake levels alone is not enough to prevent flooding (Cieslewicz 2018; Verberg 2018). The contentious discussions and political challenges surrounding the efficacy of lowering lake levels suggest alternative efforts are needed to mitigate future floods in Madison, as extreme rainfall and flooding are projected to become more frequent (U.S. Global Change Research Program 2014).

Wisconsin law prohibits municipalities from implementing stronger stormwater regulations for developers than existing state ordinances (Wisconsin State Legislature 2018). Therefore, Madison should develop incentive programs to improve the flood resistance of existing properties. Madison is the fastest growing city in Wisconsin, so reducing development in established neighborhoods is not an option (Hubbuch 2019).
Green infrastructure provides the opportunity to install flood mitigation techniques around existing buildings as an integral part of the city, allowing Madison to adapt to the changing climate without imposing stringent and possibly economically detrimental green regulations on new development. A recent green infrastructure pilot study (605 homes, with the city reimbursing 80-100% of cost) aims to quantify the benefit of stormwater reduction efforts on the watershed, yet further action is needed.

i. Option 1: Execute a grant program to install rain gardens, native plants, and rain barrels

The City of Madison should offer grants to homeowners to cover the installation costs of green infrastructure elements at their properties, reducing the purchase cost for homeowners. Various federal grant programs are available to local governments to incentivize improving water quality and to support programs in line with community priorities, including the Environmental Protection Agency (EPA) Nonpoint Source Pollution 319 grant program or the EPA Urban Waters small grants (Environmental Protection Agency 2020a,b). As the EPA outlines, long-term maintenance is essential for realizing the wealth of benefits from green infrastructure (e.g. environmental, social, economic) (Environmental Protection Agency 2018). The EPA also highlights establishing written contracts or procedures as one crucial way to assure continued maintenance. Thus, to ensure that Madison fully benefits from green infrastructure projects funded through the program, the City should require recipients to sign a contract promising 20 years of maintenance. The contract should clearly outline the responsibilities (i.e. routine landscape maintenance, removing trash/debris, and cleaning out accumulated sediment) as well as the consequences for non-compliance (e.g. required repayment of the funds to the grant program). The contract should also allow for transfer of maintenance responsibilities if home ownership changes.

Advantages

In neighboring Milwaukee, providing residents with the resources to purchase necessary supplies for green infrastructure implementation may encourage participation (City of Milwaukee Green Infrastructure Plan 2019). Grant programs are a less permanent change to the City's financial structure than a tax incentive, giving the City of Madison the flexibility to alter these programs over time as its needs evolve (i.e. adjust cost payout structure, timing, types of projects included, etc.). These programs have been widely implemented by other cities of similar size and location to Madison, giving the City a blueprint to jumpstart implementation.

Disadvantages

The City of Madison would need to secure and/or allocate monetary resources at the time of program implementation, which may or may not be feasible. While program flexibility is a clear advantage, a grant program may be easier to disband, potentially lessening the longevity of cumulative benefit.

ii. Option 2: Revising the City of Madison's stormwater fees

Stormwater taxes are considered a fair and equitable method for directing the cost of green infrastructure towards those who generate the most runoff and benefit the most from improvements (Environmental Protection Agency 2008). Residential and commercial properties currently pay a monthly utility tax, either at a flat rate or based on water usage. However, taxes could instead be modified by the percent of impervious surface, shifting the cost onto those who generate the most runoff, as illustrated in the sketch below.
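As a purely hypothetical illustration of this mechanism, the sketch below computes a monthly fee proportional to impervious area, reduced by a credit for documented green infrastructure. The rate, area, and credit fraction are invented for illustration and do not reflect Madison's actual rate structure.

def monthly_stormwater_fee(impervious_sqft: float,
                           rate_per_1000_sqft: float = 3.00,
                           gi_credit_fraction: float = 0.0) -> float:
    """Fee scaled by impervious surface, with an optional credit for
    documented green infrastructure (e.g., a rain garden or rain barrels).
    All rates and values are illustrative placeholders."""
    base = impervious_sqft / 1000 * rate_per_1000_sqft
    return base * (1 - gi_credit_fraction)

# Invented example: 2,500 sq ft of impervious surface.
print(monthly_stormwater_fee(2500))                           # 7.50, no credit
print(monthly_stormwater_fee(2500, gi_credit_fraction=0.20))  # 6.00, 20% credit

Under such a structure, owners who reduce impervious area or install green infrastructure pay less, aligning the fee with the runoff generated.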
The increased monthly fee (and subsequent credit) must be high enough to encourage owners to reduce impervious surfaces. Since stormwater fees already exist in Madison, adding green infrastructure guidelines to the current Rate Adjustment Policy for the Storm Water Utility requires action only from the Madison Common Council and the City Engineer. The City Engineer can annually adjust the stormwater charges in accordance with Wis. Stats. § 66.0821(4) and Madison General Ordinance Ch. 20 and 37. Minneapolis (stormwater credit program) and New York (green roof tax abatement program) have successfully implemented such changes.

Advantages

The cost of implementation is low; the main expense to the City is the stormwater tax revenue forgone as credits once owners make green infrastructure improvements. Further, residents who perform improvements will save money. Implementation could happen quickly, as fees are under the authority of Madison; no public input or voting is required.

Disadvantages

This structure places the burden of cost and motivation on the resident, who must front the initial capital, perform improvements, and successfully apply for the stormwater credit. This process disproportionately excludes Madison's low-income residents from participating. Unnecessarily complicated credit applications could discourage applicants, possibly requiring a streamlined application or free assistance to encourage use.

iii. Option 3: Implement a volunteer program to support green infrastructure development

Starting a volunteer program to directly engage residents to learn about green infrastructure and to mitigate climate change in their local area will promote community stewardship. Volunteers will participate in a program to acquire background knowledge and hands-on training to earn certification. Afterwards, they will complete yearly community service hours to advocate for and assist in the implementation of community clean water projects.

Advantages

Leveraging social networks and personal connections is a proven strategy for altering social norms within a neighborhood or community and sparks change in environmental behaviors (McKenzie-Mohr 2011). Volunteer programs have a proven record of facilitating implementation of green infrastructure and developing a network of community sustainability leaders (e.g. the Master Water Stewards program in Minnesota; Minnehaha Creek Watershed District 2019). Thus, creating a network of informed volunteers to serve as community advocates and to assist residents in implementing green infrastructure projects is a low-cost option.

Disadvantages

No monetary support is provided to volunteers. Lack of compensation may alter the spatial distribution of projects, especially in lower-income areas. Although this option reduces direct cost to the City, volunteer programs would likely require more coordination to implement compared to monetary incentive programs.

V. Policy recommendation

We recommend that the City of Madison prioritize development of a grant program to incentivize the implementation of green infrastructure projects on residential properties. Up-front monetary allocations have the clear advantage of assisting residents with capital costs at the time of incurrence (proven to be more impactful and sustainable).
Given the vulnerability of the isthmus and of lower-income areas in south and east Madison to climate change, and to flooding in particular, we suggest that the City structure this green infrastructure program to provide more incentives to vulnerable areas (i.e. higher grants and rebates to lower-income multi-unit developments and residential properties in flood-prone neighborhoods). To address possible disadvantages of funding and program longevity, we suggest that the City initially fund the grant and rebate program by applying for federal funds that support water quality and climate resilience. Long-term, the program could be financed by a slight increase to the water rate structure, effectively distributing the cost of mitigating climate change impacts among residents. Voluntary participation should be available for residents who do not require monetary funding to implement green infrastructure projects yet wish to take part.

As climate change results in increased heavy precipitation events in the Madison area, the City must plan and prepare to mitigate negative impacts. We recommend that the City also consider revising stormwater fees and partnering with local non-profits, such as the Clean Lakes Alliance, to implement a volunteer program that will engage residents. Given the vulnerability of the isthmus and the high percentage of residential properties in the city, actionable steps to incentivize residential green infrastructure projects are critical for climate adaptation.
Centers for Disease Control and Prevention Public Health Response to Humanitarian Emergencies, 2007-2016

Humanitarian emergencies, including complex emergencies associated with fragile states or areas of conflict, affect millions of persons worldwide. Such emergencies threaten global health security and have complicated but predictable effects on public health. The Centers for Disease Control and Prevention (CDC) Emergency Response and Recovery Branch (ERRB) (Division of Global Health Protection, Center for Global Health) contributes to public health emergency responses by providing epidemiologic support for humanitarian health interventions. To capture the extent of this emergency response work for the past decade, we conducted a retrospective review of ERRB's responses during 2007-2016. Responses were conducted across the world and in collaboration with national and international partners. Lessons from this work include the need to develop epidemiologic tools for use in resource-limited contexts, build local capacity for response and health systems recovery, and adapt responses to changing public health threats in fragile states. Through ERRB's multisector expertise and ability to respond quickly, CDC guides humanitarian response to protect emergency-affected populations.

The number of persons affected by humanitarian emergencies worldwide is unprecedented; in 2016, the United Nations Office for the Coordination of Humanitarian Affairs estimated that 125 million persons needed humanitarian assistance (1). More than half of these, 65.3 million persons, have been forcibly displaced as a result of armed conflict, civil strife, or human rights violations. The number displaced has increased by 75% during the past 20 years and by 50% in just the past 5 years (2). Among these are 21.3 million refugees and 40.8 million internally displaced persons (IDPs) (2). Displaced persons might settle in temporary shelters or camps in resource-limited or politically unstable areas, straining local capacity to provide services. The effects of humanitarian emergencies can be exacerbated by political instability and weak governance associated with fragile states or areas of conflict (3), and this instability directly undermines global health security. In such unstable settings, the humanitarian community calls these crises complex emergencies (CEs) (Table) (4). Although the underlying causes of humanitarian emergencies and CEs specifically are highly varied, the population displacement and health systems destabilization associated with emergencies have predictable public health consequences. A hallmark of CEs is increased mortality rates, sometimes >10-fold above baseline rates (3,6,7). Historically, the causes of the high morbidity and mortality rates have been infectious disease outbreaks; exacerbation of endemic infectious diseases; and acute malnutrition, often in high-density settlements with inadequate water, sanitation, shelter, and access to food (3,7-10). Increased availability of interventions for these conditions, coupled with a rise in conflicts in higher-income countries, has led to an increasing burden from chronic conditions such as tuberculosis, cardiovascular disease, and diabetes (3,8,9). Conflict-affected populations also have an elevated risk for injury from violence, including sexual and gender-based violence, and mental health conditions are common (3,9).
Most displaced persons live in host communities, rather than in separate camps, contributing to poor or uncoordinated access to healthcare services (9). This inconsistent access continues to be problematic in protracted emergencies, during which public health services might be strained for years. Responding to the wide-ranging public health effects of CEs requires expertise in diverse sectors, such as vaccine-preventable and other infectious diseases; water, sanitation, and hygiene (WASH); nutrition; noncommunicable diseases; injury; sexual and reproductive health; and mental health. Equally varied are the epidemiologic approaches needed to effectively respond to CEs, including development of novel epidemiologic methods, rapid needs assessments, surveillance implementation and evaluation, outbreak investigations, and capacity building, often in resource-restricted and insecure environments.

The Centers for Disease Control and Prevention (CDC) has long been a leader in developing and understanding the epidemiology and public health effects of humanitarian emergencies and CEs specifically. CDC's work in CEs began during the 1968 war-induced famine in Biafra in West Africa, during which staff documented the extent of severe malnutrition (11). CDC's assessment of the health effects of the 1970 Bangladesh cyclone established epidemiologic approaches in humanitarian emergencies triggered by natural disasters (12). CDC published a compendium of disease control and public health surveillance programs used among Khmer refugees from Kampuchea (Cambodia) in Thailand during 1979-1980 (13), followed by a synthesis of accumulated knowledge about public health issues in CEs (14). In 1990, Toole and Waldman, among the first CDC staff dedicated to studying the epidemiology of CEs, published a paper on mortality rates among displaced populations, which established the use of a crude mortality rate (CMR) threshold to quantitatively define CEs (15). In 1994, CDC staff, as part of the Goma Epidemiology Group, conducted rapid cluster sample population surveys to document the unprecedented mortality rate among Rwandan refugees in Goma, Zaire (now Democratic Republic of the Congo) (6,8). After a systematic review of nutritional surveys in Somalia during the 1991-1992 famine, CDC staff provided recommendations for standardizing nutritional assessments in CEs (16). Toole et al. published on measles control in refugee settings in 1989 with special attention to how measles prevention policies during CEs differ from measles control in standard settings (17,18). In the 1990s and 2000s, CDC staff emphasized the burden of chronic diseases in CEs (19) and documented adverse mental health outcomes and social functioning among refugee and CE-affected populations (20-23) and, later, among national and international aid workers (24-26). In all activities, CDC worked to address the unique characteristics of humanitarian emergencies through development of epidemiologic methods, strengthening local capacity, improvement of surveillance, and evaluation of interventions. CDC continues its work on humanitarian emergencies through its humanitarian emergency response branch, the Emergency Response and Recovery Branch (ERRB), in the Division of Global Health Protection, Center for Global Health.
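To make the threshold concrete, the short sketch below computes a crude mortality rate in deaths per 10,000 persons per day, the quantity used to define CEs above and applied as an inclusion criterion in the response review that follows; the example numbers are invented.

def crude_mortality_rate(deaths: int, population: int, days: int) -> float:
    """Crude mortality rate in deaths per 10,000 persons per day."""
    return deaths / (population * days) * 10_000

# Invented example: 500 deaths in a population of 150,000 over 30 days.
cmr = crude_mortality_rate(deaths=500, population=150_000, days=30)
print(f"CMR = {cmr:.2f} deaths/10,000 persons/day")  # CMR = 1.11
print("Meets CE threshold" if cmr > 1 else "Below CE threshold")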
ERRB includes the Global Disease Detection Operations Center, the Global Rapid Response Team, the Global Response Preparedness Team, the Global WASH Team, and the Humanitarian Health Team, thus unifying CDC's humanitarian emergency preparedness, alert, and response activities into a single program. Staff members work with the international humanitarian community to apply public health and epidemiologic science, develop tools and methods to understand health needs, and build the capacity and resilience of public health systems within these fragile settings. This article focuses on ERRB's work in humanitarian emergency response over the past decade; its collaboration with response partners; the broad lessons that can be drawn from its work; and how it and other humanitarian health responders are adapting to address new threats to global health security, new needs of populations affected by CEs, and humanitarian emergencies at large.

Table. Definitions of terms

Disaster: A serious disruption of the functioning of a community or a society involving widespread human, material, economic, or environmental losses and impacts that exceeds the ability of the affected community or society to cope using its own resources.
Natural disaster: A disaster brought about by natural hazards.
Human-made disaster: A disaster brought about by human activities or events (4).
Humanitarian emergency: A disaster resulting in the need for international support (humanitarian assistance) to meet the basic needs of the affected population (4).
Complex emergency: A humanitarian emergency associated with fragile states or areas of conflict, in which a total or considerable breakdown of authority has occurred (4).
Global health security: A state of collective protection of health through ensuring all countries can effectively prevent, detect, and respond to public health emergencies (5).

Response Descriptions

We retrospectively examined ERRB responses during 2007-2016. To compile the responses that occurred during 2007-2014, we abstracted data from past branch activities databases and publications. To ensure the completeness of this dataset, we compared it with a previously compiled comprehensive database of all emergencies worldwide, including CEs and natural disasters, for the same period (A. Culver, Emory University, pers. comm., 2015 Aug 4). Sources of CEs included in this database were the Center for Research on the Epidemiology of Disasters Complex Emergency Database (http://cedat.be) and the United Nations Office for the Coordination of Humanitarian Affairs Central Emergency Response Fund archives of funded responses (http://www.unocha.org/cerf/cerf-worldwide/allocations-country/2006-2017-country). Included events were those that affected >10,000 persons and had a documented CMR of >1 death/10,000 persons/day. Sources for natural disasters were the Center for Research on the Epidemiology of Disasters international disaster database (http://www.emdat.be) and Central Emergency Response Fund archives; inclusion required that the event affected >10,000 persons. To compile emergency responses for 2015 and 2016, we abstracted data from branch administrative and travel records. Responses were selected to reflect >1 response/year and to include those in which branch staff had a major role. Of 14 selected emergencies over the past 10 years, nearly two-thirds were CEs; the rest were natural and human-made disasters (online Technical Appendix Table, https://wwwnc.cdc.gov/EID/article/23/13/17-0473-Techapp1.pdf). Responses were conducted in Africa, Asia, Latin America and the Caribbean, Europe, and the Middle East; activities included providing technical assistance or directly conducting assessments and investigations, implementing and evaluating surveillance systems, developing guidelines, providing trainings, and coordinating interventions. All responses featured extensive collaboration with a variety of partners, including the US government, UN agencies, governmental health entities, and both national and international nongovernmental organizations (NGOs). To illustrate the breadth and impact of CDC's and its partners' work in humanitarian emergency responses, we highlight activities performed in 3 cases (online Technical Appendix Table).

Case Studies

Haiti Earthquake Response, 2010

On January 12, 2010, a 7.0 magnitude earthquake struck central Haiti, killing >200,000 persons and injuring another 300,000. The quake also created 1 million IDPs and massively disrupted public health and other basic services within an already fragile state. ERRB staff worked with the Pan American Health Organization and the Haiti Ministry of Public Health and Population to establish sentinel site surveillance for epidemic-prone infectious diseases at 51 health facilities across the country and in IDP camp clinics; these systems were instrumental in detecting the cholera outbreak that began in October 2010 (27,28). Recognizing that population displacement could exacerbate Haiti's already poor access to improved water sources and sanitation facilities, ERRB staff and the Haiti National Directorate for Potable Water and Sanitation performed a rapid assessment of access to WASH services in 308 IDP settlements in February 2010 and found that <10% of sites met the minimum Sphere Project standards for emergency sanitation (<50 persons/latrine) (29). This work provided the impetus for the Haiti National Directorate for Potable Water and Sanitation and the humanitarian WASH sector to increase emphasis on improving WASH in IDP sites, actions that likely reduced the number of cases among IDPs early in the cholera epidemic (30,31). The cholera epidemic was also the basis for a series of ERRB activities focused on improving access to clean water and proper sanitation in Haiti. Although WASH had been a core sector within ERRB, this epidemic led to an increased CDC emphasis on implementing and evaluating WASH interventions in CEs. In light of the ongoing burden of thousands of cholera cases in Haiti annually, WASH activities in Haiti are now a focus of the health systems recovery work of ERRB and the CDC Haiti country office.

Horn of Africa Famine and Displacement Response, 2011-2014

In 2011, a drought in the Horn of Africa led to severe food insecurity for 13 million persons, contributing to 30% acute malnutrition rates and the declaration of famine in 3 regions of Somalia (32). Nearly 1 million Somali refugees fled to camps in Kenya and Ethiopia, and an additional 1.5 million persons were internally displaced within Somalia. Host populations in Kenya also experienced emergency rates of >25% acute malnutrition, and outbreaks of measles and cholera occurred. In response, ERRB staff worked with the UN High Commissioner for Refugees (UNHCR) to strengthen its Health Information System (HIS) disease surveillance (http://www.unhcr.org/en-us/protection/health/4a3374408/health-information-system-toolkit.html), which led to identification of measles outbreaks in 2 refugee camps.
Staff review of demographic profiles of outbreak cases led to an expansion of the target age group for vaccination from 6 months-14 years of age to 6 months-30 years of age (33,34). In a retrospective survey of deaths among 753 refugee families arriving at Dadaab, Kenya, ERRB staff and partners noted a doubling of CMR among refugees in transit (CMR 1.94, 95% CI 0.50-3.37) compared with that before departure (CMR 0.86, 95% CI 0.57-1.15), leading aid agencies to intervene during refugees' journeys (35). ERRB's evaluation of a blanket supplementary feeding program in northern Kenya, conducted with several collaborators, pointed out the need for more regular distribution of rations and strengthened interventions for acutely malnourished children (36). ERRB staff and UN partners reviewed and validated all aid groups' nutrition and mortality surveys conducted in Somalia to ascertain the severity of the famine in some affected areas, thus directing aid (32). Until 2014, ERRB supported the Somalia communicable disease reporting surveillance system, designed to optimize early warning of outbreaks, by providing analysis and training; this system identified an outbreak of polio in 2013, enabling swift intervention (37). For ERRB, the response to the Horn of Africa famine and displacement indicated the value of enhancing public health information quality, thereby guiding the allocation of humanitarian resources. ERRB's response to this emergency also sharpened CDC's capacity to respond to protracted emergencies over the course of several years, adapting responses to the changing public health needs across several sites simultaneously within a destabilized region. In addition, this response represented one of the first instances of ERRB's providing remote support and monitoring of emergency public health activities.

Syria Displacement Response, 2012-Present

Antigovernment protests in Syria in 2011 devolved into an ongoing, multisided armed conflict that has devastated a previously middle-income country and destabilized the region. The UN Office for the Coordination of Humanitarian Affairs estimated, as of October 2016, that 13.5 million persons across the region were in need of humanitarian assistance. The war has caused the displacement of 4.8 million persons outside the country and 6.1 million within, totaling more than half of Syria's population (38). The displacement crisis has strained resources in neighboring countries and beyond. As in other protracted emergencies, ERRB's work has spanned several years and multiple sectors. Branch staff helped UNHCR implement HIS for disease surveillance in Za'atari refugee camp in Jordan, including introduction of an outbreak response protocol. Thereafter, when HIS showed a decline in child vaccination rates in the camp area from 90% to 50%, aid partners conducted a measles vaccination campaign reaching 660,000 children. Working with the US Agency for International Development's Office of Foreign Disaster Assistance and the Assistance Coordination Unit, staff also established and trained local users on the Early Warning Alert and Response Network in northern Syria, playing a fundamental role in establishing disease surveillance in non-government-controlled areas and increasing local public health capacity. This system detected a polio outbreak in 2013, initiating a vaccination campaign, and provided information on suspected cholera cases and measles and typhoid fever outbreaks.
ERRB assisted UNHCR, UNICEF, and other partners in conducting cross-sectional representative cluster surveys of the nutritional status of refugee children and women of reproductive age, finding a high prevalence of anemia in both groups and providing evidence to support a micronutrient fortification food program for refugees (39). ERRB and multiple collaborators performed an assessment of the Minimum Initial Services Package for reproductive health among the refugees from Syria residing in Jordan and instituted a protocol for clinical management of survivors of sexual violence after noting a lack of such services (40). This response in Syria indicated the importance for the emergency health response community of supporting public health guideline and strategy development and program implementation across regional public health systems. The Syria displacement crisis also pointed out the need to develop responses for emergencies in middle-income regions of the world, where demographics, disease burden, and functionality of public health systems are different from those of sites of historic CEs.

Discussion

Reflecting on these 3 case studies and the other listed ERRB humanitarian emergency responses, several overarching lessons for effective public health humanitarian emergency response emerge. First, because humanitarian emergency response requires engaging in a broad range of public health work within resource-limited, fragile, or insecure environments, successful response requires developing close working relationships with other humanitarian response organizations. For CDC, these partnering organizations include national governments; ministries of health; US government agencies (especially the Agency for International Development's Office of Foreign Disaster Assistance and the Department of State Bureau of Population, Refugees, and Migration); UN agencies, including the World Health Organization, UNHCR, and UNICEF; and national and international NGOs. At a basic level, these close relationships allow ERRB and other humanitarian responders access to CE settings. These collaborations encourage standardization of approaches across the international humanitarian emergency response community (29) and improved coordination of response (6,18). The common use of these standardized practices has been facilitated by the dissemination of the epidemiologic approaches and methods championed by CDC during humanitarian emergency responses and through CDC-trained staff going on to senior positions at UN agencies. Finally, these collaborations permit ERRB and similar organizations to provide technical assistance while partners such as national ministries of health, UN agencies, and NGOs take the lead in implementation of interventions.

Second, because public health emergency responses often take place within the context of mass population displacement and fragile states, CDC and other responders must develop and apply epidemiologic methods and tools to be used in challenging and resource-limited settings. ERRB has contributed to several such tools. In the nutrition sector, ERRB enhanced the application of the emergency nutrition assessment software that facilitates survey planning, data collection, and analysis of anthropometric indices (http://smartmethodology.org/survey-planning-tools/smart-emergency-nutrition-assessment) and led the technical development of the Community-based Management of Acute Malnutrition report for monitoring programs to manage malnutrition in emergencies (41).
In the communicable diseases sector, ERRB helped develop the evaluation tool for tuberculosis in resource-limited, refugee, and postconflict settings (https://www.cdc.gov/globalhealth/healthprotection/errb/researchandsurvey/tbtool.htm). In the sexual and gender-based violence sector, branch staff contributed to the guidelines for integrating gender-based violence interventions in humanitarian action (http://gbvguidelines.org/wp/wp-content/uploads/2015/09/2015-IASC-Genderbased-Violence-Guidelines_lo-res.pdf). Across these and other sectors, in settings where the evidence base for interventions is limited, CDC focuses on strengthening the accuracy of data to build a solid evidence base for interventions to guide humanitarian response, enhance global health security, and prevent morbidity and mortality.

Third, effective emergency responses must adapt to the changing needs of emergency-affected populations. Humanitarian emergencies, especially CEs, which exacerbate the fragility of politically weak and unstable regions, could last several years without a clear endpoint. Although dramatically elevated mortality rates might decrease as a CE moves from an acute emergency to a postemergency phase, populations continue to be vulnerable to many of the same health risks. As the humanitarian response evolves and becomes better established, responders might need to strengthen disease surveillance, review and interpret public health data, and improve the capacity of local or national public health systems. Responders must maintain a commitment to improving the function and resilience of public health systems within these fragile settings.

Fourth, ERRB's work supports the work of CDC to prevent, detect, and respond to public health threats in fragile states under conditions that can result in regionally destabilizing effects and threaten global health security. Responding effectively requires that ERRB and other responders recognize 3 global patterns in population displacement: urbanization of the displaced, a shifting disease burden that includes noncommunicable diseases, and increasing security restrictions in areas of displacement. Understanding the unique aspects of urbanization of the displaced, moving away from the rural camp-based models of the past, suggests the need to change epidemiologic methods of surveillance and population assessment. In addition, because the displaced are increasingly likely to need assistance for noncommunicable, chronic diseases and access to long-term health services, compared with displaced populations in the past (9), the humanitarian emergency response community's areas of expertise must expand to include this sector. Increasing security restrictions have sometimes prevented, and will likely continue to prevent, CDC staff and the humanitarian community from physically accessing certain displaced populations. Furthermore, CDC is the public health agency of the US government and not a humanitarian agency, and thus ERRB's responses are limited in ways that those of humanitarian agencies are not. These limitations include where, under what circumstances, and with which partners CDC staff can work. To address these limitations, ERRB staff is working to formalize remote support and program evaluation without sacrificing quality or comprehensiveness of assistance. More broadly, however, ERRB relies on humanitarian agencies to continue using epidemiologically sound public health approaches to guide evidence-based, effective interventions when CDC is precluded from responding.
Finally, ERRB responses show that response expertise is most useful when deployed early in an emergency and with a sustained presence. To that end, ERRB's Global Rapid Response Team quickly matches needs in the field with expertise available from within ERRB and across the entire CDC, and these responders can provide longer-term support. In its first year of existence, the Global Rapid Response Team deployed >200 staff members to various emergency responses, including for Hurricane Matthew in Haiti in October 2016. As the numbers affected by, and intensity of, humanitarian emergencies increase, ERRB and other response organizations must provide broader assistance. To that end, ERRB collaborates with partners; contributes to epidemiologic tools to be used in humanitarian emergencies; and, through the Global Rapid Response Team, responds more quickly and with more staff. The next steps for ERRB and other responders include improving the capacity and resilience of public health systems in fragile states; understanding the public health implications of long-term, urban-based displacements; adding a focus on noncommunicable diseases; and providing remote epidemiologic support in a systematic way. In settings where ERRB staff, as representatives of a US government agency, cannot respond, CDC's evidence-based interventions for emergencies are still implemented because of branch efforts in building local capacity for emergency response and training public health practitioners who then move on to work with humanitarian agencies. In these ways, ERRB continues to apply public health science to save lives in humanitarian settings while also working at the forefront of response-purposed detection and preparing a global health response workforce.
The HRASLS (PLA/AT) subfamily of enzymes

The H-RAS-like suppressor (HRASLS) subfamily consists of five enzymes (1-5) in humans and three (1, 3, and 5) in mice and rats that share sequence homology with lecithin:retinol acyltransferase (LRAT). All HRASLS family members possess in vitro phospholipid-metabolizing abilities, including phospholipase A1/2 (PLA1/2) activities and O-acyltransferase activities for the remodeling of glycerophospholipid acyl chains, as well as N-acyltransferase activities for the production of N-acylphosphatidylethanolamines. The in vivo biological activities of the HRASLS enzymes have not yet been fully investigated. Research to date indicates involvement of this subfamily in a wide array of biological processes and, as a consequence, these five enzymes have undergone extensive rediscovery and renaming within different fields of research. This review briefly describes the discovery of each of the HRASLS enzymes and their role in cancer, and discusses the biochemical function of each enzyme, as well as the biological role, if known. Gaps in current understanding are highlighted and suggestions for future research directions are discussed.

HRASLS family members share several common elements (Figs. 1, 2 and 3), including a highly conserved NCEHFV sequence in the C-terminal region and similar basic structural motifs [2,3]. Several residues in the NCEHFV sequence are critical for acylation and deacylation reactions, including the cysteine that acts as the active-site nucleophile [2]. This residue is paired in HRASLS2-5 with two N-terminal-region histidines, and in HRASLS1 with N-terminal-region histidine and asparagine residues, forming an unconventional catalytic triad that is characteristic of NlpC/P60 family members [4]. In this arrangement, the first histidine acts as a base to deprotonate the sulfhydryl group of the cysteine side chain, while the second polar residue acts to stabilize the correct orientation of the imidazole ring of the first histidine [2]. With the exception of HRASLS5, these proteins also contain a C-terminal transmembrane-spanning hydrophobic domain for endomembrane localization (Fig. 1) [5].

Currently, only the structures of HRASLS2, HRASLS3, and HRASLS4 have been reported. The crystal structures of HRASLS2 and HRASLS3 were solved by Golczak et al. in 2012. Both were shown to contain three α-helices well separated from a four-stranded antiparallel β-sheet, which resembles the basic structure of the NlpC/P60 thiol proteases that contain an α + β fold with segregated α and β segments (Fig. 2) [3,4]. The NMR structure of HRASLS4 was solved by Wei et al. in 2015 and showed similar folding to that of HRASLS3, although the basic structure contained four α-helices and a six-stranded antiparallel β-sheet (Fig. 2) [6].

Recent evidence has indicated a biochemical role for all HRASLS homologues in acyl-chain remodeling of glycerophospholipids [3]. Using mass spectrometry analysis, a thioester intermediate has been detected to form between cysteine 161 of purified LRAT and the acyl side chain donated by phosphatidylcholine (PC) [7]. This suggests that transient acylation of a reactive cysteine residue also occurs in the LRAT-like HRASLS enzymes to mediate reactions involving glycerophospholipids (Fig. 4) [3]. HRASLS1-5 all possess in vitro Ca2+-independent phospholipase A1/2 (PLA1/2) activities, which require a pH of 8-9, dithiothreitol (DTT), and Nonidet P-40 for maximum function, and which are inhibited by iodoacetate [8-10].
The importance of DTT in particular suggests that preventing oxidation of sulfhydryl groups is critical for maintaining enzyme stability, which agrees with a predicted role for an active-site cysteine in HRASLS enzymes [8]. Both PC and phosphatidylethanolamine (PE) act as substrates, with PLA1 activity showing dominance over PLA2 activity for HRASLS1, 2, 4, and 5 [2,8-11]. With regard to HRASLS3, there has been contradictory evidence of substrate specificity and positional preference for acyl hydrolysis, as well as other enzymatic characteristics [12-15]. Uyama et al. reported both PLA1/2 activities with PC and PE as substrates, but a preference for hydrolysis at the sn-1 position [15]. Duncan et al. also found that this enzyme shows both PLA1/2 activities with maximal phospholipase activity at pH 8, and they further identified a broad range of possible substrates, showing evidence of acyl hydrolase activity by HRASLS3/AdPLA against not only PC and PE, but also phosphatidylserine and phosphatidylinositol, though not phosphatidic acid [14]. However, in contrast to the findings of Uyama et al., Duncan et al. reported a preference for hydrolysis by this enzyme at the sn-2 position that is moderately stimulated by Ca2+ [14]. The inconsistencies within the literature have been proposed to be due to differences in the assay systems used [15]. Based on the evidence to date, it seems that this enzyme should best be considered a PLA1/2 [14-16].

Fig. 1 Sequence alignment of human HRASLS enzymes. Sequence alignment was performed using ClustalW2 [51]. The C-terminal transmembrane-spanning hydrophobic region is underlined. The NCEHFV motif is highlighted in red with the active-site cysteine bolded and italicized. The remaining two active-site residues of the catalytic triad are also bolded, and are highlighted in purple. Conserved residues are highlighted in yellow, and homologous residues are highlighted in grey.

Fig. 2 The β-sheets are indicated by green arrows and α-helices are indicated by red lines [6].

The HRASLS family of proteins also possesses O-acyltransferase activity, which describes the acyl-CoA-independent transacylation of a lysophospholipid at either the sn-1 or sn-2 position using a fatty acyl chain donated by a phospholipid (Fig. 5) [1-3]. Each member of the HRASLS family has shown some preference for O-acylation of lysophosphatidylcholine (lyso-PC) at the sn-1 position [1,8,10]. It has therefore been proposed that the HRASLS family be reclassified as a novel group of phospholipid-metabolizing enzymes, respectively known as phospholipases A/acyltransferases (PLA/AT) 1-5 [1].

The HRASLS family of proteins also functions in vitro, to differing extents, as N-acyltransferases for the production of N-acylphosphatidylethanolamines (NAPEs). HRASLS1-5 have been shown to catalyze the transfer of an acyl chain from the sn-1 position of a glycerophospholipid to the amino group of PE, producing NAPEs, which serve as precursors for N-acylethanolamines (NAEs) (Fig. 5) [1]. NAEs are a class of bioactive signalling molecules produced when a NAPE molecule is cleaved through hydrolysis, as catalyzed by NAPE-hydrolyzing phospholipase D (NAPE-PLD) (Fig. 5). NAEs have myriad biological roles, including modulation of inflammation, energy regulation, and nociception [1,17]. Although little is yet known regarding the physiological significance of NAE regulation by HRASLS subfamily members, these enzymes are already known to play diverse biological roles in vivo.
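Before turning to those biological roles, the three in vitro activities described above can be kept apart with a schematic summary. This is only a sketch consistent with the descriptions in this review, using PC as the illustrative acyl donor; the prime marks a second, distinct lipid species:

```latex
% Schematic summary of the three in vitro activities (PC as illustrative donor;
% the prime marks a second, distinct lipid species)
\begin{align*}
\text{PLA}_{1/2}:\;& \text{PC} + \mathrm{H_2O} \longrightarrow \text{lyso-PC} + \text{free fatty acid}\\
\text{O-acyltransferase}:\;& \text{PC} + \text{lyso-PC}' \longrightarrow \text{lyso-PC} + \text{PC}'\\
\text{N-acyltransferase}:\;& \text{PC} + \text{PE} \longrightarrow \text{lyso-PC} + \text{NAPE}\\
\text{NAPE-PLD}:\;& \text{NAPE} + \mathrm{H_2O} \longrightarrow \text{NAE} + \text{phosphatidic acid}
\end{align*}
```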
In particular, a function has been investigated for several members as class II tumour suppressor genes. Class II tumour suppressors are involved in the negative regulation of cell growth, and are often downregulated in cancer cells secondary to other genetic or metabolic changes, but are not themselves mutated or deleted. As a result, they make attractive therapeutic targets, since they should be inducible if appropriate conditions or agents are identified [18-20]. Due to the study of HRASLS subfamily members in wide-ranging fields, rampant renaming has occurred, with the result that there are now 27 different designations for these five enzymes (Table 1). Here, we identify and review the biochemical and biological roles of these enzymes, and seek to disambiguate the current state of knowledge on this enzyme subfamily.

Review

HRASLS1 (PLA/AT1)

HRASLS1 was originally discovered by Akiyama et al. in 1999 through differential display between two mouse cell lines [21]. The 167-amino acid-long protein was named A-C1, and was found to be 46% homologous with a rat tumour suppressor protein called rat H-rev107 (now also called HRASLS3, among other names; see Table 1) [11,21]. The 168-amino acid-long human HRASLS1/A-C1 was also cloned, and was found to share 83% identity with the mouse sequence [8,22]. A tumour suppressor role for HRASLS1/A-C1 was tested directly, and it was found that, indeed, overexpression of HRASLS1/A-C1 inhibited the growth of H-Ras-transformed NIH3T3 fibroblasts [21]. Additionally, expression of the human homolog was found to be reduced in gastric cancers from 41 different individuals due to hyper-methylation of CpG islands in the 5' region of the HRASLS1 gene, suggesting that it also plays a role as a tumour suppressor in human disease [23].

The tissue distribution of HRASLS1 is distinguishable from that of other members of the HRASLS family of enzymes. In humans, mice, and rats, HRASLS1/A-C1/PLA/AT1 is predominantly expressed in testes, skeletal muscle, and heart [8]. Humans show relatively low expression of HRASLS1 in most other tissues [8], while mice and rats also show abundant expression in brain, with some evidence of expression in thymus, lung, stomach, kidney, and bone marrow cells [8,21]. N-acylethanolamines (NAEs) are known to accumulate in response to acute injury in tissues including the heart, brain, and testicles, suggesting a possible role for HRASLS1 in mediating pathological events in these tissues [17]. Thus far, however, no studies have examined the physiological role of HRASLS1.

HRASLS2 (PLA/AT2)

HRASLS2 was first cloned from SW480 colon cancer cells [5]. This member of the HRASLS family is located on chromosome 11 in humans, but is lacking from the rodent genome [5,10]. The HRASLS2 gene transcript is detectable primarily in gastrointestinal tissues, including the colon, small intestine, and stomach, and is also expressed in the trachea and kidney [5]. Subcellular localization of HRASLS2 is primarily within the perinuclear region, where it shows a granular pattern [5], similar to that visualized for HRASLS1 [21]. A role for HRASLS2 in cancer and, specifically, in RAS-mediated cellular transformation has been investigated. Transfection of HRASLS2 into HCT166 colon cancer cells, HeLa cervical cancer cells, and MCF-7 breast cancer cells was shown to suppress colony formation [5]. As well, HRASLS2 over-expression suppressed the activation of wild-type and mutant RAS in HtTA cervical cancer cells, resulting in increased cell death [5].
These effects were dependent on the C-terminal hydrophobic domain. Truncation of the last 26 amino acids containing this region eliminated the pro-apoptotic, anti-RAS, and growth-suppressive activities of HRASLS2 [5]. In addition, loss of the putative C-terminal transmembrane domain also caused HRASLS2 to translocate, demonstrating increased intensity in the nucleus, which highlights the importance of this region in endomembrane localization [5].

The phospholipid-metabolizing activities of HRASLS2 have been assessed using both cell homogenates and purified enzyme. HRASLS2 overexpression in COS-7 cells was found to increase in vitro PLA1/2 activity, with similar increases evident in assays conducted using either post-microsomal supernatants or microsomal pellets [10]. HRASLS2 was also shown to prefer to use fatty acids from the sn-1 position of PC as acyl donors for its N-acyltransferase activities [1,10]. In vitro, N-acyltransferase activity was found to be approximately 4-fold greater than PLA1/2 activity [1,10]. Because of the preference for activity at the sn-1 position, the major N-acyl species of NAPE produced by the overexpression of HRASLS2 in COS-7 cells tended to contain mostly saturated and monounsaturated acyl chains, with N-palmitoylethanolamide, N-stearoylethanolamide, and N-oleoylethanolamide forming the predominant NAEs generated, while N-arachidonoylethanolamide (i.e. anandamide) and its precursor N-arachidonoyl PE showed only minimal elevation [1].

The physiological function of HRASLS2 is as yet uncharacterized. Overexpression of HRASLS2 in clonal cell lines has been shown to decrease endogenous plasmenylethanolamine levels, which causes abnormal localization of peroxisomal proteins [1]. A similar result has also been observed for HRASLS3 [12]. Since the predominant activity of that enzyme is PLA1/2 rather than N-acyltransferase activity, it is unlikely that changes in NAPE/NAE levels or synthesis play a role in mediating peroxisomal events by HRASLS subfamily members. Further studies with Hrasls2 gene ablation will be important to elucidate the biological role of this enzyme.

HRASLS3 (PLA/AT3)

Diverse studies on both the biological role and enzymatic function of HRASLS3 have led to the generation of a particularly large number of aliases, with at least ten different names recorded in the literature (Table 1) [9,13,25]. HRASLS3 was the first of the HRASLS subfamily of proteins to be discovered [11]. It was identified by subtractive hybridization performed between H-ras-transformed rat fibroblasts and phenotypic revertants that had regained density-dependent growth inhibition, and thus was initially given the name H-ras revertant #107 (H-rev 107) to denote this [11]. In additional studies, HRASLS3/H-rev107 was categorized as a class II tumour suppressor [11,26]. It has been suggested that HRASLS3 regulates oncogenic H-ras through its ability to decrease cellular levels of ether-type lipids, such as plasmalogens and monoalkyldiacylglycerols, by decreasing cellular levels of peroxisomes, which are major sites for the synthesis of these lipids [12,27]. Uyama et al. found that HRASLS3 binds to the peroxisomal chaperone protein Pex19p through its C-terminal and N-terminal hydrophobic domains [27]. Subsequently, Pex19p is inhibited from binding to peroxisomal membrane proteins such as Pex3p and Pex11βp, which results in decreased peroxisomal biogenesis [27].
Additional studies, however, are still required to directly test the relative importance of peroxisomal biogenesis regulation in mediating effects of HRASLS3 on the H-ras-dependent growth of cancer cells. HRASLS3 is downregulated in many cancers by methylation of the CpG-rich region in the promoter, which results in gene silencing [18,28,29]. In OVCAR-3 ovarian cancer cells, HRASLS3 has been shown to interact, through its N-terminal proline-rich domain, with protein phosphatase 2A, ablating its catalytic activity [25] and causing caspase-9-dependent cell death. As well, HRASLS3 was found to inhibit cell migration and invasion when expressed in human NT2/D1 testicular cancer cells, and this function was suggested to relate to its ability to enhance the activity of prostaglandin D2 synthase [30]. At least one study has demonstrated, using chemical inhibitors, that the phospholipase activity of HRASLS3/H-rev107 is required for its anti-RAS effect [31]. This is also in agreement with an earlier finding that the growth-inhibitory and tumour-suppressing activity of HRASLS3/H-rev107 requires the C-terminal region [26] that is required for catalysis [16].

Not all studies, however, have found an inhibitory role for HRASLS3 in cancer. HRASLS3 does not inhibit, but instead stimulates, the proliferation of non-small cell lung carcinomas, contributing to tumour progression [29]. HRASLS3 has also been found to be a downstream target of mutant p53, with increased HRASLS3 levels reported in p53-mutant osteosarcomas [32]. As well, increased proliferation, migration, and invasion of osteosarcoma cells were reported following overexpression of HRASLS3 [32]. Additional studies on HRASLS3 active-site mutants in transformed cell lines will help to isolate its lipid-enzymatic function from its function in tumour growth modulation.

The enzymatic activity of HRASLS3/RLP-3 was first investigated in 2007 by Jin et al., who demonstrated N-acyltransferase activity for this enzyme in vitro [9]. Subsequent work indicated the predominance of PLA over N- or O-acyltransferase activities for HRASLS3 in vitro [1,8], and a detailed characterization of this enzyme as an adipose-specific phospholipase A2 (AdPLA) has been performed [14]. On the basis of studies performed using general phospholipase chemical inhibitors, it was suggested that this enzyme constituted the first member of an entirely new group of phospholipases A2, group XVI. As a result, the gene was subsequently renamed PLA2G16 [14].

Highly detailed analyses of human HRASLS3 have been performed in a series of elegant papers, including X-ray crystallographic analysis of its structure [3,16,33]. In the first structure analysis of the HRASLS family of proteins, Ren et al. demonstrated in 2010 that the phospholipase active site of HRASLS3 contains a Cys113-His23-His35 catalytic triad [33]. This finding was supported by Pang et al. in 2012, who showed that the phospholipase activity of HRASLS3 was mediated by this catalytic triad [16]. As is also seen in LRAT-NlpC/P60 family proteins, the active-site cysteine performs a nucleophilic attack on the sn-1 and sn-2 acyl groups of phospholipids. In this reaction, the pKa of Cys113 is too high to allow it to act as a nucleophile alone [16]. Thus, His23 assists in deprotonating the sulfhydryl side chain of Cys113, which effectively lowers the pKa from 8.3-8.8 to 7.0, allowing it to function in the deacylation of phospholipids.
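The catalytic consequence of the pKa shift reported by Pang et al. can be made concrete with a back-of-the-envelope Henderson-Hasselbalch estimate. The snippet below is only an illustration: the pH of 7.4 is an assumed value for context, while the pKa values are the ones quoted above.

```python
# Back-of-the-envelope estimate: fraction of the active-site cysteine present
# as the reactive thiolate, from the Henderson-Hasselbalch relation
#   fraction(S-) = 1 / (1 + 10**(pKa - pH)).
# pH 7.4 is an illustrative assumption; the pKa values are those quoted above.
def thiolate_fraction(pka: float, ph: float = 7.4) -> float:
    return 1.0 / (1.0 + 10 ** (pka - ph))

for pka in (8.8, 8.3, 7.0):
    print(f"pKa {pka}: {thiolate_fraction(pka):.1%} thiolate at pH 7.4")
# pKa 8.8 -> ~4%, pKa 8.3 -> ~11%, pKa 7.0 -> ~72%
```

In other words, lowering the pKa from ~8.5 to 7.0 raises the reactive thiolate population roughly ten-fold at near-neutral pH, which is what allows Cys113 to act as the nucleophile.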
This study also provided mechanistic insight into the higher phospholipase activity relative to acyltransferase activity of HRASLS3 in particular. Evaluation of hydrogen/deuterium exchange at the active site of HRASLS3 suggested that during catalysis, water could readily access reactants, allowing the reactive intermediate to decompose to a free fatty acid and a lysophospholipid [16]. It remains to be determined whether varying degrees of hydrophobicity at the active site are related to the acyltransferase/phospholipase activity ratio of other HRASLS subfamily members. The C-terminal hydrophobic domain is critical for HRASLS3 activity. Truncation of the C-terminal hydrophobic domain results in loss of HRASLS3 from membranes and a loss of HRASLS3-mediated phospholipase activity in vitro, indicating the critical nature of this hydrophobic span for interfacial catalysis [27].

The in vivo role of HRASLS3 has been studied in some depth [13]. HRASLS3/AdPLA is expressed predominantly in white adipose tissue (WAT), and also to a lesser extent in brown adipose tissue [13,14], where it is a major regulator of lipolysis and is crucial for the development of obesity, as seen in an Hrasls3-/- (AdPLA-/-) mouse model [13]. The ablation of Hrasls3/AdPLA causes a reduction in adipose tissue mass and triacylglycerol content, despite normal adipogenesis, as a result of constitutively elevated rates of lipolysis. The underlying molecular mechanism involves the regulation of lipolysis through the modulation of prostaglandin E2 (PGE2) levels and signaling. HRASLS3/AdPLA is responsible for 80% of adipocyte phospholipase activity, which is a major source of the arachidonic acid used in the synthesis of eicosanoids [13]. As a result, loss of HRASLS3/AdPLA results in a dramatic fall in levels of PGE2 [13]. The primary receptor for PGE2 in adipocytes is prostaglandin E receptor 3 (EP3), which is unique among EPs in that it is Gαi-coupled, and therefore exerts a repressive action on adenylyl cyclase in adipocytes when activated [34]. Thus, loss of HRASLS3/AdPLA in mice causes a decrease in PGE2 levels and concomitant reduced activation of inhibitory EP3, with the result that generation of cAMP, activation of the hormone-sensitive lipase, and stimulation of lipolysis proceed unchecked, preventing the accumulation of triacylglycerol and the development of genetic or diet-induced obesity [13]. Since HRASLS3/AdPLA is normally stimulated by insulin, increased activity of this enzyme may help to modulate the antilipolytic effects of insulin in adipocytes in the fed state [13]. The role of HRASLS3/AdPLA in adipocyte triglyceride metabolism suggests that its inhibition may be a useful strategy to prevent and treat obesity [13,16], and future studies will therefore likely focus on the design of inhibitors [16] and on strategies to regulate this enzyme.

HRASLS4 (PLA/AT4)

HRASLS4 was identified by three separate groups in 1998 [35], 2000 [36], and 2001 [37] as a retinoid-inducible anti-proliferative/class II tumour suppressor gene that goes by the alternate names Tazarotene-induced protein 3 (TIG3), Retinoic acid receptor responder protein 3 (RARRES3), and Retinoid-inducible gene 1 (RIG1) (see Table 1 for all aliases).
HRASLS4 is homologous with HRASLS3/H-rev107 [35,36] and, like that enzyme, has reduced expression in a wide variety of primary human tumours, including lymphoma and ureter, kidney, rectal, and uterine tumours, and in cancer cell lines, including HL-60 promyelocytic leukemia cells, HeLa cells, K-562 chronic myelogenous leukemia cells, SW480 colon carcinoma, A549 lung carcinoma, and G361 melanoma cells [35]. HRASLS4 also shows growth-inhibitory effects when overexpressed in T47D Chinese hamster ovary cells [35]. A role for HRASLS4 in malignant disease continues to be studied extensively, and a comprehensive overview of this area is beyond the scope of the current review. However, it is of interest to mention some of the more recent findings. The HRASLS4 promoter contains a response element for the p53 tumour suppressor that is activated by wild-type but not mutant p53 [38]. Whether this feature exists for all HRASLS subfamily members has not yet been studied, but it would help to explain the almost universal down-regulation of these proteins in transformed cells. Like other HRASLS subfamily members, HRASLS4 also inhibits H-RAS-mediated signalling [5,39,40]. The anti-cancer activity of HRASLS4 has been localized specifically to its action within the Golgi. While the C-terminal hydrophobic domain of HRASLS4 anchors this enzyme within both the endoplasmic reticulum and the Golgi apparatus, only Golgi-targeted HRASLS4 induces apoptosis in cancer cells [39]. The phospholipid-metabolizing activity of HRASLS4 appears to be important for its anti-cancer effects, particularly with regard to metastasis and invasion [41]. Similar to HRASLS3, HRASLS4 functionally interacts with prostaglandin D2 synthase to augment the production of prostaglandin D2 [42]. This function was found to be dependent on an intact C-terminal hydrophobic domain, and although not tested directly, it seems highly likely to be related to the phospholipase activity of this enzyme [42].

A role for HRASLS4 in phospholipid metabolism has been investigated [1,10]. HRASLS4 functions in vitro as a Ca2+-independent PLA1/2 [1,8,10]. Although N-acyltransferase activity in vitro has been shown to be minor or absent for this enzyme [10], HRASLS4 does significantly increase the cellular content of both NAPE and NAE in metabolic labelling experiments [1]. The functional significance of this activity with regard to the anti-cancer activity of this enzyme remains to be determined.

A physiological role for Hrasls4 has been investigated [43]. HRASLS4 is found in the suprabasal epidermis of skin, where it interacts with and activates another enzyme, transglutaminase I (TG1), which functions during terminal differentiation to form covalent bonds between proteins at the inner surface of the plasma membrane [44]. The action of TG1 is critical for production of the cornified envelope that maintains the epidermal barrier [44], and adenoviral delivery of HRASLS4 to cultured keratinocytes results in the activation of TG1, leading to differentiation-like cell death as well as the generation of cornified envelope-like structures [45,46]. Reduced expression of HRASLS4 is evident in hyperproliferative conditions such as psoriasis [47]. An additional role for HRASLS4 in keratinocyte differentiation is suggested by findings that this protein also localizes to the centrosome, where it affects microtubule kinetics and cell division [43,48].
Interaction between the centrosome and HRASLS4 has been found to involve a central region of the enzyme that contains the NCEHFV sequence [43]. However, whether the phospholipid-metabolizing abilities of this enzyme, per se, are critical for its various physiological roles in skin cells remains to be clearly characterized.

HRASLS5 (PLA/AT5)

HRASLS5 (also called RLP-1 and iNAT; see Table 1 for all aliases) was first investigated as part of a search for new Ca2+-independent enzymes active in the N-acylation of PE [9]. HRASLS5 displays dominant N-acyltransferase activity over both PLA1/2 and O-acylation activities in vitro, and overexpression of HRASLS5 enhances formation of NAPE and NAE in cultured cells [1,2,8,9]. However, HRASLS5 differs from other HRASLS family members in that it is predominantly present in the cytosol, likely due to the absence of the C-terminal hydrophobic span that is characteristic of HRASLS enzymes 1-4 (Figs. 1, 2, 3 and 4) [1,9]. HRASLS5 also does not show a clear preference for abstraction of acyl groups from either the sn-1 or sn-2 position of PC during N-acylation reactions [9]. It has been suggested that HRASLS5 may be involved in the production of anandamide (arachidonoylethanolamide) because of its capacity to utilize fatty acyl chains from the sn-1 or sn-2 position of phospholipids [9]. However, studies have yet to characterize the nature of the endogenous NAPE and NAE species produced by HRASLS5. Likewise, little is known regarding the physiological function of this enzyme. It has been identified in spermatocytes of developing rat testes, but its function in that tissue is currently unknown [49]. Similarly, studies have yet to investigate whether it has a role in cancer like the other HRASLS family members.

Conclusions

The five members of the HRASLS family of enzymes have been shown to have phospholipase A1/2 activity and O- and N-acyltransferase activity. In addition, studies generally report reduced expression of these enzymes in cancer cells and demonstrate a direct anti-cancer role, although not all studies agree. Evidence from studies using mutated or truncated forms of HRASLS subfamily members suggests that the phospholipid-metabolizing functions are required for some, but likely not all, of the effects observed in tumour cells. Further work should integrate recent advances in understanding the biochemical function of these enzymes to better understand mechanisms related to aberrant cell growth in neoplasia. Further studies should also focus on understanding the major physiological function of each homologue in cells and tissues, using recent advances in gene-editing techniques as well as the generation of gene-knockout mice. Human monozygotic twins with Poland Syndrome have been found to be heterozygous for a gene deletion event on chromosome 11 that removes HRASLS2-5, strongly suggesting a role for one or more of these genes in the causation of this disorder [50]. Poland Syndrome is characterized by hypoplasia/aplasia of the pectoralis major muscle and other variable anomalies, including hypoplasia/aplasia of mammary tissue and ribs, limited subcutaneous fat, and sternal anomalies [50]. Generation of gene-knockout mice will be required to understand any putative role for this enzyme subfamily in normal physiology and in human disease.
Persistence of subclinical deformed wing virus infections in honeybees following Varroa mite removal and a bee population turnover

Deformed wing virus (DWV) is a lethal virus of honeybees (Apis mellifera) implicated in elevated colony mortality rates worldwide and facilitated through vector transmission by the ectoparasitic mite Varroa destructor. Clinical, symptomatic DWV infections are almost exclusively associated with high virus titres during pupal development, usually acquired through feeding by Varroa mites when reproducing on bee pupae. Control of the mite population, generally through acaricide treatment, is essential for breaking the DWV epidemic and minimizing colony losses. In this study, we evaluated the effectiveness of remedial mite control on clearing DWV from a colony. DWV titres in adult bees and pupae were monitored at 2-week intervals through summer and autumn in acaricide-treated and untreated colonies. The DWV titres in Apistan-treated colonies were reduced 1000-fold relative to untreated colonies, which coincided with both the removal of mites and also a turnover of the bee population in the colony. This adult bee population turnover is probably more critical than previously realized for effective clearing of DWV infections. After this initial reduction, subclinical DWV titres persisted and even increased again gradually during autumn, demonstrating that alternative, non-Varroa transmission routes can maintain the DWV titres at significant subclinical levels even after mite removal. The implications of these results for practical recommendations to mitigate deleterious subclinical DWV infections and improve honeybee health management are discussed.

Introduction

Deformed wing virus (DWV) is a prevalent single-stranded RNA virus affecting the European honeybee (Apis mellifera). At highly elevated titres, it causes wing deformities in developing pupae, resulting in flightless adults that die shortly after emerging [1], [2], [3]. Expression of the characteristic symptoms is, however, largely dependent on how the bee acquires the DWV infection [4]. Typically, in its natural state, this virus occurs in honeybee colonies as an asymptomatic covert infection that is maintained in the colony through horizontal transmission pathways between bees, such as trophallaxis, cannibalism, cleaning and salivary gland secretions [2], [5], and through vertical transmission from infected parents to their progeny [6], [7], [8]. The epidemiology of DWV has, however, been dramatically altered by the introduction of the invasive honeybee ectoparasitic mite, Varroa destructor [9]. This mite feeds on the hemolymph of developing pupae and adult bees, acting as a highly efficient vector of DWV by injecting the virus particles directly into bee hemolymph [10]. The Varroa mite also activates latent DWV infections through host immunosuppression, indirectly stimulating virus replication in bees [11]. These features of mite-vectored transmission lead to increased infection levels within individual parasitized bees [4]. Morphological DWV symptoms typically occur at high infection levels (>10^10 genome copies/bee) that are almost exclusively coupled with acquiring the virus during the pupal stage via the feeding behaviour of the ectoparasitic mite [2], [4]. High mite infestations within a honeybee colony ultimately lead to an overt DWV epidemic causing colony death within a few years [12], [13].
DWV is now considered a global epidemic driven by the world-wide spread of the Varroa mite [14] and has been implicated in the high rate of honeybee colony losses experienced in Europe and the US [9], [15], [16], [17], [18]. Accordingly, the Varroa mite is considered a severely damaging pest to the European honeybee due to its role as a virus vector, and consequently the most significant economic threat to apiculture on a global scale [14], [19]. In order to prevent colony losses caused by the Varroa mite-virus infection epidemic, beekeepers must reduce or limit the growth of the mite population within the colony to break the vector transmission route of DWV. This is often done by using in-hive pesticides (acaricides) that specifically target the Varroa mite, such as a tau-fluvalinate acaricide [20]. However, DWV can potentially induce colony losses independent of Varroa mite infestation, even after mites are removed [21]. The aim of this study was to quantify the DWV infection dynamics during and following a Varroa mite removal treatment, in order to evaluate the time necessary to clear a DWV infection from a honeybee colony after mites are removed. Such information would be valuable for improving honeybee health management and colony survival by optimizing the duration and the timing of acaricide treatments.

Experimental design and sampling

Six honeybee colonies, which had survived the 2013/2014 winter with relatively high mite infestations and with a large proportion of adult bees showing symptomatic DWV infections, were split and re-queened in the spring of 2014 to give a total of 12 (six pairs of) experimental colonies. The colonies were tested in Uppsala, Sweden (Ultuna: N 59˚49'4,31 E 17˚39'24,60) and originate from the local bee population. Each pair of split colonies was equalized for brood amounts and adult bees. One colony from each pair was assigned to be treated with Apistan™ (tau-fluvalinate) according to manufacturer recommendations, while the other colony received no mite-control treatment. Apistan™ is a potent synthetic pyrethroid acaricide with up to 98-100% efficacy against Varroa mite infestation [20]. The Apistan™ strips remained in the treated colonies for the entire study in order to sustain a low mite infestation, since there was a high risk of mite re-invasion from drifting bees from the untreated, mite-infested colonies in the same apiary.

Samples of 100-200 adult bees and 10 uninfested pupae were collected from each of the 12 colonies in two-week increments, starting two weeks before the Apistan was applied to the treated colonies. The last sample was taken 14 weeks after the treatment application. This sampling strategy produced a total of 9 sample time points between the 12th of June and the 2nd of October 2014. This time interval spans both the regular summer turnover of the adult bee population and the production of the long-lived winter bees, which in Sweden occurs in late August [22]. No pupae were sampled on the last sampling date because the colonies were not rearing brood this late in the season. All experimental colonies overwintered and were examined in the spring, on the 3rd of March, for overwintering survival. The mite infestation rates in the colonies were determined by washing the adult bee samples in soapy water to dislodge any mites, and collecting these on a fine sieve [23]. These mites were counted, and the infestation rate was expressed as a proportion of the number of adult bees in the sample [23].
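The infestation-rate bookkeeping on these washed samples amounts to a one-line calculation; the counts in the sketch below are made-up illustration values, not data from this study.

```python
# One-line bookkeeping for the soapy-water wash described above: mites
# dislodged from the sample, expressed per adult bee. Counts are made-up
# illustration values, not data from this study.
def infestation_rate(n_mites: int, n_bees: int) -> float:
    """Mites per adult bee (often reported as mites per 100 bees)."""
    return n_mites / n_bees

n_bees, n_mites = 150, 12          # hypothetical wash result for one colony
rate = infestation_rate(n_mites, n_bees)
print(f"{rate:.3f} mites/bee = {100 * rate:.1f} mites per 100 bees")
```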
Thirty adult bees from each of these samples were set aside for quantitative molecular analysis of DWV infection.

RNA purification

The adult bee samples were extracted as a bulk sample of 30 bees, and the pupal samples were extracted as a bulk sample of 10 pupae. The samples were placed in plastic mesh bags and ground to powder using liquid nitrogen and a pestle. To each ground sample, 200 μL per bee of GITC buffer [24] containing 1% β-mercaptoethanol was added, followed by shaking, to produce a primary homogenate. Total RNA was extracted from 100 μL of this homogenate by a QiaCube robot following the RNeasy protocol for plants (Qiagen). The RNA was eluted in 50 μL RNase-free water, the RNA concentration was estimated by NanoDrop, and the sample was stored at -80˚C until further use.

RT-qPCR

The amount of deformed wing virus (DWV) RNA and RP49 mRNA (an internal reference gene for normalizing between-sample differences in RNA quantity and quality) in the adult and pupal bee samples was determined using reverse transcription quantitative PCR (RT-qPCR), using the iScript One-Step RT-PCR kit (Bio-Rad) with SYBR Green as the detection chemistry and the Bio-Rad CFX Connect thermocycler. Before RT-qPCR, the RNA samples were diluted to a uniform concentration of 20 ng/μL to avoid concentration-dependent effects on RT-qPCR efficiency [24]. The reactions were performed in a 20 μL volume containing 0.2 μM each of the forward and the reverse primer (S1 Table), 3 μL RNA, 10 μL SYBR Green RT-mix and 0.4 μL of iScript reverse transcriptase. The thermocycling profile for both assays was as follows: 10 min at 50˚C for cDNA synthesis and 5 min at 95˚C for inactivation of the reverse transcriptase, followed by 40 cycles of 10 sec at 95˚C for denaturation and 30 sec at 58˚C for annealing/extension and data collection. This profile was followed immediately by a melting-curve analysis to confirm the identity of the amplification products, incubating for 10 sec at 95˚C and then reading the fluorescence in 0.5˚C increments from 65˚C to 95˚C. For both assays, a 10-fold serial dilution series of a positive cloned (plasmid) control of known concentration was also run on each reaction plate, as well as negative (water) controls, to establish the calibration curves for absolute quantification, as performed by the Bio-Rad CFX software. The RT-qPCR data were subsequently converted to estimated copy numbers of each target RNA per bee, as described previously [25].

Statistical analysis

The DWV data for both adult bees and pupae were log-transformed to meet the assumptions of normally distributed data for parametric analysis. A linear maximum likelihood repeated-measures model (SAS, proc MIXED) was used to analyze the effects of acaricide treatment, time and a treatment × time interaction on the DWV titres in adult bees and pupae and on the adult bee Varroa mite infestation rate. The Varroa mite infestation rate was included as an independent explanatory variable in the model. The covariance structure for the repeated factor was selected based on Akaike's information criterion [26]. The assumptions of normality and equal variance were verified by analysis of residuals [26].

Results

The Apistan treatment effectively reduced the Varroa mite population of the treated colonies within 6 weeks after the Apistan was applied (Fig 1, Table 1). The mites were not completely eliminated from the treated colonies, but this was most likely an artifact of mite re-invasion from the neighbouring untreated colonies in the same apiary (Fig 1).
During this 6-week period, the adult DWV titres in the treated colonies were reduced 1000-fold relative to those of the untreated colonies, a differential that was maintained to the end of the season (Fig 1). This 6-week period equates to the average life-span of a summer bee and therefore represents a full demographic turnover of the adult bee population. However, the adult DWV titres of both the treated and untreated colonies increased about 10-fold from their lowest point in mid-August to the final sampling in mid-October. This period coincides with the production of long-lived winter bees, and therefore represents a different demographic phase. Statistical analyses confirmed that the treatment had a significant effect on adult DWV titres throughout the study period, and that a significant part of this effect could be explained by the covariation in mite infestation rates (Table 1). This explanatory effect of the mite infestation rates would be largely from the first phase, where both infestation rates and DWV titres decline in the treated colonies. Most of the remaining treatment effect comes from the second phase, where the DWV titre differences between treated and untreated colonies are maintained despite the increase in infestation rate in the untreated colonies.

These trends for the adult DWV titres were largely mirrored by those for the pupae (Fig 2), but with greater variability, both between colonies at each time point and between time points. While the effect of treatment on adult bee samples was mostly reflected in a reduction in DWV titres in the treated colonies, the treatment differential in pupal samples was mostly reflected in a faster increase in DWV titres in untreated colonies. Since this coincides with the increase of the adult bee mite infestation rate in the untreated colonies, there was also a very strong explanatory effect of mite infestation rate on pupal DWV titres (Table 1). Removing the adult bee mite infestation rate as an explanatory variable for pupal DWV titres shifted the significance to the main treatment effect, which became marginally significant as a result (F1,10 = 6.26; P = 0.0314). This indicates that the effect of the Apistan treatment on DWV titres in pupae is almost entirely due to its effect on the colony mite infestation rate, rather than directly on the virus infection. Nevertheless, despite the mite removal, the average pupal DWV titres in treated colonies were higher in the autumn than at any other time during the study, at over 10^7 genome copies/bee (Fig 2), attesting to the potency of the alternative transmission routes in maintaining the epidemic's momentum.

On the last sampling date of our study it was not possible to collect pupal samples, since the colonies were no longer rearing new bees and would remain so through the winter. The following spring, the experimental colonies were checked for winter survival. One of the six Apistan-treated colonies died during the autumn, before the final sampling, due to insufficient brood rearing. Three of the six untreated control colonies died during the winter (50% survival). These moribund colonies had the 2nd, 3rd and 4th highest adult DWV titres, as well as the 1st, 2nd and 6th highest mite infestation rates, of all colonies at the final sampling point the previous autumn.

Discussion

The broad pattern of DWV dynamics in the experimental colonies following mid-summer acaricide treatment can be divided into two phases.
The first phase is characterized by a drastic reduction in DWV titres, coinciding with the removal of Varroa from the colonies and reaching its maximum effect 6 weeks after the start of treatment. This is followed by the second phase, where the DWV titres increase again slightly, with a parallel increase observed in the untreated colonies. The pattern is clearer for the adult bees, where the main effect is a reduction of DWV titres in the treated colonies, than for the pupae, where the main effect is a faster increase in DWV titres in the untreated colonies.

The reduction of adult DWV titres in the treated colonies of this study is consistent with the strong influence Varroa-vectored transmission has on increasing the DWV titres in honeybee colonies [2], [4], [5], [10], [13], [27], [28]. The extent of the reduction (about 1000-fold) takes the adult DWV titres from clinical (>10^11 copies DWV/bee) to subclinical levels (<10^8 copies DWV/bee), which is sufficient to ensure winter survival. Clinical DWV symptoms in naturally infected bees typically start to appear at >10^10 copies DWV/bee, although there is considerable overlap between symptomatic and asymptomatic bees [29]. Although the DWV titres at the final sampling of the winter bees in this study were low enough to avoid the most damaging symptomatic effects, they were still high enough to be relevant to bee health and performance. Subclinical DWV infections often have sublethal effects, such as reduced life span [30], flight performance [31], and foraging age and efficiency [32]. The persistence of high sublethal DWV titres in the treated colonies, well after all the mites were removed from the colonies, shows the importance of alternative DWV transmission routes (most likely oral) in sustaining the momentum of the DWV epidemic in the absence of mite-mediated transmission. Doses of around 10^7 virus particles/bee are usually sufficient for successful oral infection of larvae or adult bees [33].

The more unexpected result was the progressive increase of subclinical DWV titres in the treated colonies during the second phase of the experiment, in both the pupal and adult samples, well after the mites were nearly completely removed from these treated colonies, instead of a continued decrease or a levelling off. There are several possible factors that could have influenced this increase. The slight mite re-invasion towards the end of the study, likely from the nearby untreated colonies [34], was probably not strong enough, and came too late in the season, to explain the 10-fold increase in DWV titres, which occurs well after the upward trends were established. Drifting bees from untreated colonies can likewise be excluded as an influencing factor, since they cannot explain the increase in pupal DWV titres. Regular bee turnover is also unlikely to be a major factor, as the bees produced during the first phase are progressively less DWV-infected, while the pupae and the adults developing from the larvae fed by the first-phase bees are progressively more infected. We suspect that this progressive increase in DWV titres from mid-August onwards is related to the nature of the production of the winter bees [35], with their unique physiological and functional traits [36]. Young nurse bees consume pollen and convert the nutrients to fats and the life-extending storage protein vitellogenin in their fat bodies [37]. These fat bodies are also major replication sites for DWV and similar viruses [38], [39].
During brood-rearing, the nutrients and constituents (and virus) produced in the fat bodies are used up to produce a proteinaceous secretion for feeding (and infecting) young larvae [40]. As brood-rearing slows down in autumn, more of the fat body resources (and virus) are retained as the bee prepares for surviving a long period of foraging dearth and becomes a 'winter' bee [41], [42]. Simultaneously, the collective nursing activity becomes increasingly focused on a shrinking population of (winter bee) larvae. These larvae are thus increasingly likely to receive a full infectious dose of DWV from infected nurse bees, relative to periods of high brood-rearing activity. Surges of brood-rearing may thus also help explain the variability in pupal DWV titres during the foraging season, and between colonies. A third possible factor for the late-season progressive increase of DWV titres could be the extended exposure of the treated colonies to tau-fluvalinate, which was previously shown to be associated with a temporary increase in DWV titres in late-season pupae and adult bees [25].

Honeybee colony death most often occurs during the winter months in temperate climates, during the sensitive overwintering phase of the annual colony cycle [36]. The long-lived overwintering bees (surviving > 200 days) are produced late in the summer or autumn and will be responsible for foraging and rearing the next generation for the colony in the following spring. Due to this static population structure, where there is no adult population turnover for several months, the health status of these long-lived winter bees is of particularly critical importance for successful overwintering and colony survival. If Varroa control treatments are administered too late in the season, the overwintering bees will have already been reared under Varroa-infested conditions and may be too ill-affected by virus infections to survive the winter, even if the mite treatment itself was effective at removing the mites.

The reduction of DWV titres in the treated colonies from clinical to subclinical levels not only paralleled the removal of mites but also, and perhaps more importantly, occurred during a turnover of the adult bee population (one bee generation is approximately 38 days, or nearly 6 weeks). The turnover of the adult bee population in a remedial Varroa treatment regime is probably more critical than previously realized. Highly infected adult bees must be replaced with a new generation of adults reared in a Varroa-free environment, so that new and progressively healthier bees will nurse and feed the larvae of the long-lived overwintering bees. By conducting our study over the summer months it was possible to observe the influence of the bee population dynamics, in addition to mite removal, on DWV infections. A previous study using Apistan treatment in the late summer showed high DWV titres in adult bees (>10^10 copies/bee) and pupae (>10^9 copies/bee) over the entire 6-week study [25]. Martin et al. [43] surveyed bees over the winter and, using serological detection, found a faster reduction of DWV when colony mite removal was performed in the summer rather than in autumn. The DWV titres in the adult winter bees are dependent on what happens previously in the pupae [2]. In this experiment, the progressively increasing DWV titres in untreated colonies from early August onwards, when brood rearing slows down, correspond to the period when the mite infestation rate also increases [13].
This consequently increases the proportion of winter-bee pupae with clinical DWV titres and compromises the colony's chance of winter survival. For those pupal DWV titres to remain below 10^10 copies DWV/bee, treatment should be administered no later than 6 weeks before the end of brood rearing, to allow for a generation turnover within the bee population. Our data demonstrate the importance of the alternative, non-Varroa virus transmission routes (e.g. oral transmission) for maintaining DWV titres within the colony at significant subclinical levels, even after the Varroa mites have been removed. Remedial treatment of honeybee colonies with high mite infestations and consequent high DWV titres may save the colony from immediate winter death, but subclinical DWV levels in overwintering bees could result in an insufficient number of bees for colony growth in the spring [13], [16], [32], causing an increased risk of damage by the Varroa-driven DWV epidemic the following year. To mitigate the subclinical effects of DWV, we extend the above practical recommendation: continuously monitor and maintain low mite infestation rates in colonies by incorporating other integrated pest management (IPM) strategies to avoid irreversibly high DWV levels from building up, rather than relying on late-season remedial treatment alone.

Supporting information

S1 File. Dataset. Persistence of subclinical deformed wing virus infections in honeybees following Varroa mite removal and a bee population turnover. (XLSX)

S1 Table. Primer sequences and performance indicators for the RT-qPCR assays. Primer sequences and performance indicators, including the melting temperature of PCR products, for the RT-qPCR assays for DWV and the internal reference gene RP49. (PDF)
How Travels a Bohmian Particle?

Bohm's mechanics was built for explaining individual results in measurements, and mainly for getting rid of the enigmatic reduction postulate. Its main idea is that particles have at any time definite positions and velocities. An additional axiom is that particles follow continuous trajectories that admit the first derivative in time, the velocity. In the quantum theory, if the position of a quantum object is well-defined at some time, a Δt time later the object may be found anywhere in space, so the velocity defined as Δx/Δt is completely undefined. This incompatibility is regarded in standard quantum theory as a property of nature. The disagreement between quantum and Bohm's mechanics is particularly strong in wave-like phenomena, e.g. interference. For a particle traveling through an interference fringe, Bohm's velocity formula shows a dependence of the time-of-flight on the fringe length. Such a dependence is not supported by the quantum theory. Thus, for deciding which prediction is correct one has to measure times-of-flight. But this is a problem. If one detects a particle at two positions and records the detection times, the time difference is meaningless, because the first position measurement disturbs the particle's Bohm velocity (if it exists). This text suggests a way around: instead of measuring positions and times, the particles are raised to an excited, unstable level by passing them through a laser beam. The unstable level will decay in time, such that the density of probability of the excited atoms will indicate the time elapsed since excitation. For comparing the Bohmian and quantum predictions, this text further proposes to send the beam of excited particles upon a mirror. Bohm's velocity leads to anomalies in the reflected wave.

Introduction

Bohm's mechanics (BM) was built with the purpose of offering a simple and plausible alternative to the quantum theory (QT). The latter doesn't predict measurement results of individual systems, only the statistics thereof, and regards this limitation as a property of nature: "It requires us to give up the possibility of even conceiving precisely what might determine the behavior of an individual system at the quantum level, without providing adequate proof that such a renunciation is necessary" [1,2]. BM has an opposite view: it "permits us to conceive of each individual system as being in a precisely definable state, whose changes with time are determined by definite laws, analogous to (but not identical with) the classical equations of motion" [1,2].

BM is a hidden-variable theory. It assumes that at a given time $t_0$ a particle has a well-defined position, and this is the hidden variable of the theory. BM assumes that the density of probability for positions at $t_0$ is given by $|\Psi(\mathbf{r}, t_0)|^2$, the absolute square of the wave-function, and proves [3] that at any $t > t_0$ the density of probability of the positions is $|\Psi(\mathbf{r}, t)|^2$, the connection between the two wave-functions being given by Schrödinger's equation. (An extensive analysis of BM may be found in [4].)
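The consistency claim in the previous paragraph is the standard continuity-equation argument, a textbook identity stated here for completeness rather than taken from this paper: from Schrödinger's equation one derives

$$\frac{\partial |\Psi|^2}{\partial t} + \nabla \cdot \left(|\Psi|^2\, \mathbf{v}\right) = 0,$$

where $\mathbf{v}$ is the Bohmian velocity field introduced below. An ensemble of particles initially distributed as $|\Psi(\mathbf{r}, t_0)|^2$ and transported by $\mathbf{v}$ therefore remains distributed as $|\Psi(\mathbf{r}, t)|^2$ at all later times.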
So far, no contradiction with QT seems to appear. However, BM makes an additional assumption: that the Bohmian particle travels along a continuous trajectory that admits also the first derivative with respect to time, the velocity. For this velocity BM postulates the expression

$$\mathbf{v} = \frac{\hbar}{M}\,\nabla S, \qquad (1)$$

where M is the mass of the particle, and S is the function that appears in the exponent if the wave-function is put in the form $\Psi = R\, e^{iS}$, with R and S real. Then, if the Bohmian trajectory and velocity exist, the time-of-flight between two points $\mathbf{r}_0$ and $\mathbf{r}_1$ on a trajectory should be the integral

$$t_1 - t_0 = \int_{\mathbf{r}_0}^{\mathbf{r}_1} \frac{\mathrm{d}L}{v_L(\mathbf{r}, t)}, \qquad (2)$$

where the position vector $\mathbf{r}$ sweeps the trajectory, dL is the element of trajectory length, and $v_L$ is the projection upon dL of the Bohmian velocity at $\mathbf{r}$ and t (the time when the particle passes through the point $\mathbf{r}$).

The standard QT disagrees with Equation (1). The uncertainty principle forbids the coexistence of definite values for position and velocity, such that Equation (2) is also meaningless in QT.

A good tool for examining the Equations (1) and (2) is provided by experiments on single-particle interference. As shown in the next section, Equation (1) may entail that a Bohmian particle that enters an interference fringe is locked in it and has to travel along it until the end of that fringe. In consequence, if some fringes are longer and others shorter, as happens when a beam falls obliquely on a mirror, the particles that enter short fringes have a short way to go through the interference region, and the particles that enter long fringes have a long way through this region. A difference in time-of-flight follows from this. No such difference is predicted by the QT.

There remains a problem. In order to decide between the two theories one has to measure experimentally the times-of-flight. This is not a trivial task. The procedure of sensing the particle (without absorption) when it passes through the point $\mathbf{r}_0$ and recording the time $t_0$, then detecting the particle when it passes through the point $\mathbf{r}_1$ and recording the time $t_1$, is worthless. The first position measurement disturbs the Bohmian velocity (if it exists), such that the time difference is meaningless.

There is a wide literature on the arrival-time topic. Arrival-time distributions and averages for different experimental configurations are calculated theoretically; see for instance the review [5], the general treatment in [6], and references therein. Though, how to measure times-of-flight without the disturbance at $t_0$ is not clear.

An interesting idea of Muga et al. [7] (see also [8]) was to raise the particle to an unstable state by passing the particle through a laser beam. The unstable state decays with photon emission, and the photon detection indicates the presence of the particle.

Although [7,8] don't address the problem of the disturbance at $t_0$, the present text uses their idea for finding an alternative to the position measurement at $t_0$. The movement of a beam of unstable atoms is studied. The decay renders a set of such atoms more and more depleted with the distance from the laser beam, such that the degree of depletion of the set indicates how long the set traveled.

The beam is sent onto a mirror for atoms. An interference tableau of non-maximal visibility is obtained, through which the Bohm velocity (if it exists) would drag the atoms in such a way that abnormal effects would appear in the reflected wave.
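As a quick sanity check of the guidance law (1), applying it to a free plane wave (a standard textbook case, not a result specific to this paper) recovers the de Broglie relation:

$$\Psi = e^{i(\mathbf{k}\cdot\mathbf{r} - \omega t)} \;\Rightarrow\; S = \mathbf{k}\cdot\mathbf{r} - \omega t \;\Rightarrow\; \mathbf{v} = \frac{\hbar}{M}\nabla S = \frac{\hbar \mathbf{k}}{M},$$

i.e. $M\mathbf{v} = \hbar\mathbf{k}$. It is in interference regions, where several such waves superpose, that (1) departs from any classical expectation.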
The following sections are organized as follows. Section 2 illustrates the difference between the BM and QT predictions on a simple, ideal case, then shows a possible implementation. Section 3 examines the behavior of a beam of unstable atoms reflected by a mirror and finds the time-of-flight and the Bohmian trajectories. Section 4 comprises the discussion.

An Ideal Case

Consider a beam of particles falling on a perfectly reflecting mirror. For simplicity, let's assume that the beam is produced in a tilted form, Figure 1 (possibly by means of fields). Let's approximate the direct and the reflected beam by plane waves,

$$\psi_D \propto e^{i(k_x x - k_z z)}, \qquad \psi_R \propto e^{i(k_x x + k_z z)}. \qquad (3)$$

For the incidence angle of 45˚ one has $k_x = k_z \equiv \kappa$. So, in the region of interference the wave function is

$$\psi_I = \psi_D + \psi_R \propto \cos(\kappa z)\, e^{i\kappa x} \qquad (4)$$

(the subscript "I" stands for "interference").

Now, let's find the trajectories of two Bohmian particles, 1 and 2, that pass simultaneously through the points Q1, respectively Q2, Figure 1(a). We will work below with a more practical expression for the velocity than (1), which is typically used in the literature,

$$\mathbf{v} = \frac{\hbar}{M}\,\frac{\mathrm{Im}\!\left(\Psi^* \nabla \Psi\right)}{|\Psi|^2}. \qquad (5)$$

Substituting $\psi_D$ in this equation one gets for both particles the Bohm velocity $v_x = \hbar\kappa/M$, $v_z = -\hbar\kappa/M$. However, in the fringes the wave-function expression is (4), and the formula (5) yields the same $v_x$, but $v_z = 0$. That implies that once in a fringe, the Bohmian particle travels along that fringe until the end of that fringe, without passing from one fringe to another. At the fringe end, the control of the particle is taken by the returning wave $\psi_R$, and the particle begins to move with the Bohm velocity of $\psi_R$, calculated as above: $v_x = \hbar\kappa/M$, $v_z = +\hbar\kappa/M$. In Figure 1(a) one can see that particle 2 has a longer way to the detector than particle 1. From the points Q1 and Q2 down to the dotted line, both particles travel equal path-lengths, and so from the dotted line to the detector. But in the fringes, particle 1 passes almost immediately to the returning wave, while particle 2 has to travel the whole length of its fringe. That induces a delay in the arrival at the detector for particles that pass through the vicinity of Q2, compared with particles that pass through the vicinity of Q1. The difference in time can be easily calculated with the Bohm velocities found above.

No such things are predicted by the QT. Figure 1(b) shows the paths of two geometrical points (no particles) driven by the movement of the wave-function. They fly toward the mirror, then they return from it. One point follows the route Q1MP2, the other follows the route Q2NP1, and the lengths of the routes are equal.

A Practical Implementation

M. Köhl reported the results of a series of experiments with long and coherent beams of atoms [10,11], extracted from Bose-Einstein condensates. The extraction procedure is detailed in [12]. The atoms in the beam, initially in a state with no magnetic dipole, crossed a region swept by laser beams where the atoms absorbed the energy necessary to pass to a state with magnetic momentum (see Figure 2(a) in [11]). Thus the magnetic field began to act on them, and in fact repelled them. The treatment of the movement of a particle in a constant field can be found in [13]. The magnetic field implemented a mirror; in the region of superposition between the direct and the reflected beam, interference fringes appeared. The mirror surface, i.e. the region within which the probability to find an atom drops to zero, was extremely thin. These experiments and those described in [7,8] inspired the procedure of estimating time-of-flight described in the next section.
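Before moving to the unstable-atom scheme, the delay claimed at the end of the ideal case above can be put into rough numbers. Everything in this sketch is an illustrative assumption: the mass is roughly that of a rubidium-87 atom (as used in Köhl-type experiments), while the wave number and fringe lengths are arbitrary.

```python
# Rough numbers for the fringe-induced delay in the ideal 45-degree geometry:
# a Bohmian particle locked in a fringe crosses the interference region at the
# along-fringe speed v_x = hbar*kappa/M instead of leaving immediately, so two
# particles traveling fringe lengths L1 and L2 differ in arrival time by
# (L2 - L1)/v_x. All values are illustrative assumptions.
HBAR = 1.054571817e-34      # J*s
M = 1.44e-25                # kg, ~ rubidium-87 atom (assumption)
KAPPA = 1.0e7               # 1/m, toy wave number
L1, L2 = 1.0e-6, 1.0e-4     # fringe lengths traveled (m), hypothetical

v_x = HBAR * KAPPA / M
delay = (L2 - L1) / v_x
print(f"v_x = {v_x:.3e} m/s, Bohmian arrival-time difference = {delay:.3e} s")
```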
Interference with Unstable Particles

The purpose of this section is to show the difference between the BM and the QT predictions in a way that doesn't require the uncontrolled disturbance at t₀. On the contrary, a controlled disturbance is used. The particles are passed through a laser beam where they absorb a photon and rise to a level of higher energy. This level is supposed to be unstable and to decay in time, so that the depletion of the beam shows us how much time elapsed since the atom was excited.

The process of raising the atom to the excited state is not instantaneous, it doesn't occur at some sharp time t₀. But in the experiment described below, all the particles that cross the laser beam and rise to the excited state undergo the same transformation, which takes the same interval of time. Next, if the particles exiting some region of the source follow a longer way than the particles exiting another part of the source, the former particles display a stronger depletion due to the decay than the latter. Thus, the absolute time-of-flight can't be established, because the excitation takes some time. However, one can establish differences between times-of-flight according to the degree of depletion.

In the thought-experiment examined below, the trajectories of the Bohmian particles exiting some region of the source are longer than the trajectories of the Bohmian particles exiting another region of the source. The wave returning from the mirror is expected to show corresponding differences in depletion. For distinguishing the evolution of parts of the wave-function exiting different regions of the source, wide wave-packets are needed. Also, for studying the movement through the interference region, long wave-packets are needed for producing stable fringes during long intervals of time. All these requirements are met by the wave-packets used in Köhl's experiments. In addition, long wave-packets display a big indetermination in position, which entails a small indetermination in the linear momentum. In the absence of fields, such wave-packets can be well approximated by plane waves. To get an image of how long these wave-packets were, one can look at Figure 1 in [12].

The inconvenience with Köhl's experiments is that the atoms were accelerated by fields, and that complicates the calculations. In this section we will consider again the ideal case in which the direct and the reflected beam have, each, a quite well-defined linear momentum. (A magnetic field will be used too, however only for removing unwanted particles.) The question of how to implement the mirror for atoms will be left aside; there are different ways to do it and we won't deal here with that.

A Thought-Experiment

Consider a long beam of atoms as in [10,11], prepared in a state with magnetic number m = 1. The atom beam passes through a laser beam where the atom absorbs a photon and jumps to a higher energy level with m = 0. In continuation, the atom beam lands on a mirror and is reflected, Figure 2. Suppose that the state with m = 0 is unstable and decays to a lower energy state with m = 1 by emitting a photon. The magnetic field B pushes the atoms with m = 1 away from the atom beam.

The decay of the excited state is assumed to obey the exponential law

$$P(t) = P_0\, e^{-t/\tau},$$

where P₀ is the probability to find the atom in the unstable state at a certain time taken as t = 0, and P(t) is the probability to find it still in this state after an interval of time t. (It is known that the decay doesn't always evolve exponentially in time, see [9], but here the typical situation is addressed.)
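The logic of "depletion as a clock" can be illustrated with a short sketch. The lifetime and the surviving fractions below are hypothetical numbers, assumed only to show that the ratio of two depletions yields a time-of-flight difference, while the (unknown) excitation time drops out.

```python
import numpy as np

# Inferring a time-of-flight *difference* from the relative depletion of
# two sub-beams of unstable atoms, given P(t) = P0 * exp(-t / tau).
# P_short / P_long = exp((t_long - t_short) / tau), so the common
# excitation time cancels in the ratio.

tau = 2.0e-3                      # assumed lifetime of the excited state [s]
P_short, P_long = 0.82, 0.61      # hypothetical surviving fractions

delta_t = tau * np.log(P_short / P_long)
print(f"time-of-flight difference: {delta_t:.2e} s")
```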
We study here the beam of excited atoms. We will approximate the direct and the returning beam by plane waves as in the expressions (3), however we will take into consideration the losses due to decay. For simplicity let's assume $k_x = k_z = \kappa$ as before (the upper-script "e" stands for "excited"), so that in the interference region

$$\psi^e_I = \psi^e_D + \psi^e_R, \tag{6}$$

where $\psi^e_D$ is the direct wave and $\psi^e_R$ is what it becomes when returning from the mirror,

$$\psi^e_D = \frac{1}{\sqrt{2\pi}}\, e^{-t/2\tau}\, e^{i\kappa(x-z)}, \qquad \psi^e_R = -\frac{1}{\sqrt{2\pi}}\, e^{-t/2\tau}\, e^{i\kappa(x+z)}. \tag{7, 8}$$

(The fraction t/2τ that appears here, instead of t/τ as in the decay law, is due to the fact that the decay law refers to probabilities, while the expressions (6), (7), (8) give amplitudes of probability.)

The group velocity of our wave-packet has components of magnitude $v = \hbar\kappa/M$. $t_D$ is the time needed for a thin horizontal layer of the wave-packet to travel from the top of the interference region, $z_I$, to a fixed point C of height z; $t_R$ is the time interval since the layer was at $z_I$ until its second visit of the point C, i.e.

$$t_D = \frac{z_I - z}{v}, \qquad t_R = \frac{z_I + z}{v}, \tag{9}$$

taking the moment when the layer passes through $z_I$ as the time origin. Introducing $t_D$, $t_R$ and the convention (9) in the Equations (7) and (8), the direct and the returning beam become

$$\psi^e_D = \frac{1}{\sqrt{2\pi}}\, e^{-t_D/2\tau}\, e^{i\kappa(x-z)}, \tag{10}$$

$$\psi^e_R = -\frac{1}{\sqrt{2\pi}}\, e^{-t_R/2\tau}\, e^{i\kappa(x+z)}. \tag{11}$$

From the Equations (6), (10) and (11) there results the wave-function in the interference region,

$$\psi^e_I = \frac{e^{i\kappa x}}{\sqrt{2\pi}} \left( e^{-t_D/2\tau}\, e^{-i\kappa z} - e^{-t_R/2\tau}\, e^{i\kappa z} \right), \tag{12}$$

whose intensity is

$$|\psi^e_I|^2 = \frac{1}{2\pi} \left[ e^{-t_D/\tau} + e^{-t_R/\tau} - 2\, e^{-(t_D + t_R)/2\tau} \cos(2\kappa z) \right]. \tag{13}$$

This intensity entails a z-dependence of the fringe visibility V. Considering a small vicinity of some level z, the fringe visibility is the ratio of the difference between the maximal and minimal intensity in that vicinity, divided by the sum of the two. One gets

$$V(z) = \frac{2\, e^{-(t_D + t_R)/2\tau}}{e^{-t_D/\tau} + e^{-t_R/\tau}} = \frac{1}{\cosh\!\big( (t_R - t_D)/2\tau \big)} = \frac{1}{\cosh\!\big( z/(v\tau) \big)}. \tag{14}$$

The next subsection examines the movement of the Bohm particle through this pattern, and the implications.

The Bohm Velocity and the Time-of-Flight

For the calculus of the time-of-flight we will use the integral (2). Therefore, the Bohm velocity will be needed. Introducing $\psi^e_D$ from (10) in the Equation (5) one gets, outside the fringes, $v_x = \hbar\kappa/M$ and $v_z = -\hbar\kappa/M$; introducing $\psi^e_R$ from (11) in (5) one gets the same $v_x$, but $v_z$ changes sign. Let's notice that these values are equal to the group velocity components.

In the interference region the things are more complicated. One can check that $v_x$ remains the same but, to the difference from the experiment in Section 2, here $v_z$ isn't zero in the fringes. Using the wave-function (12) and the intensity (13) in the formula (5) one obtains

$$v_z = -\frac{\hbar\kappa}{M}\, \frac{\sinh\!\big( z/(v\tau) \big)}{2\sinh^2\!\big( z/(2v\tau) \big) + 2\sin^2(\kappa z)}. \tag{15}$$

The quantity $\kappa z_I$ is very big, since the fringe width is a couple of orders of magnitude smaller than $z_I$. Noticing that the leading factor in the RHS is, in absolute value, the x component of the Bohm velocity (as calculated above), the leading sign "−" indicates that as long as a particle is in the fringes, it only falls, never goes up, see Figure 3. We will see in the next subsection the implications of this fact.

With this velocity we can calculate the time-of-flight of a Bohmian particle along a Bohmian trajectory. Given two points A(x_A, z_A) and B(x_B, z_B) on a Bohm trajectory, we have according to the Equations (2) and (15),

$$t^{BM}_{flight} = \frac{1}{v} \int_{z_B}^{z_A} \frac{2\sinh^2\!\big( z/(2v\tau) \big) + 2\sin^2(\kappa z)}{\sinh\!\big( z/(v\tau) \big)}\, \mathrm{d}z.$$

Since the fringe width is extremely small compared with $z_I$, the hyperbolic functions are practically constant over intervals in which $\sin^2(\kappa z)$ changes many times, so its oscillations average out. There remains

$$t^{BM}_{flight} = \tau \ln \frac{\sinh\!\big( z_A/(v\tau) \big)}{\sinh\!\big( z_B/(v\tau) \big)}. \tag{16}$$

Note: this time-of-flight is between two points on the same Bohmian trajectory and in the fringe region. Outside the fringe region,

$$t_{flight} = \frac{z_A \mp z_B}{v}. \tag{17}$$

This expression is also valid in QT. Indeed, considering a thin layer that travels with the wave-packet, as we considered in Section 3.1, the time of flight from A to B is given by the Equations (17), with the sign "−" for a direct flight from A to B, and the sign "+" for an indirect flight, first from A to the mirror, then from the mirror to B.
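A quick numerical check, assuming the velocity (15) and the closed form (16) above, with all units set to 1 for illustration:

```python
import numpy as np
from scipy.integrate import quad

# Check of the fringe-region time-of-flight (16): integrate dz/|v_z| with
#   v_z = -v sinh(z/(v tau)) / (cosh(z/(v tau)) - cos(2 kappa z))
# and compare with tau * ln( sinh(z_A/(v tau)) / sinh(z_B/(v tau)) ).
# kappa is taken large so that many fringes fit between z_B and z_A.

v, tau, kappa = 1.0, 1.0, 200.0

def inv_vz(z):
    a = z / (v * tau)
    return (np.cosh(a) - np.cos(2 * kappa * z)) / (v * np.sinh(a))

z_A, z_B = 0.8, 0.2                      # the particle only falls
t_num, _ = quad(inv_vz, z_B, z_A, limit=2000)
t_ana = tau * np.log(np.sinh(z_A / (v * tau)) / np.sinh(z_B / (v * tau)))
print(t_num, t_ana)                      # close, up to the oscillatory term
```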
Let's repeat for the sake of clarity: outside the fringes $t^{BM}_{flight} = t^{QT}_{flight}$ and is given by the expression (17), but inside the fringes BM gives for the time-of-flight the expression (16), while in QT (17) is still valid.

Bohm Trajectories and the Reflected Wave

For the rationale that follows we will need the Bohmian trajectories. Then, let's first find their equation. The x component of the Bohm velocity is constant and the same inside and outside the interference region. So, we can write for two points A(x_A, z_A) and B(x, z) on a same Bohmian trajectory,

$$t_{flight} = \frac{x - x_A}{v}. \tag{18}$$

Equating with $t^{BM}_{flight}$ from Equation (16) for the region inside the fringes, and with $t_{flight}$ from Equation (17) for the regions outside the fringes, we get, respectively,

$$x - x_A = v\tau \ln \frac{\sinh\!\big( z_A/(v\tau) \big)}{\sinh\!\big( z/(v\tau) \big)}, \tag{19}$$

$$x - x_A = \mp (z - z_A), \tag{20}$$

where "−" is for the direct wave and "+" for the reflected wave.

Figure 3 illustrates a comb of Bohmian trajectories, labeled 0–20, that begin at equal distances, except for the trajectory 20 which is slightly closer to 19. One can see that the interference region behaves as a convergent lens, bringing the trajectories closer to one another. Toward the bottom of the interference region the trajectories agglomerate and some of them overlap. On the other hand, the border between the interference region and the reflected wave $\psi^e_R$ has a divergent effect. In the reflected wave the trajectories appear very rarefied.

These facts open a couple of problems.

1) In Figure 3 different trajectories overlap toward the bottom of the interference region. However, as long as the gradient of the wave-function is single-valued at each point (which is the present case), QT doesn't allow several lines of flux to merge into one, or one flux line to split into several. Of course, examining the trajectories in Figure 3 at a higher resolution it will be found that the apparently overlapping trajectories are in fact separated by small distances. But at a higher resolution one can draw a denser comb of trajectories. Again there will be adjacent trajectories that merge into one, and the problem will reappear at the new scale.

2) Assume that trajectories don't overlap, i.e. there is a minimal distance between trajectories (an assumption even more plausible if one works with fermions). Still, another problem appears. Let's imagine a transversal section through the direct beam, and consider the set of excited atoms present on this transversal section at the same time. Let's denote by δ the smallest distance between two atoms in this set. That means, δ is the distance between two neighbor Bohmian trajectories in $\psi^e_D$. The requirement of simultaneity is needed because sets of particles that begin their journey at different times may have their nets of trajectories displaced, one net with respect to the other. The distance between two trajectories that begin at different times may be arbitrarily small.

From the trajectory formulas (19) and (20) one finds out that the trajectories that pass through the neighborhood of the point C are about 16 times more rarefied than they are in $\psi^e_D$. BM tells us that the trajectories passing through the vicinity of C are short, see Figure 3, so the loss of particles by de-excitation is small and good statistics could be gathered. Then one should get that the distance between two particles detected at the same time in the vicinity of the point C never falls below 16δ.
To the contrary, toward the RHS border of the reflected wave, the distance measured between two simultaneously detected particles should decrease. The calculation shows that on the RHS border this distance may be as small as δ/30. Of course, BM tells us that the trajectories here are much longer, so the statistics is poor. Two neighbor particles may not reach the detector together because one of them was lost by de-excitation. Though, examining many sets of simultaneously detected particles, one should sometimes obtain distances smaller than the minimal distance obtained in the neighborhood of the point C.

QT does not confirm such effects.

Discussions

Bohm's mechanics is a salutary attempt to get rid of the non-understandable reduction postulate of von Neumann. The explanation that a click in a detector on the branch a of the wave-function, and the silence of the detectors on the other branches, is caused by something in the branch a that isn't in the other branches, is most plausible and appealing. Indeed, the detector doesn't click at its whim, it responds to a stimulus present in the wave-function. Vis-à-vis this explanation, the reduction postulate offers no explanation of why this detector responds and the others don't.

It is therefore important to see whether Bohm's explanation, together with the other assumptions of BM, is contradiction-free. If a contradiction nevertheless appears, it is desirable to find which one of the assumptions causes it. The present analysis calls into question Bohm's velocity formula.

In a theory that aims at producing the same predictions as QT, the idea of simultaneously well-defined values for position and velocity raises suspicions. This text doesn't prove that this idea is wrong. It proves less: that Bohm's formula for velocity creates problems. Whether this formula can be replaced by a better one for building a Bohm-like mechanics is still to be investigated. It wouldn't be a simple task, because Bohm's velocity formula fits very well in the continuity equation, and any other formula should preserve this property. Also, any Bohm-like mechanics should be able to explain why in single particle interference the probability to find the particle in the bright fringes is bigger and in the dark fringes smaller.

Figure 1(a) shows the consequences of these facts, and Figure 1(b) shows the quantum replica.

Figure 1. Bohmian trajectories vs. quantum paths. (Not to scale). The dark strips in the interference region represent allowed fringes, and the bright strips forbidden fringes. For eye-guiding, the path starting at Q₁ is marked with a full line and the path starting at Q₂ with a dashed line. (a) Bohm trajectories; (b) Paths of two geometrical points driven by the wave-function.

Figure 2. An interference experiment with unstable atoms. The figure illustrates (not to scale) the depletion increasing with the distance traveled from the laser beam. The de-excited atoms are not shown. The level z = 0 is the mirror surface. z_I is the top level of the interference region. The layer between the two horizontal lines is considered as moving with the group velocity. C is a fixed point in space.
Bohmian trajectories.The interference fringes are not shown because they are too narrow.The black lines represent a comb of Bohmian trajectories of the excited atoms.The numbers on the top of the figure label the trajectories in the comb.The trajectories 0 -19 begin at equal distances, while the trajectory 20 is slightly closer to 19.
2019-04-19T13:05:11.838Z
2012-12-14T00:00:00.000
{ "year": 2012, "sha1": "cd000c1690f60e22978ffdfb02c49522441a307d", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=25489", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "cd000c1690f60e22978ffdfb02c49522441a307d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
207333279
pes2o/s2orc
v3-fos-license
Comparative study of intracavity KTP-based Raman generation between Nd:YAP and Nd:YAG lasers operating on the ⁴F₃/₂ → ⁴I₁₃/₂ transition

Extending the spectral wavelengths of the diode-pumped Nd-doped lasers at 1.3 μm with the KTP crystal in the intracavity Raman configuration is reported for the first time to the best of our knowledge. A systematic comparison is performed to show that a better optical conversion efficiency for the Nd:YAP/KTP Raman laser could be achieved thanks to the higher peak power and linearly polarized radiation at 1341 nm, whereas up to four Stokes emission lines are generated from the Nd:YAG/KTP Raman laser as a result of the fundamental dual-color operation at 1319 and 1338 nm. The maximum Stokes output power of the developed Nd:YAP/KTP Raman laser reaches 1.04 W under an incident pump power of 16 W and a pulse repetition rate of 10 kHz, corresponding to a diode-to-Stokes conversion efficiency as high as 6.5%. The largest pulse energy and highest peak power are evaluated to be up to 104 μJ and 34.7 kW, respectively. ©2015 Optical Society of America

OCIS codes: (140.3550) Lasers, Raman; (140.3540) Lasers, Q-switched; (140.3580) Lasers, solid-state; (140.3480) Lasers, diode-pumped; (140.3530) Lasers, neodymium.

References and links
1. M. J. Weber, M. Bass, K. Andringa, R. R. Monchamp, and E. Comperchio, "Czochralski growth and properties of YAlO3 laser crystals," Appl. Phys. Lett. 15(10), 342–345 (1969).
2. M. J. Weber and T. E. Varitimos, "Optical spectra and intensities of Nd in YAlO3," J. Appl. Phys. 42(12), 4996–5005 (1971).
3. A. A. Kaminskii, S. E. Sarkisov, I. V. Mochalov, L. K. Aminov, and A. O. Ivanov, "Anisotropy of spectroscopic characteristics in the biaxial YAlO3-Nd laser crystals," Phys. Stat. Solidi 51(2), 509–520 (1979).
4. F. Hanson and P. Poirier, "Multiple-wavelength operation of a diode-pumped Nd:YAlO3 laser," J. Opt. Soc. Am. B 12(7), 1311–1315 (1995).
5. R. F. Wu, K. S. Lai, H. Wong, W. J. Xie, Y. Lim, and E. Lau, "Multiwatt mid-IR output from a Nd:YALO laser pumped intracavity KTA OPO," Opt. Express 8(13), 694–698 (2001).
6. H. Y. Zhu, Y. M. Duan, G. Zhang, C. H. Huang, Y. Wei, W. D. Chen, H. Y. Wang, and G. Qiu, "High-power LD end-pumped intra-cavity Nd:YAlO3/KTiOAsO4 optical parametric oscillator emitting at 1562 nm," Laser Phys. Lett. 7(10), 703–706 (2010).
7. Y. Lü, P. Zhai, J. Xia, X. Fu, and S. Li, "Simultaneous orthogonal polarized dual-wavelength continuous-wave laser operation at 1079.5 nm and 1064.5 nm in Nd:YAlO3 and their sum-frequency mixing," J. Opt. Soc. Am. B 29(9), 2352–2356 (2012).
8. H. Y. Zhu, Y. M. Duan, H. Y. Wang, Z. H. Shao, Y. J. Zhang, G. Zhang, J. Zhang, and D. Y. Tang, "Compact Nd:YAlO3/RbTiOPO4 based intra-cavity optical parametric oscillator emit at 1.65 and 3.13 μm," IEEE J. Sel. Top. Quantum Electron. 21(1), 1600105 (2015).
9. Y. F. Chen, T. M. Huang, C. L. Wang, and L. J. Lee, "Compact and efficient 3.2-W diode-pumped Nd:YVO4/KTP green laser," Appl. Opt. 37(24), 5727–5730 (1998).
10. S. Bai and J. Dong, "GTR-KTP enhanced stable intracavity frequency doubled Cr,Nd:YAG self-Q-switched green laser," Laser Phys. 25(2), 025002 (2015).
11. Y. F. Chen, Y. S. Chen, and S. W. Tsai, "Diode-pumped Q-switched laser with intracavity sum frequency mixing in periodically poled KTP," Appl. Phys. B 79(2), 207–210 (2004).
12. J. Y. Huang, W. Z. Zhuang, Y. P. Huang, Y. J. Huang, K. W. Su, and Y. F. Chen, "Improvement of stability and efficiency in diode-pumped passively Q-switched intracavity optical parametric oscillator with a monolithic cavity," Laser Phys. Lett. 9(7), 485–490 (2012).
13. Q. Cui, X. Shu, X. Le, and X. Zhang, "70-W average-power doubly resonant optical parametric oscillator at 2 μm with single KTP," Appl. Phys. B 117(2), 639–643 (2014).
14. G. A. Massey, T. M. Loehr, L. J. Willis, and J. C. Johnson, "Raman and electrooptic properties of potassium titanate phosphate," Appl. Opt. 19(24), 4136–4137 (1980).
15. Y. B. Band, J. R. Ackerhalt, J. S. Krasinski, and D. F. Heller, "Intracavity Raman lasers," IEEE J. Quantum Electron. 25(2), 208–213 (1989).
16. H. M. Pask, "The design and operation of solid-state Raman lasers," Prog. Quantum Electron. 27(1), 3–56 (2003).
17. P. Cerný, H. Jelínková, P. G. Zverev, and T. T. Basiev, "Solid state lasers with Raman frequency conversion," Prog. Quantum Electron. 28(2), 113–143 (2004).
18. J. A. Piper and H. M. Pask, "Crystalline Raman lasers," IEEE J. Sel. Top. Quantum Electron. 13(3), 692–704 (2007).
19. G. H. Watson, "Polarized Raman spectra of KTiOAsO4 and isomorphic nonlinear-optical crystals," J. Raman Spectrosc. 22(11), 705–713 (1991).
20. C. S. Tu, A. R. Guo, R. Tao, R. S. Katiyar, R. Guo, and A. S. Bhalla, "Temperature dependent Raman scattering in KTiOPO4 and KTiOAsO4 single crystals," J. Appl. Phys. 79(6), 3235–3240 (1996).
21. Y. F. Chen, "Stimulated Raman scattering in a potassium titanyl phosphate crystal: simultaneous self-sum frequency mixing and self-frequency doubling," Opt. Lett. 30(4), 400–402 (2005).
22. S. Pearce, C. L. M. Ireland, and P. E. Dyer, "Solid-state Raman laser generating <1 ns, multi-kilohertz pulses at 1096 nm," Opt. Commun. 260(2), 680–686 (2006).
23. Y. T. Chang, Y. P. Huang, K. W. Su, and Y. F. Chen, "Diode-pumped multi-frequency Q-switched laser with intracavity cascade Raman emission," Opt. Express 16(11), 8286–8291 (2008).
24. Z. Liu, Q. Wang, X. Zhang, Z. Liu, J. Chang, H. Wang, S. Zhang, S. Fan, W. Sun, G. Jin, X. Tao, S. Zhang, and H. Zhang, "A KTiOAsO4 Raman laser," Appl. Phys. B 94(4), 585–588 (2009).
25. Z. J. Liu, Q. P. Wang, X. Y. Zhang, S. S. Zhang, J. Chang, H. Wang, S. Z. Fan, W. J. Sun, X. T. Tao, S. J. Zhang, and H. J. Zhang, "1120 nm second-Stokes generation in KTiOAsO4," Laser Phys. Lett. 6(2), 121–124 (2009).
26. H. T. Huang, J. L. He, and Y. Wang, "Second Stokes 1129 nm generation in gray-trace resistance KTP intracavity driven by a diode-pumped Q-switched Nd:YVO4 laser," Appl. Phys. B 102(4), 873–878 (2011).
27. H. Zhu, Z. Shao, H. Wang, Y. Duan, J. Zhang, D. Tang, and A. A. Kaminskii, "Multi-order Stokes output based on intra-cavity KTiOAsO4 Raman crystal," Opt. Express 22(16), 19662–19667 (2014).
28. G. Kh. Kitaeva, "Terahertz generation by means of optical lasers," Laser Phys. Lett. 5(8), 559–576 (2008).
29. P. Zhao, S. Ragam, Y. J. Ding, and I. B. Zotova, "Power scalability and frequency agility of compact terahertz source based on frequency mixing from solid-state lasers," Appl. Phys. Lett. 98(13), 131106 (2011).
30. W. Wang, Z. Cong, X. Chen, X. Zhang, Z. Qin, G. Tang, N. Li, C. Wang, and Q. Lu, "Terahertz parametric oscillator based on KTiOPO4 crystal," Opt. Lett. 39(13), 3706–3709 (2014).
31. H. Li, R. K. Hanson, and J. B. Jeffries, "Diode laser-induced infrared fluorescence of water vapour," Meas. Sci. Technol. 15(7), 1285–1290 (2004).
32. A. D. Griffiths and A. F. P.
Houwing, "Diode laser absorption spectroscopy of water vapor in a scramjet combustor," Appl. Opt. 44(31), 6653–6659 (2005).
33. H. Li, A. Farooq, J. B. Jeffries, and R. K. Hanson, "Near-infrared diode laser absorption sensor for rapid measurements of temperature and water vapor in a shock tube," Appl. Phys. B 89(2–3), 407–416 (2007).
34. E. Gregor, D. E. Nieuwsma, and R. D. Stultz, "20 Hz eyesafe laser rangefinder for air defense," Proc. SPIE 1207, 124–135 (1990).
35. L. R. Marshall, J. Kasinski, and R. L. Burnham, "Diode-pumped eye-safe laser source exceeding 1% efficiency," Opt. Lett. 16(21), 1680–1682 (1991).
36. H. Y. Zhu, G. Zhang, C. H. Huang, Y. Wei, L. X. Huang, A. H. Li, and Z. Q. Chen, "1318.8 nm/1338.2 nm simultaneous dual-wavelength Q-switched Nd:YAG laser," Appl. Phys. B 90(3–4), 451–454 (2008).
37. H. Liu, M. Gong, X. Wushouer, and S. Gao, "Compact corner-pumped Nd:YAG/YAG composite slab 1319 nm/1338 nm laser," Laser Phys. Lett. 7(2), 124–129 (2010).
38. L. Guo, R. Lan, H. Liu, H. Yu, H. Zhang, J. Wang, D. Hu, S. Zhuang, L. Chen, Y. Zhao, X. Xu, and Z. Wang, "1319 nm and 1338 nm dual-wavelength operation of LD end-pumped Nd:YAG ceramic laser," Opt. Express 18(9), 9098–9106 (2010).
39. Y. Duan, H. Zhu, C. Xu, H. Yang, D. Luo, H. Lin, J. Zhang, and D. Tang, "Comparison of the 1319 and 1338 nm dual-wavelength emission of neodymium-doped yttrium aluminum garnet ceramic and crystal lasers," Appl. Phys. Express 6(1), 012701 (2013).
40. W. Chen, Y. Inagawa, T. Omatsu, M. Tateda, N. Takeuchi, and Y. Usuki, "Diode-pumped, self-stimulating, passively Q-switched Nd:PbWO4 Raman laser," Opt. Commun. 194(4–6), 401–407 (2001).
41. A. A. Demidovich, P. A. Apanasevich, L. E. Batay, A. S. Grabtchikov, A. N. Kuzmin, V. A. Lisinetskii, V. A. Orlovich, O. V. Kuzmin, V. L. Hait, W. Kiefer, and M. B. Danailov, "Sub-nanosecond microchip laser with intracavity Raman conversion," Appl. Phys. B 76(5), 509–514 (2003).
42. R. Frey, A. de Martino, and F. Pradère, "High-efficiency pulse compression with intracavity Raman oscillators," Opt. Lett. 8(8), 437–439 (1983).
43. J. T. Murray, W. L. Austin, and R. C. Powell, "Intracavity Raman conversion and Raman beam cleanup," Opt. Mater. 11(4), 353–371 (1999).

Introduction

The output characteristics of solid-state lasers are mainly determined by the gain medium. To date, the Nd:YAG crystal, due to its excellent mechanical and optical properties, is widely studied in developing various kinds of laser architectures and popularly utilized in commercial products. From a review of the previous literature, it was pointed out that the Nd:YAP crystal, also known as Nd:YAlO or Nd:YAlO3, is a suitable candidate for replacing the Nd:YAG crystal while keeping some similar physical properties such as hardness and thermal conductivity. Although these two host materials are both derived from the Y2O3-Al2O3 system [1-3], the different composition ratios lead the crystalline host to be orthorhombic in the Nd:YAP crystal rather than cubic in the Nd:YAG crystal. Therefore, a more efficient nonlinear wavelength conversion could usually be readily achieved with the Nd:YAP laser thanks to the linearly polarized emission resulting from the natural crystal birefringence [4-8].
The potassium titanyl phosphate (KTP) crystal is a well-known nonlinear crystal for producing coherent radiation covering a large portion of the spectral range through second harmonic generation [9,10], sum and difference frequency generation [7,11], and optical parametric oscillation [12,13]. Besides the second-order nonlinear response, the early research on the spontaneous Raman spectrum [14] indicated the feasibility of the KTP crystal as a practical frequency converter via stimulated Raman scattering (SRS), an attractive method of wavelength conversion based on the third-order nonlinearity [15-18]. The isomorphs of the KTP crystal, including the potassium titanyl arsenate (KTA) and rubidium titanyl phosphate (RTP) materials, were also identified as efficient Raman-active media [19,20], and over the past few years, the intracavity generation of coherent Stokes waves driven by diode-pumped Nd-doped lasers at 1.06 μm has been extensively demonstrated based on the X(ZZ)X configuration [21-27]. In comparison with other crystalline Raman materials such as vanadates and tungstates, the relatively small frequency shifts of the KTP crystal and its isomorphs allow for easily acquiring multiple Stokes emission lines through the cascaded SRS process, which are desirable for producing coherent terahertz radiation by difference frequency generation [28-30]. Even so, to the best of our knowledge, the KTP-based SRS process has not been realized to extend the spectral wavelengths of the Nd-doped lasers operating on the ⁴F₃/₂ → ⁴I₁₃/₂ transition.

In this work, we demonstrate an intracavity KTP-based Raman radiation with a frequency shift of 267 cm⁻¹ in a compact diode-end-pumped actively Q-switched Nd:YAP laser at 1341 nm for the first time. Under an incident pump power of 16 W and a pulse repetition rate of 10 kHz, the total Raman output power of 1.04 W, including the first and second Stokes components of 0.46 W at 1391 nm and 0.58 W at 1445 nm, is efficiently generated with a pulse duration as short as 3 ns, corresponding to a diode-to-Stokes conversion efficiency of up to 6.5%. The largest pulse energy and highest peak power are evaluated to be 104 μJ and 34.7 kW, respectively. We also prepare a Nd:YAG crystal to make a systematic comparison under a similar operating condition. It is experimentally found that although lower output power is obtained from the Nd:YAG/KTP Raman laser, up to four Stokes emission lines could be produced thanks to the dual-wavelength operation of the fundamental wave at 1319 and 1338 nm. The first and second Stokes spectral wavelengths generated in our developed KTP-based Raman oscillators can find use in moisture and water-vapor detection [31-33] as well as laser radar, range finder, telemetry, and other remote-sensing applications [34,35].
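The reported Stokes lines can be checked directly from the 267 cm⁻¹ shift: each Stokes order is shifted down in wavenumber by the Raman frequency. The short sketch below assumes nothing beyond this arithmetic.

```python
# Stokes wavelengths implied by the 267 cm^-1 KTP Raman shift:
# lambda_S = 1 / (1/lambda_F - order * delta_nu), with wavenumbers in cm^-1.

RAMAN_SHIFT = 267.0  # cm^-1

def stokes_nm(fundamental_nm, order=1):
    wavenumber = 1e7 / fundamental_nm - order * RAMAN_SHIFT  # cm^-1
    return 1e7 / wavenumber                                  # nm

for lam in (1341.0, 1319.0, 1338.0):
    print(lam, "->", round(stokes_nm(lam, 1), 1), round(stokes_nm(lam, 2), 1))
# 1341 -> ~1390.8 and ~1444.4 nm, close to the measured 1391 and 1445 nm lines
```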
Experimental setup

Figure 1 schematically depicts the experimental arrangement of the intracavity Raman oscillator based on the KTP crystal pumped by diode-end-pumped actively Q-switched Nd-doped lasers at 1.3 μm, where a shared configuration for the fundamental and Stokes wavelengths was utilized. The pump source was a fiber-coupled laser diode at 808 nm with a core diameter of 800 μm and a numerical aperture of 0.14. A pair of plano-convex coupling lenses with focal lengths of 25.4 mm was utilized to reimage the pump beam into the laser crystal with a spot radius of 400 μm. The input mirror was a plane mirror coated for anti-reflection (AR, R < 0.2%) at 808 nm on the entrance surface, and for high reflection (HR, R > 99.8%) in the range of 1300-1500 nm as well as for high transmission (HT, T > 90%) at 808 nm on the other surface. The Nd:YAP and Nd:YAG crystals were employed for conducting a systematic comparison at the fundamental and Stokes wavelengths. The Nd:YAP crystal with the dimension of 3 × 3 × 10 mm³ had a doping concentration of 1%, whereas the Nd:YAG crystal with a diameter of 4 mm and a length of 10 mm had a doping concentration of 0.8%. The KTP crystal with the size of 4 × 4 × 20 mm³ was x-cut along θ = 90° and φ = 0° to realize the X(ZZ)X configuration in the SRS process. Both sides of the Nd:YAP, Nd:YAG, and KTP crystals were AR coated in the range of 1300-1500 nm. These crystals were also wrapped with indium foil and mounted in water-cooled copper holders at a temperature of 18 °C. A 20-mm-long acousto-optical Q-switch (Gooch & Housego) had AR coating at the lasing wavelengths on both sides, and it was driven at an RF frequency of 41 MHz and a power of 25 W. A plane mirror with HR coating at the fundamental wavelength was used as the Raman output coupler. Its reflectivity gradually decreases with increasing wavelength from 1350 to 1480 nm. The values of the reflectivities for the fundamental wavelengths and the individual Stokes emission lines generated in the experiment are as follows: R = 99.5% at 1319, 1338, and 1341 nm, R = 99.0% at 1368 nm, R = 98.5% at 1389 and 1391 nm, R = 91.31% at 1420 nm, R = 29.1% at 1442 nm, and R = 25.8% at 1445 nm. All optical components were placed as close as possible to have a cavity length of 77 mm.
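For later reference, the 77 mm cavity sets the photon round-trip time that enters the rate-equation picture below. A quick estimate (ignoring the extra optical path contributed by the intracavity crystals, so the real value is somewhat longer):

```python
# Rough photon round-trip time T_r = 2 L / c for the 77 mm cavity,
# neglecting the refractive indices of the intracavity crystals.

c = 2.998e8      # speed of light [m/s]
L_cav = 0.077    # geometric cavity length [m]

T_r = 2 * L_cav / c
print(f"T_r ~ {T_r * 1e9:.2f} ns")   # ~0.51 ns
```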
The spectral information on the laser output was registered by an optical spectrum analyzer (Advantest, Q8381A) that employs a diffraction grating monochromator for high-speed measurement of pulsed light with a resolution of 0.1 nm. The pulse temporal behaviors were recorded by a digital oscilloscope (LeCroy, Wavepro 7100, 10 G samples/s, 1 GHz bandwidth) with a fast InGaAs photodiode. First of all, the actively Q-switched performances at the fundamental wavelength for the Nd:YAP and Nd:YAG lasers were comparatively studied, where the aforementioned Raman output coupler was replaced by a plane output coupler with a reflectivity of 96% in the range of 1300-1350 nm. Figures 2(a)-2(d) illustrate the results of average output power, pulse energy, pulse duration, and peak power at 1.3 μm as a function of the pulse repetition rate. The average output power and pulse energy for both lasers are experimentally found to be quite comparable, as exhibited in Figs. 2(a) and 2(b). On the one hand, the average output power increases from 1.16 to 1.99 W and the pulse energy decreases from 232 to 100 μJ when the pulse repetition rate changes from 5 to 20 kHz for the Nd:YAP laser. On the other hand, the average output power rises from 1.24 to 1.9 W and the pulse energy reduces from 248 to 95 μJ as the pulse repetition rate varies from 5 to 20 kHz for the Nd:YAG laser. In contrast, the pulse duration obtained from the Nd:YAP laser is observed to be generally shorter than that obtained from the Nd:YAG laser by an amount of 20-30 ns, depending on the pulse repetition rate, as shown in Fig. 2(c). Consequently, the Nd:YAP laser could emit Q-switched pulses with higher peak power as compared with the Nd:YAG laser, as displayed in Fig. 2(d), where the largest values for the Nd:YAP and Nd:YAG lasers are 3.9 and 3.1 kW, respectively. Figures 2(e) and 2(f) describe the optical spectra for both cases, with the insets showing the measured room-temperature fluorescent profiles. The central wavelength for the Nd:YAP laser is located at 1341.3 nm with a full width at half maximum of approximately 0.2 nm. For the Nd:YAG laser, it was experimentally found that dual-wavelength operation is achieved due to the comparable stimulated emission cross sections at 1319 and 1338 nm [36-39], as can also be deduced from the inset in Fig. 2(f).

Output characteristics of the intracavity KTP-based Raman generations

Then, the conversion efficiencies in the SRS process based on the KTP crystal were comparatively investigated with the Nd:YAP and Nd:YAG lasers operating on the ⁴F₃/₂ → ⁴I₁₃/₂ transition. The best results are achieved at a pulse repetition rate of 10 kHz. During the experiment, only the first and second Stokes emission lines were generated, and no higher-order Stokes wavelength was detected. The average output powers of the fundamental and individual Stokes components at a pulse repetition rate of 10 kHz are described in Fig. 3(a) for the Nd:YAP/KTP Raman laser. The fundamental output power is measured to be on the order of several tens of milliwatts in the whole operating range. The pump threshold for the first Stokes wave is about 5.1 W and its output power continuously increases with the pump power and then saturates at a level of around 0.46 W.
When the pump power exceeds 6.9 W, the second Stokes radiation starts to emit and grows quickly with the pump power, eventually becoming the dominant portion of the Raman output. At the maximum incident pump power of 16 W, the total Raman output power of 1.04 W, containing the first and second Stokes components of 0.46 and 0.58 W, is efficiently generated, corresponding to a diode-to-Stokes conversion efficiency of up to 6.5%. Alternatively, the conversion efficiency with respect to the output power available from the fundamental laser at 1341 nm reaches 63.8%. The Raman pulse energy is evaluated to be 104 μJ under a pulse repetition rate of 10 kHz. For the Nd:YAG/KTP Raman laser, similar behaviors for the fundamental and Stokes waves are obtained, except that the pump thresholds for the first and second Stokes components are increased to 6.2 and 8.9 W, as illustrated in Fig. 3(b). At the maximum incident pump power of 16 W, a lower total Raman output power of 0.47 W is obtained, including the first and second Stokes components of 0.09 and 0.38 W, respectively. The conversion efficiencies with respect to the input diode power and the output power obtainable from the fundamental laser at 1.3 μm are 2.9 and 27.6%, respectively. The Raman pulse energy is estimated to be 47 μJ under a pulse repetition rate of 10 kHz. The lower peak power and randomly polarized emission offered by the fundamental Q-switched Nd:YAG laser might be the reason why a considerably lower conversion efficiency in the SRS process is obtained for the Nd:YAG/KTP Raman laser.

The behavior of the cascaded SRS process for the current KTP-based Raman laser could be understood more clearly with the help of a set of spatially independent coupled rate equations, given in [40,41], which describe the temporal evolutions of the population inversion density n, the intracavity fundamental photon density φ₀, and the intracavity ith-order Stokes photon density φᵢ. In these equations, r_p is the rate of the pump density, c is the speed of light, σ and τ are the stimulated emission cross section and the upper-state lifetime of the gain medium, T_r is the photon round-trip time in the resonator, l_g is the length of the gain medium, g is the Raman gain parameter, h is the Planck constant, τ₀ is the photon lifetime at the fundamental light frequency ν₀, l_R is the length of the Raman-active material, τᵢ is the photon lifetime of the ith-order Stokes wave at the light frequency νᵢ, and N represents the highest Stokes order that the Raman laser could generate. For the specific ith-order Stokes pulse to build up, the first derivative of φᵢ with respect to time should be greater than zero, which means a threshold criterion should be satisfied. It is apparent that when the photon number of the lower (i−1)th-order Stokes wave reaches this threshold value, it can act as the pump source to produce the next higher ith-order Stokes radiation through the SRS conversion. This process could be repeated until the highest Nth-order Stokes beam is generated.

The optical spectra at incident pump powers of 5.5, 6.8, 11.4, and 16 W are exhibited in Figs. 4(a)-4(d) for the Nd:YAP/KTP Raman laser. The evolution of the optical spectra with respect to the incident pump power is consistent with the behaviors of the output powers for the individual spectral components recorded in Fig. 3(a).
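To make the sequential, thresholded build-up concrete, here is a schematic toy model of cascaded intracavity SRS in Python. It only mimics the structure of the rate equations cited above: all coefficients (r_p, G, tau_u, gamma, K) are illustrative placeholders rather than the paper's calibrated quantities, and a continuous pump is assumed instead of Q-switched operation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Schematic cascaded-SRS toy model: n is the inversion density,
# phi[0] the fundamental photon density, phi[1], phi[2] the Stokes orders.

r_p   = 5.0                  # pump rate (arbitrary units)
G     = 2.0                  # laser gain coefficient
tau_u = 50.0                 # upper-state lifetime
gamma = [0.1, 0.3, 3.0]      # cavity loss rates: fundamental, S1, S2
K     = 1.0                  # Raman coupling per cascade step

def rhs(t, y):
    n, phi = y[0], y[1:]
    dn = r_p - G * n * phi[0] - n / tau_u
    dphi = np.empty_like(phi)
    dphi[0] = phi[0] * (G * n - gamma[0] - K * phi[1])      # depleted by S1
    dphi[1] = phi[1] * (K * phi[0] - gamma[1] - K * phi[2]) # pumped by phi0
    dphi[2] = phi[2] * (K * phi[1] - gamma[2])              # pumped by S1
    return np.concatenate(([dn], dphi))

y0 = [0.0, 1e-6, 1e-6, 1e-6]   # seed photons from noise
sol = solve_ivp(rhs, (0.0, 200.0), y0, method="LSODA", rtol=1e-8)
print(sol.y[1:, -1])
```

With these numbers the second Stokes order only switches on once the first Stokes photon density exceeds its threshold γ₂/K, which is exactly the cascade criterion described above.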
As shown in Fig. 4(d), the central wavelengths of the first and second Stokes radiations are measured to be 1391.4 and 1444.9 nm, respectively. The frequency shifts between the Stokes emission lines and the fundamental wavelength at 1341 nm agree very well with the asymmetric bending mode of a distorted TiO₆ octahedron (267 cm⁻¹). For the Nd:YAG/KTP Raman oscillator, Figs. 4(a')-4(d') describe the optical spectra at incident pump powers of 5.5, 6.8, 11.4, and 16 W. Intriguingly, it is experimentally found that up to four Stokes spectral lines could be derived from the dual-wavelength Nd:YAG laser at 1.3 μm. Through the cascaded SRS process, the Stokes waves at 1367.9 and 1420.5 nm are created by the fundamental light at 1319.2 nm, while the Stokes radiations at 1388.7 and 1441.7 nm originate from the fundamental light at 1338.5 nm. These results imply that the cascaded SRS conversion from a multi-wavelength laser might be a promising approach to emit a wealth of spectral lines with small wavelength separations. During the experiment, weak red light was also observed for both Nd:YAP/KTP and Nd:YAG/KTP Raman lasers as a result of second harmonic and sum frequency generation of the fundamental and Stokes waves in the KTP crystal, as shown in Figs. 4(e) and 4(e'), respectively. Typical temporal behaviors of the residual fundamental and Raman output pulses at an incident pump power of 16 W and a pulse repetition rate of 10 kHz are illustrated in Fig. 5. The pulse-to-pulse amplitude stabilities are experimentally found to be better than ±10% for both Nd:YAP/KTP and Nd:YAG/KTP Raman pulses, as depicted in Figs. 5(a) and 5(c). It can be seen that the depletion of the falling edge of the fundamental light is accompanied by the quick build-up of the Stokes wave. Furthermore, the nonlinear frequency conversion of the SRS process leads to significant pulse shortening for the Raman laser as compared with the fundamental light [42]. For the Nd:YAP/KTP Raman laser, the shortest Stokes pulse duration illustrated in Fig. 5(b) is as narrow as 3 ns, corresponding to a peak power as high as 34.7 kW. For the Nd:YAG/KTP Raman laser, the shortest Stokes pulse duration described in Fig. 5(d) is 5 ns, corresponding to a peak power of 10 kW. Finally, because of the beam clean-up effect of the SRS process [43], the beam qualities of the Stokes radiation were generally found to be better than those of the fundamental light. With a knife-edge method, the beam quality factors of the Raman lasers are measured to be M² < 1.5 for both orthogonal directions.

Conclusion

In summary, the KTP crystal has been successfully employed as an intracavity Raman-active medium for extending the spectral ranges of the diode-pumped Nd:YAP and Nd:YAG lasers operating on the ⁴F₃/₂ → ⁴I₁₃/₂ transition with the frequency shift of 267 cm⁻¹ for the first time. Experimental results have clearly shown that although up to four Stokes emission lines could be generated from the Nd:YAG/KTP Raman laser due to the fundamental dual-wavelength operation at 1319 and 1338 nm, the higher peak power and linearly polarized emission result in a better SRS conversion efficiency for the Nd:YAP/KTP Raman laser. Under an incident pump power of 16 W and a pulse repetition rate of 10 kHz, the developed Nd:YAP/KTP Raman laser efficiently generated a total output power of 1.04 W with a pulse duration down to 3 ns, where the first and second Stokes output powers were 0.46 and 0.58 W.
The corresponding diode-to-Stokes conversion efficiency was up to 6.5%. The largest pulse energy and highest peak power of the developed Raman laser were evaluated to be up to 104 μJ and 34.7 kW, respectively.

Fig. 2. (a) Average output power, (b) pulse energy, (c) pulse duration, and (d) peak power with respect to the pulse repetition rate under an incident pump power of 16 W for the Nd:YAP and Nd:YAG lasers at 1.3 μm; optical spectra for the (e) Nd:YAP and (f) Nd:YAG lasers with the insets showing the measured room-temperature fluorescent profiles.

Fig. 3. Average output powers of the fundamental and individual Stokes components at a pulse repetition rate of 10 kHz for the (a) Nd:YAP/KTP and (b) Nd:YAG/KTP Raman lasers.

Fig. 4. Optical spectra in the near infrared region at incident pump powers of (a) 5.5 W, (b) 6.8 W, (c) 11.4 W, and (d) 16 W, as well as (e) the optical spectrum in the visible region, for the Nd:YAP/KTP Raman laser under a pulse repetition rate of 10 kHz; (a')-(e') being the corresponding cases for the Nd:YAG/KTP Raman laser.

Fig. 5. Oscilloscope traces at an incident pump power of 16 W and a pulse repetition rate of 10 kHz for the Nd:YAP/KTP Raman laser with the time span of (a) 500 μs and (b) 200 ns; and those for the Nd:YAG/KTP Raman laser with the time span of (c) 500 μs and (d) 200 ns.
2018-04-03T04:18:17.355Z
2015-04-20T00:00:00.000
{ "year": 2015, "sha1": "c9a700d5c5f0086203af9733af10306b2a972790", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1364/oe.23.010435", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "9fbb50fa97556f3317aceebd6927686a08ad65eb", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
244920911
pes2o/s2orc
v3-fos-license
Combining Planck and SPT cluster catalogs: cosmological analysis and impact on Planck scaling relation calibration

We provide the first combined cosmological analysis of South Pole Telescope (SPT) and Planck cluster catalogs. The aim is to provide an independent calibration for Planck scaling relations, exploiting the cosmological constraining power of the SPT-SZ cluster catalog and its dedicated weak lensing (WL) and X-ray follow-up observations. We build a new version of the Planck cluster likelihood. In the $\nu \Lambda$CDM scenario, focusing on the mass slope and mass bias of Planck scaling relations, we find $\alpha_{\text{SZ}} = 1.49_{-0.10}^{+0.07}$ and $(1-b)_{\text{SZ}} = 0.69_{-0.14}^{+0.07}$ respectively. The results for the mass slope show a $\sim 4 \, \sigma$ departure from the self-similar evolution, $\alpha_{\text{SZ}} \sim 1.8$. This shift is mainly driven by the matter density value preferred by SPT data, $\Omega_m = 0.30 \pm 0.03$, lower than the one obtained by Planck data alone, $\Omega_m = 0.37_{-0.06}^{+0.02}$. The mass bias constraints are consistent both with outcomes of hydrodynamical simulations and external WL calibrations, $(1-b) \sim 0.8$, and with results required by the Planck cosmic microwave background cosmology, $(1-b) \sim 0.6$. From this analysis, we obtain a new catalog of Planck cluster masses $M_{500}$. We estimate the ratio between the published Planck $M_{\text{SZ}}$ masses and our derived masses $M_{500}$, as a "measured mass bias", $(1-b)_M$. We analyse the mass, redshift and detection noise dependence of $(1-b)_M$, finding an increasing trend towards high redshift and low mass. These results mimic the effect of departure from self-similarity in cluster evolution, showing different dependencies for the low-mass/high-mass and low-z/high-z regimes.

INTRODUCTION

Galaxy clusters are the largest, gravitationally bound structures in the Universe. These objects represent the nodes in the cosmic web of the large scale structure, and are related to the peaks in the density field, on scales of the order of a megaparsec. Galaxy clusters can be detected at different wavelengths. In recent years, several experiments produced large catalogs of clusters to be used for cosmological analysis, such as the Planck survey (Planck Collaboration et al. 2016a,b), the South Pole Telescope (SPT hereafter) (Bleem et al. 2015; de Haan et al. 2016; Bocquet et al. 2019) and the Atacama Cosmology Telescope (Hilton et al. 2021) in the millimeter wavelengths; the Kilo-Degree Survey (Maturi et al. 2019) and the Dark Energy Survey (Drlica-Wagner et al. 2018; Abbott et al. 2020) in optical; the ROSAT survey (Böhringer et al. 2017), the XXL survey (Adami et al. 2018; Pacaud et al. 2018) and the first eROSITA observations (Liu et al. 2021) in X-rays. In particular, the abundance of galaxy clusters (galaxy cluster number counts) has emerged as a fundamental cosmological probe. Cluster formation and evolution is strictly related to the underlying cosmological model, tracing the growth of structures, see e.g. Allen et al. (2011). In particular, the observed cluster abundance is mainly sensitive to the combination of two cosmological parameters: the total matter density Ω_m and σ_8, which is defined as the rms fluctuation in the linear matter density field on the 8 Mpc/h scale at redshift z = 0.
Comparing and combining results from cluster abundance with other cosmological probes, such as the cosmic microwave background radiation (CMB hereafter) at high redshift, or baryon acoustic oscillations (BAO hereafter) at low redshift, allows us to perform fundamental consistency checks of the standard cosmological model. Cosmological constraints from cluster counts rely on the knowledge of their mass and redshift distribution, which is described by the halo mass function, see e.g. the discussion in Monaco (2016) and references therein for an updated list of available mass function evaluations, and McClintock et al. (2019); Bocquet et al. (2020) for recent mass function emulators. However, cluster mass cannot be measured directly, forcing us to rely on observational mass proxies that correlate with the underlying halo mass. Cluster masses and survey observables are linked through statistical scaling relations, which describe the interplay between astrophysics and cosmology in cluster formation and evolution. These relations are usually calibrated through a multi-wavelength analysis. Indeed, observations of the same clusters in different frequency bands provide a unique insight into the interaction between baryonic and dark matter, allowing us to further model the impact of astrophysical processes on the cluster cosmological evolution. Scaling relations are then combined with a model for the selection process (i.e. a selection function) to transform the theoretical halo mass function into a prediction for the distribution of clusters in the space of redshift and survey observables. In this scenario, it is clear that a precise and comprehensive characterization of the mass function, the scaling relations and the selection function is needed in order to provide stringent and unbiased constraints on cosmological parameters from galaxy clusters.

In this work, we perform the first combined cosmological analysis of the SPT-SZ (Bleem et al. 2015) and Planck (Planck Collaboration et al. 2016a) cluster catalogs. Both experiments detect clusters in the millimeter wavelengths, through the thermal Sunyaev-Zeldovich (tSZ hereafter) effect (Sunyaev & Zeldovich 1970). The strength of this analysis lies in the combination of a full-sky survey (Planck) with deep and high-resolution observations from a ground-based experiment (SPT). The combination of the two cluster catalogs spans a large redshift range (from z = 0 for the Planck catalog, up to z ∼ 1.7 for the SPT one), ensuring the possibility to test the impact of astrophysics over a broad redshift range. The strength of combining Planck and SPT cluster observations has already been explored in the analysis of Melin et al. (2021), in which the authors provide a new cluster catalog extracted from the common area observed by the two experiments. The analysis we present here is the first in a series of papers in which we plan to exploit the combination of the SPT-SZ and Planck cluster catalogs. In this work we focus primarily on providing a new calibration for Planck scaling relations. In Planck Collaboration et al. (2016b) the evaluation of Planck cluster masses from tSZ observations is based on the assumption of hydrostatic equilibrium (HE hereafter). Hydrodynamical simulations suggest, however, that HE cluster masses are biased low by ∼20%, see e.g. the discussion in Pratt et al. (2019). A mass-bias parameter, defined through the ratio between the HE inferred mass and the total cluster mass, is thus introduced: (1 − b) = M_SZ/M_tot ∼ 0.8.
The calibration of the whole mass-observable scaling relation is done through external X-ray and weak lensing (WL hereafter) measurements, the latter used in particular to estimate the mass bias. Nevertheless, WL analyses based on different cluster subsamples and approaches (von der Linden et al. 2014; Hoekstra et al. 2015; Okabe & Smith 2016; Sereno & Ettori 2015; Smith et al. 2016; Penna-Lima et al. 2017; Sereno et al. 2017; Herbonnet et al. 2020) might provide different mass calibrations, resulting in different constraints on cosmological parameters and showing therefore the impact of the cluster subsample selection choice, see also the discussion in Salvati et al. 2019. The mass calibration plays therefore an important role in the CMB-cluster σ_8 tension (Planck Collaboration et al. 2014, 2016b), where the discrepancy could be entirely relieved by adopting a mass-bias parameter of (1 − b) ∼ 0.6. Such a strong deviation from HE masses would be, however, in strong contrast with the above described WL observations and hydrodynamical simulations predictions, and with several other astrophysical observations for clusters (see, e.g., Eckert et al. 2019). Finally, we note that more recent analyses of Planck data (Aghanim et al. 2016; Salvati et al. 2018; Planck Collaboration et al. 2020a) reveal that cosmological results are now consistent between CMB primary anisotropies and galaxy clusters, with constraints on the σ_8 parameter well in agreement within 2σ. These results are still systematically limited by the assumed mass calibration, which, as in the original Planck analysis, strongly depends on the subsample of clusters adopted to constrain the mass-bias parameter. It is therefore fundamental to perform an independent calibration of the scaling relations. The Planck and SPT-SZ cluster catalog combination that we propose in this work specifically addresses this point. By exploiting the cosmological constraining power of SPT-SZ clusters and its associated mass-calibration data-sets, and the tight correlation between cosmology and astrophysics, we provide an independent evaluation of Planck scaling relation parameters and a new evaluation of Planck total cluster masses, which are therefore consistent with the SPT-SZ WL calibrated masses and corrected for Eddington bias effects (Allen et al. 2011).

The paper is structured as follows: in section 2 we describe the cluster observations for Planck and SPT and the underlying theoretical model for the use of cluster number counts. In section 3 we discuss the approach used to combine the datasets and extract cosmological information, and the recipe to evaluate Planck cluster masses and further analyse the mass bias. We present and discuss the results in sections 4 and 5, and derive our final conclusions in section 6.

DATA AND MODEL

In this section we summarise the observation and detection strategies for the Planck and SPT experiments. We also describe the theoretical models that lead to the evaluation of the likelihood function needed for the cosmological analysis. For the full discussion, we refer to the SPT analysis in Bleem et al. (2015); Bocquet et al. (2019) and the Planck analysis in Planck Collaboration et al. (2016b,a). We recall here that clusters detected through the tSZ effect are often defined as objects with a mass M_500 contained in a sphere of radius R_500, such that the cluster mean mass overdensity inside R_500 corresponds to 500 times the critical density ρ_c(z).
Therefore we define the total cluster mass as

$$M_{500} = \frac{4}{3} \pi\, 500\, \rho_c(z)\, R_{500}^3. \tag{1}$$

South Pole Telescope

The South Pole Telescope is a 10 m diameter telescope located at the geographic South Pole (Carlstrom et al. 2011). We consider observations of the SPT-SZ survey (Bleem et al. 2015), which detected galaxy clusters through the tSZ effect, using observations in the 95 and 150 GHz bands, in a 2500 deg² area. With ∼1 arcmin resolution and a 1° field of view, SPT is able to observe rare, high-mass clusters, from redshift z ≳ 0.2. Galaxy clusters are extracted from the SPT-SZ survey data through a multi-matched filter technique, see e.g. Melin et al. (2006). This approach makes use of the known (non-relativistic) tSZ spectral signature and a model for the spatial profile of the signal. In the standard SPT analysis approach, the spatial profile follows the projected isothermal β model (Cavaliere & Fusco-Femiano 1976), with β fixed to 1. The tSZ signature is then used, together with a description of the noise sources in the frequency maps, to construct a filter designed to maximize the sensitivity to galaxy clusters. From the filtered maps, we can extract cluster candidates, via a peak detection algorithm similar to the SExtractor routine (Bertin & Arnouts 1996). In the SPT analysis, the maximum detection significance (the signal-to-noise ratio maximized over all filter scales) ξ is used as the tSZ observable. In this work we focus on the cosmological cluster sample, analyzed in de Haan et al. (2016) and Bocquet et al. (2019). It is a subsample of the full SPT-SZ sample, consisting of 365 detections (343 of which have been optically confirmed), restricted to z > 0.25 and with a detection significance ξ > 5.

For the SPT cluster cosmological analysis, we follow the recipe described in Bocquet et al. (2019). We report here the main steps and refer the reader to the original study for further details. We make use of a multiwavelength approach, considering also WL and X-ray data. In detail, we use WL measurements of 32 clusters in the SPT-SZ cosmological sample, considering the reduced tangential shear profiles in angular coordinates (corrected for contamination by cluster galaxies) and the estimated redshift distributions of the selected source galaxies. These measurements are obtained with Magellan/Megacam (Dietrich et al. 2019) for 19 clusters in the redshift range 0.29 ≤ z ≤ 0.69, and with the Advanced Camera for Surveys on board the Hubble Space Telescope (HST hereafter) (Schrabback et al. 2018) for 13 clusters in the redshift range 0.576 ≤ z ≤ 1.132. For the X-ray measurements, we consider Chandra observations for 89 clusters in the SPT-SZ cosmological sample (McDonald et al. 2013, 2017). The X-ray data products used in this analysis are the total gas mass M_gas within an outer radius ranging from 80 to 2000 kpc, and the spectroscopic temperature T_X in the 0.15 R_500 − R_500 range.

The SPT cluster cosmological analysis is based on a multi-observable Poissonian likelihood. The likelihood function can be written as

$$\ln \mathcal{L}(p) = \sum_i \ln \frac{\mathrm{d}^2 N(\xi_i, z_i \,|\, p)}{\mathrm{d}\xi\, \mathrm{d}z} - \int \mathrm{d}z \int \mathrm{d}\xi\, \frac{\mathrm{d}^2 N(\xi, z \,|\, p)}{\mathrm{d}\xi\, \mathrm{d}z} + \sum_j \ln P(Y_{X,j}, g_{t,j} \,|\, \xi_j, z_j, p). \tag{2}$$

In the above equation, p is the vector of cosmological and scaling relation parameters, the first sum is over all the i clusters in the cosmological sample, while the second sum is over the j clusters with Y_X = M_gas T_X and/or WL measurements, with g_t being the reduced tangential shear profile. Therefore, the first two terms represent the tSZ cluster abundance, while the third encodes the information from follow-up mass calibration data.

In order to account for the impact of noise bias on the detection significance ξ, we introduce the unbiased tSZ significance ζ. It is defined as the signal-to-noise ratio at the true, underlying cluster position and filter scale. The relation between the two quantities, across many noise realizations, is given by

$$\xi \sim \mathcal{N}\!\left( \sqrt{\zeta^2 + 3},\; 1 \right). \tag{3}$$

This definition has been largely tested and validated in Vanderlinde et al. (2010) and de Haan et al. (2016). We can now explicitly evaluate the different terms in Eq. 2. The first term is given by

$$\frac{\mathrm{d}^2 N(\xi, z \,|\, p)}{\mathrm{d}\xi\, \mathrm{d}z} = \int \mathrm{d}\zeta \int \mathrm{d}M_{500}\; P(\xi \,|\, \zeta)\, P(\zeta \,|\, M_{500}, z, p)\, \Omega(z, p)\, \frac{\mathrm{d}N(M_{500}, z \,|\, p)}{\mathrm{d}M_{500}}. \tag{4}$$

In the above equation, Ω(z, p) is the survey volume, dN(M_500, z|p)/dM_500 is the halo mass function, P(ζ|M_500, z, p) is the unbiased observable-mass relation and P(ξ|ζ) is the measurement uncertainty defined in Eq. 3.
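The significance model of Eq. 3 is easy to explore with a small Monte Carlo. The sketch below draws observed significances ξ for a few assumed values of the unbiased significance ζ and reports the fraction that would pass the ξ > 5 selection; the ζ values are arbitrary illustrations.

```python
import numpy as np

# Monte Carlo of Eq. 3: xi scatters with unit width around sqrt(zeta^2 + 3).

rng = np.random.default_rng(42)

def draw_xi(zeta, n=100_000):
    return rng.normal(np.sqrt(zeta**2 + 3.0), 1.0, size=n)

for zeta in (4.0, 6.0, 10.0):
    xi = draw_xi(zeta)
    print(zeta, xi.mean(), (xi > 5.0).mean())  # mean xi and selected fraction
```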
In order to account for the impact of noise bias on the detection significance ξ, we introduce the unbiased tSZ significance ζ. It is defined as the signal-to-noise ratio at the true, underlying cluster position and filter scale. The relation between the two quantities, across many noise realizations, is given by This definition has been largely tested and validated in Vanderlinde et al. (2010) andde Haan et al. (2016). We can now explicitly evaluate the different terms in Eq. 2. The first term is given by In the above equation, Ω(z, p) is the survey volume, dN (M 500 , z|p)/dM 500 is the halo mass function, P (ζ|M 500 , z, p) is the unbiased observable-mass relation and P (ξ|ζ) is the measurement uncertainty defined in Eq. 3. Therefore, the first term in Eq. 2 is obtained evaluating Eq. 4 at the measured (ξ i , z i ) for each cluster, marginalizing over photometric redshift errors where present. The second term is simply evaluated through a two-dimensional integral over Eq. 4. The last term in Eq. 2 represents the mass calibration contribution and can be evaluated as where P (M 500 |z, p) is the normalized halo mass function. The multi-observable scaling relation P (ζ, Y X , M WL |M 500 , z, p) is assumed to follow a multivariate lognormal distribution, whose mean values, for the unbiased tSZ significance ζ, the X-ray Y X quantity and the WL mass M WL , read: ln M WL = ln b WL + ln M 500 . The covariance matrix elements of the multi-observable scaling relation are defined as where the intrinsic scatters σ O of the observables O = ζ, Y x , M WL are assumed to be independent of mass and redshift, and the three coefficients ρ(O i ; O j ) account for their correlations. The full description of the WL bias, b WL , and the associated scatter is done in Bocquet et al. (2019), we only recall here that the modelling introduces six nuisance parameters δ i . All the parameters characterizing the scaling relations are listed and defined in Table 1. We conclude mentioning that the SPT-SZ cosmological sample contains 22 tSZ detections with unknown redshift, since they have not been confirmed through optical counterparts. This number is consistent with the expected number of false detections above ξ = 5. Therefore, discarding these objects does not affect the cosmological results. Planck satellite The Planck satellite is a mission from the European Space Agency (ESA), which concluded the observations in 2013 (Planck Collaboration et al. 2020b). The Planck cluster catalog (Planck Collaboration et al. 2016a) is based on full-sky observations from the 6 channels of the High Frequency Instrument (HFI, Planck Collaboration et al. (2020c)), in the frequency range 100-857 GHz. Similarly to SPT, Planck clusters are extracted using a multi-frequency matched filter technique. For the spatial profile of the signal, the so-called "universal pressure profile" from Arnaud et al. (2010) has been adopted. The cosmological sample, labeled as "PSZ2 cosmo", consists of 439 clusters, 433 of which have confirmed redshifts, detected with a signal-to-noise ratio q > 6, on the 65% of the sky remaining after masking high dust emission regions and point sources. The signal-to-noise ratio is defined as where Y 500 is the integrated compton parameter (tSZ signal for a cluster) and σ f (θ 500 , l, b) is the detection filter noise as a function of the cluster angular size, θ 500 , and sky position in galactic coordinates (l, b). The PSZ2 cosmo sample spans the mass range M SZ = (2 − 10) × 10 14 M and the redshift range z = [0, 1]. 
The Planck cosmological analysis is based on a Poissonian likelihood, constructed on cluster counts in bins of redshift and signal-to-noise ratio:

\ln \mathcal{L}(p) = \sum_{i=1}^{N_z} \sum_{j=1}^{N_q} \left[ N_{ij} \ln \bar{N}_{ij} - \bar{N}_{ij} - \ln(N_{ij}!) \right].   (10)

Table 1. Cosmological and scaling relation parameters, following the definitions in Bocquet et al. (2019) and Planck Collaboration et al. (2016b). We report a brief description and the prior we adopt in our analysis: a range indicates a top-hat prior, while N(µ, σ) stands for a Gaussian prior with mean µ and variance σ².

In the above equation, N_z and N_q are the total numbers of redshift and signal-to-noise bins, with redshift binning ∆z = 0.1 and signal-to-noise ratio binning ∆log q = 0.25. N_ij represents the observed number counts of clusters, while N̄_ij is the predicted mean number of objects in each bin, modelled by theory as

\bar{N}_{ij} = \int_{\Delta z_i} dz \int_{\Delta \log q_j} d\log q\; \frac{dN(z, q | p)}{dz\, d\log q}.   (11)

We report here the main steps to evaluate the theoretical cluster number counts and refer to Planck Collaboration et al. (2016b) for the complete description. The cluster distribution can be written as

\frac{dN(z, q | p)}{dz\, d\log q} = \int d\Omega \int dM_{500}\; P[q | \bar{q}_m(M_{500}, z, l, b)]\; \frac{dN(z, M_{500} | p)}{dz\, dM_{500}\, d\Omega},   (12)

where dN/(dz dM_500 dΩ) is the product of the volume element and the halo mass function, respectively. In Eq. 12, the quantity P[q | q̄_m(M_500, z, l, b)] represents the distribution of the signal-to-noise ratio q given the mean value q̄_m(M_500, z, l, b), predicted by the model for a cluster located at position (l, b), with mass M_500 and redshift z. The P[q | q̄_m] distribution takes into account the noise fluctuations and the intrinsic scatter σ_lnY of the actual cluster signal Y_500 around the mean value Ȳ_500(M_500, z) predicted from the scaling relation. In this analysis, we assume that the intrinsic scatter does not show any dependence on (M_500, z), following the original approach in Planck Collaboration et al. (2016b). The relation between the cluster observables Y_500, θ_500 and the cluster mass and redshift is described by a lognormal distribution function P(ln Y_500, θ_500 | M_500, z). The mean values of this distribution are given by the scaling relations Ȳ_500(M_500, z) and θ̄_500(M_500, z), defined as

E^{-\beta_{SZ}}(z) \left[ \frac{D_A^2(z)\, \bar{Y}_{500}}{10^{-4}\, \mathrm{Mpc}^2} \right] = Y_{*,SZ} \left[ \frac{h}{0.7} \right]^{-2+\alpha_{SZ}} \left[ \frac{(1-b)_{SZ}\, M_{500}}{6 \times 10^{14}\, M_\odot} \right]^{\alpha_{SZ}},   (14)

\bar{\theta}_{500} = \theta_* \left[ \frac{h}{0.7} \right]^{-2/3} \left[ \frac{(1-b)_{SZ}\, M_{500}}{3 \times 10^{14}\, M_\odot} \right]^{1/3} E^{-2/3}(z) \left[ \frac{D_A(z)}{500\, \mathrm{Mpc}} \right]^{-1}.   (15)

In the above equations, D_A(z) is the angular diameter distance and E(z) ≡ H(z)/H_0. In the original analysis of Planck Collaboration et al. (2014, 2016a), the calibration of Eqs. 14 and 15 is based on X-ray observations of 71 clusters under the assumption of hydrostatic equilibrium. To account for possible deviations from this assumption (due to cluster physics, observational effects or selection effects), the mass bias parameter b is introduced in the analysis, such that the relation between the HE mass (M_SZ) and the real cluster mass reads M_SZ = (1 - b) M_500. In order to evaluate the mass bias (and therefore the real cluster mass), WL mass determinations are introduced in the analysis.

Table 2. Original calibration of Planck scaling relation parameters. N(µ, σ) stands for a Gaussian prior with mean µ and variance σ².

For the baseline cosmological analysis, the Planck collaboration adopts the evaluation from the Canadian Cluster Comparison Project (Hoekstra et al. 2015, CCCP hereafter), (1 - b)_SZ = 0.780 ± 0.092, based on 20 clusters. We stress that the mass bias is considered a constant quantity, i.e. not allowing for dependence on the cluster mass and redshift. The original values of the scaling relation parameters (from X-ray and WL calibration) are reported in Table 2, following Planck Collaboration et al. (2016b). We note that as baseline we assume the self-similarity model for the redshift evolution of the cluster population.
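Before moving on, the binned Poisson likelihood of Eq. 10 can be illustrated numerically with a short sketch; the bin counts below are invented toy values and the function name is ours.

```python
import numpy as np
from scipy.special import gammaln

def lnlike_counts(N_obs, N_bar):
    """Binned Poisson log-likelihood of Eq. 10:
    ln L = sum_ij [ N_ij ln(Nbar_ij) - Nbar_ij - ln(N_ij!) ]."""
    N_obs = np.asarray(N_obs, dtype=float)
    N_bar = np.asarray(N_bar, dtype=float)
    return np.sum(N_obs * np.log(N_bar) - N_bar - gammaln(N_obs + 1.0))

# toy example: 3 redshift bins x 2 signal-to-noise bins
N_obs = [[12, 3], [8, 2], [4, 1]]   # observed counts per bin
N_bar = [[11.2, 2.9], [7.5, 2.2], [4.4, 0.8]]   # model predictions
print(lnlike_counts(N_obs, N_bar))
```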
The self-similar baseline translates into fixing the β parameter to β_SZ = 2/3. In summary, the main difference between the Planck and SPT mass calibrations lies in the use of external data (from other cluster samples) for Planck versus the use of internal data (direct follow-up observations) for SPT. Therefore, when analyzing Planck data, it is possible to relax some of the external calibration results and provide independent constraints on some of the scaling relation parameters.

METHOD

In this section we describe the strategy we adopted to combine Planck and SPT data, in order to avoid covariance between the two samples. In particular, we discuss how we modify the original Planck likelihood to provide a proper combination with the SPT one. Finally, we describe the method used to provide a new evaluation of Planck cluster masses.

Combining Planck and SPT cluster likelihoods

In order to combine the Planck and SPT cluster likelihoods, it is necessary to take into account the overlapping area of the observed sky and the clusters in common between the two catalogs. We choose to modify the Planck likelihood. In particular, we split the entire likelihood in redshift. For z ≤ 0.25, where we have no cluster data from the SPT-SZ survey, we rely on the original version of the Planck likelihood. For z > 0.25, we modify the Planck likelihood, removing the part of the sky also observed by the SPT-SZ survey and the clusters in common with the SPT-SZ catalog. Hereafter, we refer to this new redshift-split Planck likelihood as "PvSPLIT". With this choice, we can treat the Planck and SPT cluster likelihoods independently. We now discuss in detail the approach used to build the z > 0.25 part of the likelihood. In the original Planck analysis (Planck Collaboration et al. 2016a), the cosmological cluster catalog is built through the application of a multi-frequency matched filter technique to the HFI frequency maps, selecting objects with S/N > 6. The detection algorithm first divides the sky into 504 tangential patches of 10 deg × 10 deg area, with constant values of detection noise. After applying the galactic and point source mask, we are left with 417 sky patches, covering ~65% of the sky. Cluster candidates are then detected in each sky patch: the final catalog is therefore completely dependent on the characteristics of the detection process, including the division into sky patches. When modifying the Planck cluster likelihood for z > 0.25, we therefore need to keep this patch configuration. We identify 16 patches that fully overlap with the SPT-observed sky and remove them from the sky area in the likelihood. Furthermore, we identify 35 patches with partial overlap between the Planck and SPT sky. In this case, we decide to keep them in the analysis, but reduce the sky fraction of each patch according to the area actually observed by both experiments. The remaining observed sky is shown in Fig. 1, upper panel. We show in grey the removed patches, due to the Planck galactic mask and the fully overlapping Planck-SPT area. In yellow, we highlight the patches that partially overlap between the Planck and SPT-SZ surveys. For the cluster catalog, we remove the 27 clusters in common with the SPT-SZ cosmological catalog and 2 clusters that fall in the removed patches. We also introduce redshifts for the 6 clusters whose redshifts were unknown in the original PSZ2 cosmo sample.
We report the new redshifts in Table 3, specifying whether these values have been obtained from photometric (P) or spectroscopic (S) observations. We show the new cluster distribution in Fig. 1. Following Eq. 10, the new Planck PvSPLIT likelihood therefore reads

\ln \mathcal{L}^{\mathrm{PvSPLIT}}(p) = \sum_{i=1}^{N_{z1}} \sum_{j=1}^{N_q} \left[ N_{ij} \ln \bar{N}_{ij} - \bar{N}_{ij} - \ln(N_{ij}!) \right] + \sum_{i=1}^{N_{z2}} \sum_{j=1}^{N_q} \left[ N'_{ij} \ln \bar{N}'_{ij} - \bar{N}'_{ij} - \ln(N'_{ij}!) \right],   (16)

where the primed counts refer to the z > 0.25 bins, evaluated on the reduced sky area and cluster catalog described above, and where we adopt a redshift binning of ∆z = 0.05, such that we have N_z1 = 5 redshift bins up to z ≤ 0.25 and N_z2 = 15 above. For the binning in the signal-to-noise ratio, we follow the original analysis, with ∆log q = 0.25.

Table 3. Redshifts for clusters without redshifts in the original PSZ2 cosmological catalog, obtained from photometric (P) or spectroscopic (S) observations. (Note a: from Pan-STARRS (Chambers et al. 2016), following Bleem et al. (2020).)

The total likelihood for the combined analysis of Planck and SPT, following Eqs. 2 and 16, is therefore defined as

\ln \mathcal{L}^{\mathrm{tot}}(p) = \ln \mathcal{L}^{\mathrm{SPTcl}}(p) + \ln \mathcal{L}^{\mathrm{PvSPLIT}}(p).   (17)

Sampling recipe

For the cosmological analysis, we make use of the complete SPT likelihood, described in Section 2.1. In particular, we rely on the combination of the SPT-selected clusters, with their detection significance and redshift, together with the WL and X-ray follow-up data, where available. Following the definition in Bocquet et al. (2019), we refer to this data set as "SPTcl" (SPT-SZ + WL + Y_X). For the Planck part of the likelihood, we use the PvSPLIT version described in the previous section. We adopt the parametrization of the scaling relations described in Eqs. 14 and 15. In this analysis, we want to test the capability of the Planck+SPT combination to constrain the Planck scaling relation parameters. For this reason, we do not consider the original X-ray+WL calibration reported in Table 2 when analyzing Planck data. As a baseline, we use the X-ray calibration for the log Y_{*,SZ} and σ_{log Y_SZ} parameters, as reported in Table 2, and we assume the self-similarity model for the cluster evolution, i.e. β_SZ = 2/3. We therefore focus the analysis on the constraints we can obtain on the mass bias and the power-law index of the mass dependence, (1 - b)_SZ and α_SZ. We refer to this parameter exploration and likelihood combination as the baseline "SPTcl + PvSPLIT" results. As a further test, we also relax the assumption of redshift self-similar evolution and let the β_SZ parameter vary freely. For the cosmological parameters, we assume a νΛCDM scenario. We vary the following parameters: the total matter density Ω_m, the amplitude of primordial curvature perturbations A_s, the Hubble rate h, the baryon density Ω_b h², the spectral index of scalar perturbations n_s, and the massive neutrino energy density Ω_ν h². When providing the results for the cosmological parameters, we also focus on the σ_8 quantity. We report all the parameters, with the priors used in the analysis, in Table 1. The sampling of the likelihood is performed with the importance nested sampling algorithm MultiNest (Feroz et al. 2009), within the cosmoSIS package (Zuntz et al. 2015). As shown in Section 2, the halo mass function is a fundamental ingredient for the evaluation of the cluster number counts. For both the SPT and Planck parts of the analysis, we make use of the evaluation from Tinker et al. (2008).

Mass evaluation

We now describe the approach we use to provide a new evaluation of the true Planck cluster masses, M_500. We follow the discussion in Planck Collaboration et al. (2016b).
We start from the Planck cluster observable, the signal-to-noise ratio q, and evaluate P(M_500|q), which represents the conditional probability that a cluster with given signal-to-noise ratio q has a mass M_500. Following Bayes' theorem, this probability is defined as

P(M_{500} | q) \propto P(q | M_{500})\, P(M_{500}),   (18)

where the first term is the conditional probability of the data (the signal-to-noise ratio q) given the model (the cluster mass M_500), and the second term is the mass probability distribution. The latter is related to the mass function dN/dM_500, such that

P(M_{500}) = \frac{dN/dM_{500}\,\big|_{M_{500}}}{\int dM_{500}\; dN/dM_{500}}.   (19)

In order to evaluate P(q|M_500), we follow the recipe for P[q|q̄_m(M_500, z, l, b)], which represents the probability distribution of the observed signal-to-noise ratio q given the mean one, q̄_m, as already mentioned in Section 2. Following Eq. 9, the mean theoretical signal-to-noise ratio is defined as

\bar{q}_m(M_{500}, z, l, b) = \frac{\bar{Y}_{500}(M_{500}, z)}{\sigma_f(\bar{\theta}_{500}(M_{500}, z), l, b)},   (20)

where Ȳ_500 and θ̄_500 are the mean values of the scaling relations defined in Eqs. 14 and 15 and σ_f is the detection filter noise. For fixed values of the cosmological and scaling relation parameters, we therefore have a unique relation between the cluster mass M_500 and q̄_m. The probability distribution can be evaluated as

P(q | M_{500}) = \int dq_m\; P(q | q_m)\, P(q_m | M_{500}).   (21)

In the above equation, the second term accounts for the intrinsic scatter of the mass-observable relations, while the first term links the theoretical signal-to-noise ratio q_m to the observed one, assuming pure Gaussian noise. In practice, we adopt a Monte Carlo extraction approach, starting from the parameter space exploration performed for the SPTcl + PvSPLIT analysis. For a given cosmological and scaling relation model, we extract M_500 in the range [3 × 10^13, 1.2 × 10^16] M_sun h^-1 (following what is done in the PvSPLIT likelihood) according to the halo mass function distribution. We then evaluate Ȳ_500, θ̄_500 and consequently q̄_m, following Eq. 20. For the given mean theoretical signal-to-noise ratio, we then extract q_m, following a log-normal distribution with standard deviation equal to the intrinsic mass-observable relation scatter, σ_lnY. Given q_m, we extract the estimate of the observed signal-to-noise ratio q_est, following a Gaussian distribution with standard deviation equal to 1. We then select N values of q_est around the corresponding observed signal-to-noise ratio q, thereby selecting the corresponding values of M_500. The posterior distributions for M_500 are obtained by marginalizing over the full parameter space, considering cosmological and scaling relation parameters. The resulting catalog therefore provides the first sample of Eddington-bias-corrected calibrated cluster masses that includes the correlations associated with scaling relation and cosmological parameters. As detailed in Section 4.2, we make this catalog publicly available.

Mass bias

With the evaluation of M_500, we can estimate directly, for each cluster in the PSZ2 cosmo sample, the mass bias as (1 - b)_M = M_SZ / M_500. We use the M_SZ estimations provided by the Planck collaboration (Planck Collaboration et al. 2016b) for the 433 clusters in PSZ2 cosmo for which the redshift was originally provided. In practice, we expand the procedure for the M_500 evaluation presented in the previous section. For each cluster, at each step of the Monte Carlo extraction, we also extract M_SZ within the constraints of the Planck measurements. We then evaluate (1 - b)_M.
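A schematic Python version of this Monte Carlo extraction, for a single cluster and a single point of the parameter chain, could look as follows. The scatter value, the tolerance window and all names are illustrative placeholders rather than the actual pipeline settings.

```python
import numpy as np

rng = np.random.default_rng(42)

def extract_M500(q_obs, sample_hmf, qbar_m, sigma_lnY, n_draws=200_000, tol=0.05):
    """One chain step of the M500 Monte Carlo extraction sketched above.

    q_obs      : observed Planck signal-to-noise ratio of the cluster
    sample_hmf : callable n -> n masses drawn from the halo mass function
    qbar_m     : callable M500 -> mean theoretical S/N (Eq. 20)
    sigma_lnY  : intrinsic log-normal scatter of the scaling relation
    """
    M500 = sample_hmf(n_draws)                                   # M500 ~ HMF
    q_m = qbar_m(M500) * rng.lognormal(0.0, sigma_lnY, n_draws)  # intrinsic scatter
    q_est = rng.normal(q_m, 1.0)                                 # unit Gaussian noise
    keep = np.abs(q_est - q_obs) < tol * q_obs                   # draws near q_obs
    return M500[keep]                                            # mass samples
```

Repeating this selection over the full chain and pooling the retained masses marginalizes the M_500 posterior over cosmological and scaling relation parameters.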
The final constraints on (1 - b)_M are therefore obtained by marginalizing over cosmological and scaling relation parameters, and take into account the uncertainty on M_SZ. We highlight the difference between the scaling relation parameter (1 - b)_SZ entering Eqs. 14 and 15 and the quantity we investigate here. The assumptions of spherical collapse, hydrostatic equilibrium and self-similarity lead to the formulation of Eqs. 14 and 15, which link the tSZ observables and the cluster mass. In this case, the mass bias (1 - b)_SZ is introduced to take into account any generic departure from hydrostatic equilibrium. Nevertheless, as discussed in Planck Collaboration et al. (2016a), M_SZ is evaluated as the real cluster mass by combining the scaling relation information with the output of the matched filter approach used to detect the clusters. The combination of these different approaches might select cluster scales that do not actually maximize the S/N ratio from the matched filter algorithm, and therefore introduce a further bias in the estimation of the real cluster mass. We therefore attempt to provide a complete characterization of the "measured" mass bias (1 - b)_M, analyzing its dependencies with respect to theoretical modelling and observational assumptions. We consider a mass and redshift evolution for the mass bias; indeed, as discussed e.g. in Salvati et al. (2019), a residual mass or redshift dependence of the bias would point to limitations of the adopted mass-observable modelling. The goal is to understand whether we need to further improve the theoretical modelling of the scaling relations. In addition, we analyze a possible link between the evaluation of the mass bias (and therefore of the cluster mass) and the cluster position in the sky. This dependence might be related to the observational strategy, as well as to the assumptions for the ingredients used in the matched filter approach. As discussed in Section 3.1, the Planck sky area used for the cluster cosmological analysis is divided into 417 patches, with each patch having a different value of the detection noise σ_f(θ_500, l, b). This noise depends on the filter size θ_500 and is therefore related to the matched filter approach used to detect clusters in the Planck maps. Therefore, the analysis of a possible dependence of the mass bias on the detection noise allows us to quantify the systematic uncertainties coming from the modelling of the whole selection approach. Considering the mass, redshift and noise dependence, we define the theoretical mass bias (1 - b)^th_M as

(1 - b)^{\mathrm{th}}_M = A_{\mathrm{bias}} \left( \frac{M_{500}}{M_*} \right)^{\gamma_M} \left( \frac{1 + z}{1 + z_*} \right)^{\gamma_z} \left( \frac{\sigma_f(\theta_{500})}{\sigma_{f,*}(\theta_{500})} \right)^{\gamma_n},   (22)

where M_* = 4.68 × 10^14 M_sun h^-1 is the median mass of the sample (obtained from our analysis), z_* = 0.21 is the median redshift of the sample and σ_{f,*}(θ_500) is the median detection noise at the given θ_500. In order to obtain constraints on the mass, redshift and detection noise dependence, we perform a fit between the measured Planck M_SZ masses and a theoretical estimation M^th_SZ defined as

M^{\mathrm{th}}_{SZ} = (1 - b)^{\mathrm{th}}_M \cdot M_{500},   (23)

where the masses M_500 are derived following the method described in Section 3.3.
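For illustration, a least-squares version of the fit of Eq. 23 in log space is sketched below. The actual analysis marginalizes over the full parameter chain, so this is only a simplified stand-in; the redshift pivot form follows our reconstruction of Eq. 22, and the array names are our choices.

```python
import numpy as np

def fit_bias_powerlaw(M_SZ, M500, z, sig_f, M_star, z_star, sig_star):
    """Fit ln(M_SZ / M500) = ln(1-b)_M^th of Eq. 22 by linear least squares.
    Returns [ln A_bias, gamma_M, gamma_z, gamma_n]."""
    y = np.log(M_SZ / M500)                    # per-cluster measured ln(1-b)_M
    X = np.column_stack([
        np.ones_like(y),                       # amplitude ln A_bias
        np.log(M500 / M_star),                 # mass dependence gamma_M
        np.log((1.0 + z) / (1.0 + z_star)),    # redshift dependence gamma_z
        np.log(sig_f / sig_star),              # detection-noise dependence gamma_n
    ])
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return theta
```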
RESULTS

In this section we report the results of the combined cosmological analysis of the Planck and SPT cluster likelihoods. We also provide an estimate of the cluster mass and mass bias for the Planck PSZ2 cosmological sample.

Cosmological and scaling relation parameters

The results presented in this analysis are obtained by combining the full SPT likelihood with the new Planck likelihood presented in Sections 2 and 3, SPTcl + PvSPLIT. When discussing our results, we focus on the constraints for the cosmological parameters Ω_m and σ_8 and for the Planck scaling relation parameters (1 - b)_SZ and α_SZ. We start by comparing the results for the SPTcl + PvSPLIT baseline combination with the constraints obtained when considering the SPT data and Planck data alone. We stress that, when providing results for Planck data alone, we are actually considering the combination of cluster counts with measurements of BAO (Alam et al. 2017; Beutler et al. 2011, 2012; Ross et al. 2015), together with constraints on the baryon density Ω_b h² from Big Bang Nucleosynthesis (BBN hereafter). We also consider the full calibration of the scaling relation parameters (as reported in Table 2), following the analysis in Planck Collaboration et al. (2016b). In this work, we simply perform a new analysis using the MultiNest sampler, within the cosmoSIS package, in order to provide consistent results. This dataset combination is labelled "PvFULL". We report the constraints on cosmological and scaling relation parameters in Table 4, showing the 68% confidence level (CL hereafter) constraints for all the parameters. In the triangular plot in Fig. 2 we show the one-dimensional and two-dimensional probability distributions of the cosmological and scaling relation parameters for the main comparison between the baseline SPTcl + PvSPLIT and the original PvFULL and SPTcl analyses. From these results, we see that the SPT cluster data are driving the constraining power, as shown by the shift of the Ω_m contours towards lower values and of the σ_8 contours towards larger values for the SPTcl + PvSPLIT baseline combination, with respect to the PvFULL constraints. We stress again that for the SPTcl + PvSPLIT baseline combination we are not including the BAO+BBN dataset and part of the X-ray+WL mass calibrations when considering Planck data, therefore losing part of the constraining power that leads to the tight bounds obtained for the PvFULL analysis (as further discussed in Appendix A). We now focus on the Planck scaling relation parameters (1 - b)_SZ and α_SZ. Regarding the mass bias, we find (1 - b)_SZ = 0.69 (+0.07, -0.14). Although pointing towards a low value of (1 - b), this result is still consistent with constraints obtained from recent WL calibrations and numerical simulation analyses, see e.g. the collections of results in Salvati et al. (2018) and Gianfagna et al. (2021). Nevertheless, not considering the WL calibration from the CCCP analysis (used in the original Planck analysis) leads to a slight enlargement of the constraints. Regarding the mass slope α_SZ, we find α_SZ = 1.49 (+0.07, -0.10), which is ~4σ away from the value obtained when adopting the X-ray calibration, α_SZ = 1.79 ± 0.06. We recall here that, following the definition of the scaling relations in Eqs. 14 and 15, a value α_SZ ≃ 1.8 is in agreement with the self-similarity assumption. The shift we find seems to be due to a combination of different effects. First of all, the PvSPLIT likelihood provides slightly different constraints with respect to the original PvFULL one, especially on the α_SZ parameter, already pointing to 1.71 (+0.07, -0.09), as shown in Fig. 3 (dark blue contours) and in Table 4. We then test the possible impact of the sampling choice. In particular, as also discussed in Bocquet et al. (2019), sampling on A_s or on ln(10^10 A_s) provides different constraints on the cosmological parameters, where the main effect can be seen on Ω_m and H_0.

Figure 2. One-dimensional and two-dimensional probability distributions for the cosmological (Ω_m, σ_8) and Planck scaling relation (α_SZ, (1 - b)_SZ) parameters. The contours represent the 68% and 95% CL.
We compare results for different dataset combinations: SPTcl + PvSPLIT in green (baseline results of this analysis), PvFULL in orange and SPTcl in blue. We refer to the text for the complete description of the datasets.

In our SPTcl + PvSPLIT baseline analysis we follow Bocquet et al. (2019) and sample linearly on A_s. In the original Planck analysis, the sampling is done on ln(10^10 A_s), as is also done for the PvFULL results. We therefore test what happens when considering a logarithmic sampling for the SPTcl + PvSPLIT combination. The results are reported in Table 4 and Fig. 3 (pink contours). In this case, for the SPTcl + PvSPLIT + ln(10^10 A_s) combination, we find a negligible impact on the Ω_m and σ_8 constraints. We find a larger effect when focusing on the scaling relation parameters. In particular, the constraints for the mass slope are α_SZ = 1.60 (+0.10, -0.18), therefore consistent with both the original PvFULL value and the baseline SPTcl + PvSPLIT results. Nevertheless, the main cause of the departure from self-similarity in the mass slope of the scaling relations is the lower value of Ω_m obtained for the SPTcl + PvSPLIT combination, as can be seen in Fig. 2. As an additional note, we stress that, when focusing on the SPT scaling relation parameters (described in Eqs. 6-8), results for the SPTcl + PvSPLIT combination are fully consistent with the original analysis presented in Bocquet et al. (2019). As a final test, we relax the assumption of self-similarity for the redshift evolution of the scaling relations, therefore adding β_SZ as a varying parameter. We report the constraints for the cosmological and scaling relation parameters in Table 4 and Fig. 3 (black contours). We find these results to be fully in agreement with our baseline analysis. For the redshift evolution parameter, we find β_SZ = 0.57 (+0.20, -0.51), in agreement with the predicted self-similar value β_SZ = 2/3.

Mass and mass bias evaluation

We now present the results of the mass and mass bias evaluation for the clusters in the Planck cosmological sample, following the approach described in Section 3.3. In Fig. 4 we show the results obtained from the Monte Carlo extraction, presenting the evaluated M_500 as a function of redshift. These results reproduce well the Planck selection threshold, with low-mass objects detected only in the low-redshift regime. The full cluster mass catalog is available at https://pole.uchicago.edu/public/data/sptplanck_cluster. We report the first entries in Table 5: in the sixth column we report the constraints on M_500 and in the seventh column we report the full array of masses extracted through the Monte Carlo approach. We note that, for the 27 clusters in common with the SPT-SZ catalog, our mass estimation is in agreement within 2σ with the estimates from Bocquet et al. (2019), as further discussed in Appendix B. The constraints for (1 - b)_M are shown in Fig. 5 in green, with 68% and 95% error bars. Note that the error bars for each cluster are heavily correlated, since they include the marginalization over cosmological and scaling relation parameters starting from the same SPTcl + PvSPLIT baseline chain. We analyze the possible redshift, mass and noise dependence of the mass bias (1 - b)_M, as defined in Eq. 22.
The results are obtained from the fit of M^th_SZ = (1 - b)^th_M · M_500 to the Planck M_SZ masses, starting again from the SPTcl + PvSPLIT (including the M_500 evaluation) chain. We report these trends in Fig. 5 (blue curves) and the results of the fit in Table 6. While we find a value for the amplitude that is consistent with the constraints for (1 - b)_SZ, with A_bias = 0.69 (+0.04, -0.09), we also find strong evidence for mass and redshift evolution. In particular, the mass bias increases towards high redshift and low mass, with γ_M = -0.41 (+0.04, -0.06) and γ_z = 0.81 ± 0.13. Regarding the detection noise, we find no evidence for the mass bias to depend on this quantity, since γ_n is consistent with 0 within 1σ. We conclude this section by presenting masses for the PSZ2 cosmo catalog obtained when fixing the cosmological and scaling relation parameters. For the cosmological parameters, we adopt a flat νΛCDM scenario, following Bocquet et al. (2019). For the Planck scaling relation parameters, we take the best-fit values from the SPTcl + PvSPLIT baseline run with the fixed cosmology. The values of the parameters are reported in Table 7. Also in this case, the full cluster mass catalog is available at https://pole.uchicago.edu/public/data/sptplanck_cluster. We report the first entries in Table 5, eighth column. As for the marginalized masses, for the 27 clusters in common with the SPT-SZ catalog, our mass estimation is in agreement within 2σ with the estimates from Bocquet et al. (2019), as further discussed in Appendix B. We show in Fig. 6 the evaluated Planck masses M_500, as a function of redshift, in comparison with cluster masses from the SPT-SZ 2500 deg^2 catalog (Bocquet et al. 2019). As a reference, we also add clusters from recent SPT observations: the 79 clusters from the SPTpol 100 deg^2 sample (Huang et al. 2020), and the 448 clusters from the SPTpol Extended (SPT-ECS) sample (Bleem et al. 2020).

DISCUSSION

The results presented in the previous sections show the tight correlation between cosmological and scaling relation parameters, highlighting that a correct and unbiased evaluation of cluster masses is fundamental to perform precision cosmology with galaxy clusters.

Figure 3. One-dimensional and two-dimensional probability distributions for the cosmological (Ω_m, σ_8) and Planck scaling relation (α_SZ, (1 - b)_SZ) parameters. The contours represent the 68% and 95% CL. We compare results for the original Planck analysis PvFULL (orange contours) with results obtained considering the new Planck likelihood PvSPLIT (dark blue contours). We also show results for the SPT + Planck combination, comparing the baseline analysis (green contours) with results obtained when considering a logarithmic sampling on 10^10 A_s (pink contours) and when relaxing the assumption of self-similar redshift evolution for the Planck scaling relation (black contours).

In an ideal scenario, to calibrate the scaling relations we would rely on high-precision multi-wavelength observations for each cluster in the considered cosmological sample.
Since current counterpart observations in the X-ray and optical bands do not cover the full Planck cosmological cluster sample, in this analysis we choose an alternative approach: we exploit the cosmological constraining power of the SPT-SZ cluster catalog, with its internal X-ray and WL mass calibration, and use the Planck-SPT combination to constrain the Planck scaling relations. The results presented in Section 4 point towards the necessity of improving the general astrophysical model adopted for the cluster evolution. We start by discussing the results obtained for the SPTcl + PvSPLIT cluster catalog combination. First of all, we highlight the powerful cosmological constraining power of the SPT-SZ cluster sample: SPT data are driving the results, pushing the constraints for the SPTcl + PvSPLIT combination. For this dataset combination, we are also able to obtain tight constraints on the Planck scaling relation parameters, comparable with the results from PvFULL (i.e. the original full Planck likelihood), as shown in Table 4. In particular, we decide to focus on the parameters describing the mass dependence, therefore not considering the external calibration and the assumption of self-similarity for the mass bias, described by (1 - b)_SZ, and the mass slope α_SZ. For the mass bias, we find (1 - b)_SZ = 0.69 (+0.07, -0.14). This is still in agreement within 2σ with the different external WL calibrations and hydrodynamical simulation estimations, but it also encompasses the lower values preferred by CMB data. This result can be further discussed in light of the evaluation of (1 - b)_M that we performed for each single cluster. We discussed in Section 3.3.1 the difference between the scaling relation parameter and the measured mass bias. The two quantities describe, from different approaches, a generally imprecise knowledge of how astrophysical processes affect the theoretical model of cluster evolution (and, as a consequence, how we model the mass-observable relation and the selection approach). By analyzing (1 - b)_M, we find strong hints of mass and redshift evolution of this quantity, with the amplitude being consistent with (1 - b)_SZ, A_bias = 0.69 (+0.04, -0.09), as shown in Table 6. The increasing trend with redshift is also consistent with the analysis shown in Salvati et al. (2019). We now focus on the mass slope of the scaling relations, α_SZ. For SPTcl + PvSPLIT we find α_SZ = 1.49 (+0.07, -0.10), which is ~4σ lower than the self-similar value. As discussed in Section 4.1, this low value is due to a combination of different effects, the dominant one being the shift of Ω_m towards lower values.

Table 5. First entries of the new Planck cluster catalog. We report the cluster ID, coordinates, redshift and signal-to-noise ratio as delivered by the Planck collaboration (note a: from the Planck Legacy Archive, https://pla.esac.esa.int). We add in the sixth and seventh columns the evaluation of M_500 obtained by marginalizing over the cosmological and scaling relation parameters from our SPTcl + PvSPLIT analysis (labelled "free"), and the full array of extracted masses (labelled "free,c"). In the eighth column we report the evaluation of M_500 for the fixed values of cosmological and scaling relation parameters reported in Table 7.

Figure 4. Cluster masses for the Planck cosmological sample, evaluated with a Monte Carlo extraction approach. We show the best-fit value (red points) and the 68% CL error bars (in blue).
Figure 5. We report the best-fit (black points) with 68% (dark green) and 95% (light green) error bars. The blue shaded area represents the trend and the 68% and 95% CL obtained when fitting M^th_SZ from Eq. 23, following the results in Table 6.

Figure 6. We report also the SPT cluster masses from the SPT-SZ 2500 deg^2 catalog (green squares), from the SPTpol 100 deg^2 catalog (yellow stars) and from the SPTpol Extended cluster catalog (purple circles).

Indeed, this shift slightly tilts the mass function, such that it leads to fewer objects at low mass and more objects in the high-mass tail. The low value of α_SZ seems to accommodate this tilt, balancing the low-mass vs. high-mass weight. The mass-redshift evolution of (1 - b)_M seems to account for the same effect, balancing the low-mass vs. high-mass trend. We also stress that, when not assuming self-similarity for the redshift evolution of the Planck scaling relation and also sampling the β_SZ parameter, we find results consistent with the baseline analysis and no evidence for a departure from self-similarity. From these combined results on the Planck scaling relation parameters and the estimated mass bias, we can take one main message: the simple model for the mass calibration of tSZ clusters, based on the assumptions of self-similarity, spherical symmetry and hydrostatic equilibrium, needs to be improved towards a more realistic description, at least for the modelling of the mass (and therefore scale) dependence. This is indeed the approach used for the SPT-SZ cluster analysis: the empirical, multi-observable approach used for the mass calibration provides constraints for the different parameters (defined in Eqs. 6-8) without relying on strong theoretical assumptions. As a last point, we discuss the dependence of the measured mass bias on the detection noise. As described in Section 3.3.1, with this parametrization we try to quantify the impact of the detection process on the full cosmological modelling. From our analysis, we find no hint of a noise dependence of the mass bias, with γ_n = 0.05 (+0.06, -0.08). As a further test, we check the results when considering only the noise dependence for the bias, i.e. (1 - b)^th_M = A_n [σ_f(θ_500)/σ_{f,*}(θ_500)]^{γ_n}. In this case, we find A_n = 0.60 (+0.06, -0.14) and γ_n = -0.37 (+0.14, -0.12), pointing to a decreasing trend of the measured bias with respect to the noise. This implies that the M_SZ estimation for clusters detected in patches with higher detection noise is more biased, possibly due to a loss of tSZ signal. On the other hand, when considering only the mass and redshift dependence for the measured mass bias, we find results for the amplitude and the slopes that are fully consistent with what we report in Table 6. This stresses even more that an incorrect characterization of the mass and redshift dependence of the mass-observable relation is still a dominant source of uncertainty with respect to possible systematics coming from the modelling of the cluster selection process.

CONCLUSIONS

In this paper we provide the first combination of Planck and SPT cluster catalogs for a cosmological analysis, with the aim of exploiting the SPT cosmological constraining power to provide an independent evaluation of the Planck scaling relation parameters. We build a new likelihood (labelled "PvSPLIT") to analyze the Planck PSZ2 cosmo sample, removing the clusters and sky patches in common with SPT observations.
The baseline analysis is given by the "SPTcl + PvSPLIT" combination, where we do not rely on the external X-ray and WL calibrations for the mass slope α_SZ and the mass bias (1 - b)_SZ adopted in the original Planck analysis. We summarize our main findings below:

1. We show the strong constraining power of SPT-SZ clusters, which drives the results for the SPTcl + PvSPLIT combination. Focusing on the Planck scaling relation parameters, we find that the SPTcl + PvSPLIT combination provides results comparable in accuracy with the external X-ray and WL calibrations used for the original Planck analysis, with α_SZ = 1.49 (+0.07, -0.10) and (1 - b)_SZ = 0.69 (+0.07, -0.14). We stress that the value of α_SZ that we find is ~4σ lower than the expected self-similar value, α_SZ ≃ 1.8, a result driven primarily by the relatively low values of Ω_m preferred by the SPT data.

2. Through a Monte Carlo extraction approach, we provide new estimates of the Planck cluster masses M_500, obtained by marginalizing over the cosmological and scaling relation parameter posteriors derived from the SPTcl + PvSPLIT analysis. We also provide an evaluation of the M_500 masses for Planck clusters in the PSZ2 cosmo catalog at fixed values of the cosmological and scaling relation parameters. The cluster mass catalogs are available at https://pole.uchicago.edu/public/data/sptplanck_cluster.

3. We provide a measurement of the mass bias, (1 - b)_M, for 433 of the 439 clusters in the PSZ2 cosmo sample (those with a redshift in the original Planck analysis), using the M_SZ measurements from Planck and our estimation of M_500. The constraints for (1 - b)_M account for the uncertainties on the cosmological and scaling relation parameters derived in this work. We study a possible dependence of (1 - b)_M on the cluster mass and redshift, and on the survey detection noise. The aim is to highlight the impact, in the cosmological analysis, of the assumed modelling of the mass-observable relation and of the cluster detection approach. On the one hand, we find (1 - b)_M to have a decreasing trend with cluster mass and an increasing trend with redshift, with slopes γ_M = -0.41 (+0.04, -0.06) and γ_z = 0.81 ± 0.13. On the other hand, we do not see any noise dependence, with γ_n fully consistent with 0.

4. Comparing the results for the scaling relation parameters and the measured mass bias dependencies, we find them to mimic the same effects, mainly a departure from self-similarity for the cluster evolution, and therefore the necessity of considering different dependencies for low-mass vs. high-mass and low-redshift vs. high-redshift clusters.

This analysis confirms the importance of an accurate mass calibration when using cluster counts as a cosmological probe. We find that the simple model for the mass calibration of tSZ clusters, based on the assumptions of self-similarity, spherical symmetry and hydrostatic equilibrium, needs to be improved towards a more realistic description. Furthermore, we stress that the adopted modelling should take into account the cluster sample selection, from the cluster mass-redshift distribution to the impact of the detection approach. This project paves the way towards a full joint analysis of the SPT and Planck cluster catalogs, with a joint mass calibration, allowing more stringent tests of cosmology beyond the flat νΛCDM scenario.
Figure 8. We show the one-dimensional and two-dimensional probability distributions for the cosmological (Ω_m, σ_8) and Planck scaling relation (α_SZ, (1 - b)_SZ) parameters. The contours represent the 68% and 95% CL. We compare results for different dataset combinations, as described in the text.

Figure 9. We show the distribution of mass differences for the 27 clusters in common between the Planck and SPT-SZ cosmological catalogs, as a function of cluster number (left) and redshift (right). We compare the results when considering the mass estimates marginalized over cosmological and scaling relation parameters (in blue, top panels) and obtained with fixed cosmological and scaling relation parameters (in black, bottom panels). The shaded areas represent the 1σ and 2σ intervals of the distribution.

Figure 10. We show the distribution of mass differences for the 27 clusters in common between the Planck and SPT-SZ cosmological catalogs, as a function of cluster number (left) and redshift (right). For the Planck clusters, we consider the M_SZ estimates. For the SPT-SZ clusters, we consider the mass estimates marginalized over cosmological and scaling relation parameters. The shaded areas represent the 1σ and 2σ intervals of the distribution.
UP256 Inhibits Hyperpigmentation by Tyrosinase Expression/Dendrite Formation via Rho-Dependent Signaling and by Primary Cilium Formation in Melanocytes

Skin hyperpigmentation is generally characterized by increased synthesis and deposition of melanin in the skin. UP256, containing bakuchiol, is a well-known medication for acne vulgaris. Acne sometimes leaves dark spots on the skin, and we hypothesized that UP256 may be effective against hyperpigmentation-associated diseases. UP256 was tested for effects on melanogenesis and melanocyte dendrite formation in cultured normal human epidermal melanocytes as well as in reconstituted skin and zebrafish models. Western blot analysis and glutathione S-transferase (GST) pull-down assays were used to evaluate the expression and interaction of enzymes related to melanin synthesis and transport. The cellular tyrosinase activity and melanin content assays revealed that UP256 decreased melanin synthesis by regulating the expression of proteins related to melanogenesis, including tyrosinase, TRP-1 and -2, and SOX9. UP256 also decreased dendrite formation in melanocytes via regulation of the Rac/Cdc42/α-PAK signaling proteins, without cytotoxic effects. UP256 also inhibited ciliogenesis-dependent melanogenesis in normal human epidermal melanocytes. Furthermore, UP256 suppressed melanin content in the zebrafish and the 3D human skin tissue models. Taken together, UP256 inhibits melanin synthesis, dendrite formation, and primary cilium formation, leading to the inhibition of melanogenesis.

Introduction

Hyperpigmentation is a skin pigmentation disorder in which skin becomes discolored, blotchy, or darker than normal. Hyperpigmentation occurs through an excess production of melanin attributed to multiple causes such as age, inflammation, hormone imbalance, and environmental exposure including ultraviolet (UV) radiation [1]. Skin inflammation, burns, wounds, cuts, and any other skin injury can result in either hyperpigmentation or hypopigmentation [2]. These disorders are difficult to treat and lack appropriate therapeutic regimens.

Inhibitory Effects of UP256 on Melanin Synthesis in Melanocytes

We investigated the effect of UP256 on melanin synthesis in normal human epidermal melanocytes (NHEM). The well-known melanogenesis inhibitor phenylthiourea (PTU) [23], used as a positive control, was cytotoxic at both treatment concentrations (1 and 10 µM), while UP256 showed no significant cytotoxicity at its effective concentrations up to 5 µM (Figure 1a). UP256 (5 µM) reduced melanin production in NHEMs by 15%. At 5 µM, the potency of UP256 appeared almost similar to PTU at 10 µM (Figure 1b). We further confirmed the anti-melanogenic effect of UP256 using L-3,4-dihydroxyphenylalanine (L-DOPA) staining, which detects in situ tyrosinase activity (Figure 1c). UP256 treatment at 5 µM significantly inhibited melanin synthesis, similar to PTU at the same concentration (Figure 1d). UP256 showed an inhibitory effect on cellular tyrosinase activity (Figure S1). In contrast, under cell-free conditions, UP256 did not show a significant inhibitory effect on the activity of mushroom tyrosinase (Figure S2), suggesting that the effect on melanocytes was not mediated by direct inhibition of the tyrosinase enzyme by UP256.

Figure 1. Effects of UP256 on melanogenesis in NHEM. Cell viability (a) and melanin content (b) were measured after treatment with UP256 (0.1, 1, and 5 µM) for 72 h. PTU was used as a positive control. In situ tyrosinase activity in NHEMs was observed via L-DOPA staining (c). Scale bar = 100 µm. Relative amounts of stained area were measured with the ImageJ program. The results are calculated as a percentage of the vehicle-treated control (d) and expressed as mean ± SD of three independent experiments (* p < 0.05, ** p < 0.01, and *** p < 0.001, compared with the vehicle-treated control).
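The percent-of-control normalization described in the caption can be reproduced with a few lines of Python. The replicate values below are invented toy numbers, and the use of a two-sample t-test is our assumption, since the statistical test is not specified here.

```python
import numpy as np
from scipy import stats

def percent_of_control(treated, vehicle):
    """Express replicate readouts as % of the vehicle-treated mean,
    returning mean and SD across independent experiments."""
    norm = 100.0 * np.asarray(treated, dtype=float) / np.mean(vehicle)
    return norm.mean(), norm.std(ddof=1)

vehicle = [0.52, 0.49, 0.55]     # toy absorbance values, n = 3
up256_5uM = [0.44, 0.42, 0.46]   # toy absorbance values, n = 3

mean_pct, sd_pct = percent_of_control(up256_5uM, vehicle)
t, p = stats.ttest_ind(up256_5uM, vehicle)   # assumed significance test
print(f"{mean_pct:.1f} +/- {sd_pct:.1f} % of control, p = {p:.3f}")
```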
Effects of UP256 on the Expression of Melanogenic Enzymes in Melanocytes

Tyrosinase, TRP-1, TRP-2, and SOX9 are simultaneously regulated by each other during melanogenesis. We observed the effect of UP256 on the expression of these melanogenic enzymes via western blot analysis (Figure 2a).

Figure 2. (...) and SOX9 (f). The band intensities were quantified, and the integrated areas normalized, first to the corresponding value of GAPDH, and then to the signal observed in the vehicle-treated control. All data are presented as mean ± SD of three independent experiments. * p < 0.05, ** p < 0.01 and *** p < 0.001 compared with the vehicle-treated control.

Effects of UP256 on Rac1, Cdc42, and α-PAK Signaling Proteins in Melanocytes

To determine the role of UP256 in the regulation of small GTP-binding proteins related to dendrite formation in melanocytes, we performed a pull-down assay for cellular GTP-Rac1 and GTP-Cdc42.
As shown in Figure 3a-d, UP256 treatment markedly inhibited Rac1 and Cdc42 activation and decreased α-PAK expression. These data suggest that inhibition of the GTP-Rac1, GTP-Cdc42, and α-PAK pathways is involved in the UP256-induced inhibition of dendrite formation. We used ML141, a Cdc42 inhibitor, and NSC 23766, a selective Rac1 inhibitor, to further confirm our findings. As expected, the inhibitor treatments (20 µM) decreased dendrite levels in a manner similar to UP256 treatment (Figure 3e,f).

Figure 3. Analyses of the effects of UP256 on the expression of proteins responsible for melanocyte dendrite formation (a). The expression of GTP-bound Rac1 (b), Cdc42 (c), and α-PAK (d) was measured using western blotting. The number of cells with more than two dendrites was counted in pictures (e) and represented as the percentage of total cells (f). A total of 300 cells were counted from each experimental group. Scale bar = 50 µm. All data are presented as mean ± SD of three independent experiments. * p < 0.05 and ** p < 0.01 versus the vehicle-treated control.

Inhibitory Effects of UP256 on Cilia Formation in Melanocytes

To investigate the relationship between melanogenesis and cilia formation, we measured cilia formation and melanin content in melanocytes over time (24, 48, and 72 h) (Figure 4a). As shown in Figure 4b,c, we observed that melanin increased slightly over time, and the formation of cilia also increased at 48 and 72 h. We also measured cilia formation in the sample treatment groups (Figure 4d). UP256 treatment decreased cilia formation in melanocytes by approximately 20% compared to the control group, whereas PTU treatment did not change the cilia (Figure 4e).
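Both the dendrite counts (Figure 3f) and the cilia counts (Figure 4e) reduce to the fraction of scored cells positive for a feature. A minimal sketch of that quantification follows, with simulated counts standing in for real image scores; the function name and toy data are ours.

```python
import numpy as np

def positive_fraction(is_positive):
    """Percent of scored cells positive for a feature
    (e.g. more than two dendrites, or a primary cilium)."""
    return 100.0 * np.mean(np.asarray(is_positive, dtype=bool))

rng = np.random.default_rng(1)
dendrites_per_cell = rng.poisson(2.0, size=300)   # 300 scored cells (toy data)
print(f"{positive_fraction(dendrites_per_cell > 2):.1f}% cells with >2 dendrites")
```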
Effects of UP256 on Melanogenesis In Vivo and Ex Vivo

We tested the effect of UP256 on melanogenesis in the zebrafish model as well as in reconstructed skin tissue. UP256 clearly inhibited melanogenesis in the 3D skin model (Figure 5a). No UP256-induced toxicity was seen in hematoxylin and eosin (H&E) staining (Figure 5c). Moreover, melanin production was lower in the UP256-treated group compared with the control group (Figure 5d). We also determined whether UP256 was effective in vivo by treating the zebrafish embryo model with UP256 for 72 h. We observed that UP256 inhibited melanogenesis in the zebrafish embryos as well as in melanocytes (Figure 5b). The melanin amount in zebrafish was slightly decreased after UP256 treatment compared to the control group, but not as much as in the PTU group (Figure S3). In the 3D human skin model, UP256 strongly inhibited the melanin level, similarly to the positive control; in this model, UP256 showed better depigmenting activity than in the zebrafish model.
Melanin synthesis is initiated by hydroxylation of tyrosine to L-DOPA and further conversion of L-DOPA to DOPA-quinone, which acts as a precursor molecule for melanin synthesis via various pathways [24]. Tyrosinase is a crucial enzyme involved Figure 5. Depigmentation effects of UP256 in reconstructed skin and zebrafish. A total of 0.05% UP256 (w/v) and 0.1% PTU (w/v) were applied, respectively, to the reconstructed skin tissue for 14 days. Following the incubation period, the tissue was photographed using a digital camera (a). Synchronized zebrafish embryos were treated with 30 µM UP256 and PTU and observed under a stereomicroscope after 72 h (b). Reconstructed tissues were fixed and stained with H&E (c) and Fontana-Masson silver stains (d). Scale bar = 50 µm. Discussion In our study, we attempted to highlight the possible use of UP256, a natural product, for treating hyperpigmentation and associated diseases. Pigmentation can be regulated following several steps: regulation of melanin synthesis, melanosome transfer to other epidermal cells, and melanosome degradation and turnover [5]. Synthesis of melanin takes place in specialized intracellular organelles called melanosomes, catalyzed by melanogenesis enzymes such as tyrosinase. Mature melanin-filled melanosomes move from the perinuclear region to the dendrite tips of melanocytes. Effective regulation of melanin synthesis is crucial in skin whitening and treating hyperpigmentation. It appears that the reduced level of melanin in NHEMs after treatment with UP256 is due to decreased melanogenic enzyme expression. Melanin synthesis is initiated by hydroxylation of tyrosine to L-DOPA and further conversion of L-DOPA to DOPA-quinone, which acts as a precursor molecule for melanin synthesis via various pathways [24]. Tyrosinase is a crucial enzyme involved in the transformation of L-tyrosine to L-DOPA [25]. Melanocytes were stained with L-DOPA, which detects in situ tyrosinase activity and is a more sensitive indicator of changes in melanin synthesis than the 8 of 12 determination of total melanin levels. Our data showed that UP256 inhibited the melanin content in melanocytes. We further confirmed the depigmenting effect of UP256 by the reduced tyrosinase expression under the same conditions. Two other proteins, TRP-1 and TRP-2, are known as supportive enzymes for tyrosinase [26]. TRP-1 plays an important role in tyrosinase activation and stabilization. TRP-1 further helps to increase melanosome synthesis, and the eumelanin/pheomelanin ratio [27]. Similarly, TRP-2 acts as a Dct that helps eumelanin synthesis, especially via the dopachrome [24]. UP256 inhibits TRP-1 and TRP-2 protein expression, leading to a reduction in their supportive role in melanin synthesis. The reduction in the levels of melanin after treatment with UP256 appears to be a result of a decrease in the expression of melanogenic enzymes, probably due to a reduced expression of MITF. UP256 inhibits expression of transient receptor potential cation channel subfamily M member 1 (TRPM1; also known as melastatin) which is controlled by MITF. Activation of TRPM1 leads to induce the melanogenesis as well as differentiation of melanocytes and UP256 inhibited TRPM1 mRNA expression ( Figure S4). Additionally, SOX9 is also associated with melanogenesis, melanocyte differentiation, and skin pigmentation [28]. SOX9 specifically controls the function of Dct, the tyrosinase promoter, leading to highly stimulated pigmentation upstream [29]. 
Our results showed that UP256 decreased SOX9 expression, followed by a reduction in TRP-1, TRP-2, and tyrosinase, resulting in low melanin production in NHEMs. In melanocytes, melanosomes mature and are trafficked to dendritic tips, where they are transferred to adjacent epidermal keratinocytes through pathways that involve microtubule networks and the actin cytoskeleton. Melanocytic dendrite formation and extension are the foremost steps for melanosome transfer to nearby keratinocytes. Therefore, dendrite formation is critically important for melanosome transfer. GTP-binding proteins regulate cytoskeletal organization, including dendrite formation. In particular, Rho family GTPases including Cdc42, Rac1, and RhoA play a vital role in the process of melanocyte dendrite formation and extension. Rac1 is known to activate dendrite and lamellipodia formation, while Cdc42 is involved in filopodia and outer neurite formation [30]. Given the role of Cdc42 and Rac1 in cytoskeletal organization, it is plausible that UP256 decreases Cdc42 and Rac1 activation and thereby reduces dendrite formation. Additionally, Rac1-α-PAK signaling is a well-known link in dendritic spine formation, and this crosstalk also contributes to melanocyte dendrite formation [31]. In our study, UP256 downregulated dendrite formation because it inhibited GTP-Rac1, GTP-Cdc42, and α-PAK, and it ultimately inhibited melanosome transfer. Our data confirm that UP256-mediated inhibition of dendrite formation is similar in effect to that of the well-known Rho family protein inhibitors ML141 (Cdc42 inhibitor) and NSC 23766 (Rac1 inhibitor). Therefore, it is clear that UP256 regulates Rho GTP-binding proteins, especially Rac1 and Cdc42, which are critical for dendrite formation. The upstream signaling intermediates that regulate Rac1 and RhoA activity require more extensive study, as does the potential crosstalk between dendrites and the keratinocyte membrane at the attachment site. The relationship between melanogenesis and cilia formation appeared contradictory when we measured cilia formation and melanin content in NHEMs over time (24, 48, and 72 h) (Figure 4a). Interestingly, we found that cilia formation increased at 48 and 72 h, while melanin was only slightly increased at those time points. It has been reported that ciliogenesis can be triggered by several types of shock and stress; both UV and heat shock triggered cilia assembly in RPE-1 cells [32,33]. Melanocytes are more sensitive to UVR damage than other cell types. Therefore, melanocytes may play an important role in cell signal processing and melanin synthesis through cilia. However, melanin synthesis induced by exposure to α-MSH was significantly reduced by the induction of primary cilium formation with cytochalasin D (CytoD), and melanin was significantly elevated by treatment with ciliobrevin A (Cilio A), an inhibitor of primary cilium formation [34]. This indicated that melanogenesis could be inhibited by the ciliogenesis enhanced by cytoskeleton depolymerization with CytoD. However, little is known about how ciliogenesis could be increased in resting cells in the absence of the cytoskeletal dynamics induced by CytoD. Our data demonstrate that melanogenesis could be correlated with ciliogenesis depending on the incubation time of NHEMs and suggest that CytoD treatment might inhibit melanogenesis simply by regulating actin polymerization, independently of primary cilium formation.
Therefore, there may be limitations in generalizing the relationship between melanogenesis and cilia formation. In contrast, our results showed that treatment with UP256 significantly decreased cilia formation. Our results were obtained under normal conditions, in which function is maintained as in vivo. Therefore, our results suggest that the depigmenting effect of UP256 may be due to decreased dendrite movement via regulation of ciliary cytoskeleton dynamics in melanocytes. The zebrafish is an excellent and well-established model for in vivo studies involving pigmentation experiments. Based on the inhibitory effect of UP256 on melanogenesis in NHEMs, zebrafish model studies were performed to confirm the effect of UP256 on hyperpigmentation. An additional study was conducted to confirm the effectiveness of the newly discovered anti-melanogenic agent in an artificial skin model. Histological changes resulting from UP256-induced depigmentation were observed using Fontana-Masson staining. UP256 has a potential inhibitory effect on melanin production by reducing tyrosinase, TRP-1 and -2, and SOX9 expression in melanocytes. This anti-melanogenic effect was further confirmed in a 3D skin model and in zebrafish. In addition, UP256 has an apparent regulatory role in melanocyte dendrite formation. Therefore, we conclude that UP256 is a promising potential therapeutic agent for hyperpigmentation-related skin diseases and skin freckles. Cell Viability and In Situ Tyrosinase Activity Cell viability was assessed as previously described [29]. Briefly, cells were incubated with 0.1% (w/v) 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) reagent for 1 h. The resulting formazan crystals were then dissolved in dimethyl sulfoxide (DMSO), and the absorbance was measured at 570 nm using a microplate reader from Molecular Devices (San Jose, CA, USA). To evaluate in situ tyrosinase activity, L-DOPA staining was performed following a previously reported method [30]. Briefly, NHEMs were fixed with 4% paraformaldehyde and permeabilized using 0.1% Triton X-100 reagent. Cells were stained with 0.1% L-DOPA for 3 h at 37 °C and observed under a microscope (Olympus, Tokyo, Japan). Zebrafish Model Zebrafish embryos obtained from the Zebrafish Resource Bank (Kyungpook National University, Daegu, South Korea) were treated with UP256, with PTU as a positive control, for 9-72 h post-fertilization. Treated zebrafish embryos appeared more depigmented compared to the vehicle-treated embryos. This depigmentation was clearly revealed when we observed the zebrafish under a stereomicroscope. 3D Tissue Model MelanoDerm™ is a reconstructed skin model purchased from MatTek (Ashland, MA, USA); it consists of human keratinocytes and human melanocytes cultured to form a multilayered, well-differentiated model of the human epidermis. Reconstructed skin tissue was cultured with EPI-100-NMM medium (MatTek, Ashland, MA, USA) under conditions of 5% CO2 at 37 °C. UP256 was dissolved in a mixture of propylene glycol and PBS (50:50, v/v) and applied to the skin tissue every two days. The tissues were rinsed with PBS, fixed with 10% neutral buffered formalin, embedded in paraffin, and sectioned at 4 µm. The sections were stained with hematoxylin and eosin (H&E) and Fontana-Masson stains, and the stained slides were examined under a light microscope (Olympus, Tokyo, Japan). Detection of Primary Cilia For the detection of primary cilia in vitro (Lee et al., 2019), NHEMs were grown on a coverslip and then incubated for 24-72 h.
Cells were fixed with 4% paraformaldehyde for 10 min, washed three times with cold PBS, and permeabilized with PBST (0.1% (v/v) Triton X-100 in PBS) for 10 min. Then, cells were washed three times and incubated with monoclonal anti-acetylated tubulin antibodies (1:1000, Sigma-Aldrich, St. Louis, MO, USA) and rabbit Arl13b antibodies (1:1000, Rosemont, IL, USA) diluted in PBST for 1 h at room temperature. After washing three times with PBS, cells were incubated with chicken anti-mouse IgG-Alexa 488 (1:1000, Life Technologies, Carlsbad, CA, USA) and anti-rabbit-Alexa 568 (1:2000, Life Technologies, Carlsbad, CA, USA) diluted in PBST for 1 h at room temperature. Nuclei were visualized by staining with DAPI. After washing with PBS, cells were mounted on a glass slide. Primary cilia were observed and photographed at 400× magnification under a fluorescence microscope (Nikon, Tokyo, Japan). Statistical Analyses The results were analyzed using one-way analysis of variance (ANOVA) followed by Bonferroni's test for multiple comparisons, using GraphPad Prism 5.0 (GraphPad Software Inc., San Diego, CA, USA). p-values of <0.05, <0.01, and <0.001 were considered statistically significant. Results are presented as the mean and the standard error of the mean (SEM). Conclusions For the first time, we discovered that UP256, containing bakuchiol, can inhibit melanin synthesis, dendrite formation, and primary cilium formation, leading to the inhibition of melanogenesis; the mechanism might involve the regulation of tyrosinase expression and Rho-dependent signaling. Therefore, we conclude that UP256 is a promising potential therapeutic agent for hyperpigmentation-related skin diseases.
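To make the statistical procedure described above concrete, the following is a minimal sketch of a one-way ANOVA followed by Bonferroni-corrected pairwise comparisons; the group names and melanin-content values are hypothetical placeholders, not data from this study.

```python
# One-way ANOVA with Bonferroni-corrected pairwise comparisons,
# mirroring the "Statistical Analyses" procedure described above.
# All measurement values below are hypothetical placeholders.
from itertools import combinations
from scipy import stats

groups = {
    "vehicle": [100.0, 98.5, 101.2],   # melanin content, % of control
    "UP256":   [62.3, 58.9, 65.1],
    "kojic":   [70.4, 68.2, 72.9],
}

# Global one-way ANOVA across all treatment groups
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Bonferroni correction: multiply each pairwise p-value by the
# number of comparisons, capping the adjusted value at 1.0
pairs = list(combinations(groups, 2))
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    p_adj = min(p * len(pairs), 1.0)
    print(f"{a} vs {b}: adjusted p = {p_adj:.4f}")
```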
Surface integrity in wire-EDM tangential turning of in situ hybrid metal matrix composite A359/B4C/Al2O3 Abstract In this work, wire electric discharge turning, a novel and unconventional technique, was used for the turning operation of a newly developed hybrid metal matrix composite of aluminum (A359/B4C/Al2O3) fabricated in-house by electromagnetic stir casting. The objective of the work was to examine the effect of rotational speed on the elements of surface integrity. It involved the measurement of various parameters such as the roughness (Ra, Rq, Rz), the morphology of the recast layers, microhardness variation, and the formation of residual stresses on the machined surface and in the subsurface during the operation. The quality of the turned surface was examined by 3D surface visualization images and surface topographical details obtained by an Olympus LEXT OLS 3100 laser confocal microscope. Further, the surface was studied at the microscopic level using field-emission scanning electron microscopy (FE-SEM) images to examine the surface defects. The measurement results revealed a successful turning operation, producing a dull machined surface without any specific texture or pattern. The surface had many peaks and valleys with small-scale defects such as surface porosity. However, these defects were negligible and resulted in a smooth surface finish at high rotational speeds. Introduction Metal matrix composites (MMCs) are advanced materials containing a nonmetallic reinforcing phase in the metallic phase of the matrix alloy and having improved properties compared to those of the basic alloy. Aluminum composites are very suitable materials for the automobile and aircraft industries because of their favorable properties such as light weight, high hardness, higher tensile and compressive strengths, better wear resistance, and high corrosion resistance [1][2][3]. Reinforcements are added to the base alloy depending on the final desired properties [4]. The addition of two or more reinforcements into the metal matrix makes hybrid MMCs, which can overcome the negative aspects of an MMC with a single reinforcement [5,6]. MMCs are used in manufacturing industries because of their extensive applications. However, machining of these materials is still a challenging task. The presence of hard reinforcing particles in the MMCs leads to high tool wear. Moreover, carbide tools show significant tool wear even for a very short period of machining [7]. Unconventional machining provides better alternatives for machining these materials compared to conventional machining [7,8]. Thermal erosion processes, such as wire electric discharge machining (WEDM) [8], and cold alternative methods such as abrasive waterjet machining [4,5] are the preferred methods in this regard. WEDM is a thermal erosion process based on spark generation between a wire electrode and an electrically conductive work material [9]. The electric discharge produced during the operation erodes the material from the workpiece by initiating small cracks and craters via melting and vaporizing the material, which is then flushed away by means of a dielectric fluid. The process is suitable for generating intricate 2D and 3D shapes with good accuracy and high surface finish. It is suitable for hard-to-cut materials such as steels, alloys, and composites [10].
The process is capable of making dies and engine parts more efficiently than conventional machining, which involves many processes such as cutting, grinding, and polishing and leads to higher cycle times [11]. WEDM can also be used for turning operations, known as wire electric discharge turning (WEDT), which is an alternative to abrasive waterjet turning [12]. In WEDT, a cylindrical workpiece rotates axially against the traverse motion of the wire electrode (Figure 1), which helps the removal of material from the surface of the workpiece. The major process parameters of WEDM and WEDT are similar; however, in WEDT the rotational speed (in rpm) is also considered, which affects the process outcome [12]. The surface integrity of the turned surface plays a major role in assessing the quality of the machined surface and its subsurface. The characteristics of the subsurface include the thickness of the heat-affected zone (HAZ), which depends on various process parameters [13]. A surface that is free from defects (such as tearing, cracks, phase transformation, plastic deformation, and recrystallization) and has a good roughness value, high fatigue life, and high dimensional accuracy is in great demand in today's manufacturing industries. The WEDM process is greatly affected by the melting and vaporization of the machined surface and its subsurface, which depend on machining parameters such as the discharge current. At high discharge currents, the material undergoes a higher level of melting and vaporization due to the heat generated, which tends to case-harden the machined surface and in some cases extends to a depth beneath the surface [13,14]. These surface studies are also necessary because of the occurrence of microstructural changes, hardening of the layers, and phase transformation, all of which cause variations in the microhardness values. The measurement of microhardness during the machining of MMCs plays an important role in deciding the surface quality. In the case of MMCs, the microhardness of the machined surfaces is characterized by the fine-grained structure and homogeneous mixing of the reinforcements [15,16]. The rapid cooling during WEDT increases uneven solidification. It also leads to an increase in porosity and micro-holes in the recast layer, which can result in sudden changes of microhardness on the machined surface and at depths below it. A significant amount of work has been done on the integrity of surfaces machined by WEDM for flat geometries, but limited work exists on the WEDT of cylindrical surfaces. Masuzawa et al. [17], for the first time, reported the turning operation via WEDM to manufacture small-diameter pins and a shaft, which were used as part of a tool for micro-EDM application. They found that the wire speed, power, and servo voltage were significant parameters for obtaining roundness. However, surface roughness was greatly affected by the power. Haddad et al. [18] investigated the effect of the machining parameters on the surface roughness, roundness, and material removal rate (MRR) in the cylindrical WEDT of AISI D3 tool steel. They examined the integrity of the machined surface and subsurface. Scanning electron microscopy (SEM) images were presented to explain surface defects such as craters and macro-ridges. They also discussed the depth of the HAZ to explain the subsurface properties.
In continuation of their study, Haddad and Tehrani [19] reinvestigated the effect of the machining parameters on MRR in cylindrical WEDT by using the response surface methodology (RSM). It was observed that for maximum MRR, the voltage and power should be fixed at the highest values and the other parameters should be set at minimum. Mohammadi et al. [20] worked on the optimization of MRR using statistical analysis in WEDT. The effect of multiple parameters, such as power, voltage, time-off, wire speed, wire tension, rotational speed, and servo voltage, on MRR was investigated. Signal-to-noise (S/N) ratio analysis was used to obtain the optimal condition. Rajkumar et al. [21] studied the turning of the Al/SiCp MMC by WEDT. A regression equation was derived for MRR for easier prediction. The results showed that high pulse on-time, medium gap voltage, and lower spindle speed led to higher MRR and efficient machining. Jabbaripour et al. [22] studied the microhardness of the recast layer deposited during EDM machining of Ti6Al4V. The results showed that an increase in pulse energy increased the microhardness and thickness of the deposited recast layer. Baki et al. [23] machined the Ti-6Al-4V titanium alloy by WEDT to optimize the input parameters and also to observe their effect on MRR and surface roughness. They implemented grey relational analysis for the optimization of the output obtained by ANOVA and concluded that the proposed methodology could effectively deal with the multiresponse optimization problem. Giridharan and Samuel [24] worked on the multiobjective optimization of MRR and surface roughness in WEDT. The WEDT process was modeled using an artificial neural network with the feedforward backpropagation algorithm and using an adaptive neuro-fuzzy inference system. The experiments were designed based on the Taguchi design of experiments to train the neural network and to test its performance. Janardhan and Samuel [25] used pulse train data analysis to investigate the effect of the machining parameters on the performance of WEDT. They observed that the rotation of the workpiece caused arc regions during WEDT. Ramamurthy et al. [26] compared the surface finish and kerf width produced while machining Ti6Al4V using different wire electrodes. They used the Taguchi L9 array in their study and concluded that the pulse off-time had a very significant influence on the machining responses. It is thus evident from the literature review that most of the research work was focused on the effect of the process parameters on surface roughness and MRR while machining homogeneous, hard materials. In this article, we discuss the evaluation of the turning operation on a newly developed hybrid MMC A359/B4C/Al2O3 [27,28]. The integrity of the surface created during WEDT is analyzed through different measurement techniques. The effect of rotational speed on the surface roughness and MRR is also examined. The quality of the turned surface is examined by a laser confocal microscope and by field-emission scanning electron microscopy (FE-SEM). The parameters of roughness such as Ra, Rq, and Rz are measured using an optical profilometer. The study of recast layers, microhardness depth profiles, and residual stresses in the subsurface is also carried out. Experimental procedure The experimental material consists of the A359 aluminum alloy (Si 8.5-9.5%, Cu 0.2%, Mg 0.5-0.7%, Mn 0.1%, Fe 0.2%, Zn 0.1%, Ti 0.2%, and Al the remainder) as the base metal and B4C and Al2O3 as reinforcing materials.
A359 has good casting and wettability properties. Its thermophysical properties are shown in Table 1. B4C is the third hardest material after diamond and cubic boron nitride; it possesses relatively low specific gravity, high wear resistance, and high impact resistance. Al2O3 is a hard material and is resistant to wear. It has good dielectric properties, high strength, and good thermal conductivity. The properties of B4C and Al2O3 are also shown in Table 1. The reinforcements were added in the proportion of 2% by weight of each component to the base metal. The electromagnetic stir-casting process was used for the fabrication of the hybrid MMC. A detailed description of the working principle and fabrication process has been given in our previous work [29]. The machining work was carried out on a Maxicut e CNC wire-cut electric discharge machine manufactured by Electronica Machine Tool Ltd. A special turning setup was developed and mounted on the conventional WEDM machine (Figure 2) to provide an additional degree of freedom in the rotational axis. A chuck was fitted to a fixture and operated with the help of an electric motor and belt-pulley arrangement to rotate the workpiece against the traverse motion of the wire electrode. The rotational speed of the motor and chuck could be adjusted using regulators. The traverse motion of the wire electrode in the X- and Y-directions of the WEDM machine could be varied to obtain the desired cylindrical shape of the workpiece. The working range and other specifications of the machines are given in Table 2. In the experiment, the turning operation was performed on the developed hybrid MMC. Hence, to measure the effect of rotation during WEDT on the tested hybrid MMC, the rotational speed was selected as the variable parameter. The other technical parameters were kept constant, as shown in Table 3. A cylindrical workpiece of diameter 20 mm was used. The turning speed could be varied from 70 to 700 rpm using this setup. On the basis of a pilot run, it was concluded that small variations in the rotational speed have only an insignificant effect on the surface quality. Hence, to cover a wide range of rotational speed, 200, 400, and 600 rpm were selected to create three different surfaces. The turning mechanism and turning operation are shown in Figure 3. The effect of the rotational speed on the resulting surface roughness and MRR was assessed. The surface roughness [Ra: average roughness; Rq: average root-mean-square (RMS) roughness; Rz: average maximum height of the profile] was measured by a noncontact-type roughness tester (MicroProf FRT optical profilometer). All measured surfaces were scanned, and the scanned data were used to generate the values and graphs using the scanning probe image processor (SPIP) software. Surface roughness values and graphs, along with 3D plots, were recorded and captured simultaneously. The 3D plot of the surface was produced by an in-built sensor (SEN 00003). The other basic measuring parameters of the optical profilometer are shown in Table 4. The MRR was also calculated to quantify the machining outcome using Eq. (1), MRR = π(D² − d²)l/(4t), where D (mm) is the initial workpiece diameter, d (mm) is the final diameter, l (mm) is the length of turning, and t (min) is the machining time. The 2D and 3D topographical details of the machined surfaces and their qualitative description were captured by an Olympus Lext OLS 3100 laser confocal microscope.
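To make Eq. (1) concrete, the following is a minimal sketch of the MRR calculation; only the initial diameter D = 20 mm is taken from the text, while the final diameter, turning length, and machining time are hypothetical values chosen for illustration.

```python
# Material removal rate per Eq. (1): volume of the removed annulus,
# pi*(D**2 - d**2)*l/4, divided by the machining time.
import math

def mrr(D_mm: float, d_mm: float, l_mm: float, t_min: float) -> float:
    """MRR in mm^3/min for a WEDT turning pass."""
    return math.pi * (D_mm**2 - d_mm**2) * l_mm / (4.0 * t_min)

# D = 20 mm from the text; d, l and t are illustrative assumptions
print(f"MRR = {mrr(20.0, 18.0, 15.0, 42.0):.1f} mm^3/min")  # ~21.3
```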
FE-SEM images were also acquired using a Zeiss Supra 55 FE-SEM machine to study the morphological details of the developed MMC and the machined surfaces. For the subsurface analysis, the residual stress was measured with a GIXRD X-ray diffraction machine. When X-rays interact with the sample surface, a diffracted beam is produced which follows Bragg's law, nλ = 2d sin θ, where λ is the wavelength, θ is the diffraction angle, and d is the lattice spacing (called the d-spacing). The mathematical relation of d vs. sin²α (obtained from the XRD machine) is then used to calculate the residual strain ε. The residual stresses are calculated using Hooke's law, taking Young's modulus as 113 GPa and Poisson's ratio as 0.275, as reported by the tensile testing of the developed hybrid MMC. The depth profile of the sample was obtained by polishing and etching ultrathin surface layers down to ~250 μm. The sample for microhardness testing was prepared by metallographic polishing, and the Vickers indentation test was performed at a load of 100 g for 10 s. The depth from the surface was taken to be the same as in the residual stress profile. Results and discussion The hybrid MMC of A359 + 2% Al2O3 + 2% B4C was successfully developed by electromagnetic stir-casting in the laboratory. After testing and characterization, the mechanical properties of the hybrid MMC [25] were measured and are reported in Table 5. The FE-SEM images of the developed hybrid MMC are shown in Figure 4A-F. These figures show the mixing of the reinforcement as well as the casting defects. The solid solution of the base aluminum alloy and an interdendritic network of aluminum-boron or aluminum-silicon eutectic mixture can also be observed. Because of the low density of the B4C particles compared to that of the base metal A359, they float in the aluminum melt, which causes a nonuniform distribution of these particles. Some regions of the MMC show particle-free zones. However, clustering of the reinforcements is seen at some places, which is due to the lack of turbulence during mixing. A higher-magnification image shows the presence of different particles dispersed in the matrix phase, which can be seen as different shades of gray in the images. Some brightly colored phases are also found, which show that there are some particles that reflect the field-emission electrons at a higher rate. These particles may be due to the presence of reinforcements or some other particles of the base metal. Porosity, microcracks, and small-scale blow-holes are also seen in the composite, but they are negligible in view of the improved properties of the composite. Figure 5 shows the hybrid MMC sample turned via WEDT. On visual observation, the surface appeared relatively homogeneous without any cutting lines, tool marks, or specific texture or pattern. The surfaces were not glossy because of the thermal decomposition. On increasing the rotational speed, the surface appeared smoother and more precisely finished. However, small pores became visible on the surface. This is attributed to the melting of the matrix material and simultaneous dielectric flushing, due to which the reinforcements get dislodged from their place and create voids on the surface of the workpiece. This leads to an increase in the surface roughness. The size of these pores was measured to be in the range 5-15 μm.
It was observed that on increasing the rotational speed from 200 to 600 rpm, the average roughness Ra decreased from 6.2 to 4.8 μm, Rq from 7.79 to 6.25 μm, and Rz from 45.92 to 38.5 μm; the corresponding values are shown in Figure 6A-C, respectively. This is attributed to the fact that, in the case of WEDT, the flushing pressure of the deionized water helps to remove the debris and excess material; however, because of the rapid cooling, the solidification rate is nonuniform, which leads to a slightly higher range of surface roughness values. It was also observed that the surface finish showed an improving trend with increasing rotational speed. This is because at higher rotational speeds, the circumferential length of the workpiece crossing the spark zone increases per unit time, but the pulse energy (during pulse ON-time) available remains the same for the same time interval. So, the spark zone formation decreases, which reduces the rough cutting. Figure 6A shows that the surface roughness Ra decreases by 22.58% with the increase in rotational speed. Similar trends are observed for the other roughness parameters (Rq and Rz), which decrease by 19.76% and 16.16%, respectively. On the other hand, for these reasons, the value of MRR decreased by 39.61% on increasing the rotational speed up to 600 rpm, as shown in Figure 7. The MRR ranged from 32.5 down to 19.5 mm³/min, with slower material removal observed at increasing rotational speeds. A similar trend was also reported by other authors [10,30,31]. This is due to the fact that the machine itself adjusts the traverse speed in the range 0.1-0.3 mm/min. The major difference observed between the cutting and turning operations by the wire EDM process is in the generation of the effective spark. The amount of effective spark produced is less in WEDT than in wire electric discharge cutting (WEDC) for the same time interval and for the same travel length per unit time. The 2D and 3D visualizations of the machined surface at different rotational speeds are shown in Figure 8A-C. It is seen from the 3D images that a circular curvature without any undulation is present on all surfaces. Small-scale porosity as well as voids are observed on the surfaces, which follow a decreasing trend at higher rotational speeds. The roughness measurements already indicated that the surface roughness increases at lower rotational speeds. At lower speeds, the discharge energy per unit time per unit length is higher, which causes a high rate of melting and vaporization of the workpiece surface, and vice versa. The other reason behind the higher surface roughness is the formation of voids. The reinforcement particles, which were hard to cut and came in the path of the wire electrode, got dislodged from their places and created voids on the surface of the workpiece due to the melting and flushing action of the dielectric fluid. The reinforcement percentage is another important factor affecting these surface defects. The melting of the matrix metal is also obstructed by the clustering of the hard reinforcement particles in the MMC. As a result, they split by brittle fracture and create voids at those locations. The 2D and 3D surface topographical images of the machined surfaces captured using an Olympus Lext OLS 3100 laser confocal microscope at the different rotational speeds are shown in Figure 9A-C. The images show the uneven melting of the surface due to the thermal erosion phenomenon.
The workpiece material melts and resolidifies during the process, and this results in the generation of craters, peaks, and valleys over the entire machined surface. It is to be noted that in the no-spark zone, the solidification of the recast layer is nonuniform, which also creates several peaks and valleys. These valleys and craters are mainly responsible for the roughness of the material. Small microcracks are also observed on the uneven surfaces because of the nonuniform cooling of the surface. Moreover, some of the wire material also gets deposited on the machined surface. The FE-SEM images of the surface show the melting and vaporization of the machined surface, the deterioration of the surface, as well as surface defects such as craters, microcracks, and cavities (Figure 10). Some spherical nodules (droplets) of the resolidified metal are observed on the machined surface. This is due to the nonuniform solidification, i.e. the molten metal does not form a homogeneous phase and appears separately in the form of nodules because of surface tension [32]. The top layer formed by the resolidification of the metal is called the recast layer (Figure 11). This layer is formed as a result of the resolidification of the workpiece material as well as the decomposition of the wire material (zinc and brass), which is neither vaporized nor flushed out by the deionized-water flushing pressure. Microcracks and a small amount of porosity are also seen in the recast layers, which may be due to the high surface residual stresses during quenching or resolidification of the molten material. These defects lead to reduced hardening of the top of the machined layer. It was seen from the analysis that at lower rotational speeds, the presence of microcracks and craters on the surface was higher. This is due to the high pulse energy per unit length of the workpiece, which causes more melting of the surface and leads to deep craters and severe microcracks, vaporization, and resolidification. Because of the thermal erosion of material from the machined surface, an HAZ is formed. This changes the properties of the machined surface as well as the subsurface [10]. Initially, without machining, the residual stress is nonsignificant or negligible but compressive in nature due to the natural solidification and shrinkage of the cast hybrid MMC before turning [27]. It is seen from the graphs (Figure 12) that residual tensile stresses are found in the case of WEDT because of the thermal erosion process. This is attributed to the fact that thermal stresses are induced in the HAZ below the machined surface. Figure 12 shows the variation of the residual stress through the depth from the surface of the machined sample for the different rotational speeds. The residual tensile stress at a depth of 20 μm was found to be 340, 324, and 310 MPa at 200, 400, and 600 rpm, respectively. However, with increasing depth from the surface toward the core of the workpiece, the effect of the thermal erosion process becomes negligible. The stress returns to the original residual stress of the cast hybrid MMC before turning, which is compressive in nature, at a depth of approximately 250 μm. Other researchers have also reported the generation of residual tensile stresses in the subsurface [10,33]. The graph also indicates a decreasing trend of the residual tensile stresses with increasing rotational speed. This is because at lower rotational speeds, the maximum discharge energy per unit circumferential area per unit time is available.
The material thus has more time for melting, vaporizing, quenching, and resolidification over a particular area, which tends to increase the HAZ and the thermal stresses. Other phenomena, such as the decomposition and resolidification of workpiece and wire electrode material at the machined surface, are probably also responsible for the increase in the residual stress. The results of the microhardness testing are shown in Figure 13. Initially, the microhardness of the sample was 200 HV before machining. Microhardness was measured along the cross-sectional plane at an initial indentation depth of 20 μm. It is observed from the graph that, for WEDT, the microhardness values at a depth of 20 μm are 165, 176, and 185 HV at 200, 400, and 600 rpm, respectively. These values are lower than the hardness value of the as-cast hybrid MMC. This is attributed to the reduced hardness of the recast layer. The porosity and micro-holes in the recast layer are high at high discharge energy, which tends to reduce the hardness [10]. Another reason is the oxidation of elements deposited in the recast layer, such as zinc and copper from the brass wire, which decreases its hardness [33][34][35]. When the rotational speed increases, the available discharge energy per unit circumferential length of the workpiece decreases. This reduces the thickness of the recast layer, and hence the microhardness shows an increasing trend. As the depth increases beyond the recast layer, the microhardness of the machined sample increases by ~10-15% up to a depth of 50 μm. This is due to the microstructural changes of the machined surface during the erosion process. However, these increments become negligible at a depth of 250 μm. Conclusion The turning operation of the hybrid MMC A359/B4C/Al2O3 was successfully performed by WEDT. It can be concluded from the results that WEDT is suitable for hard-to-cut conductive materials such as MMCs. On the basis of the surface topography results, it could be concluded that the WEDT surface has a dull appearance because of the deposition of the resolidified layer, which is thermally affected, but the surface is free from any particular surface texture or cutting marks. However, a few surface defects such as porosity were observed on the machined surface, which were associated with the dislodging of reinforcements from the workpiece surface due to strong flushing and circumferential melting of the matrix material during the turning operation. The surface roughness (Ra) values were in the range 4.8-6.2 μm, showing a decreasing trend with increasing rotational speed. The MRR value decreases on increasing the rotational speed. The major difference observed between cutting and turning by the wire EDM process is in the generation of the effective spark: the amount of effective spark produced is less in WEDT than in WEDC for the same time interval and for the same travel length per unit time. The surface topographical images also revealed that all the machined surfaces had small craters, valleys, and peaks. Overall, the desired surface finish was obtained with few machining defects. Some microcracks were also noticeable on the surface. A recast layer comprising resolidified work material and wire material was also observed on the machined surface. The HAZ in the machined sample was found down to 250 μm from the machined surface, which tended to induce residual stresses and changes in the microhardness of the machined sample.
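As a closing illustration of the residual-stress evaluation described in the experimental procedure, the following is a minimal sketch of the d vs. sin²α method: a linear fit of the measured d-spacing against sin²α is converted into stress via Hooke's law, using the E = 113 GPa and ν = 0.275 reported above; the d-spacing values themselves are hypothetical placeholders.

```python
# sin²α (often written sin²ψ) residual-stress evaluation: the
# normalized slope of d vs. sin²α gives the strain derivative,
# which Hooke's law converts into a biaxial surface stress.
import numpy as np

E = 113e9    # Young's modulus, Pa (from tensile testing, per text)
nu = 0.275   # Poisson's ratio (from tensile testing, per text)

sin2_alpha = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
d = np.array([1.22100, 1.22118, 1.22137, 1.22155, 1.22173])  # Å, illustrative

slope, d0 = np.polyfit(sin2_alpha, d, 1)   # linear fit: d = slope*sin²α + d0
strain_per_sin2 = slope / d0               # normalized slope dε/d(sin²α)
sigma = E / (1.0 + nu) * strain_per_sin2   # residual stress, Pa

print(f"residual stress ≈ {sigma / 1e6:.0f} MPa (positive = tensile)")
```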
TrkB-ICD Fragment, Originating From BDNF Receptor Cleavage, Is Translocated to Cell Nucleus and Phosphorylates Nuclear and Axonal Proteins The signaling of brain-derived neurotrophic factor (BDNF) has been suggested to be impaired in Alzheimer's disease (AD), which may compromise the function of BDNF upon neuronal activity and survival. Accordingly, decreased levels of BDNF and its tropomyosin-receptor kinase B-full-length (TrkB-FL) have been detected in human brain samples of AD patients. We have previously found that neuronal exposure to amyloid-β (Aβ) peptide, a hallmark of AD, leads to calpain overactivation and subsequent TrkB-FL cleavage, leading to decreased levels of TrkB-FL and the generation of two new fragments: a membrane-bound truncated receptor (TrkB-T′) and an intracellular fragment (TrkB-ICD). Importantly, we identified this TrkB-FL cleavage and TrkB-ICD presence in human brain samples, which indicates that this molecular mechanism contributes to the loss of BDNF signaling in humans. The exact role of this TrkB-ICD fragment is, however, unknown. Here, we used a human neuroglioma cell line and rat cortical primary neuronal cultures to track TrkB-ICD intracellularly. Our data show that TrkB-ICD is a relatively stable fragment that accumulates in the nucleus over time, through a phosphorylation-dependent process. We also found that TrkB-ICD has tyrosine kinase activity, inducing the phosphorylation of nuclear and axonal proteins. These findings suggest that TrkB-ICD may lead to a dysregulation of the activity of several proteins, including proteins in the nucleus, to where TrkB-ICD migrates. Since TrkB-ICD is formed by Aβ peptide-induced cleavage of TrkB-FL, the present data highlight a new mechanism that may have a role in AD pathophysiology. INTRODUCTION Alzheimer's disease (AD) is a slowly progressing neurodegenerative disease, leading to atrophy and neuronal loss in specific brain regions, particularly the hippocampus, thereby leading to cognitive impairments (Huang and Mucke, 2012). Together with Tau protein hyperphosphorylation, amyloid-β (Aβ) peptide has been considered one of the main players in AD progression (Huang and Mucke, 2012). It has been suggested that brain-derived neurotrophic factor (BDNF) signaling, one of the major pathways responsible for endogenous neuroprotection, is dramatically disrupted in AD (Phillips et al., 1991; Connor et al., 1997; Ferrer et al., 1999; Arancibia et al., 2008; Zuccato and Cattaneo, 2009; Diniz and Teixeira, 2011; Kemppainen et al., 2012; Nagahara et al., 2013; Jerónimo-Santos et al., 2015). BDNF, through the activation of the full-length isoform of tropomyosin-receptor kinase B (TrkB-FL), promotes neuronal growth, survival, differentiation and synaptic plasticity, thus contributing to the homeostasis of the mammalian nervous system (Huang and Reichardt, 2001). In addition to TrkB-FL, BDNF can also activate truncated (Tc) isoforms (TrkB-Tc) that lack the tyrosine kinase domain and act as negative modulators of BDNF signaling (Stoilov et al., 2002). In the brain of AD patients there is a molecular dysregulation of the main players of BDNF signaling, namely decreased levels of BDNF and TrkB-FL and increased levels of TrkB-Tc (Phillips et al., 1991; Connor et al., 1997; Ferrer et al., 1999; Kemppainen et al., 2012).
In addition, as we recently found in cell cultures, Aβ peptide, through extrasynaptic NMDA receptors, promotes an increase of intracellular calcium levels, leading to the overactivation of calpains, which then promote TrkB-FL cleavage (Jerónimo-Santos et al., 2015; Tanqueiro et al., 2018). This process leads to a decrease of TrkB-FL levels and the generation of two distinct fragments: a membrane-bound truncated receptor (TrkB-T′) and an intracellular fragment (TrkB-ICD). In addition, we have already confirmed TrkB-FL cleavage and the consequent TrkB-ICD formation in the human brain (Jerónimo-Santos et al., 2015). In this work we characterized TrkB-ICD regarding its stability, localization and molecular function in rat neuronal cells and in a neuroglioma cell line. We found that TrkB-ICD is a stable fragment which, over time, translocates into the nucleus and phosphorylates nuclear and axonal proteins. Taken together, these data strongly suggest that TrkB-FL cleavage could be an important step of AD pathophysiology, since it leads to a loss of BDNF signaling and furthermore forms an intracellular fragment that might propagate Aβ toxicity to the neurons. TrkB-ICD Fragment: Determination of Half-Life Time and Intracellular Localization TrkB-ICD stability in vitro was assessed by determining its half-life time (T1/2). After 16 h of transfection with the TrkB-ICD vector, H4 cells were treated with cycloheximide (CHX, 5 µM), an inhibitor of protein biosynthesis, for 8 h and 24 h. TrkB-ICD levels were quantified at 0 h, 8 h and 24 h after CHX treatment; a time-dependent gradual decrease of TrkB-ICD expression levels was detected (Figures 1A,B). After 8 h of CHX exposure there was a significant decrease in TrkB-ICD expression levels (p < 0.0001) to near 50% of the value at time 0 (Figure 1B), whereas at 24 h of incubation with CHX only residual levels of TrkB-ICD were detected (p < 0.0001; Figures 1A,B). Data obtained using primary neurons follow a similar pattern (Supplementary Figure S1A). Mathematical treatment (Belle et al., 2006) of the data obtained in H4 cells (Figure 1C) gave a degradation rate constant of k = 0.086 h⁻¹ and an estimate of T1/2 = ln(2)/k of approximately 8 h. To assess TrkB-ICD subcellular localization we started with an in silico approach and evaluated the presence of nuclear localization sequences (NLS), which direct the nuclear import of proteins (Lange et al., 2007; Kosugi et al., 2009). The analysis of the TrkB-ICD sequence performed by an NLS prediction algorithm (Figures 1D,E, cNLS Mapper software) revealed the presence of two bipartite NLS. In this algorithm, scores of 9-10 indicate proteins exclusively present in the nucleus, whereas scores of 1-2 indicate proteins only present in the cytoplasm. In our case, both predicted sequences are characteristic of proteins that could be localized either in the nucleus or in the cytoplasm (scores of 5.5 and 6.5; Kosugi et al., 2009). In TrkB-ICD-transfected primary neurons there was a time-dependent progressive increase in the nuclear expression of TrkB-ICD. After 4 h and 8 h of transfection, TrkB-ICD was dispersed throughout the cell, while after 16 h and 24 h of transfection we observed a significant increase in the proportion of cells with TrkB-ICD staining exclusively in the nucleus (41.7% and 73.8%, respectively; Figure 1F). Representative immunofluorescence images for 4 h- and 24 h-transfected cells are shown in Figure 1G.
That TrkB-ICD is progressively translocated to the nucleus could also be concluded by using a subcellular fractionation protocol, which allows three fractions to be distinguished: N, enriched in nuclear proteins; C&M, enriched in cytosolic and membrane proteins; and H, total homogenate. Figure 1H shows data from 24 h-transfected neurons, revealing that TrkB-ICD is present with stronger intensity in the N fraction than in the C&M fraction. TrkB-ICD nuclear translocation was also detected in H4 cells, however with a different temporal pattern, since the presence in the nucleus was only detected after 48 h of transfection (Supplementary Figure S1B). TrkB-ICD Fragment: Characterization of Tyrosine Kinase Activity and Its Influence Upon Nuclear Translocation Considering that TrkB-ICD contains the TrkB-FL tyrosine kinase domain, and also that phosphorylation is a central step in many biological processes, we evaluated whether this fragment could per se present kinase activity. To do so, we used an antibody (PY99) that specifically detects phosphotyrosine-containing proteins and evaluated, through western blotting and immunofluorescence assays, the phosphotyrosine immunoreactivity of 24 h-transfected primary neurons. As shown in Figure 2B, expression of TrkB-ICD induced a massive phosphorylation pattern of several proteins. Similar results were found in transfected H4 cells (Supplementary Figure S1C). Importantly, only cells expressing TrkB-ICD showed staining for proteins phosphorylated at tyrosine residues. Moreover, our data reveal that during the 24 h transfection period, TrkB-ICD could induce the phosphorylation of somatic, nuclear and axonal proteins (Figures 2A1,A2). We then evaluated whether TrkB-ICD nuclear translocation was dependent on its kinase activity. To do so, immunofluorescence assays were performed using 24 h-transfected primary neurons incubated with K252a (200 nM), which inhibits Trk-FL kinase activity (Ohmichi et al., 1992). Under these conditions, TrkB-ICD immunolabeling was dispersed throughout the cell body and axons (Figure 2D). Accordingly, there was a marked decrease in the percentage of cells with staining exclusively in the nucleus (from 73.8% to 14.8%, Figure 2C) and a marked increase in TrkB-ICD detection in the cytoplasmic and membrane fraction (Figure 2E). These data strongly suggest that kinase activity is a requisite for nuclear translocation of the TrkB-ICD. To further characterize TrkB-ICD kinase activity, we evaluated the mechanisms underlying the phosphorylation profile. Accordingly, 24 h-transfected primary neurons were incubated with inhibitors of crucial signaling pathways. We measured the ratio between the levels of phosphorylated proteins and TrkB-ICD levels, to cancel out any influence of these drugs on the transfection/expression of TrkB-ICD. Indeed, some drugs affected TrkB-ICD levels (Figure 2F), possibly by influencing either its production and/or its degradation. Importantly, however, the ratio of the total amount of phosphorylated proteins over the amount of TrkB-ICD was markedly decreased by inhibitors of protein kinase A (PKA) and PKC (H-89 and staurosporine (STS), respectively), an effect with a magnitude similar to the K252a effect (p = 0.0292, p = 0.0039 and p = 0.0045, respectively, when compared to non-treated TrkB-ICD-transfected cells). These data indicate that PKA and PKC belong to the signaling cascade operated by TrkB-ICD that leads to protein phosphorylation (Figure 2G).
Finally, to further assess the role of TrkB-ICD as a trigger of phosphorylation, we evaluated whether inhibition of its kinase activity, as well as inhibition of its de novo synthesis, would affect overall phosphorylation. When 16 h-transfected primary neurons were incubated with K252a (200 nM) for 2 h, 8 h and 24 h, the overall levels of protein phosphorylation markedly decreased, being already nearly absent after 2 h of incubation with the tyrosine kinase inhibitor (Figure 2H). This indicates that phosphorylated proteins are quickly de-phosphorylated in the absence of TrkB-ICD activity. When 16 h-transfected primary neurons were treated with CHX for the different periods, there was a progressive decline in the levels of phosphorylated proteins (Figure 2I), which was accompanied by a progressive decline of TrkB-ICD itself, thus reinforcing the conclusion that TrkB-ICD is a trigger for protein phosphorylation. DISCUSSION The present work demonstrates, for the first time, that TrkB-ICD: (I) is a relatively stable protein; (II) phosphorylates several proteins; and (III) accumulates in the nucleus (Figure 3). Previously, we described that TrkB-FL could be cleaved upon Aβ peptide accumulation (Jerónimo-Santos et al., 2015). This cleavage has been demonstrated in several excitotoxic conditions (Gomes et al., 2012; Vidaurre et al., 2012; Danelon et al., 2016) and may have a major impact in long-lasting excitotoxic conditions, such as Aβ peptide accumulation in AD. It is widely known that BDNF and TrkB-FL constitute a major signaling pathway in the developing and adult mammalian brain, being implicated in neuronal differentiation, growth, survival and plasticity (Lewin and Barde, 1996; Huang and Reichardt, 2001). Therefore, the appropriate function of this signaling pathway is crucial for central nervous system homeostasis, and its dysregulation might lead to neuronal damage. It is already known that an impaired BDNF/TrkB-FL system plays an important role in the pathogenesis of AD (Phillips et al., 1991; Connor et al., 1997; Ferrer et al., 1999; Arancibia et al., 2008; Zuccato and Cattaneo, 2009; Kemppainen et al., 2012; Nagahara et al., 2013; Jerónimo-Santos et al., 2015). Existing data demonstrate that pro-BDNF, BDNF and TrkB-FL levels are reduced in the brain of AD patients, while TrkB-Tc levels are increased (Connor et al., 1997; Ferrer et al., 1999; Michalski and Fahnestock, 2003). Importantly, the overexpression of TrkB-FL in the AD APP/PS1 mouse model reduces memory impairment (Kemppainen et al., 2012). Though attention has been focused on the loss of function of TrkB-FL, knowledge of the action of the intracellular fragment is crucial for a full understanding of the mechanisms underlying AD pathophysiology and thus for the design of strategies to mitigate its progression. This is particularly important if one considers the high stability of TrkB-ICD when compared to other intracellular fragments. For instance, the p25 fragment is only detected for around 3 h after an insult (Patrick et al., 1999). Being a more stable protein fragment, TrkB-ICD can probably have larger effects on cellular processes. Its translocation and accumulation in the cell nucleus further suggest a persistent effect of TrkB-ICD on cellular homeostasis. In addition, it is also known that AD is characterized by a dysregulation in phosphatase activity, leading to an overall increase in kinase activity (Kuban-Jankowska et al., 2015).
Given the spontaneous kinase activity of the TrkB-ICD fragment, we can hypothesize that this fragment could also influence the kinase/phosphatase balance, leading to increased levels of phosphorylated proteins, such as Tau. Nevertheless, it is important to acknowledge that we observed a slight difference in the phosphorylation levels, detected by the PY99 antibody, between control and transfection with the empty vector (EV). This difference could be attributed to the transfection process, but it does not alter the main finding herein described: the strong tyrosine phosphorylation mediated by the TrkB-ICD fragment. On the other hand, despite the evidence that the expression of TrkB-ICD induces a robust protein phosphorylation, it is not clear whether it does so by directly phosphorylating target proteins or by an indirect mechanism, inducing the phosphorylation of intermediate proteins. The Ser/Thr kinase family may also provide some of these intermediate proteins, since their inhibition affects overall protein phosphorylation, suggesting that TrkB-ICD may have targets other than tyrosine residues alone. Additionally, TrkB-ICD could also be affecting gene expression in the nucleus, similar to the fragment formed from β-catenin cleavage (Abe and Takeichi, 2007). Our data also demonstrate that TrkB-ICD translocation to the cell nucleus is dependent on tyrosine kinase activity. This is in accordance with previous findings that NLS function can be upregulated by phosphorylation, facilitating the nuclear translocation of several proteins, namely enzymes devoted to mediating nuclear translocation processes, such as importins (Nardozzi et al., 2010). Our data show that the protein phosphorylation mediated by TrkB-ICD is not restricted to the nucleus. It can affect different cell compartments and may require the activity of kinases other than tyrosine kinases, namely PKA, as assessed by using H-89, a PKA inhibitor. Data obtained with staurosporine may suggest the involvement of PKC, but since staurosporine may also inhibit tyrosine kinase activity (Ohmichi et al., 1992), this should be taken into account. Given that TrkB-ICD possesses the kinase domain of the TrkB-FL receptor, it could directly activate the canonical pathways triggered by TrkB-FL (PLCγ, PI3K/AKT and MAPK pathways), having therefore a protective role. Although we do not exclude that hypothesis, it is unlikely that TrkB-ICD has this putative protective role, since TrkB-ICD only has the anchorage site for PLCγ and we did not detect (results not shown) any activation of the three pathways. Furthermore, these mediators of TrkB-FL signaling are in close vicinity to the cytoplasmic membrane, and our data suggest that TrkB-ICD is mainly located in the nucleus. Calpain-mediated cleavage of several proteins, namely p35, β-catenin or mGluR1α, leads to the formation of intracellular fragments known to be involved in neuronal death, hyperphosphorylation of Tau protein, excitotoxicity, and gene transcription modifications (Patrick et al., 1999; Abe and Takeichi, 2007; Xu et al., 2007). The present work, by showing that TrkB-ICD is stable and affects the degree of phosphorylation of several proteins (cytoplasmic, axonal and nuclear), highlights a novel mechanism through which Aβ peptide could induce neuronal damage. Besides the loss of neuroprotection due to TrkB-FL cleavage, a gain of toxic function might occur due to the formation of a stable fragment with high phosphorylating potential, the TrkB-ICD.
Since this overexpression of TrkB-ICD is mechanistically different from the TrkB-ICD released by Aβ-induced TrkB-FL cleavage, one should acknowledge that the findings herein described could differ from those triggered by TrkB-ICD in vivo. In spite of the different origins of TrkB-ICD in those conditions, the TrkB-ICD sequence is exactly the same. In addition, the transfection method, even with its intrinsic limitations, allows the evaluation of the impact of TrkB-ICD on the cell environment per se, without any other effect that could be attributed to Aβ peptide and/or calpain activation. Current therapies for AD only alleviate symptoms and do not target the etiology itself. Our data highlight a new diagnostic and/or therapeutic target against AD by showing that the TrkB-ICD fragment might play an important role in the Aβ toxicity cascade. This constitutes a step forward towards the clarification of the molecular mechanisms underlying AD pathophysiology. FIGURE 3 | BDNF and its functional receptor TrkB-FL constitute one of the major pathways responsible for endogenous neuroprotection. After recognition of homodimeric BDNF, the TrkB-FL receptor dimerizes and transactivates its tyrosine kinase domain, triggering three different signaling pathways (MAPK/Erk, PI3K/Akt and PLCγ), which promote neuronal growth, survival, differentiation and plasticity and regulate learning and memory processes. This system is therefore crucial to the maintenance of homeostasis of the mammalian nervous system (Huang and Reichardt, 2001). However, in Alzheimer's disease it is already known that Aβ peptide can lead to an increase in intracellular Ca2+ mediated by overactivation of extrasynaptic NMDA receptors (NMDAr) (Tanqueiro et al., 2018). This increase in Ca2+ levels promotes a sustained activation of calpains, which cleave TrkB-FL and form two different fragments: a membrane-bound fragment, TrkB-T′, and an intracellular fragment, TrkB-ICD (Jerónimo-Santos et al., 2015). In this work, using different techniques and samples from a neuroglioma cell line and primary neuronal cultures, we clearly showed that TrkB-ICD is a stable fragment which is translocated to the cell nucleus after its formation and which phosphorylates axonal, somal and nuclear proteins. ETHICS STATEMENT The protocol was approved by the iMM's Institutional Animal Welfare Body (ORBEA-iMM), the National Competent Authority, DGAV (Direção-Geral de Alimentação e Veterinária), and by the Ethical Commission of Centro Hospitalar de Lisboa Norte and Centro Académico de Medicina de Lisboa. In addition, this study was carried out in accordance with the recommendations of "Directive 2010/63/EU".
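To illustrate the half-life estimate reported above (k = 0.086 h⁻¹, T1/2 = ln(2)/k ≈ 8 h), the following is a minimal sketch of a first-order decay fit; the normalized TrkB-ICD levels are hypothetical placeholders consistent with the reported ~50% decrease at 8 h and residual levels at 24 h.

```python
# First-order decay fit for protein half-life after cycloheximide:
# N(t) = N0 * exp(-k*t), so ln(N) is linear in t with slope -k,
# and the half-life follows as T1/2 = ln(2)/k.
import numpy as np

t = np.array([0.0, 8.0, 24.0])          # h after CHX treatment
levels = np.array([1.00, 0.50, 0.13])   # TrkB-ICD / time-0 ratio (illustrative)

k = -np.polyfit(t, np.log(levels), 1)[0]   # decay rate constant, 1/h
t_half = np.log(2) / k

print(f"k = {k:.3f} 1/h, T1/2 = {t_half:.1f} h")   # k ≈ 0.086 -> ~8 h
```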
Derivatives Risks as Costs in a One-Period Network Model

We present a one-period XVA model encompassing bilateral and centrally cleared trading in a unified framework, with explicit formulas for most quantities at hand. We illustrate possible uses of this framework for running stress test exercises on a financial network from a clearing member's perspective, or for optimizing the porting of the portfolio of a defaulted clearing member.

Introduction

In the wake of the 2008-09 global financial crisis, clearing through central counterparties (CCPs) has become mandatory for standardized derivatives, other ones remaining under a bilateral setup with higher capital requirements. One role of the CCPs is to provide to their clearing members fully collateralized hedges of their market risk with their clients. But this comes at a cost to the clearing members, which pass it to their corporate clients in the form of XVA (cross-valuation adjustments) add-ons. Bearing in mind that the risks of a hedge are, by definition, of the same magnitude as the ones of the originating position, and that standardized derivatives usable as hedging assets have to be traded through CCPs, the XVA footprint of not only bilateral but also centrally cleared trading is significant and should be analyzed in detail, which is the topic of this paper. (This article represents the opinions of the author, affiliated with BNP Paribas Stress Testing Methodologies & Models, and is not meant to represent the position or opinions of BNP Paribas or its members; corresponding author: dorinel.2.bastide@bnpparibas.com.)

More precisely, the trades of a clearing member bank with a CCP are partitioned between proprietary trades, which are in effect hedges of the bilateral trading exposure of the bank, and back-to-back hedges of so-called cleared client trades, through which non-member clients gain access to the clearing services of a CCP. Earlier work focuses on the XVA analysis of a bank only acting as a clearing member of one CCP, without proprietary trading. The present paper provides an integrated XVA analysis in the realistic situation of a bank dealing with many clients and CCPs, through both proprietary (also dubbed house) accounts and client accounts. For the sake of tractability, this is achieved in a stylized one-period setup, fine-tuned to applications including risk assessment in the context of stress test exercises or optimizing the porting of the portfolios of defaulted clearing members.

The first type of application is motivated by the default in 2020 of Ronin Capital, a broker/dealer firm that had clearing exposures on both the CCP services Fixed Income Clearing Corporation (FICC) GSD segment (123 members) and CME Futures (56 members, of which 24 common with FICC GSD). If all members are assumed to be only exposed to these CCPs and their cleared clients, we can illustrate this relationship by the network depicted in Figure 1.1. Any common member on those two CCPs needs to ensure a conservative risk assessment, which can be achieved in the proposed framework by accounting for the common memberships on the two CCPs. If such common memberships are ignored, they can lead to lower loss estimates, giving a wrong risk view of potential losses. The second type of application is an illustration of the porting of a defaulted portfolio, as has been the case for the trader Einar Aas on NASDAQ OMX, who defaulted in 2018 with loss spill-over effects on the surviving members. Section 2 sets the stage. Section 3 develops the corresponding XVA analysis.
Sections 4 to 6 develop two applications in the above veins. Section 7 concludes.

Figure 1.1: Network consisting of two CCPs (in red), the 123 members of CCP1 on the left-hand side and the 56 members of CCP2 on the right-hand side, with the 24 common members displayed as the group of members in the middle between the two CCPs (155 members in total, in blue), and with 179 cleared clients (in green).

Market risk is thus passed on, in turns, from the bank to the CCP, from the CCP to other clearing members, and from the latter to their own clients. As a consequence, the CCP is flat in terms of market risk, as is also each of the clearing members.

Defaults Settlement Rule

As reasserted in the wake of the 2008-09 global financial crisis by the Volcker rule, a dealer bank should be hedged as much as possible, at least in terms of market risk. Jump-to-default risk, on the other hand, is hardly hedgeable in practice. Instead, it is mitigated through netting and collateralization. Namely, designated netting sets of transactions between two given counterparties (two individual participants, or a participant and the CCP) are jointly collateralized, i.e. guaranteed against the default of one or the other party. The collateral (or guarantee) comprises a variation margin, which tracks the mark-to-market (counterparty-risk-free value) of the netting set between the two parties, and nonnegative amounts of initial margin posted by each party to the other, which provide a defense against the risk of slippage of the value of the netting set away from its (frozen) variation margin during its liquidation period. In the case of transactions with a CCP, there is an additional layer of collateral in the form of the (funded) default fund contributions of the clearing members, which is meant as a defense against extreme and systemic risk. For each participant, variation margin is rehypothecable and fungible across all its netting sets. Initial margin is segregated at the netting set level. Default fund contributions are segregated at the clearing member level. The general rule regarding the settlement of contracts of a defaulted netting set is that:

Assumption 2.1 If a counterparty in default is indebted toward the other beyond its posted margin, then this debt is only reimbursed at the level of this posted margin (assuming a zero recovery rate of the defaulted party, for simplicity in this paper); otherwise, the debt between the two parties is fully settled.

Here debt is understood on a counterparty-risk-free basis. This rule also applies to a netting set of transactions between a clearing member and a CCP. However, in our stylized setup, a CCP is nothing but the collection of its clearing members. Our CCP has no resources of its own; in particular, it cannot post any default fund contribution, or "skin-in-the-game" (such an additional protection layer, though quite common in practice, is of marginal magnitude compared to the other protection layers; by omitting the skin-in-the-game component, the obtained results are conservative in terms of risk management and the various formulations are simplified). As long as it is non-default, i.e. as long as at least one of its clearing members is non-default, our CCP can only handle the losses triggered by the defaults of some of its clearing members by redirecting these losses onto the surviving ones. This participation of the surviving members in the losses triggered by the defaults of the other members corresponds, in our framework, to the usage by the CCP of their default fund contributions, both funded (as already introduced above) and unfunded.
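To make Assumption 2.1 concrete, here is a minimal sketch of the resulting settlement cash flow between two parties, under the zero-recovery convention (function and variable names are ours, not the paper's):

```python
def settled_payment(debt: float, posted_margin: float, in_default: bool) -> float:
    """Amount actually paid on a counterparty-risk-free debt (Assumption 2.1).

    A defaulted debtor honours its debt only up to the margin it has posted
    (zero recovery beyond collateral); a non-defaulted debtor pays in full.
    """
    if in_default:
        return min(debt, posted_margin)
    return debt

# A defaulted party owing 10 with 6 of posted collateral pays only 6,
# leaving a loss of 4 to be absorbed by the surviving party (or, at a CCP,
# by the default fund layers described next).
loss_to_survivor = 10.0 - settled_payment(10.0, 6.0, in_default=True)
assert loss_to_survivor == 4.0
```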
As will be detailed in the equations below, the funded default fund contributions are used in priority for covering losses triggered by the defaults of clearing members over their margins. The unfunded default fund contributions correspond to additional refills that can be required by the CCP, often up to some cap in principle, but without bounds in our model, in case the funded default fund contributions of the surviving members are not enough.

XVA Framework

Assume that at time 0 all the banking participants, including the reference clearing member bank depicted in Figure 2.1, with no prior endowments, enter transactions with their clients and hedge their positions, both bilaterally between them and through the CCP. As seen above, the CCP and each bank are flat in terms of market risk. However, as market participants are assumed to be defaultable with zero recovery, in order to account for counterparty credit risk and its funding and capital consequences, each banking participant requires from its corporate clients a pricing rebate (considering conventionally the bank as the "buyer") with respect to the mark-to-market (counterparty-risk-free) valuation of the deals. The corporate clients of the bank are assumed to absorb the ensuing prices via their corporate business, which is their primary motivation for these deals.

A reference probability measure Q, with corresponding expectation operator denoted by E, is used for the linear valuation of cash flows, using the risk-free asset as our numéraire everywhere. This choice of numéraire simplifies the equations by removing all terms related to the (assumed risk-free) remuneration of all cash and collateral accounts. The funding issue is then refocused on the risky funding side of the problem, i.e. funding costs in what follows really mean excess funding costs with respect to a theoretical situation where the bank could equally borrow and lend at the risk-free rate. More precisely, as suitable for XVA calculations (Albanese et al., 2021, Remark 2.3): given a physical probability measure defined on the full model σ-algebra A and equivalent to a reference risk-neutral measure on the financial sub-σ-algebra B of A, we take Q equal to the reference risk-neutral measure on B and equal to the physical probability measure conditionally on B.

Following general XVA guidelines, the above-mentioned pricing rebate required by the reference clearing member bank, dubbed funds transfer price (FTP), comes in two parts: first, the expected counterparty default losses and funding expenditures of the bank, an amount that flows into the bank's liabilities and which we refer to as the contra-asset valuation (CA); second, a cost-of-capital risk premium (KVA), which instead is loss-absorbing (hence, not a liability) and is also used by the management of the bank as retained earnings for remunerating the shareholders of the bank for their capital at risk within the bank. All in one, the bank buys the deals from its clients at the price

MtM − FTP, where FTP = CA + KVA.   (1)

Assumption 2.2 At time 0 the amounts CA and KVA sourced from the corporate clients of the bank are deposited on reserve capital and capital at risk accounts of the bank.

Let EC denote an economic capital of the bank, corresponding to the minimum level of capital at risk that the bank should hold from a regulatory (i.e. solvency) perspective.
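As a toy illustration of the pricing rebate (1) and of Assumption 2.2 (a sketch under our reading of the text; the numbers are made up):

```python
def entry_price_and_accounts(mtm: float, ca: float, kva: float):
    """Deal entry under (1) and Assumption 2.2: the bank buys at MtM - FTP,
    with FTP = CA + KVA, and credits CA to its reserve capital account and
    KVA to its capital at risk account."""
    ftp = ca + kva
    price = mtm - ftp
    accounts = {"reserve_capital": ca, "capital_at_risk": kva}
    return price, accounts

price, accounts = entry_price_and_accounts(mtm=100.0, ca=2.5, kva=1.0)
# price == 96.5; accounts == {'reserve_capital': 2.5, 'capital_at_risk': 1.0}
```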
If KVA < EC, then the bank shareholders need to provide the missing amount (EC − KVA) of capital at risk, so that the actual level of capital at risk of the bank is max(EC, KVA), while shareholder capital at risk reduces to

max(EC, KVA) − KVA = (EC − KVA)^+.   (2)

Theoretical XVA Analysis

In this section we detail each term in the equations above, in the realistic setup of a bank involved in an arbitrary combination of bilateral and centrally cleared portfolios, in a tractable one-period setup with period length T. In the one-period XVA model of Albanese et al. (2021, Section 3), there were no CCPs and the bank was assumed to have access to a "fully collateralized back-to-back hedge of its market risk", ensuring by definition and for free to the bank a cash flow (P − MtM) at time 1, irrespective of the default status of the bank and its client. There, P denoted the contractual cash flows from the (assumed unique) client to the bank and MtM was the corresponding counterparty-risk-free value. In the present paper we reveal the mechanism of such a "fully collateralized hedge of the market risk" of the bank, which can be achieved through central clearing, but at a certain cost that we analyze.

Cash Flows

We use the terms client for cleared clients and counterparty for bilateral counterparties. Given disjoint sets of indices I (containing 0), C, and B for the clearing members (including the reference bank labeled by 0) and for the respective cleared and bilateral netting sets of the bank with its individual clients and counterparties, we denote by:

• J_0, shortened as J, and J_i, i ∈ I \ {0}, the survival indicator random variables of the bank and of the other clearing members at time 1; γ = Q(J = 0), the default probability of the bank;
• J^ccp = max_i J_i, the survival indicator random variable of the CCP (i.e. of at least one clearing member);
• P_i, MtM_i = E[P_i], and IM_i, i ∈ I, the contractual cash flows, variation margin, and initial margin from the clearing member i to the CCP corresponding to the cleared clients account of the member i;
• P_i^h, MtM_i^h = E[P_i^h], and IM_i^h, i ∈ I, the contractual cash flows, variation margin, and initial margin from the clearing member i to the CCP corresponding to the house account of the clearing member i;
• DF_i, i ∈ I, the default fund contribution posted by the clearing member i to the CCP;
• J_b, b ∈ B, the survival indicator random variable of the counterparty of the bilateral netting set b of the reference bank; P_b, VM_b, and IM_b, the associated contractual cash flows, variation margin, and initial margin from the corresponding counterparty to the bank; and IM_b', the initial margin from the bank to the counterparty;
• J_c, c ∈ C, the survival indicator random variable of the client of the cleared trading netting set c of the bank, and P_c, MtM_c = E[P_c] (this identity reflecting the fact that members of CCPs are fully collateralized), and IM_c, the associated contractual cash flows, variation margin, and initial margin from the corresponding client to the bank (note that a bank does not post any initial margin on its cleared client netting sets);
• L, the loss of the CCP, i.e. the loss triggered by the defaults of its clearing members beyond their posted collateral (variation margin, initial margin, and funded default fund contributions), which is borne by the surviving members (if any);
• µ, the proportion of these losses allocated to the reference clearing member bank (with, by convention, µ = 0 on the bank's own default event {J = 0}).

Assumption 3.1 ∑_{i∈I} (P_i + P_i^h) = 0 (the CCP is flat in terms of market risk), ∑_{c∈C} P_c = P_0 (by definition of cleared trades and of their mirroring trades), and ∑_{b∈B} P_b = P_0^h (the reference bank is flat in terms of market risk).
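The bookkeeping behind this notation can be illustrated by a small data structure, together with a check of the clearing conditions of Assumption 3.1 on one realized scenario (all names are ours; house-account quantities appear as *_house fields):

```python
from dataclasses import dataclass

@dataclass
class ClearingMember:
    p_client: float   # P_i: promised cash flows, cleared clients account
    p_house: float    # P_i^h: promised cash flows, house account
    im_client: float = 0.0
    im_house: float = 0.0
    df: float = 0.0   # DF_i: funded default fund contribution

def check_assumption_3_1(members, p_cleared_clients, p_bilateral, tol=1e-9):
    """Clearing conditions on one scenario: the CCP is flat in market risk,
    and the reference bank (member 0) is flat on both account types."""
    assert abs(sum(m.p_client + m.p_house for m in members)) < tol
    assert abs(sum(p_cleared_clients) - members[0].p_client) < tol
    assert abs(sum(p_bilateral) - members[0].p_house) < tol
```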
Assumption 3.1 yields the clearing conditions regarding the contractually promised cash flows, which apply to each banking participant (written there for the reference bank) and to the CCP. Assumption 2.1 governs the default cash flows. We need one more condition, regarding the funding side of the problem:

Assumption 3.2 The bank can use the amounts CA and max(EC, KVA) on its reserve capital and capital at risk accounts for its variation margin borrowing purposes. Funds needed beyond CA + max(EC, KVA) for variation margin posting purposes are borrowed by the bank at its credit spread γ above OIS. The initial margin and default fund contributions, instead, must be borrowed entirely by the bank, but this can be achieved at some blended funding spread γ' ≤ γ.

The rationale for funding variation margin, but not initial margin, from CA + max(EC, KVA) is set out before Equation (15) in Albanese et al. (2017). The motivation for the assumption γ' ≤ γ is provided in Albanese et al. (2020, Section 5), along with numerical experiments suggesting that γ' can be several times lower than γ.

Lemma 3.1 The borrowing needs of the bank for reusable and segregated collateral amount, respectively, to the two terms in (3).

Proof. On the bilateral trades of the bank and their hedges, the Treasury of the bank receives ∑_{b∈B} VM_b of variation margin from the counterparties and has to post an aggregated amount ∑_{b∈B} MtM_b of variation margin. The assumption stated before the lemma then leads to (3).

Lemma 3.2 On the bank survival event {J = 1}, the counterparty default losses C and the funding expenses F of the bank are given by (4) and (6), the loss L of the CCP being given by (5).

Proof. On the CCP survival event {J^ccp = 1}, the CCP receives, by Assumption 2.1, the aggregate settlement amount of its members; using the CCP clearing condition in Assumption 3.1, its shortfall is the loss expression (5). On the bank survival event {J = 1} (⊆ {J^ccp = 1}), by the respective Assumptions 2.1 and 3.1, the bank receives from its clients and counterparties the amounts (8) and (9). Subtracting (8) from (9), we obtain the default losses of the bank on its own netting sets; on top of this comes the participation µL of the bank in the CCP default losses, which yields (4). Moreover, in view of Lemma 3.1 and Assumption 3.2, the (risky) funding expenses of the bank are given by (6).

Valuation

Let E* denote the expectation with respect to the bank survival measure Q* associated with Q, i.e., for any random variable Y,

E*[Y] = E[J Y] / Q(J = 1).   (10)

Under a cost-of-capital XVA approach, the bank charges its future losses to its corporate clients at a CA level making ℓ := J(C + F − CA), the trading loss of the shareholders of the bank, centered under Q*. In addition, given a target hurdle rate h assumed in [0, 1] (and typically of the order of 10%), the management of the bank ensures to the bank shareholders dividends at the height of h times their capital at risk (EC − KVA)^+ (cf. (2)), where we model EC as ES(ℓ), the expected shortfall of the trading loss ℓ = J(C + F − CA) computed under the bank survival measure at the quantile level α = 99.75% (under normal distribution assumptions, an ES at percentile level 99.75% reaches a similar loss level as a VaR at the level 99.9%; in practice, regulatory and economic capital indeed aim at capturing extreme losses that can occur once every 1000 years, cf. paragraph 5.1 of Basel Committee on Banking Supervision (2005) for the detailed instructions). In terms of the primal and dual representations of the expected shortfall, with VaR_a(ℓ) denoting the Q* value-at-risk (lower quantile) of level a of ℓ,

ES(ℓ) = (1/(1 − α)) ∫_α^1 VaR_a(ℓ) da = sup { E_{Q'}[ℓ] : dQ'/dQ* ≤ 1/(1 − α) },   (11)

which for atomless ℓ also coincides with E*[ℓ | ℓ ≥ VaR_α(ℓ)]. Note that, in view of the dual representation, an expected shortfall of a centered random variable is nonnegative. Accordingly (as detailed after the definition), the XVA components are defined by the collection of equations (12), which aggregate into the CA equation (13). Hence, in view of (4) and (6), E*[J(C + F − CA)] = 0, as desired.
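The survival-measure expectation (10) and the primal expected shortfall in (11) can be estimated by plain Monte Carlo; a minimal sketch (names and estimator conventions are ours):

```python
import numpy as np

def survival_expectation(y: np.ndarray, j_bank: np.ndarray) -> float:
    """E*[Y] = E[J Y] / Q(J = 1): average of Y over bank-survival scenarios."""
    alive = j_bank.astype(bool)
    return float(y[alive].mean())

def expected_shortfall(losses: np.ndarray, alpha: float = 0.9975) -> float:
    """Primal ES at level alpha: average of the losses at or beyond the
    lower alpha-quantile (the tail average used to model EC)."""
    var = np.quantile(losses, alpha)
    return float(losses[losses >= var].mean())
```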
The terminal cash flows of the form (1 − J) × · · · in (12) or (13) are thus consistent with the desired shareholder-centric perspective. They can also be interpreted as the amounts of reserve capital and risk margin lost by the bank shareholders, as their property is transferred to the liquidator of the bank if the bank defaults. Due to these terminal cash flows, the above definition is in fact a fix-point system of equations. The split of the underlying CA equation (13) into the collection of equations (12) is motivated by both interpretation and numerical considerations. From an interpretation viewpoint, it is useful to provide the more granular view on the costs of the bank given by the split of the global CA amount between, on the one hand, the bilateral and centrally cleared trading default risk components BCVA and CCVA and, on the other hand, the bilateral and centrally cleared trading funding risk components BMVA and CMVA for segregated initial margin; the FVA cost of funding variation margin, instead, is holistic in nature (it can only be apprehended at the level of the bank balance sheet as a whole), via the feedback impact of CA + max(EC, KVA) into the FVA. From a numerical viewpoint, the collection (12) of smaller problems may be easier to address than the global equation (13). Each of the smaller problems can also be handled by a dedicated desk of the bank, namely the CVA desk for the BCVA and CCVA, and the Treasury of the bank for the BMVA, CMVA and the FVA.

Passing in the above equations to the bank survival measure Q*, based on Lemma 3.3, shows that the corresponding fixed-point problem is in fact well-posed and yields the explicit formulas (14) for all the quantities at hand. All the above XVA numbers are nonnegative.

Proof. By the result recalled after (11), EC is nonnegative as an expected shortfall, under Q*, of the random variable J(C + F − CA), which is centered under Q* and therefore under Q, by (10). The first four formulas in (14) are obtained by substituting the already derived XVA formulas in (4) and (6). As CA = CCVA + CMVA + BCVA + BMVA + FVA, the remaining equation is an FVA semilinear equation which, as γ is nonnegative, is equivalent to the FVA formula in (14).

Remark 3.1 The reason why funding disappears from the bank trading loss, i.e. J(C + F − CA) = J(C − CVA), is that, in a one-period setup, the collateral borrowing requirements (3) of the bank are simply constants. Hence funding triggers no risk to the bank, but only a deterministic cost. In a dynamic setup, funding generates both costs and risk.

Extension to Several CCPs or CCP Services

In the realistic case where the reference bank is a clearing member of several services of one or several CCPs, we index all the CCP-related quantities in the above by an additional index ccp in a finite set disjoint from I ∪ C ∪ B. Then, with CA = CCVA + CMVA + BCVA + BMVA + FVA as before, the same formulas hold with the CCP-related terms summed over services.

Proof. In the case of several CCP services, the second line in (3) must be turned into the corresponding sum over services, and the terms in the first lines of (4) and (6) must now be summed over the various CCP services in which the bank is involved as a clearing member.
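The FVA semilinear equation appearing in the proof above can be solved by a simple Picard iteration. The sketch below assumes the equation takes the form FVA = γ (vm_gap − other_ca − FVA − max(EC, KVA))^+, which is our reading of Assumption 3.2 and Lemma 3.1; names and the exact functional form are ours:

```python
def solve_fva(gamma: float, vm_gap: float, other_ca: float, capital: float,
              tol: float = 1e-12, max_iter: int = 100) -> float:
    """Picard iteration for FVA = gamma * (vm_gap - other_ca - FVA - capital)^+,
    where vm_gap is the raw variation-margin funding need, other_ca the
    non-FVA contra-assets (CCVA + CMVA + BCVA + BMVA), and capital the
    amount max(EC, KVA), both usable for variation-margin funding."""
    fva = 0.0
    for _ in range(max_iter):
        new = gamma * max(vm_gap - other_ca - fva - capital, 0.0)
        if abs(new - fva) < tol:
            return new
        fva = new
    return fva
```

When the positive part is active, the fixed point is available in closed form, FVA = γ (vm_gap − other_ca − capital) / (1 + γ), consistent with the existence of explicit formulas.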
The rest of the analysis proceeds as before. Before passing to the case studies, we detail the calculation of economic capital under the member survival measure.

Case Studies

Setup

We describe two possible applications of our XVA framework, which will be illustrated by numerical case studies. To these ends, we introduce a market and credit model with parameters that can capture dependence between portfolio changes, joint defaults, and possible adverse exacerbated changes of a portfolio upon its owner's default, known as wrong-way risk. Two networks will then be defined to serve the numerical illustrations, one rather educational on the use of the XVA metrics and the other one reflecting the more realistic situation depicted by Figure 1.1.

The CVA and KVA computations require a Monte-Carlo routine run under Q, in combination with a rejection technique, in order to yield simulations under the survival measures associated with the different clearing members, thereafter labeled CM* with * an identifier number. In the numerical applications that follow, all members play in turn the role of the reference bank in the theoretical XVA framework of Sections 2-3. For obtaining confidence intervals regarding the expected shortfalls that are embedded in the KVA computations, the simulations are split into several batches, from which the mean of the estimated ESs yields the final ES estimate, while their standard deviation is used to define a confidence interval.

The default time of member i is generated based on Student-t copulas with correlated credit and market components, where the credit components are reflected through the members' default times and the market components through their portfolio variations over the liquidation period following a default, proxied in our setup by the difference ∆P_i := P_i − MtM_i. We denote by ρ_cr > 0 the correlation coefficient of the copula Gaussian factor driving common defaults, by ρ_mkt > 0 the correlation coefficient of the Gaussian factor driving common portfolio variations, and by ρ_i^wwr > 0 the correlation coefficient of the Gaussian factor driving both portfolio variation and default for member i. The Student-t degree of freedom parameter is assumed to be the same for generating both the members' defaults and portfolio variations. In equations, denoting by F_i the marginal c.d.f. of member i's default time and by S_ν the Student-t c.d.f. with degree of freedom ν:

τ_i = F_i^{-1}(S_ν(X_i)), with X_i = (√ρ_cr Z^cr − √(ρ_i^wwr) Z_i^wwr + √(1 − ρ_cr − ρ_i^wwr) ε_i^c) / √(W_i^c / ν),
∆P_i = nom_i σ_i √∆ Y_i, with Y_i = (√ρ_mkt Z^mkt + √(ρ_i^wwr) Z_i^wwr + √(1 − ρ_mkt − ρ_i^wwr) ε_i^m) / √(W_i^m / ν),   (16)

where nom_i ∈ R is a signed nominal of the portfolio of member i, σ_i is its annualized relative volatility, ∆ reflects a positive liquidation period accounting for the time taken by the CCP to novate or liquidate defaulted portfolios (cf. Section 6), Z^cr, Z^mkt, Z_i^wwr, ε_i^c and ε_i^m are independent standard Gaussian random variables (Z^cr and Z^mkt common to all members, Z_i^wwr specific to member i), and W_i^c and W_i^m are i.i.d. random variables following a χ² distribution with degree of freedom ν, independent from the above Gaussian random variables.

Remark 4.1 In practice, margin computations rely on historical estimates based on several stressed market periods. Our approach, instead, aims at reflecting extreme market shocks with fat-tailed Student-t distributions of degree of freedom ν = 3, and volatility levels within a reasonable range of [20%, 40%]. Our static formulation depicts stationary increments of the defaulted portfolios' value over the liquidation period.

The above setup requires the following constraints on the correlation coefficients to be properly defined (otherwise, the model for both the default time and portfolio variation factors is undefined): ρ_cr + ρ_i^wwr ≤ 1 and ρ_mkt + ρ_i^wwr ≤ 1, for every member i.
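One possible implementation of the latent-factor construction in (16) is sketched below (the sampling scheme and names are ours):

```python
import numpy as np
from scipy.stats import t as student_t

rng = np.random.default_rng(0)

def simulate_network(n_scen, nom, sigma, dp, delta, rho_cr, rho_mkt, rho_wwr, nu=3):
    """Simulate survival indicators and liquidation-period portfolio moves.

    dp[i] is member i's default probability over the period, nom/sigma its
    signed nominal and annualized volatility; rho_cr + rho_wwr[i] <= 1 and
    rho_mkt + rho_wwr[i] <= 1 are required, as noted in the text.
    """
    n = len(nom)
    z_cr = rng.standard_normal(n_scen)      # common credit factor
    z_mkt = rng.standard_normal(n_scen)     # common market factor
    surv = np.empty((n_scen, n), dtype=bool)
    dP = np.empty((n_scen, n))
    for i in range(n):
        z_wwr = rng.standard_normal(n_scen)   # member-specific WWR factor
        eps_c, eps_m = rng.standard_normal((2, n_scen))
        w_c, w_m = rng.chisquare(nu, (2, n_scen))
        x = (np.sqrt(rho_cr) * z_cr - np.sqrt(rho_wwr[i]) * z_wwr
             + np.sqrt(1 - rho_cr - rho_wwr[i]) * eps_c) / np.sqrt(w_c / nu)
        y = (np.sqrt(rho_mkt) * z_mkt + np.sqrt(rho_wwr[i]) * z_wwr
             + np.sqrt(1 - rho_mkt - rho_wwr[i]) * eps_m) / np.sqrt(w_m / nu)
        # Default over the period iff the Student-t latent variable falls
        # below the quantile matching the member's default probability.
        surv[:, i] = x > student_t.ppf(dp[i], df=nu)
        dP[:, i] = nom[i] * sigma[i] * np.sqrt(delta) * y
    return surv, dP
```

The minus sign on the z_wwr term of the credit component mirrors the wrong-way risk mechanism described in the next paragraph: the same factor that pushes a member toward default inflates its adverse portfolio move.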
The "minus" sign in front of the common credit-market factor −√(ρ_i^wwr) in the default time component of (16) ensures that the corresponding common factor accelerates defaults, whilst increasing the market exposure through the +√(ρ_i^wwr) factor in the second part of (16). In the examples that follow, market participants are identified by a number and can be included in one or several of the considered CCPs.

Single CCP

Setup and initial XVA costs

We consider a single CCP service with 20 members, labeled by i ∈ 0 . . . n = 19, only trading for cleared clients (i.e. without bilateral or centrally cleared proprietary trading). Each member faces one client. The ensuing financial network is depicted by Figure 4.1. All clients are assumed to be risk-free. For any member i, its posted IM to the CCP is calculated based on the idea of a VM call not fulfilled over a time period ∆_s < ∆, at a confidence level α ∈ (1/2, 1), using a VaR metric (under the member survival measure) applied to the non-coverage of the VM call, taken also to follow a scaled Student-t distribution S_ν with ν degrees of freedom:

IM_i = |nom_i| σ_i √∆_s S_ν^{-1}(α),   (17)

where S_ν^{-1} is the inverse c.d.f. of a Student-t distribution with degree of freedom ν. The default fund is calculated at the CCP level as

DF = SLOIM_(0) + SLOIM_(1),   (19)

for the two largest stressed losses over IM (SLOIM_i) among the members, identified with subscripts (0) and (1), where SLOIM_i is calculated as the value-at-risk, at a confidence level ᾱ > α, of the loss of member i over its posted IM. The total amount (19) is then allocated between the clearing members to define their respective default fund contributions DF_i. The nom_j's of the other clearing members are not observable by a given one. However, following Murphy and Nahai-Williamson (2014) and Lipton (2018), with nom_(i) denoting the i-th largest absolute nominal amount for i ∈ 0 . . . n = 19, a parameterization of the exponentially decaying form α e^(−β i) (cf. (21)) can be fit to the total default fund held by the CCP and to the sum of its five largest default fund contributions, which are published each quarter by most of the CCPs and are public data. The inferred parameters α and β from the default fund data are then used to depict a similar pattern on the nominal sizes.

The participants' and portfolios' parameter inputs are detailed in Table 4.1, where id is the identifier of the CM, DP stands for the one-year probability of default of the member expressed in percentage points, size represents the overall portfolio size of the member within the CCP, and vol is the annual volatility used for the portfolio variations. The portfolios listed in Table 4.1 relate to the members' positions towards the CCP (which mirror the ones between the members and their clients). The sizes sum up to 0, in line with the CCP clearing condition (first identity in Assumption 3.1, here without proprietary trades). The parameters of the XVA cost calculations are summarized in Table 4.2. Note that the chosen period length of T = 5 years covers the bulk (if not the final maturity) of most realistic CCP portfolios. For each member, the CCVA, CMVA and KVA costs are calculated and reported in Table 4.3 (the XVAs calculation configuration being given in Table 4.2), where the number in brackets is the corresponding quantile level from which the average is calculated, and the numbers in parentheses represent the 95% confidence interval, in relative difference from the calculated metric, for both CCVA and KVA. All the XVA numbers decrease with the member size.
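Under our reading of (17) and (19) above, the margin quantities can be sketched as follows (the exact scaling conventions, e.g. the horizon used for the stressed loss, are assumptions on our part):

```python
import numpy as np
from scipy.stats import t as student_t

def initial_margin(nom, sigma, delta_s, alpha, nu=3):
    """IM as the alpha-quantile of an unfulfilled VM call over delta_s,
    modelled as a scaled Student-t variable, cf. (17)."""
    return abs(nom) * sigma * np.sqrt(delta_s) * student_t.ppf(alpha, df=nu)

def cover2_default_fund(noms, sigmas, ims, delta_s, alpha_bar, nu=3):
    """Cover-2 default fund, cf. (19): sum of the two largest stressed
    losses over IM, with the stressed loss proxied here by the same scaled
    Student-t quantile taken at the higher confidence level alpha_bar."""
    sloim = [max(initial_margin(n, s, delta_s, alpha_bar, nu) - im, 0.0)
             for n, s, im in zip(noms, sigmas, ims)]
    return sum(sorted(sloim, reverse=True)[:2])
```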
To assess the average behavior of the CCVA and KVA with respect to ρ_cr, ρ_mkt and ρ_wwr, we vary these correlations between 10% and 90% and display in Figures 4.2 and 4.3 the corresponding metrics, aggregated over all clearing members successively considered as the reference bank. As expected, the KVA depicts an increase with respect to ρ_cr, ρ_mkt and ρ_wwr, though ρ_wwr has more impact than ρ_cr and ρ_mkt (right panels in Figures 4.2 and 4.3). As seen on the left panels of Figures 4.2 and 4.3, the CCVA aggregated over all clearing members is, by contrast, only weakly sensitive to ρ_cr and ρ_mkt. This is understandable as, apart from modulations of the measure with respect to which each individual CCVA is assessed, the CCVA aggregated over clearing members is essentially an expectation of the CCP loss L (cf. the second line of (14)). The individual CCVAs (as per the first line of (14)) of each clearing member, however, may depend on ρ_cr and ρ_mkt (on top of ρ_wwr) in a strong and nontrivial manner, via the allocation coefficient µ.

Two CCPs Network

Setup

We consider the case of Figure 1.1, where there are two CCPs with some common members, and a stress test is considered from the perspective of one of these common members. The motivation for this case is to provide a realistic example mimicking, in a simplified way, the trading firm Ronin Capital, which in March 2020 had memberships on both the FICC GSD segment, hereafter denominated CCP1, and CME Futures, hereafter denominated CCP2. It is well known that a VaR-type risk measure is not subadditive, in particular for credit portfolios, as illustrated in Example 5.4 of Acerbi and Tasche (2002). To perform the analysis, the following setup is considered:

• all members have only cleared client positions, with 123 members on CCP1 and 56 members on CCP2, out of which 24 are common to both CCPs;
• all clients are assumed default-free;
• both CCPs use the configuration of Table 4.2;
• the sizes of the positions are assumed exponentially distributed, in the sense that from the most exposed member to the least exposed one, the absolute values of the positions decrease exponentially with the form in (21);
• the proportion of the default fund held by the 5 biggest members is 25% for CCP1 and 61% for CCP2;
• the size of the default fund of CCP1 is assumed to be twice the one of the default fund of CCP2.

All data used are either public or have been anonymized. A similar configuration to that given in Table 4.2 is used, apart from the number of Monte-Carlo simulations, reduced to 2 million for memory capacity reasons. The clearing conditions are ensured by setting the sum of the portfolio sizes nom_i to zero on each CCP. The situation of member 3, exposed to both CCPs, as the defaulting member, corresponds roughly to the situation of Ronin Capital in March 2020. In particular, an annual probability of default of 0.1% corresponds roughly to a BBB rating, such as the one assigned to Ronin Capital in 2018 for its issuances.

Stress Test Exercises

As outlined in the capital requirements regulation of The European Parliament and the Council of the European Union (2013), article 290, financial institutions must conduct regular stress test exercises of their credit and counterparty exposures. Paragraph 8 of this article also stipulates the reverse stress test requirement to "[...] identify extreme, but plausible, scenarios that could result in significant adverse outcomes."
This is complemented by article 302 on the exposures financial institutions may have towards CCPs: "Institutions shall assess, through appropriate scenario analysis and stress testing, whether the level of own funds held against exposures to a CCP, including potential future credit exposures, exposures from default fund contributions and, where the institution is acting as a clearing member, exposures resulting from contractual arrangements as laid down in Article 304, adequately relates to the inherent risks of those exposures."

In practice, stress test exercises aim at assessing the capacity of financial institutions to absorb financial and economic shocks. In regular exercises, such as the ones conducted by the European Banking Authority, the shocks are usually considered under so-called central and baseline macro-economic scenarios, corresponding to a median quantile, and an adverse scenario, usually taken as a 90th percentile reflecting a severe yet plausible scenario that can occur once every 10 years. Additionally, extreme scenarios can be considered for measuring the capital adequacy for absorbing extremely severe losses, around a confidence level of 99.9%. From a clearing member perspective, this requires the capacity of scanning certain points of its trading loss distribution. In our framework, this boils down to identifying particular levels of the distribution of the trading loss C − CVA of the reference clearing member bank, with CVA = CCVA + BCVA, where the different terms are detailed in Proposition 3.2. The other type of stress test exercise, referred to as reverse stress test (Bellini et al., 2021), consists in identifying the probability of reaching a given loss level, as well as describing the scenario configurations, such as projected defaults and loss magnitudes, leading to such loss levels. The distribution must span a sufficiently large spectrum of losses, including the ones targeted by the exercise, but it also has to be sufficiently rich numerically to allow identifying combinations of events leading to such losses. Confidence intervals of the corresponding extreme scenario probabilities should complement the analysis to ensure the reliability of the used model and numerical methods. Regulators have the ability to challenge financial institutions on these elements and demand improvements.

We now briefly explain how to identify and exploit the scenarios contributing the most to economic capital. We denote by M the number of Monte-Carlo scenarios for which J = 1, i.e. survival of the reference bank. Its trading loss C − CVA for a simulation m is given by C_m − CVA, where m ∈ 1 . . . M enumerates the simulated scenarios for which the reference member bank ends up in the survival state. To get an estimate of the economic capital based on expected shortfall, relying on Acerbi and Tasche (2002, Definition 2.6 and Proposition 4.1), we calculate, for a high confidence level α ∈ (1/2, 1) and [x] denoting the integer part of any x ∈ R,

ES(C − CVA) ≈ (1 / (M − [αM] + 1)) ∑_{m=[αM]}^{M} (C_(m) − CVA),   (22)

where the C_(m) − CVA's are the simulated trading losses of the reference bank ranked in increasing order. The contribution δ_m ES(C − CVA) of any simulated scenario m (with (m) ≥ [αM]) to the economic capital estimated by (22) is then given by its term in the above average, i.e.

δ_m ES(C − CVA) = (C_(m) − CVA) / (M − [αM] + 1).   (23)

To illustrate the various flavors of stress test exercises that can be conducted by a CCP member, we report numerical results for the two network examples introduced in Section 4.
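The estimator (22), the scenario contributions (23) and the reverse stress test probability can be sketched as follows (indexing conventions and names are ours; losses is the array of simulated trading losses C_m − CVA):

```python
import numpy as np

def es_and_contributions(losses: np.ndarray, alpha: float):
    """Tail-average ES estimate (22) with per-scenario contributions (23)."""
    m = len(losses)
    order = np.argsort(losses)          # scenarios sorted by increasing loss
    k = int(alpha * m)                  # [alpha * M], 1-based first tail rank
    tail = order[k - 1:]                # scenario ids at or beyond the quantile
    es = float(losses[tail].mean())
    contrib = {int(i): float(losses[i]) / len(tail) for i in tail}
    return es, contrib

def rst_probability(losses: np.ndarray, target: float):
    """Reverse stress test: probability of reaching at least the target loss
    (e.g. 1.5x the 99.9% quantile), plus the scenario ids for inspection."""
    hits = np.flatnonzero(losses >= target)
    return len(hits) / len(losses), hits
```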
We start with a reverse stress test exercise on the example covered by Table 4.1. For this first illustration, a specific extreme loss is targeted and the corresponding probability of the loss reaching at least such a target level is estimated. We then consider the example illustrated by Figure 1.1, where projected loss levels for specific confidence levels are indicated for the members with common memberships on the two CCPs.

Numerical Results

In Table 5.1, we report, for the example summarized in Table 4.1, the 99.9th percentile trading loss levels, referenced as extreme quantiles, with corresponding (asymmetric) confidence intervals based on the approach proposed in Meeker et al. (2017, Section G.2). This is done for every clearing member successively playing the role of the reference bank in the setup of Sections 2-3. We also compute the probabilities of reaching a loss equal to 1.5 times the obtained extreme quantile level, referenced as the RST scenario, with corresponding confidence intervals. Our description of the scenarios leading to such losses includes the identified defaulted members, the generated losses, and the allocated loss coefficient of the reference clearing member (CM1 in this example). Most of these extreme scenarios entail CM0's default, reflecting the highly concentrated position of CM0. Note that the 16th worst loss scenario for member 1 entails 8 defaults, including the one of CM0, which is the only one to generate losses beyond its posted margin (i.e. to trigger a loss to the surviving members). From CM1's viewpoint (i.e. with CM1 in the role of the reference clearing member), 15 scenarios entail significant losses over the collateral posted by the defaulted CM0 (positive first entries in the last column of Table 5.2). (The number of scenarios reaching the RST loss level, whose probability is estimated in Table 5.1 as 0.0428%, is of course far too large to report exhaustively; nonetheless, a focus on the 20 worst ones already illustrates the type of information that can be exploited for such exercises.) CM0 bears a very large concentrated position compared to the other members. Even if CM0 has more IM and DF requirements than the others, this is still not enough: this example highlights that the DF allocation rules employed in this example dilute the DF collateral requirements for concentrated positions. It also illustrates that scenarios with multiple defaults do not necessarily lead to extreme losses, due to the fact that members with medium or small positions have large default fund contributions stemming from others' concentrated positions.

In Table 5.3, we report, for the example illustrated by Figure 1.1 with 2 CCPs, the trading loss levels (value-at-risks) at confidence levels 90% and 99.9%, for the 24 common members on the two CCPs. The corresponding numbers in the case where the two CCPs would be considered separately are reported in the columns labeled "stand-alone". For quantiles at the 90% confidence level, the loss levels are significantly higher when the common memberships are considered, compared to the stand-alone quantile loss calculation conducted on each CCP and summed, especially for the first ten members. For members with very low size on one of the two CCPs compared to the other, considering the common memberships or not does not affect the loss estimates, as expected (the CCP on which the size is very low should have a marginal impact). This outlines the importance of taking into account such commonality features for sizeable members on the CCPs. On the contrary, with quantile loss levels at confidence level 99.9%, the sum of the stand-alone loss estimations is well above the loss estimate when common memberships are taken into consideration.
For members facing the two CCPs, this leads in particular to over-conservative KVA estimates. This, in turn, is detrimental to client end-users, who support unnecessary additional capital costs.

Optimizing the Porting of Defaulted Client Portfolios

In case a clearing member defaults, the CCP tentatively novates part of the portfolio of the defaulted member through auctions among the surviving clearing members (Default Risk Management Working Group, 2016; Basel Committee on Banking Supervision, 2019a), and it liquidates the residual on the market. A natural baseline is that the CCP novates (auctions among surviving members) client trades and their mirroring client account positions, collectively dubbed client positions for brevity hereafter, whereas house account positions are liquidated. The liquidation side of the procedure cannot be handled in our modeling setup, which does not embed the fundamentals of price formation (our MtM processes are assumed to be exogenously given). On the other hand, an XVA-based procedure can be used for rendering what would be the output of an idealized, efficient auction, assuming a large number of clearing members (Oleschak et al., 2019, Section 3.3). Namely, supposing that the reference clearing member, labeled by 0 in Sections 2-3, defaults at time 0, i.e. just after all portfolios have been settled, for each surviving member CM* successively envisioned as a potential taker of the defaulted (client) positions of CM0, one computes the incremental (∆) XVAs of porting the defaulted positions to CM*, for each surviving member (CM* included; note that all members are impacted by additional margin to fund, due to the re-calibration of their DF by the CCP, whereas only the member taking over the portfolio sees, in addition, its IM adjusted). The corresponding incremental XVA numbers are then summed over metrics and survivors, resulting in the funds transfer price (FTP*) of porting the defaulted client positions to CM*. The effective taker is then the surviving member for which the ensuing FTP* is the smallest, as sketched in the code below. See Albanese et al. (2020, Section 5.2) for more details on such "XVA Pareto optimality" driven novation procedures.
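A minimal sketch of the resulting selection loop (the ∆XVA inputs would come from re-running the XVA engine on each candidate post-porting network; data layout and names are ours):

```python
def optimal_taker(survivors, delta_xva):
    """Pick the taker minimizing the total FTP of porting.

    delta_xva[taker][member] is a dict of incremental XVA metrics (e.g.
    dCCVA, dCMVA, dKVA) borne by `member` when `taker` absorbs the
    defaulted client positions; the FTP of a candidate taker is the sum
    over all metrics and all surviving members.
    """
    ftp = {t: sum(sum(x.values()) for x in delta_xva[t].values())
           for t in survivors}
    best = min(ftp, key=ftp.get)
    return best, ftp
```

For a joint default of two members, the same routine can be applied to each admissible pair of takers (18^2 = 324 combinations in the example below).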
In what follows, based on the example of Table 4.1 (which only involves client positions), we analyze from this perspective a first scenario of a single default on the CCP and a second scenario with two defaults.

Single Default Resolution Example

Taking the first case, with a single default, we assume the scenario whereby CM0 defaults at time 0. Table 6.1 summarizes the ∆XVA*, across members * from 1 to 19, in increasing order of the total FTP* indicated in the last column. Based on the results of Table 6.1, CM1 appears to be the potential taker leading to the least overall FTP costs across all surviving members. This is understandable, as this member's portfolio size (184 in Table 4.1) nets the most the defaulted member's portfolio size (-242), with volatility and credit default probability similar to (in particular, not significantly higher than) the ones of the defaulted member. As CM1 concentrates more risks, due in particular to the non-perfect offset between its prior positions and the defaulting one, there is an increase of its IM, reflected through an increase of CMVA. (By offset we refer to risk reduction when taking over some additional position; the effect of correlation is such that an opposite sign in portfolio size does not imply an equal offset of the risk of the aggregated positions. For instance, even with opposite sizes and the same volatilities, for ρ_mkt ∈ (0, 1/2) the member ends up with more risk.) But the new risk of CM1 is less than the sum of the former risks of CM0 and CM1, hence the CCVA aggregated across surviving members is reduced. This only happens when CM1 takes over the defaulting portfolio, other potential takers leading to an overall increase of the CCVA. As for the KVA, there is a reduction effect for CM1 when CM1 is the taker (see the term in parentheses in Table 6.1), but an overall increase in the total KVA (aggregated over all surviving members), whichever member the taker is. Having CM1 as the taker allows minimizing ∆KVA. As expected, among the three XVA components, the KVA is the main determinant of the optimal taker, see Table 6.2:

Table 6.2:
∆CMVA   ∆CCVA   ∆KVA
0.0593  0.0182  0.2846

Once the CCP has re-allocated all defaulted client positions, the resulting financial network formerly depicted in Figure 4.1 becomes the network with 19 members shown in Figure 6.1. The thick lines represent the new portfolio exposures for CM1 and the pale dashed lines show the defaulted CM0 positions.

Joint Default Resolution Example

In case two members default instantly at time 0, it is likewise possible to resolve numerically the re-allocation of their client portfolios. The number of possible combinations of takers in that case is 18^2 = 324 (assuming each of the two portfolios is taken over by one survivor). By putting into default CM0, the largest member with portfolio size −242, as well as CM8, a middle-sized member with portfolio size 26, we get that CM1 and CM4 taking over the respective portfolios of the defaulted CM0 and CM8 leads to the least FTP (additional ∆XVAs aggregated across the remaining 18 members of the CCP). The resulting network post defaults of members 0 and 8 is shown in Figure 6.2. As depicted by Table 6.3, the KVA again plays the major role in determining the optimal takers.

Figure 6.1: The 1-CCP, former 20-member financial network, with 19 members post CM0 default. The defaulted CM0, labeled "B0" in the presented network, is represented as a pale dashed node with pale dashed links to reflect former exposures to its client and toward the CCP. The optimal porting of the CM0 portfolio to CM1, labeled "B1", is outlined with bold links to reflect the new exposures for CM1.

When looking at the signed portfolio sizes and one-year default probabilities of the two takers, the result is aligned with the intuition that the second largest member, CM1 with portfolio size 184, should take over the defaulted portfolio of CM0 with size −242, as it has an opposite portfolio direction, resulting in a strong netting benefit; moreover, its default probability is similar to that of CM0. At first sight, CM4, with prior-default portfolio size −80, taking over the defaulting portfolio of CM8, with size 26, seems surprising. Other potential takers with the closest opposite portfolio sizes are CM6 (with size −46) and CM9 (with size −20). But their default probabilities are roughly twice the one of CM4. As a result, CM4 taking over CM8's defaulted portfolio yields a significant reduction in terms of KVA compared to the situation where CM6 or CM9 would take over CM8's portfolio, as depicted in Table 6.4. In practice, much larger financial networks are involved.
An illustration of such a network (restricted to the Eurozone) is given by Figure 6.3, omitting all client trades for ease of readability. The center of the network indicates the various members having common memberships towards several CCPs. Combinatorial novation optimization, as well as stress test analysis, over such complex networks is of course orders of magnitude heavier than what we presented in the above and would require specialized numerical techniques.

Figure 6.2: The 1-CCP, former 20-member financial network, with 18 members post CM0 and CM8 defaults. The defaulted CM0, labeled "B0", and CM8, labeled "B8", are represented as pale dashed nodes with pale dashed links to reflect former exposures to their clients and toward the CCP. The optimal portings of the CM0 and CM8 portfolios are outlined with bold links to reflect the new exposures for both CM1 and CM4.

Conclusion

We have proposed a fully integrated risk management framework that can serve stress test analyses, including reverse stress tests in line with regulatory requirements, as well as defaulted portfolio porting analyses, in a setup encompassing all the trades (bilateral as well as centrally cleared, and their hedges) of a reference bank. The framework includes dependence features between the financial participants' portfolios, joint defaults, and a configurable wrong-way risk feature. This is done in a numerically tractable static setup (although already quite demanding on large financial networks). A dynamic extension could be considered, but at an even much higher computational burden. Another improvement would be to add regulatory constraints such as minimum regulatory capital requirements and liquidity leverage ratios. More fundamentally, in this paper we tackle the derivatives risk problem from a pure counterparty credit risk viewpoint: note that, if members, clients and counterparties are all default-free, then in view of Proposition 3.2 all the considered XVAs are zero and our setup trivializes. In fact, another dimension to the problem is liquidity (see e.g. Amini et al. (2020); Faruqui et al. (2018)). Depending on the considered applications, credit or liquidity is the main force at hand. A challenging research project would be to integrate both in a common setup.
Addressing the Role of Sustainable Public Procurement as a Panacea for Sustainable Development in the Local Government Areas: The Episode of Nigeria

The coronavirus has displaced local communities across the nation as their livelihood is compromised. This study explores extensively the challenges confronting sustainable public procurement in the local government areas that bear on sustainable development amid COVID-19. This study conducted semi-structured interviews with eight procurement experts from eight leading local government areas, with documented evidence of extensive procurement activity, across four geopolitical zones of Nigeria to obtain primary data. Findings from this study suggest a tremendous decline in the livelihood of the rural communities amid the pandemic and the disrepute of local government procurement practice across the region. The study also finds a significant level of interference by the state government that continually prevents the local government administration from attaining sustainable development, compared to their counterparts in developed societies.

Introduction

Sustainable public procurement has emerged as a leading global policy paradigm that plays a critical role in promoting sustainable development in the local communities. Scholars and policymakers worldwide embrace the concept of local mobilisation as it emphasises sustainable development at the grass-root level regarding the economic, social, and environmental benefits (Preuss, 2009; Thomson & Jackson, 2007). Both the private sector and the public sector of the economy need procurement for further growth; however, this is not achieved in a vacuum (Arrowsmith, 2020). While public procurement is the acquisition of goods, works, services, and utilities using public funds to meet the growing needs of citizens within the boundaries of the law, sustainable procurement provides empirical evidence that a suitable supplier and contractor are selected and fit for a specific project. "Sustainable procurement (SP) is procurement that is consistent with the principles of sustainable development, such as ensuring a strong, healthy and just society, living within environmental limits, and promoting good governance" (Walker & Brammer, 2012). In a similar and current understanding, sustainable public procurement addresses a process whereby organisations meet their needs for goods, services, works, and utilities in a way that achieves value for money on a whole-life basis, in terms of generating benefits not only for the organisation but also for society and the economy, whilst minimising damage to the environment (DEFRA, 2006; UN, 2015; UNEP, 2018), cited in (Sönnichsen & Clement, 2020). On the other hand, sustainable development addresses the need to manage social, economic, and environmental resources for the benefit of future generations (Usang & Salim, 2016). Today, these extraordinary global economic initiatives are faced with the COVID-19 outbreak, which continues to wreak havoc on humanity, particularly around sustainable consumption and production (UN, 2020c).
Responsible Consumption and Production (RCP)

According to the United Nations, RCP refers to "the use of services and related products, which respond to basic needs and bring a better quality of life while minimizing the use of natural resources and toxic materials as well as the emissions of waste and pollutants over the life cycle of the service or product so as not to jeopardize the needs of future generations" (UN, 2020c). RCP is critical to a sustainable world and remains the cardinal point of any growing economy; it substantially contributes to low-carbon emissions, promotes green economies, and conserves natural resources (Tseng, Zhu, Sarkis, & Chiu, 2018; UN, 2020a). Although consumption and production significantly drive the global economy (Tseng et al., 2018), there is also planetary devastation through the unsustainable utilisation of natural resources, as the worldwide footprint is on the increase and projected to grow faster than the trajectories of economic productivity and population growth, with a high percentage of unacceptable food loss along the supply chain and unnecessary medical waste generated in the era of the pandemic (SDG, 2020).

Impact and response to COVID-19 amid Sustainable Consumption and Production

COVID-19 was characterised as a pandemic by the World Health Organisation (WHO) on the 11th of March 2020. While the toll is still rising, it has affected 37,554,022 people, with a total fatality figure of 1,077,228 globally, devastating the economy and the labour market, distorting consumption and production supply chains, and creating an endless array of challenges for the aviation, tourism, and hospitality industries, which potentially engenders significant decreases in revenue, jobs, life expectancy, and sustainable businesses (ILO, 2020; WHO, 2020). Unfortunately, the previously underestimated pandemic has escalated into a significant economic downturn, with a worrisome outlook projected to remain for a while due to the growing portents of economic recession worldwide (ILO, 2020; Shah & Farrow, 2020). According to the World Health Organisation (WHO), the catastrophic effect of the coronavirus is already evident in the global arena, and the impact will remain with us for a long time; however, a swift and well-coordinated strategic response is required to reduce the direct impact of the virus at the national and international levels while attenuating the global economic fallout (Sohrabi et al., 2020; WHO, 2020). This implies that protecting frontline workers and their families remains a priority while considering the income losses due to the decline in economic activities. Responding further to the consequences of the pandemic, consumers globally are confronted with threats to the purchase of goods and services, while firms face delays in capital investments and in the hiring of workers (ILO, 2020). From the local perspective, it is pertinent to acknowledge a multilateral social dialogue between employees, employers, and the government to implement a strategic policy for sustainability.
Addressing these challenges of the pandemic, countries could now change their consumption and production patterns to a more sustainable reality by commencing a national framework that requires a robust regulatory plan and the policies of UN Goal 12, Target 7, which seeks to promote public procurement practices that are sustainable, in accordance with national policies and priorities (Tseng et al., 2018; UN, 2020c).

Sustainable Public Procurement - a global perspective

The concept of SPP accounts for a vast and wide spectrum of products and services over their life cycle, considering economic opportunities, social equity, and environmental consequences (Sönnichsen & Clement, 2020). This process integrates the requirements, criteria, and specifications that have a direct and positive impact on the environment and supports economic development and social justice (DEFRA, 2006; UNEP, 2013). Public procurement accounts for an average of 12% of GDP in OECD countries and for up to 30% in developing countries, with enormous purchasing power capable of driving the market economy towards sustainability, in that way promoting the transition to a green economy (UN, 2020c). From a global perspective, public authorities remain the most significant consumers, and this is achieved by using their purchasing power as a weapon to drive the market economy by favouring environmentally friendly goods, works, and services, potentially making a positive contribution to sustainable consumption and production, which is part of the hallmark of sustainability (UN, 2020b). For effectiveness, sustainable public procurement requires clear environmental modalities for products and services in ways that promote procurement to the benefit of the larger populace. In Europe, for instance, many countries have developed operational guidance that streamlines the fundamental criteria of the core, verifiable, and comprehensive environmental standards of SPP (Tseng et al., 2018; UN, 2020b). While this guidance remains the pinnacle of sustainable procurement, it applies to a vast majority of products and services for the sustenance of the environment and covers textile products, server rooms and cloud services, imaging equipment, consumables, the operation and maintenance of facilities, organic constituents of growing media, gardening practices that enhance biodiversity, the utilisation of low-emission vehicles, and the manufacturing and end-of-life of cleaning products and other cleaning accessories (UN, 2020b).

Sustainable Public Procurement - the case of local governments in Nigeria

Local government areas, constituted by law and possessing substantial administrative control, encompass councils of small communities of government functionaries with specific government attributes (Omagbon, 2016). Over the years, local government authorities have recognised the urgency of an effective procurement system for sustainable development in the communities regarding the economic, social, and environmental benefits (Preuss, 2009). Local governments are consistently reminded and encouraged to make their spending decisions reflect probity, transparency, and best value for money (Usang & Salim, 2016; Arrowsmith, 2020).
In developed nations, where procurement receives a greater share of attention, local government procurement is subject to a growing spectrum of initiatives that integrate procurement into corporate social responsibility (CSR), promoting social justice and environmental sustainability and minimising economic inequalities, predominantly with respect to consumer behaviour and business production (Walker & Brammer, 2012; Preuss, 2009; Saha & Paterson, 2008). There is no doubt that many local governments globally embrace policies and programmes that drastically reduce their environmental footprints and consequently improve the quality of life of citizens (Saha & Paterson, 2008). In Nigeria, for instance, local government is regarded as a strong arm of the government and is constitutionally charged with delivering positive innovations to the people at the grass roots in education, health, infrastructure, sanitation, road projects, and the collective wellbeing of communities (Usang & Salim, 2016). Like many countries around the world, Nigeria's local governments have been disproportionately disrupted by COVID-19, with its endless array of economic fallout, health hazards, food shortages, job losses, and environmental degradation. The outbreak of the coronavirus has consequently amplified the already derogatory situation in local communities, where the human challenges of sustainable public procurement remain on the horizon (UN, 2020d).

Challenges of SPP in Nigeria's LGAs amid COVID-19

The challenges confronting sustainable public procurement for sustainable development in Nigeria's local government areas are immense; addressing them urgently requires a shared partnership of every stakeholder, given the current wave of COVID-19.

Corruption, the Fundamental Adversary of the State

The government often comes up with sophisticated plans to build better communities through the local government procurement system in order to actualise its developmental objectives (Saha & Paterson, 2008). However, this study suggests that there are "hawks and hyenas" within the corrupt system who will attempt every means possible to derail these plans. This study's primary data reveal widespread corruption, which has undermined procurement performance regardless of time and season. The entire institution has lost its credibility to corruption and is compromised in one way or another. Corrupt and sharp practices have invariably led to inefficiency, misappropriation, and general rot in the government. Sustainable public procurement must be handled with great integrity to promote sustainable development in the local government areas.

Lack of Autonomy in Local Government Administration

From its inception as the third tier of government, the local government has held a certain level of power and autonomy to deliver good governance to the communities. Based on documentary evidence, local governments should control their resources and develop their communities. Unfortunately, this control exists only on paper, like a toothless bulldog. Local governments outrightly lack the capacity to manage government resources for the rural communities due to the monopoly of power by the state governments.
There is an overwhelming dominance that has prevented local governments from carrying out their statutory functions, owing to interference by the federal and state governments. Given this interference, there is a constant shortage of funding for local government procurement to drive sustainability. The finances of local government procurement are precarious, with perpetual financial turbulence capable of depriving communities of their basic needs. Political interference in the affairs of local government is damaging. Until the local government mandate is recaptured and both federal and state governments demonstrate transparency, accountability, and integrity, the chances are that nothing meaningful can take place. Emphatically, local government procurement activities are cast into doubt by the state government's influence over them. Ironically, the states receive more attention than the local government areas that need it most, as most people reside in vulnerable communities where the basic needs of humanity are greatest, especially in health, agriculture, potable water, accessible roads, and irrigation. Until there is an autonomous local government system in which local governments receive their directives and allocations from the federal government just as the state governments do, and elections are conducted by an independent electoral commission instead of the state government, we remain far from getting public procurement in local government right. Most of the time, local government chairmen are hand-picked by the state governors to serve their interests. Findings suggest that state governments across the nation have let the people down time and again. Until there is an intervention from a government that listens to the yearnings of disadvantaged people, impoverishment will continue to ravage the livelihoods of ordinary people, particularly at the grass roots.

Apathy on the Part of Government and Stakeholders

In the context of this study, apathy denotes the dismissive behaviour of stakeholders regarding the management of goods, works, services, and utilities across the tiers of government, particularly in the local government areas of Nigeria. Empirical evidence suggests a growing level of negligence towards government property, reflecting the wrong approach from both sides of the aisle. Government property is not handled properly, and staff attitudes towards work, especially towards government property, remain unchanged. Apathy on the government's part towards the local communities is evident in the unaddressed environmental hazards afflicting many local government areas; this apathy is clear evidence of how far the government is from the people. The COVID-19 pandemic has now become a justification for the government to neglect the people's requests, as pointed out by the communities.
Rather than seeing this moment as a wake-up call to assuage the ordinary person's misery, more discomfort has ironically been inflicted. For instance, some states have gone as far as depriving their citizens of the COVID-19 palliatives meant to alleviate the harsh realities of the time and the country's situation.

Lack of Finance in Local Government Administration

Every year, local government authorities draw up budgets to support planning and their statutory obligation to deliver quality services to the local communities. In these budgets, the local government seeks to procure hospital equipment, maintain schools, build markets for rural dwellers, and provide various other services. Nevertheless, only a tiny fraction of the allocation is approved by the state government, making it difficult to prioritise when workers' salaries are at stake. Any contractor that executes contracts in local government should be prepared for the worst, as waiting many years for reimbursement is common. The local government is perpetually underfunded, and when it is considered at all, it receives only crumbs. Under such circumstances, the local government cannot mount any viable developmental project. This situation is unsustainable and can ruin the economy of any society. In some instances, local government areas are deliberately denied funding for refusing to support the governor's candidacy. They are outrightly marginalised and further deprived of any form of financial support. For those continually denied local developmental resources, the concept of democracy is impracticable.

Absence of E-Procurement/Technological Innovation

An effective public procurement system cannot be achieved without technological innovation, a trend that will continue to evolve for decades to come. Many developed communities lead in these socio-economic advancements to encourage best value for money and foster development. Because of the absence of e-procurement in Nigeria's procurement structure, inconsistencies have become prominent. The use of an e-procurement system is fundamental in several ways: it limits the number of human interfaces, promotes accountability and probity, and improves speed and accuracy. The application of the necessary tools, software, and e-procurement systems allays the fear of mismanagement that tends to undermine the objective of sustainable procurement and reliability across the entire echelon of procurement practice. For lack of e-procurement systems in local government areas across the country, numerous projects have failed to see the light of day, as is evident in capital projects handled by quack contractors in the local communities, leaving footprints of incompetence and uncompleted projects.

Study methodology

This research uses an exploratory approach to contribute to theory building, particularly on what sustainable public procurement implies for local communities in Nigeria, sub-Saharan Africa, amid COVID-19, and its relation to social, economic, and environmental possibilities. Eight public procurement practitioners from four geopolitical zones, covering eight local government areas, participated in semi-structured interviews to explore perceptions of public procurement activities at the local government level and to gain in-depth knowledge of the paradigm of local government procurement.
These LGAs are prominent and considered a good representation for this conceptual study. Additionally, documentary evidence includes reports published on the United Nations Development Goals, national reports from the BPP, and verifiable reports from donor agencies.

Conclusion

This study contributes to the growing body of literature, particularly on sustainable public procurement in Nigeria's local communities. A society cannot record any meaningful development while its material resources and human capacity development at the local government areas remain grossly immobilised. This study focused on exploring public procurement activities in Nigeria's local government areas amid the coronavirus pandemic. It confirmed several challenges, centred on deep-seated corruption, gross negligence, and the mismanagement of government property by public officials and the local communities. Furthermore, the government can establish an anti-corruption task force at the grass roots to monitor procurement activities for better results, and introduce a centralised database system that promotes accountability, transparency, and best value for money. Removing corruption moves current practice towards sustainable procurement. Training should be mandatory for procurement officers so that they have adequate knowledge and approaches to enhance their productivity, steer their attitudes away from illicit and corrupt practices, and build the integrity to become better practitioners.
2021-08-20T18:57:47.943Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "49f6ac81a92d10a99c476c1fbfeec07a070c284e", "oa_license": "CCBY", "oa_url": "https://iiste.org/Journals/index.php/JEDS/article/download/56132/57971", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "e112d4dc31babdf5c9e2b36b3bcb6523f04c527d", "s2fieldsofstudy": [ "Environmental Science", "Business", "Economics" ], "extfieldsofstudy": [] }
257952664
pes2o/s2orc
v3-fos-license
Quiz-based Knowledge Tracing

Shuanghong Shen, Enhong Chen, Senior Member, IEEE, Bihan Xu, Qi Liu, Zhenya Huang, Linbo Zhu, Yu Su

Abstract-Knowledge tracing (KT) aims to assess individuals' evolving knowledge states according to their learning interactions with different exercises in online learning systems (OIS), which is critical in supporting decision-making for subsequent intelligent services, such as personalized learning source recommendation. Existing researchers have broadly studied KT and developed many effective methods. However, most of them assume that students' historical interactions are uniformly distributed in a continuous sequence, ignoring the fact that actual interaction sequences are organized based on a series of quizzes with clear boundaries, where interactions within a quiz are consecutively completed, but interactions across different quizzes are discrete and may be spaced over days. In this paper, we present the Quiz-based Knowledge Tracing (QKT) model to monitor students' knowledge states according to their quiz-based learning interactions. Specifically, as students' interactions within a quiz are continuous and have the same or similar knowledge concepts, we design the adjacent gate followed by a global average pooling layer to capture the intra-quiz short-term knowledge influence. Then, as various quizzes tend to focus on different knowledge concepts, we respectively measure the inter-quiz knowledge substitution by the gated recurrent unit and the inter-quiz knowledge complementarity by the self-attentive encoder with a novel recency-aware attention mechanism.
Finally, we integrate the inter-quiz long-term knowledge substitution and complementarity across different quizzes to output students' evolving knowledge states. Extensive experimental results on three public real-world datasets demonstrate that QKT achieves state-of-the-art performance compared to existing methods. Further analyses confirm that QKT is promising in designing more effective quizzes.

Index Terms-data mining, neural networks, online learning system, knowledge tracing, quiz-based modeling.

I. INTRODUCTION

Online learning systems (OIS) have been playing an increasingly important role in satisfying individuals' growing demands for intelligent educational services [1,2], e.g., personalized learning source recommendation [3,4]. Knowledge tracing (KT), which aims to monitor students' dynamic knowledge states in learning based on their learning interactions on OIS, is one of the fundamental research tasks providing guidance for these intelligent services [5]. In recent years, an increasing amount of attention has been attracted to this emerging research area [6]. Generally, an OIS assigns exercises related to different Knowledge Concepts (KCs, e.g., Adding and Subtracting Fractions) for students to answer so that they can acquire the required knowledge. According to students' interactions, i.e., their performance on different exercises, researchers have designed different KT methods to infer their knowledge states and predict their future performance. Subsequently, we can enhance learning and teaching efficiency by adopting targeted teaching strategies for each student in accordance with their knowledge states. In the literature, most existing methods measure students' knowledge states through sequence modeling. For example, Bayesian knowledge tracing (BKT) formalized the learning process as a Markov process and utilized the Hidden Markov Model to assess the dynamic knowledge state [5]. Deep knowledge tracing (DKT) further introduced RNNs/LSTMs [7] to conduct sequence modeling on students' learning interactions [8]. Many subsequent studies have improved BKT and DKT in different aspects, such as considering students' individual characteristics [9,10,11], utilizing more side information [12,13], and incorporating the structure of KCs [14,15]. Moreover, some recent works presented new architectures to solve the KT problem, such as using memory networks to store and update the knowledge state [16] and applying the attention mechanism to capture the knowledge dependency of learning interactions [17,18]. However, most existing KT methods, including the above-mentioned ones, assume that students' historical interactions are uniformly distributed in a continuous sequence, which does not conform to reality. Actually, exercises in OIS are assigned to students in the form of quizzes rather than individually [19,20]. Specifically, a quiz is defined as an informal test of specific knowledge, which consists of a number of exercises on the same or similar KCs [21]. Therefore, students' historical interactions are only continuous within the same quiz, and there are clear boundaries between different quizzes, which may be spaced over several days. To better illustrate, we give some real examples of students' interaction sequences from the perspective of quizzes in Figure 1. These examples are extracted from the real learning data in Eedi [22], an OIS that millions of students interact with daily around the globe.
In Figure 1, we recorded students' interactions based on days, with the first day denoting the starting point of the interaction sequence. The lower part of Figure 1 clearly indicates that students' historical interactions are quiz-based and discrete. Besides, in the upper part of Figure 1, we give the details of s_1's interactions that occurred continuously within a specific quiz: the whole quiz answering process took about 11 minutes, and all of the exercises in this quiz have the same KC: Basic Arithmetic.

(Figure 1 caption: Real examples of students' interaction sequences from the perspective of quizzes. In the lower block diagram, students' interactions are recorded by day, with the first day as the starting point of the sequence; a square denotes one day, dark squares denote days on which students finished one or more quizzes, and light squares denote days on which they did not answer any exercises. The lower part indicates that students' interaction sequences are organized as a series of quizzes with clear boundaries. The upper part visualizes s_1's continuous interactions on a specific quiz of 11 different exercises related to the same KC, Basic Arithmetic; s_1 spent about 11 minutes completing this quiz.)

In summary, students' learning interactions within the same quiz are continuous over a short period of time, while those across different quizzes are discrete with certain intervals. In this paper, we argue that it is critical and beneficial to consider the quiz-based organization style of students' learning interactions in KT. Unfortunately, there are many technical and domain challenges to be solved along this line. First, as mentioned above, exercises in the same quiz usually have similar KCs and students' related interactions are continuous over a short period of time, so it is a nontrivial problem to capture the intra-quiz short-term knowledge influence. For example, compared to a hard previous exercise, answering an easy previous exercise should have a different influence on a student's performance on the present exercise within a quiz [23]. There are more challenges when it comes to different quizzes, as they are discrete and usually cover various KCs. Specifically, if a recent quiz has similar KCs to previous quizzes, students' interactions on the previous similar quizzes may become unreliable and be replaced by the recent ones; how do we capture such inter-quiz long-term knowledge substitution? Besides, if a recent quiz has new KCs that have never appeared in previous quizzes, how do we integrate these quizzes related to various KCs, i.e., measure the inter-quiz long-term knowledge complementarity? To achieve our primary goal of realizing quiz-based KT while addressing the above challenges, we propose the Quiz-based Knowledge Tracing (QKT) model in this paper, which measures students' knowledge states by exploring their quiz-based learning interactions. Specifically, we first design the adjacent gate to control the knowledge influence between adjacent interactions within the same quiz. Considering that students' average performance on a quiz reflects their knowledge states on the quiz-related KC, we further perform the global average pooling operation for each quiz. For example, the student s_1 in Figure 1 got 7 correct answers and 4 wrong answers on the 11 exercises related to the same KC, Basic Arithmetic, so s_1's knowledge state with respect to Basic Arithmetic should be approximately 7/11.
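As a toy illustration of this averaging intuition, in plain Python with made-up answer labels matching the s_1 example:

```python
# A quiz as a list of binary answers (1 = correct, 0 = wrong); 7 of 11 correct, as for s_1.
quiz_answers = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1]

# The average correctness approximates the knowledge state on the quiz-related KC.
knowledge_estimate = sum(quiz_answers) / len(quiz_answers)
print(f"Estimated mastery of Basic Arithmetic: {knowledge_estimate:.3f}")  # ~0.636 = 7/11
```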
Then, we directly utilize the Gated Recurrent Units (GRU) [24] to assess the inter-quiz long-term knowledge substitution, which memorizes students' interactions on recent quizzes and forgets the remote ones. Besides, we present the self-attentive encoder to measure the inter-quiz knowledge complementarity, which preserves and fuses students' interactions on all historical quizzes. To measure students' varying degrees of knowledge loss on historical quizzes, we design a novel recency-aware attention mechanism in the self-attentive encoder. Finally, we can get students' evolving knowledge states by combining the inter-quiz long-term knowledge substitution and complementarity between different quizzes. Our main contributions are summarized as follows:

• We first focus on the quiz-based organization style of students' learning interactions in OIS for the KT task. We summarize the key feature of the quiz-based interaction sequence, i.e., it is continuous over a short period of time within a quiz and discrete with certain intervals across different quizzes. We further give a detailed analysis of three public real-world datasets collected from different OIS from the perspective of quizzes in Section VI-A.
• We propose a novel Quiz-based Knowledge Tracing model to assess students' dynamic knowledge states by exploring their quiz-based interaction sequences. In QKT, we respectively measure the intra-quiz short-term knowledge influence and the inter-quiz long-term knowledge substitution and complementarity.
• We conduct extensive experiments to verify the effectiveness of QKT; the results indicate that QKT has superior performance to existing methods. Further analyses indicate that QKT can be utilized to help design more effective quizzes.

II. RELATED WORKS

In this section, we introduce existing related works from two categories: knowledge tracing and cognitive diagnosis.

A. Knowledge Tracing

With the development of OIS, the significance of monitoring students' knowledge states is becoming increasingly prominent [6]; this was first formalized as the knowledge tracing task by Corbett and Anderson [5]. They proposed the BKT model, assuming the learning process to be a Markov process and using students' observed interaction sequences to infer their latent knowledge states. Then, researchers enriched and developed BKT in many aspects, for example, individualizing the parameters in BKT for each student [9,10], considering the tutor intervention of OIS [25], and incorporating students' forgetting effect [26]. In recent years, the advances of deep learning (DL) have boosted neural-network-based KT models. Specifically, DKT introduced RNNs/LSTMs to model students' knowledge states in a sequential manner [8]. Then, DKVMN used memory networks to store and update students' latent knowledge states on specific KCs [16]. Some researchers considered the natural structure within the KCs and proposed to use GNNs to capture the integrated influence of the knowledge state between different KCs [27,14,15]. Besides, some studies noticed that students' related historical interactions had more impact on their future performance; therefore, they introduced the attention mechanism to model the knowledge dependencies in learning. For example, Pandey and Karypis [17] applied the Transformer [28] to trace students' knowledge states, and Ghosh et al. [18] incorporated a self-attention mechanism with a monotonic assumption. Zhang et al.
[29] applied the dual-attentional mechanism to model students' learning progress based on multiple factors. There were also many works focused on the representation of exercises, for example, learning the semantic representations of exercises from their text contents [30,31]. Liu et al. [32] turned to pursuing pretrained exercise embeddings from exercise-KC relations, exercise similarity, KC similarity, and exercise difficulties together. Shen et al. [33] modeled the exercise difficulty effect and designed an adaptive sequential neural network to match the exercise difficulty with the knowledge state. Recently, researchers further explored students' learning process. Wang et al. [34] presented the Hawkes process to adaptively model temporal cross-effects in learning. Shen et al. [35] proposed to model students' learning gains and forgetting in learning for calculating their dynamic knowledge states. Long et al. [36] estimated students' individual cognition level and knowledge acquisition in learning. In summary, most existing KT methods follow the paradigm of sequence modeling. They assume that students' historical interaction sequences are uniformly distributed in a continuous sequence, which neglects the fact that students' interactions are quiz-based with clear boundaries; interactions across different quizzes are actually discrete. Although some works have noted the significance of the interaction's timestamp [37,35], they were limited to simply utilizing the time information as additional features. Ke et al. [38] further split students' historical interactions into sessions based on a fixed time duration and performed session-aware KT. However, this is also inconsistent with the quiz-based organization of students' interactions and damages the knowledge correlation intra- and inter-quiz.

B. Cognitive Diagnosis

Cognitive diagnosis (CD) is also concerned with assessing individuals' knowledge states based on their behaviors [39]. In contrast to KT, CD is often applied in testing scenarios, utilizing all historical interactions to learn each student's static knowledge state. Specifically, the item response theory (IRT) is one of the classical CD models [40], which used a logistic regression model to estimate students' knowledge states:

P(a = 1) = c + (1 − c) / (1 + e^(−(θ − β))),

where c is the random guessing probability, θ is the knowledge state, and β is the exercise difficulty. However, θ in IRT is a single value, which cannot reflect students' knowledge states on various KCs. Therefore, multidimensional item response theory (MIRT) was proposed, which uses a multidimensional vector to represent students' knowledge states on different KCs [41,42]. In recent years, deep learning has been widely employed for cognitive diagnosis [43,44,45]. For example, Neural Cognitive Diagnosis (NCD) attempted to utilize neural networks to model the student-exercise interactions. NCD's general framework can be formulated as:

y = φ(F_s, F_kc, F_other; θ_f),

where φ denotes the neural network used to model the student-exercise interactions, F_s is the knowledge state, F_kc is the KC factor, F_other denotes other factors (such as exercise difficulty), and θ_f denotes all learnable parameters. Subsequently, researchers have made extensions to NCD in different aspects, such as incorporating students' abundant context information [46] and measuring the hierarchical relations among students, exercises, and KCs [47]. However, CD has an underlying assumption that all of a student's interactions are equally important to their knowledge state.
This assumption is reasonable within a single test/quiz, but may not be reliable for students' quiz-based interaction sequences in reality.

III. PROBLEM STATEMENT

The OIS contains multiple basic elements, including students, exercises, KCs, and student-exercise interactions. Suppose that all the students in a dataset form the student set S = {s_1, s_2, ..., s_S}, all the exercises form the exercise set E = {e_1, e_2, ..., e_E}, and all the KCs form the KC set K = {k_1, k_2, ..., k_K}, where s, e, and k respectively denote a student, an exercise, and a KC, and S, E, and K respectively denote the numbers of students, exercises, and KCs. In general, each exercise is related to specific KCs, and we use the Q-matrix given by educational experts to indicate the exercise-KC relations. The Q-matrix is made up of ones and zeros, where a one means the corresponding exercise and KC are related, and a zero means otherwise. For a specific student s, the student-exercise interaction is the most basic unit. We let i denote an interaction of s, which includes an exercise e and the answer a given by s on e, i.e., i = (e, a|s). Here the answer a is a binary correctness label (1 represents correct and 0 means incorrect). The quiz is the basic organization form of exercises in OIS. Generally, a quiz is presented as an informal test of specific knowledge, made up of multiple exercises with the same or similar KCs. For example, the quiz in Figure 1 contains 11 exercises with the same KC: Basic Arithmetic. Note that the quiz may have different names in different systems, e.g., it is called the assignment in ASSISTments [48] and CodeWorkout [49]. For the sake of convenience, we uniformly use the name quiz throughout this paper. We let q denote the student's interactions on a quiz, i.e., q = {i_1, i_2, ..., i_L}, where L is the length of the quiz q (i.e., the number of interactions in q) and the subscripts from 1 to L represent the order of each interaction. Let U denote the student's whole interaction sequence; we can represent U as the quiz set U = {q_1, q_2, ..., q_J}, where J is the number of quizzes and the subscripts from 1 to J represent the order of each quiz. Then, we can formally formulate the task of quiz-based knowledge tracing as follows.

Problem Formalization: Given a student's sequential interactions U = {q_1, q_2, ..., q_J} on multiple quizzes, the quiz-based knowledge tracing task aims to assess the student's dynamic knowledge states across different quizzes and predict her performance on new exercises in future quizzes.

IV. QUIZ-BASED KNOWLEDGE TRACING

In this section, we present the proposed QKT model in detail and indicate how to measure the intra-quiz short-term knowledge influence, as well as the inter-quiz long-term knowledge substitution and complementarity. The architecture of QKT is depicted in Figure 2 and Algorithm 1; it mainly consists of the intra-quiz modeling module and the inter-quiz modeling module. Specifically, in the intra-quiz modeling module, we mainly focus on exploring the knowledge influence between students' adjacent interactions within a quiz and capturing their overall knowledge states for each quiz. Then, in the inter-quiz modeling module, we turn to measuring the knowledge substitution and knowledge complementarity between different quizzes, which are finally integrated together to output students' evolving knowledge states across different quizzes.
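Before detailing the two modules, a minimal sketch of the quiz-based input organization formalized above may be helpful (plain Python; the class and field names are illustrative, not taken from the paper):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Interaction:
    exercise_id: int  # index into the exercise set E
    answer: int       # binary correctness label: 1 = correct, 0 = incorrect

# A quiz q is an ordered list of interactions with the same or similar KCs;
# a student's whole sequence U is an ordered list of quizzes q_1, ..., q_J.
Quiz = List[Interaction]
InteractionSequence = List[Quiz]

# Example: a student who completed two short quizzes on different days.
U: InteractionSequence = [
    [Interaction(3, 1), Interaction(7, 0), Interaction(7, 1)],  # quiz q_1
    [Interaction(12, 1), Interaction(15, 1)],                   # quiz q_2
]
```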
A. The Intra-quiz Modeling Module

In this module, given the student's interactions on the quiz q = {i_1, i_2, ..., i_L}, we aim to measure the intra-quiz short-term knowledge influence and output the quiz vector q that represents the student's knowledge states for the quiz q.

1) Interaction Modeling: A quiz usually contains multiple interactions, which consist of exercise-answer pairs. Therefore, we first conduct interaction modeling. Specifically, we use the embedding matrix E ∈ R^(E×d_e) to represent all exercises in a dataset. Therefore, for the specific exercise e_l in the quiz q, we can obtain its embedding e_l from E. Besides, considering that exercises have different KCs, we also use an embedding matrix K ∈ R^(K×d_k) (here we set d_e to be equal to d_k) to represent all KCs, so that we can also obtain the KC embedding k_{e_l} of e_l from K. Then, we combine the exercise embedding e_l and its KC embedding k_{e_l} by a multilayer perceptron (MLP) to obtain the complete exercise vector ẽ_l ∈ R^(d_e), where W_1 ∈ R^(d_e×d_e) and b_1 ∈ R^(d_e) are the MLP's trainable parameters. Subsequently, considering the binary answer value, we respectively present the right layer and the wrong layer to distinguish the different effects of the two binary answers: the exercise-answer pair is represented as the interaction vector i_l ∈ R^(d_i), produced by the right layer (with trainable parameters W_r ∈ R^(d_e×d_i) and b_r ∈ R^(d_i)) when the answer is correct, and by the wrong layer (with trainable parameters W_w ∈ R^(d_e×d_i) and b_w ∈ R^(d_i)) when it is incorrect.

2) Knowledge Influence Modeling: After finishing the interaction modeling, we proceed to measure the knowledge influence of interactions within a quiz. Concretely, such knowledge influence mainly exists between adjacent interactions and arises from many aspects. For example, spending different amounts of energy on the previous exercise should have a different impact on students' performance on the current exercise [50]. Besides, previous hard exercises bring more negative effects than easy exercises, i.e., a learning effect occurs when easy exercises come before harder exercises and a fatigue effect occurs when exercises come in a hard-to-easy order [51,23]. In our proposed QKT model, we measure the above adjacent multi-aspect knowledge influence in a uniform manner. Specifically, we design the adjacent gate according to the influence between two adjacent interactions, which is then applied to control how much information should be respectively extracted from the previous and current interactions:

Γ_l = σ((i_{l−1} ⊕ i_l) W_2 + b_2),
x_l = Γ_l · i_{l−1} + (1 − Γ_l) · i_l,

where Γ_l ∈ R^(d_i) denotes the adjacent gate and x_l ∈ R^(d_i) denotes the combined vector of two adjacent interactions. σ is the sigmoid activation function, · is the element-wise product operation, ⊕ means vector concatenation, and W_2 ∈ R^(2d_i×d_i) and b_2 ∈ R^(d_i) are trainable parameters.

3) Global Average Pooling: In this part, we further make trade-offs between all interactions in a quiz to get the quiz vector that represents the student's overall knowledge state for the quiz q. As interactions in a quiz have the same or similar KCs, each interaction should partially contribute to the overall result. Therefore, we perform the global average pooling operation on {x_1, x_2, ..., x_L} to calculate the quiz vector:

q = (1/L) Σ_{l=1}^{L} x_l,

where q ∈ R^(d_q) (d_q equals d_i) denotes the quiz vector.
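A minimal PyTorch sketch of this intra-quiz module may make the data flow concrete. It is only an illustration: the ReLU activations, the concatenation-based MLP, and the treatment of the first interaction (which has no predecessor) are assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class IntraQuizModule(nn.Module):
    """Turns one quiz (a short run of interactions) into a single quiz vector q."""

    def __init__(self, n_exercises: int, n_kcs: int, d_e: int = 128, d_i: int = 128):
        super().__init__()
        self.exercise_emb = nn.Embedding(n_exercises, d_e)
        self.kc_emb = nn.Embedding(n_kcs, d_e)
        self.combine = nn.Linear(2 * d_e, d_e)        # MLP combining exercise and KC embeddings
        self.right_layer = nn.Linear(d_e, d_i)        # applied when the answer is correct
        self.wrong_layer = nn.Linear(d_e, d_i)        # applied when the answer is incorrect
        self.adjacent_gate = nn.Linear(2 * d_i, d_i)  # W_2, b_2 over concatenated neighbors

    def forward(self, exercises, kcs, answers):
        # exercises, kcs, answers: (L,) integer tensors for one quiz of length L
        e = torch.relu(self.combine(torch.cat(
            [self.exercise_emb(exercises), self.kc_emb(kcs)], dim=-1)))
        # Route each interaction through the right or wrong layer by its answer.
        i = torch.where(answers.unsqueeze(-1) == 1,
                        self.right_layer(e), self.wrong_layer(e))
        # Adjacent gate: blend each interaction with its predecessor.
        prev = torch.cat([i[:1], i[:-1]], dim=0)      # predecessor of i_1 is i_1 itself
        gate = torch.sigmoid(self.adjacent_gate(torch.cat([prev, i], dim=-1)))
        x = gate * prev + (1.0 - gate) * i
        return x.mean(dim=0)                          # global average pooling -> quiz vector q
```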
Through the above modeling process, q contains both the intra-quiz short-term knowledge influence and the student's overall knowledge state on the quiz-related KCs.

B. The Inter-quiz Modeling Module

In this module, we turn our attention from one quiz to multiple quizzes and try to measure the inter-quiz long-term knowledge integration, i.e., integrating different quiz vectors together to represent students' dynamic knowledge states. Specifically, there are two main forms of inter-quiz long-term knowledge integration: knowledge substitution and knowledge complementarity. We first assess them separately and then combine them.

1) The Knowledge Substitution: Knowledge substitution means that students' interactions on previous quizzes will be replaced by recent ones, which often appears in quizzes that have the same or similar KCs. For example, a student may perform poorly on quizzes related to the KC Venn Diagrams when he first learns this KC. However, after a period of studying, he can achieve perfect performance on subsequent quizzes, so that his interactions on the beginning quizzes are unreliable and should be updated by the recent interactions. To measure this knowledge substitution, we directly utilize the Gated Recurrent Units (GRU) [24] to model students' quiz sequences {q_1, q_2, ..., q_J}:

sub_j = GRU(q_j, sub_{j−1}),

where sub_{j−1} ∈ R^(d_q) denotes the summary of all previous quizzes; inside the GRU, Γ_r is the reset gate that determines how to combine the present quiz vector with the previous memory sub_{j−1}, Γ_u is the update gate that controls how much of the previous memory will be preserved, and tanh is the activation function. After processing all quizzes in order, we can get the vector sub_J that captures the inter-quiz long-term knowledge substitution, which memorizes more of students' interactions on recent quizzes and forgets more on the remote quizzes.

2) The Knowledge Complementarity: In contrast to knowledge substitution, knowledge complementarity means that students' interactions on previous quizzes remain effective in parallel with the interactions on subsequent quizzes. Specifically, among all the quizzes completed by the student, we have measured the knowledge substitution for quizzes with overlapping KCs. However, there are also many quizzes that focus on different KCs, and students' interactions on these quizzes should complement each other for a more comprehensive knowledge state retrieval. To measure this knowledge complementarity, we present the self-attentive encoder to capture the dependency of different quizzes. Specifically, for each quiz vector q_j, we first utilize three embedding layers to respectively project q_j into the query vector q̃_j ∈ R^(d_q×1), the key vector k̃_j ∈ R^(d_q×1) and the value vector ṽ_j ∈ R^(d_q×1). Then, the dot-product attention value α_jj′ between q_j and q_j′ is calculated as:

α_jj′ = exp(q̃_j^T k̃_j′) / Σ_{j″=1}^{J} exp(q̃_j^T k̃_j″). (8)
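A compact PyTorch sketch of the two inter-quiz paths as described to this point, using plain (not yet recency-aware) dot-product attention; the layer shapes, the softmax scaling, and the final activation are assumptions rather than the paper's exact choices:

```python
import torch
import torch.nn as nn

class InterQuizModule(nn.Module):
    """Fuses a sequence of quiz vectors into a knowledge state via
    substitution (GRU) and complementarity (self-attention) paths."""

    def __init__(self, d_q: int = 128, d_h: int = 128):
        super().__init__()
        self.gru = nn.GRU(d_q, d_q, batch_first=True)  # knowledge substitution path
        self.to_query = nn.Linear(d_q, d_q)
        self.to_key = nn.Linear(d_q, d_q)
        self.to_value = nn.Linear(d_q, d_q)
        self.fuse = nn.Linear(d_q, d_h)                # W_6, b_6 in Eq. (12)

    def forward(self, quizzes):
        # quizzes: (J, d_q) quiz vectors q_1, ..., q_J for one student
        _, sub = self.gru(quizzes.unsqueeze(0))        # final hidden state of the GRU
        sub_J = sub.squeeze()                          # (d_q,) substitution summary

        # Plain dot-product self-attention over quizzes (complementarity path).
        Q, K, V = self.to_query(quizzes), self.to_key(quizzes), self.to_value(quizzes)
        alpha = torch.softmax(Q @ K.T / K.shape[-1] ** 0.5, dim=-1)  # (J, J) weights
        z = alpha @ V                                  # complemented quiz vectors
        com_J = z.mean(dim=0)                          # global average pooling

        return torch.relu(self.fuse(sub_J + com_J))   # knowledge state h (activation assumed)
```

The recency-aware refinement described next only redistributes part of the attention mass alpha from earlier quizzes toward later ones, so this skeleton stays the same apart from the attention weights.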
However, α_jj′ only measures the similarity of q_j and q_j′ as their attention weight, without considering the order of a specific quiz in the whole quiz sequence. Actually, learning is temporal and recent events have more influence on students, which is also known as the recency effect [52]. Therefore, we argue that it is necessary to weight more recent quizzes more heavily, and we further propose the recency-aware attention mechanism, which adds two recency-aware terms to the attention value α_jj′ as:

α̃_jj′ = α_jj′ + β¹_j′ − β²_j′, (9)

where γ is a constant parameter that scales the values of the recency-aware terms β¹_j′ and β²_j′ to match the dot-product attention value α_jj′. Note that through adding β¹_j′ and subtracting β²_j′, we just move part of the dot-product attention value from the front quizzes to the later quizzes in the student's quiz sequence, realizing the assumption that more recent quizzes matter more. The total value of the dot-product attention is not changed, i.e., Σ_{j′=1}^{J} α̃_jj′ = Σ_{j′=1}^{J} α_jj′ = 1. Subsequently, α̃_jj′ is multiplied by ṽ_j′ to get the output as a weighted sum of the values:

z_j = Σ_{j′=1}^{J} α̃_jj′ ṽ_j′, (10)

where z_j ∈ R^(d_q×1) denotes the quiz vector for q_j, which includes the knowledge complementarity with other quizzes. Finally, similar to the intra-quiz interaction integration in Section IV-A3, we perform global average pooling on {z_1, z_2, ..., z_J}:

com_J = (1/J) Σ_{j=1}^{J} z_j. (11)

Then, we can get the vector com_J that captures the inter-quiz long-term knowledge complementarity.

3) Integrating the Knowledge State: After calculating sub_J and com_J, which respectively model the inter-quiz long-term knowledge substitution and complementarity, we need to further combine them to output students' knowledge states across different quizzes. In QKT, we utilize a simple but effective way of addition to integrate them:

h = W_6^T (sub_J + com_J) + b_6, (12)

where h ∈ R^(d_h) denotes the knowledge state vector, and W_6 ∈ R^(d_q×d_h) and b_6 ∈ R^(d_h) are trainable parameters.

V. MODEL LEARNING

To train all embeddings, weight matrices, and bias terms in QKT, we first use h to predict the student's answer on the exercise e_n in future quizzes, and then choose the cross-entropy log loss between the predicted answer y_n and the student's actual answer a_n as the objective function:

L = − Σ_n (a_n log y_n + (1 − a_n) log(1 − y_n)) + λ_θ ||θ||², (13)

where we use both the multiplication (i.e., ẽ_n · h) and the concatenation (i.e., ẽ_n ⊕ h) to integrate the prediction vector, θ denotes all trainable parameters in QKT, and λ_θ is the regularization coefficient. The objective function is minimized using the Adam optimizer [53] on mini-batches.

VI. EXPERIMENTS

In this section, we first introduce the public real-world datasets utilized in our experiments. Then, we conduct experiments to evaluate the effectiveness of QKT with the aim of answering the following research questions:
• RQ1: Does our presented QKT model outperform existing methods on the student performance prediction task?
• RQ2: How do the different components in QKT impact its performance respectively?
• RQ3: What is the impact of a varying number of quizzes for each student on QKT?
• RQ4: What is the impact of a varying length of interactions for each quiz on QKT?

A. Datasets

Three public real-world datasets are used for evaluation in our experiments: (1) Assist2012; (2) Eedi2020; (3) CSEDM. We give the statistics of all datasets in Table I. The distributions of the quiz length L and the quiz number J in each dataset, which vary across datasets, are given in Figure 3. The detailed descriptions of all datasets are:
• Assist2012 is collected from 8th-grade students for the school year 2012-2013 in the ASSISTments math tutoring system [48].
Exercises with similar KCs in ASSISTments are organized as assignments (similar to quizzes), and students need to practice on different assignments to obtain the related knowledge. In our experiments, we filtered out interactions whose exercises' related KCs are missing.
• Eedi2020 is published in the NeurIPS 2020 Education Challenge and contains students' answers to mathematics questions from Eedi, an OIS that millions of students interact with daily around the globe, for school years 2018 to 2020. Exercises in Eedi are organized as different quizzes. We used the data for tasks 3 & 4 in this challenge. This dataset has hierarchical KCs; we utilize only the KC in the leaf node for each exercise.
• CSEDM is published in the 2nd Computer Science Educational Data Mining Challenge and is collected from a CS1 course in the Spring and Fall 2019 semesters at a public university in the U.S. It contains the code submissions from students for 50 coding problems in 5 different assignments.

B. Experimental Settings

For all students' answering records, we first sorted them by the timestamp of answering. Then we utilized their interactions on the first J − 1 quizzes to train the model and predicted their performance on the exercises in the last quiz (i.e., the J-th quiz). To ensure the reliability of the experimental results, we filtered out the students who answered fewer than 2 quizzes. To set up the training process, we randomly initialized all parameters and embeddings from the uniform distribution [54]. Experiments on all datasets have been 5-fold cross-validated on students (we will make the code publicly available upon acceptance). The initial learning rate was 1e-3 and we set a learning rate decay of 50% every three epochs to reach the optimal point. The mini-batch size was 32. The dimensions d_e, d_i, d_q, and d_h were uniformly set to 128. The constant parameter γ used to scale the value of the recency-aware terms in Eq. (9) was 1e-5. According to the distributions of the quiz length and the quiz number on all datasets shown in Figure 3, for Assist2012 the quiz length and the quiz number are both 30 in our experiments; for Eedi2020 we set the quiz length and the quiz number to 20 and 50, respectively; and for CSEDM the quiz length is 10 and the quiz number is 5.

C. Comparison Baselines

To verify the effectiveness of QKT, we compare it with existing KT methods. All comparison methods are tuned to their best performance for a fair comparison. All models are implemented in TensorFlow and trained on a cluster of Linux servers with NVIDIA Tesla V100 GPUs. Brief introductions of the KT baselines follow:
• DKT introduces RNNs/LSTMs to model students' knowledge states in a sequential manner [8].
• DKVMN uses the memory network to store and update the knowledge state [16].
• SAKT utilizes the self-attention mechanism to capture the knowledge dependency between student-exercise interactions [17].
• AKT learns context-aware interaction representations by the self-attentive encoder, and measures the knowledge students acquired in the past that is relevant to the current exercise [18].
• LPKT models the learning process and calculates students' learning gains and forgetting to assess their knowledge states [35].
• DIMKT measures the influence of the exercise difficulty on the knowledge state and learning process [33].
Moreover, we also compare QKT with two baselines in the area of cognitive diagnosis:
• IRT uses the logistic function to model students' knowledge states as a continuous variable [40].
• NCD presents neural networks to learn the complex student-exercise interactions [45].

D. Evaluation Metrics

To evaluate the performance of QKT and all baselines, we use multiple metrics from both regression and classification perspectives. Specifically, from the perspective of a classification task, we utilize the Area Under the ROC Curve (AUC) to measure effectiveness; the larger the value, the better the result. Then, as a regression task, we quantify the distance between the predicted and actual answers with the Root Mean Square Error (RMSE) and the square of the Pearson correlation (r²). For the RMSE, smaller values mean better results; for r², larger values are better.

E. Student Performance Prediction (RQ1)

The student performance prediction task is crucial for evaluating the quality of the captured knowledge state, i.e., correct predictions stand for better estimation of the knowledge states. To evaluate the effectiveness of QKT, we compare it with all baselines on this task. Table II gives the experimental results of our model and all baselines, from which we can draw several significant observations. First, QKT outperforms all existing methods on all datasets and all metrics, which indicates that our proposed QKT effectively captures the quiz-based organization style of students' learning interactions. Second, in contrast to some baselines (such as LPKT) that introduced the answer time and interval time as additional features to model the whole learning process, QKT achieves better performance without using any time information, which further demonstrates the significance and value of exploring students' interaction sequences from the quiz perspective. Note that time information is also important for QKT; as this paper mainly focuses on establishing and evaluating the necessity of conducting quiz-based knowledge tracing, we leave the extension of QKT to time information as future work. Third, the performance gains of QKT against the best baseline are positively related to students' average quiz number, which is in line with our intuition, as QKT benefits more from the inter-quiz modeling module when there are more quizzes (we further verify this in Section VI-G). For example, the average number of quizzes taken by students in the datasets Assist2012 and Eedi2020 is close (i.e., 19.02 for Assist2012 and 20.54 for Eedi2020, as shown in Table I), and the performance gains of QKT against the best baseline are also close on both datasets. However, students in CSEDM finished fewer quizzes, and QKT's performance gains there are relatively smaller.

F. Ablation Study (RQ2)

In this section, we conduct ablation experiments to show how the different components in QKT affect its performance. The experimental results on Assist2012 are shown in Table III, where the ablated variants are:
• QKT w/o SUB refers to QKT without the knowledge substitution modeling.
• QKT w/o ADJ refers to QKT without the adjacent gate between interactions within a quiz.
• QKT w/o COM refers to QKT without using the self-attentive encoder, assuming that there is only inter-quiz long-term knowledge substitution, i.e., students' knowledge state is totally represented by sub_J.
• QKT w/o RA refers to QKT without using the recency-aware attention mechanism, i.e., we removed the recency-aware terms β¹_j′ and β²_j′ in Eq. (9).
We can draw some interesting conclusions from the results shown in Table III.
First, removing the knowledge substitution modeling leads to the most significant performance decline of QKT, suggesting that considering students' previous and recent quizzes with the same or similar KCs equally heavily damages the knowledge state modeling, as students' interactions on the previous quizzes are no longer reliable after they answer the follow-up quizzes. Second, the inter-quiz knowledge complementarity is also important. If we only model the inter-quiz knowledge substitution, the information contained in quizzes without overlapping KCs is lost, and the performance of QKT drops as expected. Third, measuring the adjacent influence between interactions within a quiz is necessary, as it is critical for intra-quiz short-term knowledge influence modeling. Finally, the proposed recency-aware attention mechanism helps to better measure the knowledge substitution, which verifies our assumption that more recent quizzes matter more.

G. The Impact of the Quiz Number (RQ3)

As this paper focuses on quiz-based KT, we further evaluate how different numbers of quizzes per student (i.e., the quiz number) affect QKT's performance. Specifically, we compared the performance of QKT under 10 different quiz numbers on Eedi2020, i.e., 1, 2, 3, 4, 5, 10, 20, 30, 40, and 50. The corresponding results are reported in Figure 4, where we can directly observe a positive relationship between QKT's performance and the quiz number when the quiz number is small. Moreover, a closer look at the figure yields more interesting findings. First, the performance of QKT tends to stabilize when the quiz number reaches a threshold, which is approximately 10 in Figure 4. This phenomenon reflects a certain marginal effect of the quiz number, which inspires us to design more effective quiz combinations rather than asking students to finish more quizzes for better estimation of their knowledge states. Second, looking at the leftmost bar in each subplot of Figure 4, QKT performs poorly if we model only one quiz; however, there is a huge improvement once we consider more than one quiz. This suggests that QKT benefits greatly from modeling the inter-quiz long-term knowledge integration.

H. The Impact of the Quiz Length (RQ4)

In contrast to the quiz number, we also evaluate the performance of QKT under various quiz lengths (i.e., the length of interactions in each quiz). Specifically, we compared the performance of QKT under 8 different quiz lengths on Eedi2020, i.e., 5, 10, 15, 16, 17, 18, 19, and 20. The corresponding results are reported in Figure 5, where we find that more interactions in a quiz bring better performance, as expected. The reason is that more interactions contain more reliable information about students' knowledge states on the quiz-related KCs. Similar to the quiz number, we can find a certain marginal effect of the quiz length, i.e., QKT's performance grows very slowly after the quiz length reaches a threshold (about 15 in Figure 5), which can guide us to design more effective quizzes with fewer exercises to improve students' learning efficiency.

VII. CONCLUSIONS AND FUTURE WORKS

In this paper, we focused on students' interaction sequences from the quiz-based perspective for the KT task, and proposed a novel Quiz-based Knowledge Tracing (QKT) model to handle interaction sequences in their quiz-based organization.
We first analyzed and summarized the key feature of students' quiz-based interaction sequences, i.e., they are continuous over a short period of time within a quiz and discrete with certain intervals across different quizzes. Then, on the basis of these features, we respectively considered the intra-quiz short-term knowledge influence and the inter-quiz long-term knowledge substitution and complementarity, and designed corresponding modules in QKT to measure and integrate them to monitor students' dynamic knowledge states. Finally, we conducted extensive experiments on three public real-world datasets to evaluate the effectiveness of QKT, which indicated that QKT achieves better performance than the best existing methods. Further analyses of the effects of the quiz number and the quiz length demonstrated that QKT has the potential to benefit quiz design. In the future, we will further explore utilizing time information (such as answer time and interval time) for more precise intra- and inter-quiz relation modeling. Besides, we will attempt to introduce pre-defined knowledge relations between different quizzes to explicitly measure the inter-quiz knowledge integration.
2023-04-06T01:16:37.570Z
2023-04-05T00:00:00.000
{ "year": 2023, "sha1": "5bea36a8a648d5cd64113f22a420d03be758fbe9", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "fd86b25ccd63aa928d43b537878bccc94f86819a", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
2588677
pes2o/s2orc
v3-fos-license
PROCAIN: protein profile comparison with assisting information

Detection of remote sequence homology is essential for the accurate inference of protein structure, function and evolution. The most sensitive detection methods involve the comparison of evolutionary patterns reflected in multiple sequence alignments (MSAs) of protein families. We present PROCAIN, a new method for MSA comparison based on the combination of 'vertical' MSA context (substitution constraints at individual sequence positions) and 'horizontal' context (patterns of residue content at multiple positions). Based on a simple and tractable profile methodology and primitive measures for the similarity of horizontal MSA patterns, the method achieves a quality of homology detection comparable to a more complex advanced method employing hidden Markov models (HMMs) and secondary structure (SS) prediction. Adding SS information further improves PROCAIN performance beyond the capabilities of current state-of-the-art tools. The potential value of the method for structure/function predictions is illustrated by the detection of subtle homology between evolutionarily distant yet structurally similar protein domains. PROCAIN, relevant databases and tools can be downloaded from http://prodata.swmed.edu/procain/download. The web server can be accessed at http://prodata.swmed.edu/procain/procain.php.

INTRODUCTION

Recent progress in structural biology, including structural genomics initiatives (1), has significantly increased the coverage of existing protein folds by representatives with solved 3D structures (2). According to some analyses (3), this coverage is close to completion, which means that any given protein is likely to have a structure similar to a solved one. The existence of such structural templates opens the opportunity for structure modeling and potential function prediction for a majority of protein sequences. However, as demonstrated by the recent Critical Assessment of Techniques for Protein Structure Prediction, CASP8 (4), the presence of homologs with a known structure does not guarantee the quality of sequence-based structure prediction. The largest current challenge in the prediction process is the ability to detect a distant homolog and to construct an accurate alignment between this homolog and the target sequence. Thus, there is a strong demand for more powerful automated methods for remote homology detection and alignment construction.
Historically, most progress in sequence-based homology detection was made by considering sequence patterns that reflect evolutionary, structural and functional constraints in protein families. The introduction of numerical profiles (5) and hidden Markov models (HMMs) made it possible to compare a sequence to a multiple sequence alignment (MSA) rather than to a single representative (6)(7)(8). As a further improvement, methods for profile-profile (9)(10)(11)(12) and HMM-HMM (13) comparison were aimed at detecting similarities in amino acid preferences at sequence positions in two distant families. In addition to the residue substitution preferences ('vertical' signals), an MSA can reveal patterns of interdependence between amino acid content at different positions ('horizontal' signals). These patterns, dictated by structure and function, are often preserved better than the sequence itself and thus can help detect protein similarity where individual sequence positions have diverged beyond recognition. Currently, such 'horizontal' information is used by only a few methods (13,14), mainly in the form of secondary structure (SS) prediction.

Here, we complement sensitive profile-profile comparison with the consideration of various structure- and function-related patterns revealed by MSA: similarity in SS, amino acid conservation and MSA motifs. The resulting tool for MSA comparison, PROCAIN, improves homology detection and alignment quality beyond the range of current state-of-the-art methods.

Multiple sequence alignments Profiles are generated as described elsewhere (9) from multiple sequence alignments that are constructed and processed using a program (buildali.pl) generously provided by J. Soding. Starting from a single sequence, this program runs up to eight iterations of PSI-BLAST, filtering PSI-BLAST alignments at each iteration. We find that this filtering results in better homology detection by the resulting profiles.

Score for similarity of residue content in MSA columns To measure the positional similarity of residue content, we use the formula originally implemented in the COMPASS method (9):

s_seq = c1 · Σ_i n1_i · ln(Q2_i / p_i) + c2 · Σ_i n2_i · ln(Q1_i / p_i),

where n1_i and n2_i are effective counts (15) of residue type i in the compared columns 1 and 2; Q1_i and Q2_i are the estimated target residue frequencies (16) of the two columns; p_i is the background residue frequency; and c1 and c2 are scaling factors defined as in (9).

Sequence motif score In alignments of homologous protein sequences, matches of similar positions tend to cluster together along the sequence (17). These clusters often correspond to similar functional motifs (18). Thus, we introduce a simple additional score that rewards such clusters, i.e. diagonals of positively scoring matches in the dynamic programming matrix. If a pair of profile positions has a positive score for residue content, and both immediate neighbors of this pair also score positively, then the score of the central pair is increased by the sum of these three sequence similarity scores, s_m, multiplied by a weight w_m = 0.5.
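The motif rule above is simple enough to state as code. The following is a minimal sketch of that rule, not the released PROCAIN implementation: the function name and the matrix layout are assumptions for illustration only.

```python
import numpy as np

W_M = 0.5  # motif weight w_m from the text


def add_motif_bonus(s_seq: np.ndarray) -> np.ndarray:
    """Add the motif term to a matrix of positional residue-content scores.

    For every cell (i, j) with a positive score whose immediate diagonal
    neighbours (i-1, j-1) and (i+1, j+1) also score positively, the cell is
    increased by w_m * (s[i-1, j-1] + s[i, j] + s[i+1, j+1]).
    """
    s = s_seq.copy()
    n, m = s_seq.shape
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            triple = (s_seq[i - 1, j - 1], s_seq[i, j], s_seq[i + 1, j + 1])
            if all(v > 0 for v in triple):
                s[i, j] += W_M * sum(triple)
    return s
```

Note that this bonus only ever increases scores along ungapped diagonals, consistent with the statement below that the motif score is non-negative.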
Residue conservation score Strong residue conservation normally indicates important functional positions, such as binding sites; therefore matches and mismatches of such positions should be of special importance for homology detection (19). In order to further emphasize similarity between conserved positions, we introduce a separate conservation score. Residue conservation is calculated using an entropy-based method (17), with the final measure normalized to the range [0;1]:

C = 1 + (Σ_i f_i · ln f_i) / ln 20,

where f_i is the total frequency of residue type i in the compared columns 1 and 2. This conservation value is then combined with the sequence similarity score as

s'_seq = s_seq · (1 + w_c · C),

where w_c = 0.5 is the weight for the conservation score. This term additionally rewards matches between highly conserved positions if these positions are similar and penalizes these matches if the positions are dissimilar.

Secondary structure score PROCAIN incorporates SS information in the form of SS prediction by PSIPRED (20). A 3 × 3 secondary structure substitution matrix derived from structural alignments of SCOP domains is used for this purpose. The confidence levels of the secondary structure prediction are incorporated by weighting the secondary structure substitution value of the two compared columns, SS_12, by the prediction confidence levels CD_1 and CD_2 (0-9) of column 1 of the query profile and column 2 of the subject profile, and by the average sequence similarity score S_seq_mean, the mean of the sequence similarity scores S_seq_ij over all pairs of positions i and j of the query and subject profiles, whose sequences have lengths n and m; w_ss is a weight factor, constant for all query sequences after training.

An important characteristic of how PROCAIN incorporates these three types of information is that the sequence similarity scores, or their average value, are involved in every additional score.

Database A calibration database of 935 protein SCOP domains is formed by picking a representative protein domain from each SCOP fold (13). The subject database is composed of 4147 SCOP protein domains. MSAs are formed for all the protein sequences in both databases by running buildali.pl and are then converted into numerical profiles. SS is predicted for all the proteins in both databases using PSIPRED (20). An all-to-all profile comparison is performed within the subject database; for each protein domain, the average score to nonhomologs is calculated. Similarly, each protein profile of the calibration database is compared to all the profiles of the subject database using PROCAIN. The corresponding average scores are calculated and recorded. The average scores for both calibration and subject database profiles serve as rough measures of their propensity to produce a large score in a random comparison.

Statistical significance estimation As part of database construction, we precompute and store background score distributions for profiles of the searching database. First, for every profile A we calculate the set of similarity scores (21) against all nonhomologous profiles B in the same database and find the mean value of this set, <s>_A. Then we process this set by subtracting the mean score of the counterpart profile B from each score s_AB: s'_AB = s_AB − <s>_B. The resulting distribution of scores {s'_AB} for profile A is stored and used during the search.
For every profile C in the calibration database, we precompute the set of similarity scores {s_CA} against entries of the searching database and then calculate the mean value of this set, <s>_C. When the query profile Q is compared to profiles in the calibration database, the mean score of each profile C is subtracted from its similarity score to the query s_QC: s'_QC = s_QC − <s>_C. During the actual search, when query Q is compared to profile A in the searching database, the distribution of adjusted calibration scores for the query, {s'_QC}, is combined with the distribution of adjusted background scores for the subject, {s'_AB}. The resulting distribution is fitted with an EVD to estimate the EVD parameters k and λ, which are then used in the Karlin-Altschul formula to calculate the E-value: E = k·m·n·e^(−λS), where m and n are effective lengths of the two profiles and S = s_QA − 0.5(<s>_QC + <s>_A) is the adjusted score for the query against the database profile A.

Quality of homology detection by individual queries We construct sorted lists of hits for each query domain, and consider sensitivity (sensitivity = recall = TP/(TP + FN), where TP and FN are the numbers of true positives and false negatives, respectively) at a given level of selectivity (selectivity = precision = TP/(TP + FP), where FP is the number of false positives). These sensitivity values for the evaluated methods are compared using the paired t-test and the nonparametric paired Wilcoxon rank test. We find that a 50% level of selectivity reveals the most significant differences between the compared methods, and results are similar for the t-test and the Wilcoxon test.

RESULTS Numerical profiles describe amino acid content at MSA positions and reflect, in a simple way, the evolutionary process in a protein family at the level of individual residues in the polypeptide chain. However, position-by-position profile comparison cannot detect subtler yet powerful sequence features that are dictated by structural or functional constraints and remain preserved long after the divergence of two homologous sequence families. One obvious example of such a feature is the conservation of SS: as a rule, even extremely distant homologs share the SS elements that are part of their common structural fold. We find that two more features significantly improve the quality of homology detection: the level of amino acid conservation at individual positions and the presence of similar extended motifs without insertions or deletions.
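As a compact summary of the significance-estimation machinery described above, the sketch below combines the two adjusted calibration distributions, fits an EVD, and applies the Karlin-Altschul formula. This is a minimal illustration under stated assumptions: the function name and data layout are invented, SciPy's Gumbel fit stands in for the approximation of (23), and k is recovered from the fitted location and scale via the standard Gumbel-tail correspondence.

```python
import numpy as np
from scipy.stats import gumbel_r  # Gumbel = extreme value distribution (EVD)


def procain_evalue(s_qa, adj_calib_query, adj_background_subject,
                   mean_calib_query, mean_subject, m_eff, n_eff):
    """E-value for the comparison of query Q with database profile A.

    adj_calib_query        -- adjusted calibration scores {s'_QC} for the query
    adj_background_subject -- precomputed adjusted background scores {s'_AB}
    mean_calib_query       -- <s>_QC, mean calibration score of the query
    mean_subject           -- <s>_A, precomputed mean score of profile A
    m_eff, n_eff           -- effective lengths of the two profiles
    """
    combined = np.concatenate([adj_calib_query, adj_background_subject])
    mu, beta = gumbel_r.fit(combined)         # EVD location and scale
    lam = 1.0 / beta                          # lambda in E = k*m*n*exp(-lambda*S)
    k = np.exp(mu / beta) / (m_eff * n_eff)   # k from matching the Gumbel tail
    s_adj = s_qa - 0.5 * (mean_calib_query + mean_subject)
    return k * m_eff * n_eff * np.exp(-lam * s_adj)
```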
Alignment construction and scoring Given an MSA for a query protein family, PROCAIN performs a search in a profile database, constructs profile-profile alignments and reports significant similarities. We introduce new approaches to both alignment construction and estimating the statistical significance of these alignments (Figure 1). Profile-profile alignments are based on scores for the similarity between individual positions of the compared MSAs. These scores include four terms (Figure 1): a standard measure for the similarity in residue composition (9) combined with three additional measures that reflect local similarity in secondary structure, amino acid conservation and sequence motifs:

s = s_seq · (1 + w_c · C) + w_ss · s_ss + d_m · w_m · s_m,

where s_seq is the score for similarity of residue content at the two compared MSA columns [the same measure as used in COMPASS (9)]; C is a measure of total conservation in the two columns, normalized to the range [0;1]; w_c is the constant weight for the conservation term; and s_ss and w_ss are the score for similarity in predicted SS and the corresponding constant weight. The last term rewards aligned motifs: d_m = 1 if the two aligned positions have a positive residue content score and belong to a longer alignment segment that includes at least one position with a positive score on each side, and d_m = 0 otherwise; s_m is the sum of the scores for similarity of residue content for the given pair of positions and for its two immediate neighbors (see 'Methods' section for details). Importantly, the motif score is always non-negative: it rewards positive-scoring segments of a profile-profile alignment without indels but does not additionally penalize gaps or mismatches. The resulting positional scores s are used for the construction of the optimal local Smith-Waterman alignment (22) of the two profiles.

Estimating statistical significance Accurate estimation of the statistical significance of the optimal alignment score (P-value or E-value) is essential for the confident discrimination of even the most distant homologs from nonhomologs. In this respect, profile-profile comparison presents a particular challenge: the optimal alignment scores strongly depend on the residue composition, secondary structure, and other features of the specific pairs of compared profiles. As a remedy, Soding (13) suggested constructing individual distributions of random alignment scores for each query, based on the query's comparison to a calibration database. This database includes a single protein representative from each structural fold and thus should not contain more than one protein homologous to the query; therefore the produced set of scores should represent random comparisons of the query to unrelated profiles. The resulting score distribution is used to estimate the statistical significance of a score between the query and any given family.
Although this calibration adjusts statistical estimates to the individual properties of each query, it does not distinguish between the various families present in the database. These families also differ in their propensity to produce random high-scoring alignments with nonhomologs. We develop this approach further and consider individualized distributions on each side of the comparison, for both the query and the database profiles. The most straightforward way to construct a distribution of random scores for a database profile would be to perform a calibration on the same representative database as for the query. We find, however, that the quality of homology detection benefits from considering the composition of the specific database where an actual search is performed. A typical search would be aimed at 3D structure prediction and would therefore involve a database of protein families with known structures, for example, MSAs of sequence homologs for PDB, SCOP or CATH representatives. In such a database, we take advantage of knowing the actual relationships between database entries. For each database profile, we precompute the set of similarity scores to nonhomologs in the same database. We then use the means of these sets to further compensate for the different properties of database entries: each score for a given database profile A against another profile B is individually adjusted by subtracting the mean score of B (see 'Methods' section for details). The resulting distribution of adjusted scores for profile A is later used for the E-value estimation in the actual search.

Similarly, for every profile in the calibration database we precompute the mean score against all profiles in the searching database. When the query is compared to the calibration profiles, the corresponding means are subtracted from the similarity scores, producing the calibration distribution. Finally, when the actual search is performed, we combine the calibration distributions for the query and the database profiles and estimate the E-value using an approximation (23) of the combined distribution by the extreme value distribution (EVD) (24,25) (see 'Methods' section for details).

Quality of homology detection To assess PROCAIN's performance from different angles, we use a number of evaluation tests. These tests are based on a statistically balanced set of divergent protein domains from SCOP (26), whose relationships are defined by complementing SCOP annotation with a rigorous Support Vector Machine (SVM)-based algorithm (2) and combining a number of metrics for sequence and structure similarity. Our evaluation of detection quality includes complementary approaches to the definition of true/false positives: reference-dependent approaches use 'gold standard' domain relationships, whereas reference-independent approaches focus on the quality of the structural matches predicted by the sequence alignment (2).
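For concreteness, the decision rule behind these true/false-positive definitions can be sketched as follows. The thresholds (at least five correctly reproduced DALI matches, GDT_TS of at least 0.15) come from the evaluations described below; the function itself and its argument names are assumptions for illustration only.

```python
def is_true_positive(same_superfamily: bool, svm_significant: bool,
                     dali_matches: int, gdt_ts: float,
                     require_alignment_quality: bool = False) -> bool:
    """Classify a detected hit for the ROC evaluations.

    A hit counts as homologous if the domains share a SCOP superfamily or
    the SVM-based system calls the similarity significant. Optionally, the
    alignment must also pass a quality test, either reference-dependent
    (>= 5 matches with the DALI reference) or reference-independent
    (structure superposition with GDT_TS >= 0.15).
    """
    homologous = same_superfamily or svm_significant
    if not require_alignment_quality:
        return homologous
    return homologous and (dali_matches >= 5 or gdt_ts >= 0.15)
```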
Results of several evaluations are shown in Figure 2. Each plot includes ROC curves (27) for two versions of PROCAIN (with and without consideration of SS), PROCAIN's predecessor COMPASS (9) and the current state-of-the-art method employing SS prediction, HHsearch (versions with and without consideration of SS). In Figure 2a, true positives are defined as all domain pairs that share the same SCOP superfamily or have a significant similarity detected by our evaluation system using a Support Vector Machine (SVM) (2). Comparison of these plots leads to important conclusions. First, PROCAIN without SS significantly outperforms COMPASS, the method based only on residue similarity at the profile positions. Moreover, the performance of PROCAIN without SS is similar to that of HHsearch with SS (Figure 2a). This improvement is due to considering residue conservation and motif matches, as well as the new estimates of statistical significance. Second, introducing the comparison of SS in either PROCAIN or HHsearch further improves detection quality, especially in the area of remote homologs (the right part of the plot). Indeed, conservation of SS becomes more important for highly diverged proteins with low sequence similarity. Third, the performance of PROCAIN with SS is significantly higher than that of HHsearch, which is considered a current standard in the field (Figure 2a).

Although information about SS improves the discrimination between homologs and nonhomologs, it might potentially scramble the ranking of evolutionary distances between the detected homologs and the query. If overemphasized, SS similarity to a distant relative might bring this protein to the top of the list of detected homologs, above the query's immediate relatives. This effect would diminish the method's value for evolutionary analysis and prediction of structure and function. As a control for this effect, we evaluate the quality of detecting only the closest homology relations, by disregarding more remote homologs as false positives. Figure 2b shows ROC curves where true positive matches are defined as sharing the same SCOP superfamily, which generally corresponds to the similarity detected by PSI-BLAST. Notably, in this range of evolutionary distances PROCAIN and HHsearch have similar quality of homolog ranking, unaffected by the addition of SS information (Figure 2b).
For the purpose of structure and function prediction, a method should not only correctly rank the detected similarities but also provide meaningful sequence alignments. Figure 2c shows the quality of detecting all homologs, including remote ones, with an additional requirement for the accuracy of the produced alignments. Similarity to a homolog is considered a true positive only if the corresponding alignment has a certain level of quality, either reference-dependent (matching a 'gold-standard' structural alignment) or reference-independent (generating a reasonable structural superposition). In this experiment, alignments are required either to correctly reproduce ≥5 residue matches in the reference DALI (28) alignment, or to generate a structure superposition with a GDT_TS (29) score ≥0.15 (see 'Methods' section for details). According to these criteria, both versions of PROCAIN have a higher detection quality than the HHsearch version with SS, which indicates an improvement in alignment accuracy for the detected homologs (Figure 2c).

As a more direct evaluation of structure modeling, we use an approach conceptually similar to the one in the Critical Assessment of Techniques for Protein Structure Prediction (CASP) (30). We define true positives according to their value for structure prediction rather than to a fixed reference of protein relationships and alignments. In this reference-independent evaluation (Figure 2d), any detected protein superposition with GDT_TS ≥0.15 is considered a true positive; all others are false positives. Both versions of PROCAIN show a significantly higher reference-independent detection quality than the other methods (Figure 2d).

Homology detection by individual queries Evaluation based on all-to-all comparisons (Figure 2) might be biased if a subset of queries produces many highly significant hits that dominate the beginning of the ROC curve. To control for such a bias, we compare the performance of the methods query by query. For each query in our set, we consider the sorted list of hits and calculate sensitivity at a given level of selectivity (see 'Methods' section in Supplementary Data). For a pair of methods, sensitivity values for each query are compared using the paired t-test. Table 1 shows t-test P-values for sensitivity at 50% selectivity; data for other sensitivity levels are included in SI Tables S8 and S9. Consistent with the results of the all-to-all comparisons (Figure 2), at the level of individual queries PROCAIN performs significantly better than the other methods.

Homology detection in protein classes PROCAIN performs differently in different major protein classes. Results of the evaluation of homology detection quality within the main SCOP classes (all-α, all-β, α/β and α+β) can be found in Supplementary Figures S3-S6. PROCAIN performance in the α/β class is very similar to the overall performance, whereas the other three classes show significant differences. Similar, yet somewhat smaller, differences are observed for HHsearch (see Supplementary Figures S3-S6). We hypothesize that these differences may reflect the composition of the training set that is used to optimize the weights (w_c, w_ss and w_m) of the additional terms in the PROCAIN score. This set consists of domains randomly chosen from the total evaluation set, and therefore shows a similar distribution of representatives among the main classes. Like the protein world in general, this set is dominated by homologs from the α/β class (47.9%), whereas the all-α, all-β and α+β classes are less represented (17.6%, 9.6% and 8.9%, respectively).
The observed difference in performance suggests that adjusting the scoring parameters according to the query's class may be a plausible further direction to increase detection quality. For example, for all-α or all-β proteins, the improvement introduced by considering SS is smaller compared to the whole set (Supplementary Figures S3 and S4). Indeed, an SS prediction string that consists mainly of a single SS type bears less additional information for an aligner than a string with clearly delimited SS elements of different types. Therefore, in all-α and all-β proteins, using a lower relative weight for the SS score may put more emphasis on the direct amino acid similarity, which might be more important to detect.

Alignment quality Similar to the evaluation of homology detection, we use both reference-dependent and -independent criteria for the assessment of alignment quality. Figure 3 shows the quality of alignments produced by COMPASS, HHsearch and PROCAIN evaluated by three measures. Accuracy with respect to the reference alignment is defined as the fraction of correctly aligned positions among all aligned residue pairs. Coverage is the ratio of the alignment length to the overall length of the reference structural alignment. As a reference-independent measure, we use the GDT_TS (29) of the structural superposition guided by the alignment under evaluation.

PROCAIN generally produces much longer alignments, with coverage about 40% larger than COMPASS and almost 200% larger than HHsearch (Figure 3b). Manual inspection of alignments suggests that PROCAIN aligns the same relatively easy sequence segments as HHsearch or COMPASS, and additionally extends the alignment in both directions. These extended regions often have lower similarity and are harder to align. Lower accuracy in these regions reduces the overall alignment accuracy (Figure 3a). However, the less accurate alignments that include more divergent protein parts may better reflect structural and functional protein similarities. Such alignments may be especially beneficial in structure modeling, being more informative than clear-cut yet short alignments covering only a few SS elements. Accordingly, PROCAIN alignments are favored by the reference-independent evaluation based on structure superposition (Figure 3c).

Subtle homology relations detected by PROCAIN In our SCOP data set, PROCAIN confidently (E-value <0.01) detected 405 pairs of distant homology relationships between SCOP domains that belong to different superfamilies while being structurally similar. These relationships were missed by HHsearch (HHsearch probability <0.20). On the other hand, approximately 68% fewer
distant relationships (129 domain pairs) are detected by HHsearch (probability >0.91, which corresponds to a PROCAIN E-value of 0.01) and missed by PROCAIN (E-value >2.13, which corresponds to an HHsearch probability of 0.20). Full lists of these similarities are included in the Supplementary Data. The considerable number of remote homologs uniquely detected by either of the methods reflects conceptual differences between PROCAIN and HHsearch. Thus, as is often the case in sequence analysis, a user searching for distant protein similarities would benefit from combining both methods.

Figure 4 shows two examples of subtle homology relationships detected by PROCAIN. The nitrilase Nit domain of the NIT-FHIT fusion protein from Caenorhabditis elegans (PDB ID 1emsA, domain 2, Figure 4a) is similar to the mre11 nuclease from the archaeon Pyrococcus furiosus (PDB ID 1ii7A, Figure 4b), with a significant PROCAIN E-value of 9.9 × 10^-3. Mre11 is a central component of a protein complex responsible for homologous recombination, telomere length maintenance and DNA double-strand break repair in eukaryotes (31). The NIT-FHIT protein is involved in purine metabolism (32). In vertebrates, Nit and Fhit homologs are expressed as two separate interacting proteins. Fhit is a nucleotide-binding domain strongly associated with carcinogenesis and tumor suppression (32), whereas the substrate and cell biology of Nit are unknown. SCOP assigns mre11 and Nit to different superfamilies within the metallo-dependent phosphatase fold of the α+β class (carbon-nitrogen hydrolases and metallo-dependent phosphatases, respectively), noting that these superfamilies share 'some topological similarities' in structure but not establishing homology. The detected sequence similarity should have significant implications for the evolution and biology of both double-strand DNA repair and purine metabolism in eukaryotes.

As another example, PROCAIN predicts homology (with E-value = 3.0 × 10^-3) between two bacterial all-α proteins: the processive endocellulase CelF from Clostridium cellulolyticum (PDB ID 1g9gA, Figure 4c) and squalene-hopene cyclase from Alicyclobacillus acidocaldarius (PDB ID 2sqcA, domain 1, Figure 4d). These domains share a significant structure similarity (DALI Z-score = 16.7) yet belong to different SCOP superfamilies: six-hairpin glycosidases and terpenoid cyclases/protein prenyltransferases, respectively. CelF is a component of the cellulosome, a protein complex responsible for the degradation of cellulose and similar substrates outside the cell. Squalene-hopene cyclase is a membrane protein with the active site located in a large central cavity (33,34). The detected homology between these domains may suggest a similar functional role of the internal cavity in the enzymatic activity of CelF.

DISCUSSION Here we present a new method for sequence profile comparison that complements the 'vertical' context of MSA, i.e. substitution constraints at individual sequence positions, with 'horizontal' context, i.e. patterns of residue content at multiple positions. We find that the additional 'horizontal' information, in the form of similarity in predicted SS and local sequence motifs, significantly expands the range of detected remote protein relationships. Combining this information with the new approach to the estimation of statistical significance, PROCAIN provides a quality of homology detection beyond the capabilities of current state-of-the-art methods.
Contribution of SS prediction Similar to others (13,14), we find that considering SS prediction leads to significant improvement in both similarity detection (Figure 2) and alignment accuracy (Figure 3). As expected, this improvement is more pronounced for extremely distant homologs, where direct sequence signals are weak yet SS is conserved. SS prediction itself (20) involves the analysis of various types of information derived from sequence profiles: periodic patterns of hydrophobicity, residue propensities for occurrence in SS elements, specific sequence motifs, and so on. Thus, for the purposes of homology detection, similarity between SS predictions, regardless of their accuracy, may be considered a simple representation of 'horizontal' sequence patterns in the compared protein families. After testing different ways of including SS predictions in the profile comparison, we find that the best performance results from a simple addition of the weighted substitution score for SS types. The optimal weight value, w_ss = 0.1, appears to be similar to that used in HHsearch (13), suggesting that this might be a general optimal ratio for mixing residue and SS information.

Contribution of additional non-SS features Although the comparison of SS predictions is a major contributor to the increased quality of homology detection (Figure 2), it does not dominate the improvement as much as reported for HHsearch, a conceptually similar method based on the comparison of HMMs (13). Interestingly, the inclusion of simple profile features (positional conservation and the presence of ungapped segments in the profile alignment), as well as the new protocol of statistical estimation, results in a performance comparable to that of HHsearch with SS included (Figure 2a). HHsearch (13) is based on HMM-HMM comparison allowing for flexible gap penalties in alignment construction, and is considered among the best performing methods for homology detection. We find that a similar detection quality can be achieved by a simpler profile aligner with fixed gap penalties and no SS consideration (Figure 2). The addition of SS improves the quality of PROCAIN detection further, beyond the previously achievable levels (Figure 2). The simplicity of profile-profile comparison makes it more tractable for analyzing the contributions of different score terms and procedures, potentially providing an easier platform for finding directions of major improvement. However, an evaluation of the effects of the additional PROCAIN procedures on HMM comparison would be extremely interesting.

An important PROCAIN feature that differs from previously reported methods is the score that rewards clusters of positive matches in continuous motifs but does not penalize their absence. In such a cluster, each positional match receives additional score input from neighboring matches. This scheme boosts the importance of longer stretches of similar sequence positions, which are typical in homologs, and evens out the scores within a stretch, so that the signals from extremely conserved positional matches are further distributed over their closest neighbors.
E-value estimation based on symmetrized calibration A significant contribution to PROCAIN's performance comes from the new approach to the estimation of the statistical significance of detected similarities. In our symmetrized calibration scheme, background score distributions are derived for both the query and its database counterparts. When used as queries, different profiles are known to differ in the heaviness of the tail of the random score distribution: the same score value may be quite significant for one query and marginal for another. These differences are caused by variations in profile properties, some of which are easier to model separately (length, sequence diversity), whereas others are more difficult (residue composition, SS content, etc.). In the same fashion, profiles in the searching database have a different propensity to appear as highly scored matches when compared to an unrelated query. Thus, a random model of an individual comparison between a query and a database profile is more accurate if the background distributions for both query and subject are considered. Our scheme does not affect the computational speed of the search, since all distributions for the database profiles are pre-computed and analytically approximated in advance. Given the power of today's computational resources, building distributions based on comparisons of unrelated entries in the search database is feasible and may be beneficial for various other search applications.

Figure 1. Schema of PROCAIN procedures for the construction of sequence alignments (green) and the estimation of their statistical significance (orange). For the two compared multiple sequence alignments (MSAs), scores between individual positions are calculated by combining the standard measure for the similarity of residue content in the alignment columns (step 3a) with the motif (3b), conservation (3c) and secondary structure (3d) terms. The resulting scores for positional matches are used to construct the optimal local alignment by the Smith-Waterman algorithm. To estimate the statistical significance of the optimal alignment score, we perform comparisons to unrelated profiles for both the query and subject MSAs. The query is compared to the calibration database, whereas the subject is compared to unrelated profiles in the searching database. The combined distribution of the resulting random scores is approximated with the extreme value distribution (EVD) and used to calculate the E-value.

Figure 2. Quality of homology detection by PROCAIN compared to other methods. ROC plots are shown for PROCAIN and HHsearch, both with and without consideration of SS, and for PROCAIN's predecessor COMPASS. Light and dark green, PROCAIN without and with SS, respectively. Blue and purple, HHsearch without and with SS, respectively. Red, COMPASS. (a) True positives include all homologs as annotated by SCOP and predicted by a combination of similarity measures (see text for details). (b) True positives defined only as close homologs. (c) True positives defined as in (a), with an additional requirement for the level of alignment accuracy. (d) True positives defined in a reference-independent fashion, as alignments corresponding to meaningful structural superpositions (GDT_TS > 0.15).
Figure 4. Subtle homology relations detected by PROCAIN. (a, b) Similarity between a Nit domain (PDB ID 1emsA, domain 2) and mre11 nuclease (PDB ID 1ii7A). (c, d) Similarity between CelF endocellulase (PDB ID 1g9gA) and squalene-hopene cyclase (PDB ID 2sqcA, domain 1). Matched protein regions corresponding to blocks in PROCAIN alignments are shown in the same color, from blue to red. Unmatched regions are colored gray. Sequence alignments are colored according to predicted secondary structure, with α-helices and β-strands shown in red and cyan, respectively.

Figure 3. Quality of alignment between homologs. Color-coding is the same as in Figure 2: light and dark green, PROCAIN without and with SS, respectively; blue and purple, HHsearch without and with SS, respectively; red, COMPASS. Average parameters of alignment quality are shown for several bins of remote sequence identity: 0-5%, 5-10%, 10-15% and 15-20%. (a) Reference-dependent accuracy. (b) Coverage. (c) GDT_TS of alignment-guided structure superposition. See text for details.

Table 1. Paired tests for detection quality on individual queries. Methods are compared by sensitivity values at 50% selectivity, calculated separately for each query. The cell for each pair of methods contains P-values of the paired t-test for four criteria of true/false positive distinction, the same as used in Figure 2a-d (from top to bottom): reference-dependent, close homologs only, reference-dependent with alignment quality, and reference-independent. Plus and minus signs by the P-values denote, respectively, a positive and a negative difference between the method on the left and the method on the top.
2015-03-06T19:42:58.000Z
2009-04-07T00:00:00.000
{ "year": 2009, "sha1": "9b174aa424f4878f6ca5a5d2198aea27aad5a09e", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/nar/article-pdf/37/11/3522/16752239/gkp212.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "1e8891b490826ef6a0cd56fdf3c902947ba250e6", "s2fieldsofstudy": [ "Biology", "Computer Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
258232333
pes2o/s2orc
v3-fos-license
Differences in Cerebral Oxygenation in Cardiogenic and Respiratory Cardiac Arrest Before, During, and After Cardiopulmonary Resuscitation We compared the changes in cerebral oxygen saturation (ScO2) levels during cardiac arrest (CA) events using porcine models of ventricular fibrillation CA (VF-CA) and asphyxial CA (A-CA). Twenty female pigs were randomly divided into VF-CA and A-CA groups. We initiated cardiopulmonary resuscitation (CPR) 4 min after CA and measured the cerebral tissue oxygenation index (TOI) using near-infrared spectroscopy (NIRS) before, during, and after CPR. In both groups, the TOI was lowest at 3–4 min after pre-CPR phase initiation (VF-CA group: 3.4 min [2.8–3.9]; A-CA group: 3.2 min [2.9–4.6]; p = 0.386). The increase in TOI differed between the groups in the CPR phase (p < 0.001); it increased more rapidly in the VF-CA group (16.6 [5.5–32.6] vs. 1.1 [0.6–3.3] %/min; p < 0.001). Seven pigs in the VF-CA group survived for 60 min after the return of spontaneous circulation and recovered limb movement, whereas only one in the A-CA group did (p = 0.023). The increase in the TOI did not differ significantly between the groups in the post-CPR phase (p = 0.341). Therefore, it is better to monitor ScO2 concomitantly with CPR initiation using NIRS to assess the responsiveness to CPR in clinical settings.

Introduction Out-of-hospital cardiac arrest (CA) occurs in approximately 120,000 people in Japan, 350,000-700,000 in Europe, and 330,000 in the United States annually [1][2][3]. Cardiopulmonary resuscitation (CPR) is performed to achieve the return of spontaneous circulation (ROSC) and to improve neurological outcomes. However, only 2% of CA cases in Japan have favourable neurological outcomes [4]. Post-CA neurological damage is mediated by ischaemic injury caused by the interruption of blood flow during CA and by reperfusion injury [5,6]. To prevent ischaemic injury due to CA, it is important to assess the amount of oxygen delivered to the brain during CPR compared to that before CPR. Near-infrared spectroscopy (NIRS) facilitates real-time, non-invasive measurement of cerebral oxygen saturation (ScO2) during CPR, and NIRS monitoring can predict ROSC and neurological outcomes in patients with CA [7][8][9][10]. However, the use of measurement values obtained during CPR is controversial because few studies have initiated monitoring with NIRS concomitantly with CPR initiation. In most studies, the NIRS monitor was attached while CPR was already being performed [7][8][9][10]. The most prevalent causes of CA in clinical settings are cardiogenic events, followed by respiratory events [4]. Cardiogenic CA is caused by arrhythmia triggered by coronary disease, among other causes [11][12][13][14]. It occurs when the circulation suddenly stops while oxygenation and carbon dioxide levels in the blood are maintained. In contrast, respiratory CA is caused by respiratory failure due to airway obstruction, pulmonary diseases, and neuromuscular causes [15,16]. In the case of respiratory CA, severe hypoxia, an elevated partial pressure of carbon dioxide, severe mixed acidosis, and gradual deterioration occur, leading to CA (Figure 1) [17]. Post-CA syndrome presents a pathophysiology similar to that of acute respiratory distress syndrome. However, a previous report stated that respiratory CA with respiratory failure preceding CA does not differ from cardiogenic CA in the incidence of acute respiratory distress syndrome after resuscitation [18]. Therefore, the exact pathophysiological mechanisms remain unclear.
Previous studies have compared the ROSC rate, neurological outcomes, and biomarkers between cardiogenic and respiratory CA events [19,20]. However, to the best of our knowledge, no study has examined the differences in ScO2 trends between cardiogenic and respiratory CA events. Therefore, we aimed to compare changes in ScO2 levels during different events in animal models of ventricular fibrillation CA (VF-CA; a cardiogenic event) and asphyxial CA (A-CA; a respiratory event). We hypothesised that ScO2 values before CPR decrease similarly immediately after VF induction and asphyxia, when cerebral oxygen delivery is impaired, and that the ScO2 during CPR would be lower and less likely to rise for A-CA than for VF-CA, because A-CA takes longer to develop into CA and low ScO2 persists. We compared the changes in the ScO2 levels before, during, and after CPR between these events.

Materials and Methods All experimental procedures were approved by the Institutional Animal Research Committee of our University. The animals were cared for in conformance with the National Institutes of Health guidelines for the use and care of animals.

Animal Preparation Twenty female pigs were randomly categorised into VF-CA and A-CA groups (Table 1). They were intramuscularly administered 500 mg of ketamine as sedation/analgesia and subsequently restrained in the supine position. Intravenous lines were placed in the ear vein of each pig for continuous infusion of propofol (loading concentration, 1 mg/kg; maintenance concentration, 4 mg/kg/h) and vecuronium (loading concentration, 0.6 mg/kg; maintenance concentration, 0.4-0.6 mg/kg/h) [21,22]. Under local anaesthesia, using 1.0% lidocaine solution, the animals underwent tracheostomy; a 7-mm inner-diameter endotracheal tube (ETT) was placed in the trachea. The animals were ventilated using LTV-1000 ventilators (CareFusion, San Diego, CA, USA) under the following ventilator settings: tidal volume, 10 mL/kg; positive end-expiratory pressure, 6 cm H2O; inspired oxygen fraction, 0.21; and respiratory rate titrated to achieve an end-tidal carbon dioxide (EtCO2) value of 40-45 mmHg. A femoral artery catheter was surgically inserted to monitor arterial pressure and collect blood samples.
Additionally, a Swan-Ganz catheter (Model 744F8, Edwards Lifesciences, Irvine, CA, USA) was placed through the femoral vein to monitor the right atrial pressure. A 5F pacing catheter was placed in the opposite femoral vein. We confirmed the positions of the ETT, Swan-Ganz catheter, pacing catheter, and heart using a C-arm fluoroscopic X-ray system (DHF-105CX, Hitachi, Tokyo, Japan).

Cerebral Oxygenation The NIRO 200NX (Hamamatsu Photonics, Hamamatsu, Japan), a continuous NIRS monitor, was used to measure ScO2 (tissue oxygenation index; TOI) using a spatially resolved spectroscopy technique. The TOI was defined as the ratio of oxygenated haemoglobin to total haemoglobin, expressed as an absolute percentage. Normal TOI values are between 50% and 80% [23,24]. In humans, this device is applied to the supraorbital region; it samples an elliptical trajectory approximately 2 cm deep within the cerebral tissue to assess cerebral oxygenation [7]. In this study, two NIRO 200NX probes were attached to the intact left and right skin patches covering each cerebral hemisphere anterior to the coronal suture [25]. After completing the experiment, we dissected the brains and confirmed that the distance from the scalp to the brain did not exceed 1.5 cm; therefore, the device could measure cerebral oxygenation in our animal model, as it had a penetration depth of 3 cm. Because of the limited number of probes, monitoring of brain tissue oxygenation (PtO2) using an intracerebral oxygenation probe (Licox, Integra Life Sciences, Plainsboro, NJ, USA) was possible for only four and three pigs in the VF-CA and A-CA groups, respectively. A 2-mm hole was drilled between the left and right NIRO 200NX probes, a 22-gauge cannula was inserted until cerebrospinal fluid outflow was confirmed, and a Licox probe was fixed such that it protruded 2 mm from the cannula tip.

Data Collection Vital signs, including heart rate, heart rhythm, arterial blood pressure, right atrial pressure, respiratory rate, percutaneous oxygen saturation (SpO2), and EtCO2, were monitored using the Philips IntelliVue MP50 patient monitor (Philips Medizin Systeme, Boblingen, Germany). Arterial blood gas (ABG) was measured using the EPOC (Alere, Waltham, MA, USA) or ABL 735 (Radiometer Copenhagen, Copenhagen, Denmark) blood gas analyser. TOI data were obtained from the left and right probes.

Study Design After calibrating and synchronising all monitoring equipment, Ringer's solution was rapidly infused, and the animals were stabilised for 30 min to achieve steady vital signs. Subsequently, the baseline TOI, vital signs, and ABGs were recorded. After 10 min, the pre-CPR phase was started. In the VF-CA group, VF was induced using a pacing catheter and an electrical stimulator (GY600A; Kaifeng Huanan Equipment Co., Ltd., Kaifeng, China), and ventilator support was disengaged. In the VF-CA group, the time of VF induction was set as the CA time. In the A-CA group, asphyxia was induced by clamping the ETT, followed by withdrawal of ventilator support. The CA time was defined as the time when the aortic systolic pressure dropped below 30 mmHg [19]. We initiated CPR at 4 min after the onset of CA (Figure 2). During CA, the continuous infusion of propofol and vecuronium was simultaneously stopped in both groups.
Figure 2. Experimental protocol. Arrows indicate the time points at which arterial blood gas was recorded (at baseline, −1 min, and 3 min after CPR initiation and every 4 min thereafter; immediately after ROSC and every 20 min subsequently). VF-CA, ventricular fibrillation cardiac arrest; VF, ventricular fibrillation; CPR, cardiopulmonary resuscitation; ROSC, return of spontaneous circulation; A-CA, asphyxial cardiac arrest; ETT, endotracheal tube; sABP, systolic arterial blood pressure.

In the CPR phase, we standardised the quality of chest compressions using the LUCAS 2 device (LUCAS, Jolife, Lund, Sweden). Compressions were localised over the retrosternal position of the heart, which was previously confirmed using fluoroscopy. Simultaneously with CPR initiation, ventilator support was restarted at the following ventilatory settings: tidal volume, 10 mL/kg; positive end-expiratory pressure, 6 cm H2O; inspired oxygen fraction, 1.0; and respiratory rate, 10 breaths/min. We monitored the pulse and analysed the heart rhythm every 2 min. During the first 2 min of CPR, only basic life support was provided, comprising chest compressions and ventilation delivered asynchronously. Subsequently, if VF persisted, we repeated defibrillation (monophasic, 360 J; Nihon Kohden, Tokyo, Japan) at 2-min intervals and administered adrenaline (1 mg) every 4 min after the first 2 min of defibrillation.
Similarly, if pulseless electrical activity or asystole persisted, adrenaline (1 mg) was administered every 4 min after the first 2 min of CPR as advanced cardiopulmonary life support. We did not use antiarrhythmic drugs, as described previously [19]. We stopped CPR in animals that did not achieve ROSC within 30 min of CPR initiation. Animals that achieved ROSC within 30 min continued to receive a rapid infusion of Ringer's solution (post-CPR phase; Figure 2) and were ventilated at the following settings: tidal volume, 10 mL/kg; positive end-expiratory pressure, 6 cm H2O; inspired oxygen fraction adjusted to maintain an SpO2 of 93-98%; and respiratory rate, 10 breaths/min. Surviving animals received no additional medication, including dopamine, phenylephrine, or lidocaine. Independent limb flexion and extension during the post-CPR phase were considered signs of recovery of movement function; continuous infusion of propofol and vecuronium was restarted in these animals. Animals that survived for 60 min after ROSC were euthanised with potassium chloride. We recorded the median ABP and EtCO2 values continuously from the pre- to the post-CPR phase, and ABG data were recorded in each phase (Figure 2).

Statistical Analyses All data are presented as medians (interquartile ranges [IQR]). TOI values, times, vital signs, and ABG values were analysed non-parametrically using the Mann-Whitney U test. Yates' chi-square test was used for intergroup comparisons of the ROSC rate and movement recovery. Because the TOI was measured repeatedly over time, we used a linear mixed model to account for the longitudinal nature of the data [26,27]. The correlation between the repeated measures recorded for each subject was modelled using a random intercept and slope. In the case of significant differences, the TOI values obtained every 1 min in the pre-CPR and CPR phases and every 2 min in the post-CPR phase were compared using the Mann-Whitney U test. Correlation coefficients of the associations between the TOI and PtO2, SaO2 (arterial oxygen saturation), and the partial pressure of oxygen at the time of ABG recording were determined. The Friedman test (with Bonferroni's multiple comparisons, if significant) was used to identify the strongest correlation. All tests were two-sided with a significance level of 0.05. All statistical analyses were conducted using EZR ver. 1.52 (Saitama Medical Centre, Jichi Medical University, Saitama, Japan) [28], a graphical user interface for R software (The R Foundation for Statistical Computing, Vienna, Austria).

Baseline None of the animals experienced CA during preparation. The vital signs and ABG values did not differ significantly between the VF-CA and A-CA groups (Table 1). We used 280 (200-320) and 155 (125-205) mg of propofol (p = 0.071); 11.0 (8.0-14.0) and 4.0 (3.4-5.5) mg of vecuronium (p = 0.009); and 400 (250-500) and 300 (238-300) mL of Ringer's solution (p = 0.352) in the VF-CA and A-CA groups, respectively. Propofol and vecuronium doses were higher in the VF-CA group because more animals in this group needed drugs for VF induction before or after ROSC.

Pre-CPR Phase In the A-CA group, the interval between clamping and CA onset was 9.9 (8.5-11.3) min. The TOI reached its minimum value at 3-4 min after initiation of the pre-CPR phase in both groups (VF-CA group, 3.4 [2.8-3.9] min; A-CA group, 3.2 [2.9-4.6] min; p = 0.386; Table 2).
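As a side note on the linear mixed model from the Statistical Analyses section, a minimal sketch of such a fit is shown below. The column names (toi, minute, group, pig_id) and the use of Python's statsmodels in place of the EZR/R analysis actually used are assumptions for illustration only.

```python
import pandas as pd
import statsmodels.formula.api as smf


def fit_toi_mixed_model(df: pd.DataFrame):
    """Fit TOI ~ time x group with a random intercept and slope per pig."""
    model = smf.mixedlm(
        "toi ~ minute * group",   # fixed effects: time, group, and interaction
        data=df,
        groups=df["pig_id"],      # repeated measures clustered within animals
        re_formula="~minute",     # random intercept and random slope on time
    )
    return model.fit()
```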
The TOI decrease within the 4-min interval after VF induction and clamping did not differ significantly between the groups (p = 0.188; Figure 3a). The interval between the lowest TOI value and CPR initiation was longer in the A-CA group than in the VF-CA group (Table 2). Table 2 note: data are presented as medians (interquartile ranges); * values for the A-CA group were significantly different from those for the VF-CA group in each phase (p < 0.05). CPR, cardiopulmonary resuscitation; TOI, tissue oxygenation index; VF, ventricular fibrillation; ROSC, return of spontaneous circulation; mABP, mean arterial blood pressure; EtCO2, end-tidal CO2.

CPR Phase In total, nine (90%) and six (60%) subjects achieved ROSC in the VF-CA and A-CA groups, respectively (p = 0.303; Table 3). CPR duration did not differ significantly between the two groups (VF-CA group, 10.5 [5.3-12.8] min; A-CA group, 19.5 [11.3-29.8] min; p = 0.085). The initial and median values of the mean arterial blood pressure (mABP) were significantly higher in the VF-CA group than in the A-CA group (Table 2). The increase in the TOI was significantly different between the groups (p < 0.001). In both groups, the TOI value increased rapidly up to a certain value and increased gradually thereafter (Figure 3b). The slope over every 5-s interval up to the maximum TOI value was measured, and the peak of the slope was defined as the singular point. The TOI velocity from CPR initiation to the singular point was higher in the VF-CA group than in the A-CA group (16.6 [5.5-32.6] vs. 1.1 [0.6-3.3] %/min; p < 0.001). The maximum and median TOI values were higher in the VF-CA group than in the A-CA group (Table 2). The initial TOI values did not differ significantly between the two groups at CPR initiation. In contrast, at 1-6 min after CPR initiation, the TOI values were higher in the VF-CA group than in the A-CA group (Figure 3b).

Figure 3. (a) Pre-CPR phase: the TOI decreased comparably in both groups (p = 0.188). (b) CPR phase: the TOI values increased differently in the two groups (p < 0.001); between 1 and 6 min after CPR initiation, the TOI was higher in the VF-CA group than in the A-CA group. Values are presented as medians (interquartile ranges). VF-CA, ventricular fibrillation cardiac arrest; A-CA, asphyxial cardiac arrest; CPR, cardiopulmonary resuscitation.

Table 3. Outcomes of subjects in the ventricular fibrillation cardiac arrest (VF-CA) and asphyxial cardiac arrest (A-CA) groups; columns: VF-CA (n = 10), A-CA (n = 10), p-value.
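The singular-point computation described for the CPR phase above can be sketched as follows. The fixed 5-s sampling and the function layout are assumptions for illustration, not the study's analysis code.

```python
import numpy as np


def toi_singular_point(toi: np.ndarray, dt_s: float = 5.0):
    """Return (time_of_peak_slope_s, peak_slope_percent_per_min).

    Slopes are taken over every 5-s interval up to the maximum TOI value;
    the peak of these slopes is the 'singular point'.
    """
    i_max = int(np.argmax(toi))                # rise is measured up to the maximum TOI
    if i_max == 0:
        return 0.0, 0.0                        # no rise before the maximum
    slopes = np.diff(toi[: i_max + 1]) / dt_s  # %/s over each 5-s interval
    k = int(np.argmax(slopes))
    t_peak = (k + 1) * dt_s                    # time at the end of the steepest interval
    return t_peak, float(slopes[k]) * 60.0     # convert %/s to %/min
```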
Post-CPR Phase

In total, seven (70%) and three (30%) animals survived for 60 min after ROSC in the VF-CA and A-CA groups, respectively (p = 0.180; Table 3). All seven animals in the VF-CA group recovered movement, whereas only one pig in the A-CA group showed movement recovery (p = 0.023; Table 3). The increase in TOI did not significantly differ between the two groups (p = 0.341); the TOI increased slowly and then plateaued in both groups (Figure 4). The interval from ROSC to the maximum TOI did not differ significantly between the two groups (VF-CA, 16

Correlation between the TOI and Cerebral Oxygenation

Correlation coefficients of the association of the TOI with PtO2, SaO2, and PaO2 (partial pressure of arterial oxygen) were 0.

Discussion

Here, the ScO2 values reached a trough at approximately 4 min after VF induction and clamping in the VF-CA and A-CA groups, respectively. The ScO2 values in the A-CA group remained low from 4 min after clamping until CPR initiation. Furthermore, after CPR initiation, the ScO2 values increased more rapidly in the VF-CA group than in the A-CA group. During a CA event, oxygen levels decline in the brain, followed by membrane pump arrest within 3-5 min of complete ischaemic anoxia [6,29] and the initiation of hypoxia-induced irreversible changes. The PtO2 and ScO2 reach their lowest values at 4 min after VF induction [25,30]. The ScO2 reflects changes in both the delivery and consumption of oxygen in the brain [10]. Brain oxygen saturation is correlated with the partial pressure of oxygen in the brain tissue [25,31]. In the VF-CA group, the ScO2 values decreased because circulation abruptly stopped and oxygen could not be delivered to the brain. Conversely, although the ScO2 reached its lowest value over a similar time course after clamping in the A-CA group, ScO2 values persisted at low levels because oxygen could not be delivered to the brain despite continued circulation. Therefore, the ScO2 decreased when oxygen could not be delivered to the brain and was minimised at 4 min after anoxia started developing in the brain in both the VF-CA and A-CA groups. In the pre-CPR phase (1 min before CPR), the ABG results of the A-CA group were worse than those of the VF-CA group. In the VF-CA group, ABG was measured at 3 min after circulation abruptly stopped while oxygenation was maintained; therefore, the PaO2 and partial pressure of arterial carbon dioxide were maintained, and the lactate levels did not increase. In contrast, in the A-CA group, ABG was measured at approximately 13 min after clamping (oxygen delivery stopped abruptly and carbon dioxide could not be discharged; however, circulation was maintained for approximately 10 min). Therefore, lactic acid was produced under anaerobic conditions, and acidosis worsened [32]. In previous reports, the ROSC rate was 90-100% and 60% in the VF-CA and A-CA groups, respectively, when CPR was initiated at 8 min after CA onset [19,20]. Moreover, the mABP after ROSC was lower in the A-CA group, owing to hypoxia-induced cardiac dysfunction [20]. Here, although we initiated CPR at 4 min after the onset of CA, the ROSC rates observed were consistent with those reported previously [19,20]. The mABP values during CPR and after ROSC were lower in the A-CA group than in the VF-CA group.
In the VF-CA group, the median ScO2 value was >50% during CPR, and the maximum ScO2 value was achieved approximately 3 min after CPR initiation. Conversely, in the A-CA group, the median ScO2 value remained <50% during CPR, and the maximum ScO2 value was achieved at 6.5 min after CPR initiation. Furthermore, the ScO2 values were significantly higher in the VF-CA group than in the A-CA group at 1-6 min after CPR initiation because the VF-CA group had a short duration of anoxia with low ScO2 values, a consistent mABP of >65 mmHg following CPR initiation, and immediate oxygen delivery to the brain, resulting in a rapid increase in the ScO2 values with good responsiveness to CPR. In contrast, the A-CA group had long-duration anoxia with low ScO2 values and a low mABP at CPR initiation, which hampered immediate oxygen delivery to the brain and resulted in a gradual increase of ScO2 with poor responsiveness to CPR. The degrees of hypoxia and ischaemia increase considerably more after asphyxia than after VF [33]. NIRS monitoring facilitates the understanding of the degree of hypoxia before and during CPR. However, in many clinical settings, the NIRS monitor is attached during CPR; thus, it remains unclear whether the initial ScO2 value obtained immediately after attaching the monitor or the maximum, minimum, or average values during CPR should be used as indicators of ROSC and neurological outcomes [34]. Moreover, congenital heart disease, which is a cause of cardiogenic CA, may cause low ScO2, depending on the heart or vascular anomaly; therefore, a low ScO2 value may not be associated with poor outcomes in some diseases [13]. This study showed that the timepoint of the initiation of NIRS monitoring might obscure the significance of the differences in the ScO2 values between VF-CA and A-CA events. During NIRS monitoring in a pre-hospital setting, the values obtained after NIRS monitor attachment at CPR initiation revealed an initial ScO2 value of approximately 30% in both the ROSC and non-ROSC groups [26]. In our study, both groups displayed a minimum ScO2 value of approximately 40% at CPR initiation. Measuring the ScO2 on the NIRS monitor from CPR initiation is better for assessing responsiveness and predicting ROSC, neurological outcomes, and the cause of CA. Nevertheless, further studies are needed to determine whether NIRS data can predict ROSC, neurological outcomes, and the cause of CA.

Limitations

This study had some limitations. First, our observations were restricted to a 60-min duration after ROSC. The lack of adequate vital signs after ROSC indicated the possibility of organ damage before ROSC, which may have affected the neurological outcomes. Furthermore, limb movement within 1 h alone is insufficient to assess neurologic function. However, limb movement is the criterion for stopping CPR in basic life support, and the presence of limb movement is important for good neurological outcomes. Second, one pig in the VF-CA group did not achieve ROSC despite 10 defibrillations. If defibrillation is not successful early, the duration of CPR is prolonged, the mABP is lowered, oxygen delivery is worsened, and the ScO2 decreases. Combined with cardiac dysfunction, these factors make it impossible to achieve ROSC. Conversely, successful defibrillation in the presence of consistently high ScO2 values may be an important factor during CPR. Only one animal in the A-CA group recovered independent movement.
At approximately 7 min after clamping, this animal experienced VF and CA (the shortest time to CA in the A-CA group). A short duration in the low-ScO2 condition may thus lead to favourable neurological outcomes. Third, the small sample size did not allow parametric statistical analysis. Fourth, the lower mABP could be due to asphyxia injury during the longer pre-CPR phase in the A-CA group; this could lower cerebral perfusion pressure and reduce oxygen delivery. Fifth, the effect of sedatives and analgesics on the ROSC rate and movement recovery cannot be ruled out. However, the VF-CA group had better ROSC and movement recovery rates than the A-CA group, even though it took longer to induce VF in that group and more medications were required. Finally, we started CPR at 4 min after the onset of CA, which is when hypoxia-induced irreversible changes are initiated. A shorter interval after the onset of CA would have resulted in more pronounced differences between the groups, because the ScO2 would have increased immediately, with a smaller decrease, in the VF-CA group. Conversely, the ScO2 changes in subjects with long durations of CA (even cardiogenic) remain unknown. Therefore, further studies are needed to confirm the findings of this study.

Conclusions

The ScO2 values reached a trough at approximately 4 min after VF induction and clamping in the VF-CA and A-CA groups, respectively. Furthermore, the ScO2 value increased more rapidly after CPR initiation in the VF-CA group than in the A-CA group. Importantly, it is better to initiate ScO2 monitoring using NIRS concomitantly with CPR initiation to assess the responsiveness to CPR.

Data Availability Statement: The data that support the findings of this study are available from the corresponding author, Y.K., upon reasonable request.
The Impact of Implantation Time During Liver Transplantation on Outcome: A Eurotransplant Cohort Study

Supplemental digital content is available in the text.

Ischemia-reperfusion injury is a major threat to the liver transplant. Prolonged cold ischemia time impairs graft function and survival. 1,2 Additionally, donor warm ischemia before organ retrieval in donation after circulatory determination of death (DCD) impacts organ viability and increases the risk of ischemic biliary type strictures. 3,4 However, as soon as the graft leaves the ice for implantation, the liver starts rewarming rapidly. 5 This period of warm ischemia, called the anastomosis time or implantation time, may cause additional harm to the graft, resulting in decreased graft and patient survival. In liver transplantation, implantation time has not been studied in detail. Although it has been incorporated as a dichotomous factor in some outcome analyses, where it was found to be a risk factor for patient death, 6,7 a systematic literature search could not identify a study that specifically investigated the effect of implantation time on outcome (SDC, Materials and Methods, http://links.lww.com/TXD/A94; SDC 12 - References - http://links.lww.com/TXD/A95; and Table S1, SDC, http://links.lww.com/TXD/A84). It is unclear from the current literature how big the impact of implantation time at the patient level might be, whether it affects all types of liver grafts, or whether the effect of implantation time is constant over time. We aimed to further define the relation between implantation time and outcome using the Eurotransplant registry, the deceased donor organ allocation organization of 8 European countries.

Study Population

Eurotransplant is an international nonprofit organization that manages patient-oriented allocation and cross-border exchange of deceased donor organs to achieve the best possible match between available donor organs and patients on the transplant waiting list in 8 countries: Austria, Belgium, Croatia, Germany, Hungary, Luxembourg, the Netherlands, and Slovenia. The Eurotransplant registry prospectively records data for all liver transplants performed in 38 liver transplant centers in its region. Data are collected on a voluntary basis to develop best practice recommendations and policies to improve organ allocation and transplant outcomes. 8 We analyzed data submitted to this registry from all recipients of solitary liver transplants from deceased donors undertaken between January 1, 2004, and December 31, 2013. This study was approved by the Eurotransplant Liver and Intestinal Advisory Committee and the Organ Procurement Committee. Implantation time was defined as the time between the graft leaving the ice and restoration of blood flow to the liver in the recipient. Donor warm ischemia time in DCD livers was defined as the time between circulatory arrest in the donor and cold flush of the liver. Cold ischemia time was defined as the time between the start of the cold flush in the donor and the start of graft implantation in the recipient, when the liver leaves the ice and is placed inside the recipient's body to start the first vascular anastomosis. Transplant failure refers to all-cause graft failure and was taken as the time from transplantation to graft failure or death of the patient. Graft failure was defined as relisting for liver transplantation or death of the patient due to liver failure and was therefore censored for death with a functioning graft.
Survival of the patient was defined as the time from transplantation until death. We calculated the Donor Risk Index (DRI) for all transplants as a measure of graft quality. 9 Because the Eurotransplant registry entails no data on donor ethnicity, we considered all donors to be non-African American. As sharing schemes are different in Eurotransplant compared with the United States of America, 8 we did not take the parameters on regional or national sharing into account.

Statistical Analysis

Follow-up analysis of the study population included all data submitted to Eurotransplant by May 3, 2016. Only recipients for whom both implantation time and outcome data were available were included in the study. Continuous variables are presented as median (interquartile range), categorical variables as number (%). Multivariable Cox regression models were used to evaluate the relation of implantation time with transplant, graft, and patient survival. Variables were included in the multivariable models if they were shown to affect transplant outcome in the scientific literature and were available in the Eurotransplant Registry. In addition, possible confounders that might affect the association between implantation time and outcome were considered (Table 1). A multivariate imputation was performed for variables with missing data (SDC, Materials and Methods, http://links.lww.com/TXD/A94). Once the set of confounders was determined based on backward stepwise selection with multiple imputation, the model was extended with implantation time. Furthermore, a random effect for center was added to model the correlation between the survival times of patients within the same center, as the recipient center can have an impact on outcome. 10 As simultaneous correction for 2 random effects in Cox regressions was not feasible in this data set, separate analyses were performed including donor and recipient center as a random effect separately. These analyses indicated that the impact of recipient center on outcome was more important than the effect of donor center (data not shown); therefore, results from the analyses correcting for recipient center are reported. By including a random effect for recipient center, the interpretation of the effect of implantation time refers to differences in risk between patients within the same center having a different duration of implantation time. 11 Centers were anonymized in the analyses. When the Cox model was tested to ascertain whether the effect of implantation time was constant over time, we found this was not the case (data not shown). 12 To handle the nonproportional hazards in the multivariable model, implantation time was used as a time-varying variable, allowing it to have a different effect in the following periods: <3 months, 3 to 6 months, 6 to 12 months, and >12 months (SDC, Materials and Methods, http://links.lww.com/TXD/A94). Restricted cubic splines were used to allow nonlinearity in the relation between implantation time and the log-hazard. 13 The effect of implantation time on early outcome was visualized based on a multivariable Cox regression model restricting the follow-up time to 3 months and centering the implantation time on the mean of the center to mimic the correction for the random center effect. These figures give the mean survival function for varying values of anastomosis time, adjusted for all other covariates in the Cox model. 14 We performed interaction analyses to determine whether implantation time had more effect on survival in recipients of DCD livers than in recipients of donation after brain death (DBD) livers, in livers from donors with a higher DRI, and in livers with longer cold ischemia times, and whether the effect of implantation time was modified by the type of graft (whole graft versus split graft). All reported results involving variables with missing values were based on multiple imputations. P values less than 0.050 were considered significant. All analyses were performed using SAS software (v 9.4 for Windows). The STROBE guidelines were followed in reporting this study.
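The modelling strategy above, in which implantation time is allowed a different hazard ratio in each follow-up period, can be made concrete with a short sketch. The study itself used SAS 9.4; the sketch below uses Python's lifelines package, omits the random center effect and the restricted cubic splines for brevity, and assumes a hypothetical registry extract with the column names shown.

```python
# Hedged sketch of the time-varying-effect Cox model, not the authors' code.
# Assumed columns: id, followup_months, event (0/1),
# impl_time_10min (implantation time per 10 min), cold_ischemia_h.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

CUTS = [0.0, 3.0, 6.0, 12.0, float("inf")]  # period boundaries in months

def split_episodes(row):
    """Split one transplant's follow-up into period-specific episodes so that
    implantation time can carry a different hazard ratio in each period."""
    episodes = []
    for k in range(len(CUTS) - 1):
        start, stop = CUTS[k], min(CUTS[k + 1], row["followup_months"])
        if stop <= start:
            break
        ep = {"id": row["id"], "start": start, "stop": stop,
              # the event only counts in the episode where follow-up ends
              "event": int(row["event"]) if stop == row["followup_months"] else 0,
              "cold_ischemia_h": row["cold_ischemia_h"]}
        for j in range(len(CUTS) - 1):  # one implantation-time term per period
            ep[f"impl_time_p{j}"] = row["impl_time_10min"] if j == k else 0.0
        episodes.append(ep)
    return episodes

df = pd.read_csv("transplants.csv")  # hypothetical registry extract
long = pd.DataFrame([ep for _, row in df.iterrows()
                     for ep in split_episodes(row)])

ctv = CoxTimeVaryingFitter()
ctv.fit(long, id_col="id", event_col="event",
        start_col="start", stop_col="stop")
ctv.print_summary()  # exp(coef) of impl_time_p0 = HR per 10 min in months 0-3
```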
Characteristics of the Study Population

Fifteen thousand one hundred thirty-six deceased-donor liver transplants were performed in the Eurotransplant region between January 1, 2004, and December 31, 2013. Data on implantation time were available in 5461 cases, from which we excluded 80 transplants in which implantation times were reported to be extremely short (<10 minutes) or long (>200 minutes), as well as 118 cases because of missing outcome data. Transplant characteristics were comparable between the 5223 transplants included and the 9913 transplants excluded from this study (Table S2, SDC, http://links.lww.com/TXD/A85). The variability in reporting rates of implantation times is shown in Table S3 (SDC, http://links.lww.com/TXD/A86); the correlation between average implantation time and reporting rates was weak (rho = 0.06). All patients without an event had a minimum follow-up of at least 1 year. Median follow-up after transplantation was 4.5 years (2.4-6.8 years). Table 1 shows the donor and recipient characteristics at the time of transplantation. Median cold ischemia time was 9.1 hours (7.4-11.0 hours), and median implantation time was 41 minutes (34-51 minutes) (Figure 1).

Implantation Time Independently Impairs Outcome

Implantation time was independently associated with an increased overall transplant failure rate for all deceased-donor livers (adjusted hazard ratio [HR], 1.04; 95% confidence interval [CI], 1.01-1.07; P = 0.007) (Table 2 and Table S4, SDC, http://links.lww.com/TXD/A87). The magnitude of the effect of every 10-minute increase in implantation time was comparable to the effect of each hour of additional cold ischemia time (adjusted HR, 1.03; 95% CI, 1.02-1.05; P < 0.001). Implantation time was also an independent risk factor for graft loss (adjusted HR, 1.04; 95% CI, 1.01-1.09; P = 0.03) and patient death (adjusted HR, 1.03; 95% CI, 1.00-1.06; P = 0.048) (Tables S5-S6, SDC, http://links.lww.com/TXD/A88, http://links.lww.com/TXD/A89). Donor and recipient age, year of transplant, and indication for transplantation were also independent risk factors for worse outcome. Body mass index of neither donor nor recipient was associated with outcome in the multivariable models. Type of preservation fluid had no independent effect on outcome, nor did the arterial anatomy as reported by the donor surgeon. Information on arterial reconstruction at the time of transplantation is not available in the Eurotransplant Registry. Because there is variability in the reporting rates of implantation time by centers (Table S3, SDC, http://links.lww.com/TXD/A86), we repeated the analysis of implantation time on transplant survival including only centers with reporting rates of 50% or greater. Implantation time remained an independent risk factor for transplant loss, with an HR of 1.05 (1.01-1.09; P = 0.026).
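To make the magnitude of these estimates concrete, here is a short worked calculation (our illustration, not an analysis from the registry). Under the proportional-hazards model, per-unit hazard ratios multiply, so

$$\mathrm{HR}_{+30\,\mathrm{min\ implantation}} = 1.04^{3} \approx 1.12,\qquad \mathrm{HR}_{+4\,\mathrm{h\ cold\ ischemia}} = 1.03^{4} \approx 1.13,$$

that is, half an hour of additional implantation time carries roughly the same adjusted risk as about four extra hours of cold ischemia under these point estimates.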
An exploratory analysis (Table S7, SDC, http://links.lww.com/TXD/A90) suggests that donor and recipient body mass index, recipient sex, abnormal arterial anatomy as reported by the donor surgeon, the indication for transplantation, and the transplant center volume seem associated with the duration of implantation time.

The Detrimental Effect of Implantation Time Impacts on Early Outcome

We next investigated whether the effect of implantation time is constant over time or whether the strength of the effect weakens as time evolves. Indeed, univariable models showed evidence of nonproportional hazards (data not shown). Therefore, assuming a similar magnitude of the detrimental effect of implantation time over time would be overly simplistic. When implantation time was used as a time-varying variable, distinguishing between outcome periods of less than 3 months, 3 to 6 months, 6 to 12 months, and more than 12 months, the detrimental effect of implantation time was stronger early after transplantation (HR, 1.08; 95% CI, 1.05-1.12; P < 0.001). Beyond 3 months, there was no longer evidence for an effect of implantation time on transplant outcome (Table 2 and Table S8, SDC, http://links.lww.com/TXD/A91). This short-term effect of implantation time on transplant outcome is visualized in Figure 2, illustrating the impact at the patient level. In this data set, the average probability of transplant loss at 3 months for an implantation time of 30 minutes is 14.5%, whereas implantation times of 60 and 90 minutes resulted in probabilities of graft loss of 18.4% and 23.2%, respectively. Figure 3 shows the expected survival function during the first 3 months for selected values of implantation time and visualizes the early effect of implantation time on transplant loss. In an additional exploratory analysis, we observed a stronger effect of implantation time in its higher range, pointing toward a clinically more important effect, but this differential effect was not significant (Figure 4). Table S9 (SDC, http://links.lww.com/TXD/A92) shows the demographics of DCD versus DBD transplants. The DCD donors were younger, had a lower body mass index, and died more frequently after trauma or anoxia compared with DBD donors. Donor sodium levels were also lower in DCD. Histidine-tryptophan-ketoglutarate was more often used as the preservation solution. Recipients were younger and had lower laboratory MELDs. DCD livers were rarely used for recipients with acute liver failure or in need of a retransplantation. DCD donation was an independent predictor of transplant loss (adjusted HR, 1.54; 95% CI, 1.24-1.89; P < 0.001) and graft failure (adjusted HR, 2.13; 95% CI, 1.60-2.83; P < 0.001) but not of patient survival. When donor warm ischemia time was added as a predictor to the multivariable models for graft failure and transplant loss, DCD status was no longer a significant risk factor (Table 3). This shows that the increased risk of loss for DCD grafts is entirely attributable to the additional donor warm ischemia time.

Higher-risk Organs Do Not Seem More Susceptible to the Detrimental Effect of Implantation Time

We next assessed whether the detrimental effect of implantation time was more pronounced in DCD compared with DBD livers. This was not the case: there was no interaction effect between implantation time and donor type for transplant survival (Table S10, SDC, http://links.lww.com/TXD/A93).
We next evaluated whether prolonged cold ischemia time, graft quality (assessed by DRI), or type of graft (whole vs split) might affect the susceptibility of the graft to increased implantation time. Interaction effects between implantation time and cold ischemia time, DRI, and type of graft were investigated separately in the multivariate model. Although cold ischemia time, DRI, and type of graft were independently associated with transplant survival, the unfavourable effect of prolonged implantation time on graft survival was not influenced by any of them in any of the multivariable models (data not shown).

DISCUSSION

This analysis of 5223 deceased donor liver transplants captured in the Eurotransplant registry shows that implantation time is an independent risk factor for transplant loss, graft loss, and patient death. Every 10-minute increase in implantation time had a detrimental effect on outcome similar to that of every hour of increase in cold ischemia time. We could also show that the effect of implantation time on outcome is time dependent and that the effect is clearest in the first 3 months posttransplant. Although the association between implantation time and outcome might seem evident, so far there has been limited interest in exploring the effect of implantation time, during which the graft is rapidly rewarming. We studied the clinical importance of the effect of implantation time further. In this cohort, after correction for all other covariables in the Cox model, the probability of suffering transplant loss within the first 3 months after transplantation increased above 20% for implantation times above 70 minutes. This time correlates well with a previous study performed by Rana et al. 6 While devising the "survival outcomes after liver transplantation score," these authors describe that an implantation time above 70 minutes was an independent risk factor for patient death at 3 months posttransplant. Implantation times above 70 minutes are infrequent, but the detrimental effect of implantation time is continuous. Even with shorter implantation times, the risk of transplant loss is increased. The clinical relevance of that effect should be placed into context. By correcting the analyses for recipient center, the interpretation of the effect of implantation time refers to differences in risk between patients within the same center having a different duration of implantation. Exploratory analyses show that when implantation time stays within a 10-minute interval from the average implantation time within a given center (Figure 4), the clinical impact of implantation time seems minimal. In other words, our results show that keeping the implantation time as close to, and preferably below, the average implantation time within a given center is likely to reduce the risk of graft loss and patient death.

[Figure 4. Hazard ratio and pointwise 95% CI comparing each value of implantation time with the center-specific mean implantation time (the HR therefore equals 1 at the center-specific mean). The result is obtained from a Cox regression allowing nonlinearity (on the log scale) using restricted cubic splines and illustrates that the effect of increasing implantation time is more important in the higher range.]

Liver implantation should be both diligent, to reduce the risk of vascular complications, and swift, to reduce the impact of implantation time, stressing the importance of well-trained and experienced surgeons performing liver transplantations. The implantation technique might also reduce the time to reperfusion.
Shorter implantation times for piggyback compared with classical caval replacement have been described, [15][16][17] and this might contribute to the reported improved perioperative outcome after piggyback. 18 Although the long-term outcome between the 2 techniques in the reported study was the same, our results suggest that further studies looking at implantation technique as a potential confounder are worthwhile. The sequence by which the anastomoses are constructed and the liver is reperfused (portal vein first, hepatic artery first, or simultaneous reperfusion of portal vein and hepatic artery) might also play a role. Because Eurotransplant does not collect detailed information on portal and arterial reperfusion, we were unable to investigate this further. The reported implantation times in this article reflect the wide variety of surgical techniques used in the different Eurotransplant liver transplant centers. Indeed, preliminary findings of a recent survey conducted within Eurotransplant, Swisstransplant, Scandiatransplant, and the British Transplantation Society showed that the portal vein is reperfused first in 61% of cases, and the portal vein and hepatic artery simultaneously in 19% of cases. 19 Despite this limitation, a detrimental effect of implantation time was found, stressing the importance of more detailed investigation in other large data sets that capture the sequence of reperfusion. These will likely provide important insights that might help improve surgical technique and outcome after liver transplantation in the absence of randomized controlled trials. Keeping the graft cold during implantation might improve outcome. Technical modalities to keep the liver cold during implantation need to be developed. Surface cooling might not be very straightforward or effective for a large organ such as the liver. Some centers rinse the liver during implantation to remove the preservation solution. 20,21 Keeping this rinse solution cold might reduce rewarming and, therefore, the effect of longer implantation times on outcome. Our results did not show an increased vulnerability of DCD livers to implantation time. DCD donation was an independent risk factor for worse outcome, and that effect was entirely caused by the donor warm ischemia time. However, there was no interaction between donor type and implantation time, suggesting that DCD livers were not more susceptible to the deleterious effect of implantation time. Most likely, any potential effect remained undetected because there were only 208 DCD livers in our study. It is, therefore, warranted to repeat these analyses in a larger DCD series. Alternatively, and perhaps counterintuitively, one could hypothesize that the significant changes at the cellular and subcellular levels caused by the withdrawal phase and warm ischemia time in DCDs 22 mask the effect of a second hit of warm ischemia because most of the damage is already done (i.e., the magnitude of the effect of implantation time is reduced in DCDs, whereas DCD status still negatively impacts outcome). The strength of our analysis is the use of a large cohort of transplant recipients in the Eurotransplant region. A limitation inherent to every registry study based on data from many different centers and countries is the lack of detailed information regarding donor and recipient characteristics and incomplete data registration.
In contrast to the US and UK transplant registries, data submission to the Eurotransplant registry is not compulsory, explaining the high frequency of missing data in this registry. However, even with mandatory data submission, the final cohort of a recent study looking at implantation time in kidney transplantation, using the United Network for Organ Sharing registry, represented only 57.7% of the eligible cohort. 23 Also, as the baseline characteristics of transplants excluded because of missing data were comparable to those of the transplants included, we do not suspect that our results were importantly confounded. Although multiple imputation was used, a recognized strategy to reduce the concerns related to missing data, our findings should be confirmed using other large data sets. Although this large cohort study allowed us to perform survival analyses, no in-depth analysis of other transplant outcome variables was possible. We also cannot exclude that implantation time is a surrogate for other confounding factors that may impact outcome, because the Eurotransplant Registry does not contain detailed information on possible determinants of the duration of implantation time during liver transplantation. Furthermore, a detailed analysis looking at an association of implantation time with early outcomes, such as primary nonfunction and early allograft dysfunction, and with the development of biliary complications (not captured by the Eurotransplant Registry) would be very valuable. In addition, now that the detrimental effect of longer implantation time has been demonstrated, additional confounding factors that are not detailed in the Eurotransplant registry, such as surgical technique (piggyback vs caval replacement; reperfusion of the artery before the portal vein; use of a cold rinse during implantation), the number and nature of the arterial reconstructions, the presence of portal vein thrombosis, and so on, need to be teased out in other data sets that do contain this information. In conclusion, implantation time is a risk factor for liver transplant outcome, especially in the first months after transplantation. This finding identifies the need for a better understanding of confounding factors as well as the need to limit perioperative warm ischemic injury to improve outcome after liver transplantation. Validation of these findings and exploration of uncaptured confounders in other large data sets are needed.
Impact of Moderate-To-Vigorous Sports Participation Combined with Resistance Training on Metabolic and Cardiovascular Outcomes among Lean Adolescents: ABCD Growth Study

Background: To investigate the combined impact of being engaged in resistance training (RT) and meeting the physical activity guidelines through sports participation (SP) on cardiovascular and metabolic parameters in lean adolescents. Methods: A longitudinal study, part of the ongoing study entitled "ABCD Growth Study" (Analysis of Behaviors of Children During Growth), assessed data from 64 adolescents (23 from the sport group, 11 from the sport + RT group, and 30 from the control group). Metabolic and cardiovascular outcomes were analyzed as dependent variables. For the independent variables, sports participation and resistance training were considered, and for the covariates, sex, chronological age, body weight, height, and somatic maturation. Results: After 12 months of follow-up, the sport + RT group presented improvements in triglycerides (TG), and the sport group presented reductions in LDL-c, TG, and glucose when compared to the control group. Conclusions: Being engaged in RT and SP is a good strategy to improve health in eutrophic adolescents, with the greatest impact on TG from the lipid profile.

Introduction

Cardiovascular diseases (CVD) represent the main cause of death in adults worldwide [1]. In fact, the first signs of cardiovascular disease manifest in the first decades of life [2], raising the relevance of unhealthy behaviors adopted during adolescence, such as insufficient physical activity [3]. Sports participation (SP) is a subset of physical activity and constitutes the main manifestation of physical exercise during adolescence, being influenced by sociocultural aspects [4]. Engagement in sports is the most important way for adolescents to meet guidelines for moderate-to-vigorous physical activity (MVPA) [5] and is widely recommended for pediatric groups. Furthermore, engagement in resistance training (RT) is linked to improved sport performance and muscle strength [6]. Moreover, RT contributes to the maintenance or increase in muscle mass, control of body fatness [7,8], an increase in the resting metabolic rate, and, potentially, an increase in daily caloric expenditure [6,9]. Additionally, RT appears to have a direct relationship with increased bone mineral density [10] and improved resting blood pressure, without affecting growth [11]. Although there are solid guidelines recommending both SP and RT for adolescents, their combined effect on the cardiovascular and metabolic aspects of adolescents is still under investigation. Pediatric exercise science has extensively investigated the impact of RT and SP on the health of overweight/obese adolescents, but in lean adolescents this information is unclear. Thus, the aim of this manuscript was to elucidate the features of systematic training and physical activity in adolescents with different metabolic characteristics and, additionally, to analyze the combined impact of being engaged in RT and meeting the physical activity guidelines through SP on cardiovascular and metabolic parameters in lean adolescents, compared to both only meeting the physical activity guidelines through SP and no engagement in SP or RT at all.

Sampling

This longitudinal study was part of the ongoing study entitled "ABCD Growth Study" (Analysis of Behaviors of Children During Growth), which is being carried out in the city of Presidente Prudente, State of São Paulo, Brazil.
The ABCD Growth Study was approved by the Ethics Committee of São Paulo State University (UNESP; process: 1.677.938). Data collection and analyses were performed by researchers of the Laboratory of Investigation in Exercise (LIVE), which is part of the Department of Physical Education of UNESP. Parents/guardians and adolescents signed the written consent form. The ABCD Growth Study is a pragmatic trial in which researchers are interested in the way sports participation affects health outcomes among adolescents in the "real world", targeting improved external validity of the findings. Therefore, researchers monitor adolescents over time but do not interfere in their training routine, differently from a clinical trial where the variables (e.g., training, rest, nutrition, etc.) are controlled in order to reach maximum internal validity (ideal conditions) while reducing external validity. The dataset for this manuscript was collected in 2017 (baseline) and 2018 (12 months of follow-up). Further details about the sampling process can be found elsewhere [12,13]. Briefly, at baseline, researchers contacted school units and sports clubs spread across the city to request authorization to contact the adolescents. After receiving permission, the researchers contacted the respective sites and visited them in order to explain the inclusion criteria, which were as follows: (1) chronological age between 11 and 18 years; (2) not having any metabolic disorder that hinders sports practice; (3) at least one year of regular sports practice (sports group); (4) at least one year without regular practice or exercise (control group); and (5) signed consent and informed consent forms from legal guardians and adolescents, respectively. Additionally, for this specific manuscript, the absence of obesity was adopted as an inclusion criterion (body fatness < 25% for boys and < 30% for girls). At baseline, 285 adolescents were assessed and started the follow-up period. After 12 months of follow-up, 189 adolescents were reassessed. The reasons for the 96 dropouts were as follows: fear of blood collection, having moved to another city, not having enough time to participate in data collection, and the desire to drop out of the study. From those 189 adolescents who remained in the study, 5 were excluded as they were 18 years old at baseline, 14 were excluded due to missing data in at least 1 of 4 lipid variables at follow-up, 1 adolescent was excluded due to missing data for anthropometry at follow-up, and 63 were excluded due to a diagnosis of obesity at baseline. From the 106 lean adolescents remaining, 42 were excluded due to not meeting the physical activity guidelines through sports participation. Finally, the sample was composed of 64 adolescents (48 boys and 16 girls) (Figure 1).

Dependent Variables: Metabolic and Cardiovascular

Systolic (SBP) and diastolic blood pressure (DBP) were analyzed using an automatic device (Omron Healthcare, model HBP 1100) previously validated for the pediatric population [14]. After a 10 min rest period, three measurements were made in the right arm near the humerus, with a 1 min interval between them, and the average of the three evaluations was adopted. The lipid profile and glycemic profile were evaluated through the following variables: total cholesterol (TC), high-density lipoprotein (HDL-c), low-density lipoprotein (LDL-c), triglycerides (TG), and glucose. The adolescents were instructed to fast for 12 h before the test.
All variables were treated as absolute changes (∆) after 12 months of follow-up (subtraction of the baseline values from the follow-up values).

Independent Variable: Sports Participation and Resistance Training

Researchers monitored the adolescents over a period of 12 months, assisted by coaches and assistant coaches. Information about RT was collected through a face-to-face interview (e.g., days per week, previous time of engagement, etc.), and adolescents who reported RT at baseline and follow-up were considered consistently engaged. The heart rate of adolescents engaged in sports was assessed during two whole training sessions (warm-up, drills, the practice itself, and cool-down), with a sensor attached to their chest (Polar brand, model H7), and the time spent in moderate-to-vigorous physical activity was calculated [15,16]. In this study, only adolescents meeting the physical activity guidelines through sports participation (n = 34; at least 60 min of moderate-to-vigorous physical activity) were considered. Finally, combining both sports participation and RT, these adolescents were divided according to their engagement in RT into a sport group (n = 23; only sport) and a sport-RT group (n = 11; sport + RT). Adolescents who were assessed in schools and declared no engagement in sports at baseline (and over the 12 months prior to baseline) were tracked as "control", and only those who remained not engaged in both sports and RT were considered in this manuscript.

Covariates

Sex and chronological age were collected through a face-to-face interview. Sex was defined as the biological sex at birth. Body weight was measured using a scale (Filizola brand, model Personal Line 200), and height and cephalic trunk height were measured using a fixed stadiometer (Sanny brand, Professional model). Body fatness was estimated by bone densitometry (General Electric, model WH-Prodigy Primo). Anthropometric data were used to estimate somatic maturation through the peak height velocity (PHV) [17].

Statistical Analyses

Descriptive data are presented as mean, standard deviation (SD), and 95% confidence interval (95% CI). Analysis of variance (ANOVA) and covariance (ANCOVA) were used to compare the variables according to sports participation and RT in crude and adjusted approaches, respectively. Post-hoc tests were used when necessary (Tukey and Bonferroni, respectively). ANCOVA models were adjusted for sex, age, maturation, body fatness, and the baseline values of the dependent variable. Levene's test assessed the assumption of homogeneity of variances (all models were adequately fit), and measures of effect size were expressed as eta-squared (ES-r) as follows: ES-r < 0.064 (small effect size), ES-r ≥ 0.064 and < 0.140 (moderate effect size), and ES-r ≥ 0.140 (high effect size). Statistical significance was set at 5% (p-value < 0.05), and analyses were performed using the software BioEstat (version 5.0).
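For readers who want to reproduce this kind of adjusted comparison, a minimal sketch follows; the study itself used BioEstat 5.0, and the sketch below uses Python with a hypothetical data frame and column names. An ANCOVA is simply an OLS model containing the group factor plus the covariates listed above.

```python
# Minimal ANCOVA sketch, assuming columns: group ("control"/"sport"/
# "sport_rt"), sex, age, phv, body_fat, baseline_tg, delta_tg.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("abcd_lipids.csv")  # hypothetical file

model = smf.ols(
    "delta_tg ~ C(group, Treatment('control')) + sex + age + phv"
    " + body_fat + baseline_tg",
    data=df,
).fit()

aov = anova_lm(model, typ=2)   # F-test for the adjusted group effect
print(aov)
print(model.params)            # adjusted group differences vs. control

# eta-squared for the group factor: SS_group / SS_total
group_term = "C(group, Treatment('control'))"
eta_sq = aov.loc[group_term, "sum_sq"] / aov["sum_sq"].sum()
print(f"eta-squared = {eta_sq:.3f}")
```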
Characteristics of the Sample

Baseline characteristics of the sample, stratified according to groups, are presented in Table 1. The sport group was younger than the control group (p-value = 0.001) and the sport-RT group (p-value = 0.019). Similarly, the PHV of the sport group was lower than that of the control group (p-value = 0.001) and the sport-RT group (p-value = 0.001), although adolescents in all groups had passed the peak. The glucose level was lower in the control group than in the sport-RT group (p-value = 0.007). Table 2 presents the absolute changes after 12 months, divided according to engagement in sports and RT. For RT, there was a difference only for changes in TG (p-value = 0.033). In parallel, when the data were divided by SP, there were differences for LDL-c, TG, and glucose (p-value = 0.005, p-value = 0.001, and p-value = 0.007, respectively).

[Table 2. Changes in metabolic and cardiovascular outcomes after 12 months of follow-up in adolescents according to engagement in RT and sports.]

After 12 months, the control group presented a significant increase in LDL-c (6.8 mg/dL [0.45 to 13.1]), TG (16.0 mg/dL [7.0 to 24.9]), and glucose (4.2 mg/dL [0.9 to 7.5]), which was higher than that observed for the sport group (LDL-c, TG, and glucose) and the sport + RT group (TG) (Table 3). ANCOVA identified that, even after controlling for the variance explained by the confounding factors, adolescents who were simultaneously engaged in RT and SP presented a reduction in TG (−7.1 mg/dL [95% CI: −22.1 to 7.9]) in contrast to the increase observed in the control group (15.3 mg/dL [95% CI: 5.5 to 25.2]). The magnitude of the difference was moderate. Moreover, the differences for LDL-c and glucose did not remain significant in the adjusted models, but the effect size observed for LDL-c was of moderate magnitude (Table 4).

[Table 4. Adjusted absolute changes in metabolic and cardiovascular outcomes after 12 months of follow-up in adolescents according to the combined engagement in sports and RT.]

Discussion

This pragmatic trial identified that simultaneous engagement in both sports and RT was significantly related to moderate improvements in TG. The first aspect of our findings is the fact that, of the seven outcomes investigated, only one was significantly related to the combined engagement in sports and RT. In fact, this is not a surprise, mainly because the adolescents assessed were non-obese, and thus abnormal values are less common [18]. However, the effect size observed was moderate (even for an outcome that was not significant [LDL-c]) [19,20], which is surprising because in non-obese adolescents a small magnitude was expected. Our findings highlight the relevance of sports participation to metabolic health: not just the engagement itself, but regular engagement in sports sufficient to meet the physical activity guidelines, as a relevant way to promote metabolic benefits even in lean adolescents. To our knowledge, there are limited data investigating the combined impact of RT and sports participation on cardiovascular and metabolic parameters in lean adolescents. Our main findings regarding the combined impact of both on lipids are in agreement with other studies assessing sports participation [21,22], but those included adolescents of different weight categories. The combined engagement in RT and aerobic exercise is highly recommended in guidelines to treat cardiometabolic comorbidities related to pediatric obesity [23][24][25][26], and our findings indicate that this effect is also seen in lean adolescents. In terms of glucose parameters, the differences became non-significant in the adjusted model. The absence of significant differences for glucose was attributed to differences at baseline (values were higher in the group engaged in both SP and RT), although no abnormal glucose values were observed in the sample (glucose > 126 mg/dL [27]). Therefore, it seems reasonable to believe that the combination of RT and sports participation at a moderate-to-vigorous intensity is beneficial for the glucose metabolism of lean adolescents, even in clinical cases where glucose values are in the normal range.
In fact, further studies on the issue are needed, mainly considering larger sample sizes than in the current pragmatic trial. The background supporting the use of exercise to treat obese adolescents is consistent. Data provided by a randomized clinical trial indicate that aerobic exercise combined with strength training in overweight and obese adolescents is one of the best strategies to achieve recommended lipid profile values [28]. In addition, aerobic exercise combined with strength training is a very widely recommended non-pharmacological strategy to prevent and treat diseases such as dyslipidemia and type II diabetes mellitus [27]. The main question raised by us was whether this beneficial impact is observed in the absence of obesity, and our findings seem to confirm its existence. Moreover, the literature points out that the benefits attributed to physical exercise are highly determined by the exercise intensity [29], and our findings also support the relevance of activities of higher intensity in order to achieve these benefits. The limitations of the present study should be recognized. First, the absence of training parameters for RT (e.g., intensity, frequency, volume, etc.) is relevant because, without these data, it is not possible to describe in depth how RT was administered to these adolescents. Second, the pragmatic approach adopted in this study provides a better inference of these findings in the reality of these adolescents' lives. In fact, the combination of different sports gives a "real-world" character to our findings but limits the possibility of describing which sport, if any, would be more beneficial for the metabolic health of lean adolescents. Finally, the reduced sample size, especially in the SP + RT group, limits sex-specific analyses.

Conclusions

In summary, combined engagement in sports of moderate-to-vigorous intensity and RT seems a relevant strategy to improve lipid profiles in lean adolescents.

Author Contributions: Conceptualization, data collection, design, and ethics applications, A.E.v.A.M., W.T., J.B.U. and R.A.F.; substantial contributions to conceptualization, data acquisition, analysis, interpretation, and critical revision for important intellectual content, E.Z. and A.W.P.d.J. All authors reviewed the manuscript. All authors have read and agreed to the published version of the manuscript.

Informed Consent Statement: Informed consent was obtained in writing from all individual participants included in the study and their parents or legal guardians.

Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.
Novel Topical Treatments for Itch

The experience of itch often poses a burden on patient quality of life and has the capacity to inflict significant suffering. Topical therapies are a mainstay of treatment for many cutaneous and systemic diseases and afford patients the opportunity to manage their conditions without many of the systemic side effects of non-topical therapies. We review a multitude of new topical medications targeting the skin, immune system, and neural receptors. The list includes Janus kinase inhibitors, tyrosine kinase inhibitors, phosphodiesterase inhibitors, transient receptor vanilloid inhibitors, topical cannabinoids, and topical acetaminophen. Many of the topical therapies reviewed show promising data in phase 2-3 clinical trials, but further research is needed to compare therapies head-to-head and test their efficacy on a broader range of conditions.

INTRODUCTION

Itch is an unpleasant sensory phenomenon, and its chronic form has significant psychosocial impacts on patient quality of life. Numerous systemic and cutaneous pathologies produce the sensation of itch, and different mechanisms are implicated [1]. With this, there also exists a plethora of therapeutic targets for drug discovery. Research in pruritus is continually evolving as mechanisms of diseases are uncovered and clarified and as new and creative therapeutics enter the scientific community for investigation. The interplay between advancements in the understanding of disease mechanisms and data on the efficacy of novel drugs yields an expansive framework from which clinical decision-making in the management of pruritus may be derived. Topical therapies in particular serve as a cornerstone of treatment for many dermatologic conditions, as they often lack the systemic side effects that are present with systemic therapies. They are especially implicated as primary treatment modalities for conditions where the mechanism of disease is based in the skin; nevertheless, their role as adjunctive therapy in more systemic diseases is crucial. This review explores novel topical drug therapies that have emerged in the past few years and show potential for the treatment of pruritus, most commonly localized pruritus. We review a wide range of therapies, from drugs in preclinical phases of study to medications that have recently been introduced into clinical practice (Table 1).

This article is based on previously conducted studies and does not contain any new studies with human participants or animals performed by any of the authors.

TOPICAL JANUS KINASE INHIBITORS (BREPOCITINIB, DELGOCITINIB, RUXOLITINIB)

Topical Janus kinase (JAK) inhibitors have emerged as promising therapeutics in the treatment of pruritus owing to their efficacy and favorable safety profiles (Fig. 1). Their mechanism of action is immunomodulatory, involving inhibition of Janus kinases, which leads to disruption of signaling through type 2 cytokine receptors, e.g., interleukin (IL)-4, IL-13, and IL-31 receptors, as well as direct effects on cytokine receptors in nerve fibers [2].

Ruxolitinib

Topical ruxolitinib is a JAK1 and JAK2 inhibitor that was approved by the US Food and Drug Administration (FDA) in September 2021 for the treatment of atopic dermatitis and has shown efficacy in the treatment of associated atopic itch and other itchy conditions including psoriasis, lichen planus, and cutaneous graft-versus-host disease (GVHD).
Phase III trials showed ruxolitinib to be efficacious in treating atopic itch. The Topical Ruxolitinib Evaluation in Atopic Dermatitis Studies (TRuE-AD) comprise two randomized, double-blind, vehicle-controlled studies with the same design [3]. Both studies looked at individuals 12 years or older with atopic dermatitis for 2 years or more, an IGA score of 2 or 3, and a body surface area of 3-20%, not including the scalp [3]. Patients were randomized to receive twice-daily 0.75% or 1.5% ruxolitinib cream or vehicle cream for the initial 8 weeks of the study [3]. Within the first 12 h of the first application of ruxolitinib cream, there were significant reductions in itch NRS scores compared to vehicle [3]. Additionally, at week 8, in patients with baseline itch NRS scores ≥ 4, there were significantly more patients with clinically relevant improvements in scores in TRuE-AD1 and TRuE-AD2 among those treated with ruxolitinib 0.75% (40.4% and 42.7%, respectively) and ruxolitinib 1.5% (52.2% and 50.7%, respectively) compared to the vehicle (15.4% and 16.3%, respectively) [3]. By the second day, both studies showed a significantly greater proportion of patients achieving ≥ 4-point reductions on the NRS in the ruxolitinib 1.5% cohort (TRuE-AD1, 11.6%; TRuE-AD2, 10.8%) versus the vehicle (TRuE-AD1, 2.9%; TRuE-AD2, 1.3%) [3]. Ruxolitinib has demonstrated efficacy in reducing disease severity in psoriasis, but with no published data on itch [4]. Further study in this area is needed, but it is assumed that it would also work to reduce pruritus. Similarly, ruxolitinib has demonstrated efficacy in treating lichen planus (LP) in a phase 2 study [5]. The study investigated 12 patients with LP; ruxolitinib 1.5% cream was applied twice daily to LP lesions for 8 weeks, except for an untreated index lesion that served as a control [5]. The results showed that patient-reported quality of life and symptoms of pruritus were rapidly improved following treatment with ruxolitinib on the Skindex-16 and pruritus NRS, respectively [5]. By week 2, average scores on both scales were decreased by more than half and progressively decreased throughout the duration of the treatment period [5]. At week 4, Skindex-16 and pruritus NRS were reduced from baseline scores of 56.2 and 5.8 to 19.8 and 1.3, respectively [5]. Although the mechanism of itch is unclear in LP, some evidence points to the role of JAK1 and/or JAK2 [5,6]. Additionally, topical ruxolitinib has been studied in the management of cutaneous GVHD with good efficacy on disease severity [7]. Patients with cutaneous GVHD often complain of pruritus, but the pathophysiology of itch in this condition is not well elucidated, and the severity of itch has been reported not to be associated with disease severity [8]. There is a reported association between the reduction of pruritus and a longer failure-free survival in patients with chronic GVHD, pointing to the need for further research in this area [8,9]. Numerous studies are currently underway to evaluate ruxolitinib's efficacy in other itchy conditions including prurigo nodularis, lichen sclerosus, and seborrheic dermatitis.
Brepocitinib

Brepocitinib, a small-molecule tyrosine kinase 2 (TYK2)/JAK1 inhibitor, is among the novel drugs in this category. A recent phase 2b randomized, double-blind, vehicle-controlled, dose-ranging, parallel-group study evaluated its use in patients with mild to moderate atopic dermatitis [10]. The study utilized the Peak Pruritus Numerical Rating Scale (PP-NRS) in 241 patients; this is a single-item survey designed to inquire about a patient's worst itch over the preceding 24 h on a scale of 0 (no itch) to 10 (worst itch imaginable) [11]. Across all treatment dosages, including brepocitinib 0.1% once daily, 0.3% once or twice daily, 1.0% once or twice daily, and 3.0% once daily, there was a numerically higher proportion of participants with ≥ 4-point reductions in the weekly average of the PP-NRS from week 3 to week 6, the end of the study period, compared to the vehicle groups (once or twice daily). However, statistically significant differences in ≥ 4-point reductions on the PP-NRS were noted at week 6 in the brepocitinib 1% once daily (45.2%), 3% once daily (50.0%), and 1% twice daily (40.7%) treatment groups compared to vehicle (once daily, 18.2%; twice daily, 16.7%) [10].

Delgocitinib

Delgocitinib, a JAK1-3 and TYK2 inhibitor that has been approved for pediatric and adult atopic dermatitis in Japan, is another novel topical that has shown some evidence of alleviating atopic pruritus [12]. A phase 3, randomized, double-blind, vehicle-controlled study and an open-label, long-term extension study evaluated the use of delgocitinib 0.5% ointment applied twice daily for the treatment of patients with moderate to severe AD, as determined by a modified eczema area and severity index score ≥ 10, an investigator global assessment (IGA) score of 3 or 4, and a body surface area of 10-30% [13]. This was performed for 4 weeks, followed by a continuation period of 24 weeks [13]. The trial evaluated changes in pruritus NRS during the daytime and nighttime and found that, as early as the nighttime of the first day, patients applying the delgocitinib ointment saw a statistically significant reduction in pruritus NRS score compared to the vehicle ointment; this change was maintained over time [13]. Moreover, the mean change in pruritus NRS at week 4 was -1.6, and the change at week 25 was -1.3 [13]. Nevertheless, despite these improvements in NRS scores, the level of change is not considered clinically significant [14]. Delgocitinib has also demonstrated efficacy in the management of chronic hand eczema [15]. One phase 2b dose-ranging, double-blind, randomized clinical trial evaluated 258 adults with mild to severe chronic hand eczema [15]. Patients were randomized to delgocitinib cream 1, 3, 8, or 20 mg/g or a vehicle cream applied twice daily for 16 weeks. Eleven signs and symptoms of chronic hand eczema were evaluated through a Hand Eczema Symptom Diary on an 11-point rating scale [15]. Application of 20 mg/g delgocitinib cream resulted in early and sustained reductions in both itch and pain [15]. Clinically relevant reductions of ≥ 4 points in itch and pain from baseline were noted by week 16 in 48.4% and 63.6% of patients, respectively, compared to 17.9% and 5.9% of patients treated with vehicle cream [15]. Furthermore, there were statistically significant improvements in all chronic hand eczema signs and symptoms in this treatment group compared to the vehicle cream [15]. Clinician-reported outcomes corroborated these data [15].
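The trials above repeatedly use the same responder endpoint: the proportion of patients achieving a ≥ 4-point reduction on an 11-point itch NRS, compared between arms. A minimal sketch of that analysis follows, using made-up counts rather than data from any trial cited here.

```python
# Responder-rate comparison between an active arm and a vehicle arm.
from scipy.stats import chi2_contingency

def responder_comparison(resp_active, n_active, resp_vehicle, n_vehicle):
    """Compare >= 4-point NRS responder rates between two arms."""
    table = [[resp_active, n_active - resp_active],
             [resp_vehicle, n_vehicle - resp_vehicle]]
    chi2, p, _, _ = chi2_contingency(table)  # Yates-corrected for 2x2 by default
    return resp_active / n_active, resp_vehicle / n_vehicle, p

# e.g., 45/100 responders on active drug vs. 18/100 on vehicle (illustrative)
rate_a, rate_v, p = responder_comparison(45, 100, 18, 100)
print(f"active {rate_a:.1%} vs vehicle {rate_v:.1%}, p = {p:.4f}")
```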
Roflumilast
Roflumilast, a topical phosphodiesterase 4 (PDE4) inhibitor, was FDA approved as a cream for plaque psoriasis in July 2022, as it rapidly and effectively reduced itch severity, providing improvement after 2 weeks with a mean change in worst itch of -4 on a 10-point scale at week 8 [16,17]. Furthermore, in two pivotal phase III trials, DERMIS-1 and DERMIS-2, 67.5% and 69.4% of roflumilast-treated patients with baseline WI-NRS scores ≥ 4 had at least a 4-point reduction in score (WI-NRS Success), compared to 26.8% and 35.6% of patients treated with vehicle cream, respectively [18]. In December 2023, roflumilast 0.3% was approved for the treatment of seborrheic dermatitis. A phase III trial demonstrated that within 8 weeks, 62.8% of patients treated with roflumilast for seborrheic dermatitis experienced WI-NRS Success compared to 40.6% of those treated with vehicle cream; this improvement was noted within 2 days of initial therapy [19]. Roflumilast is also currently being studied for AD. Two phase III trials, INTEGUMENT-1 and INTEGUMENT-2, showed significant reductions in WI-NRS within 1 month in patients with AD treated with 0.15% roflumilast cream; 33.6% and 30.2% of treated patients had significant improvements versus 20.7% and 12.4% in the vehicle group in the two studies, respectively, and improvements were noted as early as 1 day after initial application [20].

Difamilast
Difamilast is another topical PDE4 inhibitor, approved in Japan in September 2021 for the management of AD in adults and children over 2 years of age. Difamilast 0.3%, difamilast 1%, or placebo was administered twice daily for 8 weeks in a phase 2 clinical trial in patients 10-70 years of age with mild or moderate AD (baseline IGA score of 2 or 3 and a 3-year history of disease). Regarding pruritus, difamilast 1% improved pruritus visual analog scale (VAS) scores from baseline within the first week (-36.4% mean change) compared with placebo. Itch improvement was sustained, with a reduction in VAS scores throughout the 8 weeks of the study [21]. Another phase 2 clinical trial looked at Japanese pediatric patients 2-14 years old. This was a randomized, double-blind, placebo-controlled, 4-week study to evaluate the safety and efficacy of difamilast for AD [22]; 73 patients were randomized to treatment with difamilast 0.3%, difamilast 1%, or vehicle ointment twice daily. Patients receiving difamilast showed a consistent improvement in VAS pruritus scores over the trial period; such improvements were evident as early as week 1, when patients treated with difamilast 0.3% and 1% had least square mean changes from baseline of -18.61 and -12.83, respectively, compared to 0.34 in patients treated with vehicle [22]. By week 4, the changes in least square means were -18.00, -17.21, and 8.19 for difamilast 0.3%, difamilast 1%, and vehicle, respectively [22].
Two phase 3 trials similarly highlight the efficacy of difamilast in treating pruritus. One double-blind, vehicle-controlled phase 3 trial looked at difamilast for the treatment of atopic dermatitis in patients aged 2-14 years. Patients received difamilast 0.3%, difamilast 1%, or vehicle ointment twice daily for 4 weeks [23]. Using a pruritus visual rating score, at week 1, patients treated with difamilast 0.3% and difamilast 1% had least square mean changes of -0.59 and -0.54, respectively, compared to -0.14 in the vehicle group; these differences were statistically significant and persisted until week 4 of the study [23]. Another randomized, double-blind, vehicle-controlled phase 3 trial looked at a Japanese cohort of patients aged 15-70 years with atopic dermatitis. Patients were treated with difamilast 1% ointment or vehicle twice daily for 4 weeks [23]. At week 1, there was a significantly greater change in least square mean from baseline in patients treated with difamilast 1% compared to vehicle, and by week 4 the change was -0.65 in the difamilast 1% cohort compared to -0.04 in the vehicle group, a statistically significant difference [23].

Lotamilast
Lotamilast is another PDE4 inhibitor that has garnered attention. One multicenter, randomized, vehicle-controlled phase 2 clinical trial evaluated patients with AD aged 20-64 years with an affected body surface area of 5-30%. Patients received 0.2% lotamilast or vehicle ointment for 4 weeks, and those who continued into the extension phase received 0.2% lotamilast for an additional 8 weeks [24]. Pruritus scores were evaluated using Scoring Atopic Dermatitis part C (SCORAD-C), which showed statistically significant improvements after 4 weeks [24]. There was a -50.0% and -69.5% mean difference between lotamilast and vehicle in the full analysis set and per-protocol set, respectively [14,24]. A randomized, vehicle-controlled, exploratory trial in Japanese children with AD also found lotamilast to be efficacious in the reduction of pruritus [25]. In this study, 62 patients were treated with lotamilast 0.05%, lotamilast 0.2%, or vehicle ointment twice daily for 2 weeks [25]. Notably, the trial found a greater decrease in pruritus score in those treated with lotamilast 0.2% compared to vehicle (-37.5% vs. -6.7%) [25].

LEO 29102
LEO 29102 is a PDE inhibitor selective for the PDE4D isoform that has also been studied in patients with AD [14]. A proof-of-concept phase 2 trial compared the efficacy of LEO 29102 to pimecrolimus in patients with AD. Patients were treated with LEO 29102 dosages of 0.03 mg/g, 0.1 mg/g, 0.3 mg/g, 1.0 mg/g, and 2.5 mg/g twice daily [14]. Pruritus was evaluated using the descriptors absent, mild, moderate, and severe to describe itch on the trunk and limbs [14]. The study found that patients treated with 0.3 mg/g and 2.5 mg/g of LEO 29102 twice daily had the greatest reductions in pruritus after 4 weeks (28.0% and 23.3%, respectively), although the statistical significance of the results was not reported [14].
ARYL HYDROCARBON ACTIVATOR (TAPINAROF)
Activation of the aryl hydrocarbon receptor (AHR) induces epidermal differentiation and has implications in skin barrier repair [26]. Tapinarof is a topical agent that activates AHR and has been FDA approved in a 1% formulation for the treatment of plaque psoriasis. Two phase III trials in patients with mild to severe plaque psoriasis showed a highly significant difference in patients achieving itch-free status compared to controls at 12 weeks (50% in both trials compared to 32% and 27% in the vehicle groups, respectively) [27]. Tapinarof has also been studied in AD in two phase III trials, ADORING 1 and 2. Both trials showed substantial proportions of patients achieving meaningful itch reductions among those treated with tapinarof compared to vehicle by week 8 (ADORING 1, 55.8% vs. 34.2%, respectively; ADORING 2, 52.8% vs. 24.1%, respectively) [28,29].

TRPV1 ANTAGONIST (ASIVATREP)
Asivatrep, or PAC-14028, is a selective and potent transient receptor potential vanilloid subfamily V member 1 (TRPV1) antagonist [33]. In a randomized, double-blind, vehicle-controlled phase 2b trial, patients with mild to moderate AD were randomized to receive vehicle cream or asivatrep 0.1%, 0.3%, or 1% applied twice daily for 8 weeks. All asivatrep-treated groups showed decreases in mean VAS from baseline over the course of the study, although the difference was only statistically significant at week 8 in the asivatrep 1% group [34]. In a randomized, vehicle-controlled phase 3 trial, patient-reported assessments of itch were lower in patients with AD treated with asivatrep cream than in those receiving vehicle at week 1, and this difference was maintained until the end of the study (week 8). Moreover, the mean change in patient-reported pruritus VAS scores from baseline was significantly greater in the asivatrep-treated patients compared to those receiving vehicle (-2.3 points vs. -1.5 points), indicating significant improvement in itch [33]. Additionally, asivatrep appears to optimize skin barrier function by promoting the production of epidermal differentiation markers, which may contribute to its antipruritic effect [33].

TOPICAL CANNABINOIDS (CANNABIDIOL)
Cannabinoids are compounds that act on the endocannabinoid system to elicit a range of physiologic effects. Recently, topical cannabinoids have garnered attention for their potential role in managing cutaneous pathologies, namely AD, as they have been shown to have antipruritic and anti-inflammatory properties through activity on neurons, inflammatory cytokines, and mast cells [35]. The antipruritic mechanism of action is likely multifactorial, including peripheral and central modulation of cannabinoid receptors 1 (CB1) and 2 (CB2) and TRP channels [35]. The central effect is predominantly mediated via CB1, and the peripheral effect likely involves an analgesic effect mediated by both CB1 and CB2 [35].

Cannabidiol
A recent study investigated the effects of topical cannabidiol in 14 patients with AD aged 25-73 years [36]. Patients completed surveys before and after 14 days of application [36]. Pruritus was assessed using the VAS-pruritus, which rated patients' itch on a scale of 0 (no itch) to 10 (worst itch of their life), and the 5-D pruritus scale, which assesses the degree, duration, disability, and distribution of itch within the prior 2 weeks [36]. Patients experienced statistically significant reductions in pruritus on both the VAS-pruritus (pre-treatment, 5.78; post-treatment, 4.01) and the 5-D pruritus scale (pre-treatment, 13.2; post-treatment, 10.86) [36].
Another study evaluated the effects of a topical cannabinoid gel in patients with self-reported eczema. Twenty individuals consented to participate, of whom 16 completed the Patient Oriented Eczema Measure to assess disease severity and the emotional domain of the Quality-of-Life Hand Eczema Questionnaire to assess the psychosocial burden of disease [37]; 67% of participants reported a decrease in itch and more than 60% had a perceived improvement in their eczema [37]. Additionally, a role for topical cannabinoids has been suggested in uremic pruritus. There is some evidence of their effect on TRPV1, which is implicated in the pathogenesis of uremic pruritus [38]. In one non-randomized study, 21 individuals with uremic pruritus were treated with a cream containing the endogenous cannabinoid acetylethanolamide and a related non-cannabinoid, palmitoylethanolamide, which resulted in 38% of participants experiencing complete relief of pruritus [38,39]. More studies are needed to further elucidate the role of cannabinoids in pruritus.

B244
B244 is a live biotherapeutic currently under investigation for a role in the management of AD (Table 2). B244 consists of a purified strain of Nitrosomonas eutropha [40]. This is a bacterium that oxidizes ammonia to nitrite and nitric oxide, which are thought to provide antimicrobial and anti-inflammatory activity, respectively [40]. In vitro analysis found B244 to reduce Th2 cytokines associated with AD, including IL-4, IL-5, and IL-13 [40]. A randomized, double-blind, placebo-controlled, dose-ranging phase 2b trial of B244 enrolled 547 patients 18-65 years old with mild to moderate AD and moderate to severe pruritus [40]. Optical density (OD) at 600 nm was used to divide patients into a low-dose group (OD 5.0), a high-dose group (OD 20.0), or a vehicle group for a 4-week treatment period and a 4-week follow-up period [40]. Patients were to apply a topical spray twice daily during the treatment weeks [40]. Pruritus was assessed using the WI-NRS at 4 weeks [40]. Patients treated with B244 saw a 34% reduction in WI-NRS score (B244, -2.8; placebo, -2.1) from a baseline score greater than 8; this was statistically significant [40].

KM001
There is evidence to suggest the involvement of TRPV3 in pruritus pathways. As such, this receptor has garnered attention as a potential therapeutic target for itch. Notably, KM001 has emerged as a novel topical small-molecule inhibitor of TRPV3 and is currently undergoing phase II trials for the treatment of lichen simplex chronicus [30].

Topical Acetaminophen
Topical acetaminophen has recently been studied for a possible role in treating itch. Traditionally, the mechanism of action is thought to involve inhibition of the cyclooxygenase pathway and prostaglandin synthesis, although the exact mechanism is not entirely known [42]. In a double-blind, vehicle-controlled pilot study, 17 healthy volunteers 19-50 years of age (average age 26.4 years) were evaluated for treatment response with 1%, 2.5%, and 5% acetaminophen gels and a vehicle gel applied to the skin prior to the induction of itch with histaminergic (histamine) and non-histaminergic (cowhage) stimuli. Individuals treated with the 2.5% and 5% acetaminophen gel formulations had significant reductions in itch for both histamine and cowhage compared to vehicle [42]. Moreover, mean peak itch intensity was significantly reduced with the 2.5% gel formulation by 32% compared to the vehicle [42].
CONCLUSION
Topical treatments are the mainstay of therapies used by dermatologists; however, until recently there had been limited development of novel topical antipruritics. Significant advancement in our understanding of the mechanisms of itch is leading to the development of novel topical therapies, particularly for localized itch. The therapies highlighted carry a wide range of mechanisms and varying degrees of efficacy in their respective phases of study, inspiring continued innovation to target itch pathways. Comparisons between the drug therapies discussed may be limited by differences in study methods, phases of clinical trials, and pruritus assessment tools. For this reason, it is difficult to determine whether one drug is more efficacious than another without head-to-head trials. Furthermore, AD is the most studied condition in this review, and there is more room to test these topicals in the management of other itchy conditions. Ultimately, as research in the field continues to grow, more therapeutics are becoming available, allowing for more patient options and increasingly nuanced clinical practice.

Rami H. Mahmoud and Omar Mahmoud contributed equally to this work.

Fig. 1 Overview of the JAK/STAT pathway's role in generating action potentials after binding of cytokines and the mechanism of action of topical JAK inhibitors. Detomidine activates skin nociceptor a2-adrenergic receptors, reducing itch signaling (Created with BioRender.com). JAK janus kinase, TRPV1 transient receptor potential vanilloid subfamily V member 1, TRPA1 transient receptor potential ankyrin 1

Table 1 Summary of new topical treatments under investigation, their mechanism of action, and effects on itch. IL interleukin, WI-NRS Worst Itch Numerical Rating Scale, AD atopic dermatitis

Table 2 Emerging new topical treatments under investigation, their proposed mechanism of action, and effects on itch
2024-04-15T06:17:11.579Z
2024-04-01T00:00:00.000
{ "year": 2024, "sha1": "16abf14bdf08f7ce005f16f4f2fc992a24cced19", "oa_license": "CCBYNC", "oa_url": "https://link.springer.com/content/pdf/10.1007/s13555-024-01144-w.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c30dc6350d9b6dde23fbb83c5b3213af7249eb6d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
49479481
pes2o/s2orc
v3-fos-license
Cholera outbreak caused by drinking contaminated water from a lakeshore water-collection site, Kasese District, south-western Uganda, June-July 2015
On 20 June 2015, a cholera outbreak affecting more than 30 people was reported in a fishing village, Katwe, in Kasese District, south-western Uganda. We investigated this outbreak to identify the mode of transmission and to recommend control measures. We defined a suspected case as onset of acute watery diarrhoea between 1 June and 15 July 2015 in a resident of Katwe village; a confirmed case was a suspected case with Vibrio cholerae cultured from stool. For case finding, we reviewed medical records and actively searched for cases in the community. In a case-control investigation we compared exposure histories of 32 suspected case-persons and 128 age-matched controls. We also conducted an environmental assessment of how the exposures had occurred. We found 61 suspected cases (attack rate = 4.9/1000) during this outbreak, of which eight were confirmed. The primary case-person had onset on 16 June; afterwards cases sharply increased, peaked on 19 June, and rapidly declined. After 22 June, eight scattered cases occurred. The case-control investigation showed that 97% (31/32) of cases and 62% (79/128) of controls usually collected water from inside a water-collection site "X" (ORM-H = 16; 95% CI = 2.4-107). The primary case-person, who developed symptoms while fishing, reportedly came ashore in the early morning hours of 17 June and defecated "near" water-collection site X. We concluded that this cholera outbreak was caused by drinking lake water collected from inside the lakeshore water-collection site X. At our recommendation, the village administration provided water chlorination tablets to the villagers, issued a water-boiling advisory, rigorously disinfected all patients' faeces and, three weeks later, fixed the tap-water system.

Introduction
Cholera is a diarrhoeal disease caused by the bacterium Vibrio cholerae. Approximately 20% of those infected with V. cholerae develop acute watery diarrhoea, and 10-20% of these develop severe diarrhoea and vomiting [1]. The incubation period for cholera is short (a few hours to 5 days for most subtypes) [2]. Cholera can spread quickly in places with poor water and sanitation conditions once the organism is introduced [3]. Outbreaks are usually caused by consumption of contaminated water or food [2,3]. Since cholera has a relatively high infectious dose (10⁴ organisms [4]), it often requires heavy contamination of drinking water or multiplication of the pathogen in contaminated food to cause outbreaks. Since the 19th century, the world has experienced seven cholera pandemics, which have caused millions of deaths. The current pandemic, the seventh overall, started in South Asia in 1961 and has spread to all World Health Organization (WHO) regions [3]. In 2016 alone, more than 132,000 cases of cholera were reported in 38 countries worldwide, including 2420 deaths, for a case-fatality rate of 1.8% [5]. Of all continents, Africa has been the worst affected during the current pandemic: in 2016, 17 African countries reported 54% of all global cases. The case-fatality rate in Africa (2.5%) was also well above the global level (1.8%); of the 2420 reported deaths globally in 2016, 1762 (73%) occurred in Africa [5]. Inadequate water and sanitation conditions have been identified as the driving force for the cholera epidemic in Africa [6].
Cholera continues to be an important public health problem in Uganda. Outbreaks, sometimes prolonged or widespread, have occurred since the disease was first reported in 1971 [7]. Since 1997, cholera cases have been reported annually in Uganda, including a major epidemic in 1998 with nearly 50,000 reported cases [8]. Districts bordering the Democratic Republic of Congo (DRC), South Sudan and Kenya, as well as urban slums in Kampala, have been the most affected areas [8]. Despite the frequent occurrence of outbreaks in Uganda, during the past 10 years only two have been rigorously investigated [9,10] following the standard steps of an outbreak investigation [11]. On 20 June 2015, a cholera outbreak, with more than 30 reported cases including 19 positive by a rapid diagnostic test (RDT) and eight culture-confirmed, occurred in Katwe Village, a fishing village in Kasese District, south-western Uganda. We conducted an investigation to assess the scope of the outbreak, identify the mode of transmission, and inform control and prevention measures.

Description of the area where the outbreak occurred
Kasese District is located in south-western Uganda, bordering DRC (Fig 1). Cholera outbreaks have been reported in the district before [12]. Katwe Village (0˚08'44.5"S, 29˚53'05.8"E) is located inside the Queen Elizabeth National Park between Lake Edward and Lake Katwe (Fig 1). The village has a population of 12,324 persons according to the 2014 census and consists of five settlement zones, i.e. Kyakitale, Top Hill, Kyarukara, Kiganda, and Rwenjubu. The main economic activity in the village is fishing. The village has one health facility, the Katwe Health Centre III. After the outbreak occurred, the local government established a cholera treatment centre at this health centre.

Case definition and identification
After reviewing the common signs and symptoms of the initial case-persons, we constructed a two-tiered case definition: a suspected case was onset of acute watery diarrhoea in a Katwe Village resident from 1 June to 15 July 2015; a confirmed case was a suspected case with V. cholerae cultured from a stool specimen. We reviewed patient records at the cholera treatment centre at Katwe Health Centre III between 1 June and 15 July 2015 to identify patients who met the definition for suspected cases. We also reviewed the data in the Health Management Information System, an electronic disease reporting system managed by the Ministry of Health of Uganda. This database contained basic information on patients with reportable conditions, including name, age, sex, residence, date of admission, date of hospitalization and clinical symptoms. To improve the completeness of case finding, we worked with health workers and key informants who were well informed about health-related issues in the community, including members of village health teams and community leaders, to identify diarrhoeal patients in the community who met the case definition. The community leaders also encouraged everyone with diarrhoea to report to the nearest health facility. Persons who met the definition for suspected cases were identified and interviewed to collect data on their clinical presentations and potential exposures using a case-investigation questionnaire.
Descriptive epidemiology and hypothesis generation
To generate a hypothesis on the mode of transmission for this outbreak, we evaluated the distribution of the case-patients' clinical presentations, constructed an epidemic curve to describe the pattern of case-persons' dates of onset, and computed attack rates by age, sex, place of residence, and educational level. We also conducted hypothesis-generation interviews of 10 case-persons regarding potential exposures during the five days prior to their symptom onset, and of community leaders on whether there had been any large gatherings that could explain this outbreak. Based on the descriptive epidemiology findings, we developed a hypothesis on the probable mode of transmission for this outbreak, and estimated the point in time when the exposure had likely occurred.

Case-control investigation
To test the hypothesis formed from the descriptive epidemiology, we conducted a case-control investigation. Because the vast majority of cases occurred in the Kyarukara settlement zone, we recruited suspected case-patients in this zone to participate in the case-control investigation. If a household had more than one eligible case-person, only one was invited to participate. Controls were selected among residents of Kyarukara settlement zone who had no vomiting or diarrhoea from 1 June 2015 to the time of the investigation. To select controls randomly, we obtained a list of all households in Kyarukara settlement zone, and randomly selected four times as many control-households as the number of cases by using paper lots. For each case-patient, we identified among the randomly selected households four age-matched controls (i.e., within the same five-year age group) whose residences were closest to that of the case-patient. If a household had diarrhoeal or vomiting patients since 1 June, we replaced it with one without such patients in the same neighbourhood. We used a structured questionnaire to collect information from case-persons and controls on food and drinking-water exposures, including the usual source of drinking water, the usual water-collection site, the kind of food they usually ate, and whether they usually ate hot or cold food. However, considering that the epidemic curve indicated a point-source exposure, and that community interviews revealed no large gatherings during which sharing of food might explain this outbreak, we presumed that food exposure was unlikely to be a major driver of this outbreak, except perhaps for the sporadic cases after 22 June. Therefore, we only analysed water exposures. We also collected demographic data (e.g., age, sex, occupation, and education). We trained members of the village health team to administer the questionnaires. To account for the matched design of the case-control investigation, we calculated the Mantel-Haenszel adjusted odds ratios (ORM-H) and their associated 95% confidence intervals (CI) [13].

Laboratory and environmental investigations
We collected stool samples from case-patients and transported them in Cary-Blair media to the clinical laboratory at Bwera Hospital for testing. The stool specimens were first tested using an RDT [14]. RDT-positive specimens were plated on agar for bacterial culture [15].
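For reference, the Mantel-Haenszel estimator used for the matched analysis above pools the stratum-specific 2x2 tables (here, one stratum per matched case-control set). In standard notation, with $a_i$ exposed cases, $b_i$ unexposed cases, $c_i$ exposed controls, $d_i$ unexposed controls and $n_i$ subjects in stratum $i$, the textbook formula (not reproduced verbatim from [13]) is

$$\mathrm{OR}_{M\text{-}H} = \frac{\sum_i a_i d_i / n_i}{\sum_i b_i c_i / n_i}.$$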
For the environmental investigation, the team visited all settlements within the village, observed the water-collection sites used by the village residents, assessed the general water and sanitation conditions in the community, and interviewed the community leaders about water and sanitation practices.

Ethical considerations
The Ministry of Health of Uganda gave the directive and approval to conduct this investigation. Additionally, the Office of the Associate Director for Science, CDC/Uganda, determined that the primary purpose of this investigation was to verify, characterize and control an outbreak; hence it was not human subjects research. We obtained verbal informed consent from case-persons and controls above 18 years of age. For participants under 18 years of age, we sought verbal consent from their parents or guardians and assent from the respondents. We assured the case-patients and controls that their participation was completely voluntary and that there would be no negative consequences should they decide not to participate.

Descriptive epidemiology and hypothesis generation
We identified 61 suspected cases in the village, with no deaths. Of these cases, 19 tested positive by RDT and eight were culture-confirmed as cholera O1 serotype Inaba. The clinical symptoms included acute watery diarrhoea (100%), vomiting (64%), abdominal pain (79%), and self-reported fever (2%). The primary case occurred on 16 June 2015. Cases increased sharply on 18 June, peaked on 19 June, and rapidly declined thereafter. After 22 June, eight additional cases occurred sporadically in the community; the last case occurred on 1 July. This epidemic curve indicated a point-source exposure pattern in the beginning phase of the outbreak [16], followed by the occurrence of scattered cases in the community (Fig 2). The pattern of the epidemic curve in relation to the incubation period for a particular disease can give clues on the source and time of exposure [16]. For cholera O1 serotype Inaba, the median incubation period is 3 days (range: a few hours to 5 days) [2]. Since this was a point-source outbreak, we determined the time of occurrence of the point-source exposure as follows [16]: counting back 3 days (i.e., the median incubation period) from 19 June (i.e., the peak of the epidemic curve), we estimated that the exposure occurred on 16 June; similarly, counting back 5 days (i.e., the maximum incubation period) from 22 June (i.e., the end of the initial point-source epidemic curve), we estimated that the exposure occurred on 17 June. Therefore, the exposure likely occurred between 16 and 17 June 2015. The descriptive epidemiologic analysis of the distribution of cases by place of residence showed that, of the five settlements, Kyarukara had the highest attack rate (21/1000), followed by Top Hill (0.67/1000) and Kiganda (0.36/1000), whereas Kyakitale and Rwenjubu had no cases. The analysis by demographic characteristics revealed that the attack rates were similar between male residents (4.5/1000) and female residents (5.5/1000), and among different age groups (Table 1). These data suggested that the exposure that caused this outbreak affected all sociodemographic subgroups and mainly affected residents of the Kyarukara settlement zone.
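The two back-counting steps and the overall attack rate can be verified with a few lines of arithmetic; the Python sketch below (our own illustration, using the dates and counts reported above and the village population of 12,324) reproduces the 16-17 June exposure window.

from datetime import date, timedelta

median_incubation = timedelta(days=3)   # median incubation, cholera O1 Inaba
max_incubation = timedelta(days=5)      # upper end of the incubation range

epi_peak = date(2015, 6, 19)            # peak of the epidemic curve
end_point_source = date(2015, 6, 22)    # end of the initial point-source wave

print(epi_peak - median_incubation)       # 2015-06-16: earliest likely exposure
print(end_point_source - max_incubation)  # 2015-06-17: latest likely exposure

# Overall attack rate: 61 suspected cases over a village population of 12,324
print(round(61 / 12324 * 1000, 1))        # 4.9 cases per 1000 residents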
Interviews with community leaders revealed no large community gatherings on 16 and 17 June at which contaminated food might have caused the initial point-source outbreak indicated by the epidemic curve. On the other hand, nine of the ten case-persons interviewed usually collected their drinking water from a lakeshore water-collection site "X" (described later), which served Kyarukara, the most affected settlement zone. Also, none of the 10 case-persons interviewed reported treating or boiling their drinking water. In summary, the descriptive epidemiologic analysis and the hypothesis-generating interviews clearly indicated that this was probably a waterborne outbreak caused by drinking contaminated water (likely from water-collection site X) between 16 and 17 June.

Case-control investigation findings
In the case-control investigation, case-persons and controls did not differ significantly in the distributions of age (p = 0.33), sex (p = 0.75) or education level (p = 0.57). However, 97% of the 32 case-persons compared with 62% of the 128 controls usually collected their drinking water from water-collection site X (ORM-H = 16, 95% CI: 2.4-107). On the other hand, individuals who treated their drinking water (i.e., by boiling, filtering, or chlorination with a chlorine tablet) had lower odds of being cholera cases (ORM-H = 0.29, 95% CI: 0.099-0.82) (Table 2).

Findings from the interview of the primary case-person
The primary case-person was a fisherman. During an in-depth interview, he described that he developed diarrhoea on 16 June while fishing on Lake Edward. He subsequently returned to the shore during the early hours of 17 June and reportedly defecated "near" water-collection site X. He received treatment at Bwera Hospital (the regional referral hospital) on 17 June, where he was laboratory-confirmed to have cholera.

Environmental and laboratory investigation findings
Each of the five settlements in Katwe Village had its own water-collection sites. Because there are dangerous lake animals (e.g., Nile crocodiles and hippopotamuses) in Lake Edward, residents of the village used rocks and a fence to surround water-collection points to protect water-collecting villagers, especially children and those collecting water during darkness, from potential attacks by lake animals. Water-collection site X was located on the shore of Lake Edward and served the Kyarukara settlement zone. The rocks and fences surrounding the collection site potentially made the lake water inside stagnant, preventing contaminants from being diluted quickly (Fig 3). We noted during our environmental investigation that not all villagers in the Kyarukara settlement zone collected water inside the fenced area of water-collection site X; some collected water outside of the fenced area. Fig 3 shows the water-collection point implicated during the cholera outbreak. Katwe Village used to have a tap-water system; however, the system broke down eight months prior to this outbreak. The village also had a protected spring, which provided cleaner water. The spring was about three kilometres away from the village centre, up on a hill. After climbing the hill, a water collector often had to wait in line for hours to get a jerry-can of water.
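As a back-of-the-envelope check on the case-control result reported above (31/32 cases vs. 79/128 controls exposed to site X), the crude, unmatched odds ratio can be recomputed as in the Python sketch below; this calculation is ours, ignores the age matching, and therefore only approximates the reported ORM-H of 16.

import math

# Unmatched 2x2 table from the reported counts
a, b = 31, 1    # cases: exposed, unexposed
c, d = 79, 49   # controls: exposed, unexposed

or_crude = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # Woolf method
ci_low = math.exp(math.log(or_crude) - 1.96 * se_log_or)
ci_high = math.exp(math.log(or_crude) + 1.96 * se_log_or)
print(f"crude OR = {or_crude:.1f}, 95% CI {ci_low:.1f}-{ci_high:.1f}")
# -> crude OR = 19.2, 95% CI 2.5-145.4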
A commercial group collected and sold the spring water to the villagers at 1000 Uganda shillings (about US$1.5) per 100 litres, which is approximately the amount of water used by an average household in the village. This price was not affordable for most of the villagers; therefore, they used the free lake water for drinking and other household uses. Interviews of community leaders also revealed that, during the initial few days after the occurrence of the outbreak, some case-patients' family members washed soiled clothes near water-collection site X. After health education was conducted, this practice stopped, as observed by the outbreak investigation team during the investigation. Laboratory investigations found that 19 of the 27 samples collected from case-patients were positive for V. cholerae by RDT, while eight were positive for V. cholerae O1 (serotype Inaba) by stool culture.

Discussion
Our investigation demonstrated that the cholera outbreak in Katwe Village, south-western Uganda during June-July 2015 was a point-source outbreak caused by drinking contaminated water collected from inside a water-collection site on the shore of Lake Edward. Globally, waterborne disease outbreaks often occur in countries where water and sanitation conditions are poor [17-21]. Rural Sub-Saharan Africa has had one of the highest population growth rates globally in recent years, yet access to improved water sources and sanitation facilities has not changed much during the same time period [22]. Consequently, outbreaks of waterborne diseases (including cholera, typhoid fever, bacterial dysentery, and hepatitis E) often occur, endangering the lives and wellbeing of millions of people in Sub-Saharan Africa and beyond [23]. Of these waterborne diseases, cholera has had an especially large impact on African countries since the seventh pandemic reached the continent during the 1970s, with high reported incidence and case-fatality rates [23-26]. In Uganda, an investigation of a previous cholera outbreak among the semi-nomadic pastoralists of the country's north revealed that that outbreak was also caused by drinking untreated water [27]. During other recent cholera outbreaks, drinking-water contamination was often implicated or assumed, although rigorous epidemiologic investigations have rarely been conducted [8,28]. The history of public health has demonstrated time and again that early identification, investigation and response are key to the prompt control of communicable disease outbreaks. This outbreak started on 16 June and was reported on 20 June. We started our investigation on 22 June. During the outbreak investigation, we recommended that the village administrators provide water chlorination tablets to the villagers, issue a water-boiling advisory, and rigorously disinfect all patients' faeces. With the assistance of the investigation team, the village administrators implemented all of these recommendations. After 22 June, eight additional cases occurred sporadically, likely due to transmission of V. cholerae from the initial patients, possibly through the washing of their soiled clothes at the water-collection site during the initial few days of the outbreak, as reported by the village leaders. The outbreak completely stopped after 1 July 2015, once control measures had been aggressively implemented.
In comparison, elsewhere in Kasese District, an outbreak started in another community in late February 2015 but was not responded to until May 2015. That outbreak spread to many communities and lasted several months [10]. While it is impossible to prove that the prompt investigation and aggressive response in our investigation stopped this outbreak early, our findings serve as a reminder of the importance of early detection, prompt investigation and rapid response when an outbreak occurs.

Conclusion
In conclusion, this cholera outbreak was caused by drinking contaminated water from a lakeshore water-collection site. The primary case-person's faeces briefly contaminated the water inside the fenced water-collection site. The water was made stagnant by the rocks and the fence surrounding the site, which might have facilitated this outbreak.
2018-07-09T00:24:15.038Z
2018-06-27T00:00:00.000
{ "year": 2018, "sha1": "b28224dadb0757d54ece5e56521d1c0938bb3dc6", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1371/journal.pone.0198431", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b28224dadb0757d54ece5e56521d1c0938bb3dc6", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Geography", "Medicine" ] }
199508499
pes2o/s2orc
v3-fos-license
Recent Applications of Molecularly Imprinted Sol-Gel Methodology in Sample Preparation
Due to their selectivity and chemical stability, molecularly imprinted polymers have attracted great interest in sample preparation. Imprinted polymers have been applied for the extraction and enrichment of different sorts of trace analytes in biological and environmental samples before their analysis. Additionally, MIPs are utilized in various sample preparation techniques such as SPE, SPME, SBSE and MEPS. Nevertheless, molecularly imprinted polymers suffer from thermal (stable only up to 150 °C) and mechanical stability issues, inadequate porosity and poor capacity. The sol-gel methodology is a promising alternative to address these limitations, allowing the production of sorbents with controlled porosity and higher surface area. Thus, the combination of molecular imprinting technology and sol-gel technology can create powerful materials with high selectivity, high capacity and high thermal stability. This work aims to present an overview of molecularly imprinted sol-gel polymerization methods and their applications in analytical and bioanalytical fields.

Introduction
Molecularly imprinted polymers (MIPs), as privileged sorbents, provide selective recognition sites for a template molecule of interest based on its size, structure and functional groups. MIPs were synthesized by non-covalent methods and implemented for the first time in the 1970s by Mosbach and his group [1]. Thanks to their high specificity and their physical and chemical stability, MIPs have been effectively used in the extraction and microextraction fields and even in sensing applications [2-15]. The common MIP preparation methods require organic monomers (acrylates or acrylic acid) and an organic solution phase, which represent limitations from environmental and biological points of view. The common MIP preparation methods (bulk and precipitation) suffer from limitations such as the short life span of the prepared polymers and the necessity of a relatively expensive initiator (azobisisobutyronitrile, AIBN). Moreover, both covalent and non-covalent MIP preparation methods suffer from drawbacks such as template leaching, low thermal and chemical stability, and poor reusability [16]. To overcome these limitations, the sol-gel method was proposed as a simple, relatively low-cost methodology providing products of high thermal and mechanical stability as solid phases for applications in various areas of research [17]. Sol-gel synthesis occurs by dissolving a metal oxide precursor (M(OR)n) in a low-molecular-weight solvent medium, typically in the presence of an acid or base catalyst. Sol-gel is a simple, manageable and cost-effective method for the production of homogeneous and highly porous metal oxide nanosorbents. The sol-gel process gives us a chance to produce various sorts of nanomaterials or to modify polymer surfaces for applications in different sample preparation techniques [17]. Sorbent swelling, structure deformation and blockage are common problems with MIPs in biological samples. Sol-gel chemistry can produce imprinted selective cavities with longer lifetimes due to the use of silica-based materials with strong and stable structures. In addition, efficient elimination of the template molecule from the MIP network after preparation has always been a challenging issue, one that can be significantly reduced with the sol-gel methodology.
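For readers unfamiliar with the chemistry, the hydrolysis and condensation steps underlying the sol-gel process can be summarized for a generic silicon alkoxide precursor such as TEOS; these are the standard textbook reactions, given here for orientation rather than taken from any of the cited works:

\begin{align*}
\text{Hydrolysis:}\quad & \equiv\mathrm{Si{-}OR} + \mathrm{H_2O} \rightleftharpoons\ \equiv\mathrm{Si{-}OH} + \mathrm{ROH} \\
\text{Water condensation:}\quad & \equiv\mathrm{Si{-}OH} + \mathrm{HO{-}Si}\!\equiv\ \rightleftharpoons\ \equiv\mathrm{Si{-}O{-}Si}\!\equiv{} + \mathrm{H_2O} \\
\text{Alcohol condensation:}\quad & \equiv\mathrm{Si{-}OR} + \mathrm{HO{-}Si}\!\equiv\ \rightleftharpoons\ \equiv\mathrm{Si{-}O{-}Si}\!\equiv{} + \mathrm{ROH}
\end{align*}

An acid catalyst (e.g., the TFA used in several studies below) accelerates hydrolysis, while the condensation steps build the siloxane network around the template molecule.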
The high thermal stability of silica materials allows the use of high temperatures to remove the template from the MIP network. The poor porosity and low surface area that are further drawbacks of MIPs can be improved by using silica-based precursors, which are highly porous and capacious natural materials. The simple concept of the sol-gel technique and the various categories of sorbents accessible with this method have been classified by Collinson [18]. Briefly, numerous types of porous sorbents can be fabricated by performing polymerization, gelation, aging, drying, and heating as the main steps of the sol-gel method. By monitoring and optimizing the factors affecting each of these steps, different sorbent shapes such as uniform nanoparticles (1.5-10 nm), aerogels, xerogels, monoliths and thin films can be produced. In addition, the colloidal crystal templating method, performed by implanting the silica into latex spheres, has been presented to obtain pores in the range of 50 to 1000 nm diameter [19]. Sol-gel is a general way to create two groups of organic-inorganic substrates: hybrid materials with weak interactions between the organic and inorganic components (group I) or with strong covalent binding to a siloxane matrix (group II), as shown in Figure 1 [18]. In a pioneering study by the Shea group, MIPs were prepared by the sol-gel method for the extraction of phosphates and phosphonates from aqueous samples, a great starting point for further developments in this field [20]. In this review we briefly cover the most recent MIP-sol-gel (MSG) preparation methods and their applications in extraction and microextraction studies.

MSG in Solid Phase Extraction (SPE)
SPE, as the most common sample preparation technique, is the main beneficiary of MIP use [21]. However, some aspects of the MIP-SPE method need improvement, such as the low thermal stability, low adsorption capacity, short lifetime and low diffusion speed. The sol-gel technique has been modified and applied by many research groups to address these limitations. In one study, a MSG sorbent was used for the on-line extraction and detection of enrofloxacin in fish and chicken samples [22]. In this work, 3-aminopropyltriethoxysilane (APTES) was employed as monomer, tetraethoxysilane (TEOS) as crosslinker and silica gel as support material in N,N-dimethylformamide (DMF) solvent, and suitable selectivity and sensitivity for measuring enrofloxacin were reported. Using essentially the same method, MSGs were synthesized and used for the extraction of methyl-3-quinoxaline-2-carboxylic acid and quinoxaline-2-carboxylic acid from pork muscle [23], cloxacilloic acid in cloxacillin [24], 2,4-dichlorophenoxyacetic acid [25], florfenicol [26], chrysoidine [27], vitamin D3 [28] and some polar organophosphorus pesticides from almond oil [29]. In an interesting method, a dummy-MSG coated with magnetic graphene oxide was synthesized for the extraction of phthalate esters from water and screening with GC/MS [30].
In a simple and cost-effective method, a titania-based MIP was synthesized without monomer or crosslinker, using sunset yellow (Sun) as template molecule owing to the high binding affinity between the Sun sulfonic acid groups and titanium under acidic conditions [31]. In a novel approach, a core-shell structural multi-walled carbon nanotube (MWNT)-Sudan IV MSG was prepared and used as an SPE sorbent. The MWNT-MIP was used for the on-line SPE-HPLC extraction and measurement of Sudan IV in chili samples [32]. MWNTs represent a remarkable support phase for making core-shell MIPs due to their significant strength, high surface area and unique chemical properties. The developed method showed high efficiency, with over 89% recovery and a 2.3 ng L⁻¹ limit of detection. In this work, 3-aminopropyltrimethoxysilane was used for covalent grafting of silicon-oxygen groups onto the MWNTs' surface. Then, a MIP was created on the MWNT surface using a template molecule and suitable functional monomers and cross-linkers, followed by hydrolysis and condensation steps. Moreover, a magnetic surface ion-imprinted polymer (c-MMWCNTs-SiO2-IIP) was synthesized using magnetic CNTs/Fe3O4 composites (c-MMWCNTs) as the core, APTES as the functional monomer and TEOS as the cross-linker, and applied for SPE of Cu(II) from herbal medicines [33]. In a one-step hydrothermal method, core-shell Fe3O4@MIP nanospheres were easily synthesized and applied for the extraction and detection of bisphenol A in aqueous samples [34]. MSG has also been applied as an SPE sorbent in food analysis, for example for the extraction of iprodione in a white wine sample [35]. MSG has been prepared in various formats, recently on a polyethylene support used for µ-SPE and determination of methadone in human plasma [36]. The tablets were conditioned and immersed in plasma samples, and the amount of extracted methadone was measured by LC-MS/MS. The referenced study presented a simple method; however, the lifetime and recovery aspects still need improvement. Moreover, diethylstilbestrol is a residue harmful to human health due to its potential carcinogenicity, and in an important study it was extracted from milk samples by a magnetic MIP synthesized using a combination of bulk and sol-gel techniques [37]. The most problematic issue in MSG preparation is leakage of the template molecule, especially in the extraction of trace analytes in complex media. To overcome this problem, the dummy silica MSG nanospheres method was presented by Liu for SPE of bisphenol A in food samples [38]. In the dummy method, a compound with a structure similar to the desired analyte is used as the template molecule; in the mentioned study, dihydroxybiphenyl was used as the dummy template for bisphenol A MSG preparation.

MSG in Solid Phase Microextraction (SPME)
The sol-gel method is suitable for addressing some disadvantages of common MIP preparation methods such as swelling, low binding capacity and non-specific binding. Preventing swelling and blockage is a crucial factor in the case of capillary sorbent preparation. In an intriguing study, a molecularly imprinted xerogel (MIX) was used as a capillary sorbent for the microextraction of fentanyl from urine and plasma samples [39]. The xerogel was prepared by adding EPPTMOS (precursor) to fentanyl (template) using 10% (v/v) water and 70 µL TFA under sonication. A peristaltic pump was used to pass the prepared sol through a copper tube for 30 min to ensure the formation of the gel on the inner surface of the tube.
The tube was placed in a desiccator for 12-15 h for further aging and to increase polycondensation. The polycondensation step was accomplished by placing the loop in an oven at a temperature in the range of 50-200 °C for an appropriate period of time. Then, an organic solution (methanol and acetic acid, 9:1) was passed through the prepared loop to remove the template from the xerogel network. The prepared capillary tube was connected on-line to an HPLC loop (Figure 2) and used for all experiments. The developed on-line method showed recoveries of up to 85% for the extraction of fentanyl from biological samples. This robust, on-line method avoids protein precipitation and the dilution of plasma and urine samples. In addition, MSG was used for the surface modification of a commercial fiber and applied for the extraction of diazinon and its structural analogs from aqueous cucumber samples, with detection by gas chromatography-nitrogen/phosphorus detection [40]. In another study, the surface of a needle was modified by a MSG xerogel method and used for the extraction of bilirubin (BR) from complex samples and screening with LC-MS/MS [41]. 3PMTMOS (precursor) and BR (template) were mixed and sonicated. Subsequently, TFA was added as catalyst and the solution was sonicated, followed by the slow addition of water to begin the hydrolysis process; finally, the prepared solution was incubated for 30 min at room temperature. Then, the mixture was passed through the needle to form a thin MSG layer, and the needle was placed in a desiccator for 24 h to complete the aging process. Finally, the needle polycondensation process was accomplished in an oven using a temperature gradient between 50 and 250 °C for 3 h. In comparison with a non-imprinted needle (as blank), this sorbent showed a roughly five-fold better imprinting factor, four times better recovery and four times higher adsorption capacity. The prepared needle was connected to a Hamilton syringe and used for extraction purposes. This MSG showed good stability and could be used for up to 100 extractions in complex biological solutions (Figure 3). This method is a recommendable way to address the fragility of solid-phase fibers and can be applied in the SPME field. In addition, it is a straightforward method, and the product can be connected on-line to liquid and gas chromatography instruments.
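The figures of merit quoted above (imprinting factor, recovery, adsorption capacity) are conventionally defined as follows; these are the standard working definitions in the MIP literature, not formulas reproduced from [41]:

$$Q=\frac{(C_0-C_e)\,V}{m},\qquad \mathrm{IF}=\frac{Q_{\mathrm{MIP}}}{Q_{\mathrm{NIP}}},\qquad R\,(\%)=\frac{\text{amount recovered}}{\text{amount spiked}}\times 100$$

where $C_0$ and $C_e$ are the initial and equilibrium analyte concentrations, $V$ is the sample volume, $m$ is the sorbent mass, and $Q_{\mathrm{MIP}}$ and $Q_{\mathrm{NIP}}$ are the capacities of the imprinted and non-imprinted sorbents, respectively.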
Moreover, in a further development, an SPME probe was prepared on the surface of a stainless-steel wire by the MSG method using chlorpyrifos as template molecule, tetraethoxysilane as sol-gel precursor, and acrylamide and β-cyclodextrin as functional monomers [42]. This probe was used for the residual determination of organophosphorus pesticides in fresh and dry foods by GC-FID. The MIP-SPME probe was a straightforward and robust tool, and showed good sensitivity, reproducibility and selectivity toward the investigated template molecules and their structural analogs.
Monolithic MSG
High-throughput techniques like in-tip sample preparation methods have attracted attention due to their simplicity, stability and suitable recovery. In situ monolithic in-tip MIPs made by the sol-gel process constitute an easy, fast, robust and durable method that has been applied for the selective extraction of L-tyrosine (Tyr), a potential lung cancer biomarker, from biological fluids [43]. In this method the template molecule (Tyr) was mixed and sonicated with the precursor TPM for 30 min. Then, TEOS, TFA as the catalyst, and water were added. The mixture was stirred for 30 min at 70 °C. After that, 0.03 mL of this solution was transferred into a tip, and the tip was kept at 70 °C for 2 h. Then, the tip was left at room temperature for 7 h. Finally, methanol containing 10% acetic acid was used as the solvent to remove the entrapped template (Figure 4). The in-tip monolithic MSG was used for Tyr extraction and measurement by LC-MS/MS with high recovery, accuracy and selectivity. Using almost the same process, an in-tip dummy MIP for SPME of vanillin and methyl vanillin and their determination by HPLC was prepared [44].

Hollow Fiber and Nanofiber Modification and Preparation with MSG
Hollow fibers (HFs) are a well-known alternative to fragile SPME fibers due to their high stability and avoidance of biological matrix interference. The transfer speed of an analyte from solution to the surface is a key factor in HF performance, which can be facilitated by stirring the solution. Various modified sorts of HFs are mostly applied in the liquid-phase microextraction (LPME) field [45]. Recently, the surface of a hollow fiber membrane was modified with MSG as an LPME sorbent for the extraction of hippuric acid from human plasma and urine samples [45].
Recently, the surface of a hollow fiber membrane was modified with MSG as an LPME sorbent for the extraction of hippuric acid from human plasma and urine samples [45]. In this work a polysulfone HF membrane surface was modified with the sol-gel method and used for LPME extraction of hippuric acid from complex matrixes.

In a further approach, an electrospinning method was used for the preparation of MSG nanofibers [46]. Electrospinning is a capable methodology to create micro-/nanofibers through an inexpensive and simple process, and electrospun micro-/nanofibers have been applied to many different applications. The electrospinning of MSG itself is a challenging task; in one work, a simple and novel way to prepare unbreakable MSG nanofibers by the electrospinning technique was developed to overcome this issue, in which Nylon 6 (12% w/w) in 4 mL formic acid was used as a backbone and support for the precursor (Figure 5). The developed method was used for SPME and determination of acesulfame coupled on-line with HPLC. The selectivity of the method for the extraction of acesulfame was evaluated in the presence of saccharin, caffeine, and aspartame in a beverage sample. This robust tool showed proper selectivity toward acesulfame and was used for fifty extractions without any noticeable obstruction.

Other Novel Methods for Preparation of MSG

Recently some interesting methods for MSG preparation have been developed, which we will discuss here briefly.
In a novel study a type of uniform nanomagnetic MIP sorbent was prepared by the sol-gel methodology and applied for the recognition of bovine serum albumin (BSA) [47]. The Fe3O4@BSA-MIPs showed a 5 nm size, which can facilitate mass transfer, and a high saturation magnetization (43.82 emu g−1), which allowed them to be easily separated from solution using an external magnetic field. These nanomaterials showed a proper equilibrium time (15 min) with a good imprinting factor and selectivity coefficient (16.4 and 4.65). The Fe3O4@BSA-MIPs were used successfully for the separation and enrichment of BSA from a bovine blood sample with good recovery and stability. In this method TEOS, APTES and octyltrimethoxysilane (OTMS) were used as monomer and crosslinkers in a two-step process to create core-shell nano-MSG on a Fe3O4@SiO2 surface. The template molecule, APTES and OTMS were mixed separately and then added to the mixture of Fe3O4 and TEOS. The process was followed by adding acid, and finally the template molecules were removed to generate imprinted cavities. The preparation process of Fe3O4@BSA-MIPs is shown in Figure 6.

In an interesting approach, a MIP based on an ionic liquid (IL) sorbent on the surface of multiwall carbon nanotubes (MWCNTs) was prepared utilizing sol-gel methodology [48]. In this method a 3-aminopropyltriethoxysilane-modified multiwall carbon nanotube (MWCNT-APTES) was used as the support surface, BSA as the template, an alkoxy-functionalized IL (1-(3-trimethoxysilylpropyl)-3-methylimidazolium chloride, [TMSPMIM]Cl) as both the functional monomer and the sol-gel catalyst, and TEOS as the crosslinking agent. In this process, the MWCNTs were modified with APTES and a mixture of template and monomer was added to ensure proper covalent binding. The next step was accomplished with TEOS for hydrolysis and polycondensation. Finally, the elimination of the template from the matrix revealed the specific binding spots.
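The imprinting factor and selectivity coefficient quoted for the Fe3O4@BSA-MIPs (16.4 and 4.65) are conventionally derived from distribution coefficients; the standard definitions, which ref. [47] may implement with small differences, are:

\[ K_D = \frac{C_i - C_f}{C_f}\cdot\frac{V}{m}, \qquad k = \frac{K_D(\mathrm{template})}{K_D(\mathrm{competitor})}, \qquad k' = \frac{k_{\mathrm{MIP}}}{k_{\mathrm{NIP}}}, \]

where C_i and C_f are the initial and final solution concentrations, V is the solution volume, m is the sorbent mass, k is the selectivity coefficient, and k' is the relative selectivity, which separates the contribution of the imprinted cavities from non-specific adsorption on the matrix.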
In this controllable method, effective parameters to increase selectivity (MIP cavity shapes) and decrease non-specific binding, such as the pH value and ionic strength of the incubation solution, were addressed and optimized.

Recently, a novel monolithic magnetic molecularly imprinted nanoparticle stir-bar was prepared by sol-gel methodology for the extraction of thiabendazole (TBZ) and carbendazim (CBZ) from orange samples [49]. In this method, oleic acid was used to modify the surface and then a sol-gel procedure was employed to encapsulate the particles.

The sol-gel method can help overcome some weaknesses of previous common MIP preparation techniques. However, sol-gel techniques cannot solve all issues, and to probe this point a study was performed by the Kadhirvel group [50]. In this work, MIPs built on both acrylic and sol-gel tridimensional networks were packed in columns for the selective extraction of naproxen. All related parameters were optimized, and the results showed that the sol-gel approach improved the selectivity, while the acrylic approach presented better mass transfer, efficiency and porosity. This interesting comparison highlights the potential of preparing composite imprinted materials by mixing acrylic and silica-based precursors in future studies.

In a further approach, microspheric particles of MIX were prepared by filling up the pores of spherical, mesoporous, bare silica particles with a pregelification mixture using pressure. Then a thin layer of MIX was created in the mesopores using gelification and a drying step. In order to prevent extensive outer-surface deposition, several parameters needed to be optimized, such as the amount of porogen, the pressurization time and the selection of a proper washing solvent for the pore-filling step. The results proved that uniform pore-filled silica particles increased the adsorption capacity and facilitated the analyte binding process. This spherical composite showed high selectivity for the separation of (S)-naproxen in the presence of ibuprofen (α = 4.9, imprinting factor = 13). In comparison with bulk polymerization methods, this method displayed outstanding column efficiency (9 vs. 1.2 theoretical plates/cm) [51].

MSG is a flexible method and it has also been applied in real life. In one such application, MSG was prepared and used as an antidandruff agent [52]. To prepare the MSG composite, a mixture of silane, tetra(C1-C4)alkyl orthosilicate, porogen solvent and a C14-C20 fatty acid (as template molecule) was used. The results proved that MSG can selectively trap C14-C20 fatty acids, the main cause of dandruff formation. Additionally, in an attractive work, a water-compatible MSG polymer was used for the controlled release of salicylic acid as an anti-inflammatory drug [53]. In vivo investigations showed that MSG has a lower binding capacity and a higher imprinting factor in water media in comparison with organic solvents. Moreover, from the safety and toxicity standpoint, MSG particles of more than 300 nm in size would not cross the skin barrier. MIPs are still not prevalent in drug delivery applications due to their poor compatibility with polar environments, and the results of the referred work proved that MSG could be a potential alternative in this field.
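A short gloss on the chromatographic figures quoted for the pore-filled naproxen column above; these follow the standard textbook relations rather than anything specific to ref. [51]:

\[ \alpha = \frac{k_2}{k_1}, \qquad N = 5.54\left(\frac{t_R}{w_{1/2}}\right)^2, \qquad \mathrm{plates/cm} = \frac{N}{L}, \]

where k_1 and k_2 are the retention factors of the less- and more-retained analytes, t_R is the retention time, w_{1/2} is the peak width at half height, and L is the column length in cm. On this basis, α = 4.9 indicates that (S)-naproxen was retained almost five times more strongly than ibuprofen, and 9 vs. 1.2 plates/cm corresponds to roughly a 7.5-fold gain in efficiency per unit column length over the bulk-polymerized packing.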
In conclusion, the recent applications of MSG in different sample preparation fields are summarized in Table 1.

Table 1. Recent applications of MSG in different sample preparation fields.

Analyte(s)                      Extraction format                           Instrument    Matrix                           Ref.
Organophosphorous pesticides    Stainless-steel wire SPME                   GC            Fresh and dry foods              [42]
l-Tyrosine                      Monolithic in-tip                           LC-MS/MS      Human plasma and urine samples   [43]
Vanillin and methyl vanillin    Monolithic in-tip                           HPLC          Milk powder                      [44]
Hippuric acid                   Hollow fiber liquid-phase microextraction   -             Human plasma and urine samples   [45]

Conclusions

In this review, recent applications of sol-gel methodology for the preparation of imprinted polymers were discussed. Sol-gel methodology not only facilitates MIP preparation but also improves thermal and chemical stability. MSG materials can be prepared in various formats and can be applied easily in different sample preparation techniques. However, some aspects of the MSG method, such as selectivity, sensitivity and lifetime, still need improvement to make it more broadly applicable in the near future.

Funding: This research received no external funding.

Conflicts of Interest: The authors declare no conflict of interest.
Unusual Cause of Respiratory Distress in a Term Neonate

Background: Respiratory distress is a clinical finding often seen in neonates. Common causes of respiratory distress in this population include respiratory distress syndrome, transient tachypnea of the newborn, infection, aspiration, and cardiac etiologies. We present the case of a neonate who presented with respiratory distress with no identifiable cause on initial workup. The patient was eventually found to have a variant of a genetic mutation that predisposed the infant to this presentation.

Case Report: A term male infant born via spontaneous vaginal delivery was admitted to the pediatric service at 3 weeks of age because of tachypnea. Chest x-ray showed perihilar infiltrates. Septic screen, thyroid function test, sweat test, echocardiogram, intracranial ultrasound, and modified barium swallow were normal. Computed tomography scan of the chest showed ground glass opacities in the upper and lower lobes. Airway evaluation showed no evidence of obstruction or anatomic abnormalities. Bronchoscopy showed no masses or tracheomalacia. Bronchoalveolar lavage was negative for infection. The infant was treated with intravenous antibiotics, steroids, and furosemide but continued to be tachypneic and required supplemental oxygen. Genetic studies were obtained to assess for surfactant deficiencies, and the patient was transferred to another center for a higher level of care. Genetic evaluation was positive for NKX2.1 variance mutation C.190C. The patient's symptoms improved, and he was weaned to room air by 3 months of age.

Conclusion: When evaluating a child with unexplained pulmonary disease, clinicians should have a high index of suspicion for interstitial lung disease, including surfactant protein mutations.

INTRODUCTION

Respiratory distress is common in neonates but typically improves with treatment during an expected period. Common causes of respiratory distress in term neonates are respiratory distress syndrome, transient tachypnea of the newborn, infection, persistent chemical pneumonia from meconium aspiration, structural respiratory anomalies, and cardiac etiologies. 1 Children who present with respiratory distress may have other symptoms including nasal flaring, grunting, tachycardia, tachypnea, cough, and other nonspecific symptoms 2; thus, a thorough evaluation must be conducted to determine the underlying cause. We present the case of a 3-week-old term infant who presented with respiratory distress requiring supplemental oxygen and no identifiable cause on initial workup. The patient was eventually found to have a genetic mutation causing surfactant dysfunction, predisposing the infant to this presentation.

CASE REPORT

A term male infant born via spontaneous vaginal delivery was admitted to the pediatric service at 3 weeks of age because of tachypnea. Perinatal history was unremarkable except for meconium-stained fluid; the infant was discharged home on day of life 2. At the patient's 3-week well visit, his mother reported a 2-day history of increased work of breathing and dry cough. Review of systems was negative for stridor, fever, vomiting, sweating with feeds, or apneic episodes. Initial physical examination was significant for a respiratory rate of 88/min, with increased work of breathing. Radiography of the chest showed bilateral perihilar infiltrates that were greater on the right than the left (Figure 1). Brain natriuretic peptide (BNP) was elevated at 895 pg/mL (reference range, 0-99 pg/mL).
Echocardiogram showed a patent foramen ovale, ejection fraction of 70% to 75%, and no evidence of pulmonary hypertension. Intracranial ultrasound obtained to evaluate other causes of elevated BNP was normal. Infectious workup including blood count, C-reactive protein (CRP), and blood culture was negative. The patient was discharged home after 4 days without an apparent cause for his tachypnea.

The patient was readmitted 1 week later because of persistent tachypnea (respiratory rate ≥90/min), new-onset hypoxemia (89% to 93% on room air), and retractions. Cardiology, pulmonology, and otolaryngology specialists were consulted. Septic screen, metabolic panel, and thyroid function tests were negative. Sweat chloride test was negative. Computed tomography (CT) scan of the chest showed ground glass opacities in the lower lobes (Figure 2).

Figure 1. Chest radiograph showed bilateral perihilar hazy infiltrates that were greater on the right.

Modified barium swallow study ruled out aspiration, and contrast esophagram was normal. Airway evaluation showed no evidence of obstruction or anatomic abnormalities. Bronchoscopy showed no masses, lesions, or tracheomalacia. Bronchoalveolar lavage was negative for infection. The infant was treated empirically with intravenous piperacillin/tazobactam 80/10 mg/kg 3 times daily for 5 days, oral azithromycin 10 mg/kg daily for 1 day followed by 5 mg/kg daily for 4 days, 1 dose of oral furosemide 2 mg/kg, and prednisolone 1 mg/kg twice daily for 5 days. He required supplemental oxygen via nasal cannula during the entire 15-day hospital stay.

Most of the common underlying etiologies for respiratory distress were ruled out. In view of the patient's clinical presentation and CT scan, interstitial lung disease was suspected. Genetic studies for surfactant protein mutations were obtained before the patient was transferred. The genetic evaluation was positive for NKX2.1 variance mutation C.190C. The patient was transferred to a higher-level center for further management and eventually discharged home on oxygen. No further testing was done at the higher-level center as the infant's clinical condition was improving. He was followed closely by a pediatric pulmonologist and pediatrician. Aside from his tachypnea and hypoxemia, his growth was appropriate. No further symptoms developed. Oxygen therapy via nasal cannula was continued until about 3 months of age, when he was successfully weaned off the oxygen and his symptoms resolved. The patient's parents report that the infant is well, has not needed oxygen therapy, and has no signs of respiratory distress or hospitalizations.
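As an aside on the weight-based regimens listed above, per-dose amounts scale with the infant's weight. The short sketch below illustrates the arithmetic using a hypothetical 4 kg infant; the weight is our assumption (it is not reported in the case), and this is an illustration only, not dosing guidance.

```python
# Hedged illustration of weight-based dosing arithmetic for the regimens
# described in the case. The 4 kg weight is hypothetical; not dosing advice.
weight_kg = 4.0

regimens = {
    # drug: (dose in mg/kg per administration, administrations per day)
    "piperacillin": (80, 3),
    "tazobactam": (10, 3),
    "azithromycin (day 1)": (10, 1),
    "azithromycin (days 2-5)": (5, 1),
    "furosemide (single dose)": (2, 1),
    "prednisolone": (1, 2),
}

for drug, (mg_per_kg, per_day) in regimens.items():
    per_dose = mg_per_kg * weight_kg          # mg per administration
    daily = per_dose * per_day                # mg per day
    print(f"{drug}: {per_dose:.0f} mg/dose, {daily:.0f} mg/day")
```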
DISCUSSION

Genetic mutations of NKX2.1, previously known as TTF1 (thyroid transcription factor 1), on chromosome 14q13 have an autosomal dominant pattern of inheritance and have been associated with brain-lung-thyroid defects. 3 Our patient did not have any pertinent family history, which might be explained by the fact that this syndrome is characterized by highly variable penetrance and expressivity; it can involve only 1 organ system or any combination of all 3, including neurologic manifestations, pulmonary disease, and congenital hypothyroidism. 4 Neurologic manifestations include hypotonia that can progress to chorea and/or ataxia. Pulmonary involvement, the second most common manifestation, can include respiratory distress syndrome in neonates, interstitial lung disease in young children, and pulmonary fibrosis in older patients. 5 Hypothyroidism associated with this gene mutation can be attributed to decreased thyroid regulator production and development. 5 In our patient, no neurologic or thyroid abnormalities were identified, and the family denied any family history; the child presented solely with respiratory symptoms despite a negative workup, which ultimately could be explained by NKX2.1 gene variability. Identifying the mutation in our patient led to relief for the family in knowing the diagnosis and will enable screening and possible early identification of endocrine or neurologic manifestations.

In the pulmonary system, homeobox NKX2.1 protein expression can cause a rare form of progressive respiratory failure that is highly correlated with altered surfactant production. 6 NKX2.1 protein expression regulates the development of lung structures by regulating respiratory epithelial cell genes and, thus, is important in surfactant protein metabolism. Genetic variants of NKX2.1 have been associated with decreased surfactant production contributing to neonatal respiratory distress syndrome and the development of interstitial lung disease. 6 The decreased surfactant causes increased alveolar surface tension, predisposing to atelectasis, increased ventilation-perfusion mismatch, and an increased pulmonary inflammatory response that raises the potential for lung injury. In severe cases, disrupted lung development is a likely mechanism for the respiratory disease. 7 Histopathology findings support the hypothesis that disruption of NKX2.1 targets functional and structural lung development and surfactant homeostasis. 7-10

The finding of ground glass opacities on chest CT scan in our patient and his continued respiratory distress may be explained by NKX2.1 gene dysfunction and correlated to decreased surfactant production resulting from altered protein metabolism. 6 In children with NKX2.1 gene mutation, repeated episodes of respiratory distress and decreased lung immunity can lead to increased vulnerability to lung infections. 4 In most patients with this gene mutation, mechanical ventilation is required to maintain adequate lung function at birth. 3,4,7-9 Further research indicated that hydroxychloroquine, in addition to azithromycin and prednisolone, is another treatment modality that can be used to improve lung function in patients who present with respiratory distress and lung infections. 11 Hydroxychloroquine has been associated with improved surfactant protein production; however, further research is required because this association is currently indeterminate. Our patient did not require mechanical ventilation, but he was treated with azithromycin, oral steroids, and supplemental oxygen. Although evidence is limited that this treatment combination was the reason for the improvement in our patient's symptoms, we can speculate that it played a part in improving the underlying lung disease.

CONCLUSION

An in-depth evaluation is required in term infants who present with recurring respiratory symptoms when all other causes of tachypnea and hypoxia have been ruled out. Despite its rarity, clinicians should have a high index of suspicion for pulmonary disease caused by surfactant protein mutations when evaluating a child with unexplained respiratory symptoms.
NKX2.1 is a genetic defect that can potentially cause interstitial lung disease and should be suspected in neonates with hypothyroidism or neurologic abnormalities; however, respiratory distress secondary to this mutation can present even without hypothyroidism or neurologic abnormalities.
Phenotypic Characteristics of Yeasts of the Genus Candida and Cryptococcus in Differential Culture Media

In the clinical mycology laboratory, the identification of yeast species is done by screening in specific media, such as chromogenic agar for Candida species and Niger seed agar for Cryptococcus species, both of which are of clinical interest. This study aimed to evaluate the growth and morphological characteristics of yeasts of the Candida and Cryptococcus species in different culture media. Yeast species included in the study were: C. albicans, C. dubliniensis, C. glabrata, C. tropicalis, C. krusei, C. lipolytica, C. parapsilosis, C. metapsilosis, C. orthopsilosis, C. neoformans, C. gattii, C. flavescens, and C. albidus. The media used were Sabouraud dextrose agar, Sabouraud dextrose broth, hypertonic Sabouraud broth (plus 6.5% NaCl), Candida chromogenic agar, methyldopa agar, Niger seed agar and tobacco agar. Growth, color, size, presence of fringes, melanin and the appearance of the colony were evaluated. All isolates grew in the media used, except for the hypertonic Sabouraud broth; on Candida chromogenic agar, C. albicans and C. dubliniensis presented a green color and C. tropicalis a blue color, while other species showed colors including pink, purple, gray and white; on Niger seed agar, C. neoformans, C. gattii and C. flavescens presented a brown color, while the others had white colonies; on tobacco agar, the colors included white, cream and gray; and on methyldopa agar, all colonies were white. Some isolates presented colonies with fringes on the tobacco, methyldopa and Niger seed agar; the presence of melanin was observed for Cryptococcus isolates on the Niger seed and tobacco agar; the appearance of colonies in the media varied from opaque to shiny or mucoid, according to the isolate and the culture medium. All of the culture media used allowed the growth of the tested isolates, except for C. lipolytica, which did not grow in hypertonic Sabouraud broth. The isolates of Cryptococcus, C. krusei and C. dubliniensis presented a significant reduction of growth in hypertonic Sabouraud broth.

Cryptococcus species are cosmopolitan and are found in the environment, associated with bird excreta and different vegetable debris. They are agents of human and animal mycoses, with two main species being involved: Cryptococcus neoformans and Cryptococcus gattii (Kwon-Chung and Bennett, 1992). Some species of Candida, in turn, inhabit the body surface of animals and humans, but have also been isolated from the environment, including the air and water. Candida albicans is the best known species, accounting for the majority of cases of candidiasis (Lacaz et al., 2002).

In the mycology laboratory, the identification of yeasts is performed by screening with classic tests, such as the germ-tube formation test, microculture on corn-tween 80 agar and auxanograms, which are generally more time-consuming. Other, faster tests may be included, such as miniaturized commercial kits containing colorimetric, biochemical or enzymatic tests; these are more practical but incur higher costs. The rapid confirmation of a fungal infection is important so that treatment can be specific and started as soon as possible (Kwon-Chung and Bennett, 1992; Lacaz et al., 2002). The identification of a fungus from a clinical sample begins with the direct examination of the sample, followed by its isolation in culture.
From the culture, the characteristics of the colonies, such as appearance and color, are observed on the specific media for fungal growth, for example Sabouraud agar, chromogenic agar, Niger seed agar and sunflower agar. Subsequently, a microscopic analysis of colonies from these cultures is performed, as well as additional biochemical tests, following the identification algorithm (Kwon-Chung and Bennett, 1992; Lacaz et al., 2002).

Yeasts of the genus Cryptococcus are generally identified by the direct investigation of fungal structure in the clinical material, characterized by the presence of a capsule surrounding the cell, followed by culture. After the isolation of the colonies, the identification is performed as follows: evaluation of the in vitro production of urease, melanin production in media containing phenolic compounds, such as Niger seed agar, and confirmation of the presence of a capsule surrounding the cells. The differentiation between species of this genus is possible through physiological tests such as the assimilation of carbohydrates and nitrogen from different sources. The species C. neoformans and C. gattii, which are the most frequently isolated, can also be differentiated using bromothymol blue canavanine-glycine agar, on which C. neoformans is unable to grow, whereas C. gattii grows and changes the medium color from the original green to an intense blue (Freydiere et al., 2001; Lacaz et al., 2002; Pedroso et al., 2007).

Species of the genus Candida are differentiated in the laboratory by phenotypic tests such as the germ tube test, filamentation tests on corn agar plus tween 80, auxanograms and zymograms. The most frequent species in the clinical mycology laboratory are: C. albicans, C. glabrata, C. tropicalis, C. parapsilosis, C. krusei and C. guilliermondii. Another species, C. dubliniensis, although not very frequent, presents phenotypic and biochemical characteristics that are very similar to C. albicans, meaning that the two need to be differentiated for epidemiological purposes and also because they can present different responses to antifungals (Kirkpatrick et al., 1998). A more intense green color on chromogenic agar would be indicative of C. dubliniensis, while other practical tests, such as growth at variable temperatures and morphology on tobacco agar, among others, would be able to differentiate between these species (Freydiere and Guinet, 1997; Loreto et al., 2010; Pasligh et al., 2010). Currently, another species, C. auris, has emerged as a resistant species, sometimes presenting resistance to one or more classes of antifungals, which makes its identification and the in vitro susceptibility testing of antifungals extremely important (Arauz et al., 2018; Arendrup and Patterson, 2017).

In the context of the use of culture media for screening and presumptive identification in mycology laboratories, this study aimed to evaluate the growth and morphocolonial characteristics of Candida and Cryptococcus species in different culture media.

Microorganisms and culture media

Twenty-one isolates of the Candida and Cryptococcus genera, including reference strains (INCQS, ATCC), controls (here called CFP, CP) and environmental (AMB 1) and clinical isolates (the others) used in the laboratory, were evaluated.
Specifically, the isolates represented the Candida and Cryptococcus species listed above.

Prior to the execution of the tests, samples stored in BHI-glycerol, kept at -20 °C, were spiked onto Sabouraud agar and incubated at 30 °C for 48-72 h; afterwards, two more seedings were performed on the same media for the complete reactivation of the isolates, and then the tests were performed.

In vitro tests

Suspensions of each isolate were made in sterile saline (NaCl 0.9%), with turbidity equivalent to the 0.5 McFarland standard, prepared from a 48-hour culture on Sabouraud dextrose agar (SDA). From those suspensions, 10 microliters of each were seeded on the surface of agar contained in Petri dishes (90 × 15 mm). This was performed for each of the culture media: Niger seed agar (NSA), tobacco agar (TBCA), methyldopa agar (MDA) and Candida chromogenic agar (CHRmA). Seeding was undertaken on four plates of each culture medium. Subsequently, the plates were incubated at 30 °C for four days, with daily growth monitoring. The characteristics analyzed on all of the plates were: growth, color and size of the colonies, formation of fringes at the periphery of the colonies, and the production of melanin on the Niger seed, tobacco and methyldopa agar.

Growth tests in Sabouraud dextrose broth (SDB) and hypertonic Sabouraud dextrose broth (HSDB) were performed by transferring 20 μl of the suspension of each isolate in saline solution to two tubes containing SDB or HSDB (in duplicate). The tubes were incubated at 30 °C for four days. SDB was used as a growth and turbidity control for comparison with HSDB. After incubation, the cultures were visually examined for turbidity of the broth, which was compared to the McFarland scale tubes, and the test results were expressed according to turbidity.

Results and Discussion

All isolates of the genera Candida and Cryptococcus grew on the agar media (Niger seed, methyldopa, tobacco and Candida chromogenic agar; Table 1). On MDA, all colonies were white, except for C. neoformans and C. gattii, which were light brown. Regarding the size of the colonies, all were small, except for C. albidus and C. neoformans, which had larger colonies when compared to those formed on SDA. On NSA, most of the isolates presented smaller colonies than on SDA, with the exception of five isolates: C. krusei, C. flavescens, C. albidus and both C. neoformans isolates, whose colonies were similar in size to those observed on SDA. Fringe formation was observed in four isolates: C. flavescens, C. gattii and the two C. neoformans. Melanin production was observed in the colonies of C. flavescens, C. gattii and the two C. neoformans isolates. On TBCA, all isolates had smaller colonies than those on SDA, ranging from bright or opaque to mucoid, and from white, gray or beige to brown in color. Melanin was evidenced in two isolates of C. neoformans and one of C. gattii. One isolate of C. dubliniensis and one of C. krusei were able to form fringes on TBCA. On CHRmA, the colors of the colonies included purple, green, beige, blue, white, pink and gray; most of the isolates presented small colonies in comparison to SDA, and no isolates formed fringes. In addition, the colonies presented variable aspects, and all colonies of C. glabrata and Cryptococcus spp. presented a brilliant appearance. In SDB, all isolates of Candida and Cryptococcus presented growth. In HSDB, the isolate of C. lipolytica did not show growth, while the Cryptococcus flavescens isolate also presented film formation on the broth surface (Table 2). The detailed characteristics of the isolates tested are shown in Tables 1 and 2.
In this study, morphological characteristics of Candida and Cryptococcus species were evaluated in different culture media. These yeasts are clinically important because they are capable of causing serious diseases in humans, especially in immunocompromised individuals (Khawcharoenporn et al., 2007; Pfaller and Diekema, 2004).

The ideal culture medium for fungal growth and development is one that has all of the necessary substrates for the in vitro reproduction of microorganisms, such as nitrogen, carbon, micronutrients, water, and others. For fungi, unlike bacteria, the isolation and identification of the main species can only be carried out with a few culture media in microbial laboratories (Lacaz et al., 2002). Some culture media are widely used, with different purposes such as screening, selecting and differentiating specific isolates, and the preliminary identification of isolates present in clinical samples; these include Sabouraud dextrose agar, corn agar, chromogenic agar, Niger seed agar, tobacco agar, methyldopa agar and hypertonic Sabouraud dextrose broth (Freydiere and Guinet, 1997; Loreto et al., 2010; Menezes et al., 2011; Pasligh et al., 2010). Niger seed agar, for instance, is a selective and differential medium for the species Cryptococcus neoformans and C. gattii, which show colonies with brown pigment (Kwon-Chung and Bennett, 1992). Tobacco agar is another medium used to verify the production of the melanin pigment in Cryptococcus species, but also for the differentiation between Candida albicans and C. dubliniensis (Loreto et al., 2010; Silveira-Gomes et al., 2011). Chromogenic agar is a selective and differential medium for yeasts of the genus Candida, allowing differentiation between some species according to the coloration of the colonies; for example, the colonies of C. albicans and C. dubliniensis present a green color, while colonies of C. tropicalis show a blue color, colonies of C. krusei are lilac and dry, whereas other species may have white, cream or gray coloration (Odds and Bernaerts, 1994; Rousselle et al., 1994).

In the present study, all isolates of Cryptococcus spp. grew on NSA and TBCA. Regarding melanin expression, only Cryptococcus albidus var. albidus was unable to produce it on TBCA; however, a brownish pigment was evidenced on NSA. The production of melanin in specific media by other species of Cryptococcus besides C. neoformans and C. gattii has been demonstrated in the literature (Menezes et al., 2011; Pedroso et al., 2007). None of the Candida species presented melanin in these media. The interesting finding here was the production of melanin by C. flavescens on NSA, a fact that has not yet been described in the literature, and which could be elucidated once the metabolic pathways related to the expression of laccase and phenol oxidase-like enzymes are studied and described.

MDA was proposed as a minimal, chemically controlled medium for the expression of melanin in Cryptococcus spp. (Menezes et al., 2011). The melanin-producing species presented a very light brown color, which is difficult to verify and is only possible when compared with a control isolate that does not express the pigment. On this culture medium it is interesting to note that pH seems to play a key role in the expression of melanin, since the pigment is evidenced at pH between 5.0-5.5. This medium only provided evidence of fringes for the species C. tropicalis and C. krusei,
with no evidence of other characteristics that would be useful for the screening of other species in a routine laboratory.

The formation of fringes around colonies was observed in a few isolates on the media TBCA, MDA and NSA. This characteristic is presented by some isolates but is dependent on the species and medium; therefore, it may not be a characteristic that can be used in the screening of microorganisms in the laboratory. However, it is a feature that can be used in studies that include morphotyping (Bacelo et al., 2010).

TBCA was proposed as a means of differentiation between C. albicans and C. dubliniensis. According to the literature, colonies of C. dubliniensis on TBCA, as well as on NSA, have fringes due to the abundant mycelium and numerous chlamydospores; this does not occur with colonies of C. albicans, which are smooth and do not produce chlamydospores in those media (Liverio et al., 2017; Loreto et al., 2010). Contrary to the expected results, in the present study only colonies of one of the three isolates of C. dubliniensis showed fringes, which suggests that this characteristic needs to be further explored and studied to determine the optimal incubation conditions and the interfering factors.

The development of Candida chromogenic agar was an evolution in relation to culture media, as it enabled laboratories to identify C. albicans, the most frequent species in clinical samples, with a high degree of certainty (Odds and Bernaerts, 1994). In general, in this study, the isolates showed different morphological aspects, ranging from bright to opaque. The results of this analysis are in agreement with the expected results widely described in the literature. The differentiation between C. albicans and C. dubliniensis on chromogenic agar has been suggested by some authors (Liverio et al., 2017); however, it seems to be difficult, and only when isolates from these two species are cultivated simultaneously is it possible to perceive the variation in the shade of green produced, according to the authors' experience. On the other hand, different chromogenic culture media from different brands are on the market, and their performance differs, as shown by recent studies (Vecchione et al., 2017). Therefore, it is up to each laboratory to evaluate the benefits and costs, and to adopt the method that best meets its needs.

The hypertonic Sabouraud dextrose broth was proposed to differentiate C. albicans from C. dubliniensis, as the latter shows inhibition or a marked reduction of growth (Mahelová and Ruzicka, 2017; Silveira-Gomes et al., 2011). In the present study, it was observed that many isolates had a decrease in growth in HSDB compared to SDB, according to the turbidity scale. In agreement with the literature, C. dubliniensis presented a marked decrease on the McFarland turbidity scale: the three isolates of C. dubliniensis presented turbidity equivalent to tube 7 in SDB, while the turbidity in HSDB ranged from 0.5 to 1. In HSDB, isolates of C. krusei and C. lipolytica showed no growth or presented significant growth reduction. Cryptococcus isolates showed variable growth in SDB, and decreased growth in HSDB.
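One simple way to express the growth reduction described above is as a ratio of the McFarland-equivalent turbidities in the two broths. The sketch below uses the C. dubliniensis values reported in this study (tube 7 in SDB vs. 0.5-1 in HSDB); it is only an illustrative calculation, not a method taken from the paper, and it assumes turbidity scales roughly with cell density.

```python
# Illustrative calculation: percent growth reduction in hypertonic broth,
# from McFarland-equivalent turbidity readings reported above.
def growth_reduction(sdb_turbidity: float, hsdb_turbidity: float) -> float:
    """Fractional reduction of turbidity in HSDB relative to SDB."""
    return 1 - hsdb_turbidity / sdb_turbidity

# C. dubliniensis in this study: ~tube 7 in SDB, tube 0.5-1 in HSDB
for hsdb in (0.5, 1.0):
    r = growth_reduction(7.0, hsdb)
    print(f"HSDB turbidity {hsdb}: ~{r:.0%} reduction vs. SDB")
```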
As the Cryptococcus isolates are aerobic, little growth is expected under static incubation in liquid medium, but a further reduction of growth was verified in the hypertonic broth. Cryptococcus species require the presence of oxygen for growth, so they will have difficulty growing in liquid media (Kwon-Chung and Bennett, 1992). Because the culture medium holds a suspension of microorganisms, the inoculated cells tend to settle to the bottom of the tube, where the absence of oxygen will reduce or impede growth. Incubation of the culture media in this study was performed in a static manner; however, if the cultures are incubated with shaking, this interference can be eliminated. Also, it was observed that the isolates of Candida krusei, Candida lipolytica and Cryptococcus flavescens formed a growth film on the surface of the broth in SDB. This has not been previously described and may help the laboratory in identifying those species.

In general, all of the media allowed the growth of the tested isolates, with the exception of C. lipolytica, which did not grow in HSDB. On the other hand, the isolates of Cryptococcus, C. krusei and C. dubliniensis showed a significant reduction of growth in HSDB. Phenotypic methods remain the main way to identify or screen fungi isolated in the medical laboratory, especially in small laboratories with a small sample processing capacity that are located far from major centers. More advanced methodologies, such as those based on DNA or proteomics, are restricted to large laboratories and/or reference centers because of the high cost and complexity of the techniques. Thus, studies that seek to improve the screening of species in culture media already used in the routine laboratory, as well as the testing of other media, should be encouraged and disseminated, so that the most frequent fungal species in mycology laboratories can be identified with greater accuracy.
Pre-employment transition services for students with disabilities: A scoping review

BACKGROUND: Students with disabilities often experience numerous challenges in terms of finding employment. Given the important role of vocational rehabilitation counselors in supporting employment activities for these students, a need exists for identifying effective strategies that increase employment outcomes for this population.

OBJECTIVE: The objective of this scoping review is to examine and describe successful research-based interventions on pre-employment transition services for students with disabilities that can be used by vocational rehabilitation counselors.

METHODS: The search strategy examined literature from 1998 through 2017 focused on vocational rehabilitation counselors, students with disabilities, and elements related to pre-employment transition services. Articles included American, European, and Australian literature published in English.

RESULTS: This review identified a number of research-based interventions that support employment outcomes for students with disabilities.

CONCLUSIONS: The research-based interventions identified in this scoping review can help vocational rehabilitation counselors consider effective strategies for increasing employment outcomes for students with disabilities.

Introduction

People with disabilities experience substantial challenges in terms of employment rates, wages, advancement, workplace barriers, accommodations and supports (Bouck & Joshi, 2015; Crudden, 2012; Dong, Fabian & Leucking, 2016). The National Longitudinal Transition Study-2 (NLTS2) found that among youth with disabilities who had been out of high school 1 to 4 years, 58 percent worked full time at their current or most recent job (Newman et al., 2009). In contrast, almost 80 percent of transition-aged youth without a disability and not enrolled in high school were employed (U.S. Bureau of Labor Statistics, 2017). For people with a disability, typical targeted outcomes for employment include finding a position in the competitive labor market, job stability, wages and benefits on a par with other employees who do not have a disability, hours worked, and job satisfaction. While most of these are common to any job seeker or employee, the degree of success in achieving any one of the outcomes can be impacted by the type and severity of the disability. According to the NLTS2 study, individuals diagnosed with disabilities related to learning, speech/language, hearing, or mental health were significantly more likely to be employed after high school than individuals diagnosed with disabilities related to intellectual, visual, or orthopedic functions, autism spectrum disorder, traumatic brain injury, or who had multiple disabilities or deaf-blindness (Newman et al., 2009).

To improve employment outcomes for people with disabilities, vocational rehabilitation (VR) counseling has been identified as a key component to support transition-aged youth in obtaining employment. The U.S. Department of Education found that among persons who achieved an employment outcome because of VR services, 76 percent were still working three years after exit compared to 37 percent of people who were eligible for VR services but did not receive them (Hayward & Schmidt-Davis, 2003). The Workforce Innovation and Opportunity Act (WIOA), which amended the Rehabilitation Act of 1973, now requires VR agencies to set aside at least 15 percent of their federal funds to provide "pre-employment transition services . . .
[to] students with disabilities who are eligible or potentially eligible for VR services" (Workforce Innovation and Opportunity Act of 2014). These required pre-employment transition services (Pre-ETS) comprise five service elements.

Job exploration counseling

Job exploration counseling includes activities to foster career awareness and to help a person understand how personal work-related values apply to employment options. It can focus on building a person's understanding of the skills and qualifications necessary for specific careers. It may also include attending talks given by speakers who discuss specific careers, or participating in career organizations. Job exploration counseling excludes assessments that evaluate whether someone can receive VR. In general, job exploration counseling includes helping students with disabilities discover:

• Vocational interests (often through an inventory)
• Potential careers and career pathways of interest to the students
• Activities to recognize the relevance of a high school and post-school education to their futures, both in college and/or the workplace
• The labor market, particularly industries and occupations in high demand, and non-traditional employment options (Interwork Institute, 2016a).

Work-based learning

Work-based learning experiences include the following types of activities:

• Informational interviews
• Career-related competitions
• Learning about careers through simulated workplace experiences or workplace tours/field trips
• Apprenticeships (which combine in-school and work-based learning)
• Job shadowing or career mentoring (when a peer or experienced person provides guidance about a job)
• Paid work experiences, including internships or employment
• Unpaid work experiences, including internships, volunteering, and service learning (Interwork Institute, 2016b).

While practicums and student-led enterprises are included in work-based learning, they are excluded from this scoping review since they are performed in a school setting.

Counseling on opportunities for enrollment in comprehensive transition or postsecondary educational programs

Counseling on these specific opportunities may include providing information on: course offerings; career options; the types of academic and occupational training needed to succeed in the workplace; postsecondary opportunities associated with career fields or pathways; academic curricula; and the college and financial aid process and forms (Jamieson et al., 1998). The goal of this scoping review was to focus on employment outcomes for students with disabilities. Since this category focuses on post-secondary outcomes, it was omitted so as to focus the review on studies targeting youth prior to exiting secondary school.

Workplace readiness training to develop social skills and independent living

Workplace readiness training refers to common social skills used in employment. These skills support appropriate interactions with managers and colleagues and are common across all workplace environments (Interwork Institute, 2016c). This scoping review focuses on those skills necessary to succeed in employment, and excludes those specific to independent living.
Examples of social skills relevant to employment may include, but are not limited to:

• Engaging in effective and professional communication skills, including timely, respectful and professional listening and self-expression
• Understanding employer expectations for punctuality and performance
• Cooperating with and supporting others as part of teamwork
• Maintaining a positive attitude
• Making decisions and problem solving (Interwork Institute, 2016c).

Independent living skills, such as having a healthy lifestyle, developing friendships, and knowing how to prepare meals, are not included in this scoping review because they are not necessarily prerequisites to success in workplace settings.

Instruction in self-advocacy

Self-advocacy refers to a person's capacity to communicate and negotiate his or her interests (Interwork Institute, 2016b). Self-determination refers to a person's ability to determine and plan for the future based on those personal interests and values. In the case of this scoping review, we focus on self-advocacy and self-determination as related specifically to the ability to seek accommodations in the workplace. These skills include: understanding oneself and one's disability; knowing when and how to disclose one's disability; decision-making; being able to identify necessary accommodations in the workplace; knowing how to request and use accommodations and any necessary assistance; and understanding one's rights and responsibilities.

These five Pre-ETS service components guided our organization for summarizing the research literature, with the goal of informing the state of the literature regarding employment of students with disabilities.

Objective

Since VR counselors are required to provide Pre-ETS to eligible students with disabilities, it is important that they are aware of research-based practices with respect to each of the five required areas. The objective of this scoping review is to examine and describe successful interventions related to the required Pre-ETS areas.

Data sources and searches

The search strategy examined literature from 1998 through 2017 focused on VR counselors, students with disabilities, and elements related to Pre-ETS. The start date of 1998 was selected because the Workforce Investment Act was authorized that year. It supported local activities to increase employment rates, including reauthorizing the 1973 Rehabilitation Act; thus, the research, strategies, and tools found post-1998 would likely be more relevant to VR professionals. The search strategy followed the three-step method described in the Joanna Briggs Institute's (JBI's) methodology for scoping reviews: 1) a search using the terms 'youth' or 'students', 'disability', and 'employment'; 2) using all identified keywords and index terms (see Appendix A.1.) in all included library databases; and 3) searching the reference lists of all identified reports and research articles for additional studies. The review focused solely on literature and toolkits written in English. The literature and toolkits primarily came from the United States; Canadian, European, and Australian materials were also included as potential articles because these geographic areas are thought to have similar issues and supports.
The sources of the included studies were drawn from the following databases: Education Resource Information Center (ERIC), Education Source, PsycINFO, SocINDEX, Academic Search Premier, PubMed, and National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR) project websites identified by searching the database of NIDILRR awards maintained by the National Rehabilitation Information Center (NARIC). The gray literature search was conducted through a broad Google search using the same inputs selected for the research articles ('youth' or 'students', 'disability', and 'employment'). Rehabilitation Services Administration technical assistance websites, such as the Workforce Innovation Technical Assistance Center (WINTAC), which the Rehabilitation Services Administration funds, were also searched for additional gray literature.

Eligibility criteria for review

From the articles gathered, only research, research-based tools, and gray literature that support VR counselors with at least one of the four elements of Pre-ETS covered in this review were included. Any article that did not meet the eligibility criteria was excluded from this review.

Data extraction

Two independent reviewers assessed all abstracts and titles for relevance using the same inclusion criteria. Screening results from the reviewers were compared for interrater reliability, and all discrepancies were resolved through discussion with a third, senior coder. If the reviewers were unsure about the title and abstract description, the citation was advanced to the full-text stage, retrieved, and the inclusion decision was determined by reviewing the full text. Once the list of included studies was determined, a full text of each study was retrieved, and the team extracted relevant data using a predetermined data charting form to guide the process (see Appendix A.2.).

Results

Two reviewers, independent of each other, charted two to three studies to become familiar with the source results and to trial the data charting form. This process ensured that all relevant information was extracted. After the two reviewers familiarized themselves with the charting form, they extracted the data independently from each other. After screening 567 records and excluding 471 of them, 96 full-text articles were extracted and assessed for eligibility. Of these 96 articles, 63 did not meet the eligibility criteria and were excluded, for a final total of 33 articles. Figure 1 illustrates the results of the decision process for the selection of the included studies (for more information on included studies by year, country, and design, please see Appendix A.3).

For each of the four Pre-ETS service elements covered in this review, the research team identified themes to capture the various strategies that fall under each element. These themes were established during the full-text review after the reviewers observed distinct groups of strategies being referenced across articles. After grouping the strategies into themes across articles, each reviewer was assigned two or more themes to assess whether they corresponded to the Pre-ETS service elements. Afterwards, a group discussion took place in which each reviewer explained how each of their assigned themes related or did not relate to the service categories. Any disagreements among potential themes were discussed and resolved by a third reviewer.
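Interrater reliability for dual-reviewer screening of this kind is commonly summarized with Cohen's kappa. The sketch below shows the computation on hypothetical agreement counts; the review does not report a kappa value, so the numbers are assumptions for illustration only.

```python
# Cohen's kappa for two reviewers' include/exclude screening decisions.
# The 2x2 counts below are hypothetical; the review reports no kappa value.
def cohens_kappa(both_include, both_exclude, only_r1, only_r2):
    n = both_include + both_exclude + only_r1 + only_r2
    p_observed = (both_include + both_exclude) / n
    # Chance agreement expected from each reviewer's marginal inclusion rate
    r1_inc = (both_include + only_r1) / n
    r2_inc = (both_include + only_r2) / n
    p_expected = r1_inc * r2_inc + (1 - r1_inc) * (1 - r2_inc)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical screening of 567 records: 90 dual includes, 450 dual
# excludes, and 27 disagreements resolved by the third, senior coder.
print(f"kappa = {cohens_kappa(90, 450, 15, 12):.2f}")
```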
The following were the themes established for each of the WIOA categories covered in this review, along with the rationale for their inclusion:

Job exploration counseling

• Career goal development - setting career goals helps students discover their ideal work life and is an activity that VR professionals may help guide and direct.
• Family outreach - family members of students with disabilities can provide unique perspectives and insights, which may help VR professionals facilitate the job exploration process.
• Cultural competence - the cultural context of students with disabilities may influence their career aspirations. Because of this, understanding a student's cultural context may allow VR professionals to better assist with job exploration.
• Interagency collaboration - working with stakeholders (i.e., school personnel) to direct career activities may help students explore careers (Fabian, 2007).

Work-based learning

• Previous or early work experience - early work experience is a significant predictor of employment. Helping students find work-based learning opportunities while in school may increase employment opportunities.
• Supplemental Security Income (SSI) payments - SSI payments may act as barriers to employment for students with disabilities. Being aware of these barriers may help VR professionals navigate employment issues related to SSI.
• Work-based social supports - providing social supports such as work-based mentoring may lead to higher employment outcomes for students with disabilities.

Workplace readiness

• Communication/interview skills - developing the skills necessary to communicate and perform well in interviews is an important factor for workplace readiness.
• Social skills support - these are skills that allow for productive social interactions with colleagues and customers and are important to develop before entering the workforce.
• Transportation - helping students with disabilities overcome barriers to transportation may lead to higher workplace outcomes.

Instruction in self-advocacy

• Disclosure and disability awareness - helping students with disabilities understand the risks and benefits of disclosure is an important aspect of self-advocacy.
• Workplace accommodations and rights - helping students learn how to identify needed accommodations to support gainful employment is a skill related to self-advocacy.
• Self-determination - these skills include self-esteem, decision-making, problem-solving, and job-seeking skills and can help students with disabilities find employment.

Job exploration counseling

A total of 15 studies addressed job exploration counseling and evidence-based interventions that can improve employment outcomes for students with disabilities. These topics and their implications were categorized across four themes: (1) development of career goals (n = 5), (2) family outreach (n = 5), (3) cultural competence (n = 4), and (4) interagency collaboration (n = 11). In the included studies, many of the themes appear concurrently; as a result, the total count across these thematic categories is greater than the total number of studies included.

Career goal development

Career goal development involves setting clearly defined, measurable goals that depict a vision for one's ideal work life. Five studies indicated that the development of career goals can help students with disabilities find employment after high school (Brewer et al., 2011; Fabian, 2007; Jamieson et al., 1998; Luecking & Luecking, 2015; Rothman & Maldonado, 2008). Each of the five studies examined a program intervention that included goal development for students with disabilities as a component.
One of the studies, which used a multivariate analysis, found that 68 percent of students with disabilities participating in the intervention secured jobs after finishing the career counseling and career assessment components of the program (Fabian, 2007). Though this specific study found no significant relationship between career goals and work outcomes, it noted that a substantial body of literature exists supporting the importance of concrete career goals to workplace efficiency (Fabian, 2007). Based on this study's outcomes, Fabian (2007) recommended that VR counselors begin early in students' education to develop vocational goals. Thresholds is an intervention that aimed to help students with disabilities make career decisions. In their evaluation of Thresholds, Jamieson et al. (1998) found that the program increased the vocational decision-making abilities of participants. A major component of the program included developing action plans to achieve career goals (Jamieson et al., 1998). The Maryland Seamless Transition Collaborative (MSTC), an intervention designed to address key barriers to employment faced by students with disabilities, showed that a plurality of participating students had achieved key outcomes from the model's framework at the point of transition (Luecking & Luecking, 2015). The MSTC model includes a component in which students work together with professionals, family members, and friends to identify their employment interests and goals, which are subsequently documented in a personal profile (Luecking & Luecking, 2015). The Model Transition Program from New York, which seeks to increase employment among students with disabilities at the point of transition, revealed a positive relationship between measurable postschool goals and postschool outcomes for students with disabilities (Brewer et al., 2011). Noticing a communication gap between high school counselors and VR counselors after evaluating a pre-college transition program in upstate New York, another study recommended stronger collaboration between VR and high school counselors with a focus on developing students' long-term career goals (Rothman & Maldonado, 2008).

Family outreach
Family members play an essential role in the lives of students with disabilities and often want to be included in discussions involving employment. Learning from the unique perspective of family members may help VR professionals facilitate the job exploration process. Five studies emphasized the importance of family outreach regarding students with disabilities and the career exploration process (Crudden, 2012; Greene, 2014; Luecking & Luecking, 2015; Stone et al., 2015; Tilson & Simonsen, 2013). Three of the studies used a qualitative design, one used mixed methods, and one was a literature review. Researchers for two of the qualitative studies conducted interviews (Stone et al., 2015; Tilson & Simonsen, 2013). Stone et al. (2015) held interviews with 57 non-Hispanic and Hispanic students with disabilities and found that families are often underequipped to handle all the needs of their child, suggesting that family members should "work in conjunction with vocational staff members to facilitate the job process" (p. 462). Tilson and Simonsen (2013) held interviews with employment specialists and found that effective family communication strategies are an important component of their practice. Similarly, in the third qualitative study (Crudden, 2012), researchers led focus groups with VR agency personnel to examine beliefs about effective service delivery practices.
Focus group participants identified parental involvement as a positive factor in the transition and career planning process (Crudden, 2012). Another component of the MSTC intervention, as described in Luecking and Luecking's (2015) mixed methods study in the section above, involves early VR case initiation, in which VR counselors work with the student and family to develop the Individualized Plan for Employment. The literature review (Greene, 2014) included in this theme explored transition outcomes for culturally and linguistically diverse students with disabilities and found that high numbers of these students will continue to be part of VR personnel caseloads for the foreseeable future. Because of this observed trend, Greene (2014) argued that transition personnel should work with the "broader cultural community of the family" (p. 243) when developing a comprehensive transition program for culturally and linguistically diverse students with disabilities.

Cultural competence
Students with disabilities come from an array of diverse backgrounds with different cultural needs. The cultural needs of students with disabilities may influence their career goals. Because of this, it is important for VR counselors to understand the cultural context of the students they serve when facilitating the job exploration process. Four studies cited cultural competence as an important attribute for VR counselors (Awsumb, Balcazar & Alvarado, 2016; Greene, 2014; Stone et al., 2015; Tilson & Simonsen, 2013). Three of the studies used a qualitative design, and one used a quantitative design. Of the three qualitative studies, researchers in one study conducted interviews with employment specialists and found that cultural competence was an important attribute in their line of work: "The employment specialists in our study seemed committed to learning about the youth as individuals and understanding the cultural context in which the youth lived. They demonstrated cultural competence by considering the interconnectedness of environmental and situational factors that would influence the job placement and retention process" (Tilson & Simonsen, 2013, p. 131). The two other qualitative studies we identified (Greene, 2014; Stone et al., 2015) cited cultural competence as an important factor when communicating with families of culturally and linguistically diverse students with disabilities. The one quantitative study included in this theme (Awsumb et al., 2016) used an observational analysis to examine the outcomes of students with disabilities participating in a transition program. This study stated that poverty and oppression, especially for minority students, create barriers to employment opportunities and that "without the help of dedicated, knowledgeable, and effective counselors (either from VR or their high schools), these students will not be able to succeed" (p. 63).

Interagency collaboration
Interagency collaboration focuses on getting stakeholders to solve problems together across multiple systems. Working together with stakeholders to direct career activities may help VR counselors facilitate the career exploration process. Eleven studies cited interagency collaboration between VR counselors and other stakeholders as a key factor in helping students with disabilities obtain employment (Awsumb et al., 2016; Dong et al., 2016; Fabian, 2007; Giesen & Cavenaugh, 2012; Izzo, 1999; Luecking & Luecking, 2015; Plotner et al., 2012, 2013; Povenmire-Kirk et al., 2015; Rothman et al., 2008; Tilson & Simonsen, 2013).
The designs of these studies ranged from quantitative (7), to qualitative (2), to mixed methods (2). One of the quantitative studies (Giesen & Cavenaugh, 2012) used a multiple logistic regression model to examine competitive employment outcomes for students with disabilities. It found that interagency agreements between VR agencies and local education agencies can be used as a framework to get students with disabilities involved early in transition planning (Giesen & Cavenaugh, 2012). Another quantitative study (Fabian, 2007) used a multivariate analysis to determine factors affecting transition employment in urban students with disabilities. One implication of the study's findings is that VR counselors seeking to assist in the formulation of career-related activities and interventions should work with special education and related personnel as early as possible. Similar findings were outlined in the other five quantitative studies included in this theme. For instance, Plotner et al. (2013) argue that stronger partnerships between schools and VR systems can positively impact counselor perceptions of transition activities. Dong, Fabian, and Luecking (2016) state that local school agencies and VR agencies working collaboratively can help address the employment gap students with disabilities face as they exit high school. Izzo (1999) maintains that for students with disabilities to experience smooth transitions into adult life, VR services must be coordinated with other educational services. Awsumb et al. (2016) concur that a working relationship between VR professionals, students with disabilities, the family, and the school is essential for a successful transition to adulthood. Plotner and colleagues (2012) similarly consider knowledge sharing and frequent communication as essential for VR professionals to perform the full scope of their duties. As reported in the two qualitative studies, one team of researchers interviewed seventeen employment specialists and identified four personal attributes that support effective practices (Tilson & Simonsen, 2013). One of these attributes was 'networking savvy', which is the ability to connect with people and resources to create opportunities for students with disabilities (Tilson & Simonsen, 2013). Researchers involved in the second qualitative study (Povenmire-Kirk et al., 2015) conducted focus groups with key members of CIRCLES, a service delivery model that aims to improve interagency collaboration for transitioning students with disabilities. Members of the focus groups felt that the CIRCLES framework improved the sense of collaboration and awareness of services available in their districts (Povenmire-Kirk et al., 2015). One of the mixed methods studies (Rothman et al., 2008) found that high school counselors should collaborate with VR counselors to match career goals to student strengths and abilities. The second mixed methods study described system linkages and collaboration as a primary component of the MSTC intervention (Luecking & Luecking, 2015).

Work-based learning
Twelve studies identified work-based learning as generally beneficial for future work and found that participation in it could affect future work outcomes. Findings could be further broken down into three themes: (1) the importance of previous or early work experience (n = 8), (2) challenges surrounding Supplemental Security Income (SSI) payments (n = 4), and (3) the importance of mentoring and other social supports (n = 5).
Previous or early work experience
Eight studies reported the value of previous or early work experience in supporting employment post-transition (Crudden, 2012; Fabian, Lent & Willis, 1998; Giesen & Cavenaugh, 2012; Luecking & Wittenburg, 2009; Madaus, 2006; McDonnall, 2011; Simonsen & Neubert, 2012; Wehman et al., 2015). Of these, five studies were secondary data analyses and identified previous employment as strongly significant. Three studies reported that paid work during secondary school predicted employment following graduation (Giesen & Cavenaugh, 2012; Simonsen & Neubert, 2012; Wehman et al., 2015). Simonsen and Neubert (2012) note that students with disabilities who participated in paid work during school were 4.53 times more likely to participate in integrated employment. Likewise, Wehman et al. (2015) reported that employment in high school was the most significant predictor of post-high school employment (p = 0.0004). In addition, career awareness training (p = 0.0216) and attending a vocational school (p = 0.0151) were associated with post-high school employment (Wehman et al., 2015). McDonnall (2011) also found early work experience to be a significant predictor of employment (p = .002) and, in addition, offered data showing the number of jobs held to be an additional predictor. In her study, students with disabilities who held two jobs over the past two years were between 1.6 and 2.1 times more likely to be employed than those who held no jobs in the two years before the survey (McDonnall, 2011). Finally, in an analysis of the Marriott Foundation's "Bridges from school to work" program, students' work-related behaviors were the best predictors of paid employment following internship completion (Fabian et al., 1998). The three qualitative studies each identified the skills learned in "practical on-the-job situations" as the primary benefit of work-based learning (Madaus, 2006). One article offers three case studies that demonstrate how on-the-job experience can translate into employment following the end of a school-based employment intervention (Luecking & Wittenburg, 2009). While participants in all these studies cited internships or other time-limited transitional employment as a good way to gain this experience, VR counselors who participated in a series of focus groups recommended developing and practicing interviewing, communication, and job readiness skills in a variety of different ways (Crudden, 2012). These included camps, summer jobs, school-sponsored work activities, after-school employment, volunteer work, job shadowing, supported employment, on-the-job trainings, and internships (Crudden, 2012).

Supplemental Security Income payments
Supplemental Security Income payments may act as barriers to employment for students with disabilities. Being aware of these barriers may allow VR professionals to more effectively assist students with disabilities in exploring careers. Four studies discussed the role that SSI plays in encouraging or discouraging employment among students with disabilities (Giesen & Cavenaugh, 2012; Hemmeter, 2014; Luecking & Wittenburg, 2009; McDonnall, 2011). Of these, one was a randomized controlled trial (RCT), one was a series of case studies, and two were secondary data analyses. The RCT and the case studies were both part of the same program evaluation of the Social Security Administration's "Youth Transition Demonstration" project (Hemmeter, 2014; Luecking & Wittenburg, 2009).
The Youth Transition Demonstration recognized the limits on earning potential that accompany SSI payments as a fundamental barrier to participant employment, and so structured the program around this recognition. Because the purpose of the project was to identify interventions that would improve the educational and vocational outcomes of youth who qualify for SSI payments or Social Security Disability Insurance (SSDI) payments, the Youth Transition Demonstration supplied participants with waivers of certain SSI and SSDI rules that limit SSI eligibility for those over a certain threshold of income. In the RCT, "the program waivers allow the treatment group youths to keep more of their income and remain in the program longer than the control group youths. Combined with the earnings results, the waivers may indicate better employment outcomes for treatment group youths" (Hemmeter, 2014, p. 21). However, because the intervention was still underway, it was too early to determine the overall effectiveness of the Youth Transition Demonstration. The case studies support this conclusion by illustrating three successful examples of the Youth Transition Demonstration (Luecking & Wittenburg, 2009). Of the two secondary data analyses, the first found that receipt of SSI payments at the time of application was a significant negative predictor of employment (Giesen & Cavenaugh, 2012). However, Giesen and Cavenaugh (2012) also note that the receipt of SSDI, which is available only to persons who have worked for a certain amount of time, does not have a negative association with employment. Likewise, the second study examined predictors of employment in students with visual impairments (McDonnall, 2011). While this study found the receipt of SSI benefits to be a significant negative predictor on its own, the recent receipt of SSI benefits was much less significant when considered in combination with other variables. McDonnall (2011) suggests that this discrepancy could be indirectly caused by persons with visual impairments being prevented from gaining early work experience when they were younger because of receiving SSI payments, making these barriers more significant for older adults with visual impairments.

Mentoring and other work-based social supports
Five studies, four of which were qualitative, cited the importance of work-based social supports (including mentoring, role models, or other work-based advocates) as a factor that supports successful employment (Crudden, 2012; Lindsay et al., 2012; Madaus, 2006; Stone et al., 2015; Verhoef et al., 2014). Two qualitative studies identified mentoring as beneficial to transitioning into employment (Lindsay et al., 2012; Madaus, 2006). When asked how transition services could be improved, survey respondents suggested that their colleges establish mentoring programs between current students and graduates with disabilities, which they felt would allow students to see what can be achieved (Madaus, 2006). Likewise, participants in the study conducted by Lindsay et al. (2012) reported increased self-confidence following an employment-training program, which some youth attributed to their contact with peer mentors. Furthermore, both parents and youth in the study expressed that they would have liked to continue with a mentor or buddy system after the program ended.
One study conducted focus groups with VR counselors (Crudden, 2012). Participants in these focus groups suggested providing visually impaired youth with role models who are both blind and sighted as a means of developing positive social skills (Crudden, 2012). Another study (Verhoef et al., 2014), while it did not evaluate mentoring specifically, included mentoring as a component in an intervention that was ultimately found to result in improved occupational performance, self-care, and satisfaction with performance. Finally, most participants in Stone et al. (2015) believed that it would be beneficial for their supervisors to be aware of their disability, as they felt that this would result in more access to workplace supports and accommodations, social or otherwise.

Instruction in self-advocacy
A total of 15 included studies addressed topics related to self-advocacy and self-determination. Specifically, the content, findings, and implications related to this topic were categorized across three themes: disclosure (n = 5), workplace accommodations (n = 5), and self-determination (n = 10).

Disclosure and disability awareness
Five studies support the importance of training students with disabilities in the process, risks, and benefits of disclosure (Lindsay & DePape, 2015; Lindsay et al., 2012; Lindsay, McDougall & Sanford, 2013; Newman, Madaus & Javitz, 2016; Rothman & Maldonado, 2008). One of the studies (Newman et al., 2016) used data from the National Longitudinal Transition Study-2 (NLTS2), which represents thousands of students with disabilities, to examine whether transition planning in high school later helps students request accommodations in a postsecondary setting. The authors note that transition plans need to reflect reasonable accommodations that students with disabilities can request and control in educational settings, which VR counselors can help inform (Newman et al., 2016). A survey of 27 students with disabilities attending a pre-college summer transition program found that students with disabilities are often reluctant to disclose their disability due to fear of stigma and bias (Rothman & Maldonado, 2008). At the same time, Rothman and Maldonado (2008) reported that participants identified self-advocacy as the most important contributor to their success in future employment. The team led by Lindsay et al. (2012) conducted semi-structured interviews with eighteen students with disabilities who participated in a skills development training program, which included practicing how and when to disclose a disability. Some of the participants concluded that working helped them practice explaining their condition and need for accommodations (Lindsay et al., 2012). Similarly, in another study, Lindsay et al. (2013) conducted interviews to determine whether students with disabilities disclose their disability. Fourteen of the 18 participants who had completed a 2-year employment training program reported difficulty in knowing how to disclose their condition prior to taking a job. The authors suggest that their study reveals the importance of, and need for, disclosure skills in students with disabilities. Another study conducted interviews to explore how answers given during job interviews differ between students with disabilities and their peers without disabilities (Lindsay & DePape, 2015). The authors state that some research has shown that students with disabilities are rated more favorably by potential employers when they disclose their condition early during job interviews.
However, Lindsay and DePape (2015) also warn that disclosing disabilities can harm potential job prospects if the employer has negative attitudes toward people with disabilities.

Workplace accommodations and rights
Three studies identified workplace accommodations as a factor that supports successful employment for students with disabilities (Crudden, 2012; Lindsay et al., 2013; Rothman & Maldonado, 2008). Two of these studies (Crudden, 2012; Lindsay et al., 2013) used a qualitative design. Both studies found that students with disabilities needed to be sufficiently engaged in self-advocacy activities to identify needed accommodations to support gainful employment and/or to understand their rights with respect to possible on-the-job discrimination. Another study (Rothman & Maldonado, 2008) addressed students with disabilities' transition to college life and assessed the effects of a one-week program that addressed issues related to advocacy skills and seeking accommodations. Exit interviews with the students indicated that they found the one-week program to be beneficial (Rothman & Maldonado, 2008).

Self-determination
Eleven studies examined the importance of self-determination for students with disabilities in obtaining employment (Bal et al., 2017; Brewer et al., 2011; Carter, Austin & Trainor, 2011; Crudden, 2012; Foley et al., 2012; Greene, 2014; Luecking & Wittenburg, 2009; McDonnall, 2011; McDonnall & Crudden, 2009; Newman et al., 2016; Simonsen & Neubert, 2012). Six of the studies (Brewer et al., 2011; Carter et al., 2011; McDonnall, 2011; McDonnall & Crudden, 2009; Newman et al., 2016; Simonsen & Neubert, 2012) were quantitative and used data drawn from state and national databases to analyze the impact of self-determination training and skills (such as goal setting, decision-making, problem-solving, job-seeking/readiness skills, and job choices) on successful employment. Each of these six studies highlights the link between self-determination training and skills and successful employment for students with disabilities. For example, Brewer et al. (2011) found that the more students were involved with career development activities, which include self-determination training, the more likely they were to participate in work-related experiences. Similarly, McDonnall and Crudden (2009) found that self-determination skills were associated with greater employment outcomes for visually impaired students upon transition. Three of the studies (Bal et al., 2017; Crudden, 2012; Luecking & Wittenburg, 2009) were qualitative in design. Each of these three studies acknowledges that self-determination factors such as self-esteem, decision-making, problem-solving, and job-seeking skills can help students with disabilities find employment. The last two studies were literature reviews. The first literature review (Foley et al., 2012) found evidence that students with disabilities who exhibited more self-determination demonstrated better outcomes in many employment categories. The study's authors concluded that more service models are needed to promote the success of students with disabilities (Foley et al., 2012). Finally, Greene (2014) summarized the literature on cultural/linguistic sensitivity in relation to self-advocacy and self-determination.
Greene (2014) concluded that students with disabilities from minority cultures/languages face unique challenges (i.e., racial and cultural stereotypes, immigration issues, lack of language proficiency) with respect to self-advocacy and self-determination and that special attention should be given to the development of these skills in the context of the student's cultural/linguistic identity.

Workplace readiness
Nine studies presented results relating to work readiness, social skills, and independent living. Multiple outcomes were identified and classified according to the following three themes: (1) communication/interview skills (n = 7), (2) social skills (n = 6), and (3) transportation (n = 6).

Communication/interview skills
Seven studies found that communication/interview skills can help students with disabilities obtain employment (Carter et al., 2011; Crudden, 2012; Lindsay & DePape, 2015; Lindsay et al., 2012; McDonnall, 2011; Stone et al., 2015; Verhoef et al., 2014). Of the seven studies, two used a quantitative analysis of the NLTS2 database (Carter et al., 2011; McDonnall, 2011). Carter et al. (2011) found that students with disabilities who communicated well with others and were independent in self-care had nearly three times the odds of having paid work compared to students who had trouble communicating. McDonnall (2011) states that previous research has identified communication skills as a key factor for successful employment. Another study (Verhoef et al., 2014) used a pre-post design to evaluate how participants' occupational performance changed over time in a multidisciplinary VR intervention, which used job interview training as a component. The study found that students with disabilities who participated in the intervention showed improved occupational performance at work (Verhoef et al., 2014). Four studies using a qualitative design, including interviews (Lindsay & DePape, 2015; Lindsay et al., 2012; Stone et al., 2015) and focus groups (Crudden, 2012), also identified the need for communication and interview skills for successful employment. Stone et al. (2015) interviewed Hispanic and non-Hispanic students with disabilities and found that these students emphasized needing more assistance with job interview skills. Lindsay et al. (2012) interviewed students with disabilities participating in an employment training program that used practice interviews as a component. The majority of participants stated that the practice interviews were a helpful feature of the program (Lindsay et al., 2012). Lindsay and DePape (2015) performed a content analysis of mock interview responses from both students with disabilities and their typically developing peers. The authors found that students with disabilities often faced more challenges during the interview process, such as responding to scenario-based problem-solving questions (Lindsay & DePape, 2015). Crudden (2012) conducted focus groups with rehabilitation state agency personnel regarding best service delivery practices. Focus group participants recommended that VR personnel conduct a situational assessment of interviewing, communication, and job readiness skills (Crudden, 2012).

Social skills support
A total of six studies addressed the topic of social skills in terms of development, need, or barriers to successful transition programming for students with disabilities (Crudden, 2012; Lindsay et al., 2012; Luft, 2012; McDonnall, 2011; Noel et al., 2017; Verhoef et al., 2014).
Interestingly, few of the studies indicated what specific behaviors were associated with social skill development or programming. Inclusion of social skills training for students with disabilities is typically related to work performance and success with co-workers, employers, and customers. The most common reference given was to 'social skills', 'social contact', or 'social interactions', but without example, illustration, or definition. In some instances, social skills were demonstrated in the context of the interview or career training (Lindsay et al., 2012; Noel et al., 2017), peer activity (Verhoef et al., 2014), or interpersonal interaction (Crudden, 2012; Luft, 2012). Only two of the studies provided a quantifiable result of training for social skill measurement (McDonnall, 2011; Verhoef et al., 2014). A large national observational study (McDonnall, 2011) found that social activities related to peer interactions were significantly related to post-school employment. Similarly, a pre-post single group study (Verhoef et al., 2014) found a positive effect on employment for students with disabilities who understood the nature of social skill use at work, at home, and in social life.

Transportation
Six studies addressed transportation access, competency, or barriers as a significant factor in both school-based and post-school-based employment (Carter et al., 2011; Crudden, 2012; Lindsay et al., 2012; McDonnall, 2011; Noel et al., 2017; Verhoef et al., 2014). Two studies (Carter et al., 2011; McDonnall, 2011) analyzed data from the NLTS2 database. Carter et al. (2011) found that the availability of transportation for people with disabilities was positively associated with employment. McDonnall (2011) reported that difficulty with transportation was a significant predictor of post-school unemployment. Another study (Lindsay et al., 2012) interviewed eighteen students with disabilities who attended a school-based summer program. Participants in the program reported difficulty with transportation to be a substantial barrier to sustained employment and found the transportation skill training to be a valuable part of the program (Lindsay et al., 2012). Another study (Noel et al., 2017) conducted a survey of the Illinois Balancing Incentive Program and reported that 38% of participants in eight of the programs assessed indicated that difficulty with transportation was a major barrier to work. In a pre-post intervention study, Verhoef et al. (2014) concluded that interventions aimed at improving employment for students with disabilities should not only address issues at work, but also problems in self-care such as transportation to work.

Discussion
This scoping review provided an overview of existing literature that evaluates research-based interventions that can be applied to four of the required pre-employment transition service elements of WIOA. In doing so, it has identified several areas that can help inform strategies used by VR counselors for increasing employment for students with disabilities at the point of transition from secondary school to more divergent paths.
For job exploration counseling, strategies VR counselors may consider include 1) working with schools to identify and create goals related to career exploration; 2) reaching out to families to gain new perspectives regarding the student's career aspirations; 3) learning about the home life and cultural background of students with disabilities to better understand the situational factors that can affect the job exploration process; and 4) particularly for VR leadership, devising ways to identify interagency partners who can help students with disabilities explore career options. For work-based learning, strategies VR counselors may consider include 1) helping students with disabilities identify internships, volunteer activities, and short-term jobs to prepare for working full-time; 2) weighing the positives and negatives of SSI payments and how these and related benefits affect employment to determine what would be best for the individual; and 3) increasing their awareness of the social supports that exist and developing ways to identify possible role models, mentors, and advocates to enhance students' skills at work. For workplace readiness, strategies VR counselors may consider include 1) using communication exercises (for example, mock interviews) to help students with disabilities gain the skills they need for employment; 2) identifying which social skills need improvement and the opportunities available to improve them; and 3) helping students with disabilities become familiar with different kinds of transportation. For instruction in self-advocacy, strategies VR counselors may consider include 1) helping plan activities that encourage self-determination; 2) making sure that students have a transition plan in place; and 3) helping students become aware of the accommodations that are available to them in the workplace. Though these strategies may help VR counselors increase employment outcomes for students with disabilities, there is still the challenge of effectively instructing VR counselors to employ these strategies in the field. Most state VR services likely do not have the resources to design and implement training modules composed of the strategies discussed in this review. One solution may be to continue to leverage technical assistance centers with expertise in disability and employment to both develop and lead the training. Since the development and implementation of such training modules would require these technical assistance centers to invest both time and resources, policy makers may consider providing additional funding to these centers.

Limitations
This scoping review has several limitations. The quality of each study (i.e., methodology, design) included in this review was not assessed. Determinations for inclusion were instead derived by assessing each study's conclusions, implications, and recommendations as they related to four of the required elements of pre-employment transition services. Another significant limitation concerns how several studies lacked detailed descriptions of the interventions they were evaluating. Though these studies noted that one or more of the components related to the themes were utilized in the intervention, they offered little insight as to what degree. Because of this lack of detail, it was at times difficult to determine the emphasis that any one intervention component played in its overall results. Another limitation is that only articles written in English were used for this review.
Articles written in other languages may have been appropriate for this review's objectives but were not considered. One more limitation is the omission of counseling on opportunities for enrollment in comprehensive transition or postsecondary educational programs from this review. As stated earlier, this WIOA service category did not fit this review's focus. For a training module focused on the WIOA service categories to be comprehensive, it will also require strategies for this category. A future review may examine the literature regarding strategies for the WIOA category of counseling on opportunities for enrollment in comprehensive transition or postsecondary educational programs.

Conclusion
This scoping review has shown that there are several interventions, or components within interventions, that support employment outcomes for students with disabilities as they relate to the WIOA service categories. To ensure that VR counselors are not only aware of these strategies but can also implement them effectively, technical assistance centers with expertise in disability and employment may consider leading the development of a training module incorporating components of these interventions.
2021-05-11T00:07:10.321Z
2021-01-12T00:00:00.000
{ "year": 2021, "sha1": "f37124889fa3f4c30f17ca854d755df2b4badfe2", "oa_license": "CCBYNC", "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8081404", "oa_status": "GREEN", "pdf_src": "MergedPDFExtraction", "pdf_hash": "ff520e02df5b1c712f7e1249840bfe8ff49621b3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
55599911
pes2o/s2orc
v3-fos-license
The Structural-Functional Synthesis of IoT Service Delivery Systems by Performance and Availability Criteria

This paper is devoted to the quantitative investigation of the availability of cloud service systems. We calculate the criteria and constraints of a distributed service platform, such as availability and system performance index variations, over a defined set of the main parameters. We analyze the calculation results to enable the optimal synthesis of distributed service platforms based on the cloud service-oriented architecture. The method of synthesis has been numerically generalized considering the type of service workload. We used the Hurst parameter to statistically evaluate each integrated service that requires implementation within the service delivery platform. The latter is synthesized by the structural matching of virtual machines using combinations of elementary service components. According to Amdahl's Law, the clustering of cloud networks allows breaking a complex dynamic network into separate segments, which simplifies access to the resources of virtual machines. This in turn simplifies the complex topological structure, enhancing the overall system performance. The proposed approaches and obtained results allow us to numerically justify and algorithmically describe the process of the structural and functional synthesis of efficient distributed service platforms. Through dynamic configuration and exploitation, these platforms make it possible to create a dynamic environment in terms of a comprehensive service range under significant user workload fluctuations.

Introduction
The structural-functional integrity of the modern cloud networking paradigm is very important for building scalable and reliable commercial infrastructures according to the Service-Oriented Architecture (SOA). This architectural concept is used for a wide set of applications in order to ensure their efficiency in a concurrent world of e-business, e-commerce, personal communications, and other activities [1,2]. Despite that, the mentioned networking concepts started conquering the market just several years ago. Such network solutions are commonly characterized by extremely complex design and high commercial value. Thus, we propose an analytical method for the synthesis of the structural and functional parameters of a generalized service delivery platform (SDP). The method takes a set of constraints as an input. Today, cloud-computing services are widely spread across the information technology and telecommunication market. They help to make business workflows more effective and scalable [2]. The key players in these markets are Microsoft (Microsoft Azure), Google (Google Apps Engine), Amazon (Elastic Cloud Computing, Simple Storage Service), IBM (Blue Cloud), Nimbus, Oracle, etc. Some small companies also have their own cloud computing services. There are multiple free-of-charge solutions available on the market, e.g., iCloud, Cloudo, FreeZoho, SalesForce, etc. All these solutions are different in terms of offered services. Among the typical services are SaaS (Software as a Service), PaaS (Platform as a Service), IaaS (Infrastructure as a Service), and HaaS (Hardware as a Service).
Despite the variety of services, there are quite typical hardware and software facilities used as the basis of most cloud systems. They facilitate the operation of a system that is built according to SOA. Usually such a system consists of a set of virtualized service nodes or virtual machines (VM). Scalability and flexibility are achieved through VM replication and migration. Such dynamic environments are commonly very unreliable in terms of the failure probability of hardware or software components. There are several principles that allow minimizing this probability and increasing the recovery speed of a cloud system. Most of them are based on distributed data processing (reservation, redistribution of computing resources, etc.) and are invisible to consumers, who perceive system availability as fault-free operation. The statistics of typical failures in cloud systems are telling (see Table 1) [3,4]. They show that existing approaches to achieving high reliability of cloud systems are poor in terms of effectiveness. Deeper analysis showed that cloud system unavailability is not the only consequence of a failure. In the case of the Microsoft Sidekick failure, all personal user data was lost [3] and was restored only partially. Despite the wide set of solutions [5,6] for ensuring the availability of service systems, existing cloud systems are constantly encountering bottlenecks. Thus, ensuring high system availability is a high-priority task.

The Synthesis Criteria and System Parameters
The simplest network structure (topology) is an undirected (or directed) graph G with a set of vertices V and a set of edges (arcs) E, which correspond to nodes and lines. The simplest model of the structural reliability of an information service system is a random graph (G; p) with p = {p(ε); ε ∈ E}, characterized by the independent removal of edges (arcs) ε ∈ E of G with probability q(ε) = 1 - p(ε). In a service system, availability is tightly coupled with the survivability of a set of VMs and can be characterized by the system's ability to quickly and easily recover from a failure to the normal operating mode. More broadly, this concept can be described as the ability of the system to reliably operate for a long time with maximal efficiency. The concepts of service availability and service survivability are interrelated in the theory of complex systems (e.g., cloud networks). The most important component of cloud system reliability is availability, which describes the ability of the server system to survive continuously under the given conditions and during the given interval of operation. It can be calculated using the survivability parameter of a distributed set of VMs. These VMs can form various combinations of elementary service components (ESCs). The properties of ESC combinations affect the service availability and the overall system performance index.
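To make the random-graph reliability model concrete, the following is a minimal Monte Carlo sketch (in Python, assuming the networkx library is available) that estimates the probability that two segments remain connected when each edge ε survives independently with probability p(ε). The six-node topology and the uniform p value are illustrative assumptions, not parameters taken from the paper.

```python
# Monte Carlo estimate of structural survivability for the random graph (G; p):
# each edge e survives independently with probability p(e), i.e. it is removed
# with probability q(e) = 1 - p(e); we estimate P(segments s and t connected).
import random
import networkx as nx

def survivability(G: nx.Graph, s, t, trials: int = 10_000) -> float:
    """Estimate the probability that s and t stay connected under edge failures."""
    connected = 0
    for _ in range(trials):
        surviving = [(u, v) for u, v, d in G.edges(data=True)
                     if random.random() < d["p"]]      # keep edge with probability p(e)
        H = nx.Graph()
        H.add_nodes_from(G.nodes)
        H.add_edges_from(surviving)
        if nx.has_path(H, s, t):
            connected += 1
    return connected / trials

# Illustrative 6-node ring topology with an assumed uniform edge availability.
G = nx.cycle_graph(6)
nx.set_edge_attributes(G, 0.9, "p")
print(f"P(segment 0 <-> segment 3 connected) ~ {survivability(G, 0, 3):.3f}")
```

Because the ring offers two disjoint paths between opposite nodes, the estimate lands noticeably above the single-path value 0.9**3, which is the kind of structural effect the survivability analysis below quantifies.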
To assess these parameters in the cloud system, we should clearly define the notion of a dynamic network topology. In our model, we evaluate the survivability of structures in terms of the probability that two segments will be interconnected in the near future, i.e., that there will be at least one edge. This edge is a "key link" connecting these segments. On the other hand, there must be a working ESC which is not overloaded and is able to process the given flow of requests. These probabilities are affected by the probability of failure of a certain path in the intermediate segments of the server system, for instance, a router between subscribers or VMs. Thus, at any given moment of time, ESC availability depends on the probability of request blocking in the intermediate segments. In [7], the authors understand information system vitality as the ability of a system to perform its basic functions under the impact of external factors, at least within a tolerable loss of quality of service. This definition is similar to the definition given in [8]. In [9], information system vitality is defined as the ability to perform a given task under deleterious effects on the entire system or its individual components, keeping operational performance within acceptable limits. These two definitions focus on the following key points. First, vitality should be considered an intrinsic property of the system, as it does not depend on the operating conditions that arise at any given moment of time. The system possesses this property all the time, and to some extent the property can manifest under normal operating conditions, where failures are caused by manufacturing defects, degradation, maintenance, etc. Survivability, by contrast, can be observed under large external influences that are not expected in the normal operation mode and can lead to extreme operating conditions. Second, the system does not support all the functions it should perform during normal operation; it supports only the basic functions, which sometimes leads to QoS degradation. This means that we should adopt a strategy of decreasing the severity of adverse effects. In studies of survivability and availability, we identified a number of areas (approaches) where several types of analysis can be used, e.g., game-theoretical [10,11], probabilistic [8], deterministic [12,13], and graph-based [14,15]. The probabilistic and deterministic approaches are the best suited for technical purposes. The main ideas of these models were outlined in [15]. The probabilistic methods of survivability investigation are based on the assumption that the location and time of occurrence of adverse (harmful) effects (HE) can be described using a uniform distribution within a single system. The deterministic methods of survivability investigation are based on the matching of specific types of harmful factors against the resistance of system elements. These approaches can be divided into static and dynamic. The static approach is based on the definition of the object's weak regions and on the level of damaging factors; in the next step, the list of items that might be damaged is determined, and the level of system operation quality is determined using logical functions. The dynamic approach is based on the use of simulation models, including dynamic models of: the emergence and development of HE; the development of HE factors that affect the state of the elements of the object; and the object operating under the structural and parametric changes induced by damaging factors and by countermeasures to HE.
In turn, the graph models are characterized by simplicity. Traditionally, they are used when investigating structural survivability, which goes along with the concept of "destruction". A system that is represented by a graph can be considered destroyed if, after the removal of vertices, the remaining graph satisfies one or more of the following conditions:
• the graph contains at least two components;
• there are no directed paths for a given set of vertices;
• the number of vertices in the largest component of G is less than some given number;
• the shortest path is longer than a certain given value.
Accordingly, a system is considered survivable, and a service system is considered available, in the absence of these conditions. The task of the optimal parametric synthesis of a cloud SDP can be solved by the optimal choice of the designed system parameters for each declared complex service. In [16] we defined this as task (1): find the parameter process that maximizes the two synthesis criteria

P_A(X_opt(x, t)) → max,   (2)
S(X_opt(x, t)) → max,   (3)

subject to x_opt ∈ D_x and t ∈ [0, T], where X_opt(x, t) is a probabilistic process of the parameters x_opt changing, S(·) is a structural performance function, D_x is a tolerance region for the x_opt parameters, and T is the exploitation time for the current SDP realization. Let us define the tolerance region as

D_x = {x : P_A(x) ≥ P_A_min},   (4)

where P_A_min is the minimum acceptable service availability within the designed SDP. The solution of task (1) is based on the analysis of the interrelations between the service availability, the ESC parameters, and the statistical parameters of the workload traffic (the Hurst parameter). The 1st and 2nd statistical moments of the workload traffic served by the respective service are also needed to examine all necessary stochastic characteristics. These parameters can easily be obtained from a statistical simulation of the workload intensity with the necessary H parameter [17]. A set of internal parameters is then used for the criteria calculation.

Criteria Calculation
Given the internal SDP parameters, the service availability criterion is defined following [15], where N_0 is the number of SDP service nodes with organized VMs that aggregate the respective ESCs. Since no trivial solution was found, the common task was split into parts using the additive survivability definition [15]. For the parallelized part of an ESC combination, the availability contribution of each SDP service node is derived using [18] and the definition of an Erlang process of i-th order, together with the average physical availability of a VM at each such node. According to the transformation of the Norros equation in [17], this contribution depends on B, the buffer utilization ratio at the moment of time t; H, the Hurst parameter of the traffic type applicable to the examined complex service; C, the average throughput capacity of the ESC; λ, the traffic intensity; and v_c, the variation coefficient of the incoming traffic workload (the latter two refer to the examined complex service). The same Erlang process definition, combined with the transformed Norros equation [17], yields the contribution of each sequential node of the ESC combination, and the Poisson process definition [18] yields the contribution of each transport SDP service node, again weighted by the average physical availability of a VM at that node. Using (5)-(13), we define the statistical distribution for the ESC combination and, respectively, the calculation of the service availability criterion (2) for each synthesized VM structure of the cloud SDP. Using the same set of parameters (1) that was used in (2)-(3), we simultaneously define the structural performance by the second synthesis criterion (3) after Amdahl's Law (Fig. 1) [19].
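As a numerical illustration of the two synthesis criteria, the sketch below combines a Norros-style fractional-Brownian overflow approximation, using the parameters B, H, C, λ and v_c named above, with Amdahl's law for the structural performance criterion. The overflow formula is one plausible form of the transformed Norros equation, not necessarily the exact expression of [17], and all input values are illustrative assumptions.

```python
# A hedged sketch of the two synthesis criteria for one ESC combination.
# norros_availability() uses an assumed Norros-style fBm overflow bound;
# amdahl_speedup() is Amdahl's law for the structural performance criterion.
import math

def norros_availability(H, C, lam, v_c, B):
    """P_A ~ 1 - P(buffer overflow) for self-similar workload (assumed form)."""
    kappa = H**H * (1.0 - H)**(1.0 - H)                 # standard fBm constant
    var = v_c**2 * lam                                  # assumed variance scaling
    overflow = math.exp(-((C - lam)**(2 * H) * B**(2 - 2 * H))
                        / (2 * kappa**2 * var))
    return 1.0 - overflow

def amdahl_speedup(eta, n):
    """Amdahl's law: eta = parallel fraction of ESCs, n = parallel ESC count."""
    return 1.0 / ((1.0 - eta) + eta / n)

# Illustrative values: H ~ 0.7 (web-like traffic), 80% utilization, 20 ESCs.
P_A = norros_availability(H=0.7, C=1.0, lam=0.8, v_c=0.6, B=50.0)
S = amdahl_speedup(eta=0.6, n=20)
print(f"availability criterion ~ {P_A:.4f}, performance criterion ~ {S:.2f}x")
```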
Conceptual Study of Targeted Technologies
The implementation of modern broadband network services, and of the IoT (Internet of Things) concept in particular, drastically changes the view of services and of the service network systems' infrastructure. In these systems, all network addressing functions and stream management are delegated to the cloud environment, removing the need for local networking equipment unless it is software-configured (SDN) or provides a direct connection to communication networks. This category of information and communication systems (CBN) was developed by the Petrino and Aryaka companies [20]. Network-as-a-Service (NaaS) was proposed as a generalization of the mentioned CBN properties [4]. Approaches to CEN system deployment are investigated in [21-24], particularly the optimization of the corresponding network equipment performance. SDN concepts of network function virtualization, as well as service network infrastructure elements, are a basis for the description and research of CBN service systems [25]. Structural and parametric studies of cloud service systems are performed in [1,22,23]. The authors of these studies obtained the analytical dependencies of the main functional parameters, alongside quality-of-service dependencies (jitter, packet delay, system throughput, packet loss ratio). To improve the quality of service by increasing the availability indicators of cloud platform telecommunication nodes, we propose, based on [26], a method of dynamic routing metric correction. We simulated several service network systems with scaling using breakthrough structural tracing. The numerical availability of cloud service systems must be assessed in order to perform an effective synthesis of the service delivery system using the service availability criteria (which include reliability, robustness, and QoS). The results and functional dependencies (1)-(13) obtained in [27] were used for the modelling of the above-mentioned characteristics in this paper. Overlapping the results obtained here with the results mentioned earlier allows us to assess alternative approaches to the deployment of cloud service system segments with the required parameters, for further effective processing of their development and modification strategies.
Cloud service systems have become very popular on the market, making electronic business more effective and scalable [2]. The most famous are the solutions from Microsoft (Microsoft Azure), Google (Google Apps Engine), Amazon (Elastic Cloud Computing, Simple Storage Service), IBM (Blue Cloud), Nimbus, Oracle, and so on. Relatively small operators provide cloud computing services alongside large enterprise cloud systems. There also exist solutions which are free of charge, such as iCloud, Cloudo, FreeZoho, SalesForce, etc. They vary in the list of services that are offered, as well as in the type of service they deliver: SaaS (Software-as-a-Service), PaaS (Platform-as-a-Service), and IaaS (Infrastructure-as-a-Service). For the case of a completed infrastructure transformation from the CEN to the CBN paradigm, HaaS (Hardware-as-a-Service) is also worth mentioning. Despite the service variety (generally called XaaS), there exist a few typical software and hardware tools which are often used as the basis for cloud system deployment. They provide system functionality based on SOA (Service-Oriented Architecture). They are implemented as a set of virtualized service nodes or virtual machines that are replicated in order to sustain scalability and to support a set of electronic services flexibly, according to consumers' needs. The hardware and software tools of a cloud computing platform may sometimes operate unstably or unreliably due to their imperfection or to degradation in some statistical indicators. In order to minimize this possibility and decrease the recovery time in a cloud computing environment, it is necessary to apply special principles, the majority of which are specific to distributed data processing (redundancy, parallelization, reallocation of computing resources, etc.). The described approaches are aimed at the partial concealment of the real system availability, for the purpose of creating the illusion of trouble-free operation of a distributed service platform. Despite that, statistics of cloud service platform failures are available [3,27]. They show that in some situations the solutions applied in an SDP for obtaining high service availability become ineffective. Analyzing deeper, it is worth mentioning that system inaccessibility may not be the only consequence of cloud system failures. In the case of the Microsoft Sidekick failure, users' personal data was lost and was not fully recovered [3,27]. Despite the maturity of well-known methods for increasing the availability of service systems, cloud systems are still a subject of bottleneck analysis. The main purpose is to increase their operational robustness, service availability, and overall performance [6,28]. The relevance of research on this subject is therefore rather high.

Service Availability Modelling in Scalable Cloud Service Networks
The structural and functional integrity of the modern cloud computing paradigm is important for scalable and reliable service platform deployment using SOA. Many applications involve this architectural concept to become more effective in a world where most business processes are parallel: electronic business, e-commerce, personal communications, and so on [1,2], and such network concepts become more widespread by the year. Due to the high complexity of design and the high commercial value of network solutions, we developed an analytical synthesis method [27] for the purpose of optimizing structural and functional parameters with given constraints for typical cloud service delivery platforms (SDP).
To perform a simulation of service availability in scalable cloud service networks, we define "structural parameters" as quantitative indicators for the basic service components that are configured within the virtual machine structure on the nodes of a cloud telecommunication platform. The structures of network connections in the physical network topology are fuzzily defined, which is inherent to cloud systems and separates them from traditional network architectures. In general, the topological configuration of virtual machines is dynamic, as is the configuration of the offered and required service array. Virtual machines migrate and replicate elementary services as the components of complex applications, according to the distribution of subscribers' demands. That is, the service-oriented architecture owns a totality of migrating resources inside a cloud system, which is an extremely complex distributed object forming a specific implementation of the SDP.

Figure 1. VM systems' performance coefficient after Amdahl's Law [19].

We can outline the specific groups of service components which are used in the orchestration process by which a complex application may be composed for SDP users. A classification of service components, as threads or streams performed by virtual machines, may be adopted in accordance with the definitions of Amdahl's Law [4,19] (Fig. 1). Thus, for our integral model we can place hypervisors and other sequential elementary service components (ESCs) of the service application set, α in number, into one group, and the ESCs which serve queues in parallel into another group, η = n - α (see Fig. 2). Here n denotes the total number of ESCs [27], as was mentioned in the previous section. This way, taking the fuzzily specified and dynamic cloud network system structure into account, the task of optimal structural and functional synthesis can be reduced to the selection of the optimal quantities of service components belonging to the different designated groups within their combined mix, during the formation and building up of complex applications in a service-oriented architecture. Unfortunately, the main difficulties associated with this task come from a lack of research on the stochastic processes of traffic serving by distributed service applications, viewed at the system level, under the conditions of SDP load variation generated by user requests to different service types [5,17]. Service functional properties should be considered in terms of stochastic processes, given their direct dependence on the statistical properties of the served load. To characterize the statistical features of the traffic load on the cloud system, the self-similarity Hurst parameter can be used. Accordingly, in [16] we defined the specific features for the following types of traffic: VoIP, VoD, IPTV Multicast, Web data, etc.

Figure 2. Parallel and sequential sections of elementary services of a virtual structure in cloud SOA [27].

Let us define the concept of a "functional parameter" for a corresponding service type, which should be served on the SOA-based cloud platform, as the statistically predefined and calculated Hurst parameter that corresponds to the specific type of traffic for this service. Thus, for each synthesized instance of the cloud architecture, the corresponding indicator of generalized service availability can be presented and calculated for each functional service type which the SDP can offer, as can the relative performance indicator (Fig. 1) [27] for a given combination of ESCs. We chose both indicators as the criteria for the optimal structural and functional synthesis of service delivery platforms [27].
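Since each service type is characterized by its Hurst parameter, one common way to obtain this "functional parameter" from a measured traffic trace is rescaled-range (R/S) analysis. The sketch below is a minimal version under stated assumptions: the window sizes and the synthetic trace are illustrative, and other estimators (e.g., wavelet-based ones) are equally valid; the paper does not specify which estimator was used.

```python
# Minimal rescaled-range (R/S) estimator of the Hurst parameter H for a
# per-service traffic trace: H is the slope of log(R/S) vs. log(window size).
import numpy as np

def hurst_rs(x: np.ndarray, windows=(16, 32, 64, 128, 256)) -> float:
    """Estimate H from the growth of the rescaled range over window sizes."""
    rs_values = []
    for w in windows:
        chunks = x[: len(x) // w * w].reshape(-1, w)     # split trace into windows
        dev = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
        R = dev.max(axis=1) - dev.min(axis=1)            # range of cumulative deviation
        S = chunks.std(axis=1)                           # per-window standard deviation
        rs_values.append(np.mean(R[S > 0] / S[S > 0]))
    slope, _ = np.polyfit(np.log(windows), np.log(rs_values), 1)
    return slope                                          # slope ~ Hurst parameter H

rng = np.random.default_rng(1)
trace = rng.normal(size=4096)                             # uncorrelated trace: H near 0.5
print(f"estimated H ~ {hurst_rs(trace):.2f}")
```

For an uncorrelated trace the estimate should land near 0.5, whereas self-similar service traffic such as the VoIP or VoD loads discussed here would yield values well above it.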
Next, we present the results of service availability modelling in a scalable cloud service network, performed in MATLAB and Mathcad on the basis of numerical calculations and the approximation of the analytical dependencies presented in [4,29] (Figs. 3, 4). We assume that for the modelling results depicted in Fig. 3a, the cloud platform service layer is implemented without optimal distribution of service flows (virtual machine migration). The case where service flows may be adaptively corrected on demand by virtual machine migration [30] is depicted in Fig. 3b. In the case of a non-optimal configuration of cloud network platform resources, the necessary resources exceed the available resources of the virtual machines, whereas in the case of an adaptive configuration, the overall resource roughly matches the available virtual machine requirements. Figs. 2, 3 (a, b) list various load parameters Λ and values of the Hurst parameter H for the aggregated inbound traffic. P_a is the generalized service availability of the cloud network platform; η is the fraction of ESCs that process the queues in parallel (%); N_a is the number of service nodes in the cloud network platform.

Conclusion

In this paper, we investigated the service availability of a distributed service platform. Based on the obtained results, we performed an effective synthesis of the configuration of a scalable cloud service system, taking the load type and volume into account. The workload requires processing by corresponding services built from different combinations of parallel and sequential ESCs in order to achieve the best indices of performance and system efficiency. From the obtained results, we conclude that service availability in a cloud network system gradually decreases as the load increases. Moreover, the rate of this decrease is slower in system instances where the fraction of parallel ESCs serving the queues prevails over the sequential fraction. With a larger number of service nodes in a cloud platform and a parallel growth of service components, the service availability increases faster. According to the indices obtained during the simulation, we assume that the optimal η/α ratio is 60/40. The impact of the Hurst parameter on service availability manifests itself at its high values: self-similar traffic significantly reduces availability. In the case of a resource shortage, i.e. a sub-optimal cloud service platform configuration, self-similar traffic can reduce the availability index by an average of 15% for λ = 0.2 and up to 30% for λ = 0.9. In the context of an adaptive configuration of cloud platform resources, the use of virtual machine migration keeps the reduction of service availability within 5-10%, and with a service node count above 40 it becomes insignificant. Note that high levels of self-similarity are intrinsic to Internet traffic generation services (H ~ 0.685), IP telephony (H ~ 0.981), Video-on-Demand (VoD) (H ~ 0.608), and service data transmission (H ~ 0.719) [28]. Taking Amdahl's Law into account, we claim that cloud system clustering achieves maximal effectiveness for a cluster size of not less than 100 nodes, which follows from the results of the service availability simulation. However, increasing the size of a cloud platform cluster will necessitate modified thread management methods, in particular modified methods for improving the availability of telecommunication nodes.
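The Hurst values quoted above can be estimated from a traffic trace; one common estimator is rescaled-range (R/S) analysis. The sketch below is a minimal illustration under our own simplifying assumptions (fixed window sizes and a log-log regression of the mean R/S statistic); it is not the estimation procedure used in [16] or [28].

```python
import numpy as np

def hurst_rs(series: np.ndarray, window_sizes=(16, 32, 64, 128, 256)) -> float:
    """Estimate the Hurst exponent of a 1-D series via rescaled-range (R/S)
    analysis: regress log(R/S) on log(window size)."""
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_values = []
        for start in range(0, len(series) - n + 1, n):
            window = series[start:start + n]
            deviations = np.cumsum(window - window.mean())
            r = deviations.max() - deviations.min()   # range of cumulative deviations
            s = window.std()                          # standard deviation (scale)
            if s > 0:
                rs_values.append(r / s)
        if rs_values:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_values)))
    slope, _intercept = np.polyfit(log_n, log_rs, 1)
    return slope

# White noise should give H close to 0.5; self-similar traffic gives H > 0.5.
rng = np.random.default_rng(0)
print(f"H (white noise) ~ {hurst_rs(rng.normal(size=4096)):.2f}")
```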
Overall, the contributions of this paper are the following: a. A new structure of the software code for SDP handlers, with mutual optimization of the componential (modular) operational availability and of the performance gain of the parallelized program components at the SaaS level. It benefits from balancing the sequential and concurrent logic of the data workload handling processes over the underlying platform, delivered as a service. b. A PaaS system configuration based on the optimization of its structural and parametric indices by the criteria of system productivity and service availability. This configuration is achieved after clustering the set of concurrently operating virtual machines undergoing migration into the service network platform. c. IaaS network architecture principles driven by the criteria of network interface availability of the infrastructural telecommunication systems and of their performance index. This means a composition of concurrent elements of active network equipment (e.g., after virtualization) and sequential passive network elements, appropriate for handling traffic streams of different priorities and types.

Considering that for service network systems based on the IaaS architecture the optimal ratio between concurrent and sequential service components approaches the "golden section ratio" (60/40%), it can be concluded that servicing a network workload as a flow of packets should be performed at the network infrastructure with up to 40% of sequential (passive) network equipment. This could be, for example, switching equipment (in some CEN realization) that serves the aggregated workload without any differentiation. The other part of the network equipment should be concurrent in character: it should differentiate the workload and serve separate streams in dedicated components that operate concurrently or in quasi-parallel mode. Examples of such workload are high-priority data flows. We proved that separating the processing of high-priority data flows, at the cost of worsening the non-real-time parameters of other data flows, leads to a significant improvement in the quality of high-priority data flow processing. To avoid mutual impact among concurrent (parallelized) components when synthesizing the service system, the respective software and hardware resources should be dedicated and reorganized as virtualized components. This increases the system's operational resilience and is based on the improvement of processes at the network-dependent levels of the ISO/OSI model. Maintaining the (60/40%) ratio between concurrent and sequential service components, we observe that the system's availability increases. The redistribution of memory (buffering) and of computational system resources is performed; it falls to the synthesized network architecture, and the weighted delay of network traffic transportation is decreased. Thus, the QoS for critical applications with special real-time data flows should be improved. Aggregating the data flow on the sequential components of the service network system should not contribute to a global QoS degradation: such fluctuations of QoS parameters are local in character, confined to the separated network interfaces (e.g., at the IaaS level). Overall, due to the effects of synergy and system emergence, an improvement of the QoS in service network systems (considering the timing parameters of data flows) is observed with respect to the specified traffic priorities.
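The claim that a larger concurrent fraction raises availability can be illustrated with textbook reliability algebra: sequential (series) components multiply their availabilities, while concurrent (redundant parallel) components fail only if all replicas fail. The sketch below is a simplified illustration of that standard model; the 60/40-style splits and the per-component availability are chosen by us for demonstration, and this is not the paper's synthesis model.

```python
def series_availability(avails):
    """A series chain is available only if every component is available."""
    a = 1.0
    for x in avails:
        a *= x
    return a

def parallel_availability(avails):
    """A redundant parallel group fails only if all replicas fail."""
    q = 1.0
    for x in avails:
        q *= (1.0 - x)
    return 1.0 - q

def platform_availability(n_components, parallel_share, a_component=0.95):
    """Toy SDP: one redundant parallel group in series with a sequential chain."""
    n_parallel = round(n_components * parallel_share)
    n_series = n_components - n_parallel
    return (parallel_availability([a_component] * max(n_parallel, 1))
            * series_availability([a_component] * n_series))

for share in (0.2, 0.4, 0.6, 0.8):
    print(f"parallel share {share:.0%}: P_a = {platform_availability(10, share):.4f}")
```

Running this shows P_a rising monotonically with the parallel share, consistent with the tendency discussed above.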
On the basis of solutions (1-3), the tolerance region (4) was considered using the proposed model, which describes the application of Amdahl's Law and, respectively, the performance of concurrent handlers under availability changes of their components, following the Dodonov-Lande model of complex system resilience. It allows the effectiveness of IaaS, PaaS, and SaaS implementations to be improved by the criteria of the system performance index, system availability, and resilience, using a given set of initial parameters, including the service network system structure and the traffic parameters, and considering the modern NFV approach. Additionally, models and approaches for aggregated traffic processing using more precise structural, operational, and statistical system parameters were considered in this paper. In the proposed model, a synthesized configuration is represented as a point inside the R³ cube, and the space of allowed x_opt parameters is limited by the tolerance region D_x.

Figure 3. Service availability modelling results in a scalable cloud service network for the non-optimal case (a) and for the adaptive case (b).

Table 1. The Typical Failure Statistics of Cloud-Systems.
Strange Magnetism

We present an analytic and parameter-free expression for the momentum dependence of the strange magnetic form factor of the nucleon and its corresponding radius, derived in Heavy Baryon Chiral Perturbation Theory. We also discuss a model-independent relation between the isoscalar magnetic and the strange magnetic form factors of the nucleon based on chiral symmetry and SU(3) only. These limits are used to derive bounds on the strange magnetic moment of the proton from the recent measurement by the SAMPLE collaboration.

Introduction

There has been considerable experimental and theoretical interest concerning the question: How strange is the nucleon? Despite tremendous efforts, we have not yet achieved a detailed understanding of the strength of the various strange operators in the proton. A dedicated program at Jefferson Laboratory, preceded by experiments at BATES (MIT) and MAMI (Mainz), is aimed at measuring the form factors related to the strange vector current. In fact, the SAMPLE collaboration has recently reported the first measurement of the strange magnetic moment of the proton [1]. To be precise, they give the strange magnetic form factor at a small momentum transfer, G_M^{(s)}(q² = −0.1 GeV²) = +0.23 ± 0.37 ± 0.15 ± 0.19 nuclear magnetons (n.m.). The rather sizeable error bars document the difficulty of this type of experiment. On the theoretical side, there is as much or even more uncertainty. For example, the spread of the theoretical predictions for the strange magnetic moment, −0.8 ≤ μ_p^{(s)} ≤ 0.5 n.m., clearly underlines the statement made above. In the following we report on a parameter-free prediction [2] for the momentum dependence of the nucleon's strange magnetic (Sachs) form factor, based solely on the chiral symmetry of QCD. In addition, a leading-order model-independent relation between the strange and the isoscalar magnetic form factors has been derived, which allows an upper bound to be placed on the momentum dependence of G_M^{(s)}.

Strangeness Vector Current

The strangeness vector current of the nucleon is defined in terms of q = (u, d, s), the triplet of the light quark fields, and λ⁰ = I (λ^a) the unit (the a = 8 Gell-Mann) SU(3) matrix. Assuming conservation of all vector currents, the corresponding singlet and octet vector currents for a spin-1/2 nucleon can then be written in terms of Dirac and Pauli form factors. Here, q_μ = p′_μ − p_μ corresponds to the four-momentum transfer to the nucleon by the external singlet (octet) source, subject to the normalization of the form factors, with the (anomalous) strangeness moment appearing at zero momentum transfer. In the following we concentrate on the "magnetic" strangeness form factor G_M^{(s)}(q²), defined in analogy to the (electro)magnetic Sachs form factor, for which chiral perturbation theory (CHPT) gives the most interesting predictions.

The strange magnetic form factor

To obtain the complete strange magnetic form factor in CHPT, one only has to consider the diagrams [2] where the external singlet/octet source couples directly to the nucleon, as well as the one where the octet source couples to the intermediate kaon cloud; the pion and the η cloud do not contribute at this order. For the proton (p) and the neutron (n) one finds expressions in terms of Q² = −q². The strange magnetic moment μ_N^{(s)} cannot be directly predicted in CHPT due to the influence of poorly known singlet counterterms [3,2]. However, to O(p³) in CHPT the momentum dependence is given entirely in terms of well-known parameters [2] and the analytic function f(Q²) shown in Fig. 1.
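The displayed equations did not survive extraction; the standard definitions consistent with the surrounding text (the flavor decomposition of the strangeness current and the Sachs combination of Dirac and Pauli form factors) read as follows. These are textbook forms, not a reconstruction of the authors' exact equations or numbering.

```latex
\bar{s}\gamma_\mu s \;=\; \tfrac{1}{3}\,\bar{q}\gamma_\mu \lambda^0 q
\;-\; \tfrac{1}{\sqrt{3}}\,\bar{q}\gamma_\mu \lambda^8 q ,
\qquad q = (u,d,s),

\langle N(p')|\,\bar{q}\gamma_\mu \lambda^{0,8}\, q\,|N(p)\rangle
= \bar{u}(p')\Big[\gamma_\mu\, F_1^{(0,8)}(q^2)
+ \frac{i\sigma_{\mu\nu}q^\nu}{2m_N}\, F_2^{(0,8)}(q^2)\Big]u(p),

G_M^{(s)}(q^2) \;=\; F_1^{(s)}(q^2) + F_2^{(s)}(q^2),
\qquad F_1^{(s)}(0)=0,\quad F_2^{(s)}(0)\equiv \mu^{(s)} .
```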
For small and moderate Q², f(Q²) rises almost linearly with increasing Q².

The isoscalar connection

An SU(3) analysis of the magnetic isoscalar (I = 0) form factor of the nucleon, G_M^{I=0}(Q²), shows [2] that to O(p³) it can be expressed via the same function f(Q²) given in Eq. (6). We can therefore eliminate f(Q²) from both expressions and derive a model-independent relation between the isoscalar magnetic form factor G_M^{I=0}(q²) of the nucleon and the strange magnetic form factor, with μ_s = 0.88 n.m. being the isoscalar nucleon magnetic moment. This relation is exact to O(p³) in SU(3) CHPT. Given that there are also non-strange contributions to the physical isoscalar magnetic form factor, which start to manifest at order q⁴, we consider Eq. (7) as an upper bound on the strange magnetic form factor.

Summary

In summary, we have derived two novel relations which constrain the momentum dependence of the strange magnetic form factor in the low-energy region. The first is based on the observation that, to one-loop order in three-flavor CHPT, the strange form factor picks up a momentum dependence which is free of unknown coupling constants. The second rests upon the observation that the isoscalar magnetic form factor calculated in SU(3) also acquires a momentum dependence which can be related to that of the strange magnetic form factor. One can now utilize the Q²-dependence from the two bounds, Eqs. (5,7), to extract the strange magnetic moment from the SAMPLE result for the strange magnetic form factor. For Q² = 0.1 GeV², the correction is −0.06 and −0.20, respectively, i.e. for the mean value of Ref. [1] we get μ_p^{(s)} = 0.03 … 0.18 n.m., which even for the upper value is a sizeable correction. Clearly, these numbers should only be considered indicative since (a) the current experimental errors are bigger than the correction and (b) higher-order corrections to the relations derived here should be worked out. Finally, we note that the G0 collaboration at TJNAF will also probe this particular range of momentum transfer [4]. We would like to thank the organizers of Baryons98 for providing us with the opportunity to present this work to the physics community.
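As a worked check of the extraction quoted in the Summary (our own arithmetic, reading the quoted "corrections" δ as additive shifts applied to the measured form factor to obtain the moment at Q² = 0):

```latex
\mu_p^{(s)} \;=\; G_M^{(s)}(Q^2 = 0.1~\mathrm{GeV}^2) + \delta,
\qquad \delta \in \{-0.20,\,-0.06\},

0.23 - 0.20 = 0.03~\text{n.m.}, \qquad
0.23 - 0.06 = 0.17 \approx 0.18~\text{n.m.}
```

The small mismatch with the quoted upper value of 0.18 presumably reflects rounding of the underlying corrections.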
Transport across meso-junctions of highly doped Si with different superconductors

We studied the transport properties of meso-junctions of semiconducting (Sm) highly doped Si with different superconductors (Sc) through point contact Andreev reflection (PCAR) spectroscopy. Spectra of low-transparency point contacts between Si and In showed an enhancement in the superconducting energy gap of In. This was due to the effect of an additional gap arising from the Schottky barrier at the Sm-Sc interface. For higher-transparency Si-Nb and Si-Pb point contacts, no gap enhancement was observed, though there were weak sub-gap features. These were due to proximity-induced interface superconductivity, known to occur for Sm-Sc junctions of high transparency.

Introduction

Carrier transport in semiconductor (Sm)-superconductor (Sc) junctions has been probed extensively in the last decade [1-11]. In Sm-Sc junctions with high transparency, a finite supercurrent flow is detected, which is understood in terms of Andreev reflection and the superconducting proximity effect (SPE) [3-5]. Furthermore, the presence of Andreev scattering at the interface is shown to affect the current-voltage characteristics of the junctions. Recently there has been considerable interest in using the SPE to search for proximity-induced superconductivity in topological insulators and semi-metals, and also to hunt for Majorana modes [12,13]. It was proposed by Sau, Lutchyn, Tewari, and Das Sarma that such modes might exist at a contact of a Sc to a Sm nanowire [14,15]. Following this, some experimental signatures of the emergence of zero-energy modes have been observed, which has prompted researchers to hunt for more conclusive proof of the Majorana modes [16]. In recent times, Sm-Sc hybrid devices have also been successfully fabricated and have shown promising applications as superconducting light-emitting devices [17,18], waveguide amplifiers [19], and in the field of quantum information [20]. In most of these applications, the devices have exploited the phenomenon of Andreev reflection. Andreev reflection and the SPE at Sm-Sc interfaces are found to depend on the doping of the Sm and on the interface cleanliness and transparency [21-23]. The point contact spectroscopic technique has been employed to study Andreev reflection at Sm-Sc interfaces, where the Sm-Sc contact is primarily made by micro-fabrication techniques in thin-film geometry. This has been used to study systems like Si/Nb [3,8,9], GaAs/Sn [24], InAs-AlSb/Nb [25], GaAs/Nb [10,26], InAs-Pb [27], etc., where non-equilibrium pair currents have been detected in the Sm. In most of the earlier work, a multiple-peak structure was observed in the conductance spectra of the junctions. Interestingly, in some reports of diffusive Sm-Sc contacts, reflectionless tunneling was observed, which gave rise to huge enhancements of the zero-bias conductance (ZBC), considered a classic signature of phase coherence [28]. In some other reports, the voltage at which the coherence peaks in the conductance spectra were observed was much greater than the voltage corresponding to Δ_SC (the energy gap of the superconductor), with no multiple peaks [10]. The PCAR spectra were theoretically modelled using the Blonder-Tinkham-Klapwijk (BTK) model, which assumes a delta-like potential barrier at the Sm-Sc interface.
It provides an analytical approach for calculating the transport properties based on the Bogoliubov-de Gennes (BdG) equations [29], according to which the peaks in the conductance spectra are expected at a voltage of ~Δ_SC/e. Lissitski et al. further showed the effect of the Schottky barrier formed at the Sm-Sc interface [30]. They predicted that the peaks in the conductance spectra should appear at voltages greater than Δ_SC/e. Through their simple model, they showed that the presence of the Schottky barrier reduced the charge screening, resulting in a gap in the electronic spectra on the Sm side, which presented an additional energy gap to the tunneling of electrons. Very recently, a theoretical model also studied the effect of the Schottky barrier present at the Sm-Sc interface [31]. Its results predict asymmetry in the conductance spectra and huge enhancements in the Andreev signal due to resonant tunneling for an appropriate barrier. Observation of phase-coherent phenomena in Sm-Sc junctions through transport measurements has been restricted to complex geometries with elaborate fabrication techniques. In this paper, we report soft point contact Andreev reflection spectroscopic studies between highly doped n-type Si (n++ Si) and In [32]. PCAR spectra, or conductance vs. bias voltage (G(V) = dI/dV vs. V) spectra, of the n++Si-In junction show clear yet broad spectral gap features at low temperatures (here, the spectra resembled those in the tunneling regime rather than the point contact regime, as discussed in Ref. [29]). For most of the contacts, these broad spectral features with distinct coherence peaks were also observed at temperatures as high as about 80% of the transition temperature (Tc) of In. Analyzing the spectra measured at the lowest temperatures (~1.7 K) using the BTK model, a high value of the superconducting energy gap (Δ) for In was obtained (~1.33 times the BCS gap of In). Also, the parameter characterizing the transparency of the contact (z, proportional to the interface barrier height V0) remained high (z > 1), indicating that the transparency was low. However, analysis of the temperature dependence of the PCAR spectra gave values of Δ(T) which seemed to follow a BCS variation, with the gap closing at the Tc of In. Interestingly, such anomalous features were not observed in soft point contact spectra of Al-In junctions, which mimicked a normal metal-superconductor (N-S) junction (note: the Al film was measured above 1.2 K, the Tc of Al). This indicated that the observations for the n++Si-In junction were reflective of a Sm-Sc interface. Furthermore, tip-induced superconductivity, reported from point contact measurements on highly doped Si with a Ag tip [33], was also not observed in the present study. Interestingly, similarly broad spectral features were observed in hard point contact measurements between n++Si and a Pb tip, the contacts again being of low transparency. However, for the contacts studied for the n++Si-Nb junctions, the PCAR spectra were more in the point contact regime, indicating that the transparency of the contact was high. Here, the spectra were similar to previous reports, consistent with the theory of proximity-induced interface superconductivity (PIIS) in a Sm-Sc junction, showing a smaller gap in addition to the gap feature associated with the superconducting Nb gap (originating from Andreev reflection) [9]. We analyzed our data based on the models presented in Refs. [30] and [9].
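For reference, the BTK normalized conductance can be computed directly from the standard Andreev (A) and normal (B) reflection probabilities; the sketch below implements the textbook zero-temperature BTK expressions with a dimensionless barrier strength z. It is a generic illustration of the model, not the authors' fitting code, and it omits the thermal smearing and Γ broadening used in the actual fits.

```python
import numpy as np

def btk_conductance(E, delta, z):
    """Zero-temperature BTK conductance normalized to the normal state:
    G_NS/G_N = (1 + z^2) * (1 + A(E) - B(E)), with Andreev (A) and normal (B)
    reflection probabilities taken from the standard BTK table."""
    E = np.atleast_1d(np.abs(np.asarray(E, dtype=float)))
    A = np.empty_like(E)
    B = np.empty_like(E)
    sub = E < delta                                   # sub-gap energies, |E| < Delta
    A[sub] = delta**2 / (E[sub]**2 + (delta**2 - E[sub]**2) * (1 + 2*z**2)**2)
    B[sub] = 1.0 - A[sub]
    e = E[~sub]                                       # |E| >= Delta
    u2 = 0.5 * (1.0 + np.sqrt(e**2 - delta**2) / e)   # u0^2
    v2 = 1.0 - u2                                     # v0^2
    gamma2 = (u2 + (u2 - v2) * z**2) ** 2
    A[~sub] = u2 * v2 / gamma2
    B[~sub] = (u2 - v2)**2 * z**2 * (1 + z**2) / gamma2
    return (1 + z**2) * (1.0 + A - B)

# Example: a low-transparency contact (z = 1.5) with an In-like gap of 0.7 meV;
# the result shows tunneling-like coherence peaks near eV = +/- Delta.
bias = np.linspace(-3.0, 3.0, 601)                    # meV
g = btk_conductance(bias, delta=0.7, z=1.5)
```

At z = 0 the sub-gap conductance doubles (pure Andreev reflection), while for z > 1 it is suppressed and the spectrum resembles a tunneling curve, which is the regime reported for the n++Si-In contacts.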
The broad spectral features of the n++Si-In contacts and the n++Si-Pb contacts could be explained on the basis of the Schottky barrier present at Sm-Sc interfaces, consistent with the model of Ref. [30]. Our experimental findings indicate that the transparency of the contact is vital to understanding the transport across Sm-Sc interfaces.

Experimental details

Commercially available silicon crystals (n-type) were used for the measurements presented here. The type and level of doping were confirmed by in-house Hall measurements at different temperatures between 100 K and 2 K. These doped crystals had a resistivity of ~0.9 mΩ cm, with a doping concentration of ~3-5 × 10^25 per m³, as seen from the Hall data at 25 K in Fig. 1(b). It is worth mentioning that the mean free path in the n++ Si is ~12 nm and the coherence length is given in Fig. 1(a). Transport measurements of the n++Si/In junction were done in the standard four-probe configuration. A Keithley current source was used to bias the junction and a nanovoltmeter was used to measure the voltage drop. The I-V curves were numerically differentiated to obtain the conductance spectra (dI/dV vs. V). In a soft PC, though the macroscopic contact area is large, the effective electrical contact happens over a much smaller area due to the presence of parallel micro-bridges in the contact area. Contacts in both the ballistic regime (where the effective contact diameter d << l_el, the elastic mean free path of the electrons) and the diffusive regime (where d << (l_el·l_in/3)^0.5, l_in being the inelastic mean free path) can provide the same useful energy-resolved information from the PC [34,35]. The microscopic area of the contact was tuned by applying small voltage pulses. The contact resistance of the soft PC data presented here was about 2-20 ohm, which corresponded to an apparent contact diameter of about tens of nanometers [34]. The effective contact diameter is therefore expected to be smaller, implying that the transport is likely to be in one of the two regimes, ballistic or diffusive. Since the actual microscopic PC is unknown, standard diagnostics based on the shape of the conductance spectra were done to ensure that the spectra were not in the thermal regime [32,36,37]. In the thermal regime, conductance dips are observed at biases V >> Δ/e, where Δ is the superconducting energy gap. These spectral features are associated with local heating at the contact, which leads to the bias current reaching the critical current of the superconductor [37]. By slowly tuning the contact and increasing the contact resistance, the contact diameter can be decreased to enter the non-thermal regime of transport, where these dips disappear and the characteristic double peaks, symmetric about V = 0, distinctly appear. Since l_el is quite low in n++ Si, it is more likely that the transport is in the diffusive regime, as was reported in similar measurements on n++Si-Nb junctions [9]. Hard point contacts on Si were made by the usual needle-anvil technique with different tips such as Nb, Pb, Ag, Cu, Pt-Ir, etc. The pressure on the contacts was adjusted at room temperature to keep the normal-state resistance to a few ohms. As explained above, similar diagnostics were done to eliminate any spectra in the thermal regime.

Results and Discussions

Fig. 1(c) shows the PCAR spectra acquired by soft PC on highly doped n-type Si at a temperature T < Tc of In.
Four distinct observations emerge from the data: i) Andreev reflection is occurring at the Sm-Sc interface, leading to the characteristic double-peak structure of a PCAR spectrum (from the spectra, the contacts look more like they are in the tunneling regime) [29]; ii) unlike the symmetric double-peak structure seen in a typical PCAR spectrum for N-Sc interfaces, the spectra in Fig. 1(c) are asymmetrical; notably, the asymmetry decreases on decreasing the normal-state contact resistance RN, i.e. on increasing the conductance (dI/dV) (see lower panel in Fig. 1(c)); iii) the coherence peaks are quite pronounced even at temperatures as high as ~0.8 Tc, where Tc is the transition temperature of superconducting In; and iv) the spectra are relatively broad, with a peak-to-peak voltage of ~2.0-2.3 mV. The asymmetrical nature of the PCAR spectra can be understood on the basis of the recent work by Bouscher et al., who developed a theoretical model for the conductance of Sm-Sc junctions with arbitrary potential barriers [31]. Their results show that, due to the presence of the Schottky barrier between the degenerate Sm and the Sc, the process of retro-reflection of the quasi-particles becomes non-ideal and hence leads to an asymmetry in the conductance spectra. This asymmetry was also seen in earlier experiments on Si-Nb contacts, where the pair current was observed in transport measurements of the junctions [3]. In our data, the decrease in asymmetry of the PCAR spectra with decreasing contact resistance can be understood from the fact that tuning the contact with local voltage pulses reduces the potential barrier at the junction. In Fig. 1(d), PCAR spectra are shown for different contact resistances RN. In this set of spectra, the asymmetry is removed manually by normalizing each spectrum with the spectrum obtained at T > Tc for the same contact. From these spectra, in addition to the four distinguishing features mentioned above, we can see that with decreasing RN, the coherence peaks diminish considerably. Following the work of Heslinga et al. [9], we fit the spectra with the BTK model to obtain the values of the superconducting energy gap (Δ) of bulk In. A broadening parameter Γ was introduced into the BTK model to account for all sources of non-thermal broadening [38]. Besides Δ and Γ, the interface transparency was modelled with a dimensionless parameter z, taken to be proportional to the barrier potential V0 at the N-Sc interface in the BTK model. For most of the spectra, z ranged between 1.0 and 1.7, indicating that the contact was primarily in the tunneling regime. This can be explained on the basis of the large mismatch between the Fermi levels of the Sm and the Sc, which reduces the transparency at the interface. The values of Δ and Γ obtained from fitting the PCAR spectra at different RN are shown in Fig. 1(e) and in Table I. [Note: some of the analyzed spectra whose data are presented were measured at a base temperature of T = 1.7 K, and many of the spectra were measured at a base temperature of T = 2.8 K.] From Fig. 1(e), we can see that for some of the contacts with RN < 6 ohm, Γ becomes almost comparable to Δ, which explains the broadened PCAR spectra observed for these contacts, as shown in Fig. 1(d). Furthermore, it appears that the value of Δ for In is enhanced at the interface, and the transport can be successfully described by the BTK model [39].
In order to re-emphasize that this is not due to any experimental artefact, we carried out soft PC spectroscopy on an Al film with In at temperatures greater than the Tc of Al (~1.2 K). This behaved like a normal N-S contact. The PCAR spectra (shown in the upper panel of Fig. 2(a)) were fitted with the BTK model. The values of Δ(T) followed the BCS variation, as seen from the lower panel of Fig. 2(a). Next, we explore possibilities (iii) and (iv). We measured the temperature dependence of the normal-state resistance RN of the soft PC, which is shown in Fig. 3(a) for a contact with RN = 8.0 ohm. A single drop in the PC resistance is seen at T = 3.4 K, corresponding to the Tc of In (see inset of Fig. 3(a)). The temperature variation of the PCAR spectra for this contact is shown in Fig. 3(b). The PCAR spectra became completely featureless at T > 3.4 K, indicating a closure of the gap at Tc. Each spectrum was analyzed using the modified BTK model to obtain Δ(T), which is plotted as a function of temperature in Fig. 3(c) as red solid circles. We have similarly analyzed spectra for another contact with RN = 18 ohm; the Δ(T) obtained from the fits is shown in Fig. 3(c) as blue squares. Also shown in this plot is the BCS variation of the gap for In (green dashed line). As is clearly seen, the values of Δ(T) are consistently higher at all temperatures than those expected for indium. Interestingly, however, the temperature variation of the gap values of In obtained from the fits mimics the BCS variation (black dashed lines in the figure), albeit with an extremely high ratio of Δ/Tc. We next explore possibility (iii) in greater detail on the basis of the work reported in Ref. [30]. It has been shown there that the presence of a Schottky barrier at the Sm-Sc interface results in a reduction of the charge screening on the Sm side. This leads to an additional energy gap, EB, which along with the superconducting energy gap Δ presents a larger energy gap of (EB + Δ) to single-electron tunneling. Consequently, the conductance spectra of the interface show peaks at voltages of (EB + Δ)/e at temperatures T << Tc. We analyze our data based on this model. According to the model, assuming that on the Sm side states get depleted in an energy range of EB across the Fermi level, EB can be evaluated from an expression involving L*, the phase coherence length across the interface, which is usually the minimum of the relevant length scales. The variation of Δ between contacts (Fig. 1(e)) can also be understood on the basis of the variation of the interface Schottky barrier from contact to contact, which would affect EB. The parameters for different contacts (different RN) obtained from the analysis are shown in Table 1. We estimated EB in three different ways. EB* was obtained from (eVpk − Δ_Sc), where Vpk was the peak position of the coherence peak observed in the PCAR data and Δ_Sc was the expected superconducting energy gap of In at the measurement temperature T. EB# was simply obtained from (Δ_BTK − Δ_Sc), where Δ_BTK was the value of the gap obtained by fitting the experimental PCAR spectra with the BTK model. Finally, EB was obtained from Fig. 3(d) using the crude expression for EB mentioned above. Interestingly, a fairly good match was obtained between EB* and EB. Thus, the model of charge screening by the Schottky barrier not only explains our data for the n++Si-In contacts qualitatively, but also provides a reasonably good quantitative match.
Next, we investigated the magnetic field evolution of the PCAR spectra at different temperatures for the n++Si-In contacts. PCAR spectra were measured at constant temperature for different magnetic fields at T = 1.9 K, 2.1 K, 2.4 K, 2.6 K, and 2.8 K. The spectra obtained at T = 1.9 K are shown in Fig. 4(a). Each spectrum was fitted with the single-gap BTK model to extract Δ_s(T) for different magnetic fields. This is plotted for different temperatures in Fig. 4(b). At the lowest measured temperature of 1.9 K, the critical field where the gap would close appears to be greater than 3.5 kG. We attribute it to In, though it seems to be substantially enhanced from the 0.28 kG of bulk In [40]. This is not surprising, as critical fields are known to be enhanced at mesoscopic junctions [41]. Furthermore, the magnetic field variation at higher temperatures appears more or less linear. To further check the effect of the Schottky barrier on the peak position of the PCAR spectra, we explored interfaces of n++Si with other superconductors, namely Pb and Nb. We carried out hard point contact spectroscopy with these superconducting tips on n++Si. These experiments also help us investigate the final possibility, i.e. (iv) PIIS, and how it influences PCAR spectra. Since the Tc of both these superconductors is higher than that of In, any signature of another superconducting phase (even a weak one) at the interface would result in well-resolved peaks in the PCAR spectra. It is worth mentioning that Andreev reflection studies done on n++Si-Nb junctions [9], as well as on heterostructures of many topological materials like Bi2Se3, Bi2Te3, etc. [12,13] with superconductors, have shown the presence of a sub-gap feature related to the proximity-induced gap at the interface of the Sm/topological semi-metal. These sub-gap features are in addition to the gap features arising from Andreev reflection. Fig. 5(a) shows the temperature variation of the PCAR spectra for a n++Si-Pb junction. The spectra became featureless at the Tc = 7.0 K of the point contact (as seen from the temperature variation of RN of the contact in the inset). However, similar to the n++Si-In junctions, the contact appears to be in the tunneling regime and gives broad spectral features. The peak-to-peak voltage was 5.0 mV. This data can also be analyzed based on the Schottky barrier model: for the n++Si-Pb interface, EB is expected to be ~0.78-1.28 meV, which again quantitatively explains the observed peak position in the PCAR spectra (see Table 1). For another contact of the n++Si-Nb junction, sub-gap features were seen similar to those in many Sm-Sc junctions [9]. It is worth mentioning that PIIS gives rise to the zero-bias peak (seen in Fig. 5(c)) associated with Andreev scattering, known to occur in very transparent Sm-Sc junctions. Thus, the presence of the prominent peak at a voltage corresponding to the gap of the Sc, together with the sub-gap features, clearly indicates that these low-z contacts cannot be analyzed on the basis of the Schottky gap (see Table I). Depending on the transparency, the PCAR spectra are seen to be influenced differently: for low-transparency contacts, the Schottky barrier primarily influences the transport, shifting the coherence peaks from Δ/e to higher biases, while for highly transparent junctions, proximity-induced interface superconductivity dominates the transport, giving distinctive sub-gap features and a zero-bias peak in the conductance spectra.

Acknowledgements

We would like to thank Mr. Soumyajit Mandal and Prof. P.
Raychaudhuri for the Hall measurements. SB acknowledges partial financial support from the Department of Science and Technology, India, through No. SERB/F/1877/2012.

Author Contributions

PP carried out the experiments and analyzed the data. SB conceptualized and supervised the project and refined the analysis by invoking the model of a degenerate semiconductor and a superconductor. Both authors discussed the results, and SB wrote the manuscript.
A cross-sectional study of ophthalmologic examination findings in 5385 Koreans presenting with intermittent exotropia

The Korean Intermittent Exotropia Multicenter Study (KIEMS) was a retrospective, cross-sectional, multicenter study of intermittent exotropia involving 65 strabismus specialists from 53 institutions in Korea. The purpose of this study was to present the ophthalmologic findings of intermittent exotropia from the KIEMS. Consecutive patients with intermittent exotropia of ≥ 8 prism diopters (PD) at distance or near fixation were included. Best-corrected visual acuity, cycloplegic refraction data, angles of deviation at several cardinal positions, ocular dominance, fusion control, oblique muscle function, and binocular sensory outcomes were collected. A total of 5385 participants (2793 females; age 8.2 years) were included. The non-dominant eye was more myopic than the dominant eye (−0.60 vs. −0.47 diopters, P < 0.001). Mean exodeviation angles were 23.5 PD at distance and 25.0 PD at near fixation. The basic type (86.2%) was the most common, followed by the convergence insufficiency (9.4%) and divergence excess (4.4%) types. Alternating ocular dominance and good fusion control were more common at near than at distance fixation. Good stereopsis at 40 cm was observed in 49.3% with the Titmus stereo test (≤ 60 arcsec) and in 71.0% with the Randot stereo test (≤ 63 arcsec). Intermittent exotropia was mostly diagnosed in childhood, and patients with the condition showed relatively good binocular function. This study may provide objective findings of intermittent exotropia in a most reliable way, given that the study included a large study population and investigated comprehensive ophthalmologic examinations.

Such examinations can be performed only manually. To obtain comprehensive and convincing information about the clinical characteristics of intermittent exotropia, a large-scale study, regardless of clinical considerations such as age, amount of exotropia angle, and necessity of surgical intervention, is needed. Also, the ophthalmologic examinations need to be conducted by strabismus specialists using a standardized protocol. The Korean Intermittent Exotropia Multicenter Study (KIEMS) is a large-scale, nationwide, multicenter study investigating the clinical features of intermittent exotropia using a standardized protocol. It was initiated by the Korean Association of Pediatric Ophthalmology and Strabismus (KAPOS), whose members are strabismus specialists. The KIEMS is one of the largest clinical studies on intermittent exotropia to date and is expected to present the overall features, including the subjective and objective features, of intermittent exotropia. This study was conducted to present the objective ophthalmologic findings from the KIEMS.

Results

Baseline characteristics of participants. A total of 5385 participants were included in this study, with an age of 8.2 ± 7.6 years (mean ± standard deviation; range, 0.3-106.7 years). The age distribution of all participants has been previously described [18]. The mean spherical equivalent (SE) was −0.57 ± 1.89 diopters (D) (range, +7.0 to −12.88 D) in the right eye and −0.61 ± 1.96 D (range, +8.75 to −14.00 D) in the left eye (P = 0.666, paired t-test). The non-dominant eye at distance fixation tended to be more myopic than the dominant eye (SE: −0.60 ± 1.98 vs. −0.47 ± 1.74 D, P < 0.001, paired t-test) (Table 1).
Discussion

This study described the objective examination findings from the KIEMS, which is one of the largest clinical studies on intermittent exotropia to date. Although many previous studies on the clinical characteristics of intermittent exotropia have been conducted, the KIEMS is expected to provide the most comprehensive and reliable overview of the clinical spectra of intermittent exotropia in terms of sample size and study parameters. In this study, the number of female participants (51.9%) was comparable to that of male participants (48.1%). In a previous population-based cohort study including participants aged < 19 years in the United States, a female predominance (64.1%) was reported [19]. Another multicenter cohort study in the United Kingdom also reported a female predominance [20]. In contrast, in Singaporean [4] and Chinese [5] population-based studies of children aged < 6 years (mostly of Chinese ethnicity), the prevalence of exotropia showed no sex difference when compared with the general population. In addition, a previous population-based study in Korea reported that sex was not significantly associated with clinically significant intermittent exotropia (≥ 15 PD) in adolescence [7]. Studies in Asian countries, including our study, have found no sex predominance in the prevalence of intermittent exotropia, whereas Western studies tended to show a female predominance. Future studies with age or ethnicity standardization are needed to clarify the sex differences in intermittent exotropia. In this study, basic-type exotropia (86.2%) was the predominant type, followed by convergence insufficiency-type (9.4%) and divergence excess-type (4.4%) exotropia, when classified based on a ≥ 10 PD difference between the distance and near exotropia angles. Patients with the convergence insufficiency type were older than those with the other two types. Similarly, a recent study in Korea reported that basic-type exotropia was the most prevalent type (79.2%) in 355 patients with exotropia [14]. A population-based study from China reported a 74.7% prevalence of basic-type exotropia among 166 patients with intermittent exotropia aged 3-6 years [5]. Rutstein and Corliss also reported basic-type exotropia as the most common type in 73 patients [21]. A study from Singapore reported that divergence excess-type exotropia had a higher prevalence (59.5%) than basic-type exotropia (27%) in 453 patients with intermittent exotropia; however, the authors speculated that some patients with basic-type exotropia may have been inadvertently classified as the divergence excess type, as the children were not routinely patched to eliminate tenacious proximal fusion [22]. In contrast, Burian and Franceschetti observed basic-type exotropia in 33% and convergence insufficiency-type exotropia in 55% of 237 prospectively collected consecutive patients, although they used stricter standards in classifying cases as convergence insufficiency-type exotropia [23]. Kushner and Morton observed divergence excess-type exotropia in 48.5% of 202 patients with intermittent exotropia, the most prevalent type in their series (although this included 80 patients, 39.6% of the total, with simulated divergence excess, i.e. within a distance-near angle difference of 10 PD after 1 h of monocular patching), and basic-type exotropia in 38.6% [24]. They reported that convergence insufficiency-type exotropia was more common in older participants, consistent with the current study (Table 4).
The proportions of the intermittent exotropia types may be affected by the inclusion criteria used or by the clinical characteristics of the participants. Alternating ocular dominance (48.3% at distance, 61.0% at near) was more common than right or left dominance (29.1% for the right eye and 22.6% for the left eye at distance; 22.1% for the right eye and 16.9% for the left eye at near) in this study. The proportion of alternating ocular dominance at near fixation was larger than that at distance fixation. Similarly, fusion control was better under the near viewing condition than under the distance viewing condition in this study. Previous studies investigating fusion control in patients with intermittent exotropia showed similar results [25-27]. In monocular dominance, there is a preference for one eye over the other under the binocular viewing condition, whereas no such preference exists in alternating ocular dominance [28]. It is well known that patients with intermittent exotropia rarely manifest amblyopia in either eye (if amblyopia occurs, it mostly manifests in the non-dominant eye), because the eyes can remain aligned at least under the near fixation condition [29]. Therefore, the results of this study confirm that patients with intermittent exotropia show good binocular interaction. More than 60% of the participants saw four or five lights in the distance Worth four-dot test, which suggests that patients with intermittent exotropia have relatively good binocular function at distance fixation, in which the sensory function of one eye does not overwhelm that of the other; however, seeing four lights in the test does not necessarily mean that the participants had central foveal fusion [30]. Monocular suppression was observed in < 40% of the patients, evenly in each eye. In the Titmus stereotest at 40 cm, approximately 50% of the participants showed good stereopsis of ≤ 60 arcsec, reflecting central fusion at near fixation. Moreover, in the Randot stereoacuity test at near fixation, > 70% of the participants showed ≤ 63 arcsec of stereopsis. Romanchuk et al. reported that 72.5% of their 109 patients showed stereopsis better than 60 arcsec in the Titmus stereo test, even after ≥ 9 years of follow-up from the initial visit [31]. Similarly, Mohney et al. reported that 63% of 152 patients showed 60 arcsec or better stereopsis in the Randot stereo test in a Pediatric Eye Disease Investigator Group study [32]. It is well known that patients with intermittent exotropia have relatively good near stereopsis [1]. The participants in this study can be assumed to have similarly good binocular function, as previously reported. This study should be viewed in the light of its limitations. Owing to the retrospective study design, data collection could not be performed as strictly as in a prospective study, which may have inevitably biased the patient selection or data collection process. Moreover, data were collected by 65 strabismus specialists from 53 different institutions, and the circumstances of the ophthalmologic examinations may have differed among the investigators, possibly affecting the study results. Despite efforts to reduce variability through the use of a standardized protocol and standardized case report forms, this study has the same limitations as many other multicenter studies. In conclusion, this large observational study that included 5385 participants reported the objective findings of intermittent exotropia.
In most of the study participants, intermittent exotropia was diagnosed during childhood (age, 8.2 ± 7.6 years). Basic-type exotropia was the most common type, followed by the convergence insufficiency and divergence excess types. In the assessment of fusion control, good to fair control was observed in 69.2% at distance fixation and in 79.8% at near fixation, and "good stereopsis" (≤ 60 arcsec in the Titmus stereotest and ≤ 63 arcsec in the Randot stereo test) was observed in 49.3% and 71.0%, respectively. This study potentially provides the most reliable information on the general clinical spectra of intermittent exotropia thus far, given the large study size and the coordination among many specialized investigators. Future studies using the KIEMS data are expected to provide more information about various aspects of intermittent exotropia.

Methods

The KIEMS is a nationwide, retrospective, observational, cross-sectional, multicenter study. The protocol of the KIEMS has been described elsewhere [18]. Briefly, the study was conducted as a collaboration among 65 strabismus specialists who were members of KAPOS and affiliated with 53 institutions in Korea. The medical records of patients who visited the eye clinic of each institution for the first time between March 1, 2019, and February 29, 2020, were reviewed. Participants with intermittent exotropia of ≥ 8 prism diopters (PD) at distance fixation (at 6 m) or near fixation (33 cm) in the prism and alternate cover test (PACT), regardless of age, were included in this study. Participants who had a previous history of strabismus surgery were excluded. Participants were also excluded if they had signs of incomitant strabismus, ocular conditions affecting vision or a prior ocular surgical history, chromosomal anomalies, or systemic disorders such as congenital anomalies or neurologic disorders. The KIEMS protocol conformed to the tenets of the Declaration of Helsinki. The protocol was approved by the Institutional Review Board of Kim's Eye Hospital (KEH 2020-05-007) and by each participating institution. The requirement for informed consent was waived by the Institutional Review Board of Kim's Eye Hospital because the study used retrospectively collected clinical data and the data were accessed anonymously. The KIEMS collected data from subjective questionnaires completed by patients or guardians and from the results of objective ophthalmologic examinations conducted by strabismus specialists. In this study, we collected and analyzed the following objective data from the ophthalmologic examinations in the KIEMS: age, sex, best-corrected visual acuity, cycloplegic refraction, angles of deviation (measured under distance [6 m] and near [33 cm] viewing conditions using accommodative targets with the patients' best optical correction), and associated strabismus (e.g., dissociated vertical deviation, vertical deviation, and oblique muscle dysfunction). Vertical deviation was defined as hypertropia/hypotropia of ≥ 5 PD in the primary position. Lateral incomitance was defined as a decrease in the exo-angle of ≥ 20% in right or left gaze, as compared with that in the primary position. "A" pattern exotropia was defined as a condition in which the exotropia angle at downgaze was higher by ≥ 10 PD than that at upgaze. Likewise, "V" pattern exotropia was defined as a condition in which the exotropia angle at upgaze was higher by ≥ 15 PD than that at downgaze.
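The distance-near and gaze-position criteria described above amount to simple classification rules; a minimal sketch (our own function and variable names, using the ≥ 10 PD distance-near difference stated in the Discussion and the A/V pattern thresholds stated above):

```python
def classify_exotropia(distance_pd: float, near_pd: float) -> str:
    """Classify intermittent exotropia by the distance-near angle difference
    (>= 10 PD criterion, as used in the KIEMS analysis)."""
    if distance_pd - near_pd >= 10:
        return "divergence excess"
    if near_pd - distance_pd >= 10:
        return "convergence insufficiency"
    return "basic"

def classify_pattern(upgaze_pd: float, downgaze_pd: float) -> str:
    """'A' pattern: downgaze angle exceeds upgaze by >= 10 PD;
    'V' pattern: upgaze angle exceeds downgaze by >= 15 PD."""
    if downgaze_pd - upgaze_pd >= 10:
        return "A pattern"
    if upgaze_pd - downgaze_pd >= 15:
        return "V pattern"
    return "no pattern"

print(classify_exotropia(distance_pd=25, near_pd=40))  # convergence insufficiency
print(classify_pattern(upgaze_pd=35, downgaze_pd=20))  # V pattern
```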
Right or left ocular dominance was determined to be present when the right or left eye had a shorter duration of dissociation during the uncover test, and alternating ocular dominance was identified when the duration of dissociation was similar between the two eyes. Fusion control under the distance and near viewing conditions was also investigated and classified as follows: good control, when ocular fusion was disrupted only after the cover test at distance fixation and was rapidly regained without blinking or fixating ocular movements; fair control, when ocular fusion was regained only after blinking or fixating movements following disruption by cover testing at distance fixation; and poor control, when ocular fusion was spontaneously broken without fusion disruption or was not regained despite blinking or refixation [33]. For sensory status evaluation, the Worth four-dot test (Richmond Products, Albuquerque, NM, USA) under the distance viewing condition, and either the Titmus stereotest (Stereo Optical Co., Inc., Chicago, IL, USA) or the Randot stereotest (Vision Assessment Corporation, Elk Grove Village, IL, USA) under the near viewing condition, were performed. Stereoacuity of ≤ 60 arcsec in the Titmus stereotest or ≤ 63 arcsec in the Randot stereo test was defined as "good stereopsis." More detailed findings of the ophthalmologic examinations are provided in the article describing the KIEMS methodology [18]. Statistical analysis was performed using SPSS (version 21.0; IBM Corporation, Armonk, NY, USA). Statistical significance was set at P < 0.05. Bonferroni correction was applied to the P value for subgroup analyses. Mean ages were compared between male and female participants using an independent t-test. Exodeviation angles in the secondary positions and in the right and left head-tilted positions, compared with the exodeviation angle in the primary position, were analyzed using a paired t-test. The differences in the ratios of ocular dominance and fusion control under the distance and near fixation conditions were compared using Pearson's chi-square test.

Data availability

The data supporting the findings of the current study are available from the corresponding author upon reasonable request.
Safety and efficacy of monthly high-dose vitamin D3 supplementation in children and adolescents with sickle cell disease

Little is known about the impact of vitamin D supplementation on hand grip strength (HGS) and health-related quality of life (HRQoL) in children and adolescents with sickle cell disease (SCD). We aimed to evaluate the safety and efficacy of monthly high-dose vitamin D3 supplementation and its implications for bone mineral density (BMD), HGS, and HRQoL in patients with SCD and healthy controls. The study included 42 children with SCD and 42 healthy matched controls. The study participants were supplemented with high-dose monthly oral vitamin D3. Changes in the serum level of 25(OH) vitamin D3, maximum HGS, and BMD from baseline to 6 months were assessed, and the HRQoL questionnaire and Childhood Health Assessment Questionnaire (CHAQ) were used to evaluate functional capacity. At baseline, SCD subjects had poorer growth status, indicated by negative Z scores. Suboptimal BMD was detected by a significantly lower Z score, and lower HGS and worse HRQoL parameters were found compared to the controls (P < 0.001). Median 25(OH) vitamin D3 was significantly lower in SCD patients compared to controls (16.5 vs. 28 ng/mL, respectively; P < 0.001). After 6 months of vitamin D supplementation, there was significant improvement in the DEXA Z-score (P < 0.001), limitation of physical health (P = 0.02), pain scores (P < 0.001), and CHAQ grades (P = 0.01) in SCD patients. A significant improvement in HGS (P < 0.001 and P = 0.005) as well as the CHAQ score (P < 0.001 and P = 0.003) was detected in the SCD group and controls, respectively. There were no reported clinical adverse events (AEs) or new concomitant medications (CMs) during the study, and safe levels of Ca and 25(OH)D3 were observed at 3 and 6 months in both groups. There was a significant positive correlation between HGS and total physical score (r = 0.831, P < 0.001) and a negative correlation with the CHAQ score (r = −0.685, P < 0.001). We also detected a significant positive correlation between vitamin D levels at 6 months and HGS (r = 0.584, P < 0.001) and pain score (r = 0.446, P < 0.001), and a negative correlation with the CHAQ score (r = −0.399, P < 0.001). Conclusion: Monthly oral high-dose vitamin D supplementation was safe and effective in improving vitamin D levels, HGS, and HRQoL in SCD children and healthy subjects, and BMD scores in SCD patients. Further randomized controlled trials are warranted to assess an optimal dosing strategy and to investigate the impact on clinically significant outcomes in children and adolescents with SCD and their healthy counterparts. Trial registration: ClinicalTrials.gov, identifier NCT06274203, date of registration: 23/02/2024, retrospectively registered.

What is known:
• Several studies have reported a high prevalence of vitamin D deficiency and suboptimal bone mineral density (BMD) in sickle cell disease (SCD) patients.
• Musculoskeletal dysfunction is reported in SCD patients, with a negative impact on physical activity and health-related quality of life (HRQL).
• Little is known regarding the impact of vitamin D3 supplementation in children and adolescents with SCD.

What is new:
• We found that monthly oral high-dose vitamin D3 supplementation was safe, tolerated, and effective in improving serum vitamin D levels, HGS, BMD scores, and HRQL in SCD patients.
Introduction

Sickle cell disease (SCD) is a hereditary disorder characterized by chronic hemolytic anemia and vaso-occlusive crises (VOC) [1]. Musculoskeletal dysfunction is reported in SCD patients. Several factors, such as anemia, VOC-related stresses (e.g., hypoxia, ischemia, oxidative stress, inflammation, and necrosis), as well as muscle microvascular remodeling, may contribute to muscle dysfunction in SCD patients [2,3]. As a result, the attenuated muscle strength, particularly hand grip strength (HGS), may have a negative impact on physical activity and health-related quality of life (HRQL) [3]. Several studies have reported a high prevalence of vitamin D deficiency and suboptimal bone mineral density (BMD) in SCD patients, which is linked to worse disease outcomes [4][5][6][7]. However, only a few studies have reported the safety and impact of vitamin D supplementation on HGS and HRQL in pediatric SCD [8]. Handheld dynamometry is considered a valid, reliable, and simple tool for the objective measurement of HGS [9,10]. HRQL is a crucial outcome measure that provides insight into the well-being of children with SCD [11]. The Childhood Health Assessment Questionnaire (CHAQ) is commonly used to assess health status in children, and the updated versions have shown improved validity in a variety of musculoskeletal problems [12,13]. This study aims to evaluate the safety and efficacy of monthly high-dose vitamin D3 supplementation in patients with SCD and healthy controls and its implications on BMD, HGS, and HRQL.

Material and methods

We enrolled 42 children with SCD (HbSS or HbSβ0 thalassemia genotype), aged ≤ 18 years old, male or female, at a steady state (≥ 1 month from blood transfusion and ≥ 14 days following one of the SCD complications, such as hospitalization for VOC or acute chest syndrome (ACS)), with a stable hemoglobin (Hb) level near their usual baseline and a stable dose of hydroxyurea (mg/kg) for at least 90 days before enrollment. Eligible patients were recruited from the Pediatric Hematology outpatient clinic at Zagazig University. A control group of 42 healthy age- and sex-matched children was also included. We excluded SCD patients who were on chronic blood transfusion therapy, had comorbid chronic conditions, or were on medications known to interfere with calcium or vitamin D absorption or metabolism, as well as those with known hypercalcemia or vitamin D hypersensitivity, vitamin D treatment for rickets, presence of urolithiasis, liver or renal impairment, or malabsorption disorders. We also excluded obese children with a body mass index (BMI) > 85th percentile for age and sex [14], as adipose tissue is the main site for storing vitamin D [15]. The study was approved by the Institutional Review Board (IRB) of the Faculty of Medicine, Zagazig University (IRB No. ZU-IRB #10584). Legal guardians signed informed written consent before participation in the study, and assent was taken from children aged 12-18 years.
Study design

This was an interventional study (ClinicalTrials.gov identifier: NCT06274203, registered on 23/02/2024, retrospectively registered). Subjects within each group, SCD or controls, received monthly oral vitamin D3 doses according to the baseline status of vitamin D as follows: sufficient: 100,000 IU; insufficient: 150,000 IU; and deficient: 200,000 IU. The study was conducted from May 2023 to February 2024; the enrollment period averaged 3 months, from 3 May 2023 to 30 July 2023, with visits at baseline, 3, and 6 months. The last subject completed the study on 30 January 2024. Monthly phone calls were made to support compliance with therapy and to collect any adverse events (AEs) or new concomitant medications (CMs).

Anthropometric measures

A complete physical examination was performed, including anthropometric measurements [16]. Weight was measured using a digital scale (Scaletronix, White Plains, NY) and height using a stadiometer (Holtain, Crymych, UK), from which BMI (kg/m2) was calculated. Age- and gender-specific Z scores for weight, height, and BMI were generated based on Centers for Disease Control and Prevention 2000 reference standards [14].

Assessment of HGS [17]

Initially, the preferred hand was ascertained; the participants then warmed up by gripping the handle, adjusting their grip, and going through two to three practice trials to become acquainted with the handheld dynamometer. The American Society of Hand Therapists (ASHT) standard operating protocols were followed when taking the measurement [18]. Participants sat upright in a chair with their feet supported. The arm being examined was placed on a table with the elbow in 90° of flexion, the forearm in 0° of pronation and supination, the wrist in a neutral resting position, and the shoulder slightly abducted (~10°) and neutrally rotated [18]. Starting with the dominant hand, each participant made three maximal voluntary contractions with each hand. For further analysis, the average of the three trials was computed to two decimal places.

The Childhood Health Assessment Questionnaire [19]

After the CHAQ was translated into Arabic, it was validated to assess functional impairment [19]. It consists of thirty questions divided into eight domains: dressing and grooming; arising; eating; walking; hygiene; reach; grip; and activities. There are four potential responses to each question: "without any difficulty" (score 0); "with some difficulty" (score 1); "with much difficulty" (score 2); and "unable to do" (score 3). If a domain is ranked lower (0/1) but aids, equipment, or assistance from another person is required, that domain receives a score of at least 2. A summary score known as the CHAQ-DI, which ranges from 0 to 3, is calculated by averaging the highest score in each domain. For a CHAQ-DI score to be considered minimally clinically significant, it must be ≥ 0.75.

HRQL questionnaire (the SF-36 v2 questionnaire) [20]

The SF-36v2 was translated into Arabic and adapted [21]. The questionnaire was scored following standard guidelines and divided into eight subscales: physical function, role limitations resulting from physical health, bodily pain, general health perception, vitality, social function, role limitations resulting from emotional problems, and mental health [20]. For each subscale, a higher score indicated better health, with scores ranging from 0 to 100.
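The CHAQ-DI aggregation above combines a per-domain maximum, an aid/assistance floor, and an average, which is easy to misread. The following minimal Python sketch illustrates one plausible reading of that scoring rule; the item scores and domain labels are hypothetical placeholders, and this is not the validated instrument.

def chaq_di(domain_items, needs_aid_or_help):
    """CHAQ-DI: average over the 8 domains of the highest item score in each,
    with a floor of 2 for any domain where aids/equipment or another person's
    help was required despite a low (0/1) item ranking."""
    scores = []
    for domain, items in domain_items.items():
        score = max(items)
        if domain in needs_aid_or_help:
            score = max(score, 2)
        scores.append(score)
    return sum(scores) / len(scores)     # ranges from 0 to 3

answers = {  # hypothetical item scores (0-3) per domain
    "dressing_grooming": [0, 1], "arising": [0], "eating": [1, 0],
    "walking": [0], "hygiene": [1, 1], "reach": [1], "grip": [0],
    "activities": [1],
}
di = chaq_di(answers, needs_aid_or_help={"hygiene"})
print(f"CHAQ-DI = {di:.2f} (>= 0.75 is minimally clinically significant: {di >= 0.75})")

With these made-up answers the aid floor on the hygiene domain lifts the summary score to exactly the 0.75 threshold, showing how strongly that adjustment can weigh on the index.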
Laboratory assessment

Serum 25(OH)D was determined quantitatively by radioimmunoassay (Roche Diagnostics, Mannheim, Germany). Subjects were classified by vitamin D status as sufficient: > 30 ng/mL, insufficient: 20-29.9 ng/mL, or deficient: < 20 ng/mL [22]. The vitamin D3 dose was considered unsafe if it resulted in elevated 25(OH)D > 160 ng/mL together with elevated calcium (age- and sex-specific range). Routine laboratory tests such as complete blood count, CRP, ESR, serum Ca, and serum ferritin were also assessed.

BMD measured by DEXA

BMD was assessed using a DEXA scan (GE-Lunar Prodigy, Madison, MA, USA) [23]. We evaluated BMD at the posterior-anterior spine. Z-scores were used to interpret the results, with Z-scores less than −2 SD regarded as abnormal. A Z-score between −1 and −2 SD indicated osteopenia, whereas a Z-score above −1 SD defined normal BMD [24].

Statistical analysis

Sample size calculation was done using the OpenEpi program version 3 (www.OpenEpi.com). Considering the mean ± SD of HGS among cases and controls (16.2 ± 7.9 and 21.9 ± 9.9, respectively) [8], the minimum appropriate sample size (achieving a power of ≥ 80%) was calculated as 78 participants (39 patients and 39 controls). Thus, 84 eligible participants were included (divided into 2 equal groups of 42 patients and 42 controls), and the achieved power was calculated as 83.06%. The collected data were tabulated and analyzed using IBM SPSS Statistics, version 26 (IBM; Armonk, New York, USA). Continuous quantitative variables were expressed as the mean ± SD or median and interquartile range (IQR), and categorical qualitative variables were expressed as numbers and percentages. Continuous data were checked for normality using the Shapiro-Wilk test. The independent-samples t-test and Mann-Whitney test were used to compare two groups of normally and non-normally distributed data, respectively. Categorical data were compared using the chi-square test and Fisher's exact test. The Spearman correlation test was used to assess the strength of association between two variables. All tests were two-sided. A P-value < 0.05 was considered statistically significant; a P-value < 0.001 was considered highly statistically significant.

Results

We enrolled 42 children with SCD (24 with HbSS and 18 with the HbSβ0 thalassemia genotype), 24 of them males, with a mean age of 9.03 ± 3.7 years, and 42 healthy age- and sex-matched controls. Regarding the clinical data of the SCD group, the mean disease duration was 6.6 ± 3.5 years; 21 patients (50%) experienced < 1 VOC, while 50% experienced > 2 VOCs in the last 12 months. Most SCD patients (85.7%) had < 1 ACS in the last 12 months, and > 2 ACS were reported in 14.3% of patients. Six (14.3%) patients received iron chelation therapy (deferasirox film-coated tablets), and all of them were compliant. The baseline demographic variables, laboratory parameters, DEXA score, HGS, and HRQL data of SCD patients and healthy controls are presented in Table 1. SCD patients had poorer growth status, as indicated by negative Z scores for weight, height, and BMI (P < 0.001). Significantly higher WBCs, ESR, and serum ferritin levels were detected in the SCD group compared to controls. Suboptimal BMD was detected in SCD patients, as indicated by a significantly lower Z score compared to controls, and 4 SCD patients had a history of multiple fractures. We observed significantly lower HGS and worse HRQL parameters in the SCD group. The median 25(OH)D at baseline was significantly lower in SCD patients than in controls (16.5 vs. 28 ng/mL, respectively).
Vitamin D status in SCD and controls is shown in Fig. 1. All SCD patients and 33 healthy controls completed the study (9 controls were lost to follow-up). Monthly oral high doses of vitamin D improved vitamin D levels at 6 months in both the SCD and control groups (P < 0.001). After 6 months of vitamin D supplementation, we detected a significant improvement in the DEXA Z score (P < 0.001), limitation of physical health (P = 0.02), emotional well-being (P < 0.001), pain scores (P < 0.001), and CHAQ grades (P = 0.01) in SCD patients. A significant improvement in HGS (P < 0.001 and P = 0.005) as well as the CHAQ score (P < 0.001 and P = 0.003) was detected in the SCD group and controls, respectively. A significant decrease in ESR was observed in SCD patients at 6 months (P = 0.01), as shown in Table 2. Vitamin D levels at baseline and after 6 months among the studied groups (sufficient, insufficient, and deficient) in SCD and controls are presented in Table 3. The estimated compliance with vitamin D supplementation was 90% and 87% in SCD and controls, respectively. There were no reported clinical AEs or new CMs during the study duration, and safe levels of Ca and 25(OH)D were observed at the 3- and 6-month visits for both groups. At baseline, 34 out of 42 SCD patients (81%) were on hydroxyurea, and none of them changed the dose or status (on/off hydroxyurea) throughout the 6-month intervention.

Discussion

We detected suboptimal levels of 25(OH)D in SCD (HbSS, HbSβ0 thalassemia genotype) patients and healthy controls, with significantly worse status in SCD patients. In the combined groups at baseline, 41.65% had deficient vitamin D status, 29.75% had insufficient status, and 28.5% had sufficient levels. Moreover, suboptimal BMD was detected in SCD patients, and 4 patients reported a history of multiple bone fractures. Monthly high-dose vitamin D supplementation of 200,000, 150,000, and 100,000 IU for deficient, insufficient, and sufficient status, respectively, was safe, well tolerated, and associated with higher vitamin D levels at the 6-month assessment. All groups succeeded in restoring sufficient vitamin D status except the deficient SCD group, with a median level of 25.5 (20.5-34) ng/mL at 6 months post-intervention; however, there was a highly significant improvement from baseline levels (P < 0.001). This high-dose regimen also led to a significant improvement in BMD in the SCD group, as defined by the DEXA Z scores (P < 0.001). It is worth mentioning that our study was not randomized, blinded, or placebo-controlled, as we considered that giving a placebo to vitamin D-deficient subjects for 6 months would not be ethical. (Fig. 1: Vitamin D status among the studied groups.) In line with these findings, high vitamin D doses (240,000 to 600,000 IU) given over 6 weeks in a pilot study were reported to be safe and effective in normalizing vitamin D status [25]. However, pre- and post-intervention BMD was not assessed. A meta-analysis by Brustad et al. concluded that high doses of vitamin D (daily doses up to 10,000 IU/d or bolus doses up to 600,000 IU) were safe, with no increased risk of SAEs in young children aged 0 to 6 years [26]. Williams et al.
studied 4 SCD children with severe vitamin D deficiency who received oral vitamin D3 100,000 IU every other week for 8 weeks, followed by monthly 100,000 IU for 22 months, which improved the vitamin D deficiency and BMD scores with no reported AEs [27]. Another study showed that monthly oral doses of vitamin D of 100,000 or 12,000 IU for 2 years improved respiratory disease rates by > 50% in SCD children aged 3-20 years [28]. Dougherty et al. reported that daily supplementation of vitamin D3 at a high dose of either 4000 or 7000 IU for 12 weeks was efficacious and safe in both HbSS patients and healthy children [29]. A recent randomized controlled trial reported that a daily dose of 1000 IU vitamin D3 plus a high-dose vitamin D bolus maintained 25(OH)D levels ≥ 75 nmol/L in SCD patients; however, 64 AEs were reported in 28 participants [30]. The most commonly reported AEs were vaso-occlusive crisis, fever, cold, chronic pain, headache, small red bumps, nausea, and vomiting. However, no SAE occurred during this study [30]. Consistent with our finding, many studies have reported low BMD in 28 to 64% of pediatric SCD patients [4,31], and many of these patients were found to be vitamin D deficient (< 12 ng/mL) [4,32]. However, long-term studies on vitamin D supplementation in relation to bone mineralization in SCD patients are still required. Deficient muscle strength has been reported in SCD children compared to controls, with a negative impact on HRQL [2,29,33,34]. In this study, we observed significantly lower HGS and poorer HRQL parameters in the SCD group (P < 0.001), a significant positive correlation between HGS and total physical score, and a negative correlation with CHAQ score collectively in SCD and healthy subjects. After 6 months of vitamin D supplementation, we found a significant improvement in HGS for both children with SCD and controls (P < 0.001 and P = 0.005, respectively). This was associated with improvement in CHAQ grade, pain, physical health, and emotional well-being in SCD patients, and improvement in emotional, social, and total physical function in healthy controls. Moreover, we demonstrated a significant positive correlation between vitamin D level at 6 months and HGS and pain scores, and a negative correlation with CHAQ scores, which indicates less pain and better health. High doses of vitamin D may contribute to the enhancement of the muscular and physical function of children with and without chronic disease. Bartoszewska et al. described the molecular mechanisms of vitamin D function in muscle tissue via two pathways: the genomic pathway acts via gene transcription, impacting the transportation of calcium in muscles as well as the metabolism of phospholipids, and the non-genomic pathway controls intracellular calcium transport, stimulating the growth and proliferation of the muscle cell [35]. Dougherty et al. found that vitamin D3 supplementation improved muscular strength and torque in both HbSS and healthy children [8]. (Fig. 3: Correlation between vitamin D level at 6 months and HGS (A), pain (B), and CHAQ (C) among the studied groups.) Pain is a hallmark of SCD with a negative impact on patient outcomes and HRQL [36][37][38][39]. A meta-analysis by Yong et al. found that vitamin D reduces pain in patients with widespread chronic pain [40]. Osunkwo et al. performed a randomized, double-blind pilot study in which SCD patients received either high-dose vitamin D3 (40,000 to 100,000 units weekly) or placebo for 6 weeks. Fewer pain days, higher quality-of-life scores, and higher levels of serum vitamin D were reported in the treatment group [25]. Consistent with this finding, Dougherty et al. reported a significant decrease in pain as well as fatigue, and higher HRQL, in pediatric SCD patients who received high-dose vitamin D; however, they highlighted the need for further longitudinal study to detect the sustained impact of longer-term supplementation [8]. Adly et al.
conducted a study on 50 children and adolescents with SCD and detected statistically lower frequencies of joint and bone pain and sickle crises after 3 months of vitamin D supplementation [41]. The exact mechanisms by which vitamin D supplementation lowers pain remain unclear. Vitamin D deficiency may exaggerate the disease course and aggravate the risk of complications through modification of the neural and immune processes that contribute to pain perception [42]. Hood et al. reported that vitamin D supplementation to a sufficient level is one complementary therapy to decrease pain-related emergency department visits [43]. Concerning the inflammatory status of SCD patients, we detected higher ESR at baseline compared to controls. Moreover, reduced ESR levels were detected in the SCD group after 6 months of the vitamin D intervention. Lee et al. reported that vitamin D supplementation affected numerous immune and inflammatory markers in SCD, including IL2, serpin E1, IFNγ, TNFα, sICAM1, and hsCRP, especially with high-dose vitamin D3 [44]. More studies are needed to investigate the immunomodulatory properties of vitamin D, given the variable responses to different doses in SCD patients.

Conclusion

Monthly oral high-dose vitamin D supplementation was safe, well tolerated, and associated with higher vitamin D levels, improved HGS, and improved HRQoL in both SCD children and healthy subjects, as well as improved BMD scores in SCD patients. However, several questions remain regarding vitamin D supplementation in SCD, related to the optimal dose, duration of supplementation, long-term AEs, and efficacy in different types of SCD. Further full-scale randomized controlled trials are required to formulate standardized guidelines for optimal dosing and to investigate the impact on clinically significant outcomes in children and adolescents with SCD and their healthy counterparts.

Limitations

The small sample size and the nonrandomized, open-label design may limit the generalizability of our outcomes.

Fig. 2: Correlation between HGS and total physical score (A) and CHAQ (B) among the studied groups.
Table 2: Subject characteristics at baseline and after 6 months of vitamin D3 supplementation. Data expressed as median (IQR); test: chi-square for trend.
Table 3: Vitamin D levels at baseline and after 6 months among the studied groups.
Performance of Oscillating Plasma Thrusters

Traditional expressions and definitions describing the performance of plasma thrusters, including the thrust, specific impulse, and thruster efficiency, assume a steady-state plasma flow with a constant flow velocity. However, it is very common for these thrusters that the plasma exhibits unstable behavior resulting in time variations of the thrust and the exhaust velocity. For example, in Hall thrusters, the ionization instability leads to strong oscillations of the discharge current (so-called breathing oscillations), plasma density, ion energy, and, as a result, the ion flow. In this paper, we revisit the formulation of the thrust and the thrust efficiency to account for time variations of the ion parameters, including the phase shift between the ion energy and the ion flow. For sinusoidal oscillations, it was found that the thrust can potentially change by more than 20%. It is shown that by modulating the ion energy at specific amplitudes, the thrust can be maximized in such regimes. Finally, an expression for the thruster efficiency of the modulating thruster is derived to show a mechanism for inefficiencies in such thrusters.

I. Introduction

For many plasma thrusters, plasma instabilities often result in unstable thruster operation which may affect thruster performance. For example, operation of Hall thrusters often exhibits discharge current oscillations of 10-30 kHz. These discharge oscillations result in oscillations of the plasma flow from the thruster [1][2][3]. Apart from these naturally occurring oscillations, recent studies have explored externally driven oscillations of the discharge to control the thruster operation [4][5][6][7]. In particular, work by Romadanov et al. modulated the DC discharge voltage with a sinusoidal signal. It was found that thrust under such oscillating regimes may degrade when the ion flow and ion energy oscillations shift out of phase. In addition, these experiments revealed that this phase shift becomes appreciable, particularly at large oscillation amplitudes [8]. A key question then is what effect this phase shift may have on thruster performance, including the thrust, Isp, and thruster efficiency. To the best of our knowledge, this question has never been addressed in the plasma thruster literature. Therefore, the main goal of this paper is to derive and compare time-resolved and time-averaged thrust and thruster efficiency expressions and relative values for oscillating plasma thrusters. This paper is organized as follows. Section 2 discusses performance for steady-state plasma thruster operation and shows the importance of accounting for time-dependent oscillations in the formulation of the thruster performance. Section 3 derives the thrust for a sinusoidally modulated thruster. Section 4 introduces the expressions for the input power under modulation and the performance of modulated thrusters. Finally, the ion power ratio in the oscillating plasma thruster is discussed in Section 5.

A. Remarks on Steady State Thruster Performance

For steady-state operation of the thruster, the thrust is defined as

T = ṁ_tot · v_jet,   (1)

where ṁ_tot is the total mass flow and v_jet is the jet velocity. For plasma thrusters with input power P_in, the efficiency is defined as the ratio between the thrust power P_thrust and the input electrical power:

η = P_thrust / P_in = T² / (2 ṁ_tot I_d V_d),   (2)

where I_d and V_d are the discharge current and voltage, respectively. When the plasma flow and the plasma exhaust velocity oscillate about a mean value, the derivation of the thrust becomes much more involved.
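As a concrete baseline for the oscillating cases considered next, Eqs. 1 and 2 can be evaluated numerically. The sketch below (Python) uses xenon-like, illustrative operating values that are not measurements from any particular thruster; the propellant is assumed fully ionized and singly charged.

import math

E = 1.602e-19    # elementary charge, C
M = 2.18e-25     # xenon ion mass, kg

I_i, V_i = 0.3, 160.0      # ion beam current (A) and ion energy (V)
I_d, V_d = 0.35, 180.0     # discharge current (A) and voltage (V)

mdot = M * I_i / E                      # ion mass flow rate, kg/s
v_jet = math.sqrt(2.0 * E * V_i / M)    # jet velocity for singly charged ions, m/s

T = mdot * v_jet                        # Eq. 1: thrust
eta = T**2 / (2.0 * mdot * I_d * V_d)   # Eq. 2: thrust power over input power

print(f"thrust = {T * 1e3:.2f} mN, jet velocity = {v_jet:.0f} m/s, eta = {eta:.3f}")

With these numbers the steady-state thrust is about 6.3 mN at roughly 76% efficiency, which here is simply the product of the current and voltage efficiencies implied by the assumed operating point.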
For the sake of our analysis, we consider the time-dependent thrust produced by ion acceleration in an applied electric field. This mechanism of ion acceleration is relevant to Hall thrusters and ion thrusters. Detailed descriptions of how the electric field is generated can be found in other references [9]. The time dependence of the thrust is due to either natural plasma oscillations or external modulations of the applied power or voltage.

B. Transient Time Scales

Consider a plasma thruster operating in a pulsed mode where the frequency of pulse repetition is much slower than the time scale of the transient plasma processes, such as a breathing instability oscillating the ion current. For such regimes, the derivation of the time-dependent performance is relatively simple. Due to the negligible timescales of the transient processes, the accelerating voltage (energy/charge) and mass flow are in phase (Fig. 1), and so the thrust depends only on the duty cycle (α) of the pulsing. The pulsed thrust can be expressed as

T = √(2M/e) · [α I_H √(V_H) + (1 − α) I_L √(V_L)],   (3)

where I_L and I_H are the low and high levels of the ion current, respectively, V_L and V_H are the low and high levels of the ion energy, respectively, and α is the duty cycle of the square-wave pulse. (Fig. 1: Ion voltage and ion current with time at 1 Hz pulsed operation.)

Consider now that the repetition frequency increases to approach the transient time scales of the thruster (e.g., the frequency of natural oscillations or the pulse duration). Under such conditions, a non-linear thruster plasma response (e.g., resonance-kind behavior [4] or a hysteresis) can cause the ion current to lag or lead the ion energy and, in some cases, alter the shape of the waveforms. Limiting the current analysis to square waves, the performance should depend on both the duty cycles of each and their phase shift φ_i. This causes the relatively simple square-pulse case to become considerably more difficult, as there can be multiple solutions depending on the relative duty cycles and phases of the ion energy and ion current flow. Appendix A contains the exact solutions of the thrust for a high-frequency square-wave oscillation for each of these 6 cases. While it is not difficult to derive, the sheer variety of solutions for the thrust for the various square waveforms is inconvenient for usage. Therefore, for the rest of the paper, a sine-wave oscillation is considered, for two reasons. The first is that there is a single solution for the time-dependent thrust and power, and the second is that this solution is directly relevant to the situation explored in experiments with externally driven breathing oscillations in Hall thrusters [4,7,10]. A comparison of theoretical thrust and power with experiments will be the subject of a separate paper.

III. Modulated Thrust

For simplicity of our analysis, we assume the oscillations in ion current and ion energy are sinusoidal and of the form (Fig. 2):

I_i(t) = I_im + I_ia sin(ωt + φ_i),   (4)
V_i(t) = V_im + V_ia sin(ωt),   (5)

where I_im is the mean ion current, I_ia is the amplitude of the ion current oscillations, V_im is the mean ion energy, V_ia is the amplitude of the ion energy oscillations, and φ_i is the phase angle between the ion current and ion energy. The ion energy is expressed as the accelerating voltage. (Fig. 2: Ion voltage and ion current with time at 10 kHz sine-wave operation.) The thrust can be derived by finding the time average of the product of the instantaneous ion mass flow and exhaust velocity, as shown in Eq. 6:

T = ⟨ṁ(t) · v_ex(t)⟩.   (6)
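Before carrying out the analytic reduction, Eq. 6 can be checked by direct numerical averaging of the instantaneous thrust over one period, using the waveforms of Eqs. 4 and 5. The sketch below assumes singly charged xenon and the CHT-like mean values quoted later in the text; the energy amplitude of 0.9 V_im is an illustrative choice.

import math

E, M = 1.602e-19, 2.18e-25   # elementary charge (C), xenon ion mass (kg)

def modulated_thrust(I_im, I_ia, V_im, V_ia, phi_i, n=20000):
    """Average of the instantaneous thrust mdot(t)*v_ex(t) over one period,
    with I_i and V_i following the sinusoidal waveforms of Eqs. 4-5."""
    total = 0.0
    for k in range(n):
        th = 2.0 * math.pi * k / n
        I = I_im + I_ia * math.sin(th + phi_i)    # instantaneous ion current
        V = V_im + V_ia * math.sin(th)            # instantaneous ion energy
        total += (M * I / E) * math.sqrt(2.0 * E * V / M)
    return total / n

I_im, I_ia, V_im = 0.3, 0.3, 160.0                # CHT-like values from the text
T_dc = modulated_thrust(I_im, 0.0, V_im, 0.0, 0.0)
for phi_deg in (0, 45, 90, 180):
    T = modulated_thrust(I_im, I_ia, V_im, 0.9 * V_im, math.radians(phi_deg))
    print(f"phi_i = {phi_deg:3d} deg: T / T_dc = {T / T_dc:.3f}")

In-phase modulation boosts the time-averaged thrust above the DC value, while strongly out-of-phase modulation suppresses it, anticipating the behavior captured analytically in Eqs. 7 and 8.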
Substituting our equations for the ion current and ion energy into the mass flow and v_ex, respectively, this expression can be solved into a form with elliptic integrals of the first and second kind. The full derivation of the thrust is shown in Appendix B and comes out to Eq. 7, where ῡ = V_ia/V_im and the coefficients a_n can be found in Appendix D. A low-error approximation of this expression for practical usage, including the associated error, can also be found in Appendix B. The modulated thrust in Eq. 7 can be separated into the sum of three terms: the steady-state thrust of the mean voltage and current, a portion that decreases thrust, and a portion that increases thrust,

T = T_SS − T_D + T_I.   (8)

Unintuitive effects of oscillation on thrust can be seen in Eq. 8 and are illustrated in Fig. 3. Here, plots of thrust vs. ion energy amplitude are shown using typical ion energies and ion currents found in recent modulation experiments with a cylindrical Hall thruster (CHT): I_im = 0.3 A, I_ia = 0.3 A, and V_im = 160 V [8]. Larger modulations in ion energy ῡ lower the thrust through T_D, but these same modulations increase the T_I portion, scaled by cos φ_i. This results in a net increase in thrust at low phase angles, but a decrease at high phase. A large phase angle implies that a higher portion of ions are accelerated at a lower voltage and contribute less to thrust. As the voltage oscillation increases, this effect gets worse, which can lower the thrust by as much as 40% below the DC level when the phase is 180°. At low phase angles, a higher portion of ions are accelerated at high voltage, and so the thrust increases. The transition region between these two effects holds some interest, as the "boost" or decrease in thrust is not linear with the oscillation amplitude. At mid-range phase angles (between roughly 45° and 85°), the thrust initially increases with ion energy amplitude before decreasing. This causes a maximum in thrust, which can be seen in Fig. 4. This nonlinearity in the thrust depends on the modulating waveform, and it suggests a theoretical maximum of the thrust for a thruster with modulated operation or an oscillating thruster. Taking the derivative of the thrust equation (Eq. 7) with respect to ῡ, it is possible to find an oscillating regime that would theoretically provide the maximum thrust (Eq. 9). Hence, the dimensionless ῡ which provides the maximum thrust depends solely on another dimensionless parameter, I_ia cos φ_i / I_im (Eq. 10). Solving the relationship between the two dimensionless parameters can be done numerically. The curve in Fig. 5 corresponds to the optimal relationship between the dimensionless parameters at which a theoretical thrust maximum can be achieved for the thruster with oscillations. This theoretical maximum thrust is only applicable in a small transition regime where the term I_ia cos φ_i / I_im is not too large. Thus it does not represent the highest theoretically possible thrust for the sinusoidal waveform.

A. Input Power

To find the efficiency of the oscillating plasma thruster, in addition to the thrust (Eq. 8), it is also necessary to account for the effects of oscillations on the input power. For example, in Hall thruster experiments with externally driven oscillations of the discharge voltage, a phase difference between the discharge current and discharge voltage was often observed [7]. Thus, the expression for the input power of an oscillating thruster has to be different from the expression used for conventional DC-powered Hall and ion thrusters.
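Before the analytic derivation immediately below, a short numerical check makes the point directly: the naive DC product I_dm·V_dm misses the AC cross term whenever the discharge current and voltage overlap with a phase-dependent correlation. The waveform amplitudes in this sketch are illustrative assumptions.

import math

def avg_input_power(I_dm, I_da, V_dm, V_da, phi_d, n=20000):
    """Time average of I_d(t) * V_d(t) over one oscillation period."""
    total = 0.0
    for k in range(n):
        th = 2.0 * math.pi * k / n
        total += (I_dm + I_da * math.sin(th + phi_d)) * (V_dm + V_da * math.sin(th))
    return total / n

I_dm, I_da, V_dm, V_da = 0.35, 0.3, 180.0, 90.0
print(f"naive DC product I_dm*V_dm = {I_dm * V_dm:.1f} W")
for phi_deg in (0, 90, 180):
    P = avg_input_power(I_dm, I_da, V_dm, V_da, math.radians(phi_deg))
    print(f"phi_d = {phi_deg:3d} deg: <I_d V_d> = {P:.1f} W")

The averaged power runs above the DC product for in-phase oscillations and below it for out-of-phase oscillations; the analytic form of this average is derived next.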
Consider the case of a sinusoidally oscillating input voltage offset by some DC level, much like the form of the ion energy and ion current:

I_d(t) = I_dm + I_da sin(ωt + φ_d),   (12)
V_d(t) = V_dm + V_da sin(ωt),   (13)

where I_dm is the mean discharge current, I_da is the amplitude of the discharge current oscillations, V_dm is the mean discharge voltage, V_da is the amplitude of the discharge voltage oscillations, and φ_d is the phase angle between the discharge current and voltage. The input power can be found by integrating the product of the discharge current and voltage, which comes out to

P_in = ⟨I_d(t) V_d(t)⟩ = I_dm V_dm + (I_da V_da / 2) cos φ_d.   (14)

The power can then be seen as the mean component (I_dm V_dm) of the power plus the AC component, which depends on the phase angle φ_d. The input electric power is at a minimum when current and voltage are out of phase and at a maximum when the two are in phase.

B. Efficiency

Using Equation 2, the thruster efficiency is often split into three separate components: current efficiency, voltage efficiency, and propellant utilization [11]. Other terms, such as plume divergence, will not be considered here. Taking the typical current efficiency (η_curr = I_im/I_dm), voltage efficiency (η_volt = V_im/V_dm), and propellant utilization (η_prop = ṁ_i/ṁ_tot), the efficiency takes the form

η = η_curr · η_volt · η_prop.   (15)

This is a useful form of the efficiency and desirable to keep. The difficulty in doing so arises from the fact that not only are the ion energy and ion current oscillating in time, the discharge voltage and current may be too. The product of the means of each of these is not equal to the mean of their product. The time-dependent effects, such as phasing differences, may alter the efficiency. However, it is possible to derive an efficiency that maintains this form with the addition of another term which includes these oscillatory effects. Starting by deriving the thrust power P_thrust = T²/(2 ṁ_tot) and continuing from the derivation of the thrust described in Appendix B, one obtains Eq. 16, where the terms A and B contain elliptic integrals. The last equation is specific to a plasma thruster with sinusoidal oscillations of the ion current and ion energy. Substituting Eq. 16 and Eq. 14 into Eq. 2, assuming only singly charged ions (η_prop = 1), and expanding the thrust to the second order, we achieve a form similar to the typical efficiency equation:

η = η_curr · η_volt · η_osc,   (17)

η_osc = [1 − ῡ²/16 + (ī ῡ / 4) cos φ_i]² / [1 + (ī_d ῡ_d / 2) cos φ_d],   (18)

where ī = I_ia/I_im, ī_d = I_da/I_dm, and ῡ_d = V_da/V_dm. This oscillation term η_osc accounts for the phase variations in both the discharge power and the ion thrust power. From the above equation, it can be seen that for an oscillating thruster the performance depends on both the ion phase φ_i and the discharge phase φ_d. The form of Eq. 18 is specific to a sinusoidally modulating thruster with the thrust expanded to the second order, but a similar equation may be found for further orders of expansion or for different waveforms. Note that because Eq. 15 contains the propellant utilization, the above derivations for the efficiency (Eq. 17 and Eq. 18) were done assuming full ionization of the propellant, i.e., no neutral species. If there is a neutral species with a much lower velocity than the ion species, the propellant utilization term can be introduced, following the typical derivation for propellant utilization [11], as η = η_curr · η_volt · η_prop · η_osc. The numerator of the oscillatory η_osc term alters the performance through the alignment of the ion velocity with the ion flow. What is particularly interesting is the decrease in performance due to a higher amplitude of the ion energy (and ion velocity) via the second term of the numerator, even when the ion energy is in phase with the ion flow.
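The decomposition in Eqs. 17 and 18 can be cross-checked without the second-order truncation by computing η directly from Eq. 2 with time-averaged quantities, then dividing out the mean-value product η_curr·η_volt; the residual factor is η_osc. The operating point below is an illustrative assumption (φ_i = 90°, in-phase discharge oscillations), with full propellant ionization.

import math

E, M = 1.602e-19, 2.18e-25

def period_averages(I_im, I_ia, V_im, V_ia, phi_i,
                    I_dm, I_da, V_dm, V_da, phi_d, n=40000):
    """Mean thrust, mean ion mass flow, and mean input power over one period."""
    T_sum = m_sum = P_sum = 0.0
    for k in range(n):
        th = 2.0 * math.pi * k / n
        I_i = I_im + I_ia * math.sin(th + phi_i)
        V_i = V_im + V_ia * math.sin(th)
        m = M * I_i / E
        T_sum += m * math.sqrt(2.0 * E * V_i / M)
        m_sum += m
        P_sum += (I_dm + I_da * math.sin(th + phi_d)) * (V_dm + V_da * math.sin(th))
    return T_sum / n, m_sum / n, P_sum / n

# Illustrative operating point; full propellant ionization (eta_prop = 1) assumed.
T, mdot, P_in = period_averages(0.3, 0.3, 160.0, 120.0, math.radians(90),
                                0.35, 0.3, 180.0, 90.0, 0.0)
eta = T**2 / (2.0 * mdot * P_in)              # Eq. 2 with time-averaged quantities
eta_mean = (0.3 / 0.35) * (160.0 / 180.0)     # eta_curr * eta_volt from mean values
print(f"eta = {eta:.3f}, eta_curr*eta_volt = {eta_mean:.3f}, "
      f"eta_osc = {eta / eta_mean:.3f}")

The energy-amplitude penalty in the numerator of η_osc, visible in this check even with the ion waveforms aligned, is the term examined next.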
This term occurs due to the difference between the thrust power and the kinetic power of the ions in the thruster exhaust, which requires some analysis.

V. Ion Power Ratio

It can be shown that when the ion velocity distribution function (VDF) is described by a delta function (no energy spread), the thrust power is equal to the kinetic power of the accelerated ion flow. However, when the ion VDF is broader than a delta function, the thrust power is no longer equal to the kinetic power. Given some arbitrary VDF of the ions, the thrust power is proportional to the squared mean of the particles' velocity (the jet velocity), while the kinetic power is proportional to the squared quadratic mean (root mean square) of the velocity. This is evident when considering the forms of the thrust power and the kinetic power of the exhaust. Consider some velocity distribution with mass flow ṁ(v) per unit velocity:

P_thrust = T² / (2 ṁ_tot) = (∫ ṁ(v) v dv)² / (2 ṁ_tot),   (20)
P_kin = (1/2) ∫ ṁ(v) v² dv.   (21)

(Fig. 6: Gaussian velocity distribution with both the mean velocity squared and the rms velocity squared.) In any velocity distribution, the RMS value is always greater than or equal to the mean. Proof of this can be found via the Schwarz inequality, which for the thrust and kinetic power case can be seen by multiplying each side by the total mass flow. Equality between these two terms can be set by assigning an ion power ratio efficiency factor,

η_ipr = P_thrust / P_kin = (∫ ṁ(v) v dv)² / (ṁ_tot ∫ ṁ(v) v² dv).   (22)

The physical meaning behind the ion power ratio is the imbalance between the kinetic energy that fast- and slow-moving particles carry and the thrust they provide, which results in an inefficiency of the transformed kinetic power. For a given mass flow rate and propellant utilization (i.e., ion flow), a lower input power is required to achieve the targeted thrust with mono-energetic ions of velocity v_jet than with ions having a velocity distribution function including ions with v_i > v_jet and v_i < v_jet. This implies that, to achieve the same thrust, the presence of slow ions (v_i < v_jet) must be compensated by faster ions. The generation of these faster ions requires more power than would be needed for mono-energetic ions to produce the same thrust. As a result, a wider velocity distribution will have a lower ion power ratio and lower efficiency. Velocity distributions are common in plasma thrusters: accelerated ions often have a low-velocity tail due to ions born downstream of the ionization region. Efficiency then decreases not only through the lowered mean exhaust velocity (and so the voltage efficiency), but also through an inefficient transformation of kinetic power into thrust power, captured by the ion power ratio. It is important to distinguish the fraction of the input electric power which goes to the kinetic power of the ions from the fraction of the input power which goes directly to thrust generation. The former is defined by the current utilization efficiency times the voltage efficiency. The latter requires the inclusion of this term, which considers the velocity distribution, where the presence of slow and fast ions decreases the portion of kinetic power that is converted into thrust power. This is concerning for oscillating thrusters, as a distribution in velocity is inherent to the device because the ion velocity changes with time. Due to the smaller timescale of the velocity oscillations compared to the thrust operation, the time-varying velocities are grouped into a single distribution to which the ion power ratio applies. This represents an inherent decrease in the efficiency of such devices.
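As a simple closed-form instance of Eq. 22, a Gaussian VDF with mean v̄ and spread σ (uniform mass-flow weighting, with σ small enough that negative velocities are negligible) gives ⟨v²⟩ = v̄² + σ², so η_ipr = v̄²/(v̄² + σ²). The sketch below just tabulates this relation; the velocity numbers are illustrative.

def eta_ipr_gaussian(v_mean, v_sigma):
    """Eq. 22 for a Gaussian VDF with uniform mass-flow weighting:
    <v> = v_mean and <v^2> = v_mean**2 + v_sigma**2."""
    return v_mean**2 / (v_mean**2 + v_sigma**2)

for frac in (0.0, 0.1, 0.3, 0.5):
    eta = eta_ipr_gaussian(15000.0, frac * 15000.0)
    print(f"sigma = {frac:.0%} of the mean velocity: eta_ipr = {eta:.3f}")

For an oscillating thruster, the effective distribution is set by the waveform itself, which is the inherent penalty just described.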
The degree to which this is a problem for sinusoidal oscillations is analyzed in Section 5.B. For demonstration purposes, the case of no oscillations and two species (ions and neutrals) with single velocities will be considered first, where it can be shown that the propellant utilization falls out of the ion power ratio. It should be noted that the ion power ratio is similar to the squared inverse of the "form factor" used in electrical engineering. The form factor is the ratio of the RMS signal to the mean signal, where in this case the form factor is that of the velocity distribution.

A. No Oscillations - Two Species

When deriving the total efficiency, the ion power ratio is either ignored by assuming a single species of propellant, which essentially assigns the velocity distribution as a delta function and collapses the integrals to provide equality between thrust and kinetic power, or it is assumed that there are a discrete number of species. Usually this is taken to be a singly charged ion and a neutral species, which turns the ion power ratio into the propellant utilization. This can be seen by taking the above ion power ratio and assuming the velocity distribution is the sum of a delta function for each species. Taking Eq. 22 with ṁ(v) = ṁ_n δ(v − v_n) + ṁ_i δ(v − v_i), where v_n and v_i are the neutral and ion velocities, respectively, and ṁ_n and ṁ_i are the neutral and ion mass flows, respectively, the inequality is resolved into the propellant utilization form of the ion power ratio by assuming v_i ≫ v_n and plugging Eq. 25 and 26 into Eq. 24. Including a time dependence in the mass flow (or ion current) and velocity (or ion energy) precludes one from using the propellant utilization form of the ion power ratio or collapsing the integrals with delta functions. Instead, the full integral must be solved.

B. Oscillations - Single Species

As a simplification, only a single species of ions with a single velocity at any point in time is considered here. The kinetic power of the oscillating ion flow is the time average of the product of the ion current and the ion energy:

P_kin = ⟨I_i(t) V_i(t)⟩ = I_im V_im + (I_ia V_ia / 2) cos φ_i.   (28)

The thrust power from a thruster with oscillations retains the same form as in the no-oscillation version, where the square of the time-averaged thrust is divided by the time-averaged mass flow. This is due to the fact that the spacecraft experiences the time-averaged thrust as it travels. It is in these time averages that the nuance of the ion power ratio is found. One intuitive result revealing the ion power ratio can be seen when the phase angle φ_i = 90°. At high ion energy oscillations V_ia the thrust decreases (see Fig. 3), while the kinetic power stays constant (see Eq. 28). Thus, the input electric power is being transferred into kinetic power of the ions that does not result in thrust power. The expression for η_ipr can be very involved, depending on the order of expansion of the thrust. For this paper, we shall only expand up to the second order. Taking the thrust power from Eq. 16, writing ī = I_ia/I_im, and through some simplification, one obtains Eq. 29. Similar solutions can be found for other orders of expansion. Fig. 7 shows that the ion power ratio decreases to as low as 73%, which represents nearly 30% of the kinetic power not contributing to thrust. This worst-case scenario for a thruster occurs when the ion energy amplitude is equal to the mean ion energy. However, by controlling the phase of the ion energy and ion current, it is possible to increase the ion power ratio to 95%. This highlights the importance of ensuring that the phasing of an oscillating thruster is in the optimal regime.
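The quoted extremes can be reproduced by evaluating η_ipr = P_thrust/P_kin directly from the time averages for full-amplitude sinusoids (ī = ῡ = 1). The sketch below does this numerically; small differences from the 73%/95% figures above are expected, since those come from the second-order expansion.

import math

E, M = 1.602e-19, 2.18e-25

def eta_ipr_sinusoid(I_im, I_ia, V_im, V_ia, phi_i, n=40000):
    """Ion power ratio P_thrust / P_kin for a singly charged, sinusoidally
    oscillating beam: P_kin = <I_i V_i> (Eq. 28), P_thrust = <T>^2 / (2 <mdot>)."""
    T_sum = m_sum = K_sum = 0.0
    for k in range(n):
        th = 2.0 * math.pi * k / n
        I = I_im + I_ia * math.sin(th + phi_i)
        V = V_im + V_ia * math.sin(th)
        m = M * I / E
        T_sum += m * math.sqrt(2.0 * E * V / M)
        m_sum += m
        K_sum += I * V                       # instantaneous ion kinetic power, W
    T, mdot, P_kin = T_sum / n, m_sum / n, K_sum / n
    return (T**2 / (2.0 * mdot)) / P_kin

for phi_deg in (0, 90, 180):
    eta = eta_ipr_sinusoid(0.3, 0.3, 160.0, 160.0, math.radians(phi_deg))
    print(f"phi_i = {phi_deg:3d} deg: eta_ipr = {eta:.3f}")

The ratio drops to roughly 0.72 at φ_i = 180° and recovers to about 0.96 at φ_i = 0°, confirming the phase control argument above.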
VI. Conclusions

The main results and the following conclusions have implications for plasma thrusters operating with natural discharge oscillations and for thrusters operating with externally driven oscillations. It is shown that the thrust can be increased with oscillations of the ion energy and the ion current. The maximum thrust is achieved when the two are in phase and oscillating with large amplitudes. A method to determine the maximum thrust was shown for the out-of-phase case. It was shown that for a thruster with oscillating input voltage and current, performance is highest when the discharge current and the discharge voltage are out of phase and when the ion current and ion energy are in phase. For sinusoidal oscillations, the thrust was found to potentially increase by 20% or decrease by up to 40%. Because the plasma oscillations can induce time variations of the ion velocity distribution function, we also analyzed the effect of the VDF on the ion power ratio, a generalized form of the propellant utilization. The ion power ratio represents the portion of the kinetic power that is transformed into thrust power, which can be decreased by a wide velocity distribution of the exhaust. This revealed an inefficiency that can significantly decrease the performance of a thruster: for sinusoidal oscillations, the ion power ratio can be as low as 73%. By adjusting the phase between the ion current and energy, however, this inefficiency can be nearly completely nullified. While the presented analysis was conducted for a specific waveform, the same approach can be taken for any arbitrary waveform. Future work may reveal waveforms with greater gains in thrust and thruster efficiency.

A. Thrust Solutions for Square-Wave Oscillations

For a square-wave oscillation as shown in Fig. 8, these solutions are given in Eq. 30, where α is the duty cycle of the ion energy modulation, β is the duty cycle of the ion current modulation, and φ_i is the dimensionless phase shift between the two parameters. All parameters are shown graphically in Fig. 8, and the particular duty-cycle/phase-shift combination observed in the figure is described by the second case in Eq. 30.

B. Full Derivation of Thrust - Oscillations

The thrust for a thruster with oscillations is found by taking the time average of the instantaneous thrust over the oscillations, where the instantaneous thrust has the form T = ṁ v_ex. Here we are assuming a single species of exhaust. For purposes of equating the thrust and kinetic power later for electric propulsion devices, the mass flow and exhaust energy will be written as a current (I_i = ṁ e/M) and a voltage (V_i = K/e), respectively. Note that this analysis is not restricted to ion propellant, as these are only separated from the mass flow and energy by a constant. The form of the waveform of both the mass flow and the exhaust energy is important. Here we will assume each is an offset sinusoid, as shown in Fig. 2, that is, a sinusoid whose offset is larger than its amplitude such that it is never negative. To account for a possible phase shift between the energy and mass flow, which measurements of a modulated Hall thruster have shown to exist, a phase angle φ_i is included. Each integral (A, B, and C) will be solved separately. Eq. 34 can be reduced to a form involving elliptic integrals of the second kind, E(k).
Taking ῡ = V_ia/V_im as our independent variable, integral A can be reduced to a form involving the complete elliptic integral of the second kind, E(k), which can be expressed by a power series; integral A can then be simplified accordingly (Eq. 37). A similar approach is taken to find integral B. Again taking ῡ = V_ia/V_im as the independent variable, the solution of Eq. 38 can be written in a form involving elliptic integrals of both the first kind, K(k), and the second kind, E(k). Solving for the power series of Eq. 39 provides a simplified form which quickly converges (Eq. 41). Integral C is simply zero, which can be shown by the method of u-substitution, taking u = V_im + V_ia sin θ. For a thruster with an oscillation in the mass flow and energy of offset-sinusoid form, the thrust can then be written as Eq. 43, where the coefficients a_n can be found through the power series of the elliptic integrals. The first 6 terms are shown in Eq. 37 and Eq. 41. When there is no oscillation in the ion energy, or when the expansion is taken to the zeroth order, Eq. 43 reduces to

T = √(2MV_im/e) · I_im = v_jet ṁ_tot.

While it is possible to solve the modulated thrust for expansion orders to the nth degree, the series quickly converges, particularly for lower ῡ = V_ia/V_im. The errors of the series expansions for integral A and integral B are shown in Fig. 9. Error is defined here as the difference between the series expansion and the numerically calculated value, divided by the numerically calculated value. Fig. 9 shows the error is below 2% at 3rd order for ῡ < 0.5. For thrusters with much fuller oscillations, with ῡ ~ 1, the error on the thrust is less than 5% with a 6th-order expansion. Several useful forms of the thrust follow. If the ion energy oscillation amplitude is less than half the mean ion energy, the thrust can be expanded to the first order with an error below 4%:

T ≈ √(2MV_im/e) · (I_im + (ῡ/4) I_ia cos φ_i).   (44)

If the ion energy oscillation amplitude is equal to the mean ion energy (full pulse), an exact expression can be found:

T = (2√2 / 3π) · √(2MV_im/e) · (3I_im + I_ia cos φ_i).   (45)

C. Maximum Thrust Derivation

To derive the voltage amplitude that provides the maximum thrust, we take our derived expression for the thrust (Eq. 7) and differentiate it with respect to ῡ (Eq. 46). We then set the left-hand side of Eq. 46 to zero and simplify. This expression can then be numerically solved for ῡ, as was performed in Fig. 5.

D. Thrust Expansion Coefficients

The coefficients a_n for the series expansion in Eq. 43 are shown up to 12th order. These coefficients can be calculated through the series expansion of integrals A and B.

Funding Sources

This work was supported by the Air Force Office of Scientific Research.
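As a closing numerical companion to Appendices B and C: the script below compares the full-pulse expression against direct quadrature (the 2√2/3π prefactor in Eq. 45 is the reconstruction used here, so this check doubles as a consistency test), and replaces the root-finding of Appendix C with a brute-force scan for the thrust-maximizing ῡ at an illustrative phase angle.

import math

E, M = 1.602e-19, 2.18e-25

def thrust(I_im, I_ia, V_im, ub, phi_i, n=20000):
    """Time-averaged thrust for the offset-sinusoid waveforms, ub = V_ia/V_im."""
    total = 0.0
    for k in range(n):
        th = 2.0 * math.pi * k / n
        I = I_im + I_ia * math.sin(th + phi_i)
        V = V_im * (1.0 + ub * math.sin(th))
        total += (M * I / E) * math.sqrt(2.0 * E * V / M)
    return total / n

I_im, I_ia, V_im, phi = 0.3, 0.3, 160.0, math.radians(60)

# Full-pulse closed form (Eq. 45 with the prefactor as reconstructed above).
T45 = (2.0 * math.sqrt(2.0) / (3.0 * math.pi)) \
      * math.sqrt(2.0 * M * V_im / E) * (3.0 * I_im + I_ia * math.cos(phi))
print(f"ub = 1: quadrature {thrust(I_im, I_ia, V_im, 1.0, phi) * 1e3:.4f} mN, "
      f"Eq. 45 {T45 * 1e3:.4f} mN")

# Brute-force stand-in for the numerical solution of Appendix C / Fig. 5.
T_best, ub_best = max((thrust(I_im, I_ia, V_im, u / 100.0, phi), u / 100.0)
                      for u in range(101))
print(f"maximum thrust {T_best * 1e3:.4f} mN at ub = {ub_best:.2f}")

At φ_i = 60° the parameter I_ia cos φ_i / I_im = 0.5 falls inside the transition regime discussed in Section III, and the scan finds an interior maximum rather than a boundary one.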
Gene Expression of CD70 and CD27 Is Increased in Alopecia Areata Lesions and Associated with Disease Severity and Activity Background Alopecia areata (AA) is an acquired hair loss disorder induced by a cell-mediated autoimmune attack against anagen hair follicles. CD27-CD70 is a receptor-ligand complex which enhances T helper and cytotoxic T cell activation, survival, and proliferation. The overstimulation of this complex can lead to a lack of tolerance and the development of autoimmunity. Objectives This study aimed to assess the gene expression of CD27 and CD70 in patients with AA. Methods CD70 and CD27 mRNA expressions were evaluated by a quantitative real-time polymerase chain reaction in scalp biopsies from 40 AA patients (both AA lesions and non-lesional areas) and 40 healthy controls (HCs). The Severity of Alopecia Tool (SALT) score was used to assess AA severity. Patients were evaluated for signs of AA activity, including a positive hair pull test and dermoscopic features of black dots, broken hairs, and tapering hairs. Results The gene expression of CD70 and CD27 was significantly higher in AA lesions than in non-lesional areas (p < 0.001 for both) and HCs (p=0.004, p=0.014, respectively). There were significant positive correlations between AA severity and gene expression of CD70 (p < 0.001) and CD27 (p=0.030) in AA lesions. Significant associations were detected between signs of AA activity and lesional gene expression of CD70 and CD27. Additionally, CD70 and CD27 gene expression was significantly lower in non-lesional biopsies compared to HCs (p < 0.001). Conclusion Gene expression of CD70 and CD27 was increased in AA lesions and was associated with disease severity and activity. Thus, both molecules can be a predictor of AA severity and activity. Furthermore, the expression was reduced in non-lesional scalp areas. Thus, a lack of CD27 and CD70 expression may initially predispose to immunological dysregulation and the development of AA. Introduction Alopecia areata (AA) is an acquired non-cicatricial hair loss disorder affecting 0.1%-0.2% of individuals worldwide [1]. A collapse of the hair follicle (HF) immune privilege causing a CD8+ cytotoxic T (cT) cell assault against anagen HFs is implicated in AA development [2,3]. However, the precise molecular mechanisms are still poorly established. Cluster differentiation (CD) 27 is a member of the tumor necrosis factor receptor family. CD27 is expressed on natural killer cells, T helper ( ) cells, cT cells, and hematopoietic stem cells [4]. CD70, the uniquely identified ligand of CD27, belongs to the tumor necrosis factor family and is expressed exclusively on activated T cells, B cells, natural killer cells, and dendritic cells. e expression of CD70 is triggered by T cell activation upon antigen receptor attachment, toll-like receptor, or CD40 signaling. e interaction between CD70 on stimulated antigen presenting cells (APCs) and CD27 on T cells generates costimulatory signals that enhance and cT cell activation, survival, proliferation, chemotaxis, and cytokine production such as interleukin 2 and interferon-c [5]. Moreover, CD27-CD70 interaction induces the production of CXCL10 chemokine by activated cT cells, which enhances the chemotaxis of additional activated cT cells [6]. CD70 was also found to stimulate 1 cell differentiation independent of interleukin 12 [7]. Soluble CD27 can be shed from the activated T cells' surface via cleavage by matrix metalloproteinases to be released in the circulation. 
It was reported that serum soluble CD27 could enhance the activation of T cells and antigen-primed B cells and increase immunoglobulin G production [4,5]. Interestingly, it was proposed that CD70 signaling generated by resting, immature APCs can inevitably lead to a lack of tolerance and the development of autoimmunity [8]. Significant evidence indicates that dysregulation of CD27-CD70 complex signaling is involved in the pathogenesis of several immune-related disorders [5], but the pathway has been poorly investigated in AA. CD70 is expressed only upon T cell activation, and thus blocking the CD27-CD70 complex has been proposed as an appealing treatment target for autoimmune disorders [9]. This study aimed to assess the expression of the CD27 and CD70 genes in patients with AA and to evaluate the association between this expression and AA severity and activity.

Participants. A total of 40 patients with AA were enrolled in this descriptive analytical case-control study. Patients were recruited from the Dermatology Outpatient Clinic, Suez Canal University Hospital, Ismailia, Egypt. The biochemical analyses were conducted at the molecular laboratory of the Oncology Diagnostic Unit, Faculty of Medicine, Suez Canal University, Egypt. The study was performed between February and September 2021, in line with the guidelines of the Helsinki Declaration and the items of the STROBE statement. Approval was granted by the Institutional Review Board and the Research Ethics Committee, Faculty of Medicine, Suez Canal University, on 25 January 2021, with the approval code 4454. Age- and sex-matched HCs with no concurrent infections, history of AA, autoimmune disorders, atopy, or cancer were included in the study. All participants signed a written informed consent form. Alopecia areata was diagnosed clinically by detecting patches of total hair loss with normal scalp skin, and the diagnosis was confirmed by dermoscopic examination. Exclusion criteria were patients who had received any systemic treatment in the three months, or topical applications in the 2 weeks, prior to the study, or who had regrowth of hair, concurrent infection, or a history of other autoimmune diseases, atopy, or cancer. Each patient's complete history was recorded, including demographic data (age and sex) and significant clinical data (duration of the present AA lesions, disease course, age of disease onset, AA in other body sites, prior attacks of AA, family history of AA, atopy, or other immune-mediated disorders), and a dermatological examination was performed to detect the site of patches, their number and pattern, and the presence of nail abnormalities. The AA clinical pattern was categorized into patchy (single patch and multiple patches), ophiasis, alopecia totalis, and alopecia universalis. The "Severity of Alopecia Tool" (SALT) score [10] was used to evaluate AA severity. AA activity was assessed via the subjective history of progression, an objective examination of the hair pull test at the edges of each patch, and the presence of black dots, broken hairs, and tapering hairs on dermoscopic examination (DermLite, 3 Gen LLC, San Juan Capistrano, CA, USA; magnification ×10).

Assessment of the Expression of CD70 and CD27 Genes. Punch scalp biopsies (4 mm) were taken from each patient from AA lesions and non-lesional areas. One scalp biopsy was taken from the patient with alopecia universalis and from each HC. Biopsies were immediately submerged in RNA stabilizing solution (Qiagen, USA) and moved to −80°C for storage until handling.
According to the manufacturer's instructions, we isolated total RNA using the RNeasy Mini Kit (QIAGEN, USA). We also assessed RNA purity with a NanoDrop ND1000 spectrophotometer at the 260/280 nm absorbance ratio (NanoDrop Tech., Inc., Wilmington, DE, USA). To assess the integrity of the RNA, we ran it on 1% agarose gel electrophoresis. Quantitative real-time polymerase chain reactions (PCR) for the CD27 and CD70 genes were performed on a StepOne real-time PCR instrument (Applied Biosystems, UK) using COSMO cDNA synthesis kits (WF-1020500X, Willowfort, UK), the HERA plus SYBR Green qPCR Kit (WF1030800X, Willowfort, UK), and the specific primers for the target genes (Table 1). The amplification program included two stages: an initial denaturation stage at 95°C for 3 min, followed by 40 cycles of denaturation at 95°C for 15 s and annealing for 60 s at 57°C for GAPDH or 60°C for CD27 and CD70. After amplification, a melting curve analysis was performed to confirm the PCR amplicons by collecting the fluorescence data. GAPDH was used as an internal control. The relative amounts of the target genes were calculated using the delta CT method.

Statistical Analysis. Statistical analysis was done via the Statistical Package for the Social Sciences (SPSS). Categorical values were represented using numbers and percentages. The normality of distribution was tested by the Kolmogorov-Smirnov test. Numerical values were represented using the range, median, mean, and standard deviation. The chi-square test, Monte Carlo correction, Mann-Whitney test, Kruskal-Wallis test, Wilcoxon signed rank test, and Spearman coefficient were used to measure significance. Results were considered significant at a p value less than 0.05 (confidence level of 5%).

Clinical and Demographic Data of AA Patients. In patients with AA, the age ranged from 15 to 62 years (mean age = 28.93 ± 12.21 years). Twenty (50%) patients were males, and 20 (50%) were females. In HCs, the age ranged from 18 to 59 years (mean age = 29.80 ± 10.16 years). Eighteen (45%) HC individuals were males, and 22 (55%) were females. There was no statistically significant difference in age or gender between patients with AA and HCs (Supplementary Table 1). Clinical data of the AA patients are shown in Table 2.

Expression of CD70 and CD27 Genes. The mean mRNA expression of the CD70 gene in AA lesions and non-lesional scalp skin was 2.41-fold and 0.16-fold, respectively, relative to the expression in HCs. The relative expression of the CD70 gene was significantly higher in AA lesions compared to non-lesional areas (p < 0.001) and HCs (p = 0.004). In addition, the gene expression was significantly lower in non-lesional areas than in HCs (p < 0.001) (Figure 1). The mean mRNA expression of the CD27 gene in AA lesions and non-lesional scalp skin was 3.19-fold and 0.25-fold, respectively, relative to the expression in HCs. The relative expression of the CD27 gene was significantly higher in AA lesions compared to non-lesional areas (p < 0.001) and HCs (p = 0.014). The CD27 gene expression was significantly lower in non-lesional biopsies compared to HCs (p < 0.001) (Figure 2). The study revealed significant positive correlations between AA severity (SALT) and the relative mRNA expressions of CD70 (p < 0.001) (Figure 3(a)) and CD27 (p = 0.030) (Figure 3(b)) in AA lesions. Furthermore, there were significant associations between signs of AA activity (positive hair pull test and the presence of black dots, broken hairs, and tapering hairs on dermoscopic examination) and the relative gene expressions of CD70 and CD27 (Table 3).
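The relative-quantification step can be made concrete with a small sketch of the 2^(-ΔΔCt) variant of the delta-CT method, assuming GAPDH as the reference gene and the healthy-control mean as the calibrator; the Ct values below are invented so that the output lands near the reported fold changes (about 2.4 in lesions and 0.16 in non-lesional skin) and are purely illustrative.

def fold_change(ct_target, ct_reference, calibrator_dct):
    """Relative expression by the 2^(-ddCt) method: normalize the target Ct
    to the reference gene, then compare with the calibrator group's dCt."""
    ddct = (ct_target - ct_reference) - calibrator_dct
    return 2.0 ** (-ddct)

hc_dct = 26.0 - 18.0                    # hypothetical mean dCt of HC biopsies
samples = [("AA lesion", 24.7, 18.0), ("non-lesional", 28.6, 18.0)]
for label, ct_cd70, ct_gapdh in samples:
    print(f"{label}: CD70 fold change vs HCs = "
          f"{fold_change(ct_cd70, ct_gapdh, hc_dct):.2f}")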
Apart from that, there was no significant relation between CD70 or CD27 gene expression and the other data of AA patients (age, sex, disease course, age of onset, duration of AA lesions, clinical pattern, nail abnormalities, or family history of AA, atopy, or immune-mediated disorders) (Supplementary Tables 2-5).

Discussion

In this study, the relative gene expression of CD70 and CD27 in AA lesions was significantly higher than that in non-lesional areas and HCs, correlated with AA severity, and was associated with signs of AA activity, including a positive hair pull test at patch margins and dermoscopic features such as black dots, broken hairs, and tapering hair. AA is generally believed to be an autoimmune assault upon the anagen HF, orchestrated by Th1 and cytotoxic T (cT) lymphocytes. An inflammatory infiltrate of Th1 cells, cT cells, and APCs has been identified in the peribulbar region of anagen HFs in active AA lesions and was found to be correlated with AA severity. These kinds of infiltrate trigger apoptosis in anagen HF keratinocytes, causing their arrest and regression into the telogen or dystrophic anagen states [2]. Clinically, this is manifested by a sudden stoppage of hair shaft growth, resulting in tapered and broken hair shafts with a positive hair pull test at the active patch margin [11].

The CD27-CD70 complex is mostly expressed on activated T cells, B cells, natural killer cells, and dendritic cells, and its interaction signaling facilitates Th1 and cT-cell activation, survival, proliferation, and chemotaxis [5]. Thus, the higher gene expression of CD70 and CD27 in active AA lesions can be explained by the increased infiltration of immune cells expressing these molecules. This infiltration was present mostly in active AA lesions and was associated with AA severity; this fact may clarify the association between CD27 and CD70 gene expression and AA severity and signs of activity. As a result, CD27 and CD70 gene expression can be used to predict the severity and activity of AA. To date, no previous studies have evaluated the gene expression of the CD27-CD70 complex in AA. In other immune-mediated diseases, increased CD70 expression correlated with hypo- or demethylation of the CD70 gene promoter area, which causes a failure to repress CD70 expression when it is triggered by T cell activation [15,16]. Regarding CD27, its soluble serum level was elevated in patients with active vitiligo and was suggested as a marker of disease progression [17,18]. Serum soluble CD27 was downregulated upon treatment of psoriasis [19]. CD27 expression was significantly elevated in the lesional skin and serum of patients with systemic sclerosis, with a significant association with disease severity [20]. The expression of CD27+ B cells and serum soluble CD27 was increased in SLE patients and correlated with disease activity [21].
The pathogenesis of most of these diseases entails cell-mediated autoimmune inflammatory pathways, and their association with AA is well established [2]. In the same manner, the expression of CD70 was increased in human contact dermatitis, which is a Th1-mediated inflammation [22]. Additionally, the present study revealed that CD27 and CD70 gene expression was significantly lower in the non-lesional areas compared to HCs. Interestingly, Abolhassani [23] reported two family members with genetic abnormalities in the CD70-CD27 signaling cascade associated with clinical features of AA, Behcet's disease, recurrent viral pneumonia, central nervous system infection, and Hodgkin lymphoma induced by Epstein-Barr virus. The patients' clinical and immunologic data revealed an abnormality in B-cell differentiation, impaired functional activity of effector T cells, and decreased antibody production, which increased vulnerability to recurrent viral illness. The authors proposed an association between CD70 deficiency and an increased risk of alopecia areata due to either recurrent uncontrolled viral infections or decreased proliferation and activity of T-regulatory cells. That study's findings are consistent with our results of decreased CD70 and CD27 gene expression in the non-lesional scalp areas of AA patients, suggesting that a deficiency of CD70 and CD27 expression may predispose to immunological dysregulation and the development of AA. Subsequently, the recruitment of autoreactive T cells against anagen hair follicles in active AA lesions may cause local overexpression of CD70 and CD27 on the infiltrating immune cells.

Notably, several in vivo studies have suggested that monoclonal antibodies targeting the CD27-CD70 complex could be a potential therapeutic modality in autoimmune diseases [5]. Anti-CD70 antibodies lowered the antibody titer and decreased the severity of joint disease in murine collagen-induced arthritis [9]. In addition, anti-CD70 antibodies repressed immunoglobulin secretion by B cells triggered by T cells isolated from SLE patients [24]. Colitis was prevented, along with a decrease in colitis-associated Th1 cytokines, in a mouse model using anti-CD70 antibodies [25]. Accordingly, our study findings may shed new light on targeting the CD27-CD70 complex for medical treatment of AA, especially severe cases resistant to traditional medical treatment. Indeed, the conclusions of this study should be considered against its limitations, which include the small sample size and the lack of evaluation of CD27 and CD70 tissue expression in AA scalp lesions compared to non-lesional areas and HCs. Moreover, additional research investigating the molecular functions of CD27 and CD70 in AA and comparing the expression of both molecules during AA activity and after recovery and hair regrowth is needed.

Conclusion

The expression of the CD27 and CD70 genes was increased in AA scalp lesions and was associated with AA severity and activity. CD27-CD70 interaction can therefore be a predictor of AA severity and activity. Furthermore, the expression of both molecules was lower in non-lesional scalp areas. Thus, a lack of CD27 and CD70 expression may predispose to immunological dysregulation and the development of AA.

Data Availability

The data and materials related to the present work are included within this article.
2022-03-10T16:39:18.279Z
2022-03-08T00:00:00.000
{ "year": 2022, "sha1": "c2265e31392fbec0c3cdcd3570492fccb59e8cd8", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/drp/2022/5004642.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5a1fa2d9c86cb8269ec9b0f6897cd40ef5ecf610", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
218527516
pes2o/s2orc
v3-fos-license
Which interactions matter in economic evaluations? A systematic review and simulation study

Background: We aimed to assess the magnitude of interactions in costs, quality-adjusted life-years (QALYs) and net benefits within a sample of published economic evaluations of factorial randomised controlled trials (RCTs), evaluate the impact that different analytical methods would have had on the results and compare the performance of different criteria for identifying which interactions should be taken into account.

Methods: We conducted a systematic review of full economic evaluations conducted alongside factorial RCTs and reviewed the methods used in different studies, as well as the incidence, magnitude, statistical significance, and type of interactions observed within the trials. We developed the interaction-effect ratio as a measure of the magnitude of interactions relative to main effects. For those studies reporting sufficient data, we assessed whether changing the form of analysis to ignore or include interactions would have changed the conclusions. We evaluated how well different criteria for identifying which interactions should be taken into account in the analysis would perform in practice, using simulated data generated to match the summary statistics of the studies identified in the review.

Results: Large interactions for economic endpoints occurred frequently within the 40 studies identified in the review, although interactions rarely changed the conclusions.

Conclusions: Simulation work demonstrated that in analyses of factorial RCTs, taking account of all interactions or including interactions above a certain size (regardless of statistical significance) minimised the opportunity cost from adopting treatments that do not in fact have the highest true net benefit.

Keywords: Interactions, Simulation

Background

It has recently been suggested that many treatments are likely to have non-additive effects on costs and quality-adjusted life-years (QALYs), and that ignoring such interactions and making separate decisions on treatments which could in practice be used together may not achieve the best allocation of healthcare resources [1,2]. Estimates of the incidence, magnitude and direction of interactions for economic endpoints are therefore required for decision-makers considering which interventions can be assessed independently and for researchers conducting factorial trials with economic evaluations or model-based economic evaluations on multiple treatments. Factorial randomised controlled trials (RCTs) provide unbiased estimates of the magnitude of interactions. These studies randomise patients to different levels of at least two factors: for example, a 2 × 2 design may compare placebo, A, B and A + B. Taking account of interactions when analysing factorial trials avoids bias, but reduces statistical power, while omitting interaction terms and assuming that there is no interaction is more efficient, but introduces bias whenever the true interaction is non-zero [3-6]. In practice, researchers cannot know whether treatments genuinely interact and have only a single sample in which to decide which interactions matter and estimate treatment effects. Analysts must therefore pre-specify a decision rule or criterion that determines the circumstances in which interactions will be included in the base case analysis. Analyses of primary clinical endpoints generally omit interactions that are not statistically significant [3,4,7].
Economic evaluation, however, focuses on estimating expected costs and benefits to inform decision-making, where statistical inference is arguably irrelevant [8]. It has therefore been suggested that it is important to avoid bias by including interactions in economic evaluations of factorial RCTs unless they are shown to be negligible [1]. However, even within this context there are several reasons to avoid conducting inefficient analyses. Firstly, inefficient analyses will over-estimate the value of further information, potentially displacing spending on healthcare today with over-investment in research. Secondly, small sample sizes or inefficient analyses may mean that (by chance) the treatment with highest expected net monetary benefit (NMB) in the sample being analysed is not the one that would genuinely maximise NMB in the population. However, it is not known what criteria achieve the best balance in minimising inefficiency and bias for economic evaluations.

This paper aims to assess the magnitude of interactions within a sample of published economic evaluations, evaluate the impact that different analytical methods would have had on the results and compare the performance of different criteria for identifying which interactions should be taken into account. We first conducted a systematic review of full economic evaluations conducted alongside factorial RCTs and reviewed the methods used in different studies and the incidence, magnitude, statistical significance, and type of interactions observed within the trials. As part of this review, we identified the existence of "mixed" interactions and developed the "interaction-effect ratio" as a measure of the magnitude and direction of interactions compared with main effects. For those studies reporting sufficient data, we assessed whether changing the form of analysis to ignore or include interactions would have changed the conclusions. We then evaluated how well different criteria for determining which interactions are considered in the analysis would perform in practice, using simulated data generated to match the summary statistics of the published examples.

Systematic review

Methods

A systematic review was conducted to identify studies for the simulation study. This aimed to identify all factorial RCTs with economic evaluations published before 2010 evaluating any intervention/comparator in any patient group. The protocol is available in Additional file 1. MEDLINE (including daily update and old MEDLINE), EMBASE, Econlit and Journals@Ovid were searched through Ovid on 9th February 2010. We also searched www.bmj.com, the Tufts CEA registry (https://research.tufts-nemc.org/cear/Default.aspx), Wiley Interscience, the National Institute for Health Research (NIHR) publications list (http://www.hta.ac.uk) and the Centre for Reviews and Dissemination (CRD, http://www.crd.york.ac.uk/crdweb) database on the same date. The review was not updated because the original review was sufficient to identify a representative sample of studies and provide the basis for the simulation study. The review followed PRISMA guidelines [9]. Search terms to identify factorial trials (e.g. "factorial", "2 x 2", "2 by 2", "two by two", or "2 x 3") were combined with search terms to identify economic evaluations ("cost-effect*" or "economic evaluation") (see Additional file 1).
Since some papers on factorial trial-based economic evaluations do not describe the design as factorial, clinical papers on factorial trials that happened to be picked up in the main database searches and which mentioned plans for an economic evaluation or collection of cost data were flagged. Additional targeted literature searches were then conducted to identify papers reporting economic evaluations of these specific factorial trials. One author (HD) examined titles and abstracts to assess whether they met all of the following inclusion criteria:

- Described the methods and/or results of a cost-effectiveness, cost-utility, cost-consequence or cost-benefit analysis quantifying the costs and benefits of interventions designed to improve health or affect healthcare systems.
- Used patient- or cluster-level data from a factorial RCT, as defined in Additional file 1.
- Published at least brief details of the methods and/or results of the trial-based economic evaluation on/before 31st December 2009.

Studies were not excluded from the review based on language, providing that at least an English abstract was available. For completeness, protocols published as journal articles by 31st December 2009 were also included, to give information on intended analytical methods. The same author extracted data on study characteristics, study design, statistical methods and results (see Additional file 2). Mean costs and mean health benefits within each cell of the factorial design and their standard deviations were extracted if reported. These data were used in the simulation study and to estimate the magnitude, influence and (where possible) statistical significance of interactions. Interactions were placed in one of four categories:

- super-additive: where the effect of the combination is greater than the sum of the parts;
- sub-additive: where the effect of the combination is less than the sum of the parts, but the interaction does not change the direction of effects;
- qualitative: where at least one of the treatments under investigation changes sign (not just magnitude) depending on whether or not the other therapy is given; and
- mixed: we developed the "mixed" category to reflect situations where one factor decreases outcome while the other increases it, such that the interaction has the same sign as one treatment effect, but the opposite sign from the other.

To measure the magnitude of interactions relative to between-group differences, we developed the interaction:effect ratio (Footnote 1), which indicates both the size of interactions and whether the interaction is super-additive, sub-additive/mixed or qualitative. The interaction:effect ratio (IER_AB) equals the interaction term (I_AB = μ_0 − μ_a − μ_b + μ_ab) divided by the simple effect of A (δ_A):

IER_AB = I_AB / δ_A = (μ_0 − μ_a − μ_b + μ_ab) / (μ_a − μ_0)

Simple effects comprise the difference in means between the group receiving one treatment and the group not receiving that treatment (δ_A = μ_a − μ_0). When all treatments have the same direction of effect (e.g. when A and B both increase cost, or both decrease cost), the factor defined as A is the one for which the simple effect has the smaller absolute magnitude (where |μ_a − μ_0| < |μ_b − μ_0|). For mixed interactions, factor A should be the factor for which δ_A has the opposite sign to I_AB. These rules ensure that qualitative interactions (those changing the ranking of treatments) have interaction:effect ratios < −1. In all cases, interaction:effect ratios < −1 indicate qualitative interactions, ratios between −1 and 0 indicate sub-additive or mixed interactions, ratios equal to 0 indicate additive effects, while interaction:effect ratios > 0 indicate super-additive interactions.
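As a minimal Python illustration of the definition above (function names and the example cell means are hypothetical, and the sketch assumes the chosen simple effect δ_A is non-zero), the ratio and its classification could be computed as:

def interaction_effect_ratio(mu0, mu_a, mu_b, mu_ab):
    """Interaction:effect ratio IER_AB = I_AB / delta_A for a 2x2 design,
    applying the paper's rules for which factor is labelled A."""
    i_ab = mu0 - mu_a - mu_b + mu_ab             # interaction term
    d1, d2 = mu_a - mu0, mu_b - mu0              # simple effects of the two factors
    if d1 * d2 >= 0:                             # effects in the same direction:
        delta_a = d1 if abs(d1) <= abs(d2) else d2   # A has the smaller |simple effect|
    else:                                        # mixed interaction:
        delta_a = d1 if d1 * i_ab < 0 else d2    # A's simple effect opposes I_AB in sign
    return i_ab / delta_a                        # assumes delta_a != 0

def classify(ier):
    if ier < -1:
        return "qualitative"
    if ier < 0:
        return "sub-additive or mixed"
    if ier == 0:
        return "additive"
    return "super-additive"

# Hypothetical cell means: the combination abolishes both simple effects
ier = interaction_effect_ratio(0.0, 1.0, 2.0, 0.0)
print(ier, classify(ier))  # -3.0 qualitative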
Results

Searches identified 1671 references (Fig. 1, Additional file 1). Of these, 40 complete studies presenting economic evaluation results, 13 published protocols and one prematurely-terminated study (Footnote 2) met the inclusion criteria. Additional file 2 gives details of all included studies. Of the completed studies, 23% (9/40) allowed for interactions between factors when analysing the primary clinical endpoint, 53% (21/40) assumed no interaction, while 25% (10/40) did not clearly state their methods (Table 1). Twenty studies (50%) used regression methods for the primary endpoint, of which five included interaction terms, seven did not and eight did not clearly describe their methods. Four studies used inside-the-table analysis and 14 used at-the-margins. Only three studies (8%) observed statistically significant interactions for the primary endpoint, although nine others (23%) observed large or qualitative interactions that did not reach statistical significance or for which significance was not reported. Interaction results were not clearly reported for 15 studies.

By contrast, 53% (21/40) of completed studies allowed for interactions in their base case economic evaluation: more than twice the number allowing for interactions in the primary endpoint. Studies were also more likely to report sufficient information to identify whether interactions were taken into account for cost-effectiveness than primary endpoints, although in most cases it was necessary to infer the methods used from the tables reported. Only five studies analysed economic results using regression analyses, while two used event-based cost-effectiveness analysis, 17 inside-the-table and 14 at-the-margins; this may reflect the difficulties associated with regression-based economic evaluation identified previously [1].

Footnote 1: The interaction:effect ratio differs from the "interaction ratio" used by McAlister et al. [10]. McAlister's interaction ratio is simply the relative effect (e.g. odds ratio) of A vs. not-A for patients also receiving B, divided by the relative effect of A vs. not-A for patients not receiving B (interaction ratio = (odds_ab / odds_b) ÷ (odds_a / odds_0)) and therefore equals the interaction on a logarithmic scale (interaction on log-scale = exp[(ln[odds_ab] − ln[odds_b]) − (ln[odds_a] − ln[odds_0])]). Unlike our interaction:effect ratio, McAlister's interaction ratio is appropriate only for data interpreted on a multiplicative scale and does not distinguish between qualitative and non-qualitative interactions. At least one previous paper has used the interaction divided by simple effect to describe the ranges of interaction magnitude in which different analytical approaches performed best [3]. However, this study did not include any adjustment for mixed interactions, did not link the ranges of ratio values with different types of interaction and did not propose this ratio as a method for describing interactions in general.

Footnote 2: One study meeting inclusion criteria was terminated early due to poor recruitment but was published as a monograph without analysis of economic results; this is considered in the review alongside protocols.
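The two formulas in Footnote 1 are equivalent, which a minimal sketch (function name and odds values illustrative) makes explicit:

import math

def mcalister_interaction_ratio(odds_0, odds_a, odds_b, odds_ab):
    """McAlister's interaction ratio: the relative effect of A with B,
    divided by the relative effect of A without B (multiplicative scale)."""
    return (odds_ab / odds_b) / (odds_a / odds_0)

# The exp-of-log-differences form gives the same value:
o0, oa, ob, oab = 0.20, 0.30, 0.25, 0.60   # hypothetical odds
ratio = mcalister_interaction_ratio(o0, oa, ob, oab)
log_form = math.exp((math.log(oab) - math.log(ob)) - (math.log(oa) - math.log(o0)))
print(ratio, log_form)  # both ≈ 1.6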
Fifteen completed studies (38%) presented the probability of treatment being cost-effective within the text or as cost-effectiveness acceptability curves. Of these, nine studies presented pair-wise comparisons giving the probability that one treatment is cost-effective compared with a single comparator; three studies presented figures showing how the probability of each treatment evaluated in the trial having highest NMB varies with the ceiling ratio; and a further three studies presented acceptability curves for both pair-wise and multiple comparisons. Six further studies quantified uncertainty in other ways (e.g. scatter graphs or confidence intervals). One study also presented the value of information [11-13]. Sixteen studies (40%) reported results inside-the-table in sufficient detail that interactions for both costs and health benefits could be directly evaluated (see Additional file 3). Large interactions arose frequently: 33% (24/72) of interactions had an absolute magnitude larger than one or more simple effect (interaction:effect ratios > 1 or < −1; Table 2). Interaction:effect ratios varied between −44 and 232. Overall, 33% of interactions were super-additive (23/72), 49% (35/72) were sub-additive or qualitative, while 17% (12/72) were mixed (Table 2). Large and qualitative interactions occurred at least as commonly for health benefits as for costs and NMB. Among the studies measuring health in units other than QALYs, 50% (7/14) of interactions were larger than simple effects. However, although 29% (7/24) of studies had qualitative interactions for NMB, the interaction changed the treatment adoption decision in only one case [15].

Simulation study

Methods

The six studies reporting standard deviations for each group [15-20] were used in simulation work to evaluate the different criteria for identifying which interactions should be included in economic analyses. Using simulated data means that: (a) whereas for a real trial we only see one sample, for simulated data we can generate multiple samples and see how performance varies; (b) we specify the true data-generating mechanism and can compare the conclusions of each individual sample against the true answer; and (c) we can vary the characteristics of the data-generating mechanism (e.g. interaction size and sample size) and see the impact on the results. For simplicity, simulations focused on balanced 2 × 2 full factorial designs with no covariates or missing data. We therefore only included the first two levels for each factor evaluated by Hollis et al. [20] and the Alexander Technique, Exercise And Massage (ATEAM) trial [17]. In addition to the original studies, five variants of each trial were simulated using interaction terms that were 0, 50% or 200% of the size observed in the original study, and using double the sample size with either the original interaction or zero interaction (see Additional file 3). The analysis used Stata version 12 (College Station, Texas) to simulate and analyse 300 samples of each of the 36 scenarios from the six trials. The data-generation methods and Stata code are shown in Additional file 3 and use the data in Additional file 5.
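The full data-generating mechanism and Stata code live in Additional file 3 and are not reproduced here; the Python sketch below illustrates the general approach under simplifying assumptions (independent gamma-distributed costs and Gaussian health benefits, each arm parameterised by its published mean and standard deviation). All function names and inputs are hypothetical.

import numpy as np

rng = np.random.default_rng(2024)

def simulate_2x2_sample(cells, n_per_arm):
    """Draw one balanced 2x2 factorial sample. `cells` maps each arm
    ('0', 'a', 'b', 'ab') to (mean_cost, sd_cost, mean_qaly, sd_qaly)
    taken from a trial's published summary statistics."""
    sample = {}
    for arm, (mc, sc, mq, sq) in cells.items():
        shape = (mc / sc) ** 2          # gamma shape implied by mean and SD
        scale = sc ** 2 / mc            # gamma scale implied by mean and SD
        costs = rng.gamma(shape, scale, n_per_arm)
        qalys = rng.normal(mq, sq, n_per_arm)
        sample[arm] = (costs, qalys)
    return sample

# Hypothetical cell parameters, loosely shaped like trial summary data
cells = {"0": (1000, 800, 0.70, 0.20), "a": (1400, 900, 0.74, 0.20),
         "b": (1200, 850, 0.73, 0.20), "ab": (1500, 950, 0.78, 0.20)}
sample = simulate_2x2_sample(cells, n_per_arm=100)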
The costs and benefits for each sample were analysed using four mixed models with different combinations of interaction terms: no interactions; interaction for costs only; interaction for health benefits only; and interactions for costs and benefits. The mixed models implemented seemingly-unrelated regression allowing for correlations between costs and benefits by predicting outcomes (which could be either costs or benefits) with random effects by patient. However, separate constants, treatment effects and (where appropriate) interactions were estimated for costs and benefits, and unstructured residuals were used. This approach gives identical results to the sureg command [21]. The log-likelihood, degrees of freedom and coefficients and their standard errors were recorded for each model. The coefficients estimated in mixed models were used to calculate NMB. For simplicity, all costs were interpreted as though they were in pounds Sterling. Results focus on ceiling ratios of £20,000/QALY [14] for the five studies measuring benefits in QALYs, and £5000 per unit of benefit for other studies.

Notes to Table 2: (a) A ceiling ratio of £20,000 (or $20,000 or €20,000) was used for quality-adjusted life-years (QALYs) [14] and life-years gained, while a ceiling ratio of £5000 (or $5000 or €5000) per unit of benefit was arbitrarily used for all other health benefits. (b) Mixed interaction: one factor increases the outcome of interest, while the other decreases it; the interaction therefore cannot be classified as either sub-additive or super-additive.

We evaluated 15 criteria for determining which interactions should be taken into account (Table 3) and applied these to each simulated trial sample. We compared the results of each analysis against the "true" results for each dataset, which (for the purposes of this simulation study) were assumed to equal the mean values for treatment effects and interactions shown in Additional file 3, Table 3.3. The sensitivity and specificity for identifying interactions, the probability of adopting the best treatment and the opportunity cost of making the wrong decision [1] were evaluated for each of the 15 criteria (Table 4). We used the opportunity cost as the primary measure of which criterion works best, since it focuses on the central question of economic evaluation: namely maximising health gains from a finite budget. Coverage, statistical power and bias were also calculated (Additional file 4).

Results

The 15 criteria differed in the proportion and type of interactions that were correctly identified (Table 5). Other than the "always include interactions" criterion (criterion 1), including interactions where p < 0.25 (criterion 5) and including interactions that are statistically significant or greater than simple effects (criterion 10) resulted in the largest number of cost interactions being included. By contrast, criteria 5 and 9-12 included the largest number of benefit interactions. In general, specificity and sensitivity were inversely proportional; measures based on information criteria or statistical significance at alpha = 0.05 tended to have high specificity and low sensitivity. Averaging across all 36 scenarios from the six trials, including interactions ≥0.25 or ≥£250 minimised the opportunity cost from adopting treatments that do not in fact maximise true NMB, while the opportunity cost of "always include interactions" was £0.04 larger (Table 5). "Never include interactions" performed worst, while criteria 3-7 (based on statistical significance and information criteria) also performed poorly. However, the criterion with lowest opportunity cost differed between individual scenarios (see Additional file 4).
As expected, "never include interactions" was, on average, the best criterion for the scenarios that did not have qualitative interactions, although no criteria had high opportunity costs when interactions were zero. Across the 13 scenarios with qualitative interactions, "always include interactions" performed best, although criteria 11-13 also performed well (including qualitative interactions, including interactions >simple effects or including interactions ≥0.25 or ≥ £250). Across all scenarios, criterion 9 (including interactions >simple effects) had the highest probability of adopting the treatment that has highest true NMB (Table 5). "Never include interactions" performed worst overall on this measure, but performed best in scenarios without qualitative interactions for NMB. "Always include interactions" performed best when there were qualitative interactions. However, results differed substantially between scenarios (not shown). Doubling the sample size reduced the opportunity cost and the probability of adopting the wrong treatment for all criteria. However, criteria based on statistical significance or information criteria (which explicitly take account of sample size) did not appear to perform any better relative to other criteria in larger studies. Furthermore, criterion 11 (including qualitative interactions for cost., benefits or NMB) performed best in scenarios with double the original sample size, whereas "always include interactions" performed best with a smaller sample size. Including all interactions was also the only criterion for which the 95% confidence intervals gave 95% coverage and also had no bias (See Additional file 4). Excluding all interactions had lowest coverage and highest bias. Including all interactions had lowest statistical power, while criteria 2, 8, 14 and 15 had highest statistical power (never include interactions, include qualitative interactions, include interactions ≥0.5 or ≥ £500 and include interactions ≥1 or ≥ £1000). Discussion Between-treatment interactions that can change the treatment adoption decision need to be taken into account in healthcare decision-making, model-based economic evaluations and economic evaluations based on factorial RCTs [1,2]. However, to our knowledge, this is the first study to evaluate the magnitude of interactions within published economic evaluations or compare different criteria for determining which interactions should be included in economic analysis. This systematic review found that 26% of all interactions in factorial trial-based economic evaluations published before 2010 were qualitative (i.e. change the ranking of treatments and render at-the-margins estimates misleading [5,36]), although interactions changed the treatment adoption decision in only one study. This provides empirical evidence on the importance of taking account of interactions within economic evaluations based on factorial trials [1] and within decisionanalytical models and health technology assessment [2]. Our results may also be useful for researchers defining informative priors for Bayesian analyses: one previous study assumed that the probability of a qualitative interaction is just 2.5% [12]: less than a tenth of the frequency that we observed in our review. 
However, 60% of studies did not report mean costs and benefits for each group inside-the-table; such presentation is important to allow readers to assess the impact of interactions and the extent to which they may bias the results [1]. Furthermore, the 16 studies reporting costs and benefits inside-the-table may not be typical: studies may have reported results inside-the-table because interactions were large. Of the completed studies, 53% allowed for interactions in their base case economic evaluation, whereas only 23% considered interactions for the primary clinical endpoint; these figures are similar to those reported previously [10,37]. The higher figure for economic evaluations could be due to interactions being smaller for the primary clinical analysis than the endpoint used in economic evaluation, or interactions being smaller when analysed on the logarithmic scale, which may be appropriate for many clinical endpoints but not economic evaluation [1]. Alternatively, the greater use of inside-the-table analysis within economic evaluation could reflect economic thinking: particularly the view that inference is irrelevant [8], or that treatment-combinations should be evaluated as mutually-exclusive alternatives.

Our review aimed to assess the magnitude of interactions in a representative sample of studies and provide data inputs for simulation work. Our literature search was conducted in 2010, and a separate systematic review of economic evaluations of factorial trials conducted in 2013 identified seven studies published since our search date but used a different search strategy [37]. However, there is no reason to expect the incidence of interactions or the performance of different criteria to have changed over time. Systematic identification of factorial trials is hindered by the absence of a medical subject heading (MeSH) term specific to this type of design. Our review may therefore have missed studies that did not mention the factorial design in the abstract, particularly if they presented results for only one factor; as a result, the review may underestimate the proportion of studies that have ignored interactions. However, our literature searches nonetheless identified four times as many pre-2010 papers as the review by Frempong et al. [37]: probably by using more general search terms, which yielded 10 times as many hits in bibliographic databases.

In four studies [38-41], the interaction between factors was confounded, as the treatment given to the ab group was not equal to the sum of the treatments given to the a and b groups (Footnote 4). Giving an additional intervention (e.g. advice or training) to the control group or to the three active treatment groups, or varying how treatment is administered, means that all estimates of the AB interaction are confounded by differences in treatment and makes analyses ignoring any interactions questionable. Future studies should avoid such confounding. If it is essential to give an additional treatment (e.g. for ethical reasons), papers should justify this decision and discuss what effect this is likely to have had on outcomes and interactions and (arguably) should not describe the study as factorial if the additional treatment is likely to influence outcomes.

Across all 36 scenarios, strategies of including all interactions, or including interactions larger than an arbitrary but relatively low threshold, minimised the average opportunity cost associated with adopting the wrong treatment. Excluding all interactions, or using information criteria or statistical significance, generally performed poorly on the measures most relevant to economic evaluation. However, the best criterion depended on how criteria were evaluated. For example, the probability of adopting the treatment with the highest NMB was slightly higher for criterion 9 (including interactions larger than simple effects) than for "always include interactions". There were also substantial variations in the relative performance of different criteria between trials and scenarios. In particular, criteria that excluded most interactions performed well in scenarios where interactions equalled zero or did not change the ranking of treatments. The performance of different criteria varied little with sample size, and the best-performing criteria take no account of sample size, suggesting that avoiding bias is more important than avoiding inefficiency even when sample size is limited. The simulation study was based on six factorial trials and a small range of variants on each study. Since the most appropriate criterion differs between studies, different results could have been obtained with a different set of trials or scenarios.

Table 3. List of the criteria for determining which interactions are taken into account that were evaluated in the study (criterion; rationale; how it was applied):

1. Always include all interactions. Sometimes referred to as "never pool" [3,22]. Avoids bias, but has lower power unless interactions are very large [3-6]. May be particularly appropriate for economic evaluation [1], since this focuses on maximising expected net benefit subject to current information [8]. This approach is statistically consistent, in that we would always adopt the treatment that truly had the highest NMB if the sample size were infinite, although results may be more sensitive to chance than approaches excluding some interactions. Applied by including interactions in the analyses of all trial samples.

2. Never include any interactions. Sometimes referred to as "always pool" [3,22]. Maximises statistical power unless interactions are very large, but is biased unless the true interaction is zero [3-6]. This approach is not statistically consistent and would cause us to adopt a suboptimal treatment whenever there is a qualitative interaction in NMB that changes which treatment had highest NMB, even with an infinite sample size. Applied by including no interactions in the analyses of any trial samples.

3-5. Include interactions where p < 0.05 (criterion 3) or at less stringent significance levels (criterion 5 used p < 0.25). Reflects standard practice for clinical endpoints, where only interactions that are statistically significant in an initial test are included in the main analysis [3,4,7]; significance levels > 0.05 are sometimes used for the test on interactions [23]. However, most studies are underpowered for main effects in costs and QALYs [24-30], which are likely to have variances a quarter of the size of those found for interaction terms [31-33]. Statistical inference may be irrelevant for decision-making, as health gains from the budget are maximised by adopting the treatment with the highest expected net benefit [8].

6-7. Include interactions decreasing AIC (criterion 6) or BIC (criterion 7). These measures trade off model fit against complexity [34], although this trade-off is based on information theory, rather than decision analysis. They have been used outside of healthcare to decide whether to include interactions in factorial experiments [35]. Results are based on the mixed model with lowest AIC/BIC.

8. Include qualitative interactions in cost or benefits. Interactions that change the ranking of treatments for cost or benefits may also have a high chance of changing the ranking of treatments for net benefits and therefore could also change the conclusions. This approach is simpler to implement than the criteria based on interactions for net benefit, as it does not depend on the ceiling ratio. However, at ceiling ratios other than zero and infinity, the conclusions of economic evaluation could be sensitive to interactions even if this criterion does not pick up qualitative interactions for either costs or benefits. Applied by including interactions for cost [benefits] that change rankings of treatments for cost [benefits]: i.e. those that are larger than and have the opposite sign from one or both of the simple effects (which will have interaction:effect ratios < −1).

9. Include interactions for cost or benefits if larger than a simple effect. This criterion includes super-additive interactions for cost or benefits that are larger than the smaller of the two simple effects, as well as the qualitative interactions included in criterion 8. However, like 8, it may not identify all qualitative interactions for net benefit. Applied by including all interactions with an absolute magnitude larger than the smaller of the two simple effects (i.e. all those with interaction:effect ratios < −1 or > 1).

10. Include interactions for cost or benefits if p < 0.05 or larger than a simple effect. This approach takes account of statistical significance and of interactions that are larger than main effects. As for 9, but also including smaller interactions that are statistically significantly different from zero.

11. Include qualitative interactions for cost, benefits or NMB. Allowing for interactions will have no effect on the conclusions about which treatment is adopted unless the interactions are qualitative on a NMB scale (i.e. change the ranking of treatments) at the ceiling ratio(s) of interest. However, since the true shadow price of a QALY is unknown, this approach requires arbitrary judgements about the ceiling ratio(s) at which the interactions are assessed. Including all interactions that are qualitative at any ceiling ratio would generally result in inclusion of all interactions, since any quantitative interaction in either costs or QALYs will produce a qualitative interaction in NMB at some ceiling ratio whenever the treatment lies in the north-east or south-west quadrants [1].

12. Include interactions for cost, benefit or NMB if larger than a simple effect. Includes all qualitative interactions in cost, benefits or NMB, and any super-additive interactions that are larger than the smaller of the two simple effects. Calculated as for 11, but also including large super-additive interactions.

13-15. Include |interactions| ≥0.25 or ≥£250 (criterion 13), ≥0.5 or ≥£500 (criterion 14), or ≥1 or ≥£1000 (criterion 15). An absolute limit for the size of interaction that can safely be ignored could be pre-specified. However, there is no general rule for how large this limit should be, and it may vary between applications; the size thresholds used were chosen arbitrarily. Only interactions above the designated size threshold were taken into account: for example, criterion 13 includes interactions in benefits that are ≥0.25 (or ≤ −0.25) units in size and interactions in cost that are ≥£250 (or ≤ −£250) in size.

Abbreviations: AIC, Akaike information criterion; BIC, Bayesian information criterion; NMB, net monetary benefit; QALY, quality-adjusted life-year.

Table 4. The measures used to assess performance of the criteria for deciding which interactions are considered (measure; rationale; how it was calculated):

- Sensitivity for including non-zero interactions. Sensitivity and specificity evaluate the extent to which criteria identify non-zero interactions, but do not reflect the consequences of ignoring them. Calculated as the proportion of samples in which interactions in cost [or benefit] were taken into account in the analysis when the true interaction was not zero.

- Specificity for excluding interactions equal to zero. Calculated as the proportion of samples in which interactions in cost [or benefit] were excluded from the analysis when the true interaction equalled zero.

- Probability of adopting the treatment with highest NMB. This focuses on the purpose of economic evaluation: namely, to inform a treatment adoption decision regarding which treatment has highest expected NMB and thereby maximise health gain from the budget. It assumes that inference is irrelevant to decision-making [8], but nonetheless acknowledges that inefficient analysis and small sample sizes may cause us to adopt the wrong treatment by chance. The probability of making the wrong decision may be relevant to risk-averse decision-makers; however, it does not take account of the consequences of making the wrong decision. The treatment arm with highest expected NMB was identified at the ceiling ratio of interest for (a) the "true" parameters used to generate the data and (b) the mixed model coefficients estimated on each sample. The proportion of samples in which the treatment predicted to have highest NMB (b) was the same as the "true" best treatment (a) was calculated for each scenario.

- Opportunity cost associated with adopting a suboptimal treatment. This measure takes account of the opportunity cost of adopting the wrong treatment, as well as the probability of adopting the wrong treatment [1]. It is similar to the opportunity cost of ignoring interactions [1] but is based on a contrast between the genuine best treatment and the treatment predicted to be best, rather than a comparison between two imperfect analyses on finite samples. As such, the opportunity cost estimated here takes account of situations where allowing for spurious interactions causes us to adopt the wrong treatment by chance, as well as situations where ignoring interactions biases the analysis. For each sample, the opportunity cost was defined as the NMB for the "true" best treatment (a) minus the NMB for the treatment predicted to have highest NMB in that analysis of that sample (b). In both cases, NMB for each treatment was calculated using the "true" parameters used for data generation. Opportunity cost was therefore zero for all samples in which the "true" best treatment was adopted and positive in all other cases. Opportunity cost was then averaged across samples and scenarios.

Abbreviations: NMB, net monetary benefit.
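A minimal Python sketch of Table 4's primary measure follows; all names, the £20,000 default ceiling and the example means are illustrative, with "true" means coming from the data-generating mechanism and estimated means from one fitted model on one sample.

def nmb(mean_qaly, mean_cost, ceiling=20000):
    """Net monetary benefit at a given ceiling ratio (GBP per QALY)."""
    return ceiling * mean_qaly - mean_cost

def opportunity_cost(true_means, est_means, ceiling=20000):
    """True NMB of the truly best arm minus true NMB of the arm that the
    fitted model would adopt; zero whenever the two arms coincide."""
    true_nmb = {arm: nmb(q, c, ceiling) for arm, (q, c) in true_means.items()}
    est_nmb = {arm: nmb(q, c, ceiling) for arm, (q, c) in est_means.items()}
    best_true = max(true_nmb, key=true_nmb.get)   # genuinely best arm
    adopted = max(est_nmb, key=est_nmb.get)       # arm the analysis would adopt
    return true_nmb[best_true] - true_nmb[adopted]

# Hypothetical (mean QALY, mean cost) per arm
true = {"0": (0.70, 1000), "a": (0.74, 1400), "b": (0.73, 1200), "ab": (0.78, 1500)}
est = {"0": (0.69, 1050), "a": (0.75, 1380), "b": (0.72, 1250), "ab": (0.76, 1600)}
print(opportunity_cost(true, est))  # 700.0: 'a' is adopted although 'ab' is truly best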
The analysis focused on 2 × 2 full factorial trials and interactions between two factors. Although the same principles are likely to apply to larger factorial designs, higher-order interactions between three or more treatments may be harder to detect. Furthermore, all trials were simulated and analysed as though they measured health benefits on continuous scales, and all costs and health benefits were analysed on a natural scale using arbitrary ceiling ratios. The data-generating mechanism also simulated trials with complete, uncensored data, equal numbers in each arm, gamma-distributed costs with predictable patterns of heteroskedasticity and Gaussian, homoskedastic health benefits. Interactions were assumed to affect all patients in the A + B group equally (which may not be the case for rare events). Mixed models and the criteria based on statistical significance may perform less well in real trials where these idealised data characteristics do not apply. The optimal choice of criteria may also be sensitive to these features common to all simulated datasets (see Additional file 3).

Conclusions

Large and qualitative interactions occur relatively commonly for costs, QALYs and net benefits. Future systematic review updates may help assess whether the conduct of economic evaluations of factorial trials has changed and quantify interactions in a wider sample of trials. The simulation study demonstrated that it is better to include interactions that may have arisen by chance than risk ignoring genuine interactions that could change the conclusions. Researchers planning an economic evaluation based on a factorial trial should pre-specify and justify the criterion used to determine which interactions will be taken into account in the base case analysis [1]: e.g. in a health economics analysis plan [42]. The chosen criterion should balance the risk of bias from ignoring interactions against the loss of power from including interactions and the risk of drawing the wrong conclusions by chance. Although the criteria that performed best in our study depended on the magnitude of the true interaction, minimising the risk of bias by including all interactions or excluding only small/quantitative interactions tended to perform best. Criteria relying on statistical significance or information criteria performed poorly. This differs from the approach currently used by statisticians, although at least one published economic evaluation has used a pre-specified rule that interactions larger than main effects would be taken into account [43]. Any prior evidence or beliefs about the size of interactions could be used to select the appropriate criteria or as informative priors in a Bayesian analysis. In particular, a strategy of including all interactions above a certain size may perform better if the threshold is based on the expected treatment effects or the amount of bias that is acceptable in a particular setting. In addition to the criteria considered here, researchers could exclude all interactions not hypothesised a priori, or those that do not have plausible biological explanations. Whenever the base case analysis excludes any interactions, researchers should always present a sensitivity analysis including all interactions to assess the risk of bias [1].

Additional file 1. Protocol for the systematic review. Includes search strings and numbers of hits.
Additional file 2. Data extraction table for the systematic review of studies conducting economic evaluations of factorial design studies. Includes full details on each study meeting inclusion criteria.
Additional file 3. Additional methods on data simulation and analysis. Includes data on the magnitude of interactions for each of the studies.

Footnote 4: Three of these studies were included in the review as the authors described them as factorial [38-40]. In one study [40], general practitioners randomised to one of the three active treatment groups received a training session not given to the control group. Conversely, two trials gave patients in the control group an additional intervention not given to the other three groups. A fourth trial, in which the second factor compared physiotherapy against reinforcement of the advice given as part of factor 1 (a whiplash book or usual advice), was excluded from the review as the authors did not describe it as factorial [41]. A further study that was included in the review allowed information sharing between practitioners within the ab group that was not possible within the groups receiving < 2 interventions [19].
2020-05-07T15:28:05.762Z
2020-03-18T00:00:00.000
{ "year": 2020, "sha1": "458b59679f85a41c9c88a5a15ab3ea0eefac4881", "oa_license": "CCBY", "oa_url": "https://bmcmedresmethodol.biomedcentral.com/track/pdf/10.1186/s12874-020-00978-0", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "458b59679f85a41c9c88a5a15ab3ea0eefac4881", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
250430554
pes2o/s2orc
v3-fos-license
"COVID knocked me straight into the dirt": perspectives from people experiencing homelessness on the impacts of the COVID-19 pandemic

Background: People experiencing homelessness are uniquely susceptible and disproportionately affected by the impacts of the COVID-19 pandemic. Understanding context-specific challenges, responses, and perspectives of people experiencing homelessness is essential to improving pandemic response and mitigating the long-term consequences of the pandemic on this vulnerable population.

Methods: As part of an ongoing community-based participatory research study in partnership with a homeless service organization in Indiana, semi-structured interviews were conducted with a total of 34 individuals experiencing homelessness between January and July 2021. Guided by the NIMHD Health Disparities Research Framework, which builds on the socio-ecological model, data was thematically coded using Nvivo12 qualitative coding software and themes were organized by levels of influence (individual, interpersonal, community, societal) and domains of influence (biological, behavioral, physical/built environment, sociocultural environment, health care system).

Results: Narratives revealed numerous and compounding factors affecting COVID-19 risks and health outcomes among people experiencing homelessness across all levels and domains of influence. At the individual level, people experiencing homelessness face unique challenges that heightened their susceptibility to COVID-19, including pre-existing physical and mental health conditions, substance use and behavioral health risks, socioeconomic precarity, and low health literacy and COVID-related knowledge. At the interpersonal level, poor communication between people experiencing homelessness and service providers led to limited understanding of and poor compliance with COVID safety measures. At the community level, closures and service disruptions restricted access to usual spaces and resources to meet basic needs. At a policy level, people experiencing homelessness were disregarded in ways that made pandemic relief resources largely inaccessible to them.

Conclusions: Our findings reveal important and mitigable issues with ongoing pandemic response efforts in homeless populations through direct, first-hand accounts of their experiences during COVID-19. These insights offer opportunities for multilevel interventions to improve outreach, communication, and impact mitigation strategies for people experiencing homelessness. This study highlights the importance of centering the voices of vulnerable communities to inform future pandemic response for homeless and other underserved and marginalized populations.

Keywords: COVID-19, Homelessness, Health disparities, Community-based participatory research, Socio-ecological model, Disaster response, Pandemic response

Background

The COVID-19 pandemic has disproportionately impacted vulnerable communities across the country, highlighting existing social inequities further exacerbated by the pandemic. People experiencing homelessness face increased risk and susceptibility to COVID-19 infection and adverse outcomes due to pre-existing comorbidities, barriers to healthcare, socioeconomic precarity, and limited ability to social distance in congregate shelter settings [1,2]. As a result, heightened risk of transmission and outbreaks in shelters persisted despite decreases in cases among the general population [3]. Shelters and other homeless service providers have taken numerous approaches to mitigate risks, control transmission, and limit outbreaks to minimize adverse outcomes [1,4,5].
Previous research on COVID-19 responses from the perspective of homeless service organizations in Indiana found that service providers experienced multilevel challenges during the pandemic, such as limited public health and emergency management guidance and difficulty enforcing safety measures among shelter guests, but also showed innovative responses with systems and staffing in place, along with the support of community and government partners [6]. The COVID-19 response in homeless populations led to improvements in crisis execution and public health protocols such as hand hygiene, social distancing, and quarantine and isolation protocols, and also created initiatives to sustain these programs [6-8]. However, limited adaptable guidance and policies for people experiencing homelessness and service providers have severely strained their response and resources [6]. Others have also discussed COVID-related responses and challenges from the perspective of homeless service providers, including limited availability of testing resources, which severely hindered the ability of shelter staff to adequately screen people experiencing homelessness and prevent shelter outbreaks [9]. Furthermore, the economic consequences of the pandemic intensified the strain on low-income populations, and evictions disproportionately put those most socially disadvantaged at risk for COVID-19 [10].

While a significant number of studies report trends in coronavirus cases, hospitalizations, and deaths [11], few consider the other numerous impacts of COVID-19 on people experiencing homelessness, and few have explored the impact from the perspective of people experiencing homelessness directly. Among the few studies that have qualitatively explored perspectives of people experiencing homelessness, most have been conducted outside the United States [12-15]. Findings from our previous work in Indiana [6] highlighted the need to hear and learn from people experiencing homelessness directly, in order to holistically understand the impact of the pandemic and to better inform responses that address the specific needs of this uniquely vulnerable population. Thus, guided by the Socio-Ecological Model [16], which recognizes the interrelatedness of person and environment, this study sought to understand 1) the experiences of people experiencing homelessness throughout the COVID-19 pandemic and 2) the perspectives of people experiencing homelessness on homeless service organizations' responses to the pandemic and the impacts of those responses. Awareness of this vulnerable population's multidimensional needs creates an opportunity to discover motivations, hesitations, and challenges contributing to increased risk, susceptibility, and adverse health outcomes. Understanding these critical factors can better inform future pandemic response as well as interventions to mitigate the long-term impacts of COVID-19 for homeless populations.

Methods

After exploring local homeless service providers' and community-based organizations' responses during COVID-19, we turned to learn from people experiencing homelessness themselves in order to understand how they have personally experienced the pandemic, the challenges they have faced, and the unmet needs that persist.
As part of an ongoing community-based participatory research (CBPR) project [6], this study's recruitment and data collection activities took place at our community partner organization, a transitional housing center in Indiana that serves as the coordinated point of entry for all people experiencing homelessness in the county. The organization includes an engagement center that operates as a day shelter, offering three daily meals, showers, laundry machines, phones, and case management services, and a small night shelter where some but not all guests stay overnight. Using convenience sampling, recruitment involved passive outreach via flyers and general announcements at the shelter. Interested participants were told to contact the phone number on the flyer or to speak to a study team member on site. At no time were people experiencing homelessness approached directly. To be eligible, participants had to be age 18 or older, currently experiencing homelessness, and receiving any services from the transitional housing center. There were no pre-determined enrollment targets, as we aimed to capture as many perspectives and narratives from people experiencing homelessness as possible in the six-month study period. Participants received a $25 gift card to a local grocery store in compensation for their time providing an interview. All study activities took place from January through July 2021.

An interview guide was developed to understand unique challenges, responses, and experiences faced by people experiencing homelessness during the COVID-19 pandemic. Development of the interview guide was guided by: (1) the Socio-Ecological Model; (2) a review of academic and grey literature conducted to gain insights into COVID-19 responses taken by entities working with people experiencing homelessness, and to identify knowledge gaps that could be informed through interviews; and (3) preliminary findings from our previous research with community-based organizations [6]. Interviews were conducted in person, in private rooms at the center, by community health workers (CHWs) who live in the surrounding community and serve as health educators in the center, providing health-related education and daily public service announcements during the COVID-19 pandemic. CHWs have relationships, knowledge, and trust with the community they serve and are increasingly being brought into research to better understand the health needs of marginalized populations [17-20]. The CHWs were part of the research team, completed IRB-required trainings on responsible conduct of research and human subjects research, and were further trained on research ethics and data collection by the study principal investigators (authors NMR and YR), who have extensive experience conducting CBPR and CHW training interventions. CHWs received extensive training to ensure that people experiencing homelessness understood that participation was voluntary and that involvement, or lack thereof, would not in any way affect their access to center services. Interviews were recorded and transcribed by Otter.ai, a digital scribing platform.
Transcriptions were quality checked for accuracy by the research team, who also led the analysis. Utilizing a combination of deductive and inductive coding based on the interview guides, two researchers coded each interview independently using NVivo 12, a qualitative coding software, and discussed the interviews as a group to ensure intercoder consistency [21]. Disagreements were brought to the entire research team, including CHWs, and codes and resulting themes were discussed until consensus was reached [22]. Guided by the National Institute on Minority Health and Health Disparities (NIMHD) research framework [23], which builds on the Socio-Ecological Model, data were thematically analyzed [24], and themes were organized by levels of influence and domains of influence. Preliminary deidentified findings that highlighted strengths, opportunities, and challenges of the responses to COVID-19 as perceived by people experiencing homelessness were shared with the study's community partner as well as other community agencies that serve this population. Feedback from the community partners helped contextualize and clarify aspects of the findings, and sharing preliminary findings also allowed the community partners to act quickly on identified needs or opportunities to improve service delivery for people experiencing homelessness. This study was approved by the University's Institutional Review Board (protocol IRB-2020-1488).

Results

In total, 34 people experiencing homelessness (M age = 46 years [range 22 to 63]; 65% male; M = 3.5 years spent experiencing homelessness [range from 6 months to over 10 years]) participated in semi-structured interviews. Most identified as White (79%), with 6% as Black/African American, 9% as American Indian or Alaska Native, and 3% as multiracial. Most (59%) reported having a high school or equivalent education, and over half (53%) reported no monthly income (6% less than $500, 24% between $500 and $999, and 18% ≥ $1000). The demographic characteristics of our participant pool are fairly representative of the 2021 point-in-time count results of homeless populations in Indiana [25]. Additional demographic information is presented in Table 1. Qualitative content analysis and resulting themes were organized by level of influence: individual, interpersonal, community, and societal (Fig. 1).

At the individual level, across all domains of influence, people experiencing homelessness faced unique challenges that heightened their vulnerability during the COVID-19 pandemic

Participants spoke of concerns about their biological risk of severe COVID-19 due to their pre-existing health conditions. One participant shared: "I know it, if I get it I'm dead, because the way my lungs and my health and stuff are-it's killing athletes, why wouldn't it kill me? I don't eat right, I smoke cigarettes, I've been a drug addict my whole life. If it's killing athletes, it's definitely going to kill me" (Participant 009). Others shared how the pandemic amplified existing mental health disorders: "Just being depressed… there's nothing to do, and people aren't meeting and this just like sucks" (Participant 028). Others shared fears that their behaviors, such as substance use, would increase their COVID susceptibility and their risk for adverse COVID-19 outcomes. One noted an increase in smoking, "I did pick up cigarettes again since I've been here [shelter], and I had stopped for six months, and now I'm smoking again about a half a pack a day… " (Participant 018).
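The independent double-coding step lends itself to a quick quantitative check before disagreements are discussed. As a minimal illustrative sketch (not part of the study's reported methods), Cohen's kappa is one common statistic for chance-corrected intercoder agreement; the code labels and segment assignments below are hypothetical.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' categorical labels on the same segments."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Chance agreement: probability both coders pick the same label independently
    expected = sum(freq_a[l] * freq_b[l] for l in set(freq_a) | set(freq_b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical level-of-influence codes assigned to ten transcript segments
a = ["individual", "community", "individual", "policy", "interpersonal",
     "community", "individual", "policy", "community", "interpersonal"]
b = ["individual", "community", "interpersonal", "policy", "interpersonal",
     "community", "individual", "policy", "community", "community"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.73 here: substantial agreement
```

Such a check complements, rather than replaces, the consensus discussions described above.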
Others shared how the pandemic intensified their struggles with substance use: "the urge to want to use went up" (Participant 028). By contrast, some participants expressed hope that the pandemic could offer them a chance to start over, and that because the pandemic forced many people to lose their jobs it could create opportunities for them to find work: "In a sense, COVID may have actually helped some of us homeless, because it kind of ground to a halt, something that is that we've been missing when it's been flying by us, we haven't been able to put the pieces together and actually get on and get in the grind. And so in that sense it's helped because it's, it's helped is to put a lot of people in the industry as well. So in that respect, it can help because everybody has to start over and look at that job again. I had, I'm starting over" (Participant 022). One participant described seeing themself as a survivor with determination to overcome challenges: "My mental attitude hasn't changed... even when I had COVID, I wasn't like sad or angry or anything… I know I'm going to get through this, I know I'm going to survive" (Participant 017). Reflecting on the physical/built environment, specifically the impracticality of social distancing in the shelter, a participant shared: "I don't think the six feet social distancing applies here in the homeless community, because people are within two feet when we eat 3 times a day, or when they go outside and smoke cigarettes, they're within four feet, or when they sleep inside the shelter, they're within four feet" (Participant 018). Indeed, some felt that the shelter's congregate living conditions increased their risk of infection: "this [shelter] would be the best place to catch COVID" (Participant 018). Many also lost employment due to the pandemic.

Evidence of low COVID-19 health literacy and knowledge gaps emerged, with only 50% of all participants believing they were at risk for COVID-19 (Table 1), and through participant narratives, some sharing, "I don't know anything about COVID…" (Participant 020) and "I was in prison, so I don't know too much" (Participant 015). This lack of knowledge affected COVID risk perceptions among people experiencing homelessness, with some expressing fear: "I'm scared to death, about getting this COVID, it really, really makes me paranoid" (Participant 009), some feeling hopelessness, "… even if I did get it, I wouldn't care. I mean…my life, besides, the quality of it is not good, so…it doesn't matter" (Participant 005), and others sharing feelings of indifference and disbelief: "I don't know anyone who has died from it. My girlfriend doesn't know anyone who has died from it. So, I don't trust the media…I think that maybe this whole thing was blown out of proportion. But I know that it made my homeless crisis that much more difficult" (Participant 023). Poor COVID knowledge also affected the willingness of people experiencing homelessness to adopt COVID prevention measures like vaccination and testing. At the time of the study, most participants (79%) shared that they were willing to be tested for COVID-19; however, only 21% of participants were vaccinated, 24% were unvaccinated but expressed a willingness to be vaccinated, 21% were unvaccinated and unwilling, and 32% were undecided (Table 1). Some were reluctant about testing (Participant 005), and others feared that the swabbing procedure would be painful, "I will not take a giant Q-tip up on my nose. Everybody else says it's painful and very uncomfortable" (Participant 024).
Moreover, some shared being unwilling because they feared being quarantined if they tested positive.

At the interpersonal level, poor communication and discrimination led to misunderstandings and tension between people experiencing homelessness and homelessness service providers

Some participants acknowledged that the pandemic made the work of shelter staff, given all the uncertainty and limited resources, quite difficult: "they're [staff] doing the best they can with what they have" (Participant 004). Others expressed appreciation towards staff who enforced mask wearing, "Every time they turn around, they're telling somebody 'put your mask on, if you're not eating put your mask on' …Yeah, they take it very serious, it's a good thing" (Participant 034). In fact, one newly homeless individual described how quickly staff alerted him to the already in-place COVID-19 policies of mask wearing, sanitizing, and social distancing: "I was pretty much baptized right into what was already going on... Wearing masks. Social distancing. The hand sanitizers" (Participant 020). In contrast, many participants described how authoritative power dynamics and poor staff-to-client communication led to misunderstandings and tension between people experiencing homelessness and service providers that often resulted in poor compliance with COVID safety policies. One participant shared feeling frustrated by staff who, instead of explaining the rationale behind mask wearing, simply threatened to remove people experiencing homelessness from shelter premises for non-compliance: "If we don't have this […]".

Adding to the tension between people experiencing homelessness and service providers was a sense that staff often did not follow the rules themselves: "I don't think they know how to handle this virus situation, you know? When we go to lunch, they say 'six feet apart' […]". In addition to concerns related to how staff handled COVID safety practices, participants expressed skepticism and concern towards the ways staff handled shelter closures when clients tested positive for COVID-19: "… they put the building on lockdown stopping new people from coming in... And I noticed that the health department was quick to lock down the buildings, but they weren't quick to lift the lock down with a false positive happening and being reported, which makes me question both the response and the direction that they're taking with the lockdown. As I'm already noticing people coming in looking for services and being turned away as a result of a false positive" (Participant 018). Similarly, another shared, "what upsets me is like for instance, if you've not been here since [start of lockdown]… you will not receive services. So, what are you supposed to do? You're out on the street" (Participant 005). One participant went on to share how a lockdown further exacerbated his homelessness status, "when the building was closed down and [they were] not accepting new clients because of COVID, it forced us to use the last bit of our savings on a hotel… putting us in a really, really bad financial spot" (Participant 018). Tensions between people experiencing homelessness themselves were also reported.
The congregate shelter setting and limited personal space led to tension between some shelter guests, exacerbated by interpersonal discrimination, with some participants describing other people experiencing homelessness as "a stubborn bunch" (Participant 020), stating that "some people just do not care at all about other people and they just cough right in their face and wipe their snot everywhere… sometimes you have people basically touching you, or touching your backpack or whatever clothes. Maybe it's an accident maybe they're doing it on purpose, who knows" (Participant 015).

At the community level, closures and service disruptions restricted access to usual spaces, routines, and the ability to meet basic needs

All participants spoke in detail about how COVID-related closures in the community had affected their ability to meet basic needs on a daily basis. Some had great difficulty finding spaces to shelter or even just to be. Regarding shelter-specific organizational responses to the pandemic, some participants expressed a positive reaction to the accessibility of sanitizing products along with more frequent bathroom cleaning in the shelter, stating, "I feel like the hand sanitizer everywhere. That's awesome. I think that helps" (Participant 028). Another stated, "I like how [the shelter] nightly have cleaners to go clean the bathroom, like it's never been cleaned before... " (Participant 034). Others commented on the lack of consistently available resources in restrooms, such as soap, toilet paper, and paper towels: "…the people taking all the paper towels or soap dispensers being empty…generally you can't get in the bathroom in here anyway, so I just mostly use the hand sanitizer... they don't always have toilet paper, they don't always have paper towels. It seems like the soap dispensers aren't being filled" (Participant 020). The increase in shelter demand because of the pandemic further strained the already limited resources, leading to longer wait times for restrooms, with one participant stating, "It's so bad here in the morning with these bathrooms that I got to take the number seven out to Walmart and use the bathroom out there. I don't even bother trying to come here" (Participant 023). Two participants felt that during the pandemic, relief resources allowed for continued or even increased accessibility to services. One stated that services "Became easier to get. Felt it. " (Participant 008), and another shared, "I actually got housed within like a couple weeks" (Participant 024). However, the majority of participants described how COVID-related service disruptions severely restricted or delayed access to key services. "I had to continue to live on the streets, even though it was four-degree weather out because of the fact that the [shelter] had a case of COVID.... " (Participant 022). One participant shared, "[behavioral health providers] used to come, but they don't no more because of the COVID thing" (Participant 012). One participant described the delays he experienced in accessing necessary paperwork, "[I was] referred to [homeless service organization] to get my green card and [organization] was shut down… somebody COVID in there so nothing happened until January… that I finally was able to get in there into the zoom thing, meeting with them got, you know the application kind of filled out and everything…" (Participant 017).
In addition to service disruptions, participants also described a notable decrease in community support from volunteers: "And with the virus and all, what it did, it brought the families closer together at their homes. It makes some of the [people] or the churches or some that used to help nonexistent. They just don't want to take the time to do it or take the time to help" (Participant 027). In contrast, some participants spoke of the increased visibility of homelessness as a silver lining of the pandemic for homeless communities: "How is [covid] affecting me? it's affecting everyone in the whole country but even more so is the homelessness. I think COVID has actually helped empower some homeless communities because of the fact that some of the commercials are put out by the big conglomerate businesses that are really enlightening and really heartfelt and very spot on" (Participant 022).

At the policy level, people experiencing homelessness continue to be neglected in ways that made pandemic relief resources largely inaccessible

Participants described how unclear guidance on COVID policies and stimulus funding led to overwhelming confusion and an inability to access relief resources. Local COVID-related policies, including "stay-at-home" orders, mask mandates that meant people experiencing homelessness were never able to be without a mask indoors, and transit rules, were not clearly communicated to people experiencing homelessness and often disregarded their specific needs and context. One participant shared, "We were riding the buses for free because they didn't want to handle the change… Alright. Nothing has changed. The virus is still there. So it's like, why we ain't riding the buses for free now?...Yeah, you know and they're making us pay now, and it's like, the virus hasn't changed" (Participant 006).

Discussion

This study explored the impacts of the COVID-19 pandemic on homeless populations in the US through firsthand accounts from people experiencing homelessness in Indiana. Across all domains of influence (biological, behavioral, physical environment, sociocultural environment, health care system), interviews with people experiencing homelessness revealed multilevel factors affecting their susceptibility to COVID-19 and other adverse outcomes of the pandemic. While existing research has surveyed people experiencing homelessness to understand specific COVID-related issues such as loneliness and isolation [13,14], mental health and substance use [26], and attitudes towards vaccination and testing [8,27,28], there have been limited efforts to provide accounts from people experiencing homelessness themselves, in a way that centers the voices of the most affected to understand the direct impacts of the pandemic on this vulnerable population. This community-based, qualitative study explored narratives of lived experiences and perspectives about being homeless during the pandemic. To date, several studies have garnered perspectives from homelessness service providers and reported on the numerous challenges these frontline workers faced throughout the pandemic, as well as the complex and innovative ways they navigated and responded to these challenges [6,29,30]. In many respects, the narratives of people experiencing homelessness supported provider accounts, particularly around this population's pre-existing physical, mental, and behavioral health conditions that were exacerbated by COVID-related service disruptions, and the multilevel challenges that made safety measures like social distancing difficult and often impossible.
The absence of the voices of people experiencing homelessness in research risks missing a more direct and nuanced understanding of the motivations, knowledge, attitudes, and beliefs that contribute to challenges. Indeed, our findings highlighted several important and mitigable issues that had not come up in our previous work, which focused solely on provider perspectives. For instance, many participants spoke of communication issues between shelter staff and guests that led to poor understanding of, and low compliance with, COVID-related safety measures. Specifically, participants felt that little to no effort had gone into informing or educating them about COVID-19, and they were rarely offered a rationale for new shelter rules such as mask wearing and social distancing. Moreover, they shared that staff neither explained nor modeled expected behavior, but instead were described as "contradicting themselves" and as communicating by "yelling", "trying to control", or "threatening to kick out" shelter guests. Interviews with people experiencing homelessness also revealed important knowledge gaps and misinformation surrounding COVID-19 that were made worse by a lack of reliable information sources. Not a single participant mentioned shelter staff as a source of COVID-related information, instead indicating that sources were often word-of-mouth and social media. Furthermore, interviews emphasized key policy failures that made state and federal pandemic responses especially neglectful of, and even harmful to, homeless communities, including mask mandates, stay-at-home orders, and closures of public spaces and transportation, which disregarded the context and unique needs of people experiencing homelessness. Other policies, such as eviction moratoriums, contained loopholes and exceptions that failed to protect this vulnerable population. Lack of tailored guidance also led to confusion among people experiencing homelessness surrounding COVID healthcare-related policies and procedures and created substantial barriers to acquiring relief resources and stimulus benefits. This study had several limitations. First, as the interview participants were recruited from only one county in Indiana, we cannot assume that our results are representative of the larger population of people experiencing homelessness in other areas of the US. Because recruitment and eligibility were restricted to guests receiving services at a homelessness service organization, it is possible that this study is missing the perspective of unsheltered people who may not be accessing any support services and thus remain particularly vulnerable. In addition, we relied on convenience sampling for this study, which depends on the motivation of those who participate in the research and thus can introduce motivation bias. Nonetheless, our findings reveal numerous opportunities for multilevel interventions and improved disaster response for homeless populations that may be useful for other contexts. At the individual level, this work highlights the imperative for outreach, education, and navigation of people experiencing homelessness through healthcare and social welfare systems. Community health workers and other types of outreach workers have served as essential links between underserved populations and health and social services both during and long before the pandemic [18,19,31,32].
Hiring and deploying trusted individuals with lived experience or knowledge of the community could be key to pandemic response in homeless populations, by providing education, testing, access to vaccines, and navigation of relief programs, stimulus checks, and related benefits. At the organizational level, training interventions for shelter staff and other homelessness service providers on implicit bias, cultural competency, effective communication, conflict resolution, mental health and substance abuse, and COVID-19 mitigation in shelters [33][34][35][36][37] could foster better communication skills and strategies and an improved ability to meet the needs of people experiencing homelessness. At the societal and policy level, federal and state guidance and policy must be inclusive of our most vulnerable populations and tailored to their local contexts, which can only be achieved through meaningful engagement of members of these vulnerable communities. People experiencing homelessness must be engaged, listened to, and counted in a meaningful and participatory way. The majority of federally reported COVID-19 outcomes in homeless populations focused on numbers of cases and deaths, and disregarded both the complexities that made those counts inaccurate and the enormous range of other impacts these communities faced [38]. People experiencing homelessness must also be protected from policy loopholes and other exceptions that exacerbate inequities and perpetuate a vicious cycle of falling through the cracks. Stronger eviction prevention measures and policies to prevent homelessness and provide affordable housing, including permanent supportive housing, are increasingly critical beyond the COVID-19 pandemic [39][40][41]. Despite the overwhelming challenges faced by homeless populations, participants also described numerous elements that helped them cope, overcome, and even grow despite the traumas and significant stressors, with some indicating hope that the pandemic might offer them an opportunity for a fresh start. There is increasing evidence that supportive programs can assist people in exiting homelessness [42][43][44], yet without centering these efforts on the voices of those most affected, they will continue to fall short. Further research is needed to enable the U.S. to create a system that is person-centered. These efforts must provide not only a better understanding of the unique and multidirectional needs of people experiencing homelessness, but also move beyond a deficit model towards one that identifies supportive protective factors, so that programs and policies can not only help individuals exit homelessness but also strive to reduce the risk of homelessness.
Gravitational waves generated by second order effects during inflation

The generation of gravitational waves during inflation due to the non-linear coupling of scalar and tensor modes is discussed. Two methods describing gravitational wave perturbations are used and compared: a covariant and local approach, as well as a metric-based analysis based on the Bardeen formalism. An application to slow-roll inflation is also described.

I. INTRODUCTION

The generation of gravitational waves (GW) is a general prediction of an early inflationary phase [1]. Their amplitude is related to the energy scale of inflation, and they are potentially detectable via observations of B-mode polarization in the cosmic microwave background (CMB) if the energy scale of inflation is larger than $\sim 3\times 10^{15}\,$GeV [2,3,4,5,6]. Such a detection would be of primary importance for testing inflationary models. Among the generic predictions of one-field inflation [7] are the existence of (adiabatic) scalar and tensor perturbations of quantum origin with an almost scale-invariant power spectrum and Gaussian statistics. Even if non-linear effects in the evolution of perturbations are expected, a simple calculation [8], confirmed by more detailed analysis [9], shows that it is not possible to produce large non-Gaussianity within single-field inflation as long as the slow-roll conditions are preserved throughout the inflationary stage. Deviations from Gaussianity can be larger in, e.g., multi-field inflation scenarios [8,10] and are thus expected to give details on the inflationary era. As far as scalar modes are concerned, the deviation from Gaussianity has been parameterized by a (scale-dependent) parameter, $f_{\rm NL}$. Various constraints have been set on this parameter, mainly from CMB analysis [12] (see Ref. [13] for a review of both theoretical and observational issues). Deviations from Gaussianity in the CMB can arise from primordial non-Gaussianity, i.e. non-Gaussianity generated during inflation, from post-inflation dynamics, or from radiation transfer [14]. It is important to understand them all in order to track down the origin of non-Gaussianity, if detected. Among the other signatures of non-linear dynamics is the fact that the scalar, vector and tensor (SVT) modes of the perturbations are no longer decoupled. This implies in particular that scalar modes can generate gravity waves. Also, vector modes, which are usually washed out by the evolution, can be generated. In particular, second-order scalar perturbations in the post-inflation era will also contribute to B-mode polarization [15] or to multipole coupling in the CMB [16], and it is thus important to understand this coupling in detail. In this article, we focus on the gravitational waves generated from scalar modes via second-order dynamics. Second-order perturbation theory has been investigated in various works [17,18,19,20,21,22,23,24,25], and a fully gauge-invariant approach to the problem was recently given in Ref. [25]. Second-order perturbations during inflation have also been considered in Refs. [9,26], providing the prediction of the bispectrum of perturbations from inflation.
Two main formalisms have been developed to study perturbations, and hence second-order effects: the 1+3 covariant formalism [27], in which exact gauge-invariant variables describing the physics of interest are first identified, exact equations describing their time and space evolution are derived, and these equations are then approximated with respect to the symmetry of the background to obtain results at the desired order; and the coordinate-based approach of Bardeen [28], in which gauge-invariant variables are identified by combining the metric and matter perturbations, and equations are then found for them at the appropriate order of the calculation. In this article we carry out a detailed comparison of the two approaches up to second order, highlighting the advantages and disadvantages of each method, thus extending earlier work on the linear theory [29]. Our paper also extends the work of Ref. [22], in which the relation between the two formalisms on super-Hubble scales is investigated. In particular, we show that the degree of success of one formalism over the other depends on the problem being addressed. This is the first time a complete and transparent matching of tensor perturbations in the two formalisms at first and second order is presented. We also show, using an analytical argument, that the power spectrum of gravitational waves from second-order effects is much smaller than the first-order one on super-Hubble scales. This is in contrast to the fact that during the radiation era the generation of GW from primordial density fluctuations can be large enough to be detected in principle, though this requires the inflationary background of GW to be sufficiently small [23]. This paper is organized as follows. We begin by reviewing scalar field dynamics in Section II within the 1+3 covariant approach. In Section III, we formulate the problem within the covariant approach, followed by a reformulation in the coordinate approach in Section IV. A detailed comparison of the two formalisms is then presented in Section V. In Section VI, we study gravitational waves that are generated during the slow-roll period of inflation. In particular, we introduce a generalization of the $f_{\rm NL}$ parameter to take gravity waves into account, and we compute the three-point correlator involving one graviton and two scalars. Among all three-point functions involving scalar and tensor modes, this correlator and the one involving three scalars are the dominant ones [9]. Finally, we conclude in Section VII.

II. SCALAR FIELD DYNAMICS

Let us consider a minimally coupled scalar field with Lagrangian density¹
$$\mathcal{L}_\phi = -\tfrac{1}{2}\nabla_a\phi\,\nabla^a\phi - V(\phi), \qquad (2)$$
where $V(\phi)$ is a general (effective) potential expressing the self-interaction of the scalar field. The equation of motion for the field $\phi$ following from $\mathcal{L}_\phi$ is the Klein-Gordon equation
$$\nabla_a\nabla^a\phi - V'(\phi) = 0, \qquad (3)$$
where the prime indicates a derivative with respect to $\phi$. The energy-momentum tensor of $\phi$ is of the form
$$T_{ab} = \nabla_a\phi\,\nabla_b\phi - g_{ab}\left[\tfrac{1}{2}\nabla_c\phi\,\nabla^c\phi + V(\phi)\right]; \qquad (4)$$
provided $\nabla_a\phi \neq 0$, equation (3) follows from the conservation equation
$$\nabla^b T_{ab} = 0. \qquad (5)$$
We shall now assume that in the open region $U$ of spacetime that we consider, the momentum density $\nabla^a\phi$ is timelike:
$$\nabla_a\phi\,\nabla^a\phi < 0. \qquad (6)$$
This requirement implies two features: first, $\phi$ is not constant in $U$, and so $\{\phi = \text{const.}\}$ specifies well-defined surfaces in spacetime.

¹ We use the conventions of Ref. [30]. Units in which $\hbar = c = k_B = 1$ are used throughout this article; Latin indices $a, b, c,\ldots$ run from 0 to 3, whereas Latin indices $i, j, k,\ldots$ run from 1 to 3. The symbol $\nabla$ represents the usual covariant derivative and $\partial$ corresponds to partial differentiation. Finally, the Hilbert-Einstein action in the presence of matter is
$$S = \int \sqrt{-g}\left[\frac{R}{16\pi G} + \mathcal{L}_{\rm matter}\right] d^4x. \qquad (1)$$
When this is not true (i.e., when $\phi$ is constant in $U$), then by (4) we have $T_{ab} = -V(\phi)\,g_{ab}$ in $U$, with $V(\phi) = \text{const.}$ [the last being necessarily true due to the conservation law (5)], and we have an effective cosmological constant in $U$ rather than a dynamical scalar field.

A. Kinematical quantities

Our aim is to give a formal description of the scalar field in terms of fluid quantities; therefore, we assign a 4-velocity vector $u^a$ to the scalar field itself. This will allow us to define the dot derivative, i.e. the proper-time derivative along the flow lines: $\dot T^{a\cdots b}{}_{c\cdots d} \equiv u^e\nabla_e T^{a\cdots b}{}_{c\cdots d}$. Now, given the assumption (6), we can choose the 4-velocity field $u^a$ as the unique timelike vector with unit magnitude ($u^a u_a = -1$) parallel to the normals of the $\{\phi = \text{const.}\}$ surfaces,
$$u^a = -\frac{1}{\psi}\,\nabla^a\phi, \qquad (8)$$
where we have defined the field $\psi \equiv \dot\phi = (-\nabla_a\phi\,\nabla^a\phi)^{1/2}$ to denote the magnitude of the momentum density (simply the momentum from now on). The choice (8) defines $u^a$ as the unique timelike eigenvector of the energy-momentum tensor (4). The kinematical quantities associated with the "flow vector" $u^a$ can be obtained by a standard method [33,34]. We can define a projection tensor into the tangent 3-spaces orthogonal to the flow vector,
$$h_{ab} = g_{ab} + u_a u_b, \qquad (9)$$
and with this we decompose the tensor $\nabla_b u_a$ as
$$\nabla_b u_a = -u_b\,\dot u_a + \tfrac{1}{3}\Theta\, h_{ab} + \sigma_{ab} + \omega_{ab}, \qquad (10)$$
where $\tilde\nabla_a$ denotes the spatially totally projected covariant derivative operator orthogonal to $u^a$ (e.g., $\tilde\nabla_a f = h_a{}^b\nabla_b f$; see the Appendix of Ref. [35] for details), $\dot u_a$ is the acceleration ($\dot u_b u^b = 0$), and $\sigma_{ab}$ is the shear ($\sigma^a{}_a = \sigma_{ab}u^b = 0$). The expansion and acceleration are then given in terms of the scalar field by
$$\Theta = -\frac{\nabla_a\nabla^a\phi}{\psi} - \frac{\dot\psi}{\psi} = -\frac{1}{\psi}\left[V'(\phi) + \dot\psi\right], \qquad (11)$$
$$\dot u_a = -\tilde\nabla_a \ln\psi, \qquad (13)$$
where the last equality in Eq. (11) follows on using the Klein-Gordon equation (3), and the shear is the projected trace-free part of $\nabla_b u_a$. We can see from Eq. (13) that $\psi$ is an acceleration potential for the fluid flow [36]. Note also that the vorticity vanishes, $\omega_{ab} = 0$: an obvious result with the choice (8), so that $\tilde\nabla_a$ is the covariant derivative operator in the 3-spaces orthogonal to $u^a$, i.e. in the surfaces $\{\phi = \text{const.}\}$. As usual, it is useful to introduce a scale factor $a$ (which has dimensions of length) along each flow line by
$$\frac{\dot a}{a} \equiv \frac{1}{3}\Theta \equiv H, \qquad (15)$$
where $H$ is the usual Hubble parameter if the Universe is homogeneous and isotropic. Finally, it is important to stress that
$$\tilde\nabla_a\phi = 0, \qquad (16)$$
which follows from our choice of $u^a$ via equation (8), a result that will be important for the choice of gauge-invariant (GI) variables and for the perturbation equations.

B. Fluid description of a scalar field

It follows from our choice of the four-velocity (8) that we can represent a minimally coupled scalar field as a perfect fluid; the energy-momentum tensor (4) takes the usual form for perfect fluids,
$$T_{ab} = \mu\, u_a u_b + p\, h_{ab}, \qquad (17)$$
where the energy density $\mu$ and pressure $p$ of the scalar field "fluid" are given by
$$\mu = \tfrac{1}{2}\psi^2 + V(\phi), \qquad (18)$$
$$p = \tfrac{1}{2}\psi^2 - V(\phi). \qquad (19)$$
If the scalar field is not minimally coupled this simple representation is no longer valid, but it is still possible to write the energy-momentum tensor in an imperfect-fluid form [31]. Using the perfect-fluid energy-momentum tensor (17) in (5), one obtains the energy and momentum conservation equations
$$\dot\mu + \Theta\,(\mu + p) = 0, \qquad (20)$$
$$\psi^2\,\dot u_a + \tilde\nabla_a p = 0. \qquad (21)$$
If we now substitute $\mu$ and $p$ from Eqs. (18) and (19) into Eq. (20), we obtain the 1+3 form of the Klein-Gordon equation (3),
$$\ddot\phi + \Theta\,\dot\phi + V'(\phi) = 0, \qquad (22)$$
an exact ordinary differential equation for $\phi$ in any spacetime with the choice (8) for the four-velocity. With the same substitution, Eq. (21) becomes an identity for the acceleration potential $\psi$.
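Equations (18)-(22), together with the flat Friedmann constraint (28) introduced below, close the background dynamics once a potential is chosen. The following minimal numerical sketch (not from the paper) integrates the 1+3 Klein-Gordon equation (22) with $\Theta = 3H$ fixed by the Friedmann equation, for an illustrative quadratic potential in Planck units; the mass and initial field value are arbitrary choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

G = 1.0                      # Planck units
m = 1e-6                     # illustrative scalar field mass

def V(phi):  return 0.5 * m**2 * phi**2
def Vp(phi): return m**2 * phi

def rhs(t, y):
    phi, psi = y                              # psi = dphi/dt, the momentum of Sec. II
    mu = 0.5 * psi**2 + V(phi)                # energy density, Eq. (18)
    H = np.sqrt(8 * np.pi * G * mu / 3.0)     # flat Friedmann equation
    return [psi, -3.0 * H * psi - Vp(phi)]    # Klein-Gordon equation, Eq. (22)

sol = solve_ivp(rhs, (0.0, 2e7), [3.0, 0.0], max_step=1e4)  # phi_0 = 3 M_p
phi, psi = sol.y
w = (0.5 * psi**2 - V(phi)) / (0.5 * psi**2 + V(phi))       # Eqs. (18)-(19)
print(w[0], w[-1])   # w ~ -1 during slow roll; oscillates around 0 afterwards
```

The equation of state $w = p/\mu$ computed from Eqs. (18)-(19) stays close to $-1$ while the field rolls slowly and then oscillates once inflation ends, consistently with the fluid description above.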
It is convenient to relate $p$ and $\mu$ by the index $\gamma$ defined by
$$p = (\gamma - 1)\,\mu, \qquad \gamma = \frac{\mu + p}{\mu} = \frac{\psi^2}{\mu}. \qquad (23)$$
This index would be constant in the case of a simple one-component fluid, but in general it will vary with time in the case of a scalar field. Finally, it is standard to define a speed of sound as
$$c_s^2 \equiv \frac{\dot p}{\dot\mu}. \qquad (25)$$

C. Background equations

The previous equations assume nothing about the symmetry of the spacetime. We now specify it further and assume that it is close to a flat Friedmann-Lemaître (FL) spacetime, which we consider as our background spacetime. The homogeneity and isotropy assumptions imply that $\tilde\nabla_a f = 0$, where $f$ is any scalar quantity; in particular, all the spatial gradients introduced below vanish in the background. The background (zero-order) equations are given by [37]
$$H^2 = \frac{8\pi G}{3}\,\mu, \qquad (28)$$
$$\dot H = -4\pi G\,(\mu + p), \qquad (29)$$
where all variables are functions of cosmic time $t$ only. The study of linear perturbations of an FL background is relatively straightforward. Let us begin by defining the first-order gauge-invariant (FOGI) variables
$$X_a \equiv \tilde\nabla_a\mu, \qquad Z_a \equiv \tilde\nabla_a\Theta,$$
corresponding respectively to the spatial fluctuations in the energy density and expansion rate, together with the analogous gradient of the spatial curvature. These quantities are FOGI because they vanish exactly in the background FL spacetime [38,40]. It turns out that a more suitable quantity for describing density fluctuations is the comoving fractional gradient of the energy density,
$$D_a \equiv \frac{a}{\mu}\,X_a = \frac{a\,\tilde\nabla_a\mu}{\mu},$$
where the ratio $X_a/\mu$ allows one to evaluate the magnitude of the energy density perturbation relative to its background value, and the scale factor $a$ guarantees that it is dimensionless and comoving. These quantities exactly characterize the inhomogeneity of any fluid; however, we specifically want to characterize the inhomogeneity of the scalar field. This cannot be done using the spatial gradient $\tilde\nabla_a\phi$, because it identically vanishes in any spacetime by virtue of our choice of the 4-velocity field $u^a$. It follows that in our approach the inhomogeneities in the matter field are completely incorporated in the spatial variation of the momentum density $\tilde\nabla_a\psi$, so it makes sense to define the dimensionless gradient
$$\Psi_a \equiv \frac{a}{\psi}\,\tilde\nabla_a\psi, \qquad (33)$$
which is related to $D_a$ by
$$D_a = \gamma\,\Psi_a,$$
where we have used Eq. (18), and $\gamma$ is given by Eq. (23). Comparing Eq. (33) and Eq. (13), we see that $\Psi_a$ is proportional to the acceleration: it is a gauge-invariant measure of the spatial variation of proper time along the flow lines of $u^a$ between two surfaces $\phi = \text{const.}$ (see Ref. [33]). The set of linearized equations satisfied by the FOGI variables consists of a set of evolution equations together with a set of constraints. The curl operator is defined by
$$\text{curl}\,\psi_{ab} = (\text{curl}\,\psi)_{ab} = \varepsilon_{cd(a}\,\tilde\nabla^c\psi^d{}_{b)},$$
where $\varepsilon_{abc}$ is the completely antisymmetric tensor with respect to the spatial section, defined by $\varepsilon_{bcd} = \epsilon_{abcd}\,u^a$, $\epsilon_{abcd}$ being the volume antisymmetric tensor such that $\epsilon_{0123} = \sqrt{-g}$. The divergence of a rank-$n$ tensor is the rank-$(n-1)$ tensor defined by
$$(\text{div}\,\psi)_{a\cdots b} = \tilde\nabla^c\,\psi_{ca\cdots b}.$$
Because the background is homogeneous and isotropic, each FOGI vector may be uniquely split into a curl-free and a divergence-free part, usually referred to as scalar and vector parts respectively, which we write as
$$V_a = V^{\rm S}_a + V^{\rm V}_a, \qquad \text{curl}\,V^{\rm S}_a = 0, \quad \text{div}\,V^{\rm V} = 0.$$
Similarly, any tensor may be invariantly split into scalar, vector and tensor parts,
$$T_{ab} = T^{\rm S}_{ab} + T^{\rm V}_{ab} + T^{\rm T}_{ab}, \qquad \text{curl}\,T^{\rm S}_{ab} = 0, \quad \text{div}\,\text{div}\,T^{\rm V} = 0, \quad (\text{div}\,T^{\rm T})_a = 0.$$
It follows therefore that in the above equations we can separately equate scalar, vector and tensor parts and obtain equations that independently characterize the evolution of each type of perturbation. In the case of a scalar field the vorticity is exactly zero, so there is no vector contribution to the perturbations. Let us now concentrate on scalar perturbations at linear order.
It is clear from the above discussion that pure scalar modes are characterized by the vanishing of the magnetic part of the Weyl tensor, $H_{ab} = 0$, so the above set of equations reduces to a set of two coupled differential equations for $X_a$ and $Z_a$, together with a set of coupled evolution and constraint equations that determine the other variables.

B. Gravitational waves from density perturbations

The preceding discussion deals with first-order variables and their behavior at linear order. It is important to keep in mind that we were able to set $H_{ab} = 0$ only because pure scalar perturbations in the absence of vorticity imply that $\text{curl}\,\sigma_{ab} = 0$ at first order. The vanishing of the magnetic part then follows from equation (43). However, at second order $\text{curl}\,\sigma_{ab} \neq 0$. We denote the non-vanishing contribution at second order by [21]
$$\Sigma_{ab} \equiv \text{curl}\,\sigma_{ab}.$$
The new variable is second-order and gauge-invariant (SOGI), as it vanishes at all lower orders [38]. It should be noted that the new variable is just the magnetic part of the Weyl tensor subject to the conditions mentioned above, i.e. $\Sigma_{ab} = H_{ab}$. We are interested in the properties inherited by the new variable from the magnetic part of the Weyl tensor. In particular, it can be shown that the new variable is transverse and traceless at this order and is thus a description of gravitational waves. It should be stressed that in full generality there are tensorial modes even at first order. By assuming that there are none, we explore a particular subset in the space of solutions. From the "iterative resolution" point of view, this means that we constrain the equations in order to focus on second-order GW sourced by terms quadratic in scalar perturbations. In doing so, we artificially switch off GW perturbations at first order.

C. Propagation equation

The propagation of the new second-order variable now needs to be investigated using a covariant set of equations that are linearized to second order about FL. We make use of Eqs. (20), (21) and the evolution equations for the kinematical and Weyl variables, valid up to second order in magnitude, together with the corresponding constraints. Unlike at first order, where the splitting of tensors into their scalar, vector and tensor parts is always possible, at second order this can only be achieved for SOGI variables. We may isolate the tensorial part of the equations by decoupling $\Sigma_{ab}$: since it is divergence-free it is already a pure tensor mode, whereas $E_{ab}$ is not. The wave equation for the gravitational wave contribution can be found by first taking the time derivative of (57) and making appropriate substitutions using the evolution equations, keeping terms up to second order. The wave equation for $\Sigma_{ab}$, Eq. (59), then follows, with a source, Eq. (60), given by the cross-product of the electric Weyl curvature and its divergence (or acceleration). To obtain this, we have used identities that hold for a flat background spacetime, together with the appropriate commutation relation for derivatives. We have also used Eqs. (24) and (25) to eliminate $\dot\psi/\psi$ from the source term. It can also be shown that $S_{ab}$ is transverse, illustrating that Eq. (59) represents the gravitational wave contribution at second order. Note that this is a local description of gravitational waves, in contrast to the non-local extraction of tensor modes by projection in Fourier space. Since $\Sigma_{ab}$ contains exactly the correct number of degrees of freedom possible in GW, any other variable we may choose to describe GW must be related to it by quadrature, making this a suitable master variable.
The situation is analogous to the description of electromagnetic waves: should we use the vector potential, the electric field, or the magnetic field for their description? Mathematically it does not matter, of course: each variable obeys a wave equation, and the others are related to it by quadrature. Physically, however, it is the electric and magnetic fields which drive charged particles, through the Lorentz force equation, the electromagnetic analogue of the geodesic deviation equation. In order to express the gravitational wave equation in Fourier space, we define our normalised tensor harmonics as
$$Q_{ab} = \xi_{ab}\,\frac{e^{i\mathbf{q}\cdot\mathbf{x}}}{(2\pi)^{3/2}},$$
where $\xi_{ab}$ is the polarization tensor, and $Q_{ab}$ satisfies the (background) tensor Helmholtz equation $\tilde\nabla^2 Q_{ab} = -(q^2/a^2)\,Q_{ab}$. As $q_a$ is required to satisfy $q_a u^a = 0$ in the background, it can be identified with a 3-vector and will subsequently be written in bold when necessary. We denote harmonics of the opposite polarization with an overbar. Amplitudes of $\Sigma_{ab}$ may be extracted by integrating against these harmonics, with an analogous formula for the opposite parity; this implies that our original variable may be reconstructed from the amplitudes via Eq. (63). The same relations hold for any transverse tensor. Hence our wave equation in Fourier space follows, Eq. (64), with an identical equation for the opposite polarization. We have converted to conformal time $\eta$, where a prime denotes a derivative with respect to $\eta$, and we have defined the conformal Hubble parameter as $\mathcal H = a'/a$. The source term is composed of a cross-product of the electric part of the Weyl tensor and its divergence. At first order, the electric Weyl tensor is a pure scalar mode, and can therefore be expanded in terms of scalar harmonics. To define these, let $Q^{(s)} = e^{i\mathbf{q}\cdot\mathbf{x}}/(2\pi)^{3/2}$ be a solution of the Helmholtz equation $\tilde\nabla^2 Q^{(s)} = -(q^2/a^2)\,Q^{(s)}$. Beginning with this basis, it is possible to derive vectorial and (PSTF) tensorial harmonics by taking successive spatial derivatives,
$$Q^{(s)}_a = \tilde\nabla_a Q^{(s)}, \qquad Q^{(s)}_{ab} = \tilde\nabla_{\langle a}\tilde\nabla_{b\rangle} Q^{(s)}.$$
This symmetric tensor has the additional property $q^a q^b Q^{(s)}_{ab} = -(2q^4/3a^2)\,Q^{(s)}$. Using this representation we can express our source in Eq. (64) in terms of a convolution in Fourier space, by expanding the electric Weyl tensor as
$$E_{ab} = \int d^3\mathbf{q}\; E(\mathbf{q})\,Q^{(s)}_{ab}.$$
The right-hand side of Eq. (60), expressed in conformal time and accompanied by the appropriate Fourier decomposition of each term, then yields, upon using the normalization condition of the orthonormal basis, the source as a convolution of two first-order scalar amplitudes, with a similar expression for the other polarization. In principle we can now solve for the gravitational wave contribution $\Sigma_{ab}$ and calculate the power spectrum of gravitational waves today. For this, however, we need initial conditions for the electric Weyl tensor (or, alternatively, for $\Psi_a$).

IV. GRAVITATIONAL WAVES FROM DENSITY PERTURBATIONS: COORDINATE BASED APPROACH

In this formalism, we consider perturbations around an FL universe with Euclidean spatial sections and expand the metric as
$$ds^2 = a^2(\eta)\left[-(1+2A)\,d\eta^2 + 2B_i\,dx^i\,d\eta + \left(\gamma_{ij} + h_{ij}\right)dx^i\,dx^j\right],$$
where $\eta$ is the conformal time and $a$ the scale factor. We perform a scalar-vector-tensor decomposition as
$$B_i = D_i B + \bar B_i, \qquad h_{ij} = 2C\,\gamma_{ij} + 2D_iD_j E + 2D_{(i}\bar E_{j)} + 2\bar E_{ij},$$
where $D_i$ is the covariant derivative associated with the spatial metric $\gamma_{ij}$, $\bar B_i$ and $\bar E_i$ are transverse ($D^i\bar E_i = D^i\bar B_i = 0$), and $\bar E_{ij}$ is traceless and transverse ($\bar E^i{}_i = D^i\bar E_{ij} = 0$). Latin indices $i, j, k,\ldots$ are lowered by use of the spatial metric, e.g. $B_i = \gamma_{ij}B^j$. We fix the gauge and work in the Newtonian gauge, defined by $\bar B_i = E = B = 0$, so that $\Phi = A$ and $\Psi = -C$ are the two Bardeen potentials. As in the previous sections, we assume that the matter content is a scalar field $\phi$ that can be split into background and perturbation contributions: $\phi = \phi(\eta) + \delta\phi(\eta, \mathbf{x})$.
The gauge-invariant scalar field perturbation can be defined by
$$Q \equiv \delta\phi - \frac{\phi'}{\mathcal H}\,C,$$
where $\mathcal H \equiv a'/a \equiv aH$. We denote the field perturbation in Newtonian gauge by $\chi$, so that $Q = \chi + (\phi'/\mathcal H)\Psi$. Introducing the equation of state $w \equiv p/\mu$, Eq. (23) takes the form $\gamma = w + 1 = 2\varepsilon/3$. We thus have two expansions: one in the perturbations of the metric, and the other in the slow-roll parameter $\varepsilon$.

A. Scalar modes

Focusing on scalar modes at first order in the perturbations, it is convenient to introduce
$$v \equiv a\,Q, \qquad z \equiv \frac{a\,\phi'}{\mathcal H},$$
in terms of which the action (1), expanded to second order in the perturbations, is the action of a canonical scalar field with effective square mass $m_v^2 = -z''/z$. $v$ is the canonical variable that must be quantized [41]. It is decomposed as
$$\hat v(\mathbf{x},\eta) = \int \frac{d^3\mathbf{k}}{(2\pi)^{3/2}}\left[v_k(\eta)\,\hat a_{\mathbf{k}}\,e^{i\mathbf{k}\cdot\mathbf{x}} + v_k^*(\eta)\,\hat a^\dagger_{\mathbf{k}}\,e^{-i\mathbf{k}\cdot\mathbf{x}}\right],$$
and the annihilation and creation operators satisfy the commutation relation $[\hat a_{\mathbf{k}}, \hat a^\dagger_{\mathbf{k}'}] = \delta(\mathbf{k} - \mathbf{k}')$. We define the free vacuum state by the requirement $\hat a_{\mathbf{k}}|0\rangle = 0$ for all $\mathbf{k}$. From the Einstein equations, one can get the expression for the Bardeen potential (recalling that $\Psi = \Phi$) and for the curvature perturbation in comoving gauge, $\mathcal R = v/z$. Once the initial conditions are set, solving Eq. (79),
$$v_k'' + \left(k^2 - \frac{z''}{z}\right)v_k = 0, \qquad (79)$$
will give the evolution of $v_k(\eta)$ during inflation, from which $\Phi_k(\eta)$ and $\mathcal R_k(\eta)$ can be deduced using the previous expressions. Defining the power spectrum through $\langle\mathcal R_{\mathbf{k}}\,\mathcal R^*_{\mathbf{k}'}\rangle = P_{\mathcal R}(k)\,\delta(\mathbf{k} - \mathbf{k}')$, one easily finds that $P_{\mathcal R}(k) = |v_k/z|^2$. Note also that $z$ and $\varepsilon$ are related by the simple relation $z = a\,M_p\sqrt{\varepsilon/4\pi}$.

B. Gravitational waves at linear order

At first order, the tensor modes are gauge-invariant and their propagation equation is given by
$$\bar E_{ij}'' + 2\mathcal H\,\bar E_{ij}' - \Delta\bar E_{ij} = 0, \qquad (86)$$
since a minimally coupled scalar field has no anisotropic stress. Defining the reduced variable $\mu_{ij} \propto a\,\bar E_{ij}$ [Eq. (87)], the action (1), expanded to second order, takes the form (88). Developing $\bar E_{ij}$, and similarly $\mu_{ij}$, in Fourier space,
$$\bar E_{ij}(\mathbf{x},\eta) = \sum_{\lambda=+,\times}\int \frac{d^3\mathbf{k}}{(2\pi)^{3/2}}\;\bar E_\lambda(\mathbf{k},\eta)\,\varepsilon^\lambda_{ij}(\mathbf{k})\,e^{i\mathbf{k}\cdot\mathbf{x}}, \qquad (89)$$
where $\varepsilon^\lambda_{ij}$ is the polarization tensor, the action (88) takes the form of the action for two canonical scalar fields with effective square mass $m^2 = -a''/a$ (90). If one considers the basis $(e^1, e^2)$ of the two-dimensional space orthogonal to $\mathbf{k}$, then
$$\varepsilon^\lambda_{ij} = (e^1_i e^1_j - e^2_i e^2_j)\,\delta^\lambda_{+} + (e^1_i e^2_j + e^2_i e^1_j)\,\delta^\lambda_{\times}.$$
The $\mu_\lambda$ are the two degrees of freedom that must be quantized [41], and we expand them as in Eq. (91); $\mu_k$ is a solution of the Klein-Gordon equation
$$\mu_k'' + \left(k^2 - \frac{a''}{a}\right)\mu_k = 0,$$
where we have dropped the polarization subscript. The annihilation and creation operators satisfy the commutation relation $[\hat b_{\mathbf{k},\lambda}, \hat b^\dagger_{\mathbf{k}',\lambda'}] = \delta(\mathbf{k} - \mathbf{k}')\,\delta_{\lambda\lambda'}$, and we define the free vacuum state by the requirement $\hat b_{\mathbf{k},\lambda}|0\rangle = 0$ for all $\mathbf{k}$ and $\lambda$. Defining the tensor power spectrum analogously to the scalar one, one easily finds its expression in terms of $|\mu_k/a|^2$, where the two polarizations give the same contribution.

C. Gravitational waves from density perturbations

At second order, we split the tensor perturbation as $\bar E_{ij} = \bar E^{(1)}_{ij} + \tfrac{1}{2}\bar E^{(2)}_{ij}$. The evolution equation of $\bar E^{(2)}_{ij}$ is similar to Eq. (86), but it inherits a source term quadratic in the first-order perturbation variables, arising from the transverse trace-free (TT) part of the stress-energy tensor. It follows that the propagation equation is
$$\bar E^{(2)\,\prime\prime}_{ij} + 2\mathcal H\,\bar E^{(2)\,\prime}_{ij} - \Delta\bar E^{(2)}_{ij} = S^{\rm TT}_{ij}, \qquad (96)$$
where $S^{\rm TT}_{ij}$ is a TT tensor that is quadratic in the first-order perturbation variables. Working in Fourier space, the TT part of any tensor can easily be extracted by means of the projection operator
$$\perp_{ij}(\mathbf{k}) = \delta_{ij} - \hat k_i\hat k_j, \qquad (97)$$
where $\hat k_i = k_i/k$ (note that $\perp_{ij}(\mathbf{k})$ is not analytic in $\mathbf{k}$ and is a non-local operator), from which we get
$$P_{ij}{}^{ab} = \perp_{(i}{}^{a}\perp_{j)}{}^{b} - \tfrac{1}{2}\perp_{ij}\perp^{ab}. \qquad (98)$$
The source term is now obtained as the TT projection of the second-order Einstein tensor quadratic in the first-order variables and of the stress-energy tensor. The three terms respectively involve products of first-order scalar quantities, first-order scalar and tensor quantities, and first-order tensor quantities.
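The projector of Eqs. (97)-(98) is straightforward to implement numerically. Below is a minimal sketch (not from the paper) that applies it to an arbitrary symmetric Fourier-space tensor and checks the properties used in the text: the result is traceless and transverse, and TT tensors such as the polarization tensors are left invariant.

```python
import numpy as np

def tt_projector(khat):
    """P_ij^ab built from perp_ij = delta_ij - khat_i khat_j, Eqs. (97)-(98)."""
    perp = np.eye(3) - np.outer(khat, khat)
    return (0.5 * (np.einsum('ia,jb->ijab', perp, perp)
                   + np.einsum('ib,ja->ijab', perp, perp))
            - 0.5 * np.einsum('ij,ab->ijab', perp, perp))

khat = np.array([0.0, 0.0, 1.0])                      # propagation direction
e1, e2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
eps_plus  = np.outer(e1, e1) - np.outer(e2, e2)       # + polarization tensor
eps_cross = np.outer(e1, e2) + np.outer(e2, e1)       # x polarization tensor

P = tt_projector(khat)
S = np.random.rand(3, 3); S = 0.5 * (S + S.T)         # arbitrary symmetric source
S_TT = np.einsum('ijab,ab->ij', P, S)

print(np.trace(S_TT), S_TT @ khat)                    # ~0 and ~0: traceless, transverse
print(np.allclose(np.einsum('ijab,ab->ij', P, eps_plus), eps_plus),
      np.allclose(np.einsum('ijab,ab->ij', P, eps_cross), eps_cross))   # True True
```

The last line verifies numerically the relation $P_{ij}{}^{ab}\,\varepsilon^\lambda_{ab} = \varepsilon^\lambda_{ij}$ invoked in the next paragraph.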
The explicit form of the first term is given by Eq. (100). The first term was considered in Ref. [43], and the second term was shown to be the dominant contribution for the production of gravitational waves during preheating [42]. In Fourier space it is given by Eq. (101). $\mu^{(2)}_{ij}(\mathbf{x}, \eta)$ can be decomposed as in Eq. (89), using the same definition (87) at any order. The two polarizations evolve according to
$$\mu_\lambda'' + \left(k^2 - \frac{a''}{a}\right)\mu_\lambda = S_\lambda(\mathbf{k},\eta). \qquad (103)$$
Since the polarization tensor is a TT tensor, it is obvious that $P_{ij}{}^{ab}\,\varepsilon^{ij}_\lambda = \varepsilon^{ab}_\lambda$, so that the source for each polarization is obtained by contracting $S_{ij}$ with $\varepsilon^{ij}_\lambda$ (102). From equation (102), we deduce that the source term derives from an interaction Lagrangian that is cubic in the perturbations: it describes a two-scalar-graviton interaction. In full generality, the interaction term would also include, at lowest order, cubic terms involving three scalars, two gravitons and one scalar, and three gravitons. The latter two respectively correspond to second-order scalar modes generated from gravitational waves and to second-order tensor modes. As emphasized previously, we do not consider these interactions here.

V. COMPARISON OF THE TWO FORMALISMS

Before going further, it is instructive to compare the two formalisms and understand how they relate to each other. Note that we go beyond Ref. [35], where a comparison of the variables was made at linear order; here we investigate how the equations map to each other and extend the discussion to second order for the tensor sector. At the background level, the scale factors $a$ and expansion rates $H$ introduced in each formalism agree, which explains why we made use of the same notation. The perturbations of the metric around the FL spacetime have been split into a first-order and a second-order part according to $X = X^{(1)} + \tfrac{1}{2}X^{(2)}$ for any perturbation variable $X$. We make a similar decomposition for the quantities used in the 1+3 covariant formalism. As long as we are interested in the gravitational wave sector, we only need to consider the four-velocity of the perfect fluid describing the matter content of the universe. Its spatial components are decomposed as $V^i = D^i V + \bar V^i$, $\bar V^i$ being the vector degree of freedom and $V$ the scalar degree of freedom. As $V^\mu$ has only three independent degrees of freedom, since $u^\mu$ satisfies $u^\mu u_\mu = -1$, its temporal component is linked to the other perturbation variables. We assume that the fluid has no vorticity ($\bar V^i = 0$), as is the case for the scalar fluid we have in mind, and consequently we will also drop the vectorial perturbations ($\bar E_i = 0$).

A. Matching at linear order

At first order, the spatial components of the shear, acceleration and expansion can be expressed in terms of the metric and velocity perturbations, and the electric and magnetic parts of the Weyl tensor take the forms (111) and (112). Note that $\eta_{kli}$ is the completely antisymmetric tensor normalized such that $\eta_{123} = 1$, which differs from $\varepsilon_{abc}$. We deduce from the expression for the magnetic part that
$$H_{ij} = \widehat{\text{curl}}\,\bar E_{ij}, \qquad (113)$$
where we have used the simpler notation of writing $(\widehat{\text{curl}}\,\bar E)_{ij}$ as $\widehat{\text{curl}}\,\bar E_{ij}$. We also note that the derivative along $u^\mu$ of a tensor $T$ of rank $(n, m)$ vanishing in the background takes a simple form [Eq. (114)]. Again, recall that a dot refers to a derivative along $u^\mu$; at first order it reduces to a derivative with respect to cosmic time, but this does not generalize to second order. Now, Eq. (39) can be recast in this language: using the expressions (111)-(112) for the geometric quantities, it takes the form (116). Similarly, Eq. (59) can be recast in these variables, so that at first order it reduces to the propagation equation for $\widehat{\text{curl}}\,\bar E_{ij}$. Beyond this regime, $H_{ab}$ is no longer a description of the GW, i.e. it is no longer directly related to the TT part of the spacetime metric, and the matching is not valid anymore.
B. Matching at second order

At second order, the matching is much more intricate, mainly because the derivative along $u^\mu$ no longer coincides with the derivative with respect to cosmic time. Let us introduce a short-hand notation for contractions of any tensors $X_k$ and $Y_{lm}$; if $Y_{lm} = \partial_l\partial_m Z$ or $X_k = \partial_k W$, we also use the short-hand notations $Y = \partial\partial Z$ and $X = \partial W$. Among the terms quadratic in first-order perturbations, those involving a first-order tensorial perturbation can be omitted, as we are only interested in second-order effects sourced by scalar contributions. At second order, the geometric quantities of interest can be computed in the same way as before. From the resulting expression for the magnetic part, we remark that $H_{ij}$ has a term quadratic in first-order perturbations involving $V^{(1)}$ and $\Phi^{(1)}$. These terms arise from a difference between the two formalisms related to the fact that geometric quantities, such as $H_{ij}$, $E_{ij}$, etc., live on the physical spacetime, whereas in perturbation theory any perturbation variable at any order, such as $V^{(1)}$, $E^{(2)}_{ij}$, etc., is a field propagating on the background spacetime. It follows that the splitting into tensor, vector and scalar modes is different: in the covariant formalism, the splitting refers to the fluid on the physical spacetime, whereas in perturbation theory it refers to the comoving fluid of the background solution. Indeed, this difference only shows up at second order, as the magnetic Weyl tensor vanishes in the background. The one-to-one correspondence at first order between the equations of both formalisms disappears, as the second-order equations of the covariant formalism contain the dynamics of the first-order quantities. When keeping terms contributing at second order, Eq. (39) acquires an additional source term [Eq. (123)]. If first-order tensorial perturbations are neglected, then $H_{ab}$ vanishes at first order and Eq. (114) still holds when applied to $H_{ab}$; thus Eq. (123) can be recast as an evolution equation for $H_{ab}$ alone. Substituting the geometric quantities with their expressions at second order, and making use of Eq. (115) to handle the derivatives, Eq. (116) then reads at second order as Eq. (125). Using the momentum and constraint equation (41) at first order and the background equation $\mathcal H' - \mathcal H^2 = -4\pi G\,\mu(1 + w)\,a^2$, which we deduce from the Raychaudhuri equation and the Gauss-Codacci equation at first order, we can link it to Eq. (96). When applied to a scalar field, this is exactly the gravitational wave propagation equation (96) with the source term (100).

C. Discussion

In conclusion, we have matched both the perturbation variables and the equations at first and second order in the perturbations. This extends the work of Ref. [35], which considered the linear case; such a matching had not been previously investigated. Even though we restrict ourselves to the tensor sector, this comparison is instructive and illustrates the difference of approach between the two formalisms in a clearer way than at first order. In the Bardeen approach, all perturbation variables live on the unperturbed spacetime: at each order, we write exact equations for an approximate spacetime. In particular, this implies that the time derivatives are derivatives with respect to the cosmic time of the background spacetime. In the covariant approach, one derives an exact set of equations (assuming no perturbation to start with); these exact equations are then solved iteratively, starting from a background solution which assumes some symmetries. The time derivative is defined in terms of the flow vector as $u^a\nabla_a$.
Indeed, at first order for scalars, this derivative matches exactly the derivative with respect to the background cosmic time. At second order, this is no longer the case. First, the flow vector at first order does not coincide with its background value; this implies a (first-order) difference between the two time derivatives, which must be taken into account. Second, the geometric quantities, such as $H_{ij}$, $E_{ij}$, etc., "live" on the physical spacetime, whereas in perturbation theory any perturbation variable at any order, such as $V^{(1)}$, $E^{(2)}_{ij}$, etc., lives on the background spacetime. This explains why, e.g., $H^{(2)}_{ij}$ has a term quadratic in first-order perturbations involving $V^{(1)}$ and $\Phi^{(1)}$. The master variables and corresponding wave equations in both formalisms are also different in nature. In the metric approach, the wave equation with source is defined non-locally in Fourier space; in the covariant approach, we are able to derive a local tensorial wave equation which, because it is divergence-free, represents the gravitational wave contribution. Of course, we can make a non-local decomposition in Fourier space as required. Furthermore, on the one hand, the TT part of the metric in a particular gauge is a perturbative construction used to describe GW, and it tells us the shear of spatial lengths with respect to a homogeneous and isotropic background, referring implicitly to a hypothetical set of averaged observers. On the other hand, the covariant description using $H_{ab}$, which is built out of the Weyl tensor and the comoving observer's velocity, directly describes the dynamically free part of the gravitational field [39] (up to second order when the rotation is zero) as seen by the true comoving observers. This is part of the dynamical spacetime curvature which directly induces the motion of test particles through the geodesic deviation equation, and it accounts for effects due to the non-homogeneous comoving fluid velocity. There is one more difference between the two formalisms, concerning the initial conditions. In the Bardeen approach, as we recalled in Section IV, there is a natural way to set up the initial conditions on sub-Hubble scales by identifying canonical variables, both for the scalar and the tensor modes, and promoting them to the status of quantum operators. In the covariant formalism, such variables have not been constructed in full generality (see however Ref. [44] for a proposal). Consequently, this sets limitations on the covariant formalism, since it cannot account for both the evolution and the initial conditions at the same time.

A. Slow-roll inflation

In this section, we focus on the case of a single slow-rolling scalar field and introduce the slow-roll parameters $\varepsilon$ and $\delta$. Using the Friedmann equations (28)-(29), the first of these can be expressed in terms of the Hubble parameter as
$$\varepsilon = -\frac{\dot H}{H^2},$$
while $\delta$ involves the acceleration of the field. Interestingly, Eq. (28) takes the form
$$H^2\left(1 - \frac{\varepsilon}{3}\right) = \frac{8\pi G}{3}\,V(\phi),$$
which implies
$$\frac{\ddot a}{a} = H^2\,(1 - \varepsilon).$$
The equation of state and the sound speed of the equivalent scalar field then follow; in particular, $\gamma = 1 + w = 2\varepsilon/3$. The evolution equations for $\varepsilon$ and $\delta$ show that $\dot\varepsilon$ and $\dot\delta$ are of order 2 in the slow-roll parameters, so that at first order in the slow-roll parameters they can be considered constant. Using the definition of the conformal time and integrating by parts, one gets
$$a(\eta) = -\frac{1}{(1-\varepsilon)\,H\eta},$$
assuming $\varepsilon$ is constant, from which it follows that
$$\frac{z''}{z} = \frac{\nu^2 - 1/4}{\eta^2},$$
where $\eta$ varies between $-\infty$ and 0. The general solution of Eq. (79) is then
$$v_k(\eta) = \sqrt{-\eta}\left[c_1\,H^{(1)}_\nu(-k\eta) + c_2\,H^{(2)}_\nu(-k\eta)\right]$$
(up to an overall normalization fixed by the Wronskian condition), with $|c_1|^2 - |c_2|^2 = 1$, where $H^{(1)}_\nu$ and $H^{(2)}_\nu$ are Hankel functions of the first and second kind, and $\nu = 3/2 + 2\varepsilon - \delta$.
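To see how these mode functions arise in practice, here is a minimal numerical sketch (not from the paper): it integrates Eq. (79) at lowest order in slow roll, where $z''/z \simeq 2/\eta^2$, starting from the Bunch-Davies behaviour $v_k \to e^{-ik\eta}/\sqrt{2k}$ deep inside the Hubble radius, and compares the result with the analytic de Sitter mode; all numerical values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

k = 1.0                                      # comoving wavenumber (illustrative)

def rhs(eta, y):
    v, dv = y
    return [dv, -(k**2 - 2.0 / eta**2) * v]  # Eq. (79) with z''/z = 2/eta^2

eta0, eta1 = -1e3 / k, -1e-3 / k             # deep sub-Hubble to super-Hubble
v0 = np.exp(-1j * k * eta0) / np.sqrt(2 * k) # Bunch-Davies initial condition
sol = solve_ivp(rhs, (eta0, eta1), [v0, -1j * k * v0], rtol=1e-10, atol=1e-12)

# Analytic de Sitter (nu = 3/2) mode: |v_k| = sqrt(1 + 1/(k eta)^2) / sqrt(2k)
v_num = abs(sol.y[0, -1])
v_ana = np.sqrt(1 + 1 / (k * eta1)**2) / np.sqrt(2 * k)
print(v_num, v_ana)        # agree; |v_k/z| freezes on super-Hubble scales
```

This corresponds to the $c_2 = 0$, positive-frequency choice discussed next.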
Among this family of solutions, it is natural to choose the one with c_2 = 0, which contains only positive frequencies [41]. It follows that the solution with these initial conditions is fixed uniquely. On super-Hubble scales, |kη| ≪ 1, the mode function simplifies; using Eq. (133) to express η and Eq. (84) to replace z in expression (83), we find the resulting expression, where we have set M_p² = G⁻¹. At lowest order in the slow-roll parameters, it reduces to its standard slow-roll form. The evolution of the gravitational waves at linear order is dictated by the same equation but with ν_T = 3/2 + ε, so that, similarly as for the scalar mode, we obtain the corresponding tensor expression.

B. Gravitational waves at second order

The couplings between scalar and tensor modes at second order imply that the second-order variables can be expanded in terms of the products of first-order modes, with a similar expansion for E, where, e.g., R^(RE) stands for the second-order scalar modes induced by the coupling of first-order scalar and tensor modes, etc. The deviation from Gaussianity at the time η of the end of inflation can be characterized by a series of coefficients f^{a,bc}_NL defined, for example, through this expansion. These six coefficients appear in different combinations in the connected part of the 3-point correlation function of R and E. For instance, f^{R,RR}_NL is the standard f_NL parameter, and one can easily check the corresponding relations. The interaction Lagrangian then reduces to the same expression as obtained in Ref. [9]. In full generality, during inflation, we should use the "in-in" formalism to compute any correlation function of the interacting fields. As was shown explicitly in Ref. [45] for a self-interacting field, and more generally in Ref. [47], the quantum computation agrees with the classical one on super-Hubble scales at lowest order. Note however that both computations may differ (see Ref. [26] versus Ref. [9]) due to the fact that in the classical approach the change in vacuum is ignored. The difference does not affect the order of magnitude but the geometric k-dependence. In order to get an order of magnitude, we thus restrict our analysis here to the classical description. This description is also valid when considering the post-inflationary era. In the classical approach, we can solve Eq. (103) by means of a Green function. The two independent solutions of the homogeneous equation have Wronskian 4i/(πk), from which the Green function follows, and with it the expression of the second-order tensor perturbation.

If we want to estimate Eq. (143) in the squeezed limit k_1 ≪ k_2, k_3, the contribution coming from the term involving f^{E_λ,RR}_NL(k, q_1, q_2, η) can be computed by use of the super-Hubble limit of the Green function. This contribution will be proportional to (H^4/(M_p^4 ε)) k_2^{-8} k_{2i} k_{2j} ε^{ij}_λ δ^3(k_1 + k_2 + k_3), which is of the same order of magnitude as in Ref. [9], but does not have the same geometric dependence, as it goes like k_2^{-5} k_1^{-3} instead. We can now take the super-Hubble limit of this expression at lowest order in the slow-roll parameters. In order to do so, we make use of the super-Hubble limit of the Green function given above, and we perform the time integral from 1/k to η, keeping only the leading-order contribution, where, with the definitions y ≡ q/k and n ≡ k/k,

F(ε, δ) ≡ ∫ (y |n − y|)^{−3−4ε+2δ} y^6 dy ∫ (1 − µ²)² dµ    (153)

is a numerical factor.
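For reference, here are the generic variation-of-parameters formulas that the dropped displays presumably instantiate (our notation, not the paper's: u_1, u_2 are the two homogeneous solutions mentioned above, W = u_1 u_2' − u_1' u_2 is their Wronskian, equal to 4i/(πk) in the text, and S_k denotes the quadratic scalar source):

\[
G_{k}(\eta,\tilde\eta) =
 \frac{u_{1}(\tilde\eta)\,u_{2}(\eta)-u_{2}(\tilde\eta)\,u_{1}(\eta)}{W(\tilde\eta)}\,
 \Theta(\eta-\tilde\eta),
 \qquad
 E^{(2)}_{k}(\eta) = \int \mathrm{d}\tilde\eta\; G_{k}(\eta,\tilde\eta)\, S_{k}(\tilde\eta) .
\]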
In this approximation, the ratio between the second-order power spectrum and the first-order power spectrum, at leading order in the slow-roll parameters, can be computed explicitly. Indeed, there are ultraviolet and infrared divergences hidden in F(ε, δ). We expect the infrared divergence not to be relevant for observable quantities due to finite volume effects (see for instance Ref. [46]). The ultraviolet divergence, on the other hand, has to be carefully dimensionally regularized in the context of quantum field theory (see e.g. Ref. [47]).

VII. CONCLUSIONS

In this article we have investigated the generation of gravitational waves due to second-order effects during inflation. We have considered these effects both in the covariant perturbation formalism and in the more standard metric-based approach. The relation between the two formalisms at second order has been considered and we have discussed their relative advantages. This comparison leads to a better understanding of the differences in dynamics between the two formalisms. As an illustration, we have focused on GW generated by the coupling of first-order scalar modes. To characterize this coupling we have introduced and computed the parameter f^{E,RR}_NL. It enters in the expression of ⟨E_{k1} R_{k2} R_{k3}⟩_c, which was shown to be of order (H/M_p)^4/ε, as is ⟨R_{k1} R_{k2} R_{k3}⟩_c. On the other hand, the power spectrum of GW remains negligible. This shows that the contribution of ⟨E_{k1} R_{k2} R_{k3}⟩_c to the CMB bispectrum is important to include in order to constrain the deviation from Gaussianity, e.g. in order to test the consistency relation [48].
2018-10-27T18:15:59.238Z
2006-12-18T00:00:00.000
{ "year": 2006, "sha1": "7fa14e0cb9a951446f8b01e38563aeb14f5f0837", "oa_license": null, "oa_url": "http://arxiv.org/pdf/gr-qc/0612108", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "bedf910e0e1d927faf189f4d2062526115550f52", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
5608303
pes2o/s2orc
v3-fos-license
Cloak and Dagger: Alternative Immune Evasion and Modulation Strategies of Poxviruses

As all viruses rely on cellular factors throughout their replication cycle, to be successful they must evolve strategies to evade and/or manipulate the defence mechanisms employed by the host cell. In addition to their expression of a wide array of host modulatory factors, several recent studies have suggested that poxviruses may have evolved unique mechanisms to shunt or evade host detection. These potential mechanisms include mimicry of apoptotic bodies by mature virions (MVs), the use of viral sub-structures termed lateral bodies for the packaging and delivery of host modulators, and the formation of a second, "cloaked" form of infectious extracellular virus (EVs). Here we discuss these various strategies and how they may facilitate poxvirus immune evasion. Finally, we propose a model for the exploitation of the cellular exosome pathway for the formation of EVs.

Introduction

Through millions of years of coevolution, viruses have devised numerous strategies to invade, hijack, and turn host cells into virus assembly factories. In turn, human cells have evolved diverse mechanisms to detect and combat these invading pathogens. Many of these are employed at cellular locations that allow for detection and deployment of defence mechanisms before the virus gains a foothold and initiates its replication cycle. Given that all viruses must bind to the host cell surface, enter host cells through direct fusion or endocytosis, and ultimately transit the host cytoplasm [1], it is perhaps no surprise that cellular antiviral factors can be found on the cell surface, within endosomes, and in the host cytoplasm [2,3]. For instance, Toll-like receptors (TLRs), a family of pattern recognition receptors that detect repetitive or conserved pathogen structures, are exclusively located at the cell surface and in endosomal membranes [2,4]. As TLRs recognise a broad range of pathogen-associated molecular patterns (PAMPs), they are ideally located to detect and initiate an inflammatory response to invading viruses during cell entry [5]. Cells also express danger-associated molecular pattern receptors (DAMPRs). These receptors detect host proteins aberrantly located due to damage or infection. For example, it was recently shown that C3 complement proteins bound to the capsid of non-enveloped viruses are detected by a DAMPR after cytoplasmic delivery [6]. This results in intracellular virus neutralisation, triggers mitochondrial antiviral signalling, and initiates proinflammatory cytokine secretion. Cytoplasmic nucleic acid receptors detect viral genomes released into the cytoplasm. RIG-like receptors sense foreign RNA, while cytosolic DNA is detected by a range of sensors including the AIM2-like receptors and the recently identified cyclic GMP-AMP synthase [7][8][9][10][11]. In response to this multi-level defence, viruses have evolved different strategies to evade or disable these antiviral detection systems [12]. For most viruses this exclusively involves expression of immunomodulatory factors during the initial stages of infection that counteract the antiviral signalling triggered during entry. Exceptions to this are the large DNA viruses such as members of the Herpesviridae and Poxviridae families [13]. In addition to immediate early expression of a subset of potent immunomodulators, these viruses package immune modulating proteins during assembly.
Upon infection, these immune modulating proteins are delivered into the cytoplasm of the host cell to combat the intrinsic immune response before viral gene expression ensues [14][15][16][17]. Amongst the large DNA virus families, the poxviruses encode the greatest number of immune-antagonising viral proteins. They dedicate 30%-50% of their ~200 genes to encoding immunomodulating proteins and thus display the most diverse range of immune evasion strategies [18]. The poxvirus family includes variola virus, the causative agent of smallpox, monkeypox virus, and vaccinia virus (VACV) [19]. Best known for its use as the vaccine during the global eradication of smallpox [20], today VACV serves as the laboratory model poxvirus. Like all poxviruses, VACV is a large, enveloped double-stranded DNA virus, which replicates exclusively in the host cell cytoplasm [13]. Poxviruses are unique in that during replication they produce two forms of infectious particles: mature virions (MVs) and extracellular virions (EVs). Structurally, MVs consist of a biconcave core containing the viral genome, flanked by two proteinaceous "lateral bodies" (LBs). This is further surrounded by a single lipid-bilayer viral membrane [21,22]. EVs consist of an MV-like virion surrounded by an additional cell-derived membrane containing cellular proteins and seven virus proteins not found in MVs [23,24]. During infection, MVs and EVs serve different purposes; the MVs are released from cells after lysis and, due to their exceptional stability, are thought to be required for host-to-host transmission [13,25]. EVs, on the other hand, are released into body fluids where they are responsible for the dissemination of virions within tissues and between organs [26]. As such, the outer EV membrane is thought to help virions evade immune detection while in circulation. Thus, with a multitude of encoded immunomodulatory genes, the ability to package and deliver a subset of these directly into host cells, and two infectious virus forms that display different membranes containing divergent lipid and protein constituents, poxviruses pose a unique multifaceted challenge to the host immune system. As Smith et al. recently presented an extensive review of the poxvirus immunomodulatory proteins which are expressed during infection [18], here we will review and discuss the intrinsic means of immune evasion ("cloak") and immunomodulation ("dagger") exhibited by poxviruses. In particular, we discuss three strategies used by the Poxviridae: MV viral apoptotic mimicry and its potential role in immune suppression, the use of LBs as immune modulatory delivery packets, and membrane cloaking as a means to facilitate spread of EVs.

VACV Replication Cycle

The lifecycle of poxviruses, illustrated in Figure 1, is a complex multi-step process beginning with the binding and internalisation of virions into host cells. Virus internalisation is followed by uncoating of viral genomes and their subsequent amplification. In the final stages of the lifecycle, new virions are assembled and exit the cell to spread infection. Both VACV MVs and EVs enter host cells by inducing their own macropinocytic uptake [27][28][29][30][31].
This cellular endocytosis mechanism constitutes a transient, growth factor-induced, actin-dependent process that leads to the uptake of extracellular fluid into large cytoplasmic vacuoles [27,32,33]. Under non-pathological conditions macropinocytosis is used by cells for immune surveillance, clearance of apoptotic debris, and uptake of nutrients [34]. MV macropinocytic entry is triggered by phosphatidylserine (PS) in the viral membrane [27]. As the clearance of apoptotic debris is also dependent on PS exposure, it was proposed that VACV MVs use apoptotic mimicry as a means of inducing their cellular uptake. For EVs, the cellular binding factors, endocytosis receptors, and viral proteins that mediate these processes remain to be determined. It has, however, been demonstrated that the PS-bridging molecule Gas6 can boost EV infection in a PS-receptor-dependent fashion, suggesting that, like MV entry, EV entry may involve apoptotic mimicry [35,36]. Fusion of both MVs and EVs from macropinosomes depends on the entry fusion complex (EFC), a macromolecular assembly of 12 viral proteins located in the MV membrane [37]. In addition, fusion of both virus forms is low-pH dependent [15,[38][39][40], indicating a requirement for macropinosome maturation during viral entry [41]. For MVs, acidification is required for removal of the EFC regulatory proteins A25 and A26 and subsequent EFC fusion activity [42]. Interestingly, the MV-like particles that become EVs do not carry these negative regulators [23]. Instead, for EVs, macropinosome acidification serves to disrupt the outermost EV membrane, thereby exposing the underlying EFC to allow for fusion [23,28,39]. Upon fusion the lifecycles of VACV MVs and EVs converge: the viral cores, containing the viral DNA, viral transcription factors, and RNA polymerases pre-bound to early promoters [43], are released into the host cell cytoplasm. The cores immediately undergo dramatic morphological changes, switching from biconcave to oval; this process is termed activation. Core activation is marked by the uncoupling of the LBs, which remain behind with the fused viral membrane, the reduction of core proteins, the expansion of the core structure, and the initiation of early viral gene expression [15,44]. Activation does not require viral early gene expression, suggesting that the process is intrinsically built into newly assembled particles [15,45,46]. Approximately one half of the proteins encoded by early genes serve an immunomodulatory function, while the remainder are required for genome uncoating and subsequent genome replication. Genome replication occurs in cytoplasmic viral factories where MVs are also assembled. Assembly is a highly complex multi-step process involving the formation of several non-infectious virus intermediates (crescents/immature virions). Once formed, MVs either exit cells by lysis or become wrapped by two additional cell-derived membranes which direct their exocytosis and thereby the formation of EVs.
The ~80 early genes are transcribed within cytoplasmic cores; the transcripts are then extruded and translated on host ribosomes. This gives rise to a set of early viral proteins required for DNA replication, intermediate gene transcription, and a wide array of immune modulation activities [13]. Amongst these early gene products is the viral AAA+ ATPase D5, which facilitates genome uncoating in collaboration with host ubiquitin and proteasome activity [47,48]. Once released, the genome is replicated, giving rise to large cytoplasmic viral factories. Intermediate and late gene expression occurs only from replicated genomes, resulting in the production of structural proteins and enzymes required for virion morphogenesis and proteins destined to be packaged into the newly assembled virions. The formation of new infectious MVs requires no less than 40 virus-encoded structural proteins and 10 virus-encoded enzymes [49]. As recently reviewed by Liu et al., the process begins with the formation of single crescent-shaped membrane sheets and culminates with the formation of the characteristic brick-shaped MV, having gone through a handful of distinct assembly intermediates [49,50]. Newly assembled MVs leave the cell by lysis approximately 72 hours after initial infection. To overcome this relatively slow infection kinetic, a subset of the newly assembled MVs go on to become EVs, the second infectious virus form [23]. To speed the process of virus spread, the first round of EVs is released from cells as early as 6 hours after initial infection, and their spread is enhanced by a novel form of superinfection exclusion, termed superinfection repulsion [51]. For EV formation, the MVs acquire two additional membranes thought to be derived from virus-modified trans-Golgi or endosomal membranes [52][53][54][55][56]. Once formed, these triple-membrane-bound wrapped virions (WVs) are transported along microtubules to the cell surface, where they exit the cell by exocytosis [24,[57][58][59]. Fusion of the outermost WV membrane with the plasma membrane results in the formation of cell-surface-bound double-membrane EVs. A subset of EVs are released from the cell surface, while others induce the formation of actin tails which propel the EVs away from the producer cell to facilitate cell-to-cell spread [57]. The complexity of the poxvirus replication cycle offers the infected cell a plethora of opportunities to target and neutralise infection. To combat this, the poxviruses dedicate nearly half of their genomic capacity to encoding cell and immune modulatory factors. Yet between the time of VACV exit from one cell and the initiation of early gene expression in the next, host modulatory factors are not being synthesised, and thus the virions are potentially vulnerable to immune detection and destruction. To this end, poxvirus MVs and EVs have developed several protein-expression-independent strategies to combat and evade host immune responses during virus entry and spread.

Immune Suppression during MV Entry

For host cell entry, VACV MVs use an apoptotic mimicry strategy to trigger their macropinocytic uptake [27,60]. For this, the virus mimics an apoptotic cell or body by concentrating PS within its membrane in order to facilitate infection. Interestingly, apoptotic clearance is intimately linked with a dampening of inflammatory responses [61][62][63].
Engagement of the PS-bridging molecules Gas6 or Protein S with the PS receptors Tyro3, Axl, or Mer (TAM receptors) has been shown to initiate enhanced transcription of the TLR and cytokine suppressors SOCS1 and SOCS3 [64]. While initially hypothesised as a potential viral immune evasion strategy in 2003 [65], only recently has the potential of viral apoptotic mimicry to serve in immune modulation come to light. A recent study by Bhattacharyya et al. showed that PS-containing enveloped viruses complexed with PS-bridging molecules act as "super agonists" that activate TAM receptors to disable host immune responses [66]. For VACV MVs, envelope PS serves to trigger the signalling cascade (Rac1/Pak1/PI3K/PKC) needed for macropinocytosis [27,67,68]. While the receptors for both MVs and EVs remain elusive, the PS receptor Axl has been implicated in MV entry. For MVs, ligand-based receptor capture technology showed that VACV MVs on the cell surface bound a subset of six receptors, including Axl. Subsequent RNAi-mediated depletion of Axl was shown to reduce infection [69]. Interestingly, it has been suggested that VACV EVs may also use apoptotic mimicry for entry. Although whether EVs display PS on their outer envelope has not been investigated, both the PS-bridging molecule Gas6 and Axl overexpression were found to enhance infection [36]. Although no direct link between VACV apoptotic mimicry and immune modulation has been established, in vivo VACV infections result in the induction of anti-inflammatory cytokines including TGF-β and IL-10, prevent macrophage infiltration, and inhibit T cell maturation [70,71]. These processes are identical to those triggered during apoptotic cell clearance to dampen unwanted inflammatory responses. While this early immune suppression by VACV was proposed to be connected to unchecked replication, it is possible that it is instead due to engagement of PS receptors during the entry process.

Post-Entry VACV Immunomodulation

Upon their cytoplasmic arrival, viruses encounter a new subset of host defence mechanisms in the form of innate immune sensors [2,72]. These include factors that serve to detect and destroy the incoming viral capsids and genomes [2,3], as well as signalling proteins (PAMP receptors and TLRs) that may have been triggered during virus binding or endocytosis [73]. To overcome these innate defence mechanisms, poxviruses bring their own subset of intrinsic immune modulatory proteins. These factors are packaged into the virus during assembly and reside in the two LBs found between the viral core and membrane. These enigmatic structures were first visualised by electron microscopy (EM) in 1956 [74]. As early as the 1960s, EM studies showed that LBs detach from VACV cores during the membrane fusion step of virus entry [44]. Biochemistry-based analysis of VACV MVs in the 1980s indicated that LBs were proteinaceous and that they were structurally distinct from both the viral core and membrane [75]. A function of poxvirus LBs was recently elucidated through investigation of VACV core activation. Using a variety of biochemical and imaging techniques, Schmidt and Bleck et al. demonstrated that one function of LBs is to serve as viral immunomodulatory delivery packets [15]. They identified three VACV proteins that reside in LBs: the phosphoprotein F17, the dual-specificity phosphatase H1 and the viral oxidoreductase G4 [15]. F17 is the third most abundant protein packaged into virus particles and accounts for approximately 69% of the LB proteinaceous mass [76].
While highly disulphide-linked within virions, deposition of LBs into the reducing environment of the cytoplasm results in reduction of F17 and its subsequent proteasome-dependent degradation. These findings led to the suggestion that F17 serves as the LB structural protein. In support of this, proteasomal degradation of F17 was found to be required for release of the LB-resident protein, the H1 phosphatase, and its subsequent immunomodulatory activity [15]. To date, the viral phosphatase H1 is the only LB component with a defined role in immunomodulation. In response to viral infection, interferon-γ (IFNγ) induces the phosphorylation of the transcription factor STAT1, leading to its homodimerisation, nuclear translocation and subsequent induction of antiviral gene transcription [77]. LB-mediated delivery of H1 counteracts this antiviral response by dephosphorylating STAT1 to prevent its nuclear translocation and thereby block IFNγ-induced immune signalling [15,78] (Figure 2).

Figure 2. VACV LBs as Immunomodulatory Delivery Packets. After internalisation via macropinocytosis, VACV particles undergo fusion with the limiting membrane of the macropinosome, releasing the viral core into the cytoplasm. The released viral cores are "activated" as indicated by morphological changes and the initiation of early gene expression from within. Upon fusion, the LBs detach from the core and remain associated with the viral membrane. Once exposed to the cytoplasm, LBs are rapidly disassembled, with the major LB structural protein, F17, undergoing proteasome-dependent degradation. Disassembly of the LB appears to facilitate release of other LB proteins and, in the case of the viral dual-specificity phosphatase H1, is required for their action. Release of H1 from LBs serves to shunt cellular antiviral transcription prior to the expression of early viral genes.
To do this, H1 dephosphorylates phospho-STAT1, preventing its homodimerisation and nuclear translocation. To date only three LB components, F17, H1, and the viral disulfide oxidoreductase G4, have been identified. STAT1: Signal Transducer And Activator Of Transcription 1; pSTAT1: phosphorylated Signal Transducer And Activator Of Transcription 1; P: Phosphorylation.

Currently, no immune modulatory roles for G4 or F17 have been identified. F17 is packaged at 27,000 copies per virion and is known to carry two proline-directed phosphorylation sites that can be phosphorylated by ERK1/JNK1/cdk1/cyclin B in vitro [79,80]. While mutation of these sites does not impact the assembly of virions, virions that package a mutant form of F17 lacking these phosphorylation sites display defects in early viral gene expression [80]. Interestingly, post-entry activation of mitogen-activated protein kinase (MAPK) signalling has been reported to be required for VACV early gene expression and genome replication [81]. Given the vast amount of F17 delivered by each virus during entry, it is tempting to speculate that these F17 phospho-sites may play an important role in modulating the cellular immune response initiated through MAPK signalling pathways [80]. The three identified LB proteins are expressed during the late stages of the viral lifecycle and packaged into assembling virions [49]. Together they account for ~70% of the LB mass, with H1 and G4 each contributing around 1% [15]. In addition to their LB residence, each is known to play an active role in the viral life cycle. Both F17 and G4 are essential for viral morphogenesis [80,82], and H1 for assuring the transcriptional competence of newly assembled virions [83]. As testament to the importance of LBs, all poxviruses identified to date carry them [15]. Thus, it will be of major interest to determine whether the factors that make up the remaining LB mass also play multiple roles during the virus lifecycle, perhaps facilitating viral replication or assembly in addition to modulating host immune defences.

Overview: From MV to EV

The production of a double-membrane-bound second infectious virus form, EVs, is entirely unique to the Poxviridae family. As the outermost EV membrane is unstable, these particles are not thought to be particularly effective for transmission between hosts [39,84,85]. Instead, evidence suggests that the virus evolved this strategy as a way to cloak MVs during the spread of virus within and between host tissues. The formation of VACV EVs is a highly orchestrated, multi-step process involving the intracellular virion transport, membrane wrapping, and exocytosis events outlined in Figure 3. The MVs destined to become EVs are actively transported along microtubules away from the viral factories towards the microtubule organising centre, the site of wrapping [86,87]. Of note, the MVs that become EVs do not carry the viral fusion regulatory proteins A25 and A26 [23], suggesting that the wrapping of an individual MV is pre-determined during morphogenesis. During wrapping, MVs are enveloped by a double cell-derived membrane to become WVs containing three membranes. These additional membranes contain a set of viral proteins not found in MVs: A33 [88], A34 [89], A36 [90], A56 [91], F12 [92], F13 [93], B5 [94], E2 [95] and K2 [96,97]. These proteins are involved in MV wrapping (B5 and F13), WV transport (A36, E2, and F12), actin tail formation (A33, A34, and A36), and EV superinfection exclusion (A56 and K2).
Once transported to the cell periphery, the outer WV membrane fuses with the plasma membrane, resulting in the release of double-membrane EVs. The majority of EVs remain cell-associated; however, a subset initiates the formation of actin tails, which drive the EVs away from the producer cell, while others entirely detach from the cell surface to mediate long-distance dissemination [57].

WV Formation

How MVs are wrapped by two additional membranes to become WVs is not fully understood. Early attempts at identifying the cellular source of the double membrane that envelops MVs established that brefeldin A abrogates the production of EVs without impacting MV production. As brefeldin A inhibits the formation of COPI vesicles, thereby resulting in the collapse of the Golgi into the endoplasmic reticulum, it was concluded that the Golgi or a post-Golgi compartment was involved in MV wrapping [98]. EM studies support the involvement of the trans-Golgi network (TGN), as WV membranes contain glycoprotein and glycolipid sugars which are only added in the late TGN [52]. When EV proteins are individually expressed, A56, B5 and F13 are found within TGN membranes, and it has also been reported that VACV infection enhances membrane trafficking between endosomes and the Golgi compartment [52,53,55,99,100]. In addition, the phospholipid composition of the WV membranes is similar to that of the TGN [54]. In support of a role for the Golgi or a post-Golgi compartment in wrapping, Rab1a, a protein essential for structural maintenance of endoplasmic reticulum-to-Golgi transport, was shown to be required for MV wrapping, although no direct interaction of Rab1a with VACV was defined [101]. However, evidence for the involvement of endosomes in WV formation also exists. Using EM in conjunction with fluid-phase tracers, it was demonstrated that the MV wrapping membranes were likely derived from early endosomes (EEs) [55,56]. Furthermore, interference with retrieval of EV proteins from the plasma membrane via clathrin-mediated endocytosis results in a quantitative reduction in EV yield and delayed virus spread, although no qualitative difference in WV formation was reported [102]. To date, only three viral proteins have been shown to be required for the formation of WVs: the MV-associated protein A27 and the two EV-specific proteins B5 and F13. Deletion of any one of these from VACV severely inhibits WV formation without impacting the formation of MVs [87,94,[103][104][105]. While the A27 protein appears to be important for transport of MVs to the site of wrapping, and B5 for the wrapping process itself, little more information regarding their role in this process is available. On the other hand, several important features of F13, critical for its function in wrapping, have been elucidated [104]. F13 is a non-glycosylated protein which associates with both of the WV membranes through palmitoylation of cysteines 185 and 186 [53,106]. It is located on the cytosolic side of the outermost WV membrane and on the MV-facing side of the inner WV membrane [107]. F13 carries a putative phospholipase D domain (HKD) [108] that is required for its wrapping activity [108]. Interestingly, F13 has been reported to have broad-spectrum lipase activity, which is thought to mediate Golgi vesicle budding and formation of late endosomes (LEs) containing the various WV proteins [109,110]. In support of this, expression of F13 is required to drive localisation of the other WV proteins, B5 and A36, to LEs [99,100].
In the absence of F13, or upon mutation of its phospholipase D domain, these proteins remain in the TGN [99,100]. As no direct interaction between F13 and these proteins has been identified, their LE relocalisation is likely driven by F13's Golgi budding activity. Importantly, expression of phospholipase D does not rescue EV formation in the absence of F13, implying that F13 has additional roles in MV wrapping beyond driving vesicle budding [99]. In addition to the phospholipase D domain, F13 contains a conserved tyrosine-tryptophan motif that has been shown to be required for interaction with tail-interacting protein of 47 kDa (TIP47) [111]. Mutation of the F13 tyrosine-tryptophan motif results in loss of interaction with TIP47 and abrogation of plaque formation [111]. This late-endosome-derived transport vesicle effector protein interacts with Rab9, a small Ras GTPase, which is also enriched in LEs [112]. Together these proteins mediate receptor recycling from LEs to the TGN [112,113]. Interestingly, Rab9/TIP47 function has also been shown to be important for human immunodeficiency virus (HIV), Ebola, Marburg, measles, and hepatitis C virus replication and release, suggesting that the cellular trafficking pathway controlled by these proteins is commonly exploited by enveloped viruses [114][115][116]. Finally, F13 contains a viral late assembly domain (L domain) [117]. These domains, consisting of four-residue motifs, have been identified within many different enveloped virus proteins, and are often important for virus assembly and egress (recently reviewed in [118]). The L domain motif of F13, YPPL, is conserved throughout all orthopoxviruses, and the variant YXXL is conserved throughout the Poxviridae family [117]. This high level of conservation indicates the importance this domain plays in viral replication. Mutation of the conserved Y and L within this motif results in a virus with a small-plaque phenotype, indicative of a defect in virus spread [117]. Interestingly, all viral late domains identified to date interact with members of the endosomal sorting complex required for transport (ESCRT) or one of its associated proteins [119]. ESCRT is a network of cytoplasmic protein complexes (ESCRT-0, ESCRT-I, ESCRT-II, ESCRT-III, and the Vps4 complex) required for sorting and degradation of ubiquitinated LE membrane proteins. Briefly, ESCRT-0 recognises ubiquitinated cargo proteins and sequesters them into distinct regions of the LE membrane. Then ESCRT-I/ESCRT-II drive membrane deformation to form buds directed into the lumen of the LE. ESCRT-III is then recruited by ESCRT-I, via ESCRT-II or the accessory protein Alix [120]. Upon its arrival, ESCRT-III drives invagination and subsequent membrane fission either on its own or through interaction with the AAA+ ATPase complex, Vps4. The vesicles formed by this process are released into the lumen of the late endosome, which then becomes known as a multi-vesicular body (MVB). Finally, MVB fusion with lysosomes leads to degradation of the intraluminal vesicles and their associated cargo [121]. ESCRT proteins are also required for numerous other cell trafficking events, such as exosome formation [119]. Exosomes are a type of extracellular vesicle which, when released from cells, carry signalling proteins, RNA and lipids to neighbouring cells [121]. Exosomes are formed when the limiting membrane of an MVB fuses with the plasma membrane, leading to the release of the intraluminal vesicles into the extracellular space [121].
Unlike canonical intraluminal vesicle formation, sorting of cargo proteins into exosomes does not always depend on ubiquitination of the cargo. It can be driven by direct interaction of the cargo protein with a member of ESCRT or with one of its associated proteins such as Alix [120]. For example, the cytoplasmic protein syntenin interacts directly with Alix to facilitate its packaging into intraluminal vesicles and its eventual exosome-mediated release [122,123].

Is EV Formation an Exosome-Like Process?

Several viruses have also been shown to hijack exosome formation to mediate their own envelopment and release from host cells. This process was first identified and is best characterised for HIV (reviewed in [119]). Several lines of evidence suggest that VACV may also use an exosome-like pathway to facilitate WV formation (illustrated in Figure 4). As described above, F13 carries a YXXL late domain motif, which, when present in viral proteins, is known to interact with the ESCRT accessory protein Alix [117,124,125]. For VACV, depletion of Alix as well as the ESCRT-I component TSG101 has been shown to inhibit EV production [117]. Although no direct interaction between F13 and TSG101 or Alix has been demonstrated, TSG101 is known to interact with Alix [120]. Given that exosome formation via syndecan-syntenin-Alix is known to depend on TSG101 [122], collectively these studies suggest that VACV wrapping may proceed by a similar mechanism. For recognition as cargo, MVs have been shown to carry membrane-associated lipid-modified ubiquitin [126]. As ESCRT-0 initiates exosome formation through recognition of ubiquitinated proteins in LE membranes, perhaps F13 acts as an ESCRT-0 mimic, binding the ubiquitinated MV membrane, recruiting Alix and targeting the MV for exosome-like wrapping. Alternatively, as A27 is the only MV membrane protein essential for EV formation, in addition to transporting MVs to the site of wrapping, perhaps A27 also targets MVs for wrapping through direct interaction with F13 or with a cellular factor required for this process [86,103]. Such a model of MV wrapping would dictate that the wrapping membranes are derived from LEs. This is supported by the confocal studies suggesting that F13 mediates transport of B5 and A36 from the TGN to LEs [99,100], and that, in the absence of overexpression, no Golgi-derived proteins are found in EV membranes [127]. Furthermore, the exocytotic release of WVs is highly reminiscent of exosome release mediated by fusion of the outermost MVB membrane with the plasma membrane. Like the fused MVB membrane, the deposited outermost WV membrane is recycled via endocytosis [110,128]. Additional MVs could then be wrapped by EEs now containing the proteins required for WV formation, F13 and B5 [128]. TIP47 interaction with F13 in LEs then mediates recycling of the proteins to the TGN and ensures that B5 and F13 are not trafficked to the lysosome for degradation [111].

Figure 4. Both processes proceed through four major steps: cargo capture and membrane deformation, intraluminal budding, exocytosis and finally fusion with the plasma membrane to release the membrane-bound cargo. Canonical exosome formation (left) is regulated by the ESCRTs. ESCRT-0 acts to recognise membrane-bound ubiquitinated cargo proteins and direct them into distinct late endosome (LE) membrane regions. ESCRT-I/ESCRT-II drive membrane deformation.
After recruitment of ESCRT-III via ESCRT-II or the accessory protein Alix, ESCRT-I/ESCRT-II depart, and ESCRT-III drives invagination and subsequent membrane fission with the assistance of the AAA+ ATPase complex, Vps4. The newly formed multivesicular body (MVB) is transported to the cell surface on microtubules, and the intralumenal vesicles are released from the cell when the limiting membrane of the MVB fuses with the plasma membrane, thereby forming exosomes. Based on the evidence described in the text, we propose a model of VACV wrapped virion (WV) formation akin to exosome formation (right). As the EV protein F13 is essential for wrapping, contains a late domain, is present in LEs during infection, and interacts with late endosomal factors, we suggest that F13 acts as an ESCRT-0 mimic that serves to recognise mature virions (MVs) as cargo for wrapping. While it is unknown what F13 recognises on the MV, both A27, an MV membrane protein required for EV formation, and ubiquitin on the VACV membrane could serve as F13 recognition targets. As an ESCRT-0 mimic, F13 could also serve to recruit ESCRT-I/II and/or ESCRT-III via the accessory protein Alix. This would initiate wrapping, a process topologically analogous to intralumenal budding during MVB formation. In support of this, both the ESCRT-I component, TSG101, and the accessory protein Alix are required for EV formation. To complete WV formation, the Vps4 complex could be recruited to facilitate the sealing of the protective EV membrane. Like exosome release, fully formed WVs require microtubules for transport to the plasma membrane where they fuse, releasing the membrane-bound MV cargo, thus forming the double-membrane EV.

Immune Evasion Role of the EV Membrane

Poxviruses are the only viruses that make two infectious virus forms. That all poxviruses make EVs and dedicate nine genes to EV formation highlights their importance in the virus lifecycle. The formation of two infectious forms is a clever tactic adopted by VACV. As MVs and EVs display a unique set of membrane proteins, VACV forces the host immune system to generate a response to two immunologically distinct invaders. VACV EVs are specifically designed for the purpose of spread. With such a specialised role during infection, it is not surprising that the EV membrane provides a number of advantages that help the virus evade immune detection when in the extracellular environment. Antibodies play a critical protective role against poxvirus infection in humans and primates [129,130]. In vivo, EVs are the major form of virus found in circulation; thus their membrane proteins would be predicted to be major targets for protective antibodies generated by infected hosts. Consistent with this notion, EV-specific antibodies protect mice and rabbits against lethal challenge better than MV-specific antibodies [131,132]. In humans, poxvirus immunisation elicits neutralising antibodies targeted to several MV membrane proteins including A27, L1, H3, and D8, but only to B5 on EVs [133,134]. This would suggest that the outer EV membrane acts as a cloak that hides the highly antigenic MV membrane proteins from exposure to the immune system while the virus is in circulation. In addition, EVs appear to protect their own surface proteins from stimulating humoral immunity. How they achieve this is currently unknown. It seems significant that all membrane proteins displayed on the EV surface are glycosylated, while there are no glycosylated membrane proteins packaged into MVs [135,136].
Interestingly, glycosylation of the surface proteins of several viruses, including HIV, hepatitis C virus and gammaherpesviruses, has been shown to shield them from neutralising antibodies [137][138][139]. Although the anti-B5 and the non-neutralising anti-A33 antibodies raised by the human immune response are independent of their glycosylation state [133,134], perhaps EVs employ a glyco-shielding strategy in an attempt to hide their other outer membrane proteins from the humoral immune response. Several studies to elucidate how antibodies directed against EV proteins mediate protection have led to the conclusion that complement activity is very important for their protective capacity [140,141]. Both anti-A33 and anti-B5 antibodies combat VACV in vitro via complement-mediated virolysis. At high concentrations, anti-B5 can also participate in complement-dependent virus opsonisation or trigger complement-mediated lysis of infected host cells [141]. These specific activities have been shown to be important for anti-B5-mediated protection in vivo [140,141]. To combat these complement-mediated immune responses, VACV has developed a couple of divergent strategies. When deposited on the host cell surface, the EV membrane protein A56 has been shown to bind a virally encoded complement control protein, C3. In order to block complement-mediated host cell lysis activity, VACV C3 binds the host complement proteins C3a and C3b, thereby abrogating their activity [142]. Although not tested, it is possible that A56 located in the EV membrane administers a similar immune evasion strategy through binding of VACV C3 to EVs to prevent their complement-mediated destruction. The virus also hijacks the host cell proteins CD46, CD55, CD59, CD71, CD81, and major histocompatibility complex class I [84]. Both CD55 and CD59 are complement control proteins that have been shown to inhibit the complement-mediated immune response against VACV, both in the presence and absence of EV-specific antibodies [84,140]. Thus the EV membrane helps the extracellular virions evade immune detection while in circulation by cloaking the underlying MV, by displaying very few neutralising antibody targets, and through incorporation of host and potentially virus-encoded complement control proteins [84,143].

Perspectives

Likely owing to their large size, exclusively cytoplasmic replication, and large coding capacity, poxviruses have evolved several unique immune evasion strategies that cover the whole of the virus lifecycle from entry to spread. While apoptotic mimicry has been linked to VACV entry, it is an attractive possibility that apoptotic mimicry may also facilitate engagement of PS receptors to dampen the host immune response. This strategy would provide VACV with the possibility to modulate cellular immunity prior to entering cells and without the need to encode and package additional viral proteins. Furthermore, the broad cell-type and tissue tropism of VACV may be attributable to the existence of multiple PS receptors and the ability of both professional and non-professional phagocytes to clear apoptotic debris [144,145]. A detailed analysis of the host signalling pathways activated by VACV will be important for understanding the multifaceted role of PS receptors in binding, endocytosis, infection, and innate immune suppression during infection of relevant cell types and in vivo. Perhaps most importantly, the therapeutic potential of targeting viral PS to prevent poxvirus infection should be investigated [146].
In addition to expressing a large subset of immunomodulatory proteins [18], poxviruses uniquely carry LBs that allow for the delivery of potential immunomodulatory factors prior to gene expression. The advantages of such a strategy range from shunting of antiviral innate immune responses to establishing a favourable cytoplasmic environment for DNA replication prior to genome release. Interestingly, another large DNA virus, herpes simplex virus 1 (HSV1), also carries a proteinaceous layer between its capsid and envelope, termed the tegument [14]. Akin to poxvirus LBs, upon entry some HSV1 tegument proteins are shed into the cytoplasm in order to modulate host cell activities [14]. The tegument protein pUL13, for example, has been shown to inhibit the type I IFN response, while pUL41 and pUL34.5 downregulate expression of major histocompatibility complex class II [16,17]. To date only three LB components have been identified [15]. By analogy to the HSV1 tegument, it is tempting to hypothesise that the remaining mass of poxvirus LBs accounts for an artillery of virus-encoded immune modulatory enzymes that serve to shut down very early cellular immune responses to infection. Identification and characterisation of the remaining LB constituents deserve further work. Such studies may serve to confirm an immune-modulating function of LBs and identify the early immune pathways that sufficiently threaten invading Poxviridae such that these viruses have evolved to defuse them. Poxviruses have, at least in part, evolved EVs to allow the virus to spread faster than it replicates, via superinfection repulsion [51]. As EVs are the first form of virus released into the extracellular environment after initial infection, it stands to reason that they are provided with an additional cloak to protect them from host immune detection and destruction. EVs achieve this by masking the underlying MV, by potentially glyco-shielding EV proteins from antibody recognition, and by packaging viral and host proteins to block complement-mediated destruction. While much is known about the individual viral proteins required for EV formation [57,135], several interesting aspects of this process await further investigation. These include the microtubule motors that transport MVs to the site of wrapping, the viral and cell factors involved in WV fusion at the plasma membrane, and, as discussed in this review, the cellular membrane source, mechanism, and cell factors that facilitate WV formation. Collectively, these unique immune evasion strategies are likely to provide researchers the opportunity to define novel immunomodulatory functions of poxviruses and, in turn, the possibility of uncovering previously undefined cellular innate and intrinsic immune responses to viral infection. Furthermore, in-depth understanding of how poxviruses modulate the immune system is likely to lead to better antiviral therapeutic design and smarter oncolytic poxvirus development [147,148].
2018-04-03T04:05:45.639Z
2015-08-01T00:00:00.000
{ "year": 2015, "sha1": "de2fceefa0b12d78d38ac9950de3afc2cb46ecc9", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1999-4915/7/8/2844/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "de2fceefa0b12d78d38ac9950de3afc2cb46ecc9", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
108289949
pes2o/s2orc
v3-fos-license
Geometric Lorenz flows with historic behavior

We will show that, in the geometric Lorenz flow, the set of initial states which give rise to orbits with historic behavior is residual in a trapping region.

A forward orbit O⁺(x, ϕ) of a map ϕ : X → X on a compact space X is said to have historic behavior if the limit of the partial averages (1/(n+1)) Σ_{i=0}^{n} g(ϕ^i(x)) does not exist for some continuous function g : X → R. The notion of historic behavior was introduced by Ruelle [Ru]. We say that a subset A of X is a historic initial set if, for any x ∈ A, the forward orbit O⁺(x, ϕ) has historic behavior. Jordan, Naudot and Young [JNY] showed that the convergence of every higher-order average in [BDV, p. 11] is totally controlled by the presence of the historic initial sets. Let ϕ : S → S be the doubling map on the circle S = R/Z. Takens [Ta2] showed that there exists a residual historic initial set in S. In fact, he presented only one orbit O⁺(x, ϕ) which is dense in S and has historic behavior. Then, by Dowker [Do], there exists a historic initial set which is residual in S. Dowker's theorem is very useful to show the existence of a residual historic initial set for various 1-dimensional maps. The quenched random dynamics version of Takens' result is obtained by Nakano [Na]. Takens' argument is applicable also to the Lorenz map α : [−1, 1] → [−1, 1], see Remark 1.1. Many of such residual sets would have zero Lebesgue measure. On the other hand, for any integer r with 2 ≤ r < ∞, Kiriki and Soma [KS] proved that there exists a two-dimensional diffeomorphism which is arbitrarily C^r close to a diffeomorphism with a quadratic homoclinic tangency and has a non-empty open historic initial set D. Note that the open set D has positive 2-dimensional Lebesgue measure. Hence, in particular, this result gives an answer to Takens' Last Problem [Ta2] in a C²-persistent way (see [PT, Section 6.1] for the definition). Moreover, it suggests that, in certain classes of 2-dimensional diffeomorphisms, the historic initial set is not negligible from the physical point of view. In this paper, we will study the historic behavior in flow dynamics. Let (x(t))_{t≥0} be a forward orbit of a flow on a compact space X. Then we say that the orbit has historic behavior if the time average lim_{t→∞} (1/t) ∫_0^t g(x(s)) ds does not exist for some continuous function g : X → R. See Takens [Ta1] for the definition. Bowen's example given in [Ta1] is a flow on R² which has a heteroclinic loop consisting of a pair of saddle points and two arcs connecting them. The loop bounds an open disk D in R² which contains a singular point p of the flow such that the complement D \ {p} is a historic initial set. However, this example is fragile in the sense that it is not persistent under perturbations which break the saddle connections. Very recently, Labouriau and Rodrigues [LR] present a persistent class of differential equations on R³ exhibiting historic behavior for an open set of initial conditions, which answers Takens' Last Problem for 3-dimensional flows. Here we consider the geometric Lorenz flow introduced by Guckenheimer [Gu] as a robust model which does not belong to the classes in [LR]. Robinson [Ro] proved that the geometric Lorenz flow is preserved under C²-perturbation. Note that Tucker [Tu] showed that the flow exhibited by the system of differential equations in Lorenz [Lo] (the original Lorenz flow) is realized by some geometric Lorenz model. Our main theorem (Theorem 2.1) of this paper proves that any geometric Lorenz flow satisfying the conditions in Section 1 has a residual historic initial set.
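To make the mechanism behind Takens' construction concrete, the following minimal Python sketch (ours, not part of the paper; all names and block lengths are illustrative choices) mimics the alternating-block itinerary for the doubling map. Taking g as the indicator of [0, 1/2), the running Birkhoff averages equal the running frequency of the digit 0 in the binary itinerary, and they oscillate between values near 1 and near 1/2, so the time average does not converge.

```python
# Minimal numerical sketch: historic behavior for the doubling map
# phi(x) = 2x mod 1.  Under phi the binary expansion of x is shifted, so for
# g = indicator of [0, 1/2) the Birkhoff average (1/(n+1)) sum_i g(phi^i(x))
# equals the frequency of the digit 0 among the first n+1 binary digits of x.

def historic_itinerary(num_blocks):
    """Build a binary itinerary from alternating blocks: long runs of the
    fixed-point symbol 0, then long runs of the period-two word 01, with
    block lengths growing so fast that each block swamps all earlier ones."""
    bits, length = [], 1
    for k in range(num_blocks):
        length *= 10
        if k % 2 == 0:
            bits.extend([0] * length)            # shadow the fixed point 0
        else:
            bits.extend([0, 1] * (length // 2))  # shadow the period-2 orbit
    return bits

bits = historic_itinerary(6)

# Running Birkhoff averages of g = indicator of [0, 1/2):
zeros, averages = 0, []
for n, b in enumerate(bits, start=1):
    zeros += (b == 0)
    averages.append(zeros / n)

# The partial averages keep oscillating between values near 1 (after a block
# of 0s) and values near 1/2 (after a block of 01s): no limit exists.
tail = averages[len(averages) // 2:]
print("limsup over the tail ~", max(tail))
print("liminf over the tail ~", min(tail))
```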
On the other hand, Araujo et al. [APPV] proved that, for any singular hyperbolic attractor of a 3-dimensional flow, the historic initial set in the topological basin of the attractor has zero Lebesgue measure. Since the geometric Lorenz attractor is proved to be a singular hyperbolic attractor by [MPP], the historic initial set is negligible from the physical point of view. But Theorem 2.1 implies that this is not the case from the topological point of view. Finally, we note that Dowker's result does not work in flow dynamics. So, in our proof, we need to construct a residual historic initial set for the geometric Lorenz flow explicitly.

Acknowledgements. The authors appreciate the hospitality of NCTS, Taiwan, where parts of this work were carried out. The first and third authors were partially supported by JSPS KAKENHI Grant Numbers 25400112 and 26400093, respectively, and the second author by MOST 104-2115-M-009-003-MY2.

Preliminaries

First of all, we will review the geometric Lorenz flow briefly. See [Wi1, GW, Wi2] for details.

Remark 1.1 (Historic behavior for the 1-dimensional Lorenz map). We denote the forward orbit of x ∈ [−1, 1] under α by O⁺(x, α). By Hofbauer [Ho], the dynamics of α on [−1, 1] is described by a Markov partition on finite symbols. Let s be a periodic sequence of these symbols and s′ a sequence such that, for the point x′ of [−1, 1] corresponding to s′, the partial averages (1/(n+1)) Σ_{i=0}^{n} δ_{α^i(x′)} converge to the Lebesgue measure. As in Takens [Ta1, Section 4], there exists a sequence s₀ of these symbols in which long initial segments of s and those of s′ appear alternately, and such that the forward orbit O⁺(x₀, α) of the point x₀ of [−1, 1] corresponding to s₀ has historic behavior. Then, by Dowker [Do], there exists a historic initial set which is residual in [−1, 1].

We identify the square Σ and any subset of Σ with their images in R³ via the embedding ι : R² → R³ with ι(x, y) = (x, y, 1). A C²-vector field X_L on R³ is said to be a geometric Lorenz vector field controlled by the Lorenz map L : Σ \ Γ → Σ (1.1) if it satisfies the following conditions (i) and (ii). (i) For any point (x, y, z) in a neighborhood of the origin 0 of R³, X_L is given by a linear differential equation for some λ > 0, µ > ν > 0. Moreover, Γ is contained in the stable manifold W^s(0) of 0. (ii) All forward orbits of X_L starting from Σ \ Γ will return to Σ, and the first return map is L. Note then that 0 is a singular point (an equilibrium) of saddle type, the local unstable manifold of 0 is tangent to the x-axis, and the local stable manifold of 0 is tangent to the yz-plane; see Figure 1. The flow ϕ_L(z, t) generated by X_L is called the geometric Lorenz flow associated with the vector field X_L. The closure of ∪_{z∈Σ\Γ} ϕ_L(z, [0, ∞)) in R³ is homeomorphic to a genus-two handlebody as illustrated in Figure 1.2, which is called the trapping region of ϕ_L and denoted by T_{ϕ_L} or T_L. Any forward orbit of ϕ_L with its initial point in T_L cannot escape from T_L. For simplicity, we suppose moreover that the geometric Lorenz flow satisfies the differential equation (1.3). In fact, this assumption is not crucial and our subsequent argument still works for any geometric Lorenz flow which satisfies (1.3) only on an arbitrarily small neighborhood of 0 in T_L.

Historic behavior for the geometric Lorenz flow

Let ϕ_L be the geometric Lorenz flow given in the previous section. Suppose that g : T_L → R is a continuous function on the trapping region T_L.
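The display for the linear differential equation in condition (i) above was lost in extraction. A plausible reconstruction, consistent with the stated data (λ > 0 expanding along the x-axis, µ > ν > 0 contracting transversally, so that the local unstable manifold of 0 is tangent to the x-axis and the local stable manifold to the yz-plane), is the standard geometric Lorenz linearisation below; the assignment of µ to the y-direction and ν to the z-direction is our assumption, following the usual convention in which the return map to Σ contracts the y-direction most strongly:

\[
\dot x = \lambda x, \qquad \dot y = -\mu y, \qquad \dot z = -\nu z .
\]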
Historic behavior for the geometric Lorenz flow

Let ϕ_L be the geometric Lorenz flow given in the previous section, and suppose that g : T_L → R is a continuous function on the trapping region T_L. For τ > 0 and δ > 0, the forward orbit ϕ_L(x, t), t ≥ 0, emanating from x ∈ T_L is said to have (τ, δ)-historic behavior with respect to g if there exist τ₀, τ₁ with τ₀, τ₁ ≥ τ such that

$\left|\frac{1}{\tau_{0}}\int_{0}^{\tau_{0}} g(\varphi_L(x,s))\,ds-\frac{1}{\tau_{1}}\int_{0}^{\tau_{1}} g(\varphi_L(x,s))\,ds\right|\geq\delta.$

In particular, ϕ_L(x, t), t ≥ 0, has historic behavior if and only if there exist δ > 0 and a continuous function g on T_L such that, for any τ > 0, ϕ_L(x, t), t ≥ 0, has (τ, δ)-historic behavior with respect to g.

For any y, z ∈ T_L contained in the same forward orbit ϕ_L(x, [0, ∞)) with x ∈ Σ, the sub-arc of ϕ_L(x, [0, ∞)) connecting y with z is denoted by Φ_L(y, z) or Φ_L(z, y). Let t_x(y) ≥ 0 be the number with ϕ_L(x, t_x(y)) = y. We set τ(y, z) = |t_x(y) − t_x(z)|. Note that τ(y, z) is independent of the choice of x ∈ Σ with ϕ_L(x, [0, ∞)) ∋ y, z. We also set τ(γ) = τ(y, z) if γ = Φ_L(y, z). Let A be a compact subset of T_L \ {0} such that Φ_L(y, z) ∩ A is a disjoint union of finitely many arcs γ₁, . . . , γ_n. Then the total sum $\sum_{i=1}^{n}\tau(\gamma_i)$ is denoted by τ(y, z)|_A.

Take a periodic point x_per(2) of α with period two. Let π : R³ → R² be the orthogonal projection defined by π(x, y, z) = (x, z). For any point x of Σ with x[1] = x_per(2), the image Q(x_per(2)) = π(ϕ_L(x, [0, ∞))) is a closed curve in the xz-plane disjoint from the origin of R². Here we denote the first entry of an element a of R³ by a[1], that is, (a, b, c)[1] = a. Though Q(x_per(2)) depends on x_per(2), it is independent of the y-entry of x. Note that the Lorenz flow does not have singular points in the compact set T_L \ Π(η), where Ā denotes the closure of a subset A of T_L. It follows from the fact that there exists a constant C > 0 satisfying for any x ∈ Σ \ Γ.

The following is our main theorem in this paper.

Theorem 2.1. There exists a residual subset H of Σ such that, for any x ∈ H, the forward orbit ϕ_L(x, t), t ≥ 0, has historic behavior.

Here we fix a continuous function g : T_L → R satisfying the following condition: (1) 0 ≤ g(x) ≤ 1 for any x ∈ T_L. The following lemma is crucial in the proof of Theorem 2.1. Here we note that the disk U_(x₀,N,ε) is not necessarily required to contain x₀ as an element. It follows that ϕ_L(z, t), t ≥ 0, has (N, 1/2)-historic behavior with respect to g.

Proof of Theorem 2.1. For any N, m ∈ N and any x ∈ Σ \ Γ, let U_(x,N,1/(m+1)) be the open disk given in Lemma 2.2 with ε = 1/(m + 1). Then the union $U_N=\bigcup_{m\in\mathbb{N},\,x\in\Sigma\setminus\Gamma} U_{(x,N,1/(m+1))}$ is an open dense subset of Σ, and hence $H=\bigcap_{N=1}^{\infty} U_N$ is a residual subset of Σ. Since each element z of H satisfies the condition (H_N) of Lemma 2.2 for any N ∈ N, the forward orbit ϕ_L(z, t), t ≥ 0, has historic behavior. This completes the proof.
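A direct numerical transcription of the (τ, δ)-historic test, under the assumption (made explicit in the reconstruction above) that the defining display compares two finite-time averages:

```python
import numpy as np

def has_tau_delta_historic(t, g_vals, tau, delta):
    """True if two horizons tau0, tau1 >= tau exist whose time averages
    of g along the sampled orbit differ by at least delta.
    Assumes t is increasing with t[0] = 0."""
    incr = np.diff(t) * 0.5 * (g_vals[1:] + g_vals[:-1])  # trapezoid rule
    avgs = np.cumsum(incr) / t[1:]
    tail = avgs[t[1:] >= tau]
    return tail.size > 0 and tail.max() - tail.min() >= delta

# Historic behavior = (tau, delta)-historic for every tau with one fixed
# delta, so in practice one checks a sequence of growing horizons, e.g. on
# the trajectory from the previous sketch.
```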
2016-08-18T13:44:24.000Z
2015-11-17T00:00:00.000
{ "year": 2015, "sha1": "08d55d7a484448fecd3ea05d4bdb97efaf181181", "oa_license": "CCBY", "oa_url": "https://www.aimsciences.org/article/exportPdf?id=19696a00-e978-4c91-bc29-9bb2e7a463ab", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "08d55d7a484448fecd3ea05d4bdb97efaf181181", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Mathematics" ] }
22290266
pes2o/s2orc
v3-fos-license
Rosiglitazone enhances neovascularization in diabetic rat ischemic hindlimb model

Background. There is increasing evidence that peroxisome proliferator-activated receptors (PPARs) may be involved in the regulation of angiogenesis. In this study, we examined whether rosiglitazone, a PPARγ agonist, can restore angiogenesis in a rat hindlimb ischemia model of diabetes. Methods. Male Wistar rats were divided into four groups (n=6 each): control, diabetic, and control and diabetic rats receiving rosiglitazone (8 mg/kg/day). Diabetes was induced by streptozotocin (55 mg/kg; ip). After 21 days, serum concentrations of nitric oxide (NO), vascular endothelial growth factor (VEGF) and soluble VEGF receptor-2 (VEGFR-2) were measured, and neovascularization in ischemic legs was evaluated by immunohistochemistry. Results. Capillary density and capillary/fiber ratio in hindlimb ischemia of diabetic animals were significantly lower than in the control group (P<0.05). Rosiglitazone significantly restored neovascularization in diabetic animals (P<0.05). Conclusions. Rosiglitazone enhances neovascularization in diabetic ischemic skeletal muscle and could be considered for treatment of peripheral artery disease in diabetic subjects.

INTRODUCTION

Type 2 diabetes is a major cause of morbidity and mortality in advanced societies. Cardiovascular disease is responsible for up to 80% of deaths in diabetic subjects 1 . Some of the long-term complications of diabetes are associated with impaired angiogenesis, which can result in severe organ damage 2 . Angiogenesis is defined as the sprouting of blood vessels from preexisting ones and is considered a physiological response to tissue ischemia 3,4 . Hypoxia is the main stimulus for angiogenesis 4 . Thus, angiogenic therapy is a novel approach for improving tissue perfusion in diabetic patients with reduced regional organ perfusion 5,6 .

Peroxisome proliferator-activated receptors (PPARs) are ligand-activated transcription factors that have three nuclear receptor isoforms, PPARα, PPARδ and PPARγ (ref. 7). Rosiglitazone is a PPARγ agonist that belongs to a new class of insulin sensitizers, used clinically in the management of diabetes 8 . PPARγ is expressed in endothelial and vascular smooth muscle cells 9 . It has been indicated that PPARγ ligands not only have beneficial effects on endothelial function 10 and ameliorate hyperlipidemia and hyperglycemia 11 , but also upregulate angiogenic factors such as endothelial nitric oxide (NO) synthase and vascular endothelial growth factor (VEGF) expression in vascular smooth muscle cells 12 . In recent years, there has been increasing evidence that PPARs might be involved in the regulation of physiological and pathological angiogenesis 7 . Since peripheral artery disease is a major complication of diabetes, in this study we test the hypothesis that rosiglitazone can improve skeletal muscle angiogenesis in diabetic and control rats in a hindlimb ischemia model.
Animals

Ten-week-old male Wistar rats weighing between 180-230 g were provided by the Pasteur Institute of Iran. The animals were randomly divided into two groups: diabetic and control. Experimental diabetes was induced by a single intraperitoneal injection of streptozotocin (55 mg/kg) dissolved in 0.9% saline. Control rats received the same volume of 0.9% saline. After 48 h, blood samples were taken and the animals with blood glucose concentration higher than 16.7 mmol/l were considered diabetic 13 . Then, all rats were randomly divided into 4 groups as follows: Group 1: control rats received vehicle. Group 2: control rats received rosiglitazone (8 mg/kg/day) by gavage 14 . Group 3: diabetic rats received vehicle. Group 4: diabetic rats received rosiglitazone (8 mg/kg/day) by gavage. The treatments were started one day after induction of hindlimb ischemia and lasted for 21 days. All experimental procedures were approved by the ethics committee of the authors' institution.

Rat hindlimb ischemic model

All rats were anaesthetized with ketamine (75 mg/kg) and xylazine (7.5 mg/kg), intraperitoneally. Unilateral hindlimb ischemia was induced as previously described 15 . In brief, the left legs were shaved and locally disinfected. Through a small incision, the left femoral artery was isolated. The proximal and distal portions of the femoral artery and the distal portion of the saphenous artery with side branches were ligated and excised. Subsequently, the skin was closed with 3-0 silk surgical suture. Then, the animals were returned to their cages.

Capillary density analysis

For capillary density measurement, the ischemic gastrocnemius muscles were dissected. After overnight fixation in 10% formalin, they were embedded in paraffin and cut at 5 µm thickness. Then, the sections were deparaffinized and incubated with a rat monoclonal antibody directed against mouse CD31 (Abcam Co.). Finally, capillary density was measured at 400× in ten different fields from each tissue preparation and determined as the number of CD31-positive cells per mm². To avoid overestimating or underestimating capillary density because of muscle atrophy or interstitial edema, the capillary/muscle fiber ratio was also expressed.

Measurement of plasma parameters

After 12 h fasting, blood samples were taken from the retroorbital space before and at the end of the experiment. Blood samples were centrifuged at 10000 rpm for 15 min to obtain serum triglycerides (TG), high-density lipoprotein cholesterol (HDL-C), total cholesterol (TC), low-density lipoprotein cholesterol (LDL-C), glucose and insulin concentrations with commercially available kits.

Measurement of serum NO, VEGF and VEGFR-2 concentrations

Serum NO concentrations were measured using the Griess reagent method (Promega Corp, USA). In this method, serum nitrite, the main metabolite of NO, was measured. The limit of detection is 2.5 µM. Serum VEGF and VEGFR-2 concentrations were measured by enzyme-linked immunosorbent assay using available reagents and recombinant standards (R&D Systems, Minneapolis, USA). The minimum sensitivities of the VEGF and VEGFR-2 assays are 3.9 pg/ml and 0.027 ng/ml, respectively.

Statistical analysis

All data are expressed as mean ± SE. One-way ANOVA followed by Tukey's test was performed for comparison of data between groups. A paired t-test was used for comparison of paired data. A P value less than 0.05 was considered statistically significant.
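A sketch of the statistical pipeline just described, with made-up group values standing in for the measured data (all numbers are illustrative, not the study's measurements):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
# Hypothetical serum TG values (mmol/l) for the four groups (n = 6 each).
groups = {
    "control":       rng.normal(0.9, 0.10, 6),
    "control+ROSI":  rng.normal(0.7, 0.10, 6),
    "diabetic":      rng.normal(1.6, 0.20, 6),
    "diabetic+ROSI": rng.normal(1.1, 0.20, 6),
}

f, p = stats.f_oneway(*groups.values())          # one-way ANOVA
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), 6)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))   # Tukey post hoc

# Before/after comparisons within a group use a paired t-test:
before, after = rng.normal(1.0, 0.1, 6), rng.normal(0.8, 0.1, 6)
print(stats.ttest_rel(before, after))
```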
Plasma parameters

As shown in Table 1, the plasma level of TG was significantly reduced and serum HDL-C increased in control rats treated with rosiglitazone (P<0.05). In diabetic groups, rosiglitazone also reduced serum TG and increased serum HDL-C concentrations (P<0.05). Blood glucose levels were higher than 16.7 mmol/l in diabetic rats throughout the study, and rosiglitazone administration did not alter blood glucose levels or serum insulin concentrations compared to control (P>0.05) (data not shown).

Evaluation of serum angiogenic factors

Fig. 1 illustrates serum nitrite concentrations on day 21 after operation. Diabetic animals had lower serum nitrite concentration than controls (P=0.08). Rosiglitazone did not change serum nitrite concentration in control or diabetic animals (P>0.05). Serum VEGF and VEGFR-2 concentrations were not different between control and diabetic animals (P>0.05). There were no significant differences in serum VEGF and VEGFR-2 concentrations between rosiglitazone-treated and non-treated groups (P>0.05) (Fig. 2A and B).

Evaluation of neovascularization

Neovascularization was evaluated as capillary density (CD31-positive cells) per mm² and the number of capillaries per muscle fiber. Neovascularization was significantly impaired in hindlimb ischemia of diabetic animals compared to control, and rosiglitazone significantly restored capillary density and capillary/fiber ratio in the ischemic leg of diabetic rats toward control levels (Fig. 3A and B). Some photographs of histological sections stained with rat monoclonal antibody directed against murine CD31 are illustrated in Fig. 4.

DISCUSSION

In this study, we investigated the role of rosiglitazone, a PPARγ agonist, on angiogenesis in hindlimb ischemia in diabetic and control rats. Our data illustrate that diabetes is associated with reduced angiogenesis in ischemic skeletal muscle and that rosiglitazone administration restored neovascularization in hindlimb ischemia of diabetic animals.

Rosiglitazone is a drug from the thiazolidinediones (TZDs) that is not only used for improvement of insulin resistance in diabetic patients but also safeguards diabetic patients from cardiovascular events 16 . In the present study, we found that rosiglitazone improved serum HDL and lowered TG concentration in control and diabetic rats; however, it did not change serum insulin or glucose concentrations. In this study, we used normal rodent chow, not a high-fat diet; however, our findings are in agreement with previous studies which showed that activation of PPARγ lowered plasma triglyceride levels and increased plasma HDL (ref. 17). However, a study on cholesterol-fed rabbits revealed that rosiglitazone significantly reduced aortic atherosclerosis without modifying the plasma levels of glucose, insulin or lipid profile 18 .
It is believed that PPARs are involved in the angiogenesis process 7 . In this study, we found that angiogenesis in hindlimb ischemia of diabetic animals was impaired compared to control. In addition, serum NO concentration in diabetic animals was lower than in controls. Enhanced angiogenesis has an important role in some complications of diabetes, including diabetic retinopathy and nephropathy; on the other hand, reduced angiogenesis, which is related to lower arteriogenesis and poor growth of collateral arteries, has an important role in cardiovascular diseases in diabetes 2,19 . NO not only enhances angiogenesis, but other angiogenic growth factors also exert their angiogenic response through increasing NO production 20 . Reduced NO bioavailability in diabetic subjects has been reported in several studies 21-23 . Suppression of endothelial NO synthase (eNOS) expression and activity 24 , overproduction of superoxide 25 and activation of protein kinase C (ref. 23) during high glucose concentrations are possible mechanisms for lower NO availability in diabetic subjects. VEGF is another angiogenic factor in a variety of in vivo models 26 . VEGFR-2 is also an effector of proangiogenic signaling in the angiogenesis process 27 .

In the present study, we found no significant differences in serum VEGF and VEGFR-2 concentrations between control and diabetic animals. It has been suggested that, irrespective of the serum VEGF level, the VEGF signaling pathway is impaired during diabetes, which is considered VEGF resistance 28,29 . Therefore, it is possible that reduced serum NO concentration may be responsible for lower neovascularization in the hindlimb ischemic tissue of diabetic animals.

We also found that rosiglitazone restored neovascularization in the ischemic leg of diabetic animals. The angiogenic abilities of PPARγ agonists have been broadly examined; however, the results are contradictory. Studies in different angiogenesis models revealed that activation of PPARγ upregulates the receptor of the antiangiogenic factor thrombospondin in the chorioallantoic membrane 30 , inhibits bFGF- and VEGF-mediated angiogenesis, suppresses VEGF-induced angiogenesis in the rat cornea model 31 , and inhibits tumor growth, angiogenesis and metastasis 32 . In agreement with our results, a recent study on KKAy mice indicated that pioglitazone administration restored ischemia-induced angiogenesis 15 . In that study, Huang et al. used pioglitazone as the PPARγ agonist. It has been shown that pioglitazone is a partial PPARγ agonist and is considered a less potent ligand, whereas rosiglitazone appears to be a pure PPARγ agonist with high affinity for the receptor 7 . Another study demonstrated that PPARγ agonists increase angiogenesis after focal cerebral ischemia 33 . Rosiglitazone also has a myocardial protective role during ischemia/reperfusion injury 34 . Several mechanisms have been suggested for the angiogenic role of PPARγ agonists. Huang et al. suggested that activation of eNOS is the main mechanism for enhanced angiogenesis 15 . In the present study, we found that serum NO, VEGF and VEGFR-2 concentrations did not change after rosiglitazone treatment. An increase in the number of endothelial progenitor cells has been suggested as another mechanism for the enhanced angiogenesis of PPARγ activation 35 . Therefore, it is possible that the effect of PPARγ agonists on angiogenesis in pathological or ischemic conditions is different.
In conclusion, diabetes is associated with impaired angiogenesis in ischemic skeletal muscle, and rosiglitazone restored neovascularization in diabetic animals. Since diabetes is one of the most important risk factors for the development of peripheral vascular disease, it seems that rosiglitazone can be considered for treatment of peripheral artery disease in diabetic subjects. Further studies are needed to clarify the exact role and mechanisms of PPARγ agonists in physiological and pathological angiogenesis.

LETTER TO THE EDITOR

Rosiglitazone belongs to the thiazolidinedione class of compounds that exhibit agonist activity on PPAR-gamma (peroxisome proliferator-activated receptor gamma). These drugs enhance the sensitivity of tissues to the effects of endogenous insulin. Rosiglitazone was registered in the European Union as Avandia in July 2000. As an oral antidiabetic drug, it was used in the treatment of patients with type 2 diabetes mellitus. Rosiglitazone served as a second-line drug where other treatments had failed. In June 2010, studies were published describing the adverse effects of this antidiabetic drug, especially on the cardiovascular system 1,2 . Based on this information, the European Medicines Agency recommended the withdrawal of rosiglitazone in the EU member states.

The paper entitled "Rosiglitazone enhances neovascularization in diabetic rat ischemic hind limb model" describes a new effect of rosiglitazone on angiogenesis in an animal model (rat). The results of this communica-

Table 1. Serum lipid profile before and after study in experimental groups. Data are expressed as Mean ± SE; *: P<0.05 compared to before experiment.
2018-04-03T05:49:46.954Z
2012-12-12T00:00:00.000
{ "year": 2012, "sha1": "567d625b4d4a50b64a296cfcec6f89f8f028bc60", "oa_license": "CCBY", "oa_url": "http://biomed.papers.upol.cz/doi/10.5507/bp.2012.052.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "0dcdc677eb22bed680fc397bfb0139649f7b4e46", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
250683459
pes2o/s2orc
v3-fos-license
Upgrading the ATLAS Level-1 Calorimeter Trigger using topological information

The ATLAS Level-1 Calorimeter Trigger (L1Calo) is a fixed-latency, hardware-based pipelined system designed for operation at the LHC design luminosity of 10³⁴ cm⁻² s⁻¹. Plans for a several-fold luminosity upgrade will necessitate a complete replacement for L1Calo (Phase II). But backgrounds at or near design luminosity may also require incremental upgrades to the current L1Calo system (Phase I). This paper describes a proposed upgrade to the existing L1Calo to add topological algorithm capabilities, using Region of Interest (RoI) information currently produced by the Jet and EM/Hadron algorithm processors but not used in the Level-1 real-time data path.

Introduction

The L1Calo trigger (figure 1) is a fixed-latency, 40 MHz pipelined digital system [2]. Its input data come from about 7200 analogue trigger towers of reduced granularity, mostly 0.1 × 0.1 in ∆η × ∆φ, from all the ATLAS electromagnetic and hadronic calorimeters. The L1Calo electronics has a latency of less than a microsecond, resulting in a total latency of about 2.1 µs for the L1Calo chain, including cable transmission delays and the Central Trigger Processor (CTP) processing time. The Cluster Processor (CP) identifies candidate electrons, photons and τ's with high ET above programmable thresholds and, if desired, passing isolation requirements. The Jet/Energy-sum Processor (JEP) operates on 'jet elements' at the somewhat coarser granularity of 0.2 × 0.2 in ∆η × ∆φ to identify jets as well as produce global sums of total, missing, and jet-sum ET. Both the CP and the JEP count 'hit' multiplicities of the different types of trigger objects and send them, together with tower energy information (total and x, y components), to the two Common Merger Modules (CMMs) in each crate. The 'crate' CMMs (one at each end of the main block of modules in each crate) process the results from the CPMs or JEMs to produce results over the entire crate, and send them to a 'system' CMM in order to produce system-wide results. These final results are sent on cables to the CTP. Upon receiving the L1Accept signal from the CTP, the Region of Interest (RoI) information is sent to the Data Acquisition (DAQ) system through Readout Drivers (RODs).

The present trigger capabilities allow selections on counts of objects of various types (for example, 2 jets > 40 GeV), and even separate counts of objects (e.g. MET > 50 GeV && 2 jets > 40 GeV). However, there is presently no provision for spatial correlation of different objects, or for differentiating jets and em/tau clusters identified in the different subsystems but originating from the same energy deposits. A possible solution could be to include jet/cluster position information (RoI) in the real-time data path (in the current system it is available only to the DAQ system) and to use this information to add topology-based algorithms at Level 1. For example, identification of spatial overlap between e/tau clusters and jets, usage of the local jet ET sum to estimate the energy of overlapping e/tau objects, and calculation of invariant transverse mass would be possible. Some of these require only local information; others need global information.
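To make the topology-based selections just listed concrete, here is a toy sketch of two of them — EM/jet overlap identification and a transverse-mass calculation — with all RoI values and window sizes invented for illustration:

```python
import math

# Toy Level-1 RoIs as (eta, phi, ET [GeV]); values are invented.
em_rois = [(0.30, 1.20, 45.0), (-1.10, 2.90, 22.0)]
jet_rois = [(0.35, 1.25, 60.0), (2.00, -0.80, 80.0)]

def dphi(a, b):
    """Wrapped azimuthal distance."""
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def overlaps(em, jet, deta=0.2, dphi_max=0.2):
    """Do an EM cluster and a jet RoI point at the same energy deposit?"""
    return abs(em[0] - jet[0]) < deta and dphi(em[1], jet[1]) < dphi_max

for em in em_rois:
    if any(overlaps(em, jet) for jet in jet_rois):
        print("EM/jet overlap at (eta, phi) =", em[:2])

# Invariant transverse mass of the two leading jets (massless objects):
# mT^2 = 2 * ET1 * ET2 * (1 - cos(dphi))
j1, j2 = jet_rois[0], jet_rois[1]
mt = math.sqrt(2 * j1[2] * j2[2] * (1 - math.cos(dphi(j1[1], j2[1]))))
print(f"mT(j1, j2) = {mt:.1f} GeV")
```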
Modifications required for limited upgrade

For several years the calorimeter electronics and the trigger hardware up to and including the Cluster Processor Modules (CPMs) and Jet/Energy Modules (JEMs) will remain essentially unchanged. The hardware components on which the system is built are about 5 to 7 years old and don't allow much freedom for modifications; many parts are already obsolete. A limited modification of the algorithm processor firmware can be done in order to extract the extra information, but in order to run topological algorithms a new module must be designed. This module will replace the so-called "common merger" module (CMM) in the current system. In the proposed new architecture (figure 1) the CP and JEP systems (and the L1 Muon trigger processor) transmit additional RoI information (which is not currently used in the real-time data path), taking advantage of the higher bandwidth potential inherent, but not used, in the backplane, to the re-designed CMM module (CMM++). The data transfer rate over the crate backplane can be increased from 40 Mbit/s to 160 Mbit/s. A topological processor (TP) performing more sophisticated algorithms on the combined feature set and sending results to the CTP can be added at a later stage.

Common Merger Module modifications

The current CMM module [3] processes the results from the CPM or JEM modules to produce results over the entire crate, and sends them to a "system" CMM in order to produce system-wide results. These final results are sent on cables to the CTP. A "crate" FPGA on each CMM receives backplane data and produces crate-wide sums of identified features. A "system" FPGA collects crate results over LVDS cables and sends the trigger output to the CTP. On an L1Accept, data and RoIs are sent via G-Links to the DAQ system. All CMMs (figure 2) have identical hardware and several different firmware variants allowing them to perform different functions.

The current L1Calo trigger system must remain unchanged for the next few years, and it would be desirable that the system modifications can be done and tested in parallel with the running system. The CMM++ development scenario assumes that the module can be a drop-in replacement for the CMMs, with the ability to perform additional logic on top of providing all the necessary backwards-compatible CMM functionality (figure 2). The two basic requirements are that this module should:

• provide all the necessary functionality to replace a current CMM (electrical interfaces, programming model, data formats),
• be able to transmit all the backplane data, received from upgraded CPM/JEM modules, onwards to the TP over optical links (with or without internal processing), and to receive the data via optical links in order to implement internal topology processing without the TP.

A desirable extra feature would be that the module could perform extra processing (and possibly output extra trigger bits) which would act as a test bed for future trigger algorithms. Development of such a module can be staged in the following way:

• hardware design with all present interfaces plus the optical links for the new topological processor, use of one large FPGA, and adaptation of the current CMM firmware to the new hardware for initial use,
• upgrade of the CPM/JEM and CMM++ module firmware using the new data format and 160 Mb/s data transfer, incrementally adding new functionality and supplying data to the topological processor connected to the CTP.

In order to prepare the CMM++ specification, several feasibility studies are currently under way in different areas, namely: backplane data transfer rate tests, a latency survey, optical link and FPGA technology studies, and a study of transferring current firmware to new hardware.
Backplane data transfer

The CMM receives over the backplane from the CPM/JEM modules up to 400 signal lines on each 40 MHz clock cycle. The CMM++ interface to the CPM/JEM shall be able to run in the backward-compatible (CMM) version first, and shall then be upgraded to 160 Mb/s data transfer and a new data format. The deployment of the CMM++ module requires firmware modifications in the CPM and JEM in order to collect the RoI information generated in the modules and to send it to the CMM++ over the crate backplane. The Backplane and Link Tester module (BLT) [4] was built to qualify the backplane transmission lines for increased data rates between CPM/JEM and CMM++ modules. As a result of the backplane transmission test, stable data transmission at 160 Mb/s was achieved with source termination of the data lines and the forwarded clock line sink-terminated. Therefore, for each 40 MHz clock cycle, 96 bits of data (24×4) can be transferred from each CPM/JEM module to the CMM++ module. The 25th signal line on the backplane will be used to forward the encoded clock/parity onto the CMM++.

Latency survey

The maximum ATLAS L1 trigger latency was defined as 2500 ns (100 bunch crossings, BCs) from colliding beams to L1Accept arrival at the detector front-end electronics. The new topology algorithms require additional latency to process the data, and it was decided to measure the actual latencies of different parts and paths of the L1Calo system. A total L1Calo latency of 36 BCs was measured in the counting room in the installed complete L1Calo system. A detailed breakdown was performed in the test setup, which allows access to individual modules/cables. These measurements provided insight into the latencies used in different parts of the L1Calo system [5]. The maximum possible reserve for the upgrade in the complete L1 system is about 18 BCs, at the expense of increased dead time.

Optical links study

In order to investigate the possibility of exploiting an inexpensive 30 Gbit/s link using commercial components (Xilinx Spartan-3 FPGA, 10 Gb Ethernet transceiver TLK3114SC, SNAP12 Tx/Rx pair), a prototype was built. It is driven by the LHC TTC clock, with jitter reduced by an LMK03033CISQ clock conditioner. To run the links synchronously with the LHC clock, alignment characters are sent in some LHC bunch gaps for link maintenance.

Technology study

The CMM++ module will be based on new components (modern FPGAs, high-speed optical links). In order to acquire experience with the new technologies, the GOLD (Generic Optical Link Demonstrator) is under design.

Firmware study

The CMM++ development scenario assumes that the module can initially be a drop-in replacement for the CMMs. This implies the adaptation of the current CMM firmware to the new hardware in order to provide full backward compatibility and testing with the current system. A first attempt was made to port the Jet CMM firmware to a Virtex-6 device (XC6VHX565T-2FF1924). The aim was to use existing VHDL with minimal changes, to update architecture-specific features, to estimate I/O requirements, and to produce a realistic user constraint file for timing simulation. Results so far demonstrate that the existing VHDL is easily ported, uses ∼2% of available resources and, by emulating the G-Link in the FPGA, can keep the I/O count below 600/640 pins.
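A back-of-envelope check of the transfer and latency numbers quoted in the feasibility studies above:

```python
# Backplane throughput per CPM/JEM -> CMM++ connection.
lines = 24        # data lines (the 25th carries the encoded clock/parity)
line_rate = 160   # Mb/s per line, as qualified with the BLT
bc_rate = 40      # MHz bunch-crossing clock

print(lines * line_rate // bc_rate, "bits per bunch crossing")  # 96 = 24*4

# Latency budget: 2500 ns total = 100 BC at 25 ns each.
print(2500 // 25, "BC total L1 budget; measured L1Calo use: 36 BC")
# Per the survey above, only ~18 BC of real headroom remains for the
# topology step once the other fixed delays are accounted for.
```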
Conclusion

Simulations performed show that rates for electrons and jets (especially of larger size) will hardly be kept within the current L1 trigger rate budget, even near the design luminosity. Trigger algorithms may need to be augmented to reduce rates and improve selectivity even before the current L1Calo trigger system is replaced. A promising solution consists in adding topological algorithms based on relationships among triggered objects, thereby reducing the L1 rate. This modest modification to the current system will have low impact on other ATLAS components. The R&D projects and studies are well under way: backplane data transfer rates, a technology demonstrator, firmware studies and others. Preliminary results of these studies look promising, and the proposed modifications will improve the performance of the current ATLAS L1Calo trigger system.
2022-06-28T05:49:16.084Z
2010-01-01T00:00:00.000
{ "year": 2010, "sha1": "8322a0f2a010e6ebd687875de2443e6e87377476", "oa_license": "CCBY", "oa_url": "http://cds.cern.ch/record/1302268/files/JINST5.C12046.pdf", "oa_status": "GREEN", "pdf_src": "IOP", "pdf_hash": "8322a0f2a010e6ebd687875de2443e6e87377476", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
236171050
pes2o/s2orc
v3-fos-license
Hypertriton production in p-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 5.02 TeV

The study of nuclei and antinuclei production has proven to be a powerful tool to investigate the formation mechanism of loosely bound states in high-energy hadronic collisions. The first measurement of the production of ${\rm ^{3}_{\Lambda}\rm H}$ in p-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 5.02 TeV is presented in this Letter. Its production yield measured in the rapidity interval $-1<y<0$ for the 40% highest multiplicity p-Pb collisions is ${\rm d} N /{\rm d} y =[\mathrm{6.3 \pm 1.8 (stat.) \pm 1.2 (syst.) ] \times 10^{-7}}$. The measurement is compared with the expectations of statistical hadronisation and coalescence models, which describe the nucleosynthesis in hadronic collisions. These two models predict very different yields of the hypertriton in charged-particle multiplicity environments relevant to small collision systems such as p-Pb, and therefore the measurement of ${\rm d} N /{\rm d} y$ is crucial to distinguish between them. The precision of this measurement leads to the exclusion, with a significance larger than 6.9$\sigma$, of some configurations of the statistical hadronization model, thus constraining the theory behind the production of loosely bound states at hadron colliders.

In the last few decades, the production of deuterons, ³H, ³He, ⁴He and their charge conjugates was measured in many different colliding systems and at many energies. The results of the measurements in hadronic and heavy-ion collisions at the LHC [1-7], in e⁺e⁻ collisions at LEP [8], at lower-energy colliders [9-16] and in fixed-target experiments [17-20] significantly constrained the parameter space for production models like coalescence [21-23] and statistical hadronisation [24,25], yet they were unable to decisively discriminate between these two models. The interest in the phenomenon of nucleosynthesis in the final state of hadronic collisions has risen again in recent years owing to its relevance in dark matter searches in space [26,27]. A precise modelling of the production of nuclei and antinuclei is required for the interpretation of the expected fluxes of antinuclei originating from dark matter annihilation, and for the relevant Standard Model background channels. For large colliding systems, such as Pb-Pb collisions at the LHC, the predictions of statistical hadronisation and coalescence models are very similar, and they are both able to describe the measured production of nuclei [28].

The statistical hadronisation model (SHM) describes the system as a hadron-resonance gas (HRG) in thermal equilibrium at hadron emission; hence it predicts particle yields starting from the volume and the temperature of the system at chemical freeze-out (T_chem). The Grand Canonical formulation of the SHM describes the measured production yields of light hadrons and nuclei in Pb-Pb collisions at 2.76 TeV with T_chem = 155 MeV [5]. This temperature, which successfully describes the yield of light hadrons in central Pb-Pb collisions, is one to two orders of magnitude larger than the typical binding energies of light nuclei (a few MeV), and nuclei are likely to interact with the other hadrons in the dense HRG after chemical freeze-out due to the large cross sections [29], thus further modifying the yield. How these loosely bound objects can be formed and survive in such a hostile environment is still an unsolved question [30].
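As a point of reference for the SHM discussion above, the primary yield of a hadron species $i$ in the grand-canonical formulation, in the Boltzmann approximation and natural units, takes the standard textbook form (this expression is background knowledge, not reproduced from the Letter):

$$\langle N_i \rangle \;=\; \frac{g_i\, V}{2\pi^2}\, m_i^2\, T\, K_2\!\left(\frac{m_i}{T}\right) e^{\mu_i/T},$$

where $g_i$ is the spin degeneracy, $V$ the volume, $T$ the chemical freeze-out temperature, $K_2$ a modified Bessel function, and $\mu_i$ the chemical potential ($\approx 0$ at LHC energies). For $m_i \gg T$ the yield scales as $e^{-m_i/T}$, which is why light-nuclei and hypernuclei yields are so sensitive to $T_\mathrm{chem}$.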
The coalescence model uses a different approach to explain the production of nuclei: the size of the nucleon-emitting source, accessible through the analysis of femtoscopic correlations [31], and the nuclear wave function are the two inputs that determine the formation probability of bound states [23,26]. While the SHM can compute the absolute yields of particles directly, in the hadron coalescence model the yield of bound states can be computed only relative to the yields of other particles. The measurement of the production of large bound states in small collision systems, such as pp and p-Pb, is considered to allow for conclusive tests [28,32] of nucleosynthesis in hadronic collisions. An extreme example is the hypertriton ³ΛH, the bound state of a proton, a neutron, and a Λ baryon. This state is characterised by a very small Λ separation energy, of the order of a few hundred keV [33,34], and consequently it has a wide wave function that can extend up to a radius of ≈ 10 fm [35,36]. The size of the ³ΛH wave function is therefore much larger than the hadron emission radius estimated with femtoscopic techniques in p-Pb collisions (1-2 fm [37,38]). For this reason, the ³ΛH yield in p-Pb collisions predicted by the coalescence model, where the ratio of nucleus size to source size directly influences the yield, is suppressed with respect to the statistical hadronisation model expectations, where the nuclear size does not enter explicitly [23,25,28].

The results presented in this Letter are based on data collected during the 2013 and 2016 p-Pb LHC runs at √s_NN = 5.02 TeV. With this beam configuration, the nucleon-nucleon centre-of-mass system moves in rapidity by ∆y_cms = 0.465 in the direction of the proton beam. The ALICE detector and its performance are described in detail in [39,40]. Collision events are selected using the information from the V0A and V0C scintillator arrays [41], located on both sides of the interaction point, covering the pseudorapidity intervals −3.7 < η < −1.7 and 2.8 < η < 5.1. A coincident signal in both arrays is used as a minimum-bias (MB) trigger. In addition, only events with a primary vertex position within 10 cm of the nominal centre of the experiment along the beam axis are selected, to benefit from the full acceptance of the detector. Furthermore, to ensure the best possible performance of the detector and the proper normalisation of the results, events with more than one reconstructed primary interaction vertex (pile-up events) are rejected. In total, about 750 million MB events are selected for analysis, corresponding to an integrated luminosity of L^MB_int = 359 µb⁻¹, with a relative uncertainty determined by the van der Meer scan to be 3.7% [42]. For this analysis, the 40% of events with the highest multiplicity measured by the V0A detector are used.

The ³ΛH candidates are reconstructed via the charged two-body decay channel ³ΛH → ³He + π⁻ (and the charge-conjugate channel for the anti-hypertriton). In this work, ³ΛH and its antiparticle are combined to reduce the statistical uncertainty. In the following, we use the notation ³ΛH and ³He for both the particle and the antiparticle, as well as for their average.
The charged-particle tracks are reconstructed in the ALICE central barrel with the Inner Tracking System (ITS) [43] and the Time Projection Chamber (TPC) [44], which are located within a solenoid that provides a homogeneous magnetic field of 0.5 T in the direction of the beam axis. These two subsystems provide full azimuthal coverage for charged-particle trajectories in the pseudorapidity interval |η| < 0.8. The TPC is also used for the particle identification (PID) of the ³He and the π⁻ via their specific energy loss dE/dx in the gas volume, with a dE/dx resolution of about 5% [44]. The n(σ_TPC,i) variable represents the PID response in the TPC, expressed in terms of the deviation between the measured and the expected dE/dx for a particle species i, normalised by the detector resolution σ. The expected dE/dx is computed with a parameterised Bethe-Bloch function [40]. Pion and ³He tracks within 5σ_TPC are selected. The identified ³He and π tracks are then used to reconstruct the ³ΛH weak decay topology with an algorithm similar to that used in previous analyses [45,46]. By combining the information on the decay kinematics and the decay vertex, several selection variables are defined. Those used in the analysis are: the radial distance of the decay vertex from the beam line, the distance of each daughter track from both the primary and the decay vertex, the proper decay length of the candidate (ct), and cos(θ_P), where θ_P is the angle between the total momentum vector of the decay daughters and the straight line connecting the primary and secondary vertices. The final candidate selection based on these variables is performed with a gradient-boosted decision tree classifier (BDT) implemented with the XGBoost library [47-49] and trained on a dedicated Monte Carlo (MC) simulated event sample. The MC sample is created using the HIJING event generator [50] to simulate the underlying p-Pb collisions, while ³ΛH are injected with a p_T distribution given by an m_T-exponential function that describes the p_T distribution of ³He as measured in p-Pb collisions [5]. The particles are transported through the detector geometry using GEANT4 [51], which simulates the interaction with the material and the weak decay of the ³ΛH. The BDT is a supervised learning algorithm that determines how to discriminate between two or more classes, signal and background in this case, by examining sets of examples called the training sets. In this analysis, the training sets are composed of ³ΛH signal candidates extracted from the MC sample and background candidates from paired like-sign ³He and π tracks from the data. For each ³ΛH candidate, the BDT combines topological and single-track variables to return a score, which is proportional to the candidate's probability of being signal or background. The selection is based on the BDT score, defining a threshold that maximises the expected signal significance assuming thermal production. In this analysis, the default BDT score selection corresponds to a 72% signal efficiency and a 3×10⁻⁵ background rejection factor.
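A schematic of the selection strategy just described, with random arrays standing in for the real feature tables; feature layout, yields, and scalings are placeholders, not the analysis' actual configuration:

```python
import numpy as np
import xgboost as xgb

# Stand-in samples: in the analysis, "signal" comes from the MC and
# "background" from like-sign 3He-pi pairs in data; 7 topological and
# single-track features per candidate.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(1.0, 0.3, (10_000, 7)),    # "signal"
               rng.normal(0.0, 1.0, (100_000, 7))])  # "background"
y = np.concatenate([np.ones(10_000), np.zeros(100_000)])

clf = xgb.XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
clf.fit(X, y)
score = clf.predict_proba(X)[:, 1]

def expected_significance(thr, exp_sig=40.0, bkg_scale=1e-2):
    """s/sqrt(s+b) at a score threshold; exp_sig and bkg_scale are
    placeholders for the thermal-model signal expectation and the
    background normalisation."""
    s = exp_sig * (score[y == 1] > thr).mean()
    b = (score[y == 0] > thr).sum() * bkg_scale
    return s / np.sqrt(s + b) if s + b > 0 else 0.0

best = max(np.linspace(0.05, 0.99, 95), key=expected_significance)
print(f"chosen working point: BDT score > {best:.2f}")
```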
The candidates that pass the BDT selection are used to populate the invariant mass distribution in the transverse momentum interval 0 < p_T < 9 GeV/c. An excess of entries is observed at a mass near 2.99 GeV/c², as shown in Fig. 1. The unbinned distribution is fitted with a Kernel Density Estimator (KDE) [52,53] function, tuned on the MC sample, to describe the signal, and a linear function to describe the background component. The KDE is chosen to smooth the template extracted from the MC. The invariant mass distribution with the superimposed fit is shown in Fig. 1. The significance associated with the signal is evaluated following the procedure described in [54]: the probability for a background fluctuation to be at least as large as the observed maximum excess (local p-value) is computed by employing the asymptotic formulae for likelihood-based tests. The local p-value is expressed as a corresponding number of standard deviations using the one-sided Gaussian tail convention. The excess of entries observed above the expected background has a local significance of standard deviations at the nominal ³ΛH mass.

The production yield is obtained starting from the signal extracted from the fit to the invariant mass spectrum. The fitted signal is then corrected for the reconstruction and selection efficiency (including the reconstruction efficiencies of the daughter particles and of the topology, and the acceptance of the ALICE detector), the number of analysed events, the branching ratio (B.R.) of the ³ΛH two-body decay channel, and the fraction of ³ΛH absorbed in the ALICE detector (f_abs). The simulation of inelastic interactions of the daughter particles is done with GEANT4 and is taken into account in the computation of the reconstruction efficiency. The B.R. value is assumed to be 0.25, according to the calculation published in [55]. The systematic uncertainties originate from (1) the ³ΛH selection and the signal extraction, (2) the choice of the ³ΛH input p_T distribution in the Monte Carlo sample, and (3) the ³ΛH absorption in the detector. In addition (4), a 9% systematic uncertainty is added due to the uncertainty on the B.R., as explained later in the text. The total uncertainty is obtained as the quadratic sum of the individual contributions. The first contribution, which is the dominant one, is computed by varying simultaneously the BDT threshold (±5%) and the background fit function (constant, linear, exponential). The standard deviation (RMS) of the different yields represents our systematic uncertainty associated with the BDT selection and the signal extraction, and it amounts to 14%. The second contribution is evaluated by using different input p_T distributions for the Monte Carlo sample and evaluating the effects on the efficiency. Four different p_T models (m_T exponential, p_T exponential, Boltzmann and Blast-Wave [56]) are fitted to the ³He p_T distribution [5]. For each of them, the efficiency and the yield are computed assuming that ³He and ³ΛH have the same p_T distribution, as already seen for light-flavour hadrons with similar masses in all collision systems [1,45,57]. The RMS among the trials is calculated, yielding a systematic uncertainty of 7%. Finally, the uncertainty on f_abs is considered. According to [58], the expected absorption cross section of ³ΛH due to inelastic interactions in the ALICE detector material is ≈ 1.5 times that of ³He.
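A schematic of the correction chain just described; apart from B.R. = 0.25 [55], every number below is a placeholder, not the measured value:

```python
raw_signal = 30.0   # candidates returned by the invariant-mass fit
efficiency = 0.10   # acceptance x reconstruction x BDT selection
branching  = 0.25   # B.R.(3LH -> 3He + pi-), from [55]
f_abs      = 0.95   # fraction of 3LH surviving absorption
n_events   = 3.0e8  # analysed 0-40% V0A multiplicity events
dy         = 1.0    # rapidity window, -1 < y < 0

dN_dy = raw_signal / (efficiency * branching * f_abs * n_events * dy)
print(f"dN/dy ~ {dN_dy:.1e}")
```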
The result is compared with the expectations of the canonical SHM [25], which assumes exact conservation of baryon number, strangeness, and electric charge across a correlation volume V_c. The SHM predictions are computed using a fixed chemical freeze-out temperature of T_chem = 155 MeV and two correlation volumes extending across one unit (V_c = dV/dy) and three units (V_c = 3dV/dy) of rapidity [25]. The size of the correlation volume governs the influence of exact quantum number conservation, with smaller values leading to a stronger suppression of conserved charges and V_c → ∞ leading to the grand canonical ensemble. The ³ΛH p_T-integrated yield is 1.1×10⁻⁶ and 2.0×10⁻⁶ with V_c = dV/dy and V_c = 3dV/dy, respectively. The dN/dy predictions of the model were obtained using the code released together with the publication [59]. As explained above, in the case of the coalescence model it is not possible to compare the measured absolute yield directly to the model prediction. Hence, this comparison is attained by computing the ³ΛH/Λ ratio and the strangeness population factor S₃ = (³ΛH/³He)/(Λ/p) [60], using previous ALICE measurements of the p, Λ, and ³He yields [5,57], as shown in Fig. 2. The yield of the Λ baryon, measured in −0.5 < y < 0, has been extrapolated to the ³ΛH rapidity region using MC generators [61-63] that are known to reproduce the pseudorapidity density distribution of charged hadrons [64]. The corresponding correction is approximately 2%. In central Pb-Pb collisions the data are consistent with both coalescence and SHM predictions, which are similar, as shown in Fig. 2. The situation is different for p-Pb collisions, where the two models are well separated. Taking into account the uncertainties of the measurement as well as the model uncertainty, the measured S₃ ratio is compatible with the two-body (deuteron-Λ) and three-body (proton-neutron-Λ) coalescence within 1.2σ and 2σ, respectively. With its large uncertainties, also due to the large uncertainty on the ³He yield, S₃ is compatible within 2σ with the SHM calculations too. Hence, the ³ΛH/Λ ratio is used as a test of the coalescence and SHM predictions in the following. In this case, the measurement deviates by 3.2σ and 7.9σ from the SHM with V_c = 1dV/dy and V_c = 3dV/dy, respectively. On the other hand, both coalescence calculations are within 2σ of the measured ³ΛH/Λ. It has to be noted that recent measurements of the ³ΛH mass [34] suggest a larger binding energy, and hence a smaller wave function, of the ³ΛH. This would further shift the coalescence predictions upward.

Figure 3: ³ΛH/Λ times branching ratio as a function of branching ratio. The horizontal line is the measured value and the band represents statistical and systematic uncertainties added in quadrature. The expectations for the canonical statistical hadronization [25] and coalescence models are shown [23].

The value of B.R. = 0.25 for the ³ΛH → ³He + π decay used in this analysis was computed theoretically in Ref. [55]. To investigate the uncertainty resulting from this assumption, Fig. 3 shows the measured ³ΛH/Λ × B.R. for different theoretical model calculations [23,25], assuming a possible variation of the B.R. value. The variation range is chosen by evaluating the relative deviation between the theoretical R₃ and the world average of all the R₃ measurements, including the most recent measurement in heavy-ion collisions [65], where R₃ is defined as the ratio of the ³He + π⁻ decay width to the width of all the π⁻-mesonic decay channels. This uncertainty on R₃ is propagated to the B.R.(³ΛH → ³He + π⁻) and corresponds to a variation range of ±9% around the nominal value. While the two-body coalescence calculation is compatible with the data for the nominal or larger B.R., a discrepancy of 2σ is observed between data and the three-body coalescence prediction.
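The ratio observables above combine several measured yields; a minimal sketch of the quadrature error propagation involved (only the ³ΛH yield below comes from this Letter's abstract, with its statistical and systematic uncertainties combined in quadrature; the other yields are placeholders):

```python
import math

def ratio(a, da, b, db):
    """a/b with relative uncertainties combined in quadrature."""
    r = a / b
    return r, r * math.sqrt((da / a) ** 2 + (db / b) ** 2)

hyp, dhyp = 6.3e-7, 2.2e-7   # 3LH dN/dy (this Letter)
he3, dhe3 = 1.0e-5, 2.0e-6   # 3He dN/dy (placeholder)
lam, dlam = 1.3e-1, 1.0e-2   # Lambda dN/dy (placeholder)
pro, dpro = 1.0,    5.0e-2   # proton dN/dy (placeholder)

num = ratio(hyp, dhyp, he3, dhe3)        # 3LH / 3He
den = ratio(lam, dlam, pro, dpro)        # Lambda / p
s3, ds3 = ratio(num[0], num[1], den[0], den[1])
print(f"S3 = {s3:.3f} +- {ds3:.3f}")
```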
Furthermore, in the whole B.R. variation interval, the SHM is more than 2.7σ and 6.9σ away from the measured ³ΛH/Λ × B.R. for the V_c = 1dV/dy and V_c = 3dV/dy configurations, respectively.

In summary, the first measurement of the production yield of the hypertriton in p-Pb collisions at the LHC is reported. Measurements of ³ΛH yields in p-Pb collisions provide an opportunity to discriminate between nucleosynthesis models. The measured p_T-integrated yield excludes, with high significance, canonical versions of the SHM with V_c ≥ 3dV/dy as an explanation of (hyper)nuclei production in p-Pb collisions. It remains to be seen whether advanced versions of the SHM using the S-matrix approach to account for the interactions among hadrons [66] will be able to resolve this discrepancy. The ³ΛH/Λ ratio is well described by the two-body coalescence prediction, while the three-body formulation is slightly disfavoured by our measurement. While the general conclusions of the comparison with the models are unaltered even when considering large variations of the B.R.(³ΛH → ³He + π⁻) around the value available in the literature, the significance of the comparison between data and models is influenced by this uncertainty. Upcoming studies using the LHC Run 2 Pb-Pb data will help to reduce this uncertainty by measuring the relative branching ratio of the ³ΛH → d + p + π⁻ decay channel. Furthermore, with the upgraded ALICE apparatus and the upcoming LHC Run 3, it will be possible to reduce both the statistical and the systematic uncertainties of the ³ΛH yield measurements in pp [67] and p-Pb collisions, and to study ³ΛH production as a function of the size of the nucleon-emitting source measured with femtoscopic correlations. These studies may make it possible to decisively distinguish between the two production models.

Acknowledgements

The ALICE Collaboration would like to thank all its engineers and technicians for their invaluable contributions to the construction of the experiment and the CERN accelerator teams for the outstanding performance of the LHC complex.

[30] A. Andronic, P. Braun-Munzinger, K. Redlich, and J. Stachel, "Decoding the phase structure of QCD via particle production at high energy", Nature 561 (2018).
2021-07-23T01:15:42.915Z
2021-07-22T00:00:00.000
{ "year": 2021, "sha1": "9c8e2593f02b297f0dd305c96e293075a9421317", "oa_license": "CCBY", "oa_url": "http://link.aps.org/pdf/10.1103/PhysRevLett.128.252003", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "9c8e2593f02b297f0dd305c96e293075a9421317", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
268083976
pes2o/s2orc
v3-fos-license
The novel cytotoxic polybisphosphonate osteodex decreases bone resorption by enhancing cell death of mature osteoclasts without affecting osteoclastogenesis of RANKL-stimulated mouse bone marrow macrophages

It has previously been demonstrated that the polybisphosphonate osteodex (ODX) inhibits bone resorption in organ-cultured mouse calvarial bone. In this study, we further investigate the effects of ODX on osteoclast differentiation, formation, and function in several different bone organ and cell cultures. Zoledronic acid (ZOL) was used for comparison. In retinoid-stimulated mouse calvarial organ cultures, ODX and ZOL significantly reduced the numbers of periosteal osteoclasts without affecting Tnfsf11 or Tnfrsf11b mRNA expression. ODX and ZOL also drastically reduced the numbers of osteoclasts in cell cultures isolated from the calvarial bone and in vitamin D3-stimulated mouse crude bone marrow cell cultures. These data suggest that ODX can inhibit osteoclast formation by inhibiting the differentiation of osteoclast progenitor cells or by directly targeting mature osteoclasts. We therefore assessed whether osteoclast formation in purified bone marrow macrophage cultures stimulated by RANKL was inhibited by ODX and ZOL, and found that the initial formation of mature osteoclasts was not affected, but that the bisphosphonates enhanced cell death of mature osteoclasts. In agreement with these findings, ODX and ZOL did not affect the mRNA expression of the osteoclastic genes Acp5 and Ctsk or the osteoclastogenic transcription factor Nfatc1. When bone marrow macrophages were incubated on bone slices, ODX and ZOL inhibited RANKL-stimulated bone resorption. In conclusion, ODX does not inhibit osteoclast formation but inhibits osteoclastic bone resorption by decreasing osteoclast numbers through enhanced cell death of mature osteoclasts.

Introduction

Bisphosphonates (BPs) are anti-resorptive pharmaceuticals that have been used for several decades in the treatment of diseases with excessive formation of osteoclasts, like osteoporosis, skeletal metastasis of malignant tumors, and malignant osteolysis with hypercalcemia [1]. The efficacy of BPs as inhibitors of bone resorption and skeletal fractures is demonstrated by the findings that a yearly intravenous administration of zoledronic acid to postmenopausal women reduced all clinical fractures by 35% during a 2-year follow-up [2], and new vertebral fractures and hip fractures by 70 and 41%, respectively, during a 3-year follow-up [3]. According to the "seed-and-soil" hypothesis, suggested by Paget more than 100 years ago [4], osteoclasts provide tumor growth substances from the bone matrix during bone resorption in osteolytic lesions in patients with breast and lung cancer, as well as in sclerotic lesions in patients with prostatic cancer [5-7]. For this reason, BPs are used with the aim not only to protect the skeleton from excessive bone resorption and skeletal-related events (SRE) but also to reduce tumor growth [8]. Zoledronic acid (ZOL) has been shown to significantly reduce the time to first SRE and the overall risk of SRE in breast cancer patients with skeletal metastases [9] and in patients with skeletal metastasis of lung cancer and other solid tumors except breast and prostatic cancers [10,11].
The general chemical structure of BPs is two phosphonate groups linked to a central carbon (P-C-P) and with two side chains, R1 and R2, linked to the central carbon [1,12]. The P-C-P group renders the compounds resistant to degradation by phosphatases. The R1 side group is usually a hydroxyl group facilitating binding to hydroxyapatite crystals in bone, and R2 may have a range of chemical structures. There are two classes of BPs, nitrogen-containing and those without nitrogen, where those having nitrogen in the R2 side chain are second- and third-generation BPs. BPs containing nitrogen are several orders of magnitude more potent as anti-resorptive agents than the first-generation BPs [13]. Third-generation BPs are the most potent compounds, with a tertiary nitrogen incorporated within a ring structure, e.g., imidazole in ZOL. BPs bound to bone become internalized in osteoclasts [14] when these cells resorb bone, which causes accumulation of BPs at concentrations high enough to inhibit osteoclast activity through apoptotic cell death [15,16].

The current prevailing hypothesis regarding the primary mode of action, i.e., how nitrogen-containing BPs inhibit osteoclastic bone resorption, is inhibition of farnesyl diphosphate synthase (FPPS), an enzyme in the mevalonate pathway [12]. The ultimate consequence of FPPS inhibition is loss of geranylgeranylated GTPases, leading to disruption of the osteoclast cytoskeleton, osteoclast apoptotic cell death, and loss of bone resorption activity [17], although the intracellular mechanism leading to the proapoptotic pathway is unknown.

Osteodex (ODX) is a polymer conjugate constituting a carbohydrate backbone with alendronate and guanidine moieties covalently coupled to the backbone. ODX is bifunctional, having anti-resorptive properties and pronounced anti-tumor efficacy [18]. We have previously reported that ODX inhibits bone resorption in organ-cultured mouse calvarial bones [18]. The aim of the present study was to explore the cellular mechanism by which ODX inhibits bone resorption. To achieve this, we utilized various bone organ and bone cell culture systems that allowed us to study osteoclast differentiation, formation, and function.

Materials

Recombinant mouse macrophage colony-stimulating factor (M-CSF) and recombinant extracellular domain of mouse receptor activator of NF-κB ligand (RANKL) (Arg72-Asp316) fused to a six-histidine-residue tag (cat. no. 462-TR) were purchased from R&D Systems; the kit for leukocyte acid phosphatase staining, SIGMA 104 Phosphatase Substrate, ATRA, and zoledronic acid were from Sigma Chemical Co. (www.sigmaaldrich.com); α-modification of minimum essential medium (α-MEM) and fetal calf serum (FCS) were from Thermo Fisher Scientific; Thermo Sequenase™ II DYEnamic ET™ terminator cycle sequencing kits were from Amersham (www.amersham.com); oligonucleotide primers were from Invitrogen (www.invitrogen.com) or Applied Biosystems (www.appliedbiosystems.com); the HotStarTaq polymerase kit and QIAquick PCR Purification Kit were from QIAGEN Ltd. (www.qiagen.com);
DNA free was obtained from Ambion, Inc. (www.ambion.com); the 1st strand cDNA synthesis kit and the PCR Core Kit were from Roche (www.roche-applied-science.com); fluorescent-labeled probes (reporter fluorescent dye VIC at the 5′ end and quencher fluorescent dye TAMRA at the 3′ end), TaqMan Universal PCR Master Mix, and the kits for quantitative real-time PCR were from Applied Biosystems (www.appliedbiosystems.com); culture dishes, multiwell plates, and glass Chamber Slides were from Nunc Inc. (www.nuncbrand.com); suspension culture dishes were from Corning Inc. (www.scienceproducts.corning.com); and bone slices and CrossLaps® for Culture ELISA (CTX) were from Immunodiagnostics a/s (www.idsplc.com/no/home/). 1,25(OH)2-vitamin D3 (D3) was a kind gift from Hoffmann-La Roche, Basle, Switzerland. ODX was a kind gift from DexTech Medical, Uppsala, Sweden. The cathepsin K antiserum was a kind gift from Professor Göran Andersson at Karolinska Institute, Stockholm, Sweden.

Animals

We utilized CsA mice from our own inbred colony at Umeå University to conduct bone organ cultures, periosteal cell cultures, and crude bone marrow cell cultures. These mice have been extensively used in numerous studies for over 30 years, and the results obtained have always been comparable to those seen in other mouse strains, including C57BL/6 mice. C57BL/6 mice from Harlan Laboratories, Inc., and Taconic Bioscience were used for the bone marrow macrophage cultures. We ensured that animal care and experiments were conducted in accordance with internationally accepted standards of humane animal care. Additionally, we used animals only as deemed appropriate by the Animal Care and Use Committees of Umeå University, Umeå, and the University of Gothenburg, Gothenburg.

Mouse calvarial bone cultures

Parietal bones from 5- to 7-day-old mice were microdissected and cut into calvarial halves. The bones were preincubated for 18-24 h in α-MEM containing 0.1% albumin and 1 µmol/l indomethacin [19,20]. Following preincubation, the bones were extensively washed and subsequently cultured for 96 h in multiwell culture dishes containing 1.0 ml of indomethacin-free medium, with or without test substances. The bones were incubated in the presence of 5% CO2 in humidified air at 37 °C. At the end of the cultures, bones were used for immunohistochemistry or gene expression analysis.

Mouse calvarial periosteal cell cultures

Cells were isolated from 2- to 5-day-old mice using time-sequential collagenase digestion, and cells from all digestions (1-10) were pooled [21]. These isolations contain not only osteoblastic cells but also osteoclast progenitor cells [21,22]. The periosteal cells were seeded in 2 cm² multiwell dishes at a density of 10³ cells/cm² and incubated in α-MEM/10% FCS in the absence or presence of RANKL, with or without either ODX or ZOL, for 12 days. At the end of the cultures, the cells were stained for tartrate-resistant acid phosphatase (TRAP), and cells with more than three nuclei expressing TRAP were considered osteoclasts and their numbers counted (TRAP+ MuOCL).
Mouse bone marrow cell cultures

Bone marrow cells (BMC) were flushed from femurs and tibiae of 5- to 7-week-old male mice. BMC were seeded in 48-well plates (10^6 cells/cm^2), incubated overnight in α-MEM/10% FCS, and subsequently cultured in the same medium in the absence or presence of 1,25(OH)2-vitamin D3, with or without ODX or ZOL, for 9 days. After this time period, the cells were fixed with acetone in citrate buffer/3% formaldehyde and stained for TRAP. TRAP-positive cells with three or more nuclei were considered osteoclasts, and the number of multinucleated osteoclasts was counted (TRAP+ MuOCL).

Mouse bone marrow macrophage cultures

Bone marrow cells were flushed from femurs and tibiae and seeded in α-MEM/10% FCS containing 30 ng/ml mouse M-CSF on plastic suspension culture dishes, to which stromal cells and lymphocytes do not adhere [21,23]. After 2 days, the adhering cells (bone marrow macrophages (BMM)) were detached, and then 5,000 cells in 5 µl α-MEM/10% FBS were spot seeded at the center of 96-well plates and left to adhere for 10 min. Subsequently, 200 µl medium was added containing either 30 ng/ml of M-CSF (controls) or 30 ng/ml M-CSF + 4 ng/ml of RANKL, without or with ODX or ZOL. After 3-4 days, the cells were fixed and stained for TRAP. TRAP-positive cells with three or more nuclei were considered osteoclasts, and multinucleated osteoclasts (TRAP+ MuOCL) were counted. For the actin ring staining, osteoclasts were fixed after 4 days with 4% phosphate-buffered formaldehyde for 20 min, washed 3 times in PBS, and permeabilized using 0.1% Triton X-100 for 10 min. Then, the cells were incubated with 2% BSA/PBS and stained with FITC-labeled phalloidin diluted 1:40 in 2% BSA/PBS for 30 min. In some experiments, 20,000 cells in 20 µl α-MEM/10% FBS were spot-seeded in 48-well plates, cultured as above, and used for gene expression analysis.

Mouse BMM were also seeded on slices of devitalized bovine bone (2 × 10^4 cells/bone slice) in 96-well plates in α-MEM/10% FCS and cultured for up to 14 days, with a change of medium every third day. Subsequently, cells were removed, and bones were stained with 0.5% toluidine blue to visualize resorption pits. The release of CTX into the culture medium during resorption was analyzed by CrossLaps ELISA.

Immunohistochemistry

Calvarial bones were fixed in 4% phosphate-buffered paraformaldehyde; decalcified in 10% EDTA in Tris buffer, pH 6.95; and embedded in paraffin. Sections were cut, deparaffinized, fixed in cold acetone, and subsequently treated with 3% H2O2 in PBS and an Avidin/Biotin blocking kit. After blocking with protein block, sections were incubated with unlabeled polyclonal rabbit anti-mouse cathepsin K [24] diluted 1:700 or normal rabbit serum as a negative control. After blocking with normal goat serum, biotin-labeled goat anti-rabbit serum was used as a secondary antibody and was followed by incubation with a VECTASTAIN ABC kit and DAB substrate kit. All sections were counterstained with Mayer's hematoxylin and evaluated using a Leica Q500MC microscope (Leica, Cambridge, UK) by an observer (CL) blinded to the identity of the sections. The numbers of cathepsin K-positive multinucleated cells per section were determined; two sections per bone were analyzed.
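For readers reproducing the spot-seeding step described in the bone marrow macrophage section above (e.g., 5,000 cells in 5 µl, or 20,000 cells in 20 µl), the required suspension concentration and dilution are simple to compute. The following Python helper is a minimal sketch added for illustration only; it is not part of the original protocol, and the stock concentration in the example is an invented value.

```python
# Hypothetical helper for the spot-seeding arithmetic described above.
def required_concentration(cells_per_spot: float, spot_volume_ul: float) -> float:
    """Suspension concentration (cells/ml) needed to deliver the desired
    number of cells in one spot of the given volume (µl)."""
    return cells_per_spot / (spot_volume_ul / 1000.0)  # convert µl to ml

def dilution_volumes(stock_conc: float, target_conc: float, final_volume_ml: float):
    """Volumes (stock_ml, medium_ml) to prepare final_volume_ml at target_conc."""
    if target_conc > stock_conc:
        raise ValueError("Stock is too dilute; concentrate the cells first.")
    stock_ml = final_volume_ml * target_conc / stock_conc
    return stock_ml, final_volume_ml - stock_ml

# 5,000 cells in a 5 µl spot requires a 1.0e6 cells/ml suspension:
target = required_concentration(5_000, 5)  # -> 1_000_000.0 cells/ml
# With an invented stock of 4.0e6 cells/ml, preparing 2 ml of working suspension:
print(dilution_volumes(stock_conc=4.0e6, target_conc=target, final_volume_ml=2.0))
# -> (0.5, 1.5): 0.5 ml stock + 1.5 ml medium
```

Note that both spot formats above (5,000 cells in 5 µl and 20,000 cells in 20 µl) correspond to the same 10^6 cells/ml working suspension.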
RNA extraction and gene expression

RNA was isolated from mouse calvarial bone cultures using the RNAqueous-4 PCR kit. Single-stranded cDNA was synthesized from 0.1 to 0.5 µg of total RNA using a High Capacity cDNA Reverse Transcription Kit. Quantitative real-time PCR analysis of Tnfsf11 and Tnfrsf11b was performed using the KAPA™ Probe Fast qPCR Kit with primers and probe as described in detail previously [25].

RNA from bone marrow macrophage cultures was isolated using the RNeasy Micro Kit. Single-stranded cDNA was synthesized using a High Capacity cDNA Reverse Transcription Kit, and gene expression was analyzed using custom TaqMan Assays. The following premade primer-probe mixes from Applied Biosystems were used: Acp5 (Mm00475698_m1), Ctsk (Mm00484036_m1), Nfatc1 (Mm00479445_m1), Fas (Mm01204974_m1), Bax (Mm00432051_m1), Bcl2 (Mm00477631_m1), Bcl2l1 (Mm00437783_m1). The housekeeping gene 18S was used as endogenous control, and the data were displayed as percent of control.

Statistics

Statistical differences were analyzed using one-way ANOVA, followed by Dunnett's multiple comparisons test versus ATRA-, D3-, RANKL-, or M-CSF/RANKL-treated cells, as indicated.

Osteodex decreases osteoclast numbers in mouse calvarial bones

We have previously shown that ODX can inhibit bone resorption, as assessed by mineral release, in organ-cultured mouse calvarial bones stimulated by all-trans-retinoic acid (ATRA) [18], a well-known stimulator of bone resorption in vitro and in vivo [26]. To determine whether inhibition of bone resorption by ODX was a result of decreased numbers of osteoclasts, we stimulated bone resorption in the neonatal mouse calvarial bones in ex vivo organ cultures by using ATRA (10^−7 M) [27] with and without either ODX (2 × 10^−7 M) or ZOL (2 × 10^−7 M). ZOL is the most potent, clinically used nitrogen-containing BP, inhibiting bone resorption by causing osteoclast apoptosis [28], and it was used in the present experiments as a positive control. ODX and ZOL significantly decreased the numbers of cathepsin K+ osteoclasts present in the periosteum of the calvarial bones after ATRA treatment (Fig. 1A).

In the calvarial bones, the inhibitors might act either at the level of RANKL-producing osteoblasts or by directly targeting the osteoclasts or their progenitors. We, therefore, assessed if ODX affected the ATRA-induced expression of RANKL and its inhibitor OPG. Neither ODX nor ZOL affected the robust enhancement of Tnfsf11 mRNA expression (encoding RANKL) induced by ATRA (Fig. 1B). The mRNA expression of Tnfrsf11b (encoding OPG) was not affected by ATRA, in agreement with previous observations [27], and the expression was not affected by co-treatment with ODX or ZOL (Fig. 1C).

Osteodex decreases the numbers of osteoclasts in mouse calvarial periosteal cell cultures

The osteoclasts formed in the mouse organ-cultured bones are derived from mononuclear osteoclast progenitors present in the periosteum. We have reported that such progenitors are present in collagenase-digested periosteal cell isolations from neonatal mouse calvarial bones and that stimulation of cells isolated from the periosteum results in mature osteoclast formation [21]. As shown in Fig. 2A, B, TRAP+ cells were formed in unstimulated control cultures, but very few were TRAP+ MuOCL (Fig. 2F). Stimulation of the cell cultures with RANKL (10 ng/ml) for 12 days resulted in the formation of many TRAP+ MuOCL (Fig. 2A, C), and this response was abolished by ODX and ZOL (Fig. 2A, D-F).
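To make the quantification conventions in the Methods sections above concrete (expression normalized to the 18S endogenous control and displayed as percent of control, then compared by one-way ANOVA with Dunnett's test), the sketch below shows one common way to implement them. This is an illustration, not the authors' actual analysis script: the Ct values are invented, the relative quantification shown is the standard 2^-ΔΔCt method (the paper does not state which algorithm was used), and scipy.stats.dunnett requires SciPy ≥ 1.11.

```python
import numpy as np
from scipy import stats

def percent_of_control(ct_gene, ct_18s, ct_gene_ctrl, ct_18s_ctrl):
    """Relative expression by the 2^-ddCt method, scaled to percent of control."""
    d_ct = np.asarray(ct_gene) - np.asarray(ct_18s)            # normalize to 18S
    d_ct_ctrl = np.mean(ct_gene_ctrl) - np.mean(ct_18s_ctrl)   # control baseline
    return 100.0 * 2.0 ** -(d_ct - d_ct_ctrl)

# Invented Ct values, four observations per group (control vs. two treatments):
ctrl_gene, ctrl_18s = [26.1, 26.3, 25.9, 26.2], [9.1, 9.0, 9.2, 9.1]
groups = {
    "control":   percent_of_control(ctrl_gene, ctrl_18s, ctrl_gene, ctrl_18s),
    "treated_a": percent_of_control([25.0, 25.2, 24.9, 25.1],
                                    [9.0, 9.1, 9.2, 9.0], ctrl_gene, ctrl_18s),
    "treated_b": percent_of_control([27.1, 27.3, 27.0, 27.2],
                                    [9.1, 9.0, 9.1, 9.2], ctrl_gene, ctrl_18s),
}

# One-way ANOVA across all groups, then Dunnett's test versus the control group:
print(stats.f_oneway(*groups.values()))
res = stats.dunnett(groups["treated_a"], groups["treated_b"],
                    control=groups["control"])
print(res.pvalue)  # one p-value per treated group vs. control
```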
Osteodex decreases the numbers of osteoclasts in mouse bone marrow cell cultures

Bone marrow cell cultures are widely used to assess osteoclast formation. To explore the potential of ODX in inhibiting the formation of osteoclasts from bone marrow osteoclast progenitor cells, we stimulated crude mouse bone marrow cell (BMC) cultures with 1,25(OH)2-vitamin D3 (D3). D3 primarily targets stromal cells present in the BMC cultures, inducing their expression of RANKL, and subsequently the differentiation of RANK+ osteoclast progenitors is stimulated [29]. Stimulation of BMC cultures with D3 (10^−8 M) for 6 days resulted in the formation of many TRAP+ MuOCL (Fig. 3A, B). Co-treatment with either ODX or ZOL substantially decreased the number of TRAP+ MuOCL (Fig. 3A, B).

Osteodex enhances osteoclast cell death without affecting osteoclast differentiation in bone marrow macrophage cultures

The observations in the calvarial bone and BMC cultures suggest that ODX decreases the numbers of osteoclasts either by inhibiting osteoclast progenitor cell differentiation or fusion at late stages, or by acting directly on mature osteoclasts to enhance cell death. To further assess if ODX can directly target osteoclast progenitor cells, we used purified bone marrow macrophage (BMM) cultures, which were stimulated with M-CSF and RANKL to induce osteoclastogenesis. We then analyzed the effects of ODX and ZOL either by analyzing osteoclast differentiation in TRAP-stained cultures at different time points or by analyzing the expression of osteoclastic and osteoclastogenic genes.

In BMM cultures stimulated by M-CSF (30 ng/ml) and RANKL (4 ng/ml) for 3 days, most of the mononucleated cells were TRAP+, and some of them had formed TRAP+ MuOCL (Fig. 4A). After 4 days, a large proportion of the BMM stimulated by M-CSF/RANKL had formed mature TRAP+ MuOCL (Fig. 4A), and 1 day later, several of these cells had started to die, as assessed by their morphology (Fig. 4A). Treatment of the M-CSF/RANKL-stimulated BMM with ODX (2 × 10^−7 M) did not affect the appearance of mononucleated TRAP+ cells or mature TRAP+ MuOCL at day 3 or 4 (Fig. 4A). At day 5, however, very few mature TRAP+ MuOCL could be seen (Fig. 4A). Similar to ODX, treatment with ZOL (2 × 10^−7 M) did not affect the appearance of TRAP+ mono- or multinucleated osteoclasts at day 3 (Fig. 4A). At day 4, the numbers of mature TRAP+ MuOCL were fewer in ZOL-treated cultures than in M-CSF/RANKL-stimulated BMM with or without ODX (Fig. 4A). At day 5, no mature TRAP+ MuOCL could be seen in the ZOL-treated cells, similar to the observation in ODX-treated BMM. In agreement with these findings, ODX did not affect the presence of mature osteoclasts with phalloidin+ actin rings at day 4, whereas these cells were much fewer in ZOL-treated BMM cultures (Fig. 4B).

To investigate if ODX can inhibit bone resorption when BMM cells are cultured on bone slices, we incubated BMM on bovine bone slices for 14 days in the presence of M-CSF/RANKL with or without ODX or ZOL. The formation of osteoclasts in BMM cultures on bone slices was considerably delayed compared to BMM cultures on plastic dishes, and therefore bone resorption was assessed by analyzing the release of the bone matrix fragment CTX from the bones to the media during days 10 to 14, and bone resorption pits were visualized by toluidine blue staining on day 14. Stimulation of BMM with M-CSF/RANKL resulted in the formation of numerous resorption pits, a response that was substantially reduced by ODX or ZOL (Fig. 4C).
The release of CTX was enhanced approximately tenfold from bone slices with BMM stimulated by M-CSF/RANKL compared to M-CSF-stimulated controls (Fig. 4D). This response was substantially decreased by ODX and ZOL.

The effect of ZOL on mature osteoclasts in the BMM cultures was concentration-dependent (Fig. 5). At day 4, ZOL at 1 × 10^−9 M had a modest effect, and the response gradually became more evident at 1 × 10^−8 M, 1 × 10^−7 M, and 2 × 10^−7 M. At day 5, effects were more pronounced than at day 4 at all concentrations tested.

These observations indicate that ODX, similar to ZOL, decreases osteoclast numbers by inducing cell death in mature osteoclasts without affecting the differentiation of mononucleated osteoclast progenitor cells. However, the cell death response seems slightly delayed and slightly less potent compared to ZOL. To further confirm that ODX does not affect the differentiation of osteoclast progenitor cells, we next analyzed the expression of genes in BMM known to be associated with osteoclastogenesis. M-CSF/RANKL stimulation induced the mRNA expression of Acp5 (encoding TRAP) and Ctsk (encoding cathepsin K), as expected (Fig. 6). The expression of these osteoclastic genes was not significantly affected by ODX or ZOL at early (day 2) or late stages (days 3 and 4) of osteoclastogenesis. This finding suggests that ODX and ZOL did not interfere with osteoclastogenic signaling mechanisms downstream of the receptor RANK. This conclusion was further confirmed by the observation that the M-CSF/RANKL-induced upregulation of the mRNA expression of the osteoclastogenic gene Nfatc1 was likewise unaffected (Fig. 6).

To assess if the activities of ODX and ZOL on mature osteoclast cell death were associated with regulation of anti- or pro-apoptotic genes, we analyzed the mRNA expression of four such genes. The Bax and Fas genes, known to be pro-apoptotic, were both significantly downregulated by RANKL (Fig. 7A, B). This response was not affected by ODX or ZOL (Fig. 7A, B). The Bcl2 and Bcl2l1 genes are known to be anti-apoptotic, and both were significantly downregulated by RANKL, responses also unaffected by ODX or ZOL (Fig. 7C, D).

Discussion

ODX is a polymer based on a carbohydrate backbone with alendronate and guanidine moieties, having anti-tumor and anti-resorptive capacity [18]. ODX has been investigated in clinical trials (phase I, phase II) for the treatment of bone metastases in castration-resistant prostate cancer. The results confirm a profound inhibitory effect on bone markers, primarily on osteoclast markers and secondarily on osteoblast markers. Direct anti-tumoral effects were recorded without significant side effects [30,31]. We here demonstrate that ODX exerts anti-resorptive activity by enhancing mature osteoclast cell death without affecting the differentiation of osteoclast progenitor cells to mature osteoclasts.
We have previously reported that ODX inhibits bone resorption in ex vivo cultures of neonatal mouse calvariae, resulting in decreased calcium release from the explants. Here, we demonstrate that this response is associated with a robust decrease in multinucleated osteoclast numbers. The stimulator used, ATRA (the biologically active metabolite of vitamin A [26]), increases osteoclast formation and bone resorption indirectly through enhanced production of the osteoclastogenic cytokine RANKL [27]. RANKL can be produced by several cell types, including osteoblasts and osteocytes, and binds to the cognate receptor RANK on mononuclear osteoclast progenitor cells to induce their differentiation to mature, multinucleated osteoclasts [32]. The decoy receptor osteoprotegerin (OPG) also binds to RANKL and inhibits the binding to RANK. We found that ODX did not affect the mRNA expression of Tnfsf11 (encoding RANKL) or Tnfrsf11b (encoding OPG), indicating that ODX inhibited osteoclast formation through a direct action on either osteoclast progenitors or mature osteoclasts.

Periosteal cells isolated from neonatal mouse calvariae, often designated mouse calvarial osteoblasts, are enriched with osteoblasts but also contain a substantial amount of macrophages/osteoclast progenitor cells, which will form osteoclasts when stimulated with RANKL or by osteoclastogenic cytokines and hormones enhancing the expression of RANKL in osteoblasts [21]. Recent single-cell RNA sequencing has also demonstrated the presence of macrophages in these isolations and that their numbers increase during cell culture [22]. To gain further support for the conclusion that ODX targets cells in the osteoclastic lineage, we stimulated calvarial periosteal cell cultures with RANKL with and without ODX. The finding that ODX abolished RANKL-stimulated osteoclast formation further shows that ODX can inhibit osteoclast formation independent of RANKL/OPG production, although these experiments cannot demonstrate whether ODX exerts its anti-osteoclastogenic effect by inhibiting osteoclast differentiation or by promoting mature osteoclast cell death.

The experiments using calvarial explants and calvarial cells indicate that ODX can decrease the number of osteoclasts on the cortical periosteum. We used bone marrow cell cultures to assess whether ODX can inhibit osteoclast formation also on endosteal surfaces or on trabecular bone. We found that ODX robustly inhibited osteoclast formation also in these cell cultures.

The common standard view is that inhibition by BPs of bone resorption in vivo is due to the binding of BPs to bone mineral through their affinity for hydroxyapatite crystals, and that osteoclasts are exposed to high concentrations of BPs when the mineral crystals are dissolved during the resorptive process [28]. This is the reason why one injection of ZOL per year is sufficient for the treatment of patients with osteoporosis. The reduction of mature osteoclasts observed in cell cultures on plastic dishes due to ODX illustrates the ability of ODX to target mature osteoclasts irrespective of its binding to mineral crystals and without requiring active bone resorption by osteoclasts.

To investigate if ODX inhibited osteoclast formation in the cell cultures by interfering with osteoclast differentiation or by enhancing mature osteoclast cell death, we next purified macrophages from bone marrow (BMM) and used them as osteoclast progenitor cells [23,33]. Since all cells in these cultures express the macrophage marker CD11b, as assessed by FACS analysis [33], ODX can only target cells in the macrophage/osteoclast lineage in these cultures.
Fig. 4 Osteodex (ODX) and zoledronic acid (ZOL) do not affect osteoclast formation in bone marrow macrophage (BMM) cultures but enhance mature osteoclast cell death and inhibit late stages of bone resorption. BMMs were purified from bone marrow cells and then incubated in the presence of M-CSF (M; 30 ng/ml) and RANKL (RL; 4 ng/ml) with or without ODX or ZOL, both at 2 × 10^−7 M. At the stated time periods, cells were stained for TRAP (A) or with FITC-labeled phalloidin (B). In separate experiments, BMM were incubated on bone discs and resorption pits visualized by toluidine blue staining and reflective light microscopy after 14 days (C), and CTX released to the culture medium during days 10-14 was analyzed (D). Data are means of four observations, and SEM is given as vertical bars. Asterisks denote statistical significance; ***P < 0.001, one-way ANOVA, followed by Dunnett's multiple comparisons test versus M/RL.

Similar to the findings in bone marrow cell cultures, ODX robustly decreased the number of osteoclasts in the BMM cultures, as demonstrated in TRAP-stained cultures and by staining of the characteristic actin ring in mature osteoclasts. When BMM cells were incubated on bone slices, ODX robustly inhibited the release of CTX from the bone slices, demonstrating the anti-resorptive effect of ODX, although this analysis cannot discriminate between inhibition of osteoclast formation and stimulation of osteoclast cell death. Osteoclast counting was not performed in these experiments, since the continuous fusion of mono- and multinucleated osteoclasts into huge, pancake-like osteoclasts, which gradually become apoptotic, makes osteoclast counts an inaccurate measure. The observation that ODX did not affect RANKL-induced upregulation of mRNA expression of the osteoclastic genes Acp5 and Ctsk during the first 4 days in the BMM cultures, a time period when the mononuclear osteoclast progenitors differentiate to mature osteoclasts, demonstrates that ODX does not affect osteoclast progenitor cell differentiation. Intracellular signaling downstream of activated RANK includes activation of MAPK and transcription factors such as AP-1, NF-κB, PU.1, MITF, and NFATc1, with NFATc1 being considered the master regulator of osteoclastogenesis [34]. The fact that ODX did not affect RANKL-induced upregulation of Nfatc1 mRNA expression shows that ODX does not affect RANK signaling upstream of Nfatc1. Similarly, ZOL did not affect the mRNA expression of osteoclastic and osteoclastogenic genes.
In all three assay systems used, the effects of ODX were similar to those obtained by ZOL, a well-documented and clinically often used BP. Since ODX, similar to ZOL, is a nitrogen-containing BP, and since this class of BP exerts its anti-osteoclastic effects through stimulating mature osteoclast apoptosis [28], we made detailed observations on osteoclast morphology at different time points during the culture. It was evident that ODX had no effect on mature osteoclast numbers at early time points (days 3 and 4) but a clear effect at day 5, also demonstrating that ODX does not affect osteoclast differentiation and formation of mature osteoclasts. The remnants of osteoclasts observed at late time points had the appearance of apoptotic osteoclasts, with parts of the cell membrane, nuclei, and cytosolic compartments persisting, although formal proof that ODX caused mature osteoclast death by apoptosis would require more detailed analyses. This response was very sensitive and observed at ODX concentrations of 0.01 µM and above, which is considerably lower than the concentrations usually used to study effects of BPs on apoptosis in vitro, where values in the range of 10-100 µM are often used [17,35].

Fig. 6 Osteodex (ODX) and zoledronic acid (ZOL) do not affect the expression of osteoclastic (Acp5, Ctsk) and osteoclastogenic (Nfatc1) genes induced by RANKL. Bone marrow macrophages were incubated in the presence of M-CSF (M; 30 ng/ml) and RANKL (RL; 4 ng/ml) with or without ODX or ZOL, both at 2 × 10^−7 M. After 2, 3, and 4 days, RNA was extracted, and gene expression was analyzed. Data are means of four observations, and SEM is given as vertical bars. Asterisks denote statistical significance; **P < 0.01 and ***P < 0.001, one-way ANOVA, followed by Dunnett's multiple comparisons test versus M/RL.

Fig. 7 Osteodex (ODX) and zoledronic acid (ZOL) do not affect the expression of pro-apoptotic (A) and anti-apoptotic (B) genes regulated by RANKL. Bone marrow macrophages were incubated in the presence of M-CSF (M; 30 ng/ml) and RANKL (RL; 4 ng/ml) with or without ODX or ZOL, both at 2 × 10^−7 M. At the stated time periods, RNA was extracted, and gene expression was analyzed. Data are means of four observations, and SEM is given as vertical bars. Asterisks denote statistical significance; **P < 0.01 and ***P < 0.001, one-way ANOVA, followed by Dunnett's multiple comparisons test versus M/RL.

We assessed the mRNA expression of four pro- and anti-apoptotic genes regulated by RANKL, but none was affected by ODX, which suggests that ODX-induced cell death is mediated by other mechanisms. This observation further supports our conclusion that ODX does not affect RANK signaling. Both nitrogen-containing BPs and non-nitrogen BPs inhibit osteoclasts, but only nitrogen-containing BPs inhibit the mevalonate pathway [17]. Nitrogen-containing BPs inhibit the incorporation of 14C-mevalonate into both farnesylated and geranylgeranylated GTP-binding proteins in rabbit osteoclasts [17]. The fact that GGTI-298, a specific inhibitor of geranylgeranyl transferase I, induces osteoclast apoptosis indicates that geranylgeranylation of proteins is more important than farnesylation of proteins for osteoclast function.
In conclusion, we here report that ODX does not inhibit mature osteoclast formation but inhibits osteoclastic bone resorption by decreasing osteoclast numbers through enhanced cell death of mature osteoclasts. ODX and ZOL seem to be equipotent as inhibitors of bone resorption [18] and inducers of osteoclast cell death (present study). Important from a clinical point of view is the observation that ODX is considerably more potent than ZOL as an inducer of apoptotic cell death in human prostate cancer and breast cancer cell lines [18]. These findings indicate that ODX causes apoptosis in tumor cells and cell death of mature osteoclasts by different mechanisms.

Fig. 1 Osteodex (ODX) and zoledronic acid (ZOL) decrease osteoclast numbers induced by all-trans-retinoic acid (ATRA) in cultured neonatal mouse calvarial bones (A) without affecting the mRNA expression of the osteoclastogenic cytokine Tnfsf11 or its decoy inhibitor Tnfrsf11b (B, C). Calvarial explants were incubated in the presence of ATRA (10^−7 M) with or without ODX or ZOL, both at 2 × 10^−7 M, for 96 h, and numbers of osteoclasts per section were counted after immunostaining for cathepsin K (A). Gene expression of Tnfsf11 and Tnfrsf11b was analyzed after 48 h (B, C). Data are means of four observations, and SEM is given as vertical bars. Asterisks denote statistical significance; *P < 0.05, **P < 0.01, and ***P < 0.001, one-way ANOVA, followed by Dunnett's multiple comparisons test versus ATRA.

Fig. 2 Osteodex (ODX) and zoledronic acid (ZOL) decrease osteoclast numbers induced by RANKL in mouse calvarial periosteal cell cultures. Cells were isolated from neonatal mouse calvaria and incubated in the presence of RANKL (10 ng/ml) with or without ODX or ZOL, both at 2 × 10^−7 M, for 12 days and then stained for TRAP. Overview photo of TRAP-stained cells in culture plate (A), representative microscope photos (B), and counting of TRAP+ MuOCL (C). Data are means of four observations, and SEM is given as vertical bars. Asterisks denote statistical significance; ***P < 0.001, one-way ANOVA, followed by Dunnett's multiple comparisons test versus RANKL-treated cells.

Fig. 5 Osteodex (ODX) and zoledronic acid (ZOL) enhance mature osteoclast cell death in a concentration-dependent manner. Bone marrow macrophages were incubated in the presence of M-CSF (M;
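The concentration series behind Fig. 5 (ZOL at 1 × 10^−9 to 2 × 10^−7 M) lends itself to a standard dose-response summary. The following is an illustrative sketch only: it fits a three-parameter logistic curve to invented osteoclast counts, and neither the numbers nor the fitting procedure come from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Three-parameter logistic: response falls from `top` toward zero around log_ic50.
def logistic(log_c, top, log_ic50, hill):
    return top / (1.0 + 10.0 ** (hill * (log_c - log_ic50)))

# Invented example data: mean TRAP+ MuOCL counts at four ZOL concentrations (M).
conc = np.array([1e-9, 1e-8, 1e-7, 2e-7])
counts = np.array([118.0, 84.0, 31.0, 12.0])

popt, _ = curve_fit(logistic, np.log10(conc), counts, p0=[120.0, -8.0, 1.0])
top, log_ic50, hill = popt
print(f"estimated IC50 ~ {10.0 ** log_ic50:.1e} M (Hill slope {hill:.2f})")
```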
Crosstalk Between Mammalian Autophagy and the Ubiquitin-Proteasome System

Autophagy and the ubiquitin-proteasome system (UPS) are the two major intracellular quality control and recycling mechanisms that are responsible for cellular homeostasis in eukaryotes. Ubiquitylation is utilized as a degradation signal by both systems, yet different mechanisms are in play. The UPS is responsible for the degradation of short-lived proteins and soluble misfolded proteins, whereas autophagy eliminates long-lived proteins, insoluble protein aggregates and even whole organelles (e.g., mitochondria, peroxisomes) and intracellular parasites (e.g., bacteria). Both the UPS and selective autophagy recognize their targets through their ubiquitin tags. In addition to an indirect connection between the two systems through ubiquitylated proteins, recent data indicate the presence of connections and reciprocal regulation mechanisms between these degradation pathways. In this review, we summarize these direct and indirect interactions and crosstalk between autophagy and the UPS, and their implications for cellular stress responses and homeostasis.

INTRODUCTION

The ubiquitin-proteasome system (UPS) and macroautophagy (hereafter referred to as autophagy) are two major intracellular protein degradation pathways. Degradation of short-lived proteins through the UPS is initiated by sequential addition of ubiquitin chains to target proteins (Hershko, 1983, 2005; Finley, 2009). Polyubiquitylated proteins are then recognized by the subunits of multicatalytic protease complexes called proteasomes (Hershko and Ciechanover, 1998; Schwartz and Ciechanover, 2009). Proteasomes are extremely efficient organelles that degrade short-lived proteins and soluble unfolded/misfolded proteins and polypeptides. On the other hand, long-lived proteins, insoluble protein aggregates (usually originating from misfolded proteins and disease-related mutant proteins) and dysfunctional organelles, such as degenerated mitochondria and peroxisomes, are eliminated by the autophagy-lysosome system (Huber, 2003, 2004; Klionsky, 2007). Autophagy is characterized by the formation of double-membrane structures termed autophagosomes, which later on fuse with lysosomes, forming autolysosomes that degrade autophagosome contents. The UPS and autophagy are interconnected, and inhibition of one system was shown to affect the other. There is accumulating evidence in the literature about connections between the UPS and autophagy. In this review article, we will first briefly summarize the two systems, and then discuss in detail various examples of coordination and crosstalk between them. For more detailed discussion on the individual systems, the readers are referred to recently published excellent review articles (Collins and Goldberg, 2017; Kwon and Ciechanover, 2017; Mizushima, 2018; Yu et al., 2018). This review article mainly focuses on the mammalian system and advances in this field. For crosstalk in other systems, such as plants, readers should check other recent and relevant reviews [for example, see (Minina et al., 2017)].

The Ubiquitin-Proteasome System

Ubiquitylation-dependent degradation is involved in the regulation of several cellular processes, including protein quality control, transcription, cell cycle progression, DNA repair, cell stress response and apoptosis.
For example, during cell cycle regulation, timely progression through each phase of the cycle relies on sequential transcription and degradation of cell cycle proteins, such as cyclins (Glotzer et al., 1991; Benanti, 2012). During apoptosis, the ubiquitylation leading to the degradation of survivin depends on the ubiquitin ligase XIAP (Arora et al., 2007; Altieri, 2010; Delgado et al., 2014). Ubiquitylation involves the addition of the small protein ubiquitin to specific lysine residues on the target proteins. Covalent attachment of ubiquitin to protein targets occurs through a three-step mechanism involving E1 (ubiquitin-activating), E2 (ubiquitin-conjugating) and E3 (ubiquitin ligase) enzymes, as summarized in Figure 1 (Hershko and Ciechanover, 1998). At least seven lysine (K) residues in the ubiquitin protein are involved in polyubiquitin chain formation (K6, K11, K27, K29, K33, K48, or K63). Initially, K48-linked ubiquitin chain formation was introduced as the degradation signal for proteasomal degradation. In contrast, K11 or K63 chains or single ubiquitin moieties (monoubiquitylation) were initially connected to non-proteolytic functions (Welchman et al., 2005; Behrends and Harper, 2011). However, recent reports indicate that K63-linked ubiquitin chains, as well as various other chains, prime substrates for autophagic elimination (Tan et al., 2008b).

The 26S proteasome is an ATP-dependent protease complex, consisting of a core complex, the 20S proteasome, and a regulatory complex, the 19S proteasome cap. The 20S proteasome forms a barrel-shaped structure with two end rings formed by α subunits regulating the entry of unfolded proteins, and two middle rings composed of β subunits harboring the proteolytic activity (Heinemeyer et al., 2004). Substrates must be unfolded and then guided by the α subunits prior to catalytic cleavage. At the end, polypeptides are chopped into 3-25 amino acid long fragments, and further cleavage to single amino acids is carried out by peptidases (Tomkinson and Lindås, 2005) (Figure 1). In this way, recycling of proteins results in the generation of amino acids that are ultimately reused by cells in the synthesis of new proteins. The 26S proteasome contains an additional 19S cap structure that further regulates the internalization of ubiquitylated substrates (Lander et al., 2012). The central part of the 19S cap consists of six AAA ATPases (Rpt1-Rpt6) forming the Rpt ring, which is responsible for substrate binding and unfolding as well as substrate transfer through the channel (Collins and Goldberg, 2017). Non-ATPase proteins in the 19S cap, such as Rpn10 and Rpn13, possess ubiquitin-binding domains and therefore function as receptors for ubiquitin-labeled substrates (Finley, 2009).

Recent studies showed that ubiquitylation is a reversible phenomenon. Deubiquitinating enzymes (DUBs) are proteases that remove ubiquitin or ubiquitin-like molecules from substrates and disassemble polyubiquitin chains. DUBs regulate UPS-mediated degradation in different cellular contexts (Reyes-Turcu et al., 2009; He et al., 2016; Pinto-Fernandez and Kessler, 2016). Moreover, they play an important role in the control of the available free ubiquitin pool in cells, allowing recycling and reuse of ubiquitin. Some DUBs are also responsible for processing newly synthesized ubiquitin precursors (Komander et al., 2009; Lee et al., 2011; Grou et al., 2015; Collins and Goldberg, 2017).

Autophagy

There are three major types of autophagy: macroautophagy, microautophagy and chaperone-mediated autophagy (CMA).
In this review, we chose to focus on macroautophagy (herein autophagy). CMA and microautophagy are discussed elsewhere (Kaushik and Cuervo, 2018; Oku and Sakai, 2018). Autophagy is characterized by the engulfment of cargo molecules by double-membrane vesicles called autophagosomes (Klionsky, 2007; Mizushima, 2010, 2018; Lamb et al., 2013). Following closure, autophagosomes are transported by the microtubule system, leading to their fusion with late endosomes and lysosomes, forming autolysosomes. In this new compartment, sequestered cargos are degraded by the action of lysosomal hydrolases. Building blocks that are generated by hydrolysis of macromolecules (e.g., amino acids from protein degradation) are then transferred back to the cytosol for reuse (Figure 2).

Active at a basal level, autophagy is upregulated following a number of stimuli and stress conditions. Amino acid deprivation, serum starvation and growth factor deprivation, hypoxia, and exposure to various chemicals and toxins might be counted among the stress conditions activating autophagy. Most autophagy-inducing signals converge at the level of the mTOR protein complexes (mTORC1 and mTORC2), which coordinate anabolic and catabolic processes (Sabatini, 2017; Saxton and Sabatini, 2017) (Figure 2). The cellular energy sensor AMPK directly regulates mTOR and therefore contributes to the regulation of the autophagic activity. Moreover, the ERK/RSK pathway, the PI3K/AKT pathway, the amino acid sensor RAG system as well as hypoxia are among the autophagy-related pathways converging at the level of mTOR. Under normal conditions, mTORC1 limits the autophagic activity through inactivation of the ULK1/2 autophagy complex. mTORC1-dependent phosphorylation of ULK1 and Atg13 (Hosokawa et al., 2009) results in the inactivation of the ULK1/2 complex and downregulation of autophagy. Under stress, mTORC1 is inhibited and the ULK1/2 complex is dephosphorylated. ULK1/2 then phosphorylates itself, Atg13 and FIP200, and activates autophagy.

A class III phosphatidylinositol 3-kinase (PI3K) complex, including the lipid kinase VPS34 and the regulatory protein Beclin1, controls the membrane nucleation stage and initial phagophore formation. Phosphatidylinositol 3-phosphate (PtdIns3P) that is generated by PI3K activity serves as a landing pad for autophagy-related proteins containing PI3P-binding domains (e.g., FYVE domains). Among them, WIPI1-4 and DFCP1 were involved in the formation of a membrane structure called the omegasome or cradle, a structure that creates a platform for the elongation of autophagosome precursor isolation membranes (Mauthe et al., 2011; Mercer et al., 2018). Elongation of the isolation membrane depends on two ubiquitin-like conjugation systems. In the first system, autophagy-related gene 12 (ATG12) is covalently conjugated to the ATG5 protein through the action of the ATG7 (E1-like) and ATG10 (E2-like) proteins. Then, recruitment of the ATG16L1 protein to the ATG12-5 dimer results in the formation of a larger complex. The resulting ATG12-5-16L1 oligomers serve as E3 ligases that conjugate lipid molecules (such as phosphatidylethanolamine) to the ATG8 orthologs MAP1LC3, GATE16 and GABARAP (Shpilka et al., 2011; Tsuboyama et al., 2016). Lipid-conjugated ATG8 proteins are required for the elongation, expansion and closure of autophagosome membranes (Nakatogawa et al., 2007). In order to acquire lytic capacity, autophagosomes fuse with late endosomes or lysosomes.
In mammalian cells, fusion requires the lysosomal integral membrane protein LAMP-2, several SNARE proteins (e.g., STX17 and VAMP8) and RAB proteins (e.g., RAB5 and RAB7) (Tanaka et al., 2000; Jager, 2004). Following fusion of the outer membrane of autophagosomes, materials contained within the inner membrane are degraded by the action of lysosomal hydrolases (Tanida et al., 2004). Building blocks (e.g., amino acids, fatty acids etc.) are then transported back to the cytosol for reuse in the metabolic processes of the cells.

Autophagic vesicles engulf targets such as portions of cytoplasm and various cytoplasmic components in a non-selective manner. On the other hand, several selective forms of autophagy have been described (Kraft et al., 2010; Anding and Baehrecke, 2017). In most cases, ubiquitylation of the cargo constitutes a key step in the chain of events leading to its autophagic removal (Kirkin et al., 2009; Rogov et al., 2014). Selective targets include mitochondria (Okamoto et al., 2009), peroxisomes (Till et al., 2012), lysosomes (Hung et al., 2013), the endoplasmic reticulum (ER) (Khaminets et al., 2015), ribosomes (An and Harper, 2018), cytoplasmic protein aggregates (Lamark and Johansen, 2012), pathogenic intracellular invaders (Wileman, 2013) and even certain free proteins and RNAs (Huang et al., 2015). In this way, cells control the number of their organelles, eliminate dysfunctional components and get rid of potentially harmful aggregates and invaders.

THE UPS-AUTOPHAGY CONNECTION

The UPS and autophagy are the two major and evolutionarily conserved degradation and recycling systems in eukaryotes. Although their activities are not interdependent, recent studies show that connections and crosstalk exist between the two systems. Mitophagy constitutes a prominent example connecting these two degradative systems, yet several other examples exist. In this section, we will summarize biological events involving autophagy and the UPS, and discuss molecular details of the crosstalk mechanisms.

Compensation Between the Two Degradative Pathways

Initial observations about functional connections between the UPS and autophagy systems revealed that inhibition of one led to a compensatory upregulation of the other system. In order to maintain homeostasis, cellular materials that accumulate following inhibition of one degradative system need to be cleared, at least in part, by the other system (Figure 3). Here, we will give examples of scenarios where these compensation mechanisms are operational.

Inhibition of the UPS using various compounds (e.g., MG132, bortezomib, lactacystine etc.) (Wu et al., 2008; Selimovic et al., 2013; Fan et al., 2018) or by genetic approaches (Demishtein et al., 2017) resulted in the upregulation of the autophagic activity in cells (Figure 3). For example, inhibition of proteasomal activity by the proteasome inhibitor and chemotherapy agent bortezomib led to an increase in the expression of the autophagy genes ATG5 and ATG7, and induced autophagy. In fact, autophagy gene upregulation depended on an ER stress-dependent pathway that involved eukaryotic translation initiation factor-2 alpha (eIF2α) phosphorylation (Zhu et al., 2010). In another study, proteasome inhibition was associated with an increase in p62 and GABARAPL1 levels through Nrf1-dependent and -independent pathways prior to autophagy activation (Sha et al., 2018).
In other contexts, MG132-mediated proteasome inhibition resulted in a decrease in cell proliferation, cell cycle arrest at the G2/M phase and stimulation of autophagy through upregulation of Beclin1 and LC3 (Ge et al., 2009). Autophagy induction following proteasome inhibition correlated with AMPK activation as well. A number of studies provided evidence that proteasomal inhibition is sensed by both AMPK and mTORC1, two major regulators of autophagy. For instance, in macrophages, epithelial and endothelial cells, proteasome inhibition using chemicals resulted in the activation of AMPK (Xu et al., 2012; Jiang et al., 2015). In some other cancer cell types, CaMKKβ and glycogen synthase kinase-3β (GSK-3β) were identified as upstream regulators of AMPK activation, and proteasome inhibition was linked to a decrease in GSK-3β activity and to the activation of AMPK and autophagy. On the other hand, Torin-1- or rapamycin-mediated inhibition of mTOR stimulated long-lived protein degradation through activation of both the UPS and autophagy (Zhao and Goldberg, 2016). In retinal pigment epithelial cells, inhibition of the proteasome by lactacystin and epoxomicin was shown to block the AKT-mTOR pathway and induce autophagy. SiRNA-mediated knockdown of the Psmb7 gene, coding for the proteasome β2 subunit, resulted in enhanced autophagic activity, and this was linked to the mTOR activation status of cultured cardiomyocytes (Kyrychenko et al., 2013).

Similarly, impairment of autophagy correlated with the activation of the UPS. In colon cancer cells, chemical inhibition of autophagy and siRNA-mediated knockdown of ATG genes resulted in the upregulation of proteasomal subunit levels, including the catalytic proteasome β5 subunit PSMB5, and led to increased UPS activity. In another study, 3-MA-mediated autophagy inhibition in cultured neonatal rat ventricular myocytes (NRVMs) increased the chymotrypsin-like activity of proteasomes (Tannous et al., 2008). Since proteasomes were identified as autophagic degradation targets (proteaphagy), enhanced proteasome peptidase activity following autophagy inhibition might be associated with the accumulation of proteasomes (Cuervo et al., 1995; Marshall et al., 2015). Yet in several cases, autophagy inhibition correlated with the accumulation of ubiquitylated proteins. For instance, in independent studies with ATG5 or ATG7 knockout mice, accumulation of ubiquitylated conjugates was observed, especially in the brain and the liver of the animals (Komatsu et al., 2005, 2006; Hara et al., 2006; Riley et al., 2010). Similar results were observed in other animal models such as Drosophila (Nezis et al., 2008). In line with these data, inhibition of autophagy through siRNA-mediated knockdown of ATG7 and ATG12 in HeLa cells resulted in the impairment of the UPS and accumulation of ubiquitylated proteins as well as other important UPS substrates, including p53 and β-catenin (Korolchuk et al., 2009a). In the above-cited papers, autophagy impairment was followed by accumulation of the autophagy receptor p62 in cells, which played a key role in the observed UPS defects.

Ubiquitylation was proposed to be a common component that directs substrates to the proper degradation system and even contributes to the UPS-autophagy crosstalk (Korolchuk et al., 2010; Dikic, 2017). According to this view, proteins that are predominantly linked to K48-based ubiquitin chains are generally directed for degradation through the UPS. Conversely, aggregates that are linked to K63-based ubiquitin chains are directed for autophagic degradation.
P62 binding capacity was introduced as the critical step in the choice between the UPS and autophagy. Although p62 is able to bind both K48- and K63-linked ubiquitin chains through its UBA domain, the binding affinity of the protein for K63-linked chains seems to be higher (Long et al., 2008; Tan et al., 2008a; Wooten et al., 2008). Due to this dual ubiquitin-binding ability, p62 might show UPS-inhibitory effects in some contexts. A competition between p62 and p97/VCP (a ubiquitin-binding ER-associated degradation protein) determined the fate of ubiquitylated proteins in cells (Korolchuk et al., 2009a,b). Overexpression of the p97/VCP protein prevented binding of p62 to ubiquitylated substrates and directed them for degradation by the UPS. On the other hand, accumulation of p62 following autophagy inhibition led to the sequestration of proteins that were otherwise p97/VCP targets. In summary, in the case of a defect in one of the two degradation systems, the other system is upregulated in order to eliminate ubiquitylated protein substrates. Yet, compensation does not always work, and its success largely depends on cell type, cellular and environmental conditions, and target protein load.

Interplay Between the UPS and Autophagy in the Selective Clearance of Cytosolic Proteins

The function of proteins depends on their proper folding and 3D structures. Various insults, including heat shock, organellar stress, oxidative stress etc., might lead to the accumulation of unfolded or misfolded proteins. Moreover, several disease-related mutations were associated with folding problems. Failure to refold results in dysfunctional or malfunctioning, hence toxic, protein accumulations, and in the activation of stress and even cell death pathways. In order to control toxic protein accumulations, an active process of protein aggregate formation comes into play. Additionally, some proteins, including mutant proteins, are already prone to form aggregates.

Selective clearance of most cytosolic proteins requires ubiquitylation. Depending on their solubility, ubiquitylated proteins and protein aggregates are then cleared by the UPS or autophagy. Soluble fractions of proteins with a folding problem are recognized by the chaperone machinery and directed to the UPS for degradation. The Hsp70 and Hsp90 chaperone interactor CHIP was identified as one of the E3 ligases responsible for K48-linked ubiquitin chain addition to unfolded/misfolded proteins. BAG family proteins, especially BAG1, interact with the Hsp70 complex and induce proteasomal degradation of client proteins. On the other hand, clearance of insoluble aggregate-prone proteins requires the formation of aggresomes. Ubiquitylation by a number of different E3 ligases, including CHIP, Parkin, HRD1 and TRIM50, primes aggregate-prone proteins (Olzmann et al., 2007; Mishra et al., 2009; Zhang and Qian, 2011; Mao et al., 2017). HDAC6 is another protein that plays a key role in the process of aggresome formation. HDAC6 was shown to provide the link between K63-based ubiquitylated aggregates and the microtubule motor protein dynein (Matthias et al., 2008; Olzmann et al., 2007). A dynein-mediated mechanism then directs the aggregates toward microtubule organizing centers (MTOCs), resulting in their piling up as aggresomes (Johnston et al., 1998; Kopito, 2000) (Figure 4). Following aggresome formation, direct interaction of the adaptor proteins p62 and NBR1 with ubiquitylated aggregates results in their delivery to autophagosomes (Ichimura et al., 2008; Lamark and Johansen, 2012).
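The triage logic laid out in the preceding paragraphs (and in Figure 4) can be condensed into a small decision rule. The snippet below is a conceptual toy written for this summary, not a biological model: the attributes and the rule are deliberately simplified, and real substrates carry mixed chain types.

```python
from dataclasses import dataclass

@dataclass
class Substrate:
    soluble: bool
    linkage: str  # predominant ubiquitin chain type, e.g., "K48" or "K63"

def triage(s: Substrate) -> str:
    """Toy routing rule distilled from the text: soluble, K48-tagged proteins go
    to the 26S proteasome; K63-tagged or insoluble species are moved along
    microtubules (HDAC6/dynein) to the MTOC, piled into aggresomes, and cleared
    by selective autophagy via the p62/NBR1 receptors."""
    if s.soluble and s.linkage == "K48":
        return "UPS: 26S proteasome"
    return "aggresome -> selective autophagy (p62/NBR1)"

print(triage(Substrate(soluble=True, linkage="K48")))   # UPS: 26S proteasome
print(triage(Substrate(soluble=False, linkage="K63")))  # aggresome -> autophagy
```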
Another autophagy-related protein, ALFY, was also identified as a player in the selective autophagy and degradation of aggresomes (Clausen et al., 2010; Filimonenko et al., 2010).

FIGURE 4 | Misfolded proteins can be eliminated by both the UPS and the autophagy system. Misfolded proteins are ubiquitylated and, based on differences in ubiquitin linkages and ubiquitin-binding proteins, they are directed for proteasomal degradation or further accumulated in aggresomes. Aggresomes are selectively cleared by autophagy.

An alternative pathway for aggresome formation requires the Hsp70 partner proteins BAG3 and CHIP (Zhang and Qian, 2011). Similar to HDAC6, BAG3 binds to dynein, and this directs Hsp70 substrates to aggresomes. However, BAG3-dependent aggresome formation was not dependent on the ubiquitylation of substrates, as in the case of HDAC6, and CHIP E3 ligase activity was dispensable (Gamerdinger et al., 2011; Zhang and Qian, 2011). Yet, E3 ligases such as CHIP were required for BAG3-dependent aggresome clearance by autophagy (Klimek et al., 2017).

Proteolytic Degradation of the UPS or Autophagy Components as a Mutual Control Mechanism

So far, we have focused on the UPS and autophagy as complementary but independent mechanisms. However, there are cases where components of one system were reported to be proteolytic targets of the other system. For example, a number of autophagy proteins were shown to be regulated through degradation by the UPS. On the other hand, even whole proteasomes were shown to be selective targets of autophagic degradation. Here, we will give examples of how mutual regulation through proteolysis contributes to the crosstalk and interplay between the two systems.

Control of the UPS by the Autophagic Activity

Early studies indicated that proteasomes could be degraded in lysosomes (Cuervo et al., 1995). Later on, plant studies revealed that lysosomal degradation of 26S proteasomes occurred by a specific form of selective autophagy, proteaphagy (Marshall et al., 2015). The RPN10 protein was introduced as an ATG8-interacting plant proteaphagy receptor. Unlike the plant protein, yeast and mammalian RPN10 failed to interact with ATG8/LC3. Instead, the Cue5 protein in yeast and its human ortholog TOLLIP were introduced as selective receptors regulating proteasome clearance by autophagy (Lu et al., 2014). Moreover, p62 was also described as another proteaphagy receptor (Cohen-Kaplan et al., 2016). For example, in mammals, amino acid starvation significantly upregulated ubiquitylation of the 19S proteasome cap components RPN1, RPN10 and RPN13, and led to their p62-mediated recruitment to autophagosomes (Cohen-Kaplan et al., 2016) (Figure 5). Interestingly, during carbon or nitrogen starvation, plant and yeast proteasomes were shown to localize in proteasomal storage granules (PSGs), protecting them from autophagic degradation during stress (Peters et al., 2016; Marshall and Vierstra, 2018). Whether similar mechanisms exist in mammals is currently an open question. These observations underline the importance of selective degradation of proteasomes by autophagy in the control of proteasome numbers as well as overall UPS and lytic activity in cells.

Control of Autophagy Components by the UPS

Modulation of the half-life of some proteins in the autophagy pathway by the UPS serves as a means to control cellular autophagic activity. For instance, the LC3 protein was shown to be processed in a stepwise manner by the 20S proteasome, a process that was inhibited by p62 binding (Gao et al., 2010).
On the other hand, E3 ligase NEDD4-mediated K11-linked ubiquitylation of Beclin1 prevented its binding to the lipid kinase VPS34 and led to its degradation (Platta et al., 2012). Another E3 ligase, RNF216, ubiquitylated Beclin1, adding K48-linked ubiquitin chains to the protein. Beclin1 ubiquitylation resulted in autophagy blockage in both cases. Conversely, reversal of Beclin1 ubiquitylation by the DUB protein USP19 stabilized the protein under starvation conditions and promoted autophagy (Jin et al., 2016). USP10 and USP13, as well as USP9X, were characterized as other DUBs that regulate autophagy through control of Beclin1 stability (Liu et al., 2011; Jin et al., 2016).

Beclin1 is not the only autophagy protein that is targeted by the UPS in a controlled manner. G-protein-coupled receptor (GPCR) ligands and agonists were reported to regulate cellular Atg14L levels, and therefore autophagy, through ZBTB16-mediated ubiquitylation of the protein. Serum starvation increased GSK3β-mediated phosphorylation of ZBTB16, leading to its degradation. Under these conditions, stabilization of Atg14L restored autophagy. AMBRA1 is another UPS-controlled autophagy protein. Cullin-4 was identified as an E3 ligase responsible for the ubiquitylation of AMBRA1, marking it for degradation under nutrient-rich conditions where autophagy should be inhibited (Antonioli et al., 2014). The PI3K complex subunit p85b is another example. Ubiquitylation of this autophagy signaling component by the E3 ligase SKP1 led to a decrease in its cellular levels and stimulated autophagic activity (Kuchay et al., 2013).

Ubiquitylation of some autophagy proteins did not result in their immediate proteasomal degradation; rather, the post-translational modification provided an extra layer of control for the autophagy pathway. For instance, the autophagy receptor OPTN was ubiquitylated as a target of the E3 ligase HACE1, and K48-linked ubiquitylation regulated the interaction of the protein with p62 (Liu Z. et al., 2014). TRAF6, a central E3 ligase of the NF-κB pathway, participated in the control of ULK1 activity through K63-linked ubiquitylation. Under nutrient-rich conditions, mTOR phosphorylated AMBRA1, leading to its inactivation. When nutrients were limiting, mTOR inhibition resulted in AMBRA1 dephosphorylation and increased the interaction of the protein with TRAF6. This event facilitated ULK1 ubiquitylation by TRAF6 (Nazio et al., 2013). Ubiquitylation of ULK1 resulted in the stabilization of the protein, controlled its dimerization and regulated its kinase activity. Another ubiquitin-dependent regulation mechanism involved the AMBRA1-Cullin-5 interaction in the regulation of the mTOR complex component DEPTOR (Antonioli et al., 2014). The above-mentioned AMBRA1-Cullin-4 complex dissociated under autophagy-inducing conditions, allowing AMBRA1 to bind another E3 ligase, Cullin-5. This newly formed complex was shown to stabilize DEPTOR and induce mTOR inactivation, providing a negative feedback loop in the control of autophagy (Antonioli et al., 2014). In another study, TLR4 signaling triggered autophagy through Beclin1 ubiquitylation and stabilization. The TLR4-associated TRAF6 protein was identified as the E3 ligase responsible for K63-linked ubiquitylation of Beclin1 at its BH3 domain. This modification blocked inhibitory BCL-2 binding to the protein, and free Beclin1 could activate autophagy (Shi and Kehrl, 2010).
On the other hand, the deubiquitinating enzyme A20 reversed TRAF6-mediated ubiquitylation of Beclin1, resulting in autophagy inhibition (Shi and Kehrl, 2010). Another K63-linked ubiquitylation event on Beclin1 was promoted by the AMBRA1 protein. In the same context, the WASH protein interacted with Beclin1, blocked AMBRA1-mediated Beclin1 ubiquitylation, and suppressed autophagy (Xia et al., 2013). LC3 and p62 were also subjected to regulatory ubiquitylation. NEDD4 was identified as the E3 ligase in these reactions. NEDD4 was reported to interact with LC3 and p62 (Lin et al., 2017), and LC3 binding to NEDD4 stimulated its ubiquitin ligase activity on the p62 protein. Moreover, NEDD4-deficient cells exhibited aberrant p62-containing inclusions, indicating a defect in aggresome clearance (Lin et al., 2017). Hence, NEDD4 is important for the regulation of p62 function and autophagy.

Xenophagy: Removal of Intracellular Invaders

Another essential function of autophagy is the clearance of intracellular pathogens. This special form of autophagy, called xenophagy, is the result of a cooperation between the ubiquitylation machinery and the autophagy pathway. Pathogens such as Streptococcus pyogenes, Mycobacterium tuberculosis, Listeria monocytogenes, and Shigella flexneri were identified as autophagy targets (Gutierrez et al., 2004; Kirkegaard et al., 2004; Ogawa et al., 2005). As a form of selective autophagy, xenophagy involves cargo labeling with ubiquitin, followed by recognition by autophagy receptors (Figure 6). K48- and K63-linked and linear M1-linked ubiquitin chains were shown to mediate recognition of different pathogens by the xenophagy machinery (Collins et al., 2009; Randow and Youle, 2014).

FIGURE 6 | Selective degradation of invaders by xenophagy is an example of co-regulation of the UPS and autophagy. The invading bacterium is ubiquitylated by various E3 ligases and recognized by adaptor proteins, leading to the recruitment of autophagic membranes around the bacterium.

For example, both K48- and K63-linked ubiquitylation were observed on Mycobacterium, and Parkin was identified as the E3 ligase catalyzing the K63-linked ubiquitylation (Collins et al., 2009; Manzanillo et al., 2013). Moreover, endosome-free areas on intracellular Salmonella Typhimurium contained a directly attached ubiquitin coat, and the addition of linear M1-linked ubiquitin chains to these ubiquitins by HOIP, the E3 ligase of the LUBAC complex, contributed to the autophagy of the intracellular parasite (Noad et al., 2017). Xenophagy receptors described to date include p62, OPTN, NDP52, and NBR1 (Thurston, 2009; Zheng et al., 2009; Wild et al., 2011). These receptors were reported to bind pathogen- and/or endosome-associated ubiquitin and to direct the selective targets to autophagic membranes (Wild et al., 2011; Richter et al., 2016). The interplay between ubiquitylation and autophagy achieves the important task of keeping host cells pathogen-free and provides an intracellular innate immune defense mechanism against invaders. In some reports, ubiquitylated bacteria were found to be surrounded by proteasomes as well (Perrin et al., 2004), and proteasomal activity might also be required for efficient killing of intracellular parasites (Iovino et al., 2014). Whether the crosstalk between the UPS and autophagy systems in the elimination of invading organisms goes beyond ubiquitylation needs further consideration.
As discussed below, cellular mechanisms controlling commensal-turned ancient intracellular microorganisms, namely mitochondria, indeed rely on the function of both the UPS and autophagy.

Mitophagy: Mitochondrial Turnover

Mitochondria are vital organelles that form an intracellular dynamic network in the cytosol of eukaryotic cells. Through fusion and fission, they are constantly made and destroyed. Under steady-state conditions, mitochondria might be eliminated by basal autophagy in a non-selective manner. On the other hand, elimination of damaged, dysfunctional or superfluous mitochondria requires a selective form of autophagy called mitophagy (Lemasters, 2005). Programmed elimination of mitochondria during development and differentiation (e.g., during reticulocyte maturation to erythrocytes, in oocytes after fertilization, during lens formation in the eye) also relies on mitophagy (Schweers et al., 2007; Song et al., 2016; Esteban-Martínez et al., 2017). Recent studies showed that mitophagy is a biological phenomenon that involves both the UPS and autophagy. In this section, we will discuss mechanisms of mitophagy and analyze connections between the UPS and autophagy in this context.

PINK1/Parkin-Dependent Mitophagy

Depending on the E3 ligase that ubiquitylates proteins on mitochondria, mitophagy can be divided into two major forms: Parkin-dependent and Parkin-independent mitophagy. The E3 ligase Parkin was first characterized as the product of the gene PARK2, mutations of which were linked to early-onset Parkinson's Disease. Strikingly, Parkin recruitment to mitochondria was found to be necessary for mitophagy (Narendra et al., 2008). Further studies showed that Parkin, together with another familial Parkinson's Disease-associated gene, PINK1 (PARK6), was responsible for priming mitochondria for autophagic degradation (Figure 7). Under normal conditions, after being synthesized as a precursor in the cytoplasm, PINK1 is imported into mitochondria via its N-terminal mitochondria targeting sequence (MTS). Then, PINK1 is post-translationally modified within mitochondria by the resident proteases MPP and PARL (Jin et al., 2010; Deas et al., 2011). Cleavage by PARL results in destabilization of the protein and its degradation by cytoplasmic proteasomes (Yamano and Youle, 2013). Under mitochondrial stress, however, PINK1 cleavage does not occur, and the protein accumulates on the outer mitochondrial membrane (OMM) (Lazarou et al., 2012; Hasson et al., 2013). Recruitment of the cytoplasmic E3 ligase Parkin onto mitochondria required stabilization and the kinase activity of the PINK1 protein (Lazarou et al., 2012). Parkin itself was a substrate of PINK1 (Kondapalli et al., 2012; Shiba-Fukushima et al., 2012). Phosphorylation of Parkin by PINK1 resulted in a conformational change overcoming an autoinhibition, and stimulated its E3 ligase activity (Kondapalli et al., 2012; Shiba-Fukushima et al., 2012; Trempe et al., 2013; Wauer and Komander, 2013). Interestingly, PINK1 was shown to phosphorylate ubiquitin molecules on mitochondrial resident proteins as well. Ubiquitin phosphorylation correlated with an increase in the amount of mitochondria-localized Parkin, providing a feed-forward mechanism of Parkin recruitment (Kane et al., 2014; Kazlauskaite et al., 2014; Koyano et al., 2014; Shiba-Fukushima et al., 2014).
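The feed-forward loop just described (PINK1 phosphorylates ubiquitin, phospho-ubiquitin recruits and activates Parkin, and Parkin deposits more ubiquitin for PINK1 to phosphorylate) can be illustrated with a deliberately crude discrete-time toy. All rate constants below are invented, and the sketch ignores saturation and DUB activity; it only shows why the loop amplifies.

```python
# Conceptual toy of PINK1/Parkin feed-forward amplification (all parameters invented).
def simulate(steps: int = 8, pink1: float = 1.0,
             k_phos: float = 0.5, k_recruit: float = 0.8, k_ub: float = 0.6):
    ub, p_ub, parkin = 1.0, 0.0, 0.0  # OMM ubiquitin, phospho-ubiquitin, bound Parkin
    for t in range(steps):
        p_ub += k_phos * pink1 * ub    # PINK1 phosphorylates ubiquitin on OMM proteins
        parkin += k_recruit * p_ub     # phospho-ubiquitin recruits/activates Parkin
        ub += k_ub * parkin            # active Parkin ubiquitylates more OMM substrates
        print(f"t={t}: Ub={ub:6.1f}  pUb={p_ub:6.1f}  Parkin={parkin:6.1f}")

simulate()  # values grow super-linearly, reflecting the positive feedback
```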
The list includes VDAC, TOM proteins, mitofusins, etc. (Sarraf et al., 2013). Following ubiquitylation, some of these targets were shown to be degraded by the proteasome (e.g., mitofusins) and some were not (e.g., VDAC). Degradation of proteins related to mitochondrial integrity promoted fission events that facilitate engulfment of mitochondrial portions by autophagosomes, whereas proteins that are not degraded upon ubiquitylation rather contributed to mitochondrial rearrangements (e.g., aggregation). The UPS activity was a prerequisite in the preparation of mitochondria for autophagy: ubiquitylation of mitochondrial targets preceded the recruitment of the autophagic machinery onto mitochondria (Yoshii et al., 2011). Selective autophagy receptors were shown to bind ubiquitin-labeled proteins on mitochondria and recruit ATG8/LC3 proteins for mitophagy. Serial knockout of putative autophagy receptors showed that NDP52, optineurin (OPTN), and TAX1BP1 were functional mitophagy receptors, and a triple knockout of these proteins completely blocked mitophagy (Lazarou et al., 2015; Shi J. et al., 2015). On the other hand, the autophagy receptor p62 was essential for the clustering of damaged mitochondria in the perinuclear region of the cells, but not for mitophagy (Okatsu et al., 2010). Ubiquitin modifications on mitochondria might be reversed by the action of DUB proteins. Several DUBs were identified as positive or negative regulators of mitophagy (Dikic and Bremm, 2014; Wang et al., 2015). For example, deubiquitylation of mitochondrial targets by USP15, USP30, and USP35 prevented further progression of mitophagy in a number of cell lines and experimental models (Bingol et al., 2014; Cornelissen et al., 2014; Wang et al., 2015). DUB-mediated deubiquitylation of targets decreased Parkin recruitment onto mitochondria as well (Bingol et al., 2014). USP8-mediated removal of K6-linked ubiquitin chains from Parkin itself affected recruitment of the protein onto mitochondria and therefore mitophagy (Durcan et al., 2014; Durcan and Fon, 2015).

Parkin-Independent Mitophagy

Expression of Parkin is restricted to a few cell types, including dopaminergic neurons. Consequently, Parkin-null animals showed prominent mitophagy defects only in selected brain regions. Therefore, in other cell types and tissues, mitophagy has to proceed in a Parkin-independent manner. Alternative E3 ligases were found to play a role in mitophagy in these contexts. Mulan (MUL1) is an E3 ubiquitin ligase that resides on the OMM, and it was shown to play a role in Parkin-independent mitophagy in different model organisms, including Caenorhabditis elegans, Drosophila, and mammals (Ambivero et al., 2014; Yun et al., 2014). Mulan stabilized DRP1, led to degradation of MFN2, and interacted with the ATG8 family member protein GABARAP (Braschi et al., 2009; Ambivero et al., 2014). Another E3 ligase associated with mitophagy was GP78 (Christianson et al., 2012). Overexpression of GP78 induced MFN1 and MFN2 ubiquitylation and degradation, which was followed by mitochondrial fragmentation and mitophagy in cells lacking Parkin (Fu et al., 2013). Synphilin-1-dependent recruitment of the E3 ligase Siah1 to mitochondria resulted in mitochondrial protein ubiquitylation and mitophagy in a PINK1-dependent but Parkin-independent manner (Szargel et al., 2015).
Conversely, another OMM E3 ligase, MITOL (MARCH5), was reported to ubiquitylate FIS1 and DRP1 (Yonashiro et al., 2006) as well as MFN2, yet it inhibited hypoxia-induced and Parkin-independent mitophagy through ubiquitylation and degradation of FUNDC1 (Chen et al., 2017). All these findings underline the fact that mitophagy might proceed in cells which do not express Parkin. Further studies are required to unravel the molecular mechanisms of Parkin-independent mitophagy in different tissues and cell types, and to reveal the details of the crosstalk between the UPS and autophagy under these conditions.

A Special Type of Mitophagy During Reticulocyte Maturation

During differentiation, in order to increase their capacity to load hemoglobin-bound oxygen, reticulocytes lose their organelles, including mitochondria, and become mature red blood cells (Dzierzak and Philipsen, 2013). During this process, a protein called NIX (also known as BNIP3L) is upregulated (Aerbajinai et al., 2003). NIX is a C-terminally anchored outer mitochondrial membrane (OMM) protein that contains a LC3-interacting region (LIR) in its cytoplasmic N-terminal part. Through its LIR domain, NIX interacted with LC3, enabling engulfment of mitochondria by autophagosomes in reticulocytes (Novak et al., 2010). Characterization of NIX-deficient mice showed that NIX-deficient erythrocytes failed to eliminate their mitochondria, revealing a critical role for NIX in mitophagy (Schweers et al., 2007; Sandoval et al., 2008) (Figure 7). Although NIX-dependent mitophagy was predominantly studied in reticulocytes, it might be important for other cell types as well [for example, see (Esteban-Martínez et al., 2017)]. A role for the UPS in NIX/BNIP3L-dependent mitophagy was also revealed: NIX/BNIP3L was discovered to be ubiquitylated through a PINK1/Parkin-dependent mechanism. Ubiquitylated NIX/BNIP3L colocalized with selective autophagy receptors, and the process was necessary for mitochondrial stress-induced mitophagy (Ding et al., 2010; Gao et al., 2015; Palikaras et al., 2015). Therefore, the role of NIX/BNIP3L seems to be more general than previously thought, extending beyond the developmental context, and stress-induced mitochondrial elimination by autophagy might also require NIX/BNIP3L in different cell and organism types.

Pexophagy: Autophagic Removal of Peroxisomes

Autophagy of peroxisomes, pexophagy, is a selective degradation process of peroxisomes during which the UPS and autophagy mechanisms work in collaboration. Peroxisomes are responsible for a number of cellular functions, including fatty acid oxidation, purine metabolism, and phospholipid synthesis (Wanders et al., 2016). Several peroxisomal enzymes are involved in redox regulation due to their dual functions in the generation and scavenging of reactive oxygen and nitrogen species. Therefore, peroxisome biogenesis and degradation must be tightly regulated in order to control peroxisome size, number, and function (Du et al., 2015; Honsho et al., 2016). Moreover, under stress conditions such as hypoxia, oxidative stress, starvation, or conditions causing UPS defects, pexophagy is upregulated. During pexophagy, a number of peroxisomal membrane proteins, including peroxins and PMP70, become ubiquitylated (Kim et al., 2008). The PEX2-PEX10-PEX12 complex serves as an E3 ligase for at least two well-studied peroxisome proteins, PEX5 and PMP70. Ubiquitylation of peroxisome proteins results in the recruitment of the p62 and/or NBR1 autophagy receptors, priming these organelles for autophagic degradation.
For example, PEX2 overexpression or amino acid starvation activated the ubiquitylation of PEX5 and of another peroxisomal membrane protein, PMP70, and led to peroxisome degradation (Sargent et al., 2016). Moreover, in response to oxidative stress, ATM was recruited onto peroxisomes through physical interaction with PEX5 and promoted its ubiquitylation. Inactivation of mTORC1 in a TSC2-dependent manner and stimulation of ULK1 phosphorylation by ATM potentiated pexophagy (Tripathi et al., 2016; Wang and Subramani, 2017). On the other hand, the AAA ATPase complex (PEX1, PEX6, and PEX26) was shown to extract ubiquitylated PEX5 from peroxisomal membranes and regulate pexophagy (Carvalho et al., 2007; Okumoto et al., 2011; Law et al., 2017) (Figure 8). Both NBR1 and p62 were shown to be recruited onto peroxisomes during pexophagy. Yet, NBR1 was the major pexophagy receptor in a number of contexts, and p62 increased the efficiency of NBR1-dependent pexophagy through direct interaction with the latter (Deosaran et al., 2013; Sargent et al., 2016). Altogether, these findings underline the importance of ubiquitylation for the selective degradation of peroxisomes by autophagy.

Autophagic Removal of Ribosomes and Stress Granules

In addition to major cellular organelles, autophagy was implicated in the clearance of ribosomes. Although ribosomes can be degraded in a non-specific manner during non-selective autophagy, a special form of selective autophagy is activated under various stress conditions, and the process is called ribosomal autophagy or ribophagy. On the other hand, mRNA-protein complexes that are stalled during translation form stress granules, and their clearance requires both the UPS and autophagy. Ribophagy was first described in yeast during nutrient stress and was shown to involve ubiquitylation of the 60S ribosome protein Rpl25 by the ubiquitin ligase Ltn1/Rkr1 (Ossareh-Nazari et al., 2014). In the mammalian system, in addition to mTOR inhibition, oxidative stress, induction of chromosomal mis-segregation, translation inhibition, and stress granule formation were all shown to induce ribophagy (An and Harper, 2018). Ubiquitylation of ribosomes was observed under ER stress-inducing conditions (Higgins et al., 2015). p97/VCP, which binds to ubiquitylated proteins and functions in the delivery of these substrates to the proteasome, was necessary for ribophagy both in yeast and in mammalian cells (Verma et al., 2013; An and Harper, 2018). Yet, individual ribosomal proteins were indeed shown to be targets of the UPS (Wyant et al., 2018). The NUFIP1-ZNHIT3 proteins were identified as novel ribophagy receptors that directly connect ribosomes to LC3 and autophagy, yet whether ubiquitylation is a prerequisite for ribophagy needs to be clarified by future studies (Wyant et al., 2018) (Figure 9). Accumulating data indicate that both the UPS and autophagy play a role in stress granule control and elimination, and the p97/VCP protein was a key component in these processes. For example, inhibition of autophagy or p97/VCP deficiency was linked to decreased stress granule removal (Buchan et al., 2013). Co-factors of p97/VCP determined the target selectivity of the protein. In this context, while the association of p97/VCP with the co-factor UFD1L led to the degradation of defective ribosomal products and dysfunctional 60S ribosomes by the UPS (Ju et al., 2008; Fujii et al., 2012; Verma et al., 2013), granules associated with HDAC6-containing p97/VCP and PLAA were made targets of ribophagy (Ossareh-Nazari et al., 2010).
Therefore, depending on the co-factor of choice, p97/VCP has a decisive role in the choice of the degradative pathway through which ribonuclear substrates are eliminated.

Cross Talk Between the UPS and Autophagy During Endoplasmic Reticulum Stress

Endoplasmic reticulum (ER) stress is one of the conditions under which both the UPS and autophagy pathways are activated. Abnormalities in calcium homeostasis, oxidative stress, and conditions leading to protein glycosylation or folding defects may result in the accumulation of misfolded and/or unfolded proteins in the ER lumen, a condition known as ER stress. ER stress might be very destructive for cells; therefore, ER-specific stress response pathways such as the unfolded protein response (UPR) and the ER-associated degradation (ERAD) pathway evolved. Both pathways are directly or indirectly connected to the UPS and autophagy. In mammalian cells, accumulation of unfolded proteins in the lumen of the ER results in the activation of stress responses. Following protein accumulation in the ER, the chaperone protein GRP78/BiP dissociates from the lumen-facing parts of the transmembrane proteins IRE1, ATF6, and PERK and binds to unfolded proteins in order to assist their refolding. GRP78/BiP release triggers activation of these stress proteins (Bertolotti et al., 2000; Shen et al., 2002). PERK activation leads to the phosphorylation of the α subunit of the translation initiation factor eIF2α, which inhibits the assembly of the 80S ribosome and cap-dependent protein synthesis while allowing cap-independent translation of stress response genes such as ATF4. Activation of IRE1 and ATF6 promotes transcription of other stress response genes. IRE1-mediated processing generates a spliced form of the XBP1 mRNA, resulting in the production of a transcription factor that upregulates chaperones and other relevant genes. GRP78/BiP dissociation results in the transfer of ATF6 to the Golgi, where cleavage of the protein by the S1P and S2P proteases creates an N-terminal ATF6 fragment possessing transcriptional activity (Figure 10). Due to a decrease in the protein load in the ER and an increased folding capacity, the UPR facilitates recovery from stress. In case of failure, the UPR sensitizes cells to programmed death mechanisms. Components of the UPR were subject to active regulation by the UPS. For example, the SCF-component E3 ligase βTrCP was shown to lead to the ubiquitylation of ATF4 following its phosphorylation (Lassot et al., 2001). Persistent ER stress induced transcription of the E3 ligases Siah1/2 following PERK-ATF4 and IRE1-XBP1 activation; in turn, by targeting the prolyl hydroxylase PHD3, Siah1/2 was shown to regulate ATF4 hydroxylation and activity (Scortegagna et al., 2014). CHOP stability was regulated by the UPS, and p300 and cIAP were responsible for CHOP ubiquitylation and degradation, counterbalancing its upregulation during ER stress (Qi and Xia, 2012; Jeong et al., 2014).

FIGURE 10 | Crosstalk between the UPS and autophagy systems during ER stress and ERAD.

Another UPR component, IRE1, was identified as a ubiquitylation target of the E3 ligase CHIP during ER stress. Ubiquitylation of IRE1 inhibited its phosphorylation, perturbed its interaction with TRAF2, and attenuated JNK signaling (Zhu et al., 2014). Under stress conditions, translation of XIAP, an E3 ligase protein and an inhibitor of apoptosis, was downregulated in a PERK-eIF2α-dependent manner.
In the same context, ATF4 may promote ubiquitylation and degradation of XIAP, leading to sensitization of cells to ER stress-related cell death (Hiramatsu et al., 2014). Conversely, activation of the PERK-eIF2α axis might also show opposing effects through induction of other IAP proteins, cIAP1 and cIAP2, and counterbalance cell death-inducing signals (Hamanaka et al., 2009). Endoplasmic reticulum stress was shown to trigger autophagy, and ER-related stress response mechanisms were involved in the process. PERK-mediated phosphorylation of eIF2α and the resulting ATF4 and CHOP activation were associated with the transcription of genes such as ATG5, ATG12, Beclin1, ATG16L1, LC3, p62, and the TSC2 activator, hence mTOR inhibitor, REDD1 (Whitney et al., 2009; B'Chir et al., 2013). Moreover, CHOP downregulated BCL2 (McCullough et al., 2001). TRB3, an AKT inhibitor protein, was also described as a target of CHOP (Ohoka et al., 2005). In addition, IRE1 activation resulted in the recruitment of ASK1 by the adaptor TRAF2, and the outcome was the activation of the JNK and p38 kinases (Nishitoh et al., 2002). BCL2 is one of the targets of JNK; its phosphorylation by the kinase resulted in destabilization of the inhibitory BCL2-Beclin1 complex, stimulating autophagy (Bassik et al., 2004). On the other hand, the IRE1 splicing target XBP1, in its unspliced form, was shown to target the autophagy activator FOXO1 for degradation by the UPS (Vidal et al., 2012; Xiong et al., 2012). The endoplasmic reticulum is a major calcium store in cells, and calcium release into the cytosol is observed during ER stress. In addition to problems with SERCA refill pumps and leakiness of membranes during stress, upregulation of ERO1-α by CHOP resulted in an IP3-mediated calcium release. The calcium-binding protein calmodulin senses the cytosolic increase in the concentration of the ion and binds to calmodulin-regulated kinases such as CaMKII and DAPK1, modulating their activity. Activated CaMKII was shown to stimulate autophagy through AMPK phosphorylation and activation (Høyer-Hansen et al., 2007). In addition, calmodulin binding and PP2A-mediated dephosphorylation were necessary for the activation of the autophagy-related kinase DAPK1 (Gozuacik et al., 2008). DAPK1 could directly phosphorylate Beclin1 on its BH3 domain, resulting in the dissociation of Beclin1 from the BCL2-Beclin1 complex and allowing it to stimulate autophagy (Zalckvar et al., 2009). Proteins that accumulate in the ER are degraded by the ER-associated degradation (ERAD) system. ERAD mediates transport, extraction, and ubiquitylation of proteins that cannot be salvaged and targets them for degradation in proteasomes. In mammalian cells, ER membrane-resident complexes containing E3 ligases such as HRD1 and GP78, and other regulatory components such as EDEM1, SEL1L, ERManI, and HERP, control the ERAD pathway. The p97/VCP protein and its co-factors also play a role in the pathway (DeLaBarre et al., 2006; Nowis et al., 2006). Unfolded/misfolded proteins are recognized in the lumen of the ER by chaperone proteins, including BiP/GRP78 and EDEM1, and are then targeted to the ERAD pathway. During retrotranslocation of client proteins to the cytosol, ubiquitylation is followed by a p97/VCP-assisted extraction. p97/VCP also assists in the delivery of proteins to proteasomes for degradation.
DUB proteins, including YOD1, USP13, USP19, and Ataxin-3, were implicated in the control of client protein ubiquitylation and ERAD substrate modulation (Zhong and Pittman, 2006; Bernardi et al., 2013; Harada et al., 2016). ER-associated degradation regulators, and therefore ERAD itself, might be controlled by the UPS and autophagy pathways. For example, the E3 ligase Smurf1 was found to be downregulated during ER stress, resulting in the accumulation of its direct ubiquitylation target WFS, which is a stabilizer of the ER-related E3 ligase HRD1 (Guo et al., 2011). Smurf1 was also involved in selective bacterial autophagy (Franco et al., 2017). On the other hand, while the ERAD complex component HERP was degraded by the UPS (Hori et al., 2004), the EDEM1 and ERManI proteins were eliminated by the autophagy machinery (Le Fourn et al., 2013; Park et al., 2014; Benyair et al., 2015). The ER-localized E3 ligase synoviolin was shown to ubiquitylate the HERP protein and control its degradation by the proteasome (Maeda et al., 2018). Other ERAD-related components, EDEM1 and Derlin2, as well as ubiquitylated EDEM1 proteins, colocalized with cytoplasmic aggregates and the autophagy receptors p62 and NBR1, and they were degraded by selective autophagy (Le Fourn et al., 2013; Park et al., 2014). ERManI, a mannosidase that is responsible for priming ER-resident glycosylated proteins for degradation, was described as an accelerator of the ERAD pathway and of the clearance of clients by the UPS. However, following proteasome inhibition and subsequent ER stress, ERManI colocalized with LC3 and was degraded in an autophagy-dependent manner (Benyair et al., 2015). All these findings point to the presence of important junctions and co-regulation nodes between the UPS and autophagy in the context of ER stress. Additionally, ER-phagy, the autophagy of portions of the ER, was implicated in the recovery from ER stress and the control of ER size, but this mechanism was so far described as a ubiquitin-independent process (Schuck et al., 2014).

Transcriptional Mechanisms Connecting the UPS and Autophagy

Several transcription factors that are regulated by the UPS, including p53, NFκB, HIF1α, and FOXO, have been implicated in the control of autophagy. In general, these factors were shown to directly activate transcription of key autophagy genes under stress conditions. Some autophagy proteins such as LC3 are consumed in the lysosome following delivery, and during prolonged stress, cellular levels of these proteins are sustained by mechanisms including transcription. On the other hand, regulation of the transcriptional activity of NRF2 involves a special crosstalk between the two systems. In this section, we will summarize the molecular details of transcription regulation by the UPS and autophagy. NF-κB is a well-studied transcriptional regulator of autophagy. As a result of its association with IκB, NF-κB is found in an inactive state in the cytosol. In response to agonists, IκB was reported to be ubiquitylated and subsequently degraded by the UPS. Regulation of NF-κB by external signals involved phosphorylation of IκB by the upstream kinases of the IKK complex (IKKα, IKKβ, and IKKγ/NEMO). Phosphorylated IκB recruits the E3 ligase SCF-βTRCP and is then degraded in the proteasome (Orian et al., 2000). After IκB degradation, NF-κB is free to migrate to the nucleus of the cell and induce transcription of target genes, including Beclin1 and p62, thereby inducing autophagy (Copetti et al., 2009; Ling et al., 2012).
Another level of regulation involved TNF-α receptor-associated protein complexes. Binding of TNF-α to TNFR1 led to the recruitment of TRADD and RIPK1 to the receptor, promoting TRAF- and cIAP-mediated K63- and/or K11-linked ubiquitylation of RIPK1. Ubiquitylated RIPK1 could recruit NEMO and the TAB-TAK1 complex for IKK activation and hence NF-κB stimulation. Additionally, RIPK1 could also be modified by A20 through the addition of K48-linked poly-ubiquitin chains, sending the kinase for proteasomal degradation (Kravtsova-Ivantsiv et al., 2015). However, in some contexts, TNF-α-induced NF-κB activation was reported to inhibit autophagy (Djavaheri-Mergny et al., 2006). TNF-α-induced activation of IKKα or IKKβ could stimulate phosphorylation of TSC1/2 and activate mTOR, leading to a similar inhibitory outcome (Lee et al., 2007; Dan and Baldwin, 2008). Furthermore, in some contexts, RIPK1 silencing activated autophagy under both basal and stress conditions (Yonekawa et al., 2015). On the other hand, RIPK1 itself was reported to be a target of p62-mediated selective autophagy (Goodall et al., 2016). Moreover, autophagy was responsible for the degradation of the NF-κB activator NIK and of IKK complex subunits, indicating the presence of a tight cross-regulation of the NF-κB pathway by the UPS and autophagy (Qing et al., 2007). The FOXO family of transcription factors (FOXOs) was associated with various cellular pathways, including autophagy (Zhao et al., 2007). The activity of FOXOs was regulated by their phosphorylation status; following activation, FOXOs translocated to the nucleus and triggered the expression of a number of genes associated with different stages of the autophagy pathway, including ATG4, ATG12, BECN1, ULK1, PIK3C3, MAP1LC3, and GABARAP (Mammucari et al., 2007; Zhao et al., 2007; Sanchez et al., 2012). There are several connections between FOXOs and autophagy. Activation of the AKT pathway inhibited FOXO3 activity, leading to a decrease in LC3 and BNIP3 expression and therefore blocking autophagy (Stitt et al., 2004; Mammucari et al., 2007). On the other hand, AMPK activation led to the phosphorylation of FOXO3a and ULK1, inducing MAP1LC3, GABARAP, and BECN1 expression and subsequent autophagy activation (Sanchez et al., 2012). Another FOXO family protein, FOXK1/2, a negative regulator of FOXO3, was associated with a decrease in autophagy through the Sin3A/HDAC complex and diminished acetylation of histone H4. In this context, the nuclear localization of FOXK1/2 was mTOR-dependent and showed an inhibitory effect on autophagy gene expression under basal conditions (Bowman et al., 2014). Moreover, JNK deficiency in neurons increased autophagic activity through FOXO1-mediated BNIP3 upregulation and Beclin1 dissociation from BCL-XL. Another example of a link between FOXOs and autophagy involves ATG14. Liver-specific knockout of FOXOs resulted in the downregulation of ATG14, and this event was associated with high levels of triglycerides in the liver and serum of mice (Xiong et al., 2012). Additionally, GATA-1 was shown to directly regulate FOXO3-mediated activation of LC3 genes to facilitate autophagic activity. Phosphorylation of FOXO proteins by various protein kinases, including AKT, IKK, and ERK, affected their ubiquitylation by E3 ligases and their stability (Huang and Tindall, 2011). For instance, AKT-mediated phosphorylation of FOXO1 provided a signal for its recognition by SKP2, an SCF E3 ligase complex component, followed by FOXO1 ubiquitylation and degradation (Huang et al., 2005).
COP1 was also identified as an E3 ligase that regulates FOXO protein stability. COP1 ubiquitylated FOXO1 and promoted its proteasomal degradation. This type of regulation might be important in the glucose metabolism of hepatocytes, and possibly in autophagy modulation under these conditions (Kato et al., 2008). Another FOXO-regulating E3 ligase was MDM2, which was reported to be responsible for FOXO1 and FOXO3A ubiquitylation and degradation (Fu et al., 2009). MDM2-mediated ubiquitylation was activated by the phosphorylation of FOXOs by AKT. Due to its role in p53 regulation, MDM2 could be part of a more complex regulatory mechanism which might link the UPS, transcriptional regulation, and autophagic activity.

Autophagy-UPS Crosstalk in Diseases

Crosstalk between autophagy and the UPS may change character under disease conditions, contribute to the pathogenesis of diseases, and even affect their outcome. Degenerative diseases and cancer are examples of diseases that illustrate the interplay between the UPS and autophagy in the clearance of misfolded abnormal proteins (Juenemann et al., 2013). For example, Huntington's Disease is caused by poly-glutamine extensions in a protein called Huntingtin (Htt), leading to abnormal organization and eventual aggregation of the protein. The Htt protein was shown to be ubiquitylated via K48- or K63-linked ubiquitin chains (Bhat et al., 2014). Mutant Htt clearance depended on both the UPS and autophagy in different experimental settings. Mutant Htt aggregates were largely cleared by K63-dependent autophagy mechanisms (Renna et al., 2010; Menzies et al., 2015). On the other hand, overexpression of the K48-specific E3 ligase Ube3a resulted in a UPS-dependent degradation of the mutant proteins. Yet, cellular levels of the E3 ligase were shown to decline in an age-dependent manner. Therefore, in elderly people, accumulation of K63-linked polyubiquitylated proteins might tip the balance toward clearance of protein aggregates by autophagy. A similar UPS switch was also observed in a CHIP-dependent manner (Jana et al., 2005; Bhat et al., 2014). Another example involves the ERAD protein p97/VCP. Mutant forms of the protein were associated with a rare syndrome that mainly affects muscles, bones, and the brain (Inclusion Body Myopathy with Paget's Disease of Bone and Frontotemporal Dementia, IBMPFD). Moreover, p97/VCP mutations were detected in a fraction of patients suffering from familial forms of Parkinson's Disease or from Amyotrophic Lateral Sclerosis (ALS) (Johnson et al., 2010). As mentioned in the previous sections, p97/VCP is important for the extraction of misfolded ER proteins as well as for their delivery to proteasomes. Moreover, p97/VCP was proposed to play a role in autophagosome maturation and autolysosome formation (Tresse et al., 2010). We recently showed that some of the disease-related mutations of p97/VCP (namely P137L and G157R) resulted in the aggregation of the protein itself. Mutant p97/VCP proteins formed complexes with wild-type counterparts and led to further accumulation of ubiquitylated proteins upon ER stress, indicating that the ERAD system was negatively affected by the mutants (Bayraktar et al., 2016). Indeed, the ERAD co-factor and ubiquitin binding capacity of the mutant p97/VCP was decreased (Erzurumlu et al., 2013). Yet, autophagy was still functional under these conditions and could significantly eliminate these aggregates (Bayraktar et al., 2016).
Therefore, preferential elimination of mutant proteins by autophagy might tip the balance in favor of wild-type proteins and restore disease-related loss of cellular functions, including UPS-related mechanisms. The role of the crosstalk between the two systems is also prominent in the cancer context. For example, the p53-regulated and cancer-related protein EI24 was introduced as a critical link between the UPS and autophagy (Devkota et al., 2012). EI24 controlled the stability of the E3 ligases TRIM41, TRIM2, and TRIM28 by regulating their autophagic degradation (Devkota et al., 2016; Nam et al., 2017). Cellular levels of other E3 ligases, namely MDM2 and TRAF2, were also regulated by EI24-controlled degradation, modulating the p53 and mTOR pathways, respectively, and influencing cancer formation and progression (Devkota et al., 2016). Deregulation and/or mutations of proteins that function in autophagy and/or the UPS were observed in some cancer types, resulting in the modification of individual pathways and possibly affecting the crosstalk between the two systems. Changes include modulation of the levels of E3 ligases such as MDM2 (Haupt et al., 2017), SMURF1 (Fukunaga et al., 2008; Kwon et al., 2013), and SCF components (e.g., βTrCP); point mutations of NEDD4 (Amodio et al., 2010), COP1 (Marine, 2012), and FBXW7 (Korphaisarn et al., 2017); and mutations in the autophagy-related proteins Beclin1 (Laddha et al., 2014), LKB1 (Ji et al., 2007), ATG5 (Takamura et al., 2011), and ATG4C (Marino et al., 2007), as well as deletions of genes of proteins such as Beclin1 (Liang et al., 1999; Qu et al., 2003), AMPK, and UVRAG (He et al., 2015). Under these circumstances, dynamic and complex changes in the regulation of the degradative pathways should have dramatic effects that contribute to cancer-related alterations in the proteomic landscape of cells. Autophagy-UPS crosstalk also emerges as a critical factor that determines the success of disease treatment; chemotherapy is one striking example. For instance, proteasome inhibition by the chemotherapy agent bortezomib resulted in the accumulation of misfolded proteins and induced compensatory autophagy in cancer cells (Obeng et al., 2006). Under these circumstances, autophagic activity protected cancer cells from bortezomib-induced cell death, and inhibition of autophagy improved the outcome of chemotherapy. These dual autophagy-UPS targeting approaches also gave promising results in clinical trials (Vogl et al., 2014). Several companies are now developing drugs that modulate the UPS or autophagy [for example, (Huang and Dixit, 2016)]. The concepts and data discussed above and elsewhere indicate that, depending on the disease type and treatment strategy, the crosstalk between the UPS and autophagy should definitely be taken into account in these efforts.

CONCLUSION AND PERSPECTIVES

Autophagy and the ubiquitin-proteasome system are major degradation systems in mammalian cells that allow recycling of cellular contents ranging from soluble proteins to intracellular organelles. Although their modes of action and their requirements for substrate recognition are different, there are several overlaps and interconnections between the UPS and autophagy pathways. A prominent component of the crosstalk is the ubiquitin protein itself and ubiquitylation. Indeed, ubiquitin is a common signal for both the UPS and autophagy. It was proposed that the ubiquitin chain type could determine the pathway of choice for protein degradation.
K48-linked ubiquitylation was proposed to be a signal for the UPS, whereas K63-linked ubiquitylation directed proteins for autophagosomal degradation (Herhaus and Dikic, 2015). Yet, a number of independent studies provided evidence that both ubiquitylation types could lead to autophagic degradation of substrates (Wandel et al., 2017). Moreover, recent studies underline the importance of ubiquitin phosphorylation as an event that increases the affinity of autophagy receptors for their targets during selective autophagy (Kane et al., 2014; Koyano et al., 2014). Additionally, non-ubiquitin modifications (e.g., acetylation, sumoylation, neddylation, etc.) were shown to affect protein degradation as well (Hwang and Lee, 2017). Therefore, a barcode of ubiquitin and other modifications seems to prime proteins for one or the other degradation pathway and determine their fate. As another level of regulation, deconjugating enzymes such as DUBs may counteract or redirect proteins between the different degradation systems. E3 ligases emerged as important components of the UPS-autophagy switches. For example, the Cullin-3 (Pintard et al., 2004), SMURF1 (Ebisawa et al., 2001), and MDM2 (Shi and Gu, 2012) E3 ligases directed proteins to degradation by the UPS, whereas the role of Parkin (Chan et al., 2011), LRSAM1 (Huett et al., 2012), and CHIP (Shin et al., 2005) in priming proteins for autophagic degradation was observed in several studies. On the other hand, the same E3 ligase might be able to generate different ubiquitin linkages under different conditions and on different substrates (Chan et al., 2011), with the switch between degradative pathways being controlled by specific E3 ligase adaptors, post-translational modifications on target proteins, as well as other unknown factors. A prominent example is the Parkin protein: during mitophagy, although some of the proteins that are ubiquitylated by Parkin are degraded, other ubiquitylated proteins contribute to mitochondrial clustering and recognition by autophagy receptors. To date, the factors or modifications that determine the substrate selectivity of Parkin are unknown. Another example of a UPS-autophagy switch involves the p97/VCP protein. While binding of the co-factor PLAA to p97/VCP resulted in the autophagic degradation of ubiquitylated clients of the protein, binding of UFD1L as a co-factor favored degradation by the UPS. Moreover, p97/VCP was also associated with aggregate formation in collaboration with some autophagy receptors. Signaling switches involved in the regulated activation of one or the other system were shown to modify cellular responses to stress. For example, NRF2 degradation by the UPS was controlled through p62-mediated KEAP1 elimination by autophagy (Jain et al., 2010). Prevention of HIF1α degradation by the UPS resulted in the expression of stress response genes, including autophagy genes, and led to autophagy activation. In another example, the UPS activity was required for NF-κB activation and NF-κB-mediated autophagy gene upregulation; yet, autophagic degradation of the NF-κB activators NIK and IKKs provided a negative feedback loop in this context (Qing et al., 2007). Therefore, modification of cellular signaling pathways by degradative systems might modulate upstream signals that control autophagy and/or the UPS and affect their activation and amplitude. Degradation of the components or regulators of one system by the other system was also reported.
For example, proteasomes were defined as substrates of selective autophagy (Marshall et al., 2015). Conversely, various autophagy proteins were ubiquitylated and degraded by the UPS in a regulated manner. Therefore, checks and balances between the two systems exist, and these control mechanisms possibly allow remodeling of the cellular proteome under different conditions. Compensation mechanisms are also operational between the two systems. Inhibition of the UPS generally upregulated autophagy, whereas failures in the autophagy system were associated with increased UPS activity, although inefficient compensation and failure in both systems were also observed under certain conditions (Korolchuk et al., 2009a,b). Moreover, alternative protein degradation pathways, such as CMA and microautophagy, might come into play under these conditions as well. Nevertheless, depending on the character of the target to be degraded, compensation mechanisms were more or less effective. For example, large aggregates and whole organelles have to be cleared by autophagy, whereas defective ribosomal products that could not accumulate in stress granules were shown to be directed for proteasomal degradation. Therefore, for cellular homeostasis and for proper functioning of cells, ideally both systems should be fully operational. Data obtained so far demonstrate that crosstalk and communication between autophagy and the UPS generally rely on non-specialized and even indirect links. Yet, there might exist so far unknown specialized proteins providing coordination and co-regulation of the two systems. Furthermore, regulation through direct protein-protein interactions between known system components is another possibility. Therefore, dedicated communication proteins or pathways between the degradation mechanisms may be present, allowing better and faster coordination in case of need. Further studies are required to unveil the nature of these putative proteins, interactions, and pathways. An emerging theme in the regulation and coordination of autophagy and the UPS involves non-coding RNAs and their intricate networks. A growing list of microRNAs as well as long non-coding RNAs have been implicated in the control of autophagy (Tekirdag et al., 2016) as well as of the UPS (Wu and Pfeffer, 2016; Chang et al., 2018). MicroRNAs have the advantage of affecting the levels of multiple proteins at once, and they are able to rapidly reshape cellular signaling mechanisms and pathways. Therefore, non-coding RNA networks possibly contribute to the co-regulation of these degradative systems. Intriguingly, deregulation of non-coding RNA levels contributes to the progression of diseases such as cancer. Future studies on non-coding RNAs will reveal their relevance in the autophagy-UPS crosstalk under physiological and pathological conditions. Overall, coordination, interconnection, and crosstalk mechanisms between the UPS and autophagy exist at various levels. In addition to ubiquitin and ubiquitylation, several proteins and signaling pathways were implicated in the communication and mutual regulation of the two systems. Considering the importance of protein catabolism for cellular and organismal homeostasis and health, a better understanding of the individual systems as well as of the interconnections and crosstalks between them will be most rewarding, both from a basic science perspective and with regard to the clinical management of diseases involving protein quality control problems.
AUTHOR CONTRIBUTIONS

NK and DG wrote the manuscript and critically read it. NK prepared the illustrations in the manuscript.
Pricing Bermudan Swaption under Two Factor Hull-White Model with Fast Gauss Transform

This paper describes a fast and stable algorithm for evaluating Bermudan swaptions under the two factor Hull-White model. We discretize the calculation of the expected value in the evaluation of the Bermudan swaption by numerical integration, in which Gaussian kernel sums appear. The fast Gauss transform can be applied to these Gaussian kernel sums, and it reduces the computational complexity from $O(N^2)$ to $O(N)$ in the number of grid points $N$ of the numerical integration. We also propose to stabilize the computation under the condition that the correlation is close to $-1$ by introducing a grid rotation. Numerical experiments using actual market data show that our method reduces the computation time significantly compared to the method without the fast Gauss transform. They also show that the grid rotation contributes to computational stability in situations where the correlation is close to $-1$ and the time step is short.

1 Introduction

A Bermudan swaption is an option which has several exercise days and enters an interest rate swap when it is exercised. This instrument is widely traded for the purpose of structuring bonds and loans which have early redemption conditions. Evaluating Bermudan swaptions is an important part of financial business, since traders have to quote and manage portfolios which contain Bermudan swaptions. According to risk-neutral valuation, the price of a derivative can be calculated as the expectation of the discounted value of the payoff. If the payoff, model, and other conditions are simple, there are analytic solutions; otherwise, numerical methods are often required to calculate the expected value. The valuation of a Bermudan swaption is usually a complex calculation, since it requires both the expectation of the payoff and the optimal exercise time. Using dynamic programming, we can derive a procedure in which the exercise decision is made based on the large-small relation between the exercised swap value and the holding value of the future exercise rights (a schematic code sketch of this backward induction is given below). However, the holding value itself has a recursive structure that depends on the exercise value and the holding value at the time of the next exercise. Due to this structure, most models cannot provide an analytical solution for Bermudan swaptions, and numerical calculations are required. In practice, many iterative calculations are needed not only for prices but also for the sensitivities used in risk management. Therefore, speeding up these numerical calculations is an important issue.

Several methods are known for the valuation of Bermudan-type derivatives. Longstaff and Schwartz [11] proposed least squares Monte Carlo (LSMC), a general-purpose method that can be used to evaluate derivatives including early exercise. Karlsson et al. [4] apply the method to the calculation of Bermudan swaptions. These methods suffer from the slow convergence of the Monte Carlo method: the calculation error decreases as $O(N^{-1/2})$ in the number of paths $N$, which means that 100 times as many paths are needed to reduce the error to 1/10.
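The backward induction mentioned above can be written down in a few lines. The following C++ fragment is only a schematic illustration of the recursion; the names `exercise`, `expectation`, and `bermudanBackwardInduction` are ours and do not come from any library. The `expectation` callback stands for the discounted conditional expectation that the rest of this paper discretizes by numerical integration and accelerates with the FGT.

#include <algorithm>
#include <cstddef>
#include <vector>

// Schematic backward induction for a Bermudan option on a state grid.
// exercise[k][j]: value of the swap entered by exercising at date k in state j.
// expectation(k, cont): the discounted conditional expectation of the
// time-(k+1) value function cont, mapped back onto the time-k grid.
template <class Expectation>
std::vector<double> bermudanBackwardInduction(
        const std::vector<std::vector<double>>& exercise,
        Expectation expectation) {
    const int n = static_cast<int>(exercise.size());
    // At the last exercise date the holding value is zero.
    std::vector<double> value = exercise[n - 1];
    for (auto& v : value) v = std::max(v, 0.0);
    // Step backwards: value = max(exercise now, hold for later dates).
    for (int k = n - 2; k >= 0; --k) {
        std::vector<double> holding = expectation(k, value);
        for (std::size_t j = 0; j < value.size(); ++j)
            value[j] = std::max(exercise[k][j], holding[j]);
    }
    return value; // option value on the grid at the first exercise date
}

One further application of `expectation` from the first exercise date back to the valuation date gives the price today; the cost of the whole procedure is dominated by the repeated evaluation of `expectation`, which is exactly where the methods compared in this paper differ.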
Hull and White [8], Li et al. [10], and Lee and Yang [9] proposed tree and finite difference methods that solve partial differential equations. These require a finer grid in the time direction as well as in the state-space direction to improve the accuracy of the calculation, and thus more computation than the product of the numbers of discretization grids in the state-space and time directions. Lord et al. [12] and Karlsson et al. [4] introduced the CONV method, which uses the fast Fourier transform. It achieves $O(N \log N)$ complexity in the number of spatial discretization points $N$ if the transition density function or its characteristic function is known. The fast Gauss transform (FGT) introduced by Greengard and Strain [7] is a method for calculating weighted sums of a Gaussian function at multiple points, whose computational complexity is $O(N + N')$ instead of the $O(N N')$ of direct calculation, where the number of input grids is $N$ and the number of output grids is $N'$. Yang et al. [13] extended this method to any dimension while curbing the growth of the computational complexity as the dimension increases. Weighted sums of Gaussian functions often appear in the calculation of derivatives, and in fact Broadie and Yamamoto [2] showed that the FGT can be applied to the evaluation of Bermudan-type and other derivatives.
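To make the complexity comparison concrete, the sum that the FGT accelerates is the discrete Gauss transform below; a direct implementation costs $O(NN')$ operations. This is a minimal C++ reference sketch (the function name is ours):

#include <cmath>
#include <cstddef>
#include <vector>

// Direct O(N*N') evaluation of the discrete Gauss transform
// G(x_i) = sum_j q_j exp(-(x_i - y_j)^2 / delta); this is the sum
// the FGT evaluates in O(N + N') operations instead.
std::vector<double> directGaussTransform(const std::vector<double>& x,
                                         const std::vector<double>& y,
                                         const std::vector<double>& q,
                                         double delta) {
    std::vector<double> g(x.size(), 0.0);
    for (std::size_t i = 0; i < x.size(); ++i)
        for (std::size_t j = 0; j < y.size(); ++j) {
            const double d = x[i] - y[j];
            g[i] += q[j] * std::exp(-d * d / delta);
        }
    return g;
}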
After the financial crisis, credit valuation adjustments (CVA and, more broadly, XVA) have become essential for pricing and risk management. In CVA calculation, which treats the counterparty credit risk of derivatives, it is necessary to evaluate the transactions involving various assets in the netting set at the same time, so hybrid models incorporating factors such as interest rates, exchange rates, stocks, and default probabilities are used. In addition, exposures must be calculated at many points in time and in many model state spaces, which makes the calculation time of models assuming default much longer than that of default-free models. In these situations, Gaussian models, including the one and two factor Hull-White models, are being re-evaluated as interest rate models for CVA. There are many reasons: the ability to perfectly reproduce the yield curve, the availability of analytical solutions for many interest rate products, the ease of handling interest rates as Gaussian even when constructing a hybrid model with exchange rates, and the suitability for the recent negative interest rate environment.

The parameters of the two factor Hull-White model (equivalent to the G2++ model) are calibrated to fit the cap and swaption volatilities of the market. Brigo and Mercurio [1] have mentioned that the correlation parameter of the model is sometimes very close to $-1$, especially when fitting to cap volatilities. In practice, the correlation is sometimes very close to $-1$ even when the model is fitted to swaption volatilities. Stability of the calculation in such situations is essential: even if the situation occurs only 1% of the time, it occurs on average 2.5 times per year in a daily calculation.

In this paper, we consider the evaluation of Bermudan swaptions under the two factor Hull-White model, and discuss an efficient and stable calculation method using the fast Gauss transform and a grid rotation for the integral calculation appearing in the valuation formula. Our method is not limited to the evaluation of Bermudan swaptions but can be applied to the evaluation of a wider range of products. It is also applicable to a wide range of models whose transition density is a multivariate Gaussian distribution of arbitrary dimension.

This paper is organized as follows. In Section 2, we outline the evaluation of interest rate derivatives using the two factor Hull-White model, multi-curve modeling, the evaluation formula of the Bermudan swaption, and the principle of the fast Gauss transform. In Section 3, we construct an approximation by numerical integration of the integrals appearing in the evaluation formula of the Bermudan swaption based on the two factor Hull-White model and show that the FGT can be applied. We also explain the grid rotation technique for stabilizing the calculation in the situation where the correlation is close to $-1$. In Section 4, numerical examples are presented. We conclude in Section 5.

2 Bermudan Swaption Pricing on Two Factor Hull-White Model

2.1 Two Factor Hull-White Model

We assume the ordinary filtered probability space $(\Omega, \mathcal{F}, P; \{\mathcal{F}_t\}_{t \ge 0})$. Let $R_t$ be the spot interest rate, and consider prices of derivatives on the time interval $[0, T_h]$. If the market is arbitrage-free and complete, there exists an equivalent martingale measure $Q$ such that the price of any derivative relative to the money market account $\exp\left(\int_0^t R_s\,ds\right)$ becomes a martingale; this is called the risk-neutral measure. Hereafter, we write $E[\cdot]$ for the expectation under the measure $Q$ and introduce the conditional expectation $E_t[\cdot] = E[\cdot\,|\,\mathcal{F}_t]$ for brevity. Let $X_t = (X_{0,t}\ X_{1,t})^T$ be a two-dimensional stochastic process and $W_t = (W_{0,t}\ W_{1,t})^T$ be an $\{\mathcal{F}_t\}_{t \ge 0}$-adapted two-dimensional independent standard Brownian motion under the risk-neutral measure $Q$. The two factor Hull-White model, which is an interest rate model for $R_t$, is defined by

R_t = \psi(t) + \mathbf{1}^T X_t, \qquad dX_t = -K(t) X_t\,dt + S(t)\,C\,dW_t, \qquad (2.1)

with $\mathbf{1} = (1\ 1)^T$, and the variables in (2.1) are $K(t) = \operatorname{diag}(\kappa_0(t), \kappa_1(t))$, $S(t) = \operatorname{diag}(\sigma_0(t), \sigma_1(t))$, and a constant matrix $C$ with $CC^T$ having unit diagonal and off-diagonal $\rho$, where $\operatorname{diag}(\cdot)$ is a diagonal matrix whose arguments are the diagonal elements and $\psi(t)$, $\kappa(t)$, $\sigma(t)$, and $\rho$ are deterministic. Integrating (2.1) gives

X_\tau = \mu(t,\tau) X_t + U(t,\tau), \qquad (2.2)

where the variables in (2.2) and (2.3) are written by

\mu(t,\tau) = \operatorname{diag}\left(e^{-\int_t^\tau \kappa_0(s)\,ds},\ e^{-\int_t^\tau \kappa_1(s)\,ds}\right), \qquad U(t,\tau) = \int_t^\tau \mu(u,\tau)\, S(u)\, C\,dW_u. \qquad (2.3)

In this paper, integrals of vectors and matrices are denoted as vectors and matrices of elementwise integrals. The conditional variance-covariance matrix of $U(t,\tau)$ under the σ-algebra $\mathcal{F}_t$ is calculated as

\Sigma(t,\tau) = \int_t^\tau \mu(u,\tau)\, S(u)\, C C^T\, S(u)\, \mu(u,\tau)^T\,du.

The discounted bond price $P_t(\tau)$ with maturity $\tau$ at evaluation time $t$ is written as $P_t(\tau) = E_t\left[\exp\left(-\int_t^\tau R_s\,ds\right)\right]$, an exponential-affine function of $X_t$; with a suitable choice of $\psi(t)$, this model can reproduce the discount factors at time 0 perfectly. Next, we derive the expression under the forward risk-neutral measure. Let the Radon-Nikodym derivative process be

\left.\frac{dQ^\tau}{dQ}\right|_{\mathcal{F}_t} = \frac{P_t(\tau)}{P_0(\tau)\exp\left(\int_0^t R_s\,ds\right)}.

From Girsanov's theorem, the Brownian motion $W^\tau_t$ obtained by adding the corresponding deterministic drift to $W_t$ becomes an independent standard Brownian motion under the measure $Q^\tau$, which is called the $\tau$-forward risk-neutral measure. Under $Q^\tau$, the stochastic process of $X_t$ can be written as

dX_t = \{-K(t) X_t + b(t,\tau)\}\,dt + S(t)\,C\,dW^\tau_t, \qquad (2.4)

with a deterministic drift adjustment $b(t,\tau)$ induced by the change of measure. Integrating (2.4) gives

X_\tau = \mu(t,\tau) X_t + \eta(t,\tau) + U^\tau(t,\tau), \qquad (2.5)

where the variables are written by $\eta(t,\tau) = \int_t^\tau \mu(u,\tau)\,b(u,\tau)\,du$ and $U^\tau(t,\tau) = \int_t^\tau \mu(u,\tau)\,S(u)\,C\,dW^\tau_u$. Note that the variance-covariance matrix of $U^\tau(t,\tau)$ is the same as that of $U(t,\tau)$. Let $V(t, X_t)$ be the derivative value at time $t$ and state $X_t$. If there is no cashflow in $[t, \tau]$, the relation between $V(t, X_t)$ and $V(\tau, X_\tau)$ is written as

V(t, X_t) = P_t(\tau)\, E^\tau_t\left[V(\tau, X_\tau)\right],

where $E^\tau_t[\cdot]$ denotes the conditional expectation under $Q^\tau$. In the two factor Hull-White model with constant coefficients, analytical solutions for cap and swaption prices are known (see e.g. [1]), and the parameters $\kappa(t)$, $\sigma(t)$, $\rho$ are calibrated to market cap and swaption prices. To evaluate a Bermudan swaption, we sometimes calibrate the parameters to co-terminal swaptions, for which the sum of the option maturity and the underlying swap period equals the term of the Bermudan swaption, because traders often hedge Bermudan swaptions with these co-terminal swaptions.
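For the constant-coefficient case used later in the numerical experiments, $\mu(t,\tau)$ and $\Sigma(t,\tau)$ have closed forms, since each factor is a one-dimensional Ornstein-Uhlenbeck process. The following C++ sketch computes them under the risk-neutral measure; all names are ours, and the deterministic shift $\eta(t,\tau)$ that appears under the forward measure is not part of this snippet.

#include <cmath>

struct Moments2F {
    double mu0, mu1;      // diagonal of the transition matrix mu(t,tau)
    double s00, s01, s11; // covariance matrix Sigma(t,tau) of U(t,tau)
};

// Conditional mean decay and covariance of the two-factor OU increment
// U(t,tau) for constant parameters (kappa0, kappa1, sigma0, sigma1, rho).
Moments2F ouMoments(double kappa0, double kappa1,
                    double sigma0, double sigma1,
                    double rho, double t, double tau) {
    const double h = tau - t;
    Moments2F m;
    m.mu0 = std::exp(-kappa0 * h);
    m.mu1 = std::exp(-kappa1 * h);
    m.s00 = sigma0 * sigma0 / (2.0 * kappa0) * (1.0 - std::exp(-2.0 * kappa0 * h));
    m.s11 = sigma1 * sigma1 / (2.0 * kappa1) * (1.0 - std::exp(-2.0 * kappa1 * h));
    m.s01 = rho * sigma0 * sigma1 / (kappa0 + kappa1)
          * (1.0 - std::exp(-(kappa0 + kappa1) * h));
    return m;
}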
Brigo and Mercurio [1] also mentioned that the correlation ρ is often very close to $-1$ in the calibration to market caps. In practice, the correlation is sometimes very close to $-1$ even when we calibrate to swaptions. Such conditions are prone to cause errors in numerical calculations and often require ingenuity.

2.2 Multi Curve Model

After the financial crisis, the theory of discounting has changed drastically, as counterparty default risk has to be considered. It has become necessary to use multiple curves (see e.g. [5]): for example, using LIBOR (which will soon disappear through the LIBOR reform, with risk-free rates (RFR) becoming standard) for the reference rate, while using OIS for discounting collateralized derivatives. This subsection introduces multi-curve modeling under the assumption that the difference between the instantaneous forward rates of the risk-free and reference interest rate curves is deterministic. Assuming $t \le t_s < t_e$, we define the forward-looking reference rate with term $[t_s, t_e]$ observed at $t$ as $L_t(t_s, t_e)$. The value of the floating cashflow that pays the reference rate $L_{t_s}(t_s, t_e)$ can then be written in terms of discounted bonds, and we find that $L_t(t_s, t_e)$ is a martingale under the measure $Q^{t_e}$ and is also equivalent to a portfolio of discounted bonds. Furthermore, the corresponding decomposition (2.8) is valid for $t_s < t_m < t_e$. Let $t_s = t_{o,0} < \cdots < t_{o,N_o} = t_e$ be a sequence of business days and consider the compounded interest rate over $[t_s, t_e]$; we can derive the value (2.9) of the compounded-rate cashflow, which equals the value of the forward-looking reference rate. Equations (2.8) and (2.9) allow us to express the value of a swap as a portfolio of discounted bonds, even if the discount and reference curves are different. For example, let $t_{S,0}$ be the beginning of the swap period and $t_{S,1}, \dots, t_{S,N_S}$ the interest payment dates of the swap; the value $V_S(t)$ of a swap which receives a fixed interest rate $K$ with the same frequency on the fixed and floating legs is then a linear combination of the discounted bonds $P_t(t_{S,0}), \dots, P_t(t_{S,N_S})$.

2.3 Bermudan Swaption

A Bermudan swaption is an option which enters an interest rate swap if it is exercised, with a one-time opportunity to exercise the right chosen from a predetermined set of dates. In a typical transaction, an interest rate swap is the underlying asset, and the future cashflows from each exercise date arise upon exercising. Here we introduce the recurrence relation for the Bermudan swaption without assuming any specific model. Let $E(t_1, X_{t_1}), \dots, E(t_n, X_{t_n})$ be the values of the swap entered by exercising the Bermudan swaption at the dates $T = \{t_1, \dots, t_n\}$, where $X_{t_k}$ is the state variable of the evaluation model at each time. The swaps are commonly exchanges of fixed and floating interest rates, equivalent to a portfolio of discounted bonds; caplets, floorlets, and other variable cashflows can also be included if their values can be written in terms of the state variable $X_{t_k}$ alone (a concrete valuation sketch follows this subsection). The value of a Bermudan swaption which has not been exercised until time $t$ is calculated by the following recurrence using dynamic programming, with $k(t)$ the smallest $k$ satisfying $t \le t_k$:

V(t, X_t) = P_t(t_{k(t)})\, E^{t_{k(t)}}_t\left[\max\left\{E(t_{k(t)}, X_{t_{k(t)}}),\ V(t_{k(t)}^+, X_{t_{k(t)}})\right\}\right],

where $V(t_k^+, \cdot)$ denotes the holding value just after the exercise date $t_k$, i.e., the value of the remaining exercise rights. Calculation of this expected value is the most difficult issue in evaluating a Bermudan swaption, and various methods have been developed.
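As a concrete illustration of the portfolio-of-bonds representation of the exercise values $E(t_k, X_{t_k})$, the following C++ sketch values a receiver swap from a discount-factor function in the single-curve simplification; in the multi-curve setting of the previous subsection the floating leg additionally carries the deterministic spread adjustments of (2.8) and (2.9). The function and parameter names are ours.

#include <functional>
#include <cstddef>
#include <vector>

// Receiver-swap value as a portfolio of discount bonds (single-curve
// simplification; disc(T) returns P_t(T)). The fixed leg pays K on
// accruals delta_i; the floating leg collapses to
// P_t(t_{S,0}) - P_t(t_{S,Ns}).
double receiverSwapValue(double K,
                         const std::vector<double>& payDates,  // t_{S,1..Ns}
                         const std::vector<double>& accruals,  // delta_1..Ns
                         double startDate,                     // t_{S,0}
                         const std::function<double(double)>& disc) {
    double fixedLeg = 0.0;
    for (std::size_t i = 0; i < payDates.size(); ++i)
        fixedLeg += K * accruals[i] * disc(payDates[i]);
    const double floatLeg = disc(startDate) - disc(payDates.back());
    return fixedLeg - floatLeg;
}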
2.4 Fast Gauss Transform

The fast Gauss transform (FGT) is a method used to calculate discrete convolution products with Gaussian kernels, i.e., sums of the form

G(x_i) = \sum_{j=1}^{N} q_j \exp\left(-\frac{(x_i - y_j)^2}{\delta}\right), \qquad i = 1, \dots, N'. \qquad (2.10)

By multiplying both sides of the generating function of the Hermite polynomials by $e^{-x^2}$, transforming the expression by a Taylor expansion around $x = x_0$, and scaling by $\sqrt{\delta}$, we obtain

\exp\left(-\frac{(x-y)^2}{\delta}\right) = \sum_{\alpha, \beta \ge 0} \frac{1}{\alpha!\,\beta!} \left(\frac{y - y_0}{\sqrt{\delta}}\right)^{\alpha} \left(\frac{x_0 - x}{\sqrt{\delta}}\right)^{\beta} h_{\alpha+\beta}\left(\frac{x_0 - y_0}{\sqrt{\delta}}\right), \qquad (2.11)

where $h_n(t) = (-1)^n \frac{d^n}{dt^n} e^{-t^2}$ are the Hermite functions. This infinite sum converges fast in $\alpha$ and $\beta$. According to Broadie and Yamamoto [2], we only need to sum up to $\alpha = \beta = 8$ to achieve a sufficiently small truncation error; however, we will use different settings in the numerical experiments, as explained later. Using this expansion, the convolution (2.10) is written as

G(x_i) = \sum_{\beta} \frac{1}{\beta!}\left(\frac{x_0 - x_i}{\sqrt{\delta}}\right)^{\beta} \sum_{\alpha} h_{\alpha+\beta}\left(\frac{x_0 - y_0}{\sqrt{\delta}}\right) \tilde g_\alpha, \qquad \tilde g_\alpha = \frac{1}{\alpha!} \sum_{j} q_j \left(\frac{y_j - y_0}{\sqrt{\delta}}\right)^{\alpha}. \qquad (2.12)

The computation of $\tilde g_\alpha$ does not depend on $x_i$ and $\beta$, so the whole transform can be performed in $O(N + N')$ operations by saving the intermediate results. When not all the points are close enough to $x_0$, $y_0$, we can divide the points into appropriate blocks as shown in Figure 1. Let $A, B, C, \dots$ be blocks of inputs and $A', B', C', \dots$ be blocks of outputs. In addition, let $y^A_j, y^B_j, y^C_j, \dots$ and $x^{A'}_j, x^{B'}_j, x^{C'}_j, \dots$ be the points included in the input and output blocks. Then we select centers $y_X$, $x_{X'}$ in each block, and the blocked FGT is calculated by the following three steps. Firstly, we calculate $\tilde g^X_\alpha$ in each input block as

\tilde g^X_\alpha = \frac{1}{\alpha!} \sum_{j} q^X_j \left(\frac{y^X_j - y_X}{\sqrt{\delta}}\right)^{\alpha},

where $X = A, B, C, \dots$. Next, we calculate $\tilde g^{X'}_\beta$ in each output block as

\tilde g^{X'}_\beta = \sum_{X} \sum_{\alpha} h_{\alpha+\beta}\left(\frac{x_{X'} - y_X}{\sqrt{\delta}}\right) \tilde g^X_\alpha,

where $X' = A', B', C', \dots$. Lastly, we have

G(x^{X'}_i) = \sum_{\beta} \frac{1}{\beta!}\left(\frac{x_{X'} - x^{X'}_i}{\sqrt{\delta}}\right)^{\beta} \tilde g^{X'}_\beta.

In the case of block partitioning, a direct calculation would require (x block count) times (y block count) evaluations of (2.12). However, considering that the Gaussian function decreases rapidly, block pairs can be ignored if $|x_{X'} - y_X|$ is sufficiently large, for example larger than $8\sqrt{\delta}$, so the computation keeps the complexity $O(N + N')$ when dividing into blocks. Although the FGT is superior to direct computation in terms of computational complexity, its constant factor is large; it may be better to use direct computation when there are only a few points in a block. The two-dimensional case can be derived using products of the one-dimensional expansions, for $i = 1, 2, \dots, N'$, and can be performed with the same computational complexity $O(N + N')$. In the 2D case, as in 1D, we divide x and y into appropriate regions and apply the FGT to the calculations between blocks. The extension to three or more dimensions is similar; as the dimension $d$ increases, the number of intermediate variables such as $\tilde g_{2,\alpha_0,\alpha_1}$, $\tilde g_{2,\beta_0,\beta_1}$ grows exponentially in $d$. For three or more dimensions, an improved method was proposed by Yang et al. [13].
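A minimal single-center version of the expansion above can be written as follows. This C++ sketch uses only the source-side Hermite expansion, i.e., the moments $\tilde g_\alpha$ about one center $y_0$ with the Hermite functions evaluated exactly at each target; block partitioning, the target-side Taylor expansion, and the distance cutoff are omitted for brevity, and all names are ours.

#include <cmath>
#include <cstddef>
#include <vector>

// Single-center Hermite FGT for G(x_i) = sum_j q_j exp(-(x_i - y_j)^2/delta).
// Moments A[a] = sum_j q_j ((y_j - y0)/sqrt(delta))^a / a! are accumulated in
// O(N*p), then each target costs O(p), giving O((N + N')*p) overall.
std::vector<double> fgtSingleCenter(const std::vector<double>& x,
                                    const std::vector<double>& y,
                                    const std::vector<double>& q,
                                    double delta, double y0, int p) {
    const double sd = std::sqrt(delta);
    std::vector<double> A(p, 0.0);
    for (std::size_t j = 0; j < y.size(); ++j) {
        const double s = (y[j] - y0) / sd;
        double term = q[j];                       // q_j * s^a / a!
        for (int a = 0; a < p; ++a) { A[a] += term; term *= s / (a + 1); }
    }
    // Hermite functions h_a(t) = e^{-t^2} H_a(t) via the recurrence
    // h_{a+1}(t) = 2 t h_a(t) - 2 a h_{a-1}(t).
    std::vector<double> g(x.size(), 0.0);
    for (std::size_t i = 0; i < x.size(); ++i) {
        const double t = (x[i] - y0) / sd;
        double h0 = std::exp(-t * t), h1 = 2.0 * t * h0, sum = 0.0;
        for (int a = 0; a < p; ++a) {
            sum += A[a] * h0;                     // add A[a] * h_a(t)
            const double h2 = 2.0 * t * h1 - 2.0 * (a + 1) * h0;
            h0 = h1; h1 = h2;
        }
        g[i] = sum;
    }
    return g;
}

The blocked algorithm of this subsection applies this expansion per block pair and adds the target-side Taylor step, so that each target is touched only a constant number of times.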
3 Pricing Bermudan Swaption using Fast Gauss Transform

3.1 Discretization of Expectation with FGT

In this subsection, we show that the expected value calculation appearing in the recurrence relation of the Bermudan swaption can be discretized by numerical integration in the two factor Hull-White model, and that the FGT can be applied to this discretization to reduce the calculation time. In addition, we propose a grid rotation for the numerical integration in order to reduce the discretization error and increase the efficiency of the FGT. For the state variable $X_t$ of the two factor Hull-White model, we introduce the t-dependent coordinate rotation $Y_t = \Xi(t) X_t$ with

\Xi(t) = \begin{pmatrix} \cos\xi(t) & -\sin\xi(t) \\ \sin\xi(t) & \cos\xi(t) \end{pmatrix}.

The relationship between $Y_t$ and $Y_\tau$ is deduced from (2.5) as

Y_\tau = \hat\mu(t,\tau) Y_t + \hat\eta(t,\tau) + \hat U^\tau(t,\tau),

with $\hat\mu(t,\tau) = \Xi(\tau)\,\mu(t,\tau)\,\Xi(t)^T$, $\hat\eta(t,\tau) = \Xi(\tau)\,\eta(t,\tau)$, and $\hat U^\tau(t,\tau) = \Xi(\tau)\,U^\tau(t,\tau)$. The variance-covariance matrix of $\hat U^\tau(t,\tau)$ is

\hat\Sigma(t,\tau) = \Xi(\tau)\,\Sigma(t,\tau)\,\Xi(\tau)^T, \qquad (3.1)

and we let $\hat C(t,\tau)$ be its Cholesky decomposition, $\hat\Sigma(t,\tau) = \hat C(t,\tau)\,\hat C(t,\tau)^T$. Since the probability distribution of $\hat U^\tau(t,\tau)$ under $\mathcal{F}_t$ follows a multivariate normal distribution, the transition density function from $Y_t = y_t = (y_{0,t}, y_{1,t})^T$ to $Y_\tau = y_\tau = (y_{0,\tau}, y_{1,\tau})^T$ is expressed by

\phi(y_\tau; y_t) = \frac{1}{2\pi\,|\hat\Sigma(t,\tau)|^{1/2}} \exp\left(-\frac{1}{2}\left(y_\tau - \hat\mu y_t - \hat\eta\right)^T \hat\Sigma^{-1}\left(y_\tau - \hat\mu y_t - \hat\eta\right)\right),

and the derivative value is approximately computed by numerical integration with $y_{\tau,i}$ and $w_i$ as nodes and weights. Furthermore, when we calculate the prices for various states $Y_t = y_{t,j}$, $(j = 1, \dots, N_t)$ at time $t$, we use the same nodes and weights $y_{\tau,i}, w_i$ for the numerical integration. Therefore, we obtain the approximation

\hat V(t, y_{t,j}) = P_t(\tau) \sum_{i} w_i\, \hat V(\tau, y_{\tau,i})\, \phi(y_{\tau,i}; y_{t,j}). \qquad (3.2)

To apply the FGT to this approximation, we introduce the new variables

n_{\tau,i} = \begin{pmatrix} n_{0,\tau,i} \\ n_{1,\tau,i} \end{pmatrix} = \hat C^{-1}(t,\tau)\, y_{\tau,i}, \qquad m_{t,j} = \begin{pmatrix} m_{0,t,j} \\ m_{1,t,j} \end{pmatrix} = \hat C^{-1}(t,\tau)\left\{\hat\mu(t,\tau)\, y_{t,j} + \hat\eta(t,\tau)\right\}, \qquad (3.3)

and the density can be written as

\phi(y_{\tau,i}; y_{t,j}) = \frac{1}{2\pi\,|\hat\Sigma(t,\tau)|^{1/2}} \exp\left(-\frac{\|n_{\tau,i} - m_{t,j}\|^2}{2}\right),

where $|\cdot|$ is the determinant of a matrix. As a result, equation (3.2) can be computed quickly using a two-dimensional FGT. We divide $(n_{0,\tau,i}, n_{1,\tau,i})$ and $(m_{0,t,j}, m_{1,t,j})$ into square blocks of width $\sqrt{2}$ and apply the FGT to each block combination if the distance between the center points of the blocks is less than $\sigma_{\max}$; otherwise we omit the calculation, as the contribution from distant blocks is sufficiently small.

3.2 Numerical Integration Settings and Grid Rotation

As an example of a specific numerical integration, we propose the following setting. The random variable $Y_\tau$ is distributed around the origin $(0,0)$ with variances $\hat\Sigma_{00}(0,\tau)$ and $\hat\Sigma_{11}(0,\tau)$ under the risk-neutral measure $Q$. For a sufficiently large $M$ and a number of partitions $N_y$, we set the integration region to

\left[-M\sqrt{\hat\Sigma_{00}(0,\tau)},\ M\sqrt{\hat\Sigma_{00}(0,\tau)}\right] \times \left[-M\sqrt{\hat\Sigma_{11}(0,\tau)},\ M\sqrt{\hat\Sigma_{11}(0,\tau)}\right] \qquad (3.4)

in the $y_{0,\tau}$ and $y_{1,\tau}$ directions respectively, and apply the midpoint rule of numerical integration.
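The midpoint-rule node construction just described can be sketched in C++ as follows. The names are ours; $s_0 = \sqrt{\hat\Sigma_{00}(0,\tau)}$ and $s_1 = \sqrt{\hat\Sigma_{11}(0,\tau)}$ are passed in, and mapping the nodes back to $X$ coordinates through $\Xi(\tau)$ is left to the caller.

#include <cstddef>
#include <vector>

struct Node2D { double y0, y1, w; };

// Midpoint-rule nodes and weights on [-M*s0, M*s0] x [-M*s1, M*s1],
// where s0, s1 are the standard deviations of Y_tau under Q.
// The weight of each node is the area of its cell.
std::vector<Node2D> midpointGrid(double s0, double s1, double M, int Ny) {
    std::vector<Node2D> nodes;
    nodes.reserve(static_cast<std::size_t>(Ny) * Ny);
    const double dy0 = 2.0 * M * s0 / Ny;
    const double dy1 = 2.0 * M * s1 / Ny;
    for (int i = 0; i < Ny; ++i)
        for (int j = 0; j < Ny; ++j)
            nodes.push_back({-M * s0 + (i + 0.5) * dy0,
                             -M * s1 + (j + 0.5) * dy1,
                             dy0 * dy1}); // midpoint rule: weight = cell area
    return nodes;
}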
When a portfolio contains a Bermudan swaption, the method proposed in this paper can be combined with grid interpolation (see e.g. [6]). In this approach, grid points are set up in the state space at time $s_i$, and the values at the grid points are obtained by the method described in this paper as a preprocess; the value on each path generated by the Monte Carlo method is then obtained by interpolating between the values at the grid points. To use this method, it is necessary to set up time grids to calculate the price at points other than the exercise dates. There are two choices for computing these points: starting from immediately after the exposure point, as in Figure 3, or from the next exercise point, as in Figure 4. When the time interval is narrow, the integration region of equation (3.4) becomes large relative to the grid spacing, resulting in a coarse discretization, so it is better to use the method of Figure 4. In addition, the method of Figure 4 has the advantage of higher parallelization efficiency, because each expected-exposure grid can be calculated independently, whereas the method of Figure 3 must be calculated sequentially.

Extension of application

The method proposed in this section is not limited to the computation of Bermudan swaptions under the two factor Hull-White model. Let $X_t$ be a state variable of arbitrary dimension and $\mu(t,\tau)$, $\eta(t,\tau)$ be deterministic functions; by considering $U(t,\tau)$ as an arbitrary multivariate normal distribution, the same equations carry over, and the method can be applied to calculate the expected value $\mathbb{E}_t[V(\tau, X_\tau)]$ for payoffs that can be written in terms of the state variables at $\tau$ alone. Models with such a setup include, for example, the G$n$++ model (see e.g. [3]) and a multi-asset model in which each asset follows correlated Black-Scholes dynamics. The rotation matrix $\Xi$ is chosen to be an orthogonal matrix appropriate for the model; we recommend an orthogonal matrix such that the left side of formula (3.1) becomes close to a diagonal matrix.

Experimental Overview and Environment

In these experiments we use the two factor Hull-White model with constant coefficients ($\kappa_0(t) = \kappa_0$, $\kappa_1(t) = \kappa_1$, $\sigma_0(t) = \sigma_0$, $\sigma_1(t) = \sigma_1$). Other variables that appear in our method are described in Appendix A. We implemented our FGT method for pricing Bermudan swaptions in the C++ language and evaluated speed and precision on a Xeon Gold 6130 CPU. The operating system was Windows Server 2016 Standard Edition, and the compiler was Microsoft Visual Studio 2017. The conditions for calculating the Bermudan swaption price are $M = 8$ and $\sigma_{\max} = 8$, and we choose the branch in Section 3.3 for which $\hat{\Sigma}_{00}(0,\tau)$ becomes smallest, for continuity. For the FGT, the width of the block is set to $2\sqrt{\delta}$, twice the width mentioned in subsection 2.4, to reduce the number of inter-block calculations, which become a bottleneck when the number of grid points is small; the expansion order of the FGT is set to $\alpha_0 = \alpha_1 = \beta_0 = \beta_1 = 32$ to achieve double precision. Using USD market data from Bloomberg for September 20, 2018 and December 5, 2019, we calibrated the model parameters to swaptions for which the sum of the maturity and the underlying term is between 4Y and 6Y. The discount-rate and LIBOR-forecast-rate curves at each reference date are shown in Figures 5 and 6. The calibrated model parameters are given in Tables 1 and 2, and these values were used in the following experiments. The market data of September 20, 2018 and December 5, 2019 were chosen because the correlation $\rho$ of the former is close to $-1$ while that of the latter is not.
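As a reference for the grid-interpolation step described earlier in this section, the mapping of precomputed grid values onto simulated paths can be sketched as follows (illustrative names and stand-in data; SciPy's RegularGridInterpolator performs bilinear interpolation in two dimensions by default):

import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Precomputed Bermudan values on a state-space grid at time s_i
# (stand-in data; in practice these come from the FGT backward induction).
y0_nodes = np.linspace(-0.05, 0.05, 101)
y1_nodes = np.linspace(-0.05, 0.05, 101)
V_grid = np.random.default_rng(1).random((101, 101))

interp = RegularGridInterpolator((y0_nodes, y1_nodes), V_grid)

# Monte Carlo states at time s_i, clipped into the grid's range for the demo.
paths = np.clip(np.random.default_rng(2).normal(0.0, 0.01, (10000, 2)),
                -0.05, 0.05)
V_on_paths = interp(paths)   # bilinear interpolation onto the simulated paths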
Comparing the speed and accuracy of Bermudan swaption PV evaluation

We calculated the price of a Bermudan swaption whose underlying swap has a 3-month roll for both the fixed and floating legs; the fixed rate is set near the 5-year swap rate (September 20, 2018 data: 3.0564%; December 5, 2019 data: 1.5875%), the floating index is USD LIBOR 3M, and the exercise dates start 3 months forward with a 3-month frequency. To compare efficiency, we define three settings: "FGT" uses the FGT but no grid rotation, "FGT+Rotate" uses both the FGT and grid rotation, and "NoFGT+Rotate" uses only grid rotation. The number of partitions $N_y$ ranges from 50 to 6400 (up to 600 for NoFGT+Rotate, due to its calculation time). We take the value calculated by "FGT+Rotate" with $N_y = 25600$ as the true value and plot $N_y$ against the difference from the true value in Figures 7 and 8 as log-log graphs. In these figures, "FGT+Rotate" and "NoFGT+Rotate" are in almost perfect agreement, since we set $\alpha_{\max}, \beta_{\max}$ of the FGT beyond double-precision accuracy. From another perspective, Figure 7, using the September 20, 2018 data, shows that the computational accuracy of "FGT" suddenly becomes worse as the grid becomes coarser, while the error of "FGT+Rotate" is suppressed even on a coarse grid. This phenomenon is not seen in Figure 8, which uses the December 5, 2019 data. This indicates that grid rotation improves stability on coarse grids in markets where the correlation is very close to $-1$.

Figures 9 and 10 are log-log plots of $N_y$ against the computational speed for each calculation method. Theoretically, the computation time increases on the order of the square of $N_y$ when the FGT is used and on the order of the fourth power of $N_y$ when it is not, because the number of integration nodes is proportional to $N_y^2$. Indeed, the figures show the theoretical growth rates when the number of grid points is large. In contrast, when the FGT is used, the computation time barely changes on coarse grids; the reason appears to be that the block-to-block calculations of (2.12) take up a large portion of the computational time when the number of grid points per block is small. An improvement that switches to direct computation when the number of grid points in a block falls below a certain level could therefore be considered. In addition, the results for the September 20, 2018 data show a significant increase in computation time on coarse grids when grid rotation is not used. The likely reason is that, when the correlation is very close to $-1$ and the grid is not rotated, the density of grid points increases due to the increase in $\Delta n_\tau$, and the number of blocks in the FGT increases.

Figures 11 and 12 are log-log plots with computation time on the horizontal axis and computational accuracy on the vertical axis. Within each method, the computation time increases as the accuracy improves, indicating a trade-off between computation time and accuracy; when comparing methods, the lower-left a curve lies, the more efficient the calculation. The LSMC results in the figures were measured in the same computing environment by implementing the Least Squares Monte Carlo method, as follows. First, paths of $X_t$ are generated by formula (2.2), with $U_{t,\tau}$ replaced by random numbers whose variance-covariance matrix is $\Sigma(t,\tau)$.
The optimal exercise boundary is estimated by a two-dimensional second-order polynomial regression, using the positive part of the underlying value and the spot rate as explanatory variables and the pathwise cashflow values as the explained variable. Using this boundary, we compute the optimal exercise time on each path and take the average value of the cashflows. We define this LSMC method with 10,000 paths as one set, calculate up to $N_{MC} = 3200$ sets, and estimate the Monte Carlo error as the standard deviation of the set means. This procedure estimates the optimal exercise boundary with a small number of explanatory variables for a two-factor model, which introduces a downward bias into the calculation results; we intended to account for this bias by using the standard deviation of the mean as the error.

In the results for the September 20, 2018 data, the accuracy deteriorates rapidly relative to the calculation time when grid rotation is not used; with grid rotation, the deterioration of accuracy is suppressed, and the rotated case is generally superior. It is also shown that the case with the FGT is more computationally efficient than both the case without the FGT and the LSMC case. With grid rotation, the deterioration of accuracy is suppressed even when using the September 20, 2018 data and a coarse grid, which shows that grid rotation improves computational stability. From the above experiments, we conclude that the proposed method, combining the FGT and grid rotation, is stable even when the correlation is close to $-1$.

Numerical Example of Expected Exposure

In this subsection, we compute the expected (positive) exposure appearing in CVA and compare the results with and without grid rotation. Let $V_s$ be the living price at time $s$ of the same 5Y Bermudan swaption as in the previous subsection; we calculate its expected positive exposure for each day from time 0 to the trade maturity. The expectation is calculated by the Monte Carlo method, generating 10,000 paths of $X_s$. On paths where $V_s$ is a not-yet-exercised Bermudan swaption, it is calculated by the proposed method with the same setting $N_y = 300$, in the same way as the previous LSMC setting, and the value $V_s$ on a path $X_s$ is obtained by bilinear interpolation. In the case that the Bermudan swaption has already been exercised by $s$, $V_s$ is calculated analytically from the fixed or floating cashflows. Figure 13 shows a graph of the exposures computed without grid rotation on the September 20, 2018 data. We can see that the values jump, especially just before the exercise dates, which means that the error becomes large when $\tau - t$ is small. In contrast, no such jumps are found when we use the December 5, 2019 data or when grid rotation is applied. These results indicate that when the correlation is close to $-1$, the accuracy of the proposed method without grid rotation tends to be poor, especially for calculations with short time intervals, and that grid rotation is superior from the viewpoint of stability.

Conclusion

In this study, we developed and evaluated a method for calculating the price of Bermudan swaptions in the two factor Hull-White model by numerical integration combined with the FGT and grid rotation. In the proposed method, the expected value appearing in the price calculation in the two factor Hull-White model is expressed as the integral of the product of the payoff and a density function including a coordinate rotation, and it is discretized as a numerical integral by introducing a grid.
For the rotation angle of the coordinate rotation, the optimal value was derived by imposing the condition that the density of grid points is maximized, from the viewpoint of the accuracy of the numerical integration. Numerical experiments showed that grid rotation stabilized the calculation accuracy and improved the calculation speed for market data with correlation close to $-1$, and that the method was superior to the least-squares Monte Carlo method in terms of both calculation speed and accuracy. Furthermore, it was shown that this method can be applied to the calculation of exposures in CVA. In the numerical experiments, the accuracy deteriorated when the correlation was close to $-1$ and the time of the exposure calculation was very close to the time of the next exercise, but the stability was improved by combining grid rotation.

Appendix A

The variables that appear in the equations in the text are defined in this appendix.
Explicit optimal constants of two critical Rellich inequalities for radially symmetric functions

We consider two critical Rellich inequalities with singularities at both the origin and the boundary in the higher order critical radial Sobolev spaces $W_{0, {\rm rad}}^{k, p}$, where $1 < p = \frac{N}{k}$. We give the explicit values of the optimal constants of two critical Rellich inequalities for radially symmetric functions in $W_{0, {\rm rad}}^{k, \frac{N}{k}}$. Furthermore, the (non-)attainability of the optimal constants is also discussed.

Introduction

Let $N \ge 2$, $1 < p < N$, and let $B_R \subset \mathbb{R}^N$ be the ball with center $0$ and radius $R > 0$. The classical Hardy inequality
$$\left(\frac{N-p}{p}\right)^p \int_{B_R} \frac{|u|^p}{|x|^p}\,dx \le \int_{B_R} |\nabla u|^p\,dx \qquad (1)$$
holds for all $u \in W^{1,p}_0(B_R)$, where $W^{1,p}_0(B_R)$ is the completion of $C^\infty_c(B_R)$ with respect to the norm $\|\nabla(\cdot)\|_{L^p(B_R)}$; we refer to the celebrated work by G. H. Hardy [24]. The inequality (1) has important applications to partial differential equations, for example to stability, global existence, instantaneous blow-up, and so on; see e.g. [10], [7]. It is well-known that $\left(\frac{N-p}{p}\right)^p$ in (1) is the optimal constant, that it is not attained in $W^{1,p}_0(B_R)$, and that (1) expresses the embedding $W^{1,p}_0 \hookrightarrow L^{p^*,p}$, where $p^* = \frac{Np}{N-p}$ is the Sobolev critical exponent and $L^{p,q}$ are the Lorentz spaces. By the inclusion property of the Lorentz spaces, $L^{p^*,p} \hookrightarrow L^{p^*,q}$ for any $q \in [p,\infty]$, the Hardy inequality (1) is stronger than the Sobolev inequality $S_{N,p}\|u\|_{p^*} \le \|\nabla u\|_p$ from the viewpoint of the embeddings
$$W^{1,p}_0 \hookrightarrow L^{p^*,p} \hookrightarrow L^{p^*,p^*} = L^{p^*}.$$
It is also well-known that Sobolev's optimal constant $S_{N,p}$ is not attained on $B_R$, while $S_{N,p}$ is attained on $\mathbb{R}^N$ (ref. [6,50]). On the other hand, in the critical case $p = N$, both inequalities become trivial, since the two optimal constants $\left(\frac{N-p}{p}\right)^p$ and $S_{N,p}$ become zero. Instead of these two inequalities, the Trudinger-Moser inequality is considered as a limiting case of the Sobolev inequality (ref. [53,36]), and the critical Hardy inequality
$$\left(\frac{N-1}{N}\right)^N \int_{B_R} \frac{|u|^N}{|x|^N \left(\log \frac{R}{|x|}\right)^N}\,dx \le \int_{B_R} |\nabla u|^N\,dx \qquad (2)$$
is considered as a limiting case of the Hardy inequality (ref. [30,29,8,9,32,49]). It is known that $\left(\frac{N-1}{N}\right)^N$ in (2) is the optimal constant and that it is not attained in $W^{1,N}_0(B_R)$ (ref. [4,2,28] etc.). It is also known that Trudinger-Moser's optimal constant is attained on $B_R$ (ref. [14]). For these embedding theorems and for the definitions and classification of the relevant function spaces (the rearrangement invariant spaces, the Orlicz spaces, and so on), see e.g. §1 in [15].

In the present paper, we focus on the higher order case. A higher order generalization of (1) was first proved by Rellich [40] in $W^{2,2}_0(B_R)$, $N \ge 5$. In general, let $k, m \in \mathbb{N}$, $m \ge 1$, $k \ge 2$, and $1 < p < \frac{N}{k}$, and set $|u|_{k,p} = \|\nabla^k u\|_{L^p(B_R)}$. Then the Rellich inequality
$$A^p_{k,p} \int_{B_R} \frac{|u|^p}{|x|^{kp}}\,dx \le \int_{B_R} |\nabla^k u|^p\,dx \qquad (3)$$
holds for all $u \in W^{k,p}_0(B_R)$, where $W^{k,p}_0(B_R)$ is the completion of $C^\infty_c(B_R)$ with respect to $|(\cdot)|_{k,p}$. It is also known that $A^p_{k,p}$ are the optimal constants (ref. [17,35,20,46]). For the higher order Sobolev inequalities, we refer to [41,31,48,16]. In the critical case $p = \frac{N}{k}$, neither of the two inequalities holds. Instead of the higher order Sobolev inequalities, the higher order Trudinger-Moser inequalities, called the Adams inequalities, are considered (ref. [1], see also [42]); in particular, Adams [1] gave the explicit values of their optimal exponents in $W^{k,\frac{N}{k}}_0(B_R)$. On the other hand, instead of the Rellich inequalities (3), the second order critical Rellich inequality (4), with the singular weight $|x|^{-N}\left(\log\frac{R}{|x|}\right)^{-\frac{N}{2}}$, is considered (ref. [18,3,5,13]).
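The sharpness of the Hardy constant in (1) can be seen from an elementary pointwise computation, recorded here for convenience: for the radial profile $u(x) = |x|^{-\frac{N-p}{p}}$ one has $|\nabla u(x)| = \frac{N-p}{p}\,|x|^{-\frac{N-p}{p}-1}$, hence
$$|\nabla u(x)|^p = \left(\frac{N-p}{p}\right)^p \frac{|u(x)|^p}{|x|^p} \quad \text{pointwise on } B_R \setminus \{0\},$$
so suitable truncations of $u$ drive the Rayleigh quotient of (1) to $\left(\frac{N-p}{p}\right)^p$, while $u$ itself does not belong to $W^{1,p}_0(B_R)$. Logarithmic analogues of this profile play the corresponding role for the critical inequalities below.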
It is known that the constant $\left(\frac{N-2}{\sqrt{N}}\right)^N$ in (4) is optimal. In the higher order case $W^{k,2}_{0,{\rm rad}}(B_R)$ ($k \ge 3$), the validity of the critical Rellich inequalities and the optimal constants were treated in the pioneering work [5]. Unfortunately, the argument in [5] seems to contain a gap; see Remark 4.

The purpose of this paper is to establish the validity of the critical Rellich inequalities and to give the optimal exponents and the optimal constants in $W^{k,\frac{N}{k}}_{0,{\rm rad}}(B_R)$ in the case $a = 1$. Namely, we show the positivity of the optimal constants $R^{\rm rad}_{k,\gamma}$, give the optimal exponents with respect to $\gamma$ and the explicit values of $R^{\rm rad}_{k,\gamma}$, and show the (non-)attainability of $R^{\rm rad}_{k,\gamma}$. Here $R_{k,\gamma}$ and $R^{\rm rad}_{k,\gamma}$ are the best constants $R$ in the inequality
$$R \int_{B_R} \frac{|u|^p}{|x|^N \left(\log \frac{aR}{|x|}\right)^{\gamma}}\,dx \le \int_{B_R} |\nabla^k u|^p\,dx$$
for $u$ in $W^{k,p}_0(B_R)$ and in $W^{k,p}_{0,{\rm rad}}(B_R)$, respectively. In the case $a > 1$, the potential function $|x|^{-N}\left(\log\frac{aR}{|x|}\right)^{-\gamma}$ has a singularity only at the origin, so there is one optimal exponent with respect to $\gamma$. In the case $a = 1$, however, the potential function has singularities not only at the origin but also at the boundary $\partial B_R$, and therefore there are two optimal exponents with respect to $\gamma$. Our main result is Theorem 1. Actually, the explicit value of $R^{\rm rad}_{k,p}$ in Theorem 1 (iii) was already found by [38] in the setting of homogeneous groups.

Remark 1. From the known results [39,18,11], we see that the behaviour of the potential $|x|^{-N}\left(\log\frac{aR}{|x|}\right)^{-p}$ near the origin yields the same value as the optimal constant of the geometric type Rellich inequality by Owen [39].

Remark 3. It is already known that $R_{2,\gamma} = R^{\rm rad}_{2,\gamma}$ and that $R_{2,\gamma}$ is not attained for $\gamma = p, N$ when $p = 2$ (see [13]), and that $R_{3,2} = R^{\rm rad}_{3,2}$ and $R_{3,2}$ is not attained (see [38]). We conjecture that $R_{k,\gamma} = R^{\rm rad}_{k,\gamma}$ also holds for $\gamma = p, N$ and for any $k$ and $N$.

As a corollary of Theorem 1, we also obtain a non-sharp ($a > 1$) critical Rellich inequality for radially symmetric functions: the inequality
$$R^{\rm rad}_{k,p} \int_{B_R} \frac{|u|^p}{|x|^N \left(\log \frac{aR}{|x|}\right)^{p}}\,dx \le \int_{B_R} |\nabla^k u|^p\,dx \qquad (5)$$
holds for any $u \in W^{k,p}_{0,{\rm rad}}(B_R)$, where $R^{\rm rad}_{k,p}$ is given in Theorem 1. Moreover, the optimal constant of (5) is $R^{\rm rad}_{k,p}$, which is independent of $a \ge 1$ and is not attained.

For simplicity, consider the case $k = 2m$, $m \ge 2$, and $N = 4m$. In Theorem 2.3 (a) in [5], the authors showed that the inequality (6) holds for any $u \in W^{2m,2}_{0,{\rm rad}}(B_1)$ with an explicitly given constant $A(N,m)$, and they asserted that $A(N,m)$ is the optimal constant of (6). Unfortunately, the argument for showing the optimality of $A(N,m)$ in [5] contains a gap: it relies on a particular radial test function $\psi_\delta$, as we explain in §5.

A few comments are in order. Our minimization problem $R^{\rm rad}_{2m,\gamma}$ is related to a polyharmonic elliptic equation with Dirichlet boundary conditions in the critical dimension $N = 4m$, and the minimizer for $R^{\rm rad}_{2m,\gamma}$ is a ground state radial solution of the corresponding Euler-Lagrange equation (7). In [18,3,5,13], the critical Rellich inequalities have been studied via "a reduction of the dimension argument," also called the Brezis-Vazquez transformation or Maz'ya transformation (ref. Corollary 3 in Section 2.1.7 in [34], [10]). In contrast, our argument is based on the argument by Davies-Hinz [17] for the subcritical Rellich inequalities; this argument is more direct, and the calculations are simpler, especially in the higher order case. Note that the function $|x|^{-N}\left(\log\frac{aR}{|x|}\right)^{-\gamma}$ is radially decreasing on $B_R$ if $a \ge e^{\frac{\gamma}{N}}$.
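For completeness, the last claim can be checked directly: writing $f(r) = r^{-N}\left(\log\frac{aR}{r}\right)^{-\gamma}$ for $0 < r < R$, differentiation gives
$$f'(r) = r^{-N-1}\left(\log\frac{aR}{r}\right)^{-\gamma-1}\left(\gamma - N\log\frac{aR}{r}\right),$$
so $f' \le 0$ on $(0,R)$ if and only if $\log\frac{aR}{r} \ge \frac{\gamma}{N}$ for all $r < R$, that is, if and only if $a \ge e^{\frac{\gamma}{N}}$.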
Therefore, we might expect to obtain the non-sharp critical Rellich inequality (8), at least for $u \in W^{k,p}_{\vartheta}(B_R) := \left\{ u \in W^{k,p}(\Omega) \,\middle|\, \Delta^j u|_{\partial\Omega} = 0 \text{ in the sense of traces for } j \in \left[0, \tfrac{k}{2}\right) \right\}$, from Theorem 1 and the iterated Talenti comparison principle (ref. Theorem 1 in [51] and Proposition 3 in [21]). Unfortunately, this does not follow from our result directly, since we use the assumption $u'(R) = 0$ for radial functions $u$; see also [20,21]. However, we believe that this is a purely technical restriction and that the non-sharp critical Rellich inequality (8) holds for any $u \in W^{k,p}_0(B_R)$.

This paper is organized as follows. In §2, we consider a limit of the second order subcritical Rellich inequality (3) as $p \nearrow \frac{N}{2}$ via a suitable transformation. Of course, we cannot take a limit of the subcritical Rellich inequality (3) in the usual sense; however, we can take a limit of an inequality which is equivalent to the subcritical Rellich inequality (3) via the transformation. This observation is inspired by the paper [26], which studies the Hardy and Sobolev inequalities; for limits of the Hardy and Sobolev inequalities, see the survey [45]. From this observation, we catch a glimpse of the strategy of the proof of the second order critical Rellich inequality: certain Hardy type inequalities are the key ingredients. In order to show the higher order critical Rellich inequalities in Theorem 1, we need more general Hardy type inequalities; therefore, in §3, we study such Hardy type inequalities and their improvements. To the best of our knowledge, these Hardy type inequalities are new. In §4, we show Theorem 1 in two parts. In Part I, we derive the critical Rellich inequalities from the Hardy type inequalities in §3 and subcritical Rellich type inequalities, and we also show the attainability of $R^{\rm rad}_{k,\gamma}$ for $\gamma \in (p,N)$. In Part II, we calculate the explicit values of $R^{\rm rad}_{k,\gamma}$ for $\gamma = p, N$ and show the non-attainability of $R^{\rm rad}_{k,\gamma}$ for $\gamma = p, N$. In order to calculate them, logarithmic type functions $\left(\log\frac{R}{|x|}\right)^{\alpha}$ ($\alpha \ne 1$) are important; for the Trudinger-Moser inequality, it is known that the Moser type function $\log\frac{R}{|x|}$ is the key to calculating the optimal exponents, even in the higher order case (ref. [36,1]). In §5, we explain the cause of the gap in [5] in deriving the optimal constant $R^{\rm rad}_{k,p}$. In §6, we calculate $\nabla^k \left(\log\frac{R}{|x|}\right)^{\alpha}$ and show the (non-)compactness of the embeddings related to our minimization problems $R^{\rm rad}_{k,\gamma}$.

We fix several notations: $\omega_{N-1}$ denotes the area of the unit sphere in $\mathbb{R}^N$. Throughout the paper, if a radial function $u$ is written as $u(x) = \tilde{u}(|x|)$ for some function $\tilde{u} = \tilde{u}(r)$, we write $u(x) = u(|x|)$, admitting some ambiguity.

A limit of the subcritical 2nd-order Rellich inequality and an observation for the critical Rellich inequality

Recently, in the inspiring paper [26], the improved Hardy type inequality (9) was found. The inequality (9) is in fact equivalent to the classical Hardy inequality (1) on $\mathbb{R}^N$, via the transformation (10). One of the virtues of the improved Hardy type inequality (9) is that we can take its limit as $p \nearrow N$ in the usual sense; this is in striking contrast with the classical Hardy inequality (1). As the limit of (9) as $p \nearrow N$, we obtain the critical Hardy inequality (2).
In this sense, we can say that the critical Hardy inequality (2) is a limiting form of the subcritical Hardy inequality (1) on $\mathbb{R}^N$ as $p \nearrow N$, via the equivalent inequality (9) and the transformation (10).

Remark 5. (Various transformations) Various transformations connecting two derivative norms $\|\nabla(\cdot)\|_{L^p(\Omega)}$, such as (10), have been considered by [19,54,25,26,47,43,44]. In fact, we can regard these transformations as special or generalized cases of a transformation in the paper [19]: the transformation (11) for $u \in W^{1,2}_{0,{\rm rad}}(B_1)$ is given in Theorem 18 of [19], where $G_{\Omega,z}(y)$ denotes the Green function of a domain $\Omega$ with singularity at $z \in \Omega$. We can observe that the transformation (10) is (11) in the case $W^{1,p}_{0,{\rm rad}}(B_1)$, $p < N$, $z = 0$, $B_1 \subset \mathbb{R}^N = \Omega$. For an explanation of the other transformations, see e.g. §2 in [44].

In this section, we consider an analog of this for the subcritical second order Rellich inequality (3): we find a limiting form of (3) as $p \nearrow \frac{N}{2}$, for radial functions only, via a suitable transformation. Differently from the first order case, there is no such clean transformation in the second order case. However, in view of Remark 5, we can expect the transformation (12) below with $\alpha = \frac{N-2p}{p-1}$ to be suitable; more generally, we shall consider (12) with a general exponent $\alpha > 0$. Thanks to the transformation (12), we obtain a limiting form of the subcritical second order Rellich inequality (3). Somewhat surprisingly, it is a first order inequality, but it is an important ingredient in proving the critical Rellich inequalities associated with $R^{\rm rad}_{2,\frac{N}{2}}$ and $R^{\rm rad}_{2,N}$; see also §4.

Consider the transformation (12) between radial functions, with $\alpha > 0$, and define the differential operator $L_{p,\alpha}$ accordingly. A direct computation then shows that the subcritical Rellich inequality on $\mathbb{R}^N$ for radial functions $w$ is equivalent, via (12), to the inequality (15) for radial functions $u$. If $\alpha = \frac{N-2p}{p-1}$ and $p = 1$, the two sides of (16) coincide; however, since the subcritical Rellich inequality (3) does not hold for $p = 1$, we exclude the case $p = 1$, and the equality (16) does not hold in general when $p > 1$.

Now we take the limit of the inequality (15), which is equivalent to the subcritical Rellich inequality on $\mathbb{R}^N$, as $p \nearrow \frac{N}{2}$. If $\alpha \to 0$ as $p \nearrow \frac{N}{2}$, the left-hand side of (15) becomes an indeterminate form. Therefore, in order to obtain a limiting form, we assume that $\alpha \to 0$ as $p \nearrow \frac{N}{2}$ at a prescribed rate; under this assumption, both sides of (15) admit limits, and the limiting form of (15) is the first order inequality (17). Note that the inequality (17) is already known by [33]. Consequently, we obtain the following.

Proposition 2. We obtain the inequality (17) as a limiting form of the subcritical Rellich inequality on $\mathbb{R}^N$ as $p \nearrow \frac{N}{2}$, via the equivalent inequality (15) and the transformation (12).

The inequality (17) is an important ingredient in the proof of the critical Rellich inequality. Indeed, if we can also show the Hardy type inequality (18), then the desired second-order critical Rellich inequality follows from (17) and (18). Actually, the inequality (18) holds when $A = \frac{N}{2} - 1$ or $N - 1$; see §3 and §4. In order to prove Theorem 1 completely, we need Hardy type inequalities more general than (18), which we develop in the next section.
Another Hardy type inequality with two singularities at the origin and the boundary

In Proposition 1.2 in [25] (the case $a \gg 1$) and Proposition 1 in [27] (the case $a = 1$), a generalization (19) of the critical Hardy inequality (2) to weighted critical Sobolev spaces was shown. We observe that the inequality (19) goes to the critical Hardy inequality (2) as $p \nearrow N$. In this section, we investigate another generalization of the critical Hardy inequality (2), in the subcritical Sobolev spaces $W^{1,p}_0(B_R)$. Our inequality has a structure similar to (19); see p. 101 in [27] and Remark 8.

Theorem 2. Let $1 < p \le N$. Then the inequality (20) holds for any $u \in W^{1,p}_0(B_R)$, the constant $\left(\frac{p-1}{p}\right)^p$ is optimal, and it is not attained. Furthermore, the improved Hardy inequality (21) holds.

Remark 8. The inequality (20) is invariant under the scaling $u_\lambda$ ($x \in B_R$). Furthermore, in the same way as the proof in [28], we can show that the radial function $\left(\log\frac{R}{|x|}\right)^{\frac{p-1}{p}}$ is the virtual minimizer of the associated minimization problem (22). More precisely, we can argue as follows. If there exists a nonnegative minimizer of (22), then there also exists a nonnegative radial minimizer $U$. On the other hand, if there exist two nonnegative minimizers $u, v$, then there exists $C = C(u,v) > 0$ such that $u = Cv$. Applying this property to $u = U$ and $v = U_\lambda$ implies $C = 1$ and $U = U_\lambda$, thanks to the scale invariance structure; from this we can identify the virtual minimizer. Here, we recall the improved Hardy type inequality (9) in §2, which was shown by Ioku [26]. By comparing the two potentials, we can see that our inequality (20) is weaker than the improved inequality (9). However, both inequalities (9) and (20) go to the critical Hardy inequality (2) as $p \nearrow N$, and both have a scale invariance structure under their respective scalings. Besides, the proof of our inequality (20) is simpler and more direct, since we do not use a transformation like (10).

Proof. First, we show the inequality (20) in a similar way to [49,33], starting from the divergence of a suitable logarithmically weighted vector field. Setting $\alpha = p$ and $\beta = p - 1$ and integrating by parts, we obtain, for any $u \not\equiv 0$, the inequality (20) for $p \in (1, N]$. Next, by the fundamental inequality for convex powers, we obtain (21), and hence the non-attainability of the optimal constant $\left(\frac{p-1}{p}\right)^p$ in (20), except for $p = N$; the case $p = N$ was already settled by [28]. Finally, we show the optimality of the constant $\left(\frac{p-1}{p}\right)^p$ in (20): for $\gamma > \frac{p-1}{p}$, a test function built from $\left(\log\frac{R}{|x|}\right)^{\gamma}$ shows that no larger constant is admissible. Therefore the constant $\left(\frac{p-1}{p}\right)^p$ in (20) is optimal.

More generally, we can show the following inequality (24), which includes various known inequalities, in the same way as the above proof; the special case $\alpha = N - p$ is shown by [33]. We omit the proof.

Theorem 3. Let $1 < p < \infty$ and $\beta \ge 1 - p$. Then the inequality (24) holds. In the case $\alpha = N - p$, the remainder term $\tilde{\psi}_{N,p,\alpha,\beta}(u)$ vanishes; therefore we give a remainder term of the inequality (24) only in this case.

Theorem 4. The inequality (25) holds for any $u \in C^1_c(B_R)$, where $C_{p,N}$ depends only on $p$ and $N$. The proof of Theorem 4 is the same as that of Theorem 1 in [27], which treats the case $\beta = 0$; in their proof, a transformation of $u$ is used, and in order to show Theorem 4, it is enough to replace it by the analogous transformation $v$, noting that $v(0) = 0$ even without assuming $u(0) = 0$. We omit the proof of Theorem 4.

From Theorem 3 and Theorem 4, we obtain the following.

Theorem 5. Let $1 < p < \infty$ and $\beta \ge 1 - p$. Then the corresponding inequality holds for any radial function $u \in C^2_{c,{\rm rad}}(B_R)$. In particular, if $p = 2$, $\alpha = 0$, and $\beta = 0$, then the inequality (28) holds for any function $u \in C^2_c(B_R)$.

Proof. (Proof of Theorem 5) First we assume that $u$ is a radial function.
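The divergence computation underlying the first step of the proof can be recorded explicitly (a supplied verification, consistent with the choice $\alpha = p$, $\beta = p-1$ above): for $0 < |x| < R$,
$$\operatorname{div}\left(\frac{x}{|x|^{\alpha}\left(\log\frac{R}{|x|}\right)^{\beta}}\right) = \frac{N-\alpha}{|x|^{\alpha}\left(\log\frac{R}{|x|}\right)^{\beta}} + \frac{\beta}{|x|^{\alpha}\left(\log\frac{R}{|x|}\right)^{\beta+1}},$$
and with $\alpha = p \le N$ and $\beta = p-1 > 0$, both terms on the right-hand side are nonnegative, which is what the integration by parts exploits.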
By Theorem 3, the claim follows for radial $u$. Next we assume $p = 2$, $\alpha = 0$, and $\beta = 0$; by Theorem 2, we can then show (28) without radial symmetry.

Critical Rellich inequalities: proof of Theorem 1

In this section, we show Theorem 1. In Part I, we show the positivity and the attainability of $R^{\rm rad}_{k,\gamma}$. In Part II, we give the explicit values and the non-attainability of the optimal constant $R^{\rm rad}_{k,\gamma}$ for $\gamma = p, N$.

Inequality: Part I on the proof of Theorem 1

In this subsection, we show lower bounds (29) for $R^{\rm rad}_{k,p}$ and $R^{\rm rad}_{k,N}$, which imply that $R^{\rm rad}_{k,\gamma} > 0$ for $\gamma \in [p, N]$. More generally, we show the following.

Theorem 6. (I) If $\alpha \le N - 2mp$, then the inequality (30) holds for any radial function, where $\psi_{N,p,\alpha,\beta}(u)$ is given by Corollary 2. (II) If $2(1-p) < \alpha \le N - 2mp$, then the corresponding inequality holds for any radial function, where $C(N,m,p,\beta)$ is given by Theorem 7 and $D(N,m,p,\alpha)$ is an explicit constant. (III) If $\alpha \le N - (2m+1)p$, then the inequality (32) holds for any radial function. (IV) If $2 - 3p < \alpha \le N - (2m+1)p$, then the corresponding inequality holds for any radial function $u \in C^{2m+1}_{c,{\rm rad}}(B_R)$, with an explicit constant $E(N,m,p,\alpha)$.

In order to show Theorem 6, we recall the subcritical Rellich type inequalities by Davies-Hinz [17], an inequality by Musina [37], and the Hardy type inequality (34) for $C^1$ functions. Furthermore, Theorem 8 holds true even if $C^2_{c,{\rm rad}}(B_R \setminus \{0\})$ is replaced by $C^2_{c,{\rm rad}}(B_R)$. Before the proof of Lemma 1, we also recall the one-dimensional Hardy type inequality (37), valid for any $a \in \mathbb{R}$, $p > 1$, and $w \in C^1(0,R)$ with $w(0) = w(R) = 0$.

Proof. (Proof of Lemma 1) Let $\eta$ be a smooth cutoff function with $\eta \equiv 0$ on $B_{1/2}$ and $\eta \equiv 1$ outside a slightly larger ball; the desired estimate then holds for any function $u \in C^{2m}_c(B_R)$, and hence Theorem 7 holds true even on $C^{2m}_c(B_R)$. Next, we recall the proof of Theorem 8 and check that there are no problems. Let $u \in C^2_{c,{\rm rad}}(B_R)$; note that $r^{N-1}u'(r) = 0$ at $r = 0$ even without assuming $u(0) = 0$. From (37), the desired estimate follows, so Theorem 8 holds true even if $C^2_{c,{\rm rad}}(B_R \setminus \{0\})$ is replaced by $C^2_{c,{\rm rad}}(B_R)$.

From Lemma 1 and the Hardy type inequalities (26), (27), (34), we can obtain Theorem 6; the proof of (III), i.e. of (32), is completely the same as that of (I).

Proof. (Part I on the proof of Theorem 1) We obtain the lower estimates (29) from Theorem 6 with $\alpha = 0$, $p = \frac{N}{2m}$. Since $R^{\rm rad}_{k,p}, R^{\rm rad}_{k,N} > 0$, we have $R^{\rm rad}_{k,\gamma} > 0$ for $\gamma \in [p, N]$. Moreover, $R^{\rm rad}_{k,\gamma}$ is attained for $\gamma \in (p, N)$ by the compactness of the embedding in Proposition 3 in §6.

Optimality and attainability: Part II on the proof of Theorem 1

Let $kp = N$. In order to calculate the optimal constant $R^{\rm rad}_{k,\gamma}$, it is important to find a virtual minimizer of $R^{\rm rad}_{k,\gamma}$. Differently from the first order case, it seems difficult to find a scale invariance structure of the derivative term $|u|_{W^{k,p}_0}$ even for radial functions $u$; from this point of view, it seems difficult to find the virtual minimizer of $R^{\rm rad}_{k,\gamma}$. However, we expect the existence of functions which play roles similar to those in the first order case. Such important functions are $V_1(x) = \left(\log\frac{R}{|x|}\right)^{\frac{p-1}{p}}$ and a second logarithmic power $V_2$ of $\log\frac{R}{|x|}$. Note that $\gamma = p$ is the optimal exponent with respect to the singularity of the potential $|x|^{-N}\left(\log\frac{R}{|x|}\right)^{-\gamma}$ at the origin, while $\gamma = N$ is the optimal exponent with respect to its boundary singularity. At the end of Part II of the proof of Theorem 1, we shall show that $V_1$ (respectively, $V_2$) is a virtual minimizer of $R^{\rm rad}_{k,p}$ (respectively, $R^{\rm rad}_{k,N}$); for the details, see the proof below.

Remark 9.
We observe that in the first order case $k = 1$, $V_1$ coincides with $\left(\log\frac{R}{|x|}\right)^{\frac{N-1}{N}}$, which is known as a virtual minimizer of the critical Hardy inequality (2). Except for the first order case, $V_1 \ne V_2$. In this paper, we treat only the higher order case $k \in \mathbb{N}$, $k \ge 2$; however, even in the fractional case $0 < k < 1$, we believe that these two functions $V_1, V_2$ are important.

Proof. (Part II on the proof of Theorem 1) First we consider a radial test function $\phi_\varepsilon \in W^{k,p}_{0,{\rm rad}}(B_R)$, built from $V_1$ and a radial cutoff $\varphi \in C^\infty_c(B_R)$ with $\varphi \equiv 1$ on $B_{R/2}$, and combine the resulting upper bound with Part I of the proof of Theorem 1. Next, we consider a radial test function $\psi_\varepsilon \in W^{k,p}_{0,{\rm rad}}(B_R)$ built from $V_2$; from Proposition 4 in §6, we obtain the matching upper bound, and the explicit values again follow from Part I of the proof of Theorem 1. Moreover, by using the same test functions $\phi_\varepsilon, \psi_\varepsilon$, we can show that $R^{\rm rad}_{k,\gamma} = 0$ if $\gamma \notin [p, N]$.

Finally, we show that $R^{\rm rad}_{k,p}$ and $R^{\rm rad}_{k,N}$ are not attained. Assume that $R^{\rm rad}_{k,p}$ is attained by $u \in W^{k,p}_{0,{\rm rad}} \setminus \{0\}$. Then $\psi_{N,p,N-p,0}(u) = 0$ in Theorem 6 (II), (IV), which implies that $u(x) = c\left(\log\frac{R}{|x|}\right)^{\frac{p-1}{p}} = cV_1(x)$ $(c \ne 0)$; but $cV_1 \notin W^{k,p}_{0,{\rm rad}}(B_R)$, a contradiction. On the other hand, if we assume that $R^{\rm rad}_{k,N}$ is attained by $u \in W^{k,p}_{0,{\rm rad}} \setminus \{0\}$, then $\psi_{N,p,N-p,N-p}(u) = 0$ in Theorem 6 (I), (III), which implies that $u = cV_2$ $(c \ne 0)$, again outside $W^{k,p}_{0,{\rm rad}}(B_R)$; this is also a contradiction. Hence $R^{\rm rad}_{k,p}$ and $R^{\rm rad}_{k,N}$ are not attained. The proof of Theorem 1 is now complete.

Proof. (Proof of Corollary 1) Since $\log\frac{R}{|x|} \le \log\frac{aR}{|x|}$ for any $a \ge 1$ and any $x \in B_R$, the inequality (5) immediately follows from Theorem 1 (iii). In order to show the optimality of the constant $R^{\rm rad}_{k,p}$ in (5), we can use the same test functions, multiplied by the cutoff $\varphi(x)$. Finally, the non-attainability of the optimal constant $R^{\rm rad}_{k,p}$ in (5) follows from the non-attainability of $R^{\rm rad}_{k,p}$ in Theorem 1.

The cause of the gap in [5]

In Remark 4, we explained the gap in the optimality argument for the constant $A(N,m)$ in the higher order critical Rellich inequality; on the other hand, our argument obtains the optimal constant $R^{\rm rad}_{k,p}$ in Theorem 1 correctly. Where does the gap come from? Actually, our argument resembles the argument in [5] in terms of the tools used, namely three Hardy-Rellich type inequalities for showing the higher order critical Rellich inequality; the only difference between our argument and that of [5] is the order in which these three tools are used. In this section, we explain concretely where the gap comes from in the case $(k,p) = (4,2)$, that is, $N = 8$.

We recall the argument in [5] for showing the higher order critical Rellich inequality: they used three Hardy-Rellich type inequalities (43), (44), (45), from which the critical Rellich inequality for $N = 8$ can indeed be derived. Our argument, on the other hand, proceeds in a different order; see the proof of Theorem 6 (II). Recently, the authors in [23] showed that the optimal constant in (44) can be improved for curl-free vector fields; see Corollary 4 in [23]. Therefore, $\left(\frac{(N-6)(N+2)}{4}\right)^2$ in (44) is not the optimal constant for curl-free vector fields $v = \nabla u$; more precisely, the authors in [23] obtained the optimal constant 77 in this case. Besides, since $u$ is a radial function, we can improve (46) a little more: by (34) and (36), we obtain the refined inequality (47). Therefore, if we use (47) instead of (44), then we can obtain the optimal constant $R^{\rm rad}_{4,2}$ correctly even within the argument of [5].
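To make the first assertion of Remark 9 concrete, note that for $V(x) = \left(\log\frac{R}{|x|}\right)^{\frac{N-1}{N}}$ and $r = |x|$,
$$|\nabla V(x)|^N = \left(\frac{N-1}{N}\right)^N \frac{1}{r^N \log\frac{R}{r}} = \left(\frac{N-1}{N}\right)^N \frac{|V(x)|^N}{r^N\left(\log\frac{R}{r}\right)^N},$$
so $V$ equalizes the two sides of (2) pointwise, while both integrals diverge logarithmically at $r = 0$ and $r = R$; hence $V \notin W^{1,N}_0(B_R)$, and the minimizer is only virtual.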
As a consequence, the cause of the gap in [5] is the non-optimality of the constant in the inequality (44) for curl-free radial vector fields $v = \nabla u$.

Appendix

Proposition 3. Set $P_\gamma(x) = |x|^{-N}\left(\log\frac{R}{|x|}\right)^{-\gamma}$. Then the embedding $W^{k,p}_{0,{\rm rad}}(B_R) \hookrightarrow L^p(B_R; P_\gamma(x)\,dx)$ is compact for $\gamma \in (p, N)$ and is non-compact for $\gamma = p, N$.

Proof. First we assume $\gamma \in (p, N)$. Let $(u_m)_{m=1}^\infty \subset W^{k,p}_{0,{\rm rad}}(B_R)$ be a bounded sequence. Then there exists a subsequence $(u_{m_k})_{k=1}^\infty$ converging in the sense of (48); see e.g. Theorem 2.1 and Theorem 2.4 in [22]. For any small $\varepsilon > 0$, there exists $\delta > 0$ such that (49) holds. From (48) and (49), the weighted norms of the subsequence converge, and thus the continuous embedding $W^{k,p}_{0,{\rm rad}}(B_R) \hookrightarrow L^p(B_R; P_\gamma(x)\,dx)$ is compact for $\gamma \in (p, N)$. On the other hand, the continuous embedding $W^{k,p}_{0,{\rm rad}}(B_R) \hookrightarrow L^p(B_R; P_\gamma(x)\,dx)$ is non-compact for $\gamma = p, N$, by the non-attainability of $R^{\rm rad}_{k,p}$ and $R^{\rm rad}_{k,N}$ in Theorem 1.

Here we give a non-compact sequence for $W^{2,p}_{0,{\rm rad}}(B_R) \hookrightarrow L^p(B_R; P_\gamma(x)\,dx)$, $\gamma = p, N (= 2p)$, concretely. Let $u \in C^\infty_c(B_R)$ be a radial function and consider the scaling $u_\lambda(r) = \lambda^a u(s)$, where $s = s(r) = r^{\lambda} R^{1-\lambda}$ for $\lambda > 0$. If $\gamma = p$, we take $a = -\frac{p-1}{p}$ and $\lambda = \frac{1}{m}$ $(m \in \mathbb{N})$; by (51), the derivatives of $u_\lambda$ expand with coefficients $C_{m,j}$ and $D_{m,j}$ depending on $m$, $N$, and $j$.
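The role of the exponent $a = -\frac{p-1}{p}$ can be checked by the substitution $s = r^{\lambda}R^{1-\lambda}$ (a supplied verification of the scale structure used above): since $\log\frac{R}{r} = \frac{1}{\lambda}\log\frac{R}{s}$ and $\frac{dr}{r} = \frac{1}{\lambda}\frac{ds}{s}$, one computes
$$\int_{B_R} \frac{|u_\lambda(x)|^p}{|x|^N\left(\log\frac{R}{|x|}\right)^p}\,dx = \omega_{N-1}\,\lambda^{ap}\int_0^R |u(s)|^p\,\lambda^{p-1}\left(\log\frac{R}{s}\right)^{-p}\frac{ds}{s} = \lambda^{ap+p-1}\int_{B_R} \frac{|u(x)|^p}{|x|^N\left(\log\frac{R}{|x|}\right)^p}\,dx,$$
which is invariant precisely when $ap + p - 1 = 0$, i.e. $a = -\frac{p-1}{p}$.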
Blood Lead Levels in Women of Child-Bearing Age in Sub-Saharan Africa: A Systematic Review

This paper reviews available studies on blood lead levels of women of childbearing age in Sub-Saharan Africa. PubMed and Google Scholar databases were searched for original articles reporting blood lead levels of women of childbearing age in Sub-Saharan Africa. Searches were not limited by year of study but were limited to studies published in the English language. Data were extracted and synthesized by estimating the weighted mean of the reported blood lead levels. Fifteen papers fulfilled the inclusion criteria. Mean blood lead levels of women in the studies ranged from 0.83 to 99 μg/dl. The overall weighted mean of blood lead levels was 24.73 μg/dl. The weighted mean from analyses of data on blood lead levels of pregnant women alone was 26.24 μg/dl. Identified sources of lead exposure included a lead mine, informal lead-acid battery recycling, leaded gasoline, and piped water. Elevated BLLs were associated with the incidence of preeclampsia, hypertension, and malaria. Important contributing factors for elevated blood lead levels (BLL) in these women include poverty, a high environmental lead burden, low awareness of lead exposure hazards, and a lack of regulation of lead in consumer products. BLLs of women of childbearing age in SSA are unacceptably high. There is therefore a need for aggressive programs to address lead exposure in this population.

INTRODUCTION

Lead is a bluish-gray metal that occurs naturally in the earth's crust, most often in ore deposits with coal and other metals such as zinc, silver, and copper. It is soft, malleable, a relatively poor conductor of electricity, highly resistant to corrosion, and able to absorb sound and other vibrations as well as radiation. Lead has a low melting point and is resistant to fire (1). This range of properties makes it versatile: it is used in hundreds of consumer products, which has resulted in widespread human exposure to this toxic metal. The general population is primarily exposed through ingestion of contaminated food and inhalation of airborne lead (1). Lead-contaminated water can be an important source of exposure for people living in houses with leaded plumbing pipes and fittings. In addition, workers in some occupations may be exposed to lead; these include lead miners, lead smelters and refiners, car battery manufacturers and repairers, paint and pigment manufacturers, printers, stained-glass makers, welders, etc. (1). Human exposure to lead has remained a public health problem, especially in developing countries. Toxicity due to lead exposure was recognized long ago, with the earliest published reports dating back to 2000 BC (2). However, lead production and use have continued to rise, despite growing evidence of health effects. The developing fetus and infants are most vulnerable to lead, both in terms of exposure and health effects (3). The US Centers for Disease Control and Prevention (CDC) has set an action level of 5 µg/dl for lead in children and women of childbearing age (4). However, there is widespread scientific consensus that there is no safe level of exposure to lead. In 2016, about 13,873,550 disability adjusted life years (DALYs) globally were attributed to lead exposure, amounting to 1.28% of total DALYs attributable to all risk factors (5).
In 2017, lead exposure accounted for about 63.07, 66.39, and 54.3% of DALYs caused by idiopathic developmental intellectual disability in all ages, in women of childbearing age, and in children <5 years in SSA, respectively (6). In South Africa, lead was estimated to have caused about 1,428 (0.27%) of all deaths in the year 2000 (7). Also in the year 2000, about 40% of all children globally were estimated to have blood lead levels >5 µg/dl, with most (90%) of them living in developing countries (8). Using an Environmentally Attributable Fraction (EAF) model and limiting the analysis to neurodevelopmental impacts of lead in children, the lead-attributable economic loss in Africa was estimated at $134.7 billion (4.03% of Gross Domestic Product) (9). In well-developed nations, blood lead levels (BLL) of the general population have continued to decrease over the years, following regulatory bans on leaded gasoline and reductions in the lead content of consumer products such as paints (10,11). In contrast, reports from Sub-Saharan Africa (SSA) indicate that BLL in this population have remained elevated (12,13), despite an official phase-out of leaded gasoline in these countries (14). Little or no attention has been paid to the lead content of other consumer products. Lead levels greater than the maximum permissible limits set by WHO have been found in food and water samples (15-20) collected from different parts of SSA. In 2007, Adebamowo and co-workers reported lead levels as high as 50,000 µg/g in paints sold in Nigerian markets (21). Other reports indicate that products such as herbal remedies, cosmetics, and cooking pots used in countries in SSA contain very high levels of lead (22-24). Furthermore, many point sources of lead exposure exist in SSA. The two incidents of large-scale lead poisoning in Zamfara, Nigeria from informal gold mining (25) and in Dakar, Senegal from the recycling of lead batteries (26) indicate that there may be ongoing, unidentified point sources of lead poisoning in Sub-Saharan Africa. Although the fatalities recorded in these incidents were in children below the age of 6 years, very high BLLs were reported for adults living in the vicinity, including mothers of the deceased children (26). Exposure to lead affects persons of all ages, with children and women of childbearing age being the groups most susceptible to its effects (4). Lead exposure in women of childbearing age is an issue of health concern because of its effect on both maternal and infant health. Pregnancy and lactation are associated with increased metabolic activity in bone, due to the increased demand for calcium for fetal bone formation and breast milk formation, respectively. In women who have been exposed to lead prior to pregnancy, the remobilization process in response to calcium needs also releases lead into the blood, thus raising maternal blood lead during these periods (27). High BLL during pregnancy has been associated with pregnancy-induced hypertension (27-30) and preeclampsia (30-32). Lead readily crosses the placenta (33,34) and causes an increased risk of spontaneous abortion (35-37). Lead is excreted in breast milk (38-41), causing additional exposure to breast-fed infants. High levels of exposure to lead may also affect female reproductive health and fecundity (42-44). Over the last four decades, there have been some studies on blood lead levels of women of childbearing age in SSA.
Most of these are small-scale epidemiological studies conducted by individual researchers, and there have been no attempts to summarize these data in order to strengthen the evidence base. An estimate of the BLL, together with evidence of the sources and adverse health effects of lead exposure in women of childbearing age in SSA, will be key to the development of effective regulatory measures aimed at reducing lead exposure in this population. In the light of this, this paper presents a summary of the findings of the previous literature on lead exposure in women of childbearing age living in Sub-Saharan Africa, through a systematic review. Specifically, we sought to address questions on the blood lead levels, sources, health effects, and risk factors of lead exposure in this population.

METHODS

A systematic computerized literature search of the PubMed and Google Scholar databases was performed for papers published in peer-reviewed journals, for original research articles reporting blood lead levels of women of childbearing age in Sub-Saharan Africa, using the following search terms: ("Blood lead levels" OR "toxic metals" OR "trace elements" OR "heavy metals" OR "BPb" OR "lead exposure") AND ("Africa" OR "Sub-Sahara" OR "developing countries" AND "names of each country in Sub-Saharan"). These search terms were used to ensure that publications were not missed. Article abstracts, keywords, and titles were scanned to assess their relevance for full-text review. After the search, many non-relevant papers were excluded. Searches were limited to studies published in the English language. All available publications from Sub-Saharan Africa, dating back to 1977, were included in the review provided they met the inclusion criteria. The search was done between June 2016 and March 2017. Studies with the following characteristics were included in the review: studies specifying the age of women to be within 15-49 years or indicating that they were women of childbearing age, such as pregnant women or lactating/nursing mothers; general population studies with specific data on women of childbearing age; studies conducted in any of the countries in Sub-Saharan Africa; and studies that used whole blood lead as the biomarker of lead exposure. Studies with the following characteristics were excluded: studies on males and children below 15 years of age; studies on females above 50 years; studies that neither specified the age of the women nor indicated that they were women of childbearing age; and studies using other biomarkers of lead exposure, such as breast milk, urine, plasma, serum, or umbilical cord blood, even if they were conducted on women of childbearing age and in Sub-Saharan Africa. The selected articles were examined, and blood lead concentrations of women of childbearing age were extracted. Information on the description of the study population, study location, and sample size of the respective studies was also extracted. Identified environmental sources of exposure, health effects, and factors associated with elevated blood lead levels were noted. The articles that met the inclusion criteria were assessed for quality using the following study characteristics: description of the study population, explanation of the sampling strategy/method, use of a venous blood collection method, use of standard equipment for blood lead measurement, and quality control during lead analyses. The quality assessment was conducted independently by all the authors; their assessments were compared, and disagreements were resolved by discussion.
We also assessed the reference lists of included studies for other relevant studies. Data were synthesized by estimating the weighted mean of blood lead levels reported in the studies using the formula M_W = (Σ_i N_i M_i) / (Σ_i N_i), where M_W is the weighted mean, M_i is the mean of study i, and N_i is the sample size of study i.

RESULTS

A total of 2,137 papers were found from the databases searched. From these, 294 relevant abstracts and titles were identified. Fifteen papers fulfilled the inclusion criteria and were chosen for data extraction (Figure 1). A study on BLL, health effects, and risk factors for elevated BLL among pregnant women in Abakaliki, Southeast Nigeria was reported in two papers by Ugwuja et al. (45,46), so these two papers were merged as one. The selected articles and their characteristics are summarized in Table 1. Two studies did not give information on the date of sampling (47,48). In all but two studies (47,49), sampling was done between 2005 and 2011. The subject (sample) size in the studies varied widely, from 23 (26) to 349 (45). The majority of studies were from Nigeria (n = 6), followed by South Africa (n = 2); Benin Republic, Botswana, Ethiopia, Kenya, Senegal, and Zambia had one study each. Ten of the studies (30,45-47,49-55) were on pregnant/delivering women, two studies (26,57) were on mothers of infants, one study (56) was generally on women of childbearing age, and one study was on non-pregnant women of childbearing age with occupational exposure to lead (48). Seven of the studies (30,45,46,49,51-53,55) did not report the sources of lead exposure in the study population. Eleven of the studies were hospital-based (30,45-47,49-52,54,55,57); subjects for three of the studies (26,53,56) were recruited through community mobilization campaigns, while one study used a field-based mouth-to-mouth campaign (48). Venous blood samples were used in all the studies. Blood lead levels were measured using either Inductively Coupled Plasma-Mass Spectrometry (50,54,55,57) or Atomic Absorption Spectrometry (26,30,45-49,51-53,56), and all the studies reported adequate quality control procedures for blood lead analysis.

Blood Lead Levels of Sub-Saharan African Women of Childbearing Age

Mean blood lead levels reported in the studies ranged from 0.83 µg/dl, for women in Johannesburg, South Africa (54), to 99 µg/dl, for pregnant women in Owerri, Nigeria (52). The weighted mean of blood lead levels was 24.73 µg/dl for all women of childbearing age, 26.24 µg/dl for pregnant women alone, and 32.32 µg/dl for women with no known sources of lead exposure (Figure 2). One study did not indicate the mean BLL of the subjects but presented the range as 0.61-16.15 µg/dl for women in Johannesburg, South Africa (50), and so was not included in the estimation of the weighted mean. Overall, the range of BLL reported in the reviewed studies varied from levels below detection limits to 448 µg/dl in pregnant women in Owerri, Nigeria (52). Mean blood lead levels from all the studies, except those from South Africa (50,54) and Botswana (53), were above 5 µg/dl. Six studies reported the prevalence of BLL ≥10 µg/dl in the study population (26,46,48,49,52,55). Only three studies (53,54,57) reported the prevalence of BLL ≥5 µg/dl, which is the present action level given by the CDC for pregnant and lactating women. In comparison, blood lead levels of women of childbearing age reported from some developed and other developing countries are shown in Table 2.
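As a computational note, the synthesis formula given in the Methods section amounts to the following (the values shown are illustrative only, not the review's study data):

def weighted_mean(means, sizes):
    """Sample-size-weighted mean: M_W = sum(N_i * M_i) / sum(N_i)."""
    return sum(n * m for m, n in zip(means, sizes)) / sum(sizes)

# Three hypothetical studies with mean BLLs (ug/dl) and sample sizes:
print(weighted_mean([12.0, 30.5, 8.2], [120, 300, 75]))  # approximately 22.64

Weighting by sample size gives larger studies proportionally more influence on the pooled estimate, which is why single large studies can shift the overall mean.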
Blood lead levels reported in studies from the United States of America (USA) show a downward trend. Overall, blood lead levels reported in this review are higher than those reported from developed and most developing countries; however, they are comparable to those reported from India (70,72) and Egypt (31).

Sources of Lead Exposure in Sub-Saharan African Women

Sources of lead exposure among women of childbearing age in SSA are summarized in Table 3. Five of the studies reported the sources of lead exposure in their study populations. These include the Broken Hill lead mine at Kabwe, Zambia (47); informal used lead-acid battery recycling in Dakar, Senegal (26); geophagia in South Africa (54); leaded gasoline in Ethiopia (56); and piped water and the consumption of animals killed by ammunition in Benin Republic (57).

Health Effects of Lead Exposure in Sub-Saharan African Women

Effects of lead exposure on maternal health and neonatal outcomes, as reported in the studies, are summarized in Table 3. Six studies (30,45,47,50,52,55) associated BLL with either maternal or neonatal health outcomes. Maternal BLL was positively associated with preeclampsia and high blood pressure (30). A higher incidence of hypertension, malaria, and low birth weight was reported among women with BLL >10 µg/dl (45). Ugwuja and co-workers reported a negative association between BLL and maternal Hb concentration (46). However, Njoku and Orisakwe (52) did not find any association between maternal BLL and the Hb concentration, liver enzymes, and renal function parameters measured (unpublished data). Clark (47) did not find any association between maternal BLL and infant Hb level. Maternal BLL was not associated with the measured birth outcomes, such as birth weight, birth length, head circumference, gestational age at birth, and crown-rump length (50,55).

Risk Factors for Elevated Blood Lead Levels Among Women of Childbearing Age in SSA

Nine studies reported risk factors for elevated BLL among women of childbearing age, and these factors varied between studies (30,46,47,50-54,56). A summary of these factors, as reported in the studies, is given in Table 3. Residential proximity to major sources of lead exposure, such as a lead mine (47), a lead-acid battery recycling area (26), and roads with heavy vehicular traffic (56), is the primary risk factor for elevated BLL indicated in the studies. Another major factor for elevated BLL among women of childbearing age is gestational age: significantly higher BLLs were reported in women in their 3rd trimester than in those in the 1st trimester of pregnancy (51,53). Higher maternal age, lower educational status, farming occupation (46), pregnancy status (30,51), residence in a rural setting (52,53), geophagia (ingestion of soil) during pregnancy (54), and poor nutritional status (45) were also reported as risk factors for elevated BLL.

DISCUSSION

This systematic review summarized the reports of 14 available studies on blood lead levels of women of childbearing age in Sub-Saharan Africa. In these studies, mean BLL ranged from as low as 0.83 µg/dl in women from South Africa (54) to 99 µg/dl in women from Owerri, Nigeria (52). The mean BLL (weighted by sample size) of women of childbearing age in Sub-Saharan Africa was 24.73 µg/dl; using data on BLL for pregnant women alone, the weighted mean was 26.24 µg/dl. Five of the studies reviewed reported the prevalence of elevated BLL ≥10 µg/dl, and the prevalence was estimated from the given results in one study (26).
Of these, a prevalence of elevated BLL of more than 70% was reported in 4 studies (26,46,49,52). Mathee and co-workers reported a 2.3% prevalence of BLL ≥5 µg/dl among women in South Africa (54), while Mbogwe (53) reported the prevalence of BLL ≥5 µg/dl to be 5.5, 5.6, and 3.1% among pregnant women from Botswana in their 1st, 2nd, and 3rd trimesters, respectively. Overall, the weighted mean blood lead level of women of childbearing age in Sub-Saharan Africa is more than 400% above the action level of 5 µg/dl set by the CDC (Figure 1). A comparison of BLLs of women of childbearing age from developed nations and some other developing nations indicates that BLLs of women in SSA are more than 10-fold higher than those in developed countries and in most other developing countries (27, 28, 31, 32, 37, 44, …). The observed differences in BLL between women of childbearing age in SSA and their counterparts in developed nations may reflect differences in the environmental burden of lead in these areas and in the level of public awareness of the sources and hazards of lead exposure. In addition to the considerably higher level of awareness of the sources and hazards of lead exposure among women and the general population in developed nations (3), regulatory measures to reduce lead exposure have long been in place there (65), and these have resulted in declines in the BLLs of the general population over the years (10,11,87-89). As can be observed in Table 2, blood lead levels of women of childbearing age in the USA show a downward trend, with the reported mean BLLs decreasing from about 7.9 µg/dl in the 1980s (59), to 1.9 µg/dl in the 1990s (61), and to 0.34 µg/dl in 2009 (68). On the other hand, leaded gasoline was officially phased out in all countries in SSA only in January 2006 (14), more than a decade after it was phased out in developed nations. Although studies in some countries in SSA comparing BLLs in populations before and after the initiation of the phase-out of leaded gasoline (48,90-92) indicate reductions in BLL in the study populations, significant proportions of these populations still have elevated BLLs. The impact of leaded gasoline is still pronounced in urban settings with high vehicular traffic density (56). Although the mean BLL (6.81 ± 2.61 µg/dl) in women occupationally exposed to lead in Ile Ife, Nigeria in 2007 was reported to be significantly lower than that (12.0 ± 6.0 µg/dl) reported by the same authors for similar subjects in the same area in the 1990s, 11% of the subjects still had BLL >10 µg/dl (48). No study in this review investigated changes in the BLL of women of childbearing age in the same location between two periods or following the phase-out of leaded gasoline; therefore, trends in the BLL of women in SSA in relation to the phase-out of leaded gasoline could not be established. However, looking at the studies generally, it can be observed that those from South Africa (50,54), where the phase-out of leaded gasoline was initiated earlier (1996) and where reasonable efforts have been put in place toward the reduction of lead exposure (93), reported much lower BLLs. Sources of lead exposure identified in the reviewed studies include lead mines (47), used lead-acid battery recycling (26), geophagia (54), and piped water and the consumption of animals killed by ammunition (57) (Table 3). However, the weighted mean BLL (32.32 µg/dl) for women with no known source(s) of lead exposure was higher than the overall weighted mean.
This indicates that women of childbearing age from SSA could be exposed to lead from unidentified sources. Various sources of lead exposure abound in SSA; however, the impact of most of these sources on the BLL of this population remains understudied. This underscores the need for more studies aimed at identifying possible exposure sources and the associations between these sources and BLL in this population.

Diet remains the most important source of environmental lead exposure. There have been many reports of lead contamination in food substances in SSA. Levels greater than the maximum permissible limits set by the WHO for lead in food and water have been reported for vegetables (15), fruits (16,17), food crops (17,18), beverages (19), fish (20), local spices (94), and various water samples (20,57,95,96) collected from different parts of SSA. There is no doubt that these could contribute significantly to the body burden of lead in SSA women. Furthermore, geophagia (the habitual eating of clay or soil) is highly prevalent among women in SSA (54,97) and has been identified as a source of lead exposure among pregnant women in South Africa (54). Artisanal aluminum cookware can be an additional source of dietary lead exposure in women of SSA (22).

Used lead-acid battery recycling could be a significant source of lead exposure for women of childbearing age in SSA living near recycling areas. In Africa, most of this recycling is done in small-scale informal settings, with the operations carried out in the immediate surroundings of residential homes, exposing family and community members to lead (98). Generally, none of these workshops has adequate solid and liquid waste management systems, and the level of awareness of the risk of lead poisoning among repair-shop owners and workers was found to be very low (98).

Deteriorated house paint is among the sources of lead with the highest risk of exposure in Africa (99). Legislation on the lead content of paint is lacking in most countries in SSA, and this leaves the general public at the mercy of paint manufacturers. High lead levels have been reported in paint samples (21) and in paint flakes collected from buildings (100,101) in parts of SSA. Although the use of leaded paint has been restricted in South Africa since 2010 (13), women living in older houses may still be at risk of lead exposure from paint. Lead-containing paints are important sources of lead exposure for women of childbearing age because these women are directly involved in home renovations, sweeping, and cleaning, during which they may inhale lead-laden dust. However, no study in SSA has investigated the BLL of women of childbearing age in relation to exposure to lead in house paint.

Lead-containing medicines and cosmetics represent additional sources of exposure for women of childbearing age in SSA. High levels of lead have been reported in ready-to-use herbal remedies produced and sold in Nigeria (23,102,103). Among women of childbearing age, these remedies are mostly used in the treatment of infections and infertility. Mathee and co-workers reported a lead poisoning outbreak resulting from the consumption of an Ayurvedic medicine in Durban, South Africa (104). Lipsticks containing lead at levels as high as 73.1 µg/g and 369.9 µg/g are in use in South Africa and Nigeria (24,105), respectively. Concentrations of lead and other metals in some cosmetic products used in parts of SSA have also been reported (24,106,107).
"Tiro, " a Nigerian eye cosmetic that is also used as a folk remedy to promote visual development, was implicated as cause of lead poisoning in a male infant of Nigerian descent at Boston Children's Hospital, USA. The tiro applied to the infant's eyelids was reported to contain 82.6% lead (108). Lead in cosmetic products is not currently regulated in countries of SSA. There is paucity of data on occupational exposure to lead in SSA women of childbearing age. Only one study (48) in this review reported BLL in women occupationally exposed to lead. However, many women of childbearing age in SSA could be working at major lead using industries and therefore, may be exposed to higher levels of lead as there are little or no regulatory standards in these industries. Although occupational exposures are still important sources of lead exposure in US women of childbearing age (109), BLL in these individuals have fallen dramatically with the revision of lead industry standards in 1978 (87). Although an action level of 5 µg/dl has been recommended for pregnant and lactating women (4), studies have shown that adverse maternal and fetal health effects are observed at BLLs much lower than 5 µg/dl (27,29,63,66). The reported positive associations between BLL and some health outcomes such as blood pressure, preeclampsia (30) and hypertension (46) are in line with other reports from other developing and developed countries (27,29,31,32,73). Report of WHO indicates that incidence of preeclampsia is seven times higher in developing countries than in developed countries (110). In Nigeria, the prevalence has been reported to be in the range of 2-16.7% of live births (111). High prevalence of preeclampsia in this population may be attributable at least in part to high level of lead exposure in these women. Ugwuja and co-workers reported negative association between BLL and hemoglobin concentrations among pregnant women in Abakaliki, Nigeria (45). In line with that, some studies have reported negative association between BLL and hemoglobin concentrations (79). Lead has been shown to inhibit heme synthesis by altering the activities of δ-amino levulinic acid dehydratase [ALAD] thereby inducing microcytic and hypochromic anemia (79,85). However, the reported associations between BLL and health outcomes in some of the studies (30,45,46) should be interpreted with care as these are cross-sectional studies and therefore were not subjected to certain statistical measures of association. The authors did not adjust results for confounding variables such as smoking status and age. In addition, other possible causes of such health effects were not accounted for in the interpretation of results, which may have diminished the possibility of certain health outcomes being due to lead exposure alone. The weighted mean BLL (24.73 µg/dl) estimated in this review suggests that women of childbearing age in SSA and indeed their infants are at very high risk of adverse health effects resulting from lead exposure. However, contrary to this expectation, some of the reviewed studies reported no association between BLL and measured maternal health outcomes (47,52). Also, no significant association was reported between maternal BLL and measured neonatal outcomes such as preterm delivery, birth weight, birth length, head circumference, abdominal circumferences (46,50,55). 
Adverse effects of maternal BLL on birth outcomes such as birth weight (82,112,113) and premature membrane rupture and delivery have been reported even in studies where maternal BLL was relatively low (<5 µg/dl) (76,77). In addition, high maternal BLLs have been associated with spontaneous abortion (35–37), intra-uterine growth retardation (72), and the incidence of neural tube defects (114). Follow-up studies into infancy and adulthood have linked high prenatal lead exposure with impaired cognition and neurodevelopment in children (115,116) and higher rates of criminal arrests in early adulthood (117). Given the high blood lead levels reported in the reviewed studies, high prenatal lead exposure may partly account for the high prevalence of cognitive impairment and the high rates of crime observed among children and adolescents in SSA. However, the lack of a significant association between lead exposure and health outcomes observed in these studies could be attributed to the small sample sizes (47,50,52,55) and the study designs (mostly cross-sectional) used. This clearly underscores the need for more studies from SSA (especially well-designed case-control and prospective studies) with sample sizes adequate to permit the measurement of associations between BLL and health outcomes.

Several risk factors for elevated BLL were identified across the studies. Among these, residential proximity to major sources of lead was identified as the most important risk factor for high BLL among women of childbearing age (47,50,52,53,57). Consistent with this, BLLs have been shown to be greatest in areas with high exposure to environmental lead, such as near lead mines and smelters (58,118), roads with heavy vehicular traffic (70), or solid waste incinerators (119). However, contrary to expectations, two of the studies (52,53) reported higher BLLs for pregnant women living in rural areas than for those in urban areas. Although this may be attributed to the lower socioeconomic and educational status prevailing in rural settings, this observation calls for further studies, especially air-quality sampling.

Another major risk factor for elevated BLL among women of childbearing age is gestational age. Three studies examined the effect of gestational age on the BLL of women (46,51,53). Significantly higher BLLs were reported in women in their 3rd trimester than in those in the 1st trimester of pregnancy (51,53), although Ugwuja and co-workers did not observe any relationship between BLL and gestational age (46). In line with these findings, there are several reports of a positive association between maternal BLL and gestational age (27,36,61). Maternal blood lead levels appear to be linked to biological processes involving calcium needs. Lead competes with calcium for binding sites and may subsequently substitute for calcium in bone formation, where it remains incorporated until the time of bone remodeling. Thus, in late pregnancy, when the fetal need for calcium increases, the maternal response to meet this demand can occur through calcium resorption from bone (36,118), especially when dietary calcium intake is inadequate. In women who have had substantial exposure to lead prior to pregnancy, this remobilization process may therefore release lead into the blood, raising maternal blood lead in late pregnancy. This may explain the positive association between blood lead and gestational age.
Other risk factors for elevated BLL include higher maternal age, lower educational status, poor nutritional status (46), pregnancy status (30,51), and lower socioeconomic status (46,53). These are in line with reports from other developing nations (119–121) and from developed nations with relatively lower BLL (36,63,122,123). In addition to these, studies from the USA (63,64,122) report that the BLL of women of childbearing age is positively associated with race and ethnicity (being African-American or Hispanic). Low economic status and poor nutrition (especially with respect to essential trace elements) play a significant role in lead exposure. The absorption and retention of lead are enhanced by deficiencies of nutritionally essential elements such as calcium, iron, and zinc (1). In a study to determine blood lead levels in pregnant women of high and low socioeconomic status in Mexico City, Farias and co-workers reported that the consumption of milk products (rich in calcium and zinc) significantly reduced blood lead levels in the higher socioeconomic status group and that calcium supplementation lowered blood lead levels in women whose diets were deficient in calcium (120). The high BLLs reported for women of childbearing age in SSA are critical because they occur in a population with risk factors such as a high prevalence of poverty, lower educational attainment, malnutrition, and poor or strained health systems (124). Deficiencies of copper and zinc have been reported among pregnant women in Abakaliki, Nigeria (125), due to the consumption of a monotonous diet with low mineral and vitamin content. A food consumption and nutrition survey conducted in Nigeria reported that approximately 24.3% of mothers and 35.3% of pregnant women were at different stages of iron deficiency, and that 43.8% of pregnant women and 28.1% of mothers were zinc deficient (126).

The risk factors for elevated BLL reported in studies from SSA are similar to those reported from developed nations. This strongly suggests that the observed difference in BLL between these two populations may reflect high environmental lead pollution, poverty, a lack of public-health awareness of the sources and hazards of lead exposure, and a lack of regulatory laws for lead in consumer products in SSA. Whereas there is a considerably high level of awareness of the sources and hazards of lead exposure among women and the general population in developed nations (3), studies from SSA countries report a very low level of public awareness of this important health issue (127,128). In a study to determine the level of knowledge of lead hazards among pregnant women in an area with a high risk of lead exposure and poisoning west of central Johannesburg, South Africa, Haman et al. (128) reported a very low (11%) level of awareness of the dangers of lead in pregnancy. A similar study in Ibadan, Nigeria (127) reported a low level of knowledge of domestic exposure to lead and its health implications among the study population. Although protective measures have been put in place in some SSA countries, these efforts have been patchy and have lagged behind other regions (3). In the US, the Minnesota, New York City, and New York State jurisdictions have active guidelines for monitoring maternal blood lead levels (129). Unlike some developed nations, none of the countries in SSA has a national blood lead biomonitoring program.
Information on the BLL of women of childbearing age, and indeed of other population subgroups in SSA, largely depends on small-scale epidemiological studies undertaken by individual researchers in these countries.

CONCLUSION
The BLLs of women of childbearing age in SSA reported in these studies are unacceptably high and vary from place to place. Women of childbearing age, and indeed the general population in SSA, are exposed to multiple sources of lead and are therefore at very high risk of the adverse effects of lead. However, the reported associations, or lack of associations, in the reviewed studies should be interpreted with care. High environmental lead pollution, poverty, a lack of awareness of the sources and hazards of lead exposure, and a lack of regulation of lead in consumer products are important contributing factors to elevated BLL in these women. There is therefore a need for aggressive programs to address lead exposure in the general population of SSA. These should include: initiating national, statewide, and community-level biomonitoring programs, particularly in susceptible populations such as women of childbearing age, and supporting individual research on population biomonitoring of lead exposure in order to identify the places most at risk and the sources of exposure in such areas; enacting adequate regulatory laws on the lead content of consumer products and ensuring their proper enforcement; establishing regulatory standards on occupational lead exposure; and promoting mothers' awareness of the sources and health effects of lead exposure. These measures would go a long way toward reducing lead exposure, thereby protecting human health and lessening the economic burden of lead exposure in this population. These programs may be accomplished through collaborations with similar international programs.

AUTHOR CONTRIBUTIONS
OB-O and CA were involved in the literature search, the analyses, and drafting the manuscript. OO conceptualized the design and proofread the final manuscript.
The Neutrophil/Lymphocyte Ratio and Outcomes in Hospitalized Patients with Community-Acquired Pneumonia: A Retrospective Cohort Study

We aimed to assess the prognostic role of the neutrophil/lymphocyte ratio (NLR) in community-acquired pneumonia (CAP) via a single-center retrospective cohort of hospitalized adult patients from 1/2009 to 12/2019. Patients were dichotomized into lower NLR (≤12) and higher NLR (>12). The primary outcome was mortality. ICU admission and hospital- and ICU-free days were secondary outcomes. The ability of the pneumonia severity index (PSI) and the NLR to predict outcomes was also tested. An NLR ≤12 was observed in 2513 (62.2%) patients and >12 in 1526 (37.8%). After adjusting for PSI, the NLR was not associated with hospital mortality (odds ratio [OR] 1.115; 95% confidence interval [CI] 0.774, 1.606; p = 0.559), but it was associated with a higher risk of ICU admission (OR 1.405; 95% CI 1.216, 1.624; p < 0.001). The PSI demonstrated acceptable discrimination for mortality (area under the receiver operating characteristic curve [AUC] 0.78; 95% CI 0.75, 0.82), which was not improved by adding the NLR (AUC 0.78; 95% CI 0.75, 0.82, p = 0.4476). The PSI's performance in predicting ICU admission was also acceptable (AUC 0.75; 95% CI 0.74, 0.77) and improved with the inclusion of the NLR (AUC 0.76, 95% CI 0.74, 0.77, p = 0.008), although with limited clinical significance. The NLR was not superior to the PSI for predicting mortality in hospitalized CAP patients.

Introduction
Community-acquired pneumonia (CAP) is an acute infection of the lung parenchyma and one of the leading causes of hospitalization and mortality in the United States [1], with more than 1.5 million annual hospitalizations and about 100,000 deaths [2]. Risk stratification and prognostication provide useful information on the disease trajectory and guide management [3]. Although several prognostic tools for CAP have been evaluated, the pneumonia severity index (PSI); the Confusion, Uremia, Respiratory rate, Blood pressure, age 65 years and older (CURB-65) score; and the ATS/IDSA criteria are the most validated and remain commonly used in CAP, with satisfactory outcomes [4–6]. Evaluations of the prognostic value of biomarkers such as NT-pro-BNP [7], C-reactive protein, and procalcitonin have been less satisfactory when these are used individually [8]. The neutrophil/lymphocyte ratio (NLR), the ratio of the absolute neutrophil count to the absolute lymphocyte count, is an easily measurable index that is receiving growing interest as a prognostic biomarker in several stressful conditions, including infections. In CAP, multiple studies have shown conflicting results on the prognostic value of the NLR alone or in addition to clinical severity scores [7,9–12]. The addition of the NLR to the PSI and CURB-65 in one study did not improve the prediction of mortality [12]. More recently, a systematic review demonstrated a prognostic value of the NLR comparable to that of the PSI, CURB-65, CRP, procalcitonin, neutrophil count, lymphocyte count, and white blood cell count [13]. However, only a few studies in the systematic review confirmed an association between the NLR and adverse outcomes using multivariate analysis, resulting in limited consideration of confounding factors for mortality. Also, most studies focused on mortality as the only outcome, with minimal evaluation of other clinical outcomes, including the need for invasive and non-invasive mechanical support and admission to the intensive care unit (ICU). Further, most studies were limited by
sample size. The aim of this study was to evaluate the prognostic value of the NLR in predicting the clinical outcomes of patients hospitalized with CAP.

Materials and Methods
The institutional review board (IRB) at the Mayo Clinic approved this research as low-risk, denoted by IRB number 17-011140, with a waiver of the requirement for written informed consent.

Design, Setting, and Participants
This study had a retrospective cohort design. All adult patients admitted to the Mayo Clinic in Rochester, Minnesota, between January 2009 and December 2019 who had community-acquired pneumonia (CAP) underwent screening. The presence of CAP was determined using International Classification of Diseases 9 (481-486) and 10 (J13, J15, and J18) codes, coupled with note searches. Community-acquired pneumonia was defined as an acute infection of the lung parenchyma exhibiting clinical symptoms (cough, fever, pleuritic chest pain, and dyspnea) and a new radiographic infiltrate, not acquired in the hospital or a healthcare setting [14].

The exclusion criteria were:
- The absence of Minnesota research authorization;
- Baseline conditions of human immunodeficiency virus infection, interstitial lung disease, leukopenia, or neutropenia;
- Diagnoses of hospital-acquired pneumonia, ventilator-associated pneumonia, or aspiration pneumonia;
- Readmissions (only the earliest admission during the study period was included per individual);
- Hospital stays under 24 h;
- The absence of neutrophil or lymphocyte levels within the initial 24 h of admission (Figure 1).

Variable Definitions
The data were extracted by the Anesthesia Clinical Research Unit team using standardized and validated queries [15–17]. Extracted data included relevant demographic information, comorbidities (recorded as the presence of individual comorbidities alongside the Charlson Comorbidity Index [CCI]) [18], and disease severity indexes, evaluated via clinical scores. The medication exposure evaluation was limited to corticosteroids and vasopressors. The white blood cell count, which measures the total number of leukocytes, is recorded as ×10⁹/L in our institution, with the institutional normal range falling between 3.4 and 9.6 × 10⁹/L. For the differential components, the normal ranges for the neutrophil and lymphocyte counts are considered to be 1.56 to 6.45 × 10⁹/L and 0.95 to 3.07 × 10⁹/L, respectively. Neutrophil and lymphocyte levels tested during the first 24 h of admission (the first available result if more than one was available during this window) were recorded as continuous variables and used to calculate the neutrophil/lymphocyte ratio (NLR). Based on the literature, an a priori NLR cut-off of 12 was chosen (patients whose NLR was ≤12 were compared with those whose NLR was >12) [13].

Outcomes
Primary outcomes included mortality during hospitalization and within six months of admission. Secondary outcomes encompassed the need for invasive mechanical ventilation (IMV) and non-invasive mechanical ventilation (NIMV), intensive care unit (ICU) admission, and hospital- and ICU-free days, calculated as days alive spent outside of the hospital or the ICU, respectively, within 28 days of admission, with a value of 0 for patients who died during the stay or had a length of stay of ≥28 days [19]. In a similar fashion, IMV-free days were also calculated and evaluated. The prediction of in-hospital mortality and of the requirement for ICU admission using the PSI alone and the PSI combined with the NLR was also evaluated.
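To make the derived variables concrete, the sketch below implements the NLR, its dichotomization at the a priori cut-off of 12, and the 28-day hospital-free-days outcome as defined above. The function names and example values are our own illustrative choices, not identifiers from the study's analysis code.

```python
# Minimal sketch of the derived variables described above; field names
# and example values are hypothetical, not from the study dataset.
def nlr(neutrophils: float, lymphocytes: float) -> float:
    """Neutrophil/lymphocyte ratio from absolute counts (x10^9/L)."""
    return neutrophils / lymphocytes

def nlr_group(value: float, cutoff: float = 12.0) -> str:
    """Dichotomize at the a priori cut-off of 12."""
    return "NLR<=12" if value <= cutoff else "NLR>12"

def hospital_free_days(los_days: int, died_in_hospital: bool, window: int = 28) -> int:
    """Days alive spent outside the hospital within 28 days of admission;
    0 if the patient died during the stay or had LOS >= 28 days."""
    if died_in_hospital or los_days >= window:
        return 0
    return window - los_days

ratio = nlr(neutrophils=9.8, lymphocytes=0.7)                  # ~14.0
print(nlr_group(ratio))                                        # "NLR>12"
print(hospital_free_days(los_days=6, died_in_hospital=False))  # 22
```

ICU-free and IMV-free days follow the same pattern, with the ICU stay or ventilation duration in place of the hospital length of stay.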
Statistical Analysis
The median and interquartile range (IQR) for continuous data, and counts and frequencies for categorical variables, were used to present descriptive summary statistics. The chi-square test was used to examine categorical data, while the Mann-Whitney U test was used to evaluate continuous variables. The NLR ≤ 12 group was used as the reference for comparing the two groups. Results were reported using odds ratios (ORs), p values, estimates, and 95% confidence intervals (CIs). The analyses were adjusted for the baseline conditions and severity of the patients, measured by the PSI score, using binary logistic regression or linear regression, depending on the characteristics of the outcome of interest. As the PSI already includes baseline characteristics such as age, gender, and certain comorbid conditions, no additional covariates were deemed necessary for the multivariable calculations comparing the NLR groups.
To better understand the NLR's association with outcomes, the following sensitivity analyses were conducted:
- Limiting the analyses to patients with lymphopenia (defined as an absolute lymphocyte count of <0.95 × 10⁹/L, the institutional cut-off level) or with neutrophilia;
- Evaluating the NLR as a continuous variable;
- Comparing four patient groups defined by NLR quartiles.

Finally, C-statistics were used to examine the predictive ability of the NLR and the PSI for mortality and ICU admission. This included creating a receiver operating characteristic (ROC) curve and determining the area under the curve (AUC) as well as its 95% CI. The potential contribution of the NLR to the PSI was also evaluated. The ROC curves for the two approaches were compared using DeLong's test [20]. A two-sided p-value of <0.05 was considered significant. IBM SPSS v27.0 (IBM Statistical Package for Social Sciences Statistics for Windows, Armonk, NY, USA) and MedCalc Statistical Software v19.1 (MedCalc Software bv, Ostend, Belgium) were used to perform the calculations.

Results
Following the application of the exclusion criteria, 4039 patients out of the initial 6847 were included in the cohort (Figure 1).

Comparison of Patients with an NLR ≤ 12 vs. >12
The primary analysis compared 2513 patients with NLR ≤ 12 and 1526 with NLR > 12. Table 1 illustrates the distribution of baseline characteristics across both groups. Although there were some differences in the distribution of specific comorbidities, the baseline comorbidity burdens were generally well-balanced, as indicated by a comparable CCI (median [IQR] = 7 [5,10] and 7 [5,9] for the NLR ≤ 12 and >12 groups, respectively, p = 0.541).

Evaluation of the Impact of the NLR as a Continuous Variable
Upon analyzing the relationship between an increasing NLR and the various outcomes without categorization, we observed a rise in the odds of both hospital mortality and ICU admission as the NLR increased (adjusted OR [95% CI] = 1.01 [1.00, 1.02], p = 0.047 and adjusted OR [95% CI] = 1.02 [1.01, 1.02], p < 0.001 for hospital mortality and ICU admission, respectively). Additionally, higher NLR levels were associated with a significant decrease in the number of hospital-free days (adjusted estimate [95% CI] = −0.02 [−0.04, −0.01]). Although univariate analyses suggested a significant association between the NLR and mortality within six months of admission, as well as with the need for mechanical ventilation, these associations did not remain significant after adjusting for the PSI (adjusted OR [95% CI] = 1.0 [1.0, 1.01], p = 0.693 and adjusted OR [95% CI] = 1.0 [1.0, 1.01], p = 0.198 for mortality at six months and the requirement for mechanical ventilation, respectively).
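As a rough illustration of the discrimination comparison described in the Statistical Analysis section (PSI alone versus PSI plus NLR), the sketch below fits both logistic models on synthetic data and compares their AUCs with scikit-learn. The data-generating assumptions are arbitrary, and DeLong's test, which the authors performed in MedCalc, is not reproduced here.

```python
# Sketch of the PSI-only vs. PSI+NLR AUC comparison on synthetic data;
# all values and distributional choices below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 4039                                         # cohort size, for scale only
psi = rng.normal(90, 30, n)                      # hypothetical PSI scores
nlr = rng.lognormal(2.0, 0.8, n)                 # hypothetical NLR values
p_death = 1 / (1 + np.exp(-(-6 + 0.04 * psi)))   # outcome driven mainly by PSI
death = rng.random(n) < p_death

X_psi = psi.reshape(-1, 1)
X_both = np.column_stack([psi, nlr])

model_psi = LogisticRegression().fit(X_psi, death)
model_both = LogisticRegression().fit(X_both, death)

auc_psi = roc_auc_score(death, model_psi.predict_proba(X_psi)[:, 1])
auc_both = roc_auc_score(death, model_both.predict_proba(X_both)[:, 1])
print(f"AUC PSI-only: {auc_psi:.3f}  AUC PSI+NLR: {auc_both:.3f}")
```

When the outcome is driven mainly by the PSI, as this study found for mortality, adding the NLR yields little or no AUC gain.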
Four-Group Comparison Based on NLR Quartiles
Supplementary Table S1 and Table 3 illustrate the distribution of baseline characteristics and outcomes among the four groups of patients based on NLR levels. Similar to the primary analyses, the need for ICU admission was significantly different in patients belonging to Q#3 (n = 405, 40.1%) and Q#4 (n = 492, 48.7%) compared with Q#1 (n = 319, 31.6%) (adjusted OR [95% CI] = 1.38 [1.16, 1.69] and 1.64 [1.34, 2.01] for Q#3 and Q#4, respectively).

Discussion
In this cohort, we observed a significant difference in the severity of illness, as measured by the PSI, CURB-65, and APACHE III scores, between the subgroups (NLR ≤ 12 and >12), and a significant association between the NLR and mortality on univariate analysis. However, after adjusting for baseline comorbid conditions and the severity of illness using the PSI, the association between a higher NLR and mortality became insignificant. Additional adjusted analysis with the NLR as a continuous variable revealed a barely significant association between the NLR and in-hospital mortality. On assessment of the secondary outcomes, a higher NLR was found to be associated only with increased odds of ICU admission in this cohort, while no association was observed with the other secondary outcomes, including the need for invasive and non-invasive mechanical ventilation. Further adjusted analysis with the NLR as a continuous variable also revealed a significant association with ICU admission and hospital-free days, but no association with the other secondary outcomes. These findings were consistent in the additional sensitivity analysis evaluating outcomes across NLR quartiles, with increasing odds of ICU admission in the third and fourth quartiles. Interestingly, the increased odds of ICU admission were only observed in the subset of patients with a high NLR and co-existing lymphopenia, compared with patients with concomitant neutrophilia.
Additionally, increased odds of needing both invasive and non-invasive mechanical ventilation were noted in this lymphopenic subgroup. Potential explanations include the presence of other confounding factors, such as the increasing use of non-invasive mechanical support for patients at high risk after extubation, for example those with COPD. Finally, in this cohort, the discriminatory capacity of the NLR in predicting both mortality and ICU admission was inferior to that of the PSI. Moreover, the addition of the NLR to the PSI did not improve the mortality prediction model. The improvement from adding the NLR to the PSI for predicting ICU admission, albeit statistically significant, was very small and not clinically meaningful.

The utility of the NLR in predicting adverse outcomes in CAP has been reported previously in other studies [7,8,10,11,21,22]. However, these studies reported conflicting results and were limited by sample size. Further, only a few studies confirmed an association between the NLR and mortality by multivariate analysis. In a recent systematic review of nine studies (n = 3340), the association between a higher NLR and mortality was observed to be significant. An NLR cut-off value >10 in this systematic review was found to predict mortality comparably to the PSI, CURB-65, and other biomarkers, including C-reactive protein, lymphocytes, neutrophils, and WBC [13]. Higher sensitivity and specificity were observed at a cut-off value between 11.2 and 13.4. In contrast, our results demonstrated a barely significant association between an elevated NLR and mortality, and also a poor discriminatory ability of the NLR in predicting mortality compared with the PSI. Although a retrospective study by Postma et al., the largest study included in the systematic review (n = 1549), reported a significant association between the NLR and mortality based on bivariate analysis, this was absent in the subsequent multivariate analysis. In addition, the lack of improvement in the PSI model with the addition of the NLR, as seen in our study, was similarly observed in the large study by Postma et al. [12]. Therefore, our result in this larger cohort further limits the clinical applicability of the NLR as a prognostic tool for mortality, alone or in combination with severity scores, in CAP. Further studies, including prospective studies and an updated systematic review, will be needed.

In comparison with prior studies evaluating the prognostic value of the NLR in CAP, our study additionally found a significant association between the NLR and the need for ICU admission, an important outcome that is rarely assessed. In addition, other outcomes, including the need for mechanical ventilation, hospital-free days, and ventilator-free days, were all assessed. Moreover, our study explored outcomes in subsets of patients with neutrophilia versus lymphopenia, with significant associations observed in the lymphopenia group.
The strengths of our study compared with prior similar studies include the large sample size (to our knowledge, the largest), the multivariable analysis adjusting for the severity of illness and comorbidities, the exploration of clinical outcomes other than mortality, and the sensitivity analyses evaluating specific subgroups. As this was a single-center cohort study, the reported findings reflect the characteristics observed in a large academic center. Other limitations include potential bias within the dataset, missing data, the lack of microbiological data, and other unmeasured confounders.

Conclusions
In this study, the neutrophil/lymphocyte ratio was not superior to the pneumonia severity index for predicting in-hospital mortality in patients hospitalized with community-acquired pneumonia. This limits its applicability as a prognostic enrichment tool, but additional studies are needed to assess its use as a predictive enrichment tool.

Table 1. Demographics and clinical characteristics of patients.
Table 2. Primary and secondary outcomes based on NLR levels. CI: confidence interval, ICU: intensive care unit, IQR: interquartile range, NLR: neutrophil/lymphocyte ratio. * Data were analyzed using multivariable regression models adjusting for the pneumonia severity index; NLR ≤ 12 was the reference.
Table 3. Primary and secondary outcomes of patients according to their neutrophil/lymphocyte ratio levels. ** Data were analyzed using multivariable regression models adjusting for the pneumonia severity index; quartile #1 was the reference.
THE PLUS-MINUS AXIOLOGICAL PARAMETER IN SELECTED SEX-RELATED ORIENTATIONAL METAPHORS IN SPANISH AND ITS ROLE IN INTERLINGUAL CONTRASTIVE STUDIES

The purpose of this article is to verify the functioning of the PLUS-MINUS axiological parameter in the orientational sexual metaphors which exist in Peninsular Spanish. The initial premise of the investigation is based on the concept proposed by Tomasz P. Krzeszowski (1993), according to which, upon studying image schemas, an additional parameter called PLUS-MINUS should be taken into account. This parameter is considered by Krzeszowski as "directly responsible for the dynamism of the metaphorization processes inherent in the formation of concepts based on the relevant schemata. Among [them are] the concepts of varying degrees of abstraction and of varying degrees of axiological load". The present paper aims to demonstrate that the inclusion of the PLUS-MINUS parameter is necessary for the correct reconstruction of the picture of particular elements of reality grounded in language. An attempt will be made to answer the question of whether the type of orientation is in fact associated with a particular axiological load universally, or whether the axiology connected with spatial orientation may vary within a single system or between two different language systems. In this respect, the final part of the paper will encompass some examples of linguistic manifestations of sex-related orientational metaphors in Spanish with their Polish counterparts.

1 Metaphorical orientations and the concept of embodied meaning
Lakoff and Johnson, in one of the works which gave birth to the Cognitive Theory of Metaphor, claim that "metaphorical orientations are not arbitrary. They have a basis in our physical and cultural experience." According to them, "these spatial orientations arise from the fact that we have bodies of the sort we have and they function as they do in our physical environment" (Lakoff & Johnson, 1980, p. 14). Following these lines of reasoning, in their later research the American academics state that: "Real people have embodied minds whose conceptual systems arise from, are shaped by, and are given meaning through living human bodies. The neural structures of our brains produce conceptual systems and linguistic structures that cannot be adequately accounted for by formal systems that only manipulate symbols." (Johnson & Lakoff, 1999, p. 6).

These observations refer clearly to the concept of embodied meaning, developed, among others, by Lakoff and Johnson themselves in their previously mentioned investigations and by Johnson individually (Johnson, 1987, 2007). This concept assumes that the way in which humans understand reality is a reflection of their sensorimotor activity in the outside world, which takes the form of constantly repeated actions. These actions lead to the emergence of preconceptual structures, called image schemas or image schemata (Johnson, 1987, p. 28; Lakoff, 1987, p. 267). They give rise to conceptual structures which are reflected in the language we use (Lakoff, 1987, p. 267).
According to Johnson, image schemas organize our experiences and help us to comprehend them. They are "patterns [which] emerge as meaningful structures for us chiefly at the level of our bodily movement through space, our manipulation of objects, and our perceptual interactions" (Johnson, 1987). On the one hand, preconceptual image schemata directly structure concrete, non-abstract concepts. On the other hand, they shape abstract concepts through metaphorical mappings of the structure of the schemata from the source domain onto the target domain (Johnson, 1987, p. 169).

The PLUS-MINUS axiological parameter in preconceptual image schemata
In 1993 Tomasz P. Krzeszowski proposed the inclusion of an additional parameter, which he called PLUS-MINUS, in the study of preconceptual image schemata. The researcher considers that: "[This parameter is] directly responsible for the dynamism of the metaphorization processes inherent in the formation of concepts based on the relevant schemata. Among these concepts are the abstract concepts [...] as well as other concepts of varying degrees of abstraction and of varying degrees of axiological load" (Krzeszowski, 1993, p. 310).

It would appear that some concepts are related to positive or negative values in a permanent and immutable way, regardless of the context. However, there are situations when the axiology connected with a concept depends on the type of schema evoked by a specific linguistic expression.

3 The (dis)ambiguous axiology of the concept of COPULATION and the aims of the study
One of the conclusions drawn by Krzeszowski is that there exists a stable axiology associated with the act of copulation, which, according to him, is a highly positively loaded concept. He points out that copulation may be interpreted as an interpersonal bond, the aim of which is the transmission of life. Thus, it belongs in the LINK schema, or, as unity, in the PART-WHOLE schema. Consequently, it combines the positive values of both schemata from which it arises (Krzeszowski, 1993, p. 313).
It is interesting to collate the conclusions drawn by Krzeszowski with the results of the current author's own research on the Spanish erotic lexicon, according to which copulation and related concepts are axiologically ambiguous and not so straightforward (cf. Popek-Bernat, 2015). Taking this into account, and following the premises developed by Krzeszowski, this paper intends to establish whether, and to what extent, the type of orientation related to a particular image schema, as manifested in language by means of specific expressions, determines the axiology of the concept of interest (in this case, the act of copulation). In order to answer these questions, the axiology grounded in some orientational sex-related metaphors detected in Spanish will be analyzed and reconstructed by applying the PLUS-MINUS parameter. Afterwards, an attempt will be made to determine whether the type of orientation is in fact associated with a particular axiological load universally, or whether the axiology connected with spatial orientation may vary within a single system or between two different language systems. In the final part of the paper, some examples of the linguistic realizations of sex-related orientational metaphors in Spanish will be compared with their Polish counterparts.

The PLUS-MINUS axiological parameter in selected orientational schemata
Before analyzing specific examples, it is necessary to outline some general remarks concerning the function of the PLUS-MINUS parameter in selected orientational image schemata (cf. Krzeszowski, 1993, pp. 320-323).

The UP-DOWN orientation
This schema refers to the canonical form of the human body, directed upwards. Human physical development is connected with growing upwards, which is one of the basic positive experiences, supported by many other socio-cultural habits and situations (Krzeszowski, 1993, p. 321). It is common in many societies to turn the thumbs up to signal that everything is fine. Our bodies are erect and our heads lifted up when we feel well and comfortable. When we smile, the corners of our mouths curve upward. The UP orientation is therefore positively charged (pp. 321-322). It is reflected in many languages. In English, for example, there exist expressions like he has risen to the top or you've grown in my eyes. According to Krzeszowski, the DOWN orientation is associated, in contrast, with negative values. When we want to indicate that something has failed or gone wrong, we normally direct our thumbs downwards. When we experience a decrease in happiness or feel depressed, the corners of our mouths curve down (pp. 321-322). In English, this is linguistically illustrated by such expressions as he fell into depression or he came down with flu.

The FRONT-BACK orientation
This orientational schema is supported by the UP-DOWN schema. As observed by Krzeszowski: "Due to the evolutionary process [...] man assumed the erect position as a result of which what was originally FRONT (the head) became also UP without actually ceasing to be FRONT (human face). Similarly, what was BACK became DOWN without ceasing to be BACK." (Krzeszowski, 1993, p. 322).

These resemblances are visible at the level of axiology because the values associated with the FRONT-BACK orientation are the same as in the case of the UP-DOWN orientation. FRONT, allied to UP, "has a definitely positive value due to the fact that the fundamental experience connected with this orientation is the experience of human face, the most representative part of the human body" (Krzeszowski, 1993, p. 322). BACK, by contrast, is conventionally a negatively loaded orientation. "The back parts of our bodies are certainly less representative of us as human beings." (Krzeszowski, 1993, p. 323).

The positive values attributed to FRONT and the negative ones attributed to BACK are illustrated in most languages by numerous expressions. In English, for example, when somebody is sick and helpless, this person is on one's back. If somebody wants to have good seats in the theatre or another type of auditorium, they should reserve the front ones.

The IN(TO)-OUT orientation
The IN(TO)-OUT schema is not ascribed by Krzeszowski to the category of orientational image schemata, although it is obviously connected with a concrete type of preconceptual orientation of particular human experiences. He alludes to the IN(TO)-OUT orientation when he describes the CONTAINER schema and its two variants: BODY-AS-A-CONTAINER and BODY-IN-A-CONTAINER. The axiologies linked to these variants are similar only to a certain degree, mainly because their motivations are different (Krzeszowski, 1993, pp. 314-317).
The BODY-AS-A-CONTAINER variant is grounded in the experiences of breathing and eating. What we usually introduce into the BODY-AS-A-CONTAINER are different sorts of substances indispensable for the functioning of our organisms, without which it would be impossible to survive (e.g. nutrients or inhaled air). At the same time, our organisms expel, exhale, or excrete OUT everything which is, or may be, harmful or unnecessary. Consequently, IN(TO) is conventionally associated with positive values and OUT with negative ones (Krzeszowski, 1993, p. 315).

Nevertheless, Krzeszowski shows that the axiology attributed to the IN(TO)-OUT orientation is not stable. As far as the second variant of the CONTAINER schema is concerned, the axiological load of the two poles of this orientation is constantly changing. Since "the primary experience associated with [this variant of] the CONTAINER schema is that of being in our mother's womb" (Krzeszowski, 1993, p. 315), IN(TO) is positively charged, as the container (the mother's womb) provides favorable conditions for the development of the organism inside and protects it from external factors. On the other hand, the body in a container is in some way limited by its boundaries, and sooner or later it has to get OUT in order to be able to continue its further development outside. In this case, the IN(TO) orientation is associated with negative values because it involves the limitation of freedom and constriction. Getting out of the container or, more precisely, freeing the body from it, is in this context synonymous with gaining freedom, which is, almost universally, positive. In other words, this interpretation evokes a negative axiological charge attributed to the IN(TO) orientation (Krzeszowski, 1993, p. 316).

It must be remembered, however, that getting out of a mother's womb not only leads to freedom, but also exposes us to potential dangers or threats. There is no exterior "shell" to protect us. Because of this, the OUT orientation in the context of the BODY-IN-A-CONTAINER schema is axiologically as complex and contradictory as the IN(TO) orientation. It may be charged positively (the body imprisoned in the container, which succeeds in getting out of it, gains freedom) or negatively (the body which comes out of the container is not safe anymore, as it loses its natural protection) (Krzeszowski, 1993, pp. 316-317).

The "axiological dialectics" (Krzeszowski, 1993, p. 315) described above is deeply grounded in language structures. To give some examples, pairs of expressions like in business - out of business, within reach - out of reach, the Democrats are in - the Republicans are out reflect the contradictory values stemming from the IN orientation (conventionally positively loaded) and the OUT orientation (conventionally negatively loaded). Nevertheless, as observed by Krzeszowski: "[...] the assignment of values [to a particular orientation] may depend on a number of factors, such as other schemata which may interact with the CONTAINER schema as well as the value of the container itself; if the container, for example, home, is positively charged, being IN it is also positively charged; if the container, for example, prison, is negatively charged, being IN it is also negatively charged." (Krzeszowski, 1993, p. 317).
Correlations between orientational schemata and other types of schemas
The CONTAINER schema, referred to in the context of the IN(TO)-OUT orientation, is also related to the UP-DOWN orientation. As Krzeszowski states, "through our mouths, situated in the upper parts of our bodies, we take in nourishment, which sustains our life" (Krzeszowski, 1993, p. 321). This preconceptual experience therefore reinforces the positive axiology of the UP orientation.

As far as the IN(TO)-OUT orientation is concerned, it evokes not only the CONTAINER schema, but also the LINK schema, which, for its part, overlaps with the BODY-IN-A-CONTAINER variant of the CONTAINER schemata. Krzeszowski claims that: "When we are born, we get OUT of the original CONTAINER, and at the same time the umbilical cord, our first LINK, is broken. In this way, both getting out of our mother's womb and severing the umbilical cord constitute primary experiences of acquiring freedom. [...] However, the LINK is only physically broken. On the social level, it continues to exist as we still very much depend on our parents for both food and protection within the bounds of the family, which we begin to conceive metaphorically as a SHELTER. [...] Later in life, we form new links and enter new containers [...], our freedom is constrained by new links and new containers. Thus, the dialectical struggle grounded in this primary axiological experience continues within us: on one hand, we wish to be free, and we consider freedom to be among the most outstanding positive values, while, on the other hand, we willingly impose constraints on our freedom to obtain security and protection." (Krzeszowski, 1993, p. 316).

The FRONT-BACK schema, described in subsection 2.1.1, is correlated with other schemata, as is the case with the other orientational schemata. The FRONT-BACK schema, reinforced by the UP-DOWN one, is additionally motivated by the BALANCE and SOURCE-PATH-GOAL schemata. The loss of equilibrium normally causes a fall (a DOWNward movement), prevents us from moving towards the FRONT, and stops us from reaching a chosen GOAL. When we manage to keep our balance, we maintain an erect body position, directed UPwards, and we are able to move forward along a chosen path, gradually reaching our GOAL. These experiences are associated with a concrete type of axiological load (positive or negative), resulting from the functioning of the PLUS-MINUS parameter within a particular schema. Both UP and FRONT, as well as BALANCE and GOAL, are positively charged. Their axiologically negative counterparts are DOWN, BACK, LACK OF BALANCE, and the stillness which makes it impossible to reach the GOAL (Krzeszowski, 1993, pp. 321-322).

Orientational metaphors in the Spanish erotic lexicon
The Spanish erotic lexicon abounds in expressions which are linguistic realizations of different types of orientational metaphors related to heterosexual intercourse. This conclusion was drawn from the analysis of 98 lexical units which elaborate these metaphors and are registered in the Diccionario del sexo y el erotismo by Félix Rodríguez González (2011). All of the 98 lexical units designate the act of copulation between people of different sexes, or are semantically related to activities aimed at triggering sexual excitation.
The UP-DOWN, IN(TO)-OUT, and FRONT-BACK orientations evoked by the sex-related metaphors in question are associated with the type of movement denoted by their different elaborations. In addition, however, they are associated with a specific projection of the male and female genitalia, which are linguistically positioned in a specific location in relation to different parts of the body of the agent or patient of the sexual action. Taking into account the basic (non-erotic) meanings of the lexical items which form the basis of the metaphors of those orientations, they can be categorized into the following groups:
1. Expressions which semantically allude to the act of changing vertical position in space by moving UPwards, e.g. montar ('to go up, to climb'), empinarse ('to stand on one's toes' or, referring to a horse or other quadrupedal animal, 'to rear up');
2. Expressions which semantically allude to the act of changing vertical position in space by moving DOWNwards, e.g. apearse, descabalgar ('to dismount a horse'), enterrar [la sardina] ('to bury, to dig [the sardine] into the ground'), acostarse ('to lie down, to change the body position from the vertical to the horizontal');
3. Expressions which semantically allude to the act of changing horizontal position in space by moving something INwards, e.g. meter ('to introduce something into an object or a container'), entrar ('to enter, to go in');
4. Expressions which semantically allude to the act of changing horizontal position in space by moving something OUTwards, e.g. soltar el manda(d)o ('to free, to let the phallus out'), echar un cohete ('to launch a rocket, to throw a banger');
5. Expressions which are not semantically related to any explicitly directed movement but evoke the FRONT-BACK orientation through reference to a concrete part (front or back) of the human body, e.g. culear ('to move the bottom'), rabear ('to wag the tail').

5 Axiological analysis of selected sex-related orientational metaphors in Spanish

The sexual act is an UPward movement
This orientational metaphor is evoked in Spanish mainly by means of expressions which, in a non-erotic context, are semantically connected with mounting and riding a horse. Most of them are based on the verb montar (see Table 1), whose basic meanings are 'to go up, to climb', 'to get on (a horse)', or 'to ride (a horse)' (DRAE). It is necessary to emphasize that within the semantic scope of the Spanish verbs and phrasal verbs which designate the activity of riding a horse (cabalgar / montar a caballo), the action of climbing or going up is always denoted (literally designated in Spanish by the verb subir). All this taken into account, it becomes clear that the lexical items formed on the basis of the verb montar in all cases entail UPward movement.
According to the previously quoted observations by Krzeszowski, whatever is currently UP, or is directed UPwards, should be associated with positive values. It is interesting to contemplate these conclusions in the context of the expressions included in Table 1. Most Spanish expressions which designate heterosexual intercourse based on the verb montar evoke a male grammatical subject. It tends to be a man who "mounts a woman" in erotic situations. The high frequency of use of expressions 2 and 3 in Spanish reflects this. As a matter of fact, a woman who participates in a sexual act is often called jaca or yegua (in English, 'mare'). In these cases, it is the man who moves UPwards in order to mount her. It is important to note that initially the man is located DOWN and the woman is UP. However, when it comes to the intercourse between them, the positioning of the lovers along the vertical axis is inverted. After climbing UP, the man places the woman DOWN and gets on her as if he were a horse rider mounting a mare. Consequently, what we are dealing with is the projection of a male lover who is directed UPwards, trying to reach his GOAL, gradually reinforcing his position. The action of reaching the GOAL (UP) involves leaving the woman DOWN. The role of the man located UP thus seems to be crucial, since he has control over the whole situation, and where the mare moves afterwards depends solely on his whim. According to Rodríguez González, the expressions formed on the basis of the verb montar which connote the action of mounting the woman also entail the action of dominating her, or even of the man taking possession of her (Rodríguez González, 2011, p. 691). This perspective implies that the woman involved in sexual intercourse receives the attributes both of an animal and of an object which may be possessed. Undoubtedly, this significantly lowers her axiological and ontological status, since axiology is determined, among other things, by the anthropocentric attitude of man towards the world around him, which contributes to the attribution of negative values to all of the non-human elements of reality (Tokarski, 1991, pp. 75-76).

The positive image of a male horse rider, who is UP, contrasts with the negative image of the female, who is dominated, moved DOWNwards from the upper position, and animalized. The axiology emerging from the metaphors elaborated by the described expressions is therefore consistent with the axiology stemming from the UP-DOWN schema after the insertion of the PLUS-MINUS parameter into its study. It is additionally reinforced by the axiology of the SOURCE-PATH-GOAL schema (a man reaches his GOAL through the realization of the UPward movement, which increases the positive axiological load associated with the male lover).
The UP-DOWN orientation in a sex-related context is, however, not connected with a stable axiology, since it involves movement. In the case of expressions which realize the metaphor THE SEXUAL ACT IS AN UPWARD MOVEMENT, the man is at first positioned DOWN (which should be associated with negative values attributed to him) and only later is he UP (which suggests that he is positively charged). In order to correctly determine the axiology emerging from orientational schemata which evoke a movement in a particular direction, the possible correlations with other schemata (e.g. SOURCE-PATH-GOAL) should be taken into account, along with the orientation of this movement and its final result. Only on this basis is it possible to reconstruct the values attributed to a concrete expression.

Before embarking on the analysis of other orientational metaphors related to sexual intercourse in Spanish, it is worth examining expression 1 in Table 1. This expression proves that the verb montar is not always associated with a male grammatical subject, which has an influence on the axiology of the orientational metaphor evoked.

As far as expression 1 is concerned, a woman is not only the grammatical subject but also the agent of the action of 'getting on / mounting a bar'. Consequently, it is the woman who makes the UPward movement. As a result, during intercourse the woman is the person who is UP and who plays the crucial role in the act. The values attributed to the female lover are highly positive in this case. The male lover, by contrast, is negatively loaded. This negative axiology of the man is motivated by his position in space (he is situated DOWN, below his female counterpart) and is also influenced by the anthropocentric perception of reality (see above). The allusion to the male lover entailed by the expression in question is fairly implicit and hardly noticeable, since it is made through metonymical extension. The concept of barra (bar) is to be interpreted as a metaphorical projection of the phallus. In this case, it is more than obvious that it is not only the male sex organ itself on which the woman is trying to get during the sexual act but the entire man. Assuming that the phallus, which is conceived in terms of a bar, stands for the whole person, what we perceive is the reification of the man as a lover (he receives the attributes of a bar) and his axiological depreciation.

4.2 The sexual act is a DOWNward movement

According to an analysis of the Spanish erotic lexicon, the expressions which evoke this orientational metaphor may be divided into three groups on the basis of their non-erotic meanings:

1. Expressions semantically connected with horse-riding;
2. Expressions semantically connected with changing body position from vertical to horizontal;
3. Expressions semantically connected with the activity of directing an object DOWNwards, e.g. putting something underneath the ground.
Among the expressions ascribed to group 1, one can find verbs such as descabalgar or apearse which, according to their basic lexicographical definitions registered in DRAE, refer to the action of dismounting a horse. The participant in a heterosexual act, who is initially UP after mounting the other lover, realizes a DOWNward movement, towards the orientation conventionally associated with negative values. It is worth mentioning that the verb descabalgar in an erotic context alludes to the finalization of the intercourse (Rodríguez González, 2011, p. 322), whereas the verb apearse refers to its momentary interruption and tends to be used as part of the expression apearse en marcha (in English, 'to alight from the horse in motion'), which is associated with the practice of coitus interruptus (Rodríguez González, 2011, p. 95). Because of this, the DOWNward movement should be related to the action of ceasing (temporarily or permanently) the intercourse. This means that only the final stage of the sexual act, or its momentary interruption, has a negative axiological charge, rather than the entire sexual act.

As far as the second group of expressions is concerned, it is represented, on the one hand, by lexical items referring in a non-erotic context to the action of lying down in bed (e.g. acostarse, empiltrarse, encamarse) and, on the other hand, by units semantically connected with the action of falling to the ground and rolling around (e.g. revolcarse).

In the case of the former expressions, it is worth investigating whether the DOWNward orientation evoked by their non-erotic meanings determines a potentially negative axiology of the sexual intercourse. It seems that the lovers projected by means of these verbs are changing their body position from the vertical to the horizontal in order to feel more comfortable. We normally lie down in bed to take a rest or relax. From this perspective, the horizontal position achieved as a consequence of the DOWNward movement becomes our GOAL, which improves our comfort. The GOAL, which is conventionally positively loaded, therefore causes a reorientation of the conventionally negative axiology attributed to the DOWN orientation and creates a positive axiology of the sexual act illustrated by the orientational metaphor THE SEXUAL ACT IS A DOWNWARD MOVEMENT. It should not be forgotten, however, that changing one's body position from the vertical to the horizontal may also be triggered by a negative experience, e.g. illness. When we are ill, we often lose energy and do not have enough strength to keep walking or even to remain standing. In other words, illness can cause a DOWNward movement. Despite this negative connotation, it is clear that the bed becomes the GOAL of our movement DOWNwards. When we are ill and lie down in bed, our state of health normally improves. This experience reinforces the positive axiology associated with the metaphor which evokes the DOWN orientation. These observations show that the DOWN orientation in a sex-related context does not always denote negative values.
Similar conclusions may be drawn from the analysis of the verb revolcarse. As mentioned above, in a non-erotic context it alludes to the action of falling to the ground and rolling around, which means that it evokes an implied animal metaphor. The actions of falling down to the ground and rolling around are typical of many animals (e.g. dogs) when they want to play or to express their joy. Due to this fact, the verb revolcarse is to be associated with a positively grounded experience. However, the image of the lovers projected by the metaphor elaborated by means of this expression is not so positive. They roll around on the ground as if they were animals. Even if they experience some positive emotions, from the axiological point of view they receive animal (non-human) attributes. It needs to be remembered that animals are traditionally considered less valuable than humans, who are situated higher in the hierarchy of the great chain of being (see 4.1). Because of this, the anthropocentric perspective yet again reinforces the negative axiological load of the participants in the intercourse who realize the DOWNward movement. Nevertheless, it does not affect the axiology associated with the intercourse itself, since the negative axiological load conventionally attributed to the DOWN orientation is reoriented under the influence of the positive axiology attributed to DOWN interpreted as a GOAL (through the correlation with the SOURCE-PATH-GOAL schema). When the lovers reach their GOAL, they experience joy.

Despite these observations, the orientational metaphor THE SEXUAL ACT IS A DOWNWARD MOVEMENT remains associated in most cases with the negative axiology typical of the DOWN orientation. It is illustrated, among others, by means of the expression enterrar la sardina, which literally means 'to bury a sardine'. La sardina is one of numerous Spanish lexical items which refer metaphorically to the penis. The action of 'burying', which involves digging down into the ground, evokes the DOWNward movement. The conventionally negative axiology of DOWN is in this case reinforced by the reference to the negative experiences of death and funeral. It is interesting, however, that burying the sardine involves placing it IN(side) a hole. This means that DOWN overlaps here with IN, the orientation conventionally associated with positive values. This weakens the intensity of the negative axiological load attributed to the SEXUAL ACT evoked by the metaphor but does not reorient its axiology entirely, since it is strongly motivated by the negative experiences described above.

4.3 The sexual act is an INward movement

In most cases this orientational metaphor is elaborated in Spanish by expressions axiologically associated with positive values, e.g. envainar (in English, 'to sheathe') or meter la flauta en el estuche (in English, 'to put the flute in a case'). The positive axiological load conventionally attributed to the IN orientation is additionally reinforced by the positive experiences evoked by the non-erotic connotations of these lexical units. When we put a weapon, such as a knife or sword, into a sheath, or a flute into a case, we protect these objects and shield them from the impact of external factors. The insertion of the phallus into the vagina during the act of copulation is in this case grounded in the positive axiology associated with the IN(to) orientation and in the positive experiences of protection and safety.
Nevertheless, the axiology of the metaphor in question is not always positive in such a straightforward way, since it is also linguistically realized by expressions such as hincar, meter un clavo or dar por donde amargan los pepinos. The first two refer to the act of sexual penetration in terms of the action of hammering a nail, which is evoked by the basic meanings of these lexical items registered in DRAE. The metaphorical projections emerging from them are based on the imagery of the phallus receiving the attributes typical of a sharp nail, introduced into the vagina by force and in a violent way (it is not put in but rather stuck IN). The projection of sexual intercourse stemming from these expressions is grounded in the negative experience of violence, whose victim is the woman participating in the intercourse. It is worth emphasizing that from the perspective of the agent of the action (who is conceptualized in terms of a torturer), hammering the nail is associated with reaching the GOAL. This reinforces a positive axiological load, which is in turn weakened by the negative experience of violence.

An interesting case from the axiological point of view is the expression dar por donde amargan los pepinos, which literally means 'to give through the place where the cucumbers get bitter'. Although this translation sounds exotic and artificial in English, it is necessary to understand the metaphor elaborated by this phrasal verb in order to be able to reconstruct its axiology.

First of all, there are different expressions in Spanish which designate the action of inserting an object INTO something. One of the most frequently used is the verb meter, whose basic meaning is 'to put, to introduce something inside'. Nevertheless, it is also possible to use in this context the expression dar por which, as can be seen in the translation proposed above, literally means 'to give through'. This clarification is highly relevant axiologically, since penetration conceptualized in terms of putting the flute in a case or hammering a nail is associated with completely different values than penetration metaphorically projected by means of the action of giving. If we give something, it usually shows our kindness and generosity towards somebody. From this perspective, the penetration evoked by the expression dar por has positive connotations. It contrasts with the violent image of penetration projected by the previously quoted expressions hincar and meter un clavo.

In order to fully reconstruct the axiology connected with the expression dar por donde amargan los pepinos, it needs to be added that in an erotic context it refers to the act of sodomy (Rodríguez González, 2011, p. 310). The place 'where the cucumbers get bitter' (donde amargan los pepinos) is one of numerous Spanish lexical items referring to the anus. The anus is located at the BACK of the body. Its negative axiology, associated with the BACK orientation, is additionally reinforced by the image of cucumbers turning bitter. If the cucumbers turn bitter, they are probably undergoing some kind of decomposition or degradation and, as a result, they lose their canonical form (Krzeszowski, 1993, p. 311). In other words, the expression in question evokes one of the experiences considered by Krzeszowski to reinforce the negative axiological load attributed to the PART in the PART-WHOLE preconceptual image schema (Krzeszowski, 1993, p. 311).
In summary, the expression dar por donde amargan los pepinos elaborates a metaphor of the sexual act based on an axiological struggle. On the one hand, the act of penetration itself, which consists in introducing the phallus into the body of the woman, is positively charged. This is motivated by the axiology conventionally associated with the IN orientation in the BODY-AS-A-CONTAINER variant of the CONTAINER schema. This positive axiological load is reinforced by the positive semantic connotations of the expression dar por. On the other hand, taking into account that the penetration denoted by the lexical unit in question is carried out through the anus, the sexual act is negatively charged. This is motivated by the negative axiology conventionally attributed to the BACK orientation and by the image of the cucumber turning bitter, which reinforces the negative axiology of the PART in the PART-WHOLE schema.

4.4 The sexual act is an OUTward movement

The axiology associated with this metaphor is systematic and stable, which contradicts the observations made by Krzeszowski referring to the changeable character of the values ascribed to the OUT orientation (see subsection 2.1.3). Two selected examples of expressions which elaborate the metaphor THE SEXUAL ACT IS AN OUTWARD MOVEMENT will now be analyzed.

The first expression is the idiom descargar el biberón, whose literal meaning in a non-erotic context is 'to empty the baby's bottle'. The metaphor of heterosexual intercourse evoked by this expression is based on a series of visual analogies between the action of emptying a baby's bottle of milk through the hole in the teat and ejaculation. In both cases, there is an allusion to a white liquid (milk and sperm) getting OUT of the CONTAINER (a bottle or, projected in terms of this object, a male sexual organ) inside which it was previously to be found. In order to attribute a proper axiological load to the orientational metaphor, several factors need to be taken into account.

Firstly, it is no coincidence that Spanish speakers use in this context the verb descargar, which literally means 'to remove a load or weight, to get rid of it' (DRAE). It is worth asking why they do not use the verb vaciar, since, semantically speaking, it is directly connected with the action of leaving something empty (DRAE). The choice of descargar is probably due to the fact that substituting vaciar would change the projection of the sexual act evoked by the expression in question. It seems that the semantics of the verb descargar have a significant influence on the axiology of the whole expression. The verb entails not only an OUTward movement (getting out of a CONTAINER), but also the fact that the load which was inside the CONTAINER was excessive (the Spanish term carga semantically refers to an object which presses heavily and may be perceived as a burden). Male semen, which receives the attributes of milk, is therefore considered an overload or excessive burden, something that has a negative impact on human life and from which a human being cannot break free. The act of expelling a substance with such properties out of the body is to be associated with something positive: the male organism is "freed" of the "overload".
The second expression to be analyzed in the context of the metaphor THE SEXUAL ACT IS AN OUTWARD MOVEMENT is soltar las cascarrias, which can be translated as 'to shake off the dirt from the bottom of one's trousers / dress' (DRAE). Like the expression descargar el biberón, this idiom evokes the image of ejaculation in the context of heterosexual intercourse. In this case, the sperm is conceptualized in terms of dirt which is let OUT of the male body. The OUTward movement is yet again associated with a positive axiology since, thanks to this activity, the man is "cleansed". The positive axiological load connected with the OUT orientation is reinforced by the positive experience of freedom, to which the semantics of the verb soltar alludes.

4.5 The sexual act is a movement from/at the BACK

Spanish sex-related orientational metaphors are not always connected with a straightforwardly directed movement. There is, for instance, a group of expressions which evoke the BACK orientation by referring to the BACK part of the human body.

One of the most interesting lexical items from the axiological point of view is the verb rabear, whose basic meaning is 'to wag a tail'. Rabo ('tail') is one of many Spanish terms which allude to the penis. Its use in the erotic context involves a spatial reorientation of the elements in question. The human phallus, positioned at the FRONT of the body, receives the attributes of the animal tail, positioned at the BACK of the body. The movements of the penis executed at the FRONT during the intercourse are conceptualized in terms of the movements of the tail executed from/at the BACK. As mentioned before, BACK is conventionally negatively loaded (see 2.1.2).

The negative axiology associated with the human penis and with the sexual act denoted by the verb rabear is directly connected with the values ascribed to the BACK orientation. It is reinforced by the "homo-animal opposition" described by Agnieszka Libura (2003, p. 121). According to Libura, "the human body is 'normal', so non-human parts of the body - wings, tails, horns - are axiologically differently charged" (Libura, 2003, p. 121). From the anthropocentric perspective, everything which is not originally human should be considered inferior (see 4.1). Due to this fact, the perception of the human penis in terms of a tail, and of heterosexual intercourse in terms of wagging the tail, lowers the axiological status of the male genitalia and of the act of copulation itself. The axiology evoked by this expression is not, however, straightforwardly negative. It should be remembered that animals frequently express joy and contentment by wagging their tails. Taking into account this extralinguistic experience, the action of wagging the tail is to be associated with a positive axiology.

The case of the verb rabear proves, contrary to Krzeszowski's statements, that the BACK orientation is not always negatively charged.

Conclusions

The results of the research on sex-related metaphors in Spanish presented above show that the PLUS-MINUS parameter is indispensable in order to reconstruct as completely as possible the picture of particular elements of reality grounded in language.
As has been demonstrated, in many cases the observations made by Krzeszowski concerning the correlations between orientation and axiology are universal. To give an example, many linguistic elaborations of sex-related orientational metaphors in Spanish are positively loaded in the context of the UP orientation and negatively loaded in the context of the DOWN orientation. However, the conventional axiology attributed to a specific orientation is sometimes reversed. There are cases of positively charged expressions, e.g. empiltrarse, encamarse or revolcarse, which evoke the DOWN orientation, normally associated with the minus pole of the UP-DOWN schema (see 4.2).

The analysis carried out within this paper enables the verification of the universality of Krzeszowski's thesis that the values attributed to the IN(to)-OUT orientation are not stable. The study of Spanish metaphors associated with the erotic sphere confirms that, as far as the IN(TO) orientation is concerned, the associated axiology is in fact complex and sometimes (e.g. dar por donde amargan los pepinos) leads to different axiological struggles (see subsections 4.3, 4.4). The OUT orientation, by contrast, is axiologically stable and is always positively loaded in a sexual context. This shows that Krzeszowski's claims cannot be considered a general principle.

Other important statements which were presented in Krzeszowski's 1993 work, and which are referred to in this paper, place an emphasis on the correlations between different image schemata. He argues, for example, that the IN(to)-OUT schema overlaps with the CONTAINER and LINK schemas (see 2.1.3, 2.1.4). This paper has revealed some correlations which were not described by Krzeszowski and should be considered unconventional. For instance, there was a case in which the DOWN orientation overlapped with the IN(to) one (e.g. enterrar la sardina), which contributed to the reorientation of the positive values conventionally attributed to the IN(to) orientation.

During the reconstruction of an expression's axiology, it is worth remembering that orientation is only one of many determinants of the values ascribed to particular metaphors. Researchers who conduct investigations into this matter should also take into account all of the denotations and connotations of the linguistic elaborations of these metaphors, as well as other factors such as the human attitude towards the surrounding world (see 4.1, 4.2, 4.5). The results of axiological analysis conducted in this manner may reveal details of special importance for contrastive studies or translation practice. Some examples of these findings will be presented in the following section.

The PLUS-MINUS parameter in contrastive studies

Taking into account the final conclusions drawn from the study, this section of the paper will try to determine whether a given type of orientation is associated with a particular axiological load universally, or whether the axiology connected with spatial orientation may vary between two different language systems. In this regard, it is interesting to search for Polish equivalents of the selected sex-related orientational metaphors in Spanish.
To illustrate the problem, the metaphor THE SEXUAL ACT IS AN UPWARD MOVEMENT will be used as an example. In Spanish it is evoked, among others, by means of expressions formed on the basis of the verb montar (see subsection 4.1). As has been demonstrated, montar alludes to the UPward movement made by the man, as a consequence of which the man is UP during the intercourse. The UP orientation, conventionally associated with the plus pole of the UP-DOWN schema, contributes to the positive values attributed to the man in the sexual context. The Polish verb dosiadać, however, has different connotations, which affect the axiology of the male lover. One of its basic meanings is "to sit DOWN on a horse's back in order to ride it" (Lewinson, 1999, p. 40). This lexicographical definition suggests that in Polish a man involved in the sexual act has to make a DOWNward movement in order to be positioned UP during the intercourse. Of course, the action of getting on a horse always involves an UPward movement and, according to other lexicographical definitions of the verb dosiadać registered in Polish general dictionaries, it entails this meaning as well. Nevertheless, at the same time it evokes a DOWNward movement. This means that the positive axiological load of the man associated with the UP orientation is weakened by the negative axiology conventionally associated with the DOWN orientation. In other words, the axiology attributed to the male lover emerging from the analysis of the Polish expression dosiadać jak kotkę is not as straightforward as in the case of its apparent Spanish counterpart montar una mujer. It also proves that the conventionally bi-polar UP and DOWN orientations sometimes do overlap, which leads to axiological struggles.

As far as the Polish expressions formed on the basis of the verb dosiadać are concerned, it is worth observing that while in Spanish it tends to be the man who "mounts a woman" in erotic situations (see subsection 4.1), in Polish it is more frequently the woman. The action of mounting a horse evoked, among others, by the phrasal verb dosiadać konia (see Table 2) always involves a female grammatical subject in Polish. This means that Polish sex-related orientational metaphors connected with the UP-DOWN orientation normally entail the projection of the woman UP and of the man, who receives the animal attributes of a horse, DOWN. In the context of the UP-DOWN schema, the female lover is therefore more frequently associated with a positive axiological load in Polish than in Spanish. The male lover, by contrast, is more frequently associated with a positive axiological load in Spanish; in Polish, he is normally negatively charged. This is illustrated, among others, by expressions 2, 3 and 4 included in Table 2. In Polish, it is the woman who 'gets on a man', 'climbs a cuckoo' or, in a more vulgar register, 'hops on a cock'. In each case, the female lover positions herself UP and places the man DOWN. Furthermore, the low axiological status of the man resulting from the DOWN orientation ascribed to him is in some cases lowered even further by his projection in terms of non-human elements of the world (e.g. 'cuckoo').
All of these conclusions reveal that the insertion of the PLUS-MINUS parameter into interlingual contrastive studies presents great potential. It can be observed that orientational metaphors in two different language systems may have different axiological backgrounds. This is extremely important for translation studies and translation practice since, as has been demonstrated, some interlingual equivalents of sex-related expressions are only apparently similar and often entail totally different metaphorical projections associated with opposite axiological poles.

Table 1: Selected Spanish erotic expressions with the verb MONTAR and their translations into English
Philanthropic Investment in Equity: Cultivating Grass Roots Leaders for the Equitable Revitalization of Marginalized Communities

Community development must include deeper investment to foster a pipeline of community leaders to support equitable redevelopment practice in marginalized communities under threat of gentrification in the city. We argue that philanthropy is critical to develop this pipeline, particularly in the era of the neoliberal city. The following case study analyzes efforts to develop place-based grass roots leadership in marginalized neighborhoods of Columbus, Ohio. The United Way of Central Ohio, through their Neighborhood Leadership Academy (NLA) program, has partnered with community organizations to develop multiple cohorts of grass roots neighborhood leaders over several years within three specific neighborhoods. Our case identifies how philanthropic investment in a grass roots leadership development model centered on equity has impacted policy outcomes, built bridging social capital and spurred successful activism. Our case illustrates a potential model for building social infrastructure through philanthropic investment to buttress potentially disruptive neighborhood change. In the era of the neoliberal city, neighborhoods can no longer rely upon federal funding, leaving redeveloping neighborhoods particularly vulnerable to market-driven gentrification and displacement. In this void of resources, philanthropic efforts to support robust grass roots leadership are the last remaining defense against widespread displacement and the primary asset to support equitable development practices.

Introduction

The communicative model, with its emphasis on citizen engagement, has largely become a vehicle for middle-class interests in public sector decision making (Fainstein, 2011). We are at a crossroads in terms of rethinking approaches to genuine community participation, engagement and empowerment, particularly in meeting the needs of marginalized communities threatened by gentrification in the city. Contemporary planners seeking to support robust engagement within marginalized neighborhoods face community barriers pertaining to trauma, trust and time (Mullainathan & Shafir, 2013; Uslaner & Brown, 2003; Weinstein et al., 2014). These factors are especially acute for marginalized communities sensitive to gentrification processes and the prospect of displacement (powell & Spencer, 2002). Community development must include deeper investment to foster a pipeline of community leaders to support equitable redevelopment practice in marginalized communities under threat of gentrification in the city. We argue that philanthropy is critical to develop this pipeline, particularly in the era of the neoliberal city. In this paper, we analyze efforts to develop place-based grass roots leadership in marginalized neighborhoods experiencing redevelopment in Columbus, Ohio. The United Way of Central Ohio, through their Neighborhood Leadership Academy (NLA) program, has partnered with community organizations to develop multiple cohorts of grass roots neighborhood leaders over several years within three specific neighborhoods. The growth machine politics of the Columbus region can frustrate local community activism (Smola & Ferenchik, 2019; Webb, 2013). This case study focuses on the development of a multi-year intervention in several neighborhoods undergoing market-driven redevelopment in the city of Columbus.
The NLA program emerged as a philanthropic partnership with community organizations to build a stronger pipeline of leaders to support equitable development. As a result, more than 100 leaders have emerged to create a new layer of social capital and activism for these neighborhoods, which are under significant redevelopment pressure. Our case study includes interviews, surveys and participant observation documenting the evolution of this grass roots leadership development effort over the past five years. Our case identifies the way philanthropic investment in a grass roots leadership model has impacted policy decisions, built bridging social capital and spurred successful activism. Our case illustrates a potential model for building social infrastructure through philanthropic investment to buttress potentially disruptive neighborhood change.

Philanthropy, Civic Engagement and the Neoliberal City

In this case study, we bring together different strands of scholarly literature relevant to our understanding of the contribution philanthropic organizations make to equitable community development in marginalized communities in the city. The first area of previous research relates to the increasing relevance of philanthropic organizations in community and economic development broadly. Philanthropic foundations and organizations are now well recognized as key stakeholders in the revitalization and economic growth of cities (Martinez-Cosio & Rabinowitz Bussell, 2013). These philanthropic initiatives are primarily place-based and have been categorized as attempting to utilize a spatial approach to addressing interconnected social challenges while building community capacity and aligning various stakeholders (Murdoch et al., 2007). Some scholars question whether these kinds of organizations adequately distribute resources to those most in need (Díaz & Shaw, 2002; Eikenberry, 2006) or whether they can foster real social change that is beneficial to marginalized groups and neighborhoods (Scott et al., 2020; Nickel & Eikenberry, 2009). Pill's (2019) case analysis of philanthropic place-based revitalization efforts in Baltimore and Cleveland found that efforts primarily aligned with the existing neoliberal policy agendas of more powerful political stakeholders and were less successful in promoting the agency of neighborhood residents. Scholars have also long recognized the power imbalance between philanthropic institutions and community stakeholders in redevelopment initiatives and the limited racial, ethnic and economic diversity in philanthropic leadership (Azevedo et al., 2021; Barkan, 2013). Others, in contrast, suggest that philanthropic organizations have a diverse range of economic development strategies that encompass social equity components (Giloth, 2019), and that philanthropic groups, more so than local governments, can be flexible and innovative, take risks and collaborate on initiatives to bring about significant social transformation and change. Recently, philanthropic coalitions have emerged to engage in significant economic development initiatives in the city. One prominent example is Living Cities, a collaboration of large foundations and financial institutions that aims to improve the lives of low-income residents in some 40 cities across the United States (Giloth, 2019).
Doctor's (2014) case analysis of philanthropic investment to support neighborhood revitalization in East Oakland, CA found it to have produced more equitable outcomes by centering the initiative on the needs of the community: integrating active listening, acknowledging power imbalances, practicing cultural humility and prioritizing the voices of community stakeholders. In conjunction with these kinds of large-scale collaborations, the work of individual philanthropic organizations can be significant in the realm of community capacity building. A recent trend in the world of philanthropy is to support community-led development strategies, and for philanthropic organizations to seek ways to gain community buy-in for the purposes of community planning and development. In this role, philanthropic organizations can be instrumental as leaders in local community development efforts. Philanthropic organizations, using the tools of community assessment, can serve as facilitative leaders themselves, and they can promote community dialogue and collective action (Shier & Handy, 2015). Bonds et al.'s (2015) research, however, has found that nonprofit and philanthropic organizations that seek to foster grass roots leadership in neighborhood improvement are too often led by a "colorblind" lens of community development, which can create more conflict and further marginalize communities of color.

In this paper, we focus on the efforts of a philanthropic organization to build neighborhood capacity by developing leaders within the community. As mentioned, we examine the United Way of Central Ohio's Neighborhood Leadership Academy as a tool for the development of grassroots leadership in different marginalized communities in Columbus, Ohio. We argue that the development of grassroots leaders in these communities is necessary to offset threats of neighborhood gentrification. They act as primary assets to support equitable development in city neighborhoods undergoing rapid change.

This brings us to our second strand of literature: the role of community leadership in community development efforts and community wellbeing. Community, in this context, is a geographic location as well as a space of shared interests and experiences (Walker, 2008). Community leadership, as we define it in this paper, occurs at the neighborhood scale, and community leaders include non-elected and informal leaders. Grass roots leadership can instigate changes within a neighborhood through communication and cooperation with larger stakeholders. Mundell et al. (2015) find that grassroots leadership development and collaboration are more effective than top-down advocacy in preventing gentrification-based displacement. We must recognize leadership development as not just skill development but relational development and enhanced connectedness. Social connectedness to both place and people is associated with increased wellbeing, life satisfaction, flourishing and hope (Munoz et al., 2020). Cloutier, Ehlenz and Afinowich find that, in addition to material resources, infrastructure and amenities, 'purpose, place, and relation' are critical to community wellbeing (Cloutier et al., 2019). Ideally, relations should span a wide spectrum of difference, be "reciprocal and empathetic" and, in practice, emphasize "being and interacting" within the community (Cloutier et al., 2019).
Case Methods

We utilized a single case study approach to understand the impacts and influence of the NLA on resident empowerment, skill development, social capital formation and impacts on policy and community initiatives. More specifically, we sought to understand how the NLA impacted alumni, influenced relationships and affected decision making or community initiatives. More broadly, we sought to utilize the NLA case to better understand the limitations and potential of sustained, equity-focused grass roots leadership development in countering the potential detrimental effects of gentrification and displacement. We utilized Birch's (2012) review of case study classification to align the case design with best practices in case analysis for planning practice. Our case is action oriented (emphasizing implications for practice), focused on the impact on a specific population (marginalized communities within neighborhoods facing redevelopment pressure) and reevaluates a substantive issue in scholarship (countering gentrification pressures). Informed by Yin's (2014) categorical definitions of case studies, our case is both exploratory (seeking to understand how leadership development impacts individuals and communities) and descriptive (seeking to describe how enhanced social capital can impact neighborhood development processes). We triangulated several sources of data to develop the case study, including participant observation, focus groups, surveys and interviews. Survey questions and semi-structured interview questions are provided in the Appendix. In addition to data triangulation, our methods drew on additional practices to enhance internal validity (Yazan, 2015); these additional techniques are described in the limitations section below.

Participant Observation

Participant observation was conducted over the course of four years (2015-2019). Participant observation included early planning activities designing the structure of the three NLA programs, participation in NLA events and observation of NLA alumni in community meetings and working groups. Approximately fifty hours were spent participating and observing for the case analysis over a four-year time span.

Focus Groups

In collaboration with graduate students participating in a city planning studio course, we held three focus groups with NLA alumni, residents in the Linden neighborhood and NLA program directors. Semi-structured questions were utilized to understand the experience of alumni, residents and program directors/staff. The first two focus groups focused on residents in the Linden community (where the NLA was going to be launched) and program alumni from both the city-wide and South Side NLA. The third focus group specifically focused on the experience of NLA program directors from Linden, the Near East Side and the South Side. All focus groups occurred in the fall and winter of 2017/2018. Approximately three dozen participants in total attended the focus group meetings.

NLA Alumni Survey

A Qualtrics survey of NLA alumni was conducted in the spring of 2019. The survey was distributed to 100 alumni of the various NLA programs and received a 17% response rate. Due to the low survey response rate, a series of alumni and program director interviews were conducted to provide additional data.

Alumni and Program Director/Staff Interviews

In the summer and autumn of 2020, Zoom-based semi-structured interviews were held with six of the South Side NLA alumni, one NLA program director and one NLA program staff member.
The NLA program director and staff member were both involved in managing and providing curriculum for the NLA on the South Side.

Analysis

Due to the low survey response rate, survey data were summarized only as descriptive data. Qualitative data collected in surveys, focus groups and interviews were transcribed and analyzed through both inductive and deductive coding. Field notes from all participant observations were also analyzed to inform the analysis. Additionally, program reports and other administrative documents were reviewed to support the case study.

Limitations

Our case analysis has limitations: the limited number of focus groups (3) and the limited samples of participants included in interviews (8) and surveys (17) are limiting factors. To strengthen the validity of our findings, we utilized best practices in case methods (triangulation, prolonged engagement, persistent observation and extensive member checking) (Denzin, 1978; Lincoln & Guba, 1985, 1986; Yazan, 2015). Our approach to triangulation emphasized data source triangulation, methods triangulation and analyst triangulation (Yin, 2013). These strategies have strengthened the internal validity of our case analysis; we caution that the case has limited external validity (or generalizability to other community settings). While not generalizable, the case analysis does provide a basis for further research in other community settings.

Case Background & Context

Unlike other Midwestern regions, the city of Columbus has experienced substantial population growth in recent decades, growing from 540,000 residents in 1970 to more than 900,000 residents in 2020. The metropolitan region's population doubled in size during this time frame (U.S. Census Bureau, 2020). In the past two decades Columbus has experienced growing inequality, as the region's resurgence masked growth in poverty and economic segregation (Price, 2015). As a growing city and region, the city's urban core neighborhoods, which had lost population in the preceding decades and were deeply impacted by the 2008 foreclosure crisis, have experienced a resurgence. As seen in Fig. 1, recent development activity has grown tremendously in urban core neighborhoods. The development of the local NLAs in the South Side, Near East Side and Linden communities coincides with increasing development pressure near and within the neighborhoods. The nexus of growing inequality and urban redevelopment has raised fears of displacement fueled by gentrification in the city's core urban neighborhoods. Columbus's embrace of "growth machine" politics and neoliberal public-private partnerships has contributed to concerns about political disengagement and marginalization in neighborhoods where developers have significant political power in driving redevelopment (Smola & Ferenchik, 2019; Webb, 2013). These dynamics and the city's political history were directly referenced in the founding of the academies; as one former NLA program director stated, "we (planners) have created the apathy (among residents) because of the legacy of engagement which is after the fact." Scholars have recognized that community demands for equity in the context of the growth machine are more likely to be successful in places with more economic growth (Cain, 2014). Community members have more leverage in demanding equitable development in neighborhoods where developers have a higher potential for profit.
From this perspective, the regional market conditions of Columbus and the strong real estate conditions in the neighborhoods targeted by the NLA create more potential for leveraging community voice.

The NLA was first launched as a city-wide leadership development program by the United Way of Central Ohio in 2012. The city-wide program differs in many ways from the 'hyper-local', neighborhood-focused academies that we study in this case. The city-wide program was primarily focused on developing leaders who would fill more structured roles as volunteers, board or committee members or those working in the nonprofit community. The city-wide program mirrored the more traditional neighborhood leadership programs commonly found in cities across the nation. The city-wide program curriculum overlapped with the local neighborhood academies in some aspects, but the local NLA academies focused more on grass roots and informal leadership development, with curriculums structured around the unique characteristics, assets and issues relevant to each neighborhood. The local NLA programs were led by a local community organization in collaboration with the United Way of Central Ohio and often involved partnership with Ohio State University. Local NLAs recruit from local stakeholders who live, work, learn or worship in the neighborhood. A selective recruiting process produces cohorts of approximately ten to twenty participants who take part in nine months of learning, relationship building and direct community work. Cohorts work on several community projects created collectively within the cohort. The local academies were inspired by a goal of creating a large cohort of emerging leaders in each community over a multi-year time span.

The local neighborhood leadership academies first launched in the South Side community in 2015 as a collaborative partnership between the neighborhood's CDC (Community Development for All People, or CD4AP), Nationwide Children's Hospital (the primary anchor institution in the neighborhood), the United Way of Central Ohio and the Kirwan Institute at Ohio State University. The Near East Side neighborhood academy was developed in 2017 and the Linden neighborhood academy was launched in 2018. The South Side academy is entering its sixth year of programming and has more than 75 alumni from the first five years of the program. The Near East Side and Linden academies ran for two years but have suspended operation.

Curriculum: Assets Based and Committed to Equity

The local NLA curriculums placed emphasis on asset-based models of community development and a commitment to supporting equity, inclusion and diversity in the neighborhoods. The focus on assets, equity and inclusion is centered in all aspects of the program design, from recruiting materials to class-based curriculum and engagement within the community. Program directors noted that communicating the "values" of the academy was critical in helping recruit applicants who were committed to those values. As one former academy director described:

"People self-select in or out based on what they see in the application, (this is) the intentionality of the program design, and this gets further enforced through the curriculum. What is taught, how it's taught and who is teaching it, that is not just left to chance. (We) define in the wording of the application, what is the culture you are trying to promote, cast the vision and values in the application early on, this is about change."
For example, the themes of equity and inclusion are clearly evident in the descriptive text used to market the South Side academy: "We seek participants that aspire toward a safe, opportunity-rich South Side that maximizes diversity, builds relationships, secures and advances current residents, all while maintaining the unique fabric of the community" (CD4AP, 2020). Program curriculum focuses explicitly on issues of racial bias and the ways structural racism has impacted the neighborhoods historically and in the present. As described by an NLA curriculum provider:

"(The curriculum) particularly is strong in topics related to race and ethnicity, it is in a way that it won't turn anybody off, won't take the conversation where it's not healthy. The (curriculum) gives people the opportunity to approach new knowledge and be self-reflective about themselves, how they've seen the community, how they have seen community problems, and see it in a different light."

Philosophies of asset-based development are continually structured into the application materials, activities and programming. As described by an NLA program director:

"We don't ask on the application about needs, we ask about assets. What are your goals, hopes, aspiration, inspiration, skills you bring?"

Asset mapping within the context of the neighborhood is also essential to helping participants shift their perspective on how they view the community's resources and opportunities. As described by an NLA program director:

"The asset mapping is very important - because people don't think about or see the assets in their community, key to changing the lens of the individual, very important to use asset based language in the program, what are the talents, gifts, treasures that exist in each neighborhood."

The asset-based perspective also places emphasis on the community's population diversity as its primary asset. As described by an NLA program director:

"We're driving home the notion that (being) asset based will always get you further and that our people will always be our greatest asset and never a deficit."

Diversity & Group Heterogeneity

Substantial deliberation is given to the applicant review process, specifically with the intention of producing a highly heterogeneous class cohort each year that represents an extensive range of difference within the neighborhood. As described by the South Side NLA director:

"The South Side Neighborhood Leadership Academy, probably not unlike the other ones, is incredibly diverse across race, class and education background. It's always multi-racial, it's the full income spectrum, folks that are making six figures, folks that are making no figures. We had multiple people with PhDs. We've got people that don't have GEDs and so it is the full range."

NLA directors emphasize the importance of balancing different skill sets and life experiences in the class cohort, with a specific emphasis on creating a learning environment that de-centers the knowledge of participants who are more privileged and formally educated and trained. As described by the South Side NLA director, this means creating a space of restraint, listening and self-reflection for individuals who are often very assertive in their leadership style.
Growing cultural humility among more privileged participants is a direct goal, as described by the South Side NLA director:

"Particularly for our middle- and upper-class folks, particularly for our white folks, the biggest lesson that I'm trying to (assure) that most of them get by the end, is they don't have all the answers. In fact, they're the ones with much to learn."

Analysis

Our case analysis focused on understanding several key outcomes of the NLA. In regard to impacts on alumni, we explored skills development, participant engagement and shifts in perspective. For social capital formation, we identified the extent of relationship building and the types of social capital developed (bonding or bridging) among alumni. Survey, focus group and interview data indicate that NLA alumni feel the program has been beneficial in strengthening their skills, relationships and political connections and in deepening their perspectives on the neighborhood and issues of redevelopment. Alumni are highly engaged and politically active. Alumni have more mixed perspectives on the long-term potential of the NLA to counter the potential for displacement in the neighborhood.

Skill and Network Development

Alumni in surveys, interviews and focus groups were asked to identify what skill development was most important from their time in the academy. In surveys, participants ranked the most important skills learned in the NLA. Alumni in focus groups and interviews generally identified similar skill development. Focus group participants primarily emphasized their engagement with community assets, particularly places they had never been exposed to previously. This identification of new assets deepened their pride in the community. In addition to community assets, interview participants primarily discussed fundraising and grant writing training as important new skills learned. Both focus group and interview participants focused extensively on how the NLA expanded their personal networks, community partnerships and political connections. The average NLA alumnus was still connected to or collaborating with at least four other NLA alumni. Participants particularly focused on leveraging their new connections and social networks to support community initiatives or solve community challenges. As described by NLA alumni:

"I just recently…launched a new training program called the bridge to self-sufficiency and I got an opportunity again through all these connections that have been built through my relationships that launched with NLA. So now I've launched an actual program that I'm trying to get funding for."

"I have learned how to effectively create a change in my community by reaching out to stakeholders, community leaders, and residents to find the best way to get the needs met as a whole."

"I think the biggest thing for me is that not only the program, but then the people that they brought in to talk to us and participate, like, you know, we met (City Council president) Shannon Hardin and we met (state) house representatives and actually having access to people like that and feeling like you're being heard and like if you want. Like I said, if you want to be involved. If you wanted to do something positive, like you have all of these inroads so I feel incredibly connected to Columbus."

Shifts in Perspectives: Embracing an Asset-Based Lens

As previously discussed, participants routinely identified the program as effective in teaching them about local neighborhood assets that they were unaware of.
In interviews, alumni discussed how their views extended beyond neighborhood assets to a more robust embrace of an asset-based perspective for personal development and problem solving. As articulated by an NLA alumnus:

"The one big thing I took away from the NLA was the whole concept of looking at a challenge and seeing it as an opportunity, and just kind of like imbuing that idea across everything that we did. You know that all you really need is you and your resourcefulness. You can bring that mindset to whatever the next step is and whatever it is you're doing or whatever process you're managing or leading. (The NLA) really emphasized spending time with each individual person on (identifying) their own assets, and being able to name what they are, and recognize them as assets."

Building Perspective and Relationships Across Lines of Difference

Bridging social capital is social capital formed across lines of difference. Participants routinely identified growth in their personal relationships and perspectives across various lines of difference as a key benefit of the program. This relationship building was essential in broadening their horizons through exposure to places and people outside of their personal experiences in the neighborhood. In many cases, more privileged and economically affluent participants discussed how this cross-cultural learning was critical as a learning experience that reshaped their views of the community. As one NLA alumnus articulated:

"Doing the leadership academy gave me an opportunity to sort of break that bubble and get to know people who are living a little bit grittier and have people who've grown up, you know, in the heart of the South Side and seeing, you know, seeing things and been a part of life here for a long time. So a lot of the people who live in my neighborhood now are new. I mean, we still have old timers. But there's a lot of newbies like us, so I do feel more connected because of that knowing people who are sprinkled all over the area and hearing their stories."

Bridging social capital development was also important in shifting existing perspectives on neighborhood change and development. Alumni commonly referenced their previous lack of depth and understanding of the impacts of gentrification, particularly the impacts on longtime residents and economically marginalized community members. The shift in perspective around issues of displacement and neighborhood change was routinely referenced in interviews by alumni:

"Oh, I (gained) an enlightenment of (the process of) displacement."

"White middle class folks will explicitly say, '(I understand) at what cost, it's good to see the neighborhood improve. We've definitely had some issues with the drugs, prostitution and all of the joys that come with that but (the NLA's) given me insight into how pushing that further down the road to the next neighborhood is not the solution."

Political Engagement, Influence on Decision Making and Community Initiatives

The South Side NLA has been the longest running academy and has the largest cohort of alumni active in the neighborhood. Our case analysis finds that NLA alumni on the South Side have remained politically engaged, had a direct impact on policy and decision making, engaged in advocacy and launched new community initiatives. We did not have enough data to accurately gauge the long-term impacts on policy and political outcomes for the Near East Side and Linden academies.
Alumni have also embedded themselves into local civic associations and boards and are highly engaged. In surveys, two-thirds of alumni indicated they were actively involved in at least two ongoing community programs, initiatives or organizations. Surveys indicated that 66% of alumni work 4 or more hours a week on community issues and 40% work more than 10 hours a week. These roles have included more informal positions leading grassroots initiatives and more formal positions with policy and decision-making entities in the neighborhood. As described by NLA program directors and alumni:

"I just became a Far South Area Commissioner (the local governing body for Columbus neighborhoods). So now I'm actually like civically involved as well and on a couple of different boards."

"We do find there's a significant number of our alumni that are in leadership and their civic associations or their area commissions, after having gone through the program. And I wouldn't say that that's the driving motivator (for joining the NLA)."

Alumni have played a direct role in policy and decision making and have joined successful advocacy efforts to dispute inequitable policy decisions. These efforts have ranged from building support for affordable housing development to stopping a disruption in neighborhood transit services and fighting the closure of schools in the neighborhood. Alumni have also successfully transitioned some of the NLA community projects into more robust community programs. For example, the ID program, focused on removing barriers to obtaining state identification so that marginalized residents can access benefits and vote, expanded and received direct support from funders in the community. As described by an alumnus:

"Well, I'm really excited because our project actually turned into a full-fledged program that was funded by the city. Our project was helping folks who did not have proper ID to acquire it because without state ID, you can't get a job. Yeah. You can't access resources and benefits that you may be entitled to or need so we thought that it was a real critical area where we had an opportunity to make a difference. So, with our funds we actually connected with the local Bureau of Motor Vehicles and identified other obstacles to people getting their ID. It's not just about having eight bucks to go in and get it. You gotta have a birth certificate. You got to have proof of residence and all these other things. So, we are investigating what was required and realize that maybe transportation was an obstacle to somebody being able to even get to the Social Security Office nowadays."

Can NLA Alumni Help Prevent Displacement?

NLA alumni were more mixed in their perspectives on grassroots efforts to counter displacement produced by gentrification. While alumni deepened their understanding of the consequences of displacement and of who is most vulnerable, they were less sure about how effectively they can prevent it in the long term. While two out of three survey respondents felt their work would help counter gentrification and displacement harming marginalized community residents, interview participants were mixed in their responses to this question. Alumni who were optimistic in this regard emphasized the ongoing work of the local CDC (Community Development for All People) and hopes for more balanced growth.

"Organizations like the Church For All People and this academy and all of the things that they're involved in, like knowing that their group's active and pushing for the right things and are actually making strides. That makes me feel (positive, but) I don't think we're going to get out of it unscathed. I certainly think that."

"Nobody likes change but (can we) help balance the way that change affects the future of the community in a positive way, it's…something that I'm really honored to be a part of. And again, I think it all started with the NLA."

More pessimistic alumni expressed a need for a deeper skill set to counter gentrification, concern about the vulnerability of lower income renters and skepticism about being able to counter the political influence of developers.

"And (in the end) money will talk and eventually this developer will be able to get what he wants."

"(We need) …better training on fighting capitalism and gentrification."

"What I feel like we're missing (is) those potentially marginalized renters, who should have a voice, but maybe don't know that they do or aren't engaged enough to even be asked…my focus is, how do we get more involvement."

"I don't know, I think that some (displacement) is inevitably going to happen. That's going to be inevitable."

NLA program directors present a more positive, long-term and macro view of the potential of the program in countering displacement and supporting a diverse, opportunity-rich neighborhood. The South Side program director referenced the cumulative effect of five years of NLA alumni active in the community, with many in decision-making roles.

"One of our big goals for the south side neighborhood Leadership Academy is this notion that we've got now got close to 75 people running around the south side that have sort of drunk the Kool Aid around valuing a diverse mixed income opportunity rich community…they are the ones having the front porch conversations, they are, frankly, the ones sitting in civics (civic associations) and Area Commission leadership."

The South Side NLA has utilized NLA members to engage and counter Not In My Backyard (NIMBY) resistance to affordable housing in the community. The program anticipates strategically leveraging the support of NLA alumni as the local CDC and other community partners seek to expand the stock of affordable housing in the community.

"We've become very intrigued about how we leverage the academy, particularly given that the (alumni) are on Commissions or chairing zoning committees. So that when the next multi-unit affordable housing goes up for approval, we flip the NIMBYs into seeing that as value added. There was a multi-unit affordable housing LIHTC development (that) was trying to get through the area commission and just meeting with tons of opposition and NIMBYism. They (the developer) hadn't necessarily done their due diligence in getting community buy-in ahead of time. But we were fairly successful in sort of beating the bushes for our alumni that lived in that area to say, here's the conversation that is happening. Would you be willing to get involved to provide another perspective?"

Discussion & Conclusion

Leadership development is more than skills development; it also involves relational development and shifts in perspective.
We contend that grassroots leadership development programs, if equity centered and sustainable, can build political power, enhance activism, spark grassroots initiatives and foster bridging social capital to counter inequities produced by gentrification in neighborhoods experiencing reinvestment. Increasing awareness and anxiety among "newcomers" who feel they may be causing unjust outcomes in neighborhoods under transition creates an opportunity to fuel relationship building and activism across lines of difference. The development of bridging social capital is a part of equitable neighborhood development that should be emphasized. The NLA experience also presents an application in practice of centering equity and decentering Whiteness and privilege in neighborhood contexts.

The Importance of Programs Centered on Values of Equity & Inclusion

Equity planning has long contended that value neutrality in planning is problematic and that an open and robust embrace of equity as a value is essential (Davidoff, 1965). In the context of cultivating grassroots leadership development, values must lead the structure, curriculum and recruiting of programs. The NLA has been very effective (particularly the South Side program) in centering recruiting materials, recruitment efforts and curriculum on valuing equity, diversity and inclusion in the context of neighborhood development. Alumni routinely report either being attracted to the program due to its equity orientation or having their perspectives positively shifted in regard to equitable neighborhood development. Recent scholarship suggests that this orientation of equity as a central value is more likely to produce equitable outcomes. Harwood (2007) argues that Neighborhood Improvement Programs (NIPs) too often focus on non-political, non-confrontational issues such as neighborhood aesthetics while sidelining issues of social justice and crime prevention. Harwood's case study in Santa Ana, CA notes that "in Santa Ana, the emphasis on cleaning the neighborhood and keeping the 'politics' out of neighborhood improvement ultimately depoliticizes many neighborhood activists by limiting the scope of their work and the resources made available to create meaningful social change." Ultimately, Harwood finds that NIPs can prove hurtful to true progress for communities whose objectives include goals which the city's bureaucracy has deemed 'political,' such as social justice, health care, affordable housing, and crime prevention. As an alternative to the top-down NIP approach, Harwood suggests neighborhood-based governance that "gives neighborhoods decision-making power and the resources to promote change without regard to the citizenship of their residents" (Harwood, 2007). An analysis of the application of community benefits agreements (CBAs) to counter growth machine politics is presented in Colleen Cain's Negotiating with the Growth Machine. Cain's case analysis of the aftermath of the sports arena community benefits agreement in Pittsburgh, PA finds potential for utilizing community voice to assure some form of "value conscious" growth (e.g., growth that supports a community's needs over capital). However, Cain's analysis of the Pittsburgh CBA is pessimistic about the long-term potential of CBAs in this role, noting that CBAs do not fundamentally alter the political influence and dominant power of growth machine regimes (Cain, 2014).
Cain calls for a "larger deconstruction" of the growth machine to alter the political and economic domination of growth machine regimes (Cain, 2014, p. 955). We argue that the NLA presents a potential model for an equity-centered initiative that could contribute to this "larger deconstruction" of the growth machine. As described by the South Side NLA director, communicating values and norms centered on equity and inclusion must be a central part of the NLA and broader community engagement.

Channeling Awareness and Anxiety about Gentrification into Equitable Community Change

Programs like the NLA can provide an outlet to capitalize on more privileged recent residents who want to support the community and not gentrify it. As articulated by an NLA director, the program has seen an increase in recent newcomers who are White and economically privileged. These individuals represent an influx of newcomers who were attracted to the neighborhood for its diversity but recognize that they may be part of a gentrification process which could produce displacement. The anxiety around contributing to gentrification and desiring to be a positive force in the neighborhood was commonly referenced by White, economically advantaged NLA alumni. Programs like the NLA could potentially leverage this demographic to build a broader base of support for assuring the community remains affordable, accessible and diverse. More importantly, the bridging social capital developed in the program could build a political base to support a different form of community change: a model of neighborhood redevelopment that is truly focused on equity and values diversity as a critical community asset. Goetz et al. (2020) document the exclusion, value, durability and invisibility of Whiteness in city planning. The traditional process of urban gentrification is built upon Whiteness and centered around a form of racialized capitalism (Burns & Berbary, 2020). An influx of White wealthy homebuyers extracts profits from neighborhoods that had been intentionally devalued through a legacy of redlining and exclusion. Hightower and Fraser (2020) describe this phenomenon as a form of exploitative 'reverse blockbusting.' Traditional residents, who are primarily people of color, are displaced, and the existence of lower income people in the neighborhood is problematized or implicitly and explicitly associated with a variety of community deficits such as crime or blight (Goetz et al., 2020). The process of gentrification extends beyond the housing market and has implications for increased policing and harassment of people of color (Ramírez, 2020).

Centering Equity and Decentering Whiteness and Privilege

The equity-centered curriculum of the NLA presents an opportunity to increase the visibility of Whiteness (or surface Whiteness) in urban redevelopment. The NLA intentionally targets incoming White affluent residents to engage and expose implicit and explicit biases and to shift perspectives. As described by an NLA program director:

"We definitely see the south side neighborhood Leadership Academy as one way of instilling that value (racial equity) in the community. And so in that sense it is the white middle class folks that is a target audience. That person that isn't quite sure if Black Lives Matter or isn't quite sure how they feel about like a line (of hard living people) in front of the Free Store. That's the person that I want in this class because that's who I'm trying to reach."
As described by an NLA alumnus who is a long-time South Side resident, his role in the program and as an alumnus is to bring authenticity to engagement and decision making because of his experience: "We've been that family -- poor, hustling, on public assistance. There were many times that we got the wagon and the backpack and went to the food pantry. I am of this community" (Price, 2017).

The NLA's structure challenges the norms of Whiteness in changing urban space and seeks to de-center White privilege in community processes. For example, direct engagement with more marginalized residents enhanced knowledge of the harmful impacts of gentrification. Interviews with program directors and alumni repeatedly identified examples of more privileged academy members learning, reflecting and shifting their behaviors and perspectives due to the intentional engagement of diverse perspectives in the program.

"Well, while the middle-class person wrote into the grant things like tents and sleeping bags. It was the homeless person that was able to push back on that and say, so I'm homeless, but that doesn't necessarily mean I'm living under a bridge. I'm couch surfing at a friend's house and so a tent is not only useless to me. It's actually a liability."

"(My) whole focus was on a home ownership and just building a sense of community, my whole goal was like to get more people to be homeowners because I feel like when you're a homeowner you're vested in the community. But through (the NLA) what I've discovered...is that you got to meet people where they are, and a lot of folks are not ready to be homeowners and you still need to value them."

"And for me, learning to see and hear people. It's something I'm still working on because I love to talk. I'm still learning not to talk and to listen has been probably one of the more valuable things. I feel like they did that throughout all of the training. There was always an element about learning about your neighbors and not pushing on to your neighbors what you think they need but asking them what they need."

Structural Factors Contributing to Program Sustainability

Only the South Side NLA is still active and has sustained operations throughout the challenges of the COVID-19 pandemic. Although the United Way is no longer funding the South Side NLA, the program is now funded internally by the local CDC (CD4AP). The organization housing the NLA on the East Side went through a leadership transition and reorganization. The Linden NLA was temporarily based out of a nearby settlement house but never found a strong local organization to host the academy. In contrast, CD4AP was deeply engaged in the design of the South Side NLA and has had stability throughout the NLA's six years of existence. Thus, sustainability for programs like the NLA requires a robust community organization whose values align with the academy and which can provide long-term stability.

The alignment of more structural interventions must complement grassroots leadership development. Amplifying the effectiveness of the South Side NLA are structural conditions which have fostered equitable development activities. The South Side neighborhood is home to the largest hospital-based affordable housing initiative in the United States, the Healthy Neighborhoods Healthy Families initiative (Kelleher et al., 2018).
The collaborative program between Nationwide Children's Hospital, Community Development for All People and other nonprofit, for-profit and public sector partners has invested more than $70 million (USD) in affordable housing in the South Side community over the past thirteen years. Additional investments in early childhood development, workforce development, youth development and food security have complemented the large investment in affordable housing. The durability and impact of the South Side NLA cannot be disentangled from the larger structural investments in supporting diversity and inclusion in the neighborhood.

The Neoliberal Era: The Role of Philanthropy in Cultivating Community Power

The need to foster grassroots leadership in the neoliberal era is paramount. During the 1980s, there was a withdrawal and reduction of federal programs that forced local governments to generate revenue in ways that exacerbated intermunicipal competition for businesses, capital and middle-class residents (Eisinger, 1998; Hackworth, 2007). Our findings suggest that locally driven efforts to cultivate grassroots community leaders are even more vital in the contemporary neoliberal era. Arnstein's Ladder was developed during a time period of substantial government intervention in urban development (Arnstein, 1969). In the half century since its release, cities have experienced devolution and a shrinking federal role in funding urban development. In the era of the entrepreneurial city, neighborhoods can no longer rely upon federal funding, leaving redeveloping neighborhoods particularly vulnerable to market-driven gentrification and displacement. In this void of resources, philanthropic efforts to support robust grassroots leadership are the last remaining defense against widespread displacement and the primary asset to support equitable development practices. We must also recognize that philanthropic efforts to counter gentrification pressures and support equitable development involve more than just financial investment. In the case of the South Side neighborhood leadership academy, the United Way was not only a financial supporter but was also essential as a convener bringing together multiple organizations and stakeholders who were philosophically aligned (Community Development for All People, Nationwide Children's Hospital, the Kirwan Institute) and seeking to address a common challenge in the South Side neighborhood (displacement). The various institutional partners co-developed a program and curriculum that was equity focused and centered neighborhood inclusion and diversity as a primary value. In this case, the foundation provided not only direct investment and convening but also utilized its internal expertise (guiding participants in understanding grant writing and fundraising for local projects) and leveraged its extensive local networks of stakeholders on behalf of leadership academy students and alumni.

3. What type of issues do you work on in your community or neighborhood?
4. Do you feel more of a connection to your neighborhood and city after graduating from the leadership academy?
a. Can you elaborate on how you feel more connected?
5. What subject material had the deepest impact on you from the leadership academy curriculum (select all that apply)?
6. Do you think the academy prepared you to be a leader (for example, to implement leadership skills) in your community? If so, how? If not, why?

Perceptions of Community Change:
7. Do you think that the leadership academy made you more aware about the needs and issues that are important to your neighborhood/community?
8. Has the leadership academy influenced your perspective, views or knowledge about neighborhood change, neighborhood redevelopment or concerns about displacement of your neighbors?
9. Do you feel more aware about what is going on in your neighborhood/community after graduating? (i.e., development projects, community events, issues that the community is facing)
10. In what ways has your experience through the leadership academy changed how you see the future of your neighborhood (are you more optimistic or less optimistic)?

Agency and Empowerment:

11. Do you feel more prepared to be involved in government meetings and express your opinions and concerns at these meetings or to engage and advocate with decision makers?
12. Do you feel that the leadership academy prepared you to have the ability to organize members of your community to speak out on issues that are important to your community?
13. Has your experience in the leadership academy made you feel more empowered to influence the future of your neighborhood?
14. After participating in the academy, do you feel more optimistic that community input can create change in your neighborhood and city?
15. Do you feel leadership academy leaders in your neighborhood can work to assure neighborhoods change in a way that benefits all residents, particularly those who are more vulnerable or marginalized?
16. Do you feel leadership academy leaders in your neighborhood can work to help assure more vulnerable or marginalized residents are not displaced as the neighborhood changes?

Networks and Organizing:

17. Do you stay in contact with other academy graduates since graduating from the leadership academy?
a. If yes, how many graduates do you stay in contact with?
18. How many organizations, initiatives or programs have you become involved in since graduating from the leadership academy?
19. How much time do you contribute to community work since graduating from the leadership academy (hours a week or hours a month)?
20. Have you played a leadership role in any community meetings or engagements since graduating from the academy? (i.e., served on a task force, area commission, civic association, government citizens committee)
21. In what ways do you think the leadership academy can be strengthened?

Interview Questions for Program Directors or Staff

Motivations and Experience:

1. What do you see motivating most of the applicants to the leadership academy?
2. What are some of the more memorable projects that academy graduates worked on in the community? What kinds of issues do the graduates tend to focus on?
3. What subject material in the curriculum has the deepest impact on participants?
4. Do you feel that the graduates feel a deeper connection to the neighborhood after graduating from the leadership academy?
5. What does the diversity of leadership academy alumni look like?
6. How do you work issues of equity, diversity and inclusion into the NLA structure and curriculum?

Perceptions of Community Change:

7. Do you think that the leadership academy makes alumni more aware about the needs and issues that are important to your neighborhood/community?
8. Do you feel the leadership academy influenced the perspective, views or knowledge of participants about neighborhood change, neighborhood redevelopment or concerns about displacement of your neighbors?
9. In what ways has your experience with leadership academy alumni changed how you see the future of your neighborhood (are you more optimistic or less optimistic)?

Agency and Empowerment:

10. Do you feel the leadership academy makes graduates feel more empowered to influence the future of the neighborhood?
11. Do you feel like the alumni will be able to create change in the neighborhood?
12. Do you feel leadership academy leaders in the neighborhood can or are working to assure neighborhoods change in a way that benefits all residents, particularly those who are more vulnerable or marginalized?

Conflict of Interest

The authors declare that they have no conflict of interest.

Open Access

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Study protocol of associated criteria used in investigating septic transfusion reactions (STRs): A scoping review about available evidence

Background and objective

Assessment criteria for septic transfusion reactions (STRs) vary around the world. A scoping review will be carried out to find, explore and map the existing literature on STR-associated criteria.

Methods

This scoping review will include indexed and grey literature available in English or French from January 1, 2000, to December 31, 2021. The literature search will be conducted using four electronic databases (i.e., MEDLINE via PubMed, Web of Science, Science Direct, and Embase via Ovid) and grey literature sources matching the research questions and objectives. Based on the inclusion criteria, studies will be independently screened by two reviewers for title, abstract, and full text. Extracted data will be presented in tabular form, followed by a narrative description of the findings corresponding to the research objectives and questions.

Introduction

Septic transfusion reactions (STRs) resulting from the transfusion of bacterially contaminated blood components remain a significant cause of transfusion-associated morbidity and mortality, despite prevention and control measures such as pathogen reduction and mandatory post-collection culturing of platelets [1-3]. Bacterial contamination of blood or blood components may occur as a result of donor bacteremia, contamination during blood collection, contamination of the collection pack, and contamination during blood processing [4]. During the 1990s, bacterial contamination of blood components was recognized as a major cause of transfusion-transmitted infection, accounting for between 14% and 24% of transfusion-associated fatalities reported to the US Food and Drug Administration (FDA) [5,6]. Although all types of blood components have been reported with contamination leading to infections, platelets (PLTs), which are stored at room temperature and therefore allow growth of many bacterial organisms, seem most frequently involved in STRs [7]. A recent report from the US FDA noted that between 2012 and 2016, 18 fatalities were attributed to bacterial contamination of red blood cells (n = 7), pooled platelets (n = 2), apheresis platelets (n = 8), and plasma (n = 1) [8], while nearly 21 million blood components were transfused each year in the United States [9]. Importantly, STRs are underdiagnosed when blood recipients are on broad-spectrum antibiotics or have other underlying medical conditions with overlapping signs and symptoms [10-12]. Continuous improvement and implementation of bacterial STR reduction includes a wide range of strategies spanning both donor and recipient perspectives, such as donor screening, skin disinfection, diversion of the initial blood collection, pre-storage culturing, and post-transfusion culturing of blood recipients and blood components; in addition, ongoing education and up-to-date information regarding STRs are provided to the relevant stakeholders [13-15]. Despite preventive and control measures to reduce transfusion-transmitted septic reactions (TTSRs) through existing hemovigilance systems in many countries worldwide [16-18], TTSRs are still reported.
For instance, a recent peer-reviewed publication including hemovigilance data from North America, Europe, Africa, and Oceania reported that the frequency of transfusion-transmitted bacterial infection ranged from 1:14,515 to 1:384,903 in transfused platelets and from 1:96,850 to 1:3,448,275 in transfused erythrocytes in 2016 and 2017 [19]. In Canada, according to the Transfusion Transmitted Injuries Surveillance System (TTISS), bacterial infection contributed a total of 0.8% (33/3,957) of adverse reactions following transfusions of blood components and plasma derivatives during the period 2006-2012 [20]. Investigating TTSRs and associated suspected risks is difficult, as blood culture is usually performed on passive reporting of suspected acute transfusion reactions when there are no recognizable signs and symptoms of infection. In addition, there are reports of high rates of false-positive culture results related to STRs [21] and of variability in the criteria used to identify those STRs [22]. Although fewer transfusion-transmitted bacterial infections are reported these days, identification of STRs and provision of safe blood products remain ongoing challenges in transfusion medicine. Given that there is no consensus on a set of criteria associated with STRs in the published and grey literature, it is imperative to answer specific research questions that will help identify the research gap and support the identification of more precise criteria associated with STRs. Against this background, the aim of this scoping review is to find, explore and map the existing literature on STR-associated criteria. Based on the country-specific preventive and control measures implemented to mitigate STRs, this review aims to: (1) report the criteria used for STR detection in the published and grey literature; and (2) identify the prevalence of STRs related to platelets, red blood cells and plasma among the selected literature. It is expected that the results of this review will inform policy makers, planners, researchers, and governments in transfusion medicine and help identify the gaps for further studies.

Review protocol

This scoping review protocol will systematically review the published and grey literature for all STRs and associated criteria. The proposed review will follow the methodological framework recommended by Peters et al. [23]: 1) defining and aligning the objective(s) and question(s); 2) developing and aligning the inclusion criteria with the objective(s) and question(s); 3) describing the planned approach to evidence searching, selection, data extraction, and presentation of the evidence; 4) data extraction; 5) presentation of the results; and 6) study implications and dissemination. This scoping review will include a quality appraisal of the selected literature. Quality assessment will follow the identification of the key characteristics of the studies, such as the appropriateness of the study design to the research question being asked, the suitability of the sample, the methods used to recruit the sample, the methods used to obtain the results, and the general direction of the study findings; more importantly, it will examine possible reasons for similarities or differences across studies. Since this manuscript is a scoping review protocol, it does not require an ethical approval letter from an institutional review board (IRB) or ethics committee; ethical approval for this protocol is therefore waived.
Objectives

The main research question is: "What are the criteria associated with STRs that are reported in the published and grey literature?" Based on the country-specific preventive and control measures implemented to mitigate STRs, the specific research questions derived from the general research question are:
1. What are the different criteria used to suspect STRs, and what are their sensitivity and specificity, in the published and grey literature?
2. What is the reported prevalence of STRs related to different blood components (platelets, red blood cells (RBCs), and plasma)?

Eligibility criteria

To develop and align the inclusion criteria with the objective(s) and question(s), this study will follow the population, concept, and context (PCC) framework recommended in the Joanna Briggs Institute (JBI) guideline. Studies included in this review should meet the following criteria:
• Population: recipients of blood transfusion (platelets, RBCs and/or plasma).
• Concept: all individual and collective criteria used to identify situations to be investigated for TTSRs.
• Context: settings such as health care, hemovigilance, and regional and provincial health care service facilities.
• Scope: quantitative and qualitative studies, in both the published and grey literature, published between January 1, 2000, and December 31, 2021, in English or French, for which full-text articles or reports are available or can be obtained.

Exclusion criteria

The study will apply the following exclusion criteria: duplicate publications; studies without the outcome of interest; non-original research, including editorials, opinion pieces, letters, and protocols; studies published before January 1, 2000; and those in languages other than English or French.

Search method

The literature search will cover all published peer-reviewed articles and all relevant guidelines and grey literature pertaining to the research objectives and questions in English or French. For peer-reviewed articles, a search will be performed using the following electronic databases: MEDLINE via PubMed, Web of Science, Science Direct, and Embase via Ovid; for grey literature, we will use conference proceedings, theses and dissertations, association reports, government reports (e.g., federal, provincial, and regional, from organizations' websites), and WorldCat. We will conduct an initial search using key words and controlled vocabulary for peer-reviewed articles and grey literature. Examples of our search strategy, including search terms and queries in the different databases, are provided in S1 and S2 Tables in S1 Appendix. The search results will be compiled, and duplicates will be removed, using EndNote (Clarivate Analytics, Philadelphia, United States). An additional literature search will be conducted by hand in key journals (i.e., Transfusion, Vox Sanguinis, Blood, Critical Care Medicine, Clinical Infectious Diseases, Transfusion Clinique et Biologique, Transfusion Medicine Reviews, Blood Transfusion, BMC Hematology, Journal of Blood Medicine, Journal of Hematology, Blood Reviews, Journal of Thrombosis and Haemostasis) and in the reference lists of all included studies to identify studies not captured through the electronic search.
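As an illustration of what the electronic search could look like in practice, the sketch below queries PubMed through NCBI's Entrez interface using Biopython. The search terms, field tags, date syntax and contact address are illustrative assumptions for this example; the registered search strings are those documented in S1 and S2 Tables in S1 Appendix.

```python
# Hypothetical PubMed query for STR-related records within the review window.
# Terms and field tags are placeholders, not the registered search strategy.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # NCBI requires a contact address

query = (
    '("septic transfusion reaction*"[Title/Abstract] '
    'OR "transfusion-transmitted bacterial infection*"[Title/Abstract] '
    'OR ("bacterial contamination"[Title/Abstract] AND transfusion[Title/Abstract])) '
    'AND ("2000/01/01"[Date - Publication] : "2021/12/31"[Date - Publication])'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=5000)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records found; first PMIDs: {record['IdList'][:5]}")
```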
Study selection process and review management

All published and grey literature will be sought in English or French from January 1, 2000, to December 31, 2021, for which full texts with up-to-date information are available; non-original research, including editorials, opinion pieces, letters, and protocols, will be excluded from the study. All available and selected literature will be compiled following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) [24]. The authors will be divided into groups of two and will independently conduct the selection of full-text articles and other selected literature, assessing their eligibility for inclusion through title and abstract analysis. In cases of disagreement, consensus will be reached through mutual agreement and by obtaining the opinion of experts in the field. EndNote software will be used to manage the search results. The PRISMA flowchart will be used to describe the selection procedure (Fig 1).

Data extraction

Data charting will be done in a logical order, and a descriptive summary of the results will be presented in line with the research questions and objectives. A tabular form will be used to record key information such as author(s), year of publication, country of origin, aims, study design, sample selection, population and sample size, level of evidence, data quality, reference, and results related to the review questions. More specifically, extracted data will include, among others, the following information: 1) the definition of STR used; 2) signs and symptoms of STRs; 3) the defining body for STR (e.g., international, national, provincial, regional); 4) type of blood components; 5) fatality (yes vs. no); 6) prevalence rates of STR for each blood component; 7) sensitivity and specificity of STR criteria.

Presentation of the results

The results will be presented in tabular form along with a narrative summary aligned with the research questions of the scoping review. In the table, data will be grouped by study type. Descriptive analysis will be conducted to summarize the information obtained in the form of rates, means and standard deviations wherever necessary. The results will be discussed in relation to future work and practice in the Canadian and global context.

Study implications and dissemination

This scoping review has the potential to advise policy makers, health care providers and researchers on how possible non-immunological septic transfusion reactions are assessed, and thus may be a milestone in the development of a guideline for the assessment and management of STRs in the Canadian and global context. The scoping review will be made available to the public through peer-reviewed publications and public-oriented media.
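To illustrate the data-charting plan described under "Data extraction" above, a minimal sketch of one charting record and a CSV export follows; the field names mirror items 1)-7) but are otherwise assumptions, not a finalized charting form.

```python
# A minimal sketch of one data-charting record, mirroring items 1)-7) above.
# Field names are illustrative assumptions, not a finalized charting form.
import csv
from dataclasses import asdict, dataclass, fields
from typing import Optional

@dataclass
class STRRecord:
    authors: str
    year: int
    country: str
    study_design: str
    str_definition: str           # 1) definition of STR used by the study
    signs_symptoms: str           # 2) reported signs and symptoms
    defining_body: str            # 3) e.g., international, national, regional
    blood_component: str          # 4) platelets, RBCs or plasma
    fatality: bool                # 5) yes vs. no
    prevalence: str               # 6) e.g., "1:14,515 transfused platelet units"
    sensitivity: Optional[float]  # 7) of the STR criteria, if reported
    specificity: Optional[float]

def write_chart(records: list[STRRecord], path: str) -> None:
    """Export extracted records to a CSV charting table."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fl.name for fl in fields(STRRecord)])
        writer.writeheader()
        writer.writerows(asdict(r) for r in records)
```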
Microfluidic high-throughput culturing of single cells for selection based on extracellular metabolite production or consumption

A new microfluidic system makes it easy to identify rare cells that secrete or consume specific molecules.

Phenotyping single cells based on the products they secrete or consume is a key bottleneck in many biotechnology applications, such as combinatorial metabolic engineering for the overproduction of secreted metabolites. Here we present a flexible high-throughput approach that uses microfluidics to compartmentalize individual cells for growth and analysis in monodisperse nanoliter aqueous droplets surrounded by an immiscible fluorinated oil phase. We use this system to identify xylose-overconsuming Saccharomyces cerevisiae cells from a population containing one such cell per 10^4 cells and to screen a genomic library to identify multiple copies of the xylose isomerase gene as a genomic change contributing to high xylose consumption, a trait important for lignocellulosic feedstock utilization. We also enriched L-lactate-producing Escherichia coli clones 5,800× from a population containing one L-lactate producer per 10^4 D-lactate producers. Our approach has broad applications for single-cell analyses, such as in strain selection for the overproduction of fuels, chemicals and pharmaceuticals.

Metabolic engineering has made substantial contributions to the rational improvement of strains for industrial applications. Traditionally, enzymatic steps closely associated with the product-forming pathway have been engineered to prune side reactions and eliminate kinetic bottlenecks [1-4]. However, other, so-called distal genes may also affect production in a profound way owing to (often unknown) kinetic and regulatory effects. Inverse metabolic engineering (IME) emerged as an approach to identify such distal genetic factors. IME uses combinatorial methods whereby libraries are constructed harboring random genetic variants of the host or other strains; cells with superior properties are selected; and genetic inserts affecting the superior phenotype are characterized [2]. Though many strategies can be deployed in library construction, broad applicability of IME to strain improvement for overproduction of secreted metabolites is severely limited by the lack of high-throughput methods for selecting strains with substantially improved metabolite secretion or uptake rates [1,5]. Fluorescence-activated cell sorting (FACS) has the ability to sort large numbers of cells. Typically, single cells from a bulk culture are measured based on a fluorescent reporter that is linked to the conditions of either the intracellular space or the surface of the cell. Although FACS is a powerful technique, its use is limited to a certain class of problems where the metabolite of interest is not secreted by the cell. FACS cannot be used to identify clones that overproduce extracellular metabolites such as lactate or xylitol, because the metabolites in a bulk culture have lost their association with the cells. Thus, it is important to develop an approach that can examine problems that FACS does not address. Many of those problems are industrially important, especially in the context of metabolic engineering, as substrates and many interesting products are by their very nature extracellular.
Even for products that are intracellular, there are substantial efforts in the research community to engineer microorganisms to secrete these products to relieve stress on the cell caused by toxicity and to increase the ease of separation of the desired product [6-8]. In this context, we developed an approach that is ideally suited to help these researchers identify hyperproducers that are further engineered to secrete molecules of interest. This limitation of selecting for strains based on extracellular metabolite levels can be addressed by compartmentalizing clonal populations in separate wells in a microtiter plate. In this system, the concentration(s) of the metabolite(s) of interest can be measured and used to select strains [9]. However, this method is laborious, expensive, low-throughput and poorly suited for the improvement of strains for the production of fuel and chemical products by screening large (≥10^4 unique clones) mutant libraries. Automation through the use of colony pickers and liquid handling robots increases the throughput to 10^4 clones/day but cannot handle larger libraries unless multiple machines are used in parallel. Furthermore, both colony pickers and liquid handling systems cost hundreds of thousands of dollars and have a large footprint. These cost and space issues are further exacerbated when using multiple machines. Thus, it would be preferable to use a higher-throughput method that also addresses these issues. Several such high-throughput encapsulation methods have been reported. Gel microdroplet technology was developed to measure extracellular metabolite levels for individual cells, but it produces gel containers that vary greatly in size [10]. Because the metric for screening IME libraries is a metabolite concentration, variation across the library is generally small (e.g., less than a factor of two). Thus, minimal variation in droplet size is critical for the success of a high-throughput screening system. Droplet emulsion technology provides a similar method for compartmentalizing clones by placing them individually in aqueous droplets surrounded by an immiscible oil phase and has improved the activity of enzymes produced by cells or in vitro translation [11,12].
Creating the emulsion with a homogenizer also produces substantial variations in droplet volume. Thus, in a previous application of this technology, only libraries with a great deal of diversity could be screened [13]. In contrast, using microfluidics can produce droplet size distribution variations as small as 3% (ref. 14). Furthermore, droplet sizes of a nanoliter or less can be produced. The use of low-volume droplets increases the measurement sensitivity because the concentration change inside the droplet is larger. Microfluidics also facilitates droplet merging, analysis, fluorescence detection and sorting [15-18]. Integrating these functions into a single device provides several advantages. The assay reagents are added to the cell-containing droplets after cell culturing, which allows the cells to be cultured in the same media as they would be in a shake flask, instead of being cultured in a mixture of media and assay chemicals. The consumption of a media component can be measured as well.

RESULTS

Microfluidic screening and assay system

Our flexible, integrated, high-throughput screening system encapsulates cells in monodisperse, nanoliter-volume, aqueous droplets surrounded by an immiscible fluorinated oil phase. The system then cultures the cells, mixes the contents of the cell-containing droplets with fluorescent enzymatic assay reagents, measures the resulting fluorescence and sorts the droplets based on that measurement (Fig. 1, Supplementary Note 1, Supplementary Figs. 1-4 and Supplementary Movies 1-4). The number of cells placed in droplets follows a Poisson distribution, which is dependent on the incoming cell density (Supplementary Fig. 5). The latter can be manipulated to ensure the encapsulation of one cell in approximately every two to three droplets, thereby minimizing the number of droplets with more than one cell (and thus the number of false-positive events) and ensuring that the number of droplets with single cells is high enough to support sufficient throughput. The sorting system has been designed to have negligible false negatives, and the false-positive rate is 2.5% (Supplementary Note 2 and Supplementary Table 1). The high-throughput screening system is flexible because any fluorescent assay system can be used to measure the concentration of the metabolite of interest. The assay described in this paper is based on an oxidase enzyme/horseradish peroxidase/Amplex UltraRed system and allows for the use of any oxidase enzyme (tunable to the target metabolite) [19]. In this reaction, the amount of fluorescent resorufin produced is proportional to the concentration of the metabolite of interest in solution. The assay reaction starts when a cell-containing droplet coalesces with an assay droplet.

Enriching populations for high xylose-consuming strains

We used our system to identify high xylose-consuming strains of S. cerevisiae. The consumption of xylose was chosen for the demonstration of this system because of its relevance to biofuels research. Lignocellulosic feedstocks, such as corn stover, contain a substantial amount of xylose [21]. However, S. cerevisiae, which readily converts glucose to ethanol, cannot naturally ferment xylose. As a result, engineering an S. cerevisiae strain that readily assimilates xylose is a critical step in the full utilization of lignocellulosic feedstocks and the development of economically viable bioethanol processes based on renewable resource utilization [22-24].
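Before turning to those experiments, the Poisson loading statistics described above can be made concrete with a short numerical sketch; the mean-occupancy values are assumptions chosen to mimic "one cell in approximately every two to three droplets," not measured parameters from the study.

```python
# Poisson loading: the probability that a droplet holds k cells at mean
# occupancy lam is P(k) = exp(-lam) * lam**k / k!.
from math import exp, factorial

def poisson(k: int, lam: float) -> float:
    return exp(-lam) * lam**k / factorial(k)

for lam in (0.3, 0.4, 0.5):  # assumed mean cells per droplet
    empty = poisson(0, lam)
    single = poisson(1, lam)
    multi = 1.0 - empty - single           # droplets with more than one cell
    print(f"lam={lam:.1f}: empty={empty:.2f}, single={single:.2f}, "
          f"multi={multi:.2f}, multi among occupied={multi/(1-empty):.2f}")
```

Lowering the mean occupancy reduces the multi-cell (false-positive-prone) fraction at the cost of more empty droplets, which is the tradeoff the text describes.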
A high-throughput screening method is an important tool for identifying these strains. First, we sought to establish that the fluorescence distributions of droplets encapsulating yeast cells with varying xylose-assimilation capabilities were sufficiently different to allow such droplets to be efficiently sorted. To this end, we chose two strains of S. cerevisiae, H131 and TAL1. H131 is the higher xylose-consuming strain (Supplementary Fig. 6). Both strains contain the Scheffersomyces stipitis genes XYL1, XYL2 and XYL3, which code for the xylose reductase, xylitol dehydrogenase and D-xylulokinase enzymes, respectively. The addition of these enzymes and the overexpression of the native TAL1 gene, which codes for a transaldolase enzyme in the pentose phosphate pathway, allow the cells to utilize xylose. In addition, H131 overexpresses several additional pentose phosphate pathway genes, which ensure that the additional flux from the utilization of xylose is diverted toward the glycolytic pathway [25]. The two individual strains were cultured separately in droplets, and we used a fluorescence assay based on the pyranose oxidase enzyme to detect xylose (Supplementary Note 3 and Supplementary Figs. 7 and 8) [26]. Fluorescence distribution data were measured at various time points using biological replicates. Representative distributions from the H131 and TAL1 strains after 2 d of culturing are shown in Figure 2a. The bimodal distribution is expected, with the higher fluorescence population representing droplets that contained no cells and therefore had a xylose concentration equal to the initial xylose concentration of 5 g/liter. Cell-containing droplets exhibit lower fluorescence. The fluorescence distributions reveal that there were more droplets with low fluorescence for H131 than for TAL1. Thus, the xylose concentration in the H131-containing droplets is lower, as expected for these xylose-consuming cells (Supplementary Note 4 and Supplementary Figs. 8 and 9).

Figure 1: Microfluidic high-throughput screening platform. Initially, cells in PBS are mixed with cell culture media. Droplets are formed by combining this aqueous stream with two streams containing a fluorinated oil and surfactant mixture. The 0.3-nl droplets formed in this device are collected in a syringe that provides a microaerobic environment when capped. The syringe is placed in an incubator for cell culturing. After culturing for a predetermined amount of time, droplets from the incubated syringe are reinjected into a second device (lower panel), where they are combined with another set of droplets containing fluorescent enzymatic assay reagents. After droplet coalescence, the resulting droplets flow through channels for 30 s to allow the assay reaction to proceed. The extracellular concentration of the metabolite of interest is quantified by measuring the droplet fluorescence with a laser/photomultiplier tube system. Based on this measurement, droplets are sorted into one of two channels. This system as currently configured can screen ~1 to 2 clones per second, so that 10^4 clones can be screened in less than 3 h.
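As a quick, purely arithmetical check of the throughput figure quoted in the caption above (a back-of-the-envelope sketch, not part of the original analysis):

```python
# Screening time at the stated sorting rate of ~1-2 clones per second.
for rate in (1.0, 2.0):                      # clones screened per second
    hours = 1e4 / rate / 3600.0
    print(f"{rate:.0f} clone/s -> {hours:.1f} h for 10^4 clones")
# 1 clone/s -> 2.8 h; 2 clone/s -> 1.4 h, consistent with "less than 3 h"
```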
The ratio of H131-to TAL1-containing droplets in these ranges was also calculated and found to be as high as 25 (Fig. 2b). This ratio is an estimate of the enrichment in H131 cells that would be obtained in the outgoing population if the incoming population contained equal cell concentrations of the two strains and droplets were perfectly sorted based on these ranges. Analysis of similar data obtained at different time points shows that the percentage of droplets in the 0-0.6 fluorescence intensity range increased in both strains with time, but the enrichment decreased after 2 d, presumably owing to an already high depletion of xylose in the H131-containing droplets (Fig. 2c). Next we tested whether the microfluidic assaying-sorting device was capable of enriching the H131 cell population by screening an incoming cell population comprising equal proportions of the H131 and TAL1 strains. This mixture was cultured on xylose for 2 d, and the resulting cell-containing droplets were screened for low xylose content using the microfluidic device. The difference in the auxotrophic markers for the two strains was used to determine the contents of the sorted droplets (Supplementary Note 5 and Supplementary Figs. 10-13). The H131 strain does not grow on leucine-deficient media, whereas TAL1 does. As a result, the sorted population was grown on two types of agar plates, one with leucine and another without. After culturing, we counted the number of colonies on each type of plate. The TAL1 colony forming units (CFU) parameter was the number of colonies on the leucinedeficient plates, whereas the H131 CFU was calculated by subtracting the number of TAL1 colonies from the total number of colonies on the leucine plates. The fluorescence data from two screening experiments (carried out with sorting gates of 0-0.6 and 0-0.7) are shown in Figure 3a. For the two fluorescence gates used for droplet sorting (0-0.6 and 0-0.7), H131 enrichments of 18× and 22× were obtained, respectively (Fig. 3b). H131 enrichment due to the cell growth difference between the two strains was only 2.8×, which is substantially lower than the total enrichments observed. Actual cell libraries typically contain a very low number of desired cells in the overall cell population. Hence, two test libraries with incoming desired (H131) to undesired (TAL1) cell population ratios of 1:1,000 and 1:10,000 were screened so that droplets having fluorescence intensities in the range of 0-0.7, which should contain a substantially larger proportion of the desired H131 cells, were sorted into one bin. A target final population ratio of 1:2.5 was defined (meaning that randomly selecting five clones should assure recovery of an H131 cell). This target was achieved after only one round of screening with the 1:1,000 library. Two rounds were necessary to screen the 1:10,000 library. The sequence for one round of screening involved four stages: a preculture of the incoming cells, a shake flask culture grown into early exponential phase, droplet encapsulation of cells and selection of droplets with low xylose concentrations. The one round of screening enriched the 1:1,000 library by 420×, and two rounds enriched the 1:10,000 library by 42,600× (Fig. 3c). These results are in line with the enrichment experiments summarized in Figure 3b. Screening a yeast genomic DNA library for high xylose consumption As we have shown the ability to enrich for high xylose-consuming populations, our next test was to screen a genetic library. 
The library we screened was generated to determine the nature of the genetic modification(s) underlying the superior xylose uptake performance that some strains acquired as a result of evolution, under proper selection pressure, in serial culturing or continuous cultivation experiments. The H131-A31 strain used in this experiment is similar to H131 with one important exception: instead of the XYL1 and XYL2 genes, it contains the Piromyces sp. E2 XYLA gene encoding a xylose isomerase enzyme to convert D-xylose to D-xylulose [24]. This strain initially exhibited negligible growth and xylose consumption rates. After several months of evolution through growth and serial subculturing, we obtained strain H131E-A31; this strain exhibited high growth (µ ≈ 0.2 h⁻¹) and high xylose consumption rates (14 g/liter in 2 d) when cultured microaerobically in a shake flask with an initial 20 g/liter xylose concentration (Supplementary Figs. 14 and 15). Other groups have also used similar strategies to generate S. cerevisiae strains that can consume xylose at high rates by evolving cells containing a Piromyces sp. E2 xylose isomerase gene insertion [23]. However, the origin of improved strain performance with respect to xylose assimilation has been largely unexplored. To identify the genetic elements responsible for the improved performance of the H131E-A31 strain, we used our system to screen H131-A31 cells transformed with a library of 10^5 genomic clones from H131E-A31. The library was constructed such that each insertion had a high probability of containing at least one open reading frame. Assuming that a single mutation, rather than a combination of multiple mutations, is sufficient to yield cells with an improved xylose assimilation rate, it should be possible to isolate a mutant harboring a single genomic fragment by screening the cell population transformed with the library using our system [29]. After one round of screening, in which the cells were cultured for 70 h in 5 g/liter xylose, five clones were grown microaerobically in shake flasks. Mutant W2 was isolated as the one with the highest xylose consumption rate. We calculated the cumulative xylose consumption over the course of 4 d for strain H131-A31 transformed with an empty plasmid (control); mutant W2; and strain H131-A31 with the plasmid isolated from mutant W2 (retransformed W2), when cultured in media with 5 g/liter xylose (Fig. 4a). Biological replicates were used in these measurements. The retransformed W2 strain consumed 2.6 g/liter xylose after 4 d of culturing, compared with 4.7 g/liter for mutant W2, suggesting that a background genomic mutation, in addition to the presence of the plasmid, also contributed to the phenotype of the W2 mutant. However, both mutants consumed more xylose than did the control, confirming a mutation on the plasmid that provided a benefit over the control. Sequencing and restriction enzyme digest analysis determined that the plasmid isolated from the W2 strain contained three full copies of the XYLA gene construct flanked by truncated XYLA sequences (Fig. 4b) [29]. Quantitative PCR was performed to determine the number of copies of XYLA in the H131-A31 and H131E-A31 strains. There were 1.3 ± 0.3 copies in H131-A31 and 47.9 ± 9.0 copies in H131E-A31, normalized to the copies of the phosphoglycerate kinase (PGK) gene, which confirmed the increased number of copies of XYLA after evolution. The xylose isomerase gene catalyzes the conversion of D-xylose to D-xylulose, which initiates xylose assimilation by the cell.
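As a numerical aside, the copy-number estimate above can be reproduced with a ΔCt-style calculation normalized to the PGK reference; the Ct values below are hypothetical, and a perfect amplification efficiency (doubling per cycle) is assumed, which a real assay would calibrate.

```python
# Relative copies of a target gene per copy of a reference gene, assuming
# perfect doubling each cycle: copies = 2 ** (Ct_reference - Ct_target).
def relative_copies(ct_target: float, ct_reference: float) -> float:
    return 2.0 ** (ct_reference - ct_target)

# Hypothetical Ct values: XYLA crossing ~5.6 cycles before PGK
print(relative_copies(ct_target=16.4, ct_reference=22.0))  # ~48.5 copies
```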
The xylose isomerase encoded by XYLA catalyzes the conversion of d-xylose to d-xylulose, the first step of xylose assimilation by the cell. Additional copies of XYLA would therefore allow for increased xylose uptake and cell growth. Because the original H131-A31 strain did not grow well on xylose, the selection pressure of having xylose as the sole carbon source in the medium led to the enrichment of cells harboring increased copies of XYLA, owing to the growth advantage that such cells enjoy in this medium. Because these multiple copies of XYLA were linked, they were created through the naturally occurring process of tandem gene duplication, in which recombination occurs between two sites through unequal crossing over. The pRS426 plasmid from the H131-A31 strain contained not only the XYLA gene flanked by a promoter and terminator but also S. stipitis XYL3 with identical flanking regions; these homologous flanking regions would allow tandem gene duplication to occur during DNA replication (ref. 30).

Screening for l-lactate-producing cells
Our results demonstrate that the microfluidic system is capable of isolating cells with high xylose assimilation rates from a population comprising cells with varying xylose uptake rates. Production phenotypes, however, are different in that the carbon flux is directed toward the desired product and need not be coupled to, and may even be in competition with, growth. To demonstrate the ability of the system to also identify overproducing strains, we used it to enrich for a high l-lactate-producing E. coli strain. We used strains TG108 and TG113, which produce optically pure l- and d-lactate, respectively (ref. 31). We transformed TG108 with the cloning vector pBR322 and TG113 with pACYC184 to allow quantification of enrichment in the sorted populations by selective plating. A comparison of shake flask fermentation characteristics shows that TG108 pBR322 and TG113 pACYC184 have similar growth profiles and lactate production, as measured by high-performance liquid chromatography (HPLC) (Fig. 5a,b). The assay reaction uses lactate oxidase from Pediococcus sp., chosen for its high selectivity for the l-isomer of lactate; quantification of lactate in the shake flask fermentations using the enzymatic assay in a 384-well plate format confirmed this enantiomeric selectivity (Fig. 5a,b). We constructed two test libraries in which the desired and undesired strains were TG108 pBR322 and TG113 pACYC184, respectively. As with the xylose enrichment experiment, the desired-to-undesired cell ratios were 1:1,000 and 1:10,000. These libraries were then screened for highly fluorescent droplets to identify those containing l-lactate-producing cells. After two rounds of screening, the 1:1,000 population was enriched 775×, whereas three rounds of screening resulted in a 5,800× enrichment of the 1:10,000 population (Fig. 5c); both meet the target final population ratio of 1:2.5. These results demonstrate not only the efficacy of our system for enriching a production phenotype but also its ability to distinguish between enantiomers.
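For intuition about how the starting ratio and per-round enrichment interact, the sketch below estimates the number of sorting rounds needed to reach the 1:2.5 target under the simplifying assumption of a constant per-round enrichment factor; this is our illustrative model, not the authors' calculation, and the data above show that the per-round enrichment in fact varies between rounds.

```python
import math

def rounds_needed(start_ratio, target_ratio, per_round_enrichment):
    """Rounds of sorting to reach target_ratio, assuming the desired:undesired
    ratio grows by a constant factor each round:
        ratio_after_n = start_ratio * per_round_enrichment ** n
    """
    return math.ceil(math.log(target_ratio / start_ratio)
                     / math.log(per_round_enrichment))

# From a 1:10,000 library to the 1:2.5 target with an assumed ~80x per round:
print(rounds_needed(start_ratio=1 / 10_000,
                    target_ratio=1 / 2.5,
                    per_round_enrichment=80))  # -> 2 rounds under this assumption
```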
DISCUSSION
Although others have demonstrated the use of microfluidic emulsion droplet technology to screen populations of cells, these previous applications have been limited to screening cells that have already completed their culturing process and that produce an analyte of interest physically connected to the cell, either intracellularly or membrane-bound (refs. 18,32). In those examples, the initial formation of droplets involved adding only assay reagents to determine the activity of the enzyme. This is very different from the detection of extracellular metabolite production or consumption, where the culture and assay steps cannot be performed simultaneously because the time scale for production or consumption is much longer than the assay time, which is on the order of seconds or minutes. Thus, a separate step is necessary to culture each individual cell in its own droplet. Here, we have described a system to measure extracellular metabolite secretion or consumption that uses a microfluidic droplet maker to encapsulate cells and growth medium, a syringe for multi-day microaerobic culturing of the collected droplets, and a second microfluidic device containing coalescence, delay-line, detection and sorting modules. In this second device, the reagents to detect the analyte of interest are added to the droplets collected in the syringe; the assay reaction occurs while the droplets travel through the delay lines; and the droplets are sorted and collected based on the resulting fluorescence. Combining multiple modules into a single microfluidic device was necessary because the assay incubation time was only 30 s. It was critical that the droplet order after coalescence be maintained through the detection step so that the assay reaction time was constant for all droplets. Because of the additional complexity of this device, the timing of droplet reinjection and assay droplet formation must be set properly to keep low the incidence of both incorrect coalescence (e.g., combining one assay droplet with two cell-containing droplets) and incorrect sorting (e.g., undesired droplets entering the desired-droplet output).

The device we used had a throughput of 10^4 cells/h; thus, in a 10-h screening experiment, 10^5 cells can be screened. Additional optimization, as well as a more controlled process for making the devices, could improve the throughput to 10^5–10^6 cells/h. Also, because microfluidic devices are easily parallelized owing to their small size, a parallelized system could improve the throughput by another two to three orders of magnitude (up to 10^9 cells/h).

This system has several advantages over other screening technologies. We describe in Supplementary Note 6 how our method was more efficient than the traditional serial subculturing technique for identifying high xylose-consuming strains. Compared with automated colony pickers and liquid-handling systems, our screening system has higher throughput even without additional optimization and uses fewer reagents, which may contribute to lower costs (Supplementary Note 7). Another advantage is the reduction of culturing space: distributing 10^4 clones, one clone per well, into 96-well plates would require 105 plates, which occupy a large amount of space, and 10^8 clones would require 10^6 plates. We have demonstrated that culturing cells in a droplet matches the performance of culturing them in a shake flask. In contrast, static plates do not match a shaken system because they rely on diffusion to transfer nutrients to the cells, and shaken plates further increase the footprint because deep-well plates, which are taller, are typically used.

In addition to the microfluidic screening system, we also described a flexible assay for measuring various metabolites through the use of oxidase enzymes coupled with horseradish peroxidase. The same basic system can be used to measure different metabolites simply by exchanging the oxidase enzyme; a metabolite such as xylitol, for example, can be detected using xylitol oxidase (ref. 33). The number of metabolites compatible with this assay system could be increased further by coupling NADH oxidase with dehydrogenase enzymes.
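To illustrate how such an oxidase/HRP fluorescence readout maps back to a metabolite concentration, the sketch below fits a linear standard curve and inverts it, as is typical for plate-based enzymatic assays; the standards and readings are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical xylose standards (g/liter) and their assay fluorescence readings.
standards_conc = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
standards_fluor = np.array([0.02, 0.21, 0.43, 0.60, 0.82, 1.01])

# Fit fluorescence = slope * concentration + intercept, then invert the fit.
slope, intercept = np.polyfit(standards_conc, standards_fluor, 1)

def fluorescence_to_concentration(f):
    return (f - intercept) / slope

# Under these assumed standards, a droplet reading of 0.12 corresponds to
# ~0.5 g/liter residual xylose, i.e., a near-complete xylose consumer.
print(fluorescence_to_concentration(0.12))
```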
Moreover, the microfluidic screening system is not limited to these specific assays; in general, fluorescence assays developed for plate-based measurements are readily transferable to this microfluidic droplet screening system.

Along with improvements in screening speed, other capabilities can be added to this system. Although we described only the use of yeast and bacteria, the system could be extended to other organisms, such as mammalian or insect cells. Although we grew cells for only up to 4 d without any issues, longer-term culturing in droplets is possible as long as the cells have sufficient nutrients and evaporation inside the droplets is minimal. Additional nutrients, such as those used in a fed-batch process, could be added to the droplets through a microfluidic droplet coalescence module, and incubation in a humidified incubator reduces evaporation. Because our system is a series of microfluidic modules, a logical next step would be to integrate them with commercially available microfluidic systems for additional analysis. Although the system we have described is flexible and can be extended to other areas, the detection of the metabolite of interest must use either an enzymatic fluorescence assay or another assay that is compatible with a microfluidic droplet system. Another limitation is that the metabolite must be miscible in water and not in the oil phase to ensure effective encapsulation.

Our system allows large libraries to be screened for a variety of applications in metabolic engineering. Furthermore, it could be extended to the fields of antibody lead identification and optimization, antibody production, and other areas that benefit from the characterization of single clones. We have demonstrated that the system can isolate bacterial and yeast strains capable of overproducing or overconsuming secreted metabolites and can help identify the dominant mutations responsible for technologically important phenotypes.

[Figure 5: Shake flask data from lactate-producing strains and enrichment of the l-lactate-producing strain. (a,b) Measurements from biological replicate experiments of cell growth, HPLC lactate measurements and lactate oxidase assay results from l-lactate shake flask fermentation of strain TG108 pBR322 (a) and d-lactate shake flask fermentation of strain TG113 pACYC184 (b). (c) Enrichment of the TG108 pBR322 strain from initial 1:1,000 and 1:10,000 TG108 pBR322/TG113 pACYC184 mixtures in biological triplicate experiments. One round of screening yields a 775× enrichment from the initial 1:1,000 mixture; three rounds of screening enrich TG108 pBR322 by 5,800× from the initial 1:10,000 mixture. The lines in a and b represent the average of the data points from the biological replicate experiments (n = 2). Error bars in c, s.d. from biological replicate experiments (n = 3).]

METHODS
Methods and any associated references are available in the online version of the paper.

(Sigma) and afterwards air was blown to remove the solution. The device was then baked at 65 °C to remove any remaining solution.
When the device was placed on a hot plate at 80 °C, Indalloy 19 solder (52% In, 32.5% Bi, 16.5% Sn; 0.020-inch-diameter wire from Indium Corporation) was placed in the electrode inlets and allowed to melt. Once the solder reached the outlets, 22-gauge wire was placed in the outlets to form an external electrical connection. All other devices used uncoated 2 inch × 3 inch Swiss Glass slides. Before the microfluidic devices were used, the PDMS channel surface was made hydrophobic by injecting Aquapel (PPG) into the channels and then blowing air to remove the Aquapel.

Statistics. For all experiments where statistical significance is stated, the sample size is given in the corresponding figure caption. Statistical models were used to test significance for data sets with multiple time points; these data sets had biological replicates, generated from separate cultures, for each time point. In the statistical models, the effect variables were time and the groups being compared (e.g., control and experimental group), and the response was the measured variable (e.g., xylose consumption). Statistical significance was denoted when P < 0.05 for the effect of the group variable, and we confirmed that the residuals had a normal distribution. Sample sizes were chosen so that, at a 95% confidence level and a statistical power of 0.80, differences greater than three times the s.d. of the replicates could be identified with two or more replicates. The calculated P values and the equation used to determine the sample sizes from the statistical power are listed in Supplementary Note 8.
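For reference, the textbook two-sided normal-approximation sample-size formula is consistent with these choices; the authors' exact equation is given in their Supplementary Note 8, so the sketch below is only the standard form, not necessarily theirs.

```python
from math import ceil
from scipy.stats import norm

def samples_per_group(effect_in_sd, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided test detecting a difference of
    effect_in_sd standard deviations: n = ((z_{1-a/2} + z_power) / effect)^2.
    """
    z_alpha = norm.ppf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_power = norm.ppf(power)          # 0.84 for power = 0.80
    return ceil(((z_alpha + z_power) / effect_in_sd) ** 2)

# A three-s.d. difference at 95% confidence and 0.80 power needs n = 1 by the
# formula, consistent with detecting such large effects with >= 2 replicates.
print(samples_per_group(effect_in_sd=3.0))
```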