Surgical Treatment of Malignant Pheochromocytomas in Spine
To the Editor: Pheochromocytomas are rare neuroendocrine tumors that originate from the chromaffin cells in the adrenal glands or associated sympathetic ganglia.[1] To the best of our knowledge, malignant pheochromocytoma is extraordinarily rare, with a frequency of 0.2-0.9 cases per 1,000,000 individuals per year, and there is an obvious shortage of clinical reports on pheochromocytoma metastatic to the spine. It can therefore be difficult to diagnose, and mismanagement may result in devastating consequences.[2]
180/110 mmHg during the past nine years. After admission, we consulted the department of endocrinology, which suggested that phenoxybenzamine be taken before the operation. Four weeks later, we therefore performed percutaneous vertebroplasty on the spinal metastases to alleviate the symptoms caused by the spinal cord compression; this also stabilized the spine and prevented multiple vertebral bodies from collapsing. The biopsy specimens had a fish-flesh-like appearance. The operation was a success, with intraoperative bleeding of about 50 ml. Blood pressure fluctuated during the operation, peaking at 277/101 mmHg when the bone cement was injected into the vertebral bodies. The X-ray taken after surgery confirmed that the cement was positioned satisfactorily, without any signs of displacement [Figure 1e and 1f]. The postoperative pathology report later confirmed the diagnosis of malignant pheochromocytoma metastatic to the spine [Figure 1g]. One week after the operation, the visual analog scale (VAS) score for his back pain had improved to 0-1 points from a preoperative 4-6 points. The patient was unwilling to undergo any further postoperative adjuvant treatment, including radiation therapy, chemotherapy, or metaiodobenzylguanidine therapy. At the 3-month postoperative follow-up, there was no tumor progression and no new symptoms.
The location of the spinal lesion determines the neurological deficits, and there is a great deal of variability. Pheochromocytomas are classified as malignant on the basis of metastases, so the presence of metastases can establish the diagnosis of a malignant pheochromocytoma. The "gold standard" diagnosis of pheochromocytoma relies on pathological findings.[2,3] Surgery is believed to be the best treatment for metastatic spinal pheochromocytomas causing back pain or paralysis. This approach accomplishes two objectives: it alleviates the neurological deficits by decompressing the stenosis while providing histopathological specimens for diagnosis at the same time. Nevertheless, several considerations must be kept in mind when deliberating on surgical intervention for malignant pheochromocytoma with spinal metastasis, including preoperative hemodynamic instability and cardiac arrhythmia control, possible incomplete tumor resection, intraoperative blood loss and hemodynamic instability, and postoperative adjuvant therapy. Percutaneous vertebroplasty with cement augmentation may be an alternative treatment for patients with metastatic pheochromocytoma in the spine who cannot undergo, or who decline, open surgery.[3,4] There is as yet no consensus on combined treatment for metastatic pheochromocytomas in the spine, owing to the insufficient number of case studies.[5] In spite of the low occurrence rate, it is highly recommended that metastatic pheochromocytoma of the spine be carefully considered in the differential diagnosis when patients present with back pain or paralysis accompanied by labile blood pressure. With a multidisciplinary team approach, proper planning, and adequate perioperative medical management, metastatic pheochromocytoma in the spine can be managed much more effectively.
Declaration of patient consent
The authors certify that they have obtained all appropriate patient consent forms. In the form, the patient has given his consent for his images and other clinical information to be reported in the journal. The patient understands that his name and initials will not be published and that due efforts will be made to conceal his identity, but anonymity cannot be guaranteed.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
"year": 2018,
"sha1": "f5cc9bc405d6b24764fcc67d5e5b7c026b52c6bb",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/0366-6999.244126",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f5cc9bc405d6b24764fcc67d5e5b7c026b52c6bb",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Exploring the Awareness, Motivations, and Coping Strategies of Problematic Internet Users
Introduction
In the present time, where there is often information overload, it is necessary and useful to have a clear image of what is seen as internet addiction (IA) or problematic internet use (PIU). Since the 1990s, the internet has become the most used and relied-upon information source in our everyday lives. Excessive internet use has resulted in neglect of social activities and work responsibilities, as well as health consequences. Psychologists and researchers have labelled these problematic behaviours internet addiction (Young, 1998), PIU (Davis, 2001), and compulsive internet use (Meerkerk et al., 2009). Although multiple terms and measures have evolved to assess internet addiction, it is generally described in terms of symptoms related to addiction, such as obsessive and compulsive use, withdrawal signs, and impairment of life activities. Young (1998), for example, developed the Internet Addiction Test (IAT) using the pathological gambling criteria from the DSM-IV to measure internet addiction.
Recent studies have found that internet addiction and PIU are associated with conditions such as emotional instability, loneliness, social withdrawal, depression, low self-esteem, anxiety, and other addictive behaviours. The consequences of internet addiction can be severe; excessive internet use has the potential to cause career failure, marriage breakdown, and financial crisis, with negative psychosocial effects. However, it is uncertain whether problematic internet use leads to social and psychological impairments or whether social and psychological issues cause the PIU. Understanding this causality is important for addressing the root cause of the behaviour. Although internet addiction is widely recognised by psychologists and researchers as a problematic behaviour pattern, it is still not documented in the DSM-5. Many psychologists view PIU as a set of behaviours that may reflect an underlying psychiatric disorder such as depression or social withdrawal. More research is being conducted in this area aimed at determining whether internet addiction should be defined as a separate disorder with a distinctive treatment programme.
This article provides a review of internet addiction and wellbeing, followed by results from interviews aimed at exploring the awareness, motivations, and coping strategies of problematic internet users.
Internet Addiction Systematic Review Method
PubMed and PsycINFO databases were searched for peer-reviewed articles published in English that addressed the association between internet addiction and wellbeing in adults. Selected studies were published in a time range that spanned the years 2000-2017. Studies were selected based on their relevance to the association between wellbeing, mental health, and internet addiction. Studies on adolescents were excluded, as were studies of online gaming addiction, which has been classified as a separate disorder.
After duplicates were excluded, there were 146 results for internet addiction and wellbeing. The first author read all abstracts and the full text of relevant articles. In the conducted review, a total of 35 empirical studies were identified.
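The merging-and-deduplication step described here is often scripted. The following is a minimal illustrative sketch, not the authors' actual procedure; the export file names and the doi/title field names are hypothetical placeholders.

```python
# Illustrative only: merge database exports and drop duplicates by DOI when
# available, otherwise by a normalised title. Field names are hypothetical.
import csv
import re

def normalise_title(title):
    """Lower-case a title and collapse punctuation/whitespace for matching."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(records):
    seen = set()
    unique = []
    for rec in records:
        key = rec.get("doi") or normalise_title(rec.get("title", ""))
        if key and key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

if __name__ == "__main__":
    records = []
    for path in ("pubmed_export.csv", "psycinfo_export.csv"):  # hypothetical files
        with open(path, newline="", encoding="utf-8") as f:
            records.extend(csv.DictReader(f))
    print(len(deduplicate(records)), "unique records")  # the review reports 146
```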
The majority of studies were cross-sectional (n = 29); four were longitudinal, one was qualitative, and one was experimental. Studies were classified into four main themes and sub-themes. The main themes were the association between internet addiction and positive and negative outcomes, internet addiction and predictors of wellbeing, internet addiction and individual effects, and internet addiction and appraisals. In the reviewed studies, the sample sizes varied from 101 to 23,533 adults. The search terms used in the different search engines are listed in the appendix.
Results
The literature searches conducted for this review revealed 33 articles that assessed the association between problematic internet use and wellbeing. Two further studies were identified from the reference lists of other studies.
Studies were divided into four themes and subthemes based on the Demands-Resources-Individual Effects (DRIVE) model structure (Mark and Smith, 2008; Williams et al., 2017): internet addiction association with positive and negative outcomes, internet addiction and risk factors, internet addiction and individual effects, and internet addiction and appraisal. Some studies were categorised in more than one theme.
The Association between Internet Addiction and Positive and Negative Outcomes
This theme covers all studies that investigated the association between internet addiction and negative and positive wellbeing outcomes, starting with studies that measured wellbeing as a whole, followed by studies that investigated internet addiction and depression.
Internet Addiction and Wellbeing
In a cross-sectional online survey of 330 young adults in Malaysia conducted by Kutty and Sreeamareddy (2014), the Compulsive Internet Use Scale (CIUS) and the 12-item General Health Questionnaire (GHQ-12, with high scores representing more mental health problems) were used. The results suggest that compulsive internet use is correlated with the GHQ score and negatively associated with age and marital status.
In a study aimed at investigating the association between PIU of communicative services and the wellbeing of 495 Italian undergraduate students, Casale et al. (2015) used an Italian adaptation of the Psychological Wellbeing Scale and the Generalized Problematic Internet Use Scale 2 (GPIUS2) to assess the association between wellbeing and PIU. The findings present significant evidence that PIU of communicative services is associated with low psychological wellbeing. Cardak (2013) examined the relationship between internet addiction and wellbeing in a sample of 479 Turkish university students, who completed online versions of the Online Cognition Scale (OCS) and the Psychological Wellbeing Scale (SPWB). The results indicated that internet addiction had a negative effect on wellbeing, with high levels of pathological internet use being associated with lower levels of wellbeing. Similar results were reported by Alavi et al. (2011) with a sample of 259 Iranian university students. Participants answered the Young Diagnostic Questionnaire and the Symptom Checklist-90-Revision (SCL-90-R). They found a strong association between internet addiction and psychiatric symptoms such as sensitivity, depression, anxiety, aggression, and phobias after controlling for age, marital status, gender, type of university, and education level. Akin (2012) examined the relationships between internet addiction, subjective vitality, and subjective happiness in a sample of 328 Turkish university students. Participants completed the Subjective Vitality Scale, the Online Cognition Scale, and the Subjective Happiness Scale. The results revealed that internet addiction negatively predicted subjective vitality and subjective happiness. Satici and Uysal (2015) explored the possible relation between problematic Facebook use and wellbeing in a sample of 311 university students, who completed a battery of questionnaires: the Bergen Facebook Addiction Scale, the Satisfaction with Life Scale, the Subjective Happiness Scale, and the Subjective Vitality Scale. Life satisfaction, subjective happiness, flourishing, and subjective vitality were negatively correlated with problematic Facebook use. Chen (2012) used a longitudinal study to distinguish the effects of online entertainment use, social use, problematic internet use (PIU), and gender on psychological wellbeing. The sample consisted of 757 Taiwanese college freshmen. Participants answered demographic questions and four questionnaires: the Self-Esteem Scale, the Loneliness Scale, Beck's Depression Inventory II, and the short PIU form. The questionnaires were administered twice, during the second and third years of college. Results revealed that increased PIU was associated with lower psychological wellbeing. Increased use of social networks was associated with positive wellbeing, yet not with fewer psychological wellbeing problems. A four-year longitudinal study was carried out by Muusses et al. (2014) using a sample of 398 married couples. The aim of the study was to explore the direction of the association between compulsive internet use and positive and negative wellbeing. The results suggested that PIU lowers wellbeing through increases in depression, stress, and loneliness over time, which result in a decrease in happiness. However, there was no effect of PIU on changes in self-esteem over time. Senol-Durak and Durak (2011) explored life satisfaction and self-esteem as components of subjective wellbeing in relation to problematic internet use cognitions. The theoretical frameworks of Davis (2001), Caplan (2002), and Lent et al. (2009) were used as a model for this study, which was tested on a sample of 480 Turkish university students using structural equation modeling (SEM). The results revealed that self-esteem was a mediator and affected life satisfaction by indirectly influencing problematic internet use.
Koc and Gulyagci (2013) explored the predictors of Facebook addiction using behavioural, psychological, health, and demographic information from 447 Turkish college students. They used the Facebook Addiction Scale (FAS), which was constructed and validated through factor analysis. Participants also completed the General Health Questionnaire (GHQ-28). The results revealed that insomnia, anxiety, and severe depression were associated with Facebook addiction. Gender and other demographics were not significant predictors.
Most wellbeing and internet addiction studies have used university student samples and produced results showing that problematic internet use is associated with negative psychological wellbeing (Alavi et al., 2011; Cardak, 2013; Casale et al., 2015). Akin (2012) confirmed that internet addiction negatively predicted subjective vitality and happiness. The longitudinal studies of Chen (2012) and Muusses et al. (2014) revealed that increased PIU lowers wellbeing through increases in stress, depression, and loneliness. Low life satisfaction influenced PIU (Senol-Durak and Durak, 2011); however, Kutty and Sreeamareddy's (2014) findings conflicted with these previous results, suggesting instead that compulsive internet use influences general health. Koc and Gulyagci (2013) carried out a similar cross-sectional study using the same GHQ and Facebook Addiction Scale measures and confirmed the association of insomnia, anxiety, and severe depression with Facebook addiction. The main problems with the literature were the failure to use appropriate models of wellbeing and the failure to control for other predictors. The next section considers a specific outcome, namely, depression.

The Association between Internet Addiction and Depression

Gedam et al. (2016) compared medical and dental students in a study that aimed to estimate the prevalence of internet addiction and examine the association between internet use and psychopathology. A sample of 597 students from medical and dental colleges was recruited, and participants completed the Internet Addiction Test and the Mental Health Inventory questionnaires. The results revealed significant differences between the two samples in terms of internet use, depression, and emotional ties. Min-Pei et al. (2011) investigated the prevalence of, and psychosocial factors associated with, internet addiction in a large sample of 3,616 Taiwanese university students. The prevalence of internet addiction was estimated at 15.3%. The results suggested that internet addicts have more depressive symptoms, lower self-efficacy, and lower satisfaction with their academic performance. Also, males were more likely to be internet addicts, and an insecure attachment style was associated with internet addiction. A Japanese cross-sectional survey of 165 healthy undergraduate participants conducted by Hirao (2015) assessed the mental state of internet addicts and non-internet addicts. The results revealed a prevalence of internet addiction of 15% in this small sample, and the frequencies of depressive symptoms and flow experience were significantly higher in the internet addicts. Yao et al. (2013) conducted a longitudinal study that aimed to explore whether university freshmen's mental health status and adaptation level were predictors of internet addiction. A sample of 977 Chinese college students answered the Chinese College Student Mental Health Scale (CCSMHS) and the Chinese College Student Adjustment Scale (CCSAS). In a 1-3 year follow-up, 62 internet-addicted participants were identified using the IAT-8. The results revealed that depression, anxiety, and self-contempt among freshmen were causal factors for internet addiction.
A sample of 13,588 users was recruited for a study in Korea by Whang et al. (2003), which investigated the psychological profile of internet overuse using a "Survey on Internet Use" consisting of four sections: demographic information, the pattern of internet use, the degree of internet dependence, and psychological wellbeing, adapted from the Diagnostic Scale of Excessive Internet Use. The results revealed that the prevalence of internet addicts in this Korean sample was 3.5%, while 18.4% were classified as possible internet addicts or problematic internet users. Internet addiction showed a strong association with dysfunctional social behaviour, with internet addicts trying to escape from reality through excessive internet use when they are depressed or stressed. Internet addicts reported high levels of depressed mood and loneliness. Further investigation is needed to explore the direction of causality.
An experimental study was conducted by Iacovelli and Valenti (2009) on a sample of 74 undergraduate female students, with the aim of examining internet addicts' social skills through telephone communication, comparing their likeability and rapport with those of average internet users. The study consisted of two phases: the first phase was data collection to identify participants with high internet use, and the second phase was the experiment, in which a telephone conversation was held between two participants, each of whom rated the conversation in terms of rapport and likeability. The results showed that excessive internet users were rated as less likeable and less able to build rapport compared with average internet users. However, when participants were asked to rate themselves, there was no difference. The results also revealed that excessive internet users rated themselves as more depressed and socially reserved compared with average users.
A cross-sectional study of 3,267 undergraduate students from China, Singapore, and the United States aimed to compare internet addiction, online gaming addiction, and social network addiction, and the related depressive symptoms, across the three countries. Tang et al. (2016) used the IAT, the Bergen Social Networking Addiction Scale, the Problematic Online Gaming Questionnaire, and the 9-item depression scale adapted from the DSM-5. The results indicated that females were more addicted to online social networks, whilst males were more addicted to online gaming. In comparison with students from Singapore and the United States, Chinese students had the highest level of depressive symptoms, although Chinese and Singaporean students had higher internet addiction rates compared with Americans.
In summary, findings on the association between internet addiction and depression from Gedam et al. (2016), Hirao (2015), and Iacovelli and Valenti (2009) supported the idea that internet addicts have more depressive symptoms than non-addicts. Internet addicts reported higher depressive mood scores and used the internet to escape from their depression (Whang et al., 2003). A cross-cultural study also found that Chinese internet addicts scored highest on depressive symptoms (Tang et al., 2016).
The Association between Internet Addiction and Lack of Sleep
The one study that investigated the association between internet addiction and sleep found that high internet use is associated with low sleep quality. A sample of 1,788 young American adults participated in a diary study that investigated the association between sleep disturbance and social media use. The participants' social media volume and frequency were self-reported daily by recording the time spent online, using items adapted from the Pew internet research questionnaire; sleep was assessed using a sleep disturbance measure. The results reported that the median time spent on social networks was 61 minutes a day. Fifty-seven percent of the sample experienced moderate to high levels of sleep disturbance, which was associated with high internet use (Levenson et al., 2016).
The Association between Internet Addiction and Academic Performance
Although most internet addiction studies recruited university student samples, only two studies explored the negative influence of internet addiction on academic performance. Skues et al. (2016) examined the effects of loneliness, boredom, and distress tolerance on PIU in a sample of 169 undergraduate university students. The association between academic performance and PIU was also measured. The results indicated that boredom was significantly associated with PIU and played a moderating role in a model that included distress tolerance and loneliness. Low academic performance was correlated with problematic internet use. Min-Pei et al. (2011) conducted a study on a sample of 3,616 Taiwanese university students, and the results indicated that internet addicts have lower satisfaction with their academic performance.
The Association between Internet Addiction and Predictors of Wellbeing
Most current internet activities are linked to communicating; addiction to socialising and other virtual activities might be a sign of an absence of, or difficulties with, real-life social experiences. The need for social support and the feeling of loneliness among internet addicts are discussed below.
The Association between Internet Addiction, Social Support, and Loneliness
Loneliness may result from a lack of social skills, low self-esteem, or poor adjustment. Studies have explored the association of poor social support and loneliness with internet addiction. For example, a study in Iran (Naseri et al., 2015) recruited a random sample of 101 female university students, who completed the Multidimensional Scale of Perceived Social Support, Rosenberg's Self-Esteem Scale, and the Young Internet Addiction Test. Results revealed that individuals with low self-esteem were more likely to be internet addicts. Significant negative correlations were found between internet addiction and perceived social support, as well as family support. The main limitation of the study was its small sample, and further investigation with a larger sample is needed to demonstrate the relationship between internet addiction and social support. Odaci and Cikrikci (2014) investigated the association between problematic internet use, attachment styles, and the subjective wellbeing of 380 Turkish university students. The participants answered demographic questions, as well as the Problematic Internet Use Scale, the Relationship Scale, and the Subjective Wellbeing Scale. The results suggested a significant correlation between problematic internet use, subjective wellbeing, and dismissive and preoccupied attachment styles. Individuals who have negative self-perceptions and positive perceptions of others, and who need to be in relationships with others, can be described as having a preoccupied attachment style (Permuy et al., 2010). At the other extreme, individuals with a highly positive self-perception and a negative perception of others have a dismissive attachment style. Such individuals avoid establishing close relationships with others and tend to underestimate their self-worth by rejecting the value of forming proximity to others out of a fear of disapproval (Bartholomew and Shaver, 1998). The results confirmed that problematic internet use differed significantly according to gender and attachment style, and they suggest possible reasons for problematic internet use. For participants with a preoccupied attachment, the internet is used to fulfil their attachment needs, either by stalking or by staying connected to those they care about for long periods of time. For individuals with a dismissive attachment, problematic internet use may keep them busy or be a source of fulfilment that avoids needing others. Quinones and Kakabadse (2015) investigated the association between self-concept clarity, social support, and compulsive internet use in two adult samples from the US (n = 268) and the UAE (n = 270). The participants completed the Self-Concept Clarity Scale, the Compulsive Internet Use Scale (2010), three items from Caplan et al.'s (2009) preference for online interaction scale, the four-item neuroticism subscale from the Mini-IPIP, and Rena et al.'s social support Likert scale. The results revealed that CIU is strongly related to low social support and low self-concept clarity in the US sample; due to cultural differences between the two samples in defining self-clarity, the association between self-concept clarity and CIU was weak in the UAE sample. Moreover, using core CIU dimensions lowered the estimated prevalence of CIU by 20-40% in the US and UAE. Kerkhof et al. (2011) examined the consequences of compulsive internet use in a sample of 190 newlywed couples. Participants self-reported how many hours they spent online and were assessed using the Compulsive Internet Use Scale, the Dyadic Adjustment Scale for general relationship satisfaction to assess relationship adjustment, the Intimacy and Passion subscales of the Perceived Relationship Quality Components Questionnaire, the Relationship Maintenance Strategy Measure, a relationship-specific disclosure scale, and a partner-specific concealment scale. The study took place at three time points: demographics were collected first, and then data were collected in spring 2007 and spring 2008. At both data collection points, the members of each couple answered separately. The results revealed that compulsive internet use predicts marital wellbeing and not vice versa; the occurrence of internet use was positively associated with marital wellbeing. These findings conflict with previous studies on the association of compulsive internet use with low likeability and rapport (Iacovelli and Valenti, 2009), qualities that are important for intimacy in close relationships. Yan et al. (2014) investigated personality traits, perceived family functioning, recent stressful life events, and internet addiction in a sample of 892 Chinese college students. Internet addiction was assessed with the Chen Internet Addiction Scale, along with the Adolescent Self-Rating Life Events Checklist, the Eysenck Personality Questionnaire, and the Family Adaptability and Cohesion Scale. Participants were classified into categories based on their scores: 9.98% were classified as having severe internet addiction and 11.12% as having mild internet addiction. Those with severe internet addiction had lower family functioning, higher neuroticism and psychoticism, more stressful life events, and were introverts. Those with mild internet addiction had more health and adaptation problems and higher neuroticism scores. Neuroticism, adaptation problems, and health problems predicted internet addiction. Caplan (2003) introduced and tested a model that explains problematic online social use as a gateway for lonely and depressed individuals, leading to the negative outcomes associated with excessive online use. Three hundred and eighty-six (386) undergraduate students participated in the study by answering the Generalized Problematic Internet Use Scale (GPIUS), the Beck Depression Inventory-II, and the UCLA Loneliness Scale. Results suggested that psychosocial health predicted different levels of preference for online social interaction, with expected negative outcomes related to problematic internet use.
An experimental study by Iacovelli and Valenti (2009), using a sample of 74 undergraduate female students, aimed to examine internet addicts' social skills. The results showed that excessive internet users were rated as less likeable and less able to build rapport compared with average internet users. However, when participants were asked to rate themselves, no differences were reported.
Another study (Lee-Won et al., 2015) was conducted with 243 U.S. college students. The study aimed to investigate the roles of social anxiety and the need for social assurance in problematic Facebook use. The variables were measured with the Social Anxiety Scale, the Need for Social Assurance Scale, and the Problematic Facebook Use Scale, developed and validated by Koc and Gulyagci (2013). The results revealed that social anxiety and the need for social assurance were significantly associated with problematic Facebook use. Most notably, the need for social assurance was a significant moderator of the association between social anxiety and problematic Facebook use. The Kim et al. (2009) study was built on the assumption that the major motive for internet use was loneliness and depression or, more generally, relieving psychosocial problems. Loneliness was measured by 10 items from Russell's UCLA Loneliness Scale, deficient social skills by two items from the Self-Monitoring Scale, and online social interaction preference by three items from the Caplan Scale. The results showed that lonely individuals, or individuals with low social skills, were more likely to develop severe compulsive internet use behaviours and to experience negative life outcomes. A study by Tsai et al. (2009) explored the risk factors for internet addiction using a sample of 1,360 Taiwanese freshmen. The results revealed that internet addicts have poor social support, while the Yan et al. (2014) study found that severe internet addicts had lower family functioning.
A qualitative study on online social networking yielded five main themes that reflected an in-depth understanding of the compulsive use of social networks from the users' point of view. Eight university students participated in the interviews conducted by Powell et al. (2013). Individual responses varied from using social networks when feeling isolated in order to stay connected, to problematic internet users justifying their problematic use of social networks through its equivalence to real-life interactions.
The studies above utilised a range of methodologies: cross-sectional, qualitative, and experimental. They all studied the association between internet addiction and social support and loneliness, using samples from different cultures, and confirmed the association of PIU and problematic Facebook use with loneliness, social anxiety, lower family functioning, low social skills, and low self-esteem. An exception was Kerkhof et al.'s (2011) self-report longitudinal study, which concluded that compulsive internet use was related to positive marital wellbeing.
Associations between Internet Addiction and Individual Differences
Individual differences such as personality, academic performance, and demographics influence the association between internet addiction and wellbeing. The studies of social support reviewed above indicated an association between internet addiction and lack of social support. Studies of individual differences and internet addiction were divided into four sub-themes addressing sleep, gender differences, academic performance, and personality; the first and third of these were discussed above, and gender and personality are considered next.
The Association between Internet Addiction and Gender
A large sample of 4,852 participants was examined using the IAT and six items from the German Socio-Economic Panel. Lachmann et al.'s (2016) results suggested a negative association between PIU and life satisfaction, with men reporting higher levels of PIU and females being more sensitive to its negative impacts. This confirms the results of Min-Pei et al. (2011), indicating that males are more likely to be internet addicts. A study by Tang et al. (2016) indicated that females were more addicted to online social networks, whilst males were more addicted to online gaming.
All of the prior studies confirmed that men are more likely to be internet addicts, and only Tang et al. (2016) distinguished which internet activity each gender was more addicted to.
The Association between Internet Addiction and Personality
A French study by Laconi et al. (2018) explored the associations between PIU and personality variables in a sample of 786 participants. The findings revealed that 20% of the sample reported PIU. When participants with PIU were compared with those without, those with PIU scored significantly higher on all personality disorders, depressive symptoms, and non-adaptive coping.
A study by Tsai et al. (2009) explored the risk factors for internet addiction in a sample of 1,360 Taiwanese freshmen. The participants answered a battery of questionnaires including the Chinese Internet Addiction Scale-Revision (CIAS-R), the Measurement of Support Functions (MSF), the neuroticism subscale of the Maudsley Personality Inventory (MPI), and the 12-item Chinese Health Questionnaire (CHQ-12). The results revealed that 17.9% of the participants were internet addicts. Being male, having a habit of skipping breakfast, poor mental health, poor social support, and obsessive personality characteristics were found to be risk factors for internet addiction in Taiwan.
Marino et al.'s (2016) study aimed to examine a model assessing the contribution of personality traits, motives, and metacognitions to problematic Facebook use among a sample of 815 Italian university students. Metacognitions are defined by Wells (2000) as "information individuals hold about their own cognition and internal states, and about coping strategies that impact both". Participants answered the Generalized Problematic Internet Use Scale, the Big Five Questionnaire, the Internet Motives Questionnaire, and the MCQ-30. The results revealed that coping, conformity, and enhancement (three of the four motives), as well as cognitive confidence and negative beliefs about thoughts (from the metacognitions), predicted problematic Facebook use. Additionally, only extraversion scores were weakly associated with PIU. Yan et al. (2014) found that severe internet addiction was associated with lower family functioning, high neuroticism and psychoticism, more stressful life events, and introversion, while mild internet addiction was associated with more health and adaptation problems and higher neuroticism. Neuroticism, adaptation problems, and health problems were found to predict internet addiction.
A cross-sectional study of 23,542 Norwegians (Andreassen et al., 2012) explored the association of social media addiction with narcissism and self-esteem using the Bergen Social Media Addiction Scale (BSMAS), the Narcissistic Personality Inventory-16, and the Rosenberg Self-Esteem Scale. The results showed an association between social media addiction, narcissism, and low self-esteem. However, the design of the study cannot identify the direction of causality (e.g., is it narcissism that is causing social media addiction, or the other way around?). Tsai et al.'s (2009) results also indicated that internet addicts are more likely to have obsessive personality characteristics.
Personality traits have a significant influence on feelings and reactions in different situations. The studies above explored the association between problematic internet use and personality. The findings confirmed strong associations with cluster B and C personality disorders, neuroticism, an immature defensive style, psychoticism, introversion, and low self-esteem. The studies featured large samples from different cultures and used different personality scales, and they confirmed the positive association between personality disorders and internet addiction.
The Association between Internet Addiction and Life Satisfaction and Perceived Stress
This part of the paper discusses the studies that investigated the association between internet addiction and life appraisal, with stress as a subtheme of life appraisal, where a person evaluates their life satisfaction and/or perceived stress level.
A study of 713 adults in the United States aimed to examine the relationship between pornography use and wellbeing. The results revealed that perceived addiction to internet pornography predicted psychological distress. The model was replicated using a sample of 1,215 undergraduates, with a one-year longitudinal follow-up of 106 participants. The results revealed a significant association between perceived addiction to internet pornography and psychological distress over time (Grubbs et al., 2015). Yan et al. (2014) also found that those with severe internet addiction had more stressful life events.
A comparison study was carried out by Ko et al. (2014) using a sample of 79 women diagnosed with premenstrual dysphoric disorder (PMDD) and a control sample of 76 healthy women. Participants answered the Perceived Stress Scale, the Chen Internet Addiction Scale, and the Barratt Impulsiveness Scale twice, once in the premenstrual phase and once in the follicular phase, to examine the association between PMDD, internet addiction, and associated factors such as impulsivity and stress. The results revealed that women with PMDD were more likely to have internet addiction, with greater severity of internet addiction, perceived stress, and impulsivity. Both perceived stress and impulsivity mediated the relationship between PMDD and internet addiction.
Studies on stress have confirmed the association between stress and PIU; however, these studies are limited to student and female samples, and there is a need to distinguish between the types and causes of perceived stress.
Conclusions About the Literature
The literature review aimed to evaluate the studies that examined the relationship between internet addiction and wellbeing by categorising the studies according to the DRIVE model structure and then identifying gaps in the literature. Although some studies relate internet addiction and information overload to parts of the wellbeing process, there is a notable absence of multivariate studies that control for other predictors of wellbeing and examine the different stages of the model, which could provide a holistic view of the influence of internet addiction on wellbeing. There is an evolving literature on the psychological impact of internet addiction. However, most of the methodology is cross-sectional, which limits understanding of the causality and motives behind the problematic use. Research on this topic will address this gap in the literature and improve our understanding of the associations between different information-age problems. The cultural influence on internet addiction has been investigated in only one study, which compared US and UAE internet users (Quinones and Kakabadse, 2015); its findings revealed that the cultural influence on social support was associated with lower internet addiction in the UAE sample. Further studies are needed to investigate other aspects of cultural influence on internet addiction. Most of the samples studied were university students, and there has been little recognition of the specificity of the many stresses that students face based on their university circumstances, the nature of student life, and their age group. Findings may therefore be limited to university students, and it is debatable whether they can be extrapolated to all adults, specifically to working adults who may face different life stressors related to different life stages. Moreover, although previous studies have focused on university students, not all aspects of students' stress, perceived academic performance, and related stress have been investigated.
In sum, the literature review identified several gaps in internet addiction and wellbeing research that need additional investigation. Notably, there is an absence of a comprehensive approach to the study of internet addiction and wellbeing, and research appears to be limited to certain perceptions, methodologies, and samples. The next section describes results from interviews aimed at exploring the awareness, motivations, and coping strategies of problematic internet users.
Introduction to the Interviews
This section of the study aimed to gain an in-depth understanding of problematic internet users' levels of awareness, their motivations, and the coping strategies they use. This was achieved through qualitative interviews with a sample of Kuwaiti adults who scored as problematic internet users. Exploring the causes of excessive internet use will help in identifying possible solutions. The study objectives were to 1) investigate problematic internet users' awareness of the negative consequences associated with excessive internet use and information overload; 2) explore the motivations for and causes of excessive internet use; and 3) investigate the coping strategies used to deal with information overload and the negative effects of PIU.
Methodology
A semi-structured interview was used to explore participants' perceptions and awareness of problematic internet use. The interview was designed around five core questions, which were used to guide the conversation about a number of issues including: awareness of the negative impact of problematic internet use and information overload, strategies used to lower internet use, internet use time, and feelings after long hours spent online.
Participants
The participants were Kuwaiti adults who were problematic internet users, scoring 50+ on the IAT. Six participants, three males and three females aged between 23 and 30 years, volunteered to be interviewed. The six interviewees had diverse occupations: marketing student, postgraduate student, marketing and public relations specialist, police officer, accountant, and employee in the ministry of information.
Materials
Participants were asked five open questions about their awareness, coping strategies, and approaches (see the appendix). Demographic and information overload (IO) measures were collected at the beginning of the study. Participants were also given a debriefing and information sheet.
Procedures
Participants were recruited using purposive sampling. Participants who completed the IAT provided their contact details, and six with high IA scores were contacted and agreed to volunteer for the interviews. Participants answered demographic questions and completed the IO test online through Qualtrics. Interviews were recorded, lasted between 5 and 10 minutes, and were later transcribed by the researcher. Appropriate ethical procedures were followed.
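As a rough illustration of the recruitment rule described above (not the authors' actual pipeline), the sketch below totals IAT item scores per respondent and keeps those at or above the 50-point cutoff. It assumes the standard 20-item IAT with items scored 1-5; the CSV file and column names are hypothetical.

```python
# Hypothetical screening step: total the 20 IAT items per respondent and
# keep those scoring 50 or more, the cutoff used in this study.
# Column names (iat_1 .. iat_20, contact) are invented for illustration.
import csv

IAT_ITEMS = [f"iat_{i}" for i in range(1, 21)]
CUTOFF = 50

def iat_total(row):
    """Sum the 20 item scores for one respondent."""
    return sum(int(row[item]) for item in IAT_ITEMS)

with open("survey_responses.csv", newline="", encoding="utf-8") as f:
    eligible = [row for row in csv.DictReader(f) if iat_total(row) >= CUTOFF]

for row in eligible:
    print(row["contact"], iat_total(row))  # candidates to invite for interview
```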
Ethical Approval
The study was approved by the School of Psychology Ethics Committee at Cardiff University. Participants were given a consent sheet and an information sheet, along with the demographics and information overload test, before commencing the interviews. The purpose and rationale of the study were explained and repeated at the beginning of each interview. Consent to record the interview was obtained to ensure accurate documentation.
Thematic Analysis
Thematic analysis is a commonly used approach to qualitative data analysis, carried out by identifying and highlighting themes within the data. The approach is flexible and accessible and can be applied across a range of research types. Using this method, the researcher was able to organise the data into meaningful themes (Braun and Clarke, 2006).
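Purely as an illustration of the organising step (thematic analysis itself is an interpretive, manual process), coded excerpts can be collated under themes with a simple mapping. The codes and excerpts below are drawn loosely from quotes reported later in this paper, not from the study's actual coding frame.

```python
# Toy illustration of collating coded interview excerpts under themes.
from collections import defaultdict

coded_excerpts = [
    ("awareness", "I'm not sure what it is. But there is something negative I know!"),
    ("coping", "I just turn it off once I feel stressed, have a break."),
    ("causes", "I have lots of spare time, and it gets me busy."),
    ("awareness", "I feel pain while sitting long hours using the internet."),
]

themes = defaultdict(list)
for code, excerpt in coded_excerpts:
    themes[code].append(excerpt)

for theme, excerpts in sorted(themes.items()):
    print(f"{theme}: {len(excerpts)} excerpt(s)")
    for quote in excerpts:
        print("  -", quote)
```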
Results
Six main themes were extracted from the semi-structured interviews. These key themes represented the level of awareness of IO and IA and their negative impact on wellbeing; other themes illustrated the causes of excessive internet use, coping responses, and feelings after long hours online. The themes and supporting narratives are summarised in the following sections.
Theme 1: Awareness of Information Overload and Internet Addiction Impact
Awareness and mindfulness are seen by many clinicians as key to wellbeing. Being aware of the consequences of daily behaviour and routine is important for a healthy life and wellbeing, and for developing better life choices. Awareness is also a key to change: in order to change or avoid a certain behaviour, a person should be aware of the behaviour and its negative consequences so as to make a logical, long-term decision to avoid it. A lack of awareness will leave the person performing the same negative behaviour without knowing it has an impact on their wellbeing.

Among the persons interviewed, lack of awareness was noted as a key cause of excessive internet use. The following three sub-themes highlight the awareness of problematic internet users.
Not Aware
Interviewees answered the question "Are you aware of the negative impact of IO and IA on wellbeing?" with a short, straightforward answer: No. This reflects a lack of awareness of, or even thought about, the possibility of a negative impact of IO and IA.
"I Guess There Is"

Feeling the negative impact without compelling evidence of the cause leaves the user unable to link the negative impact with its cause, the excessive use of the internet: "I guess we are all aware of their negative impact, I think it has an impact. I haven't searched. I'm not sure what it is. But there is something negative I know!" {int.2}
Confusing the Negative Psychological Impact with Body Fatigue

Using the internet on a laptop or smartphone involves bending the neck, unrelaxed hand and wrist positions on the keyboard, and sitting for long hours, which results in neck pain, back pain, headache, and eye dryness from poor posture and watching the screen for long periods. Some participants confused this with the impact on wellbeing: "Yes, I'm aware of the negative side effects of IO and IA, I feel pain while sitting long hours using the internet, in my neck and in my back." {Clarifying: I mean the negative impact on psychological wellbeing?} "No, I'm not aware, but I find myself very attached to the internet and mentally preoccupied; when I get up in the morning, I want to check the internet to see what I've missed. I don't know the exact negative effect, but I think there are negative effects." {int.1}
Theme 2: Information Overload Coping Strategies
Participants were asked whether they knew of any strategies to decrease online attachment, and responses were divided into two main sub-themes: strategies for coping with information overload and strategies for decreasing internet attachment.
Information Overload Coping Strategy
Interviewees agreed that the only strategies they were aware of using when feeling information overload symptoms were to take a break, leave the work if possible, or turn their attention to something else. Apparently, a lack of awareness of proper strategies for dealing with information overload, together with the absence of information literacy skills, makes the recurrence of IO highly probable, causing stress and lower productivity: "I just turn it off once I feel stressed, have a break, and then I come back to it. It does help at the moment but not in the long run." {int.2}
Theme 3: Controlling Internet Attachment
As ways of decreasing internet attachment, participants' responses were divided into four main subthemes:
Never Tried Cutting Down Internet Use
Some participants did not feel any negative wellbeing effect, nor were they interested in decreasing their internet attachment, so they had not thought about decreasing their internet use. Their responses were a simple, straightforward No.
Tried and Failed
It can be difficult for problematic internet users to reduce internet use without a studied strategy: "I've quit all social media accounts, and I've tried not to use the internet much; for almost one day it was very hard. I couldn't complete it." {int.6} "I get myself busy, work, but my mind is preoccupied with internet activities." {int.4}
Cutting Online Game Addiction Because of Boredom
Losing interest and feeling bored can be a good reason to cut an addiction: "I used to be an online game addict, and I used to feel I'm missing something if I'm not playing it. But I've just felt bored and cut it off." {int.3}

Managing Priorities

Managing priorities, and being aware of valuable, unrepeatable opportunities, can lead to cutting internet use when needed: "What I do now, when I gather with my friends, I don't use my smartphone, and I don't let them use theirs too, so we have proper face-to-face communication. This is the only time I leave the internet intentionally." {int.5}

Theme 4: Excessive Internet Use Causes

Spare Time

"I have lots of spare time, and it gets me busy." {int.3} "This is my last course in university, I have only one module, I have lots of spare time to spend on the internet." {int.2}
Losing Self-Control
"The internet is endless, it doesn't finish, and I go from something to another thing. Mostly social media and searching the net for things I don't know." {int.6}
Business and Work Related
"I use the internet for my business, I'm a freelance photographer, so I search for things I don't know online, and try to stay connected as much as possible to keep track of my business. And I use social media to stay connected." {int.5}
Fear of Missing Out
This term describes being mentally preoccupied with online activities and relying on them completely. Missing out on the sensation of real-life moments and relying mainly on virtual life is a problem in which a person is no longer connected to real-life activities: "When I'm online, I feel connected, but as soon as I turn it off I feel so disconnected from what is around. So I get confused and sometimes I don't get what is happening around me. I feel I'm missing out." {int.2}
Theme 5: Using Internet Time
Participants were asked about the times at which they most frequently used the internet. Although all interviewees agreed that they used the internet all day through their smartphones, they found it hard to calculate the hours spent online, as there were no set times of day dedicated to internet use.
Spare time and loss of control were also cited as causes of excessive internet use. Participants agreed that the internet kept them busy when they had lots of spare time. It was endless and hard to control, and the way it is designed (hyperlinks, easy access) made it hard for problematic internet users to control their usage.
Sleep Quality and Using Internet Before Sleep
Previous studies have suggested that smartphone addiction and internet addiction are associated with sleep disturbance and insomnia (Koc and Gulyagci, 2013) and, most importantly, that sleep affects wellbeing and daily productivity (Levenson et al., 2016). Of the current sample, 66% agreed that they sometimes had good sleep quality. All participants agreed that they used the internet in the time before sleep, for durations ranging from 30 minutes to 3 hours.
Different Internet Use, Different Feelings
Different types of internet use result in different effects on wellbeing. Participants stated that after spending long hours online, their psychological outcomes differed depending on the internet source or activity they had been using.
Relating to Wellbeing
By analysing the interview transcripts, factors of negative wellbeing (anxiety, negative feelings, low life satisfaction) and negative appraisal (mental and physical fatigue) after spending long hours using the internet were identified. Negative personality factors (low self-efficacy) were associated with a failure to reduce internet hours, and negative coping (avoidance and wishful thinking) with internet use was identified in the problematic internet users' answers. This confirms findings from previous studies, which have shown an association between IO and IA/PIU and negative wellbeing, negative appraisal factors, and the predictors of these negative outcomes.
Based on previous research, the following solutions are proposed for dealing with problematic internet use and information overload. Internet users should be taught information literacy and proper strategies for decreasing their internet use. Failure to decrease internet attachment can be caused by low awareness and lack of knowledge about the negative effects of excessive internet use. Educational institutions, psychologists, and support centres should raise awareness through campaigns and information literacy courses to ensure efficient information retrieval and productivity, to lower stress, and to allow freedom from internet attachment. Problematic internet users should be encouraged to increase offline social involvement and activities, and to set time limits for online usage (Powell et al., 2013).
Conclusion
The present article has presented a review of the literature on the association between problematic internet use and wellbeing. The DRIVE model and the wellbeing process approach were used to organise the literature. Gaps in the research were identified, and interviews were carried out to provide information on topics that are important for the prevention and management of PIU. The interviews revealed six themes reflecting: awareness of PIU, coping strategies for dealing with high information overload, control of internet attachment, causes of excessive internet use, preferred times for internet use, and the psychological effects of spending long hours online. This information can now be used to prevent PIU and to develop better management of interaction with the internet and other media.
Appendix

Search Strategy

PubMed: (((("internet addiction"[Title/Abstract]) OR "compulsive internet us*"[Title/Abstract]) OR "problematic internet us*"[Title/Abstract]) AND "mental health"[Title/Abstract]). Filters: English language, publication date from 2000/01/01 to 2017/12/31. Results: 61.

PubMed: (((("internet addiction"[Title/Abstract]) OR "compulsive internet us*"[Title/Abstract]) OR "problematic internet us*"[Title/Abstract]) AND "wellbeing"[Title/Abstract].

PsycINFO: Compulsive internet us* or problematic internet us* or internet addiction AND Wellbeing or wellbeing or mental health (peer reviewed), publication date 2000-2017. Results: 94.

PsycINFO: Compulsive internet us* or problematic internet us* or internet addiction AND Academic performance (peer reviewed), publication date 2000-2017. Results: 30.

Interview Questions

1. Are you aware of the negative impact of problematic internet use and information overload on wellbeing? If you are aware, have you tried decreasing your online attachment?
2. What strategies have you used to decrease online attachment? And information overload?
3. Why do you find the internet very addictive?
4. When do you use the internet the most?
5. How do you feel after spending long hours online? Do you feel the negative influence of internet addiction and information overload on your wellbeing?
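The first PubMed query above can be re-run programmatically against the NCBI E-utilities esearch endpoint. This is a hedged sketch rather than part of the original study; it assumes the third-party requests package, and the English-language filter is not a separate esearch parameter, so it would need to be appended to the query itself (e.g., AND english[la]).

```python
# Sketch: reproduce the first PubMed search via NCBI E-utilities (esearch).
# The query string and date window are copied from the search strategy above.
# Requires the third-party "requests" package (pip install requests).
import requests

QUERY = (
    '(((("internet addiction"[Title/Abstract]) '
    'OR "compulsive internet us*"[Title/Abstract]) '
    'OR "problematic internet us*"[Title/Abstract]) '
    'AND "mental health"[Title/Abstract])'
)

resp = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={
        "db": "pubmed",
        "term": QUERY,
        "datetype": "pdat",      # filter on publication date
        "mindate": "2000/01/01",
        "maxdate": "2017/12/31",
        "retmode": "json",
        "retmax": 100,
    },
    timeout=30,
)
result = resp.json()["esearchresult"]
print("hits:", result["count"])          # the review reports 61 results
print("first PMIDs:", result["idlist"][:5])
```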
"year": 2021,
"sha1": "12526ae635a84a79128ec47d29b00c528b633e29",
"oa_license": null,
"oa_url": "https://www.sumerianz.com/pdf-files/sjss4(1)53-58.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "135baae501dfaa56b0ae1ed3cd8946c880c34757",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
Case Report: Two cases of survival after complete transection of the left common carotid artery
Penetrating carotid artery traumas are rare injuries with a high mortality rate, and survivors may live with neurological sequelae. Of all types of penetrating carotid artery trauma, total transection of the common carotid artery (CCA) may be the most serious; it can lead to death quickly, and there are few reports of survivors. We describe two patients with complete CCA transections who survived without any neurological sequelae. The penetrating neck traumas of both patients were confirmed as complete CCA severance by CT and surgical exploration. Case 1 underwent insertion of an interposition polytetrafluoroethylene graft to reconstruct the CCA, with postoperative ultrasound and CT angiography (CTA) later showing total occlusion of the graft. Case 2 underwent nonoperative management under close observation and did not develop delayed active bleeding or neurological symptoms. Both patients recovered well, and no nervous system sequelae appeared during the follow-up period. A carotid artery injury cannot be ruled out in an asymptomatic penetrating neck injury. If CTA is feasible given the patient's hemodynamic condition, it should be used as a routine examination to evaluate cervical vascular injury in patients with penetrating neck trauma. Management of hemodynamically stable carotid artery injuries remains controversial, and these two cases of transected carotid arteries have prompted us to further consider the principles of managing such cases.
Introduction
Penetrating carotid artery injuries are uncommon yet fatal lesions, leading to death through massive hemorrhage or survival with neurological sequelae. Among these injuries, the complete common carotid artery transection (CCCAT) may be the most severe, can lead to fatal bleeding quickly, and has an extremely low survival rate. The cases of two patients with CCCAT who survived and had no neurological symptoms during the follow-up of more than two years are explored below.
Case presentation Case 1 The patient, a 40-year-old male, was admitted to the hospital 2 h after being stabbed in the left neck. The vital signs on admission were as follows: body temperature 35.9°C; heart rate 98 beats/min; blood pressure 100 mmHg; respiratory rate 19/min; and fingertip oxygen saturation 100%. An open wound of approximately 4.0 cm and ongoing bleeding were found in the left supraclavicular fossa. The amount of bleeding was approximately 500 ml. No evidence of respiratory distress was noted. The patient was estimated to be clinically stable given his sober consciousness, normal physical movement, cooperation in physical examination, and hemodynamic stability. CT scan (Figure 1A) indicated soft tissue swelling, gas accumulation in the neck, and massive blood clots around the left carotid artery which compressed and displaced the trachea. The patient underwent operative exploration immediately. We asked the vascular surgeon to consult and handle the vascular injury. During the operation, the clots were removed, followed by active bleeding mainly from the ruptured jugular vein, which was then repaired with 5-0 polyethylene thread. Carotid artery injury was suspected because the pulsation of the internal carotid artery (ICA) was not palpable. Further surgical exploration revealed that the left common carotid artery (CCA) was completely transected 4 cm below the bifurcation of the CCA, and the irregular proximal and distal ends were filled with thrombus. Then, we applied a vascular clamp to the broken end of the blood vessel to remove the thrombus in the proximal and distal vessels. Blood flow was unobstructed after releasing the hemostatic forceps. After an approximately 1-cm resection of the irregular stumps, end-to-end anastomosis was no longer possible (Figure 1B). Thus, the insertion of an interposition polytetrafluoroethylene (PTFE) graft (Figure 1C) was performed to reconstruct the CCA, and the patient was transferred to the ICU after the operation. Anticoagulant agents and antibiotics were given to improve the long-term patency of the reconstructed CCA and prevent infection. However, on the 8th day after the operation, ultrasonic examination showed thrombosis in the distal segment above the PTFE graft, and the flow velocity of the left ICA had decreased. Vascular surgery was consulted; as the patient showed no neurological symptoms, no further intervention was performed. Two weeks after discharge, CT angiography (CTA) showed that blood flow was absent at the site of the PTFE graft but present in the internal and external carotid arteries above the bifurcation, these arteries being thinner than their counterparts on the opposite side (Figure 1D). The patient showed no physical or neurological sequelae during the follow-up by telephone for more than 2 years after discharge.
Case 2
The patient, a 51-year-old male who had suffered from depression for more than 3 years, was admitted to the local hospital 2 h after he attempted suicide by cutting his neck. The patient was conscious, with ongoing bleeding in the neck, and a blood pressure of 70/40 mmHg on admission. He underwent initial management including massive transfusion, pressor agent therapy, and operative exploration. During the operation, the left CCA was identified to be completely transected, with approximately 6 cm of the distal end exposed on the wound surface (Figure 2A). Exploration of the origin of the CCA was not performed because of potential life-threatening hemorrhage. After repairing the damaged internal jugular vein and ligating the superficial veins including the external jugular vein, the distal end of the CCA was also ligated to stop the reflux bleeding after the blood pressure increased to normal. After ensuring that no active bleeding was present, the severed cervical muscles and the incision were sutured, leaving the undetected proximal end of the CCA untreated because of the difficulty of exposure and the potential risk of uncontrollable massive hemorrhage. Then, the patient was transferred to our hospital ICU under sedation and with neck bandaging, endotracheal intubation, and pressor drugs to maintain his blood pressure. On admission, vital signs were as follows: body temperature 36.0°C; heart rate 75 beats per minute; blood pressure 95/61 mmHg; respiratory rate 12/min; and fingertip oxygen saturation 100%. Contrast CT scan further identified the CCCAT according to the proximal stump of the CCA and the absence of prograde blood flow through the CCA (Figures 2B-E).
Figure 2 caption. Case 2: (A) the left common carotid artery that was severed from the proximal end of the heart and prolapsed at the time of treatment. Note: the arrow points to the left common carotid artery. The triangle points to the distal end of the left common carotid artery. The cervical CT coronal (B) and sagittal (C) images showed that the origin of the left common carotid artery was not shown, the contrast filling was not seen in the distance, and the structure was not clearly displayed. Note: the arrow refers to the beginning of the left common carotid artery.
Then, a multidisciplinary consultation was held with the participation of the Ear, Nose, and Throat (ENT), vascular surgery, and cardiothoracic surgery departments. Cardiothoracic surgery suggested thoracotomy, and vascular surgery suggested occlusion with coils. However, thoracotomy entailed high risk, trauma, and high cost. By contrast, coil implantation entailed the risk of re-bleeding. The vital signs of the patient were stable at that time and no obvious active bleeding occurred; after a comprehensive evaluation of the patient's condition, the patient and his family finally chose conservative treatment. The patient was kept under careful observation for 1 week in the ICU before discharge, and no bleeding or neurological symptoms appeared. He was in good condition and remained neurologically asymptomatic during the follow-up by telephone for more than 2 years after discharge.
Discussion
Carotid artery injury accounts for approximately 6% of penetrating neck injuries (1), penetrating neck injury being described as trauma to the neck that has breached the platysma muscle (2), and causes death through massive hemorrhage or survival with an irreversible neurological deficit. In the two cases reported in this paper, the injuries involved complete transection of the left CCA, with a significant duration of time elapsing prior to arrival at the hospital without fatal hemorrhage or neurological deficit, and no neurological symptoms appeared during the follow-up period after discharge. No similar cases have been reported. Andrási et al. reported survival after suicidal transection of the CCA (3), which occurred in the hospital, where rescue measures, such as local pressure on the wound, transfusion, and clamping of both arterial stumps, were implemented quickly, a situation which obviously differs from the cases we reported. O'Banion et al. performed a cohort study of 50 patients (CCA: 3; ICA: 47) with traumatic carotid injuries who were initially managed nonoperatively: only one patient ultimately required conversion to surgery (4). However, injury types, such as occlusion, transection, partial transection, and pseudoaneurysm, were not provided by O'Banion et al., nor did they provide the imaging and pathological details of the injured arteries, details that are vital for analyzing the mechanism of temporary or permanent hemostasis of injured major carotid arteries.
Both CCA and ICA injuries carry a high risk of death through massive hemorrhage and neurological sequelae, and their treatment remains controversial. At present, no consensus exists on the management of injured arteries in patients with stable hemodynamics and without neurological dysfunction. Some scholars suggest reconstruction of the artery if technically possible (1, 5); others propose ligation or mere observation as the preferred treatment in such situations (6). In addition, some researchers present views that fall somewhere between the two above approaches, proposing nonoperative management as acceptable in well-selected patients (7). However, few independent reports are available on carotid artery transection. In the cases we reported, the patient in Case 1 underwent surgical exploration and the insertion of an interpositional PTFE graft for the CCA reconstruction. Postoperative CTA showed no prograde flow through the reconstructed CCA, indicating that the CCA was completely occluded. We postulate that this outcome may be related to anastomotic stenosis, the patient's hypercoagulable state, local inflammatory reaction, intimal injury, and subsequent thrombosis. Given the unsuccessful restoration of vascular continuity and the ideal outcome of the patient without stroke or neurological sequelae, ligation of the injured CCA is also proposed as a feasible surgical option in such circumstances. The patient in Case 2 was transferred to our hospital with stable hemodynamics, without active bleeding or neurological symptoms, after repair of the internal jugular vein, ligation of the distal end of the transected CCA, and suturing of the incision in the local hospital, leaving the proximal end of the CCA untreated. The patient's condition was stable, although the CT results revealed no prograde flow through the CCA. Note that the option of an open or endovascular surgery entails difficulty and may lead to considerable trauma and potentially life-threatening massive bleeding. Finally, the patient's family chose nonoperative management of the transected artery. The mortality rate of penetrating neck trauma is estimated to be as high as 6%, with massive hemorrhage being the most common cause of death (8). Patients presenting with penetrating carotid trauma have an overall stroke rate of 17% (4), and more specific data about CCCAT are not available. In this paper, no fatal massive hemorrhage occurred in these two patients with CCCAT despite the long duration of time before the availability of hemorrhage-control treatment, thereby suggesting that CCCAT may not necessarily cause death through massive hemorrhage. We postulate that the bleeding following the rupture of the carotid artery can stop spontaneously in some circumstances. Although the CCAs eventually failed to recanalize, neither patient experienced stroke or neurological symptoms, indicating that the choice between revascularization and ligation of the CCA should be carefully balanced in the treatment of CCCAT, especially in the presence of a complete circle of Willis (CTA finding) and good distal back-bleeding of the artery (operative finding). In addition, we learned the following lesson from Case 1: CTA can be chosen as the first-line imaging modality for diagnosing vascular injuries in hemodynamically stable patients with neck trauma.
In summary, we shared two cases of CCCAT and found some interesting and puzzling phenomena that have not been reported previously in the literature. We remain uncertain about the mechanism of spontaneous hemostasis in these two CCCAT patients. To date, no relevant literature has examined this rare phenomenon.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding authors.
Ethics statement
The studies involving human participants were reviewed and approved by Ethical Committee of Yantai Yuhuangding Hospital. The patients provided their written informed consent to participate in this study. | 2023-02-03T14:25:41.872Z | 2023-02-03T00:00:00.000 | {
"year": 2023,
"sha1": "66e4dea7364986e9650fa5143ca6752bcd924fe4",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "66e4dea7364986e9650fa5143ca6752bcd924fe4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
260245192 | pes2o/s2orc | v3-fos-license | The arrhythmogenic cardiomyopathy phenotype associated with PKP2 c.1211dup variant
Background The arrhythmogenic cardiomyopathy (ACM) phenotype, with life-threatening ventricular arrhythmias and heart failure, varies according to genetic aetiology. We aimed to characterise the phenotype associated with the variant c.1211dup (p.Val406Serfs*4) in the plakophilin‑2 gene (PKP2) and compare it with previously reported Dutch PKP2 founder variants. Methods Clinical data were collected retrospectively from medical records of 106 PKP2 c.1211dup heterozygous carriers. Using data from the Netherlands ACM Registry, c.1211dup was compared with 3 other truncating PKP2 variants (c.235C > T (p.Arg79*), c.397C > T (p.Gln133*) and c.2489+1G > A (p.?)). Results Of the 106 carriers, 47 (44%) were diagnosed with ACM, at a mean age of 41 years. By the end of follow-up, 29 (27%) had experienced sustained ventricular arrhythmias and 12 (11%) had developed heart failure, with male carriers showing significantly higher risks than females on these endpoints (p < 0.05). Based on available cardiac magnetic resonance imaging and echocardiographic data, 46% of the carriers showed either right ventricular dilatation and/or dysfunction, whereas a substantial minority (37%) had some form of left ventricular involvement. Both geographical distribution of carriers and haplotype analysis suggested PKP2 c.1211dup to be a founder variant originating from the South-Western coast of the Netherlands. Finally, a Cox proportional hazards model suggested significant differences in ventricular arrhythmia–free survival between 4 PKP2 founder variants, including c.1211dup. Conclusions The PKP2 c.1211dup variant is a Dutch founder variant associated with a typical right-dominant ACM phenotype, but also left ventricular involvement, and a possibly more severe phenotype than other Dutch PKP2 founder variants. Supplementary Information The online version of this article (10.1007/s12471-023-01791-2) contains supplementary material, which is available to authorized users.
Introduction
Arrhythmogenic cardiomyopathy (ACM) is an umbrella term describing a range of progressive, often familial cardiomyopathies resulting in ventricular arrhythmia (VA) and ventricular dilatation [1][2][3]. The best characterised subtype of ACM is arrhythmogenic right ventricular cardiomyopathy (ARVC), which is diagnosed according to the 2010 Revised Task Force Criteria [4]. ARVC patients can show considerable clinical variation [5], in part due to age-related penetrance and risk factors such as male sex, frequent endurance exercise and genetic aetiology [6][7][8].
The most commonly identified causal gene in patients with ARVC is plakophilin-2 (PKP2), with pathogenic variants found in nearly half of a series of 439 Dutch and American probands with ARVC [9]. PKP2 variant carriers typically exhibit a right ventricular (RV) dominant disease progression. Although left ventricular (LV) involvement has been acknowledged increasingly for desmosomal variants in general, the evidence still indicates less common involvement of PKP2 variants [10].
What's new?
The PKP2 c.1211dup variant is a Dutch founder variant that is associated with a typical right-sided arrhythmogenic cardiomyopathy phenotype and also milder but substantial left-sided involvement. Ventricular arrhythmia (VA) was an early, predominant manifestation, occurring in nearly a third of carriers from 14 years of age onwards. Heart failure, on the other hand, was uncommon before the age of 55 years. VA occurred earlier in life in male carriers compared with female carriers. About 60% of carriers remained asymptomatic by age 60. Dutch PKP2 truncating founder variants may differ in phenotype severity.
The heterozygous truncating PKP2 variant, c.1211dup (p.Val406Serfs*4), alternatively referred to as c.1212insT, was initially reported in a cohort of 56 patients fulfilling the 1994 ARVC Task Force Criteria [11] and regularly since then (ClinVar Variation ID 45015). More recently, homozygosity of PKP2 c.1211dup has been associated with a severe form of hypoplastic left heart syndrome [12]. Still, no large cohort of individuals carrying the c.1211dup variant of the PKP2 gene has been studied. As genetic factors likely contribute to facets of the ACM phenotype, such as onset of heart failure (HF) and degree of LV involvement [8], and considering that treatment decisions (pharmacotherapy and/or device therapy) depend on phenotype severity [3], genotype-phenotype research is essential to adequately inform patients and physicians and guide them in the decision-making process.
In the current report, we describe the phenotype of the PKP2 c.1211dup variant ( Fig. 1) and compare it with other Dutch PKP2 founder variants.
Study population
Of the 123 PKP2 c.1211dup heterozygous carriers identified through the academic DNA diagnostic laboratories in the Netherlands, 2 refused informed consent and 15 had no follow-up medical records. Obligate carriers were not included due to lack of systematic cardiological evaluation. In addition to the heterozygous carriers, 2 patients homozygous for the PKP2 c.1211dup variant were identified and excluded.
Apart from one male proband who presented with hypertrophic cardiomyopathy (next generation sequencing panel for cardiomyopathy genes revealed no additional (likely) pathogenic variant), no other type of cardiomyopathy than ARVC was observed.
All participants provided informed consent, according to the local medical ethics committees of all participating medical centres and/or in conformity with the requirements of the Netherlands ACM Registry [13]. The study was conducted in accordance with the principles of the Declaration of Helsinki.
Data collection
In addition to the information required to assess diagnostic status according to the 2010 Revised Task Force Criteria, we initially collected data on medical history, medication use and exercise history from the Netherlands ACM Registry (n = 46) [13]. Electronic medical records of all participating medical centres were used to corroborate and supplement these data where possible (see Table S1 in Electronic Supplementary Material). In case of any discrepancies, data from primary medical records were used. Data collection for the remaining 60 carriers, not included in the ACM Registry, was based on the electronic medical records of the participating centres.
Sustained VA was defined as a composite of sudden cardiac death (SCD), sudden cardiac arrest, spontaneous sustained ventricular tachycardia (VT) (≥ 30 s at ≥ 100 bpm or with haemodynamic compromise requiring cardioversion), ventricular fibrillation/flutter (VF) or appropriate implantable cardioverter-defibrillator (ICD) intervention. This definition was also used for development of the ARVC risk calculator [14]. HF was defined by cardiological diagnosis with symptoms graded as New York Heart Association class ≥ 2 (for all definitions, see Table S2 in Electronic Supplementary Material).
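As a minimal illustration of how these composite definitions could be operationalised (field names are hypothetical, not the authors' data model):

```python
# A minimal sketch of coding the composite endpoints defined above from
# per-carrier event flags; the record fields are illustrative stand-ins.
from dataclasses import dataclass

@dataclass
class CarrierRecord:
    scd: bool               # sudden cardiac death
    sca: bool               # sudden cardiac arrest
    sustained_vt: bool      # VT >= 30 s at >= 100 bpm or needing cardioversion
    vf: bool                # ventricular fibrillation/flutter
    appropriate_icd: bool   # appropriate ICD intervention
    hf_diagnosis: bool      # cardiological diagnosis of heart failure
    nyha_class: int         # New York Heart Association class

def sustained_va(r: CarrierRecord) -> bool:
    """Composite sustained-VA endpoint, per the definition in the text."""
    return r.scd or r.sca or r.sustained_vt or r.vf or r.appropriate_icd

def heart_failure(r: CarrierRecord) -> bool:
    """HF endpoint: cardiological diagnosis with NYHA class >= 2."""
    return r.hf_diagnosis and r.nyha_class >= 2
```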
Genetic evaluation
All participants were genetically tested between June 2005 and July 2021. Probands were tested in accordance with standard practices at the time of genetic testing, whereas family members were usually tested only for family variants (see Table S3 in Electronic Supplementary Material). Genetic testing revealed no other (likely) pathogenic variants in any participant (see Table S4 in Electronic Supplementary Material).
Pedigrees constructed from patient records, government archives and online genealogical records were used to find a common ancestor (see 'Pedigree evaluation' in Supplemental Methods in Electronic Supplementary Material). Participant and grandparent birthplaces were generalised to 2-digit postal codes. Geographical distribution maps were generated with MapInfo Pro 2019 (Precisely, Burlington, MA, USA).
Haplotype analysis using the CytoScan HD single nucleotide polymorphism (SNP) array was performed according to the manufacturer's instructions on 4 samples from two different, geographically separated pedigrees carrying the PKP2 c.1211dup variant for which no common ancestor could be found (see 'Haplotype analysis' in Supplemental Methods in Electronic Supplementary Material).
Statistical analysis
The statistical analyses are described in detail in the Supplemental Methods. Two-tailed p-values < 0.05 were interpreted as statistically significant.
[Table legend: Data are n (%), mean ± standard deviation, or median (interquartile range). VT, ventricular tachycardia; VF, ventricular fibrillation; NYHA, New York Heart Association; ICD, implantable cardioverter-defibrillator; ARNI, angiotensin receptor-neprilysin inhibitor; ACEI, angiotensin-converting enzyme inhibitor; ARB, angiotensin-receptor blocker. a Sotalol was classified as both a beta-blocker and a class III antiarrhythmic agent.]
ACM phenotype
Disease onset Overall, 47 carriers (44%) fulfilled the ARVC diagnostic criteria, at a mean age of 41 ± 19 years (range: 12-87), with no significant differences by proband status (p = 0.24) or sex (p = 0.17). However, sustained VA and HF were diagnosed significantly earlier in males than in females (Fig. 2). By the age of 40 years, 33% of men (versus 9% of women) had experienced sustained VA. HF occurred at a later age than sustained VA for both men (5% by age 40 years, increasing to 21% at age 60 years) and women (no HF by age 40 years, increasing to 8% at age 60 years). Both events also occurred significantly earlier in probands than in relatives (Fig. 2). Notably, by the age of 40 years, only 5% of relatives had experienced sustained VA and none had developed HF. Moreover, 7 male carriers (5 of whom were ≤ 35 years old) experienced sudden cardiac arrest, whereas only 1 female carrier experienced sudden cardiac arrest (at age 65 years). One 17-year-old male underwent heart transplantation due to HF. In total, 4 deaths (3 men and 1 woman) were observed following end-stage HF. Of note, 6 men with a 50% chance of carrying the c.1211dup variant experienced SCD, between the ages of 16 and 70 years.
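As an illustration (not from the original analysis), the age-specific, sex-stratified event-free proportions quoted above can be estimated with a Kaplan-Meier fit using age as the time scale; the data frame below is a hypothetical stand-in for the registry data:

```python
# Sketch: sex-stratified event-free survival by age with lifelines.
import pandas as pd
from lifelines import KaplanMeierFitter

df = pd.DataFrame({
    "age_last_fu": [40, 35, 52, 60, 28],   # age at event or last follow-up
    "sustained_va": [1, 0, 1, 0, 0],       # 1 = sustained VA occurred
    "sex": ["M", "M", "F", "F", "M"],
})

kmf = KaplanMeierFitter()
for sex, grp in df.groupby("sex"):
    # Age is the time scale, matching the per-age estimates in the text
    kmf.fit(grp["age_last_fu"], event_observed=grp["sustained_va"], label=sex)
    print(sex, kmf.predict(40))  # event-free probability at age 40
```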
Founder status
Geographical distribution and pedigree evaluation The majority of PKP2 c.1211dup carriers were born in the South-West of the Netherlands (Fig. 3). Grandparental birthplaces converged to 3 coastal communities in the province of South Holland. Pedigrees from 2 communities (13/29) could be linked to a putative common ancestral couple (range: 8-11 generations) born in the late 17th century (see Figure S1 in Electronic Supplementary Material).
Haplotype analysis
CytoScan HD SNP array values on chromosome 12 of 2 individuals from the genealogically linked communities
Variant-specific phenotypic variability
The 4 PKP2 variant groups (c.235C > T, c.397C > T, c.1211dup and c.2489+1G > A) are described in Table S5 in Electronic Supplementary Material. A Cox proportional hazards model adjusted for sex and proband status showed c.1211dup to be associated with a significantly higher rate of sustained VA compared with c.235C > T but not compared with c.397C > T or c.2489+1G > A (Tab. 3).
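A hedged sketch of how such a variant comparison could be set up with a Cox proportional hazards model adjusted for sex and proband status; the toy data frame and dummy coding (c.235C > T as the reference level) are illustrative assumptions, not the registry data:

```python
# Sketch: Cox proportional hazards model for VA-free survival by variant,
# adjusted for sex and proband status, using lifelines on toy data.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "age_last_fu": [45, 30, 55, 38, 62, 41],
    "sustained_va": [1, 1, 0, 0, 1, 0],
    "male": [1, 0, 1, 1, 0, 0],
    "proband": [1, 1, 0, 0, 1, 0],
    # one dummy per non-reference variant; c.235C>T is the reference
    "v_1211dup": [1, 0, 1, 0, 0, 1],
    "v_397CT": [0, 1, 0, 0, 0, 0],
    "v_2489_1GA": [0, 0, 0, 1, 0, 0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="age_last_fu", event_col="sustained_va")
cph.print_summary()  # hazard ratios for each variant vs the reference
```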
Discussion
Our study has provided a detailed characterisation of the ACM phenotype of over 85% of all known Dutch heterozygous carriers of the PKP2 c.1211dup variant. In agreement with previously described PKP2 variants [2], the ACM phenotype associated with the PKP2 c.1211dup variant showed incomplete penetrance and variable expression and was skewed towards male carriers. VA was an early, predominant manifestation in c.1211dup carriers, occurring mostly before the age of 40 but documented as early as 14 years old. Additionally, 33% had a high premature ventricular complex (PVC) burden. HF, on the other hand, was uncommon before the age of 55 years. In three studies documenting 434 PKP2 variant carriers [8], 53 PKP2 c.235C > T carriers [15] and 14 PKP2 c.2489+4A > C carriers [16], respectively, VA was also a predominant disease manifestation, whereas HF occurred in ≤ 3%. Although the higher incidence of HF currently found might be specifically related to the c.1211dup variant, a more likely explanation is the large number of carriers (34%) who were older than 60 years by the end of follow-up.
As could be expected from other PKP2 studies, the c.1211dup variant was associated with a RV-dominant phenotype. However, our study adds to the growing body of evidence that desmosomal variants, even in PKP2, may be related to a significant degree of LV involvement [8,17,18]. In the entire cohort, at least 19% (20/106) showed LV dilatation and 32% (34/106) LV dysfunction on CMR imaging and/or echocardiography. The degree of LV dilatation and dysfunction was generally mild, with only 12 carriers (11%) showing symptomatic HF. Any discrepancies between CMR and echocardiographic data are most likely explained by echocardiography's lower sensitivity [19].
Geographic data and the putative common ancestral couple in the late 17th century strongly suggested that in the Netherlands, the PKP2 c.1211dup variant is a founder variant [15,20]. Despite the relatively high regional frequency of this variant, no additional cases of homozygosity, which were previously shown to be associated with hypoplastic left heart syndrome [12], were observed. Based on our results, the current recommendations on clinical management and family screening (with cardiological screening from the age of 10 onwards and no upper age limit in patients and family members with the PKP2 c.1211dup variant) are justified [3]. Currently, risk calculators only exist for patients recently fulfilling diagnostic criteria [14]. Further research is warranted on risk stratification of carriers who showed relatively low risks of VA and HF.
Study limitations
In addition to the inherent limitations of our study's retrospective design (such as limited data on cardiac inflammation and exercise history, which are both of pathophysiological interest [7,24]), the current results should be interpreted in the context of the following considerations. Firstly, we cannot rule out that the 17 carriers who could not be included may have had a lower disease burden, possibly leading to an overestimation of disease severity in this study. On the other hand, carriers were not followed up from birth, which could have led to underestimation of phenotype severity in the form of ascertainment bias of carriers surviving long enough to undergo genetic testing. Another potential source of ascertainment bias is presented by 6 men with a 50% chance of carrying PKP2 c.1211dup who suffered SCD between the ages of 16 and 70 years but could not be included in our cohort due to lack of genetic test results (data not shown).
Furthermore, the unexpected difference found in VA risk between 4 truncating PKP2 variants suggested that PKP2 truncating variant carriers may carry different risks depending on variant location. However, our approach was limited in sample size and available clinical data, only allowing us to compare the founder variants on VA-free survival. Additionally, mechanistic work on PKP2 truncating variants so far has generally suggested a haploinsufficiency mechanism [21][22][23], which would exclude variant-specific influences on phenotype severity. Our results should therefore be interpreted with caution and corroborated in more elaborate studies. Phenotype differences may also be explained by unknown coinherited factors or environmental influences.
Conclusion
Our results confirm the PKP2 c.1211dup variant to be a Dutch founder variant associated with a typical right-sided ACM phenotype and also milder but substantial left-sided involvement. Although penetrance at the age of 60 years was only 40%, sustained VA mostly occurred prior to the age of 40 years and may present from 14 years of age onwards, with an increased risk for males. In contrast, HF was less common, particularly prior to the age of 55 years. Further studies including more clinical data should confirm whether phenotype severity is dependent on the variant location in PKP2 truncating variants.
Members of the European Reference Network for rare, low prevalence and complex diseases of the heart (ERN GUARD-Heart): Arjan C. Houweling, Ronald Lekanne Deprez, Anneline S. J. M. te Riele, Arthur A. M. Wilde, J. Peter van Tintelen.
Acknowledgements We would like to thank all participating centres for assisting us by compiling and providing permission for use of medical records. In particular, we would like to thank the families who assisted us by personally providing their family pedigree and giving permission for haplotype analysis. We would like to thank Mrs H. Linde for her assistance with data collection from the Netherlands ACM Registry, Dr Mar Rodriguez-Girondo for her statistical advice and Mrs Cindy Richel-van Assenbergh for her assistance with haplotype analysis.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 2023-07-29T06:16:16.735Z | 2023-07-28T00:00:00.000 | {
"year": 2023,
"sha1": "601c41ae3691a2290b892391af1083517253ec00",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "da4dce9bd1efdf1a050932464a27757c08ef8c14",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
94968089 | pes2o/s2orc | v3-fos-license | NO emission characteristics in counterflow diffusion flame of blended fuel of H2/CO2/Ar
Flame structure and NO emission characteristics in counterflow diffusion flame of blended fuel of H2/CO2/Ar have been numerically simulated with detailed chemistry. The combination of H2, CO2 and Ar as fuel is selected to clearly display the contribution of hydrocarbon products to flame structure and NO emission characteristics due to the breakdown of CO2. A radiative heat loss term is involved to correctly describe the flame dynamics especially at low strain rates. The detailed chemistry adopts the reaction mechanism of GRI 2.11, which consists of 49 species and 279 elementary reactions. All mechanisms including thermal, NO2, N2O and Fenimore are taken into account to separately evaluate the effects of CO2 addition on NO emission characteristics.
SUMMARY
Flame structure and NO emission characteristics in counterflow diffusion flame of blended fuel of H2/CO2/Ar have been numerically simulated with detailed chemistry. The combination of H2, CO2 and Ar as fuel is selected to clearly display the contribution of hydrocarbon products to flame structure and NO emission characteristics due to the breakdown of CO2. A radiative heat loss term is involved to correctly describe the flame dynamics especially at low strain rates. The detailed chemistry adopts the reaction mechanism of GRI 2.11, which consists of 49 species and 279 elementary reactions. All mechanisms including thermal, NO2, N2O and Fenimore are taken into account to separately evaluate the effects of CO2 addition on NO emission characteristics.
The increase of added CO2 quantity causes flame temperature to fall since at high strain rates a diluent effect is prevailing and at low strain rates the breakdown of CO2 produces relatively populous hydrocarbon products, and thus the existence of hydrocarbon products inhibits chain branching. It is also found that the contribution of NO production by N2O and NO2 mechanisms is negligible and that the thermal mechanism is concentrated on only the reaction zone. As strain rate and CO2 quantity increase, NO production is remarkably augmented. Copyright 2002 John Wiley & Sons, Ltd.
INTRODUCTION
Nearly 90 per cent of the world's energy still depends on combustion, and CO2/NOx emission through combustion causes more and more serious environmental problems, such as global warming and acid rain. However, it seems very difficult to expect that an epoch-making technology to reduce CO2 emission in combustors will be developed in the near future.
It is then natural that special interest should be focused on the utilization technologies of landfill gas (LFG), since the major energy sources are CO2, H2 and CH4, and it is essential that the effect of CO2 addition on emission characteristics of NOx and soot, based on the exhaust gas recirculation concept, should be estimated. Nevertheless, limited research efforts have been devoted to understanding the role of CO2 addition in flame dynamics. In the previous research, the effects of CO2 addition on the fuel side in H2/air counterflow diffusion flame have been categorized into two: diluent effects due to the relative reduction in the concentration of the reactive species, and direct chemical effects caused by the breakdown of CO2 through the reactions of third-body collision and thermal dissociation. This might give a hint that there is a possibility of the change of reaction paths by the contribution of the hydrocarbon products produced by the breakdown of CO2. The numerical study on the effect of CO2 addition in CH4/air counterflow diffusion flame has shown, in the process of methane oxidation, that as a result of CO2 addition the reaction path of the C2-branch may be much more vigorous than that of the C1-branch in some conditions. The combination of the two results suggests the possibility that the effect of CO2 addition on NO emission mechanisms should be quite different. Special interest is focused on the variations of thermal NO and Fenimore NO according to strain rate and the CO2 quantity. This is because, with increasing strain rate, thermal NO might decrease due to the decrease of flame temperature but the role of Fenimore NO might be remarkable due to vigorously produced hydrocarbon products. NO emission characteristics of methane-air Bunsen-type burner flames were studied numerically in terms of counterflow flames (Nishioka et al., 1994). The contribution of the respective NO formation mechanisms to total NO production was evaluated systematically according to strain rate: thermal NO, N2O, NO2 and Fenimore mechanisms. In high-temperature combustion, radiation from flames is the predominant mode of energy transport. For instance, in boiler furnaces, where combustion takes place, a large portion of the total heat flux from the flame to the surrounding walls is contributed by radiation. Maruta et al. (1998) numerically studied the influence of radiation heat loss on the counterflow diffusion flame, and showed that at a low strain rate radiation has a strong impact on flame structure. In the study of NOx emission of diffusion flames, Chan et al. (1995) examined radiative heat transfer effects by using the optically thin and full radiation models, and found that, due to the sensitivity of thermal NO kinetics to temperature, thermal radiation has a profound effect on thermal NO formation, practically at all ranges of scalar dissipation rate except near the extinction point where flamelets are excessively stretched. Generally, CO2 together with H2O is regarded as a highly radiant material, and the present study changes the CO2 quantity on the fuel side to grasp the effect of CO2 addition on the flame structure of counterflow diffusion flame. It is, therefore, essential that for the correct description of flame structure the radiative heat loss term should be taken into account in the study of these flames.
In the present study, effects of CO2 addition on flame structure and NO emission characteristics in counterflow diffusion flame have been examined with detailed chemistry. The blended fuels with H2, CO2 and Ar are utilized to clearly show the contribution of hydrocarbon products to flame structure due to the breakdown of CO2. Specially, the radiative heat loss term, which is known to be remarkable at low strain rate, is included to clarify the flame dynamics. The necessary thermochemical and transport properties are obtained from the CHEMKIN database (Kee et al., 1989). The detailed chemistry adopts the reaction mechanism of GRI 2.11, which consists of 49 species and 279 elementary reactions. The most important attention in the calculation is paid to ascertaining how CO2 addition to the fuel side contributes to NO emission characteristics. All mechanisms including thermal, NO2, N2O and Fenimore (Nishioka et al., 1994) are taken into account. This might make it possible to distinguish the contribution of the respective mechanisms in total NO formation. The effects of important flame parameters on the emission characteristics were studied with strain rate and the quantity of CO2 addition.
NUMERICAL MODEL
The flame we modelled can be produced by two counterflowing reactant streams which emerge from two coaxial jets. In the neighbourhood of the stagnation plane produced by these flows, a chemically reacting boundary layer is established, as shown in Figure 1. The governing equations for mass, momentum, chemical species and energy can be found elsewhere (Kee et al., 1988; Lutz et al., 1977; Park et al., 2001). The only difference between this work and that conducted by them is the energy conservation equation, in which we retained the sink term due to thermal radiation. In the present study, all these conservation equations are transformed into a set of ordinary differential equations. For the sake of brevity, only the energy conservation equation is given here; in it, q_r is the sink term due to the radiative heat loss. The main contribution to radiative heat loss is given by CO2, H2O, CO and CH4, and the radiative heat loss, based on the optically thin approximation, is calculated as follows (Tien, 1968):
q_r = 4 σ Kp (T^4 - T∞^4), with Kp = Σ_k P_k Kp,k
where σ is the Stefan-Boltzmann constant, T and T∞ are the local and ambient temperatures, respectively, and Kp is the Planck mean absorption coefficient. P_k and Kp,k are the partial pressure and the Planck mean absorption coefficient of a species, respectively. The Planck mean absorption coefficient is approximately obtained as follows:
Kp,k = Σ_j A_kj T^j
Here, A_kj is the polynomial coefficient of a species expressed as a function of temperature (Ju et al., 1997).
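As a rough numerical sketch of the optically thin radiation model just described (the polynomial coefficients below are placeholders, not the tabulated values of Ju et al. (1997)):

```python
# Minimal sketch of the optically thin radiative sink q_r = 4*sigma*Kp*(T^4 - T_inf^4),
# with Kp built from per-species partial pressures and polynomial Planck
# mean absorption coefficients, following the definitions in the text.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

# Hypothetical coefficients A_kj for Kp,k(T) = sum_j A_kj * T**j (placeholders)
PLANCK_POLY = {
    "CO2": [1.0e0, -1.0e-3, 5.0e-7],
    "H2O": [5.0e-1, -5.0e-4, 2.0e-7],
    "CO":  [1.0e-1, -1.0e-4, 4.0e-8],
    "CH4": [5.0e-2, -5.0e-5, 2.0e-8],
}

def planck_mean(T: float, partial_pressures: dict) -> float:
    """Kp = sum_k P_k * Kp,k(T)."""
    kp = 0.0
    for species, p in partial_pressures.items():
        coeffs = PLANCK_POLY[species]
        kp += p * sum(a * T**j for j, a in enumerate(coeffs))
    return kp

def radiative_sink(T: float, partial_pressures: dict, T_inf: float = 300.0) -> float:
    """Optically thin volumetric heat loss (W m^-3, positive as a loss)."""
    return 4.0 * SIGMA * planck_mean(T, partial_pressures) * (T**4 - T_inf**4)

# Example: local state with 0.1 atm CO2 and 0.05 atm H2O at 1800 K
print(radiative_sink(1800.0, {"CO2": 0.1, "H2O": 0.05, "CO": 0.02, "CH4": 0.0}))
```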
In the present calculation, the nozzle separation distance is 3 cm and the strain rate is obtained as follows (Chelliah et al., 1990):
a = (2|V_O|/L) [1 + (|V_F| √ρ_F)/(|V_O| √ρ_O)]
Here the subscripts F and O mean the fuel and oxidizer, respectively; V is the nozzle exit velocity, ρ the stream density and L the nozzle separation distance. The detailed chemistry adopts the reaction mechanism of GRI 2.11, which consists of 49 species and 279 elementary reactions. The above governing equations are solved using a CHEMKIN-based code (Kee et al., 1988) and a Transport-based one (Kee et al., 1989). The methane oxidation includes C2-branch reactions, and all the reaction paths of NOx are depicted in detail. For a detailed understanding of the reduction mechanism of NOx in terms of the addition of CO2, the NO separation technique, in which the contribution of the respective NO formation mechanisms (thermal NO, N2O, NO2 and Fenimore mechanisms) to total NO production is evaluated, is utilized (Nishioka et al., 1994). In particular, the Fenimore contribution is the one obtained by the full mechanism calculation minus the summation of the thermal NO, N2O and NO2 mechanisms.
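The NO separation step reduces to simple array arithmetic once the four mechanism-specific production-rate profiles are available; a minimal sketch with placeholder profiles:

```python
# Sketch of the NO separation technique: the Fenimore contribution is
# obtained by subtracting the thermal, N2O and NO2 contributions from the
# full-mechanism result. Profile values are placeholders, not computed data.
import numpy as np

# Hypothetical NO mole production rate profiles along the burner axis
w_no_full    = np.array([0.0, 2.0e-5, 8.0e-5, 3.0e-5, 0.0])
w_no_thermal = np.array([0.0, 1.0e-5, 5.0e-5, 1.0e-5, 0.0])
w_no_n2o     = np.array([0.0, 1.0e-6, 2.0e-6, 1.0e-6, 0.0])
w_no_no2     = np.array([0.0, 5.0e-7, 1.0e-6, 5.0e-7, 0.0])

# Fenimore = full - (thermal + N2O + NO2), as in Nishioka et al. (1994)
w_no_fenimore = w_no_full - (w_no_thermal + w_no_n2o + w_no_no2)
print(w_no_fenimore)
```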
RESULTS AND DISCUSSIONS
Figure 2 shows the variation of maximum flame temperature with strain rate to clearly display the radiation effect. The open symbol is the case not considering the radiative heat loss term and the solid symbol is the case including it. The maximum flame temperature decreases monotonically with increasing strain rate in the case of no inclusion of the radiation term, whereas in the case of inclusion of the radiation term the maximum flame temperature initially increases with strain rate and then decreases after taking a maximum at a certain strain rate. The differences between maximum flame temperatures with and without the radiation term are quite large at low strain rates and decrease gradually with increasing strain rate. It is naturally expected that the radiation effect is remarkable at low strain rates since radiative heat loss is largely dependent upon flame volume and the thickness of the reaction zone is closely related to the reciprocal of strain rate. This is confirmed from the fact that the difference between the maximum flame temperatures of cases with and without the radiation term decreases with increasing strain rate. This phenomenon has been well known elsewhere (Maruta et al., 1986; Ju et al., 1997; Lee et al., 2001). The maximum flame temperatures become globally low according to the increase of the CO2 quantity, and the differences between maximum flame temperatures with and without the radiation term are much larger especially at low strain rates. The reasons for the global decrease of maximum flame temperature with increasing strain rate are as follows. Firstly, at high strain rates the diluent effect is dominant, so that the concentrations of reactive species are relatively reduced. Secondly, at low strain rates, the breakdown of CO2 produces relatively populous hydrocarbon products, and the existence of hydrocarbon products inhibits chain branching because the rate of the chain initiation reaction, H + O2 = O + OH, is considerably less than the rate of reaction between H atoms and hydrocarbons (Westbrook and Dryer, 1984); this is confirmed from the decrease of the maximum mole fractions of O, H and OH with increase in the CO2 quantity. The radiation effect is, therefore, seen to be remarkable due to CO2 addition. Consequently, it is found that radiative heat loss should be essentially considered to correctly evaluate flame structures, especially when a species with a high radiative absorption coefficient such as CO2 is included. For the clear comparison of the effect of radiative heat loss on detailed flame structures as shown in Figure 2, a = 25.0 s⁻¹ is selected as a representatively low strain rate and a = 124.0 s⁻¹ as a high one. Figures 3 and 4 describe flame structures with the full chemical mechanism of GRI 2.11 for a = 25.0 and 124.0 s⁻¹ in cases of 10 per cent CO2, 20 per cent CO2 and 30 per cent CO2 addition to the fuel side, respectively. In Figures 3 and 4, Z represents a mixture fraction, defined as in the previous researches (Drake and Blint, 1988; Park et al., 2001). According to the increase of CO2 quantity, the penetrated position of O2 and H2 is shown to shift to the fuel-rich side. This is because the stoichiometric mixture fraction becomes larger and larger. As a result, the maximum mole fraction position of H2O shifts to the fuel-rich side too. As was well depicted in the previous research, the existence of the deflection point of the CO2 profile and the production of CO are caused by the breakdown of CO2 due to thermal dissociation.
It is also shown that the CO mole fraction increases and the CO profiles become broad with the increase of added CO2 quantity. However, the CO mole fractions at a = 25.0 s⁻¹ are globally larger than those at a = 124.0 s⁻¹. This is because, according to the increase of strain rate, flame temperature decreases and then the breakdown of CO2 is limited, so that the role of CO2 is changed to that of a diluent. As shown in Figures 3(c) and 4(c), the breakdown of CO2 causes the hydrocarbon products such as HCO, CH3 and CH2 to be populous according to the quantity of CO2 addition. As a result, the maximum mole fractions of H, O and OH decrease with increasing CO2 quantity in Figures 3(b) and 4(b), since those radicals should be abundantly utilized to form the hydrocarbon products. It was shown in the previous research (Drake and Blint, 1988; Park et al., 2001) that the increase of strain rate induced the increase of the maximum mole fractions of O, H and OH except near the extinction limit. In the present results, the maximum mole fractions of O, H and OH in the case of a = 25.0 s⁻¹ are also shown to be smaller than those in the case of a = 124.0 s⁻¹. Therefore, the decrease of the maximum mole fractions of H, O and OH with increasing CO2 quantity is directly attributed to the increased hydrocarbon products, as can be seen from the comparison between Figures 3(c) and 4(c). It is found in Figures 3(b) and 4(b) that the increase of CO2 quantity and strain rate causes the NO mole fraction to be reduced. At a glance, this seems to be a matter of course since the increase of CO2 quantity and/or strain rate gives rise to the decrease of maximum flame temperature as shown in Figure 2. However, cautious observation forces one to analyse the NO formation mechanism in detail. It may be natural that the decrease of maximum flame temperature due to the increase of CO2 quantity and/or strain rate produces the reduction of thermal NO. But there is room for argument on the side of Fenimore NO. It has been recognized that the formation and destruction processes of hydrocarbon products affect the formation of Fenimore NO (the so-called prompt NO) directly (Nishioka et al., 1994). Previous research has shown implicitly, from numerical simulation with a relatively simplified chemistry, that considerable quantities of CO2 can break down through thermal dissociation and then the formed hydrocarbon product such as HCO can partly alter a reaction mechanism. Lee et al. (2001) displayed clearly that CO2 addition to the oxidizer side in CH4/air counterflow diffusion flame makes the C2-branch reaction path remarkable enough to be comparable to the C1-branch reaction path. Based on those researches, in the present study Fenimore NO should not be formed if the added CO2 on the fuel side played the role of only a diluent. However, it is clearly shown in Figures 3(c) and 4(c) that the concentrations of hydrocarbon products such as HCO, CH3 and CH2 are not negligible anymore. The NO formation due to NO2 and N2O mechanisms is known to be negligible in counterflow diffusion flames (Nishioka et al., 1994). Therefore, the difference between NO mole fractions with the full mechanism and the thermal mechanism might be directly associated with the contribution of Fenimore NO. Figure 5 plots the difference between NO mole fractions calculated by full and thermal mechanisms. As shown in Figure 5, the NO mole fraction decreases globally with increase in the CO2 quantity in both cases calculated by full and thermal mechanisms.
This is closely associated with the reduction of flame temperature according to the CO2 quantity as shown in Figure 2. It should be, however, noted that the maximum NO mole fraction calculated by the full mechanism is larger than that with the thermal mechanism for both the cases of a = 25.0 and 124.0 s⁻¹. This implicitly means the existence of contributions through other mechanisms. Moreover, the fact that in Figure 5(a) the NO mole fraction calculated by the full mechanism is even lower than that by the thermal mechanism in the fuel-rich region makes one imagine the complicated contribution through other mechanisms besides the thermal mechanism. To clearly distinguish the contribution of the respective NO formation mechanisms, the profiles of NO mole production rate through the respective NO formation mechanisms are provided in Figures 6 and 7. In all cases, the contribution of N2O and NO2 mechanisms is seen to be negligible and that of the thermal mechanism is concentrated on only the reaction zone. The negative NO mole production rates in the fuel-lean region of Figures 6 and 7 are associated with the conversion of NO to NO2, as can be seen in Figures 3 and 4. It was shown from the previous study that the difference between full and thermal mechanisms is absolutely caused by the Fenimore mechanism (Nishioka et al., 1994), and the NO production by the Fenimore mechanism is remarkably augmented with increasing strain rate. This trend with strain rate is confirmed as well in Figures 6 and 7. Meanwhile, in Figures 6 and 7 the increase of CO2 quantity causes both positive mole production rates by full and thermal mechanisms to decrease. From Figures 6 and 7, cautious observation reveals that the ratio of the contribution by the Fenimore mechanism to that by the thermal mechanism in the total positive mole production rate becomes much larger with increase in the CO2 quantity and strain rate. These would be expected results since the increase of added CO2 quantity was shown to induce hydrocarbon products of considerable quantity in the foregoing results. It should be noted that it is not the CO2 itself as a diluent but the breakdown of CO2 that gives rise to the above results. It is, moreover, shown in Figures 6 and 7 that there exist regions of negative production rate by Fenimore NO in the fuel-rich region. As a result, the NO mole fractions calculated by the full mechanism were smaller in the fuel-rich region than those by the thermal mechanism in Figure 5(a). Nishioka et al. (1994) showed that the calculated results by the Fenimore mechanism are characterized by a sharp negative NO production and a sharp positive NO production in the concentrated region just around the maximum flame temperature. However, in Figures 6 and 7, even though the positive NO production rate occurs near the maximum flame temperature region, the maximum negative NO production rate is located in a more fuel-rich region and the maximum position of the HCCN mole fraction traces nearly that of the negative mole production rate of NO. Nishioka et al. (1996) reported the importance of the HCN recycle route (HCCO + NO → HCNO + CO), especially in diffusion flames, from the fact that HCCO increases with increasing strain rate. It is, as a result, seen that the negative NO production rate is related to the HCN recycle route. Figure 8 shows the variation of NOx mole fraction with strain rate for 10 per cent CO2 and 30 per cent CO2. As strain rate increases, the NOx mole fraction is shown to be reduced.
The NO profiles with strain rate present the well-known characteristic features observed in the previous researches (Nishioka et al., 1994; Li and Williams, 1999), suggesting that the adopted GRI mechanisms are adequate to describe NO formation in the flame. The overall emission characteristics can be evaluated quantitatively in terms of the emission index (Nishioka et al., 1994; Li and Williams, 1999), defined as
EINO = ∫ W_NO w_NO dx / ∫ W_fuel (-w_fuel) dx
where W_k and w_k represent the molecular weight and mole production rate of species k, respectively. The integration goes from the lower to the upper nozzle exit in Figure 1. Results of this integration are plotted with strain rate and CO2 quantity in Figure 9. As shown in Figure 9(a), in the case of low strain rate the EINO drops steeply according to CO2 addition, while in the case of high strain rate the EINO decreases mildly. In all cases of the changed CO2 quantity, the increase of strain rate shows a steep decrease of EINO at first and then a mild one.
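As an illustration of evaluating such an emission index numerically, a short sketch follows; the fuel-consumption denominator reflects the usual emission-index convention and is an assumption here, since the source equation is not reproduced:

```python
# Rough sketch: emission index from mole production rate profiles integrated
# across the domain (lower to upper nozzle exit). All profile values are
# placeholders, not results from the counterflow solution.
import numpy as np

x = np.linspace(0.0, 0.03, 5)                              # axial coordinate, m
w_no   = np.array([0.0, 1.0e-5, 4.0e-5, 1.0e-5, 0.0])      # NO production, mol m^-3 s^-1
w_fuel = np.array([0.0, -2.0e-3, -8.0e-3, -2.0e-3, 0.0])   # fuel consumed (negative)

W_NO, W_FUEL = 30.0e-3, 2.0e-3                             # kg/mol (NO, H2)

eino = (np.trapz(W_NO * w_no, x) /
        np.trapz(W_FUEL * (-w_fuel), x))                   # kg NO per kg fuel
print(eino * 1000.0, "g NO per kg fuel")
```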
CONCLUDING REMARKS
The present numerical study on flame structure and NO emission characteristics in counterflow diffusion flame of blended fuels of H2/CO2/Ar has resulted in the following conclusions.
1. The radiation effect becomes much more remarkable at low strain rates due to CO2 addition, and the increase of strain rate impedes its effect despite CO2 addition. As the added CO2 quantity increases, flame temperature falls. This is because, firstly, at high strain rates the diluent effect is dominant and, secondly, at low strain rates the breakdown of CO2 produces relatively populous hydrocarbon products and the existence of hydrocarbon products inhibits chain branching.
2. The contribution of NO production by N2O and NO2 mechanisms is negligible and that of the thermal mechanism is concentrated on only the reaction zone. As a result, the contribution of the Fenimore mechanism is remarkable according to CO2 addition and strain rate. The negative NO production in the fuel-lean region is due to the conversion of NO to NO2. The negative NO production rate in the fuel-rich region is due to the HCN recycle route. As the strain rate increases, the NO production rate by the Fenimore mechanism is remarkably augmented. The increase of added CO2 quantity causes both positive NO production rates by Fenimore and thermal mechanisms to decrease, but the ratio of the contribution by the Fenimore mechanism to that by the thermal mechanism in the total NO mole production rate becomes much larger with increase in the added CO2 quantity.
3. At low strain rates the addition of CO2 drops the EINO steeply, while at high strain rates the EINO decreases mildly. It is, consequently, seen that the overall emission characteristics are sensitively reduced as the added CO2 quantity increases at low strain rates. | 2019-04-05T03:32:44.640Z | 2000-11-01T00:00:00.000 | {
"year": 2002,
"sha1": "9e625ab5232741415dfe5d86f7c4de88fd919308",
"oa_license": null,
"oa_url": "https://doi.org/10.1002/er.778",
"oa_status": "GOLD",
"pdf_src": "Wiley",
"pdf_hash": "a3a1533f6a49640ee4f69a438e2ce965690ab9de",
"s2fieldsofstudy": [
"Engineering",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
1178992 | pes2o/s2orc | v3-fos-license | Effect of aqueous solution of stevioside on pharmacological properties of some cardioactive drugs
Background/Aim. Stevioside is a glycoside that supposedly possesses a number of pharmacodynamic effects such as anti-infective, hypoglycemic, along with the beneficial influence on the cardiovascular system. The aim of this study was to determine the effect of rats pretreatment with aqueous solution of stevioside on pharmacological actions of adrenaline, metoprolol and verapamil. Methods. Rats were administered (intraperitoneally 200 mg/kg/day) stevioside as aqueous solution or physiological saline in the course of 5 days, then anaesthetized with urethane and the first ECG recording was made. The prepared jugular vein was connected to an infusion pump with adrenaline (0.1 mg/mL), verapamil (2.5 mg/mL) or metoprolol (1 mg/mL). Control animals, pretreated with saline, in addition to the mentioned drugs, were also infused with the solution of stevioside (200 mg/mL) in the course of recording ECG. Results. The infusion of stevioside produced no significant changes in ECG, even at a dose exceeding 1,600 mg/kg. In the control group, a dose of adrenaline of 0.07 ± 0.02 mg/kg decreased the heart rate, whereas in the stevioside-pretreated rats this occurred at a significantly higher dose (0.13 ± 0.03 mg/kg). In stevioside-pretreated rats, the amount of verapamil needed to produce the decrease in heart rate was significantly lower compared to the control. The pretreatment with stevioside caused no significant changes in the parameters registered on ECG during infusion of metoprolol. Conclusion. The results suggest that pretreatment with stevioside may change the effect of adrenaline and verapamile on the heart rate.
Introduction
Stevia leaves have been used by indigenous peoples in Paraguay and Brazil since before recorded history. Stevia became more widely known after 1887, when it was discovered by the botanist Antonio Bertoni. Due to its sweetness, Stevia has been given many names, including honey leaf, sweet leaf of Paraguay, sweet herb and honey herb 1. Stevia is used most in the countries of South America, much less in Europe, and since 1970 it has been widely used in Japan as a sweetener of various beverages of mass use 2.
The major glycosides found in the leaves of wild Stevia plants are stevioside, rebaudioside A, rebaudioside C and dulcosides A and B 3,4 .Other sweet diterpenoid glycosides which can be isolated include rebaudioside D and E 5 .
The sweet taste of Stevia tea is due to stevioside, a glycoside that supposedly possesses a number of pharmacodynamic effects, such as anti-infective and hypoglycemic actions, along with a beneficial influence on the cardiovascular system and on seborrheic skin and skin with acne. The importance and topicality of scientific studies on Stevia are emphasized in many papers 6-13.
Curi et al. 14 reported that Stevia extracts from 5 g of dried leaves administered thrice a day for 3 days to healthy volunteers lowered the plasma glucose levels. However, care should be taken when interpreting these results, as the plasma glucose level of the Stevia-treated group was already significantly lower before the administration of the extract 14.
Intravenous administration of stevioside [95% pure, in doses of 50, 100 or 200 mg/kg body weight (bw)] resulted in a significant hypotensive effect in spontaneously hypertensive rats without adverse effects on heart rate or serum catecholamine levels 15. In a study with humans, stevioside (250 mg thrice a day) was administered for 1 year to 60 hypertensive volunteers 16. After 3 months the systolic and diastolic blood pressure significantly decreased and the effect persisted during the whole year. Blood biochemistry parameters including lipids and glucose showed no significant changes. No significant adverse effect was observed and quality of life assessment showed no deterioration. The authors concluded that stevioside is a well-tolerated and effective compound that may be considered as an alternative or supplementary therapy for patients with hypertension. Liu et al. 10 reported that the underlying mechanism of the hypotensive effect of administered stevioside in dogs (200 mg/kg bw) was inhibition of Ca++ influx from extracellular fluid. Also, Melis and Sainati 6 suggested that stevioside induced in rats a decrease in mean arterial pressure and promoted renal vasodilatation by lowering renal vascular resistance. The vasodilator effect is likely to occur via blockage of Ca++ channels, similarly to verapamil. Stevioside reduces blood pressure by decreasing the vascular resistance via inhibition of extracellular Ca++ influx and by stimulating the release of a vasodilator prostaglandin. Stevioside also induces diuresis, natriuresis and reduction of Na+ reabsorption, resulting in the reduction of extracellular fluid volume 17.
Preliminary human studies suggest that stevioside has an influence on the function of the cardiovascular system, and especially that stevioside can reduce hypertension 18,19.
The European Food Safety Authority evaluated the safety of steviol glycosides, extracted from the leaves of the Stevia rebaudiana Bertoni plant, as a sweetener and expressed its opinion on March 10, 2010. The Authority established an acceptable daily intake for steviol glycosides, expressed as steviol equivalents, of 4 mg/kg bw/day. On November 11, 2011, the European Commission allowed the usage of steviol glycosides as a food additive, establishing maximum content levels for different types of foods and beverages 20.
The aim of this study was to determine whether stevioside can cause a significant interaction with some cardioactive drugs (adrenaline, metoprolol and verapamil) and modify their effect on the heart rate.
Methods
The experiments were carried out on adult Wistar rats of both sexes, bw 180-260 g. Before and during the experiment the animals had free access to food and water, with a 12-hour cycle of light and dark periods.
The pretreatment period lasted 5 days, during which the animals were injected intraperitoneally (ip) with daily doses of either saline solution (1 mL/kg bw) in the control group (C), or an aqueous solution of stevioside (200 mg/kg bw) in the experimental group (S).
On the 5th day, 2 h after the last injection, the animals were anaesthetized with 25% urethane and connected to the ECG to take the initial recording. The jugular vein of the animals was prepared and the animals were connected to the infusion pump.
The animals from the group C were connected to the infusion pump containing one of the investigated cardioactive drugs: adrenaline (0.1 mg/mL), metoprolol (1.0 mg/mL), verapamil (2.5 mg/mL), or aqueous solution of stevioside (200 mg/mL).
The animals from the group S were connected to the infusion pump containing one of the investigated cardioactive drugs: adrenaline (0.1 mg/mL), metoprolol (1.0 mg/mL), or verapamil (2.5 mg/mL).
The infusion rate for verapamil was 0.1 mL/min, and the ECG was monitored for 12 minutes during verapamil infusion. The infusion rate for the other drugs (adrenaline, metoprolol or aqueous solution of stevioside) was 0.2 mL/min, and the ECG was likewise monitored during 12 minutes of infusion. ECG analysis was performed in a single-blinded fashion. In all animals in group C and group S, the ECG was recorded before application of the cardioactive drugs. The investigator who conducted the experiment did not interpret the ECG; it was interpreted by an investigator who did not know the treatment.
The ECG paper speed was 25 mm/sec, i.e., one small block was 40 msec. On the basis of the duration of infusion and the change in the ECG it was possible to calculate the amount of the drug required to produce the observed changes. This amount was normalized to the animal's bw to obtain the specific dose in mg/kg bw.
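To make this calculation concrete, here is a minimal sketch in Python; the example values are made up for illustration and are not data from this study:

def dose_mg_per_kg(conc_mg_per_ml, rate_ml_per_min, time_s, bw_g):
    # total drug infused (mg) = concentration (mg/mL) x rate (mL/min) x elapsed time (min)
    infused_mg = conc_mg_per_ml * rate_ml_per_min * (time_s / 60.0)
    # normalize to body weight expressed in kg
    return infused_mg / (bw_g / 1000.0)

# e.g., verapamil (2.5 mg/mL) at 0.1 mL/min, first ECG change after 240 s in a 200 g rat:
print(dose_mg_per_kg(2.5, 0.1, 240, 200))  # -> 5.0 mg/kg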
For all animals, the control ECG was recorded 5 min before the application of the investigated cardioactive drugs. Therefore, each animal served as its own control (Figure 1a).
The infusion pump and ECG machine were started at the same time. During the application of the cardioactive drugs, changes in the ECG record were monitored.
The changes that were observed on the ECG were: 1) first changes – an isolated change in the heart rate seen in the ECG recording; these were most frequently extrasystoles (Figure 1b) or atrioventricular block (Figure 1c); 2) second changes – frequent changes in the heart rate seen in the ECG recording (bradycardia was the most frequent change in the heart rate; Figure 1d); 3) third changes, or toxic effects – changes in the heart rate seen in the ECG recording in the form of extreme bradycardia (Figure 1e) or asystolia (cardiac arrest) (Figure 1f). Extreme bradycardia was defined as a heart rate of less than 100 beats per minute. Statistical analysis was performed by Student's t-test for small independent samples. Values of p < 0.05 were considered statistically significant.
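The same test is straightforward to reproduce in Python; a minimal sketch (the dose values below are invented for illustration, not the study's raw data; scipy's two-sample t-test assumes equal variances by default, matching the classical Student's t-test):

from scipy import stats

control = [1.5, 1.9, 2.2, 1.7, 2.1, 1.8]  # hypothetical doses (mg/kg), saline-pretreated group C
treated = [0.9, 1.1, 1.3, 1.0, 1.2, 1.4]  # hypothetical doses (mg/kg), stevioside-pretreated group S

t, p = stats.ttest_ind(control, treated)  # Student's t-test for independent samples
print(t, p, p < 0.05)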
This experiment was carried out in accordance with the ethical principles of working with experimental animals (AEC approval Faculty of Medicine, University of Novi Sad, Novi Sad, Serbia).
Results
No toxic effect was observed in any case of stevioside infusion, not even after doses exceeding 1,600 mg/kg. The ECG record of animals in the control group after infusion of stevioside was similar to the ECG record before infusion of stevioside. A decrease in heart rate was observed in 2 out of 6 animals, and only after 240 s of infusion, which corresponded to the dose of 640 mg/kg. Further infusion of stevioside produced normalization of the heart rate after 180-300 s.
Infusion of adrenaline decreased the heart rate in all of the animals in the control group after the dose of 0.07 ± 0.02 mg/kg. None of the animals died, but attenuated amplitudes of the QRS complex were observed in two cases after an adrenaline dose of 0.56 ± 0.014 mg/kg. During the infusion, which lasted about 12 min, no ECG changes that could indicate the occurrence of toxicity were detected even after an adrenaline dose of 0.95 mg/kg.
Pretreatment of rats with stevioside changed the sensitivity of the myocardium to adrenaline (Table 1). Namely, the decrease in the heart rate occurred significantly later compared with the control group of animals, at a dose of 0.13 ± 0.03 mg/kg. In this group of animals, an increased toxic effect of adrenaline was observed. Two out of 6 animals died after infusion of adrenaline doses of 0.82 mg/kg and 0.92 mg/kg, respectively.
The pretreatment of rats with stevioside decreased the sensitivity of the heart to adrenaline, and increased its toxicity.
The effect of verapamil on the ECG of the control group and the stevioside-treated group is given in Table 1. The infusion of verapamil in the control group caused the first reaction at the dose of 1.84 ± 0.38 mg/kg, while the second reaction was caused by the dose of 3.78 ± 0.89 mg/kg. Toxic effects during the infusion of verapamil in the control group were registered after the dose of 7.53 ± 1.45 mg/kg. All three evaluated parameters (the first, second and third reactions of the heart to verapamil infusion) occurred significantly earlier in the stevioside-pretreated group compared with the control group.
The effect of metoprolol on ECG of the control animals and those pretreated with stevioside is given in Table 1.
Metoprolol in the control group caused bradycardia, but this change of the heart rate was not significant. It was similar in the group of animals pretreated with stevioside.
Toxic effects of metoprolol were registered in neither the control nor the experimental group.
Discussion
The present data confirm that pretreatment with stevioside plays an important role in increasing or decreasing the sensitivity of the myocardium of experimental animals to the studied drugs.
The infusion of an aqueous solution of stevioside produced no significant changes in the ECG. Only in two cases was a temporary decrease in heart rate observed, and the initial value was restored within 3 to 3.5 min of continuous infusion of stevioside. Similar results were also observed in our first study, when the concentration of stevioside in the infusion was 20 mg/mL 18. Chan et al. 15 found that stevioside applied intravenously in the dose of 200 mg/kg in rats was effective in blood pressure reduction and produced no change in the heart rate. The heart rate in rats was about 350 beats per minute. Clinical studies also show that stevioside applied at a dose of 250 mg three times a day orally for one year in people with hypertension exerts an antihypertensive effect but has no effect on the heart rate 16. These studies confirm our result that stevioside, even at a high dose of 1,600 mg/kg, did not significantly affect the heart rate. In both the control and the stevioside-pretreated rats, a decreased heart rate was observed during the adrenaline infusion. This result can be explained by the fact that rats are laboratory animals in which α1-adrenergic receptors are very sensitive to adrenaline, resulting in significant vasoconstriction and reflex bradycardia during infusion of adrenaline. This fact was first pointed out by Turner in 1965 21.
In this study, the pretreatment of rats with stevioside reduced the sensitivity of the myocardium to adrenaline (bradycardia occurred later compared to the control).
In our previous study, when the animals were pretreated with a lower dose of stevioside (20 mg/kg bw), toxic effects (asystolia, cardiac arrest) were not reported 18. But the pretreatment with a higher dose of stevioside (200 mg/kg bw) increased the sensitivity of the myocardium of rats to adrenaline, and cardiac arrest in stevioside-pretreated rats occurred after the administration of adrenaline in a dose of 0.81 ± 0.08 mg/kg bw.
The mechanism of the hypotensive action of verapamil is the blocking of calcium channels. The consequence of calcium channel blocking in the myocardium is the appearance of bradycardia. Stevioside, as reported in several papers, is a calcium channel blocker. In stevioside-pretreated animals, an increased sensitivity of the myocardium to verapamil was observed and the drug toxicity was significantly increased, too. Thus, a significantly smaller amount of verapamil was required to cause bradycardia in the stevioside-pretreated animals, and a significantly smaller amount was also sufficient to cause toxic effects (cardiac arrest). In our previous study, when the animals were pretreated with stevioside at a dose of 20 mg/kg bw, no cardiac arrest was reported after administration of verapamil. This indicates that the cardiodepressive effect of stevioside in rats is dose-dependent.
Taking into account these facts, it can be concluded that verapamil and stevioside applied together mutually potentiate each other's effects 6,10,15,16,18.
Pretreatment with stevioside showed a tendency to reduce the sensitivity of the myocardium to a beta-blocker, but a significant decrease of the heart rate did not occur within 12 min of metoprolol infusion. The average decrease of heart rate was 14%, and only in one animal was it 25% of the initial value after 12 min of infusion. No toxic effect was observed in either the control or the stevioside-pretreated animals. The influence of metoprolol on the heart rate in our previous study, when the animals were pretreated with stevioside at a dose of 20 mg/kg bw, was similar 18.
An additional reason for the interaction of stevioside and cardioactive drugs might be the influence of stevioside on the regulation of the glucose level, as described in the literature 12. Namely, treatment with stevioside results in an increase in C-peptide concentrations in healthy and diabetic rats. Stevioside influences the function of the endocrine pancreas and stimulates insulin secretion 12. The results of Jeppesen et al. 9 in experiments on rats indicate that treatment with stevioside exhibited hypoglycemic, insulinotropic and glucagonostatic effects, which might influence the sensitivity of the heart to the actions of cardioactive drugs.
Conclusion
Our results suggest that pretreatment with stevioside might change the effects of adrenaline and verapamil on the heart rate.
Fig. 1 – The changes that were monitored during infusion of the drugs tested.
Table 1 Effect of adrenaline, verapamil and metoprolol on the rat heart function monitored via ECG changes
| 2018-04-03T03:10:41.393Z | 2014-07-01T00:00:00.000 | {
"year": 2014,
"sha1": "a8a23bded5da9e2da3de89e500da8aff6e467b37",
"oa_license": "CCBYSA",
"oa_url": "http://www.doiserbia.nb.rs/ft.aspx?id=0042-84501400014V",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a8a23bded5da9e2da3de89e500da8aff6e467b37",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
235395700 | pes2o/s2orc | v3-fos-license | The hazards of unconfined pyroclastic density currents: a new synthesis and classification according to their deposits, dynamics, and thermal and impact characteristics
Pyroclastic density currents (PDCs) that escape their confining channels are among the most dangerous of volcanic hazards. These unconfined PDCs are capable of inundating inhabited areas that may be unprepared for these hazards, resulting in significant loss of life and damage to infrastructure. Despite their ability to cause serious impacts, unconfined PDCs have previously only been described for a limited number of specific case studies. Here, we carry out a broader comparative study that reviews the different types of unconfined PDCs, their deposits, dynamics and impacts, as well as the relationships between each element. Unconfined PDCs exist within a range of concentration, velocity and temperature: characteristics that are important in determining their impact. We define four end-member unconfined PDCs: 1. fast overspill flows, 2. slow overspill flows, 3. high-energy surges, and 4. low-energy detached surges (LEDS), and review characteristics and incidents of each from historical eruptions. These four end-members were all observed within the 2010 eruptive sequence of Merapi, Indonesia. We use this well-studied eruption as a case study, in particular the villages of Bakalan, 13 km south, and Bronggang, 14 km south of the volcano, which were impacted by slow overspill flows and LEDS, respectively. These two unconfined PDC types are the least described from previous eruptions, but during the Merapi eruption the overspill flow resulted in building destruction and the LEDS in significant loss of life. We discuss the dynamics and deposits of these unconfined PDCs, and the resultant impacts. We then use the lessons learned from the 2010 Merapi eruption to assess some of the impacts associated with the deadly 2018 Fuego, Guatemala eruption. Satellite imagery and media images supplementing fieldwork were used to determine the presence of both overspill flows and LEDS, which resulted in the loss of hundreds of lives and the destruction of hundreds of buildings in inundated areas within 9 km of the summit. By cataloguing unconfined PDC characteristics, dynamics and impacts, we aim to highlight the importance and value of accounting for such phenomena in emergency management and planning at active volcanoes.
Pyroclastic density currents (PDCs) are the deadliest volcanic hazard, accounting for nearly a third of all historical volcano-related fatalities (Brown et al. 2017). They are also some of the most complex and unpredictable volcanic phenomena, which makes accurate forecasting of their occurrence, characteristics and the area impacted difficult. In particular, the ability for PDCs to surmount topography and travel outside of river valleys can place them in direct contact with communities on the flanks of volcanoes. They can destroy whole towns and kill tens of thousands of people (e.g., St Pierre, Martinique, 1902: Lacroix, 1904) but little is known about their internal dynamic processes, and the ability to measure their dynamics in real time during eruptions does not yet exist. As a result, they pose a significant challenge for emergency management planning at explosive volcanoes in densely populated regions.

… and covering a widespread area (Rosi et al., 1993). Similar to blasts, the intensity of these PDCs wanes on the margins, where they can display non-binary impacts, with survival of people and structures indicating characteristics consistent with slow margins of dense PDCs as well as low-energy, dilute surges (Rosi et al., 1993). For example, the 1902 column collapse from La Soufrière, St Vincent, showed extensive damage in areas near source (Anderson and Flett 1903, Baxter 1990), but was not capable of overturning trees or sturdy structures by the time it reached heavily inhabited areas, ~8 km from source (Baxter 1990). Despite this, there were over 1500 deaths as well as nearly 200 hospitalizations (with 80 subsequent deaths), largely from burns and the asphyxiating effects of the ash (Will 1903, Baxter 1990). Similarly, in the 1902 Mount Pelée blast eruption (which had a death toll of ca. 29,000), people in St Pierre, ~8 km from source, were severely burned by PDCs and fires, with many laying prone in the "pugilistic attitude" frequently associated with deaths due to temperatures over 200 °C (Anderson and Flett 1903, Will 1903, Lacroix 1904, Baxter 1990).

Detached pyroclastic surges that decouple from and/or outrun their parent flows can also maintain high dynamic pressure despite their low concentration. These surges are more capable of overcoming topographic barriers than their parent dense PDC, as was the case in the deadly June 1991 Unzen eruption, in which dilute surges detached from their parent flows and outran them by 0.8 km, unexpectedly reaching an inhabited area where 43 people were killed (Nakada and Fujii 1993). The dynamic pressures of these surges were high enough (up to 8 kPa in some parts of the surges; Clarke and Voight 2000) to destroy 50 houses, flatten trees, and move cars tens of metres (Nakada and Fujii 1993, Cooper 2018). Similarly, some high-energy detached surges in the 1994 Merapi eruption maintained dynamic pressures that remained high enough to topple masonry walls, knock down trees, strip roof tiles, and destroy bamboo huts 5 km from source (Abdurachman et al. 2000), which we estimate requires dynamic pressures of at least 2 kPa.

Deposits from these dilute, but high energy, surges are generally quite thin, but can reach greater thicknesses in depressions and valleys.
Following the Unzen eruption, surge deposits were typically no more than 20 cm thick (Nakada and Fujii 1993) and were sometimes only a few centimetres thick, in contrast to the up to 10 m thick deposits from the parent flows (Miyahara et al. 1992). Deposits from the 1997 blast at Soufrière Hills ranged from a few cm to 3 m outside of channels, while deposits were up to several metres thick in river valleys (Belousov et al. 2007). High-energy blasts often leave a distinctive two-layer deposit (e.g., Soufrière Hills 1997, Merapi 2010) consisting of a basal, poorly-sorted, coarse layer that typically includes ripped-up clasts of the underlying surface, overlain by a much finer-grained, better-sorted deposit with some internal bedding (Brown and Andrews 2015; Komorowski et al., 2013).
Low-energy detached surge (LEDS)
LEDS represent the low-velocity, low-concentration end of the unconfined PDC spectrum. The dilute nature of these surges allows them to easily overcome topographic barriers. In recorded events, these surges are most commonly seen moving laterally from their confined parent flows and escaping channels, leading to unexpected inundation of inhabited areas. How, why or where along the flow path a surge detaches is typically related to a change in the underlying syn-eruptive topography and/or the pulsative nature of the eruption, which can act to reduce channel capacity or redirect the channel away from the straight-line flow inertia, as described in the Introduction.

Due to their low velocity and concentration, and thus low dynamic pressure (typically <2 kPa), LEDS damage to buildings, infrastructure or vegetation is typically minor (with the exception of secondary damage through fire). For example, in the June 1997 Soufrière Hills eruption, LEDS were not capable of blowing down trees or poles at distances greater than 2 km from source (Cole et al. 2002), and damage to buildings was caused almost exclusively by temperatures up to around 400 °C (Baxter et al. 2005). The impact for humans can range from minor through to fatal burns injuries, with the chances of survival influenced by the LEDS temperature and duration as well as how much skin is exposed (Fujii and Nakada 1999). In some events a "sear zone" or "singe zone" of charred vegetation was seen to extend up to 25 m beyond the distal margins of surge deposits (Loughlin et al. 2002a).

The deposits of LEDS are characterized by their relative thinness and poor preservation in the long-term geologic record. In most historical cases, these surges have been recorded as thin as a few centimetres and no thicker than 20 cm, even when associated with metres-thick, channel-confined parent flow deposits.

Single eruptive events may contain both high- and low-energy detached surges, as seen in the 1991 Unzen eruption. In the June event, deadly high-energy surges killed 43 people and had dynamic pressures large enough to sweep away cars and trees in one area (Cooper 2018), while low-energy surges were capable of burning, but not bending or breaking, trees in other areas (Nakada and Fujii 1993). Similarly, in the September Unzen event, high-energy surges in some locations were powerful enough to sweep away cars and trees damaged in the earlier eruption, while in another location low-energy surges caused damage only through heat, melting vinyl and charring building windows on the volcano-facing side of the buildings (Fujii and Nakada 1999). Dynamic pressures in these events may be up to 8 kPa in the high-energy surges, and lower than 2 kPa in the low-energy surges (Clarke and Voight 2000).

Field visits to the Merapi area three weeks after the 5 November 2010 event, and several times over the years that followed, allowed some of the authors [SFJ, SJC, JCK, and PJB] to collect detailed field data on confined and unconfined PDCs produced during the eruption. The geology, dynamics and impacts associated with the directed blast that affected a large swathe of the upper flanks to ~8 km from the summit have been described elsewhere; here we focus on the generation mechanisms, deposits, impacts and inferred dynamics associated with i) slow overspill flows, and ii) low-energy detached surges in villages along the Gendol river channel more than 10 km to the south of the summit.
Fast overspill flows were also observed along the Gendol, but impacts were total, with all buildings, vegetation and victims buried with no observable remains. Uniquely, our field studies combined geological and engineering expertise in collecting and interpreting data on the deposits and physical impacts of the unconfined PDCs, which could be cross-referenced with medical data on the nature of burns injuries to victims. Some of the geology and impacts have been published previously; here we present data not included in those studies. We hope that these data and interpretations are valuable for emergency management planning in providing the first multi-disciplinary case study of unconfined PDCs and their impacts.
Case study sites
We focus on two distinct types of unconfined PDC, for which we discuss the associated generation mechanisms, dynamics, impacts and deposits at two villages (Figure 2).
Dynamics
The height of the unconfined PDC as it was emplaced remained above 10 m throughout Bakalan, as evidenced by palms and trees that were singed to their full height (e.g., Figure 3a; photo taken by Jenkins 25 days after impact, on 30 November 2010).
Impacts
Direct damage from the LEDS was minimal, with buildings remaining largely intact but interior and exterior plastic melted, furniture charred and paper singed. Although the LEDS were not hot enough to directly ignite these flammable objects, fire was the cause of total and partial destruction of buildings in the village. Of the 48 buildings impacted by LEDS in Bronggang, seven timber buildings were completely destroyed and a further five masonry buildings, with timber frames and tiled roofs, were partially destroyed, all by fire rather than direct damage from the LEDS (Figure 5a). Firebrands (embers from burning logs within the parent PDC) carried within the surge ignited flammable materials such as hay in animal sheds and sticks and coconut husks in outside lean-to wooden kitchens (Figure 5b), with fires beginning in these flimsy wooden structures and then rapidly spreading into the adjacent houses. The large ventilation gaps also allowed firebrands to travel with the ash inside several houses, as evidenced by ignited mattresses or sofas, which smouldered without causing the houses to catch fire (Figure 5c). In one building, a small rupture in the gas tank of a motorbike stored inside greatly increased the availability of fuel, leading to a fire that completely destroyed the building.

Prior to the eruption, ~400,000 people were rapidly evacuated to emergency shelters (Surono et al. 2012).

Figure 5: the locations of photos (b) and (c) are marked by the corresponding letters in (a).
Deposits
We identified a complex stratigraphy at the base of the sabo wall, just inside the village of Bronggang, consisting of four different main depositional units (Figure 6):
• At the base of the sequence, there were patches of a dry, very fine grained, very well-sorted, loose grey ashfall deposit 1 cm thick, which we interpreted as pre-5 November tephra.
• The third unit was a dark grey, fine ash, massive, well-sorted and normally graded 4 cm thick unit with chunks of charcoal and a locally erosive lower contact. We interpreted this unit as a second LEDS deposit, contemporaneous with or correlated to one of the surge units seen in the main Gendol channel. It was again overlain by a 1 cm thick, very fine grained, pinkish-tan ashfall layer.
• The uppermost unit was a very poorly sorted, massive, compact, pinkish brown, normally to symmetrically and even reversely graded, 35-45 cm thick fines-rich unit. This unit contained large dense clasts up to 23 cm in diameter scattered on the top surface and also formed a central coarser clast-rich zone with a more pinkish matrix. We interpreted this unit as resulting from a minor overspill lobe of a valley-confined, block-rich PDC. Field evidence suggests that this PDC was not very mobile and was stopped by the ~30 cm tall stone-wall curb of the main village road on the Gendol side (Figure 6). This unit was correlative with, and thickened to, a 93 cm thick sequence directly on top of the sabo wall. It was overlain by a 5-6 cm thick, very well-sorted, massive, fine pinkish-tan ashfall layer with a vesicular texture and perhaps some poorly preserved accretionary lapilli.
We interpreted these deposits as representing three different minor PDC overspills, and associated ash cloud fallout, from the Gendol main channel, the lower two representing deposition from the dilute surges (LEDS).

Google Earth imagery, acquired ~5 months after the eruption, showed that most roofing material (metal sheets) in San Miguel Los Lotes had been scavenged from PD and NVSD buildings in the intervening months (Fig. 8a). From satellite and media imagery of the two overspill locations, we were able to identify at least two types of unconfined PDC, and their impact. Flows could be distinguished as lahars or PDCs on the basis of: i) the increased debris (e.g., branches, bricks) that they carried, ii) evidence of deposit surface remobilisation, iii) the fact that they were clearly wet in comparison with photos of pristine PDC deposits, and iv) the correlation with mechanical impact above the flow (lahars not imparting any mechanical impact above their flow surface: Figure 9a).

Approximately two-thirds of the northwestern area of the village of San Miguel Los Lotes was almost totally destroyed by overbank PDCs on 3 June (Figure 8a), which contained large boulders many metres in diameter inside an ash matrix and left massive, poorly sorted deposits up to 2 or more metres deep (Figure 9d and e). A small number of buildings at the northern end of the town escaped total destruction, and buildings at the edge of the zone of total destruction towards the south and east of the town mostly suffered partial damage, but their structures were still identifiable on satellite imagery (Figure 8a).
Inferred dynamics
The maximum height of the PDCs was not easy to determine from media images. There were a number of tall (>10 m) trees in both affected areas that appear singed by the PDC, implying a current height of more than 10 m as the PDC entered the town and golf resort. Media videos of the PDC flowing past a bridge in the adjacent channel just to the south of San Miguel Los Lotes suggest that the currents maintained similar heights in the channel. However, trees in the southern half of San Miguel Los Lotes and towards the peripheries of impacted areas at La Reunión golf resort were not singed to their full height, with canopies still showing as green and unaffected in satellite imagery while the full height of buildings was affected. Thus, the total PDC height decreased from >10 m to between 2 and 10 m as the flow slowed down and came to a stop.

At the golf resort, the energy of the overspill flows was low enough that they could be largely blocked by buildings, and where the flows entered buildings, they only moved objects such as chairs a few tens of centimetres, with the associated surges leaving countertop items such as bottles coated in ash but upright (Figure 9a). This suggests that both the dense and dilute PDC components were traveling at ~1 m/s (certainly less than 5 m/s, based on the categories outlined in Figure 1). In San Miguel Los Lotes, deposits caused partial or total roof collapse in places. It is not clear from remotely derived imagery if isolated fires were the result of embers carried within the PDC, as at Merapi, or related to the heat of the deposits. Flow deposits at Fuego contained large boulders, but field studies showed that most of these boulders were not juvenile, and therefore unlikely to have provided a concentrated heat source that may also have triggered fire.

Some casualties displayed the pugilistic attitude associated with burns that have penetrated below the skin layer to involve the limb muscles (at least 200 °C) (Baxter, 1990). Not all casualties displayed this attitude, despite wearing similar clothing (t-shirt and trousers) and being affected by LEDS in the same town. Since there was no observed evidence of fires near these casualties, the temperatures necessary to cause these injuries can be attributed to the LEDS. The evidence from thermal effects therefore suggests that there was variability in temperature and/or duration of the impact across the impacted area, reflecting the uneven inundation of PDCs across the village area.

While remotely assessing impacts visible in satellite, aerial and media images in this case study was valuable, access to photos and on-the-ground experience in the aftermath of the Fuego eruption provided vital detail that allowed us to confirm or refute inferences made from media images and added information not visible in remote imagery. Ideally, remote and field approaches are combined, with remote approaches providing information on the immediate post-impact situation and for areas that cannot be easily accessed, as well as providing the larger-scale overview, which can then be refined and ground-truthed with informed field visits that provide more detailed information, background and context. Studies relating impacts and PDC dynamics with the deposits provide an evidence base from which likely PDC dynamics and impacts can be forecast.
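The dynamic pressures inferred throughout this synthesis scale with the bulk density of the current and the square of its velocity (P_dyn = ½ρv², the convention commonly used in the PDC impact literature). A minimal sketch of this relationship follows; the density and velocity values are illustrative assumptions, not measurements from these eruptions:

def dynamic_pressure_kpa(bulk_density_kg_m3, velocity_m_s):
    # P_dyn = 1/2 * rho * v^2, converted from Pa to kPa
    return 0.5 * bulk_density_kg_m3 * velocity_m_s**2 / 1000.0

# a dilute surge (assumed rho ~ 5 kg/m^3) at 40 m/s versus 20 m/s:
print(dynamic_pressure_kpa(5, 40))  # -> 4.0 kPa
print(dynamic_pressure_kpa(5, 20))  # -> 1.0 kPa

The quadratic velocity dependence is why halving a surge's speed cuts its dynamic pressure by a factor of four, moving it across the approximate damage thresholds (~2 kPa and ~8 kPa) discussed above.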
The range of observed characteristics across different eruptions but within unconfined PDCs of the same type (Table 1) can be related to a few potential factors. The size of the magma batch and the volume of erupted material may affect PDC temperatures at their generation, leading ultimately to differences in source temperatures between eruptions, before PDCs become unconfined. The PDC generation mechanism appears to affect the temperature of ensuing PDCs: collapsing lava domes (e.g., Soufrière Hills 1997; Merapi 1994, 2006, 2010) are correlated with higher initial PDC temperatures than a sector collapse (e.g., Fuego 2018), and this appears to play a stronger role than distance from the volcano. For example, PDCs at the affected sites near Fuego appear to have been of either similar or lower temperature despite being closer to the volcano (~8 km) than the sites at Merapi (~13 km). The velocity (and therefore the dynamic pressure) of PDCs seems to be strongly influenced by the generation mechanism itself.

In all recorded LEDS, localised fires were ignited, and at Merapi this could be attributed to embers (firebrands) carried within the LEDS (Jenkins et al., 2013). It is reasonable to infer that in situations with fewer firebrands present (e.g., fewer trees consumed in the surge path), the likelihood of building damage from LEDS may be lower. Building typology is also a factor affecting the level of damage sustained during LEDS, with timber buildings much more likely to be damaged in a LEDS-caused fire than masonry buildings. Considering these factors, fire damage resulting from LEDS is strongly context-dependent.

By their nature, unconfined PDCs are difficult to forecast because they inundate areas beyond the topographic lows that are typically given priority in volcanic hazard planning. As numerical models improve, they may be better able to recreate, and therefore forecast, the path and dynamics of unconfined PDCs. In the meantime, one approach in mitigation planning has been to apply a 'buffer' (e.g., Neri et al., 2015) around a PDC-prone channel to highlight threatened populations and infrastructure, with the aim of implementing long-term land-use or short-term proactive evacuation measures for communities close to topographic lows. The extent of this buffer is difficult to define, and is a function of the channel topography, PDC volume and local PDC mass flux/velocity as well as preceding events in the eruption sequence (e.g., the infilling by previous PDC deposits). For directed blasts, a buffer is clearly not appropriate because of their wide-reaching and topography-mantling nature; in these cases an energy cone model that defines distance from the summit may be useful. For those high-energy surges that are not unconfined from origin, e.g., Unzen 1991, this type of model is less useful as it is unable to identify locations of surge detachment. For overspill PDCs, we found they reach a maximum lateral distance of 800 m (Table 1) from the flowpath channel. However, we recognise that buffer extents are likely to be unique to the specific eruptions and require consideration of the topography, channel path and likely eruptive style. Reliance on geological deposits for defining buffers and potentially hazardous areas must be cognisant of the thinner deposits that reflect unconfined PDCs that cannot be preserved but are still deadly.
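As a rough illustration of the energy cone approach mentioned above (the collapse height and H/L mobility ratio below are hypothetical values chosen for illustration, not parameters from this study): the model drops a cone of slope H/L from the collapse height and predicts inundation wherever the cone lies above the topography; over flat ground the runout reduces to a simple ratio.

def energy_cone_runout_km(collapse_height_m, h_over_l):
    # over flat ground the cone of slope H/L reaches the surface at L = H / (H/L)
    return collapse_height_m / h_over_l / 1000.0

# e.g., a 500 m collapse height with an assumed mobility ratio H/L = 0.1:
print(energy_cone_runout_km(500, 0.1))  # -> 5.0 km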
Volcanic hazard and risk assessment relies upon empirical data from past eruptions and their impacts. However, we are often limited in the amount of data that can be collected shortly after an event, while deposits and impacts are preserved, because of safety and access limitations. In this study, we have used lessons learned from remote and ground surveys of PDC dynamics following the Merapi 2010 eruption to provide a similar assessment for Fuego 2018. Remote assessment at Fuego, using satellite imagery and media images to supplement a field study, allowed for many similar determinations of PDC dynamics and resultant impacts as at Merapi. | 2021-06-10T08:46:18.523Z | 2021-06-08T00:00:00.000 | {
"year": 2021,
"sha1": "bfe9748d102bf86137a9d675f5e504b124e3962d",
"oa_license": "CCBY",
"oa_url": "http://eartharxiv.org/repository/object/2431/download/4994/",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "bfe9748d102bf86137a9d675f5e504b124e3962d",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": []
} |
247557275 | pes2o/s2orc | v3-fos-license | DIGITAL COMPETENCES FOR THE POLICE – A NEW ECDL
In recent years, topics such as Big Data, data analytics and the GDPR have become more and more popular. However, companies – especially SMEs – and also the public sector have difficulties in dealing with these topics, which might change their daily operative business or even their whole business processes. Nevertheless, digital competences are still not widespread among citizens. The Austrian Computer Society (OCG) therefore tries to enhance digital skills through several activities. One of the most important initiatives is the European Computer Driving Licence (ECDL), currently being renamed the European Certificate for Digital Literacy, which celebrated its 20th anniversary last year. Since 2018, every upcoming police man/woman receives the ECDL certificate during his/her three-year apprenticeship, focussing on IT security, computational thinking and basic digital literacy knowledge.
Introduction
As the European Union already stated in 2016 [1], Europe is facing a huge gap in digital skills among its citizens. On average, 44% of European citizens lack digital competences, as shown in Figure 1. This number has not changed much over the last decade, although basic digital competences are becoming more and more mandatory in the workforce. Several studies [2,3] predict that within the next five years more than 90% of all jobs will need at least basic digital skills. Hence, initiatives such as the Digital Skills and Jobs Coalition have been founded by the European Union. This initiative – based on a partnership among different stakeholders – has one shared purpose: to attract young people to ICT and to reduce the digital skills gap.
As depicted in Figure 1, there is still great variety among the EU28 countries with respect to digital skills. To some extent there is a clear west/north to east/south decline. While countries such as Denmark, Luxembourg or the Netherlands have very good coverage of digital skills (above 77%), in countries such as Bulgaria or Romania more than 75% of citizens lack basic skills.
In the EU28, more than 8.9 million people were working as ICT practitioners and ICT mechanics in 2014 [2]. It is obvious that digital skills are an important asset for the whole workforce, not only for the ICT sector itself, which accounts for approximately 48% of these practitioners.
In [2] it is further stated that 37% of the labour force have no or only low levels of digital skills. Figure 2 shows the estimated growing number of vacancies requiring e-skills. The study further emphasizes that approximately 750,000 more jobs could be generated if the needed skills were available. The three big countries UK, Italy and Germany alone account for almost 60% of all such vacancies in Europe. The study concluded that in order to overcome this shortage of ICT professionals, but also to enable effective e-leadership, people need strong ICT and digital skills. The whole European ICT ecosystem is urgently called upon to tackle this problem.
An OECD paper [6] stated that 95% of all workers in large businesses in OECD countries use the internet as part of their jobs. In light of such predictions, it becomes even more important that governments react in time with an action plan to enhance digital skills among all citizens. Already in 2001, the European Union specifically stressed the need to develop digital skills in its eEurope Action Plan.
There is the potential that a high number of tasks might increasingly be automated over the next few years [7]. However, on average less than 10% of all jobs are at risk of being replaced by machines.
Another indicator of the grade of digitalization is the so-called DESI (Digital Economy and Society Index), which comprises the following dimensions: connectivity, human capital, use of internet, integration of digital technology, and digital public services. The authors of [9] stated in 2016 that nowadays all jobs require ICT know-how, except two professions – dishwashing and food cooking. It is obvious that almost all workers (more than 90%) have access to and use the internet as part of their jobs.
The country-specific results showed a slight improvement in human capital in Austria. In the field of IT-skilled personnel, Austria improved its ranking from 10th (2016) to 8th (2017), while it lost one place (down from 3rd to 4th) in the number of STEM graduates. Especially in the field of informatics, graduate numbers remained quite stable over the last decade. There was no significant increase visible among Austrian universities.
Another survey asked (top) management in the Austrian IT sector about the skills required for today's jobs. At the top of these requirements, across the board and by far, was "IT know-how", followed by "expert knowledge". "Programming know-how" was also among the top 5 needs [5].
Therefore, people have to be well educated in digital skills and digital literacy. Facing all these circumstances, ICT competences – especially on a basic level – are becoming increasingly important for having better chances on the labour market. A lifelong learning attitude is a more and more important asset in order to stay competitive. The European Union has initiated an action plan for digital education aiming to increase basic digital skills alongside literacy, numeracy and problem solving. This action plan also incorporates computational thinking techniques, which are an essential part of today's digital competences.
Digital Competences for everyone
The 2015 Survey of Adult Skills (PIAAC, Programme for the International Assessment of Adult Competencies) showed that the majority of people between 16 and 65 years have little or no skill in problem solving (see Figure 3). But such competences are becoming more and more essential, especially in technology-rich environments. Due to the digital transformation, a high percentage of jobs nowadays depend on a certain level of digital competences. The PIAAC study distinguishes three different levels of cognitive skills. Level 1 means no or only little ICT skills; people on this level can fulfil only simple digital tasks. Levels 2 and 3 indicate more advanced ICT and cognitive skills for evaluating and solving problems.
It is alarming that people with at least level 2 or 3 according to the PIAAC study are a minority in all OECD countries, even in the best-ranked ones, such as New Zealand, Sweden or Finland, as depicted in Figure 3. According to this survey, more than 60% of all adult citizens show little or no basic ICT skills at the OECD average – even worse in Austria. Hence, governments as well as the educational sector are forced to implement an action plan to increase digital skills among their citizens. Although the study also shows a big gap between younger (25-34 years) and older (55-64 years) citizens, it is obvious that even the younger generation has too few digital competences. A representative Austrian study [10] clearly showed a tremendous gap between the perception of one's own digital skills and the real skill level. A gap was visible across all generations, while the biggest deviation was among the younger generation below 30 years. The so-called digital natives believed they had at least good coverage of digital literacy knowledge, while the older ones already knew what they did not know with respect to digital literacy. The term "digital natives" was coined by Marc Prensky in 2001 [11]. The author described digital natives as young people who grew up surrounded by, and using, computers, cell phones and other tools of the digital age. His explanation derives from the assumption that young people are all "native speakers" of the digital language. Although the younger people did slightly better than the older generation, the gap between their actual competences and their perception was even larger compared with those above 50 years.
Such a tendency is not surprising. Digital skills are normally associated with modern, active and successful people, and it is natural for most people to aspire to be part of such a group. Because people – and especially young ones – are continuously busy with online tools and equipment, they believe themselves to be experienced ICT users. However, several surveys have shown this to be a fallacy.
Such international digital literacy studies were conducted in countries such as Switzerland, Singapore, Denmark, Finland, India and Germany [12]. All these surveys can be summarized with a common result: there is a clear gap between self-perceived and actual levels of digital skills. The data indicate that people cannot adequately assess their digital skills. In Austria, 94% of all participants believed they had at least average or better computer skills, but the practical test showed a completely different result: only 39% scored above average. Even in highly developed countries such as Switzerland or Singapore, actual skills and self-assessed skills differ dramatically, as depicted in Figure 4. While the perception was always far beyond 80%, only half (in Singapore) or just a third (as in Switzerland) showed sufficient results.
Figure 4. Digital fallacy – perception versus reality – a paper from the ECDL Foundation [12].
Problem solving techniques are becoming an essential feature of current job descriptions. The founder of the so-called computational thinking concept, J. Wing, emphasized early on the importance of skills like problem solving and algorithmic thinking [13]. Such competences are becoming crucial for today's challenges. As J. Wing pointed out already back in 2006: "… To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability…".
Unfortunately, the number of students finishing computer science at university has been slightly declining over the last decade, as mentioned in [2]. The peak of computer science students in Europe was reached in 2006. Only in a few countries, such as Germany and France, has the number of informatics students increased over the last years, while in other countries, such as the UK, a tremendous decline has happened. In Austria the number of master students in informatics remained stable over the last decade. Hence – as stated in the OECD report – the major key priorities in order to meet the challenges of a digital world are, amongst others: basic education in ICT skills and problem solving skills; faster adaptation of education and training systems to current needs; and a lifelong learning attitude among citizens. Therefore, OCG is trying to push all these factors within the Austrian educational sector. In 2015 OCG started the initiative "Education 4.0", aiming to enhance digital literacy and computational thinking among all citizens [14]. This initiative, supported by several stakeholders, focuses on all the key priorities mentioned in the OECD report.
A new ECDL – an approach for all citizens
The European Computer Driving Licence (ECDL or ICDL) was founded in 1997 by a couple of European computer societies with the goal of bringing digital literacy competences to all of Europe. In the meantime – more than 20 years later – this initiative has become the most successful and popular ICT certification in the world. No other product in the ICT domain has remained as successful as the ECDL/ICDL certificate. Currently, ECDL/ICDL is spread over more than 140 countries, and yearly more than 15 million people participate in such online tests.
In Austria the ECDL is a true success story. With more than 700,000 participants in over 800 schools per annum, the ECDL program has set a standard. The national operator in Austria is the Austrian Computer Society, which is responsible for the quality of the certification and supervises more than 350 test centers. In 2018, Austria ranked second worldwide – in absolute numbers – in the number of people taking a certificate test (see Figure 5). Only in Italy (a country with seven times the population of Austria) did more people take an ECDL/ICDL test. Austria even overtook countries like Germany, France or Romania in absolute numbers several years ago.
Recently the ECDL Foundation started to redesign the ECDL/ICDL. The renaming of the acronym indicates a change in the ECDL/ICDL program: from European Computer Driving Licence, it will change to European/International Certificate for Digital Literacy. Furthermore, the ECDL certificate series is being split between the school sector and the workforce, since these different target groups need different digital skills for their work.
The European Union established a framework of digital competences which describes what it means for citizens to be digitally competent [15]. This framework is based on five pillars, as shown in Figure 6, and defines several proficiency levels: foundation, intermediate, advanced and highly specialized. Last year, the Austrian government started an initiative aiming to bring all citizens towards a certain level using the DigComp framework of the EU. A UNESCO study [16] analyzed several ICT certifications with respect to how well each certification's syllabus covers the EU's DigComp framework (see Figure 7).
The ECDL/ICDL programme turned out to be by far the most effective certification, covering the largest part of the DigComp framework, as depicted in Figure 7. The ICDL competences reach 177 points, while the next best certificate program only achieves 107 points. Only in a few topics did the ECDL/ICDL not score any points, such as "Engaging in citizenship through digital technologies", "Solving technical problems" and "Identifying needs and technological responses". With the new modules Information Literacy and Computing, all these missing topics are covered too. The Polish ECDL national operator showed in an EU project that the minor shortcomings of the ECDL/ICDL with respect to this framework can be remedied by implementing a few additional components and questions. Furthermore, the ECDL/ICDL program was rebuilt in 2014 towards a lifelong learning program, in which each candidate keeps his/her ECDL ID and has the possibility to re-skill and up-skill his/her proficiencies. While the ECDL/ICDL used to focus mainly on digital literacy skills, new modules deal more and more with problem solving topics, for example the newly-launched Computing module. This module on the one hand emphasises computational thinking concepts and on the other hand takes first steps in coding. However, this module is not meant to educate programmers but to spread problem solving techniques among all people.
The syllabus of the Computing module covers computational thinking concepts – such as decomposing a problem and designing a simple algorithm – as well as first steps in coding, from writing a small program to testing and correcting it.
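To illustrate the kind of exercise the module aims at, here is a hypothetical example (not taken from the official syllabus): decompose a task, express it as an algorithm, then implement and test it in a few lines of Python.

# task: find the largest value in a list
# decomposition: compare each value against the best one seen so far
def largest(values):
    best = values[0]          # start with the first value
    for v in values[1:]:      # step through the remaining values
        if v > best:          # keep the larger of the two
            best = v
    return best

# a simple test of the algorithm
print(largest([3, 7, 2, 9, 4]))  # -> 9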
Austrian good practices
Therefore OCG is pleased that some regional Austrian governments have adopted the ECDL as a mandatory element in the educational careers of their civil servants. The dual academy of the Chamber of Commerce in Upper Austria, for example, has implemented the ECDL Advanced modules in its career plan. And just recently, the Austrian Chamber of Commerce announced this approach as one of the five lighthouse projects in its master plan for reducing the shortage of young ICT experts.
The Ministry of Defence has relied on the ECDL program for a long time. In total, more than 10,000 recruits have completed the ECDL during their military service.
The Austrian police school has included the ECDL program in its curricula. From 2018 onwards, all future police men/women can take the ECDL certificate during their training. The Ministry of the Interior, which is responsible for the education of future police officers, identified the ECDL as an important vehicle to improve digital skills among the police force.
With this milestone, the Ministry of the Interior has established a good standard of digital skills in its police schools. In the following years the police will double their student intake from currently 2,000 to almost 4,000 students per year. Topics such as IT security, data protection and especially computational thinking methods are essential skills in today's world.
In the meantime, OCG is investigating further new ECDL modules, including Robotics and Artificial Intelligence, which will be implemented in the ECDL program within the next few years.
Summary
The most successful ICT certificate worldwide – the ECDL/ICDL program – covers the DigComp framework of the European Union almost completely. With new ECDL modules like Computing and Data Protection, the ECDL program also keeps pace with current developments.
OCG is proud that several ministries have made a commitment to the ECDL program and that from 2018 onwards all police students will obtain the ECDL certificate during their education.
The OCG further strives to be on the leading edge of education in digital competences. The ECDL Foundation and its members are looking to establish the new ECDL/ICDL within schools and the workforce with the aim of addressing emerging IT developments, such as Artificial Intelligence, machine learning and robotics. | 2022-03-20T15:25:51.962Z | 2022-03-17T00:00:00.000 | {
"year": 2022,
"sha1": "56e0fe90668869c90cdf7509217966c8ef389fb3",
"oa_license": null,
"oa_url": "https://ejournals.facultas.at/index.php/ocgcp/article/download/2094/1749",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "40624d5acd46193f7e19436f31f3074742bf1623",
"s2fieldsofstudy": [
"Computer Science",
"Law",
"Political Science"
],
"extfieldsofstudy": []
} |
10314250 | pes2o/s2orc | v3-fos-license | Correspondence
Pregnancy is a physiological condition characterized by a progressive increase of the different components of the renin-angiotensin system (RAS). The physiological consequences of the stimulated RAS in normal pregnancy are incompletely understood, and even less understood is the question of how this system may be altered and contribute to the hypertensive disorders of pregnancy. Findings from our group have provided novel insights into how the RAS may contribute to the physiological condition of pregnancy by showing that pregnancy increases the expression of both the vasodilator heptapeptide of the RAS, angiotensin-(1-7) [Ang-(1-7)], and of a newly cloned angiotensin converting enzyme (ACE) homolog, ACE2, that shows high catalytic efficiency for Ang II metabolism to Ang-(1-7). The discovery of ACE2 adds a new dimension to the complexity of the RAS by providing a new arm that may counter-regulate the activity of the vasoconstrictor component, while amplifying the vasodilator component. The studies reviewed in this article demonstrate that Ang-(1-7) increases in plasma and urine of normal pregnant women. In preeclamptic subjects we showed that plasma Ang-(1-7) was suppressed as compared to the levels found in normal pregnancy. In addition, kidney and urinary levels of Ang-(1-7) were increased in pregnant rats coinciding with the enhanced detection and expression of ACE2. These findings support the concept that in normal pregnancy enhanced ACE2 may counteract the elevation in tissue and circulating Ang II by increasing the rate of conversion to Ang-(1-7). These findings provide a basis for the physiological role of Ang-(1-7) and ACE2 during pregnancy.
Introduction
Pregnancy is a physiological condition characterized by a progressive increase of the different components of the renin-angiotensin system (RAS). The physiological consequences of the stimulated RAS in normal pregnancy are incompletely understood, and even less understood is the question of how this system may be altered and contribute to the hypertensive disorders of pregnancy. In normal pregnancy, estrogen and/or progesterone cause an overexpression of the RAS by augmenting both tissue and circulating levels of angiotensinogen (1,2) and renin (3-7). During pregnancy, there is a large increase in plasma angiotensinogen due to stimulation of its hepatic synthesis by estrogen. In association with increased circulating estrogen, maternal prorenin and renin are also increased during pregnancy (8). Prorenin reaches a peak within 20 days after conception and remains high until parturition (9,10). Plasma renin activity rises during the first few weeks of pregnancy, and plasma angiotensin II (Ang II) increases in association with the rise of angiotensinogen and plasma renin activity during gestation (11,12). Increased urinary and plasma aldosterone are found during pregnancy (10,11). Pregnant women and animals are resistant to the pressor effects of Ang II (13-15), and they remain normotensive despite a 2-fold increase in Ang II, an event that has been associated with a down-regulation of the AT1 receptor (12,16). In pregnant animals, administration of angiotensin converting enzyme (ACE) inhibitors results in a decrease in blood pressure, demonstrating the tonic role of Ang II in blood pressure maintenance during pregnancy (17).
Despite the activation of the RAS during pregnancy, blood pressure is normal or decreased (18,19). Why does an activation of the RAS not result in hypertension in normal pregnancy? Is the down-regulation of the AT1 receptor sufficient to account for the blood pressure changes? Or are there other components of the RAS that need to be considered? In pursuing this question, we discovered that estrogen shifts the pathways of formation of the angiotensin peptides in a tissue-specific manner, reducing formation of Ang II and augmenting the production of angiotensin-(1-7) [Ang-(1-7)] in Sprague Dawley and mRen2 transgenic rats (20). Ang-(1-7) acts as a counter-regulator of the cardiovascular actions of Ang II by acting as a modulator of vascular tone (21). The vasodilatory actions of Ang-(1-7) have been reported for a number of vascular beds (22-28) and involve the release of nitric oxide, kinins, and prostaglandins (22-25). In addition, Ang-(1-7) was demonstrated to contribute to the anti-hypertensive actions of ACE inhibitors and AT1 receptor antagonists (29-31). Prominent antihypertensive effects of Ang-(1-7) were demonstrated in experimental hypertensive models (32,33). Local regional angiotensin peptide levels (kidney and adrenal) are modulated by the estrous cycle, resulting in an increase in Ang I and/or its product Ang-(1-7) during estrus (34,35).
The discovery of a new enzyme, ACE2, adds a potential new dimension to the complexity of the RAS. This enzyme shows approximately 42% homology with ACE (36,37) but displays different biochemical activities from ACE. ACE2 is a carboxypeptidase that exhibits high catalytic efficiency to generate Ang-(1-7) at the same time that it inactivates the vasoconstrictor counterpart Ang II. ACE2 also cleaves one amino acid off Ang I to generate Ang-(1-9), which can be further processed into Ang-(1-7) by neprilysin and ACE (38). The catalytic efficiency of ACE2 in generating Ang-(1-7) from Ang II is almost 500-fold greater than that for the conversion of Ang I to Ang-(1-9), and 10- to 600-fold higher than that of prolyl oligopeptidase and prolyl carboxypeptidase, respectively, to form Ang-(1-7) (38). ACE2 is insensitive to ACE inhibitors (captopril, lisinopril and enalapril) (36,37), a fact suggesting that ACE2 may act to counter-regulate the activity of the vasoconstrictor components of the RAS. In light of these new findings, the question was raised whether Ang-(1-7) and ACE2 could contribute to cardiovascular regulation in pregnancy.
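To make the quoted fold-differences concrete, the short Python sketch below compares hypothetical relative catalytic efficiencies (kcat/Km); the numbers are placeholders chosen only to reproduce the ratios stated in the text, not the measured kinetic constants from reference 38.

# Illustrative comparison of relative catalytic efficiencies (kcat/Km).
# Values are hypothetical placeholders that merely reproduce the
# fold-differences quoted in the text.
efficiency = {
    "ACE2: Ang II -> Ang-(1-7)": 500.0,            # reference pathway
    "ACE2: Ang I -> Ang-(1-9)": 1.0,               # ~500-fold lower
    "prolyl oligopeptidase -> Ang-(1-7)": 50.0,    # ~10-fold lower
    "prolyl carboxypeptidase -> Ang-(1-7)": 0.83,  # ~600-fold lower
}
reference = efficiency["ACE2: Ang II -> Ang-(1-7)"]
for pathway, eff in efficiency.items():
    if eff != reference:
        print(f"{pathway}: {reference / eff:.0f}-fold lower than ACE2 on Ang II")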
Figure 1 is a schematic presentation of the action of the RAS and highlights potential places where pregnancy modulates components of the RAS.
Human studies
Plasma Ang-(1-7) increases in normal human pregnancy and decreases in preeclampsia
In normal human pregnancy we evaluated whether the known rise in plasma Ang II is counterbalanced by an increase in plasma Ang-(1-7) and whether a fall in plasma Ang-(1-7) levels in preeclampsia may be a factor involved in the development of hypertension. Nulliparous preeclamptic patients and third trimester normotensive pregnant controls matched for parity, race, and gestational age were enrolled (N = 15/group). A nonpregnant group (N = 15) was also included for comparison. Preeclampsia patients had no previous history of hypertension. Preeclampsia subjects had significant hypertension (159 ± 3/98 ± 2 mmHg) and all had >3+ proteinuria. Plasma Ang I (176.4 ± 57.1 vs 32.5 ± 5.6 fmol/ml, P < 0.05), Ang II, and Ang-(1-7) were significantly elevated in normal pregnant women as compared to nonpregnant subjects (Figure 2). Plasma Ang-(1-7) was increased by 34% (P < 0.05) and plasma Ang II was increased by 50% (P < 0.05). In preeclampsia subjects plasma Ang-(1-7) was reduced (14.2 ± 2.3 fmol/ml, P < 0.05 vs 3rd trimester of pregnancy); plasma Ang II was also reduced (30.15 ± 2.3 vs 53.3 ± 10.1 fmol/ml, preeclamptic vs 3rd trimester normal pregnant, P < 0.05), but remained elevated compared to nonpregnant subjects (20.1 ± 1.5 fmol/ml, P < 0.05) and was 50% higher than plasma Ang-(1-7), suggesting that there is a shift in the balance of the peptides in preeclampsia. Other components of the RAS, with the exception of ACE, were reduced in preeclamptic subjects. Assessment of the relationship between Ang-(1-7) and blood pressure revealed a negative correlation of Ang-(1-7) with systolic (r = -0.4, P < 0.05) and diastolic (r = -0.5, P < 0.05) blood pressure. These data suggest a potential role for reduced production of Ang-(1-7) contributing to the elevated blood pressure. These studies confirm that the RAS is activated in the third trimester of normal pregnancy, including an increase in plasma Ang-(1-7) levels. In preeclampsia, the decreased levels of plasma Ang-(1-7) in the presence of persistent elevated plasma Ang II are consistent with the development of hypertension.
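As an illustration of the two calculations underlying these results, the following Python sketch computes a percent change between group means and a Pearson correlation; it reuses the reported group means, but the correlation arrays are synthetic, not the patient data from the study.

# Minimal sketch of the two calculations reported above: percent change
# of plasma peptide levels between groups, and the Pearson correlation of
# plasma Ang-(1-7) with blood pressure (synthetic arrays).
import numpy as np

def percent_change(baseline, value):
    return 100.0 * (value - baseline) / baseline

# e.g., Ang II: nonpregnant mean 20.1 vs preeclamptic mean 30.15 fmol/ml
print(f"{percent_change(20.1, 30.15):.0f}% increase")  # 50%

rng = np.random.default_rng(0)
ang17 = rng.uniform(10, 30, size=15)                 # fmol/ml, synthetic
sbp = 170 - 1.5 * ang17 + rng.normal(0, 8, size=15)  # built-in negative slope
r = np.corrcoef(ang17, sbp)[0, 1]
print(f"Pearson r = {r:.2f}")                        # negative, as reported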
Figure 1. Potential sites of regulation of the RAS during pregnancy.
Urinary Ang-(1-7) increases throughout normal human pregnancy
In these studies, the 24-h urinary excretion of Ang-(1-7) and Ang II was evaluated during the ovulatory menstrual cycle and during singleton normotensive pregnancies and their subsequent lactation. Two groups of normotensive, non-proteinuric women were studied: group 1 consisted of 9 cycling women with previous normal obstetrical histories and group 2 consisted of 10 women with a normal pregnancy and previous normal obstetrical history. No significant differences in urinary Ang-(1-7) were observed between the follicular or luteal phase of the normal menstrual cycle (data not shown).
There was a progressive rise of urinary Ang-(1-7) throughout normal human gestation, attaining levels that were 10-fold greater than those observed during the normal menstrual cycle (Figure 3). Urinary Ang II showed a similar pattern, reaching levels that were 25-fold higher than the values observed during the menstrual cycle. At 35 weeks of gestation, urinary Ang-(1-7) was the predominant angiotensin peptide in the urine, reaching levels that were 6-fold higher than Ang II. The urinary excretion levels may reflect local kidney production of peptides, and thus during pregnancy the elevated renal Ang-(1-7) may play a role in the vasodilatory adaptations of mid and late human pregnancy.
Animal studies
In order to better dissect the RAS during pregnancy, studies were conducted on an animal model. Pregnant rats were evaluated at late gestation (19th day) and virgin female rats were evaluated during the diestrus phase of the estrous cycle.
Kidney concentration and urinary excretion rate of Ang-(1-7), Ang II and Ang I in virgin and pregnant rats
Table 1 shows the basal characterization of both virgin and pregnant rats. Virgin animals were at the diestrus phase of the estrous cycle, as indicated by cytology and resting plasma 17ß-estradiol concentration, which was nearly 5-fold lower than that found in pregnant animals. Urinary flow was significantly increased during late pregnancy (P < 0.05) without a change in creatinine excretion. Pregnancy was associated with significant increases in plasma angiotensinogen (57%) and plasma renin concentration (271%). Serum ACE activity was decreased by 20% in pregnant animals. In contrast to the human pregnant subjects, there was no significant change in plasma concentration of Ang I, Ang II and Ang-(1-7) at the 19th day of pregnancy.
Twenty-four hour urinary excretion of the angiotensin peptides was significantly increased in pregnant animals (Figure 4), reaching levels that were 93% (Ang I), 44% (Ang II) and 60% [Ang-(1-7)] above the values found in virgin rats. Kidney Ang I and Ang-(1-7) concentrations were significantly increased in pregnant animals as compared to virgin females; the Ang I and Ang-(1-7) concentrations were increased by 7- and 5-fold, respectively (P < 0.05). There was no significant change in kidney Ang II concentration between pregnant and virgin females. These studies provide evidence that urinary excretion of angiotensin peptides reflects local kidney content of angiotensin during pregnancy.
Renal immunocytochemical distribution of Ang-(1-7) and ACE2 during pregnancy
In these studies we determined the renal immunocytochemical distribution of Ang-(1-7) and ACE2 in kidneys of virgin and 19-day pregnant Sprague Dawley rats. Rats were sacrificed on the 19th day of pregnancy and tissues were examined for renal immunocytochemical distribution using a polyclonal antibody to Ang-(1-7) or a monoclonal antibody to ACE2 (Figure 5). Ang-(1-7) immunostaining was predominantly localized to the inner cortex and outer medulla and was found in proximal and distal tubules. There was no staining in the glomerulus. ACE2 immunostaining was also visualized predominantly in the tubules of the inner cortex/outer medulla region of both virgin and pregnant rats and was not found in the glomerulus. Analysis of the Ang-(1-7) (33 ± 0.6 vs 21 ± 1.6 arbitrary units, P < 0.05 pregnant vs virgin) and ACE2 (25 ± 4.0 vs 11 ± 0.9 arbitrary units, P < 0.05 pregnant vs virgin) immunostaining revealed that kidneys from pregnant rats had more intense staining compared to virgin females. The similar localization of ACE2 and Ang-(1-7) suggests that ACE2 may play a role in the renal production of Ang-(1-7) in pregnancy.
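For readers who want to reproduce this kind of comparison, the sketch below derives a Welch-type t statistic directly from the quoted means and SEMs; group sizes, which the text does not report for this analysis, would be needed for exact degrees of freedom.

# Two-sample comparison from the summary statistics quoted above
# (mean ± SEM, arbitrary staining units). Welch-form t computed
# directly from standard errors.
import math

def t_from_summary(mean1, sem1, mean2, sem2):
    return (mean1 - mean2) / math.sqrt(sem1 ** 2 + sem2 ** 2)

print(t_from_summary(33, 0.6, 21, 1.6))  # Ang-(1-7): pregnant vs virgin, ~7.0
print(t_from_summary(25, 4.0, 11, 0.9))  # ACE2: pregnant vs virgin, ~3.4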
These studies have demonstrated that there is enhanced Ang-(1-7) expression during pregnancy. In normal human pregnancy, plasma Ang-(1-7) is increased, whereas in preeclampsia there is a decrease in plasma Ang-(1-7). In both humans and rats, pregnancy is accompanied by increased urinary excretion of Ang-(1-7), which in the rat is reflective of the increased renal concentration of Ang-(1-7). In the kidney, Ang-(1-7) and ACE2 are similarly distributed and there is an enhanced distribution of both in pregnancy. Finally, the increased Ang-(1-7) vasodilation in mesentery resistance vessels in pregnancy supports the concept that Ang-(1-7) may be an important contributor to blood pressure regulation in pregnancy.
Figure caption (fragment): [Ang-(1-7)] in isolated mesenteric vessels from virgin rats at diestrus and from pregnant rats at late gestation. Data are reported as means ± SEM for 7 virgin rats and 7 pregnant rats. Data are adapted from Neves et al. (41). *P < 0.05 for virgin vs pregnant rats (one-way ANOVA followed by Newman-Keuls).
Figure 3. Urinary excretion of angiotensin II (Ang II) and Ang-(1-7) during gestation, lactation (PP), and the menstrual cycle (MC). Data are reported as means ± SEM for N = 10 during gestation, N = 10 during lactation and N = 9 during the menstrual cycle. Although not indicated, all values of urinary Ang II and Ang-(1-7) were significantly different from those found during the menstrual cycle. Data are adapted from Valdes et al. (40). *P < 0.05 compared to day 4 of gestation (Student t-test).
Figure 4. Kidney and urinary levels of angiotensin I (Ang I), Ang II, and Ang-(1-7) in virgin rats at diestrus and in pregnant rats at late gestation. Data are reported as means ± SEM for N = 14 (urine) and N = 9 (kidney) virgin rats and N = 19 (urine) and N = 10 (kidney) pregnant rats. Data are adapted from Neves et al. (41). *P < 0.05 compared to virgin rats (Student t-test).
Table 1. Basal characterization of virgin female and 19th-day pregnant rats.
Data are reported as means ± SEM. ACE = angiotensin converting enzyme; PRC = plasma renin concentration. *P < 0.05 compared to virgin rats (Student t-test). | 2014-10-01T00:00:00.000Z | 2004-07-20T00:00:00.000 | {
"year": 2004,
"sha1": "18b79bd764c840c1a4c18318f135f087c9677e75",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/bjmbr/a/YrvD3QSgsfJS6gfxZb5JP3k/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "CiteSeerX",
"pdf_hash": "18b79bd764c840c1a4c18318f135f087c9677e75",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
238058294 | pes2o/s2orc | v3-fos-license | The Impact of Behavioral Factors Affecting Supply Chain Performance: A Conceptual Model
(2010) stated that more attention should be paid to the effect of behavioral factors such as trust, commitment, mutuality and reciprocity, organizational culture, and national culture on SC performance, arguing that these behavioral factors can prove highly efficient in improving performance and avoiding the bullwhip effect (BWE). Abstract: During the last few decades, supply chains (SC) have developed and become very complicated, different approaches have appeared, and SC managers have to deal with many more partners from different backgrounds; the main goal of any SC manager has become how to improve SC effectiveness and efficiency through increasing integration and information sharing. Over time, authors and managers focused on the effect of technology in improving SC integration and ignored the effect of behavioral factors. The main aim of this research is to highlight behavioral SC and the different behavioral factors that affect SC integration.
Theoretical Foundation
There are five main theories that explain the different mechanisms that govern and control the relation between different business partners: Transaction Cost Economics (TCE), Relational Exchange Theory (RET), the Resource-Based View (RBV), Resource Dependence Theory (RDT) and Social Exchange Theory (SET) (Christos, S., Konstantinos, G., & Alan, 2015). Behavioral Supply Chain Management (BSCM), as an approach that focuses on taking behavioral antecedents into consideration when designing the SC and selecting SC partners, is the outcome of different theories. Tsanos (2014) claimed that BSCM is the outcome of seven different theories: Resource-Based View, Normative Theory, Cognitive Theory, Relational Exchange Theory, Transaction Cost Economics (TCE), Resource Dependence Theory, and Social Exchange Theory. Trust is defined as the willingness to rely on an exchange partner in whom one has confidence due to the ability of that partner to provide expertise, dependability, and direction (Christine, Rohit, & Gerald, 1993). However, Moorman et al. (1993) claimed that trust is a multi-dimensional concept (characteristic-based trust, calculation-based trust, and institutional-based trust). Characteristic-based trust relies on the credibility, trustworthiness, integrity, decency and consistency of the other partner. Calculation-based trust relies on cost-benefit analysis, dynamic capabilities and the level of technology adopted by other partners. Institutional trust is related to the legal control methods of the relationship. Building a trust-based relation between SC partners should rely on all three dimensions. In a traditional SC, partners are obligated by contracts; each partner expects exactly what is mentioned in the contract, such as delivery date, payment conditions, and installments. Later on, managers started to look for more than these direct benefits of contracts, such as advice, hidden information, and accumulated knowledge and experience. That is why authors like Liu (2012) conducted research to examine why managers would look for that; they found that managers usually ask two questions: (1) If the surrounding circumstances change, will the supplier be able to assist and offer support? (2) Can the decision maker rely on the partner's personal judgment and accumulated knowledge?
Conceptual Framework and Hypotheses
Ha et al. (2016) classified trust into two main concepts. Affective trust relates to the dimensions of emotion and personality, is often developed in a long-term relationship, and directly affects information sharing. The second concept, trust in competence, is considered second-phase trust, in which suppliers join decision makers in decision making and strategic planning. According to Ha et al. (2016), affective trust is measured by factors such as respect, honesty, credibility and mutual understanding, while trust in competence includes knowledge/technique for performance and commitment in the relationship.
2.2.1.2. Commitment
Herbert (2015) defined commitment and examined the relation between commitment and SC performance by comparing companies that rely on commitment and long-term relationships with SC partners with those that rely more on legal contract conditions. Moreover, according to Tian (2019), the dimensions of commitment between SC partners are continuity, communication and power. Continuity is defined, according to Heide and John (1990), as "the perception of bilateral expectation of future interaction". The second dimension, communication (both informal and/or formal), is defined as "the sharing of meaningful and timely information between firms"; it is also described as the "key to vitality" of a partnership (James & Marc, 2018). Power is the ability of one individual or group to get another unit to do something (Dahl, 2007).
According to Carlos, Daniel, Marqués, & María (2016), commitment is a multi-faceted construct and should be viewed from three aspects. Affective commitment refers to the feeling of belonging and the sense of attachment to the organization. Continuance commitment refers to the perceived cost, both financial and non-financial, of leaving the organization, and is perceived to be caused by a lack of alternatives; an employee who is mainly attached to the organization based on continuance commitment remains because they need to do so (Morgan, 2001). Normative commitment is concerned with the obligation that members feel to remain with an organization and builds on generalized cultural expectations. Brown et al. (2005) discussed mutuality in the context of "relationships where transfer of property rights among legally equal and free parties is involved". From the empirical point of view of Rousseau & Guillermo (2004), mutuality can be traced to the existence of "idiosyncratic" investments that make the supplier feel like a part of the focal company, not only a partner. On the other hand, such investments reduce flexibility in many situations and the ability to shift to new products.
Reciprocity
Reciprocity is defined as the degree to which a manager or decision maker expects cooperative action (as opposed to forced interaction) within a relationship; it constitutes a major factor in the formation of interorganizational relationships (Christos S. Tsanos, 2014). A reciprocal relation overcomes the main disadvantage of mutuality, inflexibility, since in a reciprocal relation we assume that no partner will try to control the relation, which guarantees balance and equity.
Top Management Support
Authors like "Burgess (1998), Carter, Kale and Grimm (2000), Zhu and Sarkis, (2004), Simpson et al. (2007) "considered integrated or integration in general as culture, however adopting any culture or new strategy require both: managerial capabilities and support. BSCM as a concept required certain level of harmony between SC partners, although such harmony will have many advantages but managing it successfully requires more effort and talent that many managers consider it as extra burden and go for the traditional way of SCM. (Christos,, S., Konstantinos, G., & Alan, 2015)also claimed that managers must be aware with the expected profit that can be gained from implementing BSCM and that's it not only an extra cost. This awareness with the expected benefit will guarantee management support and will encourage top management to allocate extra resources to successfully implement BSCMM.
One of the main reasons that some decision makers may reject the concept of BSCM is that it requires a cross-functional team to manage the integration among the focal organization and the whole SC; managing such teams requires special managerial skills that not all managers have (Su, Q., & Shi, 2008).
2.2.1.5. National Culture
Professor Geert Hofstede conducted one of the most important studies on the effect of culture (national/organizational) on organizational performance. Hofstede examined the effect of the values of national culture by defining six dimensions of culture.
Power Distance Index (PDI)
This dimension expresses the extent to which the less powerful members of a society accept and expect that power is distributed unequally. The main idea behind this dimension is how a society handles inequalities among people. People in countries with a large degree of power distance are more likely to accept hierarchical orders with no need for further justification.
Individualism versus Collectivism (IDV)
This dimension represents the extent to which each individual cares only for themselves and their immediate families. Collectivism means unquestioning loyalty toward one's society.
Masculinity versus Femininity (MAS)
The masculinity side of this dimension represents the extent to which members of the society prefer or focus on achievement, heroism, assertiveness, and material rewards for success; society at large is more competitive. Its opposite, the femininity side, represents the extent to which society members prefer cooperating with each other, caring for the weak, and quality of life.
2.2.1.5.4. Uncertainty Avoidance Index (UAI)
This dimension measures the degree to which society members feel uncomfortable with uncertainty. It relates to the extent to which society members believe that they can control the future or just need to deal with it. Societies with strong UAI usually reject so-called "unorthodox" behaviors and ideas and resist change, while those with weak UAI are usually described as more relaxed societies in which practice counts more than principles.
Long Term Orientation versus Short Term Normative Orientation (LTO)
This dimension measures the extent to which society members are keen to keep links with their own past while facing the challenges of the present and the future. "In the business context, this dimension is referred to as '(short-term) normative versus (long-term) pragmatic' (PRA). In the academic environment, the terminology Monumentalism versus Flex humility is sometimes also used." (Chin & L, 2009)
2.2.1.5.6. Indulgence versus Restraint (IVR)
Indulgence stands for a society that allows relatively free gratification of basic and natural human drives related to enjoying life and having fun. Restraint stands for a society that suppresses gratification of needs and regulates it by means of strict social norms.
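As a simple illustration of how these six dimensions can be handled quantitatively, the sketch below stores a country profile as a mapping and compares two hypothetical countries by Euclidean distance; the scores are invented for illustration, and real values come from Hofstede's published country tables.

# Illustrative sketch: a Hofstede profile as a data structure and a crude
# distance between two (hypothetical) national cultures.
import math

DIMENSIONS = ("PDI", "IDV", "MAS", "UAI", "LTO", "IVR")

def culture_distance(a, b):
    """Euclidean distance between two culture profiles (0-100 scales)."""
    return math.sqrt(sum((a[d] - b[d]) ** 2 for d in DIMENSIONS))

country_x = {"PDI": 70, "IDV": 25, "MAS": 45, "UAI": 80, "LTO": 30, "IVR": 40}
country_y = {"PDI": 35, "IDV": 80, "MAS": 60, "UAI": 45, "LTO": 50, "IVR": 70}
print(f"cultural distance: {culture_distance(country_x, country_y):.1f}")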
SC Integration
Supply chain integration (SCI) is one of the most widely discussed topics in SCM research. This can be rooted in the global environment and the wide range of suppliers, as no SC can be considered a closed system anymore and no organization can survive alone (Éverton & Elton, 2014). SCI can be defined as "the degree to which a manufacturer strategically collaborates with its supply chain partners and collaboratively manages intra- and inter-organization processes" (Flynn, Huo, & Zhao, 2010). SCI comprises two areas, internal and external integration. The latter involves integration with suppliers and integration with customers (Flynn et al., 2010).
According to Mei & Qingyu (2011), SCI is considered a multidimensional construct, as it can be classified into two main groups: relationship focus and process focus. Moreover, other scholars refer to integration dimensions as functional, operational, and relational. Furthermore, SCI practices can be classified into two categories, product-based and market-based (Ram & Soo, 2012).
In order to attain successful SCI, the focal company must build a long-term relation with its partners (supplier/distributor); furthermore, this relation should extend to the second level of suppliers/distributors (Daniel & Handfield, 2007). SCI includes different activities such as scheduling, integrating processes, shared information, shared technology, long-term contracts, reinforced quality improvements, improved overall supplier capabilities, and shared risks and rewards; mainly, these can be divided into two dimensions, information sharing and SC coordination (Enchelt, Wyntra, & Welee, 2008).
2.2.2.1. Information Sharing
Information sharing includes two main aspects, the breadth and the quality of information sharing. Kim, Lee, & Gosain (2005) identified breadth of information as "the range of disseminating each firm's private information among the supply chain members". The breadth of information sharing depends on the level of collaboration and can be of three types: order information, partial information, and strategic information. ESmape, Lamouri, & Paris (2013) mentioned that information is considered a sensitive source of power that some managers prefer to hoard, and this may be one of the causes of SCI failure in many situations. The quality of information shared relies on the timeliness, speed, accuracy and appropriateness of the information. Chin & L (2009) claimed that information is like any other physical product that expires after a certain time and is useless afterwards; they also claimed that information quality is affected mainly by the mutual trust between supply chain partners. Advanced information technologies (IT) make it possible to share information at different levels, from the operational to the strategic level, in no time, in what is called on-time information sharing.
SC Coordination
Deshmukh et al. (2008) claimed that SC coordination is composed of risk, cost and gain sharing, sharing ideas and institutional culture, sharing decision making, and sharing skills. According to this definition, coordination includes a set of activities such as procurement, inventory management, distribution and SC relationship management. Through his study, Deshmukh et al. (2008) proposed a three-pillar framework for coordination: internal integration, forward integration (with customers) and backward integration (with suppliers). The first pillar includes the functions and activities to be integrated (inventory management, logistics, forecasting, procurement and product design). The second is the mechanisms used to manage and ensure the dependencies and coordination between SC partners, such as contracts, information technology, information sharing agreements and joint decision making. The third pillar is the identification and solution of conflict through formal contracts or informal agreements.
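As an illustrative sketch of how the four information-quality attributes named above might be folded into a single score for a piece of shared information (the weights and the expiry rule are assumptions, not taken from the cited works):

# Minimal sketch: scoring the quality of a piece of shared information.
from dataclasses import dataclass

@dataclass
class SharedInformation:
    timeliness: float       # 0..1, freshness relative to its useful life
    speed: float            # 0..1, how quickly it was disseminated
    accuracy: float         # 0..1
    appropriateness: float  # 0..1, fit to the receiver's decision

def quality_score(info):
    if info.timeliness == 0.0:       # expired information is worthless
        return 0.0
    weights = (0.3, 0.2, 0.3, 0.2)   # assumed weighting
    attrs = (info.timeliness, info.speed, info.accuracy, info.appropriateness)
    return sum(w * a for w, a in zip(weights, attrs))

print(quality_score(SharedInformation(0.9, 0.8, 0.95, 0.7)))  # 0.855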
SC Performance
SC performance management has been a widely discussed topic during the last few decades. One of the leading studies in this area is that of McGaughey et al. (2004), who developed a measurement framework for a four-echelon SC (supply, manufacturing, distribution and consumers). This model considers three categories of measures reflecting the performance dimensions required for success: resource measures, output measures and flexibility measures. Resource measures include inventory levels, personnel requirements, equipment utilization, energy usage, and cost; output measures include customer responsiveness, quality, and the quantity of final product produced; and flexibility measures include volume flexibility, delivery flexibility, mix flexibility and new product flexibility.
There are different metrics to measure performance. The researcher selected the appropriate metrics by taking into consideration criteria such as: i) the metrics should represent performance across the supply chain, ii) respondents can utilize objective data for assessing them (even if the data cannot be revealed), and iii) the data for the selected metrics can be provided by the focal firm as a proxy for the supply chain. Among the different metrics, the author selected SC effectiveness and efficiency for the reasons presented previously.
Dimensions and metrics:
- Efficiency, Supply chain cycle efficiency: ratio of time in which inventory (i.e., raw materials/WIP/finished products) is active/moving in the supply chain over total time spent in the supply chain.
- Efficiency, Supply chain flexibility: average time required for the supply chain to respond to an unplanned 20% increase in demand without service or cost penalty.
- Effectiveness, Order fulfilment lead-time: average time between order entry and time of order delivery.
- Effectiveness, Perfect order fulfilment: ratio of orders delivered i) complete, ii) on the date requested by the customer, iii) in perfect condition, iv) with the correct documentation, over the total number of orders.
Proposition 1: Mutuality and reciprocity have a significant positive effect on the commitment between SC partners.
Proposition 2: Trust has a significant positive effect on the commitment between SC partners.
Proposition 3: Commitment has a significant positive effect on SC integration.
Proposition 4: SC integration has a significant positive effect on SC performance.
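The two effectiveness metrics above translate directly into code; the sketch below computes them over a small synthetic order log (field names and sample records are illustrative assumptions).

# Sketch of the two effectiveness metrics from the table above.
from dataclasses import dataclass
from datetime import date

@dataclass
class Order:
    entered: date
    delivered: date
    requested: date
    complete: bool
    perfect_condition: bool
    correct_docs: bool

def order_fulfilment_lead_time(orders):
    """Average days between order entry and order delivery."""
    return sum((o.delivered - o.entered).days for o in orders) / len(orders)

def perfect_order_fulfilment(orders):
    """Share of orders delivered complete, on time, undamaged, documented."""
    perfect = sum(
        o.complete and o.delivered <= o.requested
        and o.perfect_condition and o.correct_docs
        for o in orders
    )
    return perfect / len(orders)

orders = [
    Order(date(2021, 3, 1), date(2021, 3, 5), date(2021, 3, 6), True, True, True),
    Order(date(2021, 3, 2), date(2021, 3, 9), date(2021, 3, 7), True, True, False),
]
print(order_fulfilment_lead_time(orders))  # 5.5 days
print(perfect_order_fulfilment(orders))    # 0.5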
Conclusion
SC integration has become a formidable target for all organizations seeking to survive strong competition. That is why focal companies spend large portions of their budgets to develop their information sharing technology, such as ERP (Enterprise Resource Planning), and as a result companies have raised their selection standards for suppliers and distributors to ensure that they can fit with their systems. On the other hand, some decision makers found that focusing on such a strategy is not enough to reach the required level of information sharing and tended to build SC relations based on trust and commitment. Research in BSCM is no less important, as authors should examine more behavioral antecedents in different contexts, since the effect of these factors differs from one context to another.
Research Limitation
This research is limited in the number of factors included, as the researcher selected only two factors according to the findings of previous research and due to research constraints. According to previous research, it is recommended to use organizational culture as a moderating variable that controls the effect of behavioral factors, but this was avoided due to research complications. Results cannot be generalized because, according to Debbie L. Hahs-Vaughn (2016), the effect of behavioral factors is very sensitive to the context at the organization, industry and nation level. The data sample may not represent the whole SC; e.g., in the mobile industry in Egypt, most of the SC located in Egypt is at the downstream "distribution" level, as most purchasing, procurement and even manufacturing is centralized in the mother country. | 2021-08-23T18:26:23.338Z | 2021-03-31T00:00:00.000 | {
"year": 2021,
"sha1": "ff4a8a4508ea74603ae132e4b95e2e035403cc04",
"oa_license": null,
"oa_url": "http://www.internationaljournalcorner.com/index.php/theijbm/article/download/159784/109690",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "a2b5c01100a99f7ced96c225b998bd82312e96cf",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
6392388 | pes2o/s2orc | v3-fos-license | Bilateral coronoid hyperplasia causing painless limitation of mandibular movement
The coronoid process is a beaklike process on the ramus of the mandible. Coronoid process hyperplasia (CPH) is a rare possible cause of reduced mouth opening. An overgrown process interferes with mandibular rotation and lateral excursion and hence leads to restricted mouth opening (RMO). Although some factors have been suggested, the etiology of CPH is not completely known. Prescription of suitable radiography is necessary for an accurate diagnosis. This article reports a 30-year-old man with bilateral CPH and progressive RMO since childhood. This disorder affected his oral hygiene and quality of life. With the help of different types of radiography, CPH was diagnosed, and coronoidectomy was the only treatment option. The patient showed normal jaw movements after the surgery and postoperative physiotherapy. General dentists have an important role in noticing RMO and referring patients to maxillofacial radiologists. Although it is a rare phenomenon, general dentists need to keep CPH in mind as a possible cause. Panoramic imaging accompanied by computed tomography or cone beam computed tomography is the best imaging option in such cases.
Introduction
Opening the mouth is a result of the coordinated function of muscles and bones, especially in the temporomandibular joint (TMJ). Therefore, anything that interferes with the TMJ's correct function can cause restricted mouth opening (RMO) and even complete locking. One possible cause of progressive RMO is hyperplasia of the coronoid process, known as coronoid process hyperplasia (CPH). In Greek, korone means "like a crown". The coronoid process is a beaklike process in the superoanterior part of the ramus of the mandible [1].
Bilateral CPH is a rare developmental condition characterized by abnormal overgrowth of the histopathologically normal coronoid processes. Movements of an enlarged coronoid process interfere with the medial or temporal surface of the zygomatic bone. In addition, as this process grows gradually, the infratemporal space needed for rotation and translation of the mandible is reduced, which results in reduction of the ranges of mouth opening and lateral excursion [2,3].
The etiology of CPH is not completely known. However, several factors have been suggested as possible causes, such as temporalis muscle hyperactivity, trauma, hormonal factors, genetics, and familial factors [4,5].
The purpose of this article is reporting clinical and radiographic characteristics of CPH and imaging modalities that may be helpful in accurate diagnosis of this disorder.
Case report
A 30-year-old man who visited his dentist complaining about pain in a left mandibular premolar was referred to the Department of Radiology, School of Dentistry of Isfahan, because of insufficient mouth opening that made dental treatment difficult.
Restriction in mouth opening had been present since the patient's childhood and had progressed gradually over the years. There was no record of childhood disease or trauma, or family history of trismus. The patient also complained of episodes of migraine headache.
In clinical examination, the interincisal space was 21 mm. In addition, lateral excursions to the left and right were possible but very limited (Fig. 1).
The normal range for interincisal space at maximum mouth opening is 35-50 mm, and normal lateral excursion is 8-12 mm toward the mandibular incisors [6].
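These normal ranges translate into a simple screening check, sketched below; the helper itself, and the 5 mm lateral value used for this patient (the report does not state the lateral excursion in millimeters), are illustrative assumptions, not a clinical instrument.

# Screening check encoding the normal ranges quoted above.
def assess_jaw_mobility(interincisal_mm, lateral_mm):
    findings = []
    if interincisal_mm < 35:   # normal maximum opening: 35-50 mm
        findings.append("restricted mouth opening")
    if lateral_mm < 8:         # normal lateral excursion: 8-12 mm
        findings.append("limited lateral excursion")
    return "; ".join(findings) or "within normal range"

print(assess_jaw_mobility(21, 5))  # interincisal 21 mm, as in this patient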
Click and crepitation in the right TMJ were observed and palpated. Muscles of the area were not tender.
Based on clinical examinations, primary diagnosis of the TMJ dysfunction was indicated and radiographic examinations were prescribed.
In the panoramic image, the coronoid processes were elongated on both sides and extended above the zygomatic arch, although the bone trabeculae within the processes were normal. Most of the posterior teeth were extracted, root canal treated, or affected by extensive dental caries, whereas all the anterior teeth were sound and healthy; this was probably because of insufficient oral hygiene due to the reduction in mouth opening and lack of access to the posterior areas when brushing (Fig. 2).
In the next step, a computed tomography (CT) scan was obtained and the coronoid processes were surveyed in different views, including 3-dimensional (3D), bone window, and soft tissue window views. In the 3D view, the apices of the coronoid processes were significantly higher than the zygomatic arch and the condylar head (Fig. 3).
In axial images obtained with the bone window, bone changes and the bony contact between the medial parts of the zygomatic arch and the coronoid processes were studied. In soft tissue window images, the relation of the bones and the muscles was investigated (Fig. 4).
Based on the radiographic findings, CPH was diagnosed. The patient was referred to a maxillofacial surgeon for coronoidectomy. Bilateral coronoidectomy was carried out intraorally. After the operation, the patient underwent physiotherapy for 3 months to rehabilitate normal jaw movements.
Discussion
The purpose of this article is to report clinical and radiographic characteristics of CPH and imaging modalities that may be helpful in accurate diagnosis of this disorder.
The coronoid process is a triangular bone which projects upward and slightly forward from the ramus of the mandible. Its posterior border is bounded by the mandibular incisure, and its anterior border aligns with the ramus [6]. Normal development of the coronoid process includes intramembranous ossification of an accessory cartilage nucleus during the fourth week of intrauterine life. This cartilage is covered by a thick fibrocellular capsule that is not evident before birth. Skeletal growth continues until adulthood along with transformation and proliferation of the cells of the fibrocellular layer [3].
With the evolution of mammals, a shift of diet toward more plant food, and changes of the bite forces, the coronoid process has gradually grown larger and has assumed its present form [7]. Intraoral removal of this process causes no functional defects or facial malformations. Thus, this bone can be used for reconstruction of orbital floor defects, alveolar defects, nonunion mandibular fractures, sinus augmentations, reconstruction of bone abnormalities, and other craniomaxillofacial surgical procedures [1].
This abnormality is manifested as malocclusion and reduction in mouth opening. Patients usually complain about limitation in mouth opening. It might be mistaken for more prevalent causes of RMO such as TMJ internal derangement, masticatory muscle contraction disorders, and ankylosis [4]. CPH and chronic disk displacement without reduction have similar signs and symptoms, except that pain is not a common symptom in CPH. Thus, correct diagnosis of CPH does not usually happen at the first appointment, and this disorder is mostly misdiagnosed. The first case of mandibular hypomobility due to CPH was reported in the second half of the nineteenth century, and few cases have been reported since then. Disorders involving the coronoid processes have been diagnosed in only 5% of mandibular hypomobility cases, in both men and women at a ratio of 5:1, and bilateral CPH is more prevalent in men [8-11].
In spite of its low prevalence, this disorder needs to be considered as a possible diagnosis in patients with painless, progressive, and chronic RMO. Correct diagnosis of this condition has an important effect on treatment plans and patients' quality of life. As a result, prescription of suitable images is necessary for an accurate diagnosis. Coronoidectomy with postoperative physiotherapy is the only treatment option for this disorder.
The etiology of CPH is almost unknown, but several causes have been proposed to explain the abnormal growth of this bone and the formation of CPH. However, there is not enough evidence to support any of proposed causes [12].
The most probable proposed cause of CPH is hyperactivity of the temporalis muscle [7]. The temporalis muscle originates from the temporal fossa above the inferior temporal line and from the deep surface of the temporalis fascia. The muscle fibers then converge downward and attach to the coronoid process and the upper part of the zygomatic arch. This muscle elevates the mandible and closes the mouth [6].
Studies of postnatal surgery in laboratory animals showed that changes in the muscular attachment result in changes in the size and form of the coronoid process. In addition, deficits in myogenic factors (MYF5/MYOD) cause the coronoid and angular processes to be small or completely deficient [7]. Genetics and hormonal factors can also affect craniofacial size and morphologic characteristics [13,14].
Makek and Obwegeser indicated that different individual growth-related factors, which control generalized hypertrophy and longitudinal growth, may cause deformities [15]. Growth hormone (GH) is a peptide secreted from the anterior part of the hypophysis that has an important role in the growth and development of the maxillofacial complex. GH binds to GH receptors (GHRs), which exist on the surface of cells and activate intracellular signals. Variations and mutations of the GHR gene may cause disorders in craniofacial growth [13]. Some studies show that variations in the size and form of the coronoid process are related to the expression of Paired box gene 9 (pax9) and SRY-box gene 9 (sox9). Sox9 is a transcription factor normally related to skeletal chondrogenesis. In the coronoid processes, its expression depends on the tension of the temporalis muscles [7].
Fig. 4 -Axial CT in 2 views: soft tissue window (A and B) and bone window (C and D). The arrows indicate the relation between the coronoid process and the zygomatic bone. CT, computed tomography.
Diagnosis of CPH relies mostly on radiography, bone scintigraphy, and histopathologic studies. Thus, clinical examination alone cannot diagnose and differentiate this disease from other disorders affecting mouth opening. Progressive and painless reduction of mouth opening over several years, especially in men, is a common clinical sign of CPH. Progressive RMO is also observed in pain and dysfunction syndromes of the TMJ (especially chronic disk displacement without reduction), as well as in uncorrected fracture of the jaw or the zygomatic bone, radiotherapy-induced fibrotic changes of the masticatory muscles, rheumatoid arthritis, primary and secondary neoplastic diseases (including osteoma and osteochondroma), progressive fibrotic change of the oral mucous membrane, ankylosing spondylitis, myositis ossificans, and tetanus [3,16].
Among these causes, CPH is most often mistaken for chronic disk displacement without reduction, which is the most common cause of mouth opening reduction. Chronic disk displacement without reduction is recognized by its typical clinical signs, such as a history of clicking in the TMJ that vanished suddenly, pain, a previous episode of locked jaw, and unilateral deviation during opening of the mouth. However, there are no comparably prevalent signs for CPH [4]. Thus, reaching a correct diagnosis is a very important and critical issue.
Patients with CPH usually receive treatments before the correct diagnosis is made. Because pain is not a common symptom of this disorder, patients with CPH do not usually notice its progress, and they do not pursue medical treatment until mouth opening is severely restricted (as was observed in this case). Hence, the true number of cases may be higher than the number reported.
Severe RMO causes considerable problems, including difficulties with oral health care. Due to reduced oral health, the caries rate is elevated in these patients, as was observed in this case. On the other hand, because of insufficient access to the posterior teeth, dental operations in such cases may be very difficult and even impossible for the dentist. In addition, surgical and prosthetic problems, and even problems with intubation for general anesthesia, are present in these patients; therefore, direct fiber-optic bronchoscopic nasal intubation is advocated [17,18].
Radiographic examination and observation of bony relations have an important role in the correct diagnosis of this abnormality. Plain radiography is the first step in the radiographic examination of patients with suspected CPH. Panoramic radiography should be prescribed routinely for cases in which clinical signs of jaw dysfunction are observed; the TMJ areas should then be surveyed to find the cause of the restriction. The next step is prescription of CT, cone beam computed tomography (CBCT), or magnetic resonance imaging (MRI). The relation between the coronoid process and the zygoma, and every change in their bony surfaces, is reviewable on CT and CBCT, especially in the 3D view. Not only are these techniques suitable for precise diagnosis, they are also adequate for surgical planning. Although MRI may help differentiate CPH from chronic disk displacement without reduction, this imaging is costly and not always accessible. On the other hand, once the coronoid process has been observed on CT or CBCT, MRI is no longer necessary.
Surgery improves the quality of life of patients with CPH. The purpose of surgery is to remove the coronoid process and eliminate the mechanical obstacle to mouth opening. Surgical access may be intraoral or extraoral [3]. Removing a severely hypertrophic coronoid process is not possible intraorally; in such cases, access is gained through a submandibular, (bi)temporal, preauricular, or coronal flap [19]. In addition, endoscopic methods that leave little scarring have been introduced in recent years [20,21]. Because early comprehensive postoperative physiotherapy has an important role in restoring mouth opening and preventing relapse, it is advised after the surgery. Active and passive stretching exercises with or without the use of bite blocks, dynamic devices, wedges, mouth screws, spatulas, or TheraBites have been reported to be helpful [22].
Relapse after treatment has been reported in some cases [19]. Possible causes of relapse are persistence of the underlying causes of hypertrophy, hematoma or postoperative fibrosis, an inadequate physiotherapy program, or a poor diet. Relapse may lead to a second coronoidectomy; hence, regular follow-up and routine panoramic radiography, especially with the mouth open, are advised for better observation of the margins of the restriction, the degree of mouth opening, and the condylar excursion. For postoperative follow-up in young patients and in cases with an abnormal mouth opening range after surgery or a possibility of relapse, MRI is a suitable option because it uses no ionizing radiation and because hematoma, fibrosis, or muscular atrophy at the surgical site are well detectable [23].
Conclusion
Many etiologies may cause reduction in mouth opening, but the important issue is to diagnose and eliminate the cause at the right time. As mentioned previously, this problem may lead to deterioration of quality of life and the loss of several teeth in a young patient.
General dentists have an important role. Despite CPH being a rare condition, it has to be considered as one of the etiologies of progressive mouth opening reduction. Such patients are best referred to maxillofacial radiologists, who can perform further examination and confirm the final diagnosis. Panoramic imaging accompanied by CT or CBCT is the most suitable option in these patients because it helps the surgeon plan the surgical protocol and reach the right diagnosis. | 2018-04-03T03:30:43.229Z | 2017-12-29T00:00:00.000 | {
"year": 2017,
"sha1": "96f8fdde1cea33133b4a3293d392bcc0388b6d43",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.radcr.2017.11.001",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "96f8fdde1cea33133b4a3293d392bcc0388b6d43",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
52813460 | pes2o/s2orc | v3-fos-license | Celiac-like Enteropathy Associated With Mycophenolate Sodium in Renal Transplant Recipients
Background Although colonic injury is a well-known complication of mycophenolic acid (MPA), the involvement of the upper gastrointestinal tract is less extensively documented. We present the occurrence of celiac-like duodenopathy manifested as a severe diarrhea syndrome in 2 renal transplant recipients on enteric-coated mycophenolate sodium. Methods The patients belong to a setting of 16 renal transplant recipients under MPA suffering from chronic diarrhea in the absence of MPA-related colitis. Results Both patients had a history of persistent diarrhea with significant weight loss. Colonic mucosa was unremarkable, whereas duodenal biopsies revealed celiac-like changes with increased epithelial cell apoptosis. Clinical symptoms completely resolved, and follow-up biopsies demonstrated normalization of histology after enteric-coated mycophenolate sodium withdrawal and switching to azathioprine. Conclusions Celiac-like enteropathy seems to represent a rare side effect of MPA-associated immunosuppressive therapy and should be taken into account in the differential diagnosis of diarrhea in transplant recipients treated with MPA particularly in the absence of MPA-related colitis. As macroscopic lesions are usually missing, blind duodenal biopsies are necessary to establish the diagnosis.
Mycophenolic acid (MPA) in its 2 available formulations, mycophenolate mofetil (MMF) and enteric-coated mycophenolate sodium (EC-MPS), is commonly used in solid organ transplantation as part of the maintenance immunosuppressive regimen. Gastrointestinal (GI) toxicity is the most common side effect occurring in up to 45% of renal transplant recipients. 1,2 The pattern of colonic injury has been well described as MMF-associated colitis characterized by crypt distortion and prominent crypt cell apoptosis mimicking inflammatory bowel disease and, less often, graft-versus-host disease (GVHD). 2,3 The upper GI tract is less frequently involved, whereas only few reports refer to isolated involvement of the small intestine causing chronic diarrhea and severe weight loss with no sufficient information available regarding pattern of injury. 4 In the present short communication, we report the development of celiac-like enteropathy in 2 renal transplant patients under EC-MPS treatment with a clinical history of chronic diarrhea and substantial weight loss.
MATERIALS AND METHODS
The cases to be presented belong to a series of 70 renal transplant recipients under MMF or EC-MPS with persistent diarrhea who underwent colonoscopy between 2003 and 2016 at Laiko General Hospital (Athens, Greece). Patients with infectious colitis were excluded, and no patient had a clinical history of inflammatory bowel disease. In 54 (77.15%) patients, MPA-related colitis was diagnosed histologically, whereas in 16 (22.85%) patients colonic biopsies did not display any significant changes. These 16 patients underwent upper GI endoscopy, and duodenal biopsies were obtained from different parts of the duodenum, including the bulbus. Histological evaluation showed no significant findings in 3 cases, nonspecific duodenitis in 9 cases (1 with mild villous atrophy), peptic duodenitis in 2, and in the remaining 2 cases celiac-like changes were identified. Immunohistochemical staining for the assessment of intraepithelial T lymphocytes was performed using anti-CD3 and anti-CD8 antibodies (DAKO A/S, Denmark). The number of CD3+ intraepithelial lymphocytes (IEL) per 100 villous epithelial cells (IEL count) was evaluated. Epithelial cell apoptosis was separately estimated in a total of 100 villi and 100 crypts on hematoxylin and eosin (H&E). The apoptotic index was defined as the mean number of apoptotic bodies per villus and per crypt.
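To make the two quantitative definitions concrete, a minimal sketch with synthetic counts, showing only the arithmetic of the definitions:

# IEL count and apoptotic index as defined in the Methods above.
def iel_count(cd3_positive_iels, villous_epithelial_cells):
    """CD3+ intraepithelial lymphocytes per 100 villous epithelial cells."""
    return 100.0 * cd3_positive_iels / villous_epithelial_cells

def apoptotic_index(apoptotic_bodies_per_structure):
    """Mean apoptotic bodies per villus (or per crypt) over all structures
    evaluated (100 villi and 100 crypts in this study)."""
    return sum(apoptotic_bodies_per_structure) / len(apoptotic_bodies_per_structure)

print(iel_count(45, 100))             # 45 IEL per 100 epithelial cells
print(apoptotic_index([2, 1, 3, 0]))  # 1.5 apoptotic bodies per structure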
The study was performed in accordance with the Declaration of Helsinki and with the approval of the local ethics committee. Informed consent was obtained from both renal transplant patients whose cases were presented in this manuscript.
Case 1
A 76-year-old man presented 16 months after kidney transplantation from a deceased donor with intermittent watery, nonbloody diarrhea and substantial weight loss. Before transplantation, he had been on hemodialysis for 8 years due to end-stage renal disease attributed to presumed glomerulonephritis. His daily maintenance immunosuppressive regimen comprised 1080-mg EC-MPS, 4-mg tacrolimus, and 4-mg methylprednisolone. He denied use of nonsteroidal anti-inflammatory drugs and antibiotics, alcohol abuse, recent travel, or similar symptoms among family members.
Symptoms manifested 13 months posttransplantation with sudden onset of semiwatery stools, approximately 5 per day. At presentation, almost 3 months after diarrhea onset, the patient complained of severe and almost daily episodes of watery diarrhea associated with abdominal cramps and low-grade fever, as well as of weight loss of approximately 10 kilos. Blind colonic biopsies did not display any significant changes. Considering that the absence of histological evidence does not rule out the possibility of MPA-induced colitis, the dose of EC-MPS was reduced to 720 mg daily, whereas the rest of the immunosuppressive regimen remained unchanged. After 3 months of clinical improvement, intermittent episodes of large-volume watery diarrhea reappeared. In addition, the patient reported nausea, decreased appetite, fatigue, and further weight loss of at least 10 kilos since his last visit. Of note, he had already been following a gluten-free diet for at least 4 months with no beneficial effect. On admission, he had a body mass index of 19 kg/m2 with significant muscle wasting and blood pressure of 100/70 mm Hg. Laboratory investigation showed normocytic, normochromic anemia with hemoglobin of 9.6 g/dL, leukocyte count of 4.9 × 10^5/L, serum creatinine level of 4.17 mg/dL (367 μmol/L), and serum albumin of 2.7 g/dL (27 g/L). Fecal leukocyte count, stool cultures, stool Clostridium difficile toxin, and examination for parasites were repeatedly negative. Cytomegalovirus was excluded by quantitative polymerase chain reaction in plasma and immunohistochemically in all biopsy specimens.
Esophagogastroduodenoscopy and colonoscopy did not show any macroscopic abnormalities. However, blind duodenal biopsies revealed mild villous atrophy and an increased number of intraepithelial T lymphocytes expressing CD3 and CD8 immunohistochemically (Figures 1A and B). These findings were compatible with celiac-like duodenopathy analogous to type 3A of the modified Marsh classification. 5 In addition, the apoptotic index was found to be increased, especially in the epithelium of the villi (Figure 1C). Histological findings are shown in detail in Table 1.
Small bowel capsule examination was normal. Celiac serologies (IgA and IgG gliadin antibodies, endomysium antibody, tissue transglutaminase antibody) were negative while consuming gluten. HLA genotyping was inconsistent with celiac disease.
Enteric-coated mycophenolate sodium was discontinued, and the patient was switched to azathioprine 2 mg/kg once daily. After cessation, GI symptoms resolved completely within 1 week. The patient also underwent gluten challenge without symptom recurrence. Follow-up laboratory tests showed recovery of renal function and correction of anemia. One month later, the patient reported significant improvement of his appetite with subsequent weight gain. A follow-up biopsy, 9 months after EC-MPS withdrawal while consuming gluten-containing diet, revealed normalization of villous atrophy and of IEL count with no evidence of abnormal apoptotic rate ( Figure 1D). At his last visit, 20 months after EC-MPS withdrawal, the patient had regained nearly 20 kilos and had no GI complaints.
Case 2
A 70-year-old male patient, who had received a renal graft from a deceased donor, developed a posttransplant progressively aggravating diarrhea syndrome. Before transplantation, the patient was on hemodialysis for 4 years due to end-stage renal disease on a background of unknown nephropathy. Patient's daily maintenance immunosuppressive regimen comprised 1080-mg EC-MPS, 1-mg tacrolimus, and 5-mg prednisolone. There was no history of nonsteroidal anti-inflammatory drugs and antibiotics use, alcohol abuse, recent travel, or similar symptoms in his household.
The patient's symptoms began 6 months after kidney transplantation, when he reported loose stools and rare episodes of nonbloody watery diarrhea with no other GI or systemic complaints. The dose of EC-MPS was reduced to 720 mg daily. Afterward, GI symptoms resolved completely for two and a half years. Subsequently, diarrhea recurred, manifesting with up to 6 episodes of severe watery, nonbloody stools per day. The patient also complained of colicky abdominal pain, anorexia, bloating, and intermittent nausea, and reported weight loss of approximately 5 kilos. Physical examination revealed a body mass index of 24 kg/m². Laboratory investigation showed a hemoglobin of 13 g/dL, a leukocyte count and CRP within the normal range, a serum creatinine level of 2.3 mg/dL (202 μmol/L), and a serum albumin of 3.5 g/dL (35 g/L). Fecal leukocyte count, stool cultures, stool Clostridium difficile toxin, and examination for parasites were repeatedly negative. Cytomegalovirus was excluded by quantitative polymerase chain reaction in plasma and immunohistochemically in all biopsy specimens.
Colonic biopsies exhibited mild nonspecific colitis, whereas esophagogastroduodenoscopy was unremarkable. However, blind duodenal biopsies revealed moderate villous atrophy and an increased number of intraepithelial CD3- and CD8-positive T lymphocytes (Figures 2A and B). These findings were compatible with celiac-like duodenopathy corresponding to type 3B according to the modified Marsh classification. 5 A high apoptotic index, predominantly in the villous epithelium, constituted an additional finding (Figure 2C). All histological findings are included in Table 1.
Small bowel capsule examination did not reveal any changes. Celiac serology and HLA genotyping for DQ2/DQ8 were negative. Nevertheless, a gluten-free diet was instituted for at least 3 months, with no significant clinical response. Subsequently, EC-MPS was discontinued, and the patient was switched to azathioprine 2 mg/kg once daily. After the switch, GI symptoms resolved completely within 1 week and did not recur after gluten challenge. Follow-up laboratory studies showed return of renal function to baseline values. One month later, the patient reported significant improvement of his appetite with a subsequent weight gain of approximately 6 kilos. Repeat endoscopy, performed 8 months after EC-MPS withdrawal while the patient was consuming a gluten-containing diet, demonstrated restitution of all pathological lesions (Figure 2D). At the most recent follow-up, 20 months later, he was in excellent clinical condition with no GI manifestations.
DISCUSSION
These 2 cases provide strong evidence that celiac-like duodenopathy is a rare but genuine complication of EC-MPS in kidney transplant recipients. The clinical manifestations in both cases included severe chronic diarrhea and substantial weight loss. Serology and HLA genotyping for celiac disease were negative. Gastrointestinal endoscopy was unremarkable, and only blind duodenal biopsies established the final diagnosis. Histology exhibited a combined pattern of villous atrophy and intraepithelial lymphocytosis, differing from genuine celiac disease by the coexistence of epithelial cell apoptosis and the absence of prominent lymphoplasmacytic mucosal infiltrates. 5 The lack of crypt damage and the presence of an increased number of IELs constituted differential diagnostic criteria against GVHD. 6 Besides apoptosis, there were no other similarities with the histological features encountered in our cohort of MMF-associated colitis. 3 Complete clinical response and normalization of histology were achieved after EC-MPS withdrawal.

Mycophenolic acid has the potential to affect both the upper and lower GI tracts, although large bowel involvement clearly predominates, representing a rather common side effect in renal transplant patients. [1][2][3][4] Mycophenolic acid-associated colitis is widely recognized as a distinct drug-induced colitis, and its spectrum of endoscopic and histological features has been well documented. 2,3,7 Limited data are available regarding upper GI tract involvement, including the duodenum, in symptomatic solid organ (mainly kidney) transplant patients on MPA. The first histological description of MPA-associated damage of the duodenal mucosa was reported by Ducloux et al 8 in 1998, who described villous blunting and crypt hyperplasia in the duodenum of a kidney transplant patient receiving MMF. Subsequently, a similar observation was derived from a large series of patients with chronic diarrhea and significant weight loss. 9 Duodenal villous atrophy was encountered in 16% of patients and was attributed to MMF and EC-MPS therapy in 86% of these cases. Mycophenolic acid withdrawal or dose reduction resulted in diarrhea cessation and normal histology in all patients but one. It should be mentioned that in a small number of cases, azathioprine has been implicated in chronic diarrhea and duodenal villous atrophy, which resolved after drug discontinuation. 9,10

Another specific finding identified as an MPA-associated lesion, not only in the lower but also in the upper GI tract, is prominent apoptosis. 3,4,11 In a small series of 12 duodenal biopsies, 4 cases presented GVHD-like features associated with villous blunting. 11 In another study of 17 duodenal biopsies, 82% exhibited increased apoptotic counts (>2 apoptotic bodies/100 crypts) with or without mucosal abnormalities. 4 In 2 cases, the mucosal changes were characterized as celiac-like due to additional intraepithelial lymphocytosis. 4 Our presented cases, together with the 2 aforementioned, are the only reported cases diagnosed as celiac-like duodenopathy in the setting of symptomatic renal transplant patients receiving EC-MPS and MMF. 4 It should be stressed that the aforementioned study was designed to evaluate mucosal injury and apoptotic counts in the upper GI tract compared with normal controls. The fact that clinicopathological data, both at baseline and during follow-up, are largely missing from that report renders our cases of prime importance in the presentation of an emerging drug-related entity.
Celiac-like duodenal pathology potentially associated with MPA therapy has been described in orthotopic liver transplant patients as well. 12 Four of 16 patients showing abnormal histology were on MPA and displayed celiac-like changes combining villous atrophy and intraepithelial lymphocytosis, in addition to increased apoptotic and endocrine cell counts and lamina propria eosinophils. Similar to our presented cases, MPA discontinuation or dose reduction resulted in improvement of symptoms within 1 to 3 weeks. However, follow-up biopsies were not available. 12 Celiac-like enteropathy represents an increasingly recognized, broad clinical and pathological spectrum of possibly unrelated disorders that share celiac-like changes in small intestinal biopsies and respond, both clinically and histopathologically, to withdrawal of the offending agent but not to a gluten-free diet. 13,14 Among the most frequently recognized celiac-like enteropathies are the medication-related forms. The list of implicated drugs is expanding and includes angiotensin receptor blockers, antimicrobials, and chemotherapeutic and immunosuppressive agents. 14 In a study evaluating adult patients with villous atrophy and negative celiac serology over a 10-year period, the 2 most common diagnoses were seronegative celiac disease (28%) and medication-related villous blunting (26%), the latter attributed to olmesartan in 16 of 19 cases and to MMF and methotrexate in the remaining 3 cases. 15 Celiac-like duodenopathy thus seems to be a rare complication of immunosuppressive therapy with MPA, manifesting a few months to several years after medication initiation with persistent diarrhea and weight loss. A celiac-like histological pattern combined with apoptosis is encountered, reminiscent of autoimmune enteropathy. One should be aware that this condition may be underdiagnosed because the duodenal mucosa may appear endoscopically normal. In addition, a preceding diagnosis of MMF-induced colitis usually renders upper GI endoscopy superfluous.
Our cases were both treated with EC-MPS, whereas most cases of villous atrophy, including those with celiac-like enteropathy described so far in the literature, received MMF. 4,9 Although a beneficial effect of switching from MMF to EC-MPS is a common observation in graft recipients, there are findings demonstrating no significant clinical differences between these 2 MPA formulations. [16][17][18] The complete resolution of histological lesions in our 2 cases a few months after MPA withdrawal is in agreement with the experience with olmesartan-associated enteropathy. 19 This common course provides evidence of underlying drug-induced mechanisms different from those of celiac disease per se, in which mucosal lesions resolve only months or even years after initiation of a gluten-free diet. 14,20 Although the association between MPA and celiac-like enteropathy has not yet been clarified, robust literature data support a causative link. 9,21 The multiple properties of MPA imply a complex pathogenetic process and incriminate possible immunologic, toxic, and inflammatory mechanisms. The direct action of MPA is determined by its antiproliferative impact, whereas its metabolites display proinflammatory effects. 1 Acyl-MPA glucuronide, a toxic metabolite produced by GI cells, may be pathogenetically involved, although local metabolite concentrations have not been reported. 22 Mycophenolic acid-induced alteration in cytokine production, which potentially affects gut homeostasis, may also play a contributory role. 17,23 However, the long latency period between drug initiation and the development of symptoms, and in particular the presence of histological features indicative of T lymphocyte-induced epithelial cell apoptosis, support an immune-mediated process. An underlying MPA-related immune dysregulation, in association with an immune response to putative autoantigens such as adducts formed by Acyl-MPA glucuronide metabolites or altered cellular proteins, should be considered. 24 The latter hypothesis is strengthened by the recognition of immunological disorders of the small intestine, such as autoimmune enteropathy and common variable immunodeficiency, which share a common histological pattern on a background of immunodeficiency. 3,25 In summary, celiac-like enteropathy seems to represent a rare side effect of MPA-based immunosuppressive therapy and should be taken into account in the differential diagnosis of diarrhea in transplant recipients treated with MPA, particularly in the absence of MPA-related colitis. As macroscopic lesions are usually missing, blind duodenal biopsies are necessary to establish the diagnosis. Clinical response appears to occur immediately after MPA withdrawal, with complete reversal of the mucosal lesions. | 2018-10-02T01:19:39.590Z | 2018-07-20T00:00:00.000 | {
"year": 2018,
"sha1": "92ec4741a07933c1ff3d5b98d22cc613e133bc7e",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1097/txd.0000000000000812",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "92ec4741a07933c1ff3d5b98d22cc613e133bc7e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
9800911 | pes2o/s2orc | v3-fos-license | On a trivial aspect of canonical specific heat scaling
We show that the canonical finite size scaling of the specific heat emerges naturally - and in some sense trivially - from the assumption that the microcanonical specific entropy exhibits no substantial system size dependence.
Introduction
Since the introduction of the Renormalization-Group by Wilson [1], we have learned much about critical phenomena and the striking feature of universality. Unfortunately, the majority of systems with nontrivial behaviour cannot be treated analytically. Nevertheless, there exist approximation schemes concerning finite systems which allow one to estimate the properties of the corresponding infinite system by proper extrapolation to the thermodynamic limit. A very powerful method for the investigation of the critical properties of an infinite system along these lines is the so-called finite size scaling theory introduced by Fisher et al. (see [2], [3] and references therein). Although hypothesised before the advent of the Renormalization-Group, finite size scaling may be understood within the framework of the latter. The main result of finite size scaling theory may be stated as follows: In the vicinity of the critical point of a given infinite system, the system size dependence of certain thermal properties of the corresponding finite system is governed by properties (namely, the critical indices) of the infinite system. Or, formulated slightly differently: In spite of the fact that the free energy density of a finite system is a completely analytic function of its variables, the system size dependence of certain derivatives of the free energy density is dictated by quantities which describe the non-analytic behaviour of the free energy density of the corresponding infinite system.
All thermal properties of a finite system (volume $V \equiv N := L^d$) are given by logarithmic derivatives of the canonical partition function

$$Z_N(\beta) = \sum_{x \in \Gamma} e^{-\beta H(x)} , \qquad (1)$$

where $H(x)$ represents the energy of a particular microstate $x$ of the system and the sum runs over all possible microstates which constitute the space $\Gamma$ of all the states available to the system. With the definition of the microcanonical density of states

$$\Omega_N(E) := \sum_{x \in \Gamma} \delta_{E,\,H(x)} \qquad (2)$$

and the microcanonical specific entropy

$$\hat{s}_N(\varepsilon) := \frac{1}{N} \ln \Omega_N(N\varepsilon) , \qquad (3)$$

the canonical partition function reads

$$Z_N(\beta) = \sum_{\varepsilon} e^{N[\hat{s}_N(\varepsilon) - \beta\varepsilon]} . \qquad (4)$$

Here, the sum runs over all possible energy-values of the finite system. As is clearly visible, the system size dependence of the partition function is due to two causes, namely the size dependence of the microcanonical specific entropy ŝ_N(ε), and the overall factor N in the exponential, which we will call the trivial system size dependence.
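The role of the trivial factor N can be made concrete with a toy numerical experiment. The entropy below is an invented smooth function (deliberately not the critical form discussed later), chosen only to show how the canonical energy weights of eq. (4) sharpen as N grows:

```python
import numpy as np

# Invented smooth toy entropy (illustration only; NOT the critical entropy (7)).
def s_hat(eps):
    return -2.0 * (eps - 0.5) ** 2

eps = np.linspace(-0.5, 1.5, 4001)
beta = 0.4  # arbitrary fixed inverse temperature

for N in (10, 100, 1000, 10000):
    a = N * (s_hat(eps) - beta * eps)   # exponent of eq. (4)
    w = np.exp(a - a.max())             # subtract the max for numerical stability
    w /= w.sum()
    mean = (w * eps).sum()
    width = np.sqrt((w * (eps - mean) ** 2).sum())
    print(N, f"{width:.4f}")            # width shrinks roughly as 1/sqrt(N)
```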
It is the aim of this paper to demonstrate that finite size scaling emerges naturally and in some sense trivially from the assumption that the critical properties of the infinite system are already contained in the microcanonical specific entropy of the finite system. Unfortunately, up to now we are able to show this only as far as the finite size scaling properties of the specific heat are concerned.
Although we never introduce a specific system explicitly by giving its Hamiltonian, we restrict our discussion to systems with short-range interactions. For this reason, standard finite size scaling is applicable and hyperscaling holds.
Remark: In the thermodynamic limit, the entropy ŝ_N(ε) is replaced by the Massieu function ŝ(ε, h/T) at zero magnetic field h (=: ŝ(ε)), which is the Legendre transform of the entropy s(ε, m):

$$\hat{s}(\varepsilon, h/T) = \max_m \left[ s(\varepsilon, m) + \frac{h}{T}\, m \right] . \qquad (5)$$

Here, m denotes the magnetization per particle. In this paper, the thus defined Massieu function will be called "microcanonical specific entropy".
In section 2, we give an explicit form for the microcanonical specific entropy of a system which undergoes a continuous phase transition with a power law singularity of the specific heat. In section 3, we show that the system size independence of the microcanonical specific entropy implies canonical finite size scaling, namely, the scaling of the specific heat maximum (section 3.1) and the scaling of the softening of the specific heat singularity (section 3.2). Since we are not able to prove the reverse direction of this statement, in section 4 we give some hints on the validity of the conjecture that the system size dependence of the microcanonical specific entropy is so weak as not to impact the canonical finite size scaling.
Microcanonical specific entropy vs. continuous phase transitions
The microcanonical specific entropy of a system which undergoes a continuous phase transition may be written as a sum of a singular part ŝ_sing(ε) and a correction term ŝ_corr(ε),

$$\hat{s}(\varepsilon) = \hat{s}_{\mathrm{sing}}(\varepsilon) + \hat{s}_{\mathrm{corr}}(\varepsilon) , \qquad (6)$$

where the correction term is needed to correctly describe the behaviour of the specific entropy outside the critical region.
If the corresponding specific heat shows a power law singularity, the choice

$$\hat{s}_{\mathrm{sing}}(\varepsilon) = \hat{s}_c + \beta_c(\varepsilon - \varepsilon_c) - \frac{\beta_c|\varepsilon_c|}{g}\left[A'\,\Theta(\varepsilon_c - \varepsilon) + A\,\Theta(\varepsilon - \varepsilon_c)\right]\left|\frac{\varepsilon - \varepsilon_c}{\varepsilon_c}\right|^{g} \qquad (7)$$

(with $g := \frac{2-\alpha}{1-\alpha}$) for the singular part of the specific entropy yields the correct behaviour of the singular part of the specific heat³. ŝ_c, ε_c and β_c are the values of the specific entropy, the specific energy and the inverse temperature at the critical point (β_c := 1/T_c); the step-function Θ(x) is defined by Θ(x) := 1 ∀ x > 0 and Θ(x) := 0 ∀ x ≤ 0. Indeed, since the microcanonical specific heat is given by

$$c(\varepsilon) = -\,\frac{(\partial\hat{s}/\partial\varepsilon)^2}{\partial^2\hat{s}/\partial\varepsilon^2} , \qquad (8)$$

differentiating the singular part (7) of the specific entropy twice with respect to the specific energy ε yields, in the vicinity of the critical point ε_c,

$$c_{\mathrm{sing}}(\varepsilon) \propto |\varepsilon - \varepsilon_c|^{-\alpha/(1-\alpha)} . \qquad (9)$$

Alternatively, by going over from ε − ε_c to the reduced temperature t := (T − T_c)/T_c via

$$|\varepsilon - \varepsilon_c| \propto |t|^{1-\alpha} , \qquad (10)$$

one obtains the singular part of the microcanonical specific heat as a function of the reduced temperature t:

$$c_{\mathrm{sing}}(t) \propto |t|^{-\alpha} . \qquad (11)$$

The correction term ŝ_corr(ε) of the specific entropy yields no contribution to the singular behaviour of the specific heat if it obeys the following condition:

$$\lim_{\varepsilon\to\varepsilon_c}\; \frac{\partial^2\hat{s}_{\mathrm{corr}}(\varepsilon)/\partial\varepsilon^2}{|\varepsilon - \varepsilon_c|^{\alpha/(1-\alpha)}} = 0 . \qquad (12)$$

³ In the case of a logarithmic singularity the last term in ŝ_sing(ε) should be replaced by a function of the form (ε − ε_c)² / ln|ε − ε_c|.
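To see where the exponents come from, it helps to spell out the short chain of substitutions behind (9)-(11); nothing below goes beyond the definition of g and the relation β(ε) = ∂ŝ/∂ε:

$$\frac{\partial^2\hat{s}_{\mathrm{sing}}}{\partial\varepsilon^2}\propto -\,|\varepsilon-\varepsilon_c|^{\,g-2},\qquad g-2=\frac{2-\alpha}{1-\alpha}-2=\frac{\alpha}{1-\alpha},\qquad g-1=\frac{1}{1-\alpha} ,$$

so that (8) gives $c_{\mathrm{sing}}(\varepsilon)\propto|\varepsilon-\varepsilon_c|^{-(g-2)}=|\varepsilon-\varepsilon_c|^{-\alpha/(1-\alpha)}$, while $|t|\propto|\beta(\varepsilon)-\beta_c|\propto|\varepsilon-\varepsilon_c|^{\,g-1}$ yields $|\varepsilon-\varepsilon_c|\propto|t|^{1-\alpha}$, i.e. eq. (10); inserting this into the previous relation reproduces $c_{\mathrm{sing}}(t)\propto|t|^{-(1-\alpha)\,\alpha/(1-\alpha)}=|t|^{-\alpha}$, i.e. eq. (11).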
3 Microcanonical specific entropy vs. finite size scaling of the canonical specific heat

In the rest of this paper, we will study finite size scaling properties of the canonical specific heat of a hypothetical N-particle system with specific entropy

$$\hat{s}_N(\varepsilon) = \hat{s}_{\mathrm{sing}}(\varepsilon) + \hat{s}_{\mathrm{corr},N}(\varepsilon) . \qquad (13)$$

This implies the assumption that, at least for sufficiently large N, the singular contribution to ŝ_N(ε) is identical to the singular part of the entropy of the infinite system. The correction term ŝ_corr,N(ε) may show some N-dependence, which should of course be consistent with the condition (12).
Having postulated the form of the entropy in (13), the canonical specific heat c_N(T) of the N-particle system follows directly from (4). We shall compare the scaling properties of the thus determined specific heat c_N(T) with the results of conventional finite size scaling theory [5]. At the critical temperature T_c of the infinite system, finite size scaling theory predicts for the value of the specific heat of the finite system

$$c_N(T_c) \propto L^{\alpha/\nu} . \qquad (14)$$

In the finite system the singularity is smeared. Scaling theory predicts for the width of the specific heat anomaly:

$$\Delta T(L) \propto L^{-1/\nu} . \qquad (15)$$

Here, ν is the critical exponent of the correlation length ξ(t) of the infinite system, L is the linear dimension of the finite system and T_c denotes the critical temperature of the infinite system.
We are now going to show that the finite size scaling relations

$$c_N(T_c) \propto L^{d\alpha/(2-\alpha)} \qquad (16)$$

and

$$\Delta T(L) \propto \frac{1}{L^{1/\nu}} \qquad (17)$$

emerge naturally from the postulated form (13) of the entropy.
Canonical specific heat scaling at T_c
The first step consists in the calculation of the n-th moment of the specific energy of the N-particle system with respect to the canonical distribution at the critical temperature of the infinite system,

$$\langle \varepsilon^n \rangle = \frac{1}{Z_N(T_c)} \sum_{\varepsilon} \varepsilon^n\, e^{N[\hat{s}_N(\varepsilon) - \beta_c\varepsilon]} . \qquad (18)$$
Z_N(T_c) denotes the canonical partition function of the N-particle system at T_c (cf. (4)). For sufficiently large N, it is justified to replace the sum over all possible energy-values by an integration along the energy-axis. Since the correction term ŝ_corr,N(ε) of the specific entropy will yield no contribution to the canonical quantities at T_c (for sufficiently large N again), we are concerned with integrals of the type

$$\int d\varepsilon\; \varepsilon^n\, e^{N[\hat{s}_{\mathrm{sing}}(\varepsilon) - \beta_c\varepsilon]} . \qquad (19)$$

Defining new amplitudes $B := A\beta_c|\varepsilon_c|^{1-g}/g$, $B' := A'\beta_c|\varepsilon_c|^{1-g}/g$ and renaming (ε − ε_c) → ε, we get

$$I_n = \int_{-\infty}^{0} d\varepsilon\; \varepsilon^n\, e^{-N B' (-\varepsilon)^g} + \int_{0}^{\infty} d\varepsilon\; \varepsilon^n\, e^{-N B \varepsilon^g} . \qquad (20)$$

Substituting x := NB′ε^g in the first and x := NBε^g in the second term of the integral, we end up with the following expression for I_n:

$$I_n = \frac{\Gamma\!\left(\frac{n+1}{g}\right)}{g\, N^{(n+1)/g}} \left[ (-1)^n\, B'^{-(n+1)/g} + B^{-(n+1)/g} \right] . \qquad (21)$$

Since we want to study the finite size scaling properties of the canonical specific heat c_N(T) at the critical temperature T_c of the infinite system, we have to look at the second central moment of the specific energy with respect to the canonical distribution:

$$c_N(T_c) = N\beta_c^2 \left[ \langle\varepsilon^2\rangle - \langle\varepsilon\rangle^2 \right] = N\beta_c^2\, \frac{I_2 I_0 - I_1^2}{I_0^2} , \qquad (22)$$

which, by (21), scales as

$$c_N(T_c) \propto N^{1 - 2/g} . \qquad (23)$$

With g := (2 − α)/(1 − α) and N = L^d, d being the dimension of configuration space, eq. (16) follows immediately from (23). In conjunction with the hyperscaling relation dν = 2 − α, this implies (14).
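The Γ-function formula for I_n, and in particular the N^{-(n+1)/g} scaling that drives (23), can be verified numerically. The sketch below (with arbitrary illustrative values for B, B′ and g, not parameters from any fit) compares the quadrature result with the closed form:

```python
import numpy as np
from math import gamma
from scipy.integrate import quad

# Arbitrary illustrative parameters (g > 2 corresponds to 0 < alpha < 1).
B, Bp, g = 1.3, 0.7, 2.1

def I_n_numeric(n, N):
    # I_n as in eq. (20): two half-line integrals of eps^n * exp(-N * amp * |eps|^g).
    left = quad(lambda e: e**n * np.exp(-N * Bp * (-e)**g), -np.inf, 0.0)[0]
    right = quad(lambda e: e**n * np.exp(-N * B * e**g), 0.0, np.inf)[0]
    return left + right

def I_n_closed(n, N):
    # Eq. (21): substitution x = N*B*eps^g (resp. x = N*Bp*(-eps)^g) gives Gamma functions.
    return gamma((n + 1) / g) / (g * N ** ((n + 1) / g)) * (
        (-1) ** n * Bp ** (-(n + 1) / g) + B ** (-(n + 1) / g)
    )

for n in (0, 1, 2):
    print(n, I_n_numeric(n, 50.0), I_n_closed(n, 50.0))  # the two columns should agree
```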
Scaling of the specific heat width ∆T
The specific heat singularity (11) of the infinite system is rounded in the corresponding finite systems. Since in the canonical ensemble there are no phase transitions in finite systems, this effect seems quite natural. The standard (finite size scaling) argument for this softening is that the specific heat of the finite system saturates for those temperatures where the correlation length ξ(t) of the infinite system becomes larger than the linear system size L. The correlation length ξ_N(t) of the finite system is bounded from above by a length which is of the order of magnitude of the linear system size L. For this reason, there is a temperature region within which the specific heat of the finite system deviates essentially from the specific heat of the infinite system. Any measure of the width of this region will do. For numerical convenience, the width ∆T(L) is defined as the temperature range where c_N(T) is larger than 80% of its maximum value. Having defined ∆T(L), it is an easy matter to compute it by numerical integration via eqs. (4) and (7) for any lattice size L. The various parameters appearing in (7) have not been chosen arbitrarily; we have taken the parameter set obtained by a fit to the (simulated) entropy data of a three-dimensional Ising model.
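A direct numerical implementation of this prescription might look as follows; the parameter values below are placeholders chosen only to make the script self-contained, not the 3D-Ising fit parameters mentioned above, and the singular entropy is taken in the form of eq. (7):

```python
import numpy as np

# Placeholder parameters for the singular entropy (7); NOT the 3D-Ising fit values.
eps_c, beta_c, s_c, A, Ap, alpha = -0.99, 0.44, 0.6, 1.0, 1.2, 0.11
g = (2.0 - alpha) / (1.0 - alpha)

def s_sing(eps):
    d = eps - eps_c
    amp = np.where(d > 0.0, A, Ap)  # amplitude A above, A' below eps_c
    return s_c + beta_c * d - (beta_c * abs(eps_c) / g) * amp * np.abs(d / eps_c) ** g

eps = np.linspace(eps_c - 0.8, eps_c + 0.8, 8001)

def c_N(T, N):
    # Canonical specific heat from eq. (4): c = N * var(eps) / T^2.
    a = N * (s_sing(eps) - eps / T)
    w = np.exp(a - a.max())         # stabilise the exponential
    w /= w.sum()
    mean = (w * eps).sum()
    return N * (w * (eps - mean) ** 2).sum() / T ** 2

def width_80(N):
    T = np.linspace(0.8 / beta_c, 1.2 / beta_c, 801)
    c = np.array([c_N(t, N) for t in T])
    hot = T[c >= 0.8 * c.max()]     # the 80%-of-maximum region
    return hot.max() - hot.min()

for L in (8, 16, 32):               # finer grids are needed for larger L
    print(L, f"{width_80(L**3):.4f}")  # expect roughly Delta T ~ L**(-1/nu)
```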
On the system size dependence of microcanonical specific entropies
In the previous section, we have shown that a system size independent microcanonical specific entropy implies the canonical finite size scaling relations. Unfortunately, we are not able to prove this statement in the reverse direction. Nevertheless, we can report at least two observations which are necessary (not sufficient) for the validity of the statement that the system size dependence of the microcanonical specific entropy has no considerable impact on canonical finite size scaling (if the systems are not chosen to be too small).
1)
If the microcanonical specific entropy shows no system size dependence and if the critical properties of the infinite system are already contained in the entropy of the corresponding finite systems, then it should be possible to extract information about the critical exponent α by performing fits of a function of the form (7) to the entropy data of finite systems obtained by, e.g., Monte Carlo simulation. And indeed, it has already been shown [4] that an entropy of the form (6) with the singular part given in (7) fits the data of a 10 × 10 × 10 Ising system very well, and it will be shown elsewhere that the same entropy with the same exponent g but slightly modified parameters ε_c, β_c is well suited to fit the data of larger 3D Ising systems (the values of the critical exponent α emerging from these fits are well consistent with the respective value of α in the thermodynamic limit, i.e. α_fit ∈ [0.08, 0.12]).
2) If the microcanonical specific entropy shows no substantial system size dependence, then it should make no difference whether the value of the specific heat of a finite system at the critical temperature of the infinite system is calculated by use of the entropy of the finite system or by use of the entropy of the infinite system. Fortunately, this can be checked in the case of the two-dimensional Ising model, where the entropy of the infinite system can be calculated from Onsager's solution [6]. At zero external field, the internal energy per particle as a function of the inverse temperature reads

$$\varepsilon(\beta) = -\coth(2\beta)\left[1 + \frac{2}{\pi}\left(2\tanh^2(2\beta) - 1\right) K_1(q)\right] , \qquad (25)$$

where

$$q := \frac{2\sinh(2\beta)}{\cosh^2(2\beta)} , \qquad K_1(q) := \int_0^{\pi/2} d\varphi\, \left(1 - q^2\sin^2\varphi\right)^{-1/2} , \qquad J \equiv k_B \equiv 1 . \qquad (26)$$

Here, J denotes the Ising coupling constant. Since the inverse temperature β is defined to be the derivative of the entropy ŝ(ε) with respect to the energy ε, the entropy can be calculated according to

$$\hat{s}(\varepsilon) = \hat{s}(\varepsilon_0) + \int_{\varepsilon_0}^{\varepsilon} d\varepsilon'\, \beta(\varepsilon') \qquad (27)$$

with arbitrary ε_0; β(ε) is obtained by inverting eq. (25). In the case of a logarithmic specific heat singularity, canonical finite size scaling theory predicts [7] c_N(T_c) ∝ ln(L). Having obtained the entropy of the infinite 2D Ising system, it is an easy matter to calculate the critical point specific heat using eq. (22). The result is shown in figure 2.
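A sketch of this numerical route (note that scipy.special.ellipk takes the parameter m = q², and the clip below merely avoids the m = 1 singularity at β_c) might read:

```python
import numpy as np
from scipy.special import ellipk
from scipy.integrate import cumulative_trapezoid

def u(beta):
    # Onsager internal energy per spin at h = 0, with J = k_B = 1 (eqs. (25)-(26)).
    q = 2.0 * np.sinh(2.0 * beta) / np.cosh(2.0 * beta) ** 2
    m = np.clip(q * q, 0.0, 1.0 - 1e-12)  # avoid m = 1 exactly at beta_c
    return -1.0 / np.tanh(2.0 * beta) * (
        1.0 + (2.0 / np.pi) * (2.0 * np.tanh(2.0 * beta) ** 2 - 1.0) * ellipk(m)
    )

beta = np.linspace(0.05, 2.0, 20000)
eps = u(beta)                        # energy per spin, decreasing with beta
order = np.argsort(eps)              # sort so that eps is increasing
eps_s, beta_s = eps[order], beta[order]
# Eq. (27): s(eps) - s(eps_0) is the integral of beta(eps') from eps_0 to eps.
s = cumulative_trapezoid(beta_s, eps_s, initial=0.0)
```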
Conclusion
We have shown that the finite size scaling relations (16) and (17) are trivial consequences of the postulate (13): for sufficiently large N, the entropy of the finite system was assumed to be identical to the entropy of the infinite system at least in the vicinity of the critical point. In this context, "trivial" means that the softening of the specific heat singularity is caused solely by the trivial factor N in the exponential of the canonical partition function (4). In the framework of this scenario it is therefore not astonishing that some properties of the finite system are governed by the critical indices of the infinite system: they are already contained in the entropy of the finite system but they are covered up by the averaging ("smearing") procedure of the canonical partition function (for a detailed discussion of this "smearing-effect" see [8]). For this reason it seems to be plausible that, as far as finite systems are concerned, we are in some sense blinded by the canonical formalism, which obscures the information already available in the microcanonical specific entropy. Indeed, the hypothetical system which we have discussed may seem to be a rather strange construction, but it is not as arbitrary as it seems, since we have already shown that an entropy of the type (7) is well suited to fit the data of a 10³ 3D Ising system. Note that this is by no means the only example of a system with a microcanonical specific entropy ŝ_N(ε) which shows no substantial N-dependence. We will report about other examples elsewhere.
Acknowledgements
The author wants to thank Alfred Hüller and Hajo Leschke for many stimulating discussions. | 2014-10-01T00:00:00.000Z | 1999-04-21T00:00:00.000 | {
"year": 1999,
"sha1": "4143c167d34b79061b2fc89ecb00b0268e05980b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "24503ccccdb1ca14f529e3e458b9c4d825a6d86b",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
2677323 | pes2o/s2orc | v3-fos-license | Ginseng pharmacology: a new paradigm based on gintonin-lysophosphatidic acid receptor interactions
Ginseng, the root of Panax ginseng, is used as a traditional medicine. Despite the long history of the use of ginseng, there is no specific scientific or clinical rationale for ginseng pharmacology besides its application as a general tonic. The ambiguous description of ginseng pharmacology might be due to the absence of a predominant active ingredient that represents ginseng pharmacology. Recent studies show that ginseng abundantly contains lysophosphatidic acids (LPAs), which are phospholipid-derived growth factors with diverse biological functions, including those claimed to be exhibited by ginseng. LPAs in ginseng form a complex with ginseng proteins, which can bind and deliver LPA to its cognate receptors with a high affinity. As a first messenger, gintonin produces the second messenger Ca2+ via G protein-coupled LPA receptors. Ca2+ is an intracellular mediator of gintonin and initiates a cascade of amplifications for further intercellular communications via the activation of Ca2+-dependent kinases and receptors and via gliotransmitter and neurotransmitter release. Ginsenosides, which have been regarded as the primary ingredients of ginseng, cannot elicit intracellular [Ca2+]i transients, since they lack a specific cell surface receptor. However, ginsenosides exhibit non-specific ion channel and receptor regulations. This is the key characteristic that distinguishes gintonin from ginsenosides. Although the current discourse on ginseng pharmacology is focused on ginsenosides, gintonin can definitely provide a mode of action for ginseng pharmacology that ginsenosides cannot. This review article introduces a novel concept of ginseng ligand-LPA receptor interaction and proposes to establish a paradigm that shifts the focus from ginsenosides to gintonin as a major ingredient representing ginseng pharmacology.
INTRODUCTION
Ginseng root (Panax ginseng CA Meyer) is one of the herbal medicines that have been widely used in Eastern Asian countries for thousands of years. Currently, ginseng is one of the most popular herbs used worldwide (reviewed by Jia and Zhao, 2009). Ginseng history records that it was first considered to be a complementary treatment for "energizing the body" or as a "tonic" and is regarded as a Chinese herbal "Qi" by Asians. Qi in relation to ginseng means energy and life force. Qi was translated by Brekhman into another word in 1966 that describes ginseng as an "adaptogen," which means "ginseng extract is believed to increase the body's ability to resist the damaging effects of stress and promote or restore normal physiological functions" (Brekhman et al., 1966). Currently, the term "adaptogen" is rather vague in terms of modern medicine, and Brekhman et al. (1966) did not provide further information on the active ingredient of ginseng that could play the role of an adaptogen. Instead, the adaptogenic effects of ginseng were translated into the following claims: promotes stamina, increases resistance to disease including cancer, improves physical performance as an ergogenic aid, reduces physical fatigue and mental stress, improves mental awareness, restores and enhances sexual function, and finally, increases life expectancy. However, evidence supporting these claims is still lacking, and further studies demonstrating these effects are necessary. Advanced analytical techniques have revealed that ginseng contains bioactive components such as ginsenosides, acidic polysaccharides, and polyacetylenes as well as other minor components (reviewed by Leung and Wong, 2010). Ginsenosides and their chemical structures were first discovered and identified in ginseng, and they are considered its primary components (Shibata et al., 1965). Ginsenosides are triterpene saponins that are found only in ginseng species. The molecular weights of ginsenosides are in the 1000 Da range, which makes it easy to isolate them from ginseng (Figure 3). The isolated ginsenosides have a peculiar bitter, sweet, or bittersweet aroma and taste compared to other components. Most ginseng efficacy-related studies have focused on the actions of its saponins (reviewed by Leung and Wong, 2010). However, accumulating evidence shows that these identified components of ginseng, especially the ginsenosides, do not represent the complete diversity of ginseng pharmacology. In particular, ginsenosides do not fully exhibit systemic actions in vitro and in vivo that can be associated with all the underlying molecular mechanisms observed in ginseng pharmacology (Table 1; reviewed by Nah, 2014).

TABLE 1 | Summary of comparisons of membrane signal transduction between ginsenosides and gintonin.

| Ginseng components | Ginsenoside | Gintonin | Reference |
| --- | --- | --- | --- |
| Chemical structure | One of the triterpene dammarane glycosides (ginseng saponins) | A complex of lysophosphatidic acids (LPAs) and ginseng proteins such as ginseng major latex-like protein 151 and ginseng major storage protein | Hwang et al. (2012), Nah (2014) |
| Endogenous presence in animals | No | Yes (LPAs present in brain, platelets, serum, and body fluids) | Hwang et al. (2012) |
| Cell surface membrane binding protein | Interacts with ion channels and receptors in a non-specific manner | LPA receptors | Hwang et al. (2012), Nah (2014) |
| Signal transduction pathway systems | No | Pertussis toxin-sensitive and -insensitive G proteins; phospholipase C and IP3 receptor activation | Hwang et al. (2012), Nah (2014) |
| Second messenger production and influence on effector systems | No | [Ca2+]i transient induction; Ca2+-dependent kinases, ion channels, and receptors | Hwang et al. (2012), Nah (2014) |
| Concentration required for in vitro cellular responses | High concentration of ginsenoside (µM) | Low concentration of gintonin (<nM) | Hwang et al. (2012), Nah (2014) |
| Capability of intracellular and intercellular communications | No | Yes (through regulation of intracellular ion channels or receptors, and glio- or neurotransmitter release) | Hwang et al. (2015), Kim et al. (2015a) |
Components of ginseng other than those noted above remain relatively unknown, despite the claims of its diverse efficacy.
A key clue suggesting the presence of a novel ingredient in ginseng is the crude ginseng total saponin (cGTS) fraction, which, when prepared before further isolation of individual ginsenosides, contains approximately 50% ginsenosides by weight. The biological properties of the components of the cGTS fraction besides the ginsenosides were previously unknown. Interestingly, the cGTS fraction was shown to mimic G protein-coupled receptor (GPCR) ligands such as acetylcholine by activating the endogenous Ca2+-activated Cl- channel in Xenopus oocytes (Choi et al., 2001a,b; Lee et al., 2004). Furthermore, the unidentified ingredients in cGTS that induced Ca2+-activated Cl- channel activation were not ginsenosides but rather behaved like an unidentified GPCR ligand (Pyo et al., 2011). Further studies showed that ginseng contains several types of lysophosphatidic acids (LPAs), which are endogenous phospholipid-derived growth factors in animals. In ginseng, the LPAs are isolated as a complex with ginseng proteins that stabilizes and prolongs their activity and delivers them to their cognate receptors, a characteristic that distinguishes them from other plant-derived LPAs (Choi et al., 2015; Figure 1). Gintonin, a complex of ginseng LPAs and proteins, activates LPA GPCRs with a high affinity. This was the first demonstration that ginseng also contains GPCR ligands. Interestingly, the traditional claims of ginseng efficacy listed above overlap with the known biological effects of LPAs in many respects (Table 2). Gintonin, via LPA receptor activation, provides systemic evidence for diverse effects of ginseng that ginsenosides do not (Figure 2; reviewed in Nah, 2012, 2014). The subsequent sections of this review illustrate the differences in the mode of action of ginsenosides and gintonin and propose a necessary re-establishment of the representative ingredient of ginseng, shifting the claims for the pharmacological properties of ginseng from ginsenosides to gintonin.
FUNCTIONAL LIGAND OF GINTONIN BUT NOT GINSENOSIDES IS ENDOGENOUS TO BOTH ANIMAL AND PLANT SYSTEMS
Human beings have isolated and used pharmacologically active, plant-derived ligands for a long time. Some plant-derived ligands mimic the physiological/pharmacological effects of endogenously Anti-metastasis though inhibition of autotoxin activity - Hwang et al. (2013) occurring ligands in animals. However, plant-derived active ligands acting on animal cells usually differ from endogenous ligands in their chemical structures. Interestingly, LPAs are commonly found in both animal and plant systems with the same chemical structure. In animals, Vogt (1957) first found that LPA exists in the intestine and brain lipids. Tokumura et al. (1978) first reported the presence of free LPA in plant soybean lecithin. Plant systems synthesize LPAs and plant-derived LPAs such as gintonin, can serve as LPA GPCR ligands in animal cells, although plant LPAs are merely metabolic intermediates in de novo lipid synthesis in plant cell membranes or for glycerophospholipid storage (Millar et al., 2000). Recent studies show that LPA is found in most cell types such as neuronal and glial and nonneuronal cells such as adipocytes and fibroblasts (Tigyi and Parrill, 2003). LPA has also been detected in body fluids such as serum, saliva, and follicular fluid where it performs diverse biological functions (Moolenaar, 1994). LPAs like gintonin exist in both animals as endogenous ligands and plant systems as metabolic intermediates, whereas ginsenosides exist only in ginseng but not animal systems. Although the chemical structure of ginsenosides is similar to that of the steroidal backbone, they are actually triterpenoid saponins that differ from steroids found in animals.
Gintonin is an exogenous functional ligand for LPA receptors. This is the first feature distinguishing gintonin from ginsenosides.
FIGURE 2 | Schematic diagram showing that gintonin-mediated in vitro cellular effects through LPA receptors in neurons and astrocytes are linked to in vivo pharmacological effects. The primary action of gintonin produces the second messenger Ca2+ via LPA receptor activation and regulates Ca2+-dependent ion channels and receptors, as well as gliotransmitter and neurotransmitter release. The ensuing intercellular communications via the released neurotransmitters (i.e., acetylcholine or glutamate) can be related to pharmacological effects that can finally be linked to improvement of learning and memory in the nervous system (Kim et al., 2015c). Gintonin also exhibits pharmacological effects against Alzheimer's disease by attenuating β-amyloid plaque formation and by ameliorating cognitive dysfunction via activation of the non-amyloidogenic pathway and restoration of cholinergic systems damaged by β-amyloid in a transgenic Alzheimer's disease animal model (Kim et al., 2015b). In astrocytes, gintonin-mediated ATP and glutamate release can be coupled to the regulation of neuronal activity (Kim et al., 2015a). In addition, gintonin as an exogenous LPA induces various cellular effects such as migration and proliferation of cells, as LPAs do, through LPA receptor activation. Gintonin also exhibits anti-metastatic activity via inhibition of autotaxin (ATX) activity. ACh, acetylcholine; AChE, acetylcholine esterase; sAPPα, soluble amyloid precursor protein α; ATX, autotaxin; ChAT, choline acetyltransferase; Glu, glutamate.
GINTONIN INTERACTS WITH PROTEINS ON ANIMAL CELL SURFACE MEMBRANES
Most animal- and human-derived first messenger endogenous ligands, such as hormones or neurotransmitters, have binding or interaction protein(s) on cell surface membranes, and each ligand binds to its specific surface protein, called a receptor (Kihara et al., 2015). The binding of a ligand to its receptor initiates the transfer of extracellular information to intracellular sites, even at very low ligand concentrations. For example, most hormones and neurotransmitters such as peptides, proteins, catecholamines, and acetylcholine elicit cellular responses at less than nanomolar concentrations (Kihara et al., 2015). Extracellular information may also be transferred to intracellular sites by permeation of ligands into the cell, where they bind to intracellular receptors. For example, most steroid hormones act on intracellular receptors (reviewed by Helzer et al., 2015). Animal- or plant-derived bioactive ingredients also exert their effects by acting on their respective receptors located on the cell surface or intracellularly in animal systems. These effects may be agonistic or antagonistic. Human beings have used plant-derived ligands for medicinal purposes since ancient times. Representative plant-derived medicinally active ligands include morphine, an analgesic isolated from the poppy plant, and cannabinoids, anti-glaucoma agents from marijuana (reviewed by Cichewicz, 2004).
Interestingly, ginseng has a long history of use in traditional herbal medicine, but its pharmacology and specific therapeutic applications are not well defined (reviewed in Nah, 2012, 2014). This might be due to a lack of information on the bioactive components of ginseng, with the exception of the ginseng saponins, which are well characterized (reviewed by Nah, 2012, 2014). Ginsenosides are the first recognized bioactive components of ginseng (reviewed by Nah et al., 2007); however, their current designation as the representative bioactive components of ginseng has several shortcomings. First, ginsenosides have no known specific extracellular or intracellular receptors in animal cells (Nah, 2014), and cells show no spontaneous responses following treatment with ginsenosides. To observe the pharmacological effects of ginsenosides, cells must be pre-stimulated by electrical currents, excitatory ligands, or other treatments, or subjected to injuries such as hypoxia or ischemia in the case of organs (Nah, 2014). Second, ginsenosides must be applied at high micromolar concentrations (EC50 or IC50 ≈ 30-97 µM) to elicit any physiological or pharmacological effects, compared to other endogenous or exogenous ligands (Choi et al., 2002; Nah et al., 2014). Third, the effects of ginsenosides are miscellaneous, non-selective, and receptor-independent (Figure 4). Ginsenosides lack specific membrane target proteins and interact non-selectively and indiscriminately with various plasma membrane proteins such as ion channels and receptors (Figure 4; reviewed by Nah, 2014). Gintonin, as a first messenger isolated from ginseng, binds only to cell surface LPA receptors with a high affinity and elicits cellular responses at nanomolar or sub-nanomolar concentrations (EC50 0.45-18 nM), primarily inducing [Ca2+]i transients (Figure 2; Pyo et al., 2011; Hwang et al., 2012; Nah, 2012). This is the second characteristic that distinguishes gintonin from ginsenosides.
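To put this potency gap in perspective, a simple concentration-response sketch can be drawn with a generic Hill equation. This is purely illustrative: a Hill slope of 1 is an assumption, the EC50 values are picked from within the ranges quoted above, and ginsenoside channel modulation is treated here, for comparison only, as if it were simple receptor activation:

```python
import numpy as np

def hill(conc_nM, ec50_nM, n=1.0):
    """Generic Hill equation: fractional response at a given concentration."""
    return conc_nM ** n / (ec50_nM ** n + conc_nM ** n)

conc = np.logspace(-2, 6, 9)                # 0.01 nM up to 1 mM
gintonin = hill(conc, ec50_nM=10.0)         # within the 0.45-18 nM range quoted
ginsenoside = hill(conc, ec50_nM=50000.0)   # within the 30-97 uM range quoted

for c, a, b in zip(conc, gintonin, ginsenoside):
    print(f"{c:>11.2f} nM  gintonin {a:.3f}  ginsenoside {b:.3f}")
```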
GINTONIN HAS A SPECIFIC SIGNALING PATHWAY FOR ACTIVATION OF SECOND MESSENGER
Endogenous or exogenous bioactive ligands activate cell surface receptors and initiate a cascade that amplifies the first messenger action via signal transduction pathways (reviewed by Hofmann and Palczewski, 2015). First, membrane signaling proteins are activated, and these then mediate the subsequent reactions following membrane receptor activation. Guanosine triphosphate (GTP)-binding proteins carry out the first step in transferring extracellular information from the first messengers to cytosolic effector systems, which include adenylate cyclase, phospholipase C (PLC), protein kinase C, and others. Finally, the activation or inhibition of these effector systems is coupled to the production or inhibition of second messengers, such as cyclic adenosine monophosphate (cAMP) production or Ca2+ release from intracellular stores. Endogenous or exogenous bioactive ligands activate receptors that are specifically coupled to cAMP, Ca2+, or other second messenger systems, depending on the receptor type (reviewed by Clister et al., 2015).
Ginsenosides alone do not elicit any cellular responses related to signal transduction pathways (Nah et al., 1995) and only affect intracellular Ca2+ or other cation concentrations when neuronal cells are pre-depolarized or neuronal receptors are pre-stimulated by excitatory ligands. For example, ginsenosides inhibit NMDA receptor-mediated Ca2+ influx only in the presence of NMDA (Lee et al., 2006). Ginsenoside action is thus limited to the membrane (Table 1; Figures 3 and 4). However, gintonin treatment induces spontaneous responses in cells that express endogenous LPA receptors, eliciting [Ca2+]i transients. Gintonin engages membrane signal transduction pathways and effector systems to induce [Ca2+]i transients, including pertussis toxin-sensitive and -insensitive Gαi/o, Gα12/13, and Gαq/11 proteins, cytosolic PLC, and inositol trisphosphate (IP3) receptors (Table 1; Hwang et al., 2012). Gintonin, but not ginsenosides, thus utilizes Ca2+ to exert further diverse Ca2+-dependent intracellular effects. This is the third feature that distinguishes gintonin from ginsenosides.
GINTONIN USES VARIOUS EFFECTOR SYSTEMS TO AMPLIFY CELLULAR EFFECTS
Gintonin acts on LPA receptors to activate cells via a transient increase in [Ca2+]i levels. There are numerous proteins that depend on Ca2+ for their activation, including various kinases, membrane ion channels, and receptors (reviewed by Berridge et al., 1998). Gintonin-mediated induction of [Ca2+]i transients is coupled to the Ca2+-dependent activation of various kinases [such as Ca2+/calmodulin-dependent protein kinase II (CaM kinase II), protein kinase C, and tyrosine kinases], ion channels (Ca2+-activated Cl-, Ca2+-activated K+, delayed rectifier K+, and Kv1.2), and receptors (NMDA and P2X1), as well as to hormone secretion (such as dopamine), gliotransmitter release [such as adenosine triphosphate (ATP) and glutamate], and neurotransmitter release (such as acetylcholine and glutamate; Shin et al., 2012; Choi et al., 2013b; Lee et al., 2013b; Hwang et al., 2015; Kim et al., 2015a). Ginsenosides do not have a specific signaling pathway for inducing cellular activation and neurotransmitter release. Instead, ginsenosides regulate various ion channels and receptors non-selectively (Figure 4; reviewed by Nah, 2014). Recent studies showed that ginsenosides inhibit ion channels and receptors by directly interacting with amino acids at channel pores, similar to channel blockers or toxins (reviewed in Nah, 2014). Ginsenosides simply decrease cytosolic cation concentrations by inhibiting Ca2+ or Na+ influx while enhancing GABAA and glycine receptor channel Cl- currents (Lee et al., 2008; Choi et al., 2009). Ginsenosides have no intracellular mediators that further amplify their effects; they regulate ion channel activities by directly interacting with membrane ion channel proteins (reviewed by Nah, 2014). Gintonin, but not ginsenosides, acts via a second messenger (Ca2+) and shows a consistent pattern in its actions that always involves cytosolic Ca2+, which mediates its pharmacological effects through various kinases, ion channels, and receptors. This is the fourth characteristic that distinguishes gintonin from ginsenosides.

FIGURE 4 | Schematic diagram of ginsenoside-induced regulation of various ion channels and receptors on the cell surface membrane. Ginsenoside (i.e., ginsenoside Rg3) actions on cell surface ion channels and receptors show several characteristics. First, ginsenosides regulate a variety of ion channels and receptors non-specifically, as illustrated here. Overall, however, ginsenoside actions decrease the excitability of excitable cells by inhibiting cation influx (i.e., inhibition of Ca2+ and Na+ channel activity, activation of K+ channels, and inhibition of ligand-gated ion channels such as 5-HT3, nACh, and NMDA receptors) and by stimulating anion influx (i.e., activation of GABAA and glycine receptor channels). Second, site-directed mutagenesis studies show that ginsenoside-induced ion channel and receptor regulation is achieved via interactions with the ion channel pore or the channel pore entryway, sharing binding sites with channel blockers or toxins (Nah, 2014). Third, ginsenosides themselves do not induce ion channel or receptor inhibition or activation in the resting state, without preceding stimulation of the ion channels or receptors by depolarization or ligand treatment. Thus, the biological or pharmacological effects of ginsenosides are observed when cells or organs are stimulated beyond the normal state, rather than through receptor mediation as with gintonin.
GINTONIN MEDIATES CELL-CELL NERVOUS SYSTEM COMMUNICATIONS
The most important characteristic of endogenous or exogenous ligands is the ability to transfer information from one cell to another to induce subsequent biological effects. Hormones and neurotransmitters play this role in information transfer between cells or from cells to organs (reviewed by Berridge et al., 2003). The elevation of [Ca2+]i by GPCR ligand-mediated activation is coupled to neurotransmitter release into the synaptic cleft, where the transmitter delivers presynaptic information to postsynaptic neurons (reviewed by Berridge et al., 1998). The gintonin-mediated [Ca2+]i transient is also coupled to gliotransmitter and neurotransmitter release (Hwang et al., 2015; Kim et al., 2015a). Furthermore, gintonin-mediated release of hormones and neurotransmitters can affect other neighboring or remote cells, resulting in long-term potentiation (LTP) and enhancement of synaptic transmission, with subsequent cognition-enhancing effects (Kim et al., 2015c; Park et al., 2015). Therefore, gintonin has specific modulating systems for cell-cell communication via neurotransmitter release in the hippocampus, ultimately leading to biological effects such as enhancement of cognitive behavior (Kim et al., 2015c; Park et al., 2015). Ginsenosides themselves do not elicit [Ca2+]i transients and are not capable of inducing intracellular and intercellular communications (Nah et al., 2007). This is the fifth feature that distinguishes gintonin from ginsenosides. However, ginsenosides might have several other receptor-independent cellular effects that are not related to cell surface receptor activation. For example, ginsenosides show antioxidant effects and reduce free radical-induced cell damage, which is also observed with most natural products such as fruits and vegetables (reviewed by Liu, 2012). Ginsenosides also inhibit platelet aggregation (reviewed by Mousa, 2010) and enhance non-specific immune responses, as also observed with many mushrooms with immunomodulatory activities (reviewed by Kang and Min, 2012). The functional comparisons of gintonin and ginsenosides are summarized in Table 1.

Hecht et al. (1996) first reported LPA as a ligand of the ventricular zone gene-1, which is abundantly expressed in the ventricular zone during mammalian brain neurogenesis. In subsequent studies, six LPA receptor subtypes have been cloned from the surface of various cells, and most organs widely express endogenous LPA receptor subtypes (Yung et al., 2015). LPA and its receptors play important roles from the embryonic to the adult stage, in nervous and non-nervous systems alike, including the brain and the cardiovascular, reproductive, and immune systems (Table 2).
GINTONIN BUT NOT GINSENOSIDES HAS A CLEAR MODE OF IN VITRO AND IN VIVO PHARMACOLOGICAL ACTIONS VIA LPA RECEPTORS
Although the LPA content of gintonin originates from ginseng, gintonin uses the same signal transduction pathways as animal-derived LPA in the induction of [Ca2+]i transients in neuronal and non-neuronal cells, as mentioned above. The acute or short-term application of gintonin, but not of ginsenosides, regulates various Ca2+-dependent ion channels and receptors in vitro (Choi et al., 2013a,b, 2014; Lee et al., 2013b; Hwang et al., 2015). This regulation of ion channels and receptors is coupled to cellular effects. For example, in the nervous system, the gintonin-mediated [Ca2+]i transient is coupled to the release of neurotransmitters such as acetylcholine, dopamine, and glutamate (Hwang et al., 2015; Kim et al., 2015a; Park et al., 2015). Gintonin-mediated Kv1.2 channel inhibition and NMDA receptor activation are closely associated with LTP induction and enhancement of synaptic transmission in the mouse hippocampus (Park et al., 2015). In addition, gintonin, but not ginsenosides, stimulates cell proliferation and migration in human umbilical vein endothelial cells and induces neurite retraction via pertussis toxin-sensitive and -insensitive G proteins (Hwang et al., 2015). Kim et al. (2015c) observed that hippocampal LTP was increased in mice previously treated orally with gintonin for 7 days compared to saline-treated mice. In addition, the hippocampi of mice previously treated with gintonin for 7 days showed increased expression of learning- and memory-related proteins such as phosphorylated cAMP-response element binding (pCREB) protein and brain-derived neurotrophic factor (BDNF). Finally, in a behavioral study, gintonin administration improved fear memory retention in the contextual fear-conditioning test in mice (Kim et al., 2015c).
Regarding the hippocampal cholinergic system, long-term administration of gintonin to wild-type mice increased the immunoreactivity of hippocampal choline acetyltransferase, which is responsible for acetylcholine synthesis (Kim et al., 2015b). This observation indicates that gintonin treatment not only stimulates acetylcholine release but also increases the level of the enzyme responsible for acetylcholine synthesis. In behavioral tests, gintonin treatment also reversed scopolamine-induced memory dysfunction in the passive avoidance and Morris water maze tests (Kim et al., 2015b). Gintonin thus showed boosting effects on the brain cholinergic system. Therefore, long-term oral administration of gintonin enhances cognitive functions via activation of cognition-related proteins and the cholinergic system.
Astrocytes were long considered to have simple structural functions and to act as metabolic supporters and protectors of neurons in the central nervous system. Recent studies show that astrocytes release gliotransmitters, which modulate neighboring neuronal activities by forming a tripartite synapse with neurons (Araque et al., 2014). LPA receptors are abundantly expressed in astrocytes (Tabuchi et al., 2000; Shano et al., 2008). In addition, astrocytes release LPA in the hippocampus, which then interacts with LPA receptors on neuronal presynaptic sites to induce hippocampal excitation by stimulating glutamate release (Trimbuch et al., 2009). In primary cortical astrocytes, gintonin, but not ginsenosides, induces a [Ca2+]i transient via LPA receptor signaling pathways. Gintonin, like LPA, stimulates the release of gliotransmitters such as ATP and glutamate, and this effect is [Ca2+]i-sensitive, because [Ca2+]i chelators abolish gintonin-mediated gliotransmitter release (Kim et al., 2015a). Therefore, astrocytes produce LPA for release, and gintonin as an exogenous LPA source stimulates gliotransmitter release via LPA receptors. These results imply that LPA receptors in astrocytes may act as positive autoreceptors. The LPA released from astrocytes modulates neuronal activities directly, via interaction with LPA receptors on neurons (Trimbuch et al., 2009), or indirectly, by triggering gliotransmitter release following the activation of astrocytic LPA receptors (Araque et al., 2014). Gintonin as an exogenous LPA source might likewise control neuronal activity in two ways: directly, through interactions with neuronal cell surface LPA receptors, and indirectly, via induction of gliotransmitter release from astrocytes expressing LPA receptors. Therefore, gintonin exerts its effects in the nervous system via regulation of neuronal and astrocytic systems through the release of gliotransmitters and neurotransmitters (Figure 2).
EXOGENOUS GINTONIN EXERTS HEALING EFFECTS IN INJURED ORGANS
Lysophosphatidic acid production, release, and receptor expression levels have been observed to change in pathophysiological conditions following cellular or organ injury or trauma. In non-nervous systems, injury to blood vessels activates platelets to release LPA, which induces platelet aggregation and facilitates blood coagulation to stop bleeding (Yoshida et al., 2003; Bolen et al., 2011). Autotaxin (also called lysoPLD, the enzyme that produces LPA from lysophosphatidylcholine) activity increased in the aqueous humor of the eye following corneal damage, and ischemia-reperfusion injury of the retina increased LPA release (Liliom et al., 1998; Savitz et al., 2006). Autotaxin activity for LPA production is also increased in patients with cancers, including breast cancer, melanoma, and ovarian cancer (Leblanc and Peyruchaud, 2015). These observations indicate that the endogenous LPA-LPA receptor system participates in healing processes in blood vessels and the eye, while LPA-producing enzymes (autotaxin) are involved in pathophysiological conditions such as cancer progression (Goldshmit et al., 2010). We observed that gintonin stimulated the in vitro proliferation and migration of human corneal epithelial cells for wound healing via LPA receptor activation. Interestingly, gintonin strongly inhibits the autotaxin activity released from melanoma cells and inhibits cell motility and migration, while having almost no effect on cell proliferation. In addition, oral administration of gintonin inhibited metastasis to the lung following administration of melanoma cells via the tail vein and inhibited tumor growth after subcutaneous transplantation of melanoma cells in mice. Gintonin treatment also significantly decreased necrosis, mitosis, pleiomorphism, and vascularity in tumor tissues. In the nervous system, there have been several reports that LPA concentrations, receptor levels, or autotaxin expression are also altered under pathophysiological conditions in humans. Plasma LPA concentration is elevated in patients with ischemic cerebrovascular disease (Li et al., 2008). Human brain neurotrauma caused an increase in LPA receptor expression levels (Savaskan et al., 2007; Frugier et al., 2011). In addition, patients with Alzheimer-type dementia showed increased autotaxin expression in their frontal cortices compared to normal brains (Umemura et al., 2006). Treatment with gintonin as an exogenous LPA source exhibited anti-Alzheimer's disease (AD) activity via LPA receptors. Gintonin-mediated LPA receptor activation is coupled to an increase in soluble amyloid precursor protein α (sAPPα) release instead of neurotoxic β-amyloid (Aβ) formation in neuroblastoma SH-SY5Y cells, whereas ginsenosides showed no such effects. Long-term oral administration of gintonin decreased Aβ plaque formation in the cortices and hippocampi and restored Aβ-induced memory dysfunction in a transgenic AD animal model.
Acetylcholine is an important neurotransmitter involved in cognitive brain functions such as learning and memory, and the brains of patients with AD show dysfunction of the cholinergic system (i.e., a decrease in brain acetylcholine concentration and choline acetyltransferase activity but an increase in acetylcholinesterase, the enzyme that hydrolyzes acetylcholine). Long-term oral administration of gintonin attenuated cholinergic dysfunction in the hippocampus by increasing brain acetylcholine concentrations and choline acetyltransferase activity and decreasing acetylcholinesterase activity (Kim et al., 2015b). Long-term oral administration of gintonin thus contributed to the restoration of adult brain cholinergic functions impaired by Aβ in a neurodegenerative AD animal model (Kim et al., 2015b).
Considering the previously described in vitro and in vivo actions of gintonin, the exogenous application of gintonin may contribute to restoring the condition of damaged cells or injured organs via regulation of LPA receptors and autotaxin activity. The functional comparisons of gintonin and LPA are summarized in Table 2.
PERSPECTIVES AND CONCLUDING REMARKS
The modern interpretation of ginseng pharmacology has advanced tremendously alongside the isolation of its components over the last few decades. However, a number of traditional herbal medicine practitioners still regard the efficacy of ginseng as a mystery. In early ginseng studies conducted prior to 1960 in Japan, Russia, and other countries, investigators focused on ginsenosides in an attempt to explain ginseng pharmacology, since they were the first components identified. However, after more than five decades of studies on ginsenosides, accumulating evidence shows that ginseng pharmacology extends beyond effects that can be attributed to ginsenosides. In other words, attributing the wide variety of pharmacological effects exhibited by ginseng solely to the ginsenosides is no longer plausible (Figure 3). To advance the elucidation of ginseng pharmacology and the development of ginseng-derived medicines, a component with specific targets (i.e., receptors) needs to be identified. Gintonin, which, unlike ginsenosides, targets LPA receptors, could be a lead candidate for the development of ginseng-derived medicines.
Gintonin can, for the first time, be introduced as a candidate for the novel ligand-receptor interaction concept in the modern description of ginseng pharmacology. Most therapeutic effects of modern medicines are attributed to ligand-receptor interactions. Current studies on gintonin, an LPA GPCR ligand, provide diverse evidence of its involvement in the effects of ginseng and, therefore, help to explain its pharmacology. In addition, since the activation of LPA receptors by LPA exhibits a variety of biological effects not yet observed with ginseng, ginseng pharmacology can be further expanded by elucidating the gintonin-LPA receptor relationship (Table 2). Furthermore, gintonin could serve as a newly defined, up-to-date adaptogenic component of ginseng and a novel candidate fitting the description of "a healing molecule that can restore disruption of physiological functions caused by damage or disease, via LPA receptors or autotaxin regulation." Ginseng extract is also considered a complementary and alternative medicine. Currently, more than 50% of medicines used in animals and human beings act via GPCRs, and these receptors along with their ligands are still targets for novel drug development. Many international companies have also concentrated their efforts on the research and development of drugs that act via LPA GPCRs (Llona-Minguez et al., 2015). Gintonin was shown to ameliorate neurodegeneration in an AD animal model via LPA receptors. Finally, gintonin could be a major lead candidate for development as a ginseng-derived natural medicine, not simply a complementary and alternative functional food or medicine. | 2016-05-04T20:20:58.661Z | 2015-10-27T00:00:00.000 | {
"year": 2015,
"sha1": "887d4a305dc324665fb2d7301e56214aaf5eadb5",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fphar.2015.00245/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "887d4a305dc324665fb2d7301e56214aaf5eadb5",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251767459 | pes2o/s2orc | v3-fos-license | Blood culture positivity in patients with acute appendicitis: A propensity score–matched prospective cohort study
Background and objective: The prevalence of bacteremia in acute appendicitis is unknown. We aimed to assess prevalence and predictive factors of bacteremia in adult patients with appendicitis. Methods: In this prospective propensity score–matched cohort study, patients were recruited as part of one single-center prospective observational study assessing appendicitis microbiology in concurrence with two randomized controlled trials on non-operative treatment of uncomplicated acute appendicitis. All patients evaluated for enrollment in these three trials between April 2017 and December 2018 with both a confirmed diagnosis of appendicitis and available blood culture on admission were included in this study. Potential predictive factors of bacteremia (age, sex, body mass index (BMI), body temperature, C-reactive protein (CRP), leukocyte count, comorbidities, symptom duration, and appendicitis severity) were assessed. Prevalence of bacteremia was determined by all available blood cultures followed by propensity score matching using sex, age, BMI, CRP, leukocyte count, and body temperature of the patients without available blood culture. Results: Out of the 815 patients with appendicitis, 271 patients had available blood culture and the prevalence of bacteremia was 12% (n = 33). Based on propensity score estimation, the prevalence of bacteremia in the whole prospective appendicitis cohort was 11.1%. Bacteremia was significantly more frequent in complicated acute appendicitis (15%; 29/189) compared with uncomplicated acute appendicitis (5%; 4/82) (p = 0.015). Male sex (p = 0.024) and higher body temperature (p = 0.0044) were associated with bacteremia. Conclusions: Estimated prevalence of bacteremia in patients with acute appendicitis was 11.1%. Complicated appendicitis, male sex, and higher body temperature were associated with bacteremia in acute appendicitis.
Introduction
With an incidence of 100-200 cases per 100,000 person-years, 1 acute appendicitis is one of the most common causes of acute abdominal pain, and abdominal infections are the second most common source of sepsis. 2,3 The over century-long paradigm of appendectomy as the standard treatment for all acute appendicitis patients has recently been challenged by the effectiveness and safety of non-operative treatment of computed tomography (CT)-confirmed uncomplicated acute appendicitis. [4][5][6][7][8][9][10][11][12] With epidemiological and clinical data supporting the hypothesis that uncomplicated and complicated acute appendicitis are different diseases, 5,[12][13][14][15] further research is needed on understanding and identifying these different forms of acute appendicitis. During the coronavirus pandemic (COVID-19), antibiotics were acknowledged as a safe alternative to surgery for uncomplicated acute appendicitis by the American College of Surgeons (COVID-19 Guideline for Triage of Emergency General Surgical Patients), 16 as non-operative treatment would allow limiting inpatient bed use and reallocation of health care resources.
To our knowledge, there is only one previous study, from 1984, 17 on blood culture positivity in adult patients with appendicitis, focusing on the comparison of blood, appendicular lumen, intra-abdominal, and wound cultures. In that study, the prevalence of blood culture positivity was 5% (7 out of 140 patients with appendicitis), and there was no correlation between the degree of appendicitis and the incidence of positive blood cultures. 17 There are studies assessing bacteremia postoperatively after appendectomy 18 and many studies with specimens obtained from the appendiceal lumen or swab samples from suppurative peritoneal fluid or periappendiceal abscess, [19][20][21] but only a few small studies on bacteremia on admission in pediatric patients with appendicitis. 22,23 In the early 1990s, a small retrospective study on children reported positive blood cultures in 17% and 8% of perforated and nonperforated acute appendicitis, respectively. 24 In a small prospective pediatric study, the prevalence of bacteremia in patients with acute appendicitis was 6%. 23 In a retrospective cohort of 1315 children, 288 patients had available blood culture data on admission, with a blood culture positivity prevalence of 0.35%. 22 A recent metagenome analysis study profiling bacteria in the ascites and blood of patients with an acute surgical abdomen found no positive blood cultures in patients with appendicitis. 25 Bridging the knowledge gaps in understanding the etiology and pathophysiology of uncomplicated and complicated acute appendicitis is of utmost importance in order to optimize the accuracy of the pre-intervention diagnosis of appendicitis severity, allowing the assessment and tailoring of all available treatment options accordingly. The aim of this study was to assess the prevalence of blood culture positivity in a large prospective patient cohort with CT- and/or clinically confirmed complicated or uncomplicated acute appendicitis. We also aimed to evaluate potential predictive factors associated with blood culture positivity, focusing mainly on appendicitis severity.
Study design
This prospective study was a pre-planned subgroup analysis of blood culture data collected at Turku University Hospital in Finland in the prospective observational cohort study MAPPAC 26 (Microbiology APPendicitis ACuta) and the concurrent randomized controlled trials APPAC II and APPAC III. 6,7 MAPPAC (NCT03257423) is a prospective clinical trial conducted in close synergy with the concurrent APPAC II (NCT03236961) and APPAC III (NCT03234296) randomized clinical trials (RCTs). The MAPPAC study has both a single-center and a multicenter arm, and this blood culture study is part of the single-center arm at Turku University Hospital. The aim of the MAPPAC study is to evaluate the microbiological and immunological aspects of the etiology of uncomplicated and complicated acute appendicitis. 26 APPAC II is a multicenter, open-label, noninferiority RCT comparing oral moxifloxacin with intravenous ertapenem followed by oral levofloxacin and metronidazole in the management of CT-confirmed uncomplicated acute appendicitis, aiming to demonstrate both the ability of oral antibiotics alone to manage acute appendicitis and the noninferiority of oral antibiotics compared with intravenous followed by oral antibiotics. 7 APPAC III is a multicenter, double-blind, placebo-controlled, superiority RCT comparing antibiotic therapy (intravenous ertapenem followed by oral levofloxacin and metronidazole) with placebo in the treatment of CT-confirmed uncomplicated acute appendicitis, aiming to evaluate the role of antibiotics in the resolution of uncomplicated acute appendicitis. 6 All patients gave written informed consent. The trial protocol was approved by the ethics committee of the Hospital District of Southwest Finland.
Study participants
Patients were recruited as part of one single-center prospective observational study (MAPPAC) in concurrence with the two RCTs (APPAC II and III), in which all patients aged 18-60 years admitted to the emergency department with clinical suspicion of acute appendicitis and uncomplicated appendicitis confirmed by CT were evaluated for RCT enrollment. In addition to enrolling the patients with uncomplicated acute appendicitis evaluated for enrollment in the RCTs, the MAPPAC trial also enrolled patients with complicated acute appendicitis. All patients evaluated for enrollment between 5 April 2017 and 10 December 2018 in these three trials at Turku University Hospital with a written informed consent, a confirmed diagnosis of appendicitis, and available blood culture on admission to the emergency room were included in this prospective sub-study. At Turku University Hospital, the aim was to obtain blood culture samples on all patients with confirmed appendicitis evaluated for participation in the three clinical trials. After completing enrollment, the inclusion criteria for this study were confirmed diagnosis of acute appendicitis (without previous episodes of acute appendicitis) and available blood culture taken on admission to the emergency room. Patients with acute appendicitis without available blood culture samples prior to any antibiotics on admission were excluded from this study. In patients undergoing non-operative treatment, appendicitis was confirmed by CT and in patients undergoing appendectomy, the diagnosis was confirmed both by CT and surgery with histology of the removed appendix. We selected all eligible patients within the original trial populations for the analyses performed in this predefined blood culture study assessing both the prevalence of blood culture positivity and the potential predictive factors for bacteremia.
Despite the study protocol instructions to obtain blood cultures from all patients with suspected acute appendicitis evaluated for enrollment in the MAPPAC, APPAC II, and APPAC III trials, this was performed in only 37% (n = 299) of the cases, owing to the major practical challenges of the acute care surgery setting of the trial. As a post hoc analysis plan to overcome this limitation of potential selection bias, we decided to use propensity score matching based on the whole patient cohort presenting with acute appendicitis.
Outcome measures
To overcome the limitation of potential selection bias from not having blood cultures for the whole appendicitis patient cohort, we used propensity score matching: the patients in this large prospective cohort with CT- and/or clinically confirmed complicated or uncomplicated acute appendicitis but no blood culture data were matched to the sampled patient population using sex, age, body mass index (BMI), C-reactive protein (CRP), leukocyte count, and body temperature.
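For readers less familiar with the technique, the propensity score here is the probability that a blood culture was obtained, given the six matching covariates. The paper does not state the estimation model; the logistic-regression form below is the standard choice and is assumed only for illustration (C = 1 denotes a patient with an available blood culture):

```latex
% Assumed logistic-regression propensity score (model not stated in the paper).
% C = 1: blood culture available; x_1, ..., x_6: sex, age, BMI, CRP,
% leukocyte count, and body temperature.
e(x) = P(C = 1 \mid X = x) = \frac{1}{1 + \exp\left(-\beta_0 - \sum_{j=1}^{6} \beta_j x_j\right)}
```

Patients without a culture are then matched to sampled patients with a similar e(x), and the positivity rate observed among the sampled patients is carried over to estimate the whole-cohort prevalence.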
Potential predictive factors associated with blood culture positivity were evaluated in this study, with appendicitis severity as our primary variable. Other characteristics evaluated were age, sex, BMI, body temperature, CRP, leukocyte count, duration of symptoms, and clinically significant comorbidities potentially having an impact on blood culture positivity. A detailed list of the evaluated comorbidities is presented in Supplementary Table S1. Our definitions of uncomplicated and complicated acute appendicitis follow all our APPAC trials. 4,6,7,26 The criteria for a radiological diagnosis of uncomplicated and complicated acute appendicitis are defined in Table 1. All clinical diagnoses were assessed in a blinded manner by two investigators unaware of each other's evaluation (S.S. and J.H.). In cases of disagreement, the clinical diagnosis was reviewed by a third investigator (P.S.). The presence of an appendicolith has been shown to be associated with a more complicated course of the disease. 9,12 As the definitions of complicated appendicitis are not yet internationally uniform and standardized, we also performed a subgroup analysis classifying patients presenting only with an appendicolith but no other complications as uncomplicated.
Blood cultures were performed at the Department of Clinical Microbiology of Turku University Hospital using the Bactec™ FX blood culture system (BD Diagnostic Systems, Heidelberg, Germany), and identification of bacteria was done with matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) using the MALDI Biotyper® instrument and MBT Compass Library (Bruker Daltonics, Bremen, Germany). All microbes found in blood cultures and all antimicrobial treatments were documented.
Statistics
Prevalence of blood culture positivity was calculated directly from the 271 patients with available blood culture data. Propensity scoring was used to match this subpopulation to patients with acute appendicitis but no available blood culture using sex, age, BMI, CRP (categorized below and above the reference limit), leukocyte count (categorized below and above the reference limit), and body temperature (categorized below or above 37.5°C). The matched cohort (n = 544) consisted of patients with confirmed acute appendicitis but no available blood culture. The matching used the patients with an available blood culture sample and all of the propensity score matching parameters (n = 259; blood culture positive, n = 31).

Table 1. Structured radiological report including radiological criteria and categorization of acute appendicitis.
1) Appendix visualization. Report one of the following: Not visualized / Partly or unclearly visualized / Completely visualized
2) Appendix transverse diameter (mm)
3) Probability of appendicitis. Report one of the following: Not likely / Rather unlikely / Rather likely / Very likely
4) Categorization of the appendicitis. Report either I or II, if any:
I. Uncomplicated appendicitis: transverse diameter >6 mm with typical findings: wall thickening and enhancement; periappendiceal edema and/or a minor amount of fluid
II. Complicated appendicitis: the above-mentioned criteria for appendicitis with at least one of the following:
- Appendicolith: >3 mm stone within the appendix
- Abscess: periappendiceal walled-off collection with enhancing walls
- Perforation: appendiceal wall enhancement defect and periappendiceal excess of fluid and/or infectious phlegmon and/or extraluminal air
- Tumor: tumor-like prominence of the appendix
5) Other diagnosis. Report if any: Diverticulitis / Complicated ovarian cyst / Pelvic inflammatory disease / Colitis / Ileitis / Intestinal obstruction or ileus / Ureter stone / Hydronephrosis / Tumor / Other diagnosis
Categorical variables were summarized with counts and percentages, and continuous variables with mean and standard deviation (SD). In addition, range was reported for age. The association between blood culture positivity (positive/negative) and appendicitis severity (uncomplicated/complicated), sex, comorbidities, duration of symptoms (categorized), and antibiotics (categorized) was examined using Fisher's exact test. Comparison of age, BMI, body temperature, CRP, and leukocyte count between the blood culture positivity groups was performed using one-way analysis of variance. Square root transformation was used for CRP and leukocyte count to fulfill the assumption of normality. The normality assumption was checked from studentized residuals.
Confidence intervals (CIs) of 95% were calculated. All statistical tests were performed as two-sided, with a significance level set at 0.05. The analyses were programmed using SAS software, version 9.4 for Windows (SAS Institute Inc., Cary, NC, USA).
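To make these steps concrete, below is a minimal Python sketch, not the authors' SAS code, of the two key analyses: propensity score matching of culture-sampled to unsampled patients, and Fisher's exact test of bacteremia against appendicitis severity. The covariates are synthetic placeholders, the 1-nearest-neighbor matching rule is an assumption (the paper does not specify its matching algorithm), and only the group sizes and the 2 × 2 counts are taken from the paper.

```python
import numpy as np
from scipy.stats import fisher_exact
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Synthetic stand-ins for the cohort: 271 culture-sampled and 544 unsampled
# patients, each described by six matching covariates (sex, age, BMI, CRP,
# leukocyte count, body temperature); real patient data are not reproduced.
n_sampled, n_unsampled = 271, 544
X_sampled = rng.normal(size=(n_sampled, 6))
X_unsampled = rng.normal(size=(n_unsampled, 6))

# (1) Propensity score: P(culture taken | covariates) via logistic regression.
X = np.vstack([X_sampled, X_unsampled])
y = np.concatenate([np.ones(n_sampled), np.zeros(n_unsampled)])
ps = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]

# 1-nearest-neighbor matching on the score (assumed rule): each unsampled
# patient inherits the bacteremia status of the closest sampled patient.
nn = NearestNeighbors(n_neighbors=1).fit(ps[:n_sampled].reshape(-1, 1))
_, idx = nn.kneighbors(ps[n_sampled:].reshape(-1, 1))
bacteremia = rng.random(n_sampled) < 33 / 271        # observed 12% rate
cohort = np.concatenate([bacteremia, bacteremia[idx.ravel()]])
print(f"estimated whole-cohort prevalence: {cohort.mean():.1%}")

# (2) Bacteremia vs. appendicitis severity; 2x2 counts from the Results
# (29/189 complicated, 4/82 uncomplicated).
table = [[29, 189 - 29],   # complicated: bacteremia+, bacteremia-
         [4, 82 - 4]]      # uncomplicated
odds_ratio, p_value = fisher_exact(table)
print(f"Fisher's exact test: OR = {odds_ratio:.2f}, p = {p_value:.3f}")
```

Part (2) yields a p-value close to the reported 0.015; part (1) only illustrates the mechanics on random data, so its output is not meant to reproduce the 11.1% estimate.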
Results
Between April 2017 and December 2018, there were a total of 815 patients with confirmed acute appendicitis, and of these, 299 (37%) patients had blood culture samples taken. After exclusions, 271 eligible patients with both acute appendicitis and an available blood culture on admission remained and were divided according to the presence or absence of bacteremia. Fig. 1 shows the patient flow; patient demographics and baseline characteristics are presented in Table 2. Of the 271 patients with blood culture data, the majority (70%, 189/271) presented with complicated acute appendicitis and 30% (82/271) had uncomplicated acute appendicitis.
Among 271 eligible patients with both confirmed appendicitis and available blood culture on admission, 33 patients (12%, 33/271) had bacteremia. The mean age of these 33 patients was 48 years (range, 20-69 years) and 24 (73%) were male. Of these 33 patients with bacteremia, 31 had appendectomy and 2 patients with uncomplicated acute appendicitis were treated with antibiotics only. In this large prospective patient cohort (n = 815), the propensity score-matched prevalence of bacteremia was 11.1%.
When comparing the prevalence of blood culture positivity, it was significantly more common in complicated acute appendicitis compared with uncomplicated acute appendicitis. Bacteremia was diagnosed in 15% (29/189) and 5% (4/82) of the patients with complicated acute appendicitis and uncomplicated acute appendicitis, respectively (p = 0.015).
Out of these 33 patients, 24 had complicated acute appendicitis presenting as perforation, gangrene of the appendix, or periappendicular abscess. In addition, five patients with bacteremia and complicated acute appendicitis presented with only an appendicolith and no other signs of complicated appendicitis. When analyzing the patients with only an appendicolith but no other complications as uncomplicated, the prevalence of bacteremia remained significantly more frequent in complicated acute appendicitis (16%; 24/150) compared with uncomplicated acute appendicitis (7%; 9/121) (p = 0.039). Of the four patients with uncomplicated acute appendicitis and bacteremia, there were one woman and three men aged between 24 and 41 years with body temperature between 38°C and 39°C; CRP ranged from 52 to 131 mg/L and leukocyte count from 8.2 to 29.9 × 10⁹/L. Two of these patients had ulcerative colitis. Two of the four were treated with appendectomy and the other two with antibiotics, and all of them had oral antibiotics for 1 week. All four patients experienced an uneventful recovery with no need for further medical or surgical treatment. Medical records were searched in January 2020, and none of these four patients had had a recurrence or a readmission to the hospital for any abdominal problems.
Higher body temperature and male sex were the only two factors associated with blood culture positivity in patients with appendicitis; no other measured parameter had any association with blood culture positivity. Mean body temperature was 38.2°C (SD, 0.9) in patients with bacteremia and 37.8°C (SD, 0.7) in those without (p = 0.0044). Of the patients with bacteremia, 73% (24/33) were male, whereas of the patients without bacteremia, 50% (120/238) were male (p = 0.024). Among the 271 patients with an available blood culture on admission, complicated acute appendicitis was significantly more common in men than in women: 77% (111/144) versus 61% (78/127), respectively (p = 0.0056). The association of higher body temperature with blood culture positivity was similar in men and women (37.8°C (SD, 0.7) and 37.9°C (SD, 0.8), respectively), making body temperature an independent factor predicting blood culture positivity. When comparing blood culture results by both sex and body temperature, the mean body temperature in men was 38.1°C (SD, 0.8) with bacteremia and 37.8°C (SD, 0.7) without, and in women 38.6°C (SD, 1.1) and 37.8°C (SD, 0.7), respectively. There was no statistical difference between men and women in the association between body temperature and blood culture results (p = 0.53). Duration of symptoms had no association with blood culture positivity (p = 0.21), even though a longer duration of symptoms was associated with complicated appendicitis (p = 0.0001).
A total of 14 different bacterial species were found in the blood cultures (Table 3). There were 46 isolates in the 33 episodes of bacteremia: 25 anaerobes, 13 Gram-negative aerobes, and 8 Gram-positive aerobes. The most common bacteria were Bacteroides fragilis (n = 13) and Escherichia coli (n = 12). A variety of different antibiotics (n = 17) was used in this study, partly reflecting the ongoing concurrent RCTs.
Discussion
In this prospective cohort of 271 patients with both confirmed acute appendicitis and an available blood culture on admission, the prevalence of bacteremia was 12%. In the propensity score-matched larger cohort of all 815 patients with appendicitis, the prevalence was 11.1%. Blood culture positivity was significantly associated with complicated acute appendicitis compared with uncomplicated acute appendicitis. To our knowledge, there is only one previous study assessing blood culture positivity in adult patients with acute appendicitis. 17 The bacteremia prevalence rates in that study 17 and in the earlier, mainly pediatric, patient cohorts [22][23][24] are corroborated by the outcomes of this study. One retrospective study on children reported bacteremia of 17% and 8% in perforated and non-perforated acute appendicitis, 24 whereas in our study bacteremia was diagnosed in 15% and 5%, respectively. There was wide variation in the overall prevalence in the earlier studies, ranging from 0.35% to 17%, [22][23][24] and as all these other studies were performed in pediatric populations, they are not directly comparable to our study of adult patients.

[Fig. 1. Flow chart of study patients: 815 patients with confirmed acute appendicitis; 516 patients with confirmed acute appendicitis but no blood culture samples taken; 28 excluded for not meeting inclusion criteria (18 blood culture sample taken after surgery; 3 administration of antibiotics before blood culture). Footnotes: 1 All 29 patients were treated with appendectomy, and 2 patients received only a preoperative dose of antibiotics; appendicitis was considered complicated when presenting with an appendicolith, perforation, periappendicular abscess, tumor, or a clear intraoperative finding of gangrene supported by histopathology. 2 Of the four patients, two were treated with appendectomy and two with antibiotics only. *Patient treatment: emergency appendectomy (n = 240), initial antibiotics for CT-confirmed uncomplicated acute appendicitis (n = 28), and periappendicular abscess with initial antibiotics followed by interval appendectomy (n = 3).]

Both epidemiological and clinical studies have shown that non-operative treatment for uncomplicated acute appendicitis is efficient, safe, and cost-effective. 4,5,[7][8][9][27][28][29][30] This underlines the importance of understanding the etiology, pathophysiology, and diagnostic and clinical findings of uncomplicated and complicated acute appendicitis, enabling the optimization of different treatment alternatives for patients with the extremely common surgical emergency of appendicitis.
Other predictive factors of bacteremia were male sex and higher body temperature, and the majority (73%) of the patients with bacteremia in our study were male. In our study, complicated appendicitis was significantly more common in men than in women, 77% (111/144) versus 61% (78/127), respectively. The higher risk of complicated appendicitis in men could explain the association between male sex and blood culture positivity, and there might not be a direct association between male sex and bacteremia. In our study, a longer duration of symptoms was a risk factor for complicated appendicitis, but there was no statistically significant difference in blood culture results. The association of higher body temperature with blood culture positivity was similar in both sexes, making it an independent factor predicting blood culture positivity. The only previous study on adults, by Lau et al., reported that predictive factors associated with an increased incidence of septic complications included late appendicitis, a positive wound culture at the end of the operation, longer duration of symptoms (over 36 h), and higher patient age (over 50 years). 17

There were two patients with bacteremia and uncomplicated acute appendicitis treated without source control, that is, without appendectomy, and interestingly both of these patients had an uneventful recovery from their appendicitis. [Table 3, fragment: Clostridium species 3 (7); Alistipes onderdonkii 2 (4).] Recent studies have shown that uncomplicated acute appendicitis may also resolve with symptomatic treatment only. 6,31 However, the notion that rare cases of patients with uncomplicated acute appendicitis could also present with bacteremia and have an uneventful recovery after non-operative treatment without source control may somewhat challenge the need for a longer course of intravenous antibiotics for all patients with bacteremia. Two additional patients with blood culture positivity and complicated acute appendicitis were treated with appendectomy and received only preoperative antibiotics, as their blood culture results were obtained only after discharge, without a clinical need for treatment alteration. In a search of electronic hospital records in January 2020, these two patients had recovered from appendicitis without complications or recurrent infections. A total of 14 different bacterial species were present in the blood cultures. Most bacteria found were expected pathogens, such as E. coli and B. fragilis, which are commonly involved in gastrointestinal septicemia and bacteremia. Among the other bacteria there were both pathogenic bacteria and normal microbiota of the mouth and other parts of the gastrointestinal tract. This study has several limitations. First, only 37% (299/815) of the patients with confirmed appendicitis had a blood culture taken in the emergency department on admission, despite the instructions to obtain blood cultures from all patients with suspected complicated acute appendicitis evaluated for enrollment in the MAPPAC trial and from all patients with uncomplicated acute appendicitis enrolled in the APPAC II and III RCTs. This limitation was not driven by noncompliance with the study protocol but by the acute care setting of the study, with a large variety of physicians at the emergency department not remembering the instructions; thus, from a clinical perspective, the majority (70%) of the blood cultures were obtained from patients with complicated acute appendicitis. A second limitation is that in this study population men had a higher risk of complicated appendicitis, possibly explaining the association between male sex and bacteremia; however, owing to the sample size and the lack of blood culture samples on admission in the whole patient cohort, this issue cannot be determined by our study.
The main strength of our study is the large prospective patient cohort (n = 815), enabling both the propensity score matching and the assessment of predictive factors owing to prospective data collection of all essential clinical parameters. Another strength is the accuracy of the differential diagnosis between uncomplicated and complicated acute appendicitis in every patient, by CT and/or surgery with histology of the removed appendix, partly based on the synergy of the three concurrent trials (MAPPAC, APPAC II, and APPAC III).
In summary, the estimated overall prevalence of blood culture positivity in patients with acute appendicitis was 11.1%. Complicated appendicitis, male sex, and higher body temperature were associated with blood culture positivity in acute appendicitis. | 2022-08-25T06:17:59.977Z | 2022-08-24T00:00:00.000 | {
"year": 2022,
"sha1": "9e21670bf57100dca5beb04b455c04894d711085",
"oa_license": "CCBY",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/14574969221110754",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "3d8841d69553264b388072b6abf46d6cee704a59",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
11660605 | pes2o/s2orc | v3-fos-license | 4- Estrogenicity of Environmental PCBs The paper by Bergeron, Crews, and McLachlan, "PCBs as Environmental Estrogens: Turtle Sex Determination as a
Estrogenicity of Environmental PCBs
The paper by Bergeron, Crews, and McLachlan, "PCBs as Environmental Estrogens: Turtle Sex Determination as a Biomarker of Environmental Contamination" (EHP 102:780-781), presents data on the estrogenic activity of 11 chlorinated biphenyls or diphenyl ethers, or hydroxylated derivatives thereof, selected so as to represent a variety of structural types. Some of the PCB types examined (e.g., compounds A, C, D, E, and J) arise from PCB congeners actually detectable in the commercial Aroclors (1) and hence also in environmental samples, whereas other PCB types (e.g., those with heavily uneven chlorination of the two rings, as in compounds F, G, and L) arise from PCB congeners that are not detectable in either the Aroclors themselves (1) or even environmentally transformed PCB compositions (2). It is noteworthy that the only compounds found to have statistically significant estrogenic activity (compounds F and G) both represented 4-hydroxylation products of PCB congeners belonging to the nonenvironmental group, whereas the five compounds representative of PCB structures actually present in the environment were all negative. In short, the results present zero evidence that environmental PCBs present a risk of estrogenic activity.
This is hardly what the authors claim, however. In their discussion they state (p. 781): "This report contributes laboratory evidence of the effect of PCBs on sex determination ... and serve as a warning of conditions threatening wildlife populations. The PCB levels reported here as effective in disrupting normal gonadal differentiation in the turtles are comparable to average levels of PCBs found in human breast milk in industrialized nations." This most misleading statement represents a false alarm that can only impede the search for the environmental contaminants that actually do present estrogenic risk.
Response to Hamilton
There are several points we would like to address in our response. First of all, the compounds studied were chosen because these particular congeners were believed to be estrogenic based on their conformational structure. The primary reason for conducting this type of study was to show the effectiveness of a species with temperature-dependent sex determination (TSD) as a tool for assessing estrogenic activity in vivo. This point is furthered by an earlier report in EHP by Guillette et al. (1). While the PCB compounds we used may not be primary components of commercially available PCB mixtures, there are parallels between these compounds and other PCB congeners in the pattern of ortho-chlorine substitutions, as Korach et al. (2) indicate. It is important to study how these structures affect a developing organism in order to understand how PCBs can act as estrogens. Though these particular congeners may not be currently used in the readily available mixtures, McKinney et al.
(3) make reference to goals of using PCBs that are easily detoxified via hydroxylation. If such considerations shift the composition of commercially available PCB mixtures away from congeners that exhibit dioxin-like toxicity, they should include assessment of the estrogenic activity of these compounds. The TSD system can be used to test such mixtures in vivo. Furthermore, the appearance of hydroxylated PCB congeners in "nature" is an emerging issue: for example, Bergman et al. (4) report that the hydroxylated forms of heavily chlorinated (penta- or heptachlorinated) biphenyls were among the most retentive forms of PCBs found in the blood of humans or seals. Clearly, this class of molecules may have an environmental significance that is only now being appreciated. The second point that we would like to emphasize regards the potential for second-generation exposure to PCBs as environmental estrogens. While the congeners that we found to clearly exhibit estrogenic activity may not be produced in the commercial PCB mixtures, or even found in soil and water samples, they may exist within animal tissue during metabolism of the environmental compounds. Maternal metabolic by-products may affect the reproductive system at a critical stage in the development of the offspring, producing a detrimental estrogenic effect on the second generation. This is particularly a concern when enhanced estrogenic effects are produced by the synergy of different combinations of low-level PCBs, an issue brought to light by the study in question.
Finally, we share Dr. Hamilton's concern that false alarms may impede identification of contaminants that actually present risk of estrogenic activity. However, Dr. Hamilton's cited quotation of our discussion, and his interpretation of it, require clarification so as not to be misleading. Dr. Hamilton's abbreviated quote would lead readers to believe that it is our report which we say serves as a warning of threatening environmental conditions. In fact, the passage he omitted clearly identifies "the usefulness of a TSD species as a biomarker" to serve in the capacity which Dr. Hamilton apparently ascribes to our report.
Our findings support the scientific community's call for further study of the mechanisms of estrogenic activity of environmental contaminants.
These findings, together with the growing body of evidence that a number of environmental compounds mimic estrogens and do have an effect on developing reproductive systems, provide ample indication for further investigation of these mechanisms and their outcomes. Our report suggests a model by which to continue these efforts, and we appreciate the continued interest in our work and the opportunity to address questions regarding it.

The first was a report (2) about a chemist who was exposed to 2,3,7,8-tetrabromodibenzodioxin (TBDD) and to 2,3,7,8-tetrachlorodibenzodioxin (TCDD) in March and September 1956, respectively, when synthesizing these chemicals. The chemist was described as "in good health" in 1990, when determinations of chlorinated and brominated dioxins and dibenzofurans were performed on whole blood. High concentrations of several congeners were detected, and the results were used to discuss the half-lives of these chemicals in humans. The subject presented with mild chloracne after an unspecified time from his exposure to bromodioxins in March, suggesting that TBDD could produce skin effects similar to those of chlorodioxins. Other, more serious symptoms occurred after the exposure to TCDD in September, and the patient was hospitalized for a short period.
The second was a study of subjects exposed to PBDDs and PBDFs as a result of working at a BASF factory in extrusion blending of polybutylene terephthalate with decabromodiphenyl ether, used as a flame retardant. The intensity of exposure was determined in 1989 through air monitoring (3). The paper presents blood levels of 2,3,7,8-TBDF and TBDD and total congener profiles for some exposed workers, and the results of a comparison of several immunological tests between exposed and unexposed populations from the same working cohort. Workers had detectable blood levels of TBDD and TBDF; half-life estimates of these chemicals are presented. The results of the immunological tests were described as "not adversely impacted at these burdens of PBDFs and PBDDs," even though the results of several tests showed a correlation with exposure, and in the subject with the highest blood levels of PBDFs and PBDDs, the immunological changes were quite relevant. The authors stated that clinical examination did not reveal "skin lesions consistent with an acnegenic response."
It should be stressed that the results of the two quoted articles do not change the conclusions of Mennear and Cheng-Chung on the health risks of PBDDs and PBDFs. However, slightly different suggestions for future research can be derived. Human populations have been or are exposed to these chemicals because of their use in several work processes involving flame retardants, because of environmental exposures (mainly due to municipal incinerators), or because of accidents due to thermal decomposition of flame retardants. These exposed human populations can be suitable, at least in theory, for toxicological and epidemiological observations.

(2). Carlson-Lynch et al. contend that the analyses conducted in our studies are flawed and that the conclusions reached in our publications are erroneous, rendering them unsuitable for use by the EPA in risk assessment.
Whether or not our studies are used by the EPA for risk assessment is of little concern to us, but we are certainly concerned about statements that they are flawed. Careful examination will show that all of the major points raised in the commentary are either incorrect or have no valid basis. We would like to respond to the criticisms made, point by point, in the order presented, beginning with the methylation paper (1).
Critique: The average arsenic exposures in almost all of the studies analyzed were too low to observe methylation saturation.
Response: The commentators base this statement on three issues. First, the authors state that evidence from an experimental study (of only four human volunteers, each receiving only one dose level) suggests that methylation would be completely saturated at exposures greater than 500 µg/day (3).
However, at the highest oral dose in this study, 1000 µg/day, the amount of urinary arsenic in the inorganic form was only 26%, hardly demonstrating methylation saturation even at this level. Buchet et al.
(3) state in their paper that "speciation of the arsenic metabolites in urine indicated that the arsenic methylation capacity of the human body was not yet saturated, even with an oral daily dose of 1000 µg As." The evidence of any metabolic saturation from this study is not conclusive. Each of the four arsenic dosing levels was assigned to a different individual subject, making it impossible to differentiate interindividual differences in methylation efficiency from dose-dependent effects that might apply to a general population.
Second, the authors state that we analyzed only two groups with average urinary arsenic levels at or above 190 µg/l, which they hypothesize corresponds to the concentration above which methylation saturation occurs. This statement obscures the facts that 1) the two groups combined had a total of 35 people, and 2) our analysis of available individual data (see Figure 2 of our paper) included 14 persons with urinary arsenic levels >190 µg/l. No trend of higher relative proportions of unmethylated arsenic is suggested for those 14 individuals.
Third, the authors state: ". . . a regression analysis on the individual data within the Yamauchi et al. [4] population was borderline significant at p = 0.10. . ." (p. 354). However, this was just one of nine regression analyses we presented. The slopes were positive in four (including the Yamauchi study) but negative in five (1: Table 9).
As a matter of interest, in our more recent studies of chronically exposed populations . . . | 2014-10-01T00:00:00.000Z | 1995-01-01T00:00:00.000 | {
"year": 1989,
"sha1": "18e7632afd154636691b3893d63b13a93effc45e",
"oa_license": "pd",
"oa_url": "https://doi.org/10.1289/ehp.9510312b",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "18e7632afd154636691b3893d63b13a93effc45e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
59063182 | pes2o/s2orc | v3-fos-license | The Hypothetical Learning Trajectory on Research in Mathematics Education Using Research-Based Learning
This study aims to create a learning trajectory on research in mathematics education, using the design research methodology, to enhance the research and academic writing skills of pre-service mathematics teachers. Data were collected from fourteen pre-service mathematics teachers over a 5-month period at one higher education institution in Tangerang, Indonesia. The design research method was carried out in three phases: the preliminary design phase, the teaching experiments phase, and the retrospective analysis phase. Initial data analysis of the 14 pre-service teachers' research and academic writing skills was conducted in six stages, and the learning trajectory on this topic was identified. The fourteen pre-service teachers were divided into 7 groups and conducted research independently, producing seven scientific articles. Six articles were published in the proceedings of Konferensi Nasional Matematika (National Congress of Mathematics) XVII 2014, and one article was published in Jurnal Elemen Vol. 1 No. 1.
Several studies have indicated that undergraduate students face difficulties in writing a thesis, affecting the length of their study (Bangun, Irmeilyana, & Andarini, 2011; Fathonah, Wahyuningsih, & Wahyuningsih, 2011; Firmansyah, 2014; Prahmana, 2014; Santosa, Wiyanarti, & Darmawan, 2009). In Indonesia, undergraduate students are required to write a thesis as part of their research training (PERMEN No. 49 Tahun 2014). Completion usually takes 2 to 4 semesters. Limited knowledge of methodology, the limited capacity of research advisors, limited research training and experience in academic writing, and low involvement of students in resolving their problems have been identified as the main sources of difficulty (Firmansyah, 2014; Fathonah, Wahyuningsih, & Wahyuningsih, 2011; Puspitasari, 2013). Difficulties in academic writing are also due to lack of motivation, anxiety, and language constraints (Rahmiati, 2014). Therefore, research and academic writing skills are recognized as important outcomes of academic training.
The Indonesian Directorate of Higher Education issued a policy (PERMEN No. 49 Tahun 2014) to encourage more research and publications by students, including undergraduate students. Furthermore, the National Standard Qualification (Tim Penyusun KKNI Dikti, 2013) requires students to publish their research in a reputable national journal (acknowledged at KKNI level 6). On the other hand, some researchers have documented the success of research-based learning in enhancing students' research skills (Widayati et al., 2010; Waris, 2009; Umar et al., 2011; Webb, Smith, & Worsfold, 2011; GIHE, 2008; University of Adelaide, 2009), but the greater part of this work is still focused on students from non-education departments.
Willison & O'Regan (2007) developed six research skill indicators that are classified into 5 levels of research. Furthermore, Dowse and Howie (2013), working within the framework of design research, sought to design and develop an academic research writing intervention. Prahmana (2014) identified that the learning process at the university did not offer enough support for students to develop their academic writing skills. Therefore, this study aims to create a learning trajectory on research in mathematics education, using the design research methodology, to enhance the research and academic writing skills of pre-service mathematics teachers. Based on the problems mentioned, the success of the learning activities in improving pre-service teachers' skills is investigated through the following research question: "To what extent does the learning trajectory on research in mathematics education assist pre-service mathematics teachers in enhancing their research and academic writing skills?"

Research-Based Learning (RBL)

Boud & Feletti (1998) mention six principles which should guide the implementation of RBL, namely multiplicity, activeness, accommodation and adaptation, authenticity, articulation, and timelessness. Furthermore, Neo (2003) also mentions six principles, using different terms: constructivism, contextual learning theory, discovery learning theory, information-processing learning orientation, cooperative learning theory, and self-determination theory. RBL activity starts from identifying the problem, extracting knowledge and skills, solving the problem or applying the solution, and concluding with reflection (Farkhan, 2008). This is supported by Poonpan & Suwanmankha (2005), who state that RBL involves students in constructing knowledge through five stages.
Academic Writing Skills
Academic writing skills can be understood as the ability to produce a paper under standard rules and using a particular scientific method (Supriyadi, 2013; Rahmiati, 2014). Writing a scientific paper involves several stages and procedures, such as looking for ideas by reading, observing, conducting research, conducting experiments, finding the data and supporting theory, and then writing down the results (Rahmiati, 2014). Suparno (2012) provides general steps in writing a scientific article, namely developing ideas, planning the text, developing paragraphs, writing a draft, and finalizing. Furthermore, Supriyadi (2013) describes 13 stages in writing a good scientific paper: (1) understanding the nature of a scientific paper; (2) understanding the difference between scientific and non-scientific papers; (3) identifying topics; (4) limiting the topic; (5) formulating the title of the paper; (6) formulating the problem of the paper; (7) formulating the thesis of the paper; (8) framing the paper; (9) developing ideas and groups of paragraphs; (10) processing quotations; (11) using standard language in the paper; (12) writing a list of references; and (13) editing the paper. In addition, a scientific paper must also meet the rules of scientific language, such as the use of standard grammar, word choice, and effective writing (Zulkarnain, 2012). All of these stages and requirements lead students to have difficulty in writing scientific papers (Yuniawan & Wardani, 2008; Sudiati & Nurhidayah, 2008; Ulfah et al., 2013). Therefore, special treatment is needed at every stage, controlling each stage through certain indicators, in order to produce the skills to write a good scientific paper.
The Hypothetical Learning Trajectory (HLT) in this study has several learning goals expected to be reached by the students. To reach the goals formulated, the researcher designed a sequence of instructional learning activities for research in mathematics education to enhance research and academic writing skills (Figure 1).
Method
This study uses design research as the research method, which is an appropriate way to answer the research question and achieve the research objectives, starting from preliminary design, through teaching experiments, to retrospective analysis (Prahmana, 2013). Design research is a methodology that has five characteristics, namely its interventionist nature, process orientation, reflective component, cyclic character, and theory orientation (Akker et al., 2006). Design research is a cyclical process of thought experiments and instruction experiments through to implementation (Gravemeijer, 2004). There are two important aspects related to design research: the Hypothetical Learning Trajectory (HLT) and the Local Instruction Theory (LIT). According to Freudenthal (in Gravemeijer and Eerde, 2009), students are given the opportunity to build and develop their ideas and thoughts when constructing mathematics. Teachers can select appropriate learning activities as a basis to stimulate students to think and act when constructing mathematics. Gravemeijer states that the HLT consists of three components, namely (1) the purpose of mathematics teaching for students, (2) the learning activities and the devices or media used in the learning process, and (3) a conjecture about the learning process and the student strategies that arise and develop when the learning activities are carried out in class (Gravemeijer, 2004). Data were collected from fourteen pre-service mathematics teachers over a 5-month period at one higher education institution in Tangerang, Indonesia. The design research method was carried out in three phases: the preliminary design phase, the teaching experiments phase, and the retrospective analysis phase. Initial data analysis of the 14 pre-service teachers' research and academic writing skills was conducted in six stages, and the learning trajectory on this topic was identified. This study consists of three steps carried out repeatedly until a revised theory of learning has been tested (Figure 2).
Results
The results of this study indicate that the learning trajectory on research in mathematics education plays a very important role in assisting pre-service mathematics teachers to enhance their research and academic writing skills. The learning activities start from (1) dividing the 14 pre-service mathematics teachers into seven groups to conduct research independently, reviewing mathematics education journals, searching for and observing current problem issues in mathematics education, and formulating a research question; (2) designing and conducting research based on the observation results to answer the research question; (3) collecting and analyzing the research data and drawing conclusions; (4) searching for a conference, submitting the research abstract, and attending the event; and finally, (5) writing a full paper based on the research results and publishing that paper in proceedings or a journal as part of the dissemination stage. Furthermore, students collected a portfolio of the whole research process and completed a research and academic writing skills questionnaire as part of the evaluation of these learning activities. As a result, students were able to conduct research, write scientific papers, and publish them, as seen from the research outputs at the end of the activities: seven scientific articles were published, six in the proceedings of Konferensi Nasional Matematika XVII 2014 and one in Jurnal Elemen Vol. 1 No. 1. In more detail, the researcher discusses below the results of the learning process on research in mathematics education, divided into three stages, namely preliminary design, teaching experiments, and retrospective analysis.
Preliminary Design
At this stage, a literature study on research-based learning, research skills, and academic writing skills was conducted by the researcher, including the development of the learning design and skills indicators. Furthermore, the researcher conducted observations of undergraduate students and designed the hypothetical learning trajectory (HLT), as shown in Figure 1. A set of activities for learning research in mathematics education was designed based on the hypothesized learning trajectory and students' thinking processes, and the learning trajectory and all indicators were discussed with Prof. Dr. Darhim and Dr. Nancy Susiana as expert validators. The instruction set was divided into the six activities, completed in 10 meetings, discussed above.
Teaching Experiment
At this stage, the learning activities designed in the preliminary design stage were tested by the researcher. The student activities consist of dividing the 14 pre-service mathematics teachers into seven groups to do research independently, reviewing mathematics education journals, searching for and observing current problem issues in mathematics education, and formulating the research question. Next, all groups designed and conducted research based on the observation results to answer the research question. After that, they collected and analyzed the research data and drew conclusions. Furthermore, the lecturer and all groups searched for a conference event and submitted the research abstracts to this event. Lastly, they wrote full papers, guided by the lecturer, based on the research results and published those papers in proceedings or journals as part of the dissemination stage. To see the students' responses during the learning process, the researcher gave them a research and academic writing skills questionnaire and also collected all research process portfolios as part of the evaluation process in these learning activities.
Retrospective Analysis
As a result, all the activities that were designed can be used to answer the research question above. The fourteen pre-service teachers were divided into 7 groups and conducted research independently, producing seven scientific articles. Six articles were published in the proceedings of Konferensi Nasional Matematika XVII 2014 and one article was published in the Jurnal Elemen Vol. 1 No. 1. In more detail, the results of the learning process on research in mathematics education will be discussed. The activities are as follows:
1. The learning trajectory designed in Figure 1 is the activity undertaken in this study to guide students in doing research and writing a scientific paper as their research output using research-based learning. Accordingly, the researcher designed an activity in which students review journals and find "hot" issues in mathematics education. These activities start by dividing all students into 7 groups that conduct research independently. The goal is for students to be able to design a learning trajectory for their research (Figure 3).
Fig. 3. One of the students' learning designs for her research
2. Furthermore, all groups collected research data based on their research, analyzed the data, and drew conclusions. Next, they looked for a conference and submitted their research abstracts to this event as part of the dissemination of their research (Table 1).
Table 1. All research abstracts that have been submitted to Konferensi Nasional Matematika
3. Next, a full paper, based on the research results and on the feedback from the conference participants who had seen their presentations, was written by each group under the guidance of their lecturer.
4. Lastly, they submitted the papers to proceedings or a journal. As a result, students were able to publish their research results at the end of the activities. Six scientific articles were published in the proceedings of Konferensi Nasional Matematika XVII 2014 (Teams 2-7) and one scientific paper in the Jurnal Elemen Vol. 1 No. 1 (Team 1) as products of this research (research and academic writing skills).
5. The retrospective analysis of the first cycle showed that these pre-service teachers were at levels 1 and 2, based on the research skill indicators developed by Willison and O'Regan (2007). Further research analysis will be needed to revise the HLT, which will then be tested again in the next cycle (further research).
Discussion
This study creates a learning trajectory on research in mathematics education using the design research methodology (Gravemeijer and Cobb, 2013) to enhance research and academic writing skills for pre-service mathematics teachers. Initial data analysis of the 14 pre-service teachers' research and academic writing skills was conducted in six stages, and the learning trajectories on this topic were identified. The fourteen pre-service teachers were divided into 7 groups and conducted research independently, producing seven scientific articles. Six articles were published in the proceedings of Konferensi Nasional Matematika XVII 2014 and one article was published in the Jurnal Elemen Vol. 1 No. 1. The retrospective analysis of the first cycle showed that these pre-service teachers were at levels 1 and 2, based on the research skill indicators developed by Willison & O'Regan (2007). Further analysis will be needed to revise the HLT, which will then be tested again in the next cycle.

Open Access. This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
These steps are done repeatedly until a new theory is discovered as a revision of the tested theory of learning (Figure 2).
Chromosome-level genome assembly and annotation of the black sea urchin Arbacia lixula (Linnaeus, 1758)
Abstract
The black sea urchin (Arbacia lixula) is a keystone species inhabiting the coastal shallow waters of the Mediterranean Sea, which is a key driver of littoral communities' structure. Here, we present the first genome assembly and annotation of this species, standing as the first Arbacioida genome, including both nuclear and mitochondrial genomes. To obtain a chromosome-level assembly, we used a combination of PacBio high fidelity (HiFi) reads and chromatin capture reads (Omni-C). In addition, we generated a high-quality nuclear annotation of both coding and non-coding genes, by using published RNA-Seq data from several individuals of A. lixula and gene models from closely related species. The nuclear genome assembly has a total span of 607.91 Mb, being consistent with its experimentally estimated genome size. The assembly contains 22 chromosome-scale scaffolds (96.52% of the total length), which coincides with its known karyotype. A total of 72,767 transcripts were predicted from the nuclear genome, 24,171 coding, and 48,596 non-coding that included lncRNA, snoRNA, and tRNAs. The circularized mitochondrial genome had 15,740 bp comprising 13 protein-coding genes, 2 rRNA, and 22 tRNA. This reference genome will enhance ongoing A. lixula studies and benefit the wider sea urchin scientific community.
Introduction
The black sea urchin, Arbacia lixula, is an iconic species living in the Mediterranean Sea and the Atlantic Ocean (mainly in the Macaronesian Islands). Like other sea urchins, it is a key driver of littoral communities' structure due to its contribution to maintaining marine barren grounds, shifting complex littoral macroalgal beds into systems dominated by crustose coralline algae.1,2 The effect of A. lixula is particularly pronounced given its omnivorous diet and bulldozing mode of browsing.3,4 Being a thermophilous species, it is expected to be favoured by climate change.2,5,6-9 In addition, a transcriptomic study found compelling evidence of phenotypic plasticity in this species and reported strong gene expression shifts in individuals exposed to acute thermal changes.10 Overall, it is suspected that it could displace other species less favoured by global warming, such as Paracentrotus lividus.11 In fact, over the last few decades, it has become increasingly common in the coldest regions of the Mediterranean Sea.8,12 The potential spread of A. lixula may negatively affect the shallow rocky ecosystems along the Mediterranean coasts, as it has the potential to generate and maintain barrens of reduced diversity and productivity through massive grazing.2,5 Due to its ecological relevance and predicted increased impact, it is important to undertake population genetic studies on this species.1

Population genetic studies are essential to understand the biology of the species and to predict future scenarios driven by climate change, by assessing several factors such as the connectivity between populations, genetic diversity, and inbreeding levels, among others. Several studies have evaluated the population genetic structure of A. lixula, with different conclusions depending on the resolution level of the markers used.6,13,14 Genetic differentiation between the Mediterranean Sea and the Atlantic Ocean was reported regardless of the marker used. However, panmixia across the Mediterranean Sea could only be ruled out by moving from genetic markers (mitochondrial loci, microsatellites) to genomic data.14 The two main drivers of its genetic structure seemed to be salinity and, to a lesser degree, temperature. Additionally, more recent studies demonstrated the existence of strong genomic substructure along a naturally acidified system (pH from 8.1 to 7.4) over distances of less than 200 m, likely related to adaptation to pH.15 A recent study reported that less than 3% of the genomic markers from A. lixula14 could be mapped to the closest reference genome available (Strongylocentrotus purpuratus), which was insufficient to characterize the functionality of candidate loci involved in adaptation.16 Thus, due to the lack of a reference genome for this species, the genetic bases of adaptation to these and other potential factors are still unknown.
From the evolutionary point of view, sea urchins are in a key position in the tree of life. As deuterostomes, echinoderms and their sister group, the hemichordates (marine worms), are the only groups sharing a common ancestor with chordates. Thus, research on sea urchins encompasses several fields and applications, such as evolutionary biology, cell biology, the biochemistry of eggs, embryos, and the fertilization process, and human diseases (reviewed in Davidson et al., 200217). Currently, there are five nuclear reference genomes of sea urchins available: S. purpuratus (GCA_000002235.4),18 P. lividus (GCA_940671915.1),19 Lytechinus variegatus (GCA_018143015.1),20 Lytechinus pictus (GCA_015342785.2),21 and Eucidaris tribuloides (GCA_001188425.1). However, the most closely related genomes diverged from A. lixula around 179 Mya (http://www.timetree.org/, accessed November 2023), which limits comparative genomic studies.16

Here, we provide the first high-quality genome assembly of A. lixula, including both nuclear and mitochondrial genomes. To achieve a robust assembly and annotation, we used a combination of sequencing techniques, including PacBio HiFi reads, Omni-C reads, and RNA-seq short reads from several individuals. The nuclear genome is assembled at the chromosome level, the mitochondrial genome is circularized, and both are annotated. The present genome will extend current genomic resources on sea urchins and will provide a key tool for assessing the population structure at a fine level and unravelling the adaptive capabilities of the species in the face of current challenges due to global warming and ocean acidification.
Sampling, DNA extraction, library preparation, and sequencing
Two individuals of A. lixula were collected in Colera, Girona, Spain (42.394254, 3.154585). High-molecular-weight genomic DNA from muscle tissue from the Aristotle's lantern of a single individual was extracted according to the protocol of Sambrook and Russell (2001).22 The fragment length and concentration of the DNA extraction were assessed using the TapeStation 2200 (Agilent Technologies) and the Qubit dsDNA BR Reagents Assay Kit on the Qubit Fluorometer (Thermo Fisher Scientific, Waltham, MA, USA). Subsequently, we constructed two SMRTbell libraries following the instructions of the SMRTbell Express Prep kit 2.0 with the low and ultra-low DNA input protocols (Pacific Biosciences, Menlo Park, CA, USA). We performed two SMRT cell sequencing runs from a single low-input library and a third SMRT cell run of the ultra-low input library, and sequenced them in circular consensus sequencing (CCS) mode on the Sequel IIe system with the Sequel II Sequencing Kit 2.0. To generate Omni-C short reads, ~50 mg of muscle tissue from the second individual was used, preparing the corresponding libraries following the Dovetail® Omni-C kit manufacturer's instructions with an insert size of 350 bp. The library was sequenced on the NovaSeq 6000 platform using a 150 bp paired-end sequencing strategy at Novogene (UK).
Genome size estimation
We estimated the genome size of A. lixula following the flow cytometry protocol with propidium iodide (PI)-stained nuclei described in Chueca et al., 2021.23 We selected muscle tissue from the Aristotle's lantern of the two freshly collected individuals of A. lixula mentioned above and chopped it with a razor blade in Petri dishes containing 2 ml of ice-cold Galbraith buffer.24,25 The suspension was filtered through a 42 μm nylon mesh, stained with the intercalating fluorochrome (PI, Thermo Fisher Scientific), and treated with RNase II A (Sigma-Aldrich), each with a final concentration of 25 μg/ml. The mean red PI fluorescence signal of stained nuclei was quantified using a Beckman-Coulter CytoFLEX flow cytometer with a solid-state laser emitting at 488 nm. Fluorescence intensities of 20,000 nuclei per sample were recorded. The software CytExpert 2.3 was used for histogram analyses. As internal reference standards, we used cricket (Acheta domesticus, 1C = 2 Gb) and chicken nuclei (Gallus gallus, 1C = 1.2 Gb) (Biosure). The total quantity of DNA in the sample was calculated as the ratio of the mean red fluorescence signal of the 2C peak of the stained nuclei of the A. lixula sample divided by the mean fluorescence signal of the 2C peak of the reference standard, multiplied by the 1C amount of DNA in the reference standard. To minimize instrument error, two measurements were carried out on each of three different days using both internal reference standards (Supplementary Table S1).
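The ratio calculation above can be made explicit in a few lines. In the minimal Python sketch below, the PI fluorescence signals are invented placeholders, not measurements from this study; only the formula follows the protocol just described.

```python
# Flow-cytometry genome size: 1C(sample) = (2C signal of sample /
# 2C signal of standard) * 1C size of standard. Signal values below are
# hypothetical illustrations.

def genome_size_from_fcm(sample_2c: float, standard_2c: float,
                         standard_1c_mb: float) -> float:
    """Estimated 1C genome size of the sample, in Mb."""
    return (sample_2c / standard_2c) * standard_1c_mb

# The two internal standards mentioned in the text: cricket (1C = 2 Gb)
# and chicken (1C = 1.2 Gb); the fluorescence values are made up.
runs = [(145_000.0, 500_000.0, 2_000.0),   # vs. Acheta domesticus
        (145_000.0, 300_000.0, 1_200.0)]   # vs. Gallus gallus
estimates = [genome_size_from_fcm(*run) for run in runs]
print(f"mean 1C estimate: {sum(estimates) / len(estimates):.1f} Mb")
```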
De novo genome assembly
High fidelity (HiFi) reads were called using a pipeline containing DeepConsensus 0.2.0.26 Briefly, all CCS reads are called from the subreads with PacBio's tool ccs 6.4.0 (https://github.com/PacificBiosciences/ccs), followed by alignment of CCS reads and subreads with actc 0.3.1 (https://github.com/PacificBiosciences/actc), and finally running DeepConsensus with the previously created CCS reads and alignment as input. Omni-C reads were first checked for quality using FastQC 0.11.9 (https://www.bioinformatics.babraham.ac.uk/projects/fastqc/) and subsequently trimmed to remove adapters and low-quality regions using Trimmomatic 0.39 (ILLUMINACLIP:TruSeq3-PE.fa:2:30:10 LEADING:3 TRAILING:3 SLIDINGWINDOW:4:15 MINLEN:36).27 The quality of the filtered reads was checked using FastQC. A primary screen of our potential genome assembly span and heterozygosity was obtained with the software Jellyfish 2.3.028 and GenomeScope 2.0.029 using HiFi data and default parameters. HiFi reads from the PacBio platform were used to assemble the A. lixula genome using Hifiasm 0.16.1.30 Basic contiguity statistics of these assemblies were estimated using Quast 5.0.231 (Supplementary Table S2). Duplicated regions (haplotigs) from the primary assembly were collapsed using purge_dups 1.2.5.32
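As a back-of-the-envelope version of the Jellyfish/GenomeScope screening step, the sketch below estimates genome size from a k-mer histogram as total usable k-mers divided by the coverage of the main peak. It omits the error and diploid modelling that GenomeScope performs, so it will not reproduce GenomeScope's output, and the input file name is hypothetical.

```python
# Crude k-mer genome-size estimate from a Jellyfish histogram
# (two columns per line: coverage, k-mer count). Low-coverage k-mers are
# discarded as likely sequencing errors; in a highly heterozygous diploid
# the naive modal peak may be the heterozygous one, which is one reason
# such estimates can undershoot the true size.

def kmer_genome_size(histo_path: str, min_cov: int = 5) -> float:
    hist = {}
    with open(histo_path) as fh:
        for line in fh:
            cov, count = map(int, line.split())
            hist[cov] = count
    usable = {c: n for c, n in hist.items() if c >= min_cov}
    peak_cov = max(usable, key=usable.get)            # modal coverage
    total_kmers = sum(c * n for c, n in usable.items())
    return total_kmers / peak_cov                     # haploid size in bp

print(f"~{kmer_genome_size('hifi_k21.histo') / 1e6:.0f} Mb")  # placeholder file
```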
Chromosome level assembly and mitochondrial identification
Filtered Omni-C data were used for chromosome scaffolding with YaHS 1.2a.2.33 The contact map of the obtained assembly was visualized and manually curated using Juicebox Assembly Tools,34 resulting in an assembly including chromosome-scale superscaffolds. Scaffolds were sorted by length, and the largest 22 were numbered in ascending order from 1 to 22, most likely representing the 22 expected chromosomes.
The mitochondrial genome was obtained with MitoHiFi 2.2.35 A complete mitochondrial genome of A. lixula (obtained from NCBI, X80396) was provided as input together with the manually curated scaffolds. The scaffold containing the mitochondrial genome was tagged accordingly in the final fasta file.
Assembly quality
The quality of each assembly stage was assessed using the following parameters: assembly contiguity, assembly completeness, and mapping rates. Assembly contiguity was assessed with Quast (Supplementary Table S2). Assembly completeness was measured with Benchmarking Universal Single-Copy Orthologs (BUSCO) 5.4.336 against a metazoan database (metazoa_odb10; Supplementary Table S3). We also evaluated the k-mer completeness of our assembly using Merqury 1.3.37 We identified the unplaced reads using blastn searches against the nt database, using the megablast engine with -evalue 1e-25 -max_target_seqs 1 -max_hsps 1 options (Supplementary Table S4). Finally, HiFi reads were back-mapped to the different assembly stages using minimap2 2.24,38 and heterozygosity levels were calculated after base-calling with bcftools 1.13.39 Results were summarized using Qualimap 2.2.1 (Supplementary Table S5).40
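Contiguity statistics of the kind reported by Quast can be recomputed directly from the assembly FASTA. A minimal sketch of the standard N50/N90 computation follows; the file name is a placeholder.

```python
# Assembly contiguity from a FASTA: total span, longest scaffold, N50/N90.

def scaffold_lengths(fasta_path: str):
    lengths, current = [], 0
    with open(fasta_path) as fh:
        for line in fh:
            if line.startswith(">"):
                if current:
                    lengths.append(current)
                current = 0
            else:
                current += len(line.strip())
    if current:
        lengths.append(current)
    return sorted(lengths, reverse=True)

def nx(lengths, fraction: float) -> int:
    """Smallest length L such that scaffolds >= L cover `fraction` of the total."""
    threshold = fraction * sum(lengths)
    covered = 0
    for length in lengths:
        covered += length
        if covered >= threshold:
            return length

lens = scaffold_lengths("eeArbLixu1.fa")   # placeholder name
print(f"span={sum(lens) / 1e6:.2f} Mb, longest={lens[0] / 1e6:.2f} Mb, "
      f"N50={nx(lens, 0.5) / 1e6:.2f} Mb, N90={nx(lens, 0.9) / 1e6:.2f} Mb")
```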
Repeated elements identification and masking
Repeat families were identified using RepeatModeler 2.0.4 and masked with RepeatMasker 4.1.4.41 First, we generated custom repeat libraries (CRL) from A. lixula and P. lividus by running RepeatModeler. We combined this library with the repeat sequences of S. purpuratus from RepBase 27.03,42 generating a final file containing repeat models for three sea urchin genomes (A. lixula, P. lividus, and S. purpuratus). We soft-masked and hard-masked the repeats in the final assembly using RepeatMasker. Finally, we screened the genome assembly for telomeric repetitive motifs with the software tidk 0.2.31.43 We explored the most abundant repetitive elements with sizes from 10 to 40 bp in the terminal 1% of each chromosome using the function 'explore' and, afterwards, we calculated the number of times they were repeated along the assembly in 1,000 bp windows with the function 'search'.
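The windowed motif search performed with tidk can be mimicked in a few lines of Python. The sketch below counts non-overlapping occurrences of the six-fold AACCCT tandem in 1,000 bp windows of a toy sequence; it is only a simplified stand-in for tidk's 'search' function, not its actual implementation.

```python
# Count a candidate telomeric unit per 1,000 bp window; high counts at
# scaffold ends suggest telomeres, interior peaks may indicate centromeres.
MOTIF = "AACCCT" * 6      # the six-fold tandem discussed in the text
WINDOW = 1_000

def window_counts(seq: str, motif: str, window: int):
    for start in range(0, len(seq), window):
        chunk = seq[start:start + window + len(motif) - 1]  # span boundaries
        yield start, chunk.count(motif)

seq = "ACGT" * 200 + MOTIF * 10     # toy scaffold with a telomere-like tail
for pos, hits in window_counts(seq, MOTIF, WINDOW):
    if hits:
        print(f"window {pos}-{pos + WINDOW}: {hits} tandem hits")
```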
Genome annotation
The annotation of protein-coding genes, small nucleolar RNAs (snoRNAs), tRNAs, and UTR regions was performed using a combination of homology-based and ab initio methods with MAKER 2.31.10.44 For the homology-based methods, we used RNA-seq data from A. lixula and gene annotations from closely related species. Published RNA-seq data from A. lixula were downloaded from the SRA archive (PRJNA642021 and PRJNA302797) and consisted of 26 paired libraries with Illumina sequencing (Supplementary Table S6) from four different tissues: coelomocytes, digestive tissue, ovary, and testis.45 Quality checking was performed using FastQC 0.11.9 and MultiQC 1.846 with default parameters. Adapters and low-quality regions were removed using Trimmomatic 0.39 (TruSeq3-PE.fa:2:30:10 LEADING:3 TRAILING:3 SLIDINGWINDOW:4:15 MINLEN:36). In addition, we detected and removed contaminants (mainly human and bacterial sequences) with Kraken 2.1.2.47 Filtered RNA-seq data were assembled with Trinity 2.1148 and redundancy was reduced using CD-HIT 4.849 with a 90% similarity threshold. Additionally, all protein sequences from the S. purpuratus and P. lividus annotations were downloaded (38,349 proteins from GCF_000002235.5 and 30,556 from Marlétaz et al., 202319), and we obtained the amino acid sequences of BUSCO genes localized in the newly generated genome assembly of A. lixula. We used these annotations of A. lixula, S. purpuratus, and P. lividus as input data to generate homology-based gene predictions on the hard-masked genome. To do so, we used BLASTn 2.10.050 and exonerate 2.4.0 (https://www.ebi.ac.uk/about/vertebrate-genomics/software/exonerate) as implemented in MAKER 2.31.10.51 In addition, we conducted ab initio gene predictions with three different programs: AUGUSTUS 3.3.3,52 GeneMark-EP 4.69,53 and SNAP 2013-11-29.54 To refine the genome annotation, we performed a total of three rounds of protein modelling. For the first modelling round, we used BUSCO genes to generate a gene model with SNAP and RNA-seq data for AUGUSTUS. Subsequently, we generated the first annotation draft with MAKER. For the second and third modelling rounds, we trained AUGUSTUS and created a new SNAP model with the gene models of the previous round. Additionally, in the last annotation round, we also used tRNAscan-SE 1.3.155 to annotate tRNA genes and snoscan 0.9.156 to annotate snoRNAs. Genes and transcripts annotated with MAKER were renamed using the maker_map_ids and map_gff_ids scripts from MAKER 2.31.10. Protein-coding, snoRNA, and tRNA genes were named ALIXG[0-9]{6} and their transcripts ALIXT[0-9]{6}-T[0-9]. Finally, to evaluate the completeness of the annotated protein-coding transcripts, we ran BUSCO using the metazoa_odb10 database.
Annotation of long non-coding RNAs (lncRNAs) was conducted by mapping the filtered RNA-seq reads to the reference genome using HISAT2 2.2.157 and generating a transcriptome assembly for each sample using StringTie 2.1.458 by providing the MAKER annotations as reference. The individual GTF files were merged to obtain a reference GTF using the merge option of StringTie, which was subsequently used as input for FEELnc59 to identify the candidate lncRNAs. Transcripts shorter than 200 bp, mono-exonic transcripts, transcripts overlapping protein-coding genes, and transcripts with coding potential were filtered out. To compute the coding potential of the transcripts, we used two different strategies to obtain the training sets: (i) the shuffle approach, which takes a set of mRNAs from A. lixula and shuffles them while preserving 7-mer frequencies, and (ii) using lncRNAs annotated in S. purpuratus as the training set. A total of 23,344 non-coding transcripts (10,814 genes) were identified with the shuffle approach and 22,049 non-coding transcripts (10,435 genes) were identified using the lncRNAs annotated in S. purpuratus. We selected the transcripts identified as non-coding by both approaches (20,869 transcripts contained in 9,871 genes) and used them as input to estimate the coding potential using CPC2 (http://cpc2.gao-lab.org/, accessed June 2023). CPC2 identified a total of 20,328 transcripts (9,627 genes) as non-coding, which constitute our final set of putative lncRNAs. LncRNA genes were renamed using maker_map_ids as MSTRG.[0-9]{5} and transcripts as MSTRG.[0-9]{6}.[0-9]. Both the MAKER and lncRNA annotations were combined to generate a final annotation GTF file for A. lixula.
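The filtering logic above reduces to structural filters plus an intersection of coding-potential calls. The following sketch mirrors it on toy data; the Transcript fields and the input sets are illustrative stand-ins, not the pipeline's actual file formats.

```python
# Keep transcripts that are >= 200 bp, multi-exonic, non-overlapping with
# coding genes, and called non-coding by both FEELnc runs and by CPC2.
from dataclasses import dataclass

@dataclass
class Transcript:
    tid: str
    length: int
    n_exons: int
    overlaps_coding: bool

def candidate_lncrnas(transcripts, feelnc_shuffle, feelnc_spur, cpc2_nc):
    noncoding = set(feelnc_shuffle) & set(feelnc_spur) & set(cpc2_nc)
    return [t for t in transcripts
            if t.length >= 200
            and t.n_exons >= 2
            and not t.overlaps_coding
            and t.tid in noncoding]

ts = [Transcript("MSTRG.00001.1", 850, 3, False),
      Transcript("MSTRG.00002.1", 150, 2, False),   # too short: dropped
      Transcript("MSTRG.00003.1", 900, 1, False)]   # mono-exonic: dropped
kept = candidate_lncrnas(ts, {"MSTRG.00001.1"}, {"MSTRG.00001.1"},
                         {"MSTRG.00001.1"})
print([t.tid for t in kept])    # ['MSTRG.00001.1']
```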
Nuclear genome assembly
The final genome assembly of A. lixula (eeArbLixu1) was generated by assembling 43 Gb of HiFi and 25 Gb of Omni-C data followed by manual curation, spanning 607.91 Mb (Fig. 1a). This assembly size is consistent with the genome size estimated experimentally using flow cytometry (580.21 Mb, Supplementary Table S1), but higher than the estimate based on the HiFi k-mer count distribution obtained with GenomeScope (463.62 Mb, Supplementary Fig. S1). The difference in length between the total assembly and the flow cytometry estimate is small (4.6%) and could be attributed to variance in the experimental measurements, or to very small amounts of undetected haplotypic duplications and/or contamination in the assembly. The genome size inferred with k-mer-based estimates is likely underestimated due to the high amount of repeated elements and high heterozygosity levels (see below). The final unphased genome assembly is highly contiguous and complete (Supplementary Tables S2 and S3, Fig. 2). Importantly, 96.52% of the sequence is assembled in the 22 longest scaffolds, which agrees with the karyotype of the species (1n = 22; reported for A. lixula under the name A. pustulosa62). The 676 unplaced scaffolds (representing 3.5% of the PacBio sequences) were taxonomically assigned using blast searches (Supplementary Table S4). A total of 50.44% of them had a significant match (with only 2% mean query cover), of which 87.98% were classified as Deuterostomia, 6.74% as Protostomia, 3.23% as Bacteria, and 2.05% as other (mostly synthetic sequences). Since 94.67% of the deuterostome sequences were assigned to Echinodermata, we can conclude that roughly 50% of the unplaced scaffolds are likely to belong to A. lixula.
More than 99% of HiFi reads were back-mapped to the final assembly (Supplementary Table S5). The heterozygosity of the genome assembly estimated from both back-mapping and the HiFi k-mer count distribution is high (3.47 and 4.40%, respectively), in line with the high values detected in reference genomes of other sea urchins, ranging from ~3% in L. variegatus to ~5% in S. purpuratus,63 and with the high heterozygosity levels detected in a previous population study of A. lixula.14 In the final assembly, we identified 98.01% complete BUSCOs (97.43% of them single-copy), and the assembly k-mer completeness is 68.09%, possibly due to the high heterozygosity levels found in A. lixula. The GC content of the assembly is 39.33%, higher than in the other five sequenced sea urchins (36.26% on average, ranging from 34.4% in P. lividus to 37.40% in S. purpuratus, Supplementary Table S7). The lengths of all chromosomes, the mitochondrial genome, and the unplaced scaffolds are shown in Supplementary Table S8. (Table 1 note: for mRNA and lncRNA, the numbers outside parentheses refer to genes and those inside parentheses to transcripts.)
In the final assembly, 41.84% of the total length was identified as repeat elements, of which 37.95% are interspersed repeats (Supplementary Table S9). This value is within the range of other sea urchins, 41.02% on average, ranging from 39.20% in S. purpuratus to 42.68% in L. pictus (Supplementary Table S7). The most abundant elements were DNA transposons (9.16%), retroelements (8.06%), and simple repeats (2.99%), while most of the repeated elements remained unclassified (19.53%). A high abundance of DNA transposons followed by retrotransposons is usually found in sea urchins and other deuterostomes.19 We found several different repeat counts of the motif AACCCT as potential indicators of telomeric regions (Supplementary Fig. S2). The single motif was sparsely found along the genome, but the six-fold tandem was found at high density in the terminal position of most chromosomes, indicating telomeric regions. We also found high-density regions in the middle of two chromosomes, indicating centromeric regions. The same motif repeated in tandem three times was identified as telomeric in the two sea urchins with putative telomeric regions annotated to date (Echinus esculentus and Psammechinus miliaris, https://github.com/tolkit/a-telomeric-repeat-database). Our results indicate that the telomeric motif in A. lixula is the six-fold tandem repeat, representing the longest telomeric repetitive element reported so far in sea urchins.
Nuclear genome annotation
Complete genome annotations are generally required to decipher functional aspects of many fundamental biological questions. Obtaining them is challenging and computationally demanding due to their complexity and, given the tissue (and even cell) specificity of gene expression, there is a need for sequencing transcriptomes from different individuals, developmental stages, conditions, and tissues. Thus, we annotated the A. lixula genome using RNA-seq data from several tissues and individuals and annotations from closely related species, combining them using ab initio and homology-based gene prediction methods (Fig. 1b). Overall, a total of 19,415 protein-coding genes and 24,171 transcripts (1.24 transcripts per gene on average, with a maximum of 16 transcripts per gene) were annotated (Table 1). This number is within the range of other sea urchins, from 1.47 in L. variegatus to 1.12 in L. pictus (Supplementary Table S7). However, this number might be an underestimate of the biological number of isoforms, and further studies adding additional individuals, tissues, and/or stages are advised to improve the current annotation. Protein-coding transcripts contain nine exons on average (Supplementary Table S10), with 3.6% of them being mono-exonic (874 out of 24,171 transcripts). The protein set resulting from the annotation has a BUSCO completeness of 88.78%, slightly lower than the BUSCO completeness recovered in the total assembly (98.01%). This indicates that, despite the large number of annotated protein-coding genes, a small fraction of genes is still missing from the annotation but present in the assembly. In addition, we annotated 37,895 non-coding genes, including 9,627 lncRNAs (20,328 transcripts), 24,875 snoRNAs, and 3,393 tRNAs (Supplementary Table S7).
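Summary figures such as transcripts per gene can be recomputed from the final annotation GTF. A minimal sketch, assuming standard gene_id/transcript_id attributes and a hypothetical file name, is shown below.

```python
# Mean and maximum number of transcripts per gene from a GTF annotation.
import re
from collections import defaultdict

def transcripts_per_gene(gtf_path: str):
    tx = defaultdict(set)
    attr = re.compile(r'gene_id "([^"]+)".*transcript_id "([^"]+)"')
    with open(gtf_path) as fh:
        for line in fh:
            if line.startswith("#"):
                continue
            m = attr.search(line)
            if m:
                gene, transcript = m.groups()
                tx[gene].add(transcript)
    counts = [len(v) for v in tx.values()]
    return sum(counts) / len(counts), max(counts)

mean_tx, max_tx = transcripts_per_gene("alixula_annotation.gtf")  # placeholder
print(f"mean {mean_tx:.2f} transcripts/gene, maximum {max_tx}")
```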
Mitochondrial genome assembly and annotation
The mitochondrial genome was assembled into one single circular contig of 15,740 bp with a GC content of 37.4%. Its annotation contained 13 protein-coding genes, 2 ribosomal RNAs, and 22 tRNAs (Fig. 3), consistent with the previously available mitochondrial genome of A. lixula.64 Gene order and gene number are highly conserved among sea urchins, even though the most distant species analyzed diverged ~179 Mya (Fig. 3). Our new mitogenome assembly is complete and contributes to enriching the available sequence diversity.65,66
Conclusion
We provide the annotated genome of the black sea urchin A. lixula, which has been sequenced within the framework of the Catalan Initiative for the Earth Biogenome Project.67 This is the first chromosome-level reference for this order (Arbacioida) and will contribute to enriching the genome resources of sea urchins in particular, and of biodiversity in general. In fact, this species diverged around the lower Jurassic (~179 Mya) from the closest sequenced sea urchin genomes. Given its key phylogenetic position and its high quality (in both assembly and annotation), the A. lixula genome sequence will represent a valuable resource for the scientific community. The resource will facilitate deciphering the species' population structure dynamics and the genetic basis of adaptation, which is key given the potential increase in numbers of this ecologically relevant species.
Figure 1. Workflow for (a) genome assembly and (b) annotation. Grey designates the sequencing data generated in the current research and available in ENA (PRJEB60287), blue designates input data, squares designate the quality control steps, orange the processes and/or the software used, and dark green the output files generated.
Figure 2. Nuclear genome assembly and annotation. (a) Contact map of the genome assembly showing the 22 superscaffolds (corresponding to chromosomes) ordered from longest to shortest, and a picture of A. lixula by Creu Palacín. (b) Snail plot summarizing scaffold statistics, total size, and base composition. The main plot is divided into ordered-size bins; the pale grey spiral shows the cumulative length on a log scale, dark grey represents the length of all scaffolds, red represents the longest scaffold in the assembly (55.6 Mb), and orange and pale orange show the N50 and N90 lengths (26 Mb and 20.9 Mb), respectively. The dark blue and light blue around the outside of the plot show the distribution of GC and AT percentages. (c) Summary of the BUSCO genes identified using the metazoa_odb10 set. (d) Summary of the repeated elements identified in the genome assembly. (e) Classification of the genes annotated in the genome into major biotypes. (f) Gene length, number of exons, and percentage of GC content of the different gene biotypes.
Figure 3. Mitochondrial genome assembly and annotation. (a) Circular plot showing the complete annotation of the A. lixula mitochondrial genome, including 13 protein-coding genes, 2 ribosomal RNAs (rRNA), and 22 tRNAs. (b) Divergence times among the five available sea urchin mitochondrial genomes; J: Jurassic, C: Cretaceous, P: Paleogene, and N: Neogene. (c) Alignment of the five mitochondrial genomes from sea urchins, showing a high degree of conservation of genes and their order.
"year": 2024,
"sha1": "0397abca93f8383f7dbefbe1d2fdb5a31a4d930c",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/dnaresearch/advance-article-pdf/doi/10.1093/dnares/dsae020/58306337/dsae020.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6b54d5f65a989d83b205046eddd467d5f5497adf",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Aspect Regarding the Design of Active Strategies for Venture Capital Financing – The Flexible Adjustment for Romania as a Frontier Capital Market
The Industry 4.0 revolution attracts real interest from entrepreneurs, but in the case of frontier and emerging markets it is difficult to find a reliable source of long-term financing. The decoupling of technological evolution from the evolution of access to financing capacities must be analysed from different points of view (financial, legislative, socio-technical). In Romania, as a frontier market, there are only a few alternative investment solutions capable of responding to the long-term financing demand of performant projects. The main interest is to understand the strategies for adapting venture capital funds (VCF) to real conditions. Venture capital funds (VCF) represent a particular form of private equity investment, scaled down and more focused on innovative start-up projects (or even expansions in technology or markets), with a typical value of 10 million Euro. This form of investment is a personalized response to the general problem of actors that do not have tangible assets for collateral and/or cannot demonstrate the ability to make a profit. In the case of VCF, as a vehicle oriented towards innovation and technology, the business plan represents the main element for project portfolio selection, in the context of matching the interests of investors with the interests of the financed firms' managers. This contribution is especially important for frontier and emerging markets, which are characterized by additional restrictions (access to strategies, liquidity problems, and agency costs beyond simple monitoring). For Romania, it is essential to adapt VCF investors' objectives in all phases (selection, evaluation, contract signing and restructuring, progress monitoring, stimulating value-added and, especially, closing the VCF cycle) to real conditions, balancing performance indicators with the value creation mechanisms specific to Industry 4.0.
Introduction to the issue of venture capital funds (VCF) - the situation in Romania
In Romania, although the macroeconomic characteristics are favorable, there are only a few alternative solutions capable of responding to the long-term financing demand of performing start-up projects. The actual interest is to understand the strategies for harmonizing funding capabilities with the real projects proposed by entrepreneurs, and to put strategic solutions to work through the flexible adaptation of investment vehicles such as venture capital funds (VCF) to real market conditions. The specific conditions refer to the inefficiencies characteristic of the market category to which Romania belongs, namely a frontier market, albeit one with real prospects of moving to another status, namely emerging market. Progress has been made in terms of legislative issues, but the Romanian alternative fund industry does not harmonize well enough with either investors (especially retail) or entrepreneurs seeking long-term financing (Boscoianu et al., 2015). In order to remedy this situation, not only alternative investment fund managers should participate, but all stakeholders: authorities, managers of companies interested in financing, and investors.
Although direct investment is still preferred in Romania, the recent emergence of alternative investment funds (AIFs) opens new flexible perspectives that provide easy and scalable access to long-term project financing, starting from the principles of modularity, scalability, and competitive selection. In emerging markets, delegating decision-making to investment managers (Boscoianu et al., 2018) becomes essential because it optimizes the investment cycle through flexibility and efficiency and provides an adequate response to market restrictions (liquidity, critical mass, access barriers, additional scaling and monitoring costs).
To reveal the actual situation of venture capital financing in emerging markets, one can start from the general framework of financial buyer solutions (Bruton et al., 2009; Bottazzi and Hellmann, 2008; Alhorr et al., 2008). Private equity (PE) / private equity funds (PEF) are long-term financing solutions, of over 100 million Euro, for performing businesses/projects (the current global trend is growth by an order of magnitude) and include a wide range of activities and mechanisms, such as: acquisition of mature companies with demonstrated leveraged buyout (LBO) capacities; minority equity investments offered to firms for expansion/restructuring (growth capital); and investments in firms in early development or expansion (pure venture capital investments). Financial buyers differ from strategic buyers in their ability to acquire a more leveraged capital structure with more favorable debt financing elements and efficient exit strategies. Moreover, based on the premises offered by alternative investment funds for emerging markets, the chances for safe diversification and the establishment of a better strategic management orientation are still there.
Venture capital funds (VCF) represent a particular form of scaled-down PEF (typically one order of magnitude smaller than a PEF, with a typical value of 10 million Euro) and are more focused on innovative start-up projects (or even expansions in technology or markets). VCF investments represent a personalized response to the general issue of actors that do not own tangible assets for collateral and/or are not able to prove an initial ability to generate profit (Amit et al., 1998). On the other hand, within emerging markets, LBO elements are also viewed as consolidating elements of the portfolio.
The VCF's technical and organizational particularities in the emerging markets
Although private equity and venture capital solutions are well known in the investment landscape, there are few references and concrete examples relating to Romania. In order to describe the solutions and effective trading strategies of these instruments, we will detail specific organizational and technical particularities in some emerging markets.
First of all, let us present the participants in VCF transactions in emerging markets. The management of the VCF selects the target firm, negotiates the acquisition price, secures debt financing, makes strategic and financial decisions, and decides the strategic timing of the investment (the characteristics of the exit). In emerging markets, the investors are of the limited-partner type. They sign investment contracts for as long as 8-10 years, being limited by the exit strategies (IPO/strategic sale). The role of the management of the target firms is critical in the context of aligning the interests of management with the interests of the VCF.
In order to understand how to improve the viability of VCFs, it is important to refer to the main characteristics of transactions in emerging markets:
o it is mandatory to configure the critical securitisation elements (debt and equity) required by institutional investors (Bekaert and Harvey, 2003);
o high debt levels are set in place with the intention of increasing the return on equity for PEF/VCF investors;
o regarding the exit strategy, the capability to resell a VCF-portfolio company after 5-8 years through an IPO (initial public offering) depends on the efficiency of markets, and the private transaction solution is still preferred (Cumming, 2008);
o the companies in VCF portfolios target an IRR (internal rate of return) above 25%, in the context of higher leverage, an inconsistent ability to pay dividends, and other market inefficiencies;
o VCFs have several effects on companies and markets, such as: pressures on performance, a focus on anti-takeover strategies, the possibility of a further readjustment of the capital structure (leverage via additional debt to reduce the overall cost of capital and improve returns), and the streamlining of mergers and acquisitions.
One can notice the importance of introducing a set of solutions to adjust VCF investments to the actual situation in Romania, such as:
o growing the efficiency of partnerships based on the investment and management cultures;
o taking part in some large transactions based on the flexibility of partnerships (the so-called "club transactions");
o leveraged recapitalizations of VCF portfolios via debt transactions.
Another question is related to the process of selecting the target companies for VCF transactions. In Romania, the capital market is still underdeveloped and, even if there are good companies, investment opportunities are lost in the wake of market inefficiencies (Boscoianu et al., 2013). The process of selecting target companies for VCF portfolios differs significantly from that of other investment funds. The key selection element is the VCF business plan, which comprises differentiating elements regarding the market, the product, the IPR rights, the management team, the operating history, the financial projections, the necessary funds, and the exit opportunities (Anson, 2008). The plan needs to be coherent and consistent (Kaplan and Stromberg, 2009). It needs to be based on a business strategy connected with the targeted niche. It also needs to emphasize the necessary resources and the risks that can be encountered along the way, and the ability of the management team to design an efficient and viable set of actions that will accomplish the target performances. The plan also needs to be realistic and to set out both the funding stages and the supplementary adjustment amounts. The financial objectives need to comprise both profitability indicators and exit conditions (IPO or strategic transaction).
One needs to highlight the fact that VCFs within emerging markets could also comprise LBOs of companies that are able to generate cash flow (to pay debt interest, principal payments, and dividends). In emerging markets, investors are more focused on returns, on cutting down the actual investment period, and on the efficiency of the exit strategies. The inclusion of certain PE/LBO-type elements would generate the premises for portfolio stabilization.
The critical characteristics of the VCF companies within emerging markets are related to:
o the quality of management (in the case of a highly leveraged company, the rigour and quality of decisions are essential) and of assets (the assets are oriented towards growth in cash flow and sales);
o ensuring a stable and robust cash flow (the management's ability to ensure cost savings and operational initiatives);
o taking steps towards improving the capital structure in the context of higher leverage, efficiently structured debt, and a smaller equity investment able to induce a greater potential return;
o a good control of capital expenditure.
VCF managers manage fundraising processes, portfolio selection, the structuring of investments in portfolio companies, investment monitoring, value-added service offerings, and the design of the exit scheme at the end of the investment cycle. This involves a set of highly complex activities (selection decisions, investment timing, management of large investment blocks, additional capital injections or restructuring/closure solutions, and alternative exit solutions through IPO). The main advantage of VCF is the possibility of staging (Gompers, 1995); mathematically, this can be represented by a call option with leverage implications, whose value increases with volatility, as the sketch below illustrates. In this case, the issue of compensation for risk comes from the financed beneficiary, a novel aspect that could be used in the case of Romania.
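To illustrate the claim that the value of the staging option grows with volatility, a standard Black-Scholes call can serve as a rough proxy for the real staging option. This is only an illustrative sketch: all parameter values below are hypothetical, not calibrated to any actual VCF deal, and a real staging option would also need to model dilution, leverage, and illiquidity.

```python
# Black-Scholes call as a crude proxy for the VCF staging option: S is the
# current project value, K the follow-on investment needed at the next stage.
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Same project under increasing uncertainty: the staging option gains value.
for sigma in (0.2, 0.4, 0.6):
    value = bs_call(S=10.0, K=12.0, T=2.0, r=0.03, sigma=sigma)
    print(f"sigma={sigma:.1f}: staging option worth {value:.2f} (mil. Euro)")
```

Running the loop shows the option value rising monotonically with sigma, which is precisely the asymmetry (limited downside, unlimited upside) that makes staged financing attractive under high uncertainty.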
In the case of VCF, as a vehicle oriented towards innovation and technology, the business plan represents the main element for project portfolio selection, in the context of matching the interests of investors with the interests of the financed firms' managers. The active involvement of the investment manager in managerial assistance and technical advice, including the setting of operational objectives (detailed and correlated with business plans), progress monitoring, and operational management, offers the advantage of reducing the information asymmetry specific to environments with high uncertainty and low liquidity. For Romania, it is essential to adjust the VCF investors' objectives in all stages (selection, evaluation, agreements and restructuring, progress monitoring, encouragement of value creation and, most of all, bringing the VCF to closure) to real-life situations, by taking into account the connection between the performance indicators and the typical Industry 4.0 value design mechanisms.
This contribution is especially important in the case of frontier and emerging markets, which are characterized by additional restrictions (access to strategies, liquidity problems, and agency costs beyond simple monitoring).
Innovative solutions for adjusting VCF to the Romanian capital market
Although VCF contracts have a well-defined place in medium- and long-term financing, careful consideration of the adaptation of VCF funding solutions is needed in the current context in Romania.
VCFs finance innovative projects with high associated risk and low liquidity, and in the case of emerging markets they face additional barriers to investment (liquidity, political instability, information asymmetry, fund transfer restrictions, taxes), exacerbated uncertainty problems, and a predisposition to turbulence. VCFs in emerging markets are in fact niche strategies, and the active involvement of the investment manager in monitoring the company's progress and in operational management becomes essential. In this type of managerial assistance and business technical advice, VCF participation must include the management of detailed operational objectives, related to the business plans.
The attractiveness of VCF investments in emerging markets involves balancing performance items beyond the risk-return binomial, so that even sometimes significant differences in investment timing are considered. VCF performance cannot be compared against a benchmark as in the case of alternative investment funds, and there is no question of risk benchmarking, the portfolio being built on the exploitation of market opportunities.
VCF funding solutions offer unique benefits both to the company, which can access financing impossible to obtain through classic investments, and to the financier, who benefits from the opportunity of a staged investment that can be represented by a call option with leverage implications. Since the value of the option increases with the volatility of the project/portfolio of financed projects, a preference for long-term investment involving a higher risk is created. In addition, within the VC financing partnership, a more prudent, risk-clearing attitude is simultaneously created by the VCF fund manager. This risk adjustment mechanism is of utmost importance for VCF funding contracts in emerging markets.
Within VCFs in emerging countries, the key selection element, which also supports the management contract, is the business plan. It is essential in this case to harmonize the interests of VCF investors with those of the companies in the portfolio (especially the financial projections, the associated risks, the impact on the VCF portfolio, and the effectiveness of exit strategies). In addition to highlighting the necessary resources and the risks along the way, the business plan must reflect the ability of the management team to achieve the proposed performance. The plan should be realistic, providing the sequence of funding and additional adjustment investments, and the financial targets should contain both profitability indicators and public exit conditions through IPO.
The typical lifecycle of VCF funding in emerging markets follows the Gompers-Lerner curve (Gompers and Lerner, 2001), and at its stages there are a number of particularities, such as:
a) attracting initial capital from underwriting external investors is a non-transparent process of "club transactions", and its duration may differ significantly (3-12 months);
b) the analysis of the companies' business plans basically takes place in parallel with the action itself;
c) the initial investments are made in a closed system, without showing stage profit at VCF level, and the duration differs significantly depending on the type of projects (about 1.5-5 years); it is desirable to include LBO elements as stabilization mechanisms;
d) the active management of investments within the VCF portfolio (including further capital injections) stands for a flexible element that works together with leveraged recapitalizations of VCF portfolios via debt transactions and does not exclude involvement in large transactions;
e) monitoring, management, and technical support are key elements in generating the constant profit that can sustain the VCF strategy;
f) finalizing the VCF investment cycle can be considered in advance, on the basis of early redemptions, given that IPO strategies target more performant markets.
The complexity of adapting VCF funding to the emerging market context emphasizes the importance of governmental support, both in creating new institutional and market infrastructures and in developing the investment culture specific to the current context in Romania.
Regarding the VCF financing mechanisms, in the case of Romania there is a comparative advantage over classical solutions and an added efficiency resulting from the inherent reduction of the moral hazard problems presented above. Through a better selection of projects, the focus on personalized contracts results in a good adaptation to the specific target environments, characterized by information asymmetry and high uncertainty, in which the management of the adverse selection problem can often be confronted with hidden actions leading to moral hazard.
The typical structure and classic VCF models indicate a long-term investment horizon that is not well connected to the requirements of emerging market investors. Thus, an initial investment phase of 1 to 3 years and a maturity stage in the 3-7 year range imply the search for better adapted solutions. In addition, the closure of the Gompers-Lerner cycle (Gompers and Lerner, 2001; 2006) may create surprises in the sense of a larger weight of liquidations or possible failed IPOs. In these circumstances, it is expected that the processes of aligning alternative investors' interests with those of the business will be specified in detail through performance objectives well anchored in the specific sector.
For Romania, the harmonization of the objectives of VCF investors across all phases (selection, evaluation, contracting and structuring, monitoring progress, stimulating value-adding and, in particular, closing the VCF cycle) should start from understanding how to obtain performance indicators and how to create collaborative and natural stabilization mechanisms. An alternative investment culture is another vector of integration and can be sustained through government-university partnerships. Scientific research in this field can also contribute to the success of these investment vehicles.
The use of real options in VCF transactions - dealing with high uncertainties and turmoil
VCF performance analysis should start from the idea that some of the portfolio companies do not deliver the proposed returns. The analysis of success probabilities through Bayesian processes, together with the continuation of investment processes by capitalizing on new opportunities on the basis of updated beliefs (Bergemann and Hege, 1998) and new liquidity injections, could also contribute to the stabilization of VCF portfolios.
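A minimal way to operationalize this Bayesian updating is a Beta-Bernoulli model in which the belief about a portfolio company's success rate is revised after each observed milestone; the prior parameters, the milestone outcomes, and the continuation threshold below are illustrative assumptions, not data from any fund.

```python
# Beta-Bernoulli update of the success-probability belief: each milestone
# outcome (1 = achieved, 0 = missed) shifts the posterior, and follow-on
# liquidity injections can be conditioned on the posterior mean.

def posterior_success(prior_a: float, prior_b: float, outcomes) -> float:
    """Beta(prior_a, prior_b) prior; returns the posterior mean."""
    a = prior_a + sum(outcomes)
    b = prior_b + len(outcomes) - sum(outcomes)
    return a / (a + b)

belief = posterior_success(2.0, 3.0, [1, 0, 1, 1])   # 3 hits, 1 miss
print(f"updated success belief: {belief:.2f}")       # 0.56

# A possible (hypothetical) continuation rule agreed in the VCF contract:
if belief > 0.5:
    print("continue staged financing")
else:
    print("withhold the next capital injection")
```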
In order to understand how to obtain stable and balanced performance, it is necessary to adapt real option analysis (ROA), a robust tool for dealing with investment uncertainty (Adner and Levinthal, 2004; Adner, 2007; Coff and Laverty, 2001), to the typical VCF financing mechanisms. In the literature (Trigeorgis, 1996) there are different types of real options: options to defer, grow, stage, scale, abandon, and switch.
The management of real options focuses on adequate implementation (from the selection of projects, through the decisions to continue the investments, to the exit). The classic mechanism of investment decisions is represented by the expenditure of resources with an opportunity cost under uncertainty and irreversibility (Dixit et al., 1994). The use of real options in VCF starts from the advantage of understanding multi-stage investments (Herath and Park, 2002) based on building and managing different real options (making use of those options that add value and giving up those that do not).
A financial option, defined as the right (but not the obligation) to take an action in the future (Amram and Kulatilaka, 1999; Kogut and Kulatilaka, 2001), provides protection against unwanted price fluctuations. Options are characterised by the asymmetry between a limited downside and an unlimited upside. Real options, on the other hand, were defined (Trigeorgis, 1996) as contingent investments that secure future decision rights by offering both continuation/expansion opportunities (growth options allow subsequent investments leveraging the project) and the possibility to defer or abandon (exit options). The sequential approach allows updated information regarding the project's dynamics to be included and makes the investment process more flexible (through learning and discovering).
The principle of investment timing starts from the observed combination of uncertainty and irreversibility, and suggests the importance of introducing a wait or defer option of type D, which represents the cost of waiting until the uncertainty is resolved, and a growth option of type G (invest immediately to target the opportunity).
It is worth noting that in this case NPV − D < 0 is possible even for a project that is feasible from the classical perspective (NPV > 0); conversely, investments can create value synergies through follow-on options, so that a non-feasible project (NPV < 0) can become interesting because NPV + G > 0. It should be noted that the factors influencing the value of a financial call (the underlying asset, the exercise price, the uncertainty, the risk-free rate, the maturity) change for real options, because the exercise price is not known and fixed (post divergence in valuation). In addition, there are costs associated with keeping options open, in a context where ambiguity is related to the maturity. Moreover, transactions with real options can be influenced by adverse selection and information asymmetries.
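A toy numerical illustration of these two inequalities follows; the project values and option values below are assumptions chosen only to show the sign reversals, not calibrated figures:

```python
# Case 1: a classically feasible project (NPV > 0) whose deferral option D
# is worth more than investing now, so NPV - D < 0 and waiting is optimal.
npv1, D = 1.2, 1.5
print(npv1 - D)        # -0.3 < 0: do not invest immediately

# Case 2: a classically infeasible project (NPV < 0) that unlocks a
# follow-on growth option G, so NPV + G > 0 and the project becomes attractive.
npv2, G = -0.4, 0.9
print(npv2 + G)        # 0.5 > 0: invest for the follow-on opportunity
```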
In order to provide a typical VCF transaction application, we propose a timing model of investments with future expectations, inspired by the Grenadier and Weiss (1997) model of current and future technology. It highlights the strategic potential of the timing of investments, starting from the assumption that sequential investments offer a vital source of flexibility in environments with higher levels of uncertainty.
The formulation of decisions should start from the distinction between two types of options capable of providing flexible responses to changes in markets: the defer option D (wait/delay and stage investments - the benefit of waiting for new information) and the growth option G (invest immediately or early - the ability to expand in the future). Immediate investment may forfeit participation in new market opportunities in the next round, while a waiting strategy is penalised by an opportunity cost. This trade-off reflects both the short-term timing expectations of the market (ex-dividend date) and long-term developments. It is especially important to understand the relative G/D value and the G/D conflicts.
A two-generation model of opportunities distinguishes between the G and D options, which have opposite effects on the VCF investment attitude. The timing and momentum of the emergence of the future opportunity VC2 are associated with uncertainty in the comparative values and with the frequency of change.
Let us consider two generations, a current opportunity (the VC1 project) and a future opportunity (the VC2 project), and two decisions related to the adoption of the current/future opportunity. One needs to estimate the significance of the relative G/D value starting from the value of the future opportunity, the uncertainty in the current/future values and the arrival time of the future opportunity, beginning by identifying the underlying source of uncertainty. The deferral option value is expressed by the opportunity cost of irreversibly investing in the VC1 project (whose value today is uncertain). In the case of the growth option G, the aim is to maximise the potential value of the VC2 project. The uncertainties of VC1 and VC2 may or may not be related. As regards the option values, higher uncertainty of VC2 lowers the D value of VC1 while increasing the G value of VC2. If VC2 represents only an incremental advance over VC1, changes in the value of VC2 have a small impact on the VC1 value.
One notices the existence of several mechanisms for generating value based on these two types of options; the way they are used depends on the actual situation. The D value is determined by the opportunity cost of investing in VC1 and suggests adjusting the decision according to the arrival process of new information: there is a trade-off between the first-mover advantage and the possible arrival of alternative future projects. The G value, on the other hand, depends on the preference for investing in future projects, starting from the confidence that the uncertainty will be resolved. After the new investment opportunity is identified, the information changes the corresponding uncertainty level and, as a result, the VCF manager either exercises the option (full investment) or stops the investment. The importance of the expected arrival time of VC2 originates in the protection mechanism for the initial investment, given that the monitoring costs diminish the G value. A minimal simulation of this trade-off is sketched below.
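The sketch below is a minimal Monte Carlo version of the two-generation trade-off. All distributions and parameters (the lognormal values of VC1 and VC2, the exponential arrival of VC2, the discount rate and the common cost) are assumptions made only to illustrate how the D-type and G-type strategies can be compared; they are not calibrated to any market:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
cost = 1.0                                    # investment cost, same for VC1/VC2

# Current opportunity VC1: moderately uncertain value, available now.
v1 = rng.lognormal(mean=0.10, sigma=0.4, size=n)

# Future opportunity VC2: higher upside and uncertainty, arriving later.
arrival = rng.exponential(scale=2.0, size=n)  # expected arrival in 2 years
discount = np.exp(-0.10 * arrival)            # 10% continuous discount rate
v2 = rng.lognormal(mean=0.30, sigma=0.8, size=n)

# Strategy A: commit capital to VC1 now (irreversible, forgoes VC2).
invest_now = v1 - cost

# Strategy B: wait for VC2 and invest only if it turns out positive
# (the option is exercised only when it adds value).
wait = discount * np.maximum(v2 - cost, 0.0)

print(invest_now.mean(), wait.mean())         # compare expected payoffs
```

Raising the VC2 uncertainty (sigma) or shortening its expected arrival time increases the value of waiting, which is exactly the G/D interplay described above.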
Real option analysis (ROA) rests on two essential elements: the ability of VCF managers to react naturally to changing market conditions and the ability to react to changes in risk. The main issues of ROA concern the practical aspects of implementation, the organisational aspects of managing options, a profound understanding of the mechanisms of generating value based on real options, as well as the quantitative estimation of the rationale for keeping options open.
Future research should consider aspects related to the timing of investments, a thorough investigation of value creation and of the mechanisms at work within real options in the context of VCF transactions, the introduction of the issue of valuing abandonment options, and the use of performance or risk measures to test ROA. Moreover, combining several theories, such as agency theory (understanding the limits of use) and real option ranges (commitment vs. flexibility for building performance), can be taken into account.
Conclusion
The VCF issue in Romania has barely been analysed in the specialised literature. The present approach is unique in that it is meant to be a multi-dimensional analysis based on the market's practical assumptions. Firstly, it tackles some aspects borrowed from PEF/LBO practice, as well as some hedge fund trading strategies, that help introduce flexible solutions for adjusting to the difficult conditions of emerging markets. Secondly, some flexible staging mechanisms are emphasized, based on a call option that affects the connection between the leverage and the compensation for risk.
VCF in emerging markets refers to a set of complex activities that start with the setting up of the initial structure and the selection of the target companies, continue with the timing of investments, and finish with a large set of flexibilities (extra capital investments, restructuring/closing solutions, club transaction partnerships, leveraged recapitalisations of VCF portfolios via debt transactions, and innovative exit solutions based on the IPO).
Regarding the strategies for adapting venture capital financing to the current context, in Romania the support of the government (based on Triple Helix partnerships in establishing/refining the specific investment culture and in upgrading/designing a new investment infrastructure) is of utmost importance. Performance is improved through the reduction of moral hazard problems, achieved by a transparent and competitive selection of projects, a proper design of the contracts (finding a common ground of interests through performance objectives specific to a particular field), as well as a fluid personalisation adjusted to the investment stages that characterise emerging markets (the duration of the investment, the IPO mechanisms).
As far as Romania is concerned, the clear statement of the VCF investors' objectives during all stages (selection, evaluation, contracting and structuring, progress monitoring, sustaining the added value and, especially, closing the VCF cycle) needs to emerge from the performance indicators and the typical Industry 4.0 value design mechanisms. | 2019-12-19T09:16:04.781Z | 2019-09-12T00:00:00.000 | {
"year": 2019,
"sha1": "ca158e808f3690ef71b2ac055af3667aea12e45f",
"oa_license": "CCBYNC",
"oa_url": "https://hrcak.srce.hr/file/365037",
"oa_status": "GREEN",
"pdf_src": "ElsevierPush",
"pdf_hash": "2317d94600566809cf148ebae59b60afda3d7ee4",
"s2fieldsofstudy": [
"Business",
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
117449043 | pes2o/s2orc | v3-fos-license | Quantum gravity and non-commutative spacetimes in three dimensions: a unified approach
These notes summarise a talk surveying the combinatorial or Hamiltonian quantisation of three dimensional gravity in the Chern-Simons formulation, with an emphasis on the role of quantum groups and on the way the various physical constants (c,G,\Lambda,\hbar) enter as deformation parameters. The classical situation is summarised, where solutions can be characterised in terms of model spacetimes (which depend on c and \Lambda), together with global identifications via elements of the corresponding isometry groups. The quantum theory may be viewed as a deformation of this picture, with quantum groups replacing the local isometry groups, and non-commutative spacetimes replacing the classical model spacetimes. This point of view is explained, and open issues are sketched.
1 Introduction and motivation
1.1 Historical remarks
Giving a talk on three dimensional (3d) gravity at a meeting in Cracow is like carrying coal to Newcastle: the beginnings of the subject are usually traced back to the paper [1] by Andrzej Staruszkiewicz, alumnus and later professor at the Jagellonian University in Cracow. Staruszkiewicz's paper, published in 1963, is about classical 3d gravity and its special features. The subject of 3d quantum gravity started only five years later with the realisation by Ponzano and Regge [2] that angular momentum theory plays an important role in this context. Gravity in 3d is now a large subject in its own right, which I can not possibly review here. However, in this introductory part of the talk I will at least attempt to identify a few of the main themes and relate them to the approach followed here. Influential papers by Deser, 't Hooft and Jackiw written in the 1980s [3,4,5,6] on classical and quantum scattering of particles demonstrated the possibility of carrying out non-perturbative calculations of quantum scattering processes in 3d gravity. As we shall see, they also contain indications of the relevance of the braid group in describing such processes. These indications are elaborated in the later literature, see for example [7,8,9], and turn out to be closely related to the quantum group approach pursued in this talk.
The Chern-Simons formulation of 3d gravity, observed in [10] and elaborated in [11], establishes a connection between 3d gravity and a host of areas in mathematical physics, including topological field theory, knot theory, the theory of Poisson-Lie groups and of quantum groups. Since this talk is based on the Chern-Simons approach, we will see many of these connections.
The early paper by Ponzano and Regge, mentioned above, provides the foundation of the spin foam approach to 3d quantum gravity. This is perhaps the approach to 3d quantum gravity that contains the most directly useful lessons for 4d quantum gravity. I will not discuss this approach in this talk, and shall not attempt to summarise the large literature on it. However, it is worth pointing out that there are close links with Chern-Simons theory (spin foam state sums may be viewed as discretisation of the path integral) and to quantum groups, see [12] for an early paper and [13,14] for examples of recent papers with many references.
The possibility that non-commutative geometry is needed to describe spacetime at the quantum level has long been a theme in quantum gravity research [15], see [16] for a recent discussion with some references. It is therefore interesting to ask if one can use the relatively tractable 3d situation to establish the role of non-commutative geometry in quantum gravity in a mathematically convincing way. Early discussions of non-commutative spacetime coordinates appear in the paper [17]. Spacetime non-commutativity in 3d quantum gravity is studied, in different approaches, in [18,19,20,21]. Putting these approaches into one coherent picture is one of the objectives of this talk.
Finally, I should mention two further important themes of 3d gravity research which I will not be able to touch on in this talk. One is the study of BTZ black holes, an introduction to which can be found in the book [22]. The other is the relation to 3d hyperbolic geometry, where the papers and books [23,24,25,26] may provide good starting points.
Topological degrees of freedom and interactions in 3d gravity
The Einstein field equations (without cosmological constant, and in units where the speed of light is 1), R_ab − (1/2) R g_ab = −8πG T_ab, determine the Ricci tensor of a spacetime in terms of the energy momentum tensor. In spacetime dimensions greater than three, the Ricci tensor does not fix the Riemann tensor and it is possible to have metrically non-trivial (i.e. curved) spacetimes satisfying the vacuum (T_ab = 0) field equations. In three spacetime dimensions, this is not possible. The Ricci tensor determines the Riemann tensor and, as a result, the only vacuum solutions of the Einstein equations with vanishing cosmological constant are flat [22]. This result simplifies Einstein's theory of gravity in 3d dramatically, but does not render it trivial. There are non-trivial solutions of the Einstein equations in the presence of matter, and, if the topology of the three-dimensional manifold representing the universe is non-trivial, there may be vacuum solutions which, though flat, have non-trivial holonomy. These observations are often summarised in the slogan that in 3d gravity there are no gravitational waves but that the theory has topological degrees of freedom.
The simplest solution of the Einstein equations illustrating the previous paragraph is the spacetime surrounding a point-particle. The energy-momentum tensor is a Dirac delta-function with support on the world line of the particle. The metric solving the field equations is flat away from the world line and singular on the world line. More precisely, it is a direct product of a cone (space) and ℝ (representing time) [22]. The line element, in terms of polar coordinates (r, φ) with r > 0 and a time coordinate t, is simply ds² = −dt² + dr² + r²dφ² (1.1). However, the range of φ is [0, 2π − µ), where the parameter µ is related to the particle's mass m and to Newton's constant G via µ = 8πGm.
In three dimensions, the physical dimension of G is that of an inverse mass, so that µ is a dimensionless, angular parameter. The effect of a particle on the geometry of spacetime is, then, to cut out a wedge of size µ from the spacetime surrounding the particle's world line.
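As a quick numerical illustration (a minimal sketch in Python, with an assumed mass value in units where the speed of light is 1 and G is an inverse mass), the deficit angle follows directly from µ = 8πGm:

```python
import numpy as np

G = 1.0          # Newton's constant in 3d (inverse-mass units), assumed value
m = 0.02         # particle mass in the same units, assumed value

mu = 8 * np.pi * G * m
print(mu)                    # deficit angle ~0.503 rad cut out of the cone
print(mu / (2 * np.pi))      # fraction of the full angle removed (~8%)
# The relative deflection between geodesics passing on opposite sides of
# the particle equals mu; no impact parameter enters the formula.
```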
It is instructive to consider the effect of the geometry (1.1) on light test particles. Such particles travel on geodesics, which are simply straight lines on the cone after it has been cut open. It is easy to check that geodesics passing the particle of mass m on one side are deflected, relative to particles which pass it on the other side, by the angle µ (in the coordinate system (t, r, φ)). This relative deflection is illustrated in Fig. 1 and is independent of the distance of closest approach between the heavy particle of mass m and the test particles (the impact parameter). The interaction is topological in the sense that it only depends on whether the test particle passes to the left or to the right of the heavy particle, and not on the relative distance. This kind of interaction is familiar from the Aharonov-Bohm interaction between electrons and a magnetic flux, and the analogy can be made precise: both interactions can be related to the braiding of the world lines of the interacting particles [9].
Figure 1: Geodesics in the space surrounding a conical singularity with deficit angle µ
Physical constants entering 3d quantum gravity
The four physical constants entering 3d quantum gravity are the speed of light c, Newton's constant G, Planck's constant ℏ and the cosmological constant Λ. From these, we can form two length constants (1.2) (remembering that the dimension of G is an inverse mass). In this talk we will deal with both Lorentzian and Euclidean gravity, and we parametrise Euclidean and Lorentzian metrics in a unified fashion by allowing c² < 0 in the Euclidean situation. As a result, both of the length parameters in (1.2) may be imaginary, depending on the signs of c² and Λ. From the ratio of the two length parameters we can form a dimensionless quantity: this defines the deformation parameter q (1.3), which may take values on the real line or on the unit circle in the complex plane.
It is useful to clarify the role played by the various constants in 3d gravity in general terms at this stage. The observation of the previous section that, in the absence of matter, solutions of the Einstein equations are locally flat generalises, in the presence of a cosmological constant, to the statement that vacuum solutions are locally isometric to model spacetimes, which depend on the parameters c and Λ. For Lorentzian gravity with vanishing cosmological constant, for example, the model spacetime is Minkowski space, while for Euclidean gravity with positive cosmological constant it is the three-sphere with the round metric. The isometry groups of the model spacetimes inherit a dependence on c and Λ; in the examples above they are, respectively, the Poincaré group in 3d and the 4d rotation group SO(4). Newton's constant G enters when one studies the dynamics of spacetime and plays the role of a parameter in the Poisson structure and that of a coupling constant to matter. Finally, ℏ enters in the quantisation, and the dimensionless parameter q in (1.3), combining all four constants, controls the quantum theory when all the constants 1/c, G, Λ, ℏ are non-zero.
Motivation and outline of the talk
The goal of this talk is to give a unified account of aspects of classical and quantum gravity in 3d, in which the physical parameters of the previous section enter as deformation parameters. Our account of classical gravity is based on the formulation of 3d gravity as a Chern-Simons gauge theory, where the local isometry groups play the role of the gauge groups. As we shall see, the parameters c and Λ enter this description via the structure constants of the Lie algebra of the gauge group, while the parameter G enters via the inner product (or trace) on the Lie algebra which is used in the Chern-Simons action. We sketch the description of the phase space of 3d gravity as the moduli space of flat connections, and review the description of its Poisson structure in a formulation, due to Fock and Rosly [27], which makes essential use of classical r-matrices.
The description of the Poisson structure in terms of r-matrices is tailor-made for the quantisation via the combinatorial or Hamiltonian scheme pioneered in [28], [29] and [30]. In this scheme, the quantisation is controlled by quantum groups which are deformations of the local isometry groups of the model spacetimes, with deformation parameters G and ℏ in addition to c and Λ. These quantum groups naturally act on non-commutative spaces, which one may interpret as deformations of the classical model spacetimes. This framework thus provides a concrete mathematical setting for exploring the proposal that, in quantum gravity, spacetime should be mathematically modelled in terms of non-commutative geometry. We end our talk with an evaluation of the successes and limitations of this approach to 3d quantum gravity.
Model spacetimes and isometry groups
The following treatment of the model spacetimes follows closely that in [31]. We use Roman letters a, b, c, ... for 3d spacetime indices, with range {0, 1, 2} (in both the Euclidean and the Lorentzian case). The model spacetimes arising in 3d gravity can be described in a simple and unified fashion in terms of a metric g_µν (2.1) on an auxiliary ℝ⁴; here we use Greek indices for the range {0, 1, 2, 3}. The model spacetimes can be realised as embedded hypersurfaces via the defining equation (2.2). This two-parameter family includes the three-sphere S³ (c² < 0, Λ > 0), double covers of hyperbolic space H³ (c² < 0, Λ < 0), de Sitter space dS₃ (c² > 0, Λ > 0) and anti-de Sitter space AdS₃ (c² > 0, Λ < 0). Double covers of Euclidean space E³ and Minkowski space M³ arise in the limit Λ → 0, which one should take after multiplying the defining equation in (2.2) by Λ. In Fig. 2 the embedded model spacetimes are shown (with one spatial dimension suppressed).
In order to be able to take the limit Λ → 0 for the associated isometry groups, it is best to work with the inverse metric (2.3). The Lie algebra generators of the isometry groups of (2.3) can conveniently be defined in terms of the Clifford algebra associated to (2.3) [31]: one introduces generators γ_µ satisfying the Clifford relations, so that the six Lie algebra generators are given by (suitably normalised) products of two distinct γ's, with commutation relations of the standard orthogonal-algebra form. The advantage of the Clifford algebra approach is that one can immediately write down two naturally defined invariant bilinear forms. One, denoted (·, ·), is defined by carrying out a trace in the Clifford algebra; as we shall see shortly, this is the Killing form on the Lie algebra. We now express the above generators in more conventional 3d notation. For this purpose we define the three-dimensional totally antisymmetric tensor with downstairs indices via ε₀₁₂ = 1.
Then we define the rotation generator J̃₀, the boost generators J̃₁, J̃₂ and the translation generators P̃_a via (2.7), where we use the spacetime part of the 4d metric g_µν (2.1) to lower indices; we refer to [31] for a discussion of the physical dimensions and the interpretation of these generators (which are denoted by the same letters, but without the tilde, there). The Lie algebra brackets now take the form (2.8), with indices raised via the inverse metric g^ab. The combination −c²Λ which occurs in the Lie brackets plays an important role in what follows, and we introduce λ = −c²Λ (2.9). The bilinear form (·, ·), already advertised as the Killing form, is given by the metric κ_ab, the most natural one on the Lie algebra so(3), respectively so(2, 1), spanned by J̃₀, J̃₁ and J̃₂. Note that κ_ab differs from the spacetime metric g_ab, but that it has the right physical dimensions and that imaginary c gives the usual Euclidean metric, as required.
It is one of the coincidences of 3d that spacetime and the Lie algebra of rotations and/or boosts are both three-dimensional. Both are equipped with Euclidean, respectively Lorentzian, metrics, but our derivation shows that, in a physically natural normalisation and construction, the spacetime and Lie algebra metrics come out differently. This is potentially confusing in calculations where indices are raised and contracted with these metrics, and most papers on 3d gravity use conventions where the two kinds of metric coincide. We can achieve this by switching from the physical Lie algebra basis used thus far to a geometrical basis, via a rescaling of the generators. In this geometrical basis, all the generators J_a are dimensionless, and all the translation generators P_a have the dimension of inverse time. One checks that the Killing metric now takes the form η_ab, which is diag(1, 1, 1) in the Euclidean and diag(1, −1, −1) in the Lorentzian case. Moreover, the Lie brackets take the same form as in (2.8), but all indices are now raised with the Lie algebra metric η^ab. This is convenient and we shall work in this basis for the remainder of this talk. We denote the Lie algebra with these brackets by g_λ. The conventions regarding the metric then agree with [32], but the convention regarding the naming of λ agrees with [11] and differs from [32], where Λ was used for what we call λ now. Conventions regarding the naming of the cosmological constant and the combination (2.9) differ in the literature, and the reader will need to take good care when comparing results from different sources.
The other bilinear form introduced in the Clifford language, ⟨·, ·⟩, has its only non-zero pairings between the rotation/boost generators J_a and the translation generators P_a. This pairing is non-degenerate for any value of λ and is crucial for the Chern-Simons formulation of 3d gravity, as we shall see.
In Table 1 we list Lie groups whose Lie algebras are (2.14). We have used the isomorphisms SU(2)/ℤ₂ ≅ SO(3) and SL(2, ℝ)/ℤ₂ ≅ SO(2, 1)₀, the identity component of SO(2, 1). The isometry groups are determined by their Lie algebras only up to coverings, and our choice in Table 1 is one of convenience. In the following, we write G_λ for this family of Lie groups.
The Chern-Simons formulation of 3d gravity
In Cartan's approach to Riemannian geometry [33], the fundamental geometrical object is a connection which combines an orthonormal frame field (or vielbein) e^a and the spin connection ω^ab on the orthonormal frame bundle into the so-called Cartan connection. Concretely, in the case of 3d geometry, we combine the dreibein with the translation generators P_a of (2.7), and the local connection one-forms ω^a = (1/2) ε^abc ω_bc with the rotation and/or Lorentz generators J_a, into the local one-form A = e_a P^a + ω_a J^a (3.1), taking values in the Lie algebra g_λ. The curvature (3.2) of the Cartan connection combines the Riemann curvature of the spin connection ω = ω^a J_a, a cosmological term, and the torsion T = (de^c + ε^c_ab ω^a ∧ e^b) P_c.
In the Cartan approach to general relativity (in any dimension), the Einstein-Hilbert action is expressed in terms of the vielbein and the connection, which are treated as independent variables; the action is called the Palatini action when interpreted in this way. In this approach, the condition of vanishing torsion (in the absence of spin sources) follows as a variational equation rather than as an a priori condition. It turns out that, in three dimensions, the Einstein-Hilbert (or Palatini) action is simply the Chern-Simons action for the Cartan connection (3.1), with the bilinear form (2.15) used as an inner product [10,11]. However, beyond the equality of the actions, the relationship between the Chern-Simons formulation and the Einstein formulation of 3d gravity is subtle: non-invertible dreibeins e^a may occur in the Chern-Simons formulation but are ruled out in the Einstein approach, which is based on metrics. This changes the nature of the gauge orbits in the two cases, so that the physical phase spaces are, in general, different. This was pointed out in a 1+1 dimensional context in [34] and demonstrated in an explicit example involving four particles in 3d gravity in [35]. Our approach to 3d gravity in the remainder of this talk is based on the Chern-Simons formulation.
We discuss the Chern-Simons action in terms of the general bilinear form (·, ·)_αβ = α⟨·, ·⟩ + β(·, ·) (3.3) on the Lie algebra g_λ. This form is non-degenerate iff [38] α² − λβ² ≠ 0, (3.4) and the associated action (3.5) contains the gravitational action (the terms proportional to α), the Chern-Simons action for the spin connection, and additional terms involving torsion. This general action was first considered by Mielke and Baekler [36] and recently revisited in [37], where the analogy between the terms proportional to β and the Immirzi term in 4d was stressed. The variational equations which follow from the general action (3.5) are simply the flatness condition for the Cartan connection, i.e. the vanishing of (3.2), provided the form (3.3) is non-degenerate. This appears to imply that the family of actions (3.5) leads to equivalent physics provided the condition (3.4) holds.
However, as argued in [38], the induced canonical structure on the phase space does depend on the ratio of α and β. Since we are only interested in the Chern-Simons formulation of 3d gravity here, we set β = 0 and α = 1/(16πG) (3.6) from now onwards.
The gauge formulation of 3d gravity can easily and naturally be extended to include minimal coupling between the gauge field and point particles. This was first discussed in detail in [39] and is reviewed in our notation in [38], where the dependence of the coupling on the parameters α and β is also discussed. We are not able to discuss the coupling to particles, the Poisson structure and the division by gauge equivalence in the space available here. Instead, we summarise the results in the next section and motivate them in general, geometric terms.
Classical r-matrices and Poisson brackets on the space of holonomies
Having established that, in the Chern-Simons formulation, classical solutions of the field equations are flat G_λ-connections, we can characterise the phase space of 3d gravity on a manifold M₃ in the Chern-Simons formulation as the space of flat G_λ-connections on M₃, modulo gauge transformations. In order to make this precise and concrete, we consider 3d universes of topology M₃ = ℝ × S, where S is a two-dimensional manifold representing space. Then one can show [11] that the phase space is the moduli space of flat G_λ-connections on S (i.e. the space of flat G_λ-connections modulo gauge transformations), equipped with the Atiyah-Bott symplectic structure [40,41], which is defined in terms of the bilinear form used in the Chern-Simons action. With the choice (3.6) this bilinear form is (1/16πG) ⟨·, ·⟩. (4.1) Therefore, in the Chern-Simons formulation, and assuming the factorisation M₃ = ℝ × S, the task of constructing a theory of quantum gravity amounts to quantising the moduli space of flat G_λ-connections on S, with the symplectic structure induced by (4.1).
Despite the elegance and generality of this result, a precise mathematical description of this moduli space and a rigorous quantisation remain a difficult task. In the case where S is a compact surface of genus g ≥ 2, the moduli space can be characterised in terms of the moduli space A_S of flat SU(2) connections in the Euclidean case, and in terms of Teichmüller space T_S (a component of the moduli space of flat SL(2, ℝ) connections) in the Lorentzian case. In Table 2 we reproduce a summary of the results given in [42], where further references can be found. The results in the Lorentzian case are due to [23,24].
For each of the symplectic manifolds in the table, one may in principle attempt a quantisation and a subsequent interpretation in terms of 3d quantum gravity. In this talk I summarise a description of the moduli space and its Poisson structure which is closely based on the parametrisation in terms of G_λ-valued holonomies, and which uses a concrete and unified description of the Poisson structure, tailor-made for quantisation. The idea for this description is due to Fock and Rosly [27]. It is the foundation of the combinatorial or Hamiltonian quantisation programme for Chern-Simons theory, described in [28,29,30].
Fock and Rosly's description of the phase space starts with the observation that flat connections on a manifold are characterised by their holonomies along non-contractible paths. The moduli space of flat connections on a surface S can thus be parametrised by the set of holonomies along closed paths which generate the fundamental group of S, modulo gauge transformations at the common starting and end point of those paths. So far we have assumed that S is a compact manifold without boundary, but in the Fock-Rosly description it is easy to include punctures decorated with co-adjoint orbits of G_λ. This is desirable in the context of 3d gravity, since a co-adjoint orbit of G_λ physically corresponds to the phase space of a point particle, and the 'decoration' of a puncture with a co-adjoint orbit is precisely the effect of minimal coupling between the Cartan connection (3.1) and the point particle's degrees of freedom. Moreover, this minimal coupling correctly reproduces the gravitational coupling between a point particle and the gravitational field, with momentum acting as a source of curvature and spin acting as a source of torsion. For details we refer the reader to the papers [39,38] and, for a relatively brief but pedagogical account, to the talk [43].
The effect of the minimal coupling to co-adjoint orbits on the holonomies can be summarised as follows. Using the inner product (4.1), co-adjoint orbits can be written as adjoint orbits.
For particles with mass m and spin s, decorating a puncture on S with the corresponding orbit forces the holonomy around the puncture to lie in a fixed conjugacy class C_µσ. For a genus g surface S with n punctures and orbit labels µᵢ, σᵢ, i = 1, ..., n, a set of generators of the fundamental group is shown in Fig. 4. The moduli space of flat G_λ-connections can be written in terms of the extended phase space P̃ (4.2) by imposing the condition that a suitable composition of the generating loops is contractible (and hence has trivial holonomy), and by dividing by conjugation at the base point; this gives the phase space P (4.3). The trick introduced by Fock and Rosly is to define a (symplectic) Poisson structure on the extended phase space P̃ (4.2) in such a way that the G_λ-conjugation action on P̃ is symplectic and that the symplectic quotient by it gives P with the Atiyah-Bott symplectic structure. The Poisson structure on P̃ is defined in terms of a classical r-matrix, i.e. an element r ∈ g_λ ⊗ g_λ which satisfies the classical Yang-Baxter equation (CYBE) (4.4), written in standard notation which is explained, for example, in textbooks like [44] or [45]. The information about the inner product used in the definition of the Atiyah-Bott symplectic structure (or, equivalently, in the Chern-Simons action) is encoded in r via the following compatibility requirement. Definition: An r-matrix is compatible with a Chern-Simons action if it satisfies the CYBE (4.4) and if its symmetric part is equal to the Casimir associated to the Ad-invariant, non-degenerate symmetric bilinear form used in the Chern-Simons action.
In our case, the relevant Casimir operator is the one associated to the 'gravitational' bilinear form (4.1). A family of compatible r-matrices is given by [46,32] r = 32πG P_a ⊗ J^a + ε_abc n^a J^b ⊗ J^c, with n_a n^a = −λ, (4.6) where we use the metric (2.13) to lower and contract indices.
Two comments are in order here. The first concerns the dependence of the solution on the real vector n = (n⁰, n¹, n²), which has to satisfy the given constraint but is otherwise arbitrary. Thus, for λ < 0, the vector n is any vector of length √(−λ) in the Euclidean (hyperbolic) case, but is necessarily time-like in the Lorentzian (de Sitter) case. For λ = 0, n vanishes in the Euclidean case but may be any light-like vector in the Lorentzian case. For λ > 0, n is space-like in the Lorentzian (anti-de Sitter) case, while there is no real solution in the Euclidean case. However, the Euclidean case with λ > 0 (and hence Λ > 0) is the only case where the model space (S³) and the local isometry group SU(2) × SU(2) are both compact, and the Chern-Simons theory is simply two copies of SU(2) Chern-Simons theory, which is extensively studied in the literature; see [47] for an early paper. I will not say much about this case in the following, although it seems interesting and worthwhile to relate the many results on SU(2) Chern-Simons theory to the framework discussed here and to interpret them in terms of 3d gravity. Presumably this would involve using a complex vector n and imposing a suitable reality condition after quantisation.
The second comment concerns the non-uniqueness of the solutions (4.6). These solutions all amount to equipping the Lie algebra g_λ with the structure of a classical double; see [44,50] for general background and [32] for an explanation in the context of 3d gravity. However, other r-matrices are known which are also compatible with the bilinear form (4.1) but which do not belong to the family (4.6); see [38] for examples and the forthcoming paper [48] for a systematic discussion. This gives rise to an ambiguity in the implementation of the Fock-Rosly prescription and the subsequent quantisation, but presumably leads to the same quantum theory. This issue has not been conclusively settled and is also discussed in [48]. One advantage of working with the r-matrices associated to classical doubles is that one may quantise by going over to the associated quantum double. This is what we will review in the next section.
The Fock-Rosly Poisson structure on P̃ is determined in terms of a compatible r-matrix. The formulae for the brackets are explicit but lengthy, and we refer the reader to [27] or [30] for details. Some understanding of them can be gained from the observation, made in [49], that the Poisson brackets can be 'decoupled' after a suitable coordinate change, and that, as a symplectic manifold, P̃ is isomorphic to a direct sum of g copies of the Heisenberg double of the Poisson-Lie group G_λ (with the Sklyanin Poisson-Lie structure defined by r) and the manifolds C_µᵢσᵢ, i = 1, ..., n, viewed as symplectic leaves of the dual Poisson-Lie group G*_λ (4.7). The general definitions of the Sklyanin, Heisenberg double and dual Poisson structures can be found in the paper [49] and also in the textbook [44] or the lecture notes [50]. We will give some further background in the next section, but here we note that all of these structures for the family of groups G_λ with the r-matrices (4.6) are explicitly given in [32]. For example, in the case of vanishing cosmological constant (and vanishing n), one finds [51,52] Hei(SL(2, ℝ) ⋉ ℝ³) ≅ T*(SL(2, ℝ) × SL(2, ℝ)).
In the Fock-Rosly description of the phase space (4.3) one still needs to impose a constraint in P̃ and take a quotient. We will not pursue this here, since we are mainly interested in the quantum theory. Our approach to quantisation is to quantise P̃ first, and then to take the quotient at the quantum level.
5 Quantum groups and 3d quantum gravity
The combinatorial quantisation programme and associated quantum groups
The task of constructing a quantum theory of 3d gravity in the Chern-Simons approach followed here is that of quantising the Poisson algebra of functions on the physical phase space (4.3), and of finding a unitary, irreducible representation (UIR) of the quantised algebra. By 'quantisation' of a Poisson manifold M we mean, generally speaking, a deformation F_ℏ(M) of the algebra of functions on that manifold, with a multiplication depending on the parameter ℏ in such a way that the commutator of two elements of F_ℏ(M), to first order in that parameter, equals the Poisson bracket of the classical limits of those elements [44]. Details, for example the precise class of functions (C^∞ or some algebraic subset), depend on the Poisson manifold in question.
In the combinatorial approach, one simplifies this task by first quantising the extended phase space (4.2), and then imposing the reduction to (4.3) at the quantum level by a suitable condition on the Hilbert space carrying the UIR of the quantisation of (4.2). An important advantage of the combinatorial approach is that one really only needs to carry out the quantisation of the building blocks entering the decomposition of the extended phase space (4.7), and that these, in turn, can all be constructed from one quantum group H and its representations.
The quantum group H in question is the quantisation of the so-called dual Poisson-Lie group G*_λ of G_λ (with the Sklyanin Poisson-Lie structure defined by the r-matrix (4.6)). This is explained in general terms in [28,29] and, in the particular case of semi-direct products like the Euclidean or Poincaré groups, in [52]. It can be motivated as follows.
The dual Poisson-Lie group G*_λ is a non-linear analogue of the Kirillov-Kostant-Souriau (KKS) Poisson structure on the dual g*_λ of the Lie algebra g_λ [53,50]. Since the quantisation of the KKS structure on g*_λ is the universal enveloping algebra U(g_λ), it is not surprising that the quantisation of the Poisson algebra of G*_λ is a deformation of U(g_λ). Thus we see, already at this general level, that the quantum groups H are Hopf algebras obtained by deforming the local isometry groups G_λ (or, more precisely, their group algebras). We therefore refer to them as quantum isometry groups in the following. There is a further similarity between the canonical Poisson structures on g*_λ and G*_λ: the symplectic leaves of the former are co-adjoint orbits, while the symplectic leaves of the latter are conjugacy classes in G_λ [44,50]. Given the non-degenerate bilinear pairing (4.1) on g_λ, co-adjoint orbits may be thought of as adjoint orbits in g_λ, and conjugacy classes in G_λ may be thought of as non-linear deformations of these.
The irreducible representations of a Lie algebra can be obtained by quantising the KKS Poisson algebra and imposing the conditions which define the co-adjoint orbits in terms of suitable Casimir operators. This analogy, and the general comments of the previous paragraph, go some way towards motivating the result that the quantisation of the conjugacy classes C_µᵢσᵢ in the decomposition (4.7) gives UIRs V_µᵢσᵢ of the quantum group H (with possible quantisation conditions on the labels µᵢ, σᵢ). The quantisation of the classical Heisenberg double of G_λ is the Heisenberg double of the Hopf algebra H [45]. Its unique irreducible representation, in the cases where it has been studied, is a quantum group analogue of the regular representation of a group, and we therefore denote it by Reg(H). We thus arrive at the Hilbert space (5.1) for the quantisation of the extended phase space (4.7). This space is, by construction, a (reducible) representation of the quantum group H. The Hilbert space for the quantisation of the physical phase space (4.3) is the invariant part under this H-action [28,29,30], (5.2). In order to carry out the combinatorial quantisation programme in practice one needs to construct the quantum group H and to find the representations appearing in (5.1). The construction of the quantum group H is facilitated by the fact that the r-matrices (4.6) equip g_λ with the structure of a classical double of either sl(2, ℝ) (in the Lorentzian case) or su(2) (in the Euclidean case), with suitable bialgebra structures given in [32]. Following the principle that the quantisation of the double is the quantum double of the quantisation [54], the family of quantum groups H can easily be found. We list them in Table 3, which should be seen as a quantised and 'gravitised' version of Table 1 of the classical isometry groups. We will not give definitions or lists of generators and relations for any of these quantum groups here, but refer to the standard textbooks [44,45]. However, to gain some physical understanding it is worth noting that half the generators should be interpreted as rotation/boost generators and the other half as momentum generators. Thus, for example, in the Lorentzian case with vanishing cosmological constant, D(U(su(1, 1))) ≅ U(su(1, 1)) ⋉ C(SU(1, 1)) (5.3) as an algebra, where C(SU(1, 1)) denotes the complex-valued, smooth functions on SU(1, 1). The generators J_a of U(su(1, 1)) are simply the rotation generator J₀ and the boost generators J₁, J₂ already encountered in (2.14), while elements of C(SU(1, 1)) should be thought of as functions or coordinates on the non-linear momentum space SU(1, 1); see [43] for details and references, and also below for further remarks. Finally, the parameter q appearing in the table is the one introduced at the beginning of this talk (1.3). It combines all four physical parameters entering quantum gravity with a cosmological constant.
The combinatorial quantisation programme has been carried out to various degrees of completeness in the different cases. Table 3 lists the quantum isometry groups (Euclidean case first, Lorentzian case second):
Λ = 0: D(U(su(2))) | D(U(su(1, 1)))
Λ > 0: D(U_q(su(2))), q a root of unity | D(U_q(su(1, 1))), q ∈ ℝ
Λ < 0: D(U_q(su(2))), q ∈ ℝ | D(U_q(sl(2, ℝ))), q ∈ U(1)
For the Euclidean case with vanishing cosmological constant, the relevance of the quantum double D(SU(2)) was first pointed out in [8], and the proof that it plays the role of the quantum isometry group H in the combinatorial approach to Euclidean quantum gravity without cosmological constant was given in [46]. The Lorentzian case was considered in [9], and the general situation of Chern-Simons theory with certain semidirect product gauge groups was considered in [52]. The situation where the classical gauge group is SL(2, ℂ) (i.e. Euclidean with Λ < 0 or Lorentzian with Λ > 0) was studied in [55], with the relevant quantum group already constructed in [56]. The Euclidean case with Λ > 0 is essentially the Turaev-Viro model. Finally, the very interesting anti-de Sitter case (Lorentzian and Λ < 0) has, unfortunately, not received much attention in the framework sketched here.
Non-commutative momentum addition, braiding and non-commutative spacetimes
Having constructed the quantum groups which control the construction of 3d quantum gravity according to the combinatorial scheme it is natural to ask what one can learn from them about the physics of 3d quantum gravity.
Formally, the role of the quantum isometry groups listed in Table 3 is strictly auxiliary. The physical Hilbert space (5.2) is, by definition, invariant under the action of those quantum groups. Physical observables which act on this Hilbert space (see [57] for a discussion of classical examples) are not obviously related to the quantum isometry groups. As already mentioned (and discussed further in the Conclusion), the r-matrix used in the Fock-Rosly scheme, and hence the associated quantum group, is not uniquely determined. Both of these observations suggest that the quantum groups in Table 3 have only an indirect physical significance.
On the other hand, the quantum isometry groups, their representations and even their quantum R-matrices can be directly related to physical properties of particles in 3d quantum gravity. We will illustrate this for the case of vanishing cosmological constant. In that case, the quantum doubles appearing in Table 3 are quantum doubles of the Lie groups SU(2) in the Euclidean case and SU(1, 1) (which is isomorphic to SL(2, ℝ)) in the Lorentzian case. These quantum doubles are semi-direct products as algebras, as shown in (5.3), and have a representation theory which is very similar to those of the Euclidean and Poincaré groups [8,58,59]. The only difference is that the 'mass shells' in momentum space which characterise UIRs of the Euclidean and Poincaré groups become conjugacy classes in the non-linear momentum spaces (SU(2) in the Euclidean case and SU(1, 1) in the Lorentzian case). Physically, this means that momenta are no longer vectors but group elements of SU(2) or SU(1, 1), and that momentum 'addition' is implemented by group multiplication in SU(2) or SU(1, 1) instead of vector addition. These non-linear and non-commutative properties of momentum addition for gravitating particles reflect the use of holonomies for characterising particle properties, as used in early papers on 3d gravity [3,7]. We can even see this in the simplest non-trivial example of a 3d spacetime, namely the cone shown in Fig. 1. The spacetime is fully characterised by the deficit angle µ, which is the mass of the particle in units of the Planck mass 1/8πG. However, the angular nature of this parameter fits very well into the picture of SU(1, 1)-valued momenta: we simply think of µ as a rotation, i.e. a particular element of SU(1, 1).
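The group-valued momenta and their non-commutative 'addition' are easy to exhibit explicitly. The following minimal sketch uses the Euclidean case (SU(2)) for simplicity, with assumed deficit angles and rotation axes; it represents momenta as SU(2) matrices and composes them by matrix multiplication:

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def momentum(mu, axis):
    """Group-valued momentum: SU(2) rotation by angle mu about a unit axis,
    u = cos(mu/2) 1 - i sin(mu/2) n.sigma."""
    n = np.asarray(axis, float) / np.linalg.norm(axis)
    return np.cos(mu / 2) * np.eye(2) - 1j * np.sin(mu / 2) * (
        n[0] * SX + n[1] * SY + n[2] * SZ)

u1 = momentum(0.8, [0, 0, 1])   # deficit angle 0.8 about the z-axis (assumed)
u2 = momentum(0.5, [1, 0, 0])   # deficit angle 0.5 about the x-axis (assumed)

# Momentum 'addition' is group multiplication and fails to commute:
print(np.allclose(u1 @ u2, u2 @ u1))                  # False

# The total deficit angle of the composite is read off from the trace:
mu_total = 2 * np.arccos(np.clip(np.trace(u1 @ u2).real / 2, -1, 1))
print(mu_total)   # not simply 0.8 + 0.5: the composition is non-linear
```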
A closely related property of gravitating particles is their scattering, as analysed in some of the early papers on 3d quantum gravity [5,6]. It turns out that the S-matrix for the scattering of two massive, spinning particles can also be interpreted in terms of quantum groups and the sort of topological interactions discussed in Sect. 1.2. As shown in [9], the S-matrix is naturally related to the R-matrix of the quantum double D(U(su(1, 1))).
Finally, the curved and non-abelian nature of the momentum manifold suggests that naturally defined position coordinates (which should generate translations on momentum space) should be non-commutative. One can argue this more formally by demanding that the momentum and position algebras be dual as Hopf algebras, leading to the family of Hopf algebras shown in Table 4. A particular, and much studied, example is the 'spin spacetime' with generators X₀, X₁, X₂ and commutation relations [X_a, X_b] = ℓ_P ε_abc X^c, (5.4) where ℓ_P = 8πℏG is the Planck length in 3d gravity; both the Euclidean and the Lorentzian interpretation apply. This non-commutativity of positions was already considered in [17] and [18], and appears naturally in the quantum group theoretical framework considered here. It can also be derived in other approaches, namely in a path integral for particles where the gravitational field degrees of freedom have been integrated out [20], or in a coset construction [21] analogous to the way the classical spacetimes (2.2) can be obtained as homogeneous spaces of the classical isometry groups G_λ. Finally, the role of the quantum double D(SU(2)) as a quantum isometry group of 3d Euclidean space was noted in [19], where the latter was studied from the point of view of non-commutative differential geometry.
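As a small consistency check, one can verify with a computer algebra system that the brackets (5.4) satisfy the Jacobi identity, so that the spin spacetime coordinates generate a genuine Lie algebra (a rescaled su(2) in the Euclidean interpretation). The sketch below assumes Euclidean signature, with indices raised by the identity metric:

```python
import sympy as sp
from sympy import LeviCivita

lP = sp.symbols('l_P', positive=True)

# Structure constants of [X_a, X_b] = l_P * eps_{abc} X_c (Euclidean indices)
def f(a, b, c):
    return lP * LeviCivita(a, b, c)

# Jacobi identity: the cyclic sum of f(a,b,d) f(d,c,e) over the internal index d
jacobi_holds = all(
    sp.simplify(sum(f(a, b, d) * f(d, c, e)
                    + f(b, c, d) * f(d, a, e)
                    + f(c, a, d) * f(d, b, e) for d in range(3))) == 0
    for a in range(3) for b in range(3) for c in range(3) for e in range(3)
)
print(jacobi_holds)   # True: (5.4) defines a Lie algebra, su(2) up to the scale l_P
```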
It is interesting that physical arguments, path integrals, coset constructions and general quantum group theoretical considerations all lead to the same non-commutative spacetimes. One way of exploring the physical significance of this non-commutativity is to study representations of the quantum doubles in Table 3 in position space. This requires Fourier-transforming the usual formulation of the representations in momentum space, in analogy to the way the UIRs of the Poincaré group can be Fourier-transformed into the solution spaces of the familiar wave equations of relativistic physics (Klein-Gordon, Dirac, Maxwell etc.). This was carried out for D(SU(2)) in [60] and is considered for the Lorentzian case in [61].
Outlook and conclusion
We have seen that the combinatorial quantisation of the Chern-Simons formulation of 3d gravity gives a unified picture of the various regimes of 3d gravity, with the physical parameters c, Λ, G and ℏ entering as deformation parameters in distinctive ways. Quantum groups naturally replace the classical isometry groups in this approach to 3d quantum gravity, and non-commutative spacetimes replace the classical model spacetimes. In general, the relation between the quantum isometry groups and the physical Hilbert space of 3d quantum gravity is a formal one, but we have seen that aspects of the quantum isometry groups, like the non-commutative momentum addition and the braiding via the quantum R-matrix, have a direct physical interpretation. It is worth noting that it is possible to take a Galilean limit c → ∞ in the framework discussed here [31,62], and that the non-commutative quantum space is the Moyal plane in that case, with a time-dependent non-commutativity of the spatial coordinates.
In order to clarify the physical interpretation of quantum isometry groups and the associated non-commutative spacetimes it may be useful to consider universes with a boundary instead of the spatially compact universes considered in this talk. The treatment of boundaries in the classical theory is discussed in [35,63,64] but a general treatment of the quantisation has not been given. Another approach would be to work directly on the physical phase space as in [57,65], and to attempt the quantisation there.
Other quantum groups than quantum doubles have been discussed in relation to 3d quantum gravity, notably bicrossproducts or κ-Poincaré algebras, which were originally introduced in 4d [66,67,68]. As shown in [38], the κ-Poincaré algebra with the usual time-like deformation parameter is not compatible with 3d gravity in the combinatorial framework. On the other hand, κ-Poincaré algebras with space-like deformation parameters are possible. This and other quantisation ambiguities of 3d quantum gravity are discussed in the forthcoming paper [48]. | 2011-05-19T17:57:32.000Z | 2011-05-19T00:00:00.000 | {
"year": 2011,
"sha1": "de2865a137ec3a2d588a67dd7639c82382a993bf",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "Arxiv",
"pdf_hash": "de2865a137ec3a2d588a67dd7639c82382a993bf",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
92695270 | pes2o/s2orc | v3-fos-license | MASS TRANSPORT IN MOLTEN ALKALI CARBONATE MIXTURES
A one-dimensional model based on the Stefan-Maxwell theory of mass transfer was used to calculate the composition changes of the electrolyte in the MCFC. Stefan-Maxwell diffusivities were calculated from conductivity and transport number data and used in the model. The calculated composition changes agreed with experimental results for lithium-potassium carbonate, but less well for lithium-sodium carbonate. The time dependent change of composition was also calculated, but this could not explain the difference. In addition, the influence of the porosity of the fuel cell components, together with the electrolyte filling degree, was calculated, and this showed a large influence on the composition change.
INTRODUCTION
The Molten Carbonate Fuel Cell (MCFC) is a well developed fuel cell technology for converting chemical energy to electrical energy. The major factors limiting lifetime are corrosion, nickel shorting and electrolyte losses. All of these processes depend on the local composition of the electrolyte. Large efforts are being made to minimise corrosion by trying different materials and by adding additives to the electrolyte to decrease nickel solubility. The effects of concentration changes of the electrolyte are generally considered to be small and negligible. Kunz (1) investigated the movement of electrolyte in an MCFC stack and concluded that this movement was a significant problem. Brenscheidt et al. (2)(3)(4) have measured changes in the electrolyte composition of a single cell under different current loads, and have combined these results with a model based on Fick's law to predict changes at different current densities. Their model was based on internal mobility data (5)(6) and conductivity data (7). The mobility measurements (5)(6) were done using the Klemm method, which measures the separation of two cations in an electrical field. The measurements showed that the mobility of the larger ion is greater than that of the smaller one, which is not the normal case for dilute electrolytes. The reason for this has been explained in (8)(9)(10), where it is also mentioned that segregation of cations will not be a serious problem in the MCFC.
The MCFC electrolyte is a binary electrolyte in which the salts have a common anion. A model for binary molten salts with a common anion, based on the Maxwell-Stefan approach to multicomponent mass transport, has been developed (11). Input parameters for the model are the Stefan-Maxwell diffusivities, which can be calculated from conductivity, density and mobility data. Conductivity measurements (12)(13)(14) have been made over a wide range of temperatures and compositions for binary systems of potassium, sodium and lithium carbonates. Mobility and density data have also been measured (5)(6)(15).
The aim of this study was to calculate whether segregation of cations occurs and, if so, how large the composition changes of the electrolyte are. A one-dimensional form of the model developed in (11) was used. Stefan-Maxwell diffusivities were calculated using conductivity data (13), density data (15) and internal mobility data (5)(6). The model was used to calculate the concentration profiles at different current loads for planar electrodes and for porous electrodes. The time dependent concentration profiles for a typical current load were also calculated. In addition, the influence of different parameters on the melt composition was investigated for a porous electrode at the same current load.
MODEL
For an isothermal and isobaric system, with no outer forces acting on a unit volume of electrolyte, the driving force is equal to the sum of the friction forces. In a concentrated solution, the motion of species i relative to the other species can be described by the equation for multicomponent transport (16)

c_i ∇µ_i = Σ_(j≠i) c_i c_j (v_j − v_i) / (c_T D_ij)    [1]

For a molten alkali carbonate salt mixture, equation [1] gives two independent equations, here chosen to be those for the two positive alkali ions, which form equation system [2]. Equation system [2] can be inverted to give the fluxes of the two positive ions relative to the velocity of the common anion. To simplify the calculations, the salt is treated as non-dissociated. The electrochemical potential of the salt can be calculated on a mole fraction basis (18). Using the fact that the electrochemical potential of the salt is a weighted sum of those of its ions, equation system [2] can be simplified, using the superficial current, to equation system [3], in which the fluxes are taken relative to a stationary axis.
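As an illustration of how equation system [2] is set up and inverted in practice, the sketch below writes equation [1] for the two cations of a Li/K melt, with velocities referred to the carbonate ion, and solves the resulting 2×2 friction system numerically. All concentrations, diffusivities and driving forces are assumed placeholder values, not the fitted data used in this work:

```python
import numpy as np

# Assumed state for a Li2CO3/K2CO3 melt: concentrations of Li+, K+, CO3(2-)
c = np.array([20e3, 10e3, 15e3])            # mol m^-3 (placeholder values)
cT = c.sum()
D_LiK, D_LiC, D_KC = 2e-9, 5e-9, 8e-9       # Stefan-Maxwell diffusivities, m^2 s^-1

# With w_i = v_i - v_CO3, eq. [1] for i = Li+, K+ gives d = M w,
# where d_i = c_i * grad(mu_i) and M collects the friction coefficients.
M = np.array([
    [-(c[0]*c[1]/(cT*D_LiK) + c[0]*c[2]/(cT*D_LiC)),   c[0]*c[1]/(cT*D_LiK)],
    [  c[0]*c[1]/(cT*D_LiK), -(c[0]*c[1]/(cT*D_LiK) + c[1]*c[2]/(cT*D_KC))],
])

d = np.array([-5.0, -2.0])                  # assumed driving forces, N m^-3
w = np.linalg.solve(M, d)                   # cation velocities relative to CO3(2-)
N = c[:2] * w                               # cation fluxes relative to the anion
print(w, N)
```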
The flux of carbonate ions is always proportional to the current.
For molten alkali carbonates it has been shown (15) that the density of the electrolyte is not a linear combination of the two salt densities. The concentration of each species i can be calculated from the measured density (15), giving relation [4]. Using [4] and the relationship [5] between the concentrations of the different ions and those of the two salts as a function of composition, [3] can be reduced to a single equation. By using a material balance (16) and macro-homogenisation theory for the porosity (17), the time-dependent change of salt A can be written as equation [6].
Conductivity model
For an electrolyte with uniform composition, the electrochemical potential of an ion is proportional to the electrical field. Using this fact and inverting [2], an expression for the conductivity can be derived from the relationship between the current and the potential gradient; the resulting conductivity expression is equation [7].

Extracting Stefan-Maxwell diffusivities

As can be seen in [7], the Stefan-Maxwell diffusivities (binary diffusion coefficients) need to be determined in order to obtain the transport number and the diffusion coefficient required for [6]. The binary diffusion coefficients are assumed to be a function of composition (a low-degree polynomial) and to follow an Arrhenius relationship with respect to temperature:

\( D_{ij,a} = D^{0}_{ij,a}(x_A) \exp(-E_{ij,a}/RT) \)  [8]
Determination procedure. The relative errors between the measured and calculated values for both the conductivity and the transport number were calculated for the complete data set. The binary diffusion coefficients were then obtained by minimising the Euclidean norm of these relative errors using the lsqnonlin algorithm in MATLAB 6.5.1.
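As an illustration of this determination procedure, the sketch below sets up the same kind of fit in Python with scipy's least_squares (an analogue of lsqnonlin). The data rows, the first-degree polynomial for the pre-exponential factor, and the closed-form expressions used for the conductivity and transport number are all illustrative placeholders, not the fitted relations or the literature data (12)-(15) used in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

R = 8.314  # J/(mol K)

# Hypothetical data rows: (x_A, T in K, conductivity in S/m, transport number).
data = np.array([
    [0.43, 848.0, 150.0, 0.55],
    [0.50, 898.0, 175.0, 0.58],
    [0.62, 948.0, 205.0, 0.62],
])

def diffusivity(c0, c1, E, xA, T):
    # D_ij = D0_ij(x_A) * exp(-E_ij / RT), first-degree polynomial in x_A
    return (c0 + c1 * xA) * np.exp(-E / (R * T))

def predict(p, xA, T):
    D_AC = diffusivity(p[0], p[1], p[2], xA, T)
    D_BC = diffusivity(p[3], p[4], p[5], xA, T)
    # Placeholder closures standing in for the expressions obtained by
    # inverting equation system [2]; the real forms couple all three D_ij.
    kappa = 1.0e12 * (xA * D_AC + (1.0 - xA) * D_BC)
    t_A = xA * D_AC / (xA * D_AC + (1.0 - xA) * D_BC)
    return kappa, t_A

def residuals(p):
    res = []
    for xA, T, kappa_meas, t_meas in data:
        kappa, t_A = predict(p, xA, T)
        res += [(kappa - kappa_meas) / kappa_meas,   # relative errors, as in
                (t_A - t_meas) / t_meas]             # the paper's objective
    return np.array(res)

p0 = np.array([1e-8, 1e-8, 5e4, 1e-8, 1e-8, 5e4])
fit = least_squares(residuals, p0)  # minimises the Euclidean norm of residuals
print(fit.x)
```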
Calculation of concentration changes
In order to decrease the number of unknowns in [6], the calculated diffusion coefficient, the transport number and the concentration of salt B were expressed as polynomials of the concentration of salt A. In this model, the effect of activity changes is ignored. Both the steady-state and the time-dependent solutions were calculated using the general equation formulation in FEMLAB 3.0a. The modelling base case is described in Figure 1. A linear current distribution is assumed for the electrodes and there is no flux of electrolyte through the boundaries. Filling levels for the cathode and anode were calculated using the method in (19), ignoring the effect of electrolyte wettability of the electrode. Figure 1. Modelling base case: anode with 40% porosity and 60% electrolyte filling level, matrix with 60% porosity and completely filled, and cathode with 70% porosity and 30% filling level.
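For readers without access to FEMLAB, a minimal explicit finite-difference sketch of the one-dimensional transient balance is given below, assuming a salt flux of the form N = -D dc/dx + t_A i/(2F) and uniform placeholder values for the porosity-filling product, D, t_A, geometry and initial composition. None of these values are the paper's fitted parameters, and the stoichiometric factor 2F in the migration term is an assumed convention.

```python
import numpy as np

F = 96485.0            # Faraday constant, C/mol
i_cell = 1500.0        # current density, A/m^2
L = 1.5e-3             # anode + matrix + cathode thickness, m (placeholder)
nx = 151
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
eps = np.full(nx, 0.4)          # porosity * filling level (placeholder)
c = np.full(nx, 0.62 * 2.0e4)   # Li2CO3 concentration, mol/m^3 (placeholder)
D = 2.0e-9                      # salt diffusion coefficient, m^2/s (placeholder)
tA = 0.6                        # cation transport number (placeholder)

dt = 0.25 * dx**2 / D           # explicit stability limit
for _ in range(20000):
    # salt flux at the faces between nodes: diffusion plus migration
    N = -D * np.diff(c) / dx + tA * i_cell / (2.0 * F)
    dcdt = np.zeros(nx)
    dcdt[1:-1] = -(N[1:] - N[:-1]) / dx / eps[1:-1]
    # no electrolyte flux through the outer boundaries (half-size volumes)
    dcdt[0] = -(N[0] - 0.0) / (0.5 * dx) / eps[0]
    dcdt[-1] = -(0.0 - N[-1]) / (0.5 * dx) / eps[-1]
    c += dt * dcdt

print(c.min(), c.max())  # enrichment at one electrode, depletion at the other
```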
Binary diffusion coefficients
The binary diffusion coefficients have been calculated for Li2CO3/K2CO3 (Li/K) and Li2CO3/Na2CO3 (Li/Na) melts. For Li/K, a Li2CO3 mole fraction of 0.4-0.7 and a temperature range of 550-700°C were used to fit the binary diffusion coefficients to the experimental data. A larger parameter window could not be used because the relative errors became too large. For Li/Na an even smaller parameter window had to be used: the Li2CO3 mole fraction varied between 0.4-0.6 and the temperature between 600-675°C. The smaller window was needed because the experimental transport numbers varied without a common trend and the error became too large. To capture the changes in the experimental data (i.e., conductivity and transport number), the number of degrees of freedom in D_ij(x_A) could be increased, though the integrity of the fits must then be maintained. This indicated that either some physical changes of the melt were not captured by the model over the whole composition and temperature range, or more experimental data are needed for the transport number. Figure 2 shows that the binary diffusion coefficient for the large cation is larger for Li/Na than for Li/K. This agrees with the internal mobilities in (5)(6), where the mobility of Na+ is larger than that of K+ at the same composition and temperature. The same figure also shows that the binary diffusion coefficient between the two different cations is smaller in Li/Na than in Li/K. The reason for this might be that Na+ is smaller than K+ and interacts more strongly with Li+. The difference in the binary diffusion coefficient between Li+ and CO3^2- for the two melts cannot be explained, except that it probably originates from the input experimental data.
Planar electrodes with bulk electrolyte
The modelling was simplified by using planar electrodes in order to see how different current densities affect the electrolyte composition. The model was tested on a case with two planar electrodes with bulk electrolyte in between. The left boundary (x = 0) is the anode side in Figure 3. The initial salt composition was 62 mol% Li2CO3 for the Li/K melt and 52 mol% Li2CO3 for the Li/Na melt.
As can be seen in Figure 3, the composition changes are larger for Li/Na. This is because the value of D is smaller, which affects the composition gradient more even though the t·i/F values are smaller for Li/Na.
Single cell
The base case model described in a previous section (Calculation of concentration changes) was used in these simulations. In the model, the distribution of electrolyte between the electrodes is not considered to change with the current density. To simplify the calculations, the conductivity was not considered to change with composition, which would otherwise make the current distribution more uneven. Figure 4 shows that there will be a large decrease of Li+ in the cathode for both melts, which agrees with experimental results. The effect is larger for the cathode than for the anode due to the low filling level of the cathode. Comparing the results in Figure 4 with those presented in (2)(3)(4), the effect for Li/K is smaller than the experimental one, but this is probably because the experimental cell had thinner electrodes. The composition changes according to the model are much larger than those seen in the experimental results for Li/Na (2)(3)(4). To compare the two studies better, the filling level and pore size distribution of the experiment have to be known and the correct thickness has to be used in the model.
Time-dependent composition changes. With modelling, the relaxation/evolution of concentration gradients with time can be investigated. This knowledge can be used in two ways. Firstly, it can be used to determine how fast a cooling procedure has to be in order to capture concentration profiles. Secondly, it can be used to determine the characteristic frequency for impedance measurements or other time-dependent electrochemical techniques. The same single-cell model is used, except that time is introduced as a variable. The results in Figure 5 show that it takes approximately an hour for the concentration profile to reach steady state. The change of composition when the current is interrupted is faster, but not large enough to explain the difference between this model and the experimental results. These results indicate that the cooling time is important if accurate measurements are desired.
Filling level effects. The result of the model depends largely on physical conditions, for example how the electrolyte is distributed between the electrodes, the filling level, and the porosities of the different components. Figure 6. Mole fraction of Li2CO3 salt at a current density of 1500 A/m2 for Li/K under different parameter sets: (-) anode 40% porosity and 60% filling level, matrix 60% porosity, cathode 70% porosity and 30% filling level; (--) anode filling level 30% and cathode filling level 10%; (···) anode filling level 80% and cathode filling level 60%.
As can be seen in Figure 6, the filling level and porosity greatly influence the distribution of ions between the anode and cathode. This model, together with a model for vaporisation and corrosion, could be a valuable tool for determining the effect of electrolyte composition changes during long-term operation. As Figure 6 demonstrates, a depletion of electrolyte would cause a larger cation separation. It is therefore important to consider the effects of even small composition changes during long-term operation.
The advantage of the Stefan-Maxwell approach to mass transfer is that it combines the conductivity, diffusion and transport number through the binary diffusion coefficients, rather than treating the three phenomena as separate entities. To make more accurate predictions of the composition changes under current load, more experimental data for the transport number are needed. These data should be obtained for a melt with a composition and temperature close to the operating conditions.
CONCLUSIONS
The segregation of cations in molten alkali carbonate melts for the MCFC can be modelled using the Stefan-Maxwell approach to mass transfer. The difficulty in determining the Stefan-Maxwell diffusivities was due to problems with the experimental data. The calculated concentration changes in porous electrodes compared well with the experimental results for Li/K; however, the same conclusion cannot be drawn for Li/Na. The separation is approximately 5% for both Li/K and Li/Na for a typical single cell. It takes approximately one hour to reach steady state, which is important to consider when performing time-dependent electrochemical experiments. The porosity of the different components, together with the filling degree, largely influences the composition changes in the cell.
ACKNOWLEDGEMENTS
The financial support from the Swedish Energy Agency is gratefully acknowledged for this study. The Electrochemical Society is acknowledged for providing a travel grant and paying the meeting fee for this conference. | 2019-04-03T13:15:34.817Z | 2004-01-01T00:00:00.000 | {
"year": 2004,
"sha1": "601182062d4f9919ec4aaae8fefa0a8735f437f9",
"oa_license": null,
"oa_url": "https://doi.org/10.1149/200424.0151pv",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "5f6037ac6103749395fc7d7e3ae688521aec7a70",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
256213503 | pes2o/s2orc | v3-fos-license | In Vitro Measurement and Mathematical Modeling of Thermally-Induced Injury in Pancreatic Cancer Cells
Simple Summary Thermal therapies, the controlled heating of tissue, are a clinically accepted modality for the treatment of localized cancers and are under investigation as part of treatment strategies for pancreatic cancer. The bioeffects of heating vary as a function of the intensity and duration of heating and can vary across tissue types. We report on the measurement of thermal injury parameters for pancreatic cancer cell lines in vitro and assess their suitability for predicting changes in cell viability following heating. The results of this study may contribute to research investigating the use of thermal therapies as part of pancreatic cancer treatment strategies, the development of modeling tools for predictive treatment planning of thermal therapies, and understanding the effects of other energy-based interventions that may involve perturbation of tissue temperature. Abstract Thermal therapies are under investigation as part of multi-modality strategies for the treatment of pancreatic cancer. In the present study, we determined the kinetics of thermal injury to pancreatic cancer cells in vitro and evaluated predictive models for thermal injury. Cell viability was measured in two murine pancreatic cancer cell lines (KPC, Pan02) and a normal fibroblast (STO) cell line following in vitro heating in the range 42.5–50 °C for 3–60 min. Based on measured viability data, the kinetic parameters of thermal injury were used to predict the extent of heat-induced damage. Of the three thermal injury models considered in this study, the Arrhenius model with time delay provided the most accurate prediction (root mean square error = 8.48%) for all cell lines. Pan02 and STO cells were the most resistant and susceptible to hyperthermia treatments, respectively. The presented data may contribute to studies investigating the use of thermal therapies as part of pancreatic cancer treatment strategies and inform the design of treatment planning strategies.
Introduction
Pancreatic cancer is the fourth leading cause of cancer-related death in the United States and accounts for 8% of cancer deaths, with a low five-year survival rate of approximately 10% [1,2]. Surgical resection remains the most effective treatment strategy; however, only approximately 20% of patients are surgical candidates at the time of diagnosis [2,3]. For patients with unresectable pancreatic cancer, the use of chemotherapy alone or in conjunction with surgery remains the gold standard, although long-term survival rates are poor and the regimen comes with risks for major complications in patients with advanced disease [4,5]. Thermal ablation [6,7] and other non-ionizing energy-based local interventions such as irreversible electroporation are under investigation as potential adjuvant or stand-alone treatment options for patients with unresectable pancreatic adenocarcinoma [8]. In addition to ablative effects, heating in the mild hyperthermia range (39-43 °C) may offer a means for thermally-triggered drug delivery [9][10][11] or serve as an adjuvant to ionizing radiation and/or chemotherapy [12][13][14][15].
The bioeffects induced by heating are a function of the time-temperature profile during heating and may vary across cell types. Mathematical models relating changes in cell viability, stress protein expression, and other biomarkers to the time-temperature history during heating have been reported [16]. Cell/tissue-specific parameters for these models can be determined from experiments on cells in vitro [17][18][19]. One of the most widely used models is the Arrhenius thermal injury model [18], which describes cell death following heating as a first-order exponential relationship between temperature and duration of heating, and has been applied to assess thermal damage in various cell types, including liver cancer cells [20], prostate tumor cells [21], and breast cancer cells [22]. The thermal isoeffective dose model, which relates an arbitrary transient temperature profile to equivalent minutes of heating at a reference temperature, typically taken to be 43 °C, is derived from the Arrhenius model [23]. Despite its wide usage, the standard Arrhenius model fails to represent thermally-induced injury or cell death at mild hyperthermic temperatures (39-43 °C) for several cell types, showing significant over-prediction of the initial "shoulder" region as explained by Pearce [18]. Augmenting the Arrhenius model with a time delay term has been proposed to account for the delayed cell death at low temperatures [24]. Other models for thermal injury have been proposed, including a two-state statistical thermodynamic model by Feng et al. [19] and a three-compartment reaction-based model by O'Neill et al. [25]. While thermal injury parameters for a range of cell types have been reported, there are few published data reporting on the viability of pancreatic cancer cells following heating. Identification of thermal injury parameters is important to inform the design of thermal therapy devices and systems, select treatment doses, and inform interpretation of experimental and clinical studies involving heat as a therapeutic modality.
The objective of the present study was to determine the kinetics of thermal injury to pancreatic cancer cells in vitro following thermal exposure to temperatures up to 50 °C and to use these data to evaluate predictive models for thermal injury. Given the central role of experimental murine models in pancreatic cancer research, we conducted studies on two murine pancreatic cancer cell lines (KPC and Pan02), as well as a normal murine fibroblast cell line (STO). The KPC murine model (KRAS/TP53 point mutation) [26] is an established genetically-engineered and clinically relevant model of pancreatic ductal adenocarcinoma that represents many histopathological features observed in human disease. The murine pancreatic adenocarcinoma cell line Pan02 [27,28], syngeneic to C57BL/6, is an established grade III model widely used for pre-clinical evaluation of single and combination therapies. Given the significance of the stroma in pancreatic tumors, we also evaluated the kinetics of thermal injury in STO cells. For each of these cell lines, monolayer cell cultures were heated in water baths to temperatures in the range 42.5-50 °C for 3-60 min, and cell viability following heating was assessed up to 24 h following hyperthermia and compared to a 37 °C control. The kinetics of thermal injury were estimated from the measured viability data. Finally, we comparatively assessed three mathematical models for predicting thermally-induced changes in cell viability based on the measured in vitro data.
Cell Culture
KPC and STO cells were cultured with DMEM medium (Gibco™ 11995065, Fisher Scientific, Hampton, NH, USA) with 10% fetal bovine serum (Corning™ 35015CV, Fisher Scientific) and 1% penicillin-streptomycin (Gibco™ 15140148). Pan02 cells were cultured with RPMI 1640 medium (Gibco™ 11875093, Fisher Scientific) supplemented with sodium pyruvate and 10% fetal bovine serum. Cultures were maintained in a 37 °C, 5% CO2 incubator in 75 cm2 phenolic culture flasks. In preparation for hyperthermia treatment, cells were seeded in n = 6 wells of 96-well culture plates at a density of ~30,000 cells/cm2 at a volume of 200 µL medium/well and maintained in a 37 °C, 5% CO2 incubator for 24 h to allow cells to reach log phase prior to hyperthermia.
In Vitro Hyperthermia to Monolayer Cell Cultures
To expose cells in culture to hyperthermia, sealed 96-well plates containing cells were immersed in temperature-controlled water baths (shown to be an effective method for accurate and uniform heating of cell culture samples) [29]. To assess temperatures during hyperthermia, transient temperature profiles were recorded using five T-type thermocouples embedded within distinct wells of a dummy plate that contained no cells while filled with 200 µL of water/well. The dummy well plate was immersed in water baths simultaneously with the cell-containing plate, thus providing a reasonable assessment of the temperatures within the cell-containing wells. Figure 1 illustrates the dummy plate design, including five thermocouples positioned and sealed within four corner wells and one central well.
To expose cells in culture to hyperthermia, sealed 96-well plates containing cells were immersed in temperature-controlled water baths (shown to be an effective method for accurate and uniform heating of cell culture samples) [29]. To assess temperatures during hyperthermia, transient temperature profiles were recorded using five T-type thermocouples embedded within distinct wells of a dummy plate that contained no cells while filled with 200 μL of water/well. The dummy well plate was immersed in water baths simultaneously with the cell-containing plate, thus providing a reasonable assessment of the temperatures within the cell-containing wells. Figure 1 illustrates the dummy plate design, including five thermocouples positioned and sealed within four corner wells and one central well. Since the time to reach the setpoint temperature can be rather slow, we employed a two-step approach. First, both the cell-containing and dummy well plates were immersed in a water bath set at an elevated temperature of ~80 °C. When the temperature recorded by thermocouples in the dummy plate reached within 0.2 °C of the target temperature (i.e., 42.5, 44, 46, or 50 °C), plates were immediately transferred to another pre-heated water bath that was set to the desired target hyperthermic temperature for a predetermined duration in the range of 3-60 min. A USB thermocouple data acquisition module (TC-08 OMEGA) was used to record the temperature data from the thermocouples embedded within the dummy plate. Following hyperthermia treatment, sealing films were removed and the 96-well culture plates were returned to a 37 °C incubator for subsequent 6 h and 24 h recovery of thermal injury. For each cell line, an additional plate containing cells was Since the time to reach the setpoint temperature can be rather slow, we employed a two-step approach. First, both the cell-containing and dummy well plates were immersed in a water bath set at an elevated temperature of~80 • C. When the temperature recorded by thermocouples in the dummy plate reached within 0.2 • C of the target temperature (i.e., 42.5, 44, 46, or 50 • C), plates were immediately transferred to another pre-heated water bath that was set to the desired target hyperthermic temperature for a predetermined duration in the range of 3-60 min. A USB thermocouple data acquisition module (TC-08 OMEGA) was used to record the temperature data from the thermocouples embedded within the dummy plate. Following hyperthermia treatment, sealing films were removed and the 96-well culture plates were returned to a 37 • C incubator for subsequent 6 h and 24 h recovery of thermal injury. For each cell line, an additional plate containing cells was also immersed in a 37 • C water bath for the experimental durations considered in this study, providing a no-hyperthermia control.
Cell Viability Evaluation
After an incubation period of 6 h or 24 h post-heating (shown to be effective evaluation periods for measuring cell viability [24]), the cell culture supernatant was discarded from each 96-well culture plate and viability was determined using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-2H-tetrazolium bromide (MTT) colorimetric assay [30], which is based on the reduction of a yellow tetrazolium salt to purple formazan crystals by metabolically active cells. The measured optical density for each time-temperature combination was normalized to the optical density measured for no-heat control plates immersed in a 37 °C water bath for the same duration. The normalized values thus represent the average concentration of viable cells across n = 6 wells following hyperthermia exposure for each experimental group.
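As a small worked example of this normalization (the optical densities below are made up for illustration):

```python
import numpy as np

# Normalized viability for one experimental group: mean OD of n = 6 treated
# wells divided by mean OD of the matched 37 C no-heat control wells.
od_treated = np.array([0.61, 0.58, 0.64, 0.60, 0.57, 0.62])
od_control = np.array([0.82, 0.85, 0.80, 0.84, 0.83, 0.81])
viability = od_treated.mean() / od_control.mean() * 100.0
print(f"normalized viability = {viability:.1f}%")
```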
Arrhenius Model of Thermal Injury
The Arrhenius cell injury model describes cell death as a first-order chemical reaction in which the source material (viable cells) is transformed into the product (non-viable cells). After identification of the rate parameters of the reaction, the Arrhenius model allows prediction of cell injury for arbitrary time-temperature profiles. Equations (1) and (2) describe the Arrhenius model:

\( \Omega(t) = \ln[C_0/C(t)] \)  (1)

\( \Omega(t) = A \int_0^t \exp(-E_a/(R\,T(\tau)))\,d\tau \)  (2)

where C_0 is the initial concentration of live cells prior to thermal exposure, C(t) is the concentration of live cells after t seconds of heating, Ω(t) is a positive number representing the extent of thermal damage at time t, A is the frequency factor (s−1), E_a is the activation energy (J/mol), R is the universal gas constant (8.31 J·K−1·mol−1), and T is the temperature (K). The value of Ω(t) can be cast as a probability, P, of thermally-induced injury. The parameters of the model, A and E_a, are cell-line specific and can be determined from experiments in which cells are exposed to isothermal heating. As a first step, the rate of decay in cell viability (k) can be determined from viability measurements following heating as a function of time at multiple temperatures [16] by fitting Equation (3) to the experimentally measured cell survival, S:

\( S(t) = C(t)/C_0 = \exp(-k\,t) \)  (3)
Then, using Equation (4), the natural logarithm of the rate constant, ln(k), is plotted against the reciprocal of temperature (1/T); E_a and A are then found from the slope and y-intercept of the fit, respectively:

\( \ln k = \ln A - E_a/(R\,T) \)  (4)
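A minimal numerical sketch of this two-step extraction, assuming isothermal survival data and using ordinary least squares, is shown below; the survival numbers are placeholders for illustration, not the measured KPC/Pan02/STO data.

```python
import numpy as np

R = 8.31  # J/(mol K), as in the text

# Hypothetical isothermal survival data: {T in K: [(t in s, survival fraction)]}
survival = {
    317.65: [(900, 0.95), (1800, 0.90), (3600, 0.80)],   # ~44.5 C
    323.15: [(300, 0.70), (600, 0.50), (1200, 0.25)],    # 50 C
}

# Step 1: fit S(t) = exp(-k t) at each temperature (Equation (3)):
# a zero-intercept least-squares line through ln S versus t gives k.
ks, invT = [], []
for T, points in survival.items():
    t = np.array([p[0] for p in points], dtype=float)
    lnS = np.log([p[1] for p in points])
    k = -np.sum(t * lnS) / np.sum(t * t)
    ks.append(k)
    invT.append(1.0 / T)

# Step 2: Equation (4): ln k = ln A - Ea/(R T); linear fit of ln k vs 1/T.
slope, intercept = np.polyfit(invT, np.log(ks), 1)
Ea = -slope * R          # activation energy from the slope
A = np.exp(intercept)    # frequency factor from the intercept
print(f"Ea = {Ea:.3e} J/mol, A = {A:.3e} 1/s")
```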
Arrhenius Thermal Injury Model with Time Delay
As described by Feng et al. [19] and Pearce et al. [24], some cell lines initially exhibit a significant shoulder region in which cell viability remains high until a threshold lethal thermal dose is attained. The conventional Arrhenius model may not accurately represent changes in cell viability for these cells. To address this limitation of the standard Arrhenius thermal injury model, an improved Arrhenius model was presented by Pearce et al. [24] in which a temperature-dependent time delay (t_d), described by a linear fit with slope m and intercept b, compensates for the measured viability data within the shoulder region (Equation (5)), where t represents the total heat exposure duration, t_d denotes the time delay in seconds, T is the temperature in Kelvin, and m and b are the slope and intercept of the fit, respectively. The ordinary Arrhenius injury process is initiated only when t > t_d and is calculated from that point forward.
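The sketch below illustrates one way to evaluate the delayed-onset damage integral for a sampled temperature history. The kinetic values and the delay are placeholders, and the user is assumed to supply t_d from their own (m, b) correlation, whose exact functional form is not reproduced here.

```python
import numpy as np

def damage_with_delay(t_s, T_K, A, Ea, t_delay, R=8.31):
    """Arrhenius damage Omega(t) where the injury integral is started only
    after the time delay t_d, per the description above. t_delay is assumed
    to come from the user's own t_d(T) correlation (slope m, intercept b)."""
    rate = A * np.exp(-Ea / (R * T_K))
    rate[t_s < t_delay] = 0.0                        # no damage before t_d
    omega = float(np.sum(rate[:-1] * np.diff(t_s)))  # rectangle-rule integral
    return omega, np.exp(-omega)                     # damage, predicted survival

# Example: constant 46 C (319.15 K) exposure for 30 min with placeholder
# kinetic values (not the fitted KPC/Pan02/STO parameters).
t = np.linspace(0.0, 1800.0, 1801)
T = np.full_like(t, 319.15)
omega, S = damage_with_delay(t, T, A=1.0e80, Ea=5.0e5, t_delay=300.0)
print(omega, S)
```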
Two-State Thermal Injury Model
Feng et al. [19] presented a two-state cell damage model for hyperthermia conditions, which was reported to be in good agreement with experimental data. In their study, a general two-state model was proposed to characterize the entire cell population as two distinct and measurable subpopulations, in which each cell is in one of two states, either viable (live) or damaged (dead). The resulting cell viability is expressed through a function Φ(τ, T), defined to be linear in the exposure time τ when the temperature T is fixed and K is constant. In their study, in vitro cell viability data from hyperthermia experiments on human PC3 prostate cancer cells and normal RWPE-1 cells were compared against the two-state damage model and used to determine the parameters of the function Φ(τ, T). The model requires three experimentally derived fit coefficients (α, β, and γ), which were estimated using a standard bilinear least-squares regression algorithm. Finally, the fractional cell survival at any time point can be calculated using Equation (9).
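Since the explicit form of Φ(τ, T) is not reproduced above, the sketch below adopts one possible reading consistent with the description: a logistic two-state survival S = 1/(1 + exp(Φ)) with Φ bilinear in τ and T, so that the coefficients (α, β, γ) can be estimated by linear least squares after a logit transform. Both the functional form and the data are assumptions for illustration, not Feng et al.'s exact expressions.

```python
import numpy as np

# Placeholder data rows: (tau in s, T in K, measured survival fraction).
obs = np.array([
    [300.0, 317.65, 0.95],
    [1800.0, 317.65, 0.80],
    [300.0, 323.15, 0.60],
    [1800.0, 323.15, 0.15],
])

tau, T, S = obs[:, 0], obs[:, 1], obs[:, 2]
# Linearise: Phi = ln(1/S - 1), then solve the linear least-squares problem
# Phi = alpha + beta*T + gamma*T*tau for (alpha, beta, gamma).
Phi = np.log(1.0 / S - 1.0)
X = np.column_stack([np.ones_like(T), T, T * tau])
coef, *_ = np.linalg.lstsq(X, Phi, rcond=None)
alpha, beta, gamma = coef

def survival(tau, T):
    # Assumed logistic two-state survival with bilinear Phi(tau, T).
    return 1.0 / (1.0 + np.exp(alpha + beta * T + gamma * T * tau))

print(survival(900.0, 320.15))
```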
Determination of Heat-Induced Thermal Dose (CEM 43)
The Sapareto-Dewey thermal isoeffective dose model compares the thermal damage accumulated after heating with an arbitrary time-temperature profile against t_43, the equivalent time needed to achieve the same level of damage when heated at 43 °C (CEM 43) [31,32]. t_43 can be calculated with Equation (10):

\( t_{43} = \sum_i R_{CEM}^{(43 - T_i)} \, \Delta t_i \)  (10)
where t_43 is the cumulative equivalent time (min) at 43 °C, T_i is the temperature in the i-th time interval Δt_i, and R_CEM is 0.5 when T_i > 43 °C and 0.25 when T_i ≤ 43 °C. In Equation (10), R_CEM represents the rate at which the time required to achieve a given thermal-damage isoeffect drops for each one-degree rise in temperature.
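A short sketch of Equation (10) for a sampled time-temperature profile follows; the ramp-and-hold profile in the example is illustrative, not one of the experimental exposures.

```python
import numpy as np

def cem43(t_min, T_celsius):
    """Cumulative equivalent minutes at 43 C (Equation (10)) for a sampled
    time-temperature profile; t_min in minutes, T_celsius in degrees C."""
    t = np.asarray(t_min, dtype=float)
    T = np.asarray(T_celsius, dtype=float)
    dt = np.diff(t)
    Tm = 0.5 * (T[:-1] + T[1:])               # interval mean temperature
    Rcem = np.where(Tm > 43.0, 0.5, 0.25)     # piecewise R_CEM as defined above
    return float(np.sum(Rcem ** (43.0 - Tm) * dt))

# Example: 10 min ramp from 37 to 46 C followed by 20 min hold at 46 C.
t = np.linspace(0.0, 30.0, 301)
T = np.where(t < 10.0, 37.0 + 0.9 * t, 46.0)
print(f"thermal dose = {cem43(t, T):.1f} CEM43")
```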
Model Assessment
The accuracy of the developed injury predictive models was assessed on the murine KPC pancreatic cancer cell line exposed to non-isothermal heating at temperatures in the range 47-51 °C. A coupled electromagnetic-bioheat transfer computational model simulating microwave thermal ablation (MWA, 50 W, 10 min with a 14 G water-cooled applicator), as described in our prior studies [33], was used to identify time-temperature profiles at the periphery of the ablation zone. Detailed information regarding the heat transfer model, model parameters, and the numerical method is provided in the Supplementary Materials.
Finally, in vitro hyperthermia experiments were performed to expose KPC cells to temperature profiles similar to those at the periphery of the ablation zone. The measured cell viability was compared against model predictions.
Temperature Profiles in Dummy Well Plates
Figure 2 shows the measured temperature profile inside five wells of the dummy 96-well plate during a 46 °C hyperthermia exposure. Parameters used to quantitatively assess the heating profiles are also illustrated, including ramp time, duration of the steady-state phase, target error, homogeneity of heating, and duration of the cool-down phase. Table 1 lists the mean values and ranges of each of these parameters across heating experiments for target setpoint temperatures of 42.5, 44, 46, and 50 °C for all three cell lines considered in this study. Accuracy represents the error between the target temperature and the mean temperature recorded by the five sealed thermocouples during the constant-heating phase; ramp time represents the time required to reach the target temperature from physiological temperature (37 °C); and the cooling phase represents the time taken to return to physiological temperature from the target temperature following hyperthermia exposure. Figure 3 shows the cell viability measured using the MTT assay for all three cell lines at 6 h and 24 h post hyperthermia.
Arrhenius Thermal Damage Models
Figure 4a illustrates the relationship between ln(k) and 1/T for data measured at 24 h post-heating. The thermal damage kinetic coefficients A and E_a are determined from the intercept and slope, respectively, of the best-fit line to the data. Table 2 lists the thermal damage kinetic parameters E_a and A and the time delay parameters (m, b) calculated from the viability data measured 24 h post hyperthermia for each of the three cell types. The coefficient of determination (R²) for the calculated kinetic parameters was in the range of 95-99%, indicating the suitability of the Arrhenius injury models for predicting the extent of heat-induced cell injury. The measured and calculated damage are compared in Figure 5a,b for the simple Arrhenius model and the Arrhenius model with time delay, respectively. The root mean square error (RMSE) for the simple Arrhenius model and for the Arrhenius model with time delay was 12.24% and 8.48%, respectively.
Two-State Model of Thermal Damage
The measured and calculated damage are compared in Figure 5c. Cell viability data across all considered thermal doses in all three cell lines were investigated. The RMSE for the 6 h and 24 h recovery was 31.66% and 51.22%, respectively. Table 3 lists the R_CEM values measured in the present study and compares them against R_CEM values reported for other cell types in the literature.
Figure 6b illustrates the mean temperature recorded by the five sealed thermocouples in the 96-well dummy plate, together with the clinically relevant simulated temperature-time history. Figure 6c shows the comparison between the measured and calculated percentages of cell survival following the hyperthermia exposures, obtained by MTT assay and by the developed predictive models, respectively.
Discussion
Knowledge of thermal sensitivity of representative target cells is informative for the design and optimization of thermal therapy protocols (i.e., temperature and heating time). Prior studies have investigated the kinetics of thermal injury of hepatocellular carcinoma, prostate cancer and renal carcinoma cells at temperatures in the range of 37-63 °C using different heating modalities [21,[36][37][38]. However, there have been few reports of the kinetics of thermal injury to pancreatic cancer cells.
Overall, we have shown that exposure to heat stress decreased cell viability in pancreatic cancer cells (i.e., KPC and Pan02), in agreement with other in vitro and in vivo studies examining hyperthermia's effectiveness as a potential therapeutic modality for treating pancreatic cancer [39][40][41][42][43][44][45]. As expected, the rate of decline in cell viability was more rapid as the applied temperature increased. KPC cells exhibited slightly greater resistance to thermal stress than the STO cells, indicated by their higher cell viabilities following heat treatment, while Pan02 cells showed the most resistance to heat treatment.
We quantified cell viability at 6 h and 24 h post heat exposure to visualize the progression of heat-induced cell death over time. For all three cell lines, viability continued to decrease markedly at 24 h post exposure for high-temperature exposures (i.e., T = 50 °C, t > 5 min; T = 46 °C, t > 20 min) compared with the viability at 6 h post exposure. Baumann and colleagues [46] also exposed pancreatic cancer cells (PANC-1 and BxPC-3) to 45-50 °C for 5 min and measured viability at different time points up to 7 days post exposure. Their results were similar to ours, showing that near-complete cell death can occur following exposure to high temperatures (e.g., 50 °C), where complete cell death was not observed immediately post treatment but instead took longer to fully manifest in vitro. Ludwig et al. [44] also assessed the effect of hyperthermia on BxPC-3 human pancreatic cancer cells and showed that exposure to hyperthermia at 41 °C and 43 °C for 1 h had almost no impact on cell viability, which was also reflected in our measured in vitro results.
Lage et al. [47] investigated the thermal sensitivity of human gastric (EPG85-257) and pancreatic carcinoma (EPP85-181) cell lines using water-bath hyperthermia and calculated the Arrhenius injury model parameters; however, in their study the hyperthermia temperature was limited to 45 °C. In the present study, the optimized values of the activation energy (E_a) and frequency factor (A) for murine pancreatic cancer cells (i.e., KPC and Pan02) were calculated under near-isothermal heating conditions. The obtained kinetic coefficients were aligned with the Wright line of Arrhenius coefficients described by Pearce [48]. The range of the coefficient of determination in this study (0.95 < R² < 0.98, see Figure 4) for temperatures between 42.5-50 °C is similar to values derived from other hyperthermia studies (R² > 0.95 for T > 40 °C) [16,49], indicating the suitability of the Arrhenius model for predicting thermally-induced injury in pancreatic cancer cell lines in vitro.
Similar to O'Neill [25] and Feng [19], we observed an initial shoulder region where cell viability was not affected at low temperatures and short durations; hence, the improved Arrhenius model was used to provide a better fit, since traditional Arrhenius parameters (activation energy and frequency factor) calculated from low-temperature, long-duration exposures may not accurately predict cell death resulting from high-temperature, short-duration exposures [16,38]. The calculated RMSE values for all three cell lines improved considerably when switching from the traditional Arrhenius model to the improved Arrhenius model. The thermal dose was also calculated using R_CEM values derived from our temperature-dependent cell survival data. The calculated R_CEM values for KPC and Pan02 cells were 0.588 and 0.596, respectively. This is in agreement with the results presented by Mouratidis et al. [50], where the R_CEM value for human colon cancer cell lines was calculated to be in the range of 0.5-0.53 at temperatures above 43 °C.
We also assessed the suitability of the two-state injury model of Feng et al. [19] for predicting changes in viability following heating of pancreatic cancer cells. The results presented by Feng reasonably accurately captured the shoulder region of the cell viability curves in their study of PC3 cell lines. However, in our study a rather poor fit between the two-state model and our collected in vitro data was observed, as illustrated in Figure 5. This might be due to the limited number of temperatures considered in this study, as the model relies on additional measured data at longer heating times where viability tends to drop dramatically. Moreover, Feng et al. point out that the Arrhenius fit may actually provide a better estimation of cell viability at higher temperatures, where the shoulder region is not relevant. As previously described by Pearce [18], inclusion of more thermodynamic states may improve the accuracy of the two-state model.
Our study was limited to monolayer cell cultures, which may not accurately represent tumor cell response to heating in vivo. Previous in vivo studies have demonstrated a lower thermal threshold for the destruction of tumors compared to cells cultured in vitro under similar thermal exposure profiles [21,51,52]. The thermal damage model coefficients reported in this manuscript may be used to guide the selection of time-temperature profiles anticipated to yield a specified level of thermal damage in pancreatic tumors, although caution should be taken when applying the results to the clinical scenario, given the use of murine cell lines in the present study. Notably, this study highlighted the variable susceptibility of different cell lines to hyperthermic exposure. Pancreatic tumors exhibit relatively high inter-tumor and intra-tumor heterogeneity [53]; understanding the differential susceptibility of various cell populations to thermal exposure can inform prediction of the range of thermal damage levels anticipated for a given time-temperature profile delivered in the clinical scenario. Furthermore, in the clinical setting, thermal profiles are likely to vary across the targeted tumor due to the constraints of practical heating technology. Given time-temperature profiles observed during heating, which can be measured with MRI or other thermometry techniques, quantitative analysis of thermal damage profiles can be performed using the reported thermal damage coefficients. Such analyses, coupled with post-treatment imaging of the targeted tumors, can provide a means to assess and interpret treatment response [54].
Conclusions
We measured the extent of thermal injury in murine pancreatic cancer cell lines after in vitro exposure to temperatures in the range of 42.5-50 °C and derived thermal injury kinetic model parameters. Our results suggest that the improved Arrhenius model incorporating a time delay [24] to address the shoulder region is most suitable for use in mild hyperthermia therapies of up to 60 min of heating. Finally, the accuracy of the developed injury predictive models was experimentally validated when cells were subjected to time-temperature profiles similar to those anticipated at the periphery of an ablation zone.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/cancers15030655/s1: details of the computational model used in this study for simulating the temperature distribution within pancreas tissue; Table S1: pancreas tissue biophysical properties employed in simulations (References [55][56][57][58][59][60][61] are cited in the Supplementary Materials).
Funding: This study was supported by NIH grant R01 EB028848 and by the KSU Johnson Cancer Research Center (Pancreatic Cancer CRCE).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest. | 2023-01-25T16:17:39.913Z | 2023-01-21T00:00:00.000 | {
"year": 2023,
"sha1": "068de9a776410ca459eac69d8380935224d76294",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6694/15/3/655/pdf?version=1674289336",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "31d8187be3e5583191f73b134021387302bcd169",
"s2fieldsofstudy": [
"Engineering",
"Biology"
],
"extfieldsofstudy": []
} |
18862312 | pes2o/s2orc | v3-fos-license | A Bioactivity-Based Method for Screening, Identification of Lipase Inhibitors, and Clarifying the Effects of Processing Time on Lipase Inhibitory Activity of Polygonum Multiflorum
Traditional Chinese medicine (TCM) has been used for the treatment of many complex diseases; however, the bioactive components are often undefined. In this study, a bioactivity-based method was developed and validated to screen lipase inhibitors and evaluate the effects of processing on the lipase inhibitory activity of TCM by ultrahigh performance liquid chromatography coupled with quadrupole time-of-flight mass spectrometry and a fraction collector (UHPLC/Q-TOF-MS-FC). The results showed that both Polygonum multiflorum and processed P. multiflorum extracts had inhibitory effects against lipase, with IC50 values of 38.84 μg/mL and 190.6 μg/mL, respectively. Stilbenes, phenolic acids, flavonoids, and anthraquinones were considered the potential lipase inhibitors. Eleven potential lipase inhibitors were simultaneously determined by UHPLC. Principal component analysis (PCA) was employed to explore the effects of processing time on the lipase inhibitory activity of P. multiflorum. Compared with conventional methods, the bioactivity-based method can quantitatively analyze the lipase inhibitory activity of individual constituents and provide the total lipase inhibitory activity of the samples. The results demonstrated that the activity-integrated UHPLC/Q-TOF-MS-FC method is an effective and powerful tool for screening and identifying lipase inhibitors from traditional Chinese medicines.
Introduction
Obesity is a major health problem worldwide and is often associated with pathological disorders [1,2]. Obesity is caused by excessive energy intake relative to low energy expenditure, leading to the accumulation of fat in the body, and fat contained in food is the main source of this excess energy. In order to reduce fat accumulation in the body, the digestion and absorption of fat after food intake should be limited [3]. At present, pancreatic lipase has been selected as a therapeutic target for the prevention of fat digestion, and synthetic pancreatic lipase inhibitors such as orlistat are widely used as effective anti-obesity drugs [3,4]. However, long-term usage of these drugs has various side effects, such as liver toxicity, abdominal distention, and borborygmus [5].
From this perspective, it is necessary to screen green and safe lipase inhibitors from natural products.
For this purpose, many plants including traditional Chinese medicines (TCMs) have been used for the treatment of obesity [6][7][8][9]. Polygonum multiflorum (Heshouwu in Chinese), obtained from the root of P. multiflorum Thunb., is an example of a TCM used over centuries for the treatment of various kinds of diseases in China [10]. The main components in P. multiflorum are stilbenes, phenolic acids, flavonoids, and anthraquinones [11][12][13]. Phenolic compounds, such as gallic acid and catechin, have shown antioxidant activity in vivo in previous reports [14,15]. Moreover, anthraquinones have anti-inflammatory, hemostatic, laxative, and antibacterial activities [16]. Specifically, stilbenes are known for their effects in treating neurodegenerative diseases, such as Alzheimer's disease and Parkinson's disease [17][18][19], and they are the active components contributing to the pharmacological effects of P. multiflorum. There are reports on the lipid-lowering activity of P. multiflorum for the treatment of hyperlipidemia in animal and cell experiments [20][21][22]. However, the lipid-regulating mechanisms have still not been clearly elucidated. Therefore, lipase was selected as a key target enzyme for screening inhibitors and elucidating the lipid-regulating mechanisms of P. multiflorum.
It is a great challenge to separate, screen, and identify lipase inhibitors from TCMs due to the complexity and variability of their components. In addition, the traditional screening approach of isolating, purifying, and evaluating lipase inhibitors using animal and cell models is time-consuming and labor-intensive, making it difficult to screen lipase inhibitors from TCMs directly [23]. Lipase inhibitors may also be lost during the processes of isolation and purification. Thus, it is necessary to establish a rapid and simple method for obtaining relatively pure compounds for bioassays that can be applied directly to screen lipase inhibitors from TCMs. To overcome the disadvantages of conventional methods, HPLC coupled with bioassays has been used to screen bioactive components [24,25]. With the development of modern techniques, UHPLC-Q-TOF/MS has been widely used in the screening of components from TCMs, not only for its high peak capacity, high sensitivity, and resolution but also for the exact mass determination provided by the Q-TOF/MS [26,27]. Furthermore, the fraction collector (FC) is a useful tool for the rapid preparation of samples for testing the activity of chromatographic fractions.
In this study, a bioactivity-based UHPLC-Q-TOF/MS-FC method was applied to screen lipase inhibitors and to evaluate the effects of processing time on the lipase inhibitory activity of P. multiflorum. In this approach, UHPLC coupled with FC was used for the rapid preparation of samples for the bioassays. UHPLC-Q-TOF/MS was employed to provide chemical information on the lipase inhibitors from TCMs. Finally, the screened and identified lipase inhibitors were simultaneously determined by UHPLC-PDA to evaluate the effects of processing on the quality of P. multiflorum. The method can reflect the bioactivity of the whole extract and of individual components, structure-activity relationships, concentration-effect relationships, and the concentrations of individual compounds. This technique may help to discover lipase inhibitors rapidly and efficiently, help evaluate the effects of processing on the quality of TCMs, and benefit drug research and development.
Experimental
2.1. Plant Materials. P. multiflorum was purchased from the Anguo TCM market (Hebei, China) and authenticated by Professor Lin Ma (Tianjin University of Traditional Chinese Medicine). P. multiflorum was processed with black soybean decoction according to the Chinese Pharmacopoeia to obtain the processed P. multiflorum. The procedure was as follows: P. multiflorum was soaked in black bean decoction for 24 h (10 g of black beans extracted twice with 200 mL of water) and finally steamed for 36 h, after which the processed P. multiflorum was obtained. Then, 5.0 kg of processed P. multiflorum powder and of P. multiflorum powder were refluxed with 5 L of 95% ethanol and with 8 L of 60% ethanol for 2 h, respectively. The extracts were then combined, concentrated, and lyophilized. The extraction yield was 17.2% for P. multiflorum and 9.45% for the processed P. multiflorum. Finally, the plant materials were obtained.
Preparation of the Fractions.
When the P. multiflorum extract was injected into the UHPLC system, the fraction collector (BSZ-100, Shanghai Qingpu-Huxi Instrument, Shanghai, China) was used for fraction collection with the time interval set at 20 s. The fractions were then collected and evaporated to dryness under nitrogen gas, and the residues were reconstituted and diluted for the bioassays. Lipase was dissolved in deionized water and insoluble substances were removed by centrifugation at 14,000 rpm for 10 min. The final concentration of the enzyme solution was 1.0 mg/mL.
Preparation of Standard Solutions.
Gallic acid, catechin, epicatechin, polydatin, 2,3,5,4′-tetrahydroxystilbene-2-O-β-D-glucoside, resveratrol, emodin-8-O-glucoside, physcion-8-O-glucoside, rhein, emodin, and physcion were each prepared in methanol at a concentration of 1.0 mg/mL for quality control. Rhein, emodin, and physcion were dissolved in DMSO (dimethyl sulfoxide) at suitable concentrations. The reference standard solutions were diluted serially with 10% methanol for the bioassays, and the concentrations of DMSO in the diluted samples were less than 0.1% (v/v). Orlistat was used as the positive control for lipase inhibition; it was prepared in 10% methanol and diluted to a series of different concentrations.
UHPLC-Q-TOF-MS Analysis.
The components in the P. multiflorum extract were identified using an Agilent Q-TOF-MS system. An Agilent 6520 Q-TOF mass spectrometer (Agilent Corporation, Santa Clara, CA, USA) coupled to an Agilent 1290 HPLC via an ESI interface was used to obtain the chemical information. The mobile phases, flow rate, column temperature, and injection volume were the same as in the UHPLC analysis. The detection wavelengths were set at 254 nm for emodin and physcion and at 280 nm for the other components. The gradient elution was set as follows: 0-4 min, 3%-12% B; 4-8 min, 12%-15% B; 8-13 min, 15%-25% B; 13-16 min, 25%-50% B; and 16-20 min, 50%-80% B. The re-equilibration time after gradient elution was 5 min. The ESI-MS spectra were obtained in both positive and negative ion modes to provide complete information for compound identification. The Q-TOF-MS analysis conditions were set as follows: capillary voltage, 4500 V; fragmentor voltage, 175 V; skimmer voltage, 65 V; drying gas temperature, 350 °C; drying gas (N2) flow rate, 10 L/min; nebulizer gas pressure, 35 psig; and octopole RF, 750 V. The mass range was m/z 100-1000. The [M-H]− ions were selected as precursor ions and subjected to targeted MS/MS analysis.
Method Validation.
The method validation including linearity, limits of detection (LOD), limits of quantification (LOQ), repeatability, precision, stability, and recovery was performed on the basis of US Pharmacopeia recommendations and guidelines.
2.6.1. Linearity, Repeatability, LODs, and LOQs. The calibration curves were constructed from serial dilutions of the reference compounds by plotting the peak areas (y) versus the corresponding concentrations (x, μg/mL). The repeatability was evaluated with six independent sample solutions and expressed as the relative standard deviation (RSD). For the LODs and LOQs, the stock solutions of the reference compounds were diluted to the concentrations giving signal-to-noise (S/N) ratios of 3 and 10, respectively.
2.6.2. Precision, Stability, and Recovery. The intraday and interday variability were used to evaluate the precision. Intraday and interday precision were assessed with standard solutions at three different concentrations (low, medium, and high) within one day and over three consecutive days, respectively, and the variability was expressed as RSD. Stability was determined for the standard solutions at the three concentrations at time intervals of 0, 2, 4, 6, 8, 12, and 24 h at room temperature. The recovery was determined by spiking three different levels (80%, 100%, and 120%) of the eleven compounds into a known amount (0.05 g) of P. multiflorum; the mixed solutions were then extracted and analyzed by the above method. Finally, the recovery was calculated with the formula: recovery (%) = (found amount − original amount)/spiked amount × 100%.
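As a small worked example of the recovery formula (with made-up amounts):

```python
def recovery_percent(found_amount, original_amount, spiked_amount):
    """Recovery (%) = (found - original) / spiked * 100, as defined above."""
    return (found_amount - original_amount) / spiked_amount * 100.0

# Example with illustrative amounts (mg): the sample originally contains 0.50,
# is spiked with 0.40, and 0.89 is found after extraction.
print(f"{recovery_percent(0.89, 0.50, 0.40):.1f}%")  # -> 97.5%
```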
2.6.3. Repeatability and Recovery of Fraction Collections. The P. multiflorum extract was injected into the UHPLC system and then collected by the FC. Six batches (six collections were combined as one batch) were collected to evaluate the repeatability of the fraction collection method. Eleven known components were selected as markers to determine the yield of the collected fractions. The same peak could be divided into several fractions, which were combined and condensed to dryness under nitrogen gas. The recovery of each fraction was assessed as the ratio of the peak content in the reconstituted fraction to that in the P. multiflorum extract, which was used to evaluate losses during the fraction collection process.
2.6.4. Sample Analysis. The P. multiflorum extract and processed P. multiflorum extract (with different processing times) were analyzed by the developed method. The concentrations of the eleven components were determined.
2.6.5. Assay for Lipase Inhibitory Activity. The lipase assay was performed according to a previous method with slight modifications [28]. Test sample (25 μL) and lipase solution (25 μL) were mixed together; then, 50 μL of 4-methylumbelliferyl oleate was added to the mixture. After incubation at 25 °C for 20 min, 100 μL of sodium citrate (0.1 M, pH 4.2) was added to terminate the reaction.

As shown in Table 2, all the RSD values for intraday and interday precision were less than 5% and 4.91%, respectively, and the intraday and interday accuracies were in the ranges of 94.2%-106% and 93.7%-107%. These results indicated that the method was sufficiently precise for the quantitative analysis and fraction collection of the P. multiflorum extract. The RSDs for the stability of the analytes were less than 4.98% and the accuracy was in the range of 93.7% to 106%, demonstrating that the sample solutions were stable within 24 h at room temperature. The RSD values of the recoveries at the three different concentrations were less than 4.49% and the recoveries were in the range of 95.3% to 105% (Table 3), which illustrated that the extraction method was of high accuracy.
Repeatability and Recovery of Fraction Collections.
The RSDs for the repeatability of the enriched fractions were no more than 3.97% (Table 1), demonstrating that the fraction collection method was reproducible for rapid sample preparation. The recoveries of the 11 fractions were higher than 71.4%, illustrating that the loss during the enrichment process was acceptable and that the process was reproducible (data not shown).
Lipase Inhibitory Activity of P. multiflorum Extract.
The lipase inhibitory activity of the P. multiflorum and processed P. multiflorum extracts was investigated at concentrations of 4-5000 μg/mL. The results showed that both extracts inhibited lipase, with IC50 values of 38.84 μg/mL for P. multiflorum and 190.6 μg/mL for processed P. multiflorum, indicating that the activity of P. multiflorum was better than that of processed P. multiflorum. The contents of 11 typical constituents were determined by UPLC-PDA. The contents of catechin, epicatechin, polydatin, 2,3,5,4′-tetrahydroxystilbene-2-O-β-D-glucoside, resveratrol, physcion-8-O-glucoside, and emodin-8-O-glucoside were higher in P. multiflorum than in processed P. multiflorum, whereas the contents of gallic acid, rhein, emodin, and physcion were lower. It was concluded that the better activity of the P. multiflorum extract is related to the contents of some lipase inhibitors. Thus, it was necessary to screen the potential lipase inhibitors, and the P. multiflorum extract was selected for bioactive fraction collection by UHPLC-FC.
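An IC50 such as the 38.84 μg/mL quoted above is typically obtained by fitting a dose-response curve to inhibition measurements. The sketch below assumes a four-parameter logistic model and invented data points; the text does not state which curve model was used, so this is illustrative only.

```python
# Sketch of IC50 estimation from inhibition-vs-concentration data
# (hypothetical values; the 4PL model is an assumption).
import numpy as np
from scipy.optimize import curve_fit

def four_pl(c, bottom, top, ic50, hill):
    # Rising logistic: inhibition climbs from `bottom` toward `top`
    return bottom + (top - bottom) / (1 + (ic50 / c) ** hill)

conc = np.array([4., 20., 100., 500., 1000., 5000.])   # ug/mL
inhib = np.array([8., 25., 60., 82., 90., 96.])        # % inhibition

popt, _ = curve_fit(four_pl, conc, inhib,
                    p0=[5., 95., 100., 1.],
                    bounds=(0., [50., 110., 5000., 10.]))
print(f"IC50 ~ {popt[2]:.1f} ug/mL")
```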
Lipase Inhibitory Activity of the Collected Fractions.
The extract was injected into the UHPLC for separation of the components, and all fractions were collected for the bioassays. The lipase inhibitory effect was expressed as the inhibition ratio (%), calculated by the above-mentioned formula. The time interval of the fraction collector was set at 20 s according to the resolution and peak shapes in the chromatogram, so that the relative purity of the fractions was sufficient for the bioassays. Fractions from the same peak were combined. From Figure 1, we could observe that the lipase inhibitory activity of the fraction samples enriched 40 and 60 times showed a nonlinear dose-dependent relationship: as the number of enrichments increased, the bioactivity of some fractions increased while that of others did not. When the P. multiflorum extract was enriched 60 times, the lipase inhibitory activity of some fractions exceeded 50%. As can be seen from Figure 1, the fractions of peaks 5, 6, 7, 8, 9, 10, and 11 showed strong inhibition of lipase, while peaks 1, 2, 3, and 4 showed relatively weak inhibition. These fractions, containing 20 bioactive components, were screened directly and might be considered the antilipase active components of the P. multiflorum extract. As a result, the established method could be used to screen TCMs and their bioactive components for potential lipase inhibition.
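The inhibition-ratio formula is referenced but not reproduced in this excerpt; the sketch below uses the conventional definition for a fluorometric assay (sample and control fluorescence each corrected by their blanks), with hypothetical readings.

```python
# Assumed (conventional) inhibition-ratio formula for the fluorometric
# lipase assay; fluorescence readings are hypothetical.
def inhibition_ratio(f_sample, f_sample_blank, f_control, f_control_blank):
    """Percent inhibition relative to an uninhibited control."""
    return (1 - (f_sample - f_sample_blank)
               / (f_control - f_control_blank)) * 100

# Example: one 20-s fraction from the 60x-enriched extract
print(f"{inhibition_ratio(420., 35., 980., 30.):.1f}% inhibition")
```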
Identification of the Bioactive Components.
The identification of the components in the P. multiflorum extract was carried out by UHPLC-Q-TOF-MS. The 70% methanol extract of P. multiflorum was used to obtain the total ion chromatogram (TIC), and MS and MS/MS fragment-ion studies were performed. As can be seen from Table 4, 11 compounds were identified or tentatively characterized according to previous reports, comprising 1 phenolic acid, 2 flavonoids, 3 stilbenes, and 5 anthraquinones. Peaks 1, 2, and 3 were unambiguously identified as gallic acid, catechin, and epicatechin, respectively, by comparing their mass spectrometric behavior and retention times with the reference standards; peak 6 was identified as resveratrol [31]. Emodin-8-O-glucoside (peak 7) and physcion-8-O-glucoside (peak 8) were identified with certainty by comparing their characteristic mass spectra with the reference standards. The negative-mode MS spectrum of peak 9 showed a parent ion [M−H]− at m/z 283.1, and a fragment ion at m/z 239.0 was observed in the MS/MS spectrum; this peak was tentatively characterized as rhein according to a previous report [32]. The characteristic fragment ions at m/z 283.1, 269.0, and 240.0, the elemental compositions, and the characteristic fragmentation behaviors could be employed to identify the anthraquinone derivatives. Two components, emodin (peak 10) and physcion (peak 11), were unambiguously identified by comparing their mass spectra and retention times with the reference compounds.
As a result, peaks 1-11 were identified as gallic acid, catechin, epicatechin, polydatin, 2,3,5,4′-tetrahydroxystilbene-2-O-β-D-glucoside, resveratrol, emodin-8-O-glucoside, physcion-8-O-glucoside, rhein, emodin, and physcion, respectively. As a sensitive and reliable approach, the integrated MS identification and activity screening method could be used for screening bioactive components and identifying components from natural products. The chemical information on the P. multiflorum extract also provided helpful information for the activity study and could be used for compound identification in other similar TCMs.
The Contribution of Lipase Inhibitory Activity. The inhibitory effect of the positive control orlistat against lipase was determined, and the IC50 value was calculated with the software GraphPad Prism 5. The IC50 of orlistat was 0.167 μg/mL and was defined as one potency unit; the potencies of the other orlistat concentrations were calculated from this unit. The potency calibration curve was constructed by plotting the potency (y) against the corresponding inhibition ratio (x), giving y = 0.0033x³ − 0.5271x² + 28.111x − 497.27 (R² = 0.9956). The potencies of the enriched fractions were then determined from this curve, so that the lipase inhibition of each identified fraction was expressed in potency units of orlistat-equivalent antilipase capacity according to the standard potency curve. The contribution of each fraction was calculated by the formula: contribution (%) = potency of individual fraction/total lipase inhibitory potency × 100%. The results showed that 2,3,5,4′-tetrahydroxystilbene-2-O-β-D-glucoside played the most important role in the lipase inhibitory effect, contributing more than 70%, while the phenolic acid and flavonoids contributed far less.
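The potency and contribution arithmetic can be reproduced directly from the reported cubic calibration curve; in the sketch below the cubic coefficients are taken from the text, while the fraction inhibition ratios are hypothetical.

```python
# Potency/contribution arithmetic from the orlistat calibration curve
# reported above; the fraction inhibition ratios are invented.
def potency(inhibition_pct):
    """Orlistat-equivalent potency units from % inhibition (paper's cubic)."""
    x = inhibition_pct
    return 0.0033 * x**3 - 0.5271 * x**2 + 28.111 * x - 497.27

fractions = {"TSG (peak 5)": 85.0, "emodin (peak 10)": 62.0,
             "physcion (peak 11)": 55.0}                     # % inhibition
units = {name: potency(x) for name, x in fractions.items()}
total = sum(units.values())

# contribution (%) = potency of individual fraction / total potency * 100
for name, u in units.items():
    print(f"{name}: {100 * u / total:.1f}% of total lipase inhibition")
```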
Confirmation of the Bioactivity of the Bioactive Components.
In order to validate the results obtained with the established method, bioassays of the reference standards of the bioactive components (purity > 98%) from the P. multiflorum extract were carried out in parallel. Eight serially diluted concentrations of each bioactive component were assessed for the lipase inhibitory effect. The effective concentration at which lipase was inhibited by 50% was defined as the IC50 of the component against lipase, and the inhibitory effect of the compounds was expressed as the inhibition ratio (%). The results are listed in Table 4. Except for gallic acid, catechin, and epicatechin, the lipase inhibition ratios of the compounds were above 50% at a concentration of 1.0 mg/mL, which was consistent with the inhibitory activity of the bioactive fractions. The IC50 values of the bioactive components with lipase inhibition ratios above 50% were calculated.

In order to clarify the effect of processing on lipase inhibitory activity, the effect of different processing times on the contents of the lipase inhibitors was investigated. The contents of the 11 compounds were determined simultaneously for comprehensive quality control of the P. multiflorum extract; the chromatogram of the quantitative analysis is illustrated in Figure 3. After P. multiflorum was processed, the contents of the phenolic acid (gallic acid) and flavonoids (catechin and epicatechin) were enhanced, while the contents of the stilbenes (polydatin, 2,3,5,4′-tetrahydroxystilbene-2-O-β-D-glucoside, and resveratrol) were reduced. Among the anthraquinones, the contents of physcion, rhein, and emodin were enhanced and those of physcion-8-O-glucoside and emodin-8-O-glucoside were depressed. The contents in the P. multiflorum extracts with different processing times were also determined, and the results are shown in Table 5. As can be seen from Table 5, the contents of gallic acid, catechin, and epicatechin showed no regular changes with processing time, while the contents of polydatin, 2,3,5,4′-tetrahydroxystilbene-2-O-β-D-glucoside, and resveratrol decreased; the reason is that polydatin, 2,3,5,4′-tetrahydroxystilbene-2-O-β-D-glucoside, and resveratrol lose glycosides during the processing procedure. The contents of physcion, rhein, and emodin increased with processing time up to 32 h, after which they showed no regular changes. The concentrations of physcion-8-O-glucoside and emodin-8-O-glucoside presented a completely opposite tendency, because physcion-8-O-glucoside and emodin-8-O-glucoside are the glucosides of physcion and emodin, respectively. Before 32 h, physcion-8-O-glucoside and emodin-8-O-glucoside decreased while physcion and emodin increased; after 32 h, however, physcion and emodin decreased, which might be related to other chemical reactions occurring during the process. The reason for this needs to be studied in further work. As a result, processing affects the concentrations of the bioactive components.

In order to understand the effect of processing on P. multiflorum more clearly, principal component analysis (PCA) was employed to explore the effects of processing on the quality of P. multiflorum. PC1 and PC2 together accounted for more than 85.33% of the total variance, which was sufficient to describe the variability. The score plot generated from the comparison of the two PCs is illustrated in Figure 3. The 13 samples with different processing times were classified into 4 groups: unprocessed P. multiflorum and P. multiflorum processed for 4 h each formed a separate group.
Samples processed for 8 to 16 h were classified into one group, and those processed for 20 to 32 h were clustered into another. On this basis, the quality control of P. multiflorum becomes more comprehensive and could serve as a standard for the herbal medicine on the market.
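A sketch of the PCA step is shown below; the 13 × 11 content matrix is randomly generated here as a stand-in for the quantified constituent contents, and autoscaling is assumed as the preprocessing step since the text does not specify it.

```python
# Sketch of the PCA grouping of processing-time samples.  The content
# matrix is invented; in the paper it is the 11 quantified constituents
# across the 13 processing-time samples.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(13, 11))               # 13 samples x 11 constituents

X_std = StandardScaler().fit_transform(X)   # autoscale each constituent
pca = PCA(n_components=2)
scores = pca.fit_transform(X_std)

explained = pca.explained_variance_ratio_.sum() * 100
print(f"PC1 + PC2 explain {explained:.1f}% of the variance")
print(scores[:3])                           # score-plot coordinates
```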
Conclusion
In this study, a bioactivity-based UHPLC-Q-TOF-MS-FC method was developed and validated to screen and identify lipase inhibitors from P. multiflorum. The active components were used to clarify the effects of processing on the quality of P. multiflorum. The method can provide the total activity of the extract, the activity of the fractions and individual components, and the contribution of the active components in the herbal medicine. Compared with conventional methods, the developed method was more rapid, effective, and comprehensive for screening and identifying the lipase inhibitors in TCMs. As a result, the stilbenes and anthraquinones in P. multiflorum were found to have potential lipase inhibitory effects and could serve as safe lipase inhibitors for the treatment of obesity. These results give scientific support for the clinical use of P. multiflorum and for the specification of its processing in the market. | 2016-05-08T11:08:35.174Z | 2016-01-26T00:00:00.000 | {
"year": 2016,
"sha1": "33764739c592a5bbded7d7f1266c0ddf61c79253",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/ecam/2016/5965067.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d6393496b8194cba7a524fbc37e2fb93b9dc0154",
"s2fieldsofstudy": [
"Chemistry",
"Agricultural And Food Sciences",
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
216524732 | pes2o/s2orc | v3-fos-license | The impact of access to clean water on health of the elderly: Evidence from China
Access to clean water is crucial for human health, and many studies have found that drinking water quality affects people's health. Our study aims to identify the association between drinking water and the health of the elderly in China. Through regression analysis, the results show a significant association between the water facility (access to tap water) and older people's depression levels, difficulties in performing daily activities, and self-assessed health. The harmful chemicals in unsanitary water pose health risks to the elderly. We suggest that the government invest more in drinking water facilities to improve both the physical and mental health of the elderly.
Introduction
Over the past decades, living conditions in China have improved greatly, but many environmental and health problems remain to be addressed. As an economy grows, there is an initial phase of environmental deterioration followed by a subsequent period of improvement, a pattern known as the Environmental Kuznets Curve [1]. At the beginning of industrial development, developing countries have only the ability to produce pollution-intensive goods. What is worse, these countries have no previous experience with industrial development, so they lack strict environmental policies. Via an "induced policy response," "as nations or regions experience greater prosperity, their citizens demand that more attention be paid to the non-economic aspects of their living conditions." With stringent environmental standards and laws, developed countries often have cleaner water and better air quality. The environmental conditions in different countries may not follow the curve exactly, but some developing countries can learn from the pattern observed in developed countries and pay attention to environmental problems at an early stage. Scientists have researched this field; however, due to the limited availability of data, only a few studies have examined the problem in China.
With a considerable increase in average income, Chinese people enjoy a better material life, but there has been no comparably large increase in life expectancy [2]. As shown in Figure 1, the Gross Domestic Product (GDP) of China has increased dramatically during the past 20 years, and its growth rate is now getting close to that of America. Accordingly, China is approaching the downward-sloping phase of the Environmental Kuznets Curve (EKC). Similar to the experience of developed countries, it is the policy response that dominates the improvement of the environment in China. Moreover, the tight relationship between environmental quality and economic growth shown by the EKC theory, along with other research findings, indicates that economic policies are crucial to making improvements in the environment, and even one or two countries' changes can help avoid a global disaster [3]. As shown in Figure 2, under the force of the newly revised Water Pollution Prevention and Control Law of the People's Republic of China, public-private partnership (PPP) investment in water and sanitation surged in 2007 and 2017, when the law was enforced and revised, respectively. While the effects of the law and regulations remain to be examined, a number of studies indicate that water pollution imposes significant health risks and that improving water quality brings tremendous benefits to people's health. There is robust evidence suggesting that clean water and advanced sewage treatment save infants' and young children's lives [4]. Drinking water that is not disinfected contains heavy metals and sometimes bacteria, so it increases people's health risks [5]. These previous works mainly utilize mortality rates and biomarkers as dependent variables; however, few of them take mental health and people's abilities for daily activities into consideration. From the perspective of government, water pollution is a more difficult problem to tackle than air pollution. In contrast to the top-down approach used to deal with air pollution issues, water pollution problems require a bottom-up approach: all people, from the bottom to the top, need to pay attention to water resources and water issues so that water conservation can be put into effect and improve people's health [6]. As shown by endogenous economic growth theories, improved health may increase human capital and thus contribute to the engine of economic growth [7].
Given the health benefits, in terms of mortality, of improving water quality, and the gap in the existing literature on how access to clean water affects mental health and abilities for daily activities, it is worthwhile to study the effects of access to water sanitary facilities on mental health. This paper discusses the relationship between access to clean drinking water and people's mental health status, using data from a nationally representative survey in China. The paper is organized in the following way. We start by offering the background of China's economic growth and environmental degradation in the introduction. This is followed by an explanation of the data, methods, and model we applied. The results are then presented and discussed. We end with conclusions summarizing the findings.
Data and methodological framework
This study utilizes data from the China Health and Retirement Longitudinal Study (CHARLS). Collecting data from a nationally representative sample of Chinese residents aged 45 and older across 28 provinces of China, CHARLS is able to serve the needs of scientific research on the elderly. The study has been carried out every two years since the baseline national wave in 2011. The data can be downloaded upon request from the official site charls.pku.edu.cn. STATA is the software we used to view and analyze the data. We used the biomarker, community, and demographic data of the study. A total of 20,435 residents took part in the CHARLS 2015 survey; Table 1 shows the detailed demographic distribution of the CHARLS 2015 wave sample. CHARLS data contain a wide array of information on demographics, education, health, income, etc. In the individual questionnaire, respondents are asked how often they felt depressed in the past week, with four choices provided: "1 Rarely or none of the time", "2 Some or a little of the time", "3 Occasionally or a moderate amount of the time", and "4 Most or all of the time". We utilize this item to construct a four-point-scale variable indicating the respondent's level of depression, where 4 indicates the highest level of depression and 1 the lowest. We use difficulties with activities of daily living (ADL) and difficulties with instrumental activities of daily living (IADL) to capture respondents' abilities for daily activities. ADL includes the abilities of walking, eating, toileting, dressing, bathing, and controlling urination and defecation. In our analysis, the variable difficulties of ADL is a dummy variable, with 1 indicating that the respondent has difficulty in ADL and 0 otherwise. IADL includes the abilities of shopping, housecleaning, managing money, taking medicine, and cooking; similarly, the variable difficulties of IADL is a dummy variable, with 1 indicating that the respondent has difficulty in IADL and 0 otherwise. We further construct an indicator of the respondents' overall health status based on self-evaluation. In the questionnaire, they are asked how they would rate their health, with five choices ranging from "1 Excellent" to "5 Poor". In our analysis, we reversed the order to "1 Poor" to "5 Excellent", which fits the common sense that a higher score means the person is more satisfied with his or her health.
Besides the individual survey, CHARLS also collects information on the community or village where the individual respondents live. Among this information, we utilize the item on how many households in the village/community use drinking water from each of the following sources as the key independent variable. The choices include tap water, well water, pool water, rivers/lakes/brooks, rain and snow water, cellar water, spring water, and others. Among these water sources, only tap water is pre-processed through water sanitary facilities and is thus cleaner than the rest on average. To isolate the effects of access to clean water on health, we further construct a dummy variable, water facility, with 1 indicating that the community/village has access to water processed by sanitary facilities (i.e., clean water) and 0 indicating that it does not.
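A sketch of this variable construction is shown below, using pandas on a toy data frame; the column names are illustrative rather than actual CHARLS variable codes, and the threshold used to define the water-facility dummy is an assumption.

```python
# Sketch of the variable construction described above (toy data; column
# names and the water-facility threshold are assumptions, not CHARLS codes).
import pandas as pd

df = pd.DataFrame({
    "depress_raw": [1, 4, 2, 3],        # 1 = rarely ... 4 = most of the time
    "srh_raw":     [1, 5, 3, 2],        # 1 = excellent ... 5 = poor
    "adl_items":   [0, 2, 0, 1],        # number of ADL items with difficulty
    "tap_share":   [0.9, 0.0, 0.6, 0.2] # share of households on tap water
})

df["level_of_depression"] = df["depress_raw"]       # 4-point scale as-is
df["self_reported_health"] = 6 - df["srh_raw"]      # reverse: 5 = excellent
df["adl_difficulty_any"] = (df["adl_items"] > 0).astype(int)
df["water_facility"] = (df["tap_share"] > 0).astype(int)  # assumed threshold
print(df)
```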
In addition, we construct and utilize the following control variables in the regression analysis: (1) years of education: number of years of education.
(2) expenditure: expenditure of the household in the last year. This is used as a proxy for income since there are many missing values in the income data.
(3) female: dummy variable, with 1 indicating that the respondent is female and 0 that the respondent is male.
(4) urban: dummy variable, with 1 indicating that the respondent lives in an urban area and 0 that the respondent lives in a rural area.
(5) age: age of the respondent.

The empirical method, i.e., the regression model, is given below, where Y is the health indicator (level of depression, difficulties of ADL, difficulties of IADL, or self-reported health status) and water facility is the key independent variable indicating whether the community/village has access to water sanitary facilities and thus clean water.
Yi (health indicator) = β0 + β1(water facility)i + β2(years of education)i + β3(expenditure)i + β4(female)i + β5(urban)i + β6(age)i + ui

Figure 3 shows the fraction of communities/villages with access to water sanitary facilities and thus clean water: more than half (66.13%) of the communities have tap water facilities, while one third (33.87%) of all the communities in the study do not have access to tap water. Figure 4 compares the level of depression between respondents with and without access to clean water; it clearly suggests that people who use tap water are less likely to feel depressed. Figure 5 compares self-reported health condition between the two groups; evidently, people who drink tap water feel healthier than those who do not. Figure 6 compares difficulty in carrying out daily activities: people who have access to tap water have much less difficulty carrying out daily activities. Figure 7 compares difficulty in carrying out instrumental daily activities: likewise, people who have access to tap water have much less difficulty. The results above suggest a clear distinction between the respondents with access to clean water and those without. To further test whether this distinction is robust to other control variables and to make ceteris paribus comparisons, we performed regression analysis taking both the key independent variable, water facility, and the control variables into consideration. The results are shown in Table 2. Note: the key X variable water facility and each control variable are entered into regressions with the 4 key Y variables: ADL_difficulty_any (whether the respondent has any ADL difficulty), IADL_difficulty_any (whether the respondent has any IADL difficulty), self-reported health, and depression level. ***p < 0.01, **p < 0.05, *p < 0.1. Data source: CHARLS 2015.
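Although the authors report using STATA, the regression above can be sketched in Python with statsmodels; the data below are synthetic stand-ins for the CHARLS variables, so the estimated coefficients are purely illustrative.

```python
# OLS sketch of the model above, fitted on synthetic data with statsmodels.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
d = pd.DataFrame({
    "water_facility": rng.integers(0, 2, n),
    "years_of_education": rng.integers(0, 16, n),
    "expenditure": rng.lognormal(9, 1, n),
    "female": rng.integers(0, 2, n),
    "urban": rng.integers(0, 2, n),
    "age": rng.integers(45, 90, n),
})
# Simulated outcome: clean water lowers depression by 0.3 on the 1-4 scale
d["depression"] = (2.5 - 0.3 * d["water_facility"]
                   + 0.01 * (d["age"] - 45)
                   + rng.normal(0, 0.8, n))

model = smf.ols("depression ~ water_facility + years_of_education"
                " + expenditure + female + urban + age", data=d).fit()
print(model.summary().tables[1])   # beta1 is the water-facility effect
```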
Results and discussion
For each of the four regressions, the F test (F-statistics = 89.57, 109.13, 33.74, and 72.83) is significant, meaning that the explanatory variables are jointly significant.
According to the regression analysis, access to a tap water facility has a statistically significant (p < 0.01) association with older people's level of depression, ADL difficulties, IADL difficulties, and self-reported health status. The regressions also suggest that urban, male respondents with more years of education tend to have fewer difficulties in ADL and IADL, better self-reported health status, and lower levels of depression.
These findings consistently indicate the benefit of providing access to clean water. To be more specific, having access to clean water is negatively associated with ADL difficulties and IADL difficulties, which suggests that respondents with access to clean water have fewer difficulties in daily activities such as walking and eating, and fewer difficulties in instrumental daily activities such as shopping and managing money. We also find that having access to clean water is negatively associated with the level of depression, meaning that respondents with access to clean water are less depressed. Moreover, having access to clean water is positively associated with self-reported health status, suggesting that respondents with access to clean water evaluate their health more favorably.
The results are in line with previous findings on drinking water quality and the health of the elderly. For example, drinking water turbidity can increase the risk of gastrointestinal illnesses in the elderly [8]. Moreover, contaminated and unprocessed water can be a source of waterborne diseases [9]. Our finding reflects the health risks that the elderly may face if they do not have sanitary water sources. Beyond problems in physical health, our results show that the elderly have even more mental health issues when they do not have tap water to drink. This is similar to the negative effect of arsenic in drinking water: arsenicosis symptoms have a strong negative effect on mental health [10]. Even worse, chronic exposure to arsenic through drinking water is associated with an increase in mortality rates [11]. The quality of the drinking water source thus affects the well-being of the elderly. As a result, it is essential to establish the necessary water treatment plants and help more people gain access to disinfected tap water free of harmful chemicals. Since chlorination cannot remove arsenic from water, additional methods of processing drinking water should be considered to reduce the health risk. As the climate changes, the quality of surface water, especially rivers and lakes, degrades over time [12]. This degradation can harm people's health if their drinking water sources are not cleaned thoroughly. For instance, microcystins, a microbial pollutant found in water from contaminated water pipes, threaten people's health and may lead to liver cancer [13]. What is worse, industrialization and increasing agricultural activity escalate water scarcity in China [13], and people may not have enough clean water to drink in the future. Economically, taking more care of the elderly is crucial for China because the population is aging, a problem that many developed countries also face. Human capital, which involves people's well-being, determines a country's economic condition, so the health of the elderly is of vital importance for China as people retire at an older age. To increase longevity and help people live healthier lives, the government should invest more in the sanitary water system and make sure that all residents have clean, safe tap water to drink.
There are some limitations in this study that might affect the estimated association between the water facility and the health of the elderly. For instance, we do not include health insurance or water pollution in the different regions where the respondents in our sample live. People with sufficient insurance and a better environment might have better physical and mental health regardless of the water facility.
Conclusions
Through regression analysis, we find that drinking water quality has a statistically significant association with ADL/IADL difficulties, self-reported health status, and level of depression. The results support previous findings while revealing associations beyond water quality's effect on physical health. According to our findings, having access to clean water significantly lowers people's difficulties in ADL and IADL, lowers their level of depression, and improves their self-evaluation of health status; therefore, we propose that the government invest more in improving water sanitation facilities. Future studies can focus on the effects of water quality on people's mental health to uncover the mechanisms behind our findings. | 2020-04-16T09:16:43.673Z | 2020-04-09T00:00:00.000 | {
"year": 2020,
"sha1": "941e3e391a15301b651b04f6c4ffd5ddea24fdf1",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/467/1/012123",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "ab66f76ec345ae542d7b14d936e21334b29be031",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Environmental Science",
"Physics"
]
} |
24544204 | pes2o/s2orc | v3-fos-license | Gender Specific Influence of Fish Oil or Atorvastatin on Functional Properties of Renal Na,K-ATPase in Healthy Wistar and Hypertriglyceridemic Rats
For a better understanding of the pathophysiological processes leading to increased sodium retention as a consequence of hyperlipidemia, the properties of renal Na,K-ATPase, a key enzyme involved in maintaining sodium homeostasis in the organism, were studied. Enzyme kinetics of renal Na,K-ATPase were used to characterize the ATP- and Na⁺-binding sites after administration of fish oil (FO) (30 mg·day⁻¹) or atorvastatin (0.5 mg·100 g⁻¹·day⁻¹) to healthy Wistar rats and rats with hereditary hypertriglyceridemia of both genders. Untreated healthy Wistar and also hypertriglyceridemic female rats revealed higher Na,K-ATPase activity compared with the respective untreated male groups. Hypertriglyceridemia itself was accompanied by higher Na,K-ATPase activity in both genders. Fish oil improved the enzyme affinity to ATP and Na⁺, as indicated by lowered values of Km and KNa in Wistar female rats. In Wistar male rats, FO deteriorated the enzyme in the vicinity of the Na⁺-binding site, as revealed by the increased KNa value. In hypertriglyceridemic rats, FO induced a significant effect only in females, in the vicinity of the sodium-binding sites, resulting in improved affinity as documented by the lower value of KNa. Atorvastatin aggravated the properties of Na,K-ATPase in both genders of Wistar rats. In hypertriglyceridemic rats, protection of Na,K-ATPase was observed, but this effect was bound to females only. Both treatments protected renal Na,K-ATPase in a gender-specific mode, probably resulting in improved extrusion of excessive intracellular sodium out of the cell and thus affecting sodium retention in hHTG females only.
Introduction
Hypertriglyceridemia, independently of the cholesterol level, is associated with an increased risk of cardiovascular disease and may be followed by the development of atherosclerosis and hypertension (Hokanson et al. 1996). In the management of hypertriglyceridemia, various therapies have been used. A systematic search of randomized controlled trials comparing any lipid-lowering intervention with placebo or usual diet with respect to mortality resulted in the conclusion that statins and n-3 polyunsaturated fatty acids (PUFA) are the most favorable lipid-lowering interventions, with reduced risks of overall and cardiac mortality (Studer et al. 2005). Numerous other studies have shown that in HTG treatment, statins and a diet rich in omega-3 polyunsaturated fatty acids (particularly docosahexaenoic acid and eicosapentaenoic acid) lower plasma lipid levels and have an important impact on the primary and secondary prevention of cardiovascular disease and atherosclerosis.
For a better understanding of the pathophysiological processes leading to cardiovascular events as a consequence of hyperlipidemia, several experimental models have been intensively studied. One of these alternatives, known as Prague hereditary hypertriglyceridemic (hHTG) rats, was developed as a model of human hypertriglyceridemia (Vrána and Kazdová 1990). A large number of studies have revealed that Prague hHTG rats represent a suitable model for the study of metabolic disturbances in relation to blood pressure. Numerous abnormalities of blood pressure regulation, as well as alterations in the structure and function of the cardiovascular system, were found in this model (for review see Zicha et al. 2006). The Prague hHTG rats showed a significant increase of diastolic blood pressure accompanied by hypertrophy of the left ventricle (Klimeš et al. 1997). Other studies documented an increase of systolic blood pressure (Šimko et al. 2002) and hypertrophy of the right ventricle (Šimko et al. 2005). The aorta is also altered in hHTG rats, as documented by quantitative and qualitative changes of the endothelium and connexin 43 (Dlugosova et al. 2009). Besides the increased pressure overload, the altered retention of sodium and subsequent enlargement of the circulating volume may be co-responsible for the development of cardiac hypertrophy (Klimeš et al. 1997). The kidneys of hypertriglyceridemic rats were more susceptible to cyclosporin-induced nephrotoxicity, which was reduced in hHTG rats by PUFA (Bohdanecká et al. 1999). Administration of PUFA and also atorvastatin resulted in an antiarrhythmic effect via protection of cardiomyocytes and cell-to-cell junction integrity (Bacova et al. 2010).
The mammalian kidney plays a crucial role in maintaining the extracellular homeostasis of sodium ions. One of the key systems involved in this process is renal Na,K-ATPase, the so-called sodium pump, which transports 3 Na⁺ ions out of the cell and 2 K⁺ ions into the cell using the energy derived from the hydrolysis of one molecule of ATP. Previous studies have documented that Na,K-ATPase is sensitive to the administration of PUFA (Gerbi et al. 1997, Djemli-Shipkolye et al. 2002). Administration of atorvastatin or simvastatin to hyperlipidemic patients also increased the activity of Na,K-ATPase in erythrocytes (Broncel et al. 1997).
Trying to characterize the utilization of ATP and the Na⁺-binding properties of renal Na,K-ATPase during hypertriglyceridemia and its treatment with PUFA and atorvastatin, the present study was designed to investigate the kinetic properties of the enzyme in male and female hHTG rats.
Animal model
Experiments were performed on adult Wistar rats and rats with hereditary hypertriglyceridemia (HTG) of both genders.
At the beginning of the experiments, 3-month-old Wistar and HTG animals of both genders were divided into 3 groups (n=8 in each experimental group): Wistar and HTG rats fed an n-3 polyunsaturated fatty acid diet derived from fish oil (eicosapentaenoic acid and docosahexaenoic acid, Vesterålen, Norway, 30 mg·day⁻¹) for two months (WfFO, Wistar females; HfFO, HTG females; WmFO, Wistar males; HmFO, HTG males), Wistar and HTG rats treated with atorvastatin (Zentiva, Slovakia, 0.5 mg·100 g⁻¹ body weight per day) for two months (WfA, HfA, WmA, HmA), and untreated Wistar and HTG control rats (WfC, HfC, WmC, HmC). All rats had free access to food and drinking water. The animal room was air-conditioned and continually monitored, with a temperature of 23±1 °C and a relative humidity of 55±10 %. At the end of the experiment, the rats were anesthetized with thiopental, and the excised kidneys were immediately frozen in liquid nitrogen and stored for further investigation of Na,K-ATPase properties. All experiments were approved by the Veterinary Council of the Slovak Republic (Decree No. 289, part 139, July 9th 2003) and conformed to the Principles of Laboratory Animal Care (NIH publication 83-25, revised 1985).
Sample isolation
The plasmalemmal membrane fraction from rat kidney was isolated according to Jorgensen (1974) with slight modifications. Briefly, the renal tissue was homogenized in cold isolation medium containing (in mmol·l⁻¹): 250 sucrose, 25 imidazole, 1 EDTA (pH 7.4), using a tissue disruptor (3 × 10 s at a setting of 4, Polytron PT-20). The homogenate was centrifuged at 6000 g for 15 min. The sediment was rehomogenized and centrifuged again at 6000 g for 15 min. The supernatants collected from both centrifugations were recentrifuged at 48000 g for 30 min, and the final sediment was resuspended in the isolation medium. An aliquot was removed for protein determination by the method of Lowry et al. (1951), using bovine serum albumin as the standard.
Enzyme kinetics
The ATP kinetics of Na,K-ATPase were estimated at 37 °C by measuring the hydrolysis of ATP by 10 μg of plasmalemmal proteins in the presence of increasing concentrations of the substrate ATP (0.16-8.0 mmol·l⁻¹). The total volume of the medium was 0.5 ml, containing (in mmol·l⁻¹): MgCl₂ 4, KCl 10, NaCl 100 and imidazole 50 (pH 7.4). After 20 min of preincubation in substrate-free medium, the reaction was started by the addition of ATP, and after 20 min it was stopped by the addition of 0.3 ml of ice-cold 12 % trichloroacetic acid solution. The liberated inorganic phosphorus was determined according to Taussky and Shorr (1953). In order to establish the Na,K-ATPase activity, the ATP hydrolysis occurring in the presence of Mg²⁺ only was subtracted. The Na,K-ATPase kinetics for the cofactor Na⁺ were determined by the same method, in the presence of increasing concentrations of NaCl (2.0-100.0 mmol·l⁻¹); the amount of ATP was constant (8 mmol·l⁻¹). The kinetic parameters Vmax, Km and KNa were evaluated from the obtained data by direct nonlinear regression. The parameter Vmax represents the maximal velocity, and the Km and KNa values represent the concentrations of ATP or Na⁺ necessary for half-maximal activation of the enzyme.
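The text does not specify the functional form used in the direct nonlinear regression; the sketch below assumes a simple hyperbolic (Michaelis-Menten-type) activation curve, v = Vmax·[S]/(K + [S]), with invented activity data. The same fit applied to the NaCl series would yield KNa in place of Km.

```python
# Sketch of the nonlinear-regression step for Vmax and Km, assuming a
# hyperbolic activation curve (data invented; units illustrative).
import numpy as np
from scipy.optimize import curve_fit

def activation(s, vmax, k):
    # v = Vmax * [S] / (K + [S])
    return vmax * s / (k + s)

atp = np.array([0.16, 0.4, 1.0, 2.0, 4.0, 8.0])        # mmol/l
act = np.array([2.1, 4.3, 7.2, 9.0, 10.3, 11.0])       # umol Pi/mg/h (invented)

(vmax, km), _ = curve_fit(activation, atp, act, p0=[12.0, 1.0])
print(f"Vmax = {vmax:.2f}, Km = {km:.2f} mmol/l")
```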
Statistical analysis
All results are expressed as mean ± S.E.M. The significance of differences between the individual groups was determined using one-way analysis of variance (ANOVA) followed by the Student-Newman-Keuls test. A value of p<0.05 was regarded as significant.
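A sketch of this statistical workflow is given below with synthetic group data; note that scipy/statsmodels do not provide the Student-Newman-Keuls test, so Tukey's HSD is substituted here as the post hoc procedure.

```python
# One-way ANOVA plus a post hoc comparison (synthetic data; Tukey's HSD
# stands in for the Student-Newman-Keuls test used in the paper).
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
groups = {g: rng.normal(mu, 0.8, 8)          # n = 8 per experimental group
          for g, mu in [("WfC", 10.), ("WmC", 8.), ("HfC", 12.), ("HmC", 10.)]}

f, p = f_oneway(*groups.values())
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")

vals = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), 8)
print(pairwise_tukeyhsd(vals, labels, alpha=0.05))
```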
Results
When renal Na,K-ATPase was activated with increasing concentrations of substrate, its activity was higher in the hHTG groups throughout the applied range of ATP concentrations compared with the Wistar groups (Fig. 1). This effect was observed in both the female and male groups. Evaluation of the data by nonlinear regression yielded a Vmax value 23 % higher in the WfC group than in the WmC group. Similar gender specificity was observed in the hHTG animals, where the Vmax value in the HfC group was 18 % higher than in the HmC group. Hypertriglyceridemia itself induced a significant increase of the Vmax value in both genders, amounting to 25 % in males and 21 % in females. The Km value remained similar in all four compared groups (Table 1).
Table 1. Kinetic parameters of renal Na,K-ATPase during activation with ATP and NaCl in Wistar rats and hypertriglyceridemic (hHTG) rats, divided into the following groups: control female Wistar rats (WfC), control female hHTG rats (HfC), control male Wistar rats (WmC), control male hHTG rats (HmC). Activation of the enzyme with increasing concentrations of the cofactor Na⁺ again resulted in higher activity in the hHTG groups throughout the applied range of NaCl concentrations compared with the Wistar groups in both genders (Fig. 2). Evaluation of the kinetic parameters revealed a Vmax value 16 % higher in the WfC group than in the WmC group. Similar gender specificity was observed in the hHTG animals, where the Vmax value in the HfC group was 16 % higher than in the HmC group. Hypertriglyceridemia itself induced a significant increase of the Vmax value of 22 % in both genders. The KNa value showed gender specificity in Wistar rats, being 24 % higher in the WfC group than in the WmC group (Table 1).
Administration of fish oil to Wistar females induced a slight increase in Na,K-ATPase activity in the presence of low concentrations of ATP, but at concentrations exceeding 2 mmol·l⁻¹ ATP the activities were slightly lower than in untreated controls (Fig. 3). This biphasic effect of fish oil administration was reflected in a 15 % decrease of the Km value in the WfFO group compared with the WfC group (Table 2). In Wistar males, the enzyme activity was similar throughout the applied concentration range of ATP (Fig. 3), without any alterations in the kinetic parameters (Table 2). Administration of atorvastatin to Wistar rats was followed by decreased enzyme activities throughout the whole concentration range of ATP in both genders (Fig. 3). Evaluation of the kinetic parameters revealed a significant decrease of the Vmax value in females only, shown by a 23 % decrease in the WfA group compared with the WfC group (Table 2).
During activation of renal Na,K-ATPase with increasing concentrations of NaCl, a slight stimulation of enzyme activity was observed in Wistar females treated with fish oil. This effect was observed at concentrations below 10 mmol·l⁻¹ NaCl; above this concentration the effect was lost (Fig. 4). Evaluation of this effect revealed an unchanged Vmax with a 17 % decrease of the KNa value in the WfFO group compared with the WfC group (Table 2). Administration of fish oil to Wistar males also induced a slight stimulation of Na,K-ATPase activity, but the stimulatory effect increased stepwise with increasing concentrations of NaCl (Fig. 4), resulting in increases of Vmax by 17 % and KNa by 25 % in the WmFO group compared with the WmC group (Table 2).
Administration of atorvastatin to Wistar rats was followed by decreased enzyme activities throughout the whole concentration range of NaCl in both genders (Fig. 4). Evaluation of the kinetic parameters revealed a significant decrease of the Vmax value by 21 % in the WfA group compared with the WfC group. The KNa value was significantly increased by 22 % in males, whereas in females it was decreased by 21 %, compared with the respective controls (Table 2).
Administration of fish oil to hHTG rats induced a slight increase in Na,K-ATPase activity throughout the investigated concentration range of ATP in both genders compared with untreated animals (Fig. 5); however, this slight effect was not reflected in changes of either the Vmax or the Km values (Table 3).
Administration of atorvastatin to hHTG rats induced a slight stimulation in female rats only (Fig. 5), resulting in an 18 % increase of the Vmax and a 15 % increase of the Km value (Table 3). When renal Na,K-ATPase was activated with increasing concentrations of NaCl, a slight stimulation of enzyme activity was observed only in the females of the hHTG animals treated with fish oil (Fig. 6). Evaluation of this effect revealed a significant 11 % decrease of the KNa value in the HfFO group compared with the HfC group (Table 3).
Administration of atorvastatin to hHTG female rats was followed by increased enzyme activities throughout the whole concentration range of NaCl (Fig. 6), resulting in a 14 % increase of the Vmax value in the HfA group compared with the HfC group (Table 3). In males, on the other hand, we observed a slight decrease of enzyme activity below 10 mmol·l⁻¹ NaCl (Fig. 6), resulting in a small but statistically significant 12 % increase of the KNa value in the HmA group compared with the HmC group (Table 3).
Discussion
There are cumulative data indicating that sodium transport across the cell membrane is gender dependent (Grikiniene et al. 2004). Healthy females showed a significantly lower intracellular concentration of Na⁺ than males (Smith et al. 1988, Taylor et al. 1991). Gender-related differences in intracellular Na⁺ concentration most likely result from differences in the function of sodium transport systems, including Na,K-ATPase. This hypothesis is also confirmed by our present study, documenting increased activity of renal Na,K-ATPase in control female Wistar rats, in agreement with previous data of Quintas et al. (1997). However, some other experimental studies documented similar activities and expression of renal Na,K-ATPase under normal physiological conditions in both genders (Fekete et al. 2004, 2006). Our data provide evidence that in the situation of pathophysiological overload, the activity of renal Na,K-ATPase was again increased in females compared with male hHTG rats. The increased activity of renal Na,K-ATPase in hHTG females may be explained by a hypothetically higher abundance of active enzyme molecules in the female renal tissue. This hypothesis is supported by our data on the increased Vmax value in both types of enzyme kinetics in all female groups compared with the respective male groups. This finding, suggesting better adaptation of the enzyme to pathophysiological overload in females, is supported by the observation of higher mRNA expression of the Na,K-ATPase catalytic α1 subunit, as well as higher activity, in female rats compared with males after ischemia-reperfusion injury of the kidney (Fekete et al. 2004). The sexual disparity of Na,K-ATPase properties was also demonstrated under various pathophysiological conditions in other tissues, such as the aorta (Palacios et al. 2006), heart (Vlkovičová et al. 2005) and erythrocytes (Smith et al. 1988).
The mechanism of the protection of Na,K-ATPase in females may be ascribed to estradiol, because it was previously shown that this hormone, besides other effects, stimulates the Na,K-ATPase molecule both directly and indirectly in cardiac tissue (Dzurba et al. 1997). This explanation is supported by the finding that the significantly higher activity of the sodium pump is a consequence of the estradiol-induced increase in Na,K-ATPase α2 subunit expression in the female aorta (Palacios et al. 2004) and α1 subunit expression in the female kidney (Fekete et al. 2004). Estradiol also stimulated Na,K-ATPase activity and the expression of the α1 subunit via multiple signaling cascades, including phosphatidylinositol-3 kinase and p42/44 mitogen-activated protein kinase, in vascular smooth muscle (Sudar et al. 2008).
It is necessary to mention that renal Na,K-ATPase activity was higher in both genders of hHTG rats compared with the respective control Wistar groups. Thus, the improved extrusion of intracellular sodium out of the cell as a consequence of the increased Na,K-ATPase activity documented in the present study may be the reason for the increased retention of sodium in hHTG rats, as shown by Klimeš et al. (1997).
Previous studies reported that Na,K-ATPase is sensitive to the administration of n-3 polyunsaturated fatty acids (PUFA). The effect of the fatty acids depended on physiological or pathophysiological conditions and also showed tissue dependence. A positive effect of PUFA on Na,K-ATPase was documented in diabetes in red blood cells (Djemli-Shipkolye et al. 2002), in hearts (Gerbi et al. 1997) and in sciatic nerves (Gerbi et al. 1999). On the other hand, Djemli-Shipkolye et al. (2002) observed a negative effect of PUFA on Na,K-ATPase in sciatic nerves of diabetic as well as healthy rats. It should be mentioned that all these previous studies were carried out exclusively on males.
It is known that Na,K-ATPase reveals different properties in males and females, as discussed above. Therefore, in our study we tried to broaden the knowledge about the effect of PUFA from the viewpoint of possible gender specificity. Our new data suggest a gender-specific influence of PUFA on renal Na,K-ATPase. In Wistar female rats, fish oil improved the enzyme affinity to ATP and Na⁺, as indicated by lowered values of Km and KNa. In Wistar males, fish oil caused deterioration of the enzyme in the vicinity of the Na⁺-binding site, as revealed by the increased KNa value. In hHTG rats, fish oil induced a significant effect only in females, in the vicinity of the sodium-binding sites, resulting in improved affinity, as documented by a lower value of KNa. Our data indicate that renal Na,K-ATPase in healthy and also in hypertriglyceridemic female rats seems to be more sensitive to the protective effect of PUFA administration.
Atorvastatin did not influence the vicinity of the ATP-binding site in either male or female Wistar rats, as suggested by the unchanged Km values. In the vicinity of the Na⁺-binding site of Na,K-ATPase in male Wistar rats, atorvastatin probably induced conformational changes resulting in deteriorated affinity to sodium, as suggested by the increased value of KNa. In Wistar female rats, atorvastatin reduced the number of active Na,K-ATPase molecules, as indicated by the lowered Vmax value for both types of enzyme activation. This finding may suggest a decrease in the transmembrane transport of sodium out of the cell as a consequence of the decreased Na,K-ATPase activity, presumably followed by lowered retention of sodium in normotensive Wistar rats.
In male hypertriglyceridemic rats, atorvastatin did not affect the functional properties of renal Na,K-ATPase, while in hHTG female rats it stimulated Na,K-ATPase over the whole concentration range of ATP or Na⁺. This effect is probably caused by an increased number of active enzyme molecules, as suggested by the increased Vmax value. Thus, the enzyme in female rats treated with atorvastatin was capable of increasing its activity even in the presence of high concentrations of ATP/Na⁺, while the enzyme in untreated hHTG rats was already saturated. Administration of atorvastatin to healthy Wistar rats was followed by deteriorated properties of renal Na,K-ATPase in both genders. A positive, protective effect on renal Na,K-ATPase was observed in hypertriglyceridemic rats, but this effect was strictly bound to females only.
In conclusion, treatment with fish oil or atorvastatin improved the functional properties of renal Na,K-ATPase in a gender-specific manner, probably resulting in improved extrusion of intracellular sodium out of the cell and thus affecting sodium retention in hHTG females only.
Conflict of Interest
There is no conflict of interest. | 2017-09-07T05:52:56.344Z | 2011-01-01T00:00:00.000 | {
"year": 2011,
"sha1": "1de58260c63ecfaf7cac0b24a7431eb3f934fda5",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.33549/physiolres.932153",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "33ede94a5fca45ce3a3b077f437c6d50c27bf880",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
250317197 | pes2o/s2orc | v3-fos-license | Cerebral venous sinus thrombosis caused by traumatic brain injury complicating thyroid storm: a case report and discussion
Introduction Cerebral venous sinus thrombosis (CVST) is an uncommon cerebrovascular disease with diverse predisposing factors. We report a case of CVST caused by a thyroid storm induced by traumatic brain injury. Case presentation A 29-year-old male patient with a history of Graves' disease with hyperthyroidism presented to our hospital with head trauma; the admission CT scan confirmed cerebral contusion and laceration in both frontal lobes. He received mannitol to lower intracranial pressure, haemostatic therapy, and antiepileptic treatment. Eight days later, he presented with signs of a thyroid storm, such as tachycardia, hyperthermia, sweating and irritability, and his thyroid function tests revealed high levels of TPO-Ab, TR-Ab, TG-Ab, FT3 and FT4. He then entered a deep coma. His brain CT showed thrombosis of multiple venous sinuses, along with the opening of peripheral collateral vessels, congestive infarction with haemorrhage and brain swelling. He regained consciousness after treatment with antithyroid drugs, anticoagulants, respiratory support and a regimen of sedation/analgesia. After a half-year follow-up, most of the patient's blocked cerebral venous sinuses had been recanalized, but some sequelae remained, such as impaired fine motor performance of the right hand and verbal expression deficits. Conclusions CVST can be induced by thyroid storms, and trauma-related thyroid storms can develop on the basis of hyperthyroidism. The purpose of this case report is to raise clinicians' awareness and improve their ability to diagnose CVST early in patients with traumatic brain injury complicated by thyroid storm, in order to improve the neurological prognosis of similar patients.
Introduction
Cerebral venous sinus thrombosis (CVST) is believed to be caused by intracranial venous blood stasis, coagulation hyperactivity and venous endothelial injury. The annual incidence of CVST is up to 15.7 per million persons [1]. Due to its atypical clinical manifestations, it is often missed or misdiagnosed. Hyperthyroidism may be one of the predisposing factors for CVST [2], but nonspecialist physicians generally have limited awareness of this condition. When a patient with hyperthyroidism is admitted to the hospital because of severe trauma, clinical interventions for the trauma are frequently implemented without adequate attention to the hyperthyroidism, which can have adverse consequences. Recently, we admitted a case of a thyroid storm precipitated by traumatic brain injury that resulted in CVST. This report aimed to examine the pathogenesis and choice of regimen, as well as the treatment and monitoring of CVST, to improve its early diagnosis and differentiation among clinicians.

Case presentation

A 29-year-old male patient was admitted to the emergency department because of head trauma caused by a fall from a height on December 12, 2020. He suffered acute confusion for 5 minutes immediately after the trauma. He presented with generalized convulsions and complained of obvious dizziness and headache after waking up. The CT scan (Fig. 1A) showed signs of cerebral contusion and laceration in both frontal lobes and bilateral pulmonary contusions. After a negative nucleic acid test for COVID-19, he was admitted to the department of neurosurgery the next day. On the admission physical examination, he was afebrile with a regular pulse rate of 100/min and a blood pressure of 162/97 mmHg. He was alert and had multiple abrasions on the top of the right forehead. He showed no abnormalities on cardiopulmonary examination and had a Glasgow Coma Scale score of 15 (E4V5M6). His admission diagnoses were mild cerebral contusion and laceration in the bilateral frontal lobes, intracerebral haematoma in the left frontal lobe and scalp haematoma on the right frontal scalp. He then received mannitol to lower intracranial pressure, haemostatic therapy, antiepileptic treatment and close monitoring of consciousness and vital signs. His father provided a medical history of more than 1 year of Graves' disease with hyperthyroidism, for which methimazole had been taken irregularly without regular review. His thyroid function tests accordingly revealed a serum thyroid-stimulating hormone (S-TSH) level of 0.01 mIU/L (NR: 0.27 ~ 4.2), a thyroid peroxidase antibody (TPO-Ab) of 288.1 IU/ml (NR: 0 ~ 34), and elevated thyrotropin receptor antibody (TR-Ab), thyroglobulin antibody (TG-Ab), FT3 and FT4.

At 3 am on December 20th, the patient became irritable and was unable to respond correctly, with a heart rate of 120/min. His body temperature increased to 38.5 °C, his blood pressure reached 200/120 mmHg, and he began sweating profusely. After an endocrinology consultation, a thyroid storm was suspected, and he was transferred to the ICU for further diagnosis and treatment. The patient continued to have a high fever of up to 40 °C and a heart rate > 130/min. He developed respiratory distress and was endotracheally intubated to receive ventilator-assisted breathing. Adrenal gland imaging scans were negative.
The patient was confirmed to be in a thyroid storm and received a series of antihyperthyroid treatments, including propylthiouracil, iodine, propranolol, hydrocortisone and plasmapheresis. Meanwhile, he entered a deep coma (GCS score of 2T), and re-examination of his brain CT and CTV (Fig. 1B-D) showed thrombosis of multiple venous sinuses, along with the opening of peripheral collateral vessels, congestive infarction with haemorrhage and brain swelling. His neurological condition deteriorated rapidly. Following multidisciplinary consultation among neurosurgery, neurology and endocrinology, and in keeping with the family's wishes, the physicians in charge chose a treatment protocol of subcutaneous low molecular weight heparin (LMWH) 5000 IU (approximately 100 IU/kg) q12h, thyroid storm control, mannitol to lower intracranial pressure, and sedation and analgesia, rather than aggressive endovascular thrombectomy. The patient's temperature and heart rate gradually decreased, and his FT3 and FT4 values showed similar trends (Fig. 2). LMWH was started on December 23, 2020, and the anticoagulant effect was assessed by dynamically monitoring the changes in D-dimer and plasma anti-Xa activity (Fig. 3). D-dimer gradually decreased, and plasma anti-Xa activity gradually reached approximately 0.4 U/ml (effective range 0.4 ~ 1.0 at our institution). A CT scan performed on December 29th revealed that the low-density lesions in the left parietal/temporal/occipital lobes had progressed, with obvious brain swelling (Fig. 1E). The patient's coma status gradually improved, and his temperature, heart rate and blood pressure gradually returned to normal ranges. He regained consciousness on January 10th, 2021, was gradually weaned from mechanical ventilation, and then received physical rehabilitation. His plasma thrombophilia screening revealed slightly low protein C activity and normal levels of protein S activity and homocysteine, with no lupus anticoagulant detected. His indices of autoimmune disorders showed no abnormalities. An MRI scan on January 24th (Fig. 1F) revealed a patchy infarct lesion in the left parietal lobe, and the surrounding brain swelling had improved. The patient was discharged on January 28th and transferred to Shanghai for further treatment. His thyroid function indices were within the normal range before discharge (Fig. 2), and his D-dimer had decreased to 0.48 mg/L. However, his right upper limb muscle strength remained poor. Rivaroxaban was prescribed as the main oral anticoagulant to replace LMWH before discharge.
After discharge, the patient continued to receive treatment, including oral antihyperthyroidism therapy (methimazole), anticoagulation (rivaroxaban) and rehabilitation exercises. Half a year after discharge, the gross motor ability and muscle strength of his right upper limb had gradually recovered; however, the fine movements of his right hand were still imperfect, and his expressive language was slightly impaired. An MRI on July 8th revealed a focal lesion of encephalomalacia with gliosis in the left parieto-occipital lobes and recanalization of the previously occluded venous sinuses, although a small thrombus was still visible in the left transverse sinus and sinus confluence (Fig. 4A-C). His hyperthyroidism was well controlled, with an FT3 of 5.5 pmol/L (NR: 3.1 ~ 6.8) and an FT4 of 15.5 pmol/L (NR: 12 ~ 22). He continues to take oral medication and undergo physical rehabilitation.
Discussion
Cerebral venous sinus thrombosis (CVST) is a rare thrombotic disease. Recent studies have found that its incidence may be greatly underestimated [1,4]. The aetiology and risk factors for CVST include infectious factors, pregnancy and the postpartum state, systemic disease, dehydration, intracranial tumours, oral contraceptives, hypercoagulable states, certain drugs, trauma, COVID-19 and vaccination with adenovirus-vectored vaccines [5]; approximately 30% of CVST cases still have an unknown aetiology [6,7]. CVST caused by traumatic brain injury is rare [8,9]. Thyroid storm arises on a background of uncontrolled hyperthyroidism, precipitated by factors such as infection, trauma and surgery; it leads to rapid deterioration involving multiple organs, with an overall mortality of up to 20-30% [10]. Severe trauma is a rare cause of thyroid storm [10-12]. Identifying a thyroid storm among trauma patients is a challenge for admitting physicians, who commonly focus on dealing with the major injuries; manifestations such as tachycardia and unconsciousness are often attributed to the trauma itself. Hyperthyroidism is considered a risk factor for CVST. Hieber et al. [2] found that 20.9% of CVST patients had thyroid disease, a proportion much higher than in previous studies of risk factors for CVST [6,7,13]. A number of previous studies have also shown a correlation between hyperthyroidism and CVST [14-17], attributed to the hypercoagulable state induced by thyrotoxicosis [18,19]. Thyrotoxicosis increases plasma levels of tissue factor, Factor VIII, Factor IX, von Willebrand Factor, fibrinogen, D-dimer and plasminogen activator inhibitor-1 [20,21]; all of these factors are related to the formation of CVST. This patient with poorly controlled hyperthyroidism received mannitol to lower intracranial pressure, haemostatic therapy and antiepileptic treatment after brain trauma. He had been taking antihyperthyroidism drugs by himself, without supervision. He developed a thyroid storm and a coma 8 days after admission, and his repeat CT scan showed multiple cerebral venous sinus thromboses and progressive congestive cerebral infarction of the left parieto-occipital lobe with brain swelling, which led to a deep coma. Witnesses stated that the patient likely lost consciousness before falling, suggesting that a hyperthyroidism-related brain condition might have existed before the trauma. During the episode of his thyroid storm, we found an interesting phenomenon: the patient had high catecholamine levels, indicating sympathetic hyperactivity. Hyperthyroidism can produce a physiologic state similar to catecholamine excess, leading to a series of manifestations [22]. On the other hand, catecholamines increase T4-to-T3 conversion in selected tissues, showing a synergistic interaction between thyroid hormone and the sympathetic system [23]. In previous studies [24,25], plasma and urinary levels of norepinephrine in thyrotoxicosis patients have been reported as either normal or diminished; interestingly, we observed the opposite in our case. We could not fully confirm the aetiology of the sympathetic hyperactivity after adrenal gland imaging, but we obtained a good therapeutic effect with antihyperthyroidism plus antisympathetic therapy (propranolol). We believe that the thyroid storm enhanced sympathetic activity in this case and that there was a complicated interaction between these two systems. More evidence is needed to support this inference.
According to the above analysis, this patient developed a trauma-induced thyroid storm on a background of poorly controlled hyperthyroidism and ultimately developed multiple CVSTs and cerebral infarction. With a series of intensified treatments, the patient's condition was gradually brought under control, and he slowly regained consciousness. During follow-up, long-term anticoagulation and hyperthyroidism control became routine therapy, because his hyperthyroidism is a predisposing factor for thrombosis; his condition is currently well controlled. The lessons learned from this case are as follows: underlying hyperthyroidism should not be ignored in any situation, and careful investigation of the medical history together with increased awareness of trauma-associated thyroid storm may help reduce the misdiagnosis rate and prevent catastrophic consequences.
The 2017 European Stroke Organization guideline for the diagnosis and treatment of CVT [26] recommended LMWH as the main treatment for acute CVT. The United States guidelines suggested that endovascular therapy be considered in patients with clinical deterioration despite anticoagulation or with severe neurological deficits or coma [27]. Currently, endovascular therapy is not recommended as a routine treatment for CVST. In this case, the decision-maker was reluctant to accept the risks of endovascular therapy; anticoagulation with LMWH was therefore a reasonable choice in the setting of a thyroid storm. However, some sequelae remained because the multiple CVSTs led to cortical infarctions. In the future, if more reliable endovascular techniques become available, the neurological prognosis of such patients will likely improve. There is currently no recommendation for testing anti-Xa activity to monitor the effectiveness of LMWH [28,29], because anti-Xa levels within the therapeutic range of LMWH may not be associated with clinical outcomes. However, certain groups of patients may benefit from this monitoring, including pregnant women, children, obese patients and patients with renal impairment [29]. The tendency of a gradual increase in anti-Xa activity corresponded to the gradual decline in D-dimer in this patient (Fig. 3). However, the anti-Xa activity curve fluctuated widely, which means it can serve only as a reference rather than as a definitive guide to treatment.
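For illustration only, the following is a minimal sketch of the dosing arithmetic and monitoring checks described in this case. The approximately 100 IU/kg q12h LMWH dose and the 0.4-1.0 U/ml institutional anti-Xa target come from the report itself; the function names and the example values are hypothetical, and this is in no way a clinical decision tool.

```python
# Illustrative sketch of the monitoring logic described above -- not a clinical tool.
# The dose rule (approx. 100 IU/kg q12h) and the anti-Xa target range (0.4-1.0 U/ml,
# the effective range used in the authors' institute) come from the case report;
# everything else (function names, example values) is hypothetical.

ANTI_XA_RANGE = (0.4, 1.0)  # plasma anti-Xa activity target, U/ml

def lmwh_dose_iu(weight_kg, iu_per_kg=100.0):
    """Weight-based LMWH dose per injection (given every 12 h)."""
    return weight_kg * iu_per_kg

def anti_xa_in_range(activity_u_per_ml):
    """True if a single anti-Xa measurement falls inside the target range."""
    lo, hi = ANTI_XA_RANGE
    return lo <= activity_u_per_ml <= hi

def d_dimer_trending_down(values_mg_per_l):
    """Crude check that serial D-dimer values are decreasing overall."""
    return len(values_mg_per_l) >= 2 and values_mg_per_l[-1] < values_mg_per_l[0]

# Hypothetical example: a ~50 kg patient would receive ~5000 IU q12h,
# matching the dose reported in this case.
print(lmwh_dose_iu(50))                         # 5000.0
print(anti_xa_in_range(0.4))                    # True (lower edge of target range)
print(d_dimer_trending_down([3.2, 1.5, 0.48]))  # True (0.48 mg/L at discharge)
```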
Our case report has some limitations. We could not provide direct evidence linking the trauma to CVST, and some preexisting pathologic process might have been present in his brain before the fall. We could not fully rule out other aetiologies of sympathetic hyperactivity during his thyroid storm. We did not complete a full screen for all related thromboembolic, haematological, oncological or rheumatological diseases, which might have been the underlying aetiology of the CVST. Beyond anticoagulation, we did not have sufficiently safe endovascular manoeuvres available to minimize the subsequent cerebral sequelae. We will handle similar cases better in the future as experience accumulates.
Conclusion
This report describes a case of CVST caused by traumatic brain injury complicated by thyroid storm, with the goal of improving the understanding of the diagnosis and treatment of similar cases. We propose the following learning points:
▼ If a patient with traumatic brain injury has underlying hyperthyroidism, the admitting physicians should pay attention to the severity of the hyperthyroidism and maintain close monitoring to prevent the development of thyroid storm.
▼ CVST can be induced by a thyroid storm. Timely craniocerebral imaging should be performed when a stable patient with hyperthyroidism develops cognitive impairment or even coma during treatment.
▼ Currently, anticoagulation is the most reasonable choice for CVST. If technical conditions allow, aggressive endovascular therapy could be considered to reduce neurological damage and sequelae [27].
▼ Monitoring of anti-Xa activity can be considered for patients who are prone to secondary bleeding or who cannot tolerate bleeding consequences (such as cerebral haemorrhage). | 2022-07-07T13:27:16.165Z | 2022-07-07T00:00:00.000 | {
"year": 2022,
"sha1": "597a3a448e770269dc4bbc08cade8aea12302613",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "d5a1f2b6115aa807865ee9b675c9e73c2d17c3eb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
268873733 | pes2o/s2orc | v3-fos-license | Mediastinal Nontuberculous Mycobacterial Infection in Children: A Multidisciplinary Approach
Background: Mediastinal infections due to nontuberculous mycobacteria remain an exceedingly rare entity, and most cases in the published literature do not include pediatric patients. Due to their clinical infrequency, poor response to antimicrobial therapy and often precarious anatomical location, the optimal management of these lesions can be challenging. Methods: A retrospective medical record review of 4 pediatric cases of mediastinal nontuberculous mycobacterial infection was undertaken. Each child presented with nonspecific respiratory symptoms, including significant acute airway obstruction, and required a range of investigations to confirm the diagnosis. Nonresponsiveness to conservative measures and antimycobacterial therapy ultimately led to surgical intervention to achieve clinical improvement. Results: All 4 children had extensive evaluation and multidisciplinary involvement spanning otolaryngology, respiratory medicine, pediatric surgery, infectious diseases and cardiothoracic surgery. All eventually had their disease debulked via thoracotomy in addition to prolonged antimycobacterial therapy, with successful clinical outcomes. Conclusions: Mediastinal nontuberculous mycobacterial infections in the pediatric population are rare and diagnostically challenging. A high clinical suspicion should be maintained, and multidisciplinary input sought. Targeted surgery with adjuvant medical therapy can reduce disease burden with minimal long-term morbidity.
Nontuberculous mycobacteria (NTM) are acid-fast bacilli (AFB) found throughout the environment, including in soil and tap water [1]. Mycobacterium avium complex (MAC) is responsible for 80% of all NTM infections [2]. NTM cases may be increasing globally due to improved diagnostics [3]. The incidence varies geographically, with the incidence in Australia ranging from approximately 0.6 to 1.6 cases per 100,000 children per year [4]. In immunocompetent children, the most common manifestation of NTM is cervical lymphadenitis, followed by skin and soft tissue infections [2,5]. Pediatric mediastinal lesions with airway involvement caused by MAC are exceedingly rare, accounting for 3.6% of all NTM cases in a recent Australian study [4]. Pulmonary NTM tends to occur as lung parenchymal disease in children with underlying conditions such as cystic fibrosis, bronchiectasis or immunodeficiency [4,5]. Intrathoracic NTM can occur in otherwise healthy children, confined to the mediastinal lymph nodes without significant pulmonary disease. The diagnosis can be elusive, as bronchial washings are often negative and confirmation may require more invasive investigation.
Medical management of NTM lung infections has variable outcomes. Sputum clearance is often difficult to achieve [6], and combinations of antibiotics with known in vitro susceptibility do not necessarily correlate with clinical response. These antibiotics require prolonged administration and may be toxic, which can be challenging for children. Surgical debridement, debulking or excision can aid in making a definitive diagnosis and in disease resolution. However, these lesions are not always easily accessible, and significant complications are possible [7]. We present 4 children with mediastinal NTM infections in whom a multidisciplinary team (MDT) approach resulted in successful clinical outcomes.
Consent
Informed parental consent was obtained for publishing patient information and images. Ethics approval was gained through the Sydney Children's Hospital Network Research Ethics Office (CCR2021/06).
Case 1
A two-and-a-half-year-old immunocompetent male presented with a 3-month history of intermittent barking cough. The cough was not related to infectious upper respiratory symptoms and was not triggered by oral intake. He was well between episodes, though he had developed soft airway sounds on exertion. He had been treated for presumed croup with oral dexamethasone and nebulized adrenaline several times, and amoxicillin had had minimal effect. On presentation, he had increased work of breathing but no stridor or wheeze. His chest radiograph, full blood count and C-reactive protein were normal. His symptoms settled and he was discharged; however, he represented 2 weeks later with another episode.
Rigid laryngo-tracheo-bronchoscopy (LTB) identified a polypoidal growth at the distal right trachea causing near-complete obstruction of the right main bronchus (RMB). No foreign body or other abnormality was identified. Computerized tomography (CT) of the chest revealed a calcified right paratracheal mass extending into the airway (Fig. 1). A second right hilar mass was also seen.
The tracheal lesion was debulked endoscopically under a second general anesthetic. It was avascular and soft with a white-speckled internal appearance (Fig. 2). Histopathology demonstrated granulomatous inflammation. AFB were not identified on microscopy or culture. Molecular investigations on tissue, gastric aspirates and induced sputum did not yield bacterial, fungal or acid-fast isolates. His tuberculin skin test (TST) was 6 mm. An interferon-gamma release assay and an immunology work-up, including lymphocyte subsets, immunoglobulin levels and screening for chronic granulomatous disease, sarcoidosis and cystic fibrosis, were negative.
Based on the radiologic features, NTM was the most likely diagnosis. However, the inability to identify the pathogen and its susceptibility made the choice, route and duration of medical treatment difficult, especially in the context of his age, compliance, tolerability and monitoring considerations. LTB 6 weeks later revealed regrowth of the mass to approximately 50% of its original size. A fluorodeoxyglucose positron emission tomography-computerized tomography (FDG PET-CT) scan demonstrated that the lesions were metabolically active.
An MDT meeting with the infectious disease, respiratory, otolaryngology, pediatric and cardiothoracic surgical teams reached consensus on surgical debulking for diagnostic and therapeutic purposes. He underwent a right thoracotomy, in which a firm, adherent paratracheal nodal mass with its inferior border under the azygous vein was dissected out. Complete removal from the tracheal wall was not undertaken, to avoid creating a significant defect. A second node was found at the hilum, inferior to the thymus, with the phrenic nerve coursing posteriorly. Caseous material was curetted without damage to the phrenic nerve. MAC DNA was weakly detected by polymerase chain reaction (PCR) on intraoperative specimens.
Postoperatively, he was given triple antimycobacterial therapy (azithromycin, ethambutol and rifampicin) for 12 months. Follow-up chest CT 6 months after ceasing antibiotics showed significantly smaller areas of calcification with no endoluminal extension. A follow-up fluorodeoxyglucose positron emission tomography-computerized tomography scan was also obtained.
Case 2
A 2-year-old immunocompetent female presented to a regional hospital 3 times over 5 weeks with worsening cough, intermittent wheeze, increased work of breathing, loss of appetite and stagnant growth. The family had traveled to Malta 2 months prior. A chest radiograph demonstrated right upper and mid-zone changes with hyperinflation of the left lung. Flexible bronchoscopy performed locally identified a friable left main bronchial lesion.
LTB at our center revealed a fleshy obstructing mass within her left main bronchus (LMB). The carina was splayed, suggesting extrinsic compression from a mediastinal mass. Unexpectedly, the RMB was significantly narrowed by external compression, with only the right upper lobe bronchus patent. Maintaining ventilation was challenging, and extracorporeal membrane oxygenation was considered; however, she tolerated careful biopsy of the exophytic component. Chest CT demonstrated a homogeneous, hypoattenuating subcarinal mass measuring 33 × 18 × 28 mm (Fig. 3A). An aggressive neoplastic lesion, a lympho-vascular malformation and a bronchogenic cyst were the differential diagnoses. She was given high-dose intravenous methylprednisolone while awaiting biopsy results.
The biopsy revealed granulomatous inflammation but no evidence of malignancy. Flow cytometry, TST, GeneXpert PCR for Mycobacterium tuberculosis, pan-mycobacterial PCR, 3 gastric lavages, immunoglobulin levels, lymphocyte subsets, investigation for defects in the interferon-gamma pathway, and Epstein-Barr virus and cytomegalovirus serology were all negative.
She was commenced on oral isoniazid, rifampicin and pyrazinamide. A repeat chest CT 2 weeks later revealed that the subcarinal mass had decreased to 24 mm × 16 mm, with reduced airway compression. When MAC was later confirmed from the initial bronchoalveolar lavage (BAL) cultures, she was switched to clarithromycin, ethambutol and rifampicin. Nine weeks into treatment, however, she deteriorated with increased work of breathing and fatigue. A chest CT revealed almost complete obstruction of the LMB, despite a reduction in the size of the mass.
Following MDT discussion, the potential risk of surgical injury to the carina was raised; hence, a fourth antimycobacterial agent (intravenous amikacin) was added. Despite 3 months of treatment, the mass was still compressing the RMB, with hyperinflation of the right middle lobe on CT. She proceeded to surgical debulking via a right thoracotomy, in which a hard subcarinal nodal mass measuring 15 mm × 15 mm was found firmly adherent to the esophagus. The trachea and RMB were defined, and the mass was resected without bronchial or esophageal injury.
Her antibiotic medications were continued for a total of 12 months. A follow-up chest CT 10 months after surgery revealed resolution of the subcarinal inflammatory lesion and normalization of the main bronchi (Fig. 3B). A chest radiograph at her local hospital 2 years after treatment was suggestive of residual nonobstructive mediastinal lymphadenopathy. She remains clinically well and asymptomatic at follow-up after 4 years.
Case 3
A two-and-a-half-year-old, previously well female presented with an 18-month history of recurrent cough exacerbated by respiratory illnesses. She had been managed in the community with oral antibiotics but had no response to bronchodilators. A chest CT revealed focal hyperdensity in the precarinal and subcarinal regions causing compression of the LMB, with the suggestion of calcified lymph nodes (Fig. 4).
Her TST reading was 20 mm. Panfungal, GeneXpert and pan-mycobacterial PCR of her sputum and 3 gastric lavages were negative for AFB. Immunoglobulin levels, lymphocyte subsets, alpha-fetoprotein and beta-human chorionic gonadotrophin were normal. Flexible bronchoscopy and BAL showed no malignant cells but revealed significant narrowing of the LMB.
She progressed to a left thoracoscopy with biopsy of the lesion. Dissection was difficult due to adhesions. The mass lay deep under the aortic arch, superior to the left pulmonary artery, under the ductus arteriosus and anterior to the LMB. The LMB was inadvertently opened; hence, the procedure was converted to a left postero-lateral muscle-sparing thoracotomy. The mass was cleared from the bronchial defect to allow primary repair, then dissected from the pulmonary artery and aorta. Caseous, calcified material was curetted out, leaving the medial fibrous capsule in situ.
Histopathology showed granulomatous inflammation consistent with mycobacterial infection. AFB were not detected on microscopy.
Case 4
A 14-month-old female presented to the emergency department with concern about upper airway obstruction and significant airway noise, possibly triggered by a coryzal illness. She had a history of gastro-esophageal reflux disease, recurrent chest infections with wheeze treated as bronchiolitis, failure to thrive and gross motor delay. She had shown no improvement with fluticasone propionate. LTB revealed a slit-like RMB, and chest CT showed a mediastinal soft tissue lesion without calcification involving the subcarinal, right paratracheal and right hilar regions. There was mass effect on the left atrium and RMB, with occlusion of the right middle lobe bronchus.
After MDT consultation, she underwent a thoracoscopy, which showed a posterior mediastinal mass tenting the azygous vein. Frozen section and formal histopathology of the lesion demonstrated granulomatous inflammation. She was treated with cefotaxime and dexamethasone; however, sputum from the endotracheal tube identified AFB. Although she had never traveled and there was no known exposure to M. tuberculosis, she was commenced on isoniazid, pyrazinamide, rifampicin, pyridoxine and prednisolone. She had a negative TST (0 mm). MAC (M. intracellulare) was later isolated on sputum culture, and her medications were rationalized to clarithromycin, rifampicin, ciprofloxacin and ethambutol; the latter 2 medications were ceased after 6 months due to tolerability issues and potential neurotoxic effects, and were replaced with moxifloxacin. A gastrostomy tube was required for treatment delivery.
Speckled calcification of the affected nodes with infiltration into the airway lumen was apparent 18 months after the first CT scan (Fig. 5A). Flexible bronchoscopy and BAL 2 years after the initial presentation revealed 2 moderately large nodules on the medial wall of the bronchus intermedius. Both lesions were ulcerated, bleeding and caseating. She remained on medications for a total of 3 and a half years, as there was slow resolution of her lymphadenopathy on imaging (Fig. 5B). LTB 1 year after ceasing treatment showed complete resolution of the lesions within the airway lumen.
DISCUSSION
Intrathoracic NTM disease confined to the mediastinal lymph nodes has historically been an unusual presentation in immunocompetent pediatric patients. MAC most commonly presents as cervical lymphadenitis, and the current literature states that, except for patients with cystic fibrosis, children very rarely develop pulmonary disease [5,7,8]. The diagnosis of mediastinal MAC infection in the pediatric population may be elusive. The clinical presentation varies greatly, and children often present with nonspecific symptoms such as cough, weight loss, shortness of breath, wheeze, unexplained fevers and occasionally hemoptysis [9]. Radiographic findings can include calcifications, isolated pulmonary nodules, mediastinal lymphadenopathy, parenchymal infiltrates or bronchial obstruction. Bronchoscopic findings range from normal anatomy and external compression through to lesions infiltrating the lumen. Microbiological and molecular confirmation of NTM from respiratory secretions or biopsies may not be reliable or sensitive in children, particularly in those with paucibacillary disease [2]. Histopathologic findings are likewise problematic to interpret, including nonspecific granulomatous inflammation that may be seen in NTM infection but also in M. tuberculosis infection, fungal infections, brucellosis and melioidosis [2]. The American Thoracic Society has proposed diagnostic criteria for adult NTM disease, incorporating the need for positive cultures on bronchial washings or sputum [2]. However, as in our cases, definitive positive cultures are exceptionally rare, especially in children [5]. Additionally, these guidelines have not been validated for pediatric patients, which adds to the diagnostic uncertainty in this population subgroup. Litman et al [2] stated that the American Thoracic Society criteria may need to be altered when treating pediatric patients, as only 30%-50% of pediatric patients will have positive cultures on gastric aspirates. While BAL has improved the rates of diagnosis in adults, its clinical utility in children does not correlate [2,9]. The challenge in making the diagnosis and recommending treatment necessitates collaborative input from otolaryngology, respiratory medicine, infectious diseases, pediatric surgery, cardiothoracic surgery, intensive care and anesthetics. Mediastinal NTM should be considered early in the workup of an unusual mediastinal soft tissue mass, to prevent deterioration from significant airway obstruction, local destruction, invasion and involvement of critical mediastinal structures. Once other pathologies such as neoplastic processes have been excluded, a high index of suspicion based on the clinical and radiological findings should be maintained, as despite extensive investigation and sampling, microbiological confirmation of NTM may not occur or may take a considerable length of time given the organism's slow-growing nature.
Parental counseling and involvement are important in decision-making, particularly with regard to the tolerability of multiagent therapy, side effect monitoring requirements, suitability for surgery (including access and risks), and modalities for surveillance. Controversy exists regarding the management of mediastinal NTM in pediatric patients. The effectiveness of, and need for, antimycobacterial therapy has not been well established in immunocompetent children [5]. Current medical management is aggressive and requires a 3- or 4-drug regimen for 12 months, tailored to the species and susceptibility tests (if known). Empirical therapy often includes 3 drugs for the first 1 to 2 months, pending clinical and/or radiographic improvement. Typical regimens incorporate a macrolide (azithromycin or clarithromycin), ethambutol and a rifamycin (rifampin or rifabutin) [2,7]. Aminoglycosides or fluoroquinolones may be substituted or added as fourth-line therapy for patients who develop breakthrough disease while receiving treatment, or for patients in whom there is concern about macrolide resistance. This is in line with the American Thoracic Society recommendations, which indicate that prolonged antimycobacterial therapy is necessary in adults with pulmonary infection caused by NTM; however, no standard therapeutic regimen is applicable to children [5]. Many of these drugs have known side effects, such as ototoxicity with prolonged amikacin use, QT prolongation or gastrointestinal effects. They may also be poorly tolerated and require frequent pathology tests, electrocardiograms, audiology testing and therapeutic drug monitoring. The children in our series were managed over a period of 14 years. Each had different social and medical considerations, which explains the variability in the choice of antibiotic therapy.
Surgery for mediastinal NTM may be necessary for diagnostic confirmation, to reduce disease burden, to relieve airway obstruction, or when medical management is not effective. Options include removal or debulking of affected lymph nodes, lobectomy, segmentectomy, wedge resection or even pneumonectomy, the latter choices being considered for extranodal disease [10,11]. Shiraishi et al demonstrated 97% and 88% relapse-free rates at 5 and 10 years, respectively, when surgical resection was combined with pre- and post-operative anti-NTM medication [12]. These findings are consistent with Lu et al, who found that surgical excision provides higher rates of cure in patients with focal lung involvement with NTM. The surgical approach in their study was either via an anterior or a posterior thoracotomy. Sputum clearance rates ranged from 84% to 100%, with better conversion rates if antibiotics were continued postoperatively [10]. The decision to progress to surgery should be tailored to the individual. Surgical excision of an inflammatory mass lying adjacent to critical structures remains challenging, but debulking of disease is almost always necessary to achieve cure. Our cases had nodal disease adjacent to the trachea, carina and bronchi. Damage to these structures could have necessitated challenging reconstruction and potentially prolonged intubation with extracorporeal membrane oxygenation support. MDT input was essential in the decision-making at several points in each patient's journey.
The best method and duration of follow-up for pediatric mediastinal NTM have not been determined. NTM infections in sites such as the cervical lymph nodes can be monitored with bedside assessment and ultrasound, whereas intrathoracic disease surveillance is more challenging and is likely to require general anesthesia and radiation exposure. Bronchoscopy, chest CT and magnetic resonance imaging were used in our patients depending on access to services and on clinician and parental preference. Using different modalities for subsequent assessments can make comparison difficult. Mediastinal nodal changes remained present in our cohort but were significantly improved over time and stable for several years. The natural history and burn-out of residual disease are unknown. All 4 of our patients have remained asymptomatic, with no long-term morbidity and promising growth and development at the time of reporting.
CONCLUSION
Mediastinal NTM remains a rare clinical entity in children, posing a diagnostic challenge with variable and nonspecific clinical presentations. Furthermore, the administration of antimycobacterial medical therapy in this age group is not straightforward, owing to the prolonged course of multiple medications and the need for regular monitoring for potential toxicities. A high clinical suspicion should be maintained based on the clinical and radiological findings once other conditions have been excluded. Surgery is an effective adjunct to the treatment of NTM and should be considered early in the course of the disease, for both diagnostic and therapeutic purposes. Targeted rather than radical surgery can successfully reduce disease burden with minimal long-term morbidity. When surgery is combined with pre- and post-operative anti-NTM medications, disease control rates are high. However, the precarious location of the disease can make surgery high-risk. Our cases reveal
FIGURE 1. Noncontrast coronal CT scan of the chest showing a speckled hyper-dense right paratracheal lesion with contiguous extension into the distal right tracheal lumen (arrow) near the take-off of the right main bronchus (A). A second, separate right parahilar lesion (arrow) had a similar speckled hyper-dense appearance (B).
FIGURE 2. Initial endoscopic view of the carina demonstrating a polypoid mass of the distal trachea causing near-complete occlusion of the entrance to the right main bronchus (A). Debulking using cupped forceps revealed a relatively avascular lesion with white spots and a soft consistency (B). Clearance of the lesion flush to the lateral tracheal wall improved airflow to the right lung (C).
FIGURE 3. Coronal CT chest with contrast showing a large subcarinal mass causing external compression of the left and right mainstem bronchi as well as the bronchus intermedius (A). Repeat CT chest 10 months following medical and surgical treatment (B).
FIGURE 4. Axial (A) and coronal (B) CT chest without contrast showing a mediastinal lesion with speckled calcifications causing anterior external compression of the left main bronchus.
FIGURE 5. Coronal CT chest with contrast (A) demonstrating a lesion in the right parahilar region eroding into the lumen of the right main bronchus despite 18 months of treatment. Repeat CT chest (B) 3 years after the initial presentation showing normalization of the right main bronchus with some persistent minor calcifications. | 2024-04-04T06:18:57.012Z | 2024-03-29T00:00:00.000 | {
"year": 2024,
"sha1": "ac3d5db4ec68b1bf0aaf314926bbb636ed797009",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-3280575/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "WoltersKluwer",
"pdf_hash": "a8d7c5798ebcd7db0e54e19d4198884123325917",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
100062667 | pes2o/s2orc | v3-fos-license | Rectangular and hexagonal doping of graphene with B, N, and O: a DFT study
First-principles density functional theory (DFT) calculations were carried out to investigate the rectangular and hexagonal doping of graphene with B, N, and O. In both of these configurations the dopants are incorporated at the same sublattice sites (A or B), yet the calculated band gap values are very different while the cohesive energies are nearly the same. In this study, the highest band gap value (1.68 eV) is achieved when a maximum of 4 O atoms are substituted at hexagonal positions, at the cost of a lower cohesive energy relative to the other studied systems. Hexagonal doping with 3 O atoms is significantly more efficient, in terms of opening the band gap and preserving structural stability, than rectangular doping with 4 O atoms. Our results show that higher band gap values can be induced with a smaller concentration of dopants and with better structural stability.
Introduction
The exceptional charge carrier mobility of graphene (10^6 cm^2 V^-1 s^-1) makes it highly desirable for use in semiconductor electronic devices [8]. Despite these distinctive properties, one major hurdle is the zero-gap character of graphene, which restricts its use in nanoelectronics. In this regard, band gap engineering of graphene is necessary [9]. Fortunately, this issue can be overcome in a number of ways. Graphene superstructures such as quantum dots [10,11], nanoribbons [12,13], and nanomeshes [14] can address the problem by inducing a quantum confinement effect, which leads to the opening of a band gap around the Dirac point. Furthermore, one simple and efficient technique to alter the electronic structure of graphene is substitutional doping, where C atoms are replaced by impurity atoms [15-20]. Graphene is usually doped with B and N atoms because these dopants are the neighbors of C. Moreover, with B and N dopants, the 2D geometry of graphene is retained due to the nearly equal covalent radii of these atoms. Additionally, graphene can be doped with Be, co-doped with Be-B and Be-N, or molecularly doped with BeO to change the electronic structure significantly [21,22]. Graphene has been doped with B, N, O, and F in a previous study to investigate its electronic properties, but that study was limited to a single dopant atom [17]. A systematic study on the doping of graphene with B and N can be found in ref. 23. Those authors studied different sites with varying concentrations of the dopants and found that, for maximum band gap opening in graphene, the dopants must be incorporated at the same sublattice positions (A or B). In our recent study, we investigated two types of doping configuration of Be in graphene, namely rectangular and hexagonal [21]. In that study, we discovered that, after selecting a suitable dopant, inducing higher band gaps requires not only employing the dopants at the same sublattice sites (A or B), but also choosing specific sites (i.e., hexagonal configurations). To the best of our knowledge, these rectangular and hexagonal configurations have not been reported in the literature for any atom(s) other than Be.
In this study, the doping of graphene with B, N, and O is investigated using DFT. We have chosen the previously investigated rectangular and hexagonal configurations for our doped graphene systems to examine the response of the electronic structure. The main aim of this study is to check the validity of these configurations for atoms other than Be (namely B, N, and O), and to obtain the optimum band gap of graphene with the minimum number of dopants.
Computational details
Pseudopotentials [25] were used to describe the core electrons. The GGA-PBE level of theory was used for electron-electron interactions [26]. The double zeta (DZ) basis set was selected, and the orbital confining cut-off was set to 0.01 Ry. For higher doping concentrations (9.375-12.5%), we performed vdW-DF [27] calculations complemented by the double zeta basis set with polarization (DZP) to investigate the magnetic moment, if any. The mesh cut-off value was fixed at 200 Ry for our 4 × 4 graphene supercell with periodic boundary conditions. The z-axis was set to 14 Å to avoid interactions between the layers. The Brillouin zone was sampled with 30 × 30 × 1 Monkhorst-Pack k-points. The optimization procedure used a conjugate gradient algorithm until all forces were less than 0.01 eV Å^-1. For the cohesive energy calculations, we used the following formula:
$$E_{\mathrm{coh}} = \frac{E_{\mathrm{tot}} - \sum_i E_i}{n} \qquad (1)$$
where $E_{\mathrm{coh}}$ is the cohesive energy per atom, $E_i$ and $E_{\mathrm{tot}}$ correspond to the energy of an individual element (the gas-phase energy) in the same supercell and the total energy of the system, respectively, and $n$ represents the total number of atoms in the supercell.
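As a minimal numerical sketch of Eq. (1): given the relaxed supercell total energy and the gas-phase atomic energies, the per-atom cohesive energy follows directly. The numbers below are placeholders, not values from this work; the only anchor from the paper is that a bound system gives a negative value (the authors report -9.53 eV per atom for pristine graphene).

```python
def cohesive_energy_per_atom(e_total, atomic_energies):
    """E_coh = (E_tot - sum_i E_i) / n, in eV per atom (Eq. 1 of the text).

    e_total         : total energy of the relaxed supercell (eV)
    atomic_energies : gas-phase energy of each atom in the same supercell (eV),
                      one entry per atom, so n = len(atomic_energies)
    """
    n = len(atomic_energies)
    return (e_total - sum(atomic_energies)) / n

# Hypothetical numbers for a 32-atom pristine graphene sheet; a bound system
# yields a negative value (the paper reports -9.53 eV per atom for graphene).
e_tot = -4939.0            # placeholder supercell total energy, eV
e_atoms = [-145.0] * 32    # placeholder gas-phase C atom energy, eV
print(cohesive_energy_per_atom(e_tot, e_atoms))   # about -9.3 eV per atom
```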
Results and discussion
First, we optimized our 4 × 4 graphene sheet to obtain a relaxed structure [23,28]. This relaxed geometry, with the corresponding band structure, is shown in Fig. S1 in the ESI. The optimized graphene sheet was doped with B, N, and O atoms in rectangular and hexagonal configurations at increasing concentrations, ranging from 3.125% to 12.5% (1-4 C atoms replaced by impurity atoms). In the rectangular configuration, the C atom(s) replaced by dopant(s) are denoted RD1-RD4 (hollow spheres), and in the hexagonal configuration the dopants are denoted HD1-HD4, as can be seen in Fig. 1. The upper two dopants (RD2, RD4) in the rectangular configuration are shifted along the positive x-axis by 2.46 Å (the lattice constant of graphene) relative to HD2 and HD4 of the hexagonal configuration. The results so obtained are presented below.
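For readers who want to visualize the two doping patterns, the sketch below reconstructs the 4 × 4 honeycomb supercell and the two substitution schemes. The lattice constant (2.46 Å), the 32-atom supercell, the same-sublattice placement, and the fact that ideal hexagonal doping with 4 dopants forms a 2 × 2 superlattice all come from the text; the specific (i, j) cell indices chosen below are an assumption made purely for illustration.

```python
import numpy as np

# Reconstruct a 4x4 graphene supercell (two-atom basis -> 32 C atoms).
a = 2.46                                    # graphene lattice constant, Angstrom
a1 = np.array([a, 0.0])
a2 = np.array([a / 2, a * np.sqrt(3) / 2])
basis = {"A": np.zeros(2), "B": (a1 + a2) / 3.0}

sites = {(i, j, s): i * a1 + j * a2 + basis[s]
         for i in range(4) for j in range(4) for s in ("A", "B")}
assert len(sites) == 32                     # 4x4 supercell -> 32 carbon atoms

# Hexagonal configuration: sublattice-A sites with both cell indices even form
# an ideal 2x2 superlattice (4 dopants = 12.5% doping), as described in the text.
hexagonal = [(i, j, "A") for i in (0, 2) for j in (0, 2)]

# Rectangular configuration (assumed indexing): the two "upper" dopants are
# shifted by one lattice vector (+a1, i.e. +2.46 A) relative to the hexagonal case.
rectangular = [(0, 0, "A"), (2, 0, "A"), (1, 2, "A"), (3, 2, "A")]

for name, conf in (("hexagonal", hexagonal), ("rectangular", rectangular)):
    coords = np.array([sites[k] for k in conf])
    print(name, "dopant positions (Angstrom):\n", np.round(coords, 2))
```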
B-doping
Initially, one C atom was replaced with one B atom, making the B concentration 3.125% in the host graphene. The geometry was fully relaxed to the required accuracy. The C-B bonds were enlarged to 1.50 Å due to the larger covalent radius of B (85 pm) compared to that of C (75 pm). Due to the electron-deficient character of B, the Fermi level underwent a downward shift of 0.78 eV from the Dirac point. The electronic band structure calculations show a band gap opening of 0.21 eV, as can be seen in Fig. 2. All of these values were found to be in good agreement with earlier findings [17,23]. After the satisfactory replication of these results, we started doping graphene with B at varying concentrations at the rectangular and hexagonal sites (Fig. 1). The geometries, along with their band structures, can be seen in Fig. S2-S7 in the ESI. The C atoms in a graphene sheet consisting of 32 atoms were substituted with 1 to 4 B atoms in the rectangular configuration, which caused a linear increase in the band gap values, ranging from 0.21 to 0.55 eV (Fig. 3). This linear increase in the band gap with an increasing percentage of B atoms can be achieved when all of the B atoms are placed in the graphene sheet at the same sublattice sites (A or B) [23]. Moreover, the band gap values can be increased significantly if the B dopants are incorporated at the hexagonal sites. On doping with 4 B atoms hexagonally, an abrupt increase in the band gap can be seen compared with rectangular doping with 4 B atoms. This is because the B dopants actually form a 2 × 2 superlattice in graphene, which can be regarded as ideal hexagonal doping. Furthermore, these configurations (rectangular and hexagonal) lead to the same geometry and structural stability, yet different band gap values are observed. Due to the larger covalent radius of B compared with C, an expansion of the unit cell is observed for B doping. Spin-polarized calculations reveal that only 4 B atom doping of graphene at hexagonal sites induced a magnetic moment, of 0.7 μB. These results indicate the efficacy of hexagonal doping.
N-doping
An N atom was doped into the graphene sheet, and the C-N bonds were found to be 1.42 Å in length after structural optimization. As the N atom is electron-rich relative to the C atom, the Fermi level is raised by 0.78 eV. The same band gap value of 0.21 eV is observed as in the case of the single B atom. Fig. S7-S14 in the ESI correspond to the N doping of graphene.
The number of N atoms in the graphene sheet was increased from 1 to 4 in the rectangular configuration. A linear rise in the band gap value is obtained (Fig. 4), similar to the B doping presented above. This linear rise was reported by Rani and Jindal [23] when N atoms were doped into graphene at the same sublattice sites (A or B). The band gap values can be enhanced significantly by incorporating the N atoms at hexagonal sites, comparable to the result of hexagonal doping with B discussed above. As with B doping, a higher band gap value can be achieved by doping N hexagonally, which also tends to the same stability as rectangular doping. Furthermore, a negligibly small reduction in unit cell size is observed, due to the smaller covalent radius of N compared with C. No magnetic moment was calculated for rectangular doping. However, hexagonal doping with 3 and 4 N atoms induced magnetic moments of 0.8 and 1.3 μB, respectively. The magnetic moment arising from 3 N atom hexagonal doping is greater than that from 4 B atom hexagonal doping.
O-doping
It is interesting to investigate the rectangular and hexagonal doping of graphene with O, as the O atom has two electrons more than C, which can be compared with the results obtained previously from Be doping (having two electrons fewer than C) [21]. Additionally, there has been no study of oxygen doping at specific sites in graphene. For this purpose, we initially doped graphene with a single O atom. The optimized C-O bonds were found to be 1.49 Å in length, comparable to the value of 1.50 Å obtained before [17]. The Fermi level moved upward by 0.58 eV. This insertion of O into graphene induced a band gap opening of 0.57 eV, which is slightly higher than the value of 0.50 eV calculated by Wu et al. [17] The doping concentration of O in graphene was increased from 3.125 to 12.5% (1-4 O) in the rectangular configuration. A linear rise in the band gap is observed with rectangular doping. However, an exponential rise in the band gap can be seen with the hexagonal doping of graphene with O. The band gap value increases enormously, from 1.03 to 1.68 eV, just by choosing specific dopant sites (hexagonal). This huge increase occurs because the dopants form a 2 × 2 superlattice in the graphene, which can be considered the ideal hexagonal doping configuration. This tendency of the band gap to increase linearly and exponentially is in agreement with Be doping at increasing doping concentration [21]. Moreover, the size of the unit cell is found to be the same as in pristine graphene, even at a high dopant concentration (12.5%). No magnetic moment was observed for any case (rectangular or hexagonal) at any level of dopant concentration.

Fig. 4 The relationship between N doping with increasing concentration at rectangular and hexagonal sites and the band gap values is plotted.

Fig. 5 The relationship between O doping with increasing concentration at rectangular and hexagonal sites and their respective band gap values. Rectangular doping causes a linear increase in the band gap, while hexagonal doping leads to an exponential rise.
The effect of doping concentration on structural stability is shown in Fig. 5 (right panel). The cohesive energies of N-doped graphene are higher than those of B- and O-doped graphene. The lowest cohesive energies are obtained for O doping, which at the same time gives rise to the highest band gap values (max. value = 1.68 eV) compared with B and N doping. An increase in dopant concentration gives rise to a higher band gap value and, at the same time, leads to a linear decrease in cohesive energy. All of the results are summarized in Table 1.
Conclusions
Electronic structure calculations for graphene doped with B, N, and O at rectangular and hexagonal sites were carried out using first-principles density functional theory (DFT). The dopant number was increased from 1 to 4 in a 4 × 4 graphene sheet. A linear increase in band gap values occurred with rectangular doping, while an exponential rise in band gaps is seen with the hexagonal configuration of dopants in graphene. This difference in the band gaps obtained for the different configurations is most prominent for O doping, which is comparable to Be doping [21], as these atoms have two electrons more and fewer, respectively, than the C atom. The band gap obtained with 3 O atoms doped at hexagonal sites is substantially greater than that with 4 O atoms doped at rectangular sites, hence providing the opportunity to induce a higher band gap with better structural stability. Furthermore, for hexagonal doping with 4 B, 3 N, and 4 N atoms, we observed magnetic moments at the vdW-DF/DZP level of theory. No magnetic moment was observed for O doping. This shows the superiority of hexagonal site doping over rectangular site doping. Our results offer the possibility of obtaining a higher band gap, with higher structural stability, using a smaller amount of dopants.
Fig. 1 The red hollow spheres, denoted RD1-RD4, corresponding to rectangular doping are presented in (a). The hexagonal configuration is shown by the blue hollow spheres (HD1-HD4) pictured in (b). A and B indicate the sublattice sites A and B.
Fig. 2 Optimized geometry of a 4 × 4 graphene sheet doped with a single B atom (a), along with the corresponding band structure (b). The Fermi level is set to the zero of the energy scale.
Fig. 3 The relationship between B doping with increasing concentration at rectangular and hexagonal sites and the respective band gap values is plotted.
Table 1
Summary of the calculations performed for B, N, and O doping in graphene.^a (^a The calculated value of the cohesive energy of graphene is -9.53 eV per atom.) | 2019-04-08T13:12:37.678Z | 2017-03-09T00:00:00.000 | {
"year": 2017,
"sha1": "19a0326671e2ad20107b9e357f77202e0b1a8157",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2017/ra/c6ra28837e",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "19a0326671e2ad20107b9e357f77202e0b1a8157",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
210977645 | pes2o/s2orc | v3-fos-license | Pulmonary Function Test Disorders in Rheumatoid Arthritis Patients – A Hospital Based Study
In a hospital-based case-control study, 50 patients with RA and 50 healthy controls were studied with respect to their demography, disease activity, disease duration and pulmonary function tests (PFT). The mean age of the rheumatoid arthritis (RA) patients and controls was 46.86±10.51 and 46.68±10.65 years, respectively. Most patients and controls were in the 40-49 and 50-59 year age groups. The RA group included 16 (32%) smokers and the control group 6 (12%) (p=0.016). Mean BMI in the RA patients was significantly lower than in controls (p=0.027). RA patients had a significantly lower mean FVC% than controls (p<0.001). Mean FEV1% was also significantly lower in RA patients than in controls (p<0.001). Mean FEV1/FVC in RA patients was significantly reduced compared with controls (p=0.012). However, the mean disease duration of 5.98±4.3 years showed no significant association with PFT abnormality, and mean DAS28 likewise showed no relation to PFT abnormality. Obstructive and restrictive pulmonary dysfunction was found in 14 (28%) and 19 (38%) of RA patients, respectively, compared with 0% and 2 (4%) of healthy controls (p<0.001). The use of drugs such as methotrexate, leflunomide and others in RA patients showed no significant association with lung function abnormality. Spirometric indices such as FVC, FEV1 and FEV1/FVC were found to be further reduced in smoking RA patients compared with RA patients independent of smoking. From these observations it can be concluded that pulmonary function testing may pick up pulmonary dysfunction in RA patients early and confer a chance of early intervention.
INTRODUCTION
Rheumatoid arthritis is a chronic, systemic, inflammatory disorder primarily affecting the joints. It may result in painful, deformed joints and loss of function. Signs and symptoms may also involve other organs. RA affects about 0.5% to 1% of adults in the developed world, with 5 to 50 per 100,000 people newly developing the condition each year [1]. Onset is uncommon under the age of 15, and from then on the incidence rises with age until the age of 80. Women are affected 3 to 5 times as often as men [2]. Women usually develop the disease between 40 and 50 years of age, and men somewhat later [3]. RA primarily affects the joints, but other organs are also affected in more than 15-20% of individuals. Commonly affected systems include the skin, lungs, kidneys, heart, blood vessels, eyes, liver, blood, brain and bones. Constitutional symptoms of the disease, such as fatigue, low-grade fever, morning stiffness, loss of appetite and loss of weight, are common manifestations in people with active RA [4]. RA is often associated with pleural disease (20-40%), interstitial pneumonitis (5-10%), nodules (1%), interstitial fibrosis, bronchiolitis obliterans, organizing pneumonia and pulmonary vasculitis [5]. Interstitial lung disease (ILD) is not only the most common but also the most serious form of lung involvement in RA, and its reported incidence varies from 1-58% [6-10]. ILD is clinically detected in less than 5% of RA patients [11], although studies using HRCT have shown a much higher prevalence [12], which correlates closely with the results of open lung biopsy in other connective tissue disorders [13], although less well with PFTs [14]. Limited evidence suggests that a relatively small number of rheumatoid arthritis patients die from respiratory failure [15,16]. Tools to monitor remission in RA include the DAS28, the ACR-EULAR criteria, the Provisional Definition of Remission of Rheumatoid Arthritis, the Simplified Disease Activity Index (SDAI) and the Clinical Disease Activity Index (CDAI) [17].
Pulmonary involvement is common in RA and is among the most severe extra-articular manifestations, ranking as the second cause of mortality in this population. RA can affect the lung parenchyma, airways and pleura, and pulmonary disease is responsible for 10-20% of all mortality in these patients. Spirometry, being widely available, can be utilized to screen and monitor RA patients to detect PFT abnormality early, as most of these patients are asymptomatic for a long time; this will help in early intervention.
Although RA is widely encountered in medical and pulmonary clinics, there is a paucity of literature on the prevalence of spirometric abnormalities in these patients, on whether early spirometric abnormalities could point to an underlying disorder, and on the causation of such abnormalities. The current study was designed against this backdrop to examine the presence of spirometric abnormalities in patients with RA and to correlate these abnormalities with other indicators of disease severity. The study aims to implement any positive results for following our RA patient population and managing their disease in better fashion.
MATERIALS AND METHODS
This study was carried out in the Department of General Medicine and Division of Rheumatology at SKIMS Soura from December 2013 to December 2014. Written informed consent was obtained from the study participants. The case group included diagnosed RA patients aged above 20 years, of both genders.
Individuals with any collagen vascular/autoimmune disease, exposure to dusts such as asbestos or silica, any lung disease, recent surgery or unstable cardiovascular status, and women in the last trimester of pregnancy, were excluded from the study. The evaluation of all studied individuals included a haemogram with ESR, C-reactive protein, rheumatoid factor, chest X-ray, electrocardiogram and spirometry according to ATS/ERS criteria.
The parameters measured by spirometry, on the basis of which different lung function abnormalities could be detected, were forced vital capacity (FVC), forced expiratory volume in the first second (FEV1), forced expiratory flow (FEF), peak expiratory flow (PEF), total lung capacity (TLC) and residual volume (RV). A five-step approach to spirometric interpretation was applied for the diagnosis of the different lung function abnormalities, as sketched below. Before spirometry, baseline parameters such as height, weight, blood pressure, respiratory rate, SpO2 and pulse were measured, and the spirometric procedure was repeated 15-20 minutes after inhalation of a bronchodilator (mainly salbutamol). Individuals with a DAS28 score of 3.2 or more were considered to have active disease.
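The following is a hedged sketch of the two calculations this section relies on. The DAS28 formula shown is the standard DAS28-ESR form (the paper does not state which DAS28 variant was used), and the pattern classifier is a simplified version of the usual spirometric logic; the authors' exact five-step algorithm and cutoffs are not published, so the 0.70 ratio and 80% FVC thresholds below are conventional assumptions.

```python
import math

def das28_esr(tender28, swollen28, esr_mm_hr, gh_vas_0_100):
    """Standard DAS28-ESR: 0.56*sqrt(TJC28) + 0.28*sqrt(SJC28)
    + 0.70*ln(ESR) + 0.014*GH (GH on a 0-100 visual analogue scale)."""
    return (0.56 * math.sqrt(tender28) + 0.28 * math.sqrt(swollen28)
            + 0.70 * math.log(esr_mm_hr) + 0.014 * gh_vas_0_100)

def is_active_ra(das28):
    """Study definition: DAS28 >= 3.2 means active disease."""
    return das28 >= 3.2

def spirometry_pattern(fvc_pct_pred, fev1_fvc_ratio):
    """Simplified pattern call (assumed conventional cutoffs): obstructive if
    FEV1/FVC is reduced, restrictive if the ratio is preserved but FVC is low."""
    if fev1_fvc_ratio < 0.70:
        return "obstructive"
    if fvc_pct_pred < 80.0:
        return "restrictive"
    return "normal"

# Hypothetical patient: moderately active RA with a restrictive pattern.
d = das28_esr(tender28=6, swollen28=4, esr_mm_hr=30, gh_vas_0_100=50)
print(round(d, 2), is_active_ra(d))                     # about 5.01, True
print(spirometry_pattern(fvc_pct_pred=70, fev1_fvc_ratio=0.82))  # restrictive
```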
RESULTS AND OBSERVATIONS
This study included 50 RA patients and 50 controls. The mean age of cases was 46.86±10.51 years, compared with 46.68±10.65 years in controls (p=0.932), a non-significant difference. In both the case and control groups, the number of females was 36 (72%). The proportion of smokers among the rheumatoid arthritis patients was 32%, compared with 12% in the control group, a significant difference (p=0.016). The control group had a higher mean BMI than the case group (p=0.027, significant). On comparing the spirometric parameters, the mean percentage of predicted FVC was significantly lower in cases than in controls (p<0.001). The mean percentage of predicted FEV1 was also significantly lower in rheumatoid arthritis patients than in controls (p<0.001) (Figure 1). Mean FEV1/FVC was likewise significantly lower in RA patients than in controls (p=0.012). The mean duration of RA disease was 5.98±4.3 years. The mean Disease Activity Score of 28 joints (DAS28) in RA patients was 3.9±1.1, implying that, overall, the rheumatoid arthritis was active. On the basis of individual DAS28 scores, active RA was found in 76% of cases. On comparing DAS28 values with spirometric parameters, 63% of patients with active disease had abnormal PFT, of whom 23.7% had obstructive-type and 39.5% had restrictive-type pulmonary dysfunction (Table 1).
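As a hedged cross-check of the smoking comparison above: the paper reports 16/50 smokers in the RA group versus 6/50 in controls, with p=0.016, but does not name the statistical test used. A chi-squared test on the 2×2 table without continuity correction is one conventional choice that yields a p-value close to the reported figure; the snippet below is illustrative only, not a claim about the authors' actual analysis.

```python
from scipy.stats import chi2_contingency

# 2x2 table of smokers vs non-smokers, taken from the reported counts.
table = [[16, 34],   # RA group:      16 smokers, 34 non-smokers
         [6, 44]]    # control group:  6 smokers, 44 non-smokers

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")  # p is close to 0.016
```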
Fig-1: Error bar showing FEV1 in study subjects: cases versus controls
The commonest symptom in RA patients was cough, followed by reflux syndrome and breathlessness. On comparing PFT abnormality between cases and controls, an obstructive abnormality was found in 14 (28%) of the RA group compared with 0% of the control group, and a restrictive abnormality in 19 (38%) of RA patients compared with 2 (4%) of controls, a statistically highly significant difference (p<0.001). However, intake of drugs such as methotrexate and leflunomide showed no significant association with lung function abnormalities, and lung function abnormalities showed no significant relation to the duration of RA in the case group. This study provided good evidence that the reductions in spirometric indices such as FVC, FEV1 and FEV1/FVC were associated with RA independent of smoking.
DISCUSSION
Our study was primarily designed to evaluate lung function abnormalities in rheumatoid arthritis (RA) patients. Many parameters were evaluated, and our observations were consistent with other studies. The mean age of the RA patients was consistent with the literature [18]. Smoking was found in 32% of the RA group and 12% of the control group, a statistically significant difference (p=0.016); similar results have been reported in the literature [19,20]. The significantly lower BMI of RA patients relative to the control group is understandable given the debilitating nature of RA. Well-characterized pulmonary disorders found in RA patients include pleural effusion, rheumatoid nodules, pulmonary fibrosis and Caplan's syndrome [21,22]. The existence of a specific airway obstruction is a subject of debate: we found an obstructive abnormality in 14 (28%) and a restrictive abnormality in 19 (38%) of RA patients, compared with 0% and 2 (4%) in the control group, respectively (p<0.001); however, Novet et al. found obstructive disorders in 50% of their RA cases. Like other studies, we also found a significant decrease in spirometric parameters such as FEV1, FVC and FEV1/FVC in RA patients compared with controls, and a significant reduction in the same parameters persisted when the case group was adjusted for smoking. Respiratory disorder in RA can be due to various factors, including underlying bronchial hyperreactivity [23], abnormalities in the distal bronchioles [24], an association with alpha-1 antitrypsin deficiency [25], recurrent respiratory infection [26] or treatment with penicillamine [27], although the relationship needs to be established in future epidemiological studies. No significant relation was observed between RA disease duration and pulmonary function abnormality in our study, which is in harmony with Avnon et al. [28] but different from the findings of Vergnenegre et al. [29]. Similarly, no relation was found between RA disease activity and spirometric indices, in agreement with the literature [29] but different from the observation of Tariq Al Assadi [30], whose sample size was quite small. Although methotrexate and leflunomide can affect the lungs in RA patients [31,32], our observations did not find any significant relation between the two, in line with the literature [33]. Significant restrictive and obstructive spirometric defects in the RA group correlated with disease activity.
Our study is limited by a small sample size and by the possibility of a pre-existing disorder that could confound the final analysis. Radiological evaluation, bronchoscopy or other invasive diagnostic tests such as lung biopsy, which were not part of the current study, could be added to the evaluation scheme to better understand the profile of lung diseases in RA patients in the Kashmiri population.
CONCLUSION
The combination of smoking and progressive rheumatoid arthritis in this study population led to abnormal pulmonary function. Progressive RA is also a responsible factor for the lowering of BMI in affected individuals. Early diagnosis of RA, aided by better diagnostic evaluation, is necessary in order to protect patients from the debilitating effects of this disease. | 2020-01-30T20:44:47.683Z | 2020-01-25T00:00:00.000 | {
"year": 2020,
"sha1": "da9038b5e4aa335dbde03d11e9fcf71155d8ef2c",
"oa_license": null,
"oa_url": "https://doi.org/10.36347/sjams.2020.v08i01.046",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1285600ba865fdf88b92391dd85393da313f9131",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14174112 | pes2o/s2orc | v3-fos-license | Synthesis and Reactivity of Novel Boranes Derived from Bulky Salicylaldimines: the Molecular Structure of a Maltolato Compound
Reductive amination of salicylaldehyde or 3,5-di-tert-butylsalicylaldehyde and 1-adamantylamine using NaBH4 gave the corresponding aminoalcohols in high yields. Subsequent addition of one equivalent of H3B·SMe2 to the aminoalcohols, with loss of two equivalents of dihydrogen, resulted in the formation of adamantyl oxazaborinanes (1a,b). The molecular structure of 1b was determined by a single crystal X-ray diffraction study; crystals were obtained from a saturated Et2O solution and belong to the triclinic space group Pī. The molecular structure of the addition product (2a) arising from maltol and 1a was also confirmed by single crystal X-ray diffraction. Crystals were obtained from a saturated 1:2 mixture of toluene/Et2O and belong to the orthorhombic space group Pna2(1) with unit cell parameters a = 18.519(6) Å, b = 17.315(5) Å and c = 12.680(4) Å. The asymmetric unit contains two molecules that differ slightly in some of the dihedral angles.
Introduction
The hydroboration reaction involves the addition of a B-H bond in hydroboranes to unsaturated molecules and has become an important method of reducing alkenes, alkynes, ketones and imines in organic synthesis [1]. The resulting organoboron products are remarkably versatile synthons that can be transformed into diverse families of important compounds (Figure 1). However, difficulties associated with the synthesis, cost, handling and storage of hydroboranes have continued to plague their practical use at an industrial scale [2]. As such, a considerable amount of research has been addressed at synthesizing novel boranes for the hydroboration reaction [3]. Commonly used boranes in both uncatalyzed and catalyzed hydroboration reactions are pinacolborane (HBO2C2Me4) and catecholborane (HBO2C6H4), both of which are derived from 1,2-diols. Likewise, our recent efforts in this area have involved the synthesis of sterically-encumbered dioxaborolanes derived from commercially-accessible 1,2-diols that are easily handled and stored, for the selective addition to alkenes and alkynes [4,5]. Much less studied, however, are hydroboranes based on amines and aminoalcohols. As part of this present study, we have begun to generate a family of oxazaborinanes derived from readily-prepared aminoalcohols based upon the salicylaldimine structural motif. Related oxazaborolidines containing a B-H bond are of interest as these simple compounds can act either as hydroboration reagents or as Lewis-acid catalysts for a wide range of transformations, especially in the Corey-Bakshi-Shibata reduction of ketones using borane [6]. Fine-tuning the electronic and steric components of the aminoalcohol backbone will allow us to alter the behavior of the resulting oxazaborinane. We have initially prepared novel oxazaborinanes derived from 1-adamantylamine, where the large adamantyl group should provide steric protection for the Lewis-acidic boron atom; the results of our study are described herein.
Synthesis and Reactivity
Addition of salicylaldehyde, or 3,5-di-tert-butylsalicylaldehyde, to 1-adamantylamine followed by a reductive amination step using excess sodium borohydride in methanol gave the corresponding aminoalcohols in high yields [7,8]. Drop-wise addition of a solution of H3B·SMe2 at room temperature in toluene to the aminoalcohols gave the desired oxazaborinanes 1a,b selectively (Figure 2). Compounds 1a and 1b have been characterized using a number of physical methods including multinuclear NMR spectroscopy. For instance, oxazaborinane 1a displays a doublet in the 11B-NMR spectrum at δ 25.5 ppm with a B-H coupling of J = 150.2 Hz, similar to that reported previously for the related oxazaborolidine derived from (1R,2S)-(−)-ephedrine, which is observed at δ 25 as a doublet with a B-H coupling of J = 160 Hz [9]. Interestingly, only a broad singlet is observed in the 11B-NMR spectrum for 1b, suggesting dynamic behavior presumably arising from the large tert-butyl groups. Proton and carbon NMR data are as expected and do not change much upon formation of the corresponding oxazaborinanes. Initial reactivity studies were conducted on the less-hindered compound 1a, and this species failed to add to ketones, alkenes or alkynes, even in reactions at elevated temperatures or those done in a microwave reactor. Conversely, reactions of 1a with benzaldehyde slowly gave the corresponding addition product, albeit with remarkably low conversions, limiting its use as a borane for the hydroboration reaction. Attempts to use a number of transition metals to facilitate this reaction also proved unsuccessful [10].
One of the key intermediates in the use of oxazaborolidines as Lewis-acid catalysts in the Corey-Bakshi-Shibata reduction of ketones using borane involves coordination of the ketone to boron's empty p-type orbital. To investigate the use of 1a as a potential Lewis-acid catalyst, we then examined the addition of maltol (3-hydroxy-2-methyl-4H-pyran-4-one), a natural food additive, to see if any adduct formation was observed. Indeed, we were surprised to observe small shifts in both the 1H and 13C-NMR spectra, but more diagnostic was the change in the 11B-NMR spectrum, which now showed a peak at δ 0.4 ppm, signifying that the boron atom lies in a four-coordinate environment. The observation that the doublet is still present, with a B-H coupling of J = 108.1 Hz, demonstrates that the B-H bond does not react with the alcohol O-H group to give a new borate with an NBO2 environment along with evolution of dihydrogen. This result suggests the potential of using these new oxazaborinanes as Lewis-acid catalysts.
Molecular Structures
Although we were unable to obtain single crystals of 1a suitable for a single crystal X-ray diffraction study, crystals of 1b were obtained readily from Et2O at −30 °C; the molecular structure is shown in Figure 3a. Bond distances within the aromatic and adamantyl rings are as expected, and crystallographic data are listed in Table 1. Of note are the B(1)-N(1) distance of 1.3944(17) Å and the B(1)-O(1) length of 1.3953(16) Å, which are typical for related oxazaborolidines [11]. The angles around boron, with N(1)-B(1)-O(1) of 120.90(11)°, indicate that the boron atom is trigonal planar (Figure 3b). The angles around the nitrogen atom, with B(1)-N(1)-C(1) 116.80(10)°, B(1)-N(1)-C(16) 126.88(10)° and C(1)-N(1)-C(16) 116.32(9)°, suggest the amine group is between a tetrahedral and trigonal planar environment, which is expected if there is some degree of dative bonding of the nitrogen lone pair into the empty p-type orbital of boron. We were also fortunate enough to obtain single crystals from the reaction of 1a with maltol to give 2a; the molecular structure is shown in Figure 4. Two independent molecules of 2a were found within the unit cell. Although co-crystallization has been the subject of recent interest, particularly in the area of pharmaceutical chemistry [12], it appears that 1a has reacted with maltol via O-H bond activation. Indeed, coordination of the maltol group to the Lewis-acidic boron atom occurs through the alcohol oxygen atom and not the ketone group. More important, however, is that the boron-nitrogen bond distance in this molecule is significantly elongated, with a distance of B(1)-N(1) 1.634(8) Å compared to 1.3944(17) Å found in 1b. These data all support a structural form where the O-H bond of the alcohol has been broken and either a formal positive charge could be placed on the nitrogen atom with a formal negative charge residing on the four-coordinate boron atom or, more likely, the molecule exists as a Lewis acid-base adduct (Figure 5). For comparison, the boron-nitrogen bond distance in a structurally-related compound derived from B(OMe)3, 3,5-di-tert-butylsalicylaldehyde and aniline is 1.628(4) Å, which contains a true dative bond [13]. Hydrogen atoms were found in Fourier difference maps and refined. These clearly indicate N-H bonds with distances of N(1)-H(1D) 0.91(6) and N(2)-H(2D) 0.96(7) Å, respectively. Intramolecular hydrogen bonding with the corresponding oxygen atoms is present, leading to near-linear hydrogen bonds with distances of B(1)-H(1C) 1.14(5) and B(2)-H(2C) 1.21(5) Å, respectively. The formation of 2a presumably arises from initial coordination of the alcohol oxygen to the Lewis-acidic boron atom.
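As a side note on where such geometric parameters come from, the sketch below computes a bond length and the N(1)-B(1)-O(1) angle from Cartesian atomic coordinates. The coordinates are hypothetical stand-ins chosen only to roughly reproduce the values quoted for 1b; they are not taken from the deposited CIF.

```python
# Hedged sketch with hypothetical coordinates (Angstroms), not from CCDC 1039702.
import numpy as np

b1 = np.array([0.000, 0.000, 0.000])    # B(1)
n1 = np.array([1.212, 0.697, 0.000])    # N(1)
o1 = np.array([-1.218, 0.681, 0.000])   # O(1)

def distance(p, q):
    return np.linalg.norm(p - q)

def angle(p, center, q):
    u, v = p - center, q - center
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

print(f"B(1)-N(1) = {distance(b1, n1):.4f} A")          # ~1.398, cf. 1.3944(17) A
print(f"N(1)-B(1)-O(1) = {angle(n1, b1, o1):.2f} deg")  # ~120.9, cf. 120.90(11) deg
```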
Synthesis of 2a
A solution of maltol (47 mg, 0.37 mmol) in toluene (3 mL) was added dropwise to a stirred solution of 1a (100 mg, 0.37 mmol) in toluene (1 mL). The mixture was stirred at RT for 18 h, at which point the solvent was removed under vacuum, affording an oily brown solid. The residue was dissolved in THF (2 mL) and cooled to −30 °C for several hours. The solution was then filtered quickly to remove impurities and the filtrate brought to dryness. Yield: 141 mg (96%) of a waxy yellow solid.
X-ray Crystallography
Single crystals of 1b and 2a were coated with Paratone-N oil, mounted using a polyimide MicroMount and frozen in the cold nitrogen stream of the goniometer. A hemisphere of data was collected on a Bruker AXS P4/SMART 1000 diffractometer using ω and ϕ scans with a scan width of 0.3° and 10 s (1b) and 30 s (2a) exposure times. The detector distance was 5 cm. The data were reduced (SAINT) [14] and corrected for absorption (SADABS) [15]. The structures were solved by direct methods and refined by full-matrix least squares on F² (SHELXTL) [16]. All non-hydrogen atoms were refined using anisotropic displacement parameters. For 2a, the N-H and B-H hydrogen atoms were found in Fourier difference maps and refined using isotropic displacement parameters. The remainder of the hydrogen atoms for 2a were included in calculated positions and refined using a riding model. Due to the light-atom nature of the compound, determination of absolute configuration was impossible, and Friedel opposites were merged in final refinement runs. The asymmetric unit contains two independent molecules that differ slightly in some of the dihedral angles. Crystallographic data for 1b and 2a have been deposited with the Cambridge Crystallographic Data Centre as supplementary publication numbers CCDC 1039702 and 1039703, respectively.
Conclusions
We have readily prepared two new salicylaldimine-based oxazaborinanes derived from adamantylamine and characterized them using multinuclear NMR spectroscopy, as well as a single crystal X-ray diffraction study for one compound. While initial results show the potential for these oxazaborinanes as reagents for the hydroboration reaction, it is obvious that the steric and electronic environments of the molecules must be altered to improve efficacy. Future work will concentrate on trying to make the boron atom more electrophilic by using salicylaldimines and amines containing electron-withdrawing groups. We also provide evidence for the coordination and activation of alcohols by these oxazaborinanes, and we will examine chiral variants of these compounds as catalysts for the Corey-Bakshi-Shibata reduction of ketones, the results of which will be presented in due course.
Figure 1. Hydroboration of alkenes and subsequent transformation to generate a wide array of compounds.
Table 1. Crystal data and structure refinement details for compounds 1b and 2a. | 2015-09-18T23:22:04.000Z | 2015-02-05T00:00:00.000 | {
"year": 2015,
"sha1": "d8aa1656c657a42ba0f802a3f992be50bfd8a151",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4352/5/1/91/pdf?version=1423209228",
"oa_status": "GOLD",
"pdf_src": "Crawler",
"pdf_hash": "d8aa1656c657a42ba0f802a3f992be50bfd8a151",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
253983663 | pes2o/s2orc | v3-fos-license | The impact of HLA-G, LILRB1 and LILRB2 gene polymorphisms on susceptibility to and severity of endometriosis
Endometriosis is a disease in which endometriotic tissue occurs outside the uterus. Its pathogenesis is still unknown. The most widespread hypothesis claims that ectopic endometrium appears as a result of retrograde menstruation and its insufficient elimination by immunocytes. Some reports have shown expression of non-classical HLA-G molecules on ectopic endometrium. HLA-G is recognized by KIR2DL4, LILRB1 and LILRB2 receptors on natural killer (NK) and other cells. These receptors are polymorphic, which may affect their activity. In this study we investigated whether HLA-G, KIR2DL4, LILRB1 and LILRB2 polymorphisms may influence susceptibility to endometriosis and disease progression. We used polymerase chain reaction (PCR), PCR-restriction fragment length polymorphism (PCR-RFLP) and allelic discrimination methods with TaqMan SNP Genotyping Assays for typing of 276 patients with endometriosis and 314 healthy fertile women. The HLA-G rs1632947:GG genotype was associated with protection against the disease and its severe stages; HLA-G rs1233334:CT protected against progression; LILRB1 rs41308748:AA and LILRB2 rs383369:AG predisposed to the disease and its progression. No effect of KIR2DL4 polymorphism was observed. These results support the role of polymorphisms of HLA-G and its receptors LILRB1 and LILRB2 in susceptibility to endometriosis and its progression.
Introduction
Endometriosis is an estrogen-dependent gynecological disease, affecting about 10% of women of reproductive age. It is associated with the occurrence of endometrium outside the uterus. Endometriotic lesions can be found mainly in the ovaries and pelvic peritoneum, but also in the rectovaginal septum and at more distant locations such as the lung, liver and pancreas, and even in scars after operative surgery (Ahn et al. 2015; Serdar and Bulun 2009; Gupta et al. 2016; Parkin and Fazleabas 2016; Vercellini et al. 2014). In addition, endometriotic lesions may undergo malignant transformation (Worley et al. 2013). The etiopathology of endometriosis is still poorly understood. One hypothesis of endometriosis development is Sampson's theory of retrograde menstruation (Sampson 1927; Dastur et al. 2010). According to this theory, retrograde menstruation may result in implantation, survival and growth of endometrial cell foci in the peritoneal cavity. The mechanism(s) of this phenomenon is unknown; it is plausible, however, that it may be due to insufficient elimination of endometrial cells by the local immune system. Indeed, women with endometriosis were found to have reduced activity of natural killer (NK) cells (Oosterlynck et al. 1992; Maeda et al. 2012; Eidukaite and Tamosiunas 2008; Tariverdian et al. 2009). These cells, granular cytotoxic lymphocytes, have been found not only in the peripheral blood but also in the peritoneal fluid (Eidukaite and Tamosiunas 2008; Králíčková and Vetvicka 2015; Kawashima et al. 2009). A defect of NK activity in the recognition and lysis of implanted endometrial cells may thus be one of the crucial mechanisms in the initiation and progression of endometriosis. NK cell activity is regulated by different receptors, with activating or inhibitory action, such as killer immunoglobulin-like receptors (KIRs) and leukocyte immunoglobulin-like receptors (LILRs) (Maeda et al. 2012; Králíčková and Vetvicka 2015; Borges et al. 1997; van der Touw et al. 2017). KIR and LILR recognize class I human leukocyte antigens (HLA), among them HLA-G. HLA-G is expressed by placental trophoblasts, and it is known as a crucial factor in maintaining pregnancy. However, it may also be expressed on ectopic endometrial tissue in the peritoneal cavity and be recognized by immune cells via its receptors: KIR2DL4 (of both inhibitory and activating potential) and the inhibitory LILRB1 and LILRB2 (Maeda et al. 2012; Kawashima et al. 2009; Wang et al. 2008; Hudson and Allen 2016; Kang et al. 2016). Moreover, HLA-G up-regulates LILRB1, LILRB2 and KIR2DL4 expression in antigen-presenting cells, NK cells and T cells (LeMaoult et al. 2005).
Previous GWA studies of endometriosis have implicated WNT (wingless-type MMTV integration site) signaling and oestrogen responsive genes, genes involved in the actin cytoskeleton and cellular adhesion (Rahmioglu et al. 2014;Nyholt et al. 2012), the CDKN2BAS locus encoding the cyclin-dependent kinase inhibitor 2B antisense RNA (Uno et al. 2010), and four single nucleotide polymorphisms (SNPs) located in and around interleukin 1α (Adachi et al. 2010). Most of the identified GWAS variants were non-coding. The most recently published studies by Sapkota et al. (2017a, b) have evaluated the potential role of coding variants in endometriosis risk by large exome-array analysis. However, their results did not identify any coding variants with MAF > 0.01, with moderate or large effect sizes in endometriosis pathogenesis. They provide genome-wide significant evidence for association with a splice variant (rs13394619) in the GREB1 (Growth Regulation By Estrogen In Breast Cancer 1) locus in women with European ancestry. Moreover, the 19 SNPs identified in endometriosis explain up to 5.19% of variance in endometriosis, suggesting that many more variants remain to be detected. On the other hand, we focused rather on genes important for innate immune response. In our previous paper we found an association of NK cell receptor KIR2DS5 gene and its potential ligand HLA-C C2 with endometriosis (Nowak et al. 2015a). Here, we analyzed other genes which may be involved in immune control of extra-uteral endometrial tissue. We examined the SNPs which may be associated with gene expression or splicing and therefore they could have potential influence on the receptor-ligand interaction between immune cells and ectopic endometrium.
Therefore, the aim of this retrospective study was to evaluate the association of the SNPs in genes coding for KIR2DL4, LILRB1 and LILRB2 receptors and their ligand HLA-G with susceptibility to and severity of endometriosis as potential non-invasive markers for the diagnosis of this disease.
Study groups
The present study included 590 women from the Polish population who were enrolled during the period from 2005 to 2016. The study was approved by the Local Bioethics Committees at the Medical University of Wroclaw, Polish Mothers' Memorial Hospital-Research Institute in Łódź, and the Medical University of Warsaw, Poland. Informed consent was obtained from all individual participants included in the study.
Endometriosis was diagnosed in 276 women. The patients were recruited at several Polish clinics: the First and Second Department of Obstetrics and Gynecology, Medical University of Warsaw; the Department of Surgical, Endoscopic and Oncologic Gynecology and the Department of Gynecology and Gynecologic Oncology in Polish Mothers' Memorial Hospital-Research Institute in Łódź; and Gameta Hospital in Rzgów. The mean age of affected women was 33.02 ± 7.03 years. The diagnosis was based on laparoscopic surgery and confirmed by histopathological examination.
The patients were classified and analyzed according to the stage of the disease (American Fertility Society 1985) or according to the localization of the endometriotic lesions (Fig. 1). For 22 patients with endometriosis, detailed information on rAFS stage and lesion localization was not available.
The control group consisted of 314 fertile women. Among them 219 had at least two healthy-born children with the same partner without a history of spontaneous miscarriage and immunological or endocrinological diseases. Ninety-five women had at least one child. The mean age of fertile patients was 32.29 ± 5.81 years. The control group was recruited in the First Chair and Clinic of Obstetrics and Gynecology and the Department of Medical Genetics, University of Warsaw.
DNA preparation and genotyping
Genomic DNA was isolated from 5 mL of the peripheral blood samples collected during the patient's admittance to the hospital using the Invisorb Spin Blood Midi Kit (Invitek, Berlin, Germany) according to the producer's instructions.
HLA-G genotyping was conducted at three sequence positions. To detect the 14 base pair insertion/deletion (rs371194629:c.*65_*66insATT TGT TCA TGC CT) in the 3′ untranslated region (UTR) we used the polymerase chain reaction with sequence-specific primers (PCR-SSP) method. The rs1632947:A>G polymorphism was distinguished by real-time PCR. Details of the genotyping of these two polymorphisms have been previously described by Wiśniewski et al. (2010, 2015). The genotyping of the triallelic rs1233334:G>C/T was performed on a 7300 Real-Time PCR System (Applied Biosystems) using an Applied Biosystems (Foster City, CA) ready-made Assay-on-Demand including two primers, forward 5′-ACT GTC TGG GAA AGT GAA ACT TAA GAG-3′ and reverse 5′-AAT GTG ACT TTG GCC TGT TGG TAT A-3′, and two fluorescently labeled probes: 5′-VIC-CTT TGT GAG TCG TGT TGT A-NFQ-3′ and 5′-FAM-CTT TGT GAG TCC TGT TGT A-NFQ-3′. The 10-μl reaction mixture contained ~20 ng of genomic DNA, 1× TaqMan Universal PCR Master Mix, No AmpErase Uracil N-Glycosylase (UNG) (Applied Biosystems), primers and probes. PCR conditions were as follows: 95 °C for 10 min and (95 °C for 15 s, 60 °C for 1 min) × 40. This genotyping was confirmed by direct sequencing. Fig. S1 shows the distribution of representative results in the scatter plot from the real-time PCR of HLA-G rs1233334:G>C/T SNP genotyping.
There are two variants of KIR2DL4, with 9 or 10 consecutive adenines in the gene sequence. The deletion of one adenine in exon 7 contributes to a frame shift; therefore, the 9A allele encodes the soluble form of the receptor with a missing transmembrane domain or a truncated cytoplasmic tail, while the 10A allele determines the membrane-bound receptor (Nowak et al. 2015b; Goodridge et al. 2007, 2009). The 10A/9A insertion/deletion at position 9620 (rs11410751) of the KIR2DL4 gene has previously been found in complete linkage disequilibrium with rs649216:T>C of the gene (r² = 1) in our population (Nowak et al. 2015b). The T allele of rs649216 corresponded to the 9A allele of rs11410751, while rs649216:C corresponded to the variant with the 10A allele. Therefore, we decided to use the PCR method and restriction fragment length polymorphism (RFLP) with EarI digestion for testing of the rs649216:T>C KIR2DL4 polymorphism, instead of the high resolution melting (HRM) method, which we found more expensive and troublesome than PCR-RFLP. Detailed protocols for these methods were published previously (Nowak et al. 2015b).
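For readers unfamiliar with the r² statistic quoted above, the sketch below computes pairwise linkage disequilibrium from phased haplotype counts. The counts are hypothetical; with complete LD, as reported for rs11410751 and rs649216, only the two coupling haplotypes occur and r² equals 1.

```python
# Hedged sketch with hypothetical haplotype counts
# (9A/10A at rs11410751, T/C at rs649216).
n_9A_T, n_9A_C, n_10A_T, n_10A_C = 120, 0, 0, 80
n = n_9A_T + n_9A_C + n_10A_T + n_10A_C

p_9A = (n_9A_T + n_9A_C) / n          # frequency of the 9A allele
p_T = (n_9A_T + n_10A_T) / n          # frequency of the T allele
p_9A_T = n_9A_T / n                   # frequency of the 9A-T haplotype

D = p_9A_T - p_9A * p_T               # coefficient of linkage disequilibrium
r2 = D**2 / (p_9A * (1 - p_9A) * p_T * (1 - p_T))
print(f"D = {D:.4f}, r^2 = {r2:.3f}") # r^2 = 1.000 for complete LD
```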
Genotyping of the rs41308748:G>A polymorphism in the LILRB1 gene, as well as the rs383369:G>A and rs7247538:C>T polymorphisms in the LILRB2 gene, was carried out using PCR-RFLP. The restriction enzymes used in this study were AciI, TaiI and Hpy166II, respectively. The rs1061680:T>C in LILRB1 was genotyped using the allelic discrimination method with a TaqMan SNP Genotyping Assay (C_9491145_10) on a 7300 Real-Time PCR System (Applied Biosystems). Primer sequences, annealing temperatures, restriction enzymes and reaction conditions for LILRB1 and LILRB2 genotyping are listed in Table S1. Reference samples for all tested SNPs were sequenced by an external company (Genomed, Poland). Detailed information on all tested polymorphisms and their potential functions is summarized in Table 1.
Table 1 footnote: Chr, chromosome; genomic position is shown relative to GRCh38.p7; SNP IDs are according to dbSNP (rs, http://www.ncbi.nlm.nih.gov/SNP); c.*65_*66insATT TGT TCA TGC CT was earlier described as the 14 bp ins/del in the 3′UTR of the HLA-G gene (Wiśniewski et al. 2010, 2015); NM_006669.6:c.1807-7G>A was earlier described as 5651 G>A (rs41308748) (Wiśniewski et al. 2015; Nowak et al. 2016); NM_006669.6:c.425T>C was earlier described as 927 T>C (rs1061680) (Davidson et al. 2010); positions were relative to the translation start site (Nowak et al. 2015b).
Statistical analysis
SNP frequencies were estimated by direct counting. The statistical significance of differences in genotype and allele frequencies between the control group and patients was estimated using the two-sided Fisher's exact test and the Chi-square test with the appropriate degrees of freedom, χ² (df = (m − 1) × (n − 1), where m = number of rows and n = number of columns). A p value of less than 0.05 was required to reject the null hypothesis, which assumes that there is no difference in the distribution of genotypes and alleles between the control group and patients. If p < 0.05, it was corrected (p corr.) for the number of comparisons using the Bonferroni correction. For 2 × 2 tables, the odds ratio (OR) and its 95% confidence interval were also calculated. Statistical analysis was performed using the software package GraphPad InStat version 3.06 (San Diego, CA, USA). Hardy-Weinberg equilibrium was checked using the Chi-square test with one degree of freedom for each SNP.
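A minimal sketch of the statistics described above follows: the odds ratio with a Woolf (log-based) 95% confidence interval for a 2 × 2 table, a Bonferroni-corrected p value, and a Hardy-Weinberg chi-square with one degree of freedom. All counts are hypothetical placeholders, not the study's data, and the use of Python/scipy rather than GraphPad InStat is purely illustrative.

```python
import math
from scipy.stats import chi2

# 2x2 table (hypothetical): genotype carriers vs non-carriers in patients and controls
a, b = 60, 216   # patients: carriers / non-carriers
c, d = 95, 219   # controls: carriers / non-carriers
OR = (a * d) / (b * c)
se = math.sqrt(1/a + 1/b + 1/c + 1/d)          # SE of ln(OR), Woolf's method
lo, hi = math.exp(math.log(OR) - 1.96 * se), math.exp(math.log(OR) + 1.96 * se)
print(f"OR = {OR:.2f} (95% CI {lo:.2f}-{hi:.2f})")

p, n_tests = 0.012, 3                          # Bonferroni correction
print(f"p_corr = {min(1.0, p * n_tests):.3f}")

# Hardy-Weinberg check for one SNP (hypothetical genotype counts AA/AG/GG)
n_AA, n_AG, n_GG = 140, 130, 44
n = n_AA + n_AG + n_GG
q = (2 * n_AA + n_AG) / (2 * n)                # frequency of allele A
expected = [n * q**2, 2 * n * q * (1 - q), n * (1 - q)**2]
stat = sum((o - e)**2 / e for o, e in zip([n_AA, n_AG, n_GG], expected))
print(f"HWE chi2 = {stat:.3f}, p = {chi2.sf(stat, df=1):.3f}")
```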
Neither the other LILRB2 SNP (rs7247538:T>C) nor the KIR2DL4 (rs649216:T>C) or LILRB1 (rs41308748:G>A and rs1061680:T>C) SNPs were distributed differently between mild and severe disease (Table 6). None of the other polymorphisms was associated with the localization of lesions (Table S4).
Discussion
In the present study we found that susceptibility to and the severity of endometriosis are associated with polymorphisms in the HLA-G, LILRB1 and LILRB2 genes. On the other hand, the disease was not associated with the KIR2DL4 polymorphism. The data on HLA-G expression in endometrial tissue from healthy individuals and patients with endometriosis are controversial. HLA-G has been detected on eutopic endometrial cells and peritoneal fluid cells in the menstrual phase of women with or without endometriosis (Kawashima et al. 2009); however, Barrier et al. (2006) found HLA-G protein and mRNA expression only in ectopic endometrial tissue but not in eutopic endometrium in women with or without endometriosis, independently of cycle stage. Notably, in an earlier study, Hornung et al. (2001) did not detect HLA-G in peritoneal fluid, ectopic and normal endometrial tissues and stromal cells from endometriosis patients or controls. The HLA-G molecule exists as seven protein isoforms as a result of alternative splicing: four membrane-bound (HLA-G1, G2, G3, G4) and three soluble (HLA-G5, G6, G7) isoforms (Menier et al. 2010;Donadi et al. 2011;Castelli et al. 2014). Soluble HLA-G (sHLA-G) was found in the peritoneal fluid in similar concentrations in control subjects and in mild and severe endometriosis (Eidukaite and Tamosiunas 2008).
Several important regulatory motifs have been described in the promoter of the HLA-G gene, e.g. Enhancer-A (EnhA), the interferon-stimulated response element (ISRE) and the SXY module. All of them are mainly responsible for controlling gene expression by affecting transcription factor binding or promoter methylation (Donadi et al. 2011;Castelli et al. 2014;Persson et al. 2017;Verloes et al. 2017).
These regions exhibit many polymorphic sites; among them, positions −964, −725 and −716 (in the promoter) may affect expression of HLA-G (Donadi et al. 2011; Castelli et al. 2014; Persson et al. 2017; Verloes et al. 2017; Amodio et al. 2016; Ober et al. 2003). Indeed, we found here protective effects of the rs1632947:GG (−964GG) and rs1233334:CT (−725CT) HLA-G genotypes on susceptibility to endometriosis and/or progression of the disease (Table 7). On the other hand, the 14 bp insertion/deletion in the 3′UTR (rs371194629) influences both the expression and alternative splicing of HLA-G (Verloes et al. 2017) and the level of sHLA-G (Chen et al. 2008). However, no association of this polymorphism with endometriosis was seen in our study. The reason why one polymorphism, the rs1632947:GG genotype in the promoter region, which increases expression of HLA-G (Ober et al. 2006), seems to protect against endometriosis, whereas the 14 bp deletion in the 3′UTR (rs371194629), which also increases HLA-G expression (Verloes et al. 2017), had no effect, needs explanation by further experiments. No other reports on the role of HLA-G polymorphisms in endometriosis have been published so far. However, it is worth mentioning that other classes of MHC genes located near HLA-G (HLA-DQ and HLA-DRB1) have already been studied in the context of endometriosis (Zong et al. 2002; Sundqvist et al. 2011; Sobalska-Kwapis et al. 2017).
The putative role of HLA-G in the etiopathogenesis of endometriosis may be strengthened by our further observation that the disease is also associated with polymorphism in LILRB1 and LILRB2 genes coding for HLA-G receptors. NK cells express different levels of LILRB1 (Kirwan and Burshtyn 2005) and individuals vary in its positivity, ranging from 10 to 77% of NK cells, depending on gene polymorphism (Davidson et al. 2010).
The rs41308748:G>A polymorphism of the LILRB1 gene is an intronic SNP situated between the cytoplasmic tail and the 3′UTR sequence, which could have an influence on the splicing process. We found its association (AA genotype) with susceptibility to endometriosis (Table 7); therefore, studies on splicing variants in endometriosis would be desirable. We observed earlier a protective effect of the GA genotype in recurrent miscarriage, whereas the AA genotype had no effect (Nowak et al. 2016). The rs1061680:T>C is a non-synonymous SNP, located in the sequence encoding the extracellular D2 domain (Davidson et al. 2010). It is in strong linkage disequilibrium with another SNP (rs10423364:A>G) which is located in a potential transcription factor binding site (our in silico analysis) and may therefore affect gene expression. Thus, rs1061680:T>C may be a marker of rs10423364:A>G, and may also influence protein structure. However, in our present study we did not reveal its association with endometriosis.
The polymorphism rs7247538:C>T of LILRB2 changes histidine to tyrosine (p. His300Tyr) in the amino acid sequence of the protein. Our in silico analysis indicated that it may also have a possibly damaging influence on the splicing process. However, this polymorphism was not associated with endometriosis. The second tested SNP in the LILRB2 gene was the rs383369:G>A (p. Arg20His) and it has been located in the signal sequence region. The G allele of rs383369 has been associated with low expression levels of LILRB2 in Northeast Asians, where it has a high frequency; however, it is infrequent in Europeans (Hirayasu et al. 2008). In our population, almost all individuals possessed the alternative A allele, and GG homozygotes were virtually absent. Nevertheless, the AG heterozygotes had 7 times higher probability of having severe endometriosis than AA homozygotes (Table 6). It suggests, then, that lower LILRB2 expression may predispose to more severe stages of the disease.
KIR2DL4 has also been considered an HLA-G receptor (Rajagopalan and Long 2012, 2014). Its long cytoplasmic tail suggests an inhibitory function. However, it has only one immunoreceptor tyrosine-based inhibitory motif (ITIM) in the cytoplasmic tail and a positively charged arginine residue in its transmembrane region, allowing it to complex with the FcεRI-γ chain, which transduces the activation signal upon ligand binding by KIR2DL4 (Kikuchi-Maki et al. 2005). However, the HLA-G/KIR2DL4 interaction has recently been questioned (Le Page et al. 2014). In addition, only one out of four individuals in our population possesses a functional receptor (Nowak et al. 2015b). The lack of functional KIR2DL4 may be compensated by the presence of LILRB1. Notably, LILRB1, despite its inhibitory potential, may also exert an activating effect through its immunoreceptor tyrosine-based switch motif (ITSM) (Li et al. 2009) and therefore substitute for KIR2DL4. There are some limitations of our work. First, the group of subjects with minimal or mild endometriosis was small (16 individuals). This resulted from late diagnosis, as women often do not see their doctor until they suffer from infertility or the pain becomes unbearable. Second, protein expression of cell surface LILRB1, LILRB2 and KIR2DL4, as well as soluble or membrane HLA-G, was not examined here. However, this will be a future direction of our research, with particular emphasis on expression of these molecules in endometriotic lesions in the peritoneum vs the ovary. Moreover, a recently published GWAS analysis of potential protein-modifying genetic variants in 9000 endometriosis patients and 150,000 controls of European ancestry (Sapkota et al. 2017b) did not associate our proposed variants with endometriosis pathogenesis. However, variants which modify protein structure through amino acid substitutions or alter stop signals or splicing, particularly those with MAF < 0.05, have been implicated as important but are not well covered in GWA studies. Moreover, only about 18% of endometriosis cases in the Sapkota et al. (2017b) samples had moderate-to-severe disease, while in our study these stages accounted for 92%; therefore, the Sapkota et al. (2017b) analysis may not be an adequate reference for severe cases. In addition, the cost of whole genome or exome sequencing methods limits large-scale studies, and it still limits the selection of potential SNPs for testing.
In conclusion, our results suggest that HLA-G and its receptors LILRB1 and LILRB2, but not KIR2DL4, may play a role in elimination of ectopic endometrial cells and in development of the disease. Our data are novel, as this is the first report on this topic. | 2022-11-27T14:53:25.328Z | 2017-12-12T00:00:00.000 | {
"year": 2017,
"sha1": "6077f22dc21365bdac4f00817c953aef4160f681",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00438-017-1404-3.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "6077f22dc21365bdac4f00817c953aef4160f681",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
225465608 | pes2o/s2orc | v3-fos-license | Cyclic Mechanical Behavior of Two Sandy Soils Used as Heat Storage Media
In this research, the cyclic mechanical behavior of two heat storage sandy soils is experimentally studied using a cyclic thermo-mechanical triaxial device. The results of the tests, which were performed under controlled temperature conditions between 20 and 60 °C, show a significant dependence of the mechanical response of the sandy soils on the amplitude of the cyclic loading and the medium temperature. The mechanical performance and accumulation of plastic strains of the soils with an increasing number of loading cycles are discussed in view of the intrinsic soil behavior.
Introduction
Closing the gap between energy demand and supply is one of the greatest challenges of our time. One of the many practical solutions provided by the field of energy geotechnics in satisfying the worldwide energy demand is via clean and renewable energy schemes, such as seasonal thermal energy storage via borehole thermal energy storage (BTES) systems (e.g., soils) or solid sensible heat storage systems (e.g., cemented porous media). In both cases, heat or cold from solar collectors or other forms of energy is collected and stored for long periods to be used for future industrial or domestic purposes [1][2][3][4][5][6], and hence such systems have recently emerged as a viable and encouraging alternative in satisfying the energy requirements of both small and large scale applications. A detailed review on the design considerations for BTES systems is provided in [7].
Such sensible thermal energy storage systems are usually built below ground level supporting structures (or designed as part of the sub-structure of buildings) and hence are expected to have load bearing capabilities. Therefore, accurate study of their mechanical stability (both in terms of static and cyclic mechanical loading, due to manmade structures as well as natural hazards such as earthquakes), especially at elevated temperatures, should be carefully assessed prior to their design and operation by performing appropriate mechanical tests.
Several modeling and experimental studies were carried out in the past with regard to the static mechanical behavior of clayey soils at elevated temperatures [8][9][10][11]. Uchaipichat and Khalili [12] performed experimental studies on the static mechanical behavior of an unsaturated silt between temperatures of 25 and 60 °C using a triaxial equipment modified for testing at temperature and suction controlled conditions. Furthermore, an example study of the mechanical stability of solid sensible heat storage materials under the effect of static loading and temperature was presented in [13]. All of the studies showed a significant dependence of the static mechanical performance of the investigated soils/materials on medium temperature. Similar to the response of the soils or other heat storage materials to static loading, their behavior upon cyclic loading is also expected to be affected by changes in temperature. Considering the importance of the study of the behavior of soils subjected to cyclic loading, several theoretical and empirical constitutive models to estimate their cyclic behavior have also been proposed in past studies [14][15][16][17][18]. The behavior of clayey soils subjected to cyclic loading has been studied in [19][20][21]. Cyclic laboratory tests on clayey soils are typically performed in undrained conditions and can lead to the generation of excess pore pressures and the accumulation of shear strains. Several laboratory experiments on the cyclic behavior of sandy soils have also been carried out in the past. Results of cyclic triaxial tests on sand, where the increase in the accumulated strains was found to be proportional to the logarithm of the number of cycles, were reported in [22]. Other tests, where the rate of strain accumulation of coarse-grained soils decreased proportionally with the inverse of the number of cycles, were reported in [23]. Duku et al. [24] presented the results of drained cyclic tests on sands for various factors such as relative density, gradation, mineralogy and frequency, and Tatsuoka et al. [25] reported on the results of undrained cyclic triaxial tests on Toyoura sand at different frequencies between 0.05 and 1.0 Hz. Overall, a negligible influence of the loading frequency on the cyclic behavior of the soils was reported in the literature.
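To make the logarithmic trend cited from [22] concrete, the sketch below fits an accumulation law of the form εacc(N) = A + B·ln(N) to synthetic data. The data points and the specific functional form are illustrative assumptions, not measurements from the cited studies.

```python
# Hedged sketch: fitting eps_acc = A + B*ln(N) to synthetic accumulated-strain data.
import numpy as np

N = np.array([1, 10, 50, 100, 500, 1000, 2200])
eps_acc = np.array([0.10, 0.19, 0.25, 0.28, 0.34, 0.37, 0.40])  # % strain, synthetic

B, A = np.polyfit(np.log(N), eps_acc, 1)   # least-squares line in ln(N)
print(f"eps_acc(N) ~= {A:.3f} + {B:.3f} * ln(N)  [%]")
print(f"prediction at N = 2200: {A + B * np.log(2200):.3f} %")
```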
Depending on the drainage type of the granular media used as a heat storage material, the application of a cyclic mechanical loading may cause an accumulation of irreversible or plastic deformations or excess pore pressures. For drained conditions, a cyclic loading can cause the accumulation of plastic strains at a high number of cycles, even for small amplitudes of loading. For undrained conditions, a cyclic loading with a considerably high amplitude (such as from an earthquake) can cause the accumulation of high pore pressures, leading to the loss of strength or liquefaction of the soil [14].
Such accumulation of plastic strains is of high importance in practical cases such as in heat storage media, where the tolerance to displacement is small, and may endanger the long-term serviceability of the system. Such serviceability failures within the heat storage granular media can greatly affect the heat transport to and from the soil and the heat exchanger interface. One parameter which is significantly affected by the serviceability failures within the heat storage media is thermal conductivity. The thermal conductivity of the heat storage soils controls the rate of loading or un-loading of heat storage systems and their overall efficiency [26,27].
On this basis, in this study, the cyclic mechanical stability of two heat storage sandy soils was assessed experimentally at different amplitudes of the cyclic loading and temperatures using a cyclic thermo-mechanical triaxial device. The findings of the cyclic loading tests at elevated temperature reported in this study are of particular importance in the context of heat storage soils, especially considering the limited or lack of research available in the literature in this area. The research was conducted as part of the work of project Angus II (Figure 1), which aims at assessing the impacts of the use of the geological subsurface for thermal, electrical and material energy storage in the context of the transition to renewable energy sources, using the area of Schleswig-Holstein, Germany, as a model example. As the soil in the BTES system is in a near-dry condition with a significant hydraulic permeability (porosity), the cyclic loading tests presented in this study were performed at fully drained, dry conditions. It should also be noted that while unsaturated and dry soils have a lower effective heat capacity than saturated soils, they also possess a comparatively lower effective thermal conductivity, which, although it results in lower loading/unloading rates of heat transfer, helps better retention of heat in the storage unit.
Tested Soils
Two coarse-grained soils, a fine sand, labeled here as sand A (S-A) (Figure 2a), and a coarse sand, labeled as sand B (S-B) (Figure 2b), were tested. The fine sand, S-A, is also the heat storage media used in the BTES of project Angus II as shown in Figure 1, whereas the coarse sand, S-B, due to its comparatively higher hydraulic permeability, is used for a second set-up in project Angus II as a modular heat storage system, at a comparatively smaller scale, in a cylindrical barrel heated with a central BHE unit to study the convective heat transport in porous media in sensible heat storage systems.
A summary of the obtained physical and geotechnical properties of the sandy soils (obtained through measurements in our laboratories at Kiel University), in accordance with standards [28][29][30], is presented in Table 1. The transient effective thermal conductivities and effective heat capacities of the sands were measured using a Decagon KD2 Pro device with an SH-1 dual needle probe, which has a length of 30 mm, a diameter of 1.3 mm and 6 mm of spacing between the two needles. The thermal parameter measurements were conducted at room temperature and in atmospheric pressure conditions based on transient line heat source analysis [31,32]. Figures 3 and 4 show the X-ray diffraction (XRD) mineralogical analysis and grain size distributions of the soils.
Equipment Used
The cyclic mechanical stability of the soils was analyzed experimentally using an electromechanical cyclic triaxial testing device (Figures 5 and 6). The cyclic testing apparatus consisted of a triaxial cell (Figure 6), a vertical loading machine (with a high precision controlled load frame and a TC4-25 load cell with a maximum force limit of 25 kN) capable of applying cyclic deviatoric stresses at frequencies f of up to 5 Hz and different amplitudes, a dynamic high precision cell pressure system capable of applying cyclic cell pressure to the specimen, two WDC dynamic real time digital closed-loop actuator controllers (one for the deviatoric stress and the other for the cell pressure), a single volume-pressure-controller or VPC 10/1000 pore-water (back-pressure) application system (with a volumetric capacity of 1000 mL and a maximum pressure limit of 1000 kPa), a Huber Ministat 125 Pilot ONE heat pump for controlling the temperature of the triaxial cell via a circulating fluid (glycol + distilled water) and a PC control and datalogger unit for system control and data recording (Figure 5).
Experimental Procedure
The specimens were prepared at dry moisture condition with a diameter of 100 mm and a height of 200 mm, making sure that the bulk density was homogeneous throughout the specimen volume. The cyclic loading tests were performed at a cell pressure of 100 kPa, which was initially applied to the specimen and was maintained during a short stage consolidation of the samples, during which the drainage valves were kept open and no back pressure was applied. A deviatoric stress σdev of 150 kPa was then applied statically to the specimens to be used as the base load σdev,base for the cyclic loading tests, which were conducted with the drainage valves fully open.
The cyclic loading tests were conducted at a frequency f of 0.1 Hz, with deviatoric stress amplitudes σdev,amp of 10, 20 and 30 kPa, medium temperatures of 20, 40 and 60 °C, and for a number of cycles N of up to a maximum of around 2200. Prior to the start of the cyclic tests, sufficient temperature stabilization time was allotted to achieve steady state conditions within the samples. The cyclic loading and displacement data were recorded at an acquisition rate of around 0.25 s. The mechanical (static and cyclic) loading schemes used are diagrammatically depicted in Figure 7.
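The loading scheme of Figure 7 can be summarized as a static base deviatoric stress with a superimposed cycle. The sketch below generates such a signal; the sinusoidal waveform is an assumption, since the paper specifies only the base load, the amplitudes and the frequency.

```python
# Hedged sketch of the applied deviatoric stress history (sinusoidal form assumed).
import numpy as np

f = 0.1              # Hz, loading frequency
sigma_base = 150.0   # kPa, static base deviatoric stress
sigma_amp = 30.0     # kPa, one of the tested amplitudes (10, 20 or 30 kPa)
t = np.arange(0.0, 3.0 / f, 0.25)  # first three cycles at the ~0.25 s logging rate

sigma_dev = sigma_base + sigma_amp * np.sin(2 * np.pi * f * t)
print(sigma_dev[:5])  # kPa, first few stress samples
```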
Time Plots
Figure 8 presents the time plot results for the first three cycles of the cyclic loading tests of the sandy soils at room temperature. The plots showed a significant increase in the measured axial cyclic strain εcyc of the soil samples with the increasing number of cycles N, as the strain and stress loops generated by the application of the cyclic loads were not completely closed. This led to irrecoverable strains and the accumulation of plastic strain εacc with each applied cycle [14]. As the frequency f of the cyclic loading in this study was between 0 and 1 Hz, inertia forces could be neglected and the accumulated strains were predominantly plastic [33,34].
Moreover, the magnitude of the εacc of the soils was generally the highest within the first cycle, which is commonly referred to as the "irregular cycle". As expected, for both soils, the magnitudes of the εacc increased with an increase in the applied deviatoric stress amplitude σdev,amp or cyclic strain amplitude, corroborating previous studies [14,24].
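A hedged post-processing sketch follows: given a logged axial-strain time series, the accumulated (plastic) strain can be read off as the residual strain at the end of each cycle, since the loops do not close. The strain signal below is synthetic, chosen only to mimic the qualitative trend described above.

```python
# Sketch: extracting per-cycle accumulated strain from a synthetic strain record.
import numpy as np

f, dt = 0.1, 0.25                                  # Hz, s (logging interval)
t = np.arange(0.0, 100.0 / f, dt)                  # 100 cycles
eps = 0.05 * np.log1p(f * t) + 0.01 * np.sin(2 * np.pi * f * t)  # %, synthetic

spc = int(round(1.0 / (f * dt)))                   # samples per cycle (40 here)
eps_acc = eps[spc - 1::spc]                        # strain at the end of each cycle
print(eps_acc[:5])                                 # accumulated strain, first cycles
```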
Cyclic Loading Results at Different Deviatoric Stress Amplitudes
Figures 9 and 10 show the results of the cyclic loading tests on the soils, which were conducted at room temperature with deviatoric stress amplitudes σdev,amp between 10 and 30 kPa.
Cyclic Loading Results at Different Deviatoric Stress Amplitudes
The results showed a significant increase in the measured axial cyclic strains εcyc and hence the accumulation of plastic strains εacc of both soils with the increasing number of cycles N. The rate of increase in the measured εacc of the soils was also directly proportional to the magnitudes of the applied deviatoric stress amplitudes σdev,amp [14,24]. With the application of cyclic loading and changes in the stress-strain loops of the soils, the quartz and albite dominated particle grains (grains of high angular shape irregularity and hence interlocking, especially for sand B as shown in Figure 2, and with a high uniformity of particle gradation leading to high soil void ratios as shown in Table 1 and Figure 4) were subjected to changes in their inter-particle interlocking and frictional forces and the soil skeleton underwent grain rearrangements leading to irrecoverable or plastic strains. Moreover, most of the measured εacc of the soils occurred within the first 100 cycles, after which the increase in the measured εacc was gradual. When comparing the nature of the accumulation of plastic strain of the soils due to cyclic loading with previous studies, it can be concluded that they at room temperature with deviatoric stress amplitudes σdev,amp between 10 and 30 kPa.
The results showed a significant increase in the measured axial cyclic strains εcyc, and hence the accumulation of plastic strains εacc, of both soils with the increasing number of cycles N. The rate of increase in the measured εacc of the soils was also directly proportional to the magnitude of the applied deviatoric stress amplitudes σdev,amp [14,24]. With the application of cyclic loading and changes in the stress-strain loops of the soils, the quartz- and albite-dominated particle grains (grains of high angular shape irregularity and hence interlocking, especially for sand B as shown in Figure 2, and with a high uniformity of particle gradation leading to high soil void ratios as shown in Table 1 and Figure 4) were subjected to changes in their inter-particle interlocking and frictional forces, and the soil skeleton underwent grain rearrangements leading to irrecoverable or plastic strains. Moreover, most of the measured εacc of the soils occurred within the first 100 cycles, after which the increase in the measured εacc was gradual. When comparing the nature of the accumulation of plastic strain of the soils due to cyclic loading with previous studies, it can be concluded that they exhibited a typical shakedown behavior, where the strain increment decreases with the increasing number of cycles without reaching ultimate failure [34,35]. Overall, sand A had higher recorded εcyc and εacc values for the given number of cycles when compared with sand B (with εacc values of 0.4% for sand A and 0.22% for sand B at around N = 2200 cycles for a σdev,amp of 30 kPa at room temperature), due to its comparatively finer texture and lower strength or compressibility modulus.
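Because εacc is simply the residual strain that the not-fully-closed loops leave behind, it can be read off the strain record at the completion of each cycle. The following is a minimal bookkeeping sketch, not code from the study; the synthetic strain history (elastic sinusoid plus a saturating plastic drift whose 0.4% plateau mirrors the sand A value cited above) and the sampling rate are illustrative assumptions.

```python
# Minimal sketch: extract the accumulated plastic strain eps_acc from an
# axial strain time series by sampling the residual strain at the end of
# each load cycle. All input values below are synthetic assumptions.
import numpy as np

def accumulated_plastic_strain(strain, samples_per_cycle):
    """Residual strain at the completion of each full load cycle."""
    return strain[samples_per_cycle::samples_per_cycle]

samples_per_cycle = 50
t = np.arange(0, 2200, 1 / samples_per_cycle)           # 2200 cycles
# elastic sinusoid + saturating plastic drift (shakedown-like), in %
strain = 0.05 * np.sin(2 * np.pi * t) + 0.4 * (1 - np.exp(-t / 300))

eps_acc = accumulated_plastic_strain(strain, samples_per_cycle)
print(f"eps_acc after cycle    1: {eps_acc[0]:.4f} %")
print(f"eps_acc after cycle {len(eps_acc)}: {eps_acc[-1]:.4f} %")
```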
With the application of cyclic mechanical loading and the subsequent increase in the accumulated plastic strains, and hence the compaction of the soils, the effective thermal conductivity of the soils is expected to increase. This will in turn maximize the efficiency of the heat storage system by increasing the charging/discharging rates of the heat storage unit. However, cyclic mechanical loading may have a negative impact on the BHE unit of the heat storage system (which is usually made of cemented porous media) by creating serviceability failures/formation of cracks with an increase in the number of cycles and the accumulation of plastic strains, which will in turn lower the effective thermal conductivity and hence the heat transfer efficiency of the cemented media.
Cyclic Loading Results at Different Temperatures
Figures 11 and 12 show the results of the cyclic loading tests, which were conducted with a deviatoric stress amplitude σdev,amp of 20 kPa and medium temperatures T between 20 and 60 °C.

As can be seen from the results, for the given deviatoric stress amplitude σdev,amp, an increase in the temperature T of the soils introduced elastic behavior in the soil skeleton, which allowed for the recovery of a small portion of the total cyclic strain εcyc of each cycle (see the stress-strain loops and stiffness changes within each cycle as depicted in Figure 11). As a result, a more stable soil configuration with a comparatively lower measured εacc was obtained with an increase in the medium temperature of both sands. This reduction in the measured εacc of the soils at elevated temperatures may mitigate some of the long-term serviceability failures typically caused by the accumulation of plastic strains due to cyclic loading, which can in turn cause significant reductions in the effective thermal conductivity of the heat storage materials in the BTES system.
Conclusions
Assessing the mechanical behavior (in the form of static and cyclic loading, from man-made structures as well as natural hazards such as earthquakes) of heat storage media in solid sensible heat storage and borehole thermal energy storage (BTES) systems is essential prior to their design and operation, as such systems are usually built at or below ground level with load-bearing capabilities and supporting structures. Furthermore, in the case of cyclic mechanical loading, an accurate estimation of the accumulated plastic strains of the heat storage material prior to the design of the heat storage system helps avoid future long-term serviceability failures. In this study, the cyclic mechanical performance of two heat storage sandy soils was experimentally studied using a cyclic thermo-mechanical triaxial device.
The results showed a significant increase in the measured axial cyclic strains and the accumulation of plastic strains of both soils with the increasing number of cycles. As expected, the rate of increase in the measured accumulated plastic strains of the soils was also found to be directly proportional to the magnitudes of the applied deviatoric stress amplitudes. Moreover, most of the measured accumulated plastic strains of the soils occurred within the first 100 cycles, after which the increase in the accumulated plastic strains was gradual. Overall, sand A exhibited higher recorded axial cyclic strains and accumulated plastic strains for the given number of cycles when compared with sand B (with accumulated plastic strains of 0.4% for sand A and 0.22% for sand B at around 2200 cycles for a cyclic loading deviatoric amplitude of 30 kPa at room temperature), due to its comparatively finer texture and lower strength or compressibility modulus.
Furthermore, an increase in the temperature of the soils introduced elastic behavior in the quartz and albite dominated soil skeleton, which allowed for the recovery of a small portion of the total cyclic strain within each cycle, leading to a more stable soil configuration with a comparatively lower accumulation of plastic strains. Overall, the results show a significant dependence of the cyclic mechanical behavior, and in particular the accumulation of plastic strains, of the investigated heat storage sandy soils on the amplitude of the applied cyclic loading and the medium temperature. | 2020-07-30T02:09:01.484Z | 2020-07-26T00:00:00.000 | {
"year": 2020,
"sha1": "b6cfed7a62bf7363750ced78cf321074b28fe4b0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1073/13/15/3835/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "374ce2394e99f8b512c3a60cf4e7d5d9a4786f7e",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
59466087 | pes2o/s2orc | v3-fos-license | Characterization of a Spherical Proportional Counter in argon-based mixtures
The Spherical Proportional Counter is a novel type of radiation detector, with a low energy threshold (typically below 100 eV) and good energy resolution. This detector is being developed by the NEWS network for several applications, among many others Dark Matter searches, low-level radon and neutron counting, and the detection of low-energy neutrinos from supernovae or nuclear reactors via neutrino-nucleus elastic scattering. In this context, this work presents the characterization of a spherical detector of 1 meter diameter using two argon-based mixtures (with methane and isobutane) at gas pressures between 50 and 1250 mbar. In each case, the energy resolution shows its best value over a wide range of gains, limited by the ballistic effect at low gains and by ion backflow at high gains. Moreover, the best energy resolution degrades with pressure. These effects will be discussed in terms of gas avalanche properties. Finally, the effect of an electric field corrector on the homogeneity of the gain and on the energy threshold measured in our setup will also be discussed.
Introduction
An active field in instrumentation for Particle Physics is the development of massive detectors with low background levels and low energy thresholds. The range of applications is diverse: the search for dark matter in the form of Weakly Interacting Massive Particles (WIMPs) [1] or axions [2], the detection of low-energy neutrinos, and radon and neutron counting. In this context, the Spherical Proportional Counter (SPC) was proposed in [3] as a candidate fulfilling all the requirements: a single readout channel reading a large gas volume (and mass); a potentially low intrinsic background level with a good selection of radiopure materials; and a high signal-to-noise ratio due to its low capacitance, which is proportional to the radius of the central electrode and does not depend on the vessel size. Indeed, previous works have shown that an energy threshold as low as 100 eV [4,5] is feasible while keeping a good energy resolution (10% FWHM at 22.1 keV).
Even though there is only one channel, the pulse risetime can be used to discriminate complex topologies like muons from point-like events like x-rays, or to set a fiducial volume, as the risetime depends, through gas diffusion, on the radial position where the energy was deposited. As an example of this feature, we present in figure 1 (left) the dependence of the risetime on the pulse amplitude for our SPC irradiated by a 241Am source covered by an aluminum foil. As the mean free path of photons is longer than the sphere's radius at low pressures, an x-ray may deposit its energy at any distance. As a result, it creates a distribution in risetime for a fixed amplitude in the 2D plot. This happens for the x-rays of the source (at 11.9, 13.9, 17.7 and 20.8 keV). In contrast, the fluorescence lines emitted from the stainless steel vessel (chromium at 5.5 keV and iron at 6.4 keV) and the foil (aluminum at 1.5 keV) form three spots at long risetimes. If we select the events whose risetime lies in the range 4.5-8.7 µs (red lines), we obtain the energy spectrum of figure 1 (right).
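As a hedged illustration (not code from this work), the fiducialization described above amounts to a simple window cut on the per-event risetime; the event arrays below are synthetic, and only the 4.5-8.7 µs window is taken from the text.

```python
# Sketch of the risetime-based fiducial cut: keep events inside the
# 4.5-8.7 us window and histogram their amplitudes to obtain the
# point-like energy spectrum. Event values are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(0)
risetime_us = rng.uniform(0.0, 20.0, 10_000)    # per-event risetime
amplitude = rng.normal(1000.0, 50.0, 10_000)    # per-event amplitude (a.u.)

fiducial = (risetime_us >= 4.5) & (risetime_us <= 8.7)
spectrum, edges = np.histogram(amplitude[fiducial], bins=100)
print(f"{fiducial.sum()} of {fiducial.size} events pass the fiducial cut")
```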
The advantages of SPCs have motivated several feasibility studies [5-9], and the groups interested in this technology have created the network NEWS (New Experiments With Spheres), which is now formed by institutes from France, Greece, China and Spain. In this context, the scalability of high gains and good energy resolutions to higher masses (pressures) is an open question, which is a key point for increasing the sensitivity of the detector in any particular application. This work tries to give a first answer, presenting the characterization of an SPC in two argon-based mixtures (with methane and isobutane) and for gas pressures up to 1250 mbar. The setup, the data-taking procedure and the analysis are described in detail in section 2. We also present the two configurations of the central rod (without and with a field corrector), whose main results will be respectively detailed in sections 3 and 4. Finally, we finish in section 5 with some conclusions and an outlook.
Detector description
The detector consists of a large spherical stainless steel vessel 1.0 meter in diameter and a small metallic ball 5 mm in diameter kept at the center by a stainless steel rod. The radiation from the calibration source or other sources (like muons and gammas) ionizes the gas. As the central ball is set to a positive voltage and the vessel to ground, electrons drift to the ball, where an intense electric field amplifies the charge. The charge movement induces a signal at the electrode, which is extracted from the vessel by a teflon-based high voltage feed-through. The signal is decoupled from the high voltage (powered by a CAEN N1471HA module) by a capacitance and then fed into a CANBERRA preamplifier, which is acquired by a Tektronix DS5054B oscilloscope. A view of the setup is shown in figure 2 (left), while on the right we show the two configurations used in these tests: in the first one, the ball is situated at 30 mm from the rod, covered by a teflon cap; in the second one, the ball is 20 mm away from a copper plate, which works as a field corrector (generally called "umbrella") and is isolated from the rod by a teflon cassette. The detector operates in sealed mode, i.e., the vessel is first pumped out and then filled with an appropriate gas at a pressure from a few tens of mbar up to 2 bar. Two mixtures have been studied: Ar+5% methane for pressures between 50 and 1250 mbar and Ar+2% isobutane for pressures between 100 and 400 mbar. No gas filter has been used in this work. The pumping lasted at least one night to reduce the outgassing rate from the inner vessel components below 10⁻⁶ mbar l/s. This value allows the operation of the detector for a week without any visible degradation.

Figure 1 (caption): Left: Dependence of the risetime on the pulse amplitude in Ar+5% methane at 100 mbar, obtained by irradiating our SPC with a 241Am source whose 5.5 MeV alphas have been blocked by an aluminum foil. Muons show an event distribution centered at 8 keV and 14 µs. Some muon events (see the band below 3 keV at long risetime values) are not well analyzed, as their risetime is longer than the analysis range. The x-rays from the source (at 11.9, 13.9, 17.7 and 20.8 keV) create different bands in risetime, while the fluorescence lines emitted from the vessel (chromium at 5.5 keV and iron at 6.4 keV) and the aluminum foil (at 1.5 keV) form three spots at long risetime. The red lines delimit the fiducial volume for point-like events. Right: Energy spectrum after having selected the events in the fiducial volume. The Argon escape peaks (E.P.) of both fluorescence lines and the 241Am x-ray lines are also present in the spectrum.
The results presented here were obtained by irradiating the SPC with a 109Cd source, situated inside the vessel at the bottom part for the rod without the field corrector (figure 2), and at the lateral bore for the second configuration. In the first case, we observed that the energy resolution was poor (≈30% FWHM at 22.1 keV) when calibrating from the lateral bore. In contrast, values were good (≈10% FWHM) when the SPC was calibrated from the bottom part. We concluded that the field was not homogeneous and that a field corrector was needed, as suggested in [3]. However, as described in section 4, the field became homogeneous only when the umbrella was set to negative values. This fact disagrees with the conclusions of the former publication. A simulation of the electric field is foreseen to evaluate the effect of the field corrector in our setup.
In a first step, the signals acquired by the oscilloscope are smoothed. The first derivative is then calculated to define a temporal range where the Pulse Shape Analysis (PSA) is applied. This range is useful to avoid errors produced by pile-up events, unstable baselines or noise. Finally, the PSA calculates pulse features like the baseline, risetime or amplitude. As shown in figure 1, the risetime allows x-rays to be discriminated from muons by setting an upper limit. Two other effects have been observed. One is a decrease in amplitude for events with long risetimes at low gains (figure 3, left). This is caused by the ballistic effect, i.e., the time needed to collect all the charge is comparable to the discharge time of the preamplifier. To compensate for this degradation, an upper limit in risetime of 13 µs has been applied. The second one is a population of 22.1 keV events with lower amplitudes at short risetimes (same figure as before). This effect happens only in the configuration without any field corrector. We have attributed these events to x-rays depositing their energy near the ball, where a surface defect or a field distortion may reduce the detector's gain. To remove this contribution, a lower limit in risetime of 5 µs has been set.
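The PSA chain above (smoothing, derivative-based windowing, then baseline/amplitude/risetime extraction) can be sketched as follows. This is a simplified stand-in rather than the analysis code of the paper: the moving-average width, the pre-pulse baseline window and the 10-90% risetime definition are assumptions.

```python
# Simplified pulse-shape analysis on a synthetic preamplifier-like pulse.
import numpy as np

def analyze_pulse(trace, dt_us):
    """Smooth the trace, locate the pulse via the derivative, then
    extract baseline, amplitude and 10-90% risetime."""
    smooth = np.convolve(trace, np.ones(5) / 5.0, mode="same")  # smoothing
    deriv = np.gradient(smooth)
    start = int(np.argmax(deriv))            # steepest rise -> pulse region
    peak = start + int(np.argmax(smooth[start:]))
    baseline = smooth[: max(start - 50, 1)].mean()              # pre-pulse level
    amplitude = smooth[peak] - baseline
    lead = smooth[(start - 50 if start > 50 else 0): peak + 1] - baseline
    t10 = int(np.argmax(lead >= 0.1 * amplitude))               # first crossing
    t90 = int(np.argmax(lead >= 0.9 * amplitude))
    return {"baseline": baseline, "amplitude": amplitude,
            "risetime_us": (t90 - t10) * dt_us}

# Synthetic pulse: fast exponential rise at sample 500, slow decay.
t = np.arange(2000.0)
edge = np.clip(t - 500.0, 0.0, None)
pulse = 50.0 + 200.0 * (1.0 - np.exp(-edge / 60.0)) * np.exp(-edge / 5000.0)
print(analyze_pulse(pulse, dt_us=0.02))
```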
The resulting spectrum after the risetime selection is shown in figure 3 (right). The Kα (22.1 keV) and Kβ (24.9 keV) lines from Ag fluorescence are clearly distinguished, as well as the fluorescence lines of the vessel (chromium at 5.5 keV, iron at 6.4 keV) and the source container (copper at 8.0 keV), and the corresponding Argon escape peaks (E.P.). The peak at 22.1 keV is used to obtain the gas gain and energy resolution by fitting it to a gaussian function (red line). The energy resolution of the 6.4 keV line is around 23% FWHM for all optimum cases and configurations, near the expected value (20% FWHM) derived from the 22.1 keV peak. The difference between both values could be explained by fitting errors due to the presence of other peaks in the same energy range.
Results without the field corrector
The dependence of the peak position on the voltage generates the gain curves, shown in figure 4 for both argon-based mixtures. The gain reaches values higher than 2 × 10³ in all cases. Moreover, the spark limit was not observed in any case, and the data-taking was stopped when the energy resolution degraded. The dependence of the gain on the voltage follows the expected exponential relation based on the Rose-Korff model [10], except at high voltages, where the gain tends to saturate. We have attributed this effect to ion backflow [11], i.e., ions produced in the avalanche enter the drift volume and produce distortions due to space-charge effects. The energy resolution depends on the detector's gain, as shown in figure 5. At low values, the energy resolution degrades because the signal is comparable to the noise. As the gain increases, the resolution reaches its best value and stays constant over a range of gains, worsening again at higher gains. The surprising fact is how different the gains at which this last degradation starts are: ≈3 × 10² for Ar+2% iC4H10 and ≈10³ for Ar+5% CH4, i.e., a factor 3 better for methane, even though isobutane quenches the photons generated in the avalanche better [12]. For this reason, we cannot attribute this effect to an increase of gain fluctuations or the proximity of the spark limit, but to an effect probably correlated with the aforementioned gain saturation. Just as a comparison, the energy resolution starts degrading in argon-based mixtures at 10⁴ for Micromegas detectors [13]. The best values obtained for the energy resolution at 22.1 keV are 9% FWHM in Ar+5% methane at 150 mbar and 11% FWHM in Ar+2% isobutane at 200-400 mbar, as shown in figure 5. As noted before, better values were expected for isobutane, as this gas is a better quencher. These values are similar to those reported in [4]. Finally, we have observed a degradation of the energy resolution with pressure for the argon-methane mixture. Its physical origin is still unknown, but similar tendencies have been reported for Micromegas detectors in CF4 [14] and Xe-TMA [15].
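For illustration only, the exponential gain-voltage dependence expected from the Rose-Korff model can be fitted as below; the voltage-gain points are invented placeholders, not the measured curves of figure 4, and saturation from ion backflow would show up as measured points falling below the fitted exponential.

```python
# Hedged sketch: fit G(V) = A * exp(B * V), the exponential form expected
# from the Rose-Korff model in the non-saturated region. Data points are
# illustrative assumptions, not values from the paper.
import numpy as np
from scipy.optimize import curve_fit

def gain_model(v, a, b):
    return a * np.exp(b * v)

voltage = np.array([800.0, 900.0, 1000.0, 1100.0, 1200.0])  # volts (assumed)
gain = np.array([55.0, 150.0, 420.0, 1150.0, 3100.0])       # (assumed)

(a, b), _ = curve_fit(gain_model, voltage, gain, p0=(1e-2, 1e-2))
print(f"G(V) = {a:.3g} * exp({b:.3g} * V)")
```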
First results of the field corrector
The effect of the umbrella field corrector (figure 2, top-right) was studied by calibrating the SPC alternately from the lateral bore and the bottom bore (blue stars in figure 2, left) in Ar+5% methane at pressures between 100 and 200 mbar. The gain increased when negative voltages were applied to the umbrella in both cases, but no maximum was found. In terms of energy resolution (figure 6, left), we observed a clear improvement for lateral calibrations when the field corrector was set to negative values, reaching an optimum at a ratio of −0.2. However, the energy resolution slightly degraded for bottom calibrations. As a compromise, we fixed the ratio of umbrella-to-ball voltages to −0.2 when generating gain curves. Then, we replaced the lateral cap with an aluminized mylar window of 3.5 µm thickness and verified that the gain was uniform in all radial directions, because the vessel's fluorescence lines in a long run (see figure 1) were clearly defined.
The absolute gain was a factor 5 higher in this configuration, i.e., a lower voltage had to be applied to reach the same gain. Apart from that, the degradation observed in the energy resolution at high gains appeared at higher values (figure 6, right), which allowed us to reach an energy threshold below 1 keV (figure 1, right). This value is still far from the 0.1 keV reported in [4] because the DAQ used in this work has no trigger optimization. We expect to implement this type of electronics in future upgrades. The best value for the energy resolution was 9.5% FWHM at 22.1 keV for 100 and 150 mbar, similar to the configuration without the field corrector.
Conclusions and prospects
The Spherical Proportional Counter (SPC) is a novel type of radiation detector, with a low energy threshold and good energy resolution. This detector has a wide range of applications like Dark Matter searches and neutrino or neutron detection. To increase the sensitivity of this detector, the scalability of its good properties is crucial. For this purpose, we present the characterization of a 1-meter-diameter SPC in two argon-based mixtures and for pressures up to 1250 mbar. We have studied two rod configurations: without and with an umbrella field corrector.
For the first configuration, gains as high as 2 × 10³ have been reached. The gain curves show a saturation effect at high values, which seems to be correlated with a degradation in energy resolution and has been attributed to ion backflow. The best values obtained for the energy resolution at 22.1 keV are 9% FWHM in Ar+5% methane at 150 mbar and 11% FWHM in Ar+2% isobutane at 200-400 mbar, respectively, for gains of (0.2-1.0) × 10³ and (0.1-0.2) × 10³. The energy resolution also degrades at high pressure, but its physical origin is still unknown. In the second case, the umbrella field corrector has made the detector's gain uniform in all radial directions. The best value for the energy resolution is 9.5% FWHM at 100-150 mbar. The energy resolution also degrades at high gains, but at higher values than in the first case, which has allowed us to reduce the energy threshold below 1 keV. The new electrode is now being characterized for pressures up to 1250 mbar.
In the near term, we plan to upgrade the acquisition system to further reduce the energy threshold and to study other light gases like neon and helium. Our studies with these gases can be applied in the project SEDINE, developed by part of the NEWS network. This project has been operating a radiopure copper spherical vessel at the Modane Underground Laboratory (LSM) since the end of 2012, and aims at building a sphere of 2 meters in diameter filled with a light gas at 10-20 bar for Dark Matter searches. This future experiment may reach sensitivities near the DAMA signal for WIMP masses as low as 1 GeV if the background level is around 10⁻² keV⁻¹ kg⁻¹ day⁻¹ and the energy threshold is kept below 0.1 keV at high pressures. For further details, the reader is referred to [6,5]. | 2015-01-07T12:17:34.000Z | 2015-01-07T00:00:00.000 | {
"year": 2015,
"sha1": "f927da72b173654922e355bf2046fea961274385",
"oa_license": "CCBYNCSA",
"oa_url": "https://pos.sissa.it/213/162/pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "3f70a9effb3b2d699bb3f3e82f1c11d9916ed2a2",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
203884973 | pes2o/s2orc | v3-fos-license | The First Case of Short-Spiked Canarygrass (Phalaris brachystachys) with Cross-Resistance to ACCase-Inhibiting Herbicides in Iran
The weed Phalaris brachystachys Link. severely affects winter cereal production. Acetyl-CoA Carboxylase (ACCase)-inhibiting herbicides are commonly used to control this weed in wheat fields. Thirty-six populations with suspected resistance to ACCase-inhibiting herbicides were collected from wheat fields in the Golestan Province in Iran. A rapid test performed in Petri dishes and whole-plant dose-response experiments were conducted to confirm and investigate the resistance level of P. brachystachys to ACCase-inhibiting herbicides. The seed bioassay results showed that 0.02 mg ai L⁻¹ clodinafop-propargyl (CP) and 1.36 mg ai L⁻¹ of the diclofop-methyl (DM) solution were the optimal amounts for reliably screening resistant and susceptible P. brachystachys populations. In the whole plant bioassay, all populations were found to be resistant to CP, with resistance ratios ranging from 2.7 to 11.6, and all of the CP-resistant populations exhibited resistance to DM. Fourteen populations showed low resistance to cycloxydim, and thirteen of these populations were also 2-fold resistant to pinoxaden. The results showed that DM resistance in some P. brachystachys populations is likely due to their enhanced herbicide metabolism, which involves Cytochrome P450 monooxygenases, as demonstrated by the indirect assay. This is the first report confirming the cross-resistance of ACCase-inhibiting herbicides in P. brachystachys in Iran.
Introduction
Wheat (Triticum aestivum L.) is one of the most important crops in Iran. Approximately 52% of the arable land in Iran is used for wheat cultivation, with a 23% yield reduction caused by weeds [1]. The annual Poaceae species short-spiked Canarygrass (Phalaris brachystachys Link) is a common and troublesome weed in winter cereals in Mediterranean countries [2]. This is a vigorous and prolific weed that can significantly reduce wheat and barley yields and has been shown to decrease wheat yield by 16 to 60% [3,4]. Currently, P. brachystachys is found in the northern part of Iran, where it infests crops during autumn and winter [5].
The use of herbicides is the most efficient and economical means of controlling grass weeds, and several ACCase inhibitors have been registered in Iran in the last three decades [6]. The target site of these herbicides is Acetyl-CoA Carboxylase (ACCase; EC 6.4.1.2), a key enzyme that catalyzes the primary step in fatty acid biosynthesis [7]. ACCase-inhibiting herbicides are classified into three major families: aryloxyphenoxypropionates (APP), cyclohexanediones (CHD), and phenylpyrazolines (PPZ).

Seeds of thirty-seven suspected resistant P. brachystachys populations were collected from winter wheat fields of the Golestan province in Iran during the spring of 2015, 2016 and 2017 (four, twenty-nine and four populations each year, respectively). Additionally, one susceptible population was collected in 2016 from a field in the same region that had never been treated with herbicides. Seeds from at least 15 plants that had reached physiological maturity were randomly collected by hand and bulked. The seeds were air dried and stored in paper bags at room temperature until used in the experiments. A global positioning system unit was used to take latitude and longitude measurements for each field, and their locations were mapped (ArcGIS 9.2) (Figure 1). Information regarding the collection position of each population is shown in the Supplementary Materials.
Seed Bioassay
Dose-response experiments were conducted using 9-cm diameter plastic Petri dishes. After breaking seed dormancy (seeds were immersed in sulfuric acid (98%) for 3 min, then kept in Petri dishes containing moist filter paper with 5 mL distilled water for 72 h in a refrigerator at 4 °C in the dark), five germinated seeds were placed on two sheets of filter paper. Each Petri dish was considered to be one replicate, and the experiment was conducted with three replications for each herbicide and each population. The experiment was repeated three times. A 5-mL aliquot of an aqueous solution of the commercial formulation of DM was applied at 0, 0.1, 0.5, 1, 2, 4, 8, 16 and 32 mg ai L⁻¹, and CP was applied at concentrations of 0, 0.005, 0.02, 0.04, 0.08, 0.32, 0.64, 1.28, 5.12, 20.46 and 40.96 mg ai L⁻¹. For each population, a control treatment (without herbicide solution) with 5 mL distilled water was also included. The Petri dishes were then placed in an incubator at 25 °C. After seven days, the coleoptile shoot lengths of all seedlings were measured and expressed as a percentage of the coleoptile shoot length of the control [13].
Whole Plant Dose-Response Assay
The experiment was repeated three times and arranged in a completely randomized design with three replications for the P. brachystachys populations with different doses of clodinafop-propargyl, diclofop-methyl, cycloxydim and pinoxaden. Seed dormancy was broken as previously described, and the germinated seeds were then sown in 11-cm plastic pots filled with a mixture (1:2 v/v) of peat and soil and placed in a greenhouse. The pots were irrigated daily as required. Each replicate pot contained five plants. Four weeks after sowing, at the three-to-four-leaf stage (BBCH scale [13,14]), herbicides were applied at different rates using a calibrated sprayer with a flat-fan nozzle (TeeJet 8001) delivering 250 L ha⁻¹ of spray solution at 200 kPa. One untreated control (without herbicide application) was included for each population. Detailed information on the herbicides used for the dose-response assay is presented in Table 1. The plants were harvested and oven dried for 48 h at 70 °C 28 days after herbicide application, and the dry-weight data were recorded. The data were expressed as a percentage of the untreated control. It should be noted that the populations with a high resistance factor based on the DM herbicide results were used in the dose-response assays with the pinoxaden and cycloxydim herbicides.
Growth Assay in Combination with CytP450 Inhibitor
Seedlings of different DM-resistant (based on resistance factor) and S populations at the 3-4 leaf stage were treated with DM at a rate of 300 g ai ha⁻¹ with and without amitrole (AM) to study whether metabolism was responsible for resistance in the resistant populations. AM was applied at 13.1 g ha⁻¹ 24 h prior to the application of DM. Twenty-one days after application, the plants were harvested and the shoot fresh weight was measured. The experiment was repeated three times and included ten replicates per repetition.
Statistical Analysis
The data obtained from the Petri dish and pot experiments were fitted to a four-parameter nonlinear log-logistic regression model in the "R" statistical software using the "drc" package [14].
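The model equation itself (Equation (1)) appears to have been lost during extraction; consistent with the parameter description that follows, it is presumably the standard four-parameter log-logistic form implemented as LL.4 in the drc package:

```latex
y = c + \frac{d - c}{1 + \exp\left\{ b \left[ \log(x) - \log(e) \right] \right\}} \qquad (1)
```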
In the model, y represents the shoot dry weight or coleoptile shoot length (percentage of the untreated control) at a herbicide dose of x; c and d denote the lower and upper limits, respectively; b is the slope of the response curve at e; and e denotes the GR50 (or GR90). The effective concentration of herbicide that caused 50% inhibition of the coleoptile shoot length (EC50) was calculated from the log-logistic regression model (Equation (1)), which allowed us to screen resistant and susceptible populations according to the EC50. From each model, the effective herbicide doses which inhibited plant growth by 50 and 90% (GR50 and GR90) with respect to the untreated control were determined for each population. The resistance factor (RF), which is the ratio of the EC50, GR50 or GR90 of a resistant population to the EC50, GR50 or GR90 of the susceptible population, was used as an index to compare the resistance levels of the tested populations [15].
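A minimal re-implementation sketch of this fit in Python (the study itself used R/drc) shows how GR50 and the resistance factor fall out of the fitted parameters; the dose-response values below are illustrative assumptions, not data from the study.

```python
# Fit the LL.4 model above to two dose-response series and derive
# GR50 (parameter e) and RF = GR50(resistant) / GR50(susceptible).
import numpy as np
from scipy.optimize import curve_fit

def ll4(x, b, c, d, e):
    """Four-parameter log-logistic model (drc LL.4 form)."""
    return c + (d - c) / (1 + np.exp(b * (np.log(x) - np.log(e))))

dose = np.array([0.1, 0.5, 1, 2, 4, 8, 16, 32])        # mg ai L-1 (assumed)
resp_s = np.array([98, 90, 60, 35, 18, 8, 4, 2])       # susceptible, % of control
resp_r = np.array([100, 99, 95, 85, 62, 38, 15, 6])    # resistant, % of control

gr50 = {}
for name, resp in {"S": resp_s, "R": resp_r}.items():
    popt, _ = curve_fit(ll4, dose, resp, p0=(1, 0, 100, 2), maxfev=10000)
    gr50[name] = popt[3]                                # parameter e = GR50
print(f"GR50(S) = {gr50['S']:.2f}, GR50(R) = {gr50['R']:.2f}, "
      f"RF = {gr50['R'] / gr50['S']:.1f}")
```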
Seed Bioassay
A difference in the shoot length of the populations was visible after 7 days of incubation. The resistance factors and estimated nonlinear regression parameters for the applied herbicides are shown in Table 2. The four-parameter log-logistic model provided a good fit to the data (p < 0.001; R² > 0.96). The results of the Petri dish assays showed that the coleoptile length of the seedlings decreased following a sigmoidal trend and that the decrease in the shoot length of the S population was observed at lower concentrations than for the other populations. This confirmed that the susceptible population was more sensitive to the herbicides than the other populations. Regarding DM, the estimated EC50 was 1.36 mg ai L⁻¹ for S, while for the other populations it ranged between 1.86 and 6.30 mg ai L⁻¹ (Table 2). In the Petri dish assays, with increasing CP concentration, different responses were consistently observed, and all populations had shorter coleoptiles compared to their untreated controls. While 0.02 mg ai L⁻¹ of CP inhibited 50% of the shoot length of the S population, for the other populations this value ranged from 0.07 to 0.29 mg ai L⁻¹ of CP, and the resistance factors ranged from 2.77 to 10.27 (Table 2).
Dose-Response Assay
We assessed representatives of all of the different groups of graminicides: clodinafop-propargyl, diclofop-methyl, pinoxaden and cycloxydim. The results showed that the susceptible population was considerably controlled by the two APP herbicides. The other populations showed resistance to the APP herbicides, but the level of resistance varied substantially. The S population was inhibited by 50% with only 24.22 g ai ha⁻¹ of CP, compared with the recommended field rate of 80 g ai ha⁻¹. The other populations were resistant to the CP field dose, with resistance levels ranging from 2.7- to 11.6-fold based on the GR50 values (Table 3). Among the 36 populations studied, the Kr15 and Kr16 populations had the largest resistance factors based on their GR50 values (Table 3). The estimated GR50 values indicated different resistance factors (RF = GR50 R/GR50 S) to DM for the different populations. The S population was inhibited by 50% with only 279.57 g ai ha⁻¹ of DM, while the amount required to reach GR50 for the other populations was between 563.14 and 3059.90 g ai ha⁻¹. The estimated GR90 for the S population was 866.63 g ai ha⁻¹, whereas the GR90 values of the other populations varied from 2934.52 to 22929.67 g ai ha⁻¹ (Table 4). For cycloxydim (CHD family), all populations were found to exhibit low resistance levels (Table 5). The concentration of cycloxydim that led to a 50% inhibition of shoot growth in the S population was 46.35 g ai ha⁻¹, and the cycloxydim resistance factor for all of the resistant populations was between 2-fold and 3-fold. The lowest GR90 value for the resistant populations was observed in AL21 (295.57 g ai ha⁻¹), whereas a higher GR90 value was recorded in Kr15 (425.97 g ai ha⁻¹) (Table 5). Similarly, the pinoxaden herbicide was found to significantly reduce the growth of all the resistant populations, and low resistance levels were recorded for this ACCase-inhibiting herbicide in 13 populations. The pinoxaden GR50 values of the resistant populations were approximately two times higher than for the S population, and a large reduction of shoot dry weight was found in all the resistant populations (Table 6). (Table footnotes: b = slope of the curve around the dose giving a 50% response; SE = standard error; R² = 1 − (sum of squares of the regression/corrected total sum of squares); p-value = probability level of significance of the resistance factor; GR50 = herbicide rate required for a 50% dry weight reduction compared with the non-treated control; RF50 = resistance factor, calculated as GR50 resistant/GR50 sensitive; GR90 = herbicide rate required for a 90% dry weight reduction compared with the non-treated control; RF90 = resistance factor, calculated as GR90 resistant/GR90 sensitive.)
Growth Assays in Combination with CytP450 Inhibitor
The responses of the P. brachystachys populations to DM, with and without amitrole, are shown in Table 7. The present study found that the combination of DM with amitrole was slightly more effective in the AL33, G04 and Kr15 populations than DM alone, and pretreatment with amitrole significantly inhibited the growth of these populations compared to those without amitrole. However, the fresh weight of the S population did not vary and was independent of the amitrole treatment.
Discussion
The seed bioassay method for determining resistant and susceptible populations has been utilized previously [16-18]. This method is regarded as the most rapid and simplest way to screen resistant and susceptible populations. In this study, this method was applied to the P. brachystachys populations. In preliminary tests, each APP herbicide was tested. It is necessary to detect resistance as early as possible to avoid the costly consequences of resistance spread. Seed bioassays have been shown to be a useful and accurate tool for screening a large number of suspected resistant populations. The identification of concentrations that are effective at separating resistant and susceptible populations is important not only for the rapid diagnosis of potential resistance but also for the screening of seeds used for experiments. From this research, it was determined that the seed bioassay could be developed into a feasible method to identify resistant populations of P. brachystachys. This method has been used to test resistance to ACCase inhibitors in barnyardgrass (Echinochloa crus-galli) [19] and Johnsongrass (Sorghum halepense (L.) Pers.) [20]. Other researchers have described a seed bioassay to detect grass weeds resistant to ACCase-inhibiting herbicides [21].
The dose-response assays confirmed that the P. brachystachys populations were resistant to the DM and CP herbicides. The seed assay also confirmed the resistance to APP herbicides. According to the results of both the whole-plant and seed bioassays, the resistance factors of most of the populations to CP were considerably higher than those to DM. No precise history of herbicide application in the sampled fields was available. However, Golestan is one of the most important wheat-producing provinces in Iran, and the use of these herbicides has been the main approach to control weeds in wheat fields. The high percentage of resistance to CP and DM was expected because these two herbicides share a common basic molecular structure [22], and both have been extensively used to control grassy weeds during wheat cultivation, which is the most frequently grown crop in the area. These results indicate that resistance to these herbicides can be attributed to the wheat monoculture in the sampling areas along with the repeated use of these herbicides over a long period of time [23]. Resistance to APP herbicides has been reported in littleseed canarygrass (Phalaris minor Retz.) [1]. A level of cross-resistance to APP herbicides in Avena spp. has also been reported [24]. Notably, most populations were highly resistant to APP herbicides, while their response to PPZ varied. Resistance to APP herbicides is not necessarily associated with resistance to pinoxaden. The AL21 population, which showed high resistance to APP herbicides, was susceptible to pinoxaden. However, the AL04 and Kr15 populations expressed high resistance to CP, with RF values of 9.4 and 11.5, respectively (Table 6). These populations also expressed high resistance to DM, of the same APP chemical class, with RF values of 8 and 6, respectively, but low cross-resistance was observed to cycloxydim (RF of 2.74 and 3.19, respectively) and pinoxaden (RF of 2.09 and 3.41, respectively) (Table 5). The reduced control of some P. brachystachys populations by pinoxaden indicates cross-resistance to this herbicide, despite the fact that this herbicide has only been used in Iran for the last few years. The whole-plant dose-response assays showed that the cross-resistance levels to ACCase inhibitors varied among the P. brachystachys populations. APP herbicides presented the highest RF values, while the cross-resistance corresponding to the CHD and PPZ herbicides was low.
The differences in the cross-resistance patterns in these resistant populations indicate that resistance evolved independently and that each resistant population has likely been exposed to a different selection pressure. Additionally, the differences indicate that more than one resistance mechanism is likely involved in these P. brachystachys resistant populations.
To test the hypothesis that enhanced DM metabolism is conferred by CytP450, a known CytP450 inhibitor, amitrole, was tested. Amitrole has long been used as an indicator of the involvement of P450 in metabolic resistance to ACCase herbicides [25-27]. The results of this experiment suggest that CytP450 monooxygenase-mediated metabolism could be present in these populations and contribute to the resistance phenotype. These results indicate that CytP450 is involved in DM resistance in the G04, Kr15 and AL33 populations of P. brachystachys, and metabolic resistance could be the mechanism responsible for resistance in these populations (Table 7).
The consecutive use of different herbicides with the same mode of action in wheat fields in the Golestan province has led to the selection of resistant P. brachystachys individuals, and their numbers have increased within the populations. Today, resistant populations are established in several parts of the province, and if the current weed/crop management methods do not change, increasing selection pressure will result in further infestation by resistant populations. Crop rotation and, consequently, different weed management methods would be the best way to control resistant P. brachystachys populations in this region. The results of this study clearly indicate that pinoxaden and cycloxydim have become ineffective at controlling some of the APP-resistant P. brachystachys populations, and these herbicides should not be considered as alternative herbicides for the effective control of resistant populations. Our results regarding this species are in agreement with the results reported by others regarding the different cross-resistance patterns of different weeds resistant to the three groups of ACCase-inhibiting herbicides [18,24]. Italian ryegrass (Lolium multiflorum) with DM resistance was also cross-resistant to pinoxaden [28]. Additionally, a level of cross-resistance to APP, CHD and PPZ has been found in bristly dogstail grass (Cynosurus echinatus) populations [27]. Hood canarygrass (Phalaris paradoxa) populations have also been reported to have cross-resistance to the APP, CHD and PPZ herbicides [7]. The insensitivity of the ACCase target site is the most common mechanism of resistance to ACCase-inhibiting herbicides [28]. However, resistance likely did not develop via a single mechanism; rather, multiple mechanisms, including enhanced metabolism, an altered target site, and other uncharacterized mechanisms, may be involved [29].
Conclusions
This is the first study confirming cross-resistance to the aryloxyphenoxypropionate, cyclohexanedione and phenylpyrazoline herbicides in P. brachystachys in Iran. The CytP450 monooxygenase data in the present study indicate that a metabolic mechanism is probably involved in conferring cross-resistance among ACCase-inhibiting herbicides in some P. brachystachys populations. However, the resistance levels cannot be explained by herbicide metabolism to non-toxic forms alone, and additional mechanisms should be studied. ACCase enzyme activity and gene analyses are needed to identify the resistance mechanisms in these populations. A goal for further research is the identification of the resistance mechanisms involved in resistance to ACCase-inhibiting herbicides; we plan to elucidate these mechanisms by biochemical and molecular methods. Meanwhile, based on the results of the present study, resistance to ACCase inhibitors in P. brachystachys from Iran has been identified. | 2019-09-16T18:27:22.170Z | 2019-07-14T00:00:00.000 | {
"year": 2019,
"sha1": "73c91a4285d18fd0ff54dfe48ae7114f3cc29816",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4395/9/7/377/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "ff729bb4669b82e418625d4f41b68a8516ae7ca1",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
230587779 | pes2o/s2orc | v3-fos-license | Fate of Trace Organic Compounds in Hyporheic Zone Sediments of Contrasting Organic Carbon Content and Impact on the Microbiome
The organic carbon in streambed sediments drives multiple biogeochemical reactions, including the attenuation of organic micropollutants. An attenuation assay using sediment microcosms differing in the initial total organic carbon (TOC) revealed higher microbiome- and sorption-associated removal efficiencies of trace organic compounds (TrOCs) in the high-TOC compared to the low-TOC sediments. Overall, the combined microbial and sorption-associated removal efficiencies of the micropollutants were generally higher than by sorption alone for all compounds tested except propranolol, whose removal efficiency was similar via both mechanisms. Quantitative real-time PCR and time-resolved 16S rRNA gene amplicon sequencing revealed that higher bacterial abundance and diversity in the high-TOC sediments correlated with higher microbial removal efficiencies of most TrOCs. The bacterial community in the high-TOC sediment samples remained relatively stable against the stressor effects of TrOC amendment compared to the low-TOC sediment community, which was characterized by a decline in the relative abundance of most phyla except Proteobacteria. Bacterial genera that were significantly more abundant in amended relative to unamended sediment samples, and thus associated with biodegradation of the TrOCs, included Xanthobacter, Hyphomicrobium, Novosphingobium, Reyranella and Terrimonas. The collective results indicated that the TOC content influences the microbial community dynamics and associated biotransformation of TrOCs as well as the sorption potential of the hyporheic zone sediments.
Introduction
Wastewater-derived trace organic compounds (TrOCs) such as pharmaceuticals and personal care products are frequently detected in receiving rivers due to their inefficient removal by most conventional treatment processes [1,2]. Despite occurring in trace concentration ranges (ng to µg L⁻¹), their persistence and accumulation are of ecotoxicological concern [3]. However, attenuation of such compounds via microbial transformation and sorption processes has been reported in the hyporheic zone, the saturated sediment directly beneath and lateral to the stream [2,4-6]. Both attenuation processes are significantly influenced by the organic matter content in the sediment, since organic carbon fuels multiple TrOC-coupled biogeochemical reactions [7] as well as being the main sorbent for organic chemicals [8].
In impacted rivers and streams, most of the organic carbon derived from wastewater effluents, decomposing leaf litter and macrophytes is deposited onto the streambed sediment [9,10]. The upper section of the sediment, or benthic zone, as the primary contact point of such deposits, has a higher concentration of organic carbon than subjacent layers [6,7]. Consequently, most streambed sediments of receiving rivers are characterized by gradients in the organic carbon content along the depth profile. This bioavailable total organic carbon (TOC) is considered a major limiting factor for microbial metabolism [11]. As bacteria dominate the microbial communities in streambed sediments [12-16], bacterial populations, turnover and metabolism are generally higher in the surface sediment layer, with corresponding mineralization rates decreasing exponentially with depth [6,17-19]. Additionally, as TOC is the main sorbent for organic chemicals, the decline in TOC content with increasing depth corresponds to a reduced TrOC sorption potential of the sediment [16].
As rivers continue to be impacted by a wide range of emerging TrOCs, the influence of the hyporheic zone sediment TOC content on their removal becomes increasingly important. We hypothesized that hyporheic zone sediments differing in the TOC content along the depth profile host distinct microbiomes and exhibit variable TrOC removal capacities. We investigated the removal efficiency of a set of 13 TrOCs routinely discharged by a wastewater treatment plant (WWTP) using impacted hyporheic zone sediments differing in the initial TOC content. The compounds included pharmaceuticals from various pharmacological classes including nonsteroidal anti-inflammatory drugs (NSAIDs; diclofenac, ibuprofen, ketoprofen and naproxen), beta-blockers (metoprolol, propranolol), cholesterol-lowering agents (bezafibrate, clofibric acid), antihypertensive drugs (furosemide, hydrochlorothiazide), anticonvulsant (carbamazepine), an artificial sweetener (acesulfame), and a corrosion inhibitor (benzotriazole).
Our objectives were to (i) determine TrOC removal efficiencies in hyporheic zone sediments differing in initial TOC concentrations via microbial transformation and sorption mechanisms, (ii) assess the response of the indigenous bacterial communities in the sediments differing in TOC concentrations to TrOC amendment, and (iii) hence identify potential bacterial TrOC degraders. To address our aims, we (i) performed a TrOC attenuation assay in biotic and abiotic batch microcosms, and (ii) characterized the response of the indigenous bacterial community using time-resolved high-throughput sequencing of the 16S rRNA genes and 16S rRNA.
Study Site and Sampling
Sediment samples were collected from a section of the River Erpe, an urban lowland stream in Berlin, Germany, located approximately 0.7 km downstream of the Muenchehofe WWTP effluent outlet. The stream receives 60-80% of its discharge as effluents [20]. In June 2016, the site was selected for a comprehensive study on the fate of TrOCs in the hyporheic zone. The sediments at the sampling site were densely covered by macrophytes, hence minimizing light exposure of the surface sediment [20]. Preliminary analysis of the nutrient species across the sediment profile indicated that the upper 30 cm was oxic [6]. The sediment was also virtually homogenous up to about 35 cm depth and consisted mainly of sand (>50%), silt and gravel [21]. The sediment TOC concentration decreased with increasing depth. The upper layer (0-10 cm), hereafter referred to as the surface layer, and the subjacent layer (>10 cm depth), hereafter referred to as the subsurface layer, contained 8.7% and 3.2% TOC, respectively [5], which is within the typical range of TOC found in temperate streambed environments (2.0-33%; average: 8.5%; [22]). Three sediment cores down to 20 cm sediment depth were collected using 6-cm-diameter sediment corers (Uwitec, Mondsee, Austria). Grab samples of the surface water were also collected at the same location as the sediment cores and stored in sealed bottles. The core samples were then transferred to the laboratory and sectioned into 10-cm intervals. Sediment samples from depths of 0-10 cm and 10-20 cm from the three replicate cores were manually homogenized in sterile plastic containers using alcohol-sterilized spatulas and processed aerobically under standard sterile lab conditions. A portion (t0 samples) from each sectioned depth was stored at −80 °C for subsequent extraction of nucleic acids.
Chemicals and Standards
Native and isotope-substituted internal standards of the test compounds (diclofenac, ibuprofen, ketoprofen, naproxen, metoprolol, propranolol, bezafibrate, clofibric acid, furosemide, hydrochlorothiazide, carbamazepine, benzotriazole and acesulfame) were purchased from Toronto Research Chemicals Inc. (North York, ON, Canada). Liquid chromatography-mass spectrometry (LC-MS) grade methanol was purchased from Merck (Darmstadt, Germany), analytical grade acetic acid (≥99.7%) from Sigma-Aldrich (Darmstadt, Germany), and LC-MS grade water was generated with a Milli-Q water purification system (Merck, Darmstadt, Germany). Stock and working solutions were prepared as reported in [4].
Microcosm Setup
Three sets of microcosms per sediment layer were set up in 5-mL glass bottles using duplicate samples from each of the three sediment cores, each bottle containing 2 g of wet sediment and 2 mL of river water. In total, nine microcosms were set up per sediment layer (three sets × sediments from three cores). In two of the three sets, the river water was amended with approximately 500 µg L⁻¹ of each of the 13 test compounds. All 13 test TrOCs occur at the sampling site, and TrOCs typically range from 0.1 to 200 µg L⁻¹ in surface and hyporheic pore waters [4]. The TrOC concentrations applied in our microcosms were higher, but in the same order of magnitude as the concentrations observed in situ [4]. Such high concentrations were used to allow for an enrichment of potential TrOC degraders, as previously demonstrated [23,24]. To account for sorption, one of the setups was treated with 0.1% sodium azide to suppress bacterial activity. The third set of microcosms served as an unamended biotic control and was incubated without the test compounds. The slurries were then thoroughly mixed and the bottles capped. All setups were incubated at 18 °C with shaking at 100 rotations per minute for the duration of the test to facilitate water infiltration into the sediments. Incubation was in the dark to mimic the light-limited conditions of sediments covered by macrophytes and to minimize potential photolysis. The microcosms were aerated daily under sterile conditions. After 15 days of incubation, three replicates from each treatment were destructively sampled, and the sediment from the biotic setups was stored at −80 °C for subsequent nucleic acid extraction. At the end of the test (65 days), the supernatant was withdrawn for LC-MS analysis of the test compounds and the sediment was used for nucleic acid extraction.
Chemical Analysis
Water samples were analyzed using a direct-injection ultra-high-performance liquid chromatography method coupled to tandem mass spectrometry (UHPLC-MS/MS), following a standard protocol established previously [4]. Frozen water samples (−20 °C) were equilibrated to room temperature, and volumes of 800 µL were combined with 195 µL methanol and the isotope-labeled internal standard mix in 5 µL methanol. The mixture was then vortexed and filtered using Filtropur S 0.45 µm polyethersulfone (PES) membrane syringe filters (Merck, Sarstedt, Germany) into 2 mL micro vials (Thermo Scientific, Dreieich, Germany). The UHPLC-MS/MS injection volume was 20 µL. A blank sample to control for carry-over and a quality control standard (QC, compound concentration 0.5-3 µg L−1) were injected every 10-15 samples. The acquired MS data were further processed using Thermo Scientific Xcalibur 3.1.66.10 software and quantified using the internal standards method [4]. The removal efficiency of the test compounds was calculated as a percentage of the initial spiked concentration.
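As a minimal illustration of the removal-efficiency calculation described above (the variable names and concentrations below are hypothetical, not study values), the computation reduces to:

```python
def removal_efficiency(c_initial, c_final):
    """Removal efficiency (%) relative to the initial spiked concentration."""
    return 100.0 * (c_initial - c_final) / c_initial

# Illustrative values only; the microcosms were spiked at ~500 µg/L per compound.
print(f"{removal_efficiency(500.0, 120.0):.1f}% removed")  # -> 76.0% removed
```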
Nucleic Acid Extraction, Quantification, and Reverse Transcription
DNA and RNA were co-extracted following the rapid protocol for the extraction of total nucleic acids from environmental samples [25]. Pure DNA and RNA fractions were subsequently obtained through enzymatic digestion with DNase-free RNase and RNase-free DNase (Promega, Mannheim, Germany), respectively, according to the manufacturer's instructions. DNA and RNA concentrations were determined with Quant-iT® PicoGreen DNA and RiboGreen RNA assay kits (Invitrogen, Karlsruhe, Germany), respectively, using a Tecan Infinite® 200 PRO multiplex plate reader (BioTek, Bad Friedrichshall, Germany). The RNA was subsequently reverse transcribed into complementary DNA (cDNA) using random hexamer primers and Superscript™ IV Reverse Transcriptase (Invitrogen, Mannheim, Germany) following the manufacturer's protocol.
Quantitative Real-Time PCR
The bacterial 16S rRNA gene and 16S rRNA copy numbers were quantified using quantitative real-time polymerase chain reaction (qPCR). The nucleic acids were first diluted 100-fold to reduce potential inhibition of qPCR by coextracted PCR-inhibiting compounds and confirmed inhibition-free at such dilutions using spiking assays as described in Zaprasis et al. [26]. The qPCR reaction mixture consisted of 10 µL SensiMix Plus SYBR Green and Fluorescein, 1.2 µL 50 mM MgCl2 (Bioline GmbH, Luckenwalde, Germany), 150 ng/µL bovine serum albumin, 0.2-1.6 pM of each primer (341F/534R) (Biomers, Ulm, Germany), 5 µL template (DNA or cDNA) and nuclease-free water (Thermo Fischer Scientific, Dreieich, Germany). The thermal cycling program comprised initial denaturation at 95 °C for 10 min, and 35 cycles of denaturation at 94 °C for 30 s, primer annealing at 55.7 °C for 40 s and elongation at 72 °C for 40 s. The final elongation was at 72 °C for 5 min.
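The article does not state how quantification cycles were converted to copy numbers; a common approach, sketched below under the assumption of a linear standard curve of Cq against log10(copies), is:

```python
def copies_from_cq(cq, slope, intercept, dilution_factor=100):
    """Convert a quantification cycle (Cq) to copies per reaction, using a
    linear standard curve Cq = slope * log10(copies) + intercept, then
    correcting for the 100-fold template dilution used to avoid inhibition."""
    log10_copies = (cq - intercept) / slope
    return (10 ** log10_copies) * dilution_factor

# Hypothetical standard-curve parameters; a slope of -3.32 corresponds to
# 100% amplification efficiency.
print(copies_from_cq(cq=22.5, slope=-3.32, intercept=38.0))
```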
Bacterial 16S Amplicon Sequencing
Sequencing of the bacterial 16S rRNA genes and 16S rRNA was performed on the MiSeq® Illumina® platform at LGC Genomics GmbH (Berlin, Germany). The initial PCR amplification step (protocol kindly provided by LGC) consisted of 1× MyTaq buffer containing 1.5 units MyTaq DNA polymerase (Bioline, London, UK) and 2 µL of BioStabII PCR Enhancer (Sigma-Aldrich, Darmstadt, Germany), 15 pmol each of forward primer U341F and reverse primer U806R [27], and 5 ng of DNA/cDNA per sample in nuclease-free water (Thermo Fischer Scientific, Dreieich, Germany) in a final 20 µL volume. For each sample, the forward and reverse primers had the same 10 nt barcode sequence. The PCR was carried out using the following thermal profile: initial denaturation for 2 min at 96 °C, followed by 30 cycles of 96 °C for 15 s, 50 °C for 30 s and elongation at 70 °C for 90 s. About 20 ng of amplicon DNA per sample was pooled for up to 48 samples with different barcodes. If needed, PCRs showing low yields were further amplified for five cycles. The amplicon pools were purified with one volume of Agencourt AMPure XP beads (Beckman Coulter, CA, USA) to remove primer dimers, followed by an additional purification on MinElute columns (Qiagen, Hilden, Germany). About 100 ng of each purified amplicon pool DNA was used to construct Illumina sequencing libraries using the Ovation Rapid DR Multiplex System 1-96 (NuGEN, Leek, The Netherlands). Illumina libraries were pooled and size selected by preparative gel electrophoresis. Sequencing was performed on an Illumina MiSeq using V3 chemistry (Illumina, CA, USA), yielding 300 base paired-end reads.
Raw amplicon sequences were analyzed by demultiplexing all libraries using the Illumina bcl2fastq 1.8.4 software. The reads were sorted by amplicon inline barcode, corresponding to independent samples, followed by trimming of sequencing adapters and primers. Forward and reverse reads were combined using BBMerge 34.38. Using Mothur 1.35.1 [28], sequences containing ambiguous bases (Ns), homopolymer stretches of more than 8 bases, or an average Phred quality score below 33 were removed. Remaining sequences were aligned against the 16S Mothur-Silva SEED r119 reference alignment, followed by sequence subsampling. Errors in sequences were reduced by preclustering, while chimeras were eliminated with the UCHIME algorithm. This was followed by taxonomic classification of the sequences (against the Silva reference classification) and removal of sequences from other domains of life. Operational taxonomic unit (OTU) picking by clustering at the 97% identity level (using the cluster.split method) and OTU consensus taxonomic calling, integrating the taxonomic classification of the cluster member sequences, were then performed. The representative sequences of each OTU (with at least 2 observed sequences) were queried against a filtered (unknown and unclassified sequences removed) version of the Ribosomal Database Project release 11.4 reference, and a summary table with taxonomy and alignment details for each OTU representative sequence was generated. OTU relative abundance data, filtered for low-abundance OTUs, were subsequently generated with QIIME 1.9.0 using data rarefied to the sample with the minimum number (12,805) of sequences. Note that Silva r119, used in the current study, classifies the "Betaproteobacteria" as "Betaproteobacteriales", an order of the Gammaproteobacteria. Thus, genera and higher taxonomic ranks that formerly represented "Betaproteobacteria" now belong to "Gammaproteobacteria".
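Rarefying to the smallest library (12,805 sequences here) was done in QIIME 1.9.0; the principle can be illustrated with the following Python sketch, which subsamples an OTU count vector without replacement (the counts are hypothetical, not the study data):

```python
import numpy as np

def rarefy(otu_counts, depth, seed=0):
    """Subsample an OTU count vector to a fixed sequencing depth without replacement."""
    rng = np.random.default_rng(seed)
    # Expand counts into individual sequence labels, draw `depth` of them, re-count.
    pool = np.repeat(np.arange(len(otu_counts)), otu_counts)
    kept = rng.choice(pool, size=depth, replace=False)
    return np.bincount(kept, minlength=len(otu_counts))

sample = np.array([9000, 5000, 800, 5])   # hypothetical OTU counts in one sample
print(rarefy(sample, depth=12805))
```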
Sequence data were deposited in the NCBI Sequence Read Archive under the accession number PRJNA633609.
Statistical Analyses
ANOVA and Tukey's test (p < 0.05) were used to evaluate statistically significant differences in the removal efficiencies of the test compounds, the effect of treatments on the total bacterial 16S rRNA copy numbers, and the alpha diversity indices, using PAST v3.15 [29]. Principal coordinate analysis (PCoA) plots and a two-way analysis of similarity (ANOSIM) based on the Bray-Curtis metric were used to visualize bacterial community composition and to test for significant differences among the treatments, respectively, using PAST v3.15. OTUs with significant differential abundance between treatments were identified using the DESeq2 function in R [30], performed on the non-rarefied, non-normalized datasets using Benjamini-Hochberg adjusted significance levels (p-adj < 0.05).
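The ordinations were computed in PAST; an equivalent classical PCoA on a Bray-Curtis distance matrix can be sketched in Python as follows (the OTU table is hypothetical, not the study data):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Hypothetical rarefied OTU table: 4 samples x 5 OTUs.
X = np.array([[10, 0, 3, 5, 2],
              [8, 1, 4, 6, 1],
              [0, 9, 1, 2, 8],
              [1, 10, 0, 1, 9]], dtype=float)

D = squareform(pdist(X, metric="braycurtis"))

# Classical PCoA: double-center the squared distance matrix and eigendecompose.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1]
coords = eigvecs[:, order[:2]] * np.sqrt(np.maximum(eigvals[order[:2]], 0))
print(coords)  # sample coordinates on the first two PCoA axes
```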
Depletion of TrOCs under Varying TOC Concentrations
Depletion of TrOCs in the biotic microcosm setups with initially high (8.7%) and low (3.2%) TOC concentrations varied considerably within and among test compound classes (Figure 1A). Among the NSAIDs, the removal efficiencies of ibuprofen and ketoprofen were 2-fold higher in the surface relative to subsurface samples. Diclofenac removal efficiency was only marginally higher in the surface compared to the subsurface sediment samples, while naproxen was removed entirely under both TOC conditions. Among the beta-blockers, complete removal of propranolol was observed under both TOC conditions, while metoprolol removal was only marginally higher in the surface relative to subsurface layer samples. Removal of the cholesterol-lowering agents bezafibrate and clofibric acid correlated with the TOC concentration, with significantly higher removal efficiency observed in the surface sediment layer. However, clofibric acid was relatively persistent under both TOC conditions, with less than 50% removed. Significantly higher removal also occurred for carbamazepine, benzotriazole and acesulfame, but not for furosemide and hydrochlorothiazide, in the surface relative to subsurface sediment samples.
In the abiotic setups, sorption of TrOCs to the sediment correlated with the initial sediment TOC concentration for most compounds (Figure 1B). Among the NSAIDs, the removal efficiencies of diclofenac, ketoprofen and naproxen were significantly higher in the surface compared to subsurface sediment, while that of ibuprofen was only marginally higher. Similar to the biotic setups, the removal of the beta-blockers metoprolol and propranolol was high under both TOC conditions. Propranolol was removed entirely, while metoprolol removal exceeded 80% under both high and low TOC conditions. Significantly higher removal efficiency was also observed for bezafibrate, carbamazepine and benzotriazole, but not for clofibric acid, in the surface relative to subsurface samples. Acesulfame removal did not occur in the abiotic setups under either high or low TOC conditions. On the other hand, furosemide exhibited a significantly higher removal efficiency in the subsurface relative to surface samples.
Overall, the removal efficiency of most test compounds was higher via biotic and abiotic mechanisms in the surface relative to subsurface sediment samples (Figure 1). However, a comparison between the two mechanisms revealed significantly higher removal of ibuprofen, ketoprofen, naproxen, metoprolol, bezafibrate, clofibric acid, carbamazepine, acesulfame and benzotriazole in the biotic relative to abiotic treatments of surface sediment samples. In the subsurface samples, the compounds diclofenac, ketoprofen, naproxen, metoprolol, bezafibrate, carbamazepine and benzotriazole registered significantly higher removal efficiencies in the biotic relative to abiotic samples.
The Abundance of the Total Bacterial Community
The unincubated sediment samples, i.e., t0 samples, revealed approximately 10^9 16S rRNA gene and 16S rRNA copies per gram of sediment (Figure 2). Following incubation, a marginal decline occurred in the 16S rRNA gene copies in both amended and unamended surface sediment samples, while a significant decline occurred in the subsurface samples (ANOVA, p < 0.05), compared to the corresponding t0 samples. TrOC-amended surface sediment samples registered higher 16S rRNA gene and 16S rRNA copies than the unamended samples analyzed at days 15 and 65 (Figure 2A). On the other hand, the 16S rRNA gene copies in the subsurface samples were marginally lower in amended relative to unamended samples on day 15 but higher at day 65 (Figure 2B). The 16S rRNA copies were, however, marginally higher in the amended than unamended samples at both days 15 and 65.
Diversity and Bacterial Community Structure
Surprisingly, species richness was higher in the subsurface than surface sediment samples prior to incubation (Figure 3A,B). However, significantly higher species richness was observed in the amended surface compared to subsurface samples obtained at days 15 and 65 following incubation. In contrast, higher species richness was observed in the unamended subsurface relative to surface samples at day 65.
The Shannon diversity was likewise marginally higher in the unincubated (t0) subsurface relative to surface samples (Figure 3C,D). However, in incubated samples, the surface samples exhibited higher diversity indices compared to the subsurface samples, irrespective of treatment.
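For reference, the Shannon diversity reported in Figure 3C,D is the index H' = −Σ p_i ln p_i over OTU proportions; a minimal sketch (with hypothetical counts) is:

```python
import numpy as np

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln(p_i)) over non-zero OTU proportions."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log(p)).sum()

print(shannon_index([9000, 5000, 800, 5]))  # hypothetical OTU counts in one sample
```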
PCoA plots based on 16S rRNA gene and 16S rRNA sequence data, together with ANOSIM R-values, revealed the effect of the treatments on the microbial community. R-values greater than 0.6 indicated a rather strong dissimilarity between microbial communities from different treatments and time points. In the surface sediment samples, the PCoA plots revealed distinct clustering of the bacterial community according to incubation time, while the effect of TrOC amendment was not apparent (Figure 4A,B). Consistent with these findings, the two-way ANOSIM test indicated that in the surface sediment samples, incubation time accounted significantly for the variation in the bacterial community composition (DNA: R = 0.7, RNA: R = 0.7, p < 0.02), while the effect of treatments was not apparent (DNA: R = 0.3, RNA: R = 0.2, p < 0.22). For the subsurface samples, both incubation time and TrOC amendment contributed significantly to the differences in the bacterial community composition. The clustering, however, distinctly separated along axis 1, depicting a stronger influence of incubation time than of the TrOC amendment (Figure 4C,D). The corresponding ANOSIM test further supported this observation by revealing the stronger effect of incubation time (DNA: R = 0.9, RNA: R = 0.9, p < 0.02) compared to TrOC amendment (DNA: R = 0.7, RNA: R = 0.7, p < 0.01).
Phylum-Level Taxonomic Composition
The predominant phyla in the two sediment layers at the DNA and RNA levels were Proteobacteria, Chloroflexi, Actinobacteria, Acidobacteria, Bacteroidetes and Firmicutes (Figure 5). Other phyla identified (>1% relative abundance) included Nitrospirae, Gemmatimonadetes and Chlorobi. The t0 samples indicated that only the relative abundance of the predominant phylum Proteobacteria was higher in the surface (38%) than in the subsurface layer (32%), while other phyla were similar in relative abundance in the two layers.
Family-Level Taxonomic Composition
The t0 surface and subsurface sediment samples exhibited a similar number of dominant bacterial families (>3% relative abundance) at the DNA level (Figure 6A,C). However, the surface sediment samples had a higher number of dominant families than the subsurface sediment samples at the RNA level (Figure 6B,D). These included the Proteobacteria-affiliated Hyphomicrobiaceae and Comamonadaceae; Caldilineaceae and the unclassified families JG30-KF-CM66, KD4-96, TK10 and JG30-KF-CM45 belonging to the Chloroflexi; an Acidobacterial Subgroup 6 family; and Nitrospiraceae (Nitrospirae). Among the t0 samples, some families exhibited higher relative abundances in the subsurface than in surface sediment. These included the family Anaerolineaceae at the DNA and RNA levels, Rhodobiaceae at the DNA level and Gemmatimonadaceae at the RNA level (Figure 6).
Incubated surface sediment samples amended with TrOCs exhibited increased relative abundances, compared with unamended controls, of the families Methylophilaceae, Caldilineaceae, Acidimicrobiaceae and Gemmatimonadaceae and an unclassified KD4-96 family at the DNA level (Figure 6A). At the RNA level, Methylophilaceae, Comamonadaceae, Anaerolineaceae, unclassified JG30-KF-CM45, an Acidobacteria Subgroup 6 family and Eubacteriaceae were stimulated by the TrOCs (Figure 6B). The amended subsurface sediment samples exhibited higher relative abundances than the unamended controls of the families Xanthobacteriaceae, Hydrogenophiliaceae, Rhodospirillaceae, Methylophilaceae, Rhodocyclaceae and an unclassified KD4-96 family at both DNA and RNA levels; of Hyphomicrobiaceae, Caldilineaceae and an Acidobacteria Subgroup 6 family only at the DNA level; and of Comamonadaceae, Anaerolineaceae and Peptococcaceae at the RNA level (Figure 6C,D).
Genus-Level Taxa Associated with TrOC Degrading Microbial Communities
Relative to unamended controls, specific taxa were considered enriched by the test compounds based on significant differential abundance as determined by log2 fold-change values (Table 1). Based on the 16S rRNA gene and 16S rRNA analyses, diverse taxa were enriched in response to TrOCs, including known and Candidatus genera affiliated with the phyla Proteobacteria (alpha-, delta-, gamma-) and Bacteroidetes (Sphingobacteriia and Cytophagia) (Table 1). Table 1 footnotes: (a) GenBank accession number; (b) similarity of the OTU representative 16S rRNA gene sequence to that of the closest cultured relative; (c) significant (p-adj < 0.05) log2 fold changes >0 and <0 are reported as determined by DESeq2; (d) non-significant differential abundance between treatment and unamended controls.
Influence of TOC on Biotransformation and Sorption of TrOCs
Higher microbial removal efficiencies of most test compounds in the organic-rich surface relative to the subsurface sediment samples (Figure 1A) highlight the significance of the organic carbon content in the removal of some micropollutants reaching the hyporheic zone. The organic carbon serves as a nutrient source for heterotrophic microorganisms and promotes bacterial colonization [9,11,31]. Thus, a better energy status of surface than subsurface sediment microbes might be anticipated, suggesting that surface sediment organisms are more tolerant to TrOCs and more prone to respond to TrOC amendment than their subsurface counterparts. This may explain the higher bacterial abundance and diversity detected in the incubated surface sediment samples (Figures 2 and 3). High diversity and abundance have previously been associated with enhanced biotransformation efficiency of many organic micropollutants [16,23,32-35], an observation the current study extends.
Acesulfame, hitherto considered persistent [36], has been recently reported to be biodegradable in constructed and natural environments, though the environmental parameters associated with these recent findings are not yet established [6,34,37]. In the current study, the biotransformation of acesulfame occurred only in the surface sediment suggesting the compound was likely degraded by specific taxa that were supported by the high TOC content in this layer compared to the subjacent layer. Benzotriazole, considered less biodegradable in WWTPs [38], was almost completely removed in the sediment samples. This may be attributed to the higher bacterial diversity and increased residence time in hyporheic zone sediments than in WWTPs [39,40].
While furosemide was previously considered recalcitrant to biodegradation, biotransformation in sediments has been reported recently [41], and the current study suggests the organic carbon content may influence such biotransformation. Though the majority of previous studies reported carbamazepine as relatively persistent to biodegradation [4,16,34,42,43], up to 60% removal of carbamazepine in a mixed bacterial culture has been reported [44]. In the current study, removal in the biotic microcosms was only marginally higher than in the abiotic setup, suggesting a possible contribution of both mechanisms to its removal, as carbamazepine sorbs readily to organic matter in sediment-water systems due to its relatively high log Dow of 2.8 [24].
Hydrochlorothiazide removal is attributed to photolysis and hydrolysis [45,46]. While photolysis can be excluded in the current study, the contribution of hydrolysis to its removal cannot be ruled out, and further investigations would be required. Indeed, hydrochlorothiazide removal was similar in biotic and abiotic treatments (Figure 1), suggesting abiotic removal mechanisms such as hydrolysis in addition to sorption as more relevant than biodegradation. Despite higher removal of clofibric acid in surface relative to subsurface sediment samples, the overall removal of the compound was still low (<40%) in biotic microcosms under both TOC conditions suggesting the persistence of the compound in water-sediment matrices as reported in previous studies [47,48]. Nevertheless, biodegradation contributed to its removal (Figure 1).
The complete and near-complete removal of propranolol and metoprolol, respectively, under both TOC concentrations matches previously reported patterns in two sediment types differing in TOC content [49]. Both compounds are structurally similar, a factor that may have influenced their similar interaction with TOC and the resident bacterial community. Since removal efficiencies in biotic and abiotic incubations were similar, abiotic mechanisms appear to have dominated over biodegradation here.
On the other hand, naproxen, ibuprofen and ketoprofen (NSAIDs) exhibited variable interaction with TOC where the latter two were strongly impacted by TOC content, while naproxen was not. This may be attributed to the difference in their physical-chemical properties. While the three compounds contain the carboxyl and alkyl functional groups, naproxen differs markedly from the rest by having an ether group [50], which may account for its varied interaction with TOC or resident microbial communities. Overall, biodegradation and sorption were important removal mechanisms for the NSAIDs tested.
Although sorption and abiotic removal mechanisms were minor compared to biotic ones for many TrOCs, the higher removal of the NSAIDs (diclofenac, ibuprofen, ketoprofen and naproxen), the cholesterol-lowering agents (bezafibrate, clofibric acid), carbamazepine and benzotriazole in surface relative to subsurface sediments in abiotic microcosms indicates that sorption of these compounds is influenced by the organic carbon concentration in the sediments. Such an influence of organic matter content on sorption as a removal mechanism for some organic micropollutants in sediments has been previously reported [6,16,34,41]. For some other compounds, such as the beta-blockers (metoprolol, propranolol), furosemide and hydrochlorothiazide, no correlation with TOC concentration was observed, suggesting that other factors or processes contributed to their removal. Indeed, processes such as hydrolysis have been reported as significant removal mechanisms for compounds such as furosemide [51] and hydrochlorothiazide [46]. The quality of the organic carbon sorbed to mineral particles or present as particulate organic matter likewise impacts sorption (reviewed in [52]). It is well established that the lower the O/C ratio of such organic matter, the higher its hydrophobicity and the stronger its hydrophobic interactions with dissolved organic compounds. Organic coatings of particles are important mediators of sorption, modifying the chemical and physical properties of particles relevant for sorption, e.g., charge distributions [53]. The contribution of such processes to removal in the current study is thus hypothesized. Moreover, the occurrence of TrOCs in neutral and ionizable forms further determines the type of interaction with the sediment materials, due to the influence of external factors such as pH [54]. At the prevailing pH in the microcosms (pH 7.5-8), the test compounds potentially exhibited different physicochemical properties, and their interaction with the sediment was expected to be driven by different processes, such as hydrophobic partitioning for neutral TrOCs (e.g., carbamazepine) and electrostatic interactions and surface complexation for the ionizable TrOCs (e.g., ibuprofen, naproxen, ketoprofen and diclofenac) [6,55]. In the same way, desorption of the TrOCs from the sediment into the aqueous phase may be driven by the same factors, counteracting sorption as a TrOC removal mechanism.
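The TOC dependence of sorption discussed above is consistent with the standard organic-carbon-normalized partitioning relation (a textbook relation, not a result reported in this study):

K_d = f_OC × K_OC

where K_d is the sediment-water distribution coefficient, f_OC the organic carbon mass fraction (0.087 in the surface versus 0.032 in the subsurface layer here) and K_OC the compound-specific organic carbon-water partition coefficient. All else being equal, the surface sediment would therefore be expected to sorb roughly 0.087/0.032 ≈ 2.7 times more of a hydrophobic, neutral compound.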
Interplay of TOC, Bacterial Community Structure and TrOC Removal
The difference in organic carbon content between the two sediment samples was reflected in the bacterial community structure and TrOC removal dynamics. The taxonomic composition of the bacterial community at the phylum level remained relatively constant throughout the incubation in the surface sediment samples (Figure 5A,B), likely due to the stable supply of carbon and energy from the organic-rich sediment. In the presence of abundant primary carbon sources, degradation of TrOCs via cometabolism, as previously demonstrated, appears likely [6]. In such a scenario, the bacteria possibly utilized the organic carbon as the sole source of carbon and energy while transforming the TrOCs as a non-growth substrate [56]. While cometabolism is a major TrOC removal mechanism in cases where the concentration of the TrOCs is too low to support biomass growth, or where they exhibit apparent toxicity rendering them unfavorable for entry into the catabolic pathways of microbial cells, in some cases cometabolism initiates a reaction that transforms persistent compounds into more biodegradable forms before they enter the central metabolic pathways [40]. The latter may have occurred, as reflected in the marginal increase in the relative abundances of Proteobacteria, Bacteroidetes, Firmicutes, Acidobacteria, Chloroflexi and Gemmatimonadetes in amended relative to unamended samples (Figure 5A,B), indicating potential utilization of some of the TrOCs as a carbon source by these phyla. However, metabolic degradation cannot be excluded and might represent an alternative explanation for such findings. Indeed, members of these phyla have been associated with the degradation of various xenobiotics, including the current test compounds [16,23,24,57-62].
In the subsurface sediment samples, a shift in the bacterial community composition in the amended samples relative to unamended controls at day 15 of incubation (Figure 5C,D), in which the relative abundance of Proteobacteria increased while other phyla such as Chloroflexi, Firmicutes and Actinobacteria declined, suggested a possible change in carbon utilization dynamics. Depletion of readily degradable organic carbon may have favored the Proteobacteria. As relatively rapid responders to substrates [63] with characteristically broad physiological and metabolic diversity [64], the Proteobacteria may have easily adapted to utilizing the TrOCs as an alternative sole carbon and energy source, hence outcompeting the other taxa in the microbial community. Nevertheless, analyses at the family and genus levels revealed that even within the declining phyla some taxa increased in relative abundance in amended relative to unamended samples, thus suggesting a possible utilization of TrOCs as a carbon source (Figure 6C,D; Table 1). Such a phenomenon, with different members of the same phylum responding differently to TrOC exposure, has been previously reported [16,23,65]. Slow responders such as Chloroflexi and Acidobacteria were only observed to increase in relative abundance at day 65, signifying the importance of contact time between some bacterial groups and TrOCs, a factor associated with the enhanced TrOC removal capacity of hyporheic zone sediments compared to WWTPs [15,23,39].
Putative Taxa Associated with Degradation of the Test Compounds
Bacterial taxa enriched in the micropollutant-amended microcosms relative to the unamended controls were considered potential degraders of the test compounds (Figure 6; Table 1). These included the Proteobacteria-affiliated families Methylophilaceae, whose members are obligate methylotrophs but were also previously associated with the degradation of TrOCs such as ketoprofen, formononetin, ibuprofen, primidone, ametrine and naproxen [66], as well as Comamonadaceae, previously associated with the degradation of pharmaceuticals in sediments [16,23]. The relative enrichment in the current study of Rhodocyclaceae, hitherto associated with anaerobic hydrocarbon degradation, extends the recently reported potential of some Rhodocyclaceae-affiliated taxa to degrade hydrocarbons under oxic conditions [67]. Rhodocyclaceae was also important for the degradation of toluene under oxygen-limiting conditions [67], highlighting potential resilience in the hyporheic zone under declining oxygen conditions. This may explain its flourishing in the subsurface sediment samples, where oxygen availability may be limited (Figure 6C,D). The potential of Rhodospirillaceae and Xanthomonadaceae in the degradation of aromatic organic compounds is widely reported [23,68-71]; their relative enrichment by TrOCs in the present study therefore extends this observation. Enriched taxa at the genus level included the toluene-degrading Xanthobacter [72]; Hyphomicrobium, previously associated with the degradation of ibuprofen [23] and 2,4-dichlorophenol [73]; Novosphingobium, widely associated with the degradation of numerous aromatic compounds including pharmaceuticals [16,23,24,61]; and Rhizobium, enriched in ibuprofen-amended sediment samples [23]. Other Proteobacteria-affiliated genera hitherto unassociated with xenobiotic degradation but enriched in the TrOC-amended samples included Phaselicystis, Ferritrophicum, Crenothrix, Magnetospirillum, Reyranella, Prosthecomicrobium and Geothermobacter, suggesting their involvement in the biotransformation of the test compounds. The genus Terrimonas, belonging to the Bacteroidetes, was previously associated with the degradation of ibuprofen [23], dibutyl phthalate [74] and benzo[a]pyrene [75], and was also enriched by the test compounds in the current study.
The increase in the relative abundance of the Chloroflexi-affiliated Caldilineaceae and Anaerolineaceae following TrOC amendment corresponds to the previous association of these families with TrOC removal [66,76]. The Caldilineaceae-affiliated genus Caldilinea was previously associated with TrOC removal in an anoxic-aerobic membrane bioreactor [66], while Anaerolineaceae representatives were associated with degradation of organic pollutants, aromatics and n-alkanes under anaerobic conditions [76-78]. Although considered strictly anaerobic [79], a surprisingly considerable abundance of Anaerolineaceae members was detected in aerobic WWTP water samples [78]. The authors attributed this observation to the presence of anoxic microzones within the aerated wastewater flocs. A recent study further revealed enrichment of Anaerolineaceae in ibuprofen-amended oxic hyporheic zone sediments [23]. Their prevalence under such conditions may therefore be attributed to similar anoxic microzones commonly reported within oxic hyporheic zone sediments [15]. The enrichment of the Chloroflexi-affiliated unclassified KD4-96 and JG30-KF-CM45 families in the current study further extends their association with the degradation of TrOCs, as recently reported in ibuprofen-amended oxic hyporheic zone sediments [23].
The enrichment of an unclassified Acidobacteria Subgroup 6 family in the presence of TrOCs is in agreement with the reported association of members of this phylum with the degradation of pharmaceuticals, polychlorinated biphenyls and petroleum compounds [16,23,57,80]. The family Acidimicrobiaceae has been associated with ibuprofen degradation in oxic hyporheic zone sediments [23], and its enrichment in the current study suggests a potential to degrade aromatic compounds. The increase in the relative abundance of Gemmatimonadaceae in amended relative to unamended sediment samples suggests the potential to utilize at least some of the TrOCs. Members of this family have been associated with the degradation of ibuprofen [23] and other complex compounds; e.g., the benzoate-degrading Gemmatimonas aurantiaca and an uncultured Gemmatimonas species were associated with alkylbenzene sulfonate degradation [60].
The enrichment of Firmicutes affiliated Eubacteriaceae and Peptococcaceae belonging to Clostridia corresponds to their previous association with xenobiotic degradation. Eubacteriaceae was among soil microorganisms associated with soils historically contaminated by heavy metals and hydrocarbons [81]. Peptococcaceae, though previously reported in the anaerobic degradation of aromatic compounds [82], have been recently shown to harbor genes encoding enzymes involved in benzene degradation in a benzene-degrading denitrifying continuous culture, where transcripts associated with the family Peptococcaceae dominated all samples [83]. The enrichment of such a broad range of taxa by TrOCs in hyporheic sediments highlights the hyporheic zone as a reservoir of diverse bacteria with a potential to degrade a wide range of emerging contaminants.
Conclusions
The microbial removal efficiency of most TrOCs declined with increasing hyporheic sediment depth, attributable to the differences in organic carbon concentration and associated changes in microbial community dynamics. In some cases, however, low concentrations of organic carbon can boost TrOC removal, since at high concentrations the organic carbon may also serve as a competitive substrate that inhibits preferential degradation of TrOCs. As evidenced in the current study, the contribution of sorption and other abiotic removal mechanisms to the fate of organic micropollutants in sediment-water matrices is not to be ignored. Moreover, the contributions of biotic and abiotic processes to TrOC removal are not mutually exclusive but rather complementary since, for example, sorption may impair or enhance the bioavailability of a compound. Likewise, biodegradation of dissolved compounds at equilibrium concentrations will stimulate desorption of sorbed fractions. Thus, the importance of biodegradation might be underestimated when abiotic and biotic treatments are merely compared in terms of TrOC removal.
The bacterial community analyses in TrOC-amended relative to unamended sediment samples highlight diverse bacteria potentially supporting TrOC removal, a remarkable tolerance of hyporheic zone bacterial taxa towards a cocktail of TrOCs at rather high concentrations, and how environmental factors such as TOC might impact TrOC removal. Thus, the hyporheic zone supports an important ecosystem service in terms of sustaining a diverse microbiome and surfaces for microbial colonization as well as TrOC sorption, all contributing to the removal of TrOCs from river water.
"year": 2020,
"sha1": "897a9ce1e1a2b69a72c13bb7ea12f6cb17dcd775",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4441/12/12/3518/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "815a2476e6ec1debff3ce86b442893373a5287b2",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Facile Preparation of High Strength Silica Aerogel Composites via a Water Solvent System and Ambient Pressure Drying without Surface Modification or Solvent Replacement
To further reduce manufacturing cost and improve safety, silica aerogel composites (SAC) with low density and low thermal conductivity synthesized via ambient pressure drying (APD) technology have gradually become one of the most active research areas. As a solvent, ethanol is flammable and needs to be replaced by other low-surface-tension solvents, which is dangerous and time-consuming. Therefore, the key steps of solvent replacement and surface modification in the APD process need to be simplified. Here, we demonstrate a facile strategy for preparing high strength mullite fiber reinforced SAC, synthesized by APD using water as the solvent, without surface modification or solvent replacement. The effects of the fiber density on the physical properties, mechanical properties, and thermal conductivities of SAC are discussed in detail. The results show that when the fiber density of SAC is 0.24 g/cm3, the thermal conductivity at 1100 °C is 0.127 W/m·K, and the compressive strength at 10% strain is 1.348 MPa. Because of the simple synthesis process and excellent thermal-mechanical performance, SAC is expected to be used as an efficient and economical insulation material.
Introduction
Silica aerogel is a new type of nanomaterial with a 3D nanoporous network structure. It was originally developed by Kistler in 1931 [1], and has attracted widespread attention due to its low density, low thermal conductivity, and high specific surface area [2-4]. The preparation process of aerogels usually involves sol-gel transition, aging, and supercritical drying [5,6]. The supercritical drying process requires high temperature and high pressure (the supercritical point of ethanol is 240 °C, 6.3 MPa, and that of carbon dioxide is 31 °C, 7.38 MPa), which is costly, dangerous, and restricts the large-scale continuous industrial production of aerogels. Therefore, ambient pressure drying (APD) has received great attention [7]. However, early APD included solvent replacement and surface modification, and the waste liquid produced in this process is difficult to recycle and utilize, resulting in great waste and environmental pollution [8-10]. Since then, researchers have prepared silica aerogels using methyltrimethoxysilane (MTMS) as a precursor by APD without solvent replacement [11-16], but these aerogels have poor mechanical properties and are prone to brittle fracture during use [17-19]. Our research group [20] previously used MTMS as the precursor and mullite fiber as the reinforcing phase to prepare an aerogel insulation composite, which had a low thermal conductivity (0.0403 W/m·K at room temperature and 0.101 W/m·K at 1100 °C), but its strength was insufficient to resist external vibration and compression during use (0.108 MPa at 10% strain). In addition, flammable and explosive ethanol was used in the preparation process, which is dangerous and unfriendly to the environment.
In this paper, high strength aerogel insulation composites were prepared with MTMS as the precursor and mullite fiber as the reinforcing phase by an APD process; no ethanol, solvent replacement, or surface modification was involved. The effects of fiber density on the physical properties, mechanical properties, and thermal conductivities of the silica aerogel insulation composites were investigated. This work will provide an important basis for the economical, efficient, and green preparation of high-performance thermal insulation materials.
Materials and Methods
MTMS, urea, and cetyltrimethylammonium bromide (CTAB) were all purchased from Shanghai Maclin Biochemical Technology Co., Ltd. (Shanghai, China). Acetic acid was purchased from Sinopharm Holding Chemical Reagents Co., Ltd. (Shanghai, China). Mullite fiber parts were provided by Shandong Luyang Co., Ltd. (Zibo, China). None of the reagents were further purified.
MTMS was added into a 0.01 M acetic acid aqueous solution. Then, CTAB and urea were added and the mixture strongly stirred for 4 h to enhance the hydrolysis of MTMS and to obtain the silica sol. The molar ratio of MTMS, H2O, CTAB, and urea was 1:8:0.05:0.7. Next, the prepared mullite fiber parts (fiber densities of 0.20, 0.24, 0.26, 0.30, and 0.32 g/cm3) were impregnated with the above sol under vacuum conditions. The whole system was sealed tightly in 60 °C water to form a gel. Upon gelation, a small amount of deionized water was added to protect the gel. After aging in a water bath at 60 °C for 48 h to promote the cross-linking and strengthening of the nano skeleton, the gel was dried at ambient pressure for 8 h at 60 °C, 80 °C, 100 °C, 110 °C, and 120 °C, separately, to obtain the aerogel composite. This gradual drying step ensured the integrity of the nanostructure of the aerogel matrix. Then, the prepared samples were heat treated in a muffle furnace at 700 °C for 2 h (heating rate of 5 °C/min) to obtain the final SAC.
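The recipe arithmetic implied by the 1:8:0.05:0.7 molar ratio can be made explicit; the sketch below uses standard molar masses and an arbitrary 0.1 mol MTMS batch size, purely for illustration (the batch size and output masses are not values from the paper):

```python
# Molar masses (g/mol): MTMS (CH3Si(OCH3)3), water, CTAB, urea.
MOLAR_MASS = {"MTMS": 136.22, "H2O": 18.02, "CTAB": 364.45, "urea": 60.06}
RATIO = {"MTMS": 1.0, "H2O": 8.0, "CTAB": 0.05, "urea": 0.7}

def batch_masses(mol_mtms):
    """Mass of each reagent (g) for a batch containing `mol_mtms` moles of MTMS."""
    return {r: mol_mtms * RATIO[r] * MOLAR_MASS[r] for r in RATIO}

for reagent, grams in batch_masses(0.1).items():
    print(f"{reagent}: {grams:.2f} g")  # MTMS 13.62, H2O 14.42, CTAB 1.82, urea 4.20
```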
The volumetric density of the silica aerogel composites was measured with the Archimedes method. The morphology was observed by scanning electron microscopy (TESCAN MIRA3, Brno, Czech Republic). A universal testing machine (WDW Model 100, Jinan, China) was used to test the compressive strength. The thermal conductivity of SAC at room temperature and at high temperatures was measured with a heat flow meter (ASTM-E1530, New Castle, DE, USA) and a hot plate meter (YB/T4130-2005, Luoyang, China), respectively. A mercury intrusion porosimeter (Autopore IV 9510, Norcross, GA, USA) was used to determine the pore size distribution. Nitrogen sorption analysis (Quantachrome autosorb-IQ2-MP, Boynton Beach, FL, USA) was used to characterize the BET (Brunauer-Emmett-Teller) surface area and nanopore size of the SACs.
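The paper does not detail the Archimedes procedure; one common formulation (ASTM C373-style, assumed here rather than taken from the text) derives the bulk volume from the saturated and immersed masses:

```python
def archimedes_bulk_density(m_dry, m_saturated, m_immersed, rho_fluid=0.9982):
    """Bulk density (g/cm^3) via Archimedes' principle:
    bulk volume = (m_saturated - m_immersed) / rho_fluid,
    with rho_fluid defaulting to water at ~20 °C (g/cm^3)."""
    volume = (m_saturated - m_immersed) / rho_fluid
    return m_dry / volume

# Hypothetical masses (g) for a small SAC coupon; yields ~0.52 g/cm^3,
# consistent with the 0.51-0.53 g/cm^3 range reported below.
print(round(archimedes_bulk_density(2.60, 6.10, 1.10), 3))
```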
Results and Discussion
The microscopic morphology of the mullite fiber is illustrated in Figure 1a,b: many fibers pass through and wind around each other. From the SEM, the average fiber length is 1.5 mm and the average fiber diameter is 5 µm. As can be seen from Figure 1c, the silica aerogel composite is well formed, with a smooth surface and no obvious cracks. The SEM pictures (Figure 1d,e) show that the aerogel, as the matrix, fills the whole space in blocks, and the fiber runs through the aerogel matrix as the reinforcing phase. Moreover, the fiber and aerogel were closely bonded and had good compatibility. Figure 1f reveals that the aerogel matrix is composed of many nanoparticles packed together, with many nanoscale gaps formed between them. There was no visible shrinkage in the plane direction of the composite, and the shrinkage in the thickness direction is shown in Figure 2a. With the increase in fiber density, the shrinkage of the silica aerogel composite decreased continuously, from 32.61% at 0.20 g/cm3 to 7.92% at 0.32 g/cm3. Because of the interaction between the shrinkage of the aerogel and the fiber expansion, the density of the mullite fiber reinforced aerogel composites did not change significantly and fluctuated in the range of 0.51-0.53 g/cm3.
In order to analyze the pore size distribution of the composite material, mercury intrusion porosimetry (MIP) and nitrogen sorption (NS) methods were employed. Figure 2b illustrates the MIP pore size distribution curves of the composites with different fiber densities. The pore size distribution curves of the composites have two peaks, indicating that the materials have two pore size structures, micron pores and nano pores, with diameters concentrated around 20 µm and 10 nm, respectively. The NS isotherms and the corresponding pore size distributions of the SACs are shown in Figure 2c,d. All of the SACs display type IV isotherms with a hysteresis loop according to the IUPAC (International Union of Pure and Applied Chemistry) classification, which indicates that there are nanopores between 2-50 nm in the SACs [21]. The NS pore size distribution revealed that the nanoscale pore structure around 10 nm in the SACs was consistent with the SEM and MIP analyses. The detectable ranges of MIP and NS were 4-3 × 10^5 nm and 0.35-100 nm, respectively, and the results from NS were more reliable than those from MIP within the pore width of 0.35-100 nm in this case [22-25]. We therefore adopted the NS results for the nano pore range and the MIP results for the micron pores; moreover, the MIP results in the nano pore range corroborate the NS results. All of the nano pores in the SACs were from the aerogel matrix. With the increase in fiber density, the proportion of aerogel decreased; as a result, the number of nano pores in the composites decreased, and the sample with a fiber density of 0.32 g/cm3 had the fewest nano pores. The BET specific surface areas of the SACs were 158.758, 137.108, 96.962, 104.433, and 99.849 m2/g for fiber densities of 0.20, 0.24, 0.26, 0.30, and 0.32 g/cm3, respectively. Micron pores in the SACs arose from two sources: gaps between fibers, and gaps between fiber and aerogel. The increase in fiber density increased the gaps between fibers, whereas the variation in gaps between fiber and aerogel was hard to analyze. Moreover, the density of the SACs also affected the micron pores; a high density usually resulted in more micron pores in this case. As a result of all of the above factors, with the increase in fiber density the micron pores showed the same trend as the densities of the SACs, decreasing first and then increasing, and the sample with a fiber density of 0.24 g/cm3 had the fewest micron pores. The influence of the fiber density on the thermal conductivity of SAC is shown in Figure 3. It can be seen from Figure 3a that the vacuum thermal conductivity and ambient thermal conductivity of the aerogel decreased at first and then remained unchanged with increasing fiber density. The thermal conductivity of SAC at atmospheric pressure reached the lowest value of 0.05806 W/m·K at 0.24 g/cm3, and the difference between the two curves is considered to be the thermal conductivity contributed by the gaseous phase, including the gaseous thermal conductivity at room temperature and the effect of gas-solid coupling [26]. Figure 3b shows that the thermal conductivity of the silica aerogel composite continuously increased with rising temperature [18,27,28]. This phenomenon is attributed to the rapid increase in radiative thermal conductivity at high temperatures.
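The comparison of the vacuum and ambient curves in Figure 3a implies the usual additive decomposition of aerogel heat transport; read this way (our interpretation of the authors' statement, not an equation given in the paper):

λ_ambient − λ_vacuum ≈ λ_gas + λ_coupling

where λ_gas is gaseous conduction through the pores and λ_coupling the gas-solid coupling contribution, while λ_vacuum itself comprises the solid conduction and radiative terms.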
With the increase of fiber density, the thermal conductivity of the silica aerogel insulation composites increased first, then decreased, and then increased again. At 400 °C, the sample with a fiber density of 0.24 g/cm3 showed the lowest thermal conductivity (0.073 W/m·K), and the sample with a fiber density of 0.32 g/cm3 had the best thermal insulation performance (0.121 W/m·K) at 1000 °C. As can be seen from Figure 4a, with the increase in fiber density, the compressive strength of the composite material decreased significantly. The compressive strength at 10% strain decreased from 2.003 MPa (fiber density of 0.20 g/cm3) to 0.644 MPa (fiber density of 0.32 g/cm3). The reason is that, at the same volume, with the increase of fiber density, the proportion of the high-strength aerogel matrix in the composite decreased, so it could no longer provide strong support for the composite. The comprehensive performance of SAC was best at a fiber density of 0.24 g/cm3, where the compressive strength at 10% strain was 1.348 MPa, 1146% higher than that of the same type of aerogel insulation composite [20]. Strong aerogels and toughening fibers play an important role in the preparation of regular-shaped, high-strength aerogel composites. As can be seen from the compressive stress-strain curves of the pure aerogel (inset in Figure 4b) and the SAC (Figure 4b), the fracture of the pure aerogel before 5% strain indicates severe brittleness, whereas the SAC could still withstand stress without fracture beyond 70% strain, indicating good toughness rather than brittleness. To show the good toughness of the SAC more visually, we performed "car pressure tests" (a car was driven over the samples) on the SAC and a corresponding piece of aerogel. Figure 4c,d show the photos after the first and second car pressure tests, with the insets showing the photos before the tests. As shown in Figure 4c, in the first test, after the impact from the tyre of the car, the aerogel collapsed while the SAC remained intact. In the second test, the SAC from the first test was turned over and tested again. After the second test, it still remained intact without fracture, showing that the SAC exhibits a certain degree of toughness on the whole.
Conclusions
In this paper, a mullite fiber reinforced silica aerogel insulation material was synthesized by APD with water as the only solvent, without surface modification or solvent replacement. The effects of the fiber density on the physical properties, mechanical properties, and thermal conductivity of SAC were studied. With the increase in fiber density, the shrinkage and the compressive strength of SAC decreased obviously. In addition, the increase in fiber density caused the porosity and thermal conductivity to decrease first and then increase. In summary, when the fiber density was 0.24 g/cm3, the sample possessed a low high-temperature thermal conductivity (0.127 W/m·K at 1100 °C) and excellent compressive strength (1.348 MPa at 10% strain). These desirable features confirm the suitability of SACs prepared by APD technology as a high-performance and economical thermal insulation material.
Conflicts of Interest:
The authors declare no conflict of interest.
"year": 2021,
"sha1": "19164c4c9e84e4ed9cbe3ce082e6db4e420f9018",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/14/14/3983/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "57b775ca0a65161e83d7b49e159cdd1af4c9c997",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Burden, risk factors and outcomes associated with adequately treated hypothyroidism in a population-based cohort of pregnant women from North India
Hypothyroidism is the commonest endocrine disorder of pregnancy, with known adverse feto-maternal outcomes. There are limited data on the population-based prevalence, risk factors and outcomes associated with treatment of hypothyroidism in early pregnancy. We analyzed data from an urban and peri-urban, low-to-mid socioeconomic, population-based cohort of pregnant women in North Delhi, India to ascertain the burden and risk factors of hypothyroidism and the impact of its treatment on adverse pregnancy outcomes: low birth weight, prematurity, small for gestational age and stillbirth. This is an observational study embedded within the intervention group of the Women and Infants Integrated Interventions for Growth Study, an individually randomized factorial design trial. Thyroid stimulating hormone was tested in 2317 women in early (9-13 weeks) pregnancy, and thyroxine replacement was started in hypothyroid women (TSH ≥2.5 mIU/mL). Univariable and multivariable generalized linear models with binomial family and log link were fitted to ascertain risk factors associated with hypothyroidism and the association between hypothyroidism and adverse pregnancy outcomes. Of 2317 women, 29.2% (95% CI: 27.4 to 31.1) had hypothyroidism and were started on thyroxine replacement with close monitoring. Overweight or obesity was associated with increased risk (adjusted RR 1.29, 95% CI 1.10 to 1.51), while higher hemoglobin concentration was associated with decreased risk (adjusted RR 0.93, 95% CI 0.88 to 0.98 for each g/dL) of hypothyroidism. Hypothyroid women received appropriate treatment, with no increase in adverse pregnancy outcomes. Almost a third of women from this low-to-mid socioeconomic population had hypothyroidism in early pregnancy, more so if anemic and overweight or obese. With early screening and adequate replacement, adverse pregnancy outcomes may be avoided. These findings highlight the need in early pregnancy for universal TSH screening and adequate treatment of hypothyroidism, as well as for attempts to reduce pre- and peri-conception overweight, obesity and anemia. Clinical trial registration: Women and Infants Integrated Interventions for Growth Study, Clinical Trial Registry-India #CTRI/2017/06/008908; registered on 23/06/2017 (http://ctri.nic.in/Clinicaltrials/pmaindet2.php?trialid=19339&EncHid=&userName=society%20for%20applied%20studies).
Background
The thyroid gland plays a key role in pregnancy homeostasis and in the metabolic adaptations important for fetal development, as well as the supply of energy to the mother. It undergoes adaptive changes to meet the increased demand during pregnancy, and women with low reserves during preconception frequently enter pregnancy in a hypothyroid state. Hypothyroidism is the commonest endocrine disorder of pregnancy and, if not adequately treated, can result in adverse pregnancy outcomes: growth restriction, prematurity, low birth weight (LBW) and stillbirth [1,2]. However, the symptoms are nonspecific and insidious; hence, the diagnosis is usually missed or made much later due to delayed reporting of pregnancy [3].
Globally, hypothyroidism affects 3-5% of all pregnant women [4]; however, the prevalence is higher in South Asian countries [5,6]. In a Chinese study of 2899 pregnant women, the prevalence of hypothyroidism (TSH >3.93 mIU/L) in the first trimester was 10.9% [7]. Yadav et al., in a meta-analysis of observational studies, reported a pooled prevalence of 11.01% in pregnant women in India [8]; however, only two of the 54 studies were community based, and the cut-off levels of TSH used were not uniform, varying from 2.3-4.5 mIU/mL across all three trimesters of pregnancy. Data from secondary and tertiary hospitals across nine states in India showed a 13.1% prevalence in the first trimester, with a TSH cutoff of 4.5 mIU/mL [9].
The thyroid stimulating hormone (TSH) assay is a simple, sensitive, commonly used screening tool for thyroid dysfunction in pregnancy but is limited by the lack of uniformly accepted reference ranges, which vary according to laboratory reference levels, type of assay, and population heterogeneity. The new third-generation assays have the high functional sensitivity recommended by the American Thyroid Association (ATA) and are a good tool for diagnosing primary thyroid dysfunction [10]. The National Guidelines in India (2014) recommend treatment aimed at maintaining TSH <2.5 mIU/mL in the first trimester and <3 mIU/mL in the second and third trimesters of pregnancy, similar to the guidelines of the Endocrine Society [11,12]. The ATA 2017 guidelines recommend 4 mIU/mL as the upper limit of normal in the absence of population-specific ranges [13]. A systematic review of normative values of trimester-specific thyroid function in Indian women concluded that TSH cutoffs of up to 5-6 mIU/mL, similar to the pre-pregnancy stage, should be used in the first trimester of pregnancy, although it was limited by the fact that no outcomes were included [14].
There are limited data on the population-based prevalence, risk factors and adverse outcomes of hypothyroidism in pregnancy, particularly in low-to-mid socio-economic populations. We conducted an analysis of data from a population-based cohort drawn from urban and peri-urban low-to-mid socioeconomic neighborhoods of Delhi to determine the prevalence of hypothyroidism in pregnancy, its risk factors, and the impact of replacement on the adverse pregnancy outcomes LBW, prematurity, small for gestational age (SGA), and stillbirth.
Study design, setting and participants
This is an observational study embedded within the intervention group of the Women and Infants Integrated Interventions for Growth Study (WINGS), conducted in urban and peri-urban low-to-mid-socioeconomic neighborhoods in south Delhi, India. A summary of WINGS is provided below; the details of the methods have been published previously [15]. Briefly, 13,500 eligible women aged 18-30 years were identified through a door-to-door survey. Women who provided written consent to participate in the study were enrolled (first randomization) to receive pre- and peri-conception interventions or routine care and followed up until their pregnancies were confirmed, or for 18 months after enrollment. At confirmation of pregnancy by ultrasonography, and after obtaining written consent, a second randomization was done, wherein pregnant women received enhanced pregnancy and early childhood care or routine care.
Study description
Pregnant women in the intervention group received at least eight antenatal care visits. Body mass index (BMI) was measured, and non-fasting serum TSH and hemoglobin were tested at the time of confirmation of pregnancy. Women with TSH ≥2.5 mIU/mL were labelled as hypothyroid and were managed with replacement doses of levothyroxine, after a thorough history and clinical assessment as per the National Guidelines [11]. TSH levels were repeated monthly, and the thyroxine dose was titrated until TSH stabilized at normal pregnancy reference levels [11]. Once normalized, a repeat TSH for monitoring was done twice in the second trimester and once in the late third trimester, amounting to approximately five to six repeat TSH assays during pregnancy. Close follow-up was done by study team workers, who made weekly home visits to ensure compliance, and progress was monitored using electronic trackers. Women with uncontrolled TSH and those with TSH <0.1 mIU/mL (hyperthyroid) were referred to the endocrinology clinic of the collaborating government tertiary care hospital (Safdarjung Hospital, New Delhi, India).
Laboratory analysis
Blood samples for the TSH assay were collected in serum separator tubes (SST) and transported in cool boxes (4-8 °C) to the field laboratory, where centrifugation was done at ~450 × g at room temperature for 10 minutes to separate the serum. Samples were then transported, maintaining the cold chain, to a National Accreditation Board for Testing and Calibration Laboratories (NABL) accredited study laboratory (Oncquest Laboratory).
Serum TSH was analyzed using the Architect (Ci 8200 Abbott Architect) TSH assay, a two-step immunoassay using chemiluminescent microparticle immunoassay (CMIA) technology, with an inter-assay coefficient of variation (CV) of 20%, which meets the requirements of a third-generation TSH assay. The Architect TSH assay has an analytical sensitivity of <0.01 μIU/mL [16].
Statistical analysis
Sociodemographic characteristics were reported as means (SD) or proportions, as appropriate. We calculated the prevalence (with 95% confidence interval: CI) of hypothyroidism at the time of pregnancy confirmation. Univariable and multivariable generalized linear models with binomial family and log link were fitted to ascertain the risk factors associated with hypothyroidism. We calculated unadjusted and adjusted risk ratios (RR) and their 95% CIs for the association between hypothyroidism and adverse pregnancy outcomes (LBW, SGA, preterm birth, spontaneous preterm birth, stillbirth) using a generalized linear model with binomial family and log link. The candidate variables related to the socio-demographic and nutritional status of the women were continuous (maternal age, hemoglobin and glycosylated hemoglobin (HbA1c) levels at the time of confirmation of pregnancy) and categorical (height (<150 and ≥150 cm), years of schooling (<12 and ≥12 years), early-pregnancy (gestational age ≤20 weeks) BMI, religion (Hindu and others), type of family (extended or joint, and nuclear), family with a below-poverty-line card, and family covered by a health insurance scheme). All statistical analyses were performed using STATA version 16 (StataCorp, College Station, TX, USA).
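As a concrete illustration of this model specification, the following is a minimal Python sketch of the risk-ratio estimation (the study itself used STATA 16); the file name and the variable names such as hypothyroid and overweight_obese are hypothetical placeholders, not names from the study dataset.

```python
# Illustrative analogue of the paper's STATA analysis; the data file
# and column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("wings_pregnancy.csv")  # hypothetical analysis file

# Log-binomial GLM (binomial family, log link): exponentiated
# coefficients are risk ratios (RR) rather than odds ratios.
fit = smf.glm(
    "hypothyroid ~ overweight_obese + hemoglobin + maternal_age",
    data=df,
    family=sm.families.Binomial(link=sm.families.links.Log()),
).fit()

rr = np.exp(fit.params)        # adjusted risk ratios
ci = np.exp(fit.conf_int())    # 95% CIs on the RR scale
print(pd.concat([rr.rename("RR"), ci], axis=1))
```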
Definitions of adverse pregnancy outcomes
LBW was defined as a weight <2500 g on day 7 after birth. Gestational age at birth was estimated by adding the time elapsed between the dating ultrasound and the date of birth to the gestational age assessed by the dating ultrasound, according to INTERGROWTH-21 [17]. Preterm birth was defined as birth occurring at <37 completed weeks of gestation. Spontaneous preterm birth was defined as birth occurring at <37 weeks of gestation with preterm pre-labor rupture of membranes or spontaneous onset of labor. Stillbirth was defined as a baby born with no signs of life at or after 28 weeks of gestation, weighing 1000 g or more, or with a crown-heel length of at least 35 cm (WHO Maternal, newborn, child, and adolescent health).
Birth weight centile was calculated using the INTERGROWTH-21 standard, based on the day-7 weight and the gestational age at birth. SGA was defined as a birth weight centile <10th as per the INTERGROWTH-21 standard.
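As a worked example of these definitions, the sketch below derives gestational age at birth and the binary outcome flags; the function and field names are illustrative, and the INTERGROWTH-21 centile is assumed to be supplied by an external lookup.

```python
# Sketch of the outcome derivations defined above; all names are
# hypothetical and the birth weight centile is supplied externally.
from datetime import date

def gestational_age_at_birth(scan_ga_weeks: float,
                             scan_date: date, birth_date: date) -> float:
    """GA at birth = GA at the dating scan + time elapsed since the scan."""
    return scan_ga_weeks + (birth_date - scan_date).days / 7.0

def classify_outcomes(ga_birth_weeks, day7_weight_g, birthweight_centile):
    return {
        "lbw": day7_weight_g < 2500,        # day-7 weight < 2500 g
        "preterm": ga_birth_weeks < 37.0,   # < 37 completed weeks
        "sga": birthweight_centile < 10.0,  # < 10th INTERGROWTH-21 centile
    }

ga = gestational_age_at_birth(9.5, date(2019, 3, 1), date(2019, 9, 10))
print(round(ga, 1), classify_outcomes(ga, 2350, 8.2))
```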
Results
In this study, 2317 women were followed up from pregnancy until delivery. The median (IQR) gestational age at the time of recruitment was 9.5 (9.1-11.0) weeks. The socioeconomic and clinical characteristics of the enrolled women prior to pregnancy are shown in Table 1. Women who were hyperthyroid (TSH <0.1 mIU/mL) were excluded from this analysis. The study population was relatively young, with a mean (SD) age of 23.8 (3.1) years, and about half had more than 12 years of education. Height was less than 150 cm in 34.1%, and the mean (SD) BMI was 22.2 (3.9) kg/m², with a dual burden of malnutrition: 15% of women were underweight, and 28% of women were overweight or obese in the hypothyroid group compared to 22% in the euthyroid group (Table 1).
Table 2 shows the thyroid status of pregnant women at the time of confirmation of pregnancy, with 29.2% (95% CI: 27.4 to 31.1) having hypothyroidism (TSH ≥2.5 mIU/mL).
Table 3 shows the associations between the socioeconomic and clinical characteristics of the women and hypothyroidism. Overweight or obesity in early pregnancy was associated with an increased risk of hypothyroidism (adjusted RR 1.29, 95% CI 1.10 to 1.51). Each unit increase in hemoglobin was associated with a reduced risk of hypothyroidism (adjusted RR 0.93, 95% CI 0.88 to 0.98 for each g/dL).
Table 4 shows the association of hypothyroidism with adverse pregnancy outcomes in a setting where the management of hypothyroidism was supported by the research team. The risks of adverse pregnancy outcomes, i.e., LBW, SGA, preterm birth, spontaneous preterm birth and stillbirth, were similar among euthyroid women and those who were treated for hypothyroidism.
Discussion
The main findings of this study were the high prevalence of hypothyroidism in early pregnancy, occurring in 29.2% of this population-based cohort, and that treating hypothyroidism early and adequately led to no increase in adverse outcomes (stillbirth, preterm birth, LBW, SGA). Anemia, overweight and obesity in early pregnancy were identified as risk factors for hypothyroidism.
A wide disparity in the prevalence of hypothyroidism in pregnancy has been reported in previous studies from India. A hospital-based study from nine states of India reported a 15% prevalence of hypothyroidism using a TSH level of >4.5 mIU/mL and 44.3% using a cut-off of >2.5 mIU/mL in the first trimester [9]. A meta-analysis of observational studies with varying gestational ages and TSH cut-off values found the prevalence of hypothyroidism in pregnancy to be 11% [8]. A prospective observational study from central India found hypothyroidism in 9.1% of women in the third trimester of pregnancy using a cut-off of TSH >5 mIU/mL. Similar to our study, Bein et al., in a systematic review and meta-analysis, reported a decrease in the risk of pregnancy loss and neonatal death with treatment of subclinical hypothyroidism [18].
Similar to our finding, a study from southern India showed that overweight or obesity was a risk factor for hypothyroidism in early pregnancy, and high maternal TSH is associated with obesity and higher weight gain [19,20]. The association of obesity and overweight with adverse outcomes is well documented, including in a recent systematic review and meta-analysis [21,22]. Similarly, our finding that a higher hemoglobin level was associated with a reduced risk of hypothyroidism was echoed in a systematic review and meta-analysis, which found that iron deficiency adversely affects thyroid function and autoimmunity in pregnant women [19,23]. Also, a study from China found that the proportion of hypothyroidism was higher in women with mild anemia in the first trimester than in women with no anemia [24]. The strengths of our study are that it is population-based; that the socioeconomic strata are low to middle income, representative of the average Indian population; that serum TSH was used for thyroid status assessment, which is feasible as a screening test in settings like India, where pregnant women may not return after the first visit due to economic reasons, lack of easy access to the health center, etc.; and the evidence that early identification and adequate treatment reduced the risk of adverse pregnancy outcomes. The limitations of our study are that we did not test for other thyroid hormones (T4, T3, free T4, free T3), so we could not distinguish subclinical from overt hypothyroidism; nor did we test for thyroid antibodies, which could have helped us detect "at risk" pregnancies.
This study has important implications for the health care of women of reproductive age. First, the high burden of hypothyroidism in early pregnancy and the prevention of adverse pregnancy outcomes with early management highlight the need for early detection and treatment in antenatal care programs. Secondly, preventive interventions could be introduced preconceptionally to reduce risk factors such as obesity, overweight, and low hemoglobin levels.
Conclusion
Almost a third of women from a low-to-mid socio-economic population may be at risk of developing hypothyroidism in early pregnancy. Anemia and being overweight or obese may increase the risk. Ensuring universal TSH screening in early pregnancy, and adequate treatment if hypothyroidism is detected, would serve to improve pregnancy outcomes. Reducing pre- and peri-conception obesity and anemia could help reduce the risk of hypothyroidism.
Table 1. Sociodemographic characteristics of pregnant women.
* Joint or extended family: adult relatives other than the enrolled woman's husband and children living together in a household.
Table 4. Association of management of hypothyroidism with adverse pregnancy outcomes compared to subjects who were euthyroid.
* Adjusted for maternal age, height, years of schooling, and early-pregnancy (gestational age ≤20 weeks) BMI.
Reliability of Human Evaluation for Text Summarization: Lessons Learned and Challenges Ahead
Only a small portion of research papers with human evaluation for text summarization provide information about the participant demographics, task design, and experiment protocol. Additionally, many researchers use human evaluation as a gold standard without questioning its reliability or investigating the factors that might affect it. As a result, there is a lack of best practices for reliable human summarization evaluation grounded in empirical evidence. To investigate human evaluation reliability, we conduct a series of human evaluation experiments, provide an overview of participant demographics, task design, and experimental set-up, and compare the results from the different experiments. Based on our empirical analysis, we provide guidelines to ensure the reliability of expert and non-expert evaluations, and we determine the factors that might affect the reliability of human evaluation.
Introduction
Evaluation of summarization quality plays a crucial role in the development of summarization tools, since a well-executed evaluation can help to determine whether a system has adequately outperformed existing tools in terms of quality and speed or whether the designed properties work as intended (van der Lee et al., 2018; Lloret et al., 2018). Human evaluation has been the most trusted evaluation method and is used as the gold standard for summarization evaluation (Gatt and Krahmer, 2018; Celikyilmaz et al., 2020). However, in recent years, some researchers have provided extensive overviews of papers with human evaluation and pointed out that there is a lack of standardized procedures, leading to mostly non-comparable and non-reproducible results (van der Lee et al., 2019; Belz et al., 2020; Howcroft et al., 2020; van der Lee et al., 2021). Howcroft et al. (2020) reported, based on an analysis of 165 papers with human evaluation published in INLG and ENLG, that more than 200 different terms have been used for human evaluation, resulting in a lack of clarity in reports and extreme diversity in approaches. van der Lee et al. (2021) analyzed 304 research papers published at INLG and ACL conferences and reported that only 3% of the analyzed papers described the demographics, 6% provided details about the task design, 19% reported any inter-rater agreement score, 23% conducted a statistical analysis for the human evaluation, and 32% reported the number of different evaluators per item, where in 92% of the reported cases only one rating was used.
In this paper, we aim to contribute to human evaluation research as follows: 1) we conduct a series of human evaluations with experts, crowd workers, and laboratory participants on two different data sets; 2) we report on the participant demographics, task design, and evaluation criteria; 3) we demonstrate a comprehensive statistical analysis of the human experiments; and 4) we provide guidelines to ensure the reliability of expert and non-expert evaluations and determine the factors affecting human reliability, grounded in the empirical evidence from our experiments. Data associated with this work is available at https://github.com/nesliskender/reliability_humeval_summarization.
Related Work
Human evaluation of text summarization can be conducted either by linguistic experts or by non-experts such as laboratory participants or crowd workers. However, expert evaluation has been established as the gold standard in summarization evaluation, and the reliability of non-experts has been repeatedly questioned (Lloret et al., 2018). Gillick and Liu (2010) conducted the first crowdsourcing experiment for summarization evaluation and concluded that crowd workers cannot evaluate summary quality because of the lack of correlation with experts. However, they did not report the number of crowd workers per summary. Fabbri et al. (2020) compared crowd ratings with expert ratings using five crowd workers per item. They also reported that crowd ratings do not correlate with experts and emphasized the need for protocols to improve the human evaluation of summarization. Further, Gao et al. (2018); Falke et al. (2017); Fan et al. (2018) used crowd workers to evaluate the quality of their automatic summarization systems without questioning the reliability of the crowd workers.
The approaches used for human summarization evaluation can be broadly classified into two categories: intrinsic and extrinsic evaluation (Jones and Galliers, 1996; Belz and Reiter, 2006; Steinberger and Ježek, 2012). In intrinsic evaluation, the quality of the summarization output is measured based on the summary itself, without considering the source text. Generally, it has been carried out as a pair comparison (against expert summaries) or using absolute scales without showing a reference summary (Jones and Galliers, 1996). Extrinsic evaluation, also called task-based evaluation, aims to measure the summary's impact on the completion of some task based on the source document (Mani, 2001). Reiter and Belz (2009) argued that extrinsic evaluation is more useful than intrinsic evaluation because summarization systems are developed to satisfy the information need from the source text in a condensed way, yet van der Lee et al. (2021) reported that only 3% of papers presented an extrinsic evaluation.
Further, the quality criteria used in human evaluation and the terminology used to describe these criteria show a high degree of variation, with more than 200 variations in terminology (Howcroft et al., 2020). Researchers have either used the same terminology but evaluated something different, or used different terminology but measured the same thing (Belz et al., 2020). In most cases, they did not define the quality criteria they investigated or cite a reference for them, making it difficult to compare results and draw conclusions across papers. The scales used for evaluation have also varied widely, including Likert scales (3-, 4-, 5-, 6-, 7-, 10-, or 11-point), categorical choices (yes or no), and rank-based scales (van der Lee et al., 2021).
In short, human evaluation lacks structured, reliable evaluation practices, and the current way of reporting human evaluation in research papers generates non-comparable and non-reproducible results. We aim to contribute to human evaluation research for text summarization by determining intrinsic and extrinsic quality in a reliable and reproducible way with the experiments in Section 3.
Experiments
As our source documents, we used 67 unique post-query pairs from a telecommunication company's customer service forum in German, where customers ask questions about the company's products and services, such as "Where can I find my customer number" or "My internet is not working". Each query had 6-10 corresponding forum posts, including answers from other customers providing a solution or at least some help with the customer's problem. The average word count of the posts was 571.2, with the shortest post at 150 words and the longest at 1006 words, while the average word count of the corresponding queries was 9.1, with the shortest query at three words and the longest at 23 words.
We conducted a series of human experiments on this data set, shown in Table 1 in chronological order. In experiment 1, crowd workers created extractive summaries for the 67 post-query pairs. In experiment 2, different crowd workers evaluated the quality of the crowd-generated summaries, the output of experiment 1. Because of the high cost of human evaluation, we limited our evaluation data set for further experiments based on the overall quality ratings from experiment 2. From those, we selected 50 summaries within ten distinct quality groups ranging from the lowest to the highest scores (lowest group [1.667, 2]; highest group (4.667, 5]), each represented by five summaries. We thus generated a stratified sample of the data set consisting of summaries of low, medium, and high quality. These summaries originated from 27 post-query pairs.
This new data set, 27 post-query pairs with 50 summaries of varying quality, was evaluated by experts in experiment 3, by crowd workers in experiment 4, and by laboratory participants in experiment 5. In these experiments, the task design and the summaries were exactly the same, so as to compare the effect of expertise (expert vs. non-expert) and environment (lab vs. crowd) on the quality assessment. Further, we created machine summaries for the same 27 post-query pairs using the sumy library (https://github.com/miso-belica/sumy) to investigate the effect of the summary generation method (human vs. machine) on the quality assessment. We applied the TextRank algorithm (Mihalcea and Tarau, 2004) for machine summarization, since it is one of the few open-source German summarization algorithms and the most used unsupervised baseline in text summarization (Allahyari et al., 2017). Experts evaluated these machine summaries in experiment 7, and crowd workers evaluated them in experiment 8. Here, we did not ask laboratory participants to evaluate the machine summaries' quality, since the comparisons of experiments 3, 4, and 5 already revealed the insights regarding the environment's effect on the quality assessment. The experts also created the gold standard summaries for these 27 post-query pairs in experiment 6. In the human evaluation experiments, we applied both intrinsic and extrinsic approaches. As the literature reveals a high degree of variation in the quality criteria used in human experiments (Belz et al., 2020; Howcroft et al., 2020; van der Lee et al., 2021), we limited the intrinsic factors to six and the extrinsic factors to three. To limit the criteria, we narrowed the scope of human evaluation from NLG to text summarization and adopted commonly used quality metrics. In particular, we applied the criteria from the Document Understanding Conferences (DUC; https://duc.nist.gov/), which have been the forum for researchers in text summarization to compare methods and results. Additionally, we used a measure for overall quality to assess the summaries' total quality. When limiting the extrinsic quality factors, we focused on quality metrics for usefulness for the task and information need, because these are the most commonly used criteria in NLG, as reported in (Howcroft et al., 2020).
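For context, here is a minimal sketch of the TextRank generation step just described, using sumy for German text; post_text is a placeholder for the concatenated forum posts of one post-query pair, and the sentence count of four is illustrative (mirroring the 3-5 sentence guideline) rather than the exact configuration used in our experiments.

```python
# Minimal TextRank summarization sketch with sumy (pip install sumy);
# German tokenization relies on NLTK's punkt data being installed.
from sumy.parsers.plaintext import PlaintextParser
from sumy.nlp.tokenizers import Tokenizer
from sumy.summarizers.text_rank import TextRankSummarizer

post_text = "..."  # placeholder: concatenated forum posts for one query

parser = PlaintextParser.from_string(post_text, Tokenizer("german"))
summarizer = TextRankSummarizer()

# Extract four sentences as the machine summary
for sentence in summarizer(parser.document, sentences_count=4):
    print(sentence)
```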
Accordingly, we determined intrinsic quality using six different quality criteria: overall quality, defined as "responsiveness evaluation" in Louis and Nenkova (2013), and the five readability (linguistic) measures (grammaticality, non-redundancy, referential clarity, focus, and structure & coherence) defined as in Dang (2005). We evaluated the extrinsic quality using the following three measures: summary usefulness, defined as "content responsiveness" in Conroy and Dang (2008); source usefulness (in our case post usefulness, because our source documents are forum posts), defined as "relevance assessment" in Mani et al. (2002); and summary informativeness, defined as "informativeness" in Mani et al. (2002). We conducted all our evaluations using a 5-point Mean Opinion Score (MOS) scale with the labels very good, good, moderate, bad, and very bad, which is one of the most widely applied scales in subjective quality assessment (Streijl et al., 2016).
Crowdsourcing Experiments
We conducted all of the crowdsourcing experiments using the Crowdee platform. Before each crowdsourcing experiment, we had test runs with student workers, who acted as crowd workers and gave us feedback regarding the task design and understandability. For each new crowdsourcing experiment, we made at least ten alterations based on the students' feedback. Further, we paid the minimum hourly wage in Germany and determined the payment based on the estimated work duration of our crowdsourcing experiments.
Crowd Worker Selection
For crowd worker selection, we developed a two-step qualification process for both crowd creation and crowd evaluation. In the first step, crowd workers needed to pass the German language proficiency test provided by the Crowdee platform with a score of 0.9 or above (scale [0, 1]). In the second step, crowd workers needed to pass a semantic, task-specific pre-qualification test.
In the pre-qualification test for summary creation, we first presented the summary creation guidelines: 1) the summary should be non-redundant, fluent, informative, and grammatically correct; 2) the summary should be readable and understandable; 3) the summary should be created by copy-pasting 3-5 sentences from the forum posts; 4) altering the sentences or writing new sentences was not allowed. We also presented an example of a good and a bad summary generated for the same post-query pair. 103 out of 144 crowd workers were approved for the summary creation task. The criterion for approval was the ROUGE score of the crowd workers' summaries, calculated against summaries created by linguists on the authors' team. Further, we manually evaluated the crowd workers' summaries with a low ROUGE score (ROUGE-1 < 0.4), and if the summary quality was still acceptable, their authors were approved.
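As a sketch of this ROUGE-1 screening step, the snippet below uses Google's rouge-score package as a stand-in for whichever ROUGE implementation was actually used; stemming is disabled because the package's stemmer targets English rather than German.

```python
# Hypothetical re-implementation of the approval screen described above.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=False)

def needs_manual_review(worker_summary: str, linguist_summary: str) -> bool:
    """Flag summaries with ROUGE-1 F1 below 0.4 for manual checking."""
    f1 = scorer.score(linguist_summary, worker_summary)["rouge1"].fmeasure
    return f1 < 0.4
```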
In the pre-qualification test for summary evaluation, we gave a brief explanation of the summarization process, highlighting that the summaries were created by simply cutting sentences out of multiple forum posts and may therefore appear slightly unnatural. Crowd workers were then asked to evaluate the overall quality of four summaries (two very good, two very bad). The quality of these summaries had already been determined by the linguists on the authors' team on a 5-point MOS scale. For each exact rating match, crowd workers received 4 points, and for each point of deviation, they received one point less, so deviations were penalized linearly. 98 out of 150 crowd workers passed this qualification test with a point ratio >= 0.625.
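This scoring rule translates directly into code; a minimal sketch under the stated rules (4 points for an exact match, one point less per point of deviation, pass at a point ratio of at least 0.625):

```python
# Pre-qualification scoring for the summary evaluation task.
def qualification_passed(worker_ratings, gold_ratings, threshold=0.625):
    points = sum(max(0, 4 - abs(w - g))
                 for w, g in zip(worker_ratings, gold_ratings))
    return points / (4 * len(gold_ratings)) >= threshold

# Example: gold ratings for two very good and two very bad summaries
print(qualification_passed([5, 4, 2, 1], [5, 5, 1, 1]))  # 14/16 = 0.875 -> True
```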
Crowd Creation
In experiment 1, we instructed the crowd workers to create one extractive, 3-5 sentence summary for each post-query pair, using the same summary creation guidelines as in the pre-qualification test. To illustrate the guidelines, we presented the crowd workers with an example of a post-query pair and a corresponding good and bad summary. Additionally, the forum posts were shown as an itemized list of sentences during the creation process, so that each crowd worker only had to select and copy the chosen sentences into a summary. Overall, 76 unique crowd workers (41m, 35f, M_age = 39.43) participated in experiment 1. After eliminating cheaters, four different crowd workers per post-query pair had created 256 summaries for the 67 post-query pairs. The average work duration was 458.8 seconds, and the total set of tasks (67 x 4) was completed in 46 hours.
Crowd Evaluation
In experiment 2, the crowd workers evaluated the quality of the 256 crowd summaries generated in experiment 1. First, a brief explanation of the summary creation process was shown, with an example of a query, forum posts, and a summary to provide background information. Next, the crowd workers were asked to evaluate two summaries with respect to the overall quality and the five intrinsic quality measures, in the following order: 1) overall quality, 2) grammaticality, 3) referential clarity, 4) non-redundancy, 5) focus, and 6) structure & coherence. Three different crowd workers evaluated each summary, and a single crowdsourcing task included the evaluation of two summaries.
The overall quality was rated first to avoid it being influenced by the more detailed aspects. The evaluation of each aspect was done on a separate page, which contained a definition of the particular aspect (illustrated with an example), the summary, and a 5-point MOS scale (very good, good, moderate, bad, very bad) as radio buttons. To keep the evaluation intrinsic (summary-focused), crowd workers did not see the corresponding original post-query pair. Overall, 86 crowd workers (49m, 37f, M_age = 38.8) completed the summary evaluation task with an average work duration of 356.36 seconds, within 12 days. We noticed that conducting a crowdsourcing experiment at Christmas time slowed the total task completion. Further, crowd workers had the chance to give feedback at the end of the task, and multiple crowd workers commented on the summary content, for example: "I don't find the summary very informative overall, so the overall rating was worse than the individual ratings." Therefore, we added questions regarding the summary's content quality to experiment 4. We used the same instructions and task description as in experiment 2 and added three extrinsic quality measures, showing the original corresponding post-query pair to evaluate the summary's content quality. Also, we increased the number of unique crowd workers to 24 for each summary, following the recommendations of Naderi et al. (2018) for a robust crowdsourcing study. Since reading the summary and all of the source text increases the reading effort, we asked crowd workers to rate the quality of only one summary per task.
After answering the same six questions explained above, we asked the crowd workers to evaluate the following extrinsic quality measures: 7) summary usefulness, 8) post usefulness, 9) summary informativeness. Again, the evaluation of each aspect was done on a separate page, which contained the definition of the particular aspect with an example, the post-query pair, the summary, and the answer options on the 5-point MOS scale. Overall, 46 crowd workers (19f, 27m, M_age = 43) completed the evaluation of the selected 50 summaries with an average work duration of 249.88 seconds. The total of 1200 tasks (50 summaries x 24 crowd workers) was published in batches, and each batch was completed within one day.
In our last crowdsourcing experiment, experiment 8, we asked crowd workers to evaluate the quality of the 27 TextRank summaries using the same task design as in experiment 4. Overall, 21 crowd workers (15m, 6f, M_age = 26.3) participated in experiment 8 with an average task completion duration of 287.92 seconds, completing all tasks within three days. Our analysis of experiments 3 and 4 showed that 8-10 crowd workers per summary deliver results corresponding to laboratory experiments. Therefore, we collected evaluations from 10 different crowd workers per summary.
Laboratory Experiment
In experiment 5, we recruited participants via a local participant pool for the summary quality evaluation experiment in a controlled laboratory environment. We accepted only native German speakers and did not perform any other pre-qualification. The experiment design and the summaries were exactly the same as in experiment 4, where 24 different laboratory participants evaluated the nine quality aspects of the 50 summaries. They also completed the task using the Crowdee platform to avoid any user-interface biases.
In addition to the instructions of experiment 4, all participants were also instructed in written form before the start of the experiment, and all of the participants' questions regarding the task's understandability were answered immediately by the lab instructor. The participants were, of course, physically present in a controlled laboratory environment during the task. The experiment duration was set to one hour, and the participants were asked to evaluate as many summaries as they could in that hour. Overall, 71 participants (38m, 33f, M_age = 29.3) completed experiment 5, evaluating 12 summaries per hour on average, within 51 days.
Expert Experiments
In experiment 3, two experts, who are Master's students in linguistics, evaluated the same selected 50 summaries with the same task design as in experiment 4. At first, they evaluated the summaries separately using the Crowdee platform. After this first separate evaluation round, the inter-rater agreement scores (Cohen's κ) showed that the experts often diverged in their assessments. To reach a consensus among the experts, we followed an iterative approach similar to the Delphi method (Linstone et al., 1975) and arranged physical follow-up meetings with the experts, which we refer to as mediation meetings.
In these meetings, the experts discussed the reasons and backgrounds of their ratings for each summary in case of disagreement and eventually aligned in case of consensus. Acceptable inter-rater agreement scores were ultimately reached for all nine quality measures. One should keep in mind that elaborate follow-up meetings naturally lead to increasing convergence of the expert ratings. We did not test for a saturation effect, but the effort allocated to this step clearly influences the expert rating values.
In experiment 6, the same experts created gold standard summaries for the corresponding source post-query pairs of 27 TextRank summaries using the same task design as in experiment 1. Lastly, in experiment 7, the same experts evaluated the quality of 27 TextRank summaries following the same iterative approach and same task design as in experiment 3.
Results
Results are presented for the mean opinion scores (MOS) of overall quality (OQ), grammaticality (GR), non-redundancy (NR), referential clarity (RC), focus (FO), structure & coherence (SC), summary usefulness (SU), post usefulness (PU) and summary informativeness (SI) collected in experiments 2, 3, 4, 5, 7, and 8 (see Table 1). We refer to these measurements by their abbreviations in this section. Further, we use non-parametric statistics in our analysis because some measurements in these experiments are not normally distributed. In this section, we compare the results from experiment 3 with those from experiment 7 to analyze expert reliability. Following the recommendations of van der Lee et al. (2019), we calculated the raw agreement in percentage and Cohen's κ as inter-rater agreement scores.
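For reference, both agreement measures can be computed as follows; the rating vectors shown are made-up examples, not data from our experiments.

```python
# Raw percentage agreement and Cohen's kappa between two raters.
from sklearn.metrics import cohen_kappa_score

def agreement(expert_a, expert_b):
    raw = sum(a == b for a, b in zip(expert_a, expert_b)) / len(expert_a)
    return raw, cohen_kappa_score(expert_a, expert_b)

raw, kappa = agreement([5, 4, 4, 2, 1], [5, 4, 3, 2, 1])
print(f"raw agreement = {raw:.0%}, Cohen's kappa = {kappa:.2f}")
```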
Looking at Table 2, we observe that the mediation meetings increased the agreement scores enormously, both for the evaluation of the crowd summaries and for the TextRank summaries. Only after the mediation meetings could acceptable Cohen's κ scores between the experts be achieved, with all measures showing substantial (0.60-0.80] or almost perfect (0.80-1.00] agreement, except for NR, PU, and SI, which remained weak (0.40-0.60] in the crowd summary evaluation (Landis and Koch, 1977).
For the TextRank summaries, the increase is considerably higher than for the crowd summaries. Since the same experts evaluated the TextRank summaries under the same experimental conditions as in experiment 3, we can conclude that the characteristics of machine-generated summaries, such as unnaturalness or non-fluency, constitute a challenge even for experts before mediation. Further, the TextRank summaries usually contained the same kinds of mistakes, which made it easier for the experts to agree on a specific evaluation scheme for each evaluation criterion during the mediation sessions, leading to higher agreement in comparison with the crowd summaries.
The effect of mediation on the inter-rater agreement scores clearly shows that mediation meetings are necessary for reliable expert evaluation, especially when evaluating machine-generated summaries. We plan to use the specific evaluation criteria shaped during the expert mediation sessions to improve the task design in future work.
Crowd Evaluation
This section compares the results from experiment 2 with those from experiment 4 to measure the re-test reliability of the crowd experiments. To do so, we calculated the Spearman correlations between the crowd evaluations from experiment 2 (3 crowd workers per item) and experiment 4 (24 crowd workers per item) for the six intrinsic measures. To have the same number of crowd workers per summary as in experiment 2, we selected the first three evaluations per summary from experiment 4. The black circles in Figure 1a show the correlation between these first three crowd evaluations from experiment 4 and the crowd evaluations from experiment 2. The correlation coefficients range from 0.497 to 0.587 for all six measures, indicating moderate re-test reliability of the crowd evaluation.
However, choosing the first 3 out of 24 crowd raters for the correlation analysis is neither a conscious nor a reliable choice. Would we still get the same correlations if some of the remaining 21 crowd workers had completed the task before the first three considered above? To investigate this, we randomized the order of the 24 crowd evaluations 100 times and selected the first three evaluations to correlate them with the evaluations from experiment 2. Figure 1a shows the scatter plots for these correlations, ranging from weak to strong for all six measures. We see a noticeable difference between the initial correlations (black circles in Figure 1a) and the randomizations. Here, we observed that the correlations ranged from 0.2 to 0.75, showing that crowdsourcing experiments with three crowd workers per summary still involve a high degree of unpredictability and can only be moderately reliable.
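A sketch of this randomization procedure, assuming the experiment-4 ratings are arranged as a hypothetical (50, 24) array ratings_exp4 (summaries by raters) and means_exp2 holds the per-summary mean ratings from experiment 2:

```python
# Randomized "first three raters" correlation analysis.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

def randomized_correlations(ratings_exp4, means_exp2, n_runs=100, k=3):
    rhos = []
    for _ in range(n_runs):
        shuffled = rng.permuted(ratings_exp4, axis=1)  # reorder raters per summary
        subset_means = shuffled[:, :k].mean(axis=1)    # mean of the "first" k raters
        rho, _ = spearmanr(subset_means, means_exp2)
        rhos.append(rho)
    return np.array(rhos)
```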
If we increase the number of crowd workers per item, can we overcome this unpredictability? To investigate this, we divided the existing data from experiment 4 into two random groups, each with 12 crowd workers per item, and calculated the Spearman correlations between them. Figure 1b shows the correlation between these two randomized groups for the nine quality measures. In comparison with Figure 1a, the slope of the randomized correlations in Figure 1b is lower, and the mean correlation of the randomizations is very strong, except for PU and SI, which are strong (ρ_OQ = 0.874, ρ_GR = 0.858, ρ_NR = 0.799, ρ_RC = 0.857, ρ_FO = 0.815, ρ_SC = 0.874, ρ_SU = 0.848, ρ_PU = 0.626, ρ_SI = 0.793).
This result shows that the reliability of crowdsourcing experiments depends on the number of crowd workers per item and that reliable crowdsourcing results cannot be achieved with three crowd workers per item.
Effect of Expertise and Environment
To investigate the effect of expertise and environment on human summarization evaluation, we compare the results from the expert (experiment 3), crowdsourcing (experiment 4), and laboratory (experiment 5) experiments, which were conducted on the same data set with the same task design. Figure 2 shows the boxplots of the expert (blue), crowd (green), and laboratory (orange) evaluations of the crowd summaries for the nine quality measures. Here, we see that the experts used the upper end of the scale more often than the non-experts and gave higher ratings on average. Further, the non-expert evaluations are slightly negatively skewed, using a smaller portion of the scale.
To explore whether these differences are statistically significant, we calculated a non-parametric ANOVA, the Kruskal-Wallis test, between the expert, crowd, and laboratory ratings. The test results revealed no significant difference between the expert and crowd evaluations except for PU, and none between the crowd and laboratory evaluations except for SI. However, the expert evaluations differed significantly from the laboratory evaluations: the experts gave significantly higher ratings than the laboratory participants for all measures except SU and SI. Here, we observe that significant differences exist only for the intrinsic evaluations, indicating that intrinsic evaluation requires more expertise than extrinsic evaluation.
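The omnibus comparison reduces to a single SciPy call; the sketch below assumes each argument is a vector of per-summary mean ratings for one quality measure and one rater group.

```python
# Kruskal-Wallis test across the three rater groups.
from scipy.stats import kruskal

def compare_groups(expert, crowd, lab, alpha=0.05):
    stat, p = kruskal(expert, crowd, lab)
    return {"H": stat, "p": p, "significant": p < alpha}
```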
Additionally, we calculated the Spearman correlations of the expert evaluations with the crowd and laboratory evaluations for all nine measures, as shown in Figure 3 (Spearman correlations between expert and laboratory, expert and crowd, and crowd and laboratory for the nine quality measures). We found that the correlation magnitudes between expert and laboratory and between expert and crowd were very similar, ranging from moderate to very strong. However, the correlations between crowd and lab were very strong, except for PU, and remarkably higher than the correlations with the experts. These results show that the environment does not have a significant effect on human evaluation, but the level of expertise does.
Effect of Data Quality
To analyze the effect of the data quality itself on human evaluation, we compare the correlations between the expert (experiment 3) and crowd (experiment 4) evaluations of crowd-generated summaries with the correlations between the expert (experiment 7) and crowd (experiment 8) evaluations of TextRank-generated summaries. On average, the correlations for the TextRank summaries across the nine quality measures were 0.12 points lower than for the crowd summaries. To determine whether this is a significant difference, we applied Zou's confidence interval test for independent variables (Zou, 2007) and found that the differences were not statistically significant, except for SC.
Further, we calculated a non-parametric alternative to the t-test, the Mann-Whitney U test, between the crowd and expert ratings for the TextRank summaries. The results revealed that the crowd workers rated the OQ, RC, FO, SU, and PU of the TextRank summaries significantly lower than the experts did. In contrast, when evaluating crowd summaries, the crowd ratings did not differ significantly from the expert ratings except for PU. This result indicates that crowd workers tend to give lower ratings than experts for machine-generated summaries. However, the summary generation method does not affect the rank order of their ratings, and the correlations between crowd and experts do not differ significantly from each other for either human- or machine-generated summaries.
Goodness of Automatic Metrics: With Whom to Compare?
The goodness of automatic summarization evaluation metrics is generally measured by their correlation with human evaluations, usually expert evaluations (Bhandari et al., 2020). In this section, we compare the correlations of the commonly used automatic metrics ROUGE (Lin, 2004) and BERTScore (Zhang et al., 2020) with the expert and crowd evaluations of the TextRank summaries to find out whether crowd workers can be used instead of experts. As human evaluation measures, we considered only OQ, SU, and SI, because the automatic metrics are content-based and should therefore be compared with content-based human evaluations (Lloret et al., 2013). Table 3 shows the correlations of ROUGE and BERTScore with OQ, SU, and SI as measured by the experts and the crowd. To determine whether these correlation differences are significant, we applied Zou's confidence interval test for overlapping dependent variables and found no significant difference between any of the correlations. This result indicates that crowd workers can be used instead of experts to determine the goodness of automatic metrics.
Conclusion and Future Work
In this paper, we report a comparative analysis of a series of human evaluation experiments with crowd workers, laboratory participants, and experts on two different data sets to determine the reliability of human evaluation for text summarization.
However, research papers with expert evaluations for summarization have not reported any mediation meetings, and only 19% reported inter-rater agreement scores, which were in the range of 0.3-0.5 (van der Lee et al., 2021). This raises the question of expert reliability; to address it, we recommend holding mediation meetings with experts, based on our results in Section 4.1.1. With our analysis, we showed that mediation meetings are elementary to ensuring the reliability of expert evaluations for all quality measures.
Further, we found that the number of crowd workers per item determines the reliability of the crowd evaluation. van der Lee et al. (2021) showed that only 57% of papers specified the number of evaluators, and among the papers that reported this number, the median was 3. Our analysis in Section 4.1.2 showed that, when using crowdsourcing, three crowd workers per item can deliver only moderately reliable results, and around ten or more different crowd workers should evaluate each summary. This result is also in line with our previous findings in Iskender et al. (2020b,a).
While the environment (crowd vs. lab) does not affect human evaluations, the level of expertise might. Although there are mostly strong correlations between the experts and non-experts, their evaluations do not match 100%. Depending on the evaluation aim or the end-user group of the summarization system, the required evaluator expertise should be determined; e.g., summarization systems developed for naive end-users should be evaluated by naive end-users rather than by experts, while expert systems should be evaluated by linguistic experts.
Additionally, the summary generation method (human vs. machine) might bias crowd assessments. Because of the machine summaries' unnaturalness, the crowd workers tended to rate machine summaries lower than the experts did. The feedback from crowd workers in experiment 8 that the summaries were very "unnatural" and "robotic" also confirms this finding. Still, crowd workers can be used as a direct substitute for experts to determine the goodness of automatic evaluation metrics developed for machine summaries.
However, this paper has some limitations regarding the data set and task design. We used one task design with a single rating scale (a 5-point MOS scale) and the same set of definitions and explanations of our evaluation criteria in all our experiments, which were conducted on small data sets. In future work, we plan to include different human evaluation criteria, compare different rating scales with each other, conduct A/B testing with a second task design that includes improved definitions of the evaluation criteria based on the expert mediation sessions, and expand the data set size to increase the statistical power of our analysis. Additionally, we plan to conduct virtual mediation sessions between two or three crowd workers to find out whether we can reach results similar to the experts' with a small number of crowd workers.
Despite its limitations, we believe that this paper makes a significant contribution to human evaluation research for text summarization. As Table 1 demonstrates, the time, organizational effort, and cost of human experiments can be enormous. In particular, conducting the laboratory and expert experiments required high organizational effort, and these experiments were completed in months, while the crowdsourcing experiments were usually finished in a couple of days. This shows how burdensome and time-consuming conducting human evaluation can be, which is a great challenge in a fast-moving field like summarization. Therefore, finding reliable ways of using crowdsourcing can be a promising solution, and we hope to see more research in this field.
An ensemble learning strategy for panel time series forecasting of excess mortality during the COVID-19 pandemic
Quantifying and analyzing excess mortality in crises such as the ongoing COVID-19 pandemic is crucial for policymakers. Traditional measures fail to take into account differences in the level, long-term secular trends, and seasonal patterns in all-cause mortality across countries and regions. This paper develops and empirically investigates the forecasting performance of a novel, flexible and dynamic ensemble learning with model selection strategy (DELMS) for the seasonal time series forecasting of monthly respiratory disease death data across a pool of 61 heterogeneous countries. The strategy is based on a Bayesian model averaging (BMA) of heterogeneous time series methods, involving the selection of the subset of best forecasters (model confidence set), the identification of the best holdout period for each contributed model, and the determination of optimal weights using out-of-sample predictive accuracy. A model selection strategy is also developed to remove the outlier models and to combine the models with reasonable accuracy in the ensemble. The empirical outcomes of this large set of experiments show that the accuracy of the BMA approach is significantly improved with DELMS when a flexible and dynamic holdout period is selected and the outlier models are removed. Additionally, the forecasts of respiratory disease deaths for each country are highly accurate and exhibit a high correlation (94%) with COVID-19 deaths in 2020.
Introduction
In the time series forecasting literature, two techniques compete: forecast model selection [1] and forecast model combination [2]. The traditional approach to forecasting seasonal and non-seasonal time series is to select a single best model from a pool of candidates based on certain criteria or using a given technique [3], potentially neglecting model risk. The ensemble prediction method is widely considered a promising strategy, and it has been used with considerable success in research and industry thanks to the availability of a wide variety of individual models. Since Bates and Granger's [4] seminal study, a growing number of linear and non-linear univariate and multivariate time series methods [5,6] and statistical machine learning techniques [7][8][9] have been proposed to increase short- and long-term predictive accuracy in relation to a wide range of problems, including stochastic population forecasting (mortality, fertility, and net migration) [10], epidemiological and excess mortality forecasting [11], meteorology [5], and finance [12,13], among others. Indeed, several comprehensive theoretical and empirical studies have confirmed the superior predictive performance of ensemble methods exploiting a variety of approaches, including stacking and blending to improve predictions, bagging to decrease variance, boosting to decrease bias [14,15], and Bayesian model averaging (BMA) [16][17][18]. When adopting this empirical strategy, choices have to be made as to which models to include in the combined pool and as regards the contribution (weight) of each model to the final prediction. Here, a significant body of literature has examined optimal model combination weights [19,20], by focusing on the selection of optimal combination schemes and weights [21,22], assigning equal weights to the set of superior models [23], or selecting a subset of best models from among the set of candidates (model confidence set) using a dynamic trimming scheme and considering each model's out-of-sample forecasting performance in the validation period [24,25]. Similarly, to cope with concept drift, memory, change detection, learning, and loss estimation, adaptive algorithms have been proposed [26].
However, experiments are usually conducted with a single holdout set, so that pools of models that perform best for a given series type are picked manually [27]. This is often motivated by a lack of computational power, as well as by time limitations, which discourage checking and allocating a specific holdout from a holdout set to each individual model; researchers therefore prefer to consider a fixed holdout for all models, thereby facilitating their combination in an ensemble model. In fact, different holdouts for different models result in different lengths for each model, which means that the combination of these models of different lengths becomes a challenge. However, ignoring the different holdouts for each model reduces adaptability and undermines generalization. This paper proposes a dynamic ensemble learning strategy (DELMS) organized in different layers for panel time series that not only overcomes the limitations of single-model-based methods, but also addresses those of newer ensemble models with fixed holdout sets and fixed thresholds for model selection. The strategy combines twelve models, including the models suggested by the M4 Competition [28] as benchmarks and standards for comparative purposes. We also consider model candidates that ensure sufficient diversification of statistical models, specifically SARIMA, and deep neural network (DNN) models, including the multi-layer perceptron (MLP), to yield more robustly accurate forecasts. We then consider different holdouts and different time series lengths in one layer, and use the other layers to select the best models for each series. In this way, DELMS is able to generate effective and robust forecasts, separate the pattern from the noise, and overcome overfitting problems. The strategy is based on Bayesian model averaging (BMA) to combine the heterogeneous models with the lowest error measure into an ensemble. It applies the selection of the subset of best forecasters (model confidence set) to be included in the forecast combination, the identification of the best holdout period for each contributed model, and the determination of optimal weights using out-of-sample predictive accuracy. A model selection strategy is also developed to remove the outlier models and to combine the models with reasonable accuracy in the ensemble. In short, the proposed ensemble learning procedure (DELMS) involves: (i) setting the different holdouts to be checked for each contributed model; (ii) choosing the best holdout for each model based on out-of-sample forecasting accuracy; (iii) selecting the subset of best forecasters (model confidence set), using a variable trimming scheme in which a multiple of the forecasting accuracy metric range obtained across all candidate models is used as the threshold for model exclusion; (iv) determining the posterior probabilities (weights) of each model, using the normalized exponential (Softmax) function; and, finally, (v) obtaining ensemble forecasts based on the law of total probability, considering the model confidence set and the corresponding model weights. Unlike previous approaches that have focused on either selecting optimal combination schemes and weights or equally weighting a subset of best forecasters, our novel ensemble procedure involves identifying the best holdout period for each model, selecting the best forecasting models, and determining the optimal weights based on the out-of-sample forecasting performance for each dataset.
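As an illustration of steps (iii)-(v), the sketch below trims candidate models whose best-holdout SMAPE exceeds a threshold defined as a multiple of the SMAPE range and converts the surviving errors into Softmax weights; the trimming multiplier of 0.5 is a hypothetical tuning constant, not a value taken from our experiments.

```python
# Model confidence set selection and Softmax weighting (steps iii-v).
import numpy as np

def delms_weights(smape: dict, trim_mult: float = 0.5):
    errs = np.array(list(smape.values()))
    threshold = errs.min() + trim_mult * (errs.max() - errs.min())
    kept = {m: e for m, e in smape.items() if e <= threshold}  # confidence set
    z = np.exp(-np.array(list(kept.values())))  # lower SMAPE -> higher weight
    return dict(zip(kept, z / z.sum()))

def ensemble_forecast(forecasts: dict, weights: dict):
    # Law of total probability: weighted combination over the confidence set
    return sum(w * np.asarray(forecasts[m]) for m, w in weights.items())
```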
To demonstrate empirically the robustness of our approach, we use monthly respiratory disease death data for 61 heterogeneous countries to estimate excess mortality during the COVID-19 pandemic. Excess mortality is the number of deaths attributable to all causes above and beyond the mortality predicted under normal (baseline) circumstances for a given period in a population. Clearly, quantifying and analyzing the excess mortality attributable to the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic is of great relevance for policymakers, public health officials and epidemiologists [29,30] and, in this sense, any improvement in such forecasts is to be welcomed. Excess mortality is typically measured by national or supranational statistical agencies using the absolute, relative (P-score) or standardized (Z-score) number of "excess" deaths, where the benchmark is often computed quite naïvely, by using, for instance, the simple average of the previous year's deaths. The EuroMOMO project (https://www.euromomo.eu) is a notable example of this, with baseline mortality modeled using a generalized linear model corrected for overdispersion, assuming that the number of deaths follows a Poisson distribution. However, this approach does not account for differences in the level, long-term secular trends, and seasonal patterns in all-cause mortality across countries and regions. Additionally, empirical studies show that it is hard to find a single, widely accepted forecasting method (if, indeed, one exists) that performs consistently well across all datasets and time horizons [31]. Besides, data quality is another concern, being responsible for biased and inconsistent parameter estimates and leading to flawed conclusions [32]. This is a matter of considerable concern for forecasters building ensemble learning predictive models to predict, for example, numbers of deaths or when an economy should be re-opened [33,34], among others. Moreover, it makes excess mortality a highly appropriate case for comparing the experiences of different countries or regions, where either the degree of misdiagnosis/underreporting or the problems of data quality may differ [35]. Our hypothesis is that the approach proposed herein leads to a decrease in the individual error of the ensemble members compared with that provided by normal model selection with equal holdouts for the selected models, without overly decreasing the diversity between them. We examine the run times, accuracy, level of contribution, and error metrics of the proposed ensemble techniques and compare them with those of a well-known ensemble model without dynamic holdouts and model selection, and with the individual forecasting models. This article presents a suitable ensemble time series approach with improved predictive accuracy, and it is our belief that it helps demonstrate which time series techniques contribute more to ensembles.
The remaining sections of the paper are organized as follows. Section 2 describes the materials, methods, and related work considered in undertaking this study. Section 3 outlines an extensive set of experiments on respiratory disease deaths in 61 countries and reports the results. The main discussion and conclusions are presented in Section 4. Finally, future research proposals are presented in Section 5.
Materials and methods
Here, we propose a meta-learning approach for adapting the ensemble to the best combination of forecasting models. The candidate models are extracted from different layers with the best holdout for each contributed model and each panel member.
Figs. 1 and 2 provide graphical overviews of the materials and methods employed to develop our proposed strategy. We use multiple learning processes to improve the predictive performance of the ensemble, which is built using a learning approach for the candidates addressed in the last layer. In this section, we discuss these techniques in brief and highlight their contributions.
Layered learning and the ensemble learning strategy
The layered learning approach, as applied to time series data, consists of breaking a forecasting problem down into simpler subtasks that occupy different layers. Each layer addresses a different predictive task, and the output of one layer can be used as input for the next [36]. In this study, the first task is to obtain a direct mapping of the time series for the different countries, combining the individual time series algorithms and producing the ensemble model as the final output. This means the first-layer task is to find the best holdout for each panel member and for each time series algorithm. This facilitates the second-layer task of model selection, which in turn facilitates the identification of the model confidence set of best forecasters in the last layer [37].
It is useful to maximize forecasting accuracy in panel time series -a target that is achieved dynamically -and to adapt the model's learning process to possible unexpected shocks.
Along with the layered learning approach, our ensemble method runs multiple learning algorithms and employs adaptive heuristics to combine forecasters. As a result, we obtain better predictive performance than might be obtained from any of the constituent learning algorithms. Our strategy comprises several selected models (see Table 1, line 10), with the best performance being based on minimum error measures. Each model considers different holdouts for the problem at hand, and the best holdout is selected in each case. This leads to a more robust overall performance of the ensemble, as it increases the diversity of the holdouts; however, the time series length then differs according to the holdout. As such, merging models of different lengths could constitute a problem for the ensemble layer. Thus, we force all the selected models to be of equal length, so that the length of the ensemble equals that of the shortest time series in our set, as shown in the sketch below. Although this windowing strategy provides each forecaster's best prediction and, therefore, the ensemble's best performance, it is clear that, to obtain the best results, the lengths of all the time series should be sufficiently large and almost the same.
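For instance, the equal-length alignment can be enforced in a couple of lines of R (our own sketch, not code from the paper's repository; series_list is a hypothetical list holding one series per selected model):

  n_min   <- min(sapply(series_list, length))                # shortest series in the set
  aligned <- lapply(series_list, function(s) tail(s, n_min)) # trim all to a common window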
Following the standard Bayesian model averaging framework, let each candidate model be denoted by $M_l$, $l = 1, \ldots, L$, representing a set of probability distributions in which the ''true'' data-generating process is assumed to be included, comprising the likelihood function $\mathcal{L}(y \mid \theta_l, M_l)$ of the observed data $y$ in terms of model-specific parameters $\theta_l$ and a set of prior probability densities for these parameters, $p(\theta_l \mid M_l)$. Consider a quantity of interest $\Delta$ present in all models, such as the future observation of $y$. The marginal posterior distribution across all models is

$$p(\Delta \mid y) = \sum_{l=1}^{L} p(\Delta \mid y, M_l)\, p(M_l \mid y), \qquad (1)$$

where $p(\Delta \mid y, M_l)$ denotes the forecast PDF based on model $M_l$, and $p(M_l \mid y)$ is the posterior probability of model $M_l$ given the observed data, with $\sum_{l=1}^{L} p(M_l \mid y) = 1$. The weight assigned to each model $M_l$ is given by its posterior probability

$$p(M_l \mid y) = \frac{p(y \mid M_l)\, p(M_l)}{\sum_{m=1}^{L} p(y \mid M_m)\, p(M_m)}, \qquad (2)$$

where $p(y \mid M_l) = \int \mathcal{L}(y \mid \theta_l, M_l)\, p(\theta_l \mid M_l)\, d\theta_l$ is the marginal likelihood of model $M_l$.

The workflow of our proposed method is presented in Fig. 2. To identify the model confidence set and compute the model weights, we first set the different holdouts to be checked for each contributing model in each dataset. Let $H = \{h_1, h_2, \ldots, h_k\}$ represent the set of holdout periods to be considered in the estimation procedure (see Fig. 2, Layer L1). The second step involves selecting the best holdout for each candidate model based on an out-of-sample forecasting accuracy measure (see Fig. 2, Layer L2). We use the symmetric mean absolute percentage error (SMAPE) as our measure of forecasting accuracy (see Table 1, line 8).[1] To select the best holdout for each model, we tested different holdout values from three to ten years - retaining the holdout set $H = \{3, 5, 7\}$ years[2] (see Table 1, line 7) as representative of the short, medium, and long term - and compared the SMAPE values at each iteration, keeping for each model the holdout with the lowest SMAPE as the candidate for the model confidence set selection step. This gives the strategy the opportunity to cover different parts of the data space and to handle different dynamic regimes in different candidate time series. Additionally, it ensures the final ensemble model is able to offset the limitations of each model with the strengths of the others.
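To make Layers L1-L2 concrete, the following minimal R sketch computes the SMAPE and picks the best holdout for one candidate model; it is our own illustration, and fit_forecast is a hypothetical wrapper that fits the named model on the training window and returns a forecast object:

  smape <- function(actual, pred) {
    # symmetric mean absolute percentage error
    mean(2 * abs(pred - actual) / (abs(actual) + abs(pred)))
  }

  best_holdout <- function(y, model, holdouts_years = c(3, 5, 7)) {
    scores <- sapply(holdouts_years, function(hy) {
      hold  <- hy * 12                               # monthly data
      train <- head(y, length(y) - hold)             # training window
      test  <- tail(y, hold)                         # holdout window
      fc    <- fit_forecast(model, train, h = hold)  # hypothetical fitting wrapper
      smape(as.numeric(test), as.numeric(fc$mean))
    })
    list(holdout = holdouts_years[which.min(scores)], smape = min(scores))
  }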
Third (see Fig. 2, Layer L3), the subset of best forecasters is selected using the best holdout period (see Table 1, lines 21-34) and a variable trimming scheme in which a multiple $\theta$ (pre-set at 0.5) of the distance between the maximum and minimum values of the forecasting error metric is used as the threshold for model exclusion, i.e., using

$$\Gamma_g = \min_{l} \mathrm{SMAPE}_{g,l} + \theta \left( \max_{l} \mathrm{SMAPE}_{g,l} - \min_{l} \mathrm{SMAPE}_{g,l} \right), \qquad (3)$$

where $\mathrm{SMAPE}_{g,l}$ is the SMAPE value for model $l$ in panel member (country) $g$ (see Table 1, lines 35-36). For each panel member, if the error of a candidate model is greater than the $\Gamma_g$ indicator (i.e., $\mathrm{SMAPE}_{g,l} > \Gamma_g$), the model is excluded from the model confidence set and from the ensemble forecast computation (Table 1, lines 37-43), i.e., it is assigned zero weight in (1).

Footnotes: [1] We avoid using the AIC and the BIC because the candidate models belong to different model classes and their likelihoods are computed differently. For selected models within the same class, the BIC is useful and is used automatically by the algorithm to select, for instance, a SARIMA model among the candidate SARIMA models. Another problem associated with the error term in ensemble modeling could in principle be addressed with accuracy measures whose formulas contain a logarithm, such as the MSLE, RMSLE, and SLE; however, these were not used here because some of the algorithms potentially produce negative values, which would interrupt the program. [2] The results for the other holdout periods are consistent with those reported in this paper.
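A direct R transcription of the trimming rule in Eq. (3), assuming smape_g is a named vector of SMAPE values for the candidate models of one panel member (our own sketch):

  theta   <- 0.5
  gamma_g <- min(smape_g) + theta * (max(smape_g) - min(smape_g))  # threshold, Eq. (3)
  mcs     <- names(smape_g)[smape_g <= gamma_g]                    # model confidence set
  # models with smape_g > gamma_g are assigned zero weight in the combination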
Depending on the distribution of the SMAPE values, the number of models excluded from the model confidence set will vary. From a frequentist point of view, building a model confidence set is a way of summarizing the relative forecasting performances of the entire set of candidate models and identifying the set of statistically best forecasters. The advantage of the statistic defined in (3) is its simplicity, ease of application, and interpretability. Moreover, the threshold falls somewhere between the closest and the most distant candidate models. The distant forecasting models are removed from the ensemble, which is ideal for avoiding overfitting and controlling redundancy in the output of the ensemble model. Our intuition is that the models with minimum error are closest to the actual data-generating process. Yet comparing each error measure against this range-based threshold removes only those models that are extremely distant from the other candidates. This upholds the diversity of the selected models and avoids the overfitting problem.
Fourth, the posterior probabilities of the best forecasting models (the model weights) are computed using the normalized exponential (Softmax) function,

$$p(M_l \mid y) = \frac{\exp(-\xi_l)}{\sum_{m=1}^{L} \exp(-\xi_m)}, \qquad (4)$$

with $\xi_l = S_l / \max \{S_m\}_{m=1,\ldots,L}$ and $S_l := \mathrm{SMAPE}_{g,l}$. The Softmax function is a generalization of the logistic function that is often employed in classification and forecasting exercises using traditional machine learning and deep learning methods as a combiner or activation function [38]. The function assigns larger weights to models with smaller forecasting errors, with the weights decaying exponentially the larger the error (see Table 1, lines 11-13). Fifth, the BMA forecasts are obtained based on the law of total probability (1), considering the model confidence set and the corresponding model weights (4). The sampling distribution of the ensemble forecast of the quantity of interest is a mixture of the individual models' sampling distributions (see Table 1, lines 44-53). The pseudocode of the proposed methodology is listed in Table 1.
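Continuing the previous sketch (smape_g and gamma_g as defined there), the weighting and combination steps in Eqs. (4) and (1) can be expressed as follows; forecasts is assumed to be a list of equal-length point-forecast vectors, one per candidate model:

  xi <- smape_g / max(smape_g)       # normalized errors
  w  <- exp(-xi) / sum(exp(-xi))     # Softmax weights, Eq. (4)
  w[smape_g > gamma_g] <- 0          # exclude models outside the confidence set
  w  <- w / sum(w)                   # renormalize over the retained models

  # ensemble point forecast as the weighted mixture of member forecasts
  ensemble <- Reduce(`+`, Map(`*`, forecasts, as.list(w)))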
In the interest of reproducible science, the dataset and all methods are publicly available [39].
The learning algorithms
This section summarizes the characteristics of the individual candidate learning algorithms (time series methods) used in this study (see Table 1, line 10). We selected our models by reviewing the six top-performing hybrid or combination models in the M4 Competition, while taking into consideration our research limitations, which derive from the length of the time series and the computational power required to build the ensemble model for a 61-member panel of time series from 2000 to 2016 with 12 individual models and three holdouts.
The seasonal-trend decomposition using Loess (STL) allows us to decompose a time series into its trend and seasonal components. Based on the Loess smoother, it offers a simple, versatile, and robust method for decomposing a time series and estimating nonlinear relationships [41]. The models need to be robust to the outliers detected in the datasets of the multiple panel members (countries); in specifying the STL, we therefore use a robust decomposition so that sporadic abnormal observations do not affect the estimates of the trend-cycle and seasonal components. The time series are tested for autocorrelation using the Ljung-Box test, under the null hypothesis that the model exhibits appropriate goodness of fit. The method does not handle calendar variation automatically, and it only provides facilities for additive decompositions, which could be considered a limitation of this approach. We use the two parameters t.window and s.window to control the speed at which the trend-cycle and seasonal components can change; smaller values allow for more rapid changes, which we need especially for some time series with strong turning points. A value of six was chosen for both s.window and t.window based on the residual checks and the Ljung-Box test statistics.
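In R, the STL specification described above corresponds to something like the following sketch, where y is a monthly ts object (stl() internally rounds even spans up to the next odd integer):

  fit <- stl(y, s.window = 6, t.window = 6, robust = TRUE)  # robust decomposition
  # residual autocorrelation check; H0: residuals are uncorrelated (adequate fit)
  Box.test(fit$time.series[, "remainder"], lag = 24, type = "Ljung-Box")
  fc <- forecast::forecast(fit, h = 12)  # forecast from the STL decomposition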
The seasonal naive (SNAIVE) method sets the forecast to be equal to the last observed value from the same season of the year (i.e., the same month of the previous year) [42]. It is a useful benchmark for other forecasting methods and, here, it was found to be helpful in showing the recent time series trend and for adjusting the ensemble model for the trend component.
Similarly, the SARIMA and random walk forecasts (RWF) - the latter as a SARIMA(0,0,0)(0,1,0)m model, in which m is the seasonal period - were used as state-of-the-art methods for memorizing repeating monthly patterns. However, many SARIMA models have no exponential smoothing counterparts [43], and robust univariate forecasting models, such as Holt-Winters' multiplicative method (HWM) and the exponential smoothing state space model (ETS), can be considered a good complement to SARIMA models in our final ensemble. All ETS models are non-stationary, while some SARIMA models are stationary [44]. ETS follows the most recent trend of the time series, making it appropriate for strengthening the trend component in the ensemble's final predictions. ETS point forecasts are equal to the medians of the forecast distributions. For models with only additive components, the forecast distributions are normal, so the medians and means are equal. For multiplicative errors or multiplicative seasonality - which perform similarly in most of the time series analyzed in this study - the ETS point forecasts are not equal to the means of the forecast distributions; in these cases, SARIMA is a better choice. On the other hand, ETS is a non-linear exponential smoothing model with no equivalent SARIMA counterpart. We therefore let the ETS model be selected automatically, with additive trend and seasonal components and the restriction of finite variance. Bootstrapped (resampled) errors were employed rather than parametric error distributions, and simulation rather than algebraic formulas was used to calculate prediction intervals; the other options for the ETS model are shown in Table 2. TBATS - that is, (T)rigonometric terms for seasonality, (B)ox-Cox transformations for heterogeneity, (A)RMA errors for short-term dynamics, (T)rend, and (S)easonal components - is also used to adapt the ensemble model to the multiple seasonality of some time series.
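Using the forecast package in R, the statistical candidates just described can be specified roughly as follows (a sketch with illustrative defaults, not the paper's exact configuration; y is a monthly ts):

  library(forecast)
  fc_snaive <- snaive(y, h = 12)                           # seasonal naive
  fc_rwf    <- rwf(y, h = 12)                              # random walk forecast
  fit_sar   <- auto.arima(y, seasonal = TRUE)              # automatic SARIMA selection
  fc_hwm    <- hw(y, seasonal = "multiplicative", h = 12)  # Holt-Winters multiplicative
  fit_ets   <- ets(y, additive.only = TRUE)                # additive trend/seasonality
  fc_ets    <- forecast(fit_ets, h = 12, simulate = TRUE, bootstrap = TRUE)
  fit_tbats <- tbats(y)                                    # Box-Cox, ARMA errors, trend, seasonality
  fc_tbats  <- forecast(fit_tbats, h = 12)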
In the case of the neural network time series algorithms, extreme learning machines (ELM) were used with the lasso penalty. ELM theory assumes that randomness in the determination of the coefficients of the neural network predictors (the input weights) can feed the learning models with no iterative tuning for a given distribution, as is required in gradient-based learning algorithms. The model entails randomly defined hidden nodes and input weights without any optimization, so that only the output weights need to be calibrated during the training of the ELM [45]. In the hyperparameter calibration of the ELM, we consider a maximum of 500 hidden nodes, with 200 networks trained and summarized in the ELM's final ensemble forecast.
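With the nnfor package, an ELM along these lines (lasso-estimated output weights, up to 500 hidden nodes, 200 networks averaged) could be specified as below; the argument names follow nnfor's elm() and should be checked against the installed version:

  library(nnfor)
  fit_elm <- elm(y, hd = 500, reps = 200, type = "lasso")  # random hidden layer; only
                                                           # output weights are estimated
  fc_elm  <- forecast(fit_elm, h = 12)                     # combined over the 200 networks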
Neural network autoregression (NNAR) refers to single-hidden-layer networks that use the lagged values of the time series as inputs, with automatic selection of parameters and lags according to the Akaike information criterion (AIC) [46]. In the NNAR model specification, we considered the last observed values from the same season as inputs to capture the seasonality patterns, and we used a size equal to one because we have a single attribute without a regressor; by way of improvement, we fitted 100 networks with different random starting weights and averaged them to produce the forecasts. Additionally, we considered the multilayer perceptron (MLP) as a kind of NNAR model. This is more complicated and advanced than the NNAR, having three components in the form NNAR(p,P,k), in which p denotes the number of lagged values used as inputs (usually chosen based on an information criterion, such as the AIC), P denotes the number of seasonal lags, and k denotes the number of hidden nodes.
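The NNAR and MLP candidates could be fitted as follows (a sketch; nnetar() comes from the forecast package and mlp() from nnfor, and the arguments shown reflect the settings described in the text rather than a prescribed configuration):

  library(forecast)
  library(nnfor)
  fit_nnar <- nnetar(y, P = 1, size = 1, repeats = 100)  # seasonal lag as input; average
                                                         # over 100 random restarts
  fc_nnar  <- forecast(fit_nnar, h = 12)

  fit_mlp  <- mlp(y)                                     # NNAR(p,P,k)-type network with
                                                         # automatic lag/node selection
  fc_mlp   <- forecast(fit_mlp, h = 12)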
Finally, singular spectrum analysis (SSA) was used as one of the high-quality modeling approaches. The calibration of the SSA is an important but not easy task in a standalone modeling approach [47]. It depends upon two basic parameters: the window length and the number of eigentriples used for reconstruction. Improper values for these parameters yield an incomplete reconstruction, and the forecasting results might be misleading. In this study, we set the window length equal to 12 and left the number of eigentriples equal to NULL (i.e., selected automatically). Table 2 summarizes the hyperparameters of the algorithms used in this study.
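Using the Rssa package, this SSA configuration can be sketched as below; the choice of the six leading eigentriples for the forecast is illustrative only:

  library(Rssa)
  s  <- ssa(y, L = 12)                              # embedding with window length 12
  fc <- rforecast(s, groups = list(1:6), len = 12)  # recurrent forecast from leading eigentriples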
Data selection and cleansing
We use cause-of-death data from the World Health Organization's (WHO) mortality database [50] to empirically demonstrate the forecasting capacity of the proposed methodology. The database collects cause-of-death statistics from country civil registration systems and estimates from the United Nations Population Division for countries that do not regularly report population data. We use an Excel file of this database to evaluate the data quality of each country, and a CSV file that includes the death time series of each country by gender. The first of these files identifies the quality of data for each country using five color categories - green, dark yellow, light yellow, dark red, and light red. Countries classified as green have multiple years of national death registration data with high completeness and quality of cause-of-death assignment. Estimates for these countries may be compared, and their time series may be used for priority setting and policy evaluation. However, this file only includes data for 2000, 2010, 2015, and 2016 and is not complete for the time series. As a result, we used it only to identify the countries reporting high-quality data to the WHO and ranked them according to their data quality. In line with the metadata of the dataset, the criteria used to rank the countries by data quality are shown in Table 3, coinciding with the WHO descriptors.

We considered only those countries with a data quality corresponding to the first three categories and eliminated various islands due to a lack of data (e.g., Åland Islands). We also cleaned the dataset by removing the total column and various rows with unknown month data and/or zero deaths. Some countries reported total deaths for three months in a row during certain years; in such instances, we assumed a uniform distribution of deaths across the quarter and allocated the corresponding value to each month. We filtered the datasets for respiratory diseases and considered the death variable as a univariate time series with monthly sampling frequency. Table 4 shows the WHO codes classified as respiratory infections (note to Table 4 - Otitis media: acute otitis media (AOM) is a common complication of upper respiratory tract infection whose pathogenesis involves both viruses and bacteria). To compute the number of deaths attributable to respiratory diseases, we aggregated codes 380 and 410 or, equivalently, codes 390, 400, and 410. We also corrected the names of some of the countries (Appendix A). In this way we were able to calculate the proportion of deaths attributable to respiratory diseases. To estimate the number of monthly deaths caused by respiratory diseases, we multiplied the annual proportion by the total number of forecast deaths each month; that is, we used the fraction of annual deaths from respiratory diseases over the total number of deaths as the proportion of deaths in each month. This procedure provided us with a dataset of more than twelve thousand observations in a pool of a 61-member panel time series (countries) from 2000 to 2016 [39] (see Table 1, lines 3-4). These panel time series cover possible situations of stationarity, non-stationarity, increasing trends, seasonality, and structural breaks, allowing a comprehensive evaluation of the improvement in accuracy of the candidate and ensemble models in different scenarios.
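The two preprocessing rules just described can be illustrated in base R (our own sketch over a hypothetical data frame deaths with columns country, year, month, total_deaths, resp_deaths and fc_total, the forecast monthly totals; idx_q indexes the three months of one reported quarter):

  # uniform allocation of a quarterly total across its three months
  deaths$total_deaths[idx_q] <- sum(deaths$total_deaths[idx_q], na.rm = TRUE) / 3

  # annual share of deaths attributable to respiratory diseases, per country-year
  annual <- aggregate(cbind(total_deaths, resp_deaths) ~ country + year,
                      data = deaths, FUN = sum)
  annual$prop <- annual$resp_deaths / annual$total_deaths

  # monthly respiratory-death estimate = annual share x forecast monthly totals
  deaths <- merge(deaths, annual[, c("country", "year", "prop")],
                  by = c("country", "year"))
  deaths$resp_hat <- deaths$prop * deaths$fc_total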
Given the varying data quality of countries/territories/areas as regards case detection, definitions, testing strategies, reporting practices, and lag times, missing values are to be expected in the time series dataset. To deal with this problem, we tested the Kalman, seasplit, and seadec algorithms for imputing the missing values. Of the three, the seasplit algorithm performed best as regards preserving both the trend and the seasonality of our dataset (see Table 1, line 19). We only imputed missing values within the time series, not at the beginning of time series whose start date falls after 2000; as a result, rather than changing the first year of such time series to our base year of 2000, we used the first year for which data were available (see Table 1, lines 17-18). To avoid the error caused by combining time series of different lengths in an ensemble model, we adapted the R code to handle different start years. The same problem arises from the procedure adopted to select the best holdout for each model, which may ultimately lead to model combinations considering forecasts based on different holdouts, i.e., different time series lengths.
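The imputation comparison can be reproduced with the imputeTS package along these lines (a sketch; the function names follow imputeTS version 3 and should be checked against the installed release):

  library(imputeTS)
  y_kalman   <- na_kalman(y)     # Kalman smoothing on a state-space model
  y_seadec   <- na_seadec(y)     # deseasonalize, impute, reseasonalize
  y_seasplit <- na_seasplit(y)   # impute within each seasonal subseries (retained here)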
Finally, to assess the superiority of the proposed DELMS model, 7137 time series models were explored. These were obtained from 12 time series models plus an ensemble, under 3 scenarios and 3 holdouts, for 61 countries (13 x 3 x 3 x 61 = 7137).
Forecasting accuracy comparison
The predictive accuracy metrics obtained for the three alternative holdout periods under investigation, using three alternative backtesting procedures, are reported in Table 5. In the case of the first approach -the ''Fixed holdout'' -we used a fixed holdout period equal to 3, 5, and 7 years to derive the composite (ensemble) model.
The results in the first columns show that some models exhibit better performance than that of the ensemble models with fixed holdouts. For instance, the average error of the TBATS model across two holdout periods is smaller than that of the BMA (see Table 5, Column (1)).
The second approach - the ''Fixed holdout with model selection'' - uses a multiple of the range of the SMAPE values across all methods to evaluate the distance of each model from the others, as detailed above in the pseudocode (Table 1). Models with SMAPE values higher than the introduced indicator are considered poor forecasters and eliminated from the ensemble forecast [39]. The results in Table 5 show that the accuracy of the BMA approach becomes more robust when this selection approach is pursued for each holdout, with the composite model now ranking first among all the methods tested. With a fixed holdout of 3 years for all models, which is the classical approach, the BMA has a SMAPE value of 0.112. For the same holdout, but with model selection, this SMAPE value improves to 0.103. In the third approach - ''model selection plus dynamic holdouts'', that is, a combination of approaches one and two - the percentage error improves again, to 0.102. This approach combines the best forecasting models, each fitted using its optimal holdout. As a result, the accuracy of the ensemble is improved, outperforming the individual learning algorithms by a reasonable margin. Fig. 3 summarizes these empirical results. It is apparent that the ensemble model with the new layered learning approach (DELMS) exhibits greater predictive accuracy than any of the single forecasting methods used and than either of the ensemble strategies (with fixed holdouts, and with fixed holdouts plus model selection). It shows that the proposed approach improves the predictive performance at each step of the learning process illustrated in Fig. 2.
Finally, the Wilcoxon signed-rank test was performed to assess the significance of the superiority of the proposed model (DELMS). This test determines the significance of the difference between the forecasting errors of two forecasting models fitted to the same number of observations [51]. Let $e_i$ be the forecasting error for the $i$th forecast value (i.e., country) generated by each of the two forecasting models compared, DELMS and BMA(holdout = 3) (Appendix C), and let $d_i$ denote the difference between the paired errors. The sums of ranks are

$$r^{+} = \sum_{d_i > 0} \mathrm{rank}(|d_i|), \qquad r^{-} = \sum_{d_i < 0} \mathrm{rank}(|d_i|), \qquad (5)$$

where $r^{+}$ and $r^{-}$ represent the sums of ranks of the positive and negative differences, respectively. For $d_i = 0$, we eliminate the comparison. The statistic $W$ is defined as in Eq. (6):

$$W = \min(r^{+}, r^{-}). \qquad (6)$$

The proposed DELMS model significantly outperforms BMA(holdout = 3), which is the ensemble model with the best performance among all ensembles with fixed holdouts (Table 5). The proposed DELMS model is significantly superior to the other BMAs with respect to forecasting accuracy (P-value = 0.015); the test results are reported in Table 6.
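In R, the test reduces to a single call on the paired country-level errors; e_delms and e_bma are hypothetical vectors of the 61 SMAPE values, and the one-sided alternative shown is one plausible reading of the comparison reported:

  # paired Wilcoxon signed-rank test; zero differences are dropped automatically
  wilcox.test(e_delms, e_bma, paired = TRUE, alternative = "less")
  # alternative = "less" asks whether the DELMS errors are systematically smaller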
Models excluded from the selection procedure

Table 7 reports the distribution of the models excluded from the selection procedure and ranks them according to their contribution to the composite model. A vertical comparison of the results offers insights into the contribution of each model to the ensemble, while a horizontal comparison enables us to assess the rate of contribution across different holdout periods.
The results show, first, that all models are excluded several times from the BMA model space as a result of the procedure to select the model confidence set, highlighting that the set of best-performing forecasters differs across countries, i.e., their predictive accuracy is population- and period-specific. This is unsurprising and can be explained by the differential patterns observed in the respiratory disease data. The variability in the models' out-of-sample forecasting accuracy also reveals their ability to capture diverse features of mortality data. Second, the results suggest that combining models is a way to leverage their individual strengths and improve the composite model by selecting both the best holdout for each model and the best forecasters in the model confidence set finally used to make the forecast.

Table 5. Ranking of the models and ensembles according to the accuracy measure. Source: Authors' own. Notes: (1) The ''Fixed holdout'' column for the first row (BMA) shows the SMAPE for the Bayesian model averaging approach with a fixed holdout; the remaining rows show individual models. Columns (2) and (3) represent the methods proposed herein.

Table 8 presents the contribution ranks, the exclusion frequency, and the proportion of the selected models with the best holdout for the DELMS. The results show that the contribution of single learners to the ensemble changes when compared with that obtained with model selection only (Table 7), highlighting again the importance of combining model selection with holdout period calibration. Fig. 4 reports the BMA model confidence set (vertical axis) and corresponding posterior probability (horizontal axis) for selected countries.
As we used the SMAPE criterion to select the set of models and their respective weights, a weight of zero indicates that the corresponding model is excluded from the BMA forecast combination. We can observe that each model's contribution to the ensemble varies across countries, while the ensemble model performs consistently well in all countries.
Algorithmic efficiency analysis
We analyze the algorithmic efficiency of each method - i.e., the amount of computational resources used by each algorithm - by measuring the time spent fitting the ensemble model under each approach and using it to predict the maximum likely run-time for a new time series (Table 9). The CPU used here is an Intel Core i7-7500U processor at 2.70-2.90 GHz with 16.0 GB of RAM. The modeling, training, tuning, and testing are programmed in R 4.1.2. The proposed method fits the models over three holdout periods in order to select the best holdout for each model.
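Run-times of this kind are typically collected with system.time(); for example (our own sketch for a single model fit):

  elapsed <- system.time({
    fit <- forecast::tbats(y)                # fit one candidate model
    fc  <- forecast::forecast(fit, h = 12)
  })["elapsed"]                              # wall-clock seconds for this fit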
Our expectation was that the method would increase the run-time to at least three times that of the two other approaches, which is to be expected given that the underlying model is a multi-step forecasting method. However, if we consider the average run-times and the mean confidence intervals for the three approaches, we see that they do not differ greatly, which indicates that our proposed method is efficient in terms of computation time.
Excess mortality analysis
The proposed ensemble learning strategy for panel time series with model selection and dynamic holdouts (as discussed in Section 2 and above in Section 3) was used to forecast the number of deaths caused by different kinds of respiratory disease for the subset of 61 countries in 2020 (see Table 1, line 5). Additionally, COVID-19 deaths for the same year were extracted from the COVID-19 Weekly Epidemiological Update published by the World Health Organization (WHO), with data as received from national authorities as of 3 January 2021, providing full coverage of 2020 [52]. Table B.1 (in Appendix B) presents forecasts of the total number of deaths attributable to respiratory diseases (RD TD), calculated by aggregating the monthly death forecasts for each country. The last two columns show the standardized values of the total number of deaths attributable to respiratory diseases and to COVID-19, respectively, used in calculating the correlation. The Pearson correlation for all 61 countries is 0.34, which is statistically significant (P-value = 0.007). As shown in Table 10, to calculate the correlation for a more limited set, we considered the European countries, including the United Kingdom, together with Canada and the United States of America.
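The reported correlation amounts to the following sketch, where rd_td and covid_td are the country-level totals of Table B.1 (note that the Pearson coefficient is unaffected by the standardization, which is kept only for comparability with the table):

  z_rd    <- as.numeric(scale(rd_td))     # standardized respiratory-death forecasts
  z_covid <- as.numeric(scale(covid_td))  # standardized COVID-19 deaths
  cor.test(z_rd, z_covid, method = "pearson")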
The selection criteria used were the maturity standards of their official statistics (SDDS+, SDDS, GDDS), the outcomes of official statistics corruption models [53], and the quality of death data according to the WHO ranking discussed in Section 3.1. For this subset, the correlation coefficient increases dramatically to 0.94 (P-value = 0.000), which can be attributed to the higher quality of the official statistics in these countries. Ashofteh and Bravo (2020) have shown there to be significant variation in the quality of the COVID-19 datasets reported worldwide, albeit a recent study suggests that data science and new technologies can be expected to play a significant role in improving the data quality of national statistical offices in the future [54].
The comparison of death forecasts attributable to respiratory diseases and COVID-19 deaths is shown in Figs. 5 and 6. For most countries, COVID-19 deaths can be said to have ''replaced'' the respiratory deaths that would have occurred based on extrapolations of past respiratory disease trends. Here, a study of the factors affecting COVID-19 mortality shows a high correlation between respiratory deaths and COVID-19 deaths, a finding that is consistent with clinical manifestations and epidemiological studies. For example, countries with a high expected number of respiratory disease deaths presented higher excess mortality at the macro (country) level. At the individual level, a higher number of deaths from respiratory diseases could be considered an indication of the population's greater susceptibility to COVID-19 symptoms and a greater risk of death. This comparative study highlights the fact that evaluations of the policy effectiveness of different countries could be biased if their past experience with respiratory diseases is not taken into account. Fig. 5 shows that the countries of Europe and North America were sensitive to respiratory diseases and that this boosted the excess mortality attributable to the COVID-19 pandemic; however, Fig. 6 shows that in 2020 some countries dealt better with COVID-19 than others as regards their vulnerability to respiratory diseases. Thus, this last figure highlights that in countries in which the forecast of respiratory disease deaths significantly exceeds the confirmed COVID-19 deaths (e.g., Japan and the Philippines), the management of the pandemic crisis succeeded in reducing excess mortality. The results shown in these two figures are very much in line with a recent study indicating a much lower overall excess-mortality burden due to COVID-19 in Japan than in Europe and the USA [55]. Here, Yorifuji et al. [56] suggest that in Japan, the public health regulations aimed at preventing COVID-19 may have incidentally reduced mortality related to respiratory diseases, such as influenza, and so decreased net excess mortality.

Notes to Table 9: ART: average run-time; STD: standard deviation; LCL: lower confidence limit; UCL: upper confidence limit.

Table 10. Comparison of the number of deaths forecast for respiratory diseases and actual COVID-19 deaths. Source: Authors' own. Columns: Row; Country (1); Alpha-3; Country No; Population (2); RD TD (3); COVID TD (4); Standardized RD TD (5).
Additionally, in addressing vulnerability to respiratory diseases, Japan and the Philippines appear to have set a good example for the rest of the world in terms of controlling the effect of respiratory death numbers on the number of COVID-19 deaths. The similarity of the situations in these two countries seems to testify to the importance of the agreements struck on their so-called COVID-19 Response Support. As reported on the website of the Department of Foreign Affairs of the Philippines, the Japanese Government has been unstinting in its commitment to the Philippines' recovery efforts, previously pledging over JPY 100 billion of assistance in emergency and standby loans and donating 1 million Japan-manufactured AstraZeneca vaccine doses. Fig. 6 shows a similar situation for the Republic of Korea, which geographically lies in the same vicinity as these two countries. The outcome of the comparison of death forecasts attributable to respiratory diseases and actual COVID-19 deaths for the Republic of Korea is in line with a recent study estimating mortality in Korea undertaken by Shin et al. [57], which finds that mortality in 2020 was similar to the historical trend. This similarity of outcomes reported by these neighboring countries seems to highlight the importance of international cooperation and the sharing of resources for the successful control of the effects of pandemics. Moreover, as these countries are geographically close to each other, meteorological factors might also have been influential in their respective outcomes. Clearly, more research is required.
Finally, in addition to the effect of respiratory deaths on deaths attributable to the pandemic, international cooperation, optimal scheduling and the utilization of medical resources, large-scale virus testing, protecting and managing the healthcare of the elderly, lockdowns, vaccination, and controlling the borders are examples of other factors that might result in different outcomes by country. However, accurate and timely estimations of respiratory deaths also seem to be an important factor when undertaking comparisons of multiple countries.
Conclusions
We have tested a new ensemble learning technique (DELMS) for the panel time series forecasting of respiratory disease deaths, and we have summarized the empirical results obtained when using individual models, a simple ensemble model, an ensemble with model selection, and an ensemble with model selection and dynamic holdouts (DELMS). Our goal in doing so was to obtain a benchmark for evaluating the excess mortality related to COVID-19 that might serve as a common framework for all countries.
Based on the performance outcomes of the models (Table 5) and the results of the Wilcoxon signed-rank test (Table 6), the ensemble with model selection and dynamic holdouts (DELMS) performs, on average, significantly better than the other methods. Our results provide clear evidence of the competitiveness of this method in terms of its predictive performance when compared to the state-of-the-art approaches, and even to the ensemble model without the dynamic holdout and model selection layers.
Our analysis of the contribution of each of the candidate models to the ensemble (Tables 7 and 8) highlights the positive effect on overall prediction accuracy of selecting the best holdout for each model and excluding the outlier models from the ensemble. Moreover, it was evident that some of the state-of-the-art statistical approaches outperformed the neural network time series models. A possible explanation for the underperformance of the complex neural network approaches might lie in the non-stationary elements, for example the trend component, and in their pre-set hyperparameters. Neural network time series models have been shown to perform much better when the time series data are nonlinear and stationary and present sudden changes in their layering hierarchy [58]; for this reason, they can be expected to add value to the ensemble in the case of mostly detrended time series. Additionally, recurrent neural networks, such as LSTM and GRU, have the potential to outperform time series models, and their use in the ensemble could usefully be explored in future studies with sufficient computational resources or fewer panel members.
The variation in the performance of each model stresses the need both to improve each model individually, by selecting its best holdout, and to determine the best models to contribute to the ensemble without overfitting. The indicator proposed in Formula (3) removes only those models that are very distant from the others and, by so doing, avoids significant bias in the set of candidate forecasters. The final ensemble model shows a significant improvement in overall accuracy when compared with the other ensembles and with each individual state-of-the-art approach. The superiority of the proposed DELMS model was explored by comparing 7137 time series models obtained from 61 countries, 12 time series models plus an ensemble, 3 scenarios, and 3 holdouts.
Here, we have used the new ensemble strategy to forecast the number of deaths from respiratory diseases in 2020 for a sample of 61 countries. The correlation between the standardized values of deaths from respiratory diseases and those from COVID-19 was positive and statistically significant. Based on this outcome, it is apparent that we should consider death forecasts from respiratory diseases as a covariate when evaluating the management strategies employed by different countries, be they lockdown rules or the relaxation of border control regulations. On the basis of our study, Japan and the Philippines are candidates for further investigation in this regard; indeed, they are more suitable candidates than countries that simply record a low death toll. It may well be that the experience of these countries with high mortality attributable to respiratory diseases played a highly relevant role in their management of the pandemic.
Indeed, in the case of the COVID-19 pandemic, it might be more relevant to focus on the death toll rather than on the cumulative number of patients. Given the nature of pandemics, the challenge usually lies in being able to control their spread; here, however, the primary concern might be said to have been controlling the severe cases and caring for the patients facing the greatest likelihood of death. Countries presenting a high number of cases of respiratory disease that nevertheless successfully managed the pandemic could, therefore, be better targets for further studies comparing their health policies and strategies with those implemented by countries presenting only a low rate of mortality.
In short, the study described here represents an initial attempt at developing a new approach to ensemble forecasting tasks. The main motivation for this paper was the observation that the performance of the ensemble model might be enhanced by selecting the best holdout for each candidate model and by choosing the best outcomes based on the dynamics of the observed values of the main series. In experiments using the 61-member panel time series of respiratory disease deaths recorded between 2000 and 2016, the aggregation of forecasting models selected using our approach provides a consistent advantage in terms of accuracy and leads to better predictive performance. Moreover, our study provides a correction of the total number of positive cases of COVID-19, in accordance with the expected number of deaths attributable to respiratory diseases as identified by our ensemble model.
Finally, this study has highlighted the pandemic experiences of Japan and the Philippines, identifying them as candidates for further exploration. The two countries presented a high degree of vulnerability to the COVID-19 pandemic; yet, despite this, they succeeded in managing it well. Thus, regardless of death tolls higher than those recorded in some other countries, their policy response should be examined to extract best practices. Finally, and of particular interest, is the fact that in most countries COVID-19 deaths seem to have ''replaced'' the deaths attributable to respiratory disease that would likely have occurred in the absence of the pandemic, based on an extrapolation of past trends of such deaths.
Future research
Future studies could usefully seek the optimization of θ in Formula (3), that is, investigate the dynamic selection of the optimum θ to ensure better performance. Additionally, as the usual neural networks fail to model time series adequately, especially in the case of incomplete or limited data during the onset of an epidemic [59], the study of recurrent neural networks, such as LSTM and GRU, would constitute an interesting next step if the necessary computational power is available. This research should examine their impact on predictive accuracy, computation time, and other resources, given the potential of these mechanisms to outperform ensemble time series models with no more than a reasonable increase in the computational power required. Indeed, the consideration of a non-linear meta-learning approach, as opposed to a linear one, and of prediction intervals, as opposed to point forecasts, could constitute a fruitful next step. Moreover, the use of classification techniques to analyze heterogeneous and homogeneous countries could be considered as another layer following the application of the forecasting methods; a clustering analysis might usefully be implemented based on the notion of excess mortality. Finally, Japan and the Philippines stand out, and their policy response should be subjected to an epidemiological examination to determine what lessons might be learned.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 2022-03-21T15:17:31.534Z | 2022-08-01T00:00:00.000 | {
"year": 2022,
"sha1": "8f55f439faac638f8c95c7914e263835dee7b5ed",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.asoc.2022.109422",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "eee8ca84c16d5a030108d5f205badbc679f5d595",
"s2fieldsofstudy": [
"Economics",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
229531742 | pes2o/s2orc | v3-fos-license | Gestión de materiales culturales delicados: los restos óseos humanos del Museo Etnográfico Municipal Dámaso Arce, Olavarría, Argentina Management of Delicate Cultural Materials: the Human Remains of the Museo
SUMMARY Museums that keep human remains must conform to a set of regulations and professional recommendations for their safekeeping. The icom (International Council of Museums) Code of Ethics identifies these pieces as sensitive cultural materials and has a number of related guidelines. This forms the premise of this work, which aims to present the procedures that are being adopted to legally and technically condition the human skeletal remains of seven persons that are located in the Museo Etnográfico Municipal Dámaso Arce (meda, for its initials in Spanish) de Olavarría, in Argentina. Currently this museum is not open to the public.
One of the biggest problems identified in that diagnosis is the absence of adequate, up-to-date management of the skeletal remains of seven persons housed in this museum.
of bone and teeth samples, which are being processed at the Stable Isotope Laboratory of the University of Wyoming (United States).
Facultad de Ciencias Sociales (Facso)
Universidad Nacional del Centro de la Provincia de Buenos Aires
Centro de Registro del Patrimonio Arqueológico y Paleontológico
(crpap) has nowhere suitable to house collections, meaning that municipal museums, without specific regulations to govern them, act as transient repositories. That is, ownership of each municipality's museum collections resides with the province, but each municipality has the power to promote local ordinances. In fact, the Ola- to the institution, location (if in exhibition), preservation and restoration treatments or a log of procedures to which they have been exposed, etc. It is also mentioned that museums have obligations in terms of the protection, accessibility and interpretation of their collections.
Human remains and the regulatory framework
For several years there has been a legal framework in Argentina governing human remains kept in museums. The Ley Nacional 25.517/01 mandates that such remains be made available to the indigenous peoples or communities that claim belonging to them, and that those that remain unclaimed may continue in the institutions that house them and should be treated with the utmost respect. Its regulatory decree, 701/2010, determines that the Instituto Nacional de Asuntos Indígenas (inai) is responsible for coordinating, articulating and assisting in the monitoring and study of compliance with the directives provided for by Law 25.517. However, even before this rule was sanctioned, restitutions had already been made to different indigenous or family communities (Ametrano, 2015; Endere, 2002), along with re-burials (Curtoni & Chaparro, 2007) and repatriations (Pérez & Pegoraro, 2004; Rodríguez et al., 2005). These actions were linked to the processes of ethnic re-emergence that have been developing in the country for several years (Centro Promocional de Investigaciones en Historia y Antropología [Cepiha; in English, Promotional Center for Research in History and Anthropology], 1999; Podgorny & Politis, 1992; Straighten et al., 2014), in line with the international movement known as the "reburial issue" (Hubert, 1992; Ucko, 2001) and with the recognition of rights such as those from the International Labour Organization (ilo, 1989; Powell, 2007; Giesen, 2013). In 1999 a civil association of voluntary membership was created, the aapra, whose own code of ethics states (Article 14) that human remains should be treated with particular respect, taking into account criteria agreed between the different stakeholders concerned. Correspondingly, the above-mentioned Code of Ethics (2004) considers that the acquisition, conservation and exhibition of the human remains and sacred objects investigated should be carried out in accordance with professional standards, respecting the interests and beliefs of the ethnic or religious communities and groups from which they originate (Principles 2.5 and 3.7). With regard to accessibility, it states that "it is necessary to facilitate free access to the collection and all relevant information related to them" (Principle 3.2).
With regard to exhibition, it mentions that it must be done with great tact and respect for the feelings of human dignity held by all peoples. In 2007, the aaba approved a declaration regarding the ethics of the study of human remains, which recognizes the rights of both the original peoples and the subjects of their study, as they are of interest to all humanity (Endere, 2013). In 2011 the aaba adopted a Code of Ethics that established, amongst other issues: that it is the responsibility of biological anthropology professionals to ensure the proper conservation of human remains; that the study of human remains must be carried out with due justification; that those responsible must be adequately trained to do so; and that interaction with communities that hold claims of belonging must be promoted, respecting their customs, creeds and values.
In order to meet these new requirements, museums (and other research centers) have adopted special protocols for the treatment of indigenous remains, updated their inventories, and reviewed documentary collections and movement records. They have also begun to prepare exclusive deposits, to enable spaces for holding indigenous ceremonies, and to give workshops open to the community in order to discuss these issues together with interested parties (Castro et al., 2009; García, Conforti, & Guichón, 2018; Noticias del Museo de Córdoba, 2017). One topic of debate is, for example, the scope of the concept of respectful treatment. In the case of a museum in the province of Santa Cruz, and in response to collaborative work with some Mapuche and Tehuelche communities, it was requested that the remains not be referred to with the term collections because, according to their own cultural parameters, such a concept reduces them to simple objects (Nahuelquir et al., 2015).
GENERAL DIAGNOSIS OF MUSEUMS IN ARGENTINA
According to the jurisdictional affiliation of Argentine anthropological museums, the status of the heritage housed within them is rather disparate, which is evident across infrastructure, conservation, staff training, cooperation and academic advice, among other aspects. Some private or municipal museums are relegated, and even abandoned, to the good intentions of whichever government is on duty, as well as to the whims of individuals within them. On the other hand, some university, provincial and national museums have been able to update and renovate their facilities and practices by securing funding from national or international agencies (Bonnin, 2014). Since 2003, a plan by the Dirección Nacional de Patrimonio y Museos has been implemented at the governmental level which aims to contribute to the overall improvement of many institutions, while the formation of professional associations (such as the
Red Jaguar, the Comité de Educación y Acción Cultural Argentina
[ceca], a member of icom, and the typa Foundation) facilitated updates and training. However, the above-mentioned improvements were limited to the support of these organizations, and not all municipal museums participated in, or had access to them, meaning that, in many cases, their conditions remained unimproved.
Currently, the province of Buenos Aires boasts a great diversity of municipal museums, depending on the degree of liaison with the academic sphere or state dependency that they have achieved, and on the support of associations of friends. Some of the most successful are those that managed to maintain a direct link with researchers over time, such as the Museo José Mulazzi in Tres Arroyos, the Museo Gesué Noseda in Lobería, and the Museo de Ciencias Naturales in Necochea, to name a few of those closest to Olavarría. However, the Museo Dámaso Arce, in spite of being located in the same city as the Instituto de Investigaciones Arqueológicas y Paleontológicas de Cuaternario Pampeano (Incuapa), devoted to archaeology and paleontology, did not enjoy the same fate. There are several reasons for this, and they can only be understood through a historical analysis of its development over time (Chaparro, 2017).
OVERVIEW OF THE MUSEO ETNOGRÁFICO DÁMASO ARCE (meda)
In the first decade after its foundation (1963), meda received an important award of national academic prestige, which was consolidated with the creation of the Instituto de Investigaciones Antropológicas de Olavarría (iiao), both dependent on the Municipality of Olavarría (Politis, 2005). Several conditions converged to make this happen, including a visionary and proactive director, with extensive intellectual and management skills (Guillermo Madrazo), and the confidence and political and economic support of the municipal administration. It should be noted that this museum has a longer history, since it was originally private when, in the first decade of the 20th century, the autodidactic goldsmith Dámaso Arce began to build up a private collection in his home.
In 1918 he opened his first exhibition, where he included a Sala de Indios (Hall of Native Indians), and in 1923 he officially inaugurated the Museo Hispano Americano. Over the years it gained in popularity and prestige, partly driven by the tenacity of its founder who, like other collectors of the time, integrated networks for the exchange of information and pieces (Podgorny & Lopes, 2008;García, 2011). In retrospect, Dámaso Arce had no official municipal support, as it took 20 years after his death for the family to manage to bring the collection into the state domain (Chaparro, 2017). More than 50 years after its official creation as meda and a century after its initial creation, this state support has not been present again, which is why today the museum is closed to the public, without a building of its own and with a collection on the brink of neglect.
In 2009 a permit was obtained granting access to the museum's reserves, with the goal of making a diagnosis of the state of conservation of the collections within the framework of the first author's investigations (without any connection to the municipality).
It is worth remembering that, at that time, the meda headquarters were located on the first floor of a large mansion on San Martín street, a few meters from the city of Olavarría's central square, also the headquarters for the homonymous municipality. As a result of this diagnosis, an analysis on the condition of the museum was presented to the municipal authorities, which included recommendations and proposals for intervention. Amongst the findings, these were the main problems identified: the unstable or moving wooden floor in the exhibition hall, the general state of abandonment of the collection storage and the lack of an updated inventory.
According to the last updates (in 1987 and 1988), the collection has 589 pieces, few of which emanate from Dámaso Arce's time.
The objects come from various regions of the country, correspond to different periods, and are made from a great diversity of materials, including bone, textiles, leather, rock, ceramics, metal, and shell. Amongst the items identified were a number of lithic pieces and more than 300 megafauna fossils - the result of fortuitous discoveries by neighbours or recoveries from quarries in the district - none of which contain dates of entry to the collection, or which were added in draft form after the initial inventory had been carried out (Chaparro, 2017).
Human skeletal remains were also identified and will be described later. In 2014, five years after the diagnostic report was delivered, the authorities decided to dismantle the exhibition and move the complete collection to the Bioparque Zoológico La Máxima (also a municipal facility) however, it is not known if the report had any impact on this decision. A space was conditioned within the zoo to serve as storage that would meet minimum standards: humidity and temperature control, indirect artificial lighting, metal shelving, appropriate packaging, and the digitalization of the collections began. However, these conditions were only maintained until 2017, when an electrical malfunction damaged the facility and all maintenance ceased afterwards. At the same time, the same municipal administration accepted a proposal for the conditioning of the indigenous human remains and granted a subsidy for its implementation (Chaparro, 2017).
This summary gives an account of the complex picture presented by meda. In general terms, it can be said that a series of irregularities linked to the management of the institution by some municipal officials (former and current) occurred and continue to occur to this day. Three factors associated with their actions influenced the decline and closure of the institution to the public: ignorance of the patrimonial value of the collection, a lack of interest in updating the national and provincial legislation that would have protected it, and a lack of awareness of the international regulations and professional recommendations that should have been applied over time. These actions, and a range of others carried out without proper planning (such as removals or loans of pieces), arbitrarily (lack of personnel) or by chance (floods), affected at various times - and continue to directly affect - the integrity of the collections.
AND STUDY UNDERTAKEN
As detailed above, for collections of objects to be properly preserved and valued, they must have the most comprehensive and reliable supporting information possible; the same applies to human skeletal remains. In the diagnosis carried out at meda, a human skull with cultural deformation and a toddler in an urn were identified during the revision of the inventory (from 1988 to 1989), both originating from the northwest of Argentina. In addition to these, sets of skeletal remains that were kept in various boxes and were not included in that inventory were identified in the collection. It should be noted that the two communities and indigenous people of Olavarría were informed and consulted prior to conducting these studies.
Based on these irregularities, a general description of the contents of each box was made in the first place. This included: identification of whether the contents corresponded to human remains; evaluation of whether they had some kind of label or numbering indicating that they were already part of the inventory; determination of sex and age; analysis of the state of preservation and bone integrity; and photographic records (Figure 1). Secondly, the iiao carried out a new documentary survey on the origin of the remains. Until about 20 years ago, meda and the iiao published reports on the research and exchanges that were taking place, leading to the suspicion that some of these remains could be associated with archaeological excavations from earlier periods. In fact, by interweaving these two sources of data, the possible origin of some (two) of the indigenous remains found in the meda storage was identified as the archaeological excavations carried out by its first director (Madrazo, 1966, 1969). Likewise, in order to corroborate, or obtain a closer approximation of, the true origin of the bone remains (one of the necessary legal requirements), it was advised to carry out a chemical study of stable oxygen isotopes (δ18O). Oxygen molecules in human tissues are derived from ingested water, diet, and atmospheric O2 levels. The stable isotope values of δ18O in the water consumed (both directly drunk and obtained through food consumption) are translated in a linear way to the tissues of their consumers (Ehleringer et al., 2008; Longinelli, 1984), so they are useful for studies of population movements and regional provenance. Seven analyses were performed on bone and tooth samples, the results of which are being processed at the University of Wyoming's Stable Isotope Laboratory (usa).

The contents of the boxes containing human remains are summarized as follows:
- Human bones, superficial collection; internal label with possible provenance to be corroborated: San Blas. The box also contains 9 skeletal elements of fauna (left in a separate bag).
- One individual: a baby with an estimated age of approximately 0 to 1 month; 105 elements are present; label written on the right parietal: 137. By association with a meda funeral urn and documentation, its probable origin is the Alfarcito archaeological site (Tilcara, Jujuy), site AR VI, excavated and published in Madrazo (1969); to be corroborated with isotopic studies.
- One individual: possibly all the bone elements correspond to the same person; label written on the bone: 194; an adult, possibly male, with cranial deformation.
- One individual, 154 elements: a fragmented skull with reddish coloration; label on wood-pulp paper: skull without provenance.
- One individual: a mummified left foot; it is not possible to know whether the elements correspond to the same person. The box also contained 12 bone elements of indeterminate fauna, which were separated.
- One individual: a possibly female young adult; old label 202.
- One individual: a possibly male adult with cranial deformation, with preserved remnants of desiccated soft tissue; it has two labels, 201 and 119.
FINAL COMMENTS
As institutions of public interest, museums have the duty of safeguarding heritage and are obliged to follow best management practices. On the basis of these principles, meda does not conform to the concepts of legitimate ownership and documentation of collections, especially regarding sensitive cultural materials. This is compounded by the failure to comply with National Law 25,517 on indigenous human remains. In order to meet these requirements, basic descriptive morphological studies were carried out to correctly identify the remains, together with bibliographic research that made it possible to establish the legitimate ownership of some of the pieces: the date and form of their entry into meda. Finally, it was decided to complement the samples with chemical studies that would allow approximate determinations of their provenance, in order to gather the necessary documentation and evaluate the origin of the remains, which could be key in the face of possible requests for reports from indigenous peoples or communities and claims for restitution.
Laboratorio de Ecología Evolutiva Humana (leeh)
Universidad Nacional del Centro de la Provincia de Buenos Aires | 2020-12-27T13:04:18.278Z | 2020-09-21T00:00:00.000 | {
"year": 2020,
"sha1": "c3d50eca73d3c5f2bb236d4af75dfb2528882f8a",
"oa_license": "CCBYNC",
"oa_url": "https://revistaintervencion.inah.gob.mx/index.php/intervencion/article/download/6318/7791",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "619c8f341248703cb4d65830deb82d8330992f85",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Political Science"
]
} |
510007 | pes2o/s2orc | v3-fos-license | Asymmetry of the Structural Brain Connectome in Healthy Older Adults
Background: It is now possible to map neural connections in vivo across the whole brain (i.e., the brain connectome). This is a promising development in neuroscience since many health and disease processes are believed to arise from the architecture of neural networks. Objective: To describe the normal range of hemispheric asymmetry in structural connectivity in healthy older adults. Materials and Methods: We obtained high-resolution structural magnetic resonance images (MRI) from 17 healthy older adults. For each subject, the brain connectome was reconstructed by parcellating the probabilistic map of gray matter into anatomically defined regions of interest (ROIs). White matter fiber tractography was reconstructed from diffusion tensor imaging, and streamlines connecting gray matter ROIs were computed. Asymmetry indices were calculated for ROI connectivity (the sum of connection weights of each cortical ROI) and for regional white matter links. All asymmetry measures were compared to a normal distribution with mean = 0 through one-sample t-tests. Results: Leftward cortical ROI asymmetry was observed in medial temporal, dorsolateral frontal, and occipital regions. Rightward cortical ROI asymmetry was observed in middle temporal and orbito-frontal regions. Link-wise asymmetry revealed stronger connections in the left hemisphere between the medial temporal, anterior, and posterior peri-Sylvian and occipito-temporal regions. Rightward link asymmetry was observed in lateral temporal, parietal, and dorsolateral frontal connections. Conclusion: We postulate that asymmetry of specific connections may be related to functional hemispheric organization. This study may provide reference for future studies evaluating the architecture of the connectome in health and disease processes in older individuals.
INTRODUCTION
With recent advances in structural neuroimaging, it is now possible to track medium and large-scale pathways of white matter fibers and to construct the map of neural connectivity across the entire brain (the brain connectome) (1,2). The brain connectome constitutes a promising development in neuropsychiatry since many physiological and pathological processes are believed to affect the architecture of neural networks (3-6). For example, normal cognitive development spanning from childhood to senior years is traditionally believed to be associated with maturation of neural networks supporting cognitive domains such as attention (7), memory (8), language (9), and executive function (10). Similarly, neurological diseases including epilepsy and dementia are associated with pathological rearrangements or impoverishment of normal networks at a systems level (11,12). Furthermore, psychiatric diseases such as schizophrenia (13), bipolar disorder (14), and addiction (15) are related to reinforcements of pathological networks.
In order to better understand how different biological processes affect the normal connectome, it is important to accurately characterize the connectome organization in healthy individuals. Specifically, it is important to define the degree of individual variability and hemispheric asymmetry that can be expected from the normal population.
Since hemispheric asymmetry is abnormal in neuropsychiatric diseases such as schizophrenia (16), bipolar disorder (17,18), and depression (19), in this study we aimed to describe the patterns of hemispheric asymmetry and individual variability of the brain connectome obtained from a cohort of healthy senior individuals. We employed high-resolution magnetic resonance images (MRI) to reconstruct structural brain connectivity based upon white matter pathways linking anatomically defined cortical regions of interest (ROIs). We aimed to describe the group-wise distribution of asymmetries involving cortical connectivity and white matter pathways.
SUBJECTS
We studied 17 right-handed healthy subjects (mean age 53 years, SD = 7 years, range = 40-76) who were recruited from the local community. None of the subjects had a history of neurological, psychiatric, or chronic medical illnesses. All subjects signed an informed consent form to participate in this study. The Institutional Review Board at the University of South Carolina approved this study.
IMAGE ACQUISITION
All subjects underwent MRI scanning at a 3T Siemens Trio equipped with a 12-channel head coil located at the University of South Carolina, yielding: (1)
IMAGE PRE-PROCESSING
The construction of the connectome involved two parallel pre-processing steps, namely the segmentation of the cerebral cortex into multiple anatomical ROIs and the reconstruction of white matter fibers. These steps are explained below:
SEGMENTATION OF THE CEREBRAL CORTEX
T1-weighted MR images were converted into NIfTI format utilizing the dcm2nii tool from the MRIcron software package (20). Images in native space were non-linearly normalized into standard MNI space using the Clinical Toolbox (21) employing unified segmentation-normalization routines as part of the software Statistical Parametric Mapping (SPM8). Of note, the Clinical Toolbox was particularly designed to accurately quantify tissue volumes in seniors and older adults (21).
This step yielded probabilistic maps of gray and white matter in MNI space.
Next, the base b = 0 T2-weighted dMRI volume was linearly transformed to standard space utilizing a boundary-based registration approach (22). This step was performed using FMRIB's Linear Image Registration Tool (FLIRT), as part of FMRIB Software Library (FSL) (23). The registration parameters were then used to transform the Automated Anatomical Labeling (AAL) atlas (24) and the white and gray matter probabilistic maps onto the dMRI space. Once in dMRI space, a map of cortical regions segmented according to AAL was obtained by overlaying the registered AAL atlas onto the registered probabilistic gray matter map. The intersection between these images (including only voxels with a probability greater than 50% of being gray matter) represented the segmented cortical map. A list describing all ROIs used in this study can be observed in Table A1 in Appendix.
WHITE MATTER FIBER RECONSTRUCTION
Extraction of diffusion gradients was performed with dcm2nii (20). The dMRI volumes were aligned to the b = 0 dMRI image using the FSL FLIRT tool (23). In diffusion space, whole brain tractography was reconstructed with the software Diffusion Toolkit (25) according to the following parameters: (1) angle threshold = 45°, (2) inclusion mask derived from the average of diffusion weighted signal and from the white matter probabilistic map registered to dMRI space, (3) FACT propagation algorithm, (4) spline filter.
CALCULATION OF THE CONNECTOME
For each subject, the path of each tractography streamline was assessed. All streamlines were seeded in white matter, and the end-points of each streamline were computed. Streamlines with end-points within ROIs were counted as links between those ROIs; streamlines with end-points outside ROIs were discarded. After all streamlines were assessed, the result was a weighted connectivity matrix A, where the entry A_ij represented the number of streamlines connecting regions i and j (i.e., the weighted link between i and j). Note that only direct links between regions i and j were included in the link A_ij.
Finally, each link weight was corrected based on the surface of the connected ROIs and the distance between the ROIs, as proposed by Hagmann et al., where the link weight is inversely proportional to the sum of the linked ROI surfaces and fiber length, in order to account for tractography bias related to size of connected ROIs and distance traveled by the streamline (2).
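A minimal Python sketch of this counting-and-correction step may help make the matrix construction concrete. It assumes streamline end-points have already been mapped to ROI indices; the input arrays (endpoint_labels, lengths, surfaces) and the exact form of the normalization are illustrative simplifications, not the authors' implementation:

import numpy as np

def build_connectome(endpoint_labels, lengths, surfaces, n_rois=90):
    """Count streamlines linking ROI pairs, then down-weight links by
    ROI surface area and mean fiber length (Hagmann-style correction).

    endpoint_labels : (n_streamlines, 2) int array, ROI index of each
                      end-point, -1 if the end-point fell outside all ROIs
    lengths         : (n_streamlines,) streamline lengths in mm
    surfaces        : (n_rois,) ROI surface areas
    """
    counts = np.zeros((n_rois, n_rois))
    total_len = np.zeros((n_rois, n_rois))
    for (i, j), fiber_len in zip(endpoint_labels, lengths):
        if i < 0 or j < 0 or i == j:   # outside any ROI, or a self-link
            continue
        counts[i, j] += 1
        counts[j, i] += 1              # links are undirected
        total_len[i, j] += fiber_len
        total_len[j, i] += fiber_len
    mean_len = np.divide(total_len, counts,
                         out=np.ones_like(counts), where=counts > 0)
    # weight inversely proportional to summed ROI surfaces and fiber length
    denom = (surfaces[:, None] + surfaces[None, :]) * mean_len
    return np.divide(counts, denom,
                     out=np.zeros_like(counts), where=denom > 0)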
Asymmetry was evaluated for ROIs and for links
Region-of-interest (ROI) asymmetry was calculated by assessing the sum of link weights connecting an ROI, in comparison with the homologous ROI in the contralateral hemisphere. The connectivity of each ROI was computed without discrimination regarding the opposite end of the streamline. For example, when computing the connectivity of the left hippocampus, all possible connections of the left hippocampus were computed, including connections to ROIs in the ipsilateral and contralateral hemispheres. Connections of the left hippocampus to itself or to the contralateral hippocampus were discarded.
For each ROI, cortical connectivity asymmetry was calculated according to the following asymmetry index (AI) = (R - L)/[(R + L)/2], where L represents the connectivity of the ROI in the left hemisphere and R represents the connectivity of the homologous ROI in the right hemisphere. A one-sample t-test was performed to evaluate whether the distribution of ROI asymmetries across all subjects was statistically different from a distribution with mean = 0. This step was performed for each ROI. Note that ROI asymmetry represents the sum of connections to an ROI, irrespective of which other ROI is being linked to it. Thus, ROI asymmetry should be interpreted in the context of link-wise asymmetry.
Link-wise hemispheric asymmetry was calculated by assessing the difference in weight for each link, according to the same asymmetry index AI = (R - L)/[(R + L)/2], where L represents the link between ROIs within the left hemisphere and R represents the link between the homologous ROIs within the right hemisphere. Note that inter-hemispheric connections were excluded from this calculation. Cerebellar links were also excluded; only supratentorial links within the same hemisphere were assessed. A one-sample t-test was performed to evaluate whether the distribution of asymmetries for each link across subjects was statistically different from a distribution with mean = 0.
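As a concrete illustration, the asymmetry index and the one-sample t-tests described above can be computed in a few lines of Python; the sketch below uses synthetic connectivity values in place of the real subject data:

import numpy as np
from scipy import stats

def asymmetry_index(right, left):
    # AI = (R - L) / [(R + L) / 2]
    return (right - left) / ((right + left) / 2.0)

# synthetic stand-in data: rows = 17 subjects, columns = 45 ROIs (or links)
rng = np.random.default_rng(0)
left_conn = rng.uniform(50.0, 100.0, size=(17, 45))
right_conn = left_conn + rng.normal(0.0, 5.0, size=(17, 45))

ai = asymmetry_index(right_conn, left_conn)          # shape (17, 45)
t_vals, p_vals = stats.ttest_1samp(ai, popmean=0.0, axis=0)

# with this sign convention, a significantly negative mean AI indicates
# leftward asymmetry and a positive mean AI indicates rightward asymmetry
significant = np.flatnonzero(p_vals < 0.05)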
RESULTS
The average connectivity matrix is demonstrated in Figure 1. The ROIs are numbered in accordance with the glossary from Table A1 in Appendix. Briefly, regions 1-45 represent ROIs located in the left hemisphere, whilst regions 46-90 represent ROIs located in the right hemisphere. As such, within the average connectivity matrix with 90 × 90 entries, the upper left quadrant demonstrates links between ROIs in the left hemisphere, and the lower right quadrant demonstrates links between ROIs within the right hemisphere. The right upper and the left lower quadrants of the connectivity matrix represent reciprocal left to right connections. Since connections are not directed (i.e., the strength of connectivity from region i to j is the same as the connectivity from region j to i), the matrix is symmetrical along its main diagonal. As expected, links within the same hemisphere exhibited a higher weight compared with connectivity links between different hemispheres.
The average link-wise hemispheric asymmetry is also demonstrated in Figure 1. This matrix only includes 45 × 45 entries, where each cell represents the asymmetry index for each link. In order to avoid false estimations of asymmetry in links that were tracked only in a few subjects, we only included links that were tracked in greater than 75% of subjects (i.e., at least 13/17 subjects). Each matrix cell represents the asymmetry index for the connection between the ROI listed in the column and the ipsilateral ROI listed in the row. The ROIs are numbered in accordance with Table A1 in Appendix (from 1 to 45, regardless of side).
The distribution of ROI asymmetries is shown in Figure 2. The box-plot demonstrates the range of asymmetries across all subjects for each ROI. As expected, there were no regions of extreme asymmetry. A one-sample t -test revealed that among all possible 45 ROIs, a significant leftward asymmetry was noted on the inferior occipital region, fusiform gyrus, lingual gyrus, parahippocampal gyrus, amygdala, inferior frontal operculum, superior frontal gyrus, and mid cingulate gyrus. Conversely, a rightward asymmetry was noted for the middle temporal gyrus and superior frontal orbital region. These results are shown in Table 1.
Link-wise asymmetry was observed toward both hemispheres. A significant leftward asymmetry was noted on the following reciprocal connections: amygdala to parahippocampal gyrus; middle to inferior occipital regions; fusiform to lingual gyri; fusiform to inferior occipital gyrus; insula to inferior frontal opercular region; fusiform to parahippocampal gyrus; precuneus to lingual gyrus; angular to superior parietal region; precuneus to mid cingulate gyrus. Conversely, a significant rightward asymmetry was noted on the following reciprocal connections: superior temporal gyrus to rolandic opercular region; middle temporal gyrus to middle occipital region; inferior to superior parietal regions; insula to inferior frontal triangularis region; precuneus to superior parietal region. These results are summarized in Table 2.
These results are demonstrated in Figure 1, which shows the distribution of T scores and p values for all links, represented in 45 × 45 matrices, where each matrix cell demonstrates the T value or the p value of the asymmetry distribution for the connection between the row ROI and the column ROI. Figure 3 demonstrates the anatomical distribution of links with an absolute mean asymmetry higher than 0.5. The three-dimensional anatomical reconstruction of regional links was defined based on an in-house developed atlas of anatomical connectivity involving all 90 ROIs used in this study. The location of travel of all streamlines connecting each possible pair of ROIs was defined based on the spatial distribution of center points (or "centroids") of serial transverse sections across the white matter streamlines corresponding to each link, through an in-house modified version of the methods described by Garyfallidis et al. (26). The centroids for each link were connected to define the main pathway of streamline travel. This step was repeated for all possible links. The utility of this approach was exclusively for anatomical visualization of the results, as demonstrated in Figure 3.
DISCUSSION
In this study, we investigated the individual variability of hemispheric asymmetry in structural connectivity in healthy senior individuals. By reconstructing the structural connectome from each subject, and by assessing the distribution of asymmetry indices related to cortical ROI connectivity and link weight, we observed that, within a sample composed of healthy older adults, there was mild but noticeable hemispheric asymmetry in structural connectivity. We observed a more prominent leftward asymmetry in cortical connectivity, i.e., a larger number of regions demonstrated a higher degree of connectivity in the left hemisphere. Specifically, cortical ROIs located in the occipital lobe, medial temporal, dorsolateral and medial frontal lobe and cingulate exhibited a higher weight of connectivity in the left hemisphere. Conversely, fewer cortical regions demonstrated a rightward asymmetry, with the middle temporal gyrus and the orbito-frontal regions exhibiting a significantly higher cortical connectivity on the right hemisphere.
In turn, we also observed regional hemispheric asymmetry in relationship with the strength of connectivity between specific ROIs. In accordance with the previous observation about global cortical connectivity, a leftward asymmetry was also more commonly observed among links. Interestingly, a leftward asymmetry was observed in peri-Sylvian and medial temporal-occipital regions. Conversely, a rightward asymmetry was noted in lateral temporal-frontal-occipital and parietal links. While we did not test the relationship between these links and cognitive performance, it is possible to speculate that some lateral asymmetry may be related to functional specialization of some of these connections. For instance, leftward asymmetry may be associated with dominant-hemisphere language processing involving verbal memory (parahippocampal-amygdala connections) (27), phonological processing (insula-frontal operculum connections) (28), and semantic retrieval (angular gyrus-superior parietal connections) (29). Conversely, a rightward asymmetry may be observed in networks associated with visuospatial processing (intraparietal connections, precuneus-parietal connections) (30).
It should be noted that we adopted a liberal statistical threshold (i.e., the level of statistical significance from the one-sample t-tests was not corrected for multiple comparisons). We adopted a liberal threshold since we expected that the degree of asymmetry exhibited by our population would be mild, and, given the number of multiple comparisons, a more stringent threshold would preclude the evaluation of the locations with a higher degree of asymmetry. Nonetheless, given our sample size, it is possible that some of the observed asymmetries are related to sample bias and constitute false positives. For this reason, we recommend interpreting the results of this manuscript in this context. The asymmetry index is possibly a better representation of the magnitude of asymmetry than an over-emphasis on links or ROIs with p < 0.05.
The results reported in this study should help the contextual evaluation of other connectome studies applied to health and disease. We demonstrated the range of asymmetry in a small cohort of normal older adults with the purpose of providing a reference for future studies evaluating processes that affect neural network organization. Thus, future studies should also be interpreted with special attention to specific characteristics of the demographics from the population studied. | 2016-05-12T22:15:10.714Z | 2014-01-09T00:00:00.000 | {
"year": 2013,
"sha1": "a4b9566357aed7c31574ba503435d0b658f1299c",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyt.2013.00186/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a4b9566357aed7c31574ba503435d0b658f1299c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
235794997 | pes2o/s2orc | v3-fos-license | Provider-centric Allocation of Drone Swarm Services
We propose a novel framework for the allocation of drone swarms for delivery services, known as Swarm-based Drone-as-a-Service (SDaaS). The allocation framework ensures minimum cost (i.e., maximum profit) to drone swarm providers while meeting the time requirements of service consumers. The constraints in the delivery environment (e.g., limited recharging pads) are taken into consideration. We propose three algorithms to select the best allocation of drone swarms given a set of requests from multiple consumers. We conduct a set of experiments to evaluate and compare the efficiency of these algorithms considering the provider's profit, feasibility, request fulfilment, and drone utilization level.
I. INTRODUCTION
Drone-as-a-Service (DaaS) is a concept used to describe the use of the service paradigm to model services offered by unmanned aerial vehicles [1]. These services include, but are not limited to, agriculture [2], geographic mapping [3], and package delivery [4]. Of particular interest is the application of DaaS in delivery. Drone delivery has recently seen a boost in interest from industry [5] and research [6] as the need for contact-less deliveries increases in times of pandemics. Governments are relaxing delivery restrictions and regulations, which accelerates drone delivery growth [5]. Swarm-based DaaS (SDaaS) augments the concept to describe drone services that are delivered by a swarm of drones instead of single drones [7]. SDaaS services are widely used in search and rescue [8], sky show entertainment [9], airborne communication networks [10], and delivery [11].
Swarms of drones in delivery are able to overcome single-drone delivery limitations. A swarm may be used for the timely delivery of heavier and/or multiple packages which go beyond the capability of a single drone [7]. A swarm may also travel longer distances, as the payload may be distributed amongst the drones, which in return reduces the pressure on the battery [11]. In addition, recent USA Federal Aviation Administration (FAA) drone flying regulations approved the use of small drones for delivery (payload < 2.5 kg).1 Larger drones are therefore not regulated, as they are neither practical nor safe in a city, which raises the need for drone swarms in drone delivery. Swarm-based Drone-as-a-Service for delivery is defined as the use of a swarm of drones to carry multiple packages in a line-of-sight skyway segment [7]. The skyway segments connect nodes that may be the source, destination, or intermediate nodes representing building rooftops equipped with recharging stations. The nodes and segments make up an SDaaS skyway network. SDaaS delivery comprises three main components. First, delivery swarms must be allocated to delivery requests. Second, the best path must be composed from the source to the destination that optimizes the delivery. Last, in case of failure, recovery plans must be implemented. In this paper, we focus on the first component (allocation), taking into consideration its effect on the second component (composition). Previous works focused on the composition assuming the existence of pre-allocated swarms [7].
1 https://www.faa.gov/uas/advanced operations/package delivery
Swarm-based drone delivery services lend themselves quite naturally to being modelled using the service paradigm because they map to the key ingredients of the service concept, i.e., functional and non-functional attributes [7]. The function of an SDaaS is the delivery of packages from a source to a destination. The non-functional aspects, or Quality of Service (QoS), include the delivery time, energy consumed, cost, etc. We address the allocation of SDaaS services from a provider perspective, i.e. optimizing the QoS for the benefit of the provider. We focus on optimizing the profit through the allocation of swarm members to consumers' requests. The allocation of the swarm members is a core part of swarm-based services, as it depends on the consumers' requests and directly affects the optimal composition of SDaaS services.
There are several challenges that require addressing when allocating swarms for delivery requests. We assume that an SDaaS service provider owns a fleet of n drones that are used to make up multiple swarms to serve multiple requests. The swarms need to be allocated optimally to serve delivery requests bounded by strict time windows; a late or early delivery is considered unsuccessful. In addition, swarms may travel longer than a request's time window for certain delivery missions (e.g. long distances), causing overlapping travel times that may affect delivery requests in the next time window. Furthermore, congestion at recharging nodes may occur if multiple swarms compose the same path simultaneously. Given the aforementioned challenges, we propose to allocate SDaaS swarms to serve consumers' requests optimally from a provider perspective. This work is the first to introduce the allocation and re-allocation problem of homogeneous drone delivery swarms to a set of time-constrained requests using a service-oriented approach.
We summarize our main contributions as follows:
• Define a new set of SDaaS service allocation constraints.
• Propose a modified A*, congestion-aware algorithm to compose SDaaS services.
• Propose three SDaaS allocation algorithms optimizing profit for providers.
A. Motivating Scenario
Let's assume that, within a day, several medical facilities request multiple packages from a supplier to be delivered together. Hence, multiple swarms are needed to deliver these packages to their respective destinations. Let's also assume that each facility selects a time window for package arrival. As shown in Fig 1, requested packages may be of different weights. We assume that the maximum package weight does not exceed the drones' payload limitations. The weight of the packages directly affects the energy consumption of the drones [7], which in turn affects the distance the swarm can travel before needing to recharge. On the other hand, the medical supplier owns a finite set of drones that are used to form different swarms to serve the requests. The drones are homogeneous, i.e. of the same type; therefore, they have the same payload capacities, battery capacities, and power consumption rates. These drones traverse a line-of-sight skyway network (following drone flight regulations 2) until they reach the destinations. The intermediate nodes are buildings' rooftops supplied with multiple recharging pads; each node contains a different number of recharging pads. We assume the environment is deterministic, i.e. we know beforehand the package weights, the availability of recharging pads, the battery capacities, etc. As shown in the figure, if the swarm serving destination 26 spends more than an hour to deliver and return to the source, the delivery to destination 35 will be affected, as it needs to re-use a drone from the first swarm to deliver 4 packages.
Given r requests per day, how should the homogeneous swarms be allocated from a finite fleet of n drones to serve the different requests? What is the maximum number of requests a fleet of size n could undertake within a day? How should overlapping travel times be dealt with? How should swarms be allocated to the requests that are most profitable for providers? How can the swarms be re-used and scheduled to maximize fleet utilization? Furthermore, how should the different swarms' paths be composed to reduce congestion in the network? Given all the aforementioned constraints, and without loss of generality, the system should ensure the arrival of the packages at their destinations within consumers' specified time windows. Future work will extend this work to address environmental issues and uncertainties in the delivery environment.
2 https://www.casa.gov.au/drones/rules/drone-safety-rules
II. RELATED WORK
Swarming can be defined as the collective behavior of a group of entities to achieve a common goal [12]. Examples in nature include bird flocks, fish shoals, and honey bees [13]. Similarly, drone swarms can be defined as a set of drones acting as a single entity to achieve a common goal [14]. Drone swarms are able to overcome single drones limitations and provide faster and more efficient services [15]. In this regard, drone swarms have enormous potential across a broad spectrum of civilian applications. The recent literature spotlights several applications that are already being tested and others envisioned for the future. This includes search and rescue [8], weather monitoring [16], and delivery [12].
In delivery, the majority of the literature refers to drone swarms as a set of individually operated drones managed to serve multiple independent deliveries [17] [18]. However, we define a swarm as a set of drones acting as a single entity and travelling together to deliver multiple packages to a single destination. In this respect, sequential and parallel compositions of drone swarm delivery services were proposed [7]. A sequential composition occurs when a swarm travels together, bounded by a space window, from a source to a destination. A parallel composition occurs with dynamic swarms capable of splitting and merging between the source and the destination. In addition, the effect of swarm formation and shape on delivery service composition was explored [11]. However, to the best of our knowledge, no literature has discussed the optimal allocation of drone swarms to serve multiple requests in a delivery context.
Multi-robot task allocation (MRTA) is an area concerned with selecting the best robot, out of a set of robots, and allocating it to the most suitable task, out of a set of tasks, to optimize overall system performance [19]. This area deals with both homogeneous and heterogeneous robots. Homogeneous robots are robots within a group with the same capabilities [20]; heterogeneous robots are robots within a group with different capabilities [21]. For example, homogeneous drones are drones with the same speeds, sizes, or payload and battery capacities. Tasks that require multiple robots to be executed are referred to as multi-robot tasks [19]. In this regard, to the best of our knowledge, the allocation of homogeneous drone swarms to a set of constrained tasks has not yet been explored. In addition, the re-allocation and scheduling of swarms to perform multiple tasks has not been addressed. Thus, this work is the first to introduce the allocation and re-allocation problem of homogeneous swarms (drone delivery swarms) to a set of time-constrained tasks (consumers' requests) using a service-oriented approach.
The service paradigm leverages drone technology to effectively provision drone-based services [1]. The service paradigm presents a higher level of abstraction that provides congruous solutions to real-world problems [22]. A single-drone delivery services framework was presented as Drone-as-a-Service (DaaS) [4]. This framework was later expanded to model swarm-based services, called Swarm-based Drone-as-a-Service (SDaaS), as they map to the key ingredients of the service paradigm, i.e., functional and non-functional attributes [7]. Hence, this work extends the SDaaS framework to cover the key element of allocating the service swarm members to optimize the service's QoS from a provider perspective. This work also considers SDaaS service composition and the delivery delays that may be caused by congestion and the concurrent use of the skyway network medium.
III. SWARM-BASED DRONE-AS-A-SERVICE MODEL
In this section, a service model for swarm-based delivery services is presented. We abstract each swarm travelling on a skyway between two nodes as an SDaaS (see Fig. 1). We formally define a Swarm-based Drone-as-a-Service (SDaaS). We then define an SDaaS provider and a consumer's request. Later, we discuss the constraints surrounding the allocation of SDaaS services.
Definition 1: Swarm-based Drone-as-a-Service (SDaaS). An SDaaS is defined as a set of drones, i.e. more than one, carrying packages and travelling in a skyway segment. It is represented as a tuple <SDaaS_id, S, F>, where:
• SDaaS_id is a unique service identifier.
• S is the swarm travelling in the SDaaS. S consists of D, the set of drones forming S, presented as a tuple <d_1, d_2, ..., d_m>. S also contains the swarm's properties, including the current battery levels of every d in D, <b_1, b_2, ..., b_m>; the payloads every d in D is carrying, <p_1, p_2, ..., p_m>; and the current node N the swarm S is at.
• F describes the delivery function of a swarm on a skyway segment between two nodes, A and B. F consists of the segment distance dist, travel time tt, charging time ct, and waiting time wt incurred when the recharging pads at node B are not enough to serve D simultaneously.
Definition 2: SDaaS Service Provider. A provider is represented as a tuple <D, α>, where:
• D is the finite set of n drones owned by the provider. D is a tuple <d_1, d_2, ..., d_n>, and every drone d_i consists of a tuple <b, p, s>, where b is the maximum battery capacity of the drone, p is the maximum payload capacity of the drone, and s is the maximum speed of the drone.
• α is the provider's location, i.e. the source node.
Definition 3: SDaaS Request. A request is a tuple <R_id, β, P, T>, where:
• R_id is the request's unique identifier.
• β is the request destination node.
• P are the weights of the packages requested, where P is <p_1, p_2, ..., p_m>.
• T is the time window of the expected delivery, represented as a tuple <st, et>, where st is the start time of the requested delivery window and et is the end time.
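The three tuples above translate directly into simple data structures. The following Python dataclasses are one possible rendering of the model; the field names are illustrative choices, not prescribed by the paper:

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Drone:                 # <b, p, s>
    battery_capacity: float  # b: maximum battery capacity
    payload_capacity: float  # p: maximum payload capacity
    max_speed: float         # s: maximum speed

@dataclass
class Swarm:                 # S
    drones: List[Drone]          # D = <d_1, ..., d_m>
    battery_levels: List[float]  # current <b_1, ..., b_m>
    payloads: List[float]        # carried <p_1, ..., p_m>
    node: int                    # current node N

@dataclass
class SDaaS:                 # <SDaaS_id, S, F>
    sdaas_id: str
    swarm: Swarm
    dist: float          # segment distance
    travel_time: float   # tt
    charge_time: float   # ct
    wait_time: float     # wt

@dataclass
class Request:               # <R_id, beta, P, T>
    r_id: str
    destination: int               # beta
    package_weights: List[float]   # P
    window: Tuple[float, float]    # T = (st, et)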
Fig. 2. Allocation and Scheduling Constraints in SDaaS
We address the following constraints surrounding swarm-based service allocation:
• Strict time of delivery: A delivery request is strictly bounded by a time window for package arrival. If a package arrives before the window start time or after the window end time, the delivery is considered unsuccessful. If a package arrives earlier than scheduled, the consumer might not be at the destination to receive it. Similarly, if a package arrives late, the consumer will not be satisfied with the service.
• Time-overlapped requests: There might be instances where multiple consumers' requests are to be delivered within the same time window. Since the provider has a finite number of n drones, some overlapped requests may not be served. This is the case when the number of packages exceeds the number of available drones; the provider then needs to select the most profitable requests and reject the least profitable ones. Fig. 2 depicts this constraint, where R1-R3 overlap and only two requests can be served at a time since the provider owns only 5 drones.
• Inter-dependency of delivery requests: The finite drones need to be allocated to the requests that will maximize the provider's profit. Hence, the drones may be allocated multiple times to serve multiple requests. However, selecting the most profitable request within a time window may lead to a non-optimal overall profit (R1 in Fig. 2): the long Round Trip Time (RTT) of the R1 swarm forces opting out of R4, which is more optimal. In addition, selecting R4 allows the selection of R3, which would not have been possible had R1 been selected. Hence, the allocation should consider time-inter-dependent requests to maximize the provider's profit throughout the full day.
• Skyway network contentions: Congestion at nodes may occur, since concurrent requests mean that multiple swarms will be using the skyway network at the same time, causing increased delivery times. Hence, SDaaS service composition and path selection need to be addressed to reduce the number of sequential charges due to congestion.
• SDaaS environment constraints: In addition to the above, the authors of [7] have defined further constraints in the SDaaS environment. These include the different battery consumption rates within a swarm due to different carried payloads, and the fact that the number of drones in a swarm may exceed the number of available recharging pads at a node, causing sequential charging.
Our aim is to successfully allocate and re-allocate swarms to serve the best set of consumers' requests, from a provider's perspective, given the aforementioned constraints. To the best of our knowledge, this paper is the first to propose the optimal allocation of swarm services to consumers' requests.
IV. SDAAS MEMBERS SELECTION AND ALLOCATION FRAMEWORK
The SDaaS members selection and allocation framework is a provider-centric, profit-driven framework. The framework consists of two main modules, as shown in Fig. 3. The first is the congestion-aware SDaaS services composition (path composition) for every request; the output of this module is the maximum Round Trip Time (RTT) for every request and the profit if the request is allocated and served. The second module uses the maximum RTT produced by module one and allocates the drones to the requests to maximize the utilization of the provider's drones and increase the provider's profit.
A. Congestion-aware SDaaS Composition
This paper focuses on a provider owning a finite set of homogeneous drones, i.e. drones with the same capabilities. We assume that the packages in a request do not exceed the payload capacities of the drones. In addition, a drone can carry one package at a time and serve one request in a single trip; therefore, the number of drones in a swarm serving a request is equal to the number of packages in the request. We also assume a request may consist of at most m packages, i.e. at most m drones form a swarm. The goal of this module is to compute the maximum time taken by a swarm to serve a request and return to the origin, the RTT. Computing the RTT for a request is important in order to re-use the drones for other requests. The RTT is highly affected by the path taken by the swarm to the destination and by its return path. The number of recharging pads, as described earlier, differs at each node; some nodes may have fewer recharging pads than the number of drones in the swarm, so sequential charging may occur, adding to the RTT. In addition, contention may occur at a node if two swarms serving different requests take the same path at the same time, causing congestion. Hence, we propose a modified A*, congestion-aware SDaaS services composition algorithm. The RTT, along with the number of drones used, determines the profit of the provider if the request is served. We assume that the environment is deterministic, i.e. we know the availability of recharging pads considering other providers using the network.
Algorithm 1 Congestion-aware Services Composition Algorithm
Input: S, R. Output: RTT, Profit
1: RTT = 0
2: while S is not at destination and back at the source do
3:   distance_to_destination = Dijkstra(current, destination)
4:   compute energy consumption for every d in S based on R package weights and distance_to_destination
5:   if all d in S can reach destination without intermediate nodes then
...
     select best neighboring node (min travel time and min node time)
16:   S travels to neighboring node
17:   RTT += travel time + charging time + waiting time
18:   end if
19: end while
20: Profit = size(S_D) * RTT * Constant
21: return RTT, Profit
All the drones (S_D) serving a request, fully charged, form a swarm at the source node and traverse the network without dispersing, in a static behaviour [23]. While the swarm is not at the destination, it computes its potential to reach the destination directly, without stopping at intermediate nodes to recharge, using the shortest Dijkstra path [24]. The potential is computed by considering the payload of all drones and its effect on battery consumption. If the swarm can travel directly, the RTT is updated with the travel time tt of the selected segments. If the swarm cannot reach the destination directly, it greedily selects the most optimal neighbouring node. The most optimal neighbour is the one with the least travel time tt, charging time ct, and waiting time wt caused by sequential charging due to the limited number of recharging pads. Since the payloads carried differ between the drones, the energy consumption rates differ, and so do the battery charging times ct; we take the maximum charging time of concurrently recharging drones and add it to the RTT. At every node, we consider the potential for congestion to compute the maximum possible RTT, which will guide the allocation process. We assume that each node is occupied by all the other drones owned by a provider if they are fewer than the maximum swarm size (P_D - S_D < m); otherwise, we consider the node to be occupied by a swarm of maximum size m. We compute the node time, i.e. ct + wt, accordingly. We assume no more than two swarms may select a node at a time. The RTT is updated and the swarm again checks its potential to reach the destination. The process repeats until the swarm is at the destination. Once at the destination, the swarm charges fully and travels back to the source in the same manner. The energy consumption rate on the return trip is much lower than on the outbound trip due to the released payload. The RTT is updated at the source after charging fully. Once the RTT is computed, the profit of the request, if allocated and served, is computed using the RTT and the number of drones serving the request, S_D. The profit is computed per mile per drone, taking the constraints into consideration. Algorithm 1 describes the congestion-aware services composition process.
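A compact Python sketch of the greedy walk in Algorithm 1 follows. It leans on networkx for shortest paths; the energy model inside can_reach_directly, the node_time helper, and PROFIT_CONSTANT are hypothetical stand-ins for details the paper leaves abstract:

import networkx as nx

PROFIT_CONSTANT = 1.0  # illustrative per-mile, per-drone rate (assumed)

def can_reach_directly(G, src, dst, swarm):
    # Stub energy check: every drone needs enough battery for the shortest
    # remaining distance given its payload (consumption model is assumed).
    dist = nx.shortest_path_length(G, src, dst, weight='travel_time')
    return all(d['battery'] >= dist * (1.0 + d['payload']) for d in swarm)

def compose_rtt(G, source, dest, swarm, node_time):
    """Greedy, congestion-aware composition: outbound leg, then return leg.
    node_time(node, swarm) returns charging + waiting time at a node."""
    rtt, current = 0.0, source
    for target in (dest, source):
        while current != target:
            if can_reach_directly(G, current, target, swarm):
                path = nx.shortest_path(G, current, target, weight='travel_time')
                rtt += sum(G[u][v]['travel_time'] for u, v in zip(path, path[1:]))
                current = target
            else:
                # neighbour minimizing travel time + node (charge + wait) time
                nxt = min(G.neighbors(current),
                          key=lambda n: G[current][n]['travel_time'] + node_time(n, swarm))
                rtt += G[current][nxt]['travel_time'] + node_time(nxt, swarm)
                current = nxt
        rtt += node_time(target, swarm)  # full recharge before the next leg
    profit = len(swarm) * rtt * PROFIT_CONSTANT
    return rtt, profit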
B. SDaaS members Allocation and Requests Scheduling
The output of the first module, i.e. the maximum RTT and profit for all requests, is used to allocate the drone swarm members to the most profitable combination of requests. In this section, we propose three methods for the swarm-request allocation. We assume that the requests are received in a batch for the full day. We divide the day into t time windows W. For simplicity, we assume that a drone may be used once in a time window; hence, if a request's RTT overlaps two time windows, we assume that the swarm serving this request is booked for both time windows. This assumption will be relaxed in future work. We need to allocate the swarms to serve the most profitable requests while maximizing the utilization of the finite number of drones that the provider owns.
The swarms-requests allocation problem is part of the multi-robot task allocation family of problems. However, to the best of our knowledge, no previous work has addressed the allocation problem from a service provider perspective, allocating a finite set of robots to time-constrained consumers' requests (tasks) while ensuring the re-use of robots to maximize their utilization. Each drone may serve more than one request at different times. We propose to tackle this problem through three algorithms, namely the time-based greedy algorithm, the request-based greedy algorithm, and the heuristic-based algorithm. Later, we compare the performance of these algorithms against the brute force algorithm. 2) Time-based greedy allocation algorithm: This approach is a slight modification of the request-based greedy algorithm where the request time window is taken into consideration when sorting the requests. Here, all the requests are first sorted by their start times in ascending order. All the requests within a time window are then sorted by their profit in descending order. The rest of the algorithm checks the validity of the request combinations, similarly to Algorithm 2, and computes the total profit, drones utilized, and the set of served and allocated requests.
3) Heuristic-based allocation algorithm: The main limitation of the greedy algorithms is the order in which requests are processed and served; sorting the requests by profit or time does not ensure the most optimal solution. This approach is, hence, a modification of the greedy algorithms that tries to tackle the order limitation. Algorithm 3 depicts the heuristic-based allocation method. In this approach, the unsorted list of requests is looped through considering different allocation start points, resulting in different combinations of requests. Two main sets of arrays and variables are constructed. The first set of arrays are of the size of the requests R, storing the profits (all_profits), drones utilized (all_drones_utilized), and the combinations of served requests (all_served_R) generated by the different starting points (starting request). The second set of arrays and variables are local temporary active arrays that are reconstructed at every starting point (request). These arrays and variables store the total profit (active_total_profit), drones utilized (active_drones_utilized), and the served-requests combination (active_served_R) for a given starting point (request). A nested loop is constructed where the outer loop iterates over all requests R [line 2]. The inner loop, similarly to the greedy algorithms, loops over the requests and checks the valid combination, however starting from the request the outer loop is at (r_id) [line 4]. After every iteration of the inner loop, the variables of the valid served requests are updated in the active set of arrays and variables. After the full iteration of the inner loop, the first set of arrays, of size R, stores the total profit, drones utilized, and the combination of served requests at the index of the starting point (request) (r_id). Once the outer loop finishes, the algorithm finds the combination of requests with the maximum profit and the number of drones utilized in this combination [lines 25-28].
Algorithm 3 (excerpt):
5: if (D_AR_i + used_drones[W_AR_i]) <= P_D then
6:   if overlap(RTT_AR_i, W_AR_i + 1) == True then
7:     if (D_AR_i + used_drones[W_AR_i + 1]) <= P_D then
8:       active_served_R.append(AR_i)
9:       active_total_profit += P_AR_i
...
21: all_profits.append(active_total_profit)
22: all_drones_utilized.append(active_drones_utilized)
23: all_served_R.append(active_served_R)
24: end for
25: total_profit = max(all_profits)
26: ...
Fig. 4 shows an allocation example of the three proposed algorithms with a set of 10 requests (0-9). The bars represent the maximum RTT for each request. In this example, we assume that the provider owns only 6 drones. The drones are used to make up swarms to serve the different requests. As mentioned earlier, we assume that a drone, once used in a time window, may not be used again within the same window. The algorithms result in a set of selected requests that the drones get allocated to. As shown in the figure, the heuristic-based algorithm outperforms the two other methods with a total profit of 272.65 and 7 served requests out of 10. Next, the request-based greedy algorithm served 6 requests with a total profit of 252.9. Finally, the time-based greedy algorithm served 5 requests with a total profit of 211.91. The main downside of the time-based greedy algorithm is the starting time window: if requests served and allocated in time window i use all or most drones and the served requests overlap into the next time window (i + 1), there are chances that more profitable requests in the next time window will not be served.
As shown in Fig. 4, since requests 1 and 3 have been allocated, as they arrive earlier, the more profitable requests 2 and 9 were not served. In the same manner, the request-based greedy algorithm chooses the most profitable request, but there might be a combination of other smaller requests with a higher total profit; as shown in the figure, the algorithm selects request 8 to be served, whereas the total profit of requests 5 and 6 is higher. Since the heuristic-based algorithm overcomes the limitation of request order, it outperforms the two other methods and allocates swarms to serve requests that might individually be less profitable but in total result in more profit and more served requests.
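A condensed Python rendering of the heuristic's rotate-the-start-point idea is shown below. The request schema (a dict with 'windows', 'drones', and 'profit' keys, where 'windows' lists every time window the request's RTT occupies) and the wrap-around iteration order are assumptions made for illustration:

def heuristic_allocate(requests, n_drones, n_windows):
    """Try every request as the greedy starting point; keep the most
    profitable feasible combination found across all starting points."""
    best_profit, best_served = 0.0, []
    n = len(requests)
    for start in range(n):                       # outer loop: starting point
        used = [0] * n_windows                   # drones booked per window
        served, profit = [], 0.0
        for k in range(n):                       # inner loop: greedy pass
            r = requests[(start + k) % n]
            if all(used[w] + r['drones'] <= n_drones for w in r['windows']):
                for w in r['windows']:
                    used[w] += r['drones']
                served.append(r)
                profit += r['profit']
        if profit > best_profit:                 # keep the best combination
            best_profit, best_served = profit, served
    return best_profit, best_served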
V. EXPERIMENTS AND RESULTS
In this section, we evaluate the performance of the three proposed allocation methods. We conduct a set of experiments to evaluate their performance in terms of profit maximization, drone utilization, request fulfilment, and computation. Although there are existing resource allocation algorithms [25] [26], the proposed problem is fundamentally different in that it is machine (drone) unspecific, re-allocatable, and time-dependent. Hence, we compare the proposed methods against the allocation-optimal and exhaustive brute force allocation to measure their performance.
The dataset used in the experiments is an urban road network dataset from the city of London; the data consist of a graph edge list of the city [27]. For the experiments, we took a sub-network of 129 connected nodes to mimic a skyway network. Each node was randomly allocated a different number of recharging pads. We then set a source node and generate r requests with different destination nodes. For each request, we synthesize payloads with a maximum size of 5 packages and a maximum weight of 1.4 kg. We use the congestion-aware SDaaS composition algorithm to compute the profit and the maximum RTT of each request given the different numbers of recharging pads at each node. Each generated request is randomly assigned to a delivery time window. We assume the drone takes 30 minutes to fully charge.
In the first experiment, we compare the performance, in terms of profit, of the three proposed algorithms against the optimal and exhaustive brute force allocation approach. In the brute force approach, all possible combinations of requests are generated; combinations that are not feasible due to the finite number of the provider's drones are discarded, and the feasible combination with the highest profit is returned, with the swarms allocated to the requests of this combination. In this experiment, we assume a provider owns 30 drones that are allocated and re-used. As shown in Fig. 5, the heuristic-based approach outperforms the request-based and time-based greedy algorithms. The x-axis represents the number of requests received within a day; the y-axis represents the total profit made by serving the allocated requests. The brute force approach, as shown in the figure, terminated at 27 requests because its memory usage grows exponentially, as all possible combinations of requests are generated and processed. This shows that the brute force approach is not feasible in real-world scenarios where the number of delivery requests received per day may be large. As explained earlier, the heuristic-based algorithm's advantage is due to its capability of overcoming the order-of-allocation constraint (it does not allocate the most profitable requests first, as shown in Fig. 4): it allocates swarms to serve requests with less individual profit but resulting, in total, in more profit and more served requests. The request-based and time-based greedy algorithms interchangeably perform better than each other; this mainly depends on the order of the received requests. In cases where requests with higher profits are widely distributed over the time windows, the time-based method performs better; otherwise, the request-based method performs better.
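For comparison, the brute force baseline can be sketched with itertools; it enumerates all 2^r request subsets, which is what makes it intractable beyond a few dozen requests (same assumed request schema as in the heuristic sketch above):

from itertools import combinations

def brute_force_allocate(requests, n_drones, n_windows):
    """Exhaustive baseline: test every subset of requests for feasibility
    and return the most profitable feasible combination."""
    best_profit, best_combo = 0.0, ()
    for size in range(1, len(requests) + 1):
        for combo in combinations(requests, size):
            used = [0] * n_windows
            feasible = True
            for r in combo:
                for w in r['windows']:
                    used[w] += r['drones']
                    if used[w] > n_drones:
                        feasible = False
            if feasible:
                profit = sum(r['profit'] for r in combo)
                if profit > best_profit:
                    best_profit, best_combo = profit, combo
    return best_profit, best_combo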
Although the heuristic-based method is not always optimal, it is the best-performing feasible method; Fig. 6 shows an example of a situation where it misses the optimum. As shown in the figure, the only difference between the heuristic and brute force allocations is the allocation of requests 2 (more profitable) and 3. Since the heuristic loops over the requests in sequence, changing the starting point at every iteration, there are chances that requests with lower profits get allocated and requests with higher profits coming later do not find drones to serve them. The selected requests in the figure started at iteration 3, i.e. request 3 was the first to get allocated. Since requests 3, 7, and 9 got allocated (in sequence) with the heuristic method, requests 1 and 2 did not have enough drones in their respective time windows to get allocated. Despite this limitation, the execution time of the heuristic-based algorithm is polynomial, compared to the exponential time of the brute force, making it a more scalable and feasible solution. Fig. 7 shows the execution times of all methods varying the number of requests received per day. The left y-axis represents the execution time for the brute force algorithm; the right y-axis represents the execution times of the three proposed methods.
In the second experiment, we study the effect of the proposed algorithms on the percentage of fulfilled requests. A fulfilled request is a request that is successfully allocated to a swarm of drones to be served. Fig. 8 shows the percentage of successfully fulfilled requests as the number of requests per day increases. We assume that the provider owns a set of only 30 drones. As shown in the diagram, when the number of requests is low, the provider's drones are able to serve all the requests. As the requests received per day increase, the percentage of successful requests decreases, because the finite number of drones limits the number of requests that can be served. The heuristic-based approach distinctly outperforms the greedy approaches and is capable of serving more requests per day. This is reflected in the profit the provider gains (Fig. 5) and will lead to greater customer satisfaction as more requests are served. Such a graph allows the provider to determine the number of requests they should accept per day if they own a finite number of drones and are incapable of enlarging the fleet. Fig. 9 measures the percentage of fulfilled requests with a fixed number of requests (50) and a varying number of provider-owned drones. The graph shows how the heuristic-based method outperforms the other two methods. As the number of owned drones increases, more requests get fulfilled. Such a graph helps the provider determine the optimal number of drones they should own if they receive r requests daily.
In the last experiment, we measure the effectiveness of the methods in terms of drone utilization. A provider, as described earlier, owns a finite set of drones; an optimal method re-uses these drones as much as possible to serve multiple requests. Fig. 10 shows the percentage of utilized drones, i.e. the number of times the drones are re-allocated, as the number of received requests increases. We assume that the provider owns 30 drones. As shown in the figure, as the number of received requests increases, the percentage of drone utilization increases; the percentage stabilizes when all the drones are fully occupied at all time slots. As shown in the graph, the heuristic method outperforms the other methods in terms of drone utilization. At 110-200 requests, the drones are used more than five times in a day with seven time windows and window-overlapping requests. Fig. 11 shows the percentage of drones utilized as the number of owned drones increases, with a fixed number of received requests per day (50). As the number of owned drones increases, drone utilization decreases, as the drones are re-allocated less. The heuristic-based method, as shown, outperforms the other two methods in terms of drone utilization. With 5 drones, the heuristic-based method and the request-based greedy algorithm utilize the same number of drones; however, looking at Fig. 9, the percentage of fulfilled requests with 5 drones is lower with the request-based greedy method, meaning it uses more drones to serve fewer requests. Using the drone-utilization graphs along with the graphs of successful requests, the provider may determine the optimal number of drones to own to serve the maximum number of requests while utilizing the drones as much as possible without increasing their own costs.
Fig. 11. Drones Utilization varying the number of owned drones
VI. CONCLUSION
We proposed a provider-centric allocation of drone swarm services known as Swarm-based Drone-as-a-Service (SDaaS). A congestion-aware SDaaS composition algorithm is proposed to compute the maximum round trip time a swarm may take to serve a request, taking the constraints at intermediate nodes (limited recharging pads and congestion) into consideration. Three swarms-requests allocation methods were proposed with the goal of increasing the provider's profit, namely request-based greedy, time-based greedy, and heuristic-based allocation. The efficiency of the proposed approaches was evaluated in terms of profit maximization, execution times, request fulfilment, and drone utilization. The proposed approaches were compared to a brute force baseline, demonstrating their scalability relative to the brute force. Experimental results show the outperformance of the heuristic-based approach in comparison with the other two algorithms and its near-optimal results. The limitation of the heuristic-based method is also described and compared to the brute force method. In future work, we will consider heterogeneous swarm allocation to serve multiple requests and extend the work to deal with SDaaS failures.
"year": 2021,
"sha1": "fff6b66e78a42c8f2927b5bc5205d7da9dc103a3",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2107.05173",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "fff6b66e78a42c8f2927b5bc5205d7da9dc103a3",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
261174087 | pes2o/s2orc | v3-fos-license | Sun protection use and habits in the LGBTQI+ community in Lebanon: A cross sectional study
Sun exposure is an extrinsic risk factor for skin aging, wrinkle formation, and the development of skin cancer, namely melanoma, basal cell carcinoma (BCC), and squamous cell carcinoma (SCC). Sun protection measures have emerged as an important means of preventing these harmful effects. Studies have shown that sexual minority men have a greater prevalence of skin cancer than heterosexual men.
| INTRODUCTION
Our skin is exposed to sunlight on a daily basis. Sun exposure is an extrinsic risk factor for skin aging, wrinkle formation, and the development of skin cancer, namely melanoma, basal cell carcinoma (BCC), and squamous cell carcinoma (SCC). Sun protection measures have emerged as an important means of preventing these harmful effects. They have been shown to decrease the risk of skin cancer, photo-aging, and disorders of hyperpigmentation, as well as play an important role in the management of photosensitive skin disorders. 1 Sexual and gender minority (SGM) populations, according to the National Institutes of Health, "include, but are not limited to, individuals who identify as lesbian, gay, bisexual, asexual, transgender, two-spirit, queer, and/or intersex". Studies have shown that sexual minority men have a greater prevalence of skin cancer than heterosexual men. A recently published analysis of a nation-wide questionnaire in the United States (US) showed that gay and bisexual men, of whom 7516 and 5088 were included respectively, had significantly elevated odds of developing skin cancer compared with heterosexual men, with an odds ratio of 1.26 for gay men and 1.48 for bisexual men. 2 In a previous cross-sectional study that included 3083 homosexual, gay, or bisexual men, it was also found that this group of men was more likely to report having tanned indoors. 3 There is limited research investigating the reasons behind this risk of skin cancer development. This is especially important because identifying preventable risk factors, like those pertaining to sun exposure behaviors, can be targeted in the fight against skin cancer and help establish screening tools and preventive interventions for the SGM community. To our knowledge, studies have not previously investigated the sun protection behaviors and awareness practiced by SGM individuals or the prevalence of skin cancer in Lebanon.
In this study, we examined the habits of sun protection use, sun protection measures, and tanning habits among adults in the Lebanese SGM community, and aimed to explore the possible association between having a history of skin cancer or other photo-related skin conditions and sun protection awareness.
| Study design
This study is an Institutional Review Board (IRB)-approved, cross-sectional study conducted to assess the use and habits of sun protection among adults in the SGM community in Lebanon. It was conducted as a survey that patients filled in anonymously. The survey included questions about our patients' demographics, sun protection habits, sun protection measures, medical knowledge, and health resources (Supplementary document).
| Sampling strategy
The study population consisted of adults presenting to the dermatology clinics at the American University of Beirut Medical Center (AUBMC) for any dermatological complaint, in addition to adults presenting to the organizations Helem and SIDC. Adults aged 18-80 years were recruited. Those who were not proficient in spoken English or Arabic and those who were mentally or visually impaired were excluded. Illiterate patients were not excluded from the study, as they were assisted in filling in the survey. Of note, participants who obtained their sun protection information from dermatologists displayed notably higher SPF use (72.2%), compared to those obtaining information from media (18.2%) or family and friends (5.3%).
Discussion: Surveying the perception of the Lebanese SGM community towards sun damage and their adaptive practices to prevent it can help implement and gear a nation-wide campaign to spread proper awareness about this subject. Studying their behavioral tendencies for not using sunscreen can help overcome this contributing risk factor for skin cancers.
Conclusion:
Future investigations have yet to identify confounding variables contributing to higher levels of skin cancers in this population.
KEYWORDS
LGBTQI+, skin cancer, sunscreen
The prevalence of LGBTQI+ people in the United States is estimated at 4.5%. 4 Since there is no data on the percentage of SGM in Lebanon, we assumed that a similar proportion of the Lebanese population is LGBTQI+, and we expected 4.5% of a sample from the general population to belong to this group. Hence, we aimed to have a sample of around 100 SGM participants, recruited at the nongovernmental organizations SIDC and Helem, where many SGM individuals are expected to be present, fill in our questionnaire. Participants were recruited throughout the entire year, documenting responses during each of the four seasons to have a more representative sample.
| Statistical analysis
Data analysis was performed using SPSS (IBM SPSS 25.0). Categorical variables were presented using frequencies and percentages, while continuous variables were presented using mean and standard deviation (mean ± SD). Pearson's Chi-square and Fisher's exact tests were used to evaluate the associations between the different categorical variables. Significance was interpreted at α ≤ 0.05.
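The same kind of analysis can be reproduced outside SPSS; a minimal sketch with SciPy is shown below. The 2x2 counts are invented placeholders, not the study's data, and Fisher's exact test is included because it is preferred when expected cell counts are small.

```python
# Illustrative re-implementation of the reported tests; counts are placeholders.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Rows: information source (dermatologist vs. media); columns: SPF use (yes / no).
table = np.array([[13, 5],
                  [ 8, 36]])

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)

print(f"chi2 = {chi2:.2f}, p = {p_chi2:.4f}, dof = {dof}")
print(f"Fisher's exact p = {p_fisher:.4f}")
```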
| RESULTS
A total of 129 participants agreed to participate and answered our questionnaire (Table 1). The ages of the participants ranged between 18 and 51 years, with a mean of 28.98 years and a standard deviation (SD) of 7.1 years. Out of the 129 participants, 66 (50.8%) identified as gay, 21 (16.2%) as lesbian, 18 (13.8%) as bisexual, 5 (3.8%) as asexual, and 19 (14.6%) identified their sexual orientation as other than those mentioned above. The majority of our participants had an undergraduate degree (43.1%), almost a quarter (23.1%) had a graduate degree, a quarter (25.4%) had a school degree, and only a minority (6.9%) had a technical degree (Table 1).
TABLE 1 Demographics and tanning habits of the selected population.
Reasons for tanning varied among our participants: tanning to get a color (13.1%), tanning to get vitamin D (4.6%), tanning socially (6.9%), and tanning for mood elevation (0.8%) (Table 1). Almost half of the participants (46.9%) reported tanning for more than one of the reasons mentioned above. As for the products used while tanning, more than half of the participants (53.1%) reported using a tanning oil or a sunscreen with SPF, compared to 24.6% who used tanning oil without SPF and 11.5% who reported using Vaseline petroleum jelly or baby oil. Around 8.5% reported using tanning products other than those mentioned above (Table 1).
There was no significant association between sexual orientation and SPF use (Table 2). A higher percentage of SPF use was noted among those who had an office job as compared to those with a field job and those who were unemployed (31.4% vs. 28.6% vs. 18.6%, respectively).
Nevertheless, no significant association was noted between the type of job and SPF use (p = 0.601). Although the association between SPF use and the highest educational degree attained was not significant (p = 0.070), the tendency to use SPF increased with higher levels of education. Only 10% of school degree holders reported SPF use, compared to 22.2% of those with a technical degree, 28.6% of those who held an undergraduate degree, and 40% of those who held a graduate degree (Table 3).
There was no significant association noted between knowledge that the sun causes skin cancer and SPF use (p = 0.067). The majority of participants (64.6%) who knew that the sun causes skin cancer reported not using SPF, and only 32.9% reported using SPF. Likewise, more than half of the participants (64.7%) who did not know that the sun causes skin cancer did not use SPF. No statistically significant association was noted between knowledge that the sun causes pigmentation and SPF use (p = 0.325); only 32.1% of participants who had knowledge about the risks of pigmentation used SPF, as compared to 27.3% who used SPF without knowing this risk (Table 4).
A significant association was found between the source of sun protection information and SPF use (p < 0.001): participants who obtained information from their dermatologists had significantly higher proportions of SPF use (72.2%) as compared to participants who obtained their information from media (18.2%), family and friends (5.3%), and those who used more than one source (26.7%; Table 5).
| DISCUSSION
Among SGM groups, it has been reported that sexual minority men in the US are more likely to tan indoors and less likely to use protective clothes when outdoors. However, they were also more likely to avoid sun exposure and, when exposed, to use sunscreen and seek shade while outdoors. 5 Still, the male population in the SGM community has been shown to tan more and protect themselves less from the sun when compared to their heterosexual counterparts. The opposite relationship was observed between females in the SGM community and their heteronormative counterparts. 5 One can assume that increased sun exposure is a modifiable contributing factor for the high risk of skin cancer in this population.
TABLE 2 The association between sexual orientation, SPF use, and tanning bed use.
Some of these factors include alcohol and tobacco use, concomitant human immunodeficiency virus (HIV) and/or HPV infection, and the use of estrogen hormone therapy, which are all prevalent in the SGM community. 6 Further research demonstrated that the genus beta of HPV amplifies the process of UV-induced DNA damage leading to skin cancer. 7 Due to the high prevalence of HPV among SGM individuals, it could be considered another relevant modifiable risk factor for the higher rate of skin cancer in this population.
In 2011, a review on the prevalence of sunburn, sun-protection behaviors, and indoor tanning among US adults, adolescents, and children found that around 3 in 10 adults routinely follow sun-protection practices. 8 Sun protection practices were also correlated with demographic factors; for example, adults were less likely to use sunscreen if their income was much lower than the poverty level.
TABLE 3 The association between occupation, the highest degree attained, and SPF use.
TABLE 5 The association between the source of sun protection information and the use of SPF.
Concurrently, household income was directly correlated with sunscreen use, which may suggest that cost is a barrier to sunscreen use. 9 In our study, we found similar associations, where people with office jobs were more inclined to wear sunscreen than people with field jobs, regardless of their sexual orientation. This could be explained by the more stable, higher income that comes with office jobs in our country, despite the sun exposure in field jobs. In addition, people who were unemployed were the least inclined to use sunscreen, which matches the aforementioned study regarding the cost barrier of SPF or, more likely, a probable lower socio-economic status and level of education. In support of the latter, education level appears to be an important factor in regulating sunscreen use. People with a low educational level reported less frequent sunscreen use and a lower SPF factor compared with people with a higher educational level. 10 Concurrently, our study found that those who had a graduate degree wore sunscreen more frequently than those with a lower degree of education. However, within each of the participant educational groups, a higher percentage of participants did not wear SPF than did.
In our study, the LGBTQI+ community seemed to seek exposure to the sun for various reasons including social tanning, getting a darker skin color, increasing vitamin D levels, and elevating the mood.
Similarly, a recently developed biopsychosocial model revealed that mood/affect regulation and appearance reasons played a significant role in the motivation towards indoor tanning in the SGM community. 11 Studies found that gay and bisexual men had higher rates of indoor tanning and a higher risk of developing melanoma. 12 Interestingly, unlike these studies, most of our studied population in Lebanon did not have a previous history of artificial tanning. Whether these habits translate to a lower incidence of skin cancer among the Lebanese SGM population compared to those reported in the US-based studies is an issue that has yet to be explored.
Lebanon is a Mediterranean country situated between Greece and Saudi Arabia both geographically and in terms of cultural norms, so we have yet to discover what level of awareness is most adopted here. The only study that has looked into sun protection habits in Lebanon was conducted on teenagers from the general population in 2004. 13 Although most were aware that sunscreen is protective, the vast majority reported enjoying tanning in the summer. Similarly, among our SGM population, even though the majority knew that sun exposure causes skin cancer and dyspigmentation, they did not report adopting sun protective measures, namely using SPF. Further investigations in the adult Lebanese population would be crucial to understand their sun protective habits and practices, as well as to potentially compare them to those practiced by the SGM population.
In the previously mentioned Lebanese study, the most common sources of information regarding the risks of sun exposure were television and journals, rather than radio, school, or doctors.
A group in Peru interviewed outpatients treated at dermatology clinics to assess their knowledge, attitudes, and practices regarding sun exposure and photoprotection. This population showed an appropriate degree of awareness about sun-protective behaviors, which could be attributed to their contact with a dermatologist during their clinic visits. 14 In our study, a statistically significant relationship was found between SGM participants getting their skin health and sun protection information from dermatologists and SPF use, compared to other sources of information.
These outcomes indicate that dermatologists are at the forefront of this awareness campaign tackling sun protective measures, skin health and skin cancer prevention, especially in the SGM community.
Admittedly, our population is relatively limited in size and only addressed people from the SGM community. In addition, some of the questions in the survey had a low response rate and could not be reported or analyzed, which could have contributed to the decreased significance of our results. Nevertheless, there was no significant difference between SPF use and sexual orientation; all members of this community showed a tendency towards not using SPF. The nationwide awareness and educational campaigns abroad, which we lack, might have contributed to a higher proportion of patients adhering to sunscreen use. Greater acceptance and diversity in Western countries have probably led to higher awareness and more support groups educating about better sun protective measures. The SGM community is still marginalized in our country, and people who identify as such might find it difficult to reach out for help or seek medical advice. This could have contributed to the discrepancy in awareness and appropriate sun protective behaviors.
| CONCLUSION
This study focused on members of the SGM community and demonstrated their tendency not to use sun-protective measures, as well as their deficits in knowledge of skin cancer prevention. Surveying the perception of the Lebanese SGM community towards sun damage and their adaptive practices to prevent it can help implement and gear a nation-wide campaign to spread proper awareness about this subject. Studying their behavioral tendencies for not using sunscreen can help overcome this contributing risk factor for skin cancers. Future investigations have yet to identify confounding variables contributing to higher levels of skin cancers in this population.
Helem is the first lesbian, gay, bisexual, transgender, queer, intersex (LGBTQI+) rights organization in the Arab world, officially established in Beirut, Lebanon in 2004. It is a nongovernmental organization, with a mission to lead the nonviolent struggle for the liberation of LGBTQI+ and other persons with nonconforming sexualities and/or gender identities in Lebanon and the Middle East and North Africa (MENA) region against violations of their individual and collective civil, political, economic, social, and cultural rights.
SIDC's mission is to develop social solidarity by reinforcing healthy behavior in Lebanon through community empowerment, prevention, harm reduction policies, advocacy, and psychosocial services.They provide all the necessary information about protective behaviors to reduce the risk of HIV and Hepatitis B/C transmission as well as the harms resulting from drug use, especially among vulnerable populations like SGM individuals.
AUTHOR CONTRIBUTIONS
All authors have read and approved the final manuscript. Nohra Ghaoui, Divina Justina Hasbani, Sally Hassan, and Dana Saade wrote the paper and analyzed the data. Nohra Ghaoui, Divina Justina Hasbani, Sally Hassan, Tarek Bandali, and Serena Saade
TABLE 4 The association between SPF use and having knowledge that sun causes skin cancer and sun causes pigmentation. | 2023-08-27T06:17:56.123Z | 2023-08-26T00:00:00.000 | {
"year": 2023,
"sha1": "47d7cd6826770e8d5ea1f47e7c1d5c39d129a42a",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jocd.15974",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "e0d4349dc92cc879c2322b40f3fc2be5f6bb19e1",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
8391097 | pes2o/s2orc | v3-fos-license | Gang of GANs: Generative Adversarial Networks with Maximum Margin Ranking
Traditional generative adversarial networks (GAN) and many of its variants are trained by minimizing the KL or JS-divergence loss that measures how close the generated data distribution is from the true data distribution. A recent advance called the WGAN based on Wasserstein distance can improve on the KL and JS-divergence based GANs, and alleviate the gradient vanishing, instability, and mode collapse issues that are common in the GAN training. In this work, we aim at improving on the WGAN by first generalizing its discriminator loss to a margin-based one, which leads to a better discriminator, and in turn a better generator, and then carrying out a progressive training paradigm involving multiple GANs to contribute to the maximum margin ranking loss so that the GAN at later stages will improve upon early stages. We call this method Gang of GANs (GoGAN). We have shown theoretically that the proposed GoGAN can reduce the gap between the true data distribution and the generated data distribution by at least half in an optimally trained WGAN. We have also proposed a new way of measuring GAN quality which is based on image completion tasks. We have evaluated our method on four visual datasets: CelebA, LSUN Bedroom, CIFAR-10, and 50K-SSFF, and have seen both visual and quantitative improvement over baseline WGAN.
Introduction
Generative approaches can learn from the tremendous amount of data around us and generate new instances that are like the data they have observed, in any domain. This line of research is extremely important because it has the potential to provide meaningful insight into the physical world we human beings can perceive. Take visual perception for instance, the generative models have much smaller number of parameters than the amount of visual data out there in the world, which means that in order for the generative models to come up with new instances that are like the actual true data, they have to search for intrinsic pattern and distill the essence. We can in turn capitalize on that and make machines understand, describe, and model the visual world better. Recently, three classes of algorithms have emerged as successful generative approaches to model the visual data in an unsupervised manner.
Variational autoencoders (VAEs) [17] formalize the generative problem in the framework of probabilistic graphical models where we are to maximize a lower bound on the log likelihood of the training data. The probabilistic graphical models with latent variables allow us to perform both learning and Bayesian inference efficiently. By projecting into a learned latent space, samples can be reconstructed from that space. The VAEs are straightforward to train but at the cost of introducing potentially restrictive assumptions about the approximate posterior distribution. Also, their generated samples tend to be slightly blurry. Autoregressive models such as PixelRNN [32] and PixelCNN [33] get rid of the latent variables and instead directly model the conditional distribution of every individual pixel given previous pixels starting from top-left corner. PixelRNN/CNN have a stable training process via softmax loss and currently give the best log likelihoods on the generated data, which is an indicator of high plausibility. However, they are relatively inefficient during sampling and do not easily provide simple low-dimensional latent codes for images.
Generative adversarial networks (GANs) [10] simultaneously train a generator network for generating realistic images, and a discriminator network for distinguishing between the generated images and the samples from the training data (true distribution). The two players (generator and discriminator) play a two-player minimax game until Nash equilibrium where the generator is able to generate images as genuine as the ones sampled from the true distribution, and the discriminator is no longer able to distinguish between the two sets of images, or equivalently is guessing at random chance. In the traditional GAN formulation, the generator and the discriminator are updated by receiving gradient signals from the loss induced by observing discrepancies between the two distributions by the discriminator. From our perspective, GANs are able to generate images with the highest visual quality by far. The image details are sharp as well as semantically sound.
Motivation: Although we have observed many successes in applying GANs to various scenarios as well as in many GAN variants that come along, there has not been much work dedicated to improving GAN itself from a very fundamental point of view. Ultimately, we are all interested in the end-product of a GAN, which is the image it can generate. Although we are all focusing on the performance of the GAN generator, we must know that its performance is directly affected by the GAN discriminator. In short, to make the generator stronger, we need a stronger opponent, which is a stronger discriminator in this case. Imagine if we have a weak discriminator which does a poor job telling generated images from the true images, it takes only a little effort for the generator to win the two-player minimax game as described in the original work of GAN [10]. To further improve upon the state-of-the-art GAN method, one possible direction is to enforce a maximum margin ranking loss in the optimization of the discriminator, which will result in a stronger discriminator that attends to the fine details of images, and a stronger discriminator helps obtain a stronger generator in the end.
In this work, we are focusing on how to further improve the GANs by incorporating a maximum margin ranking criterion in the optimization, and with a progressive training paradigm. We call the proposed method Gang of GANs (GoGAN) 1 . Our contributions include (1) generalizing on the Wasserstein GAN discriminator loss with a marginbased discriminator loss; (2) proposing a progressive training paradigm involving multiple GANs to contribute to the maximum margin ranking loss so that the GAN at later GoGAN stages will improve upon early stages; (3) showing theoretical guarantee that the GoGAN will bridge the gap between true data distribution and generated data distribution by at least half; and (4) proposing a new quality measure for the GANs through image completion tasks.
Related Work
In this section, we review recent advances in GAN research as well as many of its variants and related work.
Deep convolutional generative adversarial networks (DCGAN) [28] are proposed to replace the multilayer perceptron in the original GAN [10] for more stable training, by utilizing strided convolutions in place of pooling layers, and fractional-strided convolutions in place of image upsampling. Conditional GAN [22] is proposed as a variant of GAN by extending it to a conditional model, where both the generator and discriminator are conditioned on some extra auxiliary information, such as class labels. The conditioning is performed by feeding the auxiliary information into both the generator and the discriminator as an additional input layer. Another variant of GAN is called auxiliary classifier GAN (AC-GAN) [25], where every generated sample has a corresponding class label in addition to the noise. The generator needs both for generating images. Meanwhile, the discriminator does two things: giving a probability distribution over image sources, and giving a probability distribution over the class labels. Bidirectional GAN (BiGAN) [7] is proposed to bridge the gap that the conventional GAN does not learn the inverse mapping which projects the data back into the latent space, which can be very useful for unsupervised feature learning. The BiGAN not only trains a generator, but also an encoder that induces a distribution for mapping data points into the latent feature space of the generative model. At the same time, the discriminator is also adapted to take input from the latent feature space, and then predict whether an image is generated or from the true distribution. There is a pathway from the latent feature z to the generated data G(z) via the generator G, as well as another pathway from the data x back to the latent feature representation E(x) via the newly added encoder E. The generated image together with the input latent noise (G(z), z), and the true data together with its encoded latent representation (x, E(x)), are fed into the discriminator D for classification. There is a concurrent work proposed in [8] that has an identical model. A sequential variant of the GAN is the Laplacian generative adversarial networks (LAPGAN) [6] model, which generates images in a coarse-to-fine manner by generating and upsampling in multiple steps. It is worth mentioning that the sequential variant of the VAE is the deep recurrent attentive writer (DRAW) [11] model, which generates images by accumulating updates into a canvas using a recurrent network. Built upon the idea of sequential generation of images, the recurrent adversarial networks [15] have been proposed to let the recurrent network learn the optimal generation procedure by itself, as opposed to imposing a coarse-to-fine structure on the procedure. Introspective adversarial network (IAN) [4] is proposed to hybridize the VAE and the GAN. It leverages the power of the adversarial objective while maintaining the efficient inference mechanism of the VAE. The generative multi-adversarial networks (GMAN) [9] extends the GANs to multiple discriminators. For a fixed generator G, N randomly instantiated copies of the discriminators are utilized to present the maximum value of each value function as the loss for the generator. Requiring the generator to minimize the max forces G to generate high fidelity samples that must hold up under the scrutiny of all N discriminators. Layered recursive generative adversarial networks (LR-GAN) [35] generates images in a recursive fashion.
It first generates a background, and then generates a foreground by conditioning on the background, along with a mask and an affine transformation that together define how the background and foreground should be composed to obtain a complete image. The foreground-background mask is estimated in a completely unsupervised way without using any object masks for training. The authors of [24] have shown that the generative-adversarial approach in GAN is a special case of an existing more general variational divergence estimation approach, and that any f-divergence can be used for training generative neural samplers. The InfoGAN [5] method is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. A lower bound of the mutual information can be derived and optimized efficiently. Rather than a single unstructured noise vector being input into the generator, InfoGAN decomposes the noise vector into two parts: a source of incompressible noise z and a latent code c that targets the salient structured semantic features of the data distribution, and the generator thus becomes G(z, c). The authors have added an information-theoretic regularization to ensure there is high mutual information between the latent code c and the generator distribution G(z, c). To strive for more stable GAN training, the energy-based generative adversarial network (EBGAN) [38] is proposed, which views the discriminator as an energy function that assigns low energy to the regions near the data manifold and higher energy to other regions. The authors have shown one instantiation of EBGAN using an autoencoder architecture, with the energy being the reconstruction error. The boundary-seeking GAN (BGAN) [13] aims at generating samples that lie on the decision boundary of the current discriminator at each training update. The hope is that a generator can be trained in this way to match a target distribution in the limit of a perfect discriminator. Least squares GAN [21] adopts a least squares loss function for the discriminator, which is equivalent to a multi-class GAN with the $\ell_2$ loss function. The authors have shown that the objective function yields minimizing the Pearson $\chi^2$ divergence. The stacked GAN (SGAN) [14] consists of a top-down stack of GANs, each trained to generate plausible lower-level representations conditioned on higher-level representations. Discriminators are attached to each feature hierarchy to provide intermediate supervision. Each GAN in the stack is first trained independently, and then the stack is trained end-to-end.
Perhaps the most seminal GAN-related work since the inception of the original GAN [10] idea is the Wasserstein GAN (WGAN) [3]. Efforts have been made to fully understand the training dynamics of generative adversarial networks through theoretical analysis [2], which led to the creation of the WGAN. The two major issues with the original GAN and many of its variants are the vanishing gradient issue and the mode collapse issue. By incorporating a smooth Wasserstein distance metric and objective, as opposed to the KL-divergence and JS-divergence, the WGAN is able to overcome the vanishing gradient and mode collapse issues. WGAN has also made training and balancing between the generator and discriminator much easier, in the sense that one can now train the discriminator till optimality, and then gradually improve the generator. Moreover, it provides an indicator (based on the Wasserstein distance) for the training progress, which correlates well with the visual quality of the generated samples.
Other applications include cross-domain image generation [30] through a domain transfer network (DTN), which employs a compound of loss functions including a multi-class GAN loss, an f-constancy component, and a regularization component that encourages the generator to map samples from the target domain to themselves. The image-to-image translation approach [16] is based on conditional GAN, and learns a conditional generative model for generating a corresponding output image in a different domain, conditioned on an input image. The image super-resolution GAN (SRGAN) [19] combines both an image content loss and an adversarial loss for recovering the high-resolution counterpart of a low-resolution input image. The plug and play generative networks (PPGN) [23] are able to produce high quality images at higher resolution for all 1000 ImageNet categories. PPGN is composed of a generator that is capable of drawing a wide range of image types, and a replaceable condition network that tells the generator what to draw, hence plug and play.
1 Implementation and future updates will be available at http://xujuefei.com/gogan.
Proposed Method: Gang of GANs
In this section we will review the original GAN [10] and its convolutional variant DCGAN [28]. We will then analyze how to further improve the GAN model with WGAN [3], and introduce our Gang of GANs (GoGAN) method.
GAN and DCGAN
The GAN [10] framework trains two networks, a generator $G_\theta(z): z \rightarrow x$, and a discriminator $D_\omega(x): x \rightarrow [0, 1]$. G maps a random vector z, sampled from a prior distribution $p_z(z)$, to the image space. D maps an input image to a likelihood. The purpose of G is to generate realistic images, while D plays an adversarial role to discriminate between the image generated from G and the image sampled from the data distribution $p_{\text{data}}$. The networks are trained by optimizing the following minimax loss function:

$$\min_G \max_D \; \mathbb{E}_{x\sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z\sim p_z(z)}[\log(1 - D(G(z)))] \tag{1}$$

where x is the sample from the $p_{\text{data}}$ distribution; z is randomly generated and lies in some latent space. There are many ways to structure G(z). The DCGAN [28] uses fractionally-strided convolutions to upsample images instead of fully-connected neurons, as shown in Figure 1. The generator G is updated to fool the discriminator D into wrongly classifying the generated sample G(z), while the discriminator D tries not to be fooled. Here, both G and D are deep convolutional neural networks and are trained with an alternating gradient descent algorithm. After convergence, D is able to reject images that are too fake, and G can produce high quality images faithful to the training distribution (true distribution $p_{\text{data}}$).
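For concreteness, a minimal sketch of one alternating update for the minimax objective in (1) is given below, assuming PyTorch and DCGAN-style modules `G` and `D` (architectures omitted), where `D` outputs a probability in [0, 1]. The generator update already uses the −log D trick discussed in the next subsection; this is an illustration, not the authors' code.

```python
# Minimal sketch of one alternating GAN update (illustrative; names assumed).
import torch

def gan_step(G, D, opt_G, opt_D, real, z_dim=100):
    z = torch.randn(real.size(0), z_dim)

    # Discriminator: ascend log D(x) + log(1 - D(G(z))) (descend its negation).
    d_real = D(real)
    d_fake = D(G(z).detach())          # detach: do not update G on this pass
    loss_D = -(torch.log(d_real + 1e-8) + torch.log(1 - d_fake + 1e-8)).mean()
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator ("-log D" trick): ascend log D(G(z)).
    loss_G = -torch.log(D(G(z)) + 1e-8).mean()
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```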
Wasserstein GAN and Improvement over GAN
In the original GAN, Goodfellow et al. [10] have proposed the following two loss functions for the generator:

$$\mathbb{E}_{z\sim p_z(z)}[\log(1 - D(G(z)))] \quad \text{and} \quad -\mathbb{E}_{z\sim p_z(z)}[\log D(G(z))].$$

The latter one is referred to as the − log D trick [10,2,3].
Unfortunately, both forms can lead to potential issues in training the GAN. In short, the former loss function can lead to the gradient vanishing problem, especially when the discriminator is trained to be very strong. The real image distribution $P_r$ and the generated image distribution $P_g$ have support contained in two closed manifolds that don't perfectly align and don't have full dimension. When the discriminator is near optimal, minimizing the loss of the generator is equivalent to minimizing the JS-divergence between $P_r$ and $P_g$, but due to the aforementioned reasons, the JS-divergence will always be a constant 2 log 2, which allows the existence of an optimal discriminator to (almost) perfectly carve the two distributions, i.e., assigning probability 1 to all the real samples, and 0 to all the generated ones, which renders the gradient of the generator loss to go to 0.
For the latter case, it can be shown that minimizing the loss function is equivalent to minimizing $\mathrm{KL}(P_g \| P_r) - 2\,\mathrm{JS}(P_r \| P_g)$, which leads to instability in the gradient because it simultaneously tries to minimize the KL-divergence and maximize the JS-divergence, which is a less ideal loss function design. Even the KL term by itself has some issues. Due to its asymmetry, the penalty for the two types of errors is quite different. For example, when $P_g(x) \rightarrow 0$ and $P_r(x) \rightarrow 1$, we have $P_g(x)\log\frac{P_g(x)}{P_r(x)} \rightarrow 0$, which contributes almost nothing to $\mathrm{KL}(P_g \| P_r)$. On the other hand, when $P_g(x) \rightarrow 1$ and $P_r(x) \rightarrow 0$, we have $P_g(x)\log\frac{P_g(x)}{P_r(x)} \rightarrow +\infty$, which makes a gigantic contribution to $\mathrm{KL}(P_g \| P_r)$. So the first type of error corresponds to the generator failing to produce realistic samples, which carries a tiny penalty, and the second type corresponds to the generator producing unrealistic samples, which carries an enormous penalty. Under this reality, the generator would rather produce repetitive and 'safe' samples than diverse samples that risk triggering the second type of error. This causes the infamous mode collapse.
WGAN [2,3] avoids the gradient vanishing and mode collapse issues in the original GAN and many of its variants by adopting a new distance metric: the Wasserstein-1 distance, or the earth-mover distance, as follows:

$$W(P_r, P_g) = \inf_{\gamma \in \Gamma(P_r, P_g)} \mathbb{E}_{(x,y)\sim\gamma}\big[\|x - y\|\big]$$

where $\Gamma(P_r, P_g)$ is the set of all joint distributions $\gamma(x, y)$ whose marginals are $P_r$ and $P_g$ respectively. One of the biggest advantages of the Wasserstein distance over the KL and JS-divergence is that it is smooth, which is very important in providing meaningful gradient information when the two distributions have support contained in two closed manifolds that don't perfectly align and don't have full dimension, in which case the KL and JS-divergence would fail to provide gradient information. However, the infimum $\inf_{\gamma\in\Gamma(P_r,P_g)}$ is highly intractable. Thanks to the Kantorovich-Rubinstein duality [34], the Wasserstein distance becomes:

$$W(P_r, P_g) = \sup_{\|f\|_L \le 1} \mathbb{E}_{x\sim P_r}[f(x)] - \mathbb{E}_{x\sim P_g}[f(x)]$$

where the supremum is over all the 1-Lipschitz functions. Therefore, we can have a parameterized family of functions $\{f_w\}_{w\in\mathcal{W}}$ that are K-Lipschitz for some K, and the problem we are solving now becomes:

$$\max_{w \in \mathcal{W}} \; L = \mathbb{E}_{x\sim P_r}[f_w(x)] - \mathbb{E}_{z\sim p_z}[f_w(g_\theta(z))].$$

Let $f_w$ (the discriminator) be a neural network with weights w, and maximize L as much as possible so that it can well approximate the actual Wasserstein distance between the real data distribution and the generated data distribution, up to a multiplicative constant. On the other hand, the generator will try to minimize L, and since the first term in L does not concern the generator, its loss function is to minimize $-\mathbb{E}_{x\sim P_g}[f(x)]$, and the loss function for the discriminator is to minimize $\mathbb{E}_{x\sim P_g}[f(x)] - \mathbb{E}_{x\sim P_r}[f(x)]$.
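The WGAN updates just described can be sketched as follows, assuming a critic `f_w` without a sigmoid output and a generator `g`; the clipping constant 0.01 is a common choice from [3], not something this section prescribes.

```python
# Illustrative WGAN critic/generator steps with weight clipping.
import torch

def wgan_critic_step(f_w, g, opt_c, real, z_dim=100, clip=0.01):
    z = torch.randn(real.size(0), z_dim)
    # Critic minimizes E[f(G(z))] - E[f(x)], i.e., maximizes the gap L.
    loss_c = f_w(g(z).detach()).mean() - f_w(real).mean()
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()
    with torch.no_grad():                     # enforce K-Lipschitz by clipping
        for p in f_w.parameters():
            p.clamp_(-clip, clip)
    return -loss_c.item()                     # running estimate of the gap

def wgan_generator_step(f_w, g, opt_g, batch_size, z_dim=100):
    z = torch.randn(batch_size, z_dim)
    loss_g = -f_w(g(z)).mean()                # minimize -E[f(G(z))]
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_g.item()
```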
Gang of GANs (GoGAN)
In this section, we will discuss our proposed GoGAN method, a progressive training paradigm that improves the GAN by allowing GANs at later stages to contribute to a new ranking loss function that further improves GAN performance. Also, at each GoGAN stage, we generalize the WGAN discriminator loss and arrive at a margin-based discriminator loss, and we call the resulting network margin GAN (MGAN). The entire GoGAN flowchart is shown in Figure 2, and we will introduce the components involved. Based on the previous discussion, we have seen that WGAN has several advantages over the traditional GAN. Recall that $D_{w_i}(x)$ and $D_{w_i}(G_{\theta_i}(z))$ are the discriminator scores for the real image x and the generated image $G_{\theta_i}(z)$ in Stage-(i+1) GoGAN. In order to further improve on WGAN, we propose a margin-based WGAN discriminator loss:

$$\mathcal{L}_{\text{disc}} = \big[m + D_{w_i}(G_{\theta_i}(z)) - D_{w_i}(x)\big]_+ \tag{2}$$

where $[x]_+ = \max(0, x)$ is the hinge loss and m is the margin. This MGAN loss function is a generalization of the discriminator loss in WGAN: when the margin $m \rightarrow \infty$, the hinge is always active and, up to an additive constant, the loss reduces to the WGAN discriminator loss. The intuition behind the MGAN loss is as follows. The WGAN loss treats a gap of 10 or 1 equally and tries to increase the gap even further. The MGAN loss focuses on increasing the separation of examples with gap 1 and leaves the samples with separation 10 as they are, which ensures a better discriminator, and hence a better generator. We will see next that the MGAN loss can be extended even further by incorporating margin-based ranking when we go beyond a single MGAN.
Ranker R: When going from Stage-i GoGAN to Stage-(i+1) GoGAN, we incorporate a margin-based ranking loss in the progressive training of the GoGAN to ensure that the generated images from a later GAN training stage are better than those from previous stages. The idea is fairly straightforward: the discriminator scores of the generated images at later stages should be ranked closer to those of the images sampled from the true distribution. The ranking loss is:

$$\mathcal{L}_{\text{rank}} = \big[\epsilon + D(G_{\theta_i}(z)) - D(G_{\theta_{i+1}}(z))\big]_+ \tag{3}$$

Combining (2) and (3), the $\mathcal{L}_{\text{disc}}$ and $\mathcal{L}_{\text{rank}}$ losses together are equivalent to enforcing the following ranking strategy:

$$D(x) \succ D(G_{\theta_{i+1}}(z)) \succ D(G_{\theta_i}(z)) \tag{4}$$

Notice that such a ranking constraint only applies between adjacent GoGAN pairs, and it can be easily verified that it intrinsically establishes an ordering among all the MGANs involved, which will be further discussed in Section 4.
The weights of the ranker R and the discriminator D are tied together. Conceptually, from Stage-2 onward, the ranker is just the discriminator, which takes in an extra ranking loss in addition to the discriminator loss already in place for the MGAN. In Figure 2, the ranker is a separate block, but only for illustrative purposes. Different training stages are encircled by green dotted lines with various transparency levels. The purple solid lines show the connectivity within the GoGAN, with various transparency levels in accordance with the progressive training stages. The arrows on both ends of the purple lines indicate the forward and backward pass of the information and gradient signal. If the entire GoGAN is trained, the ranker will have achieved the following desired goal: $R(G_1(z)) \prec R(G_2(z)) \prec R(G_3(z)) \prec \cdots \prec R(G_K(z)) \prec R(x)$, where $\prec$ indicates relative ordering. The total loss for GoGAN can be written as $\mathcal{L}_{\text{GoGAN}} = \lambda_1 \cdot \mathcal{L}_{\text{disc}} + \lambda_2 \cdot \mathcal{L}_{\text{rank}}$, where the weighting parameters $\lambda_1$ and $\lambda_2$ control the relative strength.
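A sketch of the Stage-(i+1) discriminator/ranker objective, combining the hinge loss (2) and the ranking loss (3) as reconstructed above, is shown below. The earlier-stage pair is frozen; the margins and weights are illustrative, and in practice the generator is updated in alternation rather than through this combined loss.

```python
# Illustrative stage-(i+1) GoGAN discriminator/ranker loss (names assumed).
import torch

def gogan_disc_loss(D_next, G_next, D_prev, G_prev, real, z,
                    margin=1.0, eps=0.5, lam1=1.0, lam2=1.0):
    d_real = D_next(real)
    d_fake = D_next(G_next(z).detach())       # generator updated separately
    with torch.no_grad():                     # earlier stage is held fixed
        d_fake_prev = D_prev(G_prev(z))

    # L_disc (2): separate D(x) from D(G(z)) by at least `margin`.
    l_disc = torch.clamp(margin + d_fake - d_real, min=0).mean()
    # L_rank (3): later-stage samples should score above earlier-stage samples.
    l_rank = torch.clamp(eps + d_fake_prev - d_fake, min=0).mean()
    return lam1 * l_disc + lam2 * l_rank
```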
Theoretical Analysis
In WGAN [3], the following loss function involving the weight updates of the discriminator and the generator is a good indicator of the EM distance during WGAN training:

$$L = \mathbb{E}_{x\sim P_r}[f_w(x)] - \mathbb{E}_{z\sim p_z}[f_w(g_\theta(z))]$$

This loss function is essentially the gap Γ between the real data distribution and the generated data distribution, and of course the discriminator is trying to push the gap larger. The realization of this loss function for one batch of m samples is:

$$\Gamma = \frac{1}{m}\sum_{i=1}^{m} f_w\big(x^{(i)}\big) - \frac{1}{m}\sum_{i=1}^{m} f_w\big(g_\theta(z^{(i)})\big)$$

Theorem 4.1. GoGAN with ranking loss (3) trained at its equilibrium will reduce the gap between the real data distribution $P_r$ and the generated data distribution $P_\theta$ at least by half for a Wasserstein GAN trained at its optimality.
Proof. Let $D^*_{w_1}$ and $G^*_{\theta_1}$ be the optimally trained discriminator and generator for the original WGAN (Stage-1 GoGAN). Let $D^*_{w_2}$ and $G^*_{\theta_2}$ be the optimally trained discriminator and generator for the Stage-2 GoGAN in the proposed progressive training framework.
The gap between the real data distribution and the generated data distribution for Stage-1 through Stage-N GoGAN is:

$$\Gamma_i = \frac{1}{m}\sum_{j=1}^{m} D^*_{w_i}\big(x^{(j)}\big) - \frac{1}{m}\sum_{j=1}^{m} D^*_{w_i}\big(G^*_{\theta_i}(z^{(j)})\big), \quad i = 1, \ldots, N.$$

Let us first establish the relationship between the gaps $\Gamma_1$ and $\Gamma_2$, and then extend it to the $\Gamma_N$ case.
According to the ranking strategy (4), we enforce the ordering $D^*_{w_2}(x) \succ D^*_{w_2}(G^*_{\theta_2}(z)) \succ D^*_{w_1}(G^*_{\theta_1}(z))$ for each image; the left-hand side of the resulting inequality is the new gap from Stage-2 GoGAN for one image, and the relationship carries over to the whole batch. Therefore, we have $0 < \Gamma_2 < \xi_1 + \Gamma_1$, where the term $\xi_1$ can be positive, negative, or zero. Only when $\xi_1 \le 0$ does the relation $\Gamma_2 < \Gamma_1$ always hold true. In other words, according to the ranking strategy, we have the byproduct relation $\xi_1 \le 0$ established. Combining relations (9) and (14), we arrive at the new ordering; notice the nested ranking strategy that results from the derivation. Therefore, when going from Stage-2 to Stage-3 GoGAN, a similar relationship can be obtained (for notational simplicity, we drop the (i) superscript and use a bar to denote the average over m instances), which is equivalent to the corresponding expression when the already-existing relationship from Stage-1 to Stage-2 GoGAN is taken into account. A similar ordering can be established all the way to Stage-N GoGAN. Let us assume that the distance between the first and last terms, $D^*_{w_1}(x)$ and $D^*_{w_1}(G^*_{\theta_1}(z))$, is β, which is finite, as shown in Figure 3. Let us also assume that the distance between $D^*_{w_i}(x)$ and $D^*_{w_{i+1}}(x)$ is $\eta_i$, and the distance between $D^*_{w_{i+1}}(G^*_{\theta_{i+1}}(z))$ and $D^*_{w_i}(G^*_{\theta_i}(z))$ is $\phi_i$. Extending the pairwise relationship established by the ranker in (4,5) to the entire batch, we will have equal margins between the terms $D^*_{w_{i+1}}(x)$, $D^*_{w_{i+1}}(G^*_{\theta_{i+1}}(z))$, and $D^*_{w_i}(G^*_{\theta_i}(z))$; the margin between $D^*_{w_{i+1}}(x)$ and $D^*_{w_i}(x)$ remains flexible. Therefore, we can put the corresponding terms in order as shown in Figure 3, with the distances between the terms $\eta_i$ and $\phi_i$ also shown. The homoscedasticity assumption from the ranker is illustrated by dashed lines of the same color; for instance, the distances between adjacent purple dots are the same.
We can establish an iterative relationship between successive gaps, from which the total gap reduction TGR(N+1) accumulated up to Stage-(N+1) GoGAN follows. Since TGR(·) is an increasing function, TGR(N+1) > TGR(N), and the accumulated reduction is bounded below by β/2. Therefore, GoGAN with ranking loss (3) trained at its equilibrium will reduce the gap between the real data distribution and the generated data distribution at least by half for a Wasserstein GAN trained at its optimality.

Proposition. The total gap reduction up to Stage-(N+1) GoGAN is equal to $\beta - \phi_N$.

Proof. Recall the iterative relation from (18). Combining (20) and (21), and summing up all the left-hand and right-hand sides (noting the changes in the lower and upper bounds of the summation), the intermediate terms cancel. Therefore, the total gap reduction up to Stage-(N+1) GoGAN is equal to $\beta - \phi_N$.
Evaluating GANs via Image Completion Tasks
There has not been a universal metric to quantitatively evaluate GAN performance, and oftentimes we rely on visual examination. This is largely because of the lack of an objective function: what are the generated images to be compared against, since there are no corresponding ground-truth images for the generated ones? These are the questions that need to be addressed.
During the WGAN training, we have seen a successful gap indicator that correlates well with image quality. However, it is highly dependent on the particular WGAN model it is based on, and it would be hard to fairly evaluate generated image quality across different WGAN models. We need a metric that is standalone and does not depend on the models.
Perhaps the Inception score [29] is by far the best solution we have. The score is based on a pretrained Inception model. Generated images are passed through the model, and those containing meaningful objects will have a conditional label distribution p(y|x) with low entropy. At the same time, the marginal $\int p(y|x = G(z))\,dz$ should have high entropy because we expect the GAN to generate varied images. However, we argue that the Inception score will be biased towards the objects seen during the Inception model training, and that it measures more of the "objectness" in the generated images rather than the "realisticity" the GAN is intended to strive towards.
In this work, we propose a new way to evaluate GAN performance. It is simple and intuitive: we ask the GANs to carry out image completion tasks [36], and the GAN performance is measured by the fidelity (PSNR, SSIM) of the completed image against its ground truth. There are several advantages: (1) this quality measure works at the image level, rather than at the level of the image distribution; (2) the optimization in the image completion procedure utilizes both the generator and the discriminator of the trained GAN, which is a direct indicator of how good the GAN model is; (3) having a 1-vs-1 comparison between the ground truth and the completed image allows very straightforward visual examination of the GAN quality, and also allows head-to-head comparison between various GANs; (4) this is a direct measure of the "realisticity" of the generated image, and also of its diversity. If mode collapse happens, the generated images will be very different from the ground-truth images, since the latter are diverse.
Details on the Image Completion Tasks
As discussed above, we propose to use image completion tasks as a quality measure for various GAN models. In short, the quality of the GAN models can be quantitatively measured by the image completion fidelity, in terms of PSNR and SSIM. The motivation is that the image completion tasks require both the discriminator D and the generator G to work well in order to reach high quality image completion results, as we will see next.
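A sketch of the proposed fidelity measure using scikit-image (assuming a recent version that supports `channel_axis`, and images as float arrays in [0, 1]) could look like this; the function name is ours.

```python
# Illustrative GAN-quality measure: completion fidelity via PSNR and SSIM.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def completion_fidelity(ground_truth, completed):
    psnrs, ssims = [], []
    for gt, comp in zip(ground_truth, completed):
        psnrs.append(peak_signal_noise_ratio(gt, comp, data_range=1.0))
        ssims.append(structural_similarity(gt, comp, channel_axis=-1,
                                           data_range=1.0))
    return float(np.mean(psnrs)), float(np.mean(ssims))
```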
To take on the missing data challenge such as the image completion tasks, we need to utilize both the G and D networks from the GoGAN (and its benchmark WGAN), pre-trained with uncorrupted data. After training, G is able to embed the images from p data onto some non-linear manifold of z. An image that is not from p data (e.g. images with missing pixels) should not lie on the learned manifold. Therefore, we seek to recover the "closest" image on the manifold to the corrupted image as the proper image completion. Let us denote the corrupted image as y. To quantify the "closest" mapping from y to the reconstruction, we define a function consisting of contextual loss and perceptual loss, following the work of Yeh et al. [36].
The contextual loss is used to measure the fidelity between the reconstructed image portion and the uncorrupted image portion, and is defined as:

$$\mathcal{L}_{\text{contextual}}(z) = \|\mathbf{M} \odot G(z) - \mathbf{M} \odot y\|_1$$

where M is the binary mask of the uncorrupted region and $\odot$ denotes the Hadamard product operation. The perceptual loss encourages the reconstructed image to be similar to the samples drawn from the training set (true distribution $p_{\text{data}}$). This is achieved by updating z to fool D, or equivalently to have a small gap between D(x) and D(G(z)), where x is sampled from the real data distribution. As a result, D will predict G(z) to be from the real data with a high probability. The same loss for fooling D as in WGAN and the proposed GoGAN is used:

$$\mathcal{L}_{\text{perceptual}}(z) = -D(G(z))$$

The corrupted image with missing pixels can now be mapped to the closest z in the latent representation space with the defined perceptual and contextual losses. z is updated using back-propagation with the total loss:

$$\hat{z} = \arg\min_{z} \; \mathcal{L}_{\text{contextual}}(z) + \lambda\,\mathcal{L}_{\text{perceptual}}(z)$$

where λ (set to λ = 0.1 in our experiments) is a weighting parameter. After finding the optimal solution $\hat{z}$, the image completion $y_{\text{completed}}$ can be obtained by:

$$y_{\text{completed}} = \mathbf{M} \odot y + (1 - \mathbf{M}) \odot G(\hat{z})$$
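A compact sketch of this completion procedure is given below; the latent dimension of 100 and the use of Adam for the z-update are our assumptions (the text specifies back-propagation on z with λ = 0.1 but does not pin down these details).

```python
# Illustrative completion: optimize z under contextual + perceptual losses.
import torch

def complete(G, D, y, M, lam=0.1, steps=1000, lr=0.01):
    # y: corrupted image, M: binary mask of the uncorrupted region.
    z = torch.randn(1, 100, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        gz = G(z)
        contextual = torch.norm(M * gz - M * y, p=1)
        perceptual = -D(gz).mean()            # fool D, as in the WGAN loss
        loss = contextual + lam * perceptual
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        return M * y + (1 - M) * G(z)         # blend: keep the known pixels
```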
Methods to be Evaluated and Dataset
The WGAN baseline uses the Wasserstein discriminator loss [3]. The MGAN uses the margin-based discriminator loss discussed in Section 3.3; it is exactly the Stage-1 GoGAN, which serves as a baseline for subsequent GoGAN stages. Stage-2 GoGAN incorporates the margin-based ranking loss discussed in Section 3.3. These three methods will be evaluated on three large-scale visual datasets.
The CelebA dataset [20] is a large-scale face attributes dataset with more than 200K celebrity images. The images in this dataset cover large pose variations and background clutter. The dataset includes 10,177 subjects and 202,599 face images. We pre-process and align the face images using dlib as provided by OpenFace [1]. The LSUN Bedroom dataset [37] is meant for large-scale scene understanding. We use the bedroom portion of the dataset, with 3,033,042 images. CIFAR-10 [18] is an image classification dataset containing a total of 60K 32 × 32 color images across the following 10 classes: airplanes, automobiles, birds, cats, deer, dogs, frogs, horses, ships, and trucks. The processed image size is 64 × 64, and the training-testing split is 90-10.
Training Details of GoGAN
For all the experiments presented in this paper we use the same generator architecture and parameters. We use the DCGAN [28] architecture for both the generator and the discriminator at all stages of the training. Both the generator and the discriminator are learned using optimizers (RMSprop [31]) that are not based on momentum, as recommended in [3], with a learning rate of 5e-5. For learning the model at Stage-2, we initialize it with the model learned from Stage-1. In the second stage, the model is updated with the ranking loss while the model from Stage-1 is held fixed. Lastly, no data augmentation was used for any of our experiments. Different GoGAN stages are trained with the same total number of epochs for fair comparison. We will make our implementation publicly available, so readers can refer to it for more detailed hyper-parameters, scheduling, etc.
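The stated optimizer setup amounts to the following two lines; the linear modules are placeholders standing in for the DCGAN generator and discriminator.

```python
# Momentum-free RMSprop at lr = 5e-5 for both networks, as stated above.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(100, 64 * 64 * 3))   # placeholder for the DCGAN generator
D = nn.Sequential(nn.Linear(64 * 64 * 3, 1))     # placeholder for the DCGAN discriminator

opt_G = torch.optim.RMSprop(G.parameters(), lr=5e-5)
opt_D = torch.optim.RMSprop(D.parameters(), lr=5e-5)
```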
Results and Discussion
The GoGAN framework is designed to sequentially train generative models and reduce the gap between the true data distribution and the learned generative model.
Figure 4: Ranking scores for Stage-1 and Stage-2 of GoGAN. In the second stage, the ranking loss helps ensure that the Stage-2 generator is stronger than the generator at Stage-1. This is clearly noticeable in the gap between the Stage-1 and Stage-2 generators.
Figure 4 demonstrates this effect of our proposed approach, where the gap between the discriminator scores of the true distribution and the generated distribution decreases from Stage-1 to Stage-2. To quantitatively evaluate the efficacy of our approach, we consider the task of image completion, i.e., missing data imputation through the generative model. This task is evaluated on three different visual datasets by varying the amount of missing data. We consider five different levels of occlusion, occluding the center square region (9%, 25%, 49%, 64%, and 81%) of the image. The image completion task is evaluated by measuring the fidelity between the generated images and the ground-truth images through two metrics: PSNR and SSIM. The results are consolidated in Table 2.
Ablation Studies
In this section, we provide additional experiments and ablation studies on the proposed GoGAN method, and show its improvement over WGAN. For this set of experiments we collected a single-sample dataset containing 50K frontal face images from 50K individuals, which we call the 50K-SSFF dataset. They are sourced from several frontal face datasets, including the FRGC v2.0 dataset [27], the MPIE dataset [12], the ND-Twin dataset [26], and a mugshot dataset from the Pinellas County Sheriff's Office (PCSO). The training and testing split is 9-1, which means we train on 45K images and test on the remaining 5K. This dataset is single-sample, which means there is only one image of a particular subject throughout the entire dataset. Images are aligned using two anchor points on the eyes, and cropped to 64 × 64.
One-shot Learning: Different from commonly used celebrity face datasets such as CelebA [20], our collected 50K-SSFF dataset is dedicated to one-shot learning in the GAN context due to its single-sample nature. We explore how the proposed GoGAN method performs under the one-shot learning setting. The majority of the single-sample face images in this dataset are PCSO mugshots; therefore, we draw a black bar on the original and generated images (see Figures 9, 10, 11) for the sake of privacy protection, and this is not an artifact of the GAN methods studied.
Training: The GAN models were trained for 1000 epochs each, which corresponds to about 135,000 iterations of generator updates for a batch size of 64 images. We used the same DCGAN architecture as in the rest of the experiments in the ablation studies.
Margin of Separation:
Here we study the impact of the choice of margin in the hinge loss. Figure 5 compares the margin of separation as WGAN and Stage-1 GoGAN are trained to optimality. Figure 6 compares the generators through the image completion task with 49% occlusion. Figure 7 compares the generators through the image completion task with 25% occlusion.
Image Completion with Iterations: Here we show the quality of the image generator as training proceeds by evaluating the trained models on the image completion task. Figure 8 compares the generators through the image completion task with 25% occlusion.
Qualitative Results: We first show some example real and generated images (64×64) in Figure 9. The real images shown in this picture are used for the image completion task. Figure 10 shows qualitative image completion results with 25% occlusion. Figure 11 shows qualitative image completion results with 49% occlusion. Quantitative Results: We compare the quality of the image generators of WGAN, Stage-1 GoGAN and Stage-2 GoGAN through the image completion task. We measure the fidelity of the image completions via PSNR and SSIM. Table 5 shows results for our test set consisting of 5000 test faces, averaged over 10 runs, with 25% and 49% occlusions respectively.
Conclusions
In order to improve on the WGAN, we first generalize its discriminator loss to a margin-based one, which leads to a better discriminator, and in turn a better generator, and then carry out a progressive training paradigm involving multiple GANs to contribute to the maximum margin ranking loss so that the GAN at later stages will improve upon early stages. We have shown theoretically that the proposed GoGAN can reduce the gap between the true data distribution and the generated data distribution by at least half in an optimally trained WGAN. We have also proposed a new way of measuring GAN quality which is based on image completion tasks. We have evaluated our method on four visual datasets: CelebA, LSUN Bedroom, CIFAR-10, and 50K-SSFF, and have seen both visual and quantitative improvement over the baseline WGAN. Future work may include extending GoGAN to other GAN variants and studying how other divergence-based loss functions can benefit from the ranking loss and progressive training. | 2017-04-17T04:42:56.000Z | 2017-04-17T00:00:00.000 | {
"year": 2017,
"sha1": "00d0ad219577c70a3d6295e8839841b2f1898e29",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "00d0ad219577c70a3d6295e8839841b2f1898e29",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
259034287 | pes2o/s2orc | v3-fos-license | Research of Dynamic Characteristics of Bearing Reducers of the TwinSpin Class in the Start-Up Phase and in the Initial Operating Hours
This paper describes the results of research in the field of monitoring dynamic characteristics of the TwinSpin cycloid bearing reducer made by SPINEA. The research in this area resulted from the requirement to monitor the condition of high-precision bearing reducers and to assess the non-linear behavior of the kinematic structures of the monitored bearing reducers, specifically to determine the optimal start-up time of a particular bearing reducer class and to plan its maintenance. The condition of contact surfaces in the operational process was identified, and the lubricant applied to the monitored class of reducers was assessed. Analyses of the applied lubricant were carried out in order to identify and monitor changes in the technical condition of the bearing reducer. The aim of the measurements was to verify the start-up time of the bearing reducer, to assess the correlation between the total content of iron particles in the lubricant and an increase in kinematic backlash in the engagement of cycloidal wheels and reducer bearings after start-up and after about 1000 h of operation, and to propose recommendations to the operator for the implementation of short-term dynamic vibration, noise, and temperature tests before the reducer itself is put into operation.
Introduction
The SPINEA TwinSpin bearing reducer belongs to the hi-tech products category and represents a unique solution, combining a radial-axial bearing and a high-precision gearbox in a single unit. The gearbox and bearing are designed and integrated to support each other, and they function as a single unit. The gearbox ensures accurate transmission of movement or torque between the two shafts. In this case, it is designed with high accuracy, which means that it is able to minimize deviations and inaccuracies in the transmission of motion. The structural interconnection of these two components into one compact unit has several advantages. The first advantage is the saving of space, since there is no need to have separate gearboxes and bearings. Another advantage is greater precision and stability, as the bearing itself can help minimize vibration and inaccuracies in the gearbox. The advantage of such a solution is high transmission efficiency and positioning accuracy, while its dimensions and weight are small due to extremely high transmission capacity. Since its launch, through continuous development, this reducer has gone through several improvements and, in addition to the basic T series, there are also E, H, G, and M series available (Table 1). These bearing reducers are used in combination with servomotors and are applied where high kinematic accuracy, high rigidity, backlash-free operation, and high torque capacity are required [1].
TwinSpin reducers utilize the operating principle of the cycloid gearbox. The transmission and drive mechanism of TwinSpin reducers rely on a high-speed input component, which can be a full or hollow input shaft. The input shaft incorporates two eccentrically mounted sections that drive the trochoid wheels, shifted by 180° with respect to each other.

Thus, a cycloidal bearing reducer represents a special type of gearbox using a cycloidal gear system to transfer energy between two shafts. For such a transmission to be reliable, in addition to other factors, the start-up time may have a significant impact on the operating parameters of the cycloid gearbox, such as its efficiency, noise, durability, and overall performance. If the start-up stage is too short, the cycloid gearbox may be susceptible to subsequent wear and tear, and its service life may be reduced. In general, the optimal start-up length of a cycloid gearbox depends on several factors and must be selected based on the specific requirements of its application.
The optimal run-in time of bearing reducers varies depending on the design and operating conditions of the transmission mechanism. The aim of our research was to verify the run-in time of the bearing reducer recommended by the manufacturer and to assess the relationship between the total content of iron particles in the lubricant and the increase in kinematic backlash upon engagement of the cycloidal wheels and bearings of the TwinSpin TS 050-63 class reducer. The measurements were carried out on 10 samples after the run-in time and after 1000 h in operation. We analyzed the non-linear behavior of this bearing reducer using technical troubleshooting methods. At the same time, we identified the state of abrasive particles in the lubricant during the recommended run-in time and after 1000 h in operation. The applied measurements confirmed that significant wear of the contact surfaces occurs during the run-in phase. At the same time, we identified factors that affect not only the recommended run-in time but also the operational reliability and service life of the TwinSpin bearing reducer of the TS 050 class.
Overview of the Papers Published in the Field of the Issue Addressed
The problems of the research and design of the mathematical transmission model and the definition of the precision of the rotational node for a robotic system using a cycloidal transmission are dealt with in the scientific paper [5] by Strutynskyi and Semenchuk. The kinematics of the rotary unit, including the bearing unit and the cycloid transmission, were examined. One of the remarkable aspects of the operation of rolling bearings is the presence of clearances, which remain within the range of several tens of microns. These clearances effectively prevent potential jamming. Working clearances negatively affect the precision of the manipulator. The research conducted indicates that while the rigidity of the manipulator's links affects accuracy, it is not a determining factor. The paper proposes a method for determining the basic geometric dimensions of the links, based on pre-selected optimal values of deformation. In their article "Procedure Selection Bearing Reducer TwinSpin for Robotic Arm" [6], the authors Semjon et al. deal with the issue of selecting a suitable bearing reducer for individual axes of an industrial robot. The choice is largely dependent on the bearing used. The reason for this is that a significant component of the bearing reducer system is the radial-axial bearing, which plays a crucial role in determining the overall functionality and accuracy of the bearing reducer. Achieving smooth, long-term operation of the bearing reducer and the industrial robot can only be accomplished through an active assessment of their overall dynamics, taking into account the maximum speed and load.
In their respective article [7], Lopez Garcia et al. justify the high applicability of cycloid transmissions in robotic technology. Cycloidal drives face natural obstacles when it comes to handling high input speeds, primarily due to the influence of the relatively heavy and dimensionally large planetary (cam) wheel. This affects inertia and causes significant imbalance. To address this limitation, it is common to use two planet wheels arranged in series, shifted 180 degrees from each other. This configuration helps counterbalance the imbalances, minimize vibrations and facilitate higher input speeds. By incorporating pre-gearing stages consisting of conventional planetary gear train (PGT) stages, cycloid drives have successfully gained widespread acceptance in robotics. This combination has enabled cycloid drives to achieve their current level of popularity in the field.
In their article [8], Z. Pawelski, Z. Zdziennicki, et al. presented the findings of their tests on a prototype of a cycloid gear. The aim of the tests was to determine key parameters such as the fluctuation of torque on the input and output shaft, housing vibrations, and efficiency. By conducting FFT analysis on the recorded parameters, they observed a strong correlation between the various measured signals and the designated frequencies. The anticipated benefits of the cycloid gear were indeed validated by the results.
In their article [9], R. Zareba, T. Mazur, et al. assert that a comparison of different drives revealed the superior efficiency of two-stage cycloidal drives, which achieved a remarkable efficiency of 92.7%. This type of gearbox offers high precision, high torque, efficiency, overload resistance, and a compact design. However, there are also technical solutions aimed at improving these transmissions, such as the use of non-circular gears or the application of cylindrically profiled teeth. In their article titled "A New Design of a Two-Stage Cycloidal Speed Reducer," the authors Blagojevic et al. introduce a novel concept for a two-stage cycloidal speed reducer. The newly designed two-stage cycloidal speed reducer described in this document utilizes only one cycloidal disc per stage, resulting in a more compact design. This is a significant difference compared to the traditional solution. The outcome is good load distribution and high dynamic balance [10].
In the paper titled "The Effect on Dynamics of Using Various Transmission Designs for Two-Stage Cycloidal Speed Reducers," authored by Hsieh and Jian, the authors present a study on the dynamic properties of four different types of two-stage speed reducers. The article presents four new structural configurations for a two-stage speed reducer and develops a system dynamics analysis model to examine changes in motion and stress in the main components. The research findings shed light on the differences, advantages, and disadvantages of the developed four types of transmissions, particularly in terms of dynamic load imbalance and potential stress depending on the individual design solutions [11].
In their article titled "Design and Analysis of a Three-Stage Cycloidal Planetary Drive for High Gear Ratio," Tsai et al. present a new comprehensive design of a three-stage differential cycloidal planetary drive. The aim of the design solution was to achieve a high gear ratio. The authors describe a conducted study on the load and structural characteristics of such a solution [12].
The article titled "High-Precision Gearboxes for Industrial Robots Driving the 4th Industrial Revolution: State of the Art, Analysis, Design, Performance Evaluation, and Perspectives" by Pham and Ahn provides a comprehensive overview of high-precision gearboxes and their potential application in industrial robot construction. The authors analyze various designs from a technical standpoint and assess performance factors such as transmission error, hysteresis, and efficiency. Lastly, the authors evaluate the possibilities of applying such gearboxes in other systems, such as robots [13].
The scientific article titled "Dynamic Behavior of a Two-Stage Cycloidal Speed Reducer with a New Design Concept" by Blagojević et al. addresses a new concept of a two-stage cycloidal speed reducer. Within the presented study, the dynamic behavior of such a solution is examined. The system of differential equations of motion for the first and second stages of the newly designed two-stage cycloidal speed reducer was solved using Matlab-Simulink R2018b software [14].
The authors Yang et al. state in their article "Reliability-Based Design Optimization for RV Reducer with Experimental Constraint" that design optimization for RV reducers is emerging as a pressing issue in the industry. Presently, existing research predominantly concentrates on deterministic design optimization, neglecting uncertainties and potentially resulting in unreliable designs. Hence, this study addresses the implementation of reliability-based design optimization for RV reducers. The objective is to reduce the size of the RV reducer while simultaneously enhancing its reliability [15].
According to Bednarczyk's article titled "Analysis of the Cycloidal Reducer Output Mechanism Considering Machining Variations," the findings indicate that the distribution of backlash, forces, and contact pressures is heavily influenced by the tolerances of the bushings' arrangement and the holes in the planet wheel. The research in this field has contributed to the development of knowledge in the area of selecting various machining variations with regard to the resulting cutting forces [16].
The article titled "Analysis of Dimensions and Efficiency of Two-Stage Cycloid Speed Reducers" by Matejic et al. presents a comparative study between a traditional two-stage cycloid speed reducer and a new conceptual design. The analysis includes a comparison of various dimensions, such as volume, height, width, and total length. The verification of design proposals was conducted using the Kudrijavcev method [17].
The article titled "A New Three-Stage Gearbox Concept for High Reduction Ratios: Use of a Nested-Cycloidal Architecture to Increase the Power Density" by Maccioni et al. presents a novel gearbox architecture. The new design solution is based on a combination of hypocycloidal and three-stage cycloidal gearing. It involves a so-called nested architecture, where the construction includes stages with gears internally and externally arranged in a concentric manner [18].
Impact of the Run-In Process on the Operational Reliability of the Bearing Reducer
In the field of machine tools and automation, reducing mechanisms such as the high-precision bearing reducers of the drive units under examination are, in terms of their technological sophistication, cutting-edge devices. The run-in period of such a mechanism is of crucial importance. The development of new materials and lubricants, smaller manufacturing tolerances, and new finishing technologies have reduced the length of the run-in period. Nevertheless, it is irresponsible to apply the full load to the reduction mechanism right from the start of its technical life. The mandatory trial run cannot be considered a full-fledged run-in. The trial run is mainly used to verify the functionality of the reduction mechanism.
The application of new engineering technologies makes it possible to achieve the required precision of the contact surfaces. Nevertheless, each surface of a component material has its own specific properties. Every part of the bearing reducer that comes into contact with another surface must adapt its position to the opposite part. When functional surfaces come into contact with each other, friction occurs. Friction increases wear on functional surfaces. Each reduction mechanism includes rolling elements and bearings that reduce friction by allowing both surfaces to roll off each other, reducing the friction of the contact surfaces. It is now considered standard for the components intended for high-precision reducers to be deburred, demagnetized, and washed.
The run-in process for bearing reducers includes the following:
• Lubrication: Reducers usually have lubrication systems to ensure a smooth and reliable run-in. The lubricant is applied to the reducer to minimize the friction and wear of the bearings while ensuring cooling.
• Alignment: Reducer bearings should be properly aligned to minimize unwanted load and increase bearing life. Correct alignment ensures optimal distribution of forces and reduces deformations.
• Pre-load setting: Pre-loading is done during the bearing reducer installation. This ensures a certain degree of stress in the bearings, which minimizes the movement of the bearings under load and improves their accuracy.
• Start-up: When the preparation is complete, the reducer is started. When the bearing reducer moves, a circular motion is formed, where the operating load is transferred from the input shaft to the output shaft. The grease ensures smooth movement and prevents bearing wear.
• Monitoring and maintenance: After starting, it is important to monitor the operation of the bearing reducer and subject it to regular maintenance. This includes checking the quantity and quality of the lubricant, aligning the bearings, checking the shafts, and, if necessary, replacing them in the case of wear.
An improperly designed and implemented bearing reducer run-in process may cause several negative consequences and problems. Incorrectly executed run-in may lead to suboptimal distribution of forces and unwanted bearing loads. This may cause faster wear and shorten the life of the reducer bearings. Insufficient alignment, improper preloading, or inadequate lubrication may lead to bearing failure. Improper alignment and lubrication may result in increased friction between the bearings and the reducer. An incorrectly executed run-in process may also cause unwanted vibrations in the reducer. These vibrations may have a negative impact on performance, accuracy, and smoothness of operation.
If the run-in process is carried out incorrectly and the necessary procedures are not followed, failure of the bearing reducer operation may be the result. This failure may be caused by deformation of the bearing components, bearing damage, loss of accuracy, or other reducer-related problems. The result may be a malfunctioning reducer, interruption of operation, or even damage to other parts of the machine.
It is important to remember that the specific run-in process may depend on reducer type and design. Reducer manufacturers often provide specific instructions and recommendations for the correct setting, operation, and maintenance of their products that are appropriate to follow.
Dynamic Properties of Bearing Reducers of the TwinSpin Class
The disadvantage of the classic cycloid transmission lies in the ripple of the output torque at constant speed. The rotation of two wheels is described as rolling off of each other, with the axis of one of the wheels shifted by the value of eccentricity, which causes the resulting ripple of the cycloid gearbox torque. In reality, though, the interactions in the gearbox are much more complex. Under the effect of the eccentric shaft, the SPINEA gearbox's trochoid wheels shift by 180° with respect to each other. As a result of the precise trochoid gear profile, almost 50% of the cogs of one wheel are engaged at the same time, which significantly flattens this unwanted ripple. In addition, the gearbox includes axial-radial bearings that, with the very precise trochoid gearing of the gearbox itself, predestine the TwinSpin gearbox from SPINEA to be used in precision-intensive applications, applications with high torque capacity, low dead run, and high rigidity of the compact design [19]. Technical parameters of the bearing reducer of the TwinSpin class are listed in Table 1.
In order to meet the requirement of increasing the reliability of the device, information must be available about the actual state of the object examined (BR). This can also eliminate hidden, less serious defects, which can later develop into a malfunction. Based on this information, a timely intervention can be made and a malfunction prevented at the most appropriate time, which can significantly affect the entire production process.
If the input shaft and the case are fixed and a torque is applied to the output flange, the load diagram has the shape of a hysteresis curve (Figure 2). Lost motion (LM) is the torsion angle of the output flange at ±3% of the nominal torque, measured on the centerline of the hysteresis curve (Figure 2). Torsional stiffness (k_t) is defined as the ratio of the change in torque to the corresponding change in the torsion angle of the output flange, k_t = ΔT/Δφ. The torsional stiffness values are statistical values for the particular reduction ratio. The hysteresis characteristic of the TwinSpin TS 140-139-TB with a lost motion under 0.5 [arcmin] is illustrated in Figure 3.
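Given a measured hysteresis loop, both LM and k_t can be read off numerically. The sketch below is an illustration under the definitions above; the array names and units are our assumptions, not the manufacturer's evaluation software:

```python
import numpy as np

def lost_motion_and_stiffness(torque: np.ndarray, angle_up: np.ndarray,
                              angle_down: np.ndarray, t_nominal: float):
    """Estimate LM [arcmin] and k_t [Nm/arcmin] from a hysteresis loop.

    torque              : common torque grid, strictly increasing [Nm]
    angle_up/angle_down : output-flange twist on the loading and
                          unloading branches of the loop [arcmin]
    t_nominal           : nominal torque T_R [Nm]
    """
    center = 0.5 * (angle_up + angle_down)  # centerline of the hysteresis curve
    # twist read on the centerline at -3 % and +3 % of the nominal torque
    phi = np.interp([-0.03 * t_nominal, 0.03 * t_nominal], torque, center)
    lm = abs(phi[1] - phi[0])
    # stiffness as the slope dT/dphi of the centerline (linear fit)
    k_t = np.polyfit(center, torque, 1)[0]
    return lm, k_t
```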
In their publications [19,20], the authors discuss the dynamic properties of the TwinSpin gearbox, which is a linear three-mass system with a flexible coupling. In terms of position control, it is necessary to know or identify the nonlinearities that affect the resulting positioning accuracy. As a result of imperfections in the production of the trochoid wheels and the gearbox support body, the so-called angular transmission error of the gearbox occurs, shown in Figure 4. This phenomenon results in a positioning error, which can be understood as the volatility of the already mentioned theoretical value i_th of the transmission ratio. Depending on the position of the output shaft and the angular transmission error value, the following relations [19][20][21] apply to the position and torque, respectively. The exact gear ratio i(φ_out) in the defined position of the rotated output flange cannot be accurately described by a simple equation (φ_in, φ_out are the turning angles of the wheels). Since the angular transmission error is periodic over at least one revolution of the output flange, this error can be characterized by the Fourier series

φ_out(t) = φ_in(t)/i_th + Σ_k A_k sin(2π k f t + b_k),

where f is the frequency of rotation, i_th is the theoretical value of the transmission ratio, A_k represents the amplitude, and b_k is the phase deviation of the k-th harmonic.

The angular transmission error (ATEM) is an important characteristic of rotary modules and refers to the deviation between the actual output rotation angle and the theoretical output rotation angle. ATEM is typically measured in degrees or arcminutes, and it is a critical factor for ensuring the accuracy, uniformity, and repeatability of the output position of the rotary module. ATEM is influenced by several factors, such as the design of the rotary module, the coupling mechanism used to connect the input and output shafts, the quality of the bearings, and the machining tolerances of the various components. In general, a low ATEM value is desirable, as it indicates that the rotary module can deliver more accurate and precise output positions. Therefore, it is important to carefully consider the ATEM specifications of rotary modules when selecting them for applications that require high precision and accuracy. Understanding the factors that influence ATEM can help in the design and manufacturing process of rotary modules to achieve better performance and minimize errors. The angular transmission error of the high-precision TwinSpin reducer is less than 1 arcmin.
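Once the amplitudes and phase deviations are known, the Fourier-series model of the angular transmission error can be evaluated directly. A minimal sketch with illustrative (not measured) coefficients:

```python
import numpy as np

def angular_transmission_error(t: np.ndarray, f: float,
                               amplitudes, phases) -> np.ndarray:
    """Evaluate the Fourier-series ATE model at time samples t [s].

    f          : rotation frequency of the output flange [Hz]
    amplitudes : A_k [arcsec], phases : b_k [rad] -- illustrative values
    """
    t = np.asarray(t, dtype=float)
    ate = np.zeros_like(t)
    for k, (a_k, b_k) in enumerate(zip(amplitudes, phases), start=1):
        ate += a_k * np.sin(2.0 * np.pi * k * f * t + b_k)
    return ate

# The peak-to-peak deflection evaluated in the CW/CCW runs described
# below is then simply ate.max() - ate.min().
```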
Figure 4 shows an example of an angular transmission error measured on a particular high-precision reducer TS 140-139-TB. The impact of the load on the accuracy of angular transmission is relatively low.
The measurement methodology consists of reaching a position of 720° at a rotation speed of the reducer's output flange of 6°/s. Subsequently, the reducer returns to the 0° position at the same speed of the output flange. This ensures a 360° rotation of all rotating components. The ATEM is measured in the load-free state, with the input speed corresponding to the output speed multiplied by the gear ratio. The deviation of the output flange of the reducer is evaluated over 360° in units of [arcsec].
The measurement evaluates the amplitude of error deflection as the fluctuation of the "Peak to Peak" deflection in relation to the angle of rotation of the BR output flange.
The measurement takes place in two directions:
• rotation of the output flange in one direction by 360°, marked CW, and
• rotation of the output flange in the opposite direction by 360°, marked CCW.
The investigated object, the reducer, was fastened in the MTS frame in a vertical position. The measurement was carried out in a load-free state.
The measurement of hysteresis lies in detecting the difference between the actual position of the output flange and the desired position, depending on the magnitude of the torque applied at the braked input side of the reducer. The hysteresis of the torsional rotation of the BR output flange as an angular difference is defined by the hysteresis loop in units [arcmin]. The hysteresis characteristics offer information on the static and dynamic profile of the examined gearbox. Torsional stiffness, backlash in the cogs, and information on positioning accuracy are among the most significant variables that can be obtained from measuring the hysteresis of a given gearbox. Hysteresis is a phenomenon that describes the lagging or delayed response of a system to a changing input or stimulus. In the context of gearboxes and positioning systems, hysteresis can cause positioning errors, reduce accuracy, and increase wear on the components of the system. The hysteresis curve is a graphical representation of the relationship between the input and output of a system, which can help with visualization of the magnitude and direction of the hysteresis effect ( Figure 3). It typically shows the output response of the system as a function of the input as the input is gradually increased and then decreased while keeping all other conditions constant. Reducing hysteresis is often a key goal in the design and optimization of gearboxes and positioning systems, as it can improve positioning accuracy, reduce wear, and increase overall performance. Techniques such as optimizing the design of the gearbox components, using high-quality materials and lubricants, and carefully controlling operating conditions can help to minimize hysteresis and improve the performance of the system [19].
In [20], the relationship between gearbox hysteresis and the torsional stiffness efficiency n_t is described as

n_t = 1 − E_h/E_c,

where E_h is the lost energy and E_c represents the energy of the undamped spring. The following applies to these energies:

E_h = ∫ (M_1 − M_2) dφ, E_c = ∫ M_c dφ,

where M_1 and M_2 are the torque values of the upper and lower branches of the hysteresis curve, respectively, and M_c is the torque corresponding to the loss-free elastic torque of the spring (see Figure 5).

Figure 5. Typical values of the hysteresis curve [19].

For research purposes, as part of the process of measuring static and dynamic parameters of bearings and high-precision bearing reducers (BR), a mechatronic troubleshooting system (MTS) has been designed. Drawing on the criteria proposed for the selection of actuators, a measuring device designated MTS 4.0 (Figure 6) has been designed. The troubleshooting device is the property of our partner, SPINEA Technologies, Ltd., Prešov. All measurements were carried out on the premises of SPINEA Technologies, Ltd.
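Since E_h and E_c are areas under the measured torque-angle branches, they can be approximated by trapezoidal integration. A sketch under the definitions above; the array names are assumptions:

```python
import numpy as np

def hysteresis_efficiency(phi: np.ndarray, m1: np.ndarray,
                          m2: np.ndarray, mc: np.ndarray) -> float:
    """Torsional-stiffness efficiency n_t = 1 - E_h / E_c.

    phi : twist-angle samples over the loaded range [rad]
    m1  : torque on the upper branch of the hysteresis curve [Nm]
    m2  : torque on the lower branch [Nm]
    mc  : loss-free elastic torque of the equivalent spring [Nm]
    """
    def trapz(y: np.ndarray, x: np.ndarray) -> float:
        # manual trapezoidal rule, portable across NumPy versions
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

    e_h = trapz(m1 - m2, phi)  # energy dissipated inside the loop
    e_c = trapz(mc, phi)       # energy stored by the undamped spring
    return 1.0 - e_h / e_c
```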
Tribological Properties and Lubrication of Bearing Reducer Mechanisms
Lubricant (oil, grease lubricant) in the reduction mechanism can be considered one of the most important components affecting trouble-free operation of the bearing reducer. The aim is to ensure perfect lubrication of friction points, or effective protection against friction and wear, as well as against the consequences of these phenomena (contact corrosion). In addition, lubricants must meet the following criteria:
• reduction of mechanical energy losses and improvement of the mechanical efficiency of the system,
• reduction or suppression of wear and its harmful effects on the tribological system, and
• improvement of heat dissipation and sufficient cooling.
The quality of the lubricant (grease and oil) applied to the bearing reducer affects its trouble-free operation and service life. The reliability of the reduction mechanism also depends on other technical conditions that are necessary for the proper functioning of the technical system. One such condition is cleanliness. Cleanliness is an essential condition that must be met in order for the reduction mechanism, as well as the entire mechatronic system, to work properly and reliably throughout its lifetime. The most common damage to transmission mechanisms occurs due to bearing damage and gear damage. The reason for this is that in such transmission mechanisms there is a high pressure load (Hertzian pressure), which leads to the formation of small fragments, i.e., particles, that are the cause of bearing damage. When the cogs are worn, so-called micropitting arises on the surface of the cog and causes a change in the cog profile, leading to greater cog noise and causing additional fatigue wear (macropitting, chipping, etc.). Micropitting may also occur in bearings. The influence of metal particles on oil degradation depends on the type of particles, e.g., Fe, Cu, etc. These particles can cause catalytic oxidation. In any reduction mechanism, abrasive particles are released, i.e., particles are generated. Such a system needs to be periodically monitored in order to maintain a certain equilibrium of the particles present so as not to cause fatal destruction of the reduction mechanism.
For measuring purposes, the COSMOS SDM-73 measuring device was used. The device allows easy troubleshooting to determine the wear of bearings, gears, etc. The aim of the measurement was to determine the Fe particle concentration after the run-in time and after 1000 h of operation in the load configuration with a mechanical weight. The method of particle detection and isolation by means of a magnetic analyzer allows the intensity of wear to be detected and informs on the friction and wear process. It relies on the fluid's drift force and the magnetic forces that, when applied to an inspected oil sample, ensure that larger iron particles settle immediately upon inlet onto a glass slide that is conveniently located in a strong magnetic field generated by permanent magnets. Due to their greater surface-to-volume ratio, particles of 1 to 2 µm flow more slowly and settle at the bottom of the glass slide, i.e., at the outflow of the analyzed sample from the glass slide. The concentration is expressed in ppm (parts per million), i.e., the number of particles per 1 million other particles, or 0.0001%, or 1 mg of the substance in 1 L of solution. For example, 25 ppm means 25 millionths, i.e., 0.000025, or 25 × 10⁻⁶, or 0.0025%, or 0.025‰.
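The unit conversions quoted above are simple enough to make explicit. A small helper, ours and purely for illustration:

```python
def ppm_in_other_units(ppm: float):
    """Express a concentration given in ppm in the equivalent units above."""
    fraction = ppm * 1e-6          # 25 ppm -> 0.000025, i.e., 25 x 10^-6
    percent = fraction * 100.0     # -> 0.0025 %
    permille = fraction * 1000.0   # -> 0.025 per mille
    mg_per_l = ppm                 # for dilute solutions, 1 ppm ~ 1 mg per 1 L
    return fraction, percent, permille, mg_per_l
```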
Assessment of the Current Technical Condition of Selected Bearing Reducer Samples by the Chosen Methods of Technical Troubleshooting
The research focused on the assessment and analysis of characteristic non-linear behavior of the kinematic structures of the selected high-precision bearing reducer. Our partner was interested in the analysis of the TS 050 class reducer's technical condition after its approx. 1000 h in operation (Figure 1). We identified the state of abrasive particles in the lubricant, analyzed the state of the contact surfaces in the start-up process, and diagnosed the state of the lubricant and subsequently the state of the contact surfaces in the operational process. The conducted lubricant analysis was primarily focused on determining the optimal start-up time of a given BR and identification of abrasive particles that pass through during the start-up in the first hours of operation. During the start-up, significant wear occurs due to the contact of the respective surfaces of the transmission components. This condition persists until the individual components of the transmission mechanism have settled into operation. The same measurements were also carried out during operation in order to identify significant changes in the BR's technical condition. Temperature changes were analyzed at the same time. Last, but not least, a correlation was sought between the total content of Fe (iron) particles in the oil and an increase in kinematic backlash in the engagement of cycloidal wheels and BR bearings.
Measurement of the TS 050 bearing reducer was carried out in a configuration with a mechanical weight load on the arm on the mechatronic troubleshooting system. Measurements of the examined object were carried out on 10 samples under 100% TR load and with a constant input rotation rate of 1000 rpm over a period of 1000 h. The measurement parameters are listed in Table 2. The so-called bump test was done repeatedly on the BR. The bump test is a type of frequency response test that is commonly used to identify the natural frequencies and damping ratios of a mechanical system, such as a gearbox or a structure. In the context of the BR, the bump test involves applying a sudden, short-duration impulse, or "bump," to the system and measuring the resulting response in the torsional and tilting directions. The BR was measured at a constant rotational frequency of 1000 rpm at the input, under a sinusoidal load (weight on the arm), in both directions of rotation (CW, CCW). A PCB 356B18 vibration sensor was used for the measurement. At the same time, the measurement of absolute vibrations was carried out during a smooth change of the input speed, in the ranges of 0 to 2000 rpm and 0 to 3000 rpm, with the aim of monitoring the amplification of the oscillation of the output arm. Subsequent to the 1000-h operation, measurements of the LM and H parameters, which could have been affected by the start-up mode, were taken again. To ensure the comparability of the individual results before and after the start-up and after 1000 h of operation, a mark was created on each BR across the position of the BR support body and the position of the output flange. In this simple way, a relevant assessment of the technical condition of the measured parameters was achieved. A graphed representation of the course of the LM and H measurements of the selected TS 050-63, s/n 1911, is shown in Figure 7.

The TS 050 troubleshooting has been extended by another technical parameter reflecting the kinematic accuracy of the BR, namely the ATEM parameter. It was evaluated between the position sensors located on the input and output sides of the BR, in the load-free state at the BR rotation speed of 6°/s, at a rotation of the BR output of 360° in both directions (CW, CCW). A summary of the ATEM values measured in the individual BR samples after a 48-h start-up and after 1000 h in operation is given in Table 3. The ATEM graphs for the TS 050 bearing reducer with the serial number 1901 are shown in Figures 8-11. Testing was done 5 days a week, 24 h a day.

The measurement of the concentration of solid Fe particles was carried out after the start-up time and after the 1000-h operation in the configuration featuring a load (a mechanical weight) in accordance with the conditions specified in Table 2. Castrol Tribol GR TT1 PD lubricant was applied to the BR. The analysis of the lubricant was done using the evaluation device Cosmos SDM-72.
The measured values are listed in Table 4. Table 5 includes a record of the measured values of the natural frequencies of the bearing reducer BR 050 after 1000 h in operation; a minimal sketch of how such natural frequencies can be estimated from a bump-test record follows the list below. In order to assess the levels of HF (high-frequency) vibration, measurements and analysis of selected troubleshooting methods were also carried out as follows:
• Acceleration, EnvAcc up to 10 kHz, for assessing the quality of the microgeometry of the contact surfaces (bearings, gearing)
• EnvAcc up to 20 kHz and HFD up to 40 kHz, for friction mode and quality assessment of the load-bearing capacity of the oil film.
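As a hedged illustration (the processing chain of the MTS 4.0 itself is not published; the signal and function names are assumptions), a dominant natural frequency such as those recorded in Table 5 could be read from a bump-test acceleration record as the peak of its amplitude spectrum:

```python
import numpy as np

def natural_frequency_from_bump(response: np.ndarray, fs: float) -> float:
    """Estimate the dominant natural frequency [Hz] of a bump-test record.

    response : acceleration signal captured after the impulse
    fs       : sampling rate [Hz]
    """
    windowed = response * np.hanning(response.size)   # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(response.size, d=1.0 / fs)
    return float(freqs[np.argmax(spectrum[1:]) + 1])  # skip the DC bin
```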
The operating condition of the TS 050 reducers (s/n 1902 to s/n 1910) can be assessed as good or satisfactory for input speeds of 2000 and 3000 rpm, respectively, based on the assessment of the measured high-frequency vibrations. The exception is the TS 050 s/n 1901 reducer, where the vibrations exceed the allowed limit specified in Alarm 1 (see Table 6). With respect to the overall measured vibration values, it was necessary to identify the recommended limits with respect to the requirements of the technical standard STN ISO 10816-3 (6). For different operating modes of the machine (considering speed, loading, and performance), it is possible to set the warning limits (alarms) also on the basis of general recommendations (a signal increase of 200% compared to the reference value).
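The 200% rule of thumb quoted above translates into a one-line threshold check. A sketch, ours for illustration; real limits should follow the STN ISO 10816-3 zone boundaries:

```python
def vibration_alarm(value_rms: float, reference_rms: float) -> str:
    """Flag an overall vibration reading against the 200 % rule of thumb.

    An increase of 200 % over the reference (i.e., three times the
    reference value) is treated here as the warning threshold.
    """
    return "alarm" if value_rms > 3.0 * reference_rms else "ok"
```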
Default alarm levels were set on this basis. In terms of the FFT spectrum with Acc up to 16 kHz, comparing the signal at a speed of 3000 rpm after 1000 h in operation, the TS 050-63 bearing reducer shows an improvement in high-frequency vibrations of about 30% in the frequency range above 7 kHz. However, the vibration value continues to exceed Alarm 1, especially during the load time of the power transmission (Figures 14 and 15). In the case of radial recording, the critical speed (natural torsional frequency) is identifiable around the input speed of 750 rpm. In the axial direction, the critical speed (natural tilting frequency) is at about 1240 rpm of the input speed.
The measurement of temperature changes on the surface of the BR support body was performed using an Alnico sensor. This technique is useful in monitoring the temperature of the BR support body, as temperature changes can affect the performance and reliability of the system. Excessive temperature can cause thermal expansion, which can lead to misalignments, distortions, or even failures in the system. By monitoring the temperature changes, engineers can identify any potential issues and take appropriate measures to prevent or mitigate them. The alarm value was set to 60 °C. The temperature was measured in hourly intervals. The temperature alarm value was not exceeded during the measurements.
The sum values of the BR surface temperatures over the first 48 h of measurements are shown in Figure 20.
Figure 20. Record of the course of temperature changes.
Results and Discussion
After the start-up mode with a load and the initial BR state had been determined, measurements of the start-up torque, hysteresis, lost motion, and temperatures; analysis of the presence of abrasive Fe particles; vibration measurement in the tilt and torsion directions; and measurement of temperature changes were carried out on the examined object, the TS 050 bearing reducer. Prior to the stress test itself, additional measurements were made to determine the angular transmission error.
The measurements were done on 10 TS 050-63 reducer samples. The examined object was burdened with a load of the rated output torque TR = 18 Nm during a 1000-h long operation. After the end of a given period of operation under a load, repeated measurements were made to determine the BR's natural frequencies, as were the H, LM, and ATEM parameter measurements, the test for the presence of abrasive particles, HF vibration measurement, FFT spectrum Acc up to 16 kHz, vibration up to 1 kHz, and critical speed measurement.
Based on the measurements and the subsequent analyses, it is possible to state the following:
• In terms of measurement of lost motion and hysteresis: The monitored parameters of post-operation hysteresis showed a slight increase in values. The LM parameter showed no significant change from the baseline value after 1000 h of operation.
• In terms of temperature measurement: The alarm value of 60 °C was not exceeded during the measurements.
• In terms of vibration measurement: A decrease in torsional stiffness was noted after 1000 h of operation. This was confirmed by measuring natural frequencies, which showed a decrease of about 2 Hz in the torsion direction. The reduced natural frequency has a positive effect on the overall vibration. No resonance is generated. The tested samples continued to be free of load peaks, with no sign of metallic contact. There was an overall drop in vibration for the test speeds of 2000 rpm and 3000 rpm, respectively. The HF vibration analysis confirmed the satisfactory technical condition of the TS 050 bearing reducer samples in nine out of ten cases.
HF vibrations were recorded in the sample with serial number 1901, identifying a condition of non-conformity, mainly in terms of friction assessment at an input speed of 3000 rpm. Long-term tests have shown that there is gradual damage to the output bearings and thus to the reducer itself. It is recommended to dismantle the sample in question and check its internal dimensions and surfaces.
The critical input speed was determined for the measured BR. In the axial direction, the value is about 1240 rpm, and in the radial direction it is about 750 rpm. When measuring vibrations up to 1 kHz, it is possible to see from the time record how the measured mechanical system reacts to the load when transmitting mechanical power with a load on the arm. The time record shows a different reaction at the time of lifting and lowering the arm with the load (alternation of lighter and darker areas under the curve).
• In terms of assessing the lubricant condition: After the initial phase of relative start-up and 1000 h of operation, the filling of Castrol Tribol GR TT1 PD continued to show a satisfactory state of abrasive Fe particle content. All sample values were below the recommended limit.
• In terms of ATEM measurement: The monitored ATEM parameters showed a slight increase in values after the initial start-up and 1000 h of operation.
Conclusions
The high-precision TwinSpin gearbox is a combination of a transmission mechanism and an output bearing. Inside, it contains both slow- and high-speed bearings along with cycloidal gearing. Typical for the TwinSpin gearbox is its fully balanced internal design: although it contains eccentrics, which act as a vibration generator, their careful placement eliminates their resulting impact during rotation. The transmission mechanism itself is dominated by rolling friction, which in its essence does not generate vibrations the way sliding friction does. Balance and low vibration are also supported by the single-stage transmission, which nevertheless achieves high gear ratios. Dimensional tolerances, geometric accuracy, and roughness are extremely low, below 1 µm. The cycloid gearbox transfers mechanical power from the servo drive to the output arm through individual components, i.e., contact surfaces (bearings, cogged gears, etc.). For reliable operation, it is extremely important that excessive wear (adhesive/abrasive Fe particles) does not occur on the contact surfaces during the entire life of the gearbox (manipulator/robot).
In order to meet the requirement of increased reliability, information must be available about the actual state of the object examined (BR). This can also eliminate hidden, less serious defects, which can later develop into a malfunction.
The present research was aimed at identifying the current BR state during its start-up phase and after about 1000 h of operation. The presence of abrasive particles in the lubricant was identified, and, subsequently, the state of contact surfaces after the start-up and the operational phase was assessed. In practice, we are not often approached with the requirement to sample lubricants and check them at such intervals. However, since this is a relatively small BR design in which the lubricant content is low (a few milliliters; the quantity of lubricant collected was of the same order), it was very important to do the analysis. Lubricant analyses were primarily aimed at determining the optimal start-up time for the given BR. Abrasive Fe particles, which are introduced during the start-up in the first hours of operation, have been identified. The finding that significant wear occurs during the first hours of operation due to contact of the respective surfaces of the transmission components has been confirmed.
The start-up time of cycloid bearing reducers varies depending on their design, use, and operating conditions. In general, cycloid bearing reducers need a longer start-up time than conventional cogged gearboxes. An important factor is also the quality and type of the bearings used, which can affect the operational reliability and service life of the reducer. Based on the analysis, we can confirm the recommendation for the user of a bearing reducer of the TwinSpin class to run the reducer in for at least 48 h under at least a 50% load of the nominal torque and a maximum speed of 1000 rpm. In order to ensure a long service life and operational reliability, it is recommended to replace the entire oil filling after the first 100 h of operation after the start-up if the lubricant is technical oil. Based on recommendations from the reducer manufacturer, if the lubricant is grease, it is possible to replace the grease only by completely dismantling the BR. Grease replenishment should be carried out after 1000 h of operation.
The operating problems of bearing cycloid reducers can be caused by various factors that can interact with each other. Some of the most common operational problems are [22][23][24][25][26]:
• Wear and tear of the cogs: A cycloid gear system is prone to wear and tear due to high pressure and friction between the gear cogs. Wear and tear may lead to impaired torque transmission accuracy and impaired torque efficiency.
• Incorrect mounting: Incorrect mounting may cause various problems, such as damage to the gear, disruption of the gear geometry, and deterioration of its efficiency.
• Excessive load: The cycloid bearing reducer is designed for a certain load, and exceeding this limit may cause damage to the gear cogs and lead to a gearbox failure.
• Insufficient lubrication: Insufficient or contaminated lubricant may cause gear wear and deterioration of transmission efficiency.
• Noise: A cycloid gear system can be significantly noisy in operation, which can be undesirable in specific applications.
• Vibration: A cycloid gear system may tend to vibrate. Vibration may negatively affect torque transmission accuracy and impair gearbox efficiency.
To prevent these problems and ensure operational reliability and more efficient operation of the cycloid bearing reducer, regular maintenance and inspection is of utmost importance.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to restrictions of the funding source.
Conflicts of Interest:
The authors declare no conflict of interest. | 2023-06-03T15:15:49.871Z | 2023-05-31T00:00:00.000 | {
"year": 2023,
"sha1": "31b2f0bd8483cd10ae0b30f76926835d2c2f5954",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/machines11060595",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "1c3dcf9e51bb8c2cb3b52ff5c10d4a1785f4abfd",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
244105462 | pes2o/s2orc | v3-fos-license | The accumulation of sugars and organic acids in blackberry fruit in the conditions of Central Russia
The taste characteristics of berries of 26 cultivars, elite and selected forms of blackberries grown in Central Russia are presented: dry matter, monosaccharides, sucrose, and the total amount of sugars. The sugar-acid index has been calculated. The best genotypes have been identified: according to the content of dry matter and sugars, the Brzezina and Agawam cultivars, elite seedlings LN-4, LN-13, and LN-14, and selected seedlings LN-5, LN-7, and LN-8; according to the content of organic (titrated) acids and the sugar-acid index, Brzezina, Black Satin, Loch Tau, seedlings of Black Satin, Cheyenne, and Loch Ness LN-6, LN-10, EV LN-13, and the control cultivar Agawam. A high conjugation of traits was established between the sugar-acid index and the content of monosaccharides, the total amount of sugars, and soluble solids, as well as a high inverse dependency between the content of organic acids and the sugar-acid index. Low, unreliable correlation coefficients (r = −0.28 … +0.13) were obtained, indicating the absence of links between fruit weight and the content of the biochemical components responsible for the taste characteristics of genotypes, which is a positive fact when creating new cultivars with high taste and a significant weight of fruits, indicating their independent inheritance.
Introduction
Currently, the Earth's population is exposed to negative environmental influences: polluted drinking water and air, poor sanitary conditions, etc. The fast-paced lifestyle of many people leads to unhealthy diets based on fast food and other products containing added fats and flavor enhancers. Nevertheless, for a significant part of the population of the Russian Federation, preserving health and life expectancy through a healthy lifestyle has come to the fore. This is largely supported by good nutrition based on the natural nutrients and biologically active substances contained in sufficient quantities in fruits and berries.
A crop that has attracted the interest of the Russian population in recent years is blackberry, which is popular and widely known throughout the world. Between 1995 and 2005, the global area of blackberry production increased by 44% [1]. The main producers of blackberries are the USA (over 3,500 ha) and, in Europe, Serbia (over 5,000 ha). Recently, Chile has become a supplier of blackberries for processing [2].
Growing blackberries in Central Russia is limited by the relatively low frost resistance of the above-ground part of the plants, which is overcome by new technologies for growing this crop based on winter covering [3,4,5]. This shortcoming is offset by the excellent specific taste of blackberries and their rich biochemical composition. At the same time, blackberry is a cost-effective crop: foreign experience in its cultivation shows rapidly growing production volumes, which bring good income due to the high yield and high selling price of the berries. Blackberries begin fruiting one year after planting, which allows the investment to be recouped quickly [6]. However, the expanding market for berry products is tightening the requirements for new cultivars, including the quality of fruits and their biochemical composition [7,8].
The purpose of this research was to study blackberry cultivars from the gene pool of the Russian Research Institute of Fruit Crop Breeding (VNIISPK) for the content of soluble dry matter, sugars and organic acids and for the value of the sugar-acid index, as indicators of the taste characteristics of various genotypes, and to select the best genotypes for use in breeding to improve fruit quality.
Materials and methods
The objects of the study were 26 blackberry cultivars and forms from the VNIISPK gene pool, including 15 selected and elite seedlings of the VNIISPK selection: selected seedlings of the Black Satin and Cheyenne cultivars from open pollination, and elite (EV) and selected (LN) seedlings of the Loch Ness cultivar. The content of soluble dry matter (Brix), sugars (mono-, disaccharides and their sum) and organic acids was determined in blackberries, and the sugar-acid index (SAI) was calculated. Sampling was carried out at the blackberry cultivar study site, and biochemical analyses were carried out in the laboratory of biochemical and technological assessment of cultivars and storage at the VNIISPK. Generally accepted methods were used in this research [14,15]: fruits were taken for analysis at the moment of their full maturity (average sample of 0.5 kg); soluble dry matter (SDM) (Brix, %) was determined by the refractometric method using a PAL-3 digital refractometer (ATAGO); sugars were determined by a method based on the ability of reducing sugars with a free carbonyl group to reduce copper oxide in an alkaline solution to the protoxide; organic acids (titratable acidity) were determined by titrating certain volumes of extract with 0.1 N sodium hydroxide solution in the presence of phenolphthalein indicator; the sugar-acid index (SAI, Ratio) was calculated as the ratio of the total sugar content to the organic acid content. Statistical processing of the research results was carried out using the analysis packages of Microsoft Excel and STATISTICA 6.0. The coefficients of variation (V%) and the error of the mean (m) were calculated from three-year data.
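For illustration, the SAI, coefficient of variation and error of the mean described above can be computed as follows; the input values are invented examples, not taken from Table 1.

```python
import statistics

# Hypothetical three-year measurements for one genotype (illustrative only).
total_sugars = [9.8, 10.2, 10.0]        # %, sum of mono- and disaccharides
titratable_acids = [0.85, 0.90, 0.88]   # %, as citric acid

mean_sugars = statistics.mean(total_sugars)
mean_acids = statistics.mean(titratable_acids)

# Sugar-acid index (SAI): ratio of total sugar content to organic acid content
sai = mean_sugars / mean_acids

# Coefficient of variation (V%) and error of the mean (m)
cv_percent = statistics.stdev(total_sugars) / mean_sugars * 100
error_of_mean = statistics.stdev(total_sugars) / len(total_sugars) ** 0.5

print(f"SAI = {sai:.1f}, V% = {cv_percent:.1f}, m = {error_of_mean:.2f}")
```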
Results and discussion
The main indicators that determine the taste of fruits are sugars and organic acids, as well as their ratio, the sugar-acid index (Ratio). According to literature data [16,17,18] based on studies of the chemical composition of the fruits of other crops, SDM (Brix) and sugars, which make up most of the SDM, are closely correlated. Brix is therefore an indirect indicator of the sugar content of fruits and berries, which, when working with large experimental material, allows high-sugar genotypes to be selected quickly and accurately.
Studies of the biochemical composition of blackberry fruits carried out over four years (2017-2020) are summarized and presented in Table 1. [Table 1 note: * LN: seedling of the Loch Ness variety.]
With the average SDM value in fruits across all studied cultivars equal to 12.3 ± 0.2%, the genotypes exceeding this level (SDM > 12.5%) were distinguished: Loch Tay, Seedl. LN-2, EV LN-4, Seedl. LN-5, Seedl. LN-7, Seedl. LN-8, EV LN-14 and the control cultivar Agawam. In the bulk of the cultivar samples (70%), the SDM content in the fruits was below the average value. The range of variation depending on the cultivar was from 10.4 to 14.8%, and the coefficient of variation was 9.6%, which indicates weak varietal variability of the studied trait.
The average content of total sugars in the fruits of the studied blackberries was 9.47%, with a range of variation from 7.49% (Seedl. LN-9) to 11.38% (Brzezina). Of the total amount of sugars, 94% were monosaccharides (glucose + fructose) and only 6% was sucrose. The cultivar variability of this indicator was average (V = 11.7%). Brzezina and Agawam, as well as elite seedlings LN-4, LN-13 and LN-14 and selected seedlings LN-5, LN-7 and LN-8, accumulated sugars in fruits above the average value (>10.00%). The genotypes distinguished by sugar content practically coincided with those previously distinguished by SDM content; this agrees with the above-mentioned literature sources, and the correlation coefficient we calculated between these two biochemical traits (r = +0.82***, Table 2) indicated a high, positive, reliable connection. We also found that the amount of monosaccharides is closely related to the sum of sugars and to SAI, with correlation coefficients of 0.97*** and 0.74***, respectively. [Table 2 fragment, correlation coefficients: sum of sugars: +0.68***, -0.32, +0.08, +0.97***; soluble dry matter: +0.29, -0.08, -0.007, +0.74***, +0.82***; fruit mass: -0.17, -0.13, -0.14, +0.13, -0.21, -0.28.] Organic (titratable) acids in the fruits of berry crops are mainly represented by citric acid; the titratable acidity of the studied blackberry cultivars was therefore recalculated as citric acid. Depending on the genotype, the content of organic acids varied from 0.61% (Agawam) to 1.99% (Texas), with a high coefficient of variation (V = 26.6%) and an average value of 0.97%. Organic acids affect the taste of fruits more than sugars. This is confirmed by the correlation coefficients (Table 2): there is a high inverse relationship between the content of organic acids in fruits and the sugar-acid index (r = -0.81***), and a high direct relationship between the sugar-acid index and both the total amount of sugars (r = +0.68***) and the content of monosaccharides (r = +0.66***). Moreover, comparing absolute values, the correlation coefficient between SAI and titratable acidity was the higher one. SAI varied significantly, from 4.3 (Texas) to 18.3 (EV LN-13).
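As a sketch, the correlation analysis with significance marking can be reproduced as follows; the generated data are synthetic and only meant to mimic the kind of inverse acid-SAI relationship reported above.

```python
import numpy as np
from scipy import stats

# Synthetic trait vectors across 26 genotypes (illustrative only).
rng = np.random.default_rng(0)
organic_acids = rng.uniform(0.6, 2.0, 26)            # %, titratable acidity
sai = 10 / organic_acids + rng.normal(0, 0.5, 26)    # induces an inverse link

# Pearson correlation with the star notation used in the tables
r, p = stats.pearsonr(organic_acids, sai)
stars = "***" if p < 0.001 else "**" if p < 0.01 else "*" if p < 0.05 else "ns"
print(f"r = {r:+.2f} {stars}")   # expect a strong negative coefficient
```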
Brzezina, Black Satin, Loch Tay, seedlings of Black Satin, Cheyenne and Loch Ness (LN-6, LN-10, EV LN-13) and the control cultivar Agawam were distinguished by low titratable acidity (<0.90%) of fruits and a high SAI value (>11.0). An average content of organic acids (0.90-1.10%) was found in Chester, Ouachita, Triple Crown, elite seedlings LN-4 and LN-14, selected seedlings LN-1, LN-3, LN-5, LN-7, LN-8, LN-9 and LN-12, and the control cultivar Thornfree. The third group included cultivars with relatively high titratable acidity (>1.10%) and a low SAI value (<7.5): Natchez and Texas. All studied cultivars have a pleasant fruit taste and can be used as dessert cultivars, but conditionally they can be divided as follows: the first group is for baby food, the second for dessert purposes for the adult population, and the third for processing.
The fruit mass of the studied genotypes does not affect the content of the studied biochemical components responsible for their taste characteristics, as evidenced by the low, unreliable correlation coefficients (r = -0.28 ... +0.13); this is a positive fact when creating new cultivars that combine high taste properties with significant fruit weight.
Conclusions
As a result of studying the biochemical components of blackberry fruits of the VNIISPK gene pool that are responsible for taste characteristics, the best genotypes were identified: by the content of SDM and sugars, Brzezina and Agawam, elite seedlings LN-4, LN-13 and LN-14, and selected seedlings LN-5, LN-7 and LN-8, with insignificant varietal variability; by the content of organic (titratable) acids and SAI, Brzezina, Black Satin, Loch Tay, seedlings of the Black Satin, Cheyenne and Loch Ness varieties (LN-6, LN-10, EV LN-13) and the control cultivar Agawam.
A high trait conjugation was established between the sugar-acid index and the content of monosaccharides, the total amount of sugars, and soluble dry matter, as well as a high inverse dependency between the content of organic acids and the sugar-acid index. Low, unreliable correlation coefficients (r = -0.28 ... +0.13) were identified, indicating the absence of links between fruit weight and the content of the biochemical components responsible for the taste characteristics of the genotypes; this is a positive fact when creating new cultivars with high taste and significant fruit mass, as it indicates their independent inheritance. | 2021-10-18T17:23:08.053Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "ae7ca855aec99ace6762067f04e7959bcb071911",
"oa_license": "CCBY",
"oa_url": "https://www.bio-conferences.org/articles/bioconf/pdf/2021/08/bioconf_fsraaba2021_02006.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "cbdbfec211f3ce12e5b60c581e11add76cf9f2f2",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": []
} |
208388846 | pes2o/s2orc | v3-fos-license | Oncolytic Viruses and Their Application to Cancer Treatment
Great progress has been achieved in the development of oncolytic viruses (OVs). Oncolytic therapy has become a feasible and effective treatment, or a supplementary method, for cancer. This review summarizes the current state of oncolytic therapy.
Introduction
Early in the 20th century, a case was reported in The Lancet of a notable decline in abnormal leucocytes in a woman suffering from chronic leukemia, resulting in her unexpectedly improved condition after an accidental infection with influenza virus. In 1912, the Italian physician De Pace found that tumors of patients with cervical cancer would spontaneously shrink or regress if the patients were inoculated with attenuated rabies vaccine, and this raised the curtain on oncolytic virus therapy for tumors. Several more women were subsequently injected with the same vaccine for treatment; however, all of them eventually died of the recurrence of cancer.
Researchers started to experiment with naturally attenuated mutant virus strains on cancer cells after realizing that viruses might have tumor-inhibiting effects. West Nile viruses and adenoviruses were widely used for oncolytic therapy in the 1950s. In the 1970s, reports claimed that varicella-zoster virus infection could ease acute lymphoblastic leukemia. Measles viruses were likewise found effective against leukemia, Burkitt lymphomas and Hodgkin lymphomas. However, the mechanism remained unknown, so these observations could not be properly exploited for tumor treatment; scientists also lacked the tools for engineering more effective virus strains. Since chemotherapeutics were comparatively safer and better known, doctors preferred chemotherapy, and oncolytic therapy long occupied a secondary position in cancer research.

The features and mechanisms of various viral genes have gradually become known with the development of virology and genetics. Advances in techniques such as genetic engineering allow directed manipulation and modification of viral genomes, and thus control of virus behavior and function. Directed modification of specific viruses has been carried out since the 1990s. More than ten kinds of oncolytic viruses aimed at various types of cancer are now in different phases of preclinical and clinical trials. China approved the world's first oncolytic virus therapy for cancer treatment in 2006, using H101 for cancers of the head and neck [1].
Mechanism
a) Viruses interact with specific cell-surface receptors. As tumor cells overexpress these proteins more than normal cells do, the virus will preferentially infect the tumor cell. b) After binding to the cell-surface receptor, the virus is internalized by endocytosis or membrane fusion and releases its genome into the cell. Replication and viral gene expression vary according to the type of virus: they can take place entirely in the cell cytoplasm (such as for vesicular stomatitis virus) or in the nucleus and cytoplasm (such as for adenovirus). In either case, viral gene expression and the synthesis of viral proteins largely depend on cellular machinery. Viral gene expression and replication trigger cellular antiviral defenses, such as apoptosis, which are often inactivated in tumor cells. Expression of viral proteins eventually leads to immune-mediated lysis of infected cells. Lysis is directed by CD8+ T cells, which recognize viral peptide epitopes presented by major histocompatibility complex (MHC) class I molecules on the surface of the infected cell. Lysis may also be triggered by an overwhelming amount of budding and release of progeny virions from the cell surface, or by the activation of apoptosis during the course of viral replication and gene expression [2].
MV (measles virus; paramyxovirus)
Much research has been done to prove the killing activity of MV against prostate, mesothelial and ovarian cancer; Heinzerling's work is one example. In a phase I dose-escalation trial, 5 patients with CTCL received 16 intratumoral injections of live MV, Edmonston-Zagreb vaccine strain. The well-tolerated treatment with MV resulted in clinical responses. Evaluation of biopsies by immunohistochemistry and RT-PCR demonstrated local viral activity, with positive staining for the MV NP protein.
All patients demonstrated an increased anti-measles antibody titer after therapy. One of the treated lesions completely disappeared [4].
Mayo Clinic has developed MV strains (MV-CEA and MV-NIS) that carry genes encoding carcinoembryonic antigen and the sodium iodide symporter. MV-CEA is engineered to express the marker peptide carcinoembryonic antigen to permit real-time monitoring of viral gene expression in tumors. Patients with Taxol- and platinum-refractory recurrent ovarian cancer and normal CEA levels were eligible for the MV-CEA phase I trial. Twenty-one patients were treated with MV-CEA; 14 of 21 showed dose-dependent stable disease, and five patients had significant decreases in CA-125 levels [5]. Five phase I clinical trials are now being carried out by Mayo Clinic (NCT00408590, NCT00390299, NCT01503177, NCT01846091, NCT00450814), involving ovarian cancer, peritoneal cancer, glioblastoma multiforme, malignant pleural mesothelioma, head and neck squamous cell carcinoma and multiple myeloma. Routes of administration include intratumoral, intrapleural, intraperitoneal or residual-tumor-resection-cavity injection; therapy options are MV alone or in combination with cyclophosphamide treatment [5].
HSV-1 (Herpes Simplex Virus 1)
HSV-1 is attractive for cancer therapy because of the following characteristics: (a) It infects a broad range of cell types and species, killing tumor cells at a relatively low multiplicity of infection (MOI); (b) The infection is not affected by antibody in the blood, so repeated injections do not weaken its efficiency; (c) The well-characterized large genome (152 kb) contains many nonessential genes that can be replaced (up to 30 kb) with multiple therapeutic transgenes; (d) Many antiherpetic drugs are available as a safeguard against unfavorable replication of the virus; and (e) The virus remains as an episome within the infected cell, even during latency, precluding insertional mutagenesis [6].
G207 was constructed as a second-generation vector with both copies of γ34.5 deleted and the ICP6 gene inactivated. ICP6 encodes the large subunit of ribonucleotide reductase, the key and rate-limiting enzyme of DNA synthesis and repair, which plays a crucial part in DNA replication and amplification. The second generation therefore achieved tumor cell targeting with double insurance [7].
G47Δ is a third-generation vector constructed from G207 by deletion of the ICP47 gene, which normally blocks MHC class I-mediated antigen presentation in infected cells. Consequently, human melanoma cells infected with G47Δ expressed higher levels of MHC class I on their surface compared with G207-infected cells, resulting in enhanced stimulation of tumor-infiltrating lymphocytes [7]. G47Δ is safer and more effective, performing better in the treatment of malignant brain tumors, prostate cancer and breast cancer, and has shown considerable suppressive and oncolytic effects in the treatment of metastatic breast cancer, nasopharyngeal cancer, liver cancer and thyroid cancer [8].
Ad (adenovirus)

At present, the main strategy for improving the tumor targeting and transfection efficiency of adenovirus is to exploit features that are abnormally expressed in tumor biology. One method is the partial or complete deletion of genes that are unnecessary for replication in tumor cells; another is to control the genes essential for virus replication with tumor-specific promoters or enhancers [3].

ONYX-015, developed by the American company Onyx, is an E1B-55K-deleted adenovirus. Because ONYX-015 lacks the E1B-55K gene product, which is normally required for degrading the cellular p53 protein during viral infection, it can replicate in and destroy only cells that lack p53, such as tumor cells. At present, ONYX-015 has the most complete and detailed clinical trial data, covering many solid tumors such as cancers of the head and neck, pancreatic cancer, malignant gliomas, colorectal cancer, prostate cancer and ovarian cancer. Much basic and clinical research has aimed to enhance the replication specificity of Ad in tumors and its anti-tumor activity in the body, for instance by using Ad alone or with co-expression of GM-CSF, IL-12, IL-18 and other immunomodulatory genes to promote anti-tumor activity, or by inserting the telomerase promoter to enhance replication selectivity. Ongoing clinical trials include CGTG-102 (expressing GM-CSF) against advanced solid tumors, alone or with cyclophosphamide, administered intratumorally or intravenously; DNX-2401 (Delta-24-RGD-4C) against glioblastoma and glioma (NCT00805376; NCT01582516); and intravesical instillation against bladder cancer (NCT01438112).
Reovirus
Reovirus infection occurs often in humans, but most cases are mild or subclinical. Reoviruses can also affect the gastrointestinal system (such as rotavirus) and the respiratory tract. The virus can be readily detected in feces, and may also be recovered from pharyngeal or nasal secretions, urine, cerebrospinal fluid, and blood. Their role in human disease or treatment is still uncertain, but they demonstrate a certain oncolytic property. Though the molecular mechanism of reoviruses' selective infection and destruction of tumor cells remains to be studied, it is generally considered that activation of the RAS signalling pathway plays a key role in the selective oncolytic effect mediated by this virus.
NDV (Newcastle-disease Virus)

In 2009, Schumacher [9] observed a strong inverse correlation between susceptibility to infection and the basal expression of the antiviral genes RIG-I, IRF3, IRF7 and IFN-β. Strong expression of these genes can explain the resistance of normal cells to NDV infection, and weak antiviral gene expression the broad susceptibility of tumor cells. Thus, NDV can amplify massively in tumor cells and kill them specifically without infecting normal cells. What is more, many tumor cells show higher expression of sialic acid, which is a receptor for NDV; through binding to the widely distributed sialic acid residues on the cell surface, NDV can enter and thus kill various kinds of tumor cells. NDV mediates apoptosis mainly through the endogenous apoptotic pathway and does not rely on the involvement of the apoptotic pathway of interferon. In 2010, Bier [10] performed an siRNA-based screen of genes known or predicted to participate in membrane trafficking/remodeling, to reveal Ras-related or Ras-independent NDV sensitizers in the tumorigenic RT3 K1 cells that may also be drivers of tumorigenesis; Rac1 proved to be an essential protein for efficient replication of oncolytic NDV in the tumorigenic cells.

As reverse genetics technology has matured, it has been applied to optimize the effect of oncolytic Newcastle disease virus. Recombinant NDV generated by reverse genetics can express exogenous tumor-killing factors, possesses excellent tumor-killing ability, and has achieved good therapeutic effects in clinical trials.
VV (vaccinia virus; poxvirus)
Vaccinia virus played a decisive role in helping humanity defeat the smallpox virus. With the rapid development of molecular biology, virology, immunology and cancer genetics, vaccinia virus has become a strong candidate due to its effective infection of a wide range of cells, its high immunogenicity, its ability to accommodate large gene inserts and express them correctly, and its cytoplasmic replication without the possibility of chromosomal integration. Meanwhile, as the virus with the longest and widest use in human history, it has been studied thoroughly, which facilitates its further use. The main uses of vaccinia virus are: (a) as a delivery vector for cancer therapy; (b) as a vaccine carrier for immune regulatory molecules and tumor-associated antigens in cancer immunotherapy; and (c) as an oncolytic agent that replicates in selected cells and leads to cell lysis. The main method of treating cancer with vaccinia virus is to make tumor cells, originally disguised, display specific surface antigens and thus be reassigned to targeted clusters through partial activation of a stronger immune response, and then be eliminated by macrophages [11]. Table 1 summarizes the above oncolytic viruses and their development status.
Advantages and Disadvantages between Oncolytic Therapy & Traditional Therapy
OVs have many features that make them advantageous and distinct from current therapeutic modalities: (i) OVs often target multiple oncogenic pathways and apply multiple means of cytotoxicity, which means that the generation of resistance has a low probability (it has not been seen so far); (ii) OVs replicate in a tumor-selective manner and are non-pathogenic; in fact, only minimal systemic toxicity has been detected; (iii) The virus dose in the tumor increases with time due to virus amplification, as opposed to classical drug pharmacokinetics, in which the dose decreases with time; (iv) Safety features can be built in, such as drug and immune sensitivity. These features could result in a very high therapeutic index [12].

However, the crucial problem for oncolytic therapy is the efficiency of delivery of the agent to the specified location. There are two main routes of administration. Intratumoral injection requires higher technical skill and is of limited use against systemically metastasized and diffusely growing tumors. Systemic intravenous injection is easier to implement and can target several tumors, but it also has many disadvantages. The effectiveness therefore depends on the cell phenotype, permissiveness to virus infection, tumor homing ability, and the transfer of infectious virus to tumor cells [12].
Meanwhile, viruses can trigger several bystander mechanisms to kill uninfected cancer cells. OVs can infect tumor pericytes, which elicits a constrictive inflammatory response that slows blood flow to the tumor, or OVs can help present tumor-associated antigens to the immune system and engage antitumor immunity [13].
One strategy for developing oncolytic therapeutics is to select or design viruses that are especially sensitive to the antiviral properties of interferons. Such viruses should have their replication strongly suppressed in interferon-responsive normal tissues but still be able to amplify in interferon-nonresponsive tumor cells. Tumor-selective oncolytic activity can be achieved by deleting or attenuating the genes that encode anti-interferon gene products [16].
Another common defect in tumor cells that might make them susceptible to oncolytic virus activity involves the downregulation of p53 or its downstream targets. Mice with supernumerary copies of the normal Trp53 gene are both more resistant to VSV infection and have a decreased incidence of tumor formation [17].
Another design strategy for oncolytic viruses would be to delete viral anti-apoptotic genes, creating mutants that only replicate in apoptosis-deficient tumor cells [18].
Diplomatic Immunity
The mammalian adaptive immune system has evolved to restrict the replication and spread of invading pathogens. For oncolytic virus-based therapeutics, this is a double-edged sword. On the one hand, these defense mechanisms block the delivery and/or spread of oncolytic viruses. On the other hand, viral stimulation of the adaptive immune system seems to activate anti-tumor immune surveillance systems, increasing the effectiveness of oncolytic virus therapy. Oncolytic viruses not only mediate direct tumor oncolysis, but also, in combination with their inherent adjuvant properties, induce or reactivate cancer immune surveillance programs. These phenomena indicate that: First, it might not be crucial for the oncolytic virus alone to completely eradicate a tumor to be therapeutically effective. Rather, if the virus can quickly establish a tumor-specific infection, this will lead to a localized inflammation, in situ cytokine production and ultimately an anti-tumor immune response. Second, oncolytic viruses that have been engineered to produce immune stimulatory factors on infection of tumor cells may be more effective therapeutics [2].
Development Prospect
As stated above, there are several recent approaches to improving oncolytic virus delivery.
Targeting the Tumor Cell Surface
During tumor evolution, various genetic and epigenetic events result in the unique display or overexpression of so-called 'tumor antigens' on the surface of malignant cancer cells. As cell-surface recognition and virus entry are the first steps in a productive viral infection, engineering a virus that specifically recognizes the tumor cell surface would restrict the replication of a potent oncolytic virus to malignant cancer cells [2].
Exploiting the Tumor Microenvironment
Given that virally encoded receptors are highly evolved proteins, an alternative approach is to use the in vivo tumor environment to augment selectivity without the complete re-engineering of an already efficient system. For example, subtle alterations in the fusion (F)-protein of measles virus allow it to be processed to an active form only in the protease-rich tumor microenvironment. The F-protein of measles virus facilitates viral entry into cells by mediating fusion of the viral and cellular membranes [2].
Non-enveloped viruses are another option. For example, reovirus normally infects cells of the gastrointestinal tract, where proteases convert the non-infectious reovirus into an infectious form called the intermediate sub-viral particle (ISVP). When given intravenously, reovirus is not efficiently processed to the infectious form. However, it is possible to select variants that are converted into ISVPs by the action of proteases that are overexpressed in the tumor microenvironment. Such selected reoviruses have been shown, in vivo, to selectively infect and kill malignant lymphoid cells that produce a protease-rich microenvironment [14].
'Naturally Smart' Viruses
Viruses have evolved to gain access to the cell by binding to proteins that are displayed on the plasma membrane and that often have crucial roles in regulating normal cell proliferation, homeostasis or adhesion [2]. Recognizing this, Darren Shafren's group screened a collection of picornaviruses in search of viruses that preferentially infect tumor cells based on their overexpression of natural virus receptors [15].
Tumor Growth and Innate Immunity
Innate immunity is a non-specific defense mechanism that is triggered immediately after pathogen detection and does not develop immunological memory for antigens. OVs can be modified to make better use of innate and adaptive responses to eliminate tumor cells [2].

Conclusion

The study of oncolytic viruses not only contributes to cancer treatment but also reveals much about how cells regulate gene expression [2]. In spite of considerable improvement over the last two decades, further progress is needed in oncolytic virus therapy for cancer. Promising methods include equipping oncolytic viruses with therapeutic genes and optimizing their combination with traditional chemotherapy and radiotherapy. | 2019-09-16T03:09:06.963Z | 2019-08-24T00:00:00.000 | {
"year": 2019,
"sha1": "8ce669bd32bf2f0800ec07e4cf7318e8cfe4cf4d",
"oa_license": "CCBY",
"oa_url": "https://www.clinmedjournals.org/articles/iacp/international-archives-of-clinical-pharmacology-iacp-5-020.pdf?jid=iacp",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8ce669bd32bf2f0800ec07e4cf7318e8cfe4cf4d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
238890859 | pes2o/s2orc | v3-fos-license | Genetic variation in drought resilience-related traits among wheat multiple synthetic derivative lines: insights for climate resilience breeding
Twenty-four wheat lines, developed by Aegilops tauschii Coss. introgressions and previously selected for heat or salinity stress tolerance, were evaluated under a drought-rewatering-drought cycle for two years. The objective was to select breeding lines that are resilient to more than one abiotic stress. The experiment was designed in alpha lattice with three replications. Drought was imposed by withholding water during flowering. The results revealed considerable genetic variability in physio-agronomic traits, reflecting the variation in the introgressed segments. High heritability estimates (above 47%) were recorded for most traits, including days to 50% heading, plant height, and thousand-grain weight, indicating the genetic control of these traits which may be useful for cultivar development. The trait-trait correlations within and between water regimes highlighted a strong association among the genetic factors controlling these traits. Some lines exhibited superior performance in terms of stress tolerance index and mean productivity compared with their backcross parent and elite cultivars commonly grown in hot and dry areas. Graphical genotyping revealed unique introgressed segments on chromosomes 4B, 6B, 2D, and 3D in some drought-resilient lines which may be linked to drought resilience. Therefore, we recommend these lines for further breeding to develop climate-resilient wheat varieties.
Introduction
Wheat (Triticum aestivum L.) productivity is considerably limited by persistent drought stress. To meet global wheat demands, the current annual increase of 1% must be accelerated to at least 1.6% (GCARD 2012). This task is compounded by the current climate change scenario (Elliott et al. 2014), and therefore, developing climate-resilient wheat genotypes with the capacity to thrive under different abiotic stresses needs urgent attention.
Climate-resilient wheat genotypes are scarce due to the narrow genetic diversity in modern wheat cultivars (Ogbonnaya et al. 2013). To broaden this genetic diversity, the use of wheat wild relatives in stress-resilience breeding has been widely reported (Cox et al. 2017, Kishii 2019, Ogbonnaya et al. 2013, Tsujimoto et al. 2015). To utilize the variation in Ae. tauschii for wheat breeding, synthetic hexaploid wheat lines were developed by crossing Ae. tauschii accessions with the durum wheat cultivar 'Langdon' (Tsujimoto et al. 2015). Then, to reduce linkage drag, the synthetic hexaploid wheat lines were crossed and backcrossed with a popular bread wheat cultivar, 'Norin 61' (N61), and the resulting lines were named 'multiple synthetic derivative (MSD) lines' (Tsujimoto et al. 2015). The MSD lines possess a wide diversity of stress tolerance-related traits (Elbashir et al. 2017b, Gorafi et al. 2018). Elbashir et al. (2017b) evaluated a population of 400 MSD lines under heat stress in Sudan and selected heat-tolerant candidate lines. Similarly, the same population was evaluated for salinity tolerance, and salinity-tolerant candidate lines were selected (manuscript under preparation). However, genetic analysis of drought resilience among the MSD lines is limited. Also, the role of Ae. tauschii in conferring multiple stress tolerance to wheat is not well understood. Since wheat responses to various abiotic stresses are similar and operate through connected pathways (Abhinandan et al. 2018, Tounsi et al. 2019), we selected the previously reported heat-tolerant (Elbashir et al. 2017b) and salinity-tolerant MSD lines for evaluation under a drought-rewatering-drought cycle.
The objective was to select drought-resilient MSD lines that may also possess heat or salinity resilience traits for breeding. MSD lines exhibiting high stress tolerance index (STI) and mean productivity (MP) compared with their backcross parent and standard cultivars under drought stress were selected for further breeding.
Plant materials
Twenty-four MSD lines were selected based on heat stress tolerance (14 lines) (Elbashir et al. 2017a, 2017b) and salinity stress tolerance (10 lines) (manuscript under preparation). Two of the MSD lines were previously named MNH2 and MNH5 (Elbashir et al. 2017b). For consistency with Elbashir et al. (2017b), we retain the names of these two lines (MNH2 and MNH5). Therefore, in this study, the term "MSD lines" refers to all twenty-four lines, including MNH2 and MNH5. For comparison, the backcross parent of the MSD lines (N61) and three check cultivars were included in the study. The check cultivars were 'Imam' (a widely cultivated cultivar in Sudan), and 'Cham 6' and 'Halberd' (elite wheat cultivars from ICARDA and CIMMYT, respectively). A list of the plant materials and their pedigrees is shown in Table 1.
Experimental design and drought treatment
The experiment was conducted in a greenhouse at the Arid Land Research Center, Tottori University, Japan (coordinates: 35.5354534, 134.212066), during the 2018/2019 and 2019/2020 growing seasons. Two rectangular beds (100 m × 1.2 m each) were constructed using concrete blocks, 0.35 m deep. The beds were filled with sand-dune regosol (collected from the Tottori Sand Dunes). Four irrigation tubes (0.3 m apart) were set along the beds. Before sowing, fertilizer (NPK 366) was applied at 20 kg ha⁻¹ by mixing with the soil; a second application (NPK 366 + Mg) was made during the tillering stage at 50 kg ha⁻¹. Seeds were sown in rows across the beds, each row for a different genotype, with a planting distance of 0.2 m × 0.3 m between and within rows. At first, eight seeds were sown per row, later thinned to four plants 20 d after germination. Seeds were sown on January 29th and December 15th during the first and second growing seasons, respectively. The experiment was designed in alpha lattice with two treatment levels (well-watered and drought conditions) and three replicates per treatment. The average light intensity in the greenhouse during the reproductive stage was 37,546 lx; the corresponding day/night temperature and relative humidity were 31/18°C and 36/67%, respectively. The soil water potential was measured every two hours using sensors (Decagon Devices, WA, USA), and automatic irrigation was performed using an Aqua Pro automated irrigation controller (Netafim, Tel Aviv, Israel). When 50% of the plants had flowered, drought was imposed by withholding the water supply in one bed, while the well-watered condition was maintained at 90% field capacity. To account for minor phenological differences between the genotypes, genotypes with more than 7 d of delayed flowering (compared with N61) were separated from the other genotypes and exposed to drought separately. To mimic the erratic rainfall pattern common in drought-prone areas, the drought-treated bed was rewatered when soil moisture was near the permanent wilting point (-1500 kPa, Fig. 1).
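The rewatering rule can be sketched as follows; this is a simplified illustration rather than the actual Aqua Pro controller logic, and the 50 kPa safety margin is an assumed example value, not one reported in the study.

```python
# Sketch of the rewatering rule: water is withheld until soil water
# potential approaches the permanent wilting point (-1500 kPa), then the
# bed is rewatered. Sensor readings are plain numbers here, not the
# Decagon/Aqua Pro APIs.
WILTING_POINT_KPA = -1500

def needs_rewatering(soil_water_potential_kpa: float,
                     margin_kpa: float = 50) -> bool:
    """True when the bed is near the permanent wilting point."""
    return soil_water_potential_kpa <= WILTING_POINT_KPA + margin_kpa

for reading in (-300, -900, -1480):
    print(reading, "->", "rewater" if needs_rewatering(reading) else "withhold")
```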
Traits evaluated
The number of days to heading (DH) was recorded when 50% of the spikes had headed. Chlorophyll content (SPAD) was measured during the grain-filling stage using a Minolta SPAD-502 chlorophyll meter (Konica-Minolta, Japan). The SPAD readings were made on the penultimate leaves of 12 main tillers (four tillers per replicate). Plant height (PH), biomass per square meter (BIO), grain weight per square meter (GY), spike number per square meter (SN), grain number per spike (GPS), thousand-grain weight (TGW), and harvest index (HI) were determined at maturity. The STI was calculated from the grain weights of drought-stressed and well-watered plants using the equation of Fernandez (1992):

STI = (GW_pi × GW_si) / (mean GW_p)²

where GW_si is the grain weight under drought stress and GW_pi is the grain weight under well-watered conditions for genotype "i", and mean GW_p is the mean grain weight of all genotypes under well-watered conditions. The MP was calculated as the average yield in the well-watered and drought conditions (Rosielle and Hamblin 1981).
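A minimal sketch of both indices, assuming hypothetical grain weights (the values below are invented, not from this study):

```python
import numpy as np

# Grain weights per genotype (hypothetical values, g per square meter).
gw_wellwatered = np.array([420.0, 510.0, 380.0])  # GW_p per genotype
gw_drought = np.array([300.0, 410.0, 250.0])      # GW_s per genotype

# Fernandez (1992) stress tolerance index and
# Rosielle & Hamblin (1981) mean productivity.
sti = (gw_wellwatered * gw_drought) / np.mean(gw_wellwatered) ** 2
mp = (gw_wellwatered + gw_drought) / 2

for i, (s, m) in enumerate(zip(sti, mp), start=1):
    print(f"genotype {i}: STI = {s:.2f}, MP = {m:.1f}")
```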
Graphical genotyping using DArT-seq markers
Genomic DNA was extracted using the CTAB method (Saghai-Maroof et al. 1984) and sent to Diversity Arrays Technology Pty Ltd., Australia (http://www.diversityarrays.com) for whole-genome scanning using the DArT-seq platform. Complexity reduction was applied to obtain a subset of restriction fragments for each genotype using a combination of restriction enzymes (Sansaloni et al. 2011). The restriction fragments were then sequenced and aligned to the wheat_ChineseSpring10 reference genome and wheat_ConsensusMap_version_4. The presence or absence variation of the genomic fragments (SilicoDArT markers) was used for graphical genotyping. The SilicoDArT markers are dominant and were scored in a binary fashion: "0" or "1", representing the absence or presence, respectively, of a restriction fragment containing the marker sequence. A total of 51,202 SilicoDArT markers were obtained. The markers were then filtered on the basis of minimum reproducibility (95%), call rate (90%), and average read depth (8). Only markers that have known chromosomal positions and are polymorphic between N61 and the synthetic parents of the individual MSD lines were used for genotyping. Finally, a minimum of 4,148 polymorphic markers were used for the graphical genotyping of individual genotypes. The markers were ordered according to their position within each chromosome from top to bottom, and the conditional formatting function in Microsoft Excel 2019 was used to highlight each marker.
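The marker-selection logic can be illustrated with a toy presence/absence matrix; the quality filters (reproducibility, call rate, read depth) are omitted for brevity, and the data are invented.

```python
import numpy as np

# Toy SilicoDArT score matrix: rows = markers, columns = lines
# (backcross parent N61, synthetic parent, one MSD line); 1 = present.
scores = np.array([
    [1, 0, 0],   # polymorphic; MSD carries the synthetic (Ae. tauschii) allele
    [1, 0, 1],   # polymorphic; MSD carries the N61 allele
    [1, 1, 1],   # monomorphic between parents -> uninformative, filtered out
])
n61, synthetic, msd = scores.T

# Keep only markers polymorphic between N61 and the synthetic parent,
# then classify each retained marker in the MSD line by parental origin.
polymorphic = n61 != synthetic
origin = np.where(msd[polymorphic] == synthetic[polymorphic],
                  "introgressed", "N61")
print(origin)   # ['introgressed' 'N61']
```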
Statistics
ANOVA was performed for nine physio-agronomic traits for each year following the General Linear Model, using GenStat 18th edition (http://www.genstat.co.uk), considering genotype by water regime as the Treatment Structure and Rep/Sub-block as the Block Structure. Principal component analysis was performed in R using the FactoMineR package (Lê et al. 2008). Genotype-genotype comparisons for STI and MP were conducted using the Tukey Honestly Significant Difference (HSD) test, and drought-tolerant candidates were selected based on high STI and MP.

[Fig. 1. Changes in soil water potential under well-watered, drought, and rewatering periods. Data represent the mean ± standard deviation for two years. The inset graph represents the daily average for the well-watered condition from day 1 to day 29 during drought treatment.]
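The study used GenStat; as an illustration only, an equivalent Tukey HSD comparison can be sketched in Python with statsmodels, using invented yield values and genotype labels.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical grain yields for three genotypes under drought,
# three replicates each (values are invented for illustration).
yields = np.array([310, 325, 300, 405, 398, 412, 280, 295, 270], float)
genotypes = ["N61"] * 3 + ["MSD308"] * 3 + ["Imam"] * 3

result = pairwise_tukeyhsd(yields, genotypes, alpha=0.05)
print(result)  # pairwise mean differences with Tukey-adjusted p-values
```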
Influence of genotype and water regime on trait variability
Bartlett's test for homogeneity of variance revealed that the error variances between the two years were heterogeneous for most traits (F = 3,332; P < 0.05) and therefore, individual year data were used to assess drought resilience. Although the sowing dates were different between 2019 and 2020, the trend for most of the evaluated traits was the same in both years. The means and ranges of all evaluated traits for the MSD lines and N61 are shown in Table 2. The ANOVA table (Table 3) revealed highly significant differences between the main effects of genotype (G) and water regime (E) for most of the evaluated traits in each year. The DH was non-significantly affected by water regime in both years. The interaction effects of G and E (G×E) were significant (P < 0.05) for SPAD, GY and TGW during 2019, and for SPAD and SN during 2020. Other traits were nonsignificantly affected by G×E interaction in both years (Table 3). In 2020, the investigated genotypes showed longer DH and higher GY and HI than in 2019 under both water regimes. Two MSD lines (MSD53 and MSD308) had later heading dates than N61 and the three check cultivars under both control and drought conditions for two years (Supplemental Tables 1, 2).
Principal component analysis (PCA) revealed differences in trait contributions, with GY and BIO having the strongest contributions to the PCs under the drought condition (Fig. 2A). The first component (Dim 1) explained 35.7% of the variability, while the second component (Dim 2) explained 21.9%. Three high-yielding MSD lines were separated from the other genotypes (Fig. 2B).
High heritability values (above 47%) were observed for DH, PH, GPS, and TGW in both water regimes. Other traits showed medium to low heritability values (below 35%) in both water regimes, except SN (57%) under well-watered conditions (Table 3).
GY significantly correlated (P < 0.05) with most of the evaluated traits under both water regimes for two years (Table 4). Significant correlations were also found among other traits. Worthy of note is the positive correlation between BIO and SN (0.813), and BIO and GPS (0.623), and the negative correlations between HI and BIO (-0.869), and HI and SN (-0.855) under well-watered conditions in 2019 (Table 4). Similar trends were found under drought conditions in 2019, and in both water regimes in 2020 (Table 4). Additionally, positive correlations between different water regimes were recorded for most traits.
STI and MP
To select drought-resilient candidates among the wheat MSD lines, the stress tolerance index (STI) and mean productivity (MP) were calculated. There was variability in STI among the evaluated genotypes, with some MSD lines showing higher STI than N61 and 'Imam'. Specifically, MNH5, MSD140, and MSD308 exhibited higher STIs compared with N61 and 'Imam' for two years (Fig. 3A, 3B). Similarly, MSD308 consistently showed higher MP than N61 and 'Imam' for two years (but non-significantly higher than N61 in 2020), while MSD53 showed higher MP than N61 and 'Imam' in 2020 (Fig. 3C, 3D).
Graphical genotyping
The results indicated that MNH5, MSD53, MSD140, and MSD308 had better drought resilience than N61 and the three check cultivars, including 'Imam'. Therefore, to identify the genomic regions that may be associated with drought adaptation in the selected MSD lines, we conducted graphical genotyping by comparing their genomes with those of their donor (synthetic) and backcross (N61) parents. The results showed that the MSD lines were different from their parents in several genomic regions; various recombinant portions (introgressed segments) were found in most of the 21 chromosomes (Fig. 4). Worthy of note, MNH5 and MSD308 showed similar introgressed segments on chromosome 1A. All four drought-resilient lines possessed similar introgressed segments on chromosome 6B which were not found in two drought-sensitive lines (MNH2 and MSD376) (Fig. 4). Also, MNH5, MSD53, and MSD140 showed similar introgressed segments on chromosome 4B. Interestingly, the drought-resilient MSD53 and MSD140 and the drought-sensitive MNH2 and MSD376 were developed from the same Ae. tauschii accession (Table 1). Since these lines could be considered as sister lines, their genomes were graphically compared (Fig. 4). Similarly, MSD53 and MSD140 contained similar introgressed segments on chromosome 2D and 3D (Fig. 4). These segments were not found in the drought-sensitive lines (MNH2 and MSD376). MNH5 and MSD308 also possessed large introgressed segments on chromosome 2D and 3D. Taken together, the drought-resilient MSD lines possessed similar introgressed segments on chromosome 6B, 4B (except MSD308), 2D, and 3D (except MNH5) (Fig. 4).
Discussion
One way to ensure sustainable wheat production under the current climate change scenario is to develop drought-resilient wheat genotypes that can adapt to more than one abiotic stress. Drought resilience is a quantitative trait controlled by many quantitative trait loci (QTL), and it is thus difficult to use marker-assisted selection techniques for drought-resilience breeding. Therefore, a first step in breeding drought-resilient wheat lines may be to broaden the wheat gene pool using wild introgressions. In the present study, the investigated MSD lines were developed from wild (Ae. tauschii) introgressions and had previously been selected as heat-tolerant (Elbashir et al. 2017a) and salinity-tolerant candidates (Table 1).
The significant genotypic differences (P < 0.001) observed for the investigated traits (Table 3) indicated high genetic diversity among the MSD lines. This diversity is mainly due to the introgressions from the individual Ae. tauschii accessions and may, therefore, be useful for further breeding for drought-resilient wheat lines. Significant differences due to water regime were observed in all evaluated traits except DH. These differences may have resulted from the profound effect of drought on yield and yield components (Fischer and Maurer 1978). The non-significant effect of water regime on DH was expected since the drought stress was imposed after heading. The significant G×E interaction effects on SPAD, yield and yield components for two years reflect the variation in drought adaptation among the investigated genotypes.
The high heritability estimates for most of the investigated traits for two years point to a possible effect of genes or major QTL on these traits. Low heritability estimates are often reported for yield and yield components under drought conditions (Eid 2009, Yaqoob 2016). Moreover, heritability values are subject to specific sets of genotypes and target environments (Mwadzingeni et al. 2017). Therefore, the heritability estimates in this study may have been influenced by the small population size and the amount of genetic variance present in the investigated lines. Overall, the heritability estimates indicated that these traits are highly influenced by genetic factors and may be useful for cultivar development. Also, there were high correlations between most of the evaluated traits, suggesting a strong inherent association among these traits at the genetic level. Highly heritable traits exhibiting strong correlations with other quantitative traits improve selection efficiency (Mwadzingeni et al. 2017, Shimelis and Shiringani 2010). Furthermore, the positive and highly significant correlations between traits under different water regimes (Table 4) suggest that these traits were consistent in both conditions. Similar correlations have been reported in bread wheat under different drought intensities in Morocco (Bennani et al. 2017).
The high STI exhibited by three MSD lines (MNH5, MSD308, and MSD140) suggests better adaptation of the MSD lines to post-anthesis drought stress compared with practical cultivars, N61 and 'Imam' (Fig. 3A, 3B). This points to the potential of the MSD lines in outperforming popular elite cultivars including 'Imam' which is widely cultivated in stress-prone areas in Sudan. STI is a reliable selection criterion that has been used for selecting droughtresilient wheat (Bennani et al. 2017) and rice genotypes (Mau et al. 2019). Similarly, some MSD lines showed higher MP compared with N61 and 'Imam', reflecting the higher productivity of the MSD lines in both water regimes (Fig. 3C, 3D). These MSD lines present an opportunity to develop new cultivars with high yield under well-watered and marginal rainfall-growing regions. Based on STI and MP for two years, MSD308 was the best performing, and therefore, selected alongside MNH5, MSD53, and MSD140 (Fig. 3) for further breeding for drought resilience. These MSD lines (except MNH5) were separated from other lines in the PCA (Fig. 2B), suggesting unique drought resilience traits which may be due to similar effects of Ae. tauschii gene introgression. Additionally, MSD53 was recently found to have an efficient water conservation capacity under dry down conditions (Itam et al., unpublished).
Genotyping with polymorphic markers ensured that the variation in each MSD line was due to introgressed segments from its synthetic parent (containing the Ae. tauschii genome). The presence of such introgressed segments indicates the effectiveness of the multiple synthetic approach for utilizing the variation in Ae. tauschii for wheat breeding (Cox et al. 2017, Tsujimoto et al. 2015). These acquired genomic segments are likely the source of the variation in drought-resilience traits among the MSD lines and between the MSD lines and N61. For example, chromosome 1A harbors many plant height-regulating genes, including Rht-B1 and Rht-D1 (Daba et al. 2020, Zanke et al. 2014). Plant height is strongly associated with yield and yield-related traits and has been a major target of selection for high yield in wheat (Rebetzke et al. 2011). This suggests that the introgressed chromosome 1A segments in MNH5 and MSD308 may have yield-related functions. Similarly, the chromosome 4B and 6B introgressions may have drought-resilience functions. Chromosome 4BS harbors the QTL qDSI.4B.1, which is associated with the drought susceptibility index, GY, HI, and root biomass in bread wheat under drought stress (Kadam et al. 2012), whereas chromosome 6B harbors the QTL QYld.aww-6B.1, which is associated with increased GY, leaf biomass, and chlorophyll index under combined drought and heat stress (Schmidt et al. 2020). The introgressions in the A and B subgenomes are likely from durum wheat ('Langdon'), which is the source of the A and B subgenomes in the synthetic parents (Table 1, Fig. 4). This reflects the presence of important durum wheat genes in the MSD lines for bread wheat improvement. Furthermore, three drought-resilient lines (MSD53, MSD308, and MSD140) contained introgressed segments on chromosome 2D. Chromosome 2D bears the photoperiod sensitivity gene Ppd-D1 (Hanocq et al. 2004), which is associated with multiple traits including HI, spike length and chlorophyll content under drought conditions (Dodig et al. 2012), and with canopy temperature at grain filling under optimal conditions (Sukumaran et al. 2014). Similarly, chromosome 3D is associated with chlorophyll and carotenoid properties and with increased GY under drought stress (Czyczyło-Mysza et al. 2011). Furthermore, since the synthetic parents were developed with 43 different accessions of Ae. tauschii, the presence of similar introgressed segments among unrelated drought-resilient MSD lines is interesting; the introgressed segments on chromosome 2D were not present in the drought-sensitive sister lines of the same synthetic parent, emphasizing the possible role of these 2D introgressions in drought adaptation and their potential for utilization in drought-resilience breeding. Further investigations are needed to better understand the genetic contributions of the introgressed segments to drought resilience in wheat.
In this study, the screening of the MSD lines under a drought-rewatering-drought cycle ensured that selected lines are able to not only survive the erratic rainfall pattern common in natural field conditions, but to maintain high yield under prolonged drought spells nearing wilting point. Under the test environment, high heritability values were obtained for some of the evaluated traits indicating that selection based on such traits can result in genetic gain for drought resilience. Since the investigated lines had been evaluated under separate heat (Elbashir et al. 2017a) and salinity stresses, they are a useful genetic resource for further breeding for climate-resilient wheat that may thrive under different abiotic stresses. At present, graphical genotyping has shown interesting introgressed segments in the drought-resilient MSD lines. For effective breeding using these genotypes, further exploration of their genetic variability using QTL analysis is recommended.
Author Contribution Statement
H.T., Y.G. and I.T. conceived the project, H.T. provided materials and acquired funding, M.I. and Y.G. performed experiments, M.I. analyzed the data, prepared figures, and wrote a draft of the manuscript, H.T., Y.G. and I.T. reviewed and edited the manuscript, H.T. supervised the study. All authors agreed to the published version of the manuscript. | 2021-08-27T16:55:35.991Z | 2021-08-18T00:00:00.000 | {
"year": 2021,
"sha1": "919996f85add1d46a7eecdc933fbbb4f1eb1147d",
"oa_license": "CCBYNCND",
"oa_url": "https://www.jstage.jst.go.jp/article/jsbbs/advpub/0/advpub_20162/_pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e8a8f6737a50e4ce5633f0aa17f5eac2923791ce",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
6873536 | pes2o/s2orc | v3-fos-license | Shared genetic aetiology of puberty timing between sexes and with health-related outcomes
Understanding of the genetic regulation of puberty timing has come largely from studies of rare disorders and population-based studies in women. Here, we report the largest genomic analysis for puberty timing in 55,871 men, based on recalled age at voice breaking. Analysis across all genomic variants reveals strong genetic correlation (0.74, P = 2.7 × 10⁻⁷⁰) between male and female puberty timing. However, some loci show sex-divergent effects, including directionally opposite effects between sexes at the SIM1/MCHR2 locus (P_heterogeneity = 1.6 × 10⁻¹²). We find five novel loci for puberty timing (P < 5 × 10⁻⁸), in addition to nine signals in men that were previously reported in women. Newly implicated genes include two retinoic acid-related receptors, RORB and RXRA, and two genes reportedly disrupted in rare disorders of puberty, LEPR and KAL1. Finally, we identify genetic correlations that indicate shared aetiologies in both sexes between puberty timing and body mass index, fasting insulin levels, lipid levels, type 2 diabetes and cardiovascular disease.
Voice breaking describes the drop in resonant frequency due to elongation of the larynx in response to androgen exposure 1. It is a distinct developmental milestone that occurs during late puberty in males (typically between Tanner stages 3 and 4), and age at voice breaking therefore represents a non-invasive marker for the study of puberty timing in men 2.
Age at menarche, the onset of the first menstrual bleed, is a similar marker of pubertal timing in females and has been more widely studied 3. Age at menarche in women has been associated with a wide range of disease risks, and previous genome-wide association studies (GWASs) have reported over 100 common loci and five low-frequency coding variants, implicating several previously unsuspected mechanisms in the regulation of puberty timing, including post-transcriptional microRNA repression, gamma-aminobutyric acid-B (GABA-B) receptor signalling and nuclear hormone signalling 4-7. Those menarche loci were reportedly enriched for variants also associated with puberty timing in boys, but those analyses were limited by the small number of boys with assessment of the physical characteristics of puberty 5,6.
Consequently, our understanding of the regulation of puberty timing in males is derived in large part from studies of rare disorders of puberty. To date, ~20 genes have been implicated in abnormally delayed or absent puberty, including normosmic or anosmic hypogonadotrophic hypogonadism (Kallmann syndrome), while only three genes have been implicated in precocious puberty 8,9. Here, we report the first large-scale GWAS of puberty timing in males, based on recalled age at voice breaking in men in the 23andMe study 10. By combination with data on recalled age at menarche in women, we show that the genetic architecture of puberty timing has a substantial shared component between males and females, which also overlaps the genetic basis of several health-related traits and diseases.
Results
Genome-wide association signals for age at voice breaking. We identified 11 independent genome-wide significant (P < 5 × 10⁻⁸) signals for age at voice breaking in men located at nine genomic loci (Table 1). Of these, nine signals (mapping to seven loci: in/near LIN28B, MKL2, BSX, TMEM38B, NR4A2, IGSF1 and ALMS1) are correlated (r² > 0.05) with reported loci for age at menarche in women 5,7 . The two novel signals for puberty timing are located in/near LEPR and KAL1, both of which are disrupted in rare disorders of puberty 8 . The strongest common signal (minor allele frequency (MAF) > 5%) for voice breaking was at the LIN28B locus (rs9391253, P = 8 × 10⁻²⁴), where three independent signals were identified (Table 1), which is consistent with the allelic heterogeneity at this locus reported for age at menarche 5 . All signals are common variants, except for a rare (MAF = 1%) intergenic variant near ALMS1, which is associated with a 0.32 year per allele later age at voice breaking, and is highly correlated with a rare non-synonymous variant in ALMS1 (rs45501594, T3542S, r² = 0.83) that is reportedly associated with age at menarche in women 7 .
Genetic overlap between puberty timing in men and women. To estimate the shared genetic aetiology between timing of puberty in men and women, we used LD Score Regression 11 to calculate the genome-wide genetic correlation (r_g) between age at voice breaking in men and age at menarche in women. The observed strong positive correlation (r_g = 0.74, P = 2.7 × 10⁻⁷⁰) indicates that many variants have similar influences on puberty timing in males and females. This sex concordance is illustrated by comparison of the effect sizes of all genome-wide significant loci for age at menarche/voice breaking (Fig. 1). Only six of the 123 reported age at menarche loci show significant sex-discordant effects on puberty timing (P_heterogeneity < 4.0 × 10⁻⁴); these were rs889122-OLFM2, rs466639-RXRG, rs2688325-CSMD1, rs1254337-SIX6, rs6555855-SLIT3 and rs9321659-SIM1/MCHR2. Notably, at the SIM1/MCHR2 locus the reported menarche-age-raising allele is associated with younger age at voice breaking in men (P = 7.9 × 10⁻⁵, P_heterogeneity = 1.6 × 10⁻¹²).
Confirmation of novel puberty timing signals. Given the strong overall genetic correlation between puberty timing loci in men and women, and the paucity of male puberty timing data available in similarly sized studies, we sought confirmation of novel puberty timing loci identified in men using GWAS data on age at menarche in women (from a combination of publicly available HapMap2-imputed data from 182,416 women in the ReproGen consortium 5 and 1,000-Genomes-imputed data from 76,831 additional women in the 23andMe study 10,12 ).
Both of the novel genome-wide significant signals for age at voice breaking in men (in/near LEPR and KAL1) show directionally concordant associations with age at menarche in women (Table 1, LEPR P = 1.85 × 10⁻⁵; KAL1 P = 2.4 × 10⁻⁴). However, the LEPR signal (rs140410685, a 3 bp indel) shows a threefold larger effect on puberty timing in men than in women (P_heterogeneity = 3.7 × 10⁻⁵).
We next attempted to confirm sub-genome-wide significant signals for age at voice breaking (5 × 10⁻⁸ < P < 1 × 10⁻⁶). Of the five signals at this threshold, three were strongly correlated with reported signals for age at menarche 5 , showing directionally concordant associations with puberty timing in both sexes (rs10980922-ZNF483, rs2282752-WDR6 and rs6681737-NR5A2). The other two signals (rs9350100-RNF144B/ID4 and rs6560352-RORB) showed directionally concordant associations with age at menarche (P < 0.01), but with significant heterogeneity between sexes (Table 1). Given this heterogeneity we meta-analysed the two estimates in a random-effects model. Variants at the novel RNF144B-ID4 locus reached genome-wide significance for puberty timing in men and women combined. Although rs6560352 did not reach significance in this meta-analysis (P = 2.1 × 10⁻⁶), an uncorrelated variant (rs4237264, r² ≈ 0) at this RORB locus was previously reported as a possible signal for age at menarche (P = 9 × 10⁻⁶) 5 . In a combined meta-analysis, we robustly confirmed rs4237264 as a novel signal for puberty timing in men and women (P = 2 × 10⁻¹¹).
RORB encodes retinoic acid receptor (RAR)-related orphan receptor beta; notably RORA and RXRG (encoding retinoid X receptor gamma) were previously implicated in age at menarche 5 . We therefore tested single nucleotide polymorphisms (SNPs) within 500 kb of the remaining six RAR, RAR-related and retinoid X receptor (RXR) encoding genes for associations with puberty timing in our pooled sample of men and women, identifying one additional novel signal ~350 kb downstream of RXRA (rs416390, P = 2 × 10⁻⁸) and a further suggestive signal ~330 kb upstream of RXRB (rs241438, P = 5 × 10⁻⁶). In aggregate, SNPs in or near a reported list of nuclear hormone receptor genes that contains these RAR-related and RXR genes are significantly enriched for associations with age at voice breaking in men (P = 7 × 10⁻³), as reported for age at menarche in women 5 .
Genetic correlation between puberty timing and other traits.
To inform the likely aetiological relevance of puberty timing in men and women to other health-related outcomes, we used LD Score Regression to test their genetic correlations (r_g) with 27 other traits or complex diseases. There is no significant heterogeneity observed in genetic correlations between men and women. In men and women combined, significant genetic correlations (conservatively adjusted for multiple testing: P < 1.85 × 10⁻³ (= 0.05/27)) are observed between puberty timing and nine other traits or disease outcomes (Table 2). The strongest genetic correlation is with body mass index (BMI; r_g = −0.34, P = 4.6 × 10⁻¹⁰⁴); further inverse genetic correlations are observed with polycystic ovary syndrome, fasting insulin levels, type 2 diabetes, triglyceride levels, cardiovascular disease and bone mineral density; and positive genetic correlations are observed with high-density lipoprotein cholesterol levels and adult height.
Discussion
We report a large genetic study of puberty timing in males and females. Our findings are the first to quantify the strongly shared genetic basis for puberty timing between sexes, and this is consistent with the largely sex-concordant effects of disruptive mutations in rare disorders of puberty 8,9 . Accordingly, our findings support the validity of recalled age at voice breaking in men as a marker of puberty timing in epidemiological studies, consistent with the informative prospective assessment of this phenotype 2 .
Independent signals at seven loci previously identified for age at menarche in women 5 (including three independent signals at the LIN28B locus) passed the genome-wide statistical significance threshold for age at voice breaking in men (Table 1). These included the strongest reported common and low-frequency signals for age at menarche at LIN28B and ALMS1, respectively, and other signals with relatively large effects. This observation is consistent with the largely shared genetic architecture for puberty timing between sexes.
The overlapping genetic architecture for puberty timing in men and women provided the rationale for a pooled meta-analysis across the sexes. Notably, we identified two novel signals, near RORB and RXRA, which add to the reported signals near to other retinoic acid receptor-related genes, RORA and RXRG 5 , and a fifth signal (near RXRB) showed sub-genome-wide significant association with puberty timing. The retinoic acid receptor-related and retinoid X receptors function as transcription factors that dimerize and regulate nuclear receptors to influence cell differentiation, development, circadian rhythm and metabolism 13 . Their receptor partners include the canonical receptors for oestrogen and androgens among other hormones and metabolites 14 , and their conformational changes may alter receptor sensitivity 15 . Collectively these findings strengthen the evidence for an aetiological role of retinoic acid and retinoid receptors in the regulation of puberty timing, although their relative importance to male versus female puberty remains to be established. Further studies are also needed to identify functional links between these allelic signals and specific gene and protein functions.
Three additional novel signals reached genome-wide significance for puberty timing, represented by variants in/near RNF144B-ID4, LEPR and KAL1. All three were associated with puberty timing in both men and women, and two had stronger effects in males (LEPR and RNF144B-ID4). The signal at 6p22.3 resides in a gene desert, with the two nearest genes, ID4 and RNF144B, located 760 kb and 607 kb away, respectively. Notably, it also lies 852 kb from KDM1B, a gene in the same family of histone demethylases highlighted previously for age at menarche (variants in/near KDM3B, KDM4A and KDM4C represented genome-wide significant signals) 5 . Variants near KAL1 on Xp22.31 had not been highlighted by previous GWAS of female puberty timing due to the paucity of X-chromosome data in those studies. rs5978985 is correlated with a reported signal for circulating free testosterone concentrations in males (r² = 0.35 with rs5934505) 16 , and KAL1 encodes the extracellular matrix glycoprotein anosmin-1, implicated in the embryonic migration of gonadotrophin-releasing hormone and olfactory neurons. Deleterious mutations in KAL1 cause X-linked Kallmann syndrome, characterized by hypogonadotropic hypogonadism and anosmia 8 . rs140410685 near LEPR, which encodes the leptin receptor, is uncorrelated with a reported neighbouring signal for age at menarche (r² ≈ 0 with rs10789181) 5 , and shows no reported association with adult BMI in reported GWAS meta-analyses (rs2186245, r² = 1 proxy, P_BMI = 0.33, N = 233,888) 17 .
[Table 1 notes: test statistics are from fixed-effects inverse variance-weighted meta-analysis; P values from a random-effects meta-analysis were calculated where significant heterogeneity was detected at a novel locus; heterogeneity was tested between effect estimates for age at voice breaking in men and age at menarche in women; voice breaking effect estimates for secondary signals are from conditional models, whereas the menarche and combined estimates are from univariate models; HapMap2 proxy SNPs rs2186245 (r² = 1), rs2007888 (r² = 0.98) and rs2842385 (r² = 0.99) were used for age at menarche and for puberty timing in men and women combined.]
In rare disorders of puberty, disruptive mutations usually have similar effects on puberty timing in both sexes 8,9 . Notable exceptions are rare mutations in the genes that encode the pituitary hormones, follicle-stimulating hormone and luteinizing hormone, which disrupt puberty in females but not in males 18 . In normal populations, rapid postnatal weight gain predicts earlier puberty timing in both sexes, but the influence of low birth weight is apparent only in females 2 . We found that only a small minority of signals showed sex-discordant effects, most notably rs9321659 at the SIM1-MCHR2 locus. SIM1 encodes a transcription factor regulator of hypothalamic paraventricular nucleus development and function. Rare deleterious mutations cause hyperphagia and severe early onset obesity affecting both males and females 19 ; however, rs9321659 is reportedly not associated with adult BMI (P = 0.56) 17 . Overall, these data suggest that combining male and female data genome wide is likely to yield novel shared puberty loci, but care should be taken to assess for potential heterogeneity in the effects. Finally, our observed genetic correlations indicate the relevance of puberty timing to later life health outcomes in both men and women. Consistent with traditional epidemiological evidence [20][21][22] , earlier puberty timing was genetically related to higher risks of adverse health-related outcomes, including higher BMI, polycystic ovary syndrome, type 2 diabetes, adverse lipid profiles and cardiovascular disease; it was favourable only for bone mineral density. LD score regression is a powerful tool to identify potential causal relationships between traits; however, limitations include the inability to establish causal directions and to adjust for potential mediating factors. The relationship between puberty timing and obesity risk is complex, with plausible bi-directional mechanisms. Previous studies have reported that genome-wide significant signals for higher BMI, both in combination and individually, are associated with earlier puberty timing in females 5,23,24 . Conversely, in the opposite direction, earlier puberty timing associated with the LIN28B rs314276 C-allele leads to faster adolescent weight gain and higher post-pubertal BMI, without effects on pre-pubertal BMI 25 . Further studies are required to explore whether health outcomes related to earlier puberty timing are fully mediated by higher BMI.
In summary, this large-scale assessment of the genetic architecture of puberty timing in males quantifies the extent of shared aetiology between sexes, extends the evidence implicating retinoic acid-related receptors in the regulation of puberty timing, and supports the relevance of puberty timing in both sexes to the aetiologies of various health-related outcomes.
Methods
Genome-wide association study for age at voice breaking. Genome-wide SNP data were generated from one or more of three genotyping arrays in up to 55,871 men aged 18 or older of European ancestry from the 23andMe study 10,12 , who reported their recalled age at voice breaking by online questionnaire in response to the question 'How old were you when your voice began to crack/deepen?' Participants answered into one of the predefined age bins (under 9, 9-10 years old, 11-12 years old, 13-14 years old, 15-16 years old, 17-18 years old, 19 years old or older), scored from 0 to 6. Genetic effect estimates from these 2-year bins were rescaled to 1-year effect estimates post analysis. We previously validated the accuracy of this approach by comparing re-scaled 2-year estimates for age at menarche (recorded in the same way) to those obtained from studies recording age at menarche by year. No significant heterogeneity was detected across these two approaches for known menarche loci 7 . 23andMe participants provided informed consent to take part in this research under a protocol approved by Ethical and Independent Review Services, an institutional review board accredited by the Association for the Accreditation of Human Research Protection Programs. Before imputation, we excluded SNPs with Hardy-Weinberg equilibrium P < 10⁻²⁰, call rate < 95%, or with large allele frequency discrepancies compared with European 1,000 Genomes reference data. Frequency discrepancies were identified by computing a 2 × 2 table of allele counts for European 1,000 Genomes samples and 2,000 randomly sampled 23andMe participants with European ancestry, and identifying SNPs with a χ² P < 10⁻¹⁵. Genotype data were imputed against the March 2012 'v3' release of the 1,000 Genomes reference haplotype panel. Genetic association results were obtained from linear regression models assuming additive allelic effects. These models included as covariates age and the top five genetically determined principal components to account for population structure. The reported SNP association test P values were computed from likelihood ratio tests. Results were further adjusted for a lambda GC value of 1.069 to correct for any residual test statistic inflation due to population stratification. LD score regression analysis 26 also confirmed that principal component correction had appropriately controlled for inflation due to population stratification (pre-GC calculated intercept ≈ 1) before the more conservative GC correction. Independent signals were identified using a combination of distance-based clumping and approximate conditional analysis. Firstly, regions were defined on the basis of physical proximity, with the most strongly associated SNP representing the association signal for that region. We then tested for the presence of multiple statistically independent signals in each region using approximate conditional analysis implemented in GCTA 27 . Independent signals indicated by SNP P values < 1 × 10⁻⁶ were considered in follow-up analyses. A signal was considered to be the same as a previously reported menarche locus if it had a pairwise r² > 0.05.
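As a concrete illustration of the genomic-control step described above, the following is a minimal Python sketch (not the authors' code) of deflating association test statistics by a lambda GC factor; the function name is illustrative and the example lambda value of 1.069 is taken from the text, everything else is an assumption.

```python
import numpy as np
from scipy.stats import chi2

def genomic_control(chi2_stats, lambda_gc=1.069):
    """Deflate chi-square association statistics by the genomic-control
    inflation factor and recompute P values (1 degree of freedom)."""
    adjusted = np.asarray(chi2_stats, dtype=float) / lambda_gc
    pvals = chi2.sf(adjusted, df=1)
    return adjusted, pvals

# Example: a likelihood-ratio statistic of 29.7 before correction
stats_adj, p_adj = genomic_control([29.7])
```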
Combined analyses with other puberty data sets. We followed up these selected SNPs in two additional sources of data: reported publicly available HapMap2 reference panel-imputed GWAS results for age at menarche from 182,416 women in the ReproGen consortium 5 ; and independent 1,000 genomes reference panel-imputed GWAS data for age at menarche in 76,831 women in the 23andMe study 7,10 .
These additional samples were considered in three analytical designs. Firstly, all data in women (n = 259,247) were combined to estimate effects of the individual selected SNPs on age at menarche, using fixed-effects inverse variance-weighted meta-analysis with all effect estimates reported on a per year scale. Secondly, for genetic correlation analyses with age at menarche, women in ReproGen consortium cohorts genotyped by the custom 'iCOGs' array were excluded to obtain a consistent sample size across GWAS SNPs (leaving data for analysis on 209,820 women). Thirdly, all GWAS results for puberty timing in men and women (n = 315,118) were combined using inverse variance fixed-effects meta-analysis. LD Score Regression 11,26 showed that combining GWAS data from men and women did not introduce substantial test statistic inflation due to possible relatedness between strata (cross-trait intercept 0.016, s.e. 0.005). Heterogeneity between men and women for individual SNP associations was quantified by the I² statistic generated by METAL software. SNPs that demonstrated heterogeneity were analysed in a random-effects model implemented by Han and Eskin 28 .
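To make the combination step concrete, here is a minimal sketch of the inverse variance-weighted fixed-effects meta-analysis and the I² heterogeneity statistic referred to above; the function and variable names are illustrative and the example effect estimates are invented, not taken from the study or from METAL.

```python
import numpy as np

def fixed_effects_meta(betas, ses):
    """Inverse variance-weighted fixed-effects meta-analysis of per-study
    effect estimates, plus Cochran's Q and the I^2 heterogeneity statistic."""
    betas, ses = np.asarray(betas, float), np.asarray(ses, float)
    w = 1.0 / ses**2                        # inverse-variance weights
    beta_meta = np.sum(w * betas) / np.sum(w)
    se_meta = np.sqrt(1.0 / np.sum(w))
    q = np.sum(w * (betas - beta_meta)**2)  # Cochran's Q
    df = len(betas) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return beta_meta, se_meta, i2

# Combining a hypothetical male (voice breaking) and female (menarche) estimate:
b, se, i2 = fixed_effects_meta([0.10, 0.03], [0.02, 0.01])
```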
Genetic correlations. Genetic correlations (r_g) were calculated between age at voice breaking in men, age at menarche in women, and 27 other complex traits/diseases in publicly available data sets using LD Score Regression 11,26 . Data sets used can be downloaded from http://www.med.unc.edu/pgc/downloads. Sample numbers in the studies of each trait are shown in Table 2. The only non-publicly available data set used was a polycystic ovary syndrome GWAS performed on 5,184 self-reported cases and 82,759 controls from the 23andMe study 29 . A conservative Bonferroni corrected P value threshold of P < 1.85 × 10⁻³ (= 0.05/27) was used to define significant associations. | 2017-04-02T00:12:21.847Z | 2015-11-09T00:00:00.000 | {
"year": 2015,
"sha1": "5347e85c201beb11032caaefa001288835833037",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/ncomms9842.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "20d30ae7d8a421f907bf8b84bd081d8f72844346",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
15040981 | pes2o/s2orc | v3-fos-license | Modulated inflation from kinetic term
We study modulated inflation from kinetic term. Using the Mukhanov-Sasaki variable, it is possible to determine how mixing induced by the kinetic term feeds the curvature perturbation with the isocurvature perturbation. We show explicitly that the analytic result obtained from the evolution of the Mukhanov-Sasaki variable is consistent with the $\delta N$-formula. From our results, we find analytic conditions for the modulated fluctuation and the non-Gaussianity parameter.
Introduction
Since the mass of the Higgs field in the Standard Model (SM) is much smaller than the Planck scale, it is natural to expect that there is a mechanism (solution to the hierarchy problem) that causes the mass of the scalar fields to be much lighter than the Planck scale.
In fact, string theory and supersymmetric models predict many light scalar fields whose expectation values determine the parameters of the low-energy effective action. During inflation, light fields (M_i) may lead to vacuum fluctuations that appear as classical random Gaussian inhomogeneities with an almost scale-free spectrum of amplitude δM_i. Since the wavelength of the fluctuations is stretched during inflation over the Hubble horizon after inflation, the vacuum fluctuations of the light fields can be related in various ways to the cosmological curvature perturbation in the present Universe. In this paper, we consider modulated inflation as one of the mechanisms that relate the isocurvature perturbation to the curvature perturbation of the Universe. The basic idea of modulated inflation is very simple. We first introduce modulated inflation mentioning the difference between modulated perturbation scenarios and multi-field (double) inflation.
We start with the conventional equation for the number of e-foldings elapsed during inflation,
$N = \int_{\phi_e}^{\phi_N} \frac{H}{\dot{\phi}}\, d\phi \equiv \int_{\phi_e}^{\phi_N} N_\phi\, d\phi$,   (1.1)
where φ_N is the value of the inflaton field φ corresponding to N e-foldings, and φ_e denotes the end-point of inflation. Using the δN-formula, we find that the fluctuation δφ_N = H_I/2π leads to the spectrum of the density perturbation
$\mathcal{P}_\zeta^{1/2} = N_\phi\, \frac{H_I}{2\pi} \simeq \frac{V}{M_p^2\, V'}\, \frac{H_I}{2\pi}$,
where M_p is the reduced Planck mass and H_I is the Hubble parameter during inflation. In addition to the standard inflation scenario in which δφ_N leads to the density perturbation, one may more generically expect that other scalar fields play a similar role. The first specific example of this was given by Bernardeau et al. [1] for modulated couplings in hybrid-type inflation, in which φ_e depends on light fields through moduli-dependent couplings. Lyth [1] considered a multi-inflaton model of hybrid inflation and found another realization of δφ_e-induced curvature perturbation: "generating the curvature perturbation at the end of inflation". More recently, we considered trapping inflation combined with inhomogeneous preheating and found a different mechanism for generating the curvature perturbation at the end of weak inflation [2], caused by the fluctuation of the number density (δn) of the preheating field.
In addition to modulated scenarios related to the perturbation δφ e , we have recently proposed a new modulated scenario, modulated inflation [3].
The integral form $N = \int_{\phi_e}^{\phi_N} N_\phi\, d\phi$ allows δN_φ to feed the curvature perturbation with the modulated perturbation continuously. Another way to see the source in the new scenario is to consider the conventional evolution of the curvature perturbation, $\dot{\mathcal{R}} \simeq -\frac{H}{\rho + p}\,\delta P_{\rm nad}$, where the pressure perturbation $\delta P \simeq \dot{\phi}\,\delta\dot{\phi}$ is generated by the modulated perturbation.
On the other hand, in the multi-field (double) inflation scenario [4], the change in the comoving curvature perturbation caused by the isocurvature perturbation δs is given by $\dot{\mathcal{R}} \simeq \frac{2H}{\dot{\sigma}}\,\dot{\theta}\,\delta s$, where σ denotes the adiabatic component and $\dot{\theta} \neq 0$ appears at the "bend" in the inflaton trajectory. This discriminates the source terms in modulated perturbation scenarios from multi-field (double) inflation. The term "modulated perturbation" [1,3] has been used to distinguish different origins of the cosmological perturbation. It might be confusing, but in a modulated perturbation scenario the light field (M) can be identified with an additional inflaton field. In this case, there can be two different sources: the "bend" and the modulated perturbations. This approach is useful for brane inflation, since there can be several directions for the moving brane, as well as for the target brane.
[Figure caption: The inflaton evolves during inflation from φ = φ_N till the end φ = φ_e. Left: the curvature perturbation is generated due to the modulated fluctuation δφ_e that leads to δt_N. Right: the fluctuation is related to the modulated fluctuation δφ.]
As we mentioned above, there is a crucial difference between multi-field (double) inflation [4] and modulated perturbation scenarios [1,3] in the mechanism that converts the isocurvature perturbation into the curvature perturbation. However, the difference is not obvious when a light field appears in the inflaton kinetic term. In section 2, we examine the differences and the similarities between the two scenarios when a light field appears in the inflaton kinetic term.
Modulated inflation from kinetic term
We introduce a light field M and consider the kinetic term for the inflaton φ,
$\mathcal{L}_{\rm kin} = -\frac{\omega(M)}{2}\,\partial_\mu\phi\,\partial^\mu\phi$,
where $\kappa^2 = 8\pi G$, G is Newton's gravitational constant, and ω(M) is a function of the moduli. The kinetic term may be given by a more general function, since any term that is not forbidden by symmetry may appear in the effective action. However, for the effective action during inflation and a kinetic term that can be approximated by a series expansion, we may disregard terms proportional to higher powers (∇φ)ⁿ. Of course, the above approximation is not always valid, but we consider such an action so that we can follow Ref. [8] in what follows. 4 We first consider a separable potential, $V(\phi, M) = V(\phi) + X(M)$, where the prime denotes the derivative of the potential with respect to the corresponding field.
Note that we are not considering double inflation in which the isocurvature perturbation feeds the curvature perturbation mainly when there is a sharp bend in the trajectory.
The bend occurs if the additional inflation stage is induced by the secondary inflaton field. Instead, we consider modulated inflation, in which the modulated perturbation of the inflaton velocity sources the curvature perturbation. We first consider the evolution of the curvature perturbation, paying attention to the source terms that can feed the curvature perturbation in different ways.
4 See Ref. [6] for more specific examples. In Ref. [6], Ringeval et al. considered multi-field inflation in a brane model in which the specific form of the kinetic term is obtained from string theory.
5 Note that the former is identical to the latter when b(M) is given by a logarithmic function.
Evolution of the curvature perturbation
Recently, cosmological perturbation in two-field inflation with a light field appearing in the inflaton kinetic term has been studied by Lalak et al. [8]. They calculated the spectra of curvature and isocurvature modes at the Hubble crossing and computed numerically the evolution of the curvature and isocurvature modes, showing how isocurvature perturbations significantly affect the curvature perturbation after Hubble crossing. Our first task is to obtain the analytic form of the curvature perturbation generated by the constant source term that can be related to modulated inflation. We mainly follow Ref. [8] and consider the Mukhanov-Sasaki variable. At this point, we do not assume coincidence of the calculation given in Ref. [8] and the δN-formula, since the source term that appears in the δN-formula has been disregarded in the analogous calculation when the inflaton has the standard kinetic term [4]. As will be shown later in the paper, the calculation is very simple in the δN-formalism. For convenience, we follow the notations in Ref. [8], with a slight difference. In Ref. [8], the light field M and the inflaton φ are denoted by φ and χ, respectively, and the coefficient ω(M) is given by an exponential $\omega = e^{b(\phi)}$. Since we are considering modulated inflation, the adiabatic component σ is essentially the same as the inflaton φ. Following the definitions in Ref. [8], we thus find for modulated inflation that the trajectory is characterised by the mixing angle θ between the light-field and inflaton directions, where θ is a constant in modulated inflation, but changes abruptly when there is a sharp bend in double inflation. We do not consider the case when there is a sharp bend. The
Mukhanov-Sasaki variable is defined by
$Q_\sigma \equiv \delta\sigma + \frac{\dot{\sigma}}{H}\,\Phi$,
where Φ is the metric perturbation. Using the slow-roll approximations, the equation of motion for the perturbation gives [8]
$\dot{Q}_\sigma \simeq A H Q_\sigma + B H \delta s$,   (2.6)
where δs = δM for modulated inflation. 6 The coefficients A, B and D, and the quantity ξ that enters them, are given in Ref. [8] in terms of the slow-roll parameters. 7 The source of the isocurvature feeding appears in Eq. (2.6) in the term proportional to B. In double inflation, there is significant feeding by the isocurvature perturbation if there is a sharp bend in the trajectory. The sharp bend leads to a large dθ/dN, as has been discussed for double inflation [9]. In our scenario of modulated inflation, we do not expect such a bend in the trajectory. The main source in modulated inflation is the term proportional to ξ, which is small compared with dθ/dN at the bend, but gives a constant feeding and may become significant after integration. Because of the integration, the correction will be proportional to the number of e-foldings. Disregarding the first term in Eq. (2.6), we find
$\Delta Q \sim \int B H \delta s\, dt \sim B\,\delta s \int H\, dt \sim -2\xi N \delta s$,   (2.9)
which gives a simple result for the comoving curvature perturbation,
$\Delta\mathcal{R} \sim \frac{H}{\dot{\sigma}}\,\Delta Q \sim -2\xi N\, \frac{H}{\dot{\sigma}}\,\delta s$.   (2.10)
These approximations are valid only if the time-dependence of the slow-roll parameters is small between Hubble crossing and the end of inflation. In the next subsection, we examine the slow-roll conditions used in the calculation [8].
Note that an inflation model with vanishing Q_σ at the horizon crossing does not cause a serious problem in the modulated inflation scenario. Modulated inflation involves continuous feeding with isocurvature perturbation, raising the curvature perturbation after horizon crossing. The feeding mechanism operates even if the standard inflaton perturbation is very small at the horizon crossing.
6 Note that the constant source term proportional to Hδs has been disregarded in Ref. [4]. Therefore, the absence of this term discriminates between modulated inflation [3] and multi-field inflation for standard kinetic terms, as discussed in the introduction. On the other hand, the term proportional to Hδs appears in the analogous calculation when there is a light field appearing in the inflaton kinetic term [8]. In this paper we examine the validity of this approach and its meaning by comparing with the δN-formalism.
7 For the definitions of the η-parameters, see Ref. [8].
δN-formalism
Our second task is to find the analytic result for the curvature perturbation following the δN-formalism. Variation of the action leads to the equations of motion for φ and M [10]. Following Ref. [8], we find the velocity of the slow-roll inflaton field to be
$\dot{\phi} \simeq -\frac{V'(\phi)}{3 H_I\, \omega(M)}$,   (2.12)
where the approximation is valid if $|\omega' \dot{M}/(3 H_I \omega)| \ll 1$. Because of the additional term proportional to $\dot{\phi}^2$ in the equation of motion for M, the slow-roll condition for the light field M is modified. If the potential X for the field M is very flat (X′ ≃ 0), the condition reduces to a requirement on the kinetic source term $\omega' \dot{\phi}^2$; this is not a condition on the potential, but is needed to ensure the slow motion of the field M during inflation. Otherwise, we find the conventional slow-roll condition ε_M ≪ 1 when the term proportional to $\dot{\phi}^2$ is negligible compared with X′.
In the modulated inflation scenario, the analytic calculation is very simple in the δN-formalism [3]. We follow Ref. [3] and start with the following formula [13]:
$N_\phi\, \dot{\phi} = -H_I$,   (2.15)
where the subscript denotes the derivative with respect to the corresponding field. Using Eq. (2.12), we find
$N_\phi = \frac{3 H_I^2\, \omega(M)}{V'(\phi)}$.
The meaning of this equation is clear. Considering φ(N) as the "time" during inflation, N_φ gives the rate of change in the number of e-foldings. If N_φ is perturbed by the modulated fluctuation (δω ≠ 0), it leads to δN after the "time"-integration. If ω is a constant during inflation, we find
$\delta N = \frac{\delta\omega}{\omega}\, N$.
The fluctuation of the light field δM thus gives
$\delta N = \frac{\omega'}{\omega}\, N\, \delta M$.
The δN-formula is a very powerful tool in calculating the curvature perturbation in this set-up, since we can obtain the analytic result without paying special attention to the curvature-isocurvature mixing during inflation. The fact that the two different calculations lead to an identical result is an important finding, since for standard kinetic terms there was no such correspondence.
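To make the logic explicit, the chain of substitutions behind the δN result can be written out as a short derivation; this is a sketch consistent with the reconstructed Eqs. (2.12) and (2.15) above, under the assumption that ω varies slowly enough to be treated as constant over the observable e-folds.

```latex
% Sketch of the modulated \delta N, assuming \omega \approx const. during inflation
\begin{align}
  N_\phi &= -\frac{H_I}{\dot{\phi}} \simeq \frac{3 H_I^2\,\omega(M)}{V'(\phi)}
  && \text{(from Eqs. (2.12) and (2.15))} \\
  \delta N &= \int_{\phi_e}^{\phi_N} \delta N_\phi\, d\phi
            = \frac{\delta\omega}{\omega} \int_{\phi_e}^{\phi_N} N_\phi\, d\phi
            = \frac{\omega'}{\omega}\, N\, \delta M .
\end{align}
```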
Finally, we examine the conditions related to the cosmic microwave background (CMB) spectra. The condition for the modulated perturbation to dominate the curvature perturbation is
$N_M\, \delta M > N_\phi\, \delta\phi$.   (2.19)
If the modulated perturbation dominates the curvature perturbation, the non-Gaussianity parameter can be large. 8 Note that the origin of this parameter is different from that obtained in double inflation with a sharp bend [14]. In this case the value of the non-Gaussianity parameter is given by [13,12,1]
$\frac{6}{5}\, f_{nl} \simeq \frac{N_{MM}}{N_M^2}$.   (2.20)
Using these results, we will examine the curvature perturbation in specific models of inflation.
Modulated inflation with a scalar field coupled to gravity 1
In Ref. [3], we mentioned that the fluctuation of the Planck mass may lead to the generation of curvature perturbation. In fact, the moduli-dependent Planck mass M p (M) can be seen as generalized scalar-tensor theory (Einstein gravity with a non-minimally coupled massless scalar field M), which leads to the M-dependence of the kinetic term after conformal transformations. Using the above analyses, it is possible to show that the isocurvature perturbation related to the modulated Planck scale can feed the curvature perturbation after horizon crossing. After conformal transformation, the potential depends on the scalar field M if there is no artificial cancellation.
Let us first examine the conditions for modulated inflation in the model in which M̂ is non-minimally coupled with the scalar curvature [7], with the gravitational part of the action proportional to $(1 - \beta\,\kappa^2 \hat{M}^2)\,R$. The quantity in brackets is positive as far as β ≪ 1 and M̂ ≤ M_p. Note that an induced coupling to the Ricci scalar may arise from one-loop gravity corrections. After the conformal transformation $g_{\mu\nu} = \Omega^2\,\hat{g}_{\mu\nu}$ (2.22) with $\Omega^2 = 1 - \beta\,\kappa^2 \hat{M}^2$, we find the action for the inflaton kinetic term with $\omega(M) = \Omega^{-2}$, where M is redefined so that its kinetic term is canonical. The conformal transformation leads to the effective potential $V_{\rm eff} = \Omega^{-4}\, V \simeq (1 + 2\beta\,\kappa^2 M^2)\, V$, where we assumed βκ²M² ≪ 1 in the last approximation. The model has been studied by Tsujikawa et al. in Ref. [7] by numerical computation. However, the parameter space where the modulated perturbation is significant was "ruled out", owing to the fact that the modulated perturbation leads to distortions in the inflaton perturbation. In fact, in modulated inflation we consider the situation where the modulated perturbation dominates the inflaton perturbation, which may be seen as a "distortion". However, the important point is that the modulated perturbation can lead to a successful generation of the CMB spectra if appropriate conditions are satisfied. Modulated inflation gives a good approximation when the field M satisfies the slow-roll condition. In this case we use the approximation |β| ≪ 1 and δM ≃ δφ. If there is no fine-tuning between the slow-roll parameters, the spectral index suggests that |ε| ≤ |β| ∼ O(10⁻²) or |β| ≤ |ε| ∼ O(10⁻²).
We thus conclude that the ratio is at most N_M/N_φ ∼ 0.1 for M ∼ M_p. Note that the above condition is valid only when there is no amplification or suppression of the fluctuations δφ or δM during inflation. We next consider fast-roll inflation [15] with η_φφ ≥ 1. The fluctuation of φ does not produce classical perturbations when η_φφ is larger than unity. For fast-roll (hybrid) inflation with a quadratic potential ∼ ±m²φ²/2, the velocity of the inflaton field is given by $\dot{\phi} \simeq F(\omega)\, H\, \phi$ [15], where the function F(ω) encodes the dependence on the kinetic coefficient ω. Using this result, we find the fluctuation of the number of e-foldings to be $\delta N \simeq -\frac{F'(\omega)}{F(\omega)}\, N\, \delta\omega$, where the prime denotes the derivative with respect to ω. Considering the term proportional to $\dot{\phi}^2$, the slow-roll condition for M is again the requirement that the kinetic source term does not spoil the slow motion of M. In this case the modulated perturbation is the main source of the curvature perturbation, and the non-Gaussianity parameter f_nl can be large.
Modulated inflation with a scalar field coupled to gravity 2
The theory with a light scalar field M̂ coupled to gravity can be given by a scalar-tensor action specified by functions f(M̂) and g(M̂) multiplying the Ricci scalar and the kinetic term of M̂, respectively. Considering Jordan-Brans-Dicke theory [16], f ∝ M̂² and g is fixed in terms of the so-called Brans-Dicke parameter ω_BD. 9 After a conformal transformation, the action in the Einstein frame again contains an inflaton kinetic term of the form considered above, with the dimensionless constant β determined by ω_BD.
Conclusions and discussions
In this paper, we have studied modulated inflation from the kinetic term. We showed explicitly that the analytic result obtained from the evolution of the Mukhanov-Sasaki variable is consistent with the δN-formula. Using the simple formula obtained in this paper, we found an analytic condition for modulated inflation. We also found an analytic formula for the non-Gaussianity parameter in modulated inflation.
In modulated inflation, the integral of the source term related to the perturbation of the inflaton velocity leads to the curvature perturbation. We consider modulated inflation to be an alternative to conventional inflation, in the sense that it saves inflation when the conventional inflaton perturbation fails to generate the CMB spectra. For example, observation of a large non-Gaussianity parameter (if confirmed) can be a problem for single-field inflation. According to Yadav et al. [21], the WMAP 3-year data may not favor single-field slow-roll inflation. 10 Note also that the string η-problem may prevent a successful standard single-field inflation scenario. There are many attempts in this direction. A curvaton [17] generates the curvature perturbation long after inflation and saves inflation when the inflaton perturbation does not lead to the successful generation of the cosmological perturbation. The observation of a low-energy gravitational effect at the Large Hadron Collider (LHC) may put a strong upper bound on the inflation scale, but the bound can be evaded in many ways [18,19]. Modulated perturbation may lead to inhomogeneous preheating [2] or inhomogeneous reheating after inflation [20]. Note that inhomogeneous preheating can work with a low inflation scale. The string η-problem may be solved by one of these alternatives [5]. Note that modulated inflation is consistent with fast-roll inflation. Moreover, these models are consistent with a large non-Gaussianity. 10 Non-Gaussianity may be generated at preheating [23].
Acknowledgment
We wish to thank K.Shima for encouragement, and our colleagues at Tokyo University for their kind hospitality. | 2008-05-03T08:01:42.000Z | 2008-04-21T00:00:00.000 | {
"year": 2008,
"sha1": "5017c7f0d3cc30f3b44ee1184336c7cdcb46fd05",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0804.3268",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "5017c7f0d3cc30f3b44ee1184336c7cdcb46fd05",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
228823566 | pes2o/s2orc | v3-fos-license | A comparison of methods for finding magnetic nulls in simulations and in situ observations of space plasmas
Magnetic nulls are ubiquitous in space plasmas, and are of interest as sites of localized energy dissipation or magnetic reconnection. As such, a number of methods have been proposed for detecting nulls in both simulation data and in situ spacecraft data from Earth's magnetosphere. The same methods can be applied to detect stagnation points in flow fields. In this paper we describe a systematic comparison of different methods for finding magnetic nulls. The Poincare index method, the first-order Taylor expansion (FOTE) method, and the trilinear method are considered. We define a magnetic field containing fourteen magnetic nulls whose positions and types are known to arbitrary precision. Furthermore, we applied the selected techniques in order to find and classify those nulls. Two situations are considered: one in which the magnetic field is discretized on a rectangular grid, and the second in which the magnetic field is discretized along synthetic `spacecraft trajectories' within the domain. At present, FOTE and trilinear are the most reliable methods for finding nulls in the spacecraft data and in numerical simulations on Cartesian grids, respectively. The Poincare index method is suitable for simulations on both tetrahedral and hexahedral meshes. The proposed magnetic field configuration can be used for grading and benchmarking the new and existing tools for finding magnetic nulls and flow stagnation points.
Introduction
Astrophysical plasmas are typically characterised by high magnetic Reynolds numbers, and their magnetic fields are found to exhibit a complex structure on a range of scales. For example, observations from missions studying the Earth's magnetosphere (Cluster (Escoubet et al. 2001) and the Magnetospheric Multiscale (MMS) mission (Burch et al. 2016)) show highly fluctuating fields both in the magnetotail ) and magnetosheath (Retinò et al. 2007). Extrapolations of the solar coronal magnetic field based on photospheric magnetograms similarly show enormous complexity in the magnetic connectivity between photospheric flux fragments (e.g. Schrijver & Title 2002; Close et al. 2003). In order to understand the detailed dynamics of such highly complex fields, we need to identify the features of the magnetic field at which localised energy conversion - typically mediated by magnetic reconnection - takes place. A number of methods have accordingly been proposed to infer the existence of magnetic null points, both in simulations and in spacecraft data.
Methods for finding topological singularities and other special features are becoming increasingly important for researchers working with huge amounts of simulation and observational data. A topological analysis is extremely useful for observers, as specific features (e.g. magnetic null points) are likely locations for energetic events in the Sun or the Earth's magnetosphere (for appropriate external perturbations). The same analysis allows us to distinguish the important subsets of the huge amounts of data collected from satellites or telescopes. A topological analysis serves two main purposes when applied to numerical simulations: it allows the identification and classification of the data sets (or simulation sub-domains) of potential interest, and it could also be used for data compression. Finally, as discussed above, in both observations and simulations, certain topological features have very important physical implications and serve as a framework to understand the physical processes that drive the observed dynamics.
This paper addresses a subclass of topological analysis techniques, namely the identification of the stagnation points of 3D vector fields. In the present discussion (around space and astrophysical plasmas), we mainly discuss the topology of the magnetic field. The same analysis, however, is applicable to other vector fields, such as the flow velocity, so long as the field is divergence-free. We designed and undertook a 'challenge' to compare the performance of different null finding approaches, to understand possible deficiencies and weaknesses of several pieces of software. Limited comparisons were previously made between different methods, but they either did not include as many methods, or they did not include a 'ground truth' in which the exact existence and positions of the nulls are known (Haynes & Parnell 2007; Eriksson et al. 2015). Our aim is to understand how the choice of method could possibly influence the analysis of observations or simulations, and what each method's strengths and weaknesses are. We compare the most popular methods in the literature that can be automated to quickly analyse many different instances of input data (see Section 3).
Theoretical background
2.1. Field structure in the vicinity of a magnetic null
Magnetic nulls are locations in space at which the magnetic field is zero, and in the generic case this occurs at isolated points. The structure of the magnetic field in the vicinity of these points can be characterised by linearising the field about the point. We note that for any generic (stable) null this linearisation is non-zero, and the local topology of the field linearisation can be shown to be the same as the local topology of the field itself - see Hornig & Schindler (1996). The eigenvalues and eigenvectors of the magnetic field Jacobian ∇B at the null determine the spine-fan structure of the field, as described in detail in Fukao et al. (1975); Parnell et al. (1996). Since ∇ · B = 0, the eigenvalues sum to zero. The eigenvectors associated with the two same-sign eigenvalues locally define a plane in which magnetic field lines approach or recede from the null, known as the fan surface (or Σ-surface). The remaining eigenvector defines the direction of the spine line (or γ-line), along which field lines recede from or approach the null. If the same-sign eigenvalues have negative real parts, the null has topological degree +1 - in the literature this is sometimes termed either an A-type null or negative null. If the same-sign eigenvalues have positive real parts, the topological degree is −1, and we have a B-type null or positive null. One further pertinent distinction is between nulls for which all three eigenvalues are real (radial nulls), and those for which two eigenvalues are complex conjugates (spiral nulls). In the latter case, the field lines form a spiral pattern in the fan surface, and nulls are sometimes denoted as being of A_s- or B_s-type (this occurs when a sufficiently strong component of electric current is present parallel to the spine line).
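As a concrete illustration of this classification, the following is a minimal Python sketch (our own, under the conventions just stated, and assuming a generic null with non-zero real parts) that labels a null from the eigenvalues of ∇B.

```python
import numpy as np

def classify_null(grad_b):
    """Classify a magnetic null from its 3x3 Jacobian grad(B).
    The two eigenvalues whose real parts share a sign span the fan:
    negative real parts give an A-type (degree +1) null, positive
    give a B-type (degree -1) null. A complex-conjugate pair marks
    a spiral null; all-real eigenvalues mark a radial null."""
    lam = np.linalg.eigvals(grad_b)
    spiral = np.any(np.abs(lam.imag) > 1e-12 * np.max(np.abs(lam)))
    # Real parts sum to zero, so the sign of the majority (the fan pair)
    # is the sign of the sum of the three sign values.
    fan_sign = 1 if np.sum(np.sign(lam.real)) > 0 else -1
    ntype = 'A' if fan_sign < 0 else 'B'
    return ntype, ('spiral' if spiral else 'radial')
```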
Test field for the challenge
The magnetic configuration used to test the null finding methods is based on a triply-periodic field that has previously been used to initiate turbulence simulations (Politano et al. 1995). To this field various perturbations are added in order to make the disposition of null points less 'regular'. Some of these perturbations take the form of 'flux rings', that are inserted in such a way as to lead to a pitchfork bifurcation of one of the pre-existing nulls, leading to the creation of two additional nulls (as in e.g. Wyper & Pontin 2014). This is done in such a way that all nulls can be accounted for based on theoretical arguments. Following the addition of the perturbations as described, the exact null locations can no longer be obtained analytically. Instead they are obtained using Newton's method. Since the field itself is still known analytically, the null location can still be found to arbitrary precision. Further, since the field is prescribed analytically, the Jacobian of B can also be calculated exactly at these points, and thus the topological degree of the null and the local orientation of the spine line and fan surface can be determined as above. Details for constructing the magnetic field are presented in the Appendix A. The null points and their spine and fan structures are represented within the volume of interest (x, y, z ∈ [−π/2, 3π/2]) in Figure 1.
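For reference, locating a null of an analytically known field by Newton's method, as described above, can be sketched as follows; this is illustrative, and B_func and jac_func stand for the analytic field and its Jacobian, which are assumptions here since the actual test field is defined in the paper's Appendix A.

```python
import numpy as np

def newton_null(B_func, jac_func, x0, tol=1e-12, max_iter=50):
    """Newton iteration x <- x - J(x)^-1 B(x) to locate an isolated
    magnetic null of an analytically known field to high precision.
    B_func(x) -> field vector, jac_func(x) -> 3x3 Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        b = B_func(x)
        if np.linalg.norm(b) < tol:
            return x
        x = x - np.linalg.solve(jac_func(x), b)
    return x  # best estimate after max_iter steps
```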
Data sets used to test methods
The different null finding methods are described in the following section. These are designed to be used to find nulls on either hexa-or tetrahedral meshes of data points (obtained from numerical simulations), or on time-series of quadruplets of measurements (taken by the Cluster or MMS spacecraft). We therefore generate two different types of data sets from the model magnetic field. In the first case, we evaluate the magnetic field components on a rectilinear grid of points with various different resolutions. In the second, we define four trajectories through the domain, evaluating the magnetic field components at a discrete set of points along these trajectories (see Fig. 1). These points are chosen in each instance to lie at the corners of a regular tetrahedron, to mimic typical spacecraft configurations. The trajectories are designed so as to pass close to some of the null points, and further from others -see the Appendix A for details. We compare the results for three different sizes of tetrahedron (corresponding to different spacecraft separations).
The magnetic field as defined in Appendix A is dimensionless. Particularly in the context of the trajectory data from hypothetical spacecraft tetrahedra, it is relevant to compare with physical length scales. One possible dimensionalisation would be to consider our domain to be equivalent to the largest fully kinetic simulations afforded by present-day codes and supercomputers, since these were recent data sets on which null finders were applied. Those simulations consider domains extending for tens of ion inertial lengths d_i (Pucci et al. 2017; Olshevsky et al. 2018). If we suppose that the domain size is 20 d_i in each dimension, then our 'small-scale' and 'medium-scale' tetrahedra have spacecraft separations of 0.13 d_i and 0.63 d_i, respectively. The inter-spacecraft separation of the MMS constellation can change between 5 and 80 km, which corresponds to 0.005 − 0.08 d_i in the magnetotail, and 0.05 − 0.8 d_i in the magnetopause. Hence, our 'small-scale' tetrahedron aims at resembling the electron diffusion region scales covered by the MMS mission. The inter-spacecraft separation of the Cluster mission varies between 200 and 2500 km, resembling 0.2 − 2.5 d_i in the magnetotail, and 2 − 25 d_i in the magnetopause. This larger separation - of the order of the ion diffusion region - dictates the choice of the 'medium' and 'large' scales for our study.
Methods
This section describes the three methods that we have compared, their theoretical formulation and implementation.
Poincaré index method

Theoretical formulation
The problem of locating a magnetic null is essentially a problem of finding a root of a continuous divergence-free vector field. The Poincaré index or topological degree method for finding such roots was introduced by Greene (1992). This technique has been applied to various kinetic simulations by Olshevsky et al. (2015, 2016) and to spacecraft observations by Eriksson et al. (2015) and Xiao et al. (2006). The key assumption of the method consists in the linearity of the field around a null, therefore the field in the neighbourhood of the null is given by
$B_i(\mathbf{x}) = (\nabla B)_{ij}\,(x_j - x_{j0})$,   (1)
where the summation is implied over repeated indices, $x_{j0}$ denote the coordinates of the null, and $(\nabla B)_{ij} = \partial B_i/\partial x_j|_{x_j = x_{j0}}$ is the magnetic field gradient, a constant 3 × 3 matrix. The topological degree of the field in the specified volume of space is represented by the following sum over all nulls:
$D = \sum_k \mathrm{sign}(\lambda_{1k}\,\lambda_{2k}\,\lambda_{3k})$,   (2)
where $\lambda_{1k}, \lambda_{2k}, \lambda_{3k}$ are the eigenvalues of ∇B at the k-th null. As mentioned above, in the generic case nulls do not degenerate (in reality they can be degenerate only at one instant of time during a bifurcation process), and all three eigenvalues are non-zero. The implication of this fact is that magnetic nulls are isolated. As the topological degree is strongly conserved, it provides a measure of the difference between the number of positive and the number of negative nulls in the given volume of space. If the volume of space is sufficiently small, one can assume it encloses exactly one null if D ≠ 0. The topological degree over a region of space can be evaluated from the field on the surface which encloses this region of space. Typically the magnetic field is given at the vertices of a cell, either hexahedral or tetrahedral. Each face of the cell is split into triangles (see Fig. 8c in Fu et al. (2015)), and the field in the centre of each triangle is interpolated from its corners. In this way, we translate from the 'configuration space' into the 'magnetic field space'. To find out if the cell's surface encloses a null of the magnetic field, each triangular face is mapped onto a unit sphere in the magnetic field space. The area of each triangle's projection on the unit sphere is given by the solid angle subtended by the three magnetic field vectors. The sum of the areas of all triangles (4 for a tetrahedron and 12 for a hexahedron), divided by 4π, gives the number of times the triangles cover a unit sphere in the magnetic field space. This is the sum of the signs of all the nulls of the field inside the sphere (see Eq. (2)). We note that each area has a sign, and it is important to observe the order of vertices in the triple product B_1 · B_2 × B_3 to get the sign correctly. The 'plus' sign corresponds to the outward directed flux, while the 'minus' refers to the inward field flux.
For our implementation of the Poincaré index method we use the formula for the solid angle subtended by three vectors proposed by van Oosterom & Strackee (1983):
$\tan\left(\frac{\Omega}{2}\right) = \frac{\mathbf{B}_1 \cdot (\mathbf{B}_2 \times \mathbf{B}_3)}{B_1 B_2 B_3 + (\mathbf{B}_1\cdot\mathbf{B}_2)\,B_3 + (\mathbf{B}_1\cdot\mathbf{B}_3)\,B_2 + (\mathbf{B}_2\cdot\mathbf{B}_3)\,B_1}$,   (3)
where $B_i = |\mathbf{B}_i|$. Evaluation of the solid angle this way is faster and more stable than the conventional use of the Cosine theorem. In particular, there is no need for zero-denominator checks when using modern programming languages, as errors are handled by the arctan2 function. Once a cell which encloses a null is found, it is possible to get a more precise estimate of the null location inside this cell using the Secant theorem (Greene 1992). However, as noted by Greene (1992), this estimate may often be misleading, even giving locations outside the cell. Our experiments confirmed this problem, therefore it is more practical to assume the null is located in the centre of mass of the cell.
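To make the procedure concrete, here is a minimal Python sketch (not the published Bitbucket implementation) of Eq. (3) and of the resulting topological degree of a tetrahedral cell; the face vertex ordering that fixes the outward orientation is an assumption and must match the mesh convention in practice.

```python
import numpy as np

def solid_angle(b1, b2, b3):
    """Signed solid angle subtended by three field vectors
    (van Oosterom & Strackee 1983), evaluated with arctan2 so that
    no explicit zero-denominator checks are needed."""
    n1, n2, n3 = (np.linalg.norm(b) for b in (b1, b2, b3))
    numer = np.dot(b1, np.cross(b2, b3))
    denom = (n1 * n2 * n3 + np.dot(b1, b2) * n3
             + np.dot(b1, b3) * n2 + np.dot(b2, b3) * n1)
    return 2.0 * np.arctan2(numer, denom)

def tetra_degree(B):
    """Topological degree of a tetrahedral cell from the field at its
    four vertices, B with shape (4, 3). The face triples below are an
    assumed consistent outward-orientation convention."""
    faces = [(0, 1, 2), (0, 3, 1), (0, 2, 3), (1, 3, 2)]
    total = sum(solid_angle(B[i], B[j], B[k]) for i, j, k in faces)
    return total / (4.0 * np.pi)   # ~ +1, -1, or ~0 (no null enclosed)
```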
The topological classification of a null is straightforward on hexahedral cells where finite differences can be used to deduce the magnetic field Jacobian. A technique for ∇B computation in tetrahedral cells is given in Khurana et al. (1996).
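As an illustration of the four-point gradient estimate for the tetrahedral case referenced above, the following sketch solves the linear system relating field differences to position differences; it is a generic least-squares estimate, not the specific scheme of Khurana et al. (1996).

```python
import numpy as np

def grad_b_four_points(positions, fields):
    """Estimate the 3x3 Jacobian grad(B) from measurements at the four
    vertices of a tetrahedron. positions, fields: arrays of shape (4, 3).
    Rows satisfy dB[m, i] = sum_j G[i, j] * dX[m, j], with differences
    taken relative to vertex 0."""
    dX = positions[1:] - positions[0]   # (3, 3) position differences
    dB = fields[1:] - fields[0]         # (3, 3) field differences
    G_T, *_ = np.linalg.lstsq(dX, dB, rcond=None)
    return G_T.T                        # G[i, j] ~ dB_i / dx_j
```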
The null-finder based on the Poincaré index method 1 combines magnetic field measurements into a set of either 4 or 8 magnetic field vectors given at the vertices of a cell. It computes the topological degree and returns either a very small number close to zero (meaning no null is present inside the cell), or a number close to 1 or −1, meaning there is a null inside the cell. In practice, the thresholds of being 'close to zero' or 'close to one' are regulated by the numerical precision. Similarly to the trilinear method described in Section 3.2, only those grid cells in which none of the components of the magnetic field have the same sign at all vertices are selected for analysis (the alternative being incompatible with the existence of a null in the cell). If at least one component of the magnetic field has the same sign at all the vertices, the field cannot go to zero inside this cell (in the linear approximation). This pre-selection reduces the computation cost dramatically in a typical simulation or observation, where only a fraction of measurements contain field nulls.
Trilinear method

Theoretical formulation
The trilinear method for finding the locations of null points in a numerical grid under the trilinear assumption was originally formulated by Haynes & Parnell (2007). The algorithm described below differs from Haynes & Parnell (2007) by using a deca-section method (like the bisection method) rather than the Newton-Raphson method for converging on the location of the null points. There are three stages to the method: the reduction stage, the bilinear stage and the sub-grid stage.
The reduction stage just reduces the amount of work done by the algorithm in the bilinear stage. Each grid cell is checked in turn for a change in sign in the magnetic field. Essentially, a grid cell cannot contain a null point under the trilinear assumption if, for any component of the field, all 8 values at the grid cell vertices are of the same sign (see above). The bilinear stage actually checks for the possibility of a null point within a grid cell. The zero isosurfaces (B_i = 0) of the three components of the magnetic field will intersect at a null point if a null exists. This triple intersection is difficult to find directly numerically. However, the zero isosurfaces of two of the three components of the magnetic field will intersect to form a line on which the null point must lie. This line must also intersect with the grid cell faces. The magnetic field on these grid cell faces is only bilinear, and therefore the locations of the intersection points of this line and the cell faces (say P_1 and P_2) can be found analytically. The values of the third component of the magnetic field (unused to form the line) can be found at P_1 and P_2: if a null point exists, then this third magnetic field component must be of opposite signs at P_1 and P_2. By using this test, the algorithm can detect which grid cells may contain null points. The final, sub-grid stage simply repeats the first two stages at sub-grid cell resolution to identify the locations of the null points at the required accuracy. Each null-containing grid cell is split into a new grid of 10 × 10 × 10 cells where the magnetic field values are found using the trilinear assumption, and the reduction and bilinear methods are applied to these smaller cells. This process of splitting each cell is repeatedly applied until the desired accuracy in the location is obtained.
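A minimal sketch of the reduction-stage test described above (illustrative, not taken from the Magnetic Skeleton Analysis Tools, which are written in Fortran): a cell is kept as a null candidate only if every field component changes sign somewhere among its eight vertices.

```python
import numpy as np

def may_contain_null(cell_B):
    """Reduction-stage filter for one hexahedral cell.
    cell_B: array of shape (8, 3), field vectors at the 8 vertices.
    Returns False if any component keeps one sign at all vertices,
    in which case the cell cannot contain a null under the
    trilinear assumption."""
    for comp in range(3):
        v = cell_B[:, comp]
        if np.all(v > 0) or np.all(v < 0):
            return False
    return True
```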
1 https://bitbucket.org/volshevsky/magneticnullchallenge

The methods used for detecting the sign of the null points use a convergence-style method. They are fully detailed in Williams (2018). A field line about a null point can be written as
$\mathbf{r}(s) = a_1 e^{\lambda_1 s}\,\mathbf{e}_1 + a_2 e^{\lambda_2 s}\,\mathbf{e}_2 + a_3 e^{\lambda_3 s}\,\mathbf{e}_3$,   (4)
where $\lambda_i$ and $\mathbf{e}_i$ are the corresponding eigenvalues and eigenvectors of M = ∇B evaluated at the null point, and $a_i$ are constants. By repeated multiplication of equation (4) by M (and assuming that $\lambda_1$ is the eigenvalue corresponding to the eigenvector associated with the spine line), we obtain
$M^n\,\mathbf{r} \to a_1\,\lambda_1^n\,\mathbf{e}_1 \quad (n \to \infty)$,   (5)
when $\lambda_1$ dominates in magnitude, as it does for radial nulls. This allows the eigenvector associated with the spine line to be identified. This convergence is used by the Sign Finder to classify the signs of the null points. This also identifies the eigenvectors associated with the fan plane. However, the Sign Finder does not find the values of the eigenvalues of the null point or identify whether it is a spiral null point. If this information is desired, it must be found by an alternative method.
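A sketch of the convergence idea in Python (illustrative; the actual Sign Finder in Williams (2018) is more careful, in particular about spiral nulls, where the spine eigenvalue need not dominate): power iteration on M = ∇B picks out the dominant eigendirection, and the sign of the associated eigenvalue then gives the topological degree for a radial null.

```python
import numpy as np

def spine_and_sign(M, n_iter=100, seed=0):
    """Power iteration on the null-point Jacobian M = grad(B).
    For a radial null the spine eigenvalue dominates in magnitude,
    so the iteration converges to the spine direction. A positive
    spine eigenvalue means the two fan eigenvalues are negative,
    i.e. topological degree +1 (A-type); negative means B-type."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(3)
    for _ in range(n_iter):
        v = M @ v
        v /= np.linalg.norm(v)
    lam = v @ (M @ v)          # Rayleigh quotient ~ spine eigenvalue
    return v, (1 if lam > 0 else -1)
```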
Implementation
The algorithm for finding null points using the trilinear method is implemented in the Magnetic Skeleton Analysis Tools, a Fortran-based package for analysing the skeleton of divergence-free vector fields. 2
Theoretical formulation
The first-order Taylor expansion (FOTE) method is based on a Taylor expansion of the magnetic field in the vicinity of a null (Fu et al. 2015):

$\mathbf{B}(\mathbf{r}) = \mathbf{B}(\mathbf{r}_0) + \nabla\mathbf{B} \cdot (\mathbf{r} - \mathbf{r}_0)$, (6)

where $\nabla\mathbf{B}$ is the Jacobian matrix derived from four-point measurements, $\mathbf{r}_0$ is the location of one of the four spacecraft, $\mathbf{r}$ is the location of interest, and $\mathbf{B}(\mathbf{r})$ is the magnetic field at the location of interest. Requiring $\mathbf{B}(\mathbf{r}) = 0$ and inverting this linear expansion (Eq. 6), we can obtain the null position $\mathbf{r}$. In general, the equation will always return a position for a magnetic null provided the four spacecraft do not measure exactly the same magnetic field, a situation that never arises in observations or simulations because instrumental or numerical noise is inevitable. However, we only regard the null as reliably identified if (1) the null-spacecraft distance satisfies $|\mathbf{r} - \mathbf{r}_0| < d_i$, and (2) the following dimensionless error parameters are both smaller than 0.4:

$\eta = |\nabla \cdot \mathbf{B}| / |\nabla \times \mathbf{B}|$, (7)

$\xi = |\lambda_1 + \lambda_2 + \lambda_3| / \max(|\lambda_1|, |\lambda_2|, |\lambda_3|)$, (8)

where $\lambda_1$, $\lambda_2$ and $\lambda_3$ are the eigenvalues of the Jacobian matrix $\nabla\mathbf{B}$. These quantitative criteria for qualifying FOTE are derived from comprehensive tests on simulation data (Fu et al. 2015).
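A compact Python sketch of a single FOTE evaluation from four simultaneous measurements is shown below. The construction of the Jacobian from position and field differences, and the definitions of η and ξ, follow my reading of Fu et al. (2015) and should be treated as assumptions rather than the authors' exact implementation:

```python
import numpy as np

def fote_null(r_sc, B_sc):
    """One FOTE evaluation. r_sc, B_sc: (4, 3) arrays of spacecraft
    positions and the fields measured there. Returns the inferred null
    position and the quality parameters (eta, xi) of Eqs. (7)-(8)."""
    r0, B0 = r_sc[0], B_sc[0]
    dR = (r_sc[1:] - r0).T              # columns are r_i - r_0
    dB = (B_sc[1:] - B0).T              # columns are B_i - B_0
    J = dB @ np.linalg.inv(dR)          # Jacobian: B ~ B0 + J (r - r0)

    # Invert the linear expansion: B(r) = 0  =>  r = r0 - J^{-1} B0.
    r_null = r0 - np.linalg.solve(J, B0)

    curl = np.array([J[2,1] - J[1,2], J[0,2] - J[2,0], J[1,0] - J[0,1]])
    lam = np.linalg.eigvals(J)
    eta = abs(np.trace(J)) / np.linalg.norm(curl)   # |div B| / |curl B|
    xi = abs(lam.sum()) / np.max(np.abs(lam))
    return r_null, eta, xi

# Example: the linear, divergence-free field B = (x + 2y, -2y, z) has its
# null at the origin; sampling it on a small tetrahedron recovers the null.
r_sc = np.array([[1, 1, 1], [1.1, 1, 1], [1, 1.1, 1], [1, 1, 1.1]], float)
B_sc = np.array([[x + 2*y, -2*y, z] for x, y, z in r_sc])
print(fote_null(r_sc, B_sc))  # -> r_null ~ (0, 0, 0), eta ~ 0, xi ~ 0
```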
Implementation
The algorithm for finding null points and identifying their type using the FOTE method is implemented in Matlab. The time series of four-point ('quadruple') data are used in this test. At each sampling point, a null point position relative to the spacecraft is calculated from Equation 6. Since the artificially generated spacecraft trajectories are known, the null point location in the spatial domain can then be obtained.
As introduced above, owing to the linear assumption the identification of a remote null point is not reliable. We therefore set a threshold distance, and only the magnetic nulls within this threshold distance are evaluated further.
FOTE and SOTE methods
The FOTE method has proven powerful for automatic null detection. However, it requires the magnetic field around the spacecraft to be quasi-linear, so its accuracy is reduced when dealing with strongly non-linear magnetic fields.
Recently, a new method, the 'Second-Order Taylor Expansion' (SOTE), was proposed by Liu et al. (2019) to overcome this linear limitation. The method is based on the second-order Taylor expansion of the magnetic field

$\mathbf{B}(x, y, z, t) = \mathbf{a}x + \mathbf{b}y + \mathbf{c}z + \mathbf{d}xy + \mathbf{e}xz + \mathbf{f}yz + \mathbf{l}x^2 + \mathbf{m}y^2 + \mathbf{n}z^2 + \mathbf{B}_0$, (9)

where $\mathbf{a}$, $\mathbf{b}$, $\mathbf{c}$, $\mathbf{d}$, $\mathbf{e}$, $\mathbf{f}$, $\mathbf{l}$, $\mathbf{m}$, $\mathbf{n}$ and $\mathbf{B}_0$ are vector coefficients. The following constraints can be applied to these equations: $\nabla \cdot \mathbf{B} = 0$ and $\nabla \times \mathbf{B} = \mu_0 \mathbf{J}$, where the current density is derived from the particle moments: $\mathbf{J} = ne(\mathbf{V}_i - \mathbf{V}_e)$. To determine all the coefficients in Equation 9 completely, the SOTE method utilises two sets of four-point measurements of the magnetic field and particles, assuming the structures to be quasi-stationary and solving for the precise trajectory of the spacecraft.
The SOTE method is good at reconstructing non-linear structures, for example the null-point pairs in this study, and thus enables the analysis of null-point pairs in space plasmas. For the detection of isolated null points, however, the SOTE method should perform similarly to the FOTE method, since the FOTE reconstruction is essentially a local approximation of the SOTE reconstruction. Moreover, the SOTE method cannot be applied to a time-varying structure, while the FOTE method can reveal the temporal evolution of a magnetic structure.
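Once the coefficients of Eq. (9) have been fitted, locating the nulls of the quadratic model requires an iterative root search, unlike the direct inversion available in FOTE. The Python sketch below shows a Newton iteration on the model; the coefficient layout and the toy example (deliberately simple and not divergence-free) are purely illustrative:

```python
import numpy as np

def sote_field(r, coef):
    """Evaluate the quadratic model of Eq. (9); coef maps the names a..n
    and B0 to (3,) vector coefficients (fitted, in practice, from two sets
    of four-point field and particle measurements)."""
    x, y, z = r
    return (coef['a']*x + coef['b']*y + coef['c']*z
            + coef['d']*x*y + coef['e']*x*z + coef['f']*y*z
            + coef['l']*x**2 + coef['m']*y**2 + coef['n']*z**2
            + coef['B0'])

def sote_jacobian(r, coef):
    x, y, z = r
    # Columns are the d/dx, d/dy and d/dz derivatives of the model.
    dx = coef['a'] + coef['d']*y + coef['e']*z + 2*coef['l']*x
    dy = coef['b'] + coef['d']*x + coef['f']*z + 2*coef['m']*y
    dz = coef['c'] + coef['e']*x + coef['f']*y + 2*coef['n']*z
    return np.stack([dx, dy, dz], axis=1)

def sote_null(coef, r0, n_iter=50, tol=1e-12):
    """Newton iteration for a null of the fitted quadratic model; the model
    is non-linear, so several roots (e.g. a null-point pair) may exist and
    different starting points can converge to different nulls."""
    r = np.asarray(r0, dtype=float)
    for _ in range(n_iter):
        B = sote_field(r, coef)
        if np.linalg.norm(B) < tol:
            break
        r = r - np.linalg.solve(sote_jacobian(r, coef), B)
    return r

# Toy model B = (x, y, z**2 - 1): a pair of nulls at (0, 0, +/-1).
z3 = np.zeros(3)
coef = dict(a=np.array([1., 0, 0]), b=np.array([0, 1., 0]), c=z3,
            d=z3, e=z3, f=z3, l=z3, m=z3,
            n=np.array([0, 0, 1.]), B0=np.array([0, 0, -1.]))
print(sote_null(coef, [0.1, 0.1, 0.5]))   # -> (0, 0, 1)
print(sote_null(coef, [0.1, 0.1, -0.5]))  # -> (0, 0, -1)
```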
Results
The methods considered here can be broken down into two categories. Some methods are designed to take eight data points as input (meaning 24 data values when the three components of B are included), motivated by the need to find magnetic nulls in simulation data utilising rectilinear grids of points; both the trilinear and Poincaré index methods have been applied previously in this way. By contrast, other methods were developed to find nulls in data from the four-spacecraft missions Cluster and MMS, and therefore take as input the magnetic field components measured at four points in space; the FOTE and Poincaré index methods have been used in this context. In the following sections we consider these two cases separately.
Results of the Poincaré index method
In Figure 2 we plot the locations of nulls found when the magnetic field is evaluated on equally-spaced grids with resolutions 20³, 30³, and 80³. We see that the detection and location of some of the nulls is relatively stable between the different resolutions, while other null detections exhibit large differences between resolutions. The null points detected at 20³ and 80³ resolution are listed in Table 1. This is discussed further below.

Results of Trilinear method

Fig. 3. 3D rendering of the magnetic field obtained using the trilinear method and the associated Magnetic Skeleton Analysis Tools, for 80³ resolution. Positive and negative null points are represented as red and blue spheres respectively, spine lines from positive and negative null points are represented as thick red and blue lines respectively, and the field lines originating from the fan planes of positive and negative null points are drawn as thinner red and blue lines respectively.

Fig. 4. Illustration of the effect of different grid resolution on the reconstructed magnetic field structure using the trilinear method. The magnetic field in the vicinity of the same null point is reconstructed from the 20³ grid data (left) and the 30³ grid data (right). The reconstructed field structure changes from a sink to a divergence-free null point when the resolution is increased. Red and orange field lines are traced in the forward and backward direction, respectively, of the magnetic field.

Figure 3 illustrates a typical output of the magnetic skeleton analysis and the application of the trilinear method to an 80³ grid. Table 1 shows the results of using the trilinear method to find the locations of the null points in the test magnetic field on grids of different resolution. On the 20³ grid, the trilinear method finds only 12 null points and is only able to classify 11 of these; it cannot locate two of the 14 null points which exist in the analytical field.
More closely analysing the unclassified null point reveals that, under the trilinear assumption, this point is represented by a sink, with the field lines twisting into the point (Figure 4). However, when the magnetic field is evaluated on a grid with 30³ resolution, this point is revealed to be a true divergence-free null point (Figure 4). This is clearly just a resolution issue: at 20³ resolution, the vector field is not approximately trilinear in the vicinity of this null point.
The two null points which cannot be located in the 20³ grid are actually located within the same grid cell at 20³ resolution. From analysis on the higher resolution grids (where these two null points are located in different grid cells), we find that one of these null points is positive and the other is negative. Since this pair of null points comprises both a positive and a negative null point, together they have a topological degree of zero, and so they cannot be found while they lie in the same grid cell.
At 30³ resolution, all 14 null points are found using the trilinear method. However, there is still one null which the algorithm is unable to classify: the exact same change as above occurs with null point 12 between 30³ and 40³ resolution. The field lines around null point 12 show that it appears to be a source in the trilinear approximation at 30³ resolution and becomes a negative divergence-free null point at 40³ resolution. At 40³ resolution and higher, all 14 null points are found and classified correctly.
Comparison between Poincaré index and trilinear methods
The null point locations and types found by the Poincaré index and trilinear methods are compared to the true values in Table 1. Before comparing these results it is worth making some important notes. First, the trilinear method as currently implemented does not distinguish spiral nulls, since it does not make use of the Jacobian matrix eigenvalues to determine the null type (see above); in principle this could be done in the same way as for the Poincaré method (taking finite differences over the grid to evaluate the Jacobian matrix). Second, the Poincaré index method as currently implemented does not seek the nulls at sub-grid resolution, and thus the centre of the cell is reported as the null location. By contrast, the trilinear method fits a field to the data, with the null point of this fitted field within the cell being reported. With the above in mind we can compare the nulls found by the two methods in Table 1. First, we see that at 20³ resolution both methods are imperfect. The trilinear method misses two nulls (10 and 12; note that, as mentioned above, these lie in the same grid cell at this resolution), while one null fails the classification process. The Poincaré index method performs a little worse: in addition to the two nulls missed by the trilinear method, nulls 1 and 6 are also not found, and there is a false-positive null as well (bottom row of the table). The problematic null 4 is again wrongly classified, while the spiral nature of null 14 is not picked up.
At 80³ resolution both methods perform much better, as expected. Both methods find all 14 nulls, with only null 4 wrongly classified by the Poincaré index method (again as $B$ rather than $B_s$). Because the trilinear method fits a field on the grid, giving some sub-grid resolution to the null detection, it generally provides a more accurate estimate of the null point location.

Results of FOTE method

Table 2 shows the results of testing the FOTE method on null point location and identification; the types, coordinates and minimum distances to the spacecraft of these null points are given.
In the 'medium-scale' tetrahedron configuration, the spacecraft separation is about 0.12 in dimensionless units. In view of the linear assumption, only the null points with a distance (from the tetrahedron centre) of less than 1 are retained. In total, we detected 14 null points, of which null points 1, 2, 5, 6, 8, 9, 11, and 14 are included in Table 2 and are thus real null points. The others are misidentifications, generally with large distances (see null points 4, 7, 12, 13); this is consistent with the properties of the FOTE method. Six real null points (null points 1, 3, 7, 12, 13, and 14 in Table A.2) are missed in this test. Among the missed null points, nulls 1 and 7 (see Table A.2) are located rather far from the spacecraft trajectory and thus cannot be detected. Null points 12 and 13 are close to each other (relative to the spacecraft separation), possibly breaking the linearity of the field in their vicinity, which explains why these null points are not detected by the FOTE method.
In the 'small-scale' tetrahedron configuration, the spacecraft separation is about 0.025, and the null points with a distance of less than 0.5 are retained. In total, we detected 10 null points, of which null points 1, 2, 6, 7, 8, and 10 are included in Table 2 and are thus real null points. We note that the null points successfully identified by the FOTE method always lie within a distance of 0.25, while the misidentifications lie at distances larger than 0.5. This means the credibility of the FOTE method could conveniently be improved by decreasing the threshold distance.
The null point detections for the two different tetrahedron sizes are shown graphically in Fig. 5, where the exact answer is also plotted in the top panel. In the fourth panel of the figure we also show the result of applying FOTE to an even larger tetrahedron, for comparison with the Poincaré index method (see below). One conclusion (expected because of the linear-field assumption in the method) is that the FOTE method performs well when the null point is not too close to any other nulls (say, on the scale of the tetrahedron), such as nulls 8 and 9, while in places where two nulls are relatively close together, such as nulls 4 and 5, the null type detection is rather erratic.
There is no clear trend regarding the accuracy of null detections with tetrahedron size. However, it is clear that very large and very small tetrahedra are both bad: when the tetrahedron is very large, the field can be far from linear, with many nulls in the local region; when it is very small, the field gradients are not well sampled, and nulls can be missed because they pass far from the spacecraft on the length scale of the tetrahedron (and it is known for FOTE from previous experience that, for reliable results, null detections more than a few times the spacecraft separation away should be excluded, e.g. Fu et al. 2020). The optimal balance, therefore, is to have a spacecraft separation of the order of the null separation, but this is not known a priori. In the absence of such knowledge, the optimal size of the tetrahedron can be motivated by known physical length scales, such as the ion inertial length.
Results of Poincaré index method
With the 'small-scale' and 'medium-scale' tetrahedron configurations described above, the nulls never pass exactly through the spacecraft tetrahedron. Therefore, to test the Poincaré index method we have created trajectories with a 'large-scale' tetrahedron (see the Appendix), to ensure the possibility of (true) positive results. The results of applying the Poincaré index method to this data set are shown in the bottom panel of Figure 5. We observe that all nulls that happen to be enclosed by the artificial spacecraft constellation, namely the nulls 8, 2, and 11, have been correctly identified. We have found that, in accordance with Greene (1992), the Secant method provides a bad estimate of the enclosed null location, often outside the tetrahedron. Therefore, the best practice is to provide the centre of the tetrahedron as the location of the null point.
Comparison between FOTE and Poincaré index methods
As expected, the FOTE method is able to detect the nulls when they are some distance away from the tetrahedron. Moreover, FOTE detects a null feature in all cases where the null point comes within a distance of 1 from the tetrahedron centre, for all tetrahedron sizes tested. The accuracy of the distance and null type assessment tends to degrade for larger tetrahedron sizes, as expected. The Poincaré index method also detects all nulls that could be expected (those that pass through the tetrahedron). The two methods for assessing the type of the null perform similarly, with success rate around 50%. When multiple nulls are located close together (e.g. nulls 4 and 5), or too far from the spacecraft (in case of FOTE), both methods detect the presence of nulls, but show noisy results in detection/distance/typing.
Tetrahedron trajectory considerations
The results discussed above are based upon data measured along trajectories that traverse a circular path in the xy-plane of our domain. Clearly these trajectories do not mimic the behaviour of spacecraft constellations such as Cluster or MMS, which, for example in the magnetotail, move only slowly as magnetic structures are convected backwards and forwards past them. However, we do not expect the nature of these trajectories to influence the results. The shape of the trajectories is chosen to bring them close to as many of the null points in the domain as possible, in order to test the field reconstruction around each of those nulls and thus make our analysis more robust. The null point identification is not affected locally by the shape of the trajectory (since only the data at a single time, or at two adjacent times for SOTE, is used), but rather by the separation of the 'spacecraft'. In order to mimic the effects of small-scale fluctuations in the fields and noise, we added small-scale fluctuations to the trajectories in Equations (A.3)-(A.5).
Conclusion
This work intends to help researchers who want to analyse null points (stagnation points) of divergence-free vector fields in their simulations or observations. There are two situations that are commonly encountered in practice: (i) numerical simulations on hexahedral or tetrahedral meshes, and (ii) data from tetrahedra of spacecraft (MMS and Cluster). Each of these cases was assessed independently using the same test magnetic field. This is the first time that such methods have been tested and compared against a 'ground truth' situation where null numbers, positions, and types were known to arbitrary precision based on an analytical expression for the magnetic field. The main results of our study relevant to '8-point methods' used for rectangular meshes from numerical simulations are the following.
- When the field is moderately resolved (1, 2, or fewer grid points between nulls), both the Poincaré index and trilinear methods give errors, but the trilinear method has no false positives (the PI method has 1), fewer false negatives (2 vs 4), and the performance on null type is the same (one incorrect A vs B identification each). This suggests that the trilinear method is more robust when the field is quite 'rough' on the grid.
- When the field is well resolved (>2 grid points between nulls), both methods identify all nulls and their types correctly. Since the trilinear method finds the nulls to sub-grid resolution, it recovers the positions more accurately. The trilinear method does not inherently detect spiral nulls, but could do so by computing the Jacobian of the trilinear fit and calculating its eigenvalues. The PI method could include an extra step in which a fit is made (e.g. trilinear) to obtain sub-grid resolution of the null position.
- Both methods can be efficiently implemented to run in less than 1 second on an 80³ grid for the present test field, and show no substantial difference in scaling with resolution.
Concerning 4-point methods typically applied to spacecraft data, we have considered three different sizes for the spacecraft tetrahedron, the smallest two of which can be considered as 'Cluster-scale' and 'MMS-scale' on the basis of a physically motivated dimensionalisation of the field (see Section 2.3). We conclude that FOTE performs well in finding the nulls when they are not close together; roughly speaking, when the null separation is larger than the null-spacecraft distance. (The main discrepancies in Figure 5 are the use of X and O for nulls that are close to 2D.) On the other hand, for null pairs that are close together the detection method fails (interestingly, it still detects a null, but the inferred type jumps around a lot). This behaviour could probably itself be used as an indicator of multiple adjacent nulls.
The practical advice is to use the FOTE method for locating nulls in fields measured by probes or spacecraft. In numerical simulations on rectilinear grids, the trilinear method gives the more accurate null locations. On other meshes, one should use a variation of the Poincaré index method on either hexahedral or tetrahedral cells; the null location produced by the latter should be taken as the middle of the cell, since the secant method of location estimation can produce unreliable results.
Discussion
We propose that the fields defined in the appendix and used here could be used to test and benchmark future null finders, for example a method based on an expansion in spherical harmonics (He et al. 2008; Li 2019). The original intention of this method is to reconstruct the magnetotail magnetic structure around a magnetic null observed in situ by satellites (He et al. 2008; Guo et al. 2016). Based on satellite measurements, it reconstructs the magnetic field by means of a fitting-function approach. To match the 12 observed magnetic field components, 10 fitting parameters appear in 10 spherical harmonic functions, and the other two appear in the Harris current sheet model (Harris 1962). Thus, by fitting the simultaneous magnetic field vectors, one can reconstruct the local magnetic field. The calculations in He et al. (2008) confirmed the existence of a magnetic field null in a reconnection event and presented the magnetic structure around a 3D null in the magnetotail. For convenience, we provide the theoretical formulation of this method in Appendix B.
In order to apply this method, four-point measurements are required in the data cube. Any four points that form a tetrahedron in the data box introduced in Section 2 can be used for reconstruction experiments. For example, in an 80³ data box, one can first choose a 2³ sub-box and separate it into five independent tetrahedra; five reconstructions can then be performed based on these tetrahedra, followed by a check for magnetic nulls in the reconstructed magnetic structures. The advantage is that the reconstruction can recover multiple nulls, while other methods cannot judge the existence of multiple zeros in the region enclosed by multiple satellites. Multiple tetrahedra can be randomly selected for reconstruction, and the results obtained can be compared together with all data points. However, this approach would be less efficient if used as an ergodic calculation similar to the Poincaré index method. Also, the reported magnetic nulls currently require a manual check of the reconstructed magnetic structures, which needs to be improved in the future; once an automated checking procedure is in place, this method could be benchmarked against the present test field in the same way as the others.

Appendix A: Magnetic field model and simulated spacecraft trajectories

The magnetic field model used in this paper is

$\mathbf{B} = (-2\sin 2y + \sin z)\,\mathbf{e}_x + (2\sin x + \sin z)\,\mathbf{e}_y + (\sin x + \sin y)\,\mathbf{e}_z + 0.04\,(-2\sin 2y + 2\sin x \cos z + \sin y + 0.1195)\,\mathbf{e}_x + 0.04\,(2\sin(x-z) + \sin(x+z) + \sqrt{30}/7)\,\mathbf{e}_y + 0.04\,(-2\cos x \sin z + \sin y + 2\sin 2y - 0.1378162)\,\mathbf{e}_z$,

where the values of $a_i$, $k_i$, $l_i$, $X_i$, $Y_i$ and $Z_i$ are given in Table A.1. The simulated spacecraft trajectories are constructed as follows. First, a trajectory for the tetrahedron centre is defined; a parametric representation of this curve is given by Equations (A.3)-(A.5). The four spacecraft are then offset from this centre by vectors lying at the corners of a regular tetrahedron, with each point lying a distance S from the centre of the tetrahedron (which therefore has side length, or spacecraft separation, S√(8/3)). Here we consider three different tetrahedron sizes, with S = 0.025, S = 0.12 and S = 0.4, which we refer to as 'small-scale', 'medium-scale' and 'large-scale', respectively.
The null points within the domain together with the eigenvalues of the associated Jacobian matrix are given in Table A.2.
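For reference, the benchmark field can be transcribed directly into code. The Python sketch below assumes that the constant written '√30/7' means √30 divided by 7, and that the field is sampled on a cubic grid; the domain extent is not stated in the text, so the 2π box used here is an assumption:

```python
import numpy as np

def test_field(x, y, z, c_y=np.sqrt(30) / 7):
    """Analytic benchmark field transcribed from Appendix A (assumptions
    noted in the accompanying text)."""
    bx = (-2*np.sin(2*y) + np.sin(z)
          + 0.04*(-2*np.sin(2*y) + 2*np.sin(x)*np.cos(z)
                  + np.sin(y) + 0.1195))
    by = (2*np.sin(x) + np.sin(z)
          + 0.04*(2*np.sin(x - z) + np.sin(x + z) + c_y))
    bz = (np.sin(x) + np.sin(y)
          + 0.04*(-2*np.cos(x)*np.sin(z) + np.sin(y)
                  + 2*np.sin(2*y) - 0.1378162))
    return np.array([bx, by, bz])

# Sampling on an N^3 grid, as in the resolution tests above (N = 20, 30, 80):
N = 80
xs = np.linspace(0.0, 2*np.pi, N)
X, Y, Z = np.meshgrid(xs, xs, xs, indexing='ij')
B = test_field(X, Y, Z)  # shape (3, N, N, N)
```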
Appendix B: Spherical expansion method formulation
The fitting model is designed around a total of 12 functions: a constant background field, a function taken from the Harris current sheet model (Harris 1962), and 10 spherical harmonic functions. For convenience in describing the potential field, the spherical harmonic functions are adopted as part of the fitting model. The fit expresses the measured field as the sum of these contributions, where $B_\gamma$, $B_\theta$ and $B_\varphi$ are the three spherical-coordinate magnetic field components at a spatial position $(\gamma, \theta, \varphi)$. The first term on the right-hand side describes a potential field from the spherical harmonic series. The transform matrix $T_{xyz\to\gamma\theta\varphi}$ converts a vector field from a common spacecraft Cartesian coordinate system to the spherical coordinate system. The x-direction background magnetic field, together with the magnetic field of the Harris current model, also appears in this equation, and the components $B_\gamma$, $B_\theta$ and $B_\varphi$ are expanded in the spherical harmonic series. | 2020-11-12T09:06:39.351Z | 2020-12-01T00:00:00.000 | {
"year": 2020,
"sha1": "9234b75d6af22903e07b5b99a71e60713618532a",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2101.02014",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "56172353f36cfc6f0a0c72ddc2d389ae766b75e4",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
253691758 | pes2o/s2orc | v3-fos-license | CD24 as a Potential Therapeutic Target in Patients with B-Cell Leukemia and Lymphoma: Current Insights
Abstract CD24 is a highly glycosylated glycophosphatidylinositol (GPI)-anchored protein that is expressed in many types of differentiating cells and some mature cells of the immune system as well as the central nervous system. CD24 has been extensively used as a biomarker for developing B cells as its expression levels change over the course of B cell development. Functionally, engagement of CD24 induces apoptosis in developing B cells and restricts cell growth in more mature cell types. Interestingly, CD24 is also expressed on many hematological and solid tumors. As such, it has been investigated as a therapeutic target in many solid tumors including ovarian, colorectal, pancreatic, lung and others. Most of the B-cell leukemias and lymphomas studied to date express CD24 but its role as a therapeutic target in these malignancies has, thus far, been understudied. Here, I review what is known about CD24 biology with a focus on B cell development and activation followed by a brief overview of how CD24 is being targeted in solid tumors. This is followed by an assessment of the value of CD24 as a therapeutic target in B cell leukemia and lymphoma in humans, including an evaluation of the challenges in using CD24 as a target considering its pattern of expression on normal cells.
Background
Cluster of Differentiation (CD) 24, also called heat stable antigen or nectadrin, is a highly glycosylated glycophosphatidylinositol (GPI)-anchored protein consisting of a mature peptide of 27 residues in mouse and 32 residues in humans. 1-3 It was first discovered in 1978 by Springer et al through the screening of monoclonal antibodies binding to unique blood cell antigens. 4 The mature peptide is derived from a precursor peptide of 76 residues in mice and 82 residues in humans. The cleavage of the N-terminal signal sequence and the C-terminal glycophosphatidylinositol (GPI)-anchor sequence, in addition to multiple glycosylation sites, generates a mature peptide of 20 to 70 kDa. 5 CD24 is located on chromosome 6q21 in humans and chromosome 10 in mice. Analysis of the evolution of CD24 suggests that it arose prior to the divergence of reptiles and avians but was lost in marsupials and monotremes. 2 The consensus sequence of the mature peptide shows multiple highly conserved residues, 2 which are sites, or potential sites, of glycosylation, consistent with a critical role of these post-translational modifications in the interaction of CD24 with its ligands (see below).
In a healthy individual, CD24 is expressed on and regulates many different cell types. In general, CD24 expression is dynamic with higher expression on immature cells followed by decreased expression at mature cell stages. 2,6 CD24 promotes the normal maturation of B lymphocytes (B cells), T lymphocytes (T cells), neuronal cells, adipocytes, colon crypt cells, and epithelial cells in the mammary gland. 7-9 CD24 acts as a co-stimulatory molecule for T cells 10 and regulates dendritic cell activation in response to danger-associated molecular patterns (DAMPs). 11,12 CD24 is constitutively expressed in erythrocytes where it decreases aggregation and promotes their half-life in circulation in mice 13 but does not appear to be expressed in human erythrocytes. 14 It is also highly expressed on developing neurons and regions of active neurogenesis in the adult brain. 9 In addition, CD24 is highly expressed on many different types of cancer and can be associated with both increased or decreased cancer aggressiveness. 15 Interestingly, while low levels of CD24 in combination with high levels of CD44 have been associated with breast cancer stem cells, 16 high CD24 levels have also been associated with pancreatic cancer stem cells and other cancer types (please see Altevogt et al 15 for a more in-depth discussion).
CD24 has multiple ligands that interact either in cis (on the same cell) or in trans (on a different cell). 1 These interactions are mediated by the differential glycosylation of CD24. 5,17-19 In trans, CD24 on cells from the myeloid lineage, splenic B cells activated by lipopolysaccharide, and thymocytes, as well as breast cancer cells, binds to P-selectin to mediate adhesion to endothelial cells. 18,20-23 Similarly, CD24 on MCF-7 breast cancer cells mediates adhesion to E-selectin. 24 The L1 cell adhesion molecule (L1CAM) is the ligand for CD24 in neurons and neuroblastoma, where it regulates neurite outgrowth by binding to CD24 in cis. 18,25,26 In dendritic cells, CD24 acts in cis with Siglec-G (Siglec-10) to suppress DAMPs-mediated activation of these cells. 11,12 Lastly, CD24 can interact with itself and so might act as its own ligand in some cases. 27 However, the natural ligand for CD24 on developing B cells or B cell leukemias or lymphomas is unknown.
CD24 Expression in B Cells
B cells develop in the bone marrow in discrete stages that are defined by expression of key biomarkers, one of these being CD24 (Figure 1). 28 CD24 levels are low in the earliest stages of lineage commitment and increase in mRNA and protein expression until they peak at the pre-B cell stage (Fraction C'/D). 2 CD24 expression drops when B cells exit the bone marrow but rebounds at transitional stage 1, which occurs in the spleen, 29 followed by a reduction in circulating mature B cells and beyond. 2,28 Moreover, CD24 is highly expressed on both B1 B cells 28 and regulatory B cells. 30,31 In the bone marrow, CD24 is not simply a biomarker; it is a key regulator of B cell development (Figure 1). Both transgenic mice that overexpress CD24 and CD24 knock-out mice display a reduction in developing B cell numbers, particularly at the pro- and pre-B cell stages. 13,32 However, there are normal numbers of B cells in the periphery, demonstrating that the reduction in early B cell numbers is compensated by increased proliferation at later cell stages. In vitro, engaging CD24 with stimulating antibodies induces apoptosis in developing B cells and suppresses the proliferation of splenic B cells. 33,34 Furthermore, apoptosis of pre-B cells in CD24 transgenic mice is observed in vivo. 35 Thus, the main role for CD24 in B cell development is to control B cell survival at the pro- and pre-B cell stages.
More recently, CD24 has been linked to extracellular vesicles (EVs) secreted from B lymphocytes 36-39 and EVs that are present in other bodily fluids such as saliva, 40 urine and amniotic fluid. 41 EVs are nano-sized particles derived from the endocytic pathway (exosomes) or budded off the plasma membrane (microvesicles, ectosomes). 42 EVs are important mediators of cell communication that can transport lipids, proteins, and nucleic acids from donor to recipient cells. Cancer cells have been shown to release more EVs than normal cells, 42 and CD24 on EVs from plasma has been suggested to be a marker for the detection and monitoring of different cancers. 43-45
CD24 in Immune Regulation and Evasion
CD24 regulates immune activation in cis and in trans. Chen et al 12 demonstrated that the interaction of CD24 with Siglec-10 (Siglec-G in mouse) on dendritic cells (DCs) acts as a molecular brake to limit inflammation in response to DAMPs. 46 CD24 does this by interacting directly with Siglec-10 through its sialic acid modifications when DAMPs such as HMGB1, HSP70, or HSP90 are present. 12 This binding activates inhibitory signaling downstream of Siglec-10, which, through the immunoreceptor tyrosine-based inhibitory motifs (ITIMs), activates the Shp-1 phosphatase. 47 This in turn downregulates signaling downstream of Toll-like receptors (TLRs) to limit DC activation. 12 Interestingly, CD24 specifically interacts with DAMPs but not pathogen-associated molecular patterns (PAMPs), thereby specifically inhibiting the overactivation of the immune system in response to signals of injury but not infection. 11 Acute graft-versus-host disease (GVHD) can occur post hematopoietic transplantation, whereby the donor immune cells mount an immune response against the host. High-dose chemotherapy and/or total body irradiation causes damage to the host, and this conditioning is thought to be the first step in the development of GVHD. 48 In this situation, CD24 on T cells interacts with Siglec-G expressed on DCs to reduce T cell activation and inflammatory cytokine release in response to the DAMPs released by the injured tissue. 49 The soluble CD24-Fc fusion protein (CD24-Fc) is able to recapitulate the function of CD24 by interacting with Siglec-G and activating its inhibitory signaling function to limit immune activation. 47 In a pre-clinical model of GVHD, CD24-Fc was able to significantly mitigate the effects of GVHD and improve survival. 49 CD24-Fc has been further investigated as a non-chemical inhibitor of immune activation in vivo to target chronic inflammation (Table 1). In a simian immunodeficiency virus (SIV) model, injection of CD24-Fc reduced chronic inflammation and improved the overall survival of these animals. 50 Similarly, in a human clinical trial, injection of CD24-Fc reduced chronic inflammation associated with SARS-CoV-2 infection. 51 In a mouse model of sepsis, bacterial sialidases inhibit the sialic-acid-mediated interaction between CD24 and Siglec-G, which exacerbates inflammation that is not accompanied by changes in bacterial load. 17 Therefore, in the absence of the limiting effect of CD24 on DCs, inflammation itself causes severe tissue damage and subsequent death in this mouse model of sepsis. Overactivation of the immune system, called a cytokine storm, can be a major cause of mortality in sepsis and viral infection. Thus, limiting inflammation via CD24-Fc-mediated activation of Siglec-10/G can ameliorate the immune-mediated damage occurring in response to pathogenic infection. Metaflammation is the low-level chronic inflammation caused by obesity. It has recently been demonstrated that metaflammation is regulated by interactions of CD24 with Siglec-E. 52 Acting as an innate immune checkpoint, CD24 represses the metabolic disorders associated with obesity, including dyslipidemia, insulin resistance and nonalcoholic steatohepatitis. Similar to the interaction of CD24 with Siglec-G, CD24 promotes the activation of the Shp-1 phosphatase by Siglec-E to downregulate immune cell activation. CD24-Fc treatment of mice ameliorated these diet-induced metabolic dysfunctions and decreased expression of inflammatory cytokines.
Similarly, the group showed that CD24-Fc decreased expression of inflammatory cytokines in healthy human subjects. 52 Recently, EVs have been loaded with CD24 by transfecting a HEK-293-derived cell line (293-TREx™) for human studies or a NIH/3T3 cell line (Expi293™) for mouse studies with plasmids encoding human or murine CD24, respectively, under the control of tetracycline. 53 The resulting EVs, called EXO-CD24, were then administered via inhalation either to mice challenged with LPS or to SARS-CoV-2-infected humans. A significant reduction in inflammatory cytokines, including IL-17A, TNF-α and IL-6 among others, was observed in both situations, as well as a reduction in systemic C-reactive protein (CRP). This observed decrease in inflammatory markers supports the idea that EXO-CD24 mediates a reduction in the cytokine storm response. While EXO-CD24 appeared to be safe in humans, the efficacy of this treatment remains to be determined. In any case, EXO-CD24 presumably activates the Siglec-10/G inhibitory cascade to block NF-κB activation, similar to CD24-Fc. 53 These data further suggest that uncoupling the viral clearance response from the DAMP response by activation of Siglec-10/G by EXO-CD24 can potentially reduce the overall degree of the inflammatory cytokine response. This uncoupling could, therefore, reduce mortality from viral infections normally due to cytokine storm, similar to the effects of CD24-Fc.
In cancer, CD24 expression on some tumors serves as a "don't eat me" signal (ie an immune evasion strategy). Specifically, the presence of CD24 impairs phagocytosis of ovarian and triple negative breast cancer cells by macrophages. 14 In the absence of CD24, macrophage activity caused a significant decrease in tumor volume in vivo. These data suggest that CD24, in concert with Siglec-10, acts as a mechanism to regulate self/non-self-interactions by the innate immune system. 54 It is proposed that interfering with the CD24/Siglec-10 interaction via monoclonal antibody (mAb) treatment may improve the innate immune response to cancer by removing this innate immune checkpoint. 14
CD24 in B-Cell Leukemias and Lymphomas
In Canada, leukemia is the most commonly diagnosed cancer in children at 32% but is one of the rarer cancers in adults at 2.4-3.4%. 55 Lymphomas are the third most common cancer in children (13%), following central nervous system cancers (17%), while lymphomas make up approximately 5% of cancers in Canadian adults. A similar prevalence of leukemias and lymphomas exists globally. 56 Leukemias and lymphomas arise from B cells at many different stages of development (Figure 2). These neoplasms can be classified based on cytomorphological features as well as expression of distinct cell surface receptors linked to the stage from which they derive. 57 Furthermore, molecular characterization of leukemias has been used to further delineate the origin of different leukemias and lymphomas. 58,59 However, characterization of the leukemic cell of origin is not simple, as leukemic cells continue to acquire additional mutations as the disease develops. 58,60 Nevertheless, B-cell-derived leukemias and lymphomas share many of the same features as the cells they arise from, including susceptibility to proapoptotic stimuli and expression of key B cell markers. For example, expression of the pan-B-cell marker CD20 on many B cell leukemias and lymphomas has made this a valuable target for the mAb therapy Rituximab. 61 Activation of pro-apoptotic signaling pathways in response to Rituximab, along with activation of complement, recruitment of NK cells to induce antibody-dependent cellular cytotoxicity (ADCC), and phagocytosis, all contribute to the anti-tumor effects of the mAb therapy. Targeting surface receptors in this manner has the advantage over general chemotherapeutics that only cells expressing the receptor will be targeted. However, unless the target is expressed only on cancer cells, and not normal cells, both cell types will be affected by the therapy, thereby reducing the targeted advantage of the therapy. CD24 expression on hematological malignancies has been well documented (Table 2). While it is expressed on most subtypes of leukemias and lymphomas, CD24 is notably low or absent in a subset of B-ALL with KMT2A rearrangements. 62,63 In general, high CD24 expression has been associated with poor prognosis except in the case of multiple myeloma. 64-70 For chronic lymphocytic leukemia, high CD24 expression was associated with progressive vs stable disease. 71 Even though CD24 has been found to be expressed in most B cell malignancies to date, protein expression levels do not always differentiate healthy individuals from cancer patients due to high variability in expression. 66 It is possible that combining CD24 expression with expression of other surface antigens will increase the diagnostic or prognostic potential of CD24. 71 However, this remains to be determined in a clinical setting. Interestingly, the presence of CD24 in pediatric B-ALL was associated with sensitivity to radiotherapy, 72 suggesting that CD24 could be used as a prognostic marker if radiotherapy is advised.
Targeting CD24 for Cancer Treatment
CD24 is highly expressed in multiple tumors and has thus been targeted in preclinical studies of lymphoma, 65 a range of solid cancers, 84 and neuroendocrine cancers, 14 in addition to osteosarcoma 85 and multiple myeloma 86 (Table 3). Similar to CD20 targeting, an anti-CD24 mAb can bind directly and cluster CD24 to induce apoptosis, recruit NK cells via their Fc receptors to induce ADCC, induce complement-mediated cytotoxicity, and recruit phagocytes to the tumor cells. 87 In addition, conjugation of a toxin to the mAb can allow for directed interaction with, and killing of, the cancer cell by the toxin. Furthermore, bispecific antibodies targeting both CD24 and the Natural Killer cell receptor NKG2D have been shown to enhance ADCC in liver cancer. 83 However, in most studies the mechanism of action has not been directly determined. Nevertheless, all the data published thus far show that targeting CD24 via mAb in the cancer types tested reduces the growth of cancer cells in animal models (Table 3). With respect to B cell leukemia and lymphoma, naturally EBV-transformed B cells and Burkitt's lymphoma (BL) have been tested in preclinical animal models. 73,88 In the EBV-transformed cells, unconjugated mAb was found to improve survival of SCID mice to approximately 40%. 88 In the BL setting, the direct cytotoxic effects of the anti-CD24 SWA11 antibody conjugated to Ricin-A (SWA11.dgA) were shown against BL cell lines. 73 Furthermore, injection of SWA11.dgA into SCID mice with disseminated BL disease approximately doubled the tumor-free survival time when given 4 days after tumor establishment and cured all mice when given 1 day after tumor establishment. 73 To the best of my knowledge, no recent reports on targeting CD24 in leukemia or lymphoma have been published. Therefore, more remains to be done to determine whether targeting CD24 is effective in pre-clinical models of B cell leukemia and lymphoma.
The biggest challenge in targeting CD24 directly in cancer is that CD24 is also expressed on cells of the immune system and the nervous system (see above). However, the observation that anti-CD24 mAb does not bind to human erythrocytes 14 is reassuring, as this treatment should thus not result in hemolytic anemia in humans. One major limitation of the pre-clinical models tested to date is that they are, necessarily, performed in immunodeficient mice (Table 3). Furthermore, the mAbs used are specific to human CD24 and do not cross-react with murine CD24. Therefore, it is not possible to truly appreciate the potential side effects of this treatment, as the binding of the mAb to CD24 on host immune cells and neuronal cells does not occur in the pre-clinical models used. The most likely side effects of mAb treatment would be immunosuppression and disruption of cognitive function, consistent with inhibition of immune cell development 1,3,13,32 and neurogenesis. 89-91 Moreover, it has been shown that the lack of CD24 on DCs causes rapid homeostatic proliferation of T cells when lymphopenia is induced. 92 This rapid and massive expansion of T cells causes death of the immunocompetent mice. Therefore, off-target killing of DCs in secondary lymphoid organs upon injection of an anti-CD24 mAb is likely to induce an expansion of T cells that would be severely detrimental to the human host.
In addition, transport of the mAb to the bone marrow via the circulation will cause the killing of developing B cells, resulting in a severe reduction in circulating B cells. Moreover, mAb against CD24 prevents co-stimulation of T cells, further inducing immunosuppression. 93,94 This immunosuppression may be similar to that induced by current chemotherapeutic strategies and could be mitigated by avoidance of pathogenic organisms. Thus, the issue of immunosuppression is much less of a concern than the expansion of T cells covered above.
In cancer treatment, CD24 could also be targeted as an innate immune checkpoint to promote phagocytosis of tumor cells by macrophages. 14 Innate immune checkpoint inhibitors targeting the phagocytic inhibitory ligand CD47, which binds to signal-regulatory protein-α (SIRPα) on macrophages, have gone through clinical trials for both solid and hematological cancers. 95 Similarly, targeting the interaction of TIGIT on NK cells with its ligands CD112 (also called PVRL2 and nectin-2) and CD155 (also called PVR) has been tested in clinical trials. 95 However, response rates to these checkpoint inhibitors have thus far been variable. Nevertheless, further clinical trials are in progress to test the effectiveness of these and other innate checkpoint inhibitors, often in combination with other treatments. Chimeric antigen receptor (CAR) T cell therapy targeting CD19 has proven very successful in relapsed or refractory B-ALL. 96 Using similar logic, Klapdor et al 97 have generated third-generation CD24-CAR NK cells to target ovarian cancer, showing that CD24+ ovarian cell lines and primary tumor cells were selectively killed by the CD24-CAR NK cells. However, a dual-CAR cell targeting both CD24 and mesothelin, an ovarian-cancer-specific antigen, did not enhance killing over the CD24-targeted CAR alone. Thus, CD24-CAR NK cells could potentially be trialed against CD24+ B-cell neoplasms, with the caveat that they may also target normal CD24-expressing cells, similar to the cytotoxic antibody situation described above. Application of bispecific T-cell engagers (BiTEs) may also be an immunotherapeutic strategy to pursue, as these molecules can act to engage CD24-directed T-cell-mediated killing of leukemias and lymphomas. 98
Future Directions
While CD24 is a promising target for B-cell leukemias and lymphomas, questions still remain on the viability of this therapy given the expression on many normal cell types. Thus, identification of a therapeutic window may be difficult. The variable glycosylation of CD24 on different cells may be an unexpected benefit to this conundrum. The development of anti-CD24 mAbs that target tumor-specific forms of CD24 would generate the specificity that is needed for high efficacy of this treatment. Multiple versions of anti-CD24 mAb have been generated with some binding only to the carbohydrates (eg BA-1 99 and SN3b 100 ) and some to the peptide backbone (eg SWA11 101 ). The generation of a mAb with specificity towards the tumor-specific carbohydrate-peptide conjugate may further improve tumor-specific targeting.
Lastly, there has been a lack of pre-clinical in vivo studies directly assessing efficacy of targeting CD24 in B cell leukemias and lymphomas. Even though expression data on CD24 in these cancers is promising, the in vivo data are necessary to move forward with using CD24 as a target in B cell neoplasms. Once these models are tested, further studies on safety will be necessary.
Conclusion
Overall, the outlook for CD24 as a target in the treatment of B cell leukemias and lymphomas is promising. However, care will need to be taken to limit or avoid off-target effects on normal CD24-expressing cells.
| 2022-11-20T16:38:19.901Z | 2022-11-01T00:00:00.000 | {
"year": 2022,
"sha1": "e4780c9391b3803c672eff5f63cba7b04f140d33",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "f4f7f5884eb18980efa4afbeedbf513a22053180",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257742871 | pes2o/s2orc | v3-fos-license | Faecal Microbiota Transplantation, Paving the Way to Treat Non-Alcoholic Fatty Liver Disease
Non-alcoholic fatty liver disease (NAFLD) is currently the most prevalent cause of chronic liver disease (CLD). Currently, the only therapeutic recommendation available is a lifestyle change. However, adherence to this approach is often difficult to guarantee. Alteration of the microbiota and an increase in intestinal permeability seem to be key in the development and progression of NAFLD. Therefore, the manipulation of microbiota seems to provide a promising therapeutic strategy. One way to do so is through faecal microbiota transplantation (FMT). Here, we summarize the key aspects of FMT, detail its current indications and highlight the most recent advances in NAFLD.
Introduction
Non-alcoholic fatty liver disease (NAFLD) is currently the most common cause of chronic liver disease (CLD), with a global estimated prevalence of 25% of adults [1,2]. However, this prevalence varies between countries, due to differences in age, gender, ethnicity, or dietary habits, among other reasons [3]. NAFLD is also increased in certain at-risk populations, such as patients with type 2 diabetes mellitus (T2DM), obesity and metabolic syndrome (MetS). Unfortunately, the global epidemic of NAFLD seems to be increasing unstoppably, given the prevalence of T2DM and obesity, and thus it is expected that by 2030 the worldwide prevalence of non-alcoholic steatohepatitis (NASH), where steatosis is accompanied by inflammation and ballooning, will increase by 15-56% [4]. Classically, NAFLD was defined as the presence of steatosis in >5% of hepatocytes, without significant ongoing or recent alcohol consumption and in the absence of other causes of CLD [5]. Recently, a new definition has been proposed in order to better characterise patients, reflecting the underlying pathophysiology as a metabolically driven disease and shifting to "positive" diagnostic criteria rather than exclusion criteria. Those authors proposed a change in the nomenclature of NAFLD to metabolic-dysfunction-associated fatty liver disease (MAFLD) [6]. NAFLD is a multisystemic disease [7] that not only has consequences for the liver itself (such as progression to cirrhosis or hepatocellular carcinoma) but is also associated with an increased risk of cardiovascular disease and extrahepatic cancer [8,9]. NAFLD also represents a very important economic burden [10], as it is becoming a global public health problem [11].
The pathophysiology of NAFLD is complex and not yet well understood. A theory of "multiple hits" has been proposed [12,13]: the first factor would be hepatic lipid accumulation resulting from the absorption of circulating free fatty acids (FFAs), de novo lipogenesis and dietary intake. Regarding the gut microbiota, a meta-analysis reported a higher abundance of Streptococcus and a lower abundance of Coprococcus, Faecalibacterium and Ruminococcus in NAFLD, but the significant heterogeneity limited the drawing of conclusions [18].
As seen before, the findings are quite variable among different studies (Table 1) and, therefore, several things should be taken into consideration before drawing conclusions. First, we note that the term dysbiosis refers, as mentioned above, to an imbalance, and a single responsible pathological microorganism should not be sought. Second, it is known that GM is dynamic and changes in response to environmental factors, such as diet. Third, previous studies that analysed GM in the stool have the limitation that they do not provide information about changes in the small bowel, the colon, and the microorganisms of the mucosal layer adjacent to the intestinal epithelium [25]. This altered GM can disrupt metabolic pathways and their end products. Thus, an increase in the fermentative pathways has been described (with an increase in the transformation of alcohol into acetaldehyde and acetate), along with a decrease in the concentration of lecithin (because of the increase in the transformation of lecithin into trimethylamine), an alteration in the metabolism of amino acid derivatives, such as indole, and an increase in secondary bile acids [26,27]. Furthermore, it has been demonstrated that the increase in endogenous alcohol produced by the GM is involved in the development of NAFLD: introducing strains of Klebsiella pneumoniae that produced a high dose of ethanol into mice induced NAFLD [28]. This raises the question of whether reversing dysbiosis could lead to an improvement in NAFLD.

Table 1. Summary of studies about dysbiosis observed in patients with non-alcoholic fatty liver disease (NAFLD) and non-alcoholic steatohepatitis (NASH).

On the other hand, NAFLD patients have an increased intestinal permeability [29]. It has been proven that disruption of the intestinal epithelial and gut vascular barriers are early and prerequisite events in the development of NASH, and that this is related to a dysbiotic microbiota [30]. Mouries et al found that not only bacterial products but also the bacteria themselves are able to reach the liver, identifying bacteria in the liver parenchyma of mice fed a high-fat diet (HFD) and suggesting their free migration. Accordingly, bacteria and their metabolites can reach the liver through the portal system and induce an inflammatory response, liver injury and fibrosis [31]. In germ-free mice colonized with stool microbes from 2-week-old infants born to obese mothers (Inf-ObMB), it was shown that Inf-ObMB-colonized mice had an increase in histological signs of periportal inflammation and intestinal permeability, and accelerated NAFLD progression when exposed to a Western-style diet [32].
Different Ways of Managing the Microbiota, Focus on Faecal Microbiota Transplantation (FMT)
In this context, there is an increasing interest in ascertaining how the microbiota could be modulated. Figure 1 summarizes the different ways to manipulate GM. The first way to change GM is diet. A vegan diet is high in fermentable fibre which provides growth substances to microbes [33]. Moreover, in a systematic review, an increase in Bacteroidetes at the phylum level and a higher abundance of Prevotella at the genus level were observed [34]. Processed food is associated with an impairment of the gut barrier and an alteration of GM. Furthermore, a high-salt diet is associated with a reduction of Lactobacillus abundance [33]. Exercise also affects GM composition. In obese and overweight individuals, energy restriction and a Mediterranean diet with physical activity reduced Firmicutes, especially Lachnospiraceae, after one year of intervention [35]. Moreover, creating a microbiota profile similar to that of healthy children, with a reduction in Proteobacteria and Gammaproteobacteria, seems to be useful in children with obesity [36].
Antibiotics are also widely known for their ability to change the microbiota. For example, C. difficile is a bacterium that causes dysbiosis and consequently an infection of the colon, which is treated with the non-absorbable antibiotic vancomycin [37]. In a recent study of intensive care unit patients who were administered a broad-spectrum antibiotic, a decrease in α-diversity was observed, with significant differences between bacterial phyla and classes in the stool depending on whether carbapenems or another type of antibiotic was administered [38]. Moreover, antibiotics have been suggested to play a role in functional gastrointestinal disorders [39]. In children, it has also been described that antibiotic use was associated with a reduction in microbiome diversity and richness; specifically, a meta-analysis showed a reduction in alpha-diversity in relation to macrolide (azithromycin) exposure [40].
Probiotics contain live microorganisms (or their components) similar to the beneficial bacteria usually present in the healthy human gastrointestinal tract. Probiotics can be ingested in the form of a food supplement or as a drug, but usually they are derived from food sources, especially fermented dairy products. The most frequently studied genera are Lactobacillus, Bifidobacterium, and Saccharomyces [44]. Probiotics exert multiple favourable effects in the host: they improve the digestion and metabolism of nutrients, regulate proinflammatory and anti-inflammatory cytokines, counteract alterations of the microbiome, and enhance the intestinal and immune barrier functions [45]. Beneficial effects have been observed in a variety of intestinal diseases, such as antibiotic-associated diarrhoea, inflammatory bowel disease (IBD) or colorectal cancer [46].
Microbial consortia are natural associations of two or more species acting as a community. Based on recent studies, multi-species and synthetic communities may have a greater effect than single strains. Creating such synthetic communities, i.e., achieving the ideal cocktail, is quite a challenge: it is essential to identify the microbiota, and its functions, directly related to the disease. For this purpose, shotgun metagenomic sequencing seems superior to 16S rRNA-based phylogenetic profiling. What is more, it is also essential to predict the bacterial interactions [47]. In vitro, it has been shown that a propionogenic bacterial consortium was able to reverse the lack of propionate caused by antibiotic-induced microbial dysbiosis [48].
FMT is a treatment intended to restore a patient's disturbed GM by transferring minimally manipulated donor stool to the gut of the patient [49]. There are different types of FMT depending on the type of material and the route of administration. Thus, FMT can use fresh or frozen material, administered through enema, colonoscopy (into the right colon, except in severe colitis, where it can be applied into the left colon), the upper gastrointestinal tract (by gastroscopy, nasogastric, nasojejunal or gastrostomy tube) or oral capsules. In the case of administration by gastroscopy or colonoscopy, the recipients should be prepared with bowel lavage with polyethylene glycol [49].
Different FMT donor screening protocols have been published in order to try to establish guidelines for choosing the ideal candidate. First, little information is available about faecal donor age criteria, but the European Consensus and the UEG working group recommend individuals aged <60 years to avoid the risk of donor comorbidity [50,51]. Secondly, it should be ensured that the donors are healthy people. For that, some exclusion criteria are usually established, mainly covering the risk of infectious disease, gastrointestinal comorbidities and factors that can affect the composition of the faecal microbiota (antimicrobial or probiotic consumption within the preceding 3 months and during the donation period, major immunosuppressive medications, or systemic antineoplastic agents). Other exclusion criteria also considered are: having a systemic autoinflammatory disease, atopic disease, metabolic syndrome, obesity (body mass index (BMI) > 30), moderate/severe malnutrition, chronic pain syndromes, ongoing pregnancy, previous or scheduled gastrointestinal surgery, or a history of cancer. Some authors also include diabetes, neurological/neurodegenerative disorders, or chronic treatment (≥3 months) with daily use of proton pump inhibitors [52].
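For reference, the obesity cut-off above uses the usual definition of body mass index (a standard formula, not specific to these protocols):
BMI = weight (kg) / height (m)^2,
so the exclusion threshold of BMI > 30 corresponds to the conventional definition of obesity.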
Importantly, this first screening for checking inclusion and exclusion criteria can be made through a questionnaire and a medical interview [53]. Additionally, stool and blood tests are performed for all donors at baseline and periodically, which is important to avoid the transmission of infectious disease [54]. The Food and Drug Administration (FDA) issued a statement emphasizing the importance of testing for multidrug-resistant microorganisms after two cases of invasive infections caused by extended-spectrum beta-lactamase (ESBL)-producing E. coli in two immunocompromised adults. For that reason, the FDA proposed donor screening with specific questions addressing risk factors for colonization with multidrug-resistant organisms (MDROs), excluding those at a higher risk of colonization with MDROs and, additionally, MDRO testing of donor stool and exclusion of donors who test positive [54]. Given the COVID-19 pandemic, certain extra measures have been taken. First, the initial screening should cover typical COVID-19-associated symptoms within the previous 30 days and a history of travel to more affected areas [55]. Second, in endemic countries, an RT-PCR assay should be conducted for all donors. Finally, taking into account the number of asymptomatic carriers, molecular stool screening for SARS-CoV-2 should be conducted [56].
As a consequence of this rigorous screening process, the recruitment of stool donors is challenging. Paramsothy et al. [57] evaluated a faecal donor program, showing that only 12 of the initial 116 respondents (10%) were enrolled as study donors. A recent study in Italy [58] identified only 25% of stool donors as suitable at the end of the selection process. In another recent study in China [59], of 2071 candidates evaluated, only 66 participants (3.19%) finally qualified as stool donors. Another fact to consider is that, even though all donors are healthy people, significant differences between donors' microbial diversity are observed.
On the other hand, it has been proposed that, instead of using universal donors, it would be more useful to use "super-donors", i.e., donors whose stools obtain better results after FMT than the faeces of other donors [60]. Another new concept is the use of "keystone species", which consists of first performing a metagenomic analysis of the patient's stool in order to know which species are depleted, and then selecting a specific donor in whom those species are enriched [61].
After FMT, monitoring of adverse effects should be carried out, although the specific observation period is currently not well-defined and depends on the route of administration, the baseline characteristics of the patient and the underlying diseases [50]. The most common adverse effects described in the short term are diarrhoea, abdominal cramps, abdominal distension/bloating, abdominal pain, fever, flatulence or constipation, with the majority of adverse effects being mild and self-limiting [62].
Current Indications for FMT
Today, the use of FMT for recurrent C. difficile infection (RCDI) is recommended in various guidelines [34,63]. FMT for RCDI (delivered by colonoscopy or nasojejunal tube after 4-10 days of vancomycin) achieves clinical resolution in a significantly higher proportion of patients than fidaxomicin (92% vs. 42%) [64]. Comparing FMT by capsule administration versus by colonoscopy, no differences were demonstrated, since both methods achieved the prevention of RCDI after a single treatment in >95% of participants. However, the capsule group had fewer adverse events and a greater proportion of patients rated their experience as "not at all unpleasant" [65]. FMT can be repeated in patients with a recurrence of CDI 8 weeks after an initial FMT. Enema is only recommended if other methods are not available [37].
Another potential indication for FMT is IBD. FMT is effective for IBD patients with RCDI, with a success rate close to 90% after one FMT [66]. Moreover, in patients with IBD and RCDI, clinical remission in 59% and an improvement in disease activity in 24% have also been described [67]. In addition, in case of failure, a second transplant could be considered. Given the dysbiosis present in IBD and its role in pathophysiology, FMT could also play a role in controlling disease activity [68]. In a recent meta-analysis, clinical remission rates varied considerably among studies, with an overall remission rate of 37% [69]. In patients with mild to moderate ulcerative colitis (UC), FMT by colonoscopy was as effective as oral glucocorticoids in inducing remission at week 12, but with the advantage of causing fewer adverse effects [70]. FMT by colonoscopy followed by daily oral capsules is safe and well tolerated in patients with UC, but more studies are needed to draw conclusions about its use as a maintenance therapy [71]. On the other hand, in patients with Crohn's disease in clinical remission on oral glucocorticoids, a single FMT by colonoscopy showed better results regarding maintenance of remission compared to the placebo group, but without reaching statistical significance [72].
The possible benefit of FMT has also been studied in obesity and MetS. In a recent meta-analysis that included three randomized placebo-controlled trials (RCTs) with 75 obese patients with MetS undergoing FMT by nasojejunal tube, an improvement in dysglycaemia was observed in the short term. However, no differences were shown compared to placebo in the lipid profile or in BMI. The results regarding the change in the composition of the microbiota were heterogeneous among the included studies [73]. In another recent meta-analysis that included six RCTs with a total of 154 patients with MetS and/or obesity, the FMT group had a lower HbA1c and a better lipid profile in a short timeframe (2-6 weeks). However, there were no differences in fasting glucose, triglycerides, total cholesterol or BMI. It is necessary to consider the great heterogeneity between the studies in the characteristics of the FMT: the route of administration, the number of administrations and the type of placebo in the control arm [74]. In the FMT-TRIM trial, there was no difference between capsules and placebo in the primary endpoint (insulin sensitivity at 6 weeks) or in most of the secondary endpoints (including BMI). Nevertheless, it seems that in the subgroup of patients with low baseline microbiome diversity, the improvement in some metabolic outcomes may be greater [75].
Other diseases in which the potential usefulness of FMT is being studied include, for example: irritable bowel syndrome, graft-versus-host disease, or autism spectrum disorders [76].
FMT in Chronic Liver Diseases (CLD) Other Than NAFLD
Several studies have been published in relation to FMT in patients with CLD in recent years. Bajaj et al. [68] carried out a study in 20 cirrhotic outpatients with recurrent hepatic encephalopathy (HE), defined as at least two documented overt HE episodes requiring therapy, and a MELD score <17. They gave 5 days of broad-spectrum antibiotics prior to a single FMT enema. The sample for the enema was obtained from one donor whose stools were rich in Lachnospiraceae and Ruminococcaceae, bacteria that had been reduced in patients with HE. Over 5 months, there were no episodes of HE in the FMT group, compared with five patients (50%) who had HE in the standard of care (SOC) arm. Moreover, in the FMT group, there was an increase in stool microbiota diversity and beneficial taxa (i.e., Lactobacillaceae, Bifidobacteriaceae, Lachnospiraceae and Ruminococcaceae), and the number of hospitalizations and HE episodes was significantly reduced in the long term in the FMT arm compared to the SOC arm [69]. Regarding the stool analysis, differences were observed after >12 months of follow up, with an increase in the relative abundance of Burkholderiaceae and a decrease in Acidaminococcaceae, but not in Lachnospiraceae and Ruminococcaceae as observed in the previous study. These two studies are limited by the use of pre-treatment antibiotics, which makes it difficult to discern the role of FMT alone, and by the small number of participants.
Studies with microbiota capsules in cirrhotic patients with recurrent HE are also available. The administration of 15 FMT capsules (with 4.125 g of stool from a single donor) at once significantly reduced the number of hospitalizations and the number of subsequent episodes of HE: only one patient in the FMT group had HE (which was attributed to a transjugular intrahepatic portosystemic shunt placement) compared to three patients in the placebo group (one of them having six episodes) after 30 days of follow up [77]. Furthermore, post-FMT duodenal mucosal diversity increased, with higher Ruminococcaceae and Bifidobacteriaceae and lower Streptococcaceae and Veillonellaceae at day 30. Nevertheless, there was no difference in stool diversity. One of the limitations of this study is that duodenal biopsies were not repeated in the placebo group. FMT can also result in a better immunoinflammatory state, reducing serum interleukin-6 (IL-6) and lipopolysaccharide (LPS)-binding protein, in relation to an increase in beneficial microbial taxa and improved neurological status [78].
Other frequent complications in cirrhotic patients are infections, especially those caused by multidrug-resistant organisms [79]. FMT, either by enema or by capsules, was associated with a reduction in the abundance of vancomycin, beta-lactamase and rifamycin antibiotic-resistance genes. These findings may represent a new therapeutic target for reducing infections, although further studies are necessary [80].
All these studies included cirrhotic patients of various aetiologies (alcohol, hepatitis C virus (HCV), NAFLD and others). However, other publications are available on people with chronic liver disease of a specific aetiology, which will be mentioned below.
In non-cirrhotic patients with primary sclerosing cholangitis (PSC) and concurrent IBD, FMT by enema (90 mL from a single donor) was safe and beneficial, since 33% of patients experienced a ≥50% decrease in alkaline phosphatase (ALP). Furthermore, FMT increased bacterial diversity, which may be related to the improvement in ALP levels. Interestingly, overall bile acid profiles did not change, raising the question of whether FMT acts through interactions involving other primary metabolites [81]. However, the results should be interpreted and generalised with caution given the small number of patients, the heterogeneity of patient characteristics and the lack of a control arm.
In patients with alcohol-related cirrhosis, FMT enema reduced both alcohol consumption (measured by urinary ethyl glucuronide/creatinine) and cravings [82]. This could be related to the increase in microbial diversity and the reduction in the parameters of systemic inflammation. It should be noted that, in this case, an initial study of the composition of the patient's stool was carried out in order to later use specific donors that had stool rich in Lachnospiraceae and Ruminococcaceae, which were absent in the studied population.
Finally, in patients with hepatitis B virus (HBV) infection who remained hepatitis B virus e-antigen (HBeAg) positive despite antiviral treatment, it has been described that FMT, but not placebo, gradually reduced HBeAg levels after each dose given (in the duodenum every 4 weeks), achieving clearance in three out of five patients [83]. A subsequent study supports the potential usefulness of multiple FMT into the duodenum of HBV patients, achieving HBeAg clearance in 16.7% (2/12) of patients after six cycles. Moreover, a significant reduction in HBV DNA levels was observed compared to the placebo arm; however, no significant reduction in transaminases was observed and no patient achieved hepatitis B surface antigen clearance [84].
To our knowledge, there are currently no published studies conducted in humans on FMT in a cohort of patients with autoimmune hepatitis, primary biliary cholangitis, HCV, Wilson's disease, alpha 1 antitrypsin deficiency or hemochromatosis. Table 2 summarizes all the previously mentioned studies.
Possible Treatments for NAFLD by Modulating GM
Currently, lifestyle changes based on diet and exercise are the first step of treatment for patients with NAFLD [5,86]. A Mediterranean hypocaloric diet, low in red and processed meat, is recommended. In addition, regular physical activity is also prescribed, with a target of 150-300 min of moderate-intensity or 75-150 min of vigorous-intensity aerobic exercise per week [87]. Lifestyle intervention programs achieve significant weight loss associated with an improvement in histological involvement, reducing steatosis and the overall NAFLD activity score (NAS) [88]. Specifically, the ideal goal would be to achieve a 10% weight loss, because at this threshold improvements have been reported not only in steatosis and NASH but also in fibrosis [89]. However, only a small percentage of patients (10%) are able to reach this 10% weight loss, and many of them fail in the long-term maintenance of the intervention, which is key to preventing weight regain with the passage of time [90].
Prebiotics and probiotics can have a beneficial effect in NAFLD patients by modulating GM. Behrouz et al. found an improvement in transaminases and triglycerides after 3 months of probiotic and prebiotic therapy, but did not find significant differences in the rest of the lipid profile or in glucose levels [91]. Multiprobiotics consisting of Bifidobacterium, Lactobacillus, Lactococcus and Propionibacterium genera can be useful for improving liver fat content and transaminases, but they do not seem to have any effect on reducing liver stiffness [92]. Other multiprobiotics, in this case containing six different Lactobacillus and Bifidobacterium species, also failed to improve liver fibrosis. Moreover, no changes in transaminases, lipid profile, glucose levels or hepatic steatosis were obtained [93]. On the other hand, synbiotics are preparations that contain one or more probiotic species together with prebiotic ingredients. Synbiotic treatment with fructo-oligosaccharides plus Bifidobacterium was ineffective in decreasing liver fat content or in improving liver fibrosis [94].
Pinheiro et al. orally administered a consortium of nine human gut commensal strains to rats fed a high-fat high-glucose/fructose diet (HFGFD), every 24 h for 2 weeks, and compared them with an orally administered vehicle (sterile PBS) group (HFGFD-VEH) and an HFGFD-FMT group. They observed an increase in bacterial diversity and a reduction of portal pressure (PP) compared to HFGFD-VEH. However, the body weight loss was smaller than that achieved in the HFGFD-FMT group. No treatment significantly reversed NASH. There was no difference in glucose and insulin levels, HOMA-IR or cholesterol in any group. However, in the stelic animal (mouse) model (STAM™) study, where a consortium was administered for 4 weeks, an improvement in NAS was observed, consisting of a decrease in steatosis and ballooning, and in liver fibrosis [95].
FMT and NAFLD
Regarding animal models, FMT from a human donor with severe steatosis triggered hepatic lipid accumulation and steatosis in mice [96]. Moreover, oral FMT in HFD mice can efficiently reverse the increase in Bacteroidetes and the reduction in Actinobacteria and Firmicutes [97]. In addition, an improvement in metabolic alterations and liver histology can be obtained, since a decrease in body weight, transaminases, intrahepatic lipid accumulation and a NAS score reduction of more than two points have been reported. This improvement can be explained by changes in GM that increase butyrate levels and reduce pro-inflammatory factors (i.e., IL-1, IL-6 or TNFα), promoting an anti-inflammatory microenvironment [74]. However, Mitsinikos et al. showed that FMT may not be as advantageous as dietary modifications, given that FMT from mice receiving dietary interventions to rats with NASH did not improve histological activity [98].
On the other hand, HFGFD can produce, along with an increase in steatosis, an increase in PP. FMT from healthy controls to HFGFD rats with NASH achieves a reduction in PP. This change appears to be related to a reduction in intrahepatic vascular resistance, in association with a significant improvement in molecular markers of endothelial dysfunction. Specifically, increases in protein kinase B and endothelial nitric oxide synthase were observed [99].
To date, only three clinical trials of FMT have been performed in patients with NAFLD (Table 3). Craven et al. [100] showed that FMT from an allogenic donor, delivered by endoscope to the distal duodenum of NAFLD patients, reduced small intestinal permeability, which was measured using the lactulose:mannitol urine test. However, there was no significant difference in the hepatic fat fraction measured by magnetic resonance imaging or in insulin resistance 6 months post-transplant. This may be a consequence of the fact that changes in the microbiome after allogenic FMT are not very long-lasting. Witjes et al. [101] transferred faeces from four healthy lean vegan donors to NAFLD patients by nasoduodenal tube. After the transplant, there was an improvement in both the biochemical liver profile and the necro-inflammation score in the liver biopsy. This consisted of a decrease in both lobular inflammation and hepatocellular ballooning, but not in steatosis or fibrosis. Regarding the correction of dysbiosis, after FMT, more Ruminococcus, Eubacterium hallii, Faecalibacterium and Prevotella copri were observed. Finally, Xue et al. [102] transplanted faeces from healthy donors by colonoscopy followed by three additional enemas over 3 days. A modulation of GM was observed, with a decrease in Proteobacteria and an increase in the Bacteroidetes, Firmicutes, Fusobacteria and Actinobacteria phyla. In addition, a significant improvement in hepatic fat attenuation evaluated by FibroScan was observed, although fibrosis stage was not analysed. In addition to those mentioned, there are other trials under development: NCT03648086 [103], NCT04465032 [104], NCT04594954 [105], NCT02469272 [106], NCT03803540 [107], NCT04371653 [108], NCT02721264 [109] and NCT02496390 [110]. It should be noted that two of these clinical trials are designed with oral capsules of lyophilized faeces. One of them [108] aims to evaluate microbiome diversity and richness in faecal samples and the number of participants with an increase in flora diversity after FMT, given twice weekly for 12 weeks. On the other hand, the EMOTION study [111] is a randomized, double-blind, multicentre study that will be carried out in Spain. It will have two treatment arms, comparing placebo vs. FMT by oral capsules, and it will be performed in patients with NAFLD confirmed by liver biopsy. As a novelty, it will have an initial phase prior to treatment in which patients will be subjected to lifestyle modifications, in order to stratify patients into those who respond and those who do not respond to lifestyle changes (i.e., a leading-phase). In addition, multiple FMT will be performed over a longer period, starting with an initial dose of 24 capsules (6 g of lyophilized faeces) and, subsequently, four maintenance doses of 12 capsules (3 g of lyophilized faeces) every 3 months for 12 months. The main objectives will be to evaluate the safety and tolerability of oral FMT in patients with NAFLD during 72 weeks of treatment and to evaluate the efficacy in hepatic improvement at 72 weeks. Figure 2 schematically explains what FMT consists of and its usefulness in patients with NAFLD, improving intestinal permeability and dysbiosis.
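As context for the permeability endpoint in [100], and describing only the general principle of the test rather than the trial's exact protocol: lactulose crosses the intestinal barrier mainly paracellularly, whereas mannitol crosses transcellularly, and the test reports the ratio of their fractional urinary recoveries,
L:M = (urinary lactulose / ingested lactulose) / (urinary mannitol / ingested mannitol),
so a fall in L:M after FMT indicates reduced paracellular (small intestinal) permeability.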
Figure 2. Non-alcoholic-fatty-liver-disease (NAFLD)-related dysbiosis and increased intestinal permeability and the utility of faecal microbiota transplantation (FMT). The increase in intestinal permeability enables bacteria and their metabolites to reach the liver through the portal system. In FMT, a stool sample is taken from a healthy donor. After processing, it is administered to the recipient subject with NAFLD. FMT can be performed in different ways, either orally or by endoscopy or enemas. FMT is intended to reverse existing dysbiosis and restore the intestinal barrier, and consequently improve the severity of the disease.
Limitations of FMT and Future Directions
These studies also have some limitations and some questions without clear answers. First, the lack of changes in specific bacterial taxa may indicate that microbiome analysis limited to stool specimens does not properly reflect changes in the small intestine or in microorganisms from the mucosal layer adjacent to the intestinal epithelium. Furthermore, genomic approaches based on analysing bacterial 16S rRNA genes are not fully useful for detecting low-abundance microbes that might drive the host phenotype. Another point to consider is that the focus is on bacteria, ignoring the contribution that other microorganisms, such as viruses or fungi, can make. In addition, it is known that GM is dynamic and changes in response to several environmental factors, such as diet, antibiotics or immune responses. On the other hand, trials have very strict inclusion criteria that may not reflect usual practice, including patients who mostly had non-advanced fibrosis. Moreover, there is much heterogeneity between studies in relation to FMT characteristics: the selection of the ideal donor, the route used to introduce the material, the grams of faecal stool introduced and the number of FMTs needed. Apart from that, it is not yet well known which factors influence engraftment, but it seems to be related to taxonomic identity, strain abundance and microbial interactions. Moreover, the complexity and heterogeneity of NAFLD also represent an important impediment to ascertaining the real benefit of FMT, because the problem may be that we are not selecting the patients well, and it may be effective only for some specific subgroups of patients. Additionally, the studies usually have a short period of follow-up for a chronic disease such as NAFLD [60,112-114].
From the data presented above, it appears that future work will focus on two fronts: first, the better identification and selection of candidate patients who will benefit from FMT; and second, improving the selection of the transplanted microbiota, considering the influence of the interactions that the transplanted bacteria will have with each other.
Conclusions
NAFLD is an increasingly prevalent disease worldwide associated with the existing obesity pandemic, largely due to the changes in diet and lifestyle that have occurred in recent years. These changes can alter the correct balance between the bacterial species that reside in the intestine, producing dysbiosis and an increase in intestinal permeability. A growing number of studies associate dysbiosis with the development of NAFLD and have identified bacterial species and microbial products involved in its development, although further research is still required on this aspect.
Due to the great complexity of this disease, there are still no effective treatments for NAFLD, but FMT has been described to have beneficial effects in other diseases; nevertheless, the only well-established indication for FMT in the current guidelines is RCDI. FMT, in all the forms of administration mentioned above (enemas, instillation in the duodenum or capsules), was found to be safe and well tolerated. In addition, FMT has already been tested in animal models of NAFLD, showing improvements in metabolic alterations, although histological results have been inconsistent. Thus, modulation of GM through FMT appears to be a promising new therapeutic strategy in NAFLD, which could lead to a change in the paradigm of treatment of the disease. However, there are still few clinical trials in this area, and it must be considered that both the results and the methodology differ among those that have been carried out. For this reason, further research is needed on the effects of FMT on the resolution of NAFLD and especially fibrosis. | 2023-03-26T15:13:44.235Z | 2023-03-24T00:00:00.000 | {
"year": 2023,
"sha1": "c115f0c9ccec1e7ae167257d94e60aaab18475d9",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/24/7/6123/pdf?version=1679636019",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6943ddc7e0bc5bb346f19ba4eb7144ed990964c0",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": []
} |
119600878 | pes2o/s2orc | v3-fos-license | Ambidextrous objects and trace functions for nonsemisimple categories
We provide a necessary and sufficient condition for a simple object in a pivotal k-category to be ambidextrous. In turn, these objects imply the existence of nontrivial trace functions in the category. These functions play an important role in low-dimensional topology as well as in studying the category itself. In particular, we prove they exist for factorizable ribbon Hopf algebras, modular representations of finite groups and their quantum doubles, complex and modular Lie (super)algebras, the $(1,p)$ minimal model in conformal field theory, and quantum groups at a root of unity.
1. Introduction
1.1. Let C be a category with a tensor product and duality. Assuming C satisfies certain minimal axioms, one can use the tensor product and duality structure on C to define (categorical) traces of endomorphisms and (categorical) dimensions of objects. These trace and dimension functions are powerful tools for studying C. The existence of trace and dimension functions also allows one to use C to construct invariants of knots, links, 3-manifolds, and other objects in low-dimensional topology (e.g. see [45]).
However, in many situations of great interest the category is not semisimple and these functions are trivial. This occurs, for example, for the typical simple supermodules of a complex Lie superalgebra and for certain representations of a quantum group at a root of unity. It is desirable to find a suitable replacement for trace and dimension in these settings. Addressing this question, the first and third authors of this paper showed by direct calculation that the typical simple supermodules for the Lie superalgebras of type A and C admit modified trace and dimension functions and that these in turn give rise to nontrivial link invariants [22]. Geer and Patureau-Mirand have since worked with a number of coauthors to obtain further examples of modified traces. They have successfully used these functions to define invariants of knots, links, 3-manifolds, and other low-dimensional objects in topology.
Most recently, in [19] and [24] this approach is used to understand and generalize the "state sum" invariants of 3-manifolds introduced by Turaev-Viro and Kashaev. As an outcome of their investigations they obtain 3-manifold invariants, "relative Homotopy Quantum Field Theories", and a generalization of Kashaev's quantum dilogarithm invariant which was introduced in his foundational paper where he first stated the volume conjecture.
In part motivated by these developments in quantum topology, the authors of the present paper unified and generalized the notion of modified trace and dimension functions to the setting of ribbon categories in [20]. In particular, we showed that these functions also give unexpected new insights into purely representation theoretic questions. For example, they provide a natural generalization of the well known conjecture of Kac and Wakimoto on the superdimension of simple supermodules for complex Lie superalgebras [33]. Recently, Serganova proved our generalized Kac-Wakimoto conjecture for gl(m|n) along with the ordinary Kac-Wakimoto conjecture for the basic classical Lie superalgebras [43]. In turn, the generalized Kac-Wakimoto conjecture is a key ingredient in the forthcoming calculation of complexity for the simple gl(m|n)-supermodules [8].
A simple object admits a nontrivial trace if and only if it is ambidextrous (a condition on morphisms). One surprising outcome of [20] was the discovery that ambidextrous objects can fail to exist even in a natural setting like the finite-dimensional representations of sl_2(k) over a field of positive characteristic.
The main result of this paper is the reformulation of ambidextrous into a condition on objects. This provides a new perspective on what it means for an object to be ambidextrous. Furthermore, the new condition can easily be verified in a wide variety of settings. As a consequence we recover results computed in [20], [21], and [22] as well as a large number of previously unknown examples. A striking outcome of the main theorem and the subsequent examples is the fact that ambidextrous objects seem to be quite plentiful in nature. In particular, we show they exist for representations of factorizable ribbon Hopf algebras, finite groups and their quantum doubles, Lie (super)algebras, the (1, p) minimal model in conformal field theory, and quantum groups at a root of unity.
1.2. Acknowledgements. The second author is grateful to David Hemmer, Christopher Drupieski, and Christopher Bendel for helpful conversations.
2. Traces on Pivotal k-categories
2.1. Pivotal categories. We recall the definition of a pivotal tensor category; see, for instance, [4]. A tensor category C is a category equipped with a covariant bifunctor ⊗ : C × C → C called the tensor product, an associativity constraint, a unit object 1, and left and right unit constraints such that the Triangle and Pentagon Axioms hold. When the associativity constraint and the left and right unit constraints are all identities we say that C is a strict tensor category. By MacLane's coherence theorem, any tensor category is equivalent to a strict tensor category. To simplify the exposition, we formulate further definitions only for strict tensor categories; the reader will easily extend them to arbitrary tensor categories. In what follows we adopt the convention that fg will denote the composition of morphisms f ∘ g.
A strict tensor category C has a left duality if for each object V of C there is an object V* of C and morphisms
coev_V : 1 → V ⊗ V* and ev_V : V* ⊗ V → 1
satisfying (Id_V ⊗ ev_V)(coev_V ⊗ Id_V) = Id_V and (ev_V ⊗ Id_{V*})(Id_{V*} ⊗ coev_V) = Id_{V*}. A left duality determines for every morphism f : V → W in C the dual (or transpose) morphism f* : W* → V* by
f* = (ev_W ⊗ Id_{V*})(Id_{W*} ⊗ f ⊗ Id_{V*})(Id_{W*} ⊗ coev_V)
and determines for any objects V, W of C, an isomorphism γ_{V,W} : W* ⊗ V* → (V ⊗ W)*. Similarly, C has a right duality if for each object V of C there is an object V• of C and morphisms
~coev_V : 1 → V• ⊗ V and ~ev_V : V ⊗ V• → 1
satisfying the analogous identities. The right duality determines for every morphism f : V → W in C the dual morphism f• : W• → V• by
f• = (Id_{V•} ⊗ ~ev_W)(Id_{V•} ⊗ f ⊗ Id_{W•})(~coev_V ⊗ Id_{W•})
and determines for any objects V, W, an isomorphism γ'_{V,W} : W• ⊗ V• → (V ⊗ W)•. A pivotal category is a tensor category with left duality {coev_V, ev_V}_V and right duality {~coev_V, ~ev_V}_V which are compatible in the sense that V* = V•, f* = f•, and γ_{V,W} = γ'_{V,W} for all V, W, f as above. Every pivotal category gives a natural tensor isomorphism
φ = {φ_V : V → V**}_V, φ_V = (~ev_V ⊗ Id_{V**})(Id_V ⊗ coev_{V*}).
Conversely, a left duality together with such an isomorphism φ determines a right duality with V• = V* by ~coev_V = (Id_{V*} ⊗ φ_V^{-1}) coev_{V*} and ~ev_V = ev_{V*}(φ_V ⊗ Id_{V*}). Moreover, this right duality is compatible with the left duality and defines a pivotal structure (cf. [3, Section 2.2]).
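As a quick consequence of these formulas, recorded here only for orientation (the verification is a routine exercise with the identities above): the transpose is contravariant and compatible with γ, namely
(Id_V)* = Id_{V*} and (gf)* = f* g* for f : U → V and g : V → W,
and the isomorphisms γ identify (f ⊗ g)* with g* ⊗ f*.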
2.3. Tensor k-categories. Let k be a commutative ring. A tensor k-category is a tensor category C such that its hom-sets are left k-modules, the composition and tensor product of morphisms are k-bilinear, and End_C(1) is a free k-module of rank one. Then the map k → End_C(1), k ↦ k Id_1, is a k-algebra isomorphism. It is used to identify End_C(1) = k. An object V of a tensor k-category C is absolutely simple if End_C(V) is a free k-module of rank one. Equivalently, V is absolutely simple if the k-homomorphism k → End_C(V), k ↦ k Id_V, is an isomorphism. If V is absolutely simple, it is used to identify End_C(V) = k. By the definition of a tensor k-category, the unit object 1 is absolutely simple. We call an object V of C absolutely indecomposable if End_C(V) = k Id_V ⊕ Rad(End_C(V)); that is, if every endomorphism of V is of the form α Id_V + r with α ∈ k and r in the Jacobson radical of End_C(V).
2.4. Traces. We now recall the notion of a trace on an ideal in a pivotal k-category.
For more details see [24]. By a right ideal of C we mean a full subcategory, I, of C such that: (1) If V is an object of I and W is any object of C, then V ⊗ W is an object of I. (2) If V is an object of I, W is any object of C, and there exist morphisms f : W → V, g : V → W such that gf = Id_W, then W is an object of I. If I is a right ideal in a pivotal k-category C, then a right trace on I is a family of linear functions {t_V : End_C(V) → k}, where V runs over all objects of I, such that the following two conditions hold.
(1) If U ∈ I and W ∈ Ob(C), then for any f ∈ End_C(U ⊗ W) we have
t_{U⊗W}(f) = t_U((Id_U ⊗ ~ev_W)(f ⊗ Id_{W*})(Id_U ⊗ coev_W)).
(2) If U, V ∈ I, then for any morphisms f : V → U and g : U → V in C we have t_V(gf) = t_U(fg).
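As a quick illustration of condition (1), and a direct computation from the duality axioms rather than an extra axiom: taking f = Id_{U⊗W} gives
t_{U⊗W}(Id_{U⊗W}) = t_U(Id_U ⊗ (~ev_W coev_W)) = d(W) t_U(Id_U),
where d(W) := ~ev_W coev_W ∈ k is the usual categorical dimension of W.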
2.5. Ambidextrous objects. An absolutely simple object V in a pivotal k-category C is said to be right ambidextrous if
f coev_V = (φ_V^{-1} ⊗ Id_{V*}) f* ~coev_{V*} for all f ∈ End_C(V ⊗ V*),
where f* is viewed as an endomorphism of V** ⊗ V* via the isomorphism γ_{V,V*} : V** ⊗ V* → (V ⊗ V*)*. This definition is equivalent to several other definitions, see [24, Lemma 9]. In particular, when C is a ribbon k-category the definition of a right ambidextrous object is equivalent to the definition of an ambidextrous object given in [20]. For short we say V is right ambi if V is right ambidextrous.
Let I_V be the full subcategory of all objects U satisfying the property that there exists an object W and morphisms α : U → V ⊗ W and β : V ⊗ W → U such that βα = Id_U. It is not difficult to verify that I_V forms a right ideal. Combining [24, Lemma 9(b)] and [24, Theorem 7(a)], if V is a right ambi object, then the canonical map End_C(V) → k extends uniquely to a right trace on I_V. In particular, since on End_C(V) this right trace coincides with the canonical map, it follows that it is necessarily nonzero. In short we have the following result.
Theorem 2.5.1. If C is a pivotal k-category and V is a right ambi object in C, then there is a unique non-zero right trace on I V up to multiplication by an element of k.
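In particular, in the spirit of the modified dimension functions of [20], a right trace on I_V determines a function
d(W) := t_W(Id_W) for W in I_V,
which, unlike the categorical dimension, can be nonzero on objects whose categorical dimension vanishes.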
2.6. Variations. In a pivotal k-category there are also natural notions of left and two-sided ideals, left and two-sided traces, and left ambidextrous objects. See [24] for details. In this paper we only consider the right-handed version of these concepts and leave the other variants to the interested reader. When the category is a ribbon category, the various notions coincide and are equivalent to the definitions given in [20]. In this case we drop the adjective "right" for brevity.
3. Main Theorem
3.1. For the remainder of the paper we assume k is a field. We also assume that C is an additive pivotal k-category where every indecomposable object V in C is absolutely indecomposable and all elements of the radical of End_C(V) are nilpotent. In particular, End_C(V) is a local ring. We remark that these assumptions are known to imply that the Krull-Schmidt Theorem holds in C.
For example, by Fitting's Lemma our assumptions hold whenever k is algebraically closed and C is an abelian k-category with all objects having finite length. For a nonabelian example, we note the conditions also hold for Deligne's category Rep(S t ) [14]. See [13] for a description of trace and dimension functions in Rep(S t ).
Given an object V in C, we fix a direct sum decomposition of V ⊗ V* into indecomposable objects W_i indexed by a set I:
V ⊗ V* = ⊕_{i∈I} W_i.
We write i_k : W_k → V ⊗ V* and p_k : V ⊗ V* → W_k for the biproduct morphisms corresponding to this decomposition. In particular, p_k i_k = Id_{W_k} for all k ∈ I and if we set e_k = i_k p_k, then {e_k}_{k∈I} is a pairwise orthogonal set of idempotents in End_C(V ⊗ V*) which sum to the identity.
Lemma 3.1.1. Let V be an absolutely simple object in C. Then there is a unique j ∈ I which satisfies the following equivalent conditions: (a) e_j coev_V = coev_V; (b) Hom_C(1, W_j) ≠ 0. There is also a unique j' ∈ I which satisfies the following equivalent conditions: (a') ~ev_V e_{j'} = ~ev_V; (b') Hom_C(W_{j'}, 1) ≠ 0.
Proof. Using that the elements e_j are a pairwise orthogonal set of idempotents which sum to the identity and that V is absolutely simple, it is straightforward to verify that there is a unique j such that e_j coev_V = coev_V. Using additivity we have
Hom_C(1, V ⊗ V*) ≅ ⊕_{k∈I} Hom_C(1, W_k).
But as V is absolutely simple this Hom-space is one-dimensional and hence there is precisely one k ∈ I for which Hom_C(1, W_k) is non-zero. On the other hand, p_j coev_V : 1 → W_j is necessarily nonzero. Thus k = j.
The statements for j ′ are argued similarly.
Lemma 3.1.2. Let W be an absolutely indecomposable object of C and let R = Rad(End_C(W)). Assume Hom_C(W, 1) and Hom_C(1, W) are at most one-dimensional. Then
f r = 0 for all r ∈ R and f ∈ Hom_C(W, 1),
and r g = 0 for all r ∈ R and g ∈ Hom_C(1, W).
Proof. Since by assumption Hom_C(W, 1) is at most one-dimensional, and fr ∈ Hom_C(W, 1), we have fr = γf for some γ ∈ k (the case f = 0 being trivial). By assumption elements of R are nilpotent. Hence for N ≫ 0 we have r^N = 0 and 0 = f r^N = γ^N f, so γ = 0 and fr = 0. The second statement is handled in an identical fashion.
We are now prepared to state and prove the main theorem. For the reader's convenience, we recall the full set of assumptions in force at this point: We assume k is a field and C is an additive pivotal k-category. We further assume all indecomposable objects are absolutely indecomposable and that the radical of the endomorphism ring of an absolutely indecomposable object consists of only nilpotent elements.
We also note the following identities which are needed in the proof:
(3.1.1) ~coev_{V*} = (φ_V ⊗ Id_{V*}) coev_V,
(3.1.2) ~ev_V = (~coev_{V*})* φ_{V⊗V*},
where in (3.1.2) we identify (V ⊗ V*)* with V** ⊗ V* via γ_{V,V*}.
Theorem 3.1.3. Let V be an absolutely simple object of C. Then the following conditions are equivalent:
(1) the object V is right ambi;
(2) j = j';
(3) the object W_j is self-dual.
Proof. The fact that the second and third conditions are equivalent is immediate from the fact that V ⊗ V* is isomorphic to its dual, the Krull-Schmidt Theorem, Lemma 3.1.1, and the isomorphisms Hom_C(W, 1) ≅ Hom_C(1, W*). We now show that the first and second conditions are equivalent. Assume j = j'. Let f ∈ End_C(V ⊗ V*). By linearity we may assume without loss that f = e_r f e_s for some r, s ∈ I. If either r or s is not equal to j, it then follows that both f coev_V = 0 and f* ~coev_{V*} = 0. Hence the ambidextrous condition is trivially satisfied in this case. Now assume r = s = j and consider f' := p_j f i_j ∈ End_C(W_j). Since W_j is absolutely indecomposable, we may write f' = α Id_{W_j} + r for some α ∈ k and r ∈ Rad(End_C(W_j)). By Lemma 3.1.2 we then have
f' (p_j coev_V) = α p_j coev_V.
Applying i_j to both sides and simplifying yields
f coev_V = α coev_V.
On the other hand, let us consider ~ev_V f. Using Lemma 3.1.2 and arguing as above we obtain
~ev_V f = α ~ev_V.
Combining this with our calculation of f coev_V, together with (3.1.1) and (3.1.2), we have that V is right ambidextrous.
On the other hand, say V is right ambidextrous. Then e_j coev_V = (φ_V^{-1} ⊗ Id_{V*}) e_j* ~coev_{V*}. By the choice of j we have that e_j coev_V ≠ 0 and so e_j* ~coev_{V*} = (φ_V ⊗ Id_{V*}) e_j coev_V ≠ 0. Dualizing and using (3.1.2), this then implies ~ev_V e_j ≠ 0. However, by Lemma 3.1.1 this implies j = j'.
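As a sanity check of Theorem 3.1.3 in the simplest setting: if C happens to be semisimple, so that indecomposable objects are simple, then Hom_C(1, V ⊗ V*) ≠ 0 forces W_j ≅ 1 and likewise Hom_C(W_{j'}, 1) ≠ 0 forces W_{j'} ≅ 1, whence
W_j ≅ 1 ≅ W_{j'} and j = j'.
Thus every absolutely simple object is right ambi in the semisimple case, as expected from the classical theory of categorical traces.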
We note that identical arguments also prove the analogous statement for left ambi objects. It is also useful to note that the notion of right ambi is local in the sense that it only depends on End_C(V ⊗ V*). For example, let C be a pivotal k-category and let F be a full pivotal subcategory of C (that is, the pivotal structure on F is inherited from C). Let V be an absolutely simple object of F. Then V is right ambi in C if and only if it is right ambi in F.
3.2. The Ideal Proj. We now assume C is an abelian pivotal k-category. By [3, Proposition 2.1.8] the tensor product in a pivotal category is exact in both entries and, hence, the full subcategory of C consisting of the projective objects forms an ideal. Let Proj denote this ideal. Furthermore, since C is a pivotal category it follows that if P is a projective object, then P* is again a projective object (e.g. by [16, Proposition 2.3]). That is, projective and injective objects coincide in C. Finally, we note that if V is a projective object with ev_V : V ⊗ V* → 1 an epimorphism, then I_V = Proj by [24, Lemma 12].
Let us now assume that C has enough projectives. Each absolutely simple object then has a projective cover and, since projectives are also injective, this projective cover has a unique absolutely simple subobject. In particular, if we let P_1 denote the projective cover of 1, then we write L for the unique absolutely simple subobject of P_1. Following [15], we call C unimodular if the socle L of P_1 is isomorphic to 1; that is, if P_1 ≅ P_1*. We recall the full set of assumptions currently in use: We assume k is a field and C is an abelian pivotal k-category. We further assume all indecomposable objects are absolutely indecomposable and that the radical of the endomorphism ring of an absolutely indecomposable object consists of only nilpotent elements. Finally, we assume C has enough projectives.
Corollary 3.2.1. Let C be a category which satisfies the above assumptions. If C contains absolutely simple projective objects, then every absolutely simple projective object is right ambidextrous if and only if C is unimodular. Therefore, if C is unimodular and has an absolutely simple projective object, L, such that ev_L is an epimorphism, then Proj admits a unique non-zero right trace.
Proof. If L is an absolutely simple projective object, then L ⊗ L* decomposes into a direct sum of indecomposable projective objects. By Lemma 3.1.1 we have Hom_C(1, W_j) ≠ 0 and Hom_C(W_{j'}, 1) ≠ 0; since projective and injective objects coincide, it follows that W_{j'} is the projective cover of 1 and W_j is the injective hull of 1, that is, W_{j'} ≅ P_1 and W_j ≅ P_1*. By Theorem 3.1.3, L is right ambidextrous if and only if j = j', which holds if and only if P_1 ≅ P_1*; that is, if and only if C is unimodular. The final statement then follows from Theorem 2.5.1 and the fact that I_L = Proj whenever ev_L is an epimorphism.
We note that in the following examples the categorical trace on the ideal Proj is known to be trivial. Thus the nontrivial right trace given in the examples below cannot be the categorical trace. We also remark that the categories in Sections 4.3, 4.4, 4.6, 4.8, and 4.9 are all ribbon categories. As remarked earlier, this implies the right trace on Proj is in fact a two-sided trace.
4.2. Finite-dimensional Hopf algebras. Fix a ground field k and let H be a finite-dimensional Hopf algebra over k. Let C be H-mod, the category of finite-dimensional H-modules. The counit, coproduct, and antipode define a unit object, a tensor product, and a duality on C. If we write S for the antipode of H, then by [7, Proposition 2.1] the category C is a pivotal category if and only if there is a group-like element g ∈ H such that S^2(x) = gxg^{-1} for all x ∈ H. That is, C is a pivotal category if and only if S^2 is an inner automorphism.
Recall that H is called unimodular if the spaces of left and right integrals coincide (cf. [39, Section 2.1]). The following result of Lorenz [37, Lemma 2.5] shows that this definition agrees with the categorical notion given earlier.
Lemma 4.2.1. If H is a finite-dimensional Hopf algebra over a field k, then the category of finite-dimensional H-modules is unimodular if and only if H is unimodular.
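For example, for a group algebra the integrals can be written down directly (the standard computation): in kG the element
Λ = Σ_{g∈G} g
satisfies xΛ = Λx = ε(x)Λ for all x ∈ kG, so Λ spans both the space of left integrals and the space of right integrals. Hence kG is unimodular, matching Lemma 4.2.1 and the discussion of kG-mod below.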
On the other hand, by Oberst and Schneider [42] (see Lorenz [37, Proposition 2.5]) a Hopf algebra H is unimodular and S 2 is an inner automorphism if and only if H is a symmetric algebra.
4.3. Factorizable ribbon Hopf algebras in characteristic zero.
Let H be a finite-dimensional, factorizable, ribbon Hopf algebra over an algebraically closed field of characteristic zero. Such Hopf algebras appear frequently in the literature. For example, if H is the Drinfeld double of a finite-dimensional Hopf algebra, then it is well known to always be factorizable. Furthermore, by Kauffman and Radford [30] it is also known precisely when the Drinfeld double is a ribbon Hopf algebra.
Theorem 4.3.1. Let C = H-mod. Then the ideal Proj in C admits a unique nontrivial right trace.
Proof. Since H is a ribbon Hopf algebra, C is a ribbon category. Since H is assumed to be factorizable it follows that C is unimodular, see for example [15, Proposition 4.5]. By work of Cohen and Westreich [12] the category C always has at least one simple projective object. The result now follows from Corollary 3.2.1.
4.4. Group algebras and their quantum doubles. Now let us consider a fixed algebraically closed field k of characteristic p ≥ 0 and a finite group G. Then the group algebra kG is a finite-dimensional Hopf algebra. We also consider its quantum (or Drinfeld) double D(G) = D(kG). Although the braiding on kG-mod is symmetric, the braiding on D(G)-mod is generally not symmetric and even over the complex numbers one obtains nontrivial topological invariants [44].
Let C denote the category of finite-dimensional kG-modules or D(G)-modules, as the case may be. For both kG and D(G) the category C is an abelian pivotal k-category. It is a classical fact that kG-mod is unimodular (e.g. see [1,Theorem 6]). On the other hand, combining the work of Radford [41, Corollary 2 and Theorem 4] and Farnsteiner [17,Proposition 2.3] it is known that D(G)-mod is always unimodular.
If p = 0, or p > 0 and coprime to the order of G, then both kG-mod and D(G)-mod are semisimple categories. In this case the only nonzero ideal is C itself and the only nontrivial trace is the ordinary categorical trace. Therefore in what follows we assume p > 0 and that p divides the order of G.
In [48, Section 1] Witherspoon proved that kG-mod can be identified with a full subcategory of D(G)-mod and that projective objects in kG-mod remain projective in D(G)-mod under this identification. Combining this with the above discussion, we see that whenever kG-mod has a simple projective module, Corollary 3.2.1 implies that both kG-mod and D(G)-mod admit a nontrivial trace on the ideal Proj.
To proceed we need a basic fact from the modular representation theory of finite groups. By, for example, [6, Corollary 6.3.4], the existence of a simple projective kG-module is equivalent to kG-mod having a block of defect zero. The question of which finite groups have a block of defect zero goes back to Brauer and was settled for finite simple groups through the work of various people, with the final case of the alternating groups handled by Granville and Ono [26]. Summarizing this work (see [26] for further details): if G is a finite simple group and p ≥ 5, then kG has a block of defect zero; for p = 2 and p = 3 the exceptions are known explicitly and, in the alternating group case, are described in terms of integers of the form n = r or n = 2r, where r is square-free and divisible by some prime q ≡ 2 mod 3. Whenever G and p avoid these exceptions, kG has a block of defect zero.
We remark that [26] also shows that the symmetric group on n letters has a block of defect zero for all p ≥ 5. Applying this to our setting we obtain the following: for such G and p, the categories kG-mod and D(G)-mod admit a unique nontrivial right trace on the ideal Proj.
4.5. Irrational conformal field theories. The investigation of conformal field theories leads to the study of pivotal categories. It was shown by Huang that the representation category of a rational conformal vertex algebra is, in our language, a semisimple ribbon category with finitely many simple objects (see [18, Proposition 2] and references therein). However, many problems in mathematics and physics naturally lead to the study of irrational conformal field theories, which need not be semisimple. As an example, consider the logarithmic tensor category theory developed by Huang, Lepowsky, and Zhang (see [27] and its seven sequels). See also [18] for a discussion of the non-semisimple case. The following theorem suggests ambidextrous objects could be helpful in the study of irrational conformal field theories.
Let C denote the category W (p)-mod defined in [18,Section 6] and associated to the so-called (1, p) minimal model. Using the description of C given in [18,40], we see that C is a pivotal category which contains a simple projective object and is unimodular. Consequently we have the following theorem.
Theorem 4.5.1. Let C = W (p)-mod be the category associated to the (1, p) minimal model in [18]. Then the ideal Proj in C admits a unique nontrivial right trace.
4.6. Lie algebras in positive characteristic. Let k be an algebraically closed field of characteristic p > 2 and let g be a restricted Lie algebra; that is, a Lie algebra defined over the field k with an extra map x ↦ x^{[p]} giving g a restricted structure. For example, if G is a reductive algebraic group defined over k, then g = Lie(G) is a restricted Lie algebra. Note that for every x ∈ g the element x^{[p]} − x^p is necessarily central in the enveloping algebra U(g). Let u(g) denote the restricted enveloping algebra of g, defined as the quotient of U(g) by the ideal generated by all elements x^{[p]} − x^p for x ∈ g. Then u(g) is a finite-dimensional algebra which inherits a Hopf algebra structure from U(g). Let F_0 denote the category of finite-dimensional u(g)-modules.
Let Z denote the subalgebra of U(g) generated by the elements x^{[p]} − x^p for x ∈ g. Let F be the category of all finite-dimensional U(g)-modules on which the elements of Z act semisimply. Then F decomposes into blocks according to the algebra homomorphisms Z → k. In particular, F_0 is precisely the principal block of F in this decomposition; that is, the full subcategory of all modules annihilated by x^{[p]} − x^p for all x ∈ g. The main result we need is due to Larson and Sweedler [36, Corollary to Proposition 8] (see also [28] for an alternate proof).
Theorem 4.6.1. If g is a restricted Lie algebra, then u(g) is unimodular if and only if the trace of ad(x) is zero for all x ∈ g, where ad denotes the adjoint representation. In particular, if g = Lie(G) for some reductive algebraic group G, then u(g) is unimodular.
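As a concrete check of the trace criterion in Theorem 4.6.1, a routine computation not taken from the text: for g = sl_2(k) with standard basis e, h, f one has
tr(ad e) = tr(ad f) = 0 (both are nilpotent), tr(ad h) = 2 + 0 + (-2) = 0,
and hence tr(ad x) = 0 for all x ∈ g by linearity, so u(sl_2(k)) is unimodular.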
We then have the following theorem.
Theorem 4.6.2. Suppose g = Lie(G), where G is a semisimple, simply connected algebraic group over k. Then the Steinberg module St is a simple projective object of F_0 and, consequently, the ideal Proj in F_0 admits a unique nontrivial right trace.
We also note that if g-mod denotes the category of all finite-dimensional g-modules, then it is seen using results of [5] that g-mod has no projective objects. However, as F_0 is a full subcategory of g-mod, the Steinberg module still provides an ambi object in g-mod and so defines a trace on the ideal it generates.
The above theorem generalizes the explicit calculations done for g = sl_2(k) in [20]. It is worth emphasizing that those calculations also show that there are simple modules for sl_2(k) which are not ambidextrous.
4.7. Quantum groups at a root of unity. In this subsection let g be a semisimple complex Lie algebra. Fix an odd integer l > 1 which is coprime to three if g contains a component of type G_2. Fix ζ ∈ C a primitive lth root of unity. Let U_ζ(g) denote the restricted quantum group associated to g, obtained by specializing the Lusztig form to the root of unity ζ, and let Ũ_ζ(g) denote the non-restricted quantum group associated to g, obtained by specializing the De Concini-Kac form. Let u_ζ(g) denote the finite-dimensional Hopf algebra commonly known as the small quantum group. All three algebras are defined, for example, in [11].
Let F denote the category of all Type 1 finite-dimensional U ζ (g)-modules. By Type 1 we mean that each central element K l i acts by 1. There is no loss in assuming our modules are of Type 1 as the general case can easily be deduced from this one. As we only consider representations in F , we replace U ζ (g) with its quotient by the ideal generated by K l i − 1. Having done so, u ζ (g) appears as the Hopf subalgebra of U ζ (g) generated by E i , F i , K ±1 i . Thus we have a restriction functor from F to u ζ (g)-mod, the category of finite-dimensional u ζ (g)-modules.
On the other hand, Ũ_ζ(g) contains a large central Hopf subalgebra Z generated by E_i^l, F_i^l, and K_i^l. Let D denote the category of all finite-dimensional Ũ_ζ(g)-modules on which the elements K_i^{±1} act semisimply and the subalgebra Z also acts semisimply. Since Z acts semisimply, the category D = ⊕_χ D_χ, where the direct sum runs over all algebra homomorphisms χ : Z → C, and where D_χ is by definition the full subcategory of D of all modules annihilated by x − χ(x) for all x ∈ Z.
In particular, define χ_0 by χ_0(E_i^l) = χ_0(F_i^l) = 0 and χ_0(K_i^l) = 1. Then, setting K equal to the ideal of Ũ_ζ(g) generated by the kernel of χ_0, we have Ũ_ζ(g)/K ≅ u_ζ(g), where the isomorphism is one of Hopf algebras. Hence there is an inflation functor from u_ζ(g)-mod to D which allows us to identify u_ζ(g)-mod with D_{χ_0}.
Theorem 4.7.1. The ideal Proj in each of the categories u_ζ(g)-mod, F, and D admits a unique nontrivial right trace.
Proof. We first consider u_ζ(g)-mod. By [35, Theorem 2.2] u_ζ(g) is unimodular, so the category u_ζ(g)-mod is unimodular. By [35, Proposition 4.1] u_ζ(g) has a simple projective object, St, called the Steinberg representation. Thus by Corollary 3.2.1 the module St is ambidextrous and defines a unique nontrivial trace on Proj in u_ζ(g)-mod.
We now consider the restricted quantum group U ζ (g). By [2, Theorem 9.8] the Steinberg module St is in fact a simple projective object for U ζ (g). Since upon restriction to u ζ (g) the module St is ambidextrous, it follows that St is ambidextrous in F , as well. In particular, this implies Proj in F admits a unique nontrivial trace.
Finally, let us consider the non-restricted quantum group Ũ_ζ(g). Via the inflation functor, St defines a simple projective Ũ_ζ(g)-module in D_{χ_0}, hence in D. Thus Proj in D admits a unique nontrivial trace.
Let us point out that this example has particular importance in low-dimensional topology. The first and third authors showed in [21], by explicit calculations for certain "typical" modules, that the non-restricted quantum group Ũ_ζ(g) admits a nontrivial trace on the ideal Proj in D. As discussed in the introduction, this provides a number of results in the theory of 3-manifold invariants. It is also worth noting that their calculations and [21, Theorem 35] provided the original motivation for our Theorem 3.1.3.

4.8. Complex Lie superalgebras. Let g = g_0 ⊕ g_1 be a finite-dimensional classical simple Lie superalgebra defined over the complex numbers. By classical we mean that g_0 is reductive as a Lie algebra. That is, g is a simple Lie superalgebra in the Kac classification [31] of type ABCD, P, Q, D(2, 1; α), F(4), or G(3). In fact, what follows works equally well for their non-simple variants: gl(m|n), p(n), q(n), etc.
We fix a Cartan subalgebra h and a Borel subalgebra b of g such that h_0 and b_0 define Cartan and Borel subalgebras of g_0, respectively. Let F be the category of all finite-dimensional g-supermodules which are completely reducible as g_0-modules, with all g-supermodule homomorphisms which preserve the Z_2-grading. In particular, F is an abelian ribbon category which contains enough projectives, but nearly always fails to be semisimple. The simple objects of F are classified up to parity change by their highest weight, and we write L(λ) for the simple supermodule of highest weight λ ∈ h_0^*. Note that if g is a simple basic classical Lie superalgebra (i.e. of type ABCD, D(2, 1; α), F(4), or G(3) in the Kac classification), then the typical simple supermodules are simple and projective [32]. If g is a Lie superalgebra of type Q, then it again is known to have typical representations which are simple and projective [10, Lemma 4.51].
We call g unimodular if δ := Λ^{dim g_1}(g_1) is the trivial g_0-module. This terminology is justified by the following lemma. In what follows we abuse notation by writing δ ∈ h_0^* for the weight of δ.
Lemma 4.8.1. Let g be a classical Lie superalgebra and let δ = Λ^{dim g_1}(g_1). Let P_0 be the projective cover in F of the trivial supermodule. Then the socle of P_0 is isomorphic to L(δ). In particular, F is unimodular if and only if g is unimodular.

Note that the proof of the lemma is an adaptation of an argument from [38]. In that paper they show quite a bit more than we need here; namely, they prove that most blocks of F (and of parabolic category O) are symmetric categories. In particular, their results answer many cases of a question raised in [9].
The statement on unimodularity now follows as an immediate corollary.
The importance of δ was already observed in [25] where it was proven that all simple classical Lie superalgebras are unimodular. We now come to the main theorem of the section.
Theorem 4.8.2. Let g be a simple classical Lie superalgebra not of type P. The ideal Proj in the category F admits a nonzero trace.
Proof. As mentioned above, by [25, Section 3.4.3] we have that g is unimodular for any simple classical Lie superalgebra, and so by the previous lemma F is unimodular. As discussed above, the so-called typical simple supermodules for the simple basic classical Lie superalgebras and for the type Q Lie superalgebras are simple and projective. The result follows.
We remark that in the cases of type A and C the above result was first proven in [22]. There the authors give an explicit formula for the trace in terms of supercharacters. They also use deformation arguments to obtain ambidextrous objects for the Drinfeld-Jimbo quantum group over C[[h]] associated to g which, in turn, can be used to define link invariants.

4.9. Lie superalgebras in positive characteristic. Let k be an algebraically closed field of characteristic p and let g be a restricted Lie superalgebra; that is, a Lie superalgebra defined over the field k such that g_0 is a restricted Lie algebra and g_1 is a restricted g_0-module via the adjoint action. In the following theorem we assume g is one of the following restricted Lie superalgebras: gl(m|n), q(n), or a simple Lie superalgebra of type ABCD, D(2, 1; α), G(3), or F(4). We assume that the characteristic of the field k is an odd prime and, in addition, greater than three if g is of type D(2, 1; α) or G(3). Let F be the category of finite-dimensional g-supermodules on which the central elements x^p − x^{[p]} (x ∈ g_0) act semisimply. We take as morphisms the Z_2-grading-preserving g-module homomorphisms.

Proof. If g is q(n), then by [46, Proposition 2.1] it follows that the projective cover of the trivial module is self-dual. In the other cases it follows by [47, Proposition 2.7]. Thus F is unimodular. If g is q(n), then by [46, Theorem 3.10] we have projective simple objects in F. In the other cases it follows by [49, Theorem 4.7]. | 2011-12-20T19:32:43.000Z | 2011-06-22T00:00:00.000 | {
"year": 2011,
"sha1": "49ab7ff1325af7ffe52487b259ad4bdcb3bf5223",
"oa_license": null,
"oa_url": "https://www.ams.org/proc/2013-141-09/S0002-9939-2013-11563-7/S0002-9939-2013-11563-7.pdf",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "98d00bbca174fba53b6c4807861b4d83948d6c39",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
143693231 | pes2o/s2orc | v3-fos-license | Tangut (Xi Xia) Studies in the Soviet Union: Quinta Essentia of Russian Oriental Studies
Thanks to the famous discovery of Piotr Kozlov's expedition, a very rich collection of various Tangut books was found in 1908 in a mausoleum in the dead city of Khara-Khoto, and almost all the texts in the Tangut language were then assembled in Saint-Petersburg. Because of this, Russian Tangutology very quickly became one of the most important in the world, and Russian specialists, especially Alexej Ivanov, took the first steps towards understanding the Tangut language and history, which had for a very long time been hidden from humanity. This tradition persisted in the Soviet Union. Nikolaj Nevskij returned to Russia in 1929 from Japan, where he had stayed after 1917, mainly to continue his Tangut research. But in 1937, during Stalin's Purge, he was arrested and executed, as was Ivanov. The line of tradition was broken for almost twenty years, and only the 1960s saw the rebirth of Russian Tangutology. The post-war generation did gigantic work, raising Tangut Studies to a new level. Unfortunately, they had almost no students or successors. The dramatic history of Tangut Studies in Russia can be viewed as a true quinta essentia of the fate of Oriental Studies in Russia - only in this example all the changes and tendencies are much more clearly visible.
Introduction
The history of Tangut Studies is not very well known even among Orientalists, and that is very much a pity - it seems that even now it is a domain which has preserved many specific features of so-called classical Orientalism that have been lost by other lines of Oriental Studies. Even now Tangut Studies are very much like Orientalism a hundred years ago: the field is populated by several dozen researchers who in most cases know their colleagues personally; the general bibliography of Tangut Studies consists of four or five hundred items, so, unlike in other fields, a researcher can easily follow all new publications in the domain. Russian Tangutology has one more very important mark - its history, despite the relatively small number of people involved, is extremely dramatic and rich in strong characters and tragic destinies. That is why I am sure that it is a history worth knowing.
A History of Russian Tangutology
The name Tangut was first mentioned in a runic funerary inscription of Bilge Khagan, a ruler of the East Turkic Khaganate, dated AD 735. It is clear that already at that time this people, who spoke one of the languages of the Qiangic subgroup of the Tibeto-Burman group, lived to the north-west of the Tibetan plateau and on the territory of the modern Chinese province of Gansu. In the second part of the 7th century the Tangut leaders were strong enough to be considered among the strongest and most dangerous vassals of the Chinese empire: they received the prestigious right to bear the family names of the Chinese emperors of the Tang and, respectively, the Song dynasties. At the end of the 10th century the Tanguts, semi-dependent on the Song empire, established a completely independent state, mentioned in Chinese sources as the Western Xia (Xi Xia 西夏) (the Tangut name of their country was most likely The State of High Whiteness (or The State of White and Lofty) (Phôn mbın lhi̯ə 1)). Successful campaigns against their neighbours (Kitans, Uighurs, Chinese, Tibetans) gave the Tanguts the opportunity to create a relatively big and self-sufficient state. In 1038 the Tangut ruler, known by his Chinese name Yuanhao 元昊, proclaimed his state an empire, and himself an emperor.2 One of the important signs of independence for the states of the Pax Sinica at that time was the invention of a new writing - e.g. the Kitans and the Jurchens also invented their own scripts.
As far as we know, it was a purely political decision - it seems that a large part of the Tanguts spoke and read Chinese, and many others also used the Tibetan alphabet - and after the introduction of the new writing they very likely continued to use Chinese or Tibetan in everyday life. The intellectual elite of the Tangut state was almost certainly bilingual (or even trilingual, including Tibetan),3 and all parts of Tangut culture and life were formed under a strong influence of Chinese culture. So, according to Yuanhao's order, Tangut writing was developed in 1036-1038 by philologists under the guidance of an emperor's counsellor and relative, Yeli Renrong 野利仁榮, mentioned in Tangut sources as the Great Tutor I-ri̯ẹ.4 It is clear that the philologists of the workgroup were under a deep influence of Chinese science and traditions; part of the group could even have been Chinese. In spite of this, Tangut writing was created not as a servile follower of Chinese writing, but as its strong rival - with the use of all the finely elaborated instruments of the Chinese philology and lexicology of that time, the Tangut scholars were able to create a very well elaborated and considered logographic writing system, based on original and very non-Chinese fundamental ideas.5 Of course, this was not mandatory - but a degree of originality in the national writing was a symbol of the real independence gained by the Tangut state.

1 For the pronunciation of Tangut characters we use the system elaborated by Mihail V. Sofronov (see Mihail V. Sofronov, Grammatika tangutskogo jazyka (A Tangut language grammar), Moscow: Nauka, Glavnoe izdatel'stvo vostochnoj literatury, 1968, vol. I, 69-144).
2 For more details, see Evgenij I. Kychanov, Ocherk istorii tangutskogo gosudarstva (A sketch of a history of the Tangut state), Moscow: Nauka, Glavnoe izdatel'stvo vostochnoj literatury, 1968; Li Fanwen (ed.), Xi Xia tongshi (General history of Western Xia), Yinchuan: Ningxia renmin chubanshe, 2005.
3 According to the texts, a big part of the population of the Tangut state was Chinese; they were numerous among the bureaucracy of all ranks too.
In 1227 the Tangut state was destroyed by the Mongol hordes of Genghis Khan. The Tanguts themselves, however, survived and managed to contribute to the formation of the greatest empire of the Old World - the Mongol empire. Among the dignitaries of the Yuan dynasty (1260-1368),6 whose biographies can be seen in the official annals Yuan shi 元史 ("History of Yuan"), we can easily find many officials of Tangut origin. It seems that the vanquished Tanguts were successful intermediaries between the Mongols and Tibetans and helped greatly with the spreading of Buddhism among the Mongol nobles. An interesting model of interaction between an emperor and a State Teacher - the head of the Buddhist

4 To picture the multicultural and polyethnic nature of the Tangut state, let me add that I-ri̯ẹ (Yeli) was very probably a Tangut transcription of Yelü 耶律 - the family name of the ruling dynasty of the Kitan empire Liao 遼 (915-1125). We do not know whether the creator of Tangut writing was Kitan by culture and language, or whether his family had already been assimilated by the Tanguts. For more about Tangut writing, see Nikolaj A. Nevskij, Tangutskaja pismennost' i ejo fondy (Tangut writing and its corpus), in Nikolaj A. Nevskij, Tangutskaja filologia, Vol. I, 74-94.
5 Fortunately, we have dictionaries (especially the etymological "Sea of characters" (•i̯wə ngôn); for the Chinese edition see Shi Jinbo et al. (eds.), Wenhai yanjiu (Study of the "Sea of characters"), Beijing: Zhongguo shehui kexue chubanshe, 1983; on the Russian translation see above) and possess a very detailed explanation of the basic methodology of Tangut writing, proposed by the creators of this writing themselves. And it is very clear that the Tangut script is one of the most interesting and charming writing systems - a logographic writing that did not emerge and transform over centuries (even thousands of years) following constantly changing systems of transitional rules (the greater part of which were never explained or even written down). On the contrary, the Tangut writing system was invented by several intellectuals in a very short amount of time. That work was based on clear logic and a strong ambition to make it not à la chinoise.
sangha of the empire - was very likely borrowed by the Mongols from the Tangut state.7 When Mongol rule collapsed, the situation for the Tanguts turned bad - the land of the Tanguts became a part of China after some cruel campaigns led by the Chinese Ming dynasty (1368-1644). The Ming emperors did their best to prevent a new invasion of the Mongols into China, and one of their policies was the suppression of the ethnic and religious groups that had been Mongol allies at the time of Yuan rule. The Tangut regions were considered suspicious - the border with the steppe was close, and the Tanguts were clearly among the "collaborators". So, the Ming armies devastated the Tangut cities, and the Tanguts were decimated and scattered. The Tangut language and writing may have been maintained in some monasteries for several generations (perhaps up to the 16th century), but its extinction was inevitable. That was the end of the Tangut civilisation - the people were dispersed and assimilated by the Chinese, Mongols or Tibetans,8 and the cities were burned and repopulated by Chinese or turned into desert.
However, it was not the end: habent sua fata libelli.
As already mentioned, the Tanguts were known under this name very early, already in the 8th century. But it was not their own name - they called themselves Mi-ndzi̯wo, where the second character means "man", and the first one, Mi, is a logogram specially elaborated to denote both the Tangut language and the Tangut people.9 So, "Tangut" is a Turkic word, adopted very early by the Mongols, who continued to use it even after the destruction of the Tangut state and the fading of the Tangut people and their culture. We know that 19th-century Mongols used the word "Tanguts" for the tribes of warlike nomads in north-eastern Tibet (the region of Kukunor, north of the historical region of Amdo). Some of these nomads were Tibetans (like the Goloks, for example), but many were Mongols who had been assimilated by Tibetans.
The word "Tangut" was also adopted relatively early by the Russians - in the first period of their penetration through Siberia they acquired information about China and its neighbours mainly via the Mongols. In the 17th and 18th centuries, in Russian (and in the scientific Latin of Russian scholars), the word "Tangut" meant "Tibet".10 At the same time this word became relatively popular among Western botanists and was used in creating the scientific names of some plants (for example, Rheum tanguticum, Aconitum tanguticum, Clematis tangutica and many others); we strongly suspect that in most cases the reverend scientists were confused by their Russian colleagues and were thinking rather of Tibet.

8 In the border area of Sichuan, Yunnan and the Tibet autonomous region some Qiangic languages still exist; many of them (especially those of the rGyalrongic branch) seem to be very close to Tangut. See James A. Matisoff, "Brightening" and the place of Xixia (Tangut) in the Qiangic branch of Tibeto-Burman, in Lin Yin-chin et al. (eds.), Studies in Sino-Tibetan Languages: Papers in Honor of Professor Hwang-cherng Gong on His Seventieth Birthday, Language and Linguistics Monograph Series W-4, Taipei: Institute of Linguistics, Academia Sinica, 2004, 327-352; Xiang Bolin (Guillaume Jacques), Jiarong yu yanjiu (Study of rGyalrong languages), Beijing: Minzu chubanshe, 2008.
Later this mistake was corrected: in an important book by the famous Russian explorer Nikolaj Przhevalsky (Николай Михайлович Пржевальский) (1839-1888) called Mongolia and the Land of the Tanguts (1875-76), "Tangut" meant (as in Mongolian) the nomad population of north-eastern Tibet. But, of course, it did not refer to the "real" historical Tanguts, who by that time seemed completely forgotten by the entire world.
If we talk about the world, though, by the end of the 19th century some Tangut texts had already been discovered and examined by Western scholars. In 1870 an English missionary and colleague of James Legge, Alexander Wylie (1870), published an article about an inscription in six languages on the gates of a fortress named Jüyongguan 居庸關 near Beijing. The inscription was dated to 1345 and consisted of six practically identical texts in Sanskrit, Tibetan, Mongolian (in the so-called square script), Uighur, Tangut and Chinese. Wylie was absolutely right in his suggestion that one part was the same in all the texts: it was the transcription of a Sanskrit dhāraṇī. However, he was wrong about the Tangut part - he decided that it was Jurchen writing. In 1882 a French diplomat, historian and linguist, Gabriel Devéria (1882), published an article about the Jurchen inscription on a stele from Yantai 宴臺. In that paper Gabriel Devéria remarked that the inscription on the gates of Jüyongguan was definitely not in Jurchen, but could be, he supposed, in the Xi Xia (Tangut) writing. In 1895, under the supervision of Prince Roland Bonaparte (1895), all the texts of the Jüyongguan gates were published - the dhāraṇī part of the Tangut text was analyzed by the great French Sinologist Édouard Chavannes. Even in that edition, the Tangut origin of the text was only a very uncertain assumption. Only in 1898 was the problem finally solved - Devéria (1898) published a bilingual inscription from the Dayunsi temple 大雲寺 in Liangzhou, and one part of that inscription was in the same writing as the unrecognized text at Jüyongguan. The Chinese part of the text said that the inscription was made in 1094 in the writing of the Xi Xia state. The question was thus solved, but the text remained undeciphered and unread - the available inscriptions were too small to become a "Rosetta stone".
In 1904 a translator of the French embassy in Beijing, M. G. Morisse (1904), published his study of a Tangut text of the Lotus sutra (Saddharma Puṇḍarīka Sūtra). Morisse had bought this text near the White Pagoda temple (白塔寺), which had been ravaged by the Boxers - one of the few buildings from the Yuan dynasty remaining in Beijing. The text had previously been studied by an unknown Chinese scholar - many Tangut characters were provided with handwritten notes and Chinese translations. Thanks to this unknown Tangutologist, Morisse could make many important remarks about the structure of the Tangut phrase; he deciphered some characters and discovered the pronunciation of some others (mainly special characters elaborated for the phonetic transcription of Sanskrit or Chinese words). He also very reasonably assumed that Tangut must be a language of the Tibetan group. It was a big achievement, but further progress seemed almost impossible - the Buddhist texts did not provide any relevant material for deciphering the language and writing, and, what was much sadder, the entire corpus of texts in the Tangut language seemed to be limited to these few items. In short, scientists had no instrument for decipherment and almost nothing to decipher.
But Providence was benevolent to the Tanguts. In 1907 Piotr K. Kozlov (Пётр Кузьмич Козлов) (1863-1935), apprentice and follower of Przhevalsky, organized a new expedition, named the Mongol-Sichuan expedition, to explore the western borderlands of China. One of its targets was the ruins of the abandoned city of Khara-Khoto.11 During excavations in this city in 1908 and 1909 Kozlov found hundreds of paintings and statues (which are now in the Saint-Petersburg Hermitage museum12) and, above all, hundreds of Tangut texts of various kinds.13 And then the Tanguts could speak again - the desert had saved their words from the destructive forces of time.
Those findings were extremely valuable not only for Tangut culture, but also for Russian Tangut studies, which had not existed before. The main corpus of Tangut texts was transferred to the Asiatic Museum in Saint-Petersburg;14 naturally, Russian scholars were the first to have the possibility of studying this new and extremely challenging material. And we think that Kozlov's luck determined not only the boom of Tangutology (especially in Russia), but also the name of a new line of Oriental Studies. Before the findings at Khara-Khoto the term "Tangut" was rarely used by Western scholars - they preferred to call the newly discovered language and writing by the Chinese word "Xi Xia". The reason for the Chinese term is very clear - before Kozlov's findings, the overwhelming part of the sources about the Tanguts was in Chinese. The Mongolo-Turkic term "Tangut" was relatively popular only among Russian erudites, and we suppose it quite probable that the advantage gained by Russian scholars thanks to Kozlov's findings actually gave Tangutology its name.15

The results of the Khara-Khoto discoveries appeared very quickly. The young Sinologist and professor of Saint-Petersburg University Aleksej I. Ivanov (Алексей Иванович Иванов) (1878-1937) found among the newly arrived Tangut xylographs a Tangut-Chinese dictionary, Mi źạ ngwu ndzi̯e mbu pi̯ạ ngu ni̯e 番漢合時掌中珠 (Timely Pearl in the Palm of the Tangut and Chinese Languages), compiled in 1190 by a Tangut scholar called Kwəlde-ri̯ephu.16 That finally was a real Rosetta stone (the characters were not only translated, but also transcribed with phonetically equivalent Chinese characters). In 1909 Ivanov published some results of his study.17 This little article (full of mistakes, due to the haste of the author, who hurried to inform the world about the new possibilities of deciphering the Tangut language) was an important impulse for Tangut studies around the world. In 1916 an important study by Berthold Laufer (1874-1934), a famous American anthropologist of German origin, was published,18 with many correct conclusions about the Tangut language; in those years the first studies by Chinese Tangutologists were also published (among them Luo Fucheng 羅福成 (1885-1960), Luo Fuchang 羅福萇 (1895-1921) and their father, the famous philologist Luo Zhenyu 羅振玉 (1866-1940)19). At the same time Władysław Kotwicz (Владислав Людвигович Котвич) (1872-1944), a Russian and Polish Mongolist and Turcologist,20 began his work on the Tangut texts.

20 From 1900 Kotwicz headed a department of Mongol philology. He participated in many scientific expeditions to Kalmykia (1894, 1896, 1910, 1917) and Northern Mongolia (1912), where, among other things, he studied Turkic runic texts. After 1917 Kotwicz tried to organize in Petrograd an Institute of Living Oriental Languages, and from 1920 to 1922 he was its director. In 1923 he was elected professor and corresponding member of the Academy of Sciences. In 1923 he decided to return to Poland; from 1924 he was head of the department of Far Eastern philology at Lvov University, one of the finest intellectual centres of Poland.
In 1925 Ivanov was chief translator at the Soviet embassy in Beijing, but he tried to continue his Tangut studies. That year he was visited by the Japanologist Nikolaj A. Nevskij (Николай Александрович Невский) (1892-1937),21 who had been among Ivanov's students at the beginning of the 1910s. Nevskij had later lived in Japan - in 1915, after graduating from Saint-Petersburg University (where he had mainly been a student of the Sinologist Vasilij M. Alekseev (1881-1951)), he was sent to Japan for further studies. After the Russian revolution of 1917 he decided not to return, and worked in Japan - especially in the field of the language, religion, folklore and customs of the Japanese, Ainu and Ryukyuans. In 1927 he also traveled to Taiwan to study the language of the Zou 鄒, one of the indigenous peoples of the island.22 But we know that from 1923-1924 he began to study Tangut problems, which was extremely hard while he was so far from Russia.23 In 1925 Ivanov gave him some copies of the Tangut texts and dictionaries he had brought to Beijing - probably that was the defining moment, and Nevskij decidedly turned to Tangut studies. In 1926 in Osaka he published a study of the Tibetan transcriptions of Tangut words24 (Tibetan handwritten notes can be found in many Tangut texts - it seems that some Tanguts used the Tibetan alphabet in everyday life). But it was more and more clear that for profound study he had to be in Leningrad and work with the Tangut texts directly. Therefore, in the autumn of 1929 Nevskij returned to Leningrad. His wife Mantani Isoko 萬谷磯子 (1901-1937) and their daughter Elena managed to join him only in 1933. In the first years in Leningrad it seems that Nevskij's dreams came true. He worked in the Institute of Oriental Studies of the Academy of Sciences of the USSR (which in 1930 replaced the Asiatic Museum), in the University and in the State Hermitage. He studied ad libitum Tangut manuscripts and xylographs25 and published many papers on Tangut problems in the Soviet Union and China.26 In 1935 he became a doctor of science honoris causa. In the famous illustrated book "Den' mira" ("One Day of the World"), promoted by Maxim Gorky (1935) as a momentary snapshot of the entire world's life on one and the same day (27 September 1935 was chosen), we can find brief information about professor Nikolaj Nevskij, working on the decipherment of the Tangut language. Who could ask for more? But suddenly things changed. In the summer of 1937 the terrible period of the Great Purge started in the Soviet Union, when nearly one million persons were executed or died in concentration camps, condemned for various kinds of high treason against the Soviet state. Investigators of the People's Commissariat for Internal Affairs tried to impress their bosses, including Stalin, with the enormous plots of spies and traitors they revealed. The Institute of Oriental Studies, full of researchers with knowledge of many strange languages, many of whom had even been abroad, was too tempting a target. An underground group of Japanese spies, organised and coordinated by Nikolaj Nevskij, was "revealed" (even the Institute's director, the famous Turcologist academician Alexandr Samojlovich (Александр Николаевич Самойлович) (1880-1938), was arrested as a member of "Nevskij's group").27 Nevskij was arrested on the night of 3-4 October 1937;28 a few days later his wife was arrested too. They were condemned and executed on 24 November 1937. Aleksej I. Ivanov had been executed earlier, on 8 October, also as a Japanese spy. We do not even know the place where they are buried.29 Russian Tangutology was physically exterminated.30 Only after the death of Stalin in 1953 did the situation begin to change - but the line was broken. In 1955 the young Sinologist Evgenij I.
Kychanov (Евгений Иванович Кычанов) (1932-2013), who had just graduated from the Oriental faculty of Leningrad University, became a graduate student in the Oriental Manuscripts Sector of the Institute of Oriental Studies (from 1956 the Leningrad filial of the Institute) and intended to work in Tangut studies. He had no proper scientific adviser - among the specialists of the Institute there were no Tangutologists. At that time there were many rumours that Nevskij was alive and would return - like the many who returned from the prisons in the 1950s. "Don't worry, Nevskij will be your adviser," colleagues said to Kychanov. Kychanov and the others had to become Tangutologists by themselves. It was not easy.
Up to 1960 the Tangut fond was closed to everyone but Zoia I. Gorbacheva (Зоя Ивановна Горбачёва) (1907-1979), who was its keeper after Nevskij's death.31 This was mainly because of the fear that somebody could steal Nevskij's drafts and misappropriate his results, but also because of the absence of specialists who could work there.32 In 1960 a huge impact on Tangut Studies was made with the strong support of academician Nikolaj I. Konrad (Николай Иосифович Конрад) (1891-1970), a friend and schoolmate of Nevskij - he too was touched by the Purge, but survived. Konrad did everything possible to publish, in two volumes, a facsimile of the Tangut dictionary prepared by Nevskij.33 It was only a draft that Nevskij had written for personal use, unfinished and unpolished, but it was the first dictionary of its kind in the world;34 it contained almost all known Tangut characters, with translations, examples, and sometimes phonetics.35 The Tangut texts could be read at last. In 1962 Nevskij was posthumously awarded the Lenin Prize for these two volumes.36

This publication, just like Ivanov's article about the "Pearl in the Palm" in 1909, gave the strongest impulse to Tangut studies in the world, and of course in the Soviet Union too. The Tangut workgroup was formed in the Leningrad filial of the Institute of Oriental Studies at that time. Except for the famous Sinologist Vsevolod S. Kolokolov (Всеволод Сергеевич Колоколов) (1896-1979), all the other members were young: Evgenij Kychanov, Mikhail V. Sofronov (Михаил Викторович Софронов) (b. 1929), Ksenia B. Kepping (Ксения Борисовна Кеппинг) (1937-2002), Anatolij P. Terent'ev-Katanskij (Анатолий Павлович Терентьев-Катанский) (1934-1998).37 The quantity and quality of its members gave this group the opportunity to explore many important things in a very limited period of time.
In 1963 M. V. Sofronov and E. I. Kychanov published the "Studies on the Phonetics of the Tangut Language", where they offered the first attempt at a phonetic reconstruction of the Tangut language. In 1968 Evgenij I. Kychanov published "A Sketch of the History of the Tangut State" - the first detailed account of Tangut history in the world. The same year saw a two-volume Tangut grammar by M. V. Sofronov - also the first scientific grammar of this language and a very important study of the reconstruction of its phonetics. In 1969 Vsevolod S. Kolokolov, Ksenia B. Kepping, Evgenij I. Kychanov and Anatolij P. Terent'ev-Katanskij, as the result of very complicated work, published a translation of the Sea of Characters - an etymological dictionary that is fundamentally important for understanding the principles of Tangut writing.
It was the biggest workgroup in the history of Russian Tangut studies. Its impact, in team and individual projects alike, was abundant and very important. Thanks to them Tangut writing was finally well deciphered, and Tangut studies achieved a new level in the analysis of sources.38 After the 1960s the members of the group mostly worked apart, but still achieved very good results.39 Ksenia Kepping studied many important Chinese texts translated into Tangut, but was mostly occupied with linguistic questions.40 Anatolij P. Terent'ev-Katanskij published some pioneering books about the Tanguts.

37 In other countries Tangut studies also flourished greatly at that time, and the majority of the leading Tangutologists of the world are of the same age: Huang Zhenhua 黃振華 (1930-2003), Li Fanwen 李範文 (b. 1932) and Shi Jinbo 史金波 (b. 1940) in China; Nishida Tatsuo 西田龍雄 (1928-2012) in Japan; Gong Hwang-cherng (1934-2010) in Taiwan; Eric Grinstead (b. 1921) in New Zealand; James A. Matisoff (b. 1937) in California. The next generation of scholars, unfortunately, was much less numerous.
38 We must add that this was also the time when Tangut studies became popular again - mainly due to Nevskij's Lenin Prize. In 1963 a documentary ("Sem' vekov spustia" ("Seven Centuries Later")) about the decipherment of Tangut writing was made by Agasi Babajan (b. 1921); programmes about the Tanguts were broadcast on the radio.
39 We must add that Soviet Tangutologists usually enjoyed better opportunities to contact foreign colleagues or to go abroad - which was practically impossible for other Orientalists: Tangutology was one of the sought-after and reputable "brands" of Soviet science.
Conclusion
Russian Tangutology has passed through all the important stages inherent in so-called classical Orientalism. The first Russian Tangutologists were brave travelers and adventurers, actors of the Great Game, rather than armchair scientists. We can easily place Piotr Kozlov in the same line with Sir Marc Aurel Stein or Sven Hedin. At the next stage we saw among them some of the finest and most brilliant minds, equals of Paul Pelliot or Wang Guo-wei, keen geniuses capable of pushing their science forward even under the pressure of very hard circumstances.
In the years of the Great Purge, Russian Tangutology was a part of Russian Oriental Studies, a part of Russian history, and brilliant minds became martyrs. Progress was stopped for years by an executioner's bullet. In Tangut studies this terrible picture was only more obvious - due to the small number and extreme value of each person involved. The next generation, just as in Russian Sinology, was numerous and talented - but the gap between generations was even clearer than in Chinese studies: the previous generation had been physically exterminated. Young scholars had to teach themselves, and they became good heirs - we can only imagine what they could have achieved if their predecessors had still been alive.
Nowadays, as in other lines of Oriental sciences in Russia, we again see a gap between generations, and so far there is no solution to this problem. What will be the future of Tangut studies in Russia? Will it conserve its old-fashioned and familiar features, so rare in modern science? Is not this charming archaism in fact a great risk for this science? Will the Tangut writings fall silent again? We will see.
As a final remark I will try to explain why this paper was written. Of course, a history of anything is interesting and somehow useful, but why should someone who is not a Russian Tangutologist care about Russian Tangutology? Maybe this topic is far too specific? Maybe, but I do not think so. First of all, as I have said above, Tangutology is a very good and demonstrative image which can be used to understand the basic lines and fluctuations of the history of all Russian Orientology in the 20th century. Secondly, I suppose that every scientific tradition, especially in the humanities, absolutely must be known to new generations of researchers; without this knowledge about our scientific ancestors we cannot go further. And, last but not least, I think that in our time, when the biggest problem of a scholar is in most cases a low salary or trouble obtaining a new grant, it is very useful to remember those who literally gave their lives for the possibility of studying a new field - one that was very specific and not at all beneficial indeed. We owe them.
22 See Nikolaj A. Nevskij, Materialy po govoram cou (Materials on the dialects of the Zou language).
23 Vasilij M. Titianov, who was with Nikolaj A. Nevskij in a prison ward in 1937, remembered that Nevskij told him that his interest in the Tangut language had begun much earlier, just after Kozlov's expedition; he even said that one of the objectives of his journey to Japan in 1915 was to find a specialist there who could help him with the decipherment of Tangut writing (see Na steklah vechnosti…, 516).
24 Nikolay Nevsky, A brief manual of the Si-hia characters with Tibetan transcription, Research Review of the Osaka Asiatic Society, No. 4, March 15, 1926, Osaka: The Osaka Asiatic Society.
25 Before Nevskij's return the Tangut collections were a field of interest of the great Russian linguist Alexandr A. Dragunov (Александр Александрович Драгунов) (1900-1955) - one of the creators of the modern theory of Chinese grammar. He published some works about the Tangut funds (for example, see Alexandr A. Dragunov, A catalogue of Hsi-Hsia (Tangut) works in the Asiatic Museum of the Academy of Sciences, Leningrad, Bulletin of the National Library of Peiping, Vol. 4, May-June 1930 (issued in January 1932), No. 3, 367-388). But in 1930 he abandoned this work and went to Middle Asia to study the Dungan (回族) language. He returned to the Tangut collections only after the end of the World War II, but this time it was not his cup of tea, although he described 2720 units of issue of the Tangut collection (see Zinaida I. Gorbacheva and Evgenij I. Kychanov (comp.), Tangutskie rukopisi…, 12-17). | 2018-12-18T08:22:07.866Z | 2015-02-07T00:00:00.000 | {
"year": 2015,
"sha1": "354ff4f6272e8fbaa6b0d2ef058076199c4c7703",
"oa_license": "CCBY",
"oa_url": "https://www.mongoliajol.info/index.php/MJIA/article/download/412/433",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "354ff4f6272e8fbaa6b0d2ef058076199c4c7703",
"s2fieldsofstudy": [
"History"
],
"extfieldsofstudy": [
"Sociology"
]
} |
201714525 | pes2o/s2orc | v3-fos-license | Optimal HIV testing strategies for South Africa: a model-based evaluation of population-level impact and cost-effectiveness
Although many African countries have achieved high levels of HIV diagnosis, funding constraints have necessitated greater focus on more efficient testing approaches. We compared the impact and cost-effectiveness of several potential new testing strategies in South Africa, and assessed the prospects of achieving the UNAIDS target of 95% of HIV-positive adults diagnosed by 2030. We developed a mathematical model to evaluate the potential impact of home-based testing, mobile testing, assisted partner notification, testing in schools and workplaces, and testing of female sex workers (FSWs), men who have sex with men (MSM), family planning clinic attenders and partners of pregnant women. In the absence of new testing strategies, the diagnosed fraction is expected to increase from 90.6% in 2020 to 93.8% by 2030. Home-based testing combined with self-testing would have the greatest impact, increasing the fraction diagnosed to 96.5% by 2030, and would be highly cost-effective compared to currently funded HIV interventions, with a cost per life year saved (LYS) of $394. Testing in FSWs and assisted partner notification would be cost-saving; the cost per LYS would also be low in the case of testing MSM ($20/LYS) and self-testing by partners of pregnant women ($130/LYS).
Methods
This analysis is based on the MicroCOSM model, an agent-based model that was developed to simulate HIV and other STIs in South Africa 31. A detailed description of the model is provided elsewhere 32. Briefly, the model simulates a nationally-representative sample of individuals, with the initial simulated population size set to 20 000 at the start of the simulation (in 1985). Each simulated individual is randomly assigned a date of birth, sex and race, and individuals are tracked over time, with the model randomly assigning to each individual a series of demographic events (birth, death and migration between urban and rural areas), educational events (entering school and passing, failing or dropping out of school at the end of each year), relationship events (acquiring new partners, marrying partners, ending relationships and engaging in casual or commercial sex), health events (acquisition of HIV and other STIs) and healthcare utilization events (adoption or discontinuation of hormonal contraception, condoms, pre-exposure prophylaxis (PrEP), antiretroviral treatment (ART) and MMC). The model also simulates incarceration and unemployment. The model allows for heterogeneity between individuals in sexual preference, propensity for concurrent partnerships and commercial sex, and willingness to use condoms. Individuals who form new partnerships are assumed to select their partner from within the simulated population, with sexual mixing patterns being assumed to be highly assortative with respect to age, race, educational attainment, risk group and location. Model assumptions about HIV transmission probabilities per sex act are calibrated such that the model matches HIV prevalence data from three national household surveys (conducted in 2005, 2008 and 2012 [33-35]) and national antenatal surveys conducted over the 1997-2015 period 36, as well as surveys of HIV prevalence in sex workers and MSM. Although the model simulates both sexual and mother-to-child transmission of HIV, the model does not currently simulate HIV testing in children, and this analysis is therefore limited to testing in adults.
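To make the agent-based structure concrete, the following heavily simplified Python sketch (ours, not the MicroCOSM source; all class names, attributes and probabilities are invented placeholders) shows the general pattern of assigning attributes to individuals and applying stochastic annual events:

```python
# Minimal sketch (ours, not the MicroCOSM source) of the agent-based
# structure described above: individuals carry attributes, and each
# simulated year a set of stochastic events is applied to each agent.
import random

random.seed(1)

class Individual:
    def __init__(self, sex, birth_year):
        self.sex = sex                  # 'M' or 'F'
        self.birth_year = birth_year
        self.hiv_positive = False
        self.diagnosed = False
        self.on_art = False

def annual_update(person, p_infection=0.01, p_test=0.2, p_art=0.8):
    """Apply one year of (highly simplified) health and testing events.
    All probabilities here are illustrative placeholders."""
    if not person.hiv_positive and random.random() < p_infection:
        person.hiv_positive = True
    if person.hiv_positive and not person.diagnosed and random.random() < p_test:
        person.diagnosed = True
        if random.random() < p_art:
            person.on_art = True

# Simulate a small population over 2019-2039.
population = [Individual(random.choice('MF'), random.randint(1950, 2000))
              for _ in range(20_000)]
for year in range(2019, 2040):
    for person in population:
        annual_update(person)

diagnosed = sum(p.diagnosed for p in population if p.hiv_positive)
positive = sum(p.hiv_positive for p in population)
print(f"Fraction of HIV-positive agents diagnosed: {diagnosed / positive:.1%}")
```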
The model simulates eight different HIV testing modalities that have already been introduced in South Africa: 'general' HIV testing (self-initiated testing, testing for insurance purposes and provider-initiated testing not included in other modalities), ANC testing, testing in OI patients, testing partners of newly-diagnosed individuals ('passive referral'), STI patient testing, testing men seeking MMC, testing in prisons and testing in individuals receiving PrEP 32. Key HIV testing assumptions are summarized in Table 1, and a more detailed description of these 'baseline testing modalities' is provided in the supplementary materials (section 2.1). Assumptions about historic rates of HIV testing have been set with reference to reported levels of HIV testing in antenatal clinics, STI clinics, tuberculosis patients (as a proxy for patients with OIs), and prisons, and the model has been calibrated to match the total annual numbers of HIV tests performed in South Africa (private and public sector combined 37) as well as the fraction of tests that are positive over the 2002-2016 period (Fig. 1). All men who seek MMC and all PrEP recipients are assumed to receive HIV testing prior to MMC/PrEP initiation. The rate of 'general' HIV testing is assumed to depend on age, sex, educational attainment, HIV testing history and calendar period. For all HIV testing modalities (with the exception of those associated with MMC and PrEP), it is assumed that previously-diagnosed individuals can get retested, although the rate of testing is assumed to be reduced by 50% in previously-diagnosed ART-naïve individuals and by 85% in ART-experienced individuals 37. After diagnosis, HIV-positive individuals are assumed to disclose their HIV status to their partner(s) with a probability that depends on their sex and relationship type. If disclosure occurs, there is a probability that the partner will seek HIV testing (passive referral). Female disclosure of an HIV-positive status is assumed to be associated with an increased risk of union dissolution. If the relationship continues and the HIV status of the other partner is negative or unknown, increased odds of consistent condom use are assumed. Individuals who test positive are assumed to start ART with a probability that depends on the HIV testing modality, and the same probability is assumed to apply whether the individual is newly diagnosed or testing positive after a prior diagnosis 38.
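The retesting adjustment just described can be made concrete with a small sketch (ours, not the model's code; the function name and base rate are illustrative, while the 50% and 85% reductions are the figures stated above):

```python
# Sketch (ours) of the retesting adjustment described above: the base rate
# of 'general' testing is reduced by 50% for previously-diagnosed ART-naive
# individuals and by 85% for ART-experienced individuals.
def adjusted_testing_rate(base_rate, diagnosed, on_art):
    """Return the annual 'general' HIV testing rate for one individual."""
    if not diagnosed:
        return base_rate
    if on_art:
        return base_rate * (1 - 0.85)   # ART-experienced: 85% reduction
    return base_rate * (1 - 0.50)       # diagnosed, ART-naive: 50% reduction

# Example with a hypothetical base rate of 0.30 tests per year.
for diagnosed, on_art in [(False, False), (True, False), (True, True)]:
    print(diagnosed, on_art, adjusted_testing_rate(0.30, diagnosed, on_art))
```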
In addition to the existing HIV testing strategies, the model was extended to include a number of potential new HIV testing strategies:
• Home-based testing: This is assumed to be offered at an average frequency of once every two years. Male uptake is assumed to be lower than female uptake, as men are less likely to be present when testing teams visit households 12. Separate scenarios are run to compare the cost-effectiveness of home-based testing in urban and rural areas, and the effect of offering self-testing kits alongside it (as a strategy to increase uptake and get tests to household members who are not present at the time of the home visit).
• Mobile testing: Mobile clinics are assumed to move between communities, offering HIV testing. As with home-based testing, separate scenarios are run to compare the cost-effectiveness of mobile testing in urban and rural areas. A further scenario is run to evaluate the effect of using community mobilization to increase the uptake of mobile testing.
• Testing targeted to men who have sex with men (MSM): Men who have recently had sex with other men are assumed to be recruited into HIV testing programmes by peers 39,40.
• Testing targeted to female sex workers (FSWs): Sex workers are encouraged to use dedicated sex worker services that offer HIV testing.
• Testing in family planning clinics: Women who use hormonal contraception (injectable methods or the pill) are assumed to be offered HIV testing on an annual basis.
• Assisted partner notification: Health workers are assumed to contact partners of newly-diagnosed individuals to offer them HIV testing, if the index patient consents and provides their partner's contact details.
• Testing in schools: HIV testing in schools is assumed to be offered on an annual basis, with uptake being higher in sexually-experienced adolescents.
• Mobile testing: Mobile clinics are assumed to move between communities, offering HIV testing. As with home-based testing, separate scenarios are run to compare the cost-effectiveness of mobile testing in urban and rural areas. A further scenario is run to evaluate the effect of using community mobilization to increase the uptake of mobile testing. • Testing targeted to men who have sex with men (MSM): Men who have recently had sex with other men are assumed to be recruited into HIV testing programmes by peers 39,40 . • Testing targeted to female sex workers (FSWs): Sex workers are encouraged to use dedicated sex worker services that offer HIV testing. • Testing in family planning clinics: Women who use hormonal contraception (injectable methods or the pill) are assumed to be offered HIV testing on an annual basis. • Assisted partner notification: Health workers are assumed to contact partners of newly-diagnosed individuals to offer them HIV testing, if the index patient consents and provides their partner's contact details. • Testing in schools: HIV testing in schools is assumed to be offered on an annual basis, with uptake being higher in sexually-experienced adolescents. Table 1. HIV testing assumptions. *Ranges are specified only for the parameters that vary in the uncertainty analysis; the specified ranges correspond to the 2.5 and 97.5 percentiles of the distributions from which the parameter values are sampled. † The parameter is fixed for the baseline testing modalities, but a different parameter value is randomly sampled for each of the new testing modalities. **The parameter is estimated by dividing the reported fraction of partners who get tested (30% 60 ) by the assumed probability of disclosure (0.71 in the case of married women). FP = family planning; FSW = female sex worker; MSM = men who have sex with men; OI = opportunistic infection; OR = odds ratio; RR = relative rate.
• Workplace testing: Uptake is conditional upon being in a workplace in which testing is offered. The fraction of the employed population working in the formal non-agricultural sector 41 is considered an upper bound on the fraction of the employed population that could be offered testing annually.
• Testing partners of pregnant women: Pregnant women are issued invitation letters to give to their male partners, encouraging them to seek testing at the antenatal clinic. In an alternative scenario we consider the effect of instead providing pregnant women with self-testing kits to give to their male partners 17.

Table 1. HIV testing assumptions. *Ranges are specified only for the parameters that vary in the uncertainty analysis; the specified ranges correspond to the 2.5 and 97.5 percentiles of the distributions from which the parameter values are sampled. †The parameter is fixed for the baseline testing modalities, but a different parameter value is randomly sampled for each of the new testing modalities. **The parameter is estimated by dividing the reported fraction of partners who get tested (30% 60) by the assumed probability of disclosure (0.71 in the case of married women). FP = family planning; FSW = female sex worker; MSM = men who have sex with men; OI = opportunistic infection; OR = odds ratio; RR = relative rate.
Assumptions about the uptake and costs of these new testing modalities were based on a PubMed search (details in the supplementary material, section 1). Priority was given to African data sources and (where available) South African data in setting model assumptions. Where there was significant uncertainty around testing assumptions for the new testing modalities, prior distributions were assigned to represent the range of uncertainty around the relevant parameter (Table 1).

Cost assumptions. We purpose-built a cost model for this analysis. All testing is assumed to follow a testing algorithm in keeping with the South African Department of Health's testing cascade, which includes pre- and post-test counselling and rapid testing with confirmatory testing of positive results (Fig. S2.3). The test cascades in the cost model are divided into two groups, one for tests conducted in a healthcare facility, and another for those conducted through a mobile modality. We summarized cost and resource use for staff, consumables (including test kits) and equipment, overheads, and demand creation and targeting costs. Tests conducted in a facility were costed using a bottom-up approach for the staff, consumable and equipment costs and a top-down approach for the overhead, demand creation and targeting costs. Staff costs for facility-based testing were calculated based on observed time by client HIV status from a time-and-motion study. Tests conducted through a mobile modality were costed using a top-down approach for staff, equipment and overheads (because in a dedicated mobile testing service, these expenses are not split across other health activities), and consumable costs were calculated using a bottom-up approach. Resource use was calculated from the perspective of the provider, the public health system. All costs were updated to 2016-17 public-sector prices and salaries and converted to USD using the 07/2016 to 06/2017 period average of 1 USD = 13.58 ZAR 42. Costs are presented unadjusted for inflation and undiscounted, in order to facilitate the use of total cost results for programme planning and budgeting. Further details regarding the cost assumptions are presented in section 2.6 of the supplementary material.
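A minimal sketch of the unit-cost arithmetic described above (ours; apart from the exchange rate quoted in the text, every input below is a hypothetical placeholder rather than a study estimate):

```python
# Sketch (ours) of the bottom-up unit-cost calculation for facility-based
# testing described above. All input values are illustrative placeholders,
# not the study's cost data; only the ZAR/USD rate is taken from the text.
ZAR_PER_USD = 13.58  # 07/2016-06/2017 period average, as stated above

def facility_test_cost_usd(minutes_staff_time, staff_cost_per_min_zar,
                           consumables_zar, equipment_zar, overhead_zar):
    """Bottom-up staff/consumable/equipment cost plus allocated overhead."""
    total_zar = (minutes_staff_time * staff_cost_per_min_zar
                 + consumables_zar + equipment_zar + overhead_zar)
    return total_zar / ZAR_PER_USD

# Hypothetical example: 20 minutes of counsellor time at R2.50/min,
# R25 in test kits and consumables, R3 equipment, R15 allocated overhead.
print(f"${facility_test_cost_usd(20, 2.50, 25, 3, 15):.2f} per test")
```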
Analysis. The model was used to project the expected number of new HIV infections and HIV-related deaths in South Africa over a 20-year period from 1 July 2019 to 30 June 2039. In the 'baseline' scenario we assumed no change to the existing HIV testing modalities, with rates of testing remaining constant at the levels estimated in 2016-17. Separate scenarios were defined for each of the potential new HIV testing modalities. For each of these new scenarios we calculated the change (relative to baseline) in total new HIV diagnoses, new HIV infections, life years lost due to HIV, HIV testing costs, and total HIV programme costs. Life years lost due to HIV were calculated using the West level 26 life table 43. Two incremental cost-effectiveness ratio (ICER) measures were calculated over the 20-year period: the cost per HIV infection averted and the cost per life year saved, the latter being defined for consistency with the metric used in the South African HIV Investment Case 30.
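For illustration, a minimal Python sketch of these two ICER measures (ours, not the study's code; the baseline totals below are the figures quoted in the Results, while the scenario totals are hypothetical placeholders):

```python
# Sketch (ours) of the two ICER measures defined above, comparing a new
# testing scenario against the baseline over the 20-year horizon.
def icer(cost_scenario, cost_baseline, effect_scenario, effect_baseline):
    """Incremental cost per unit of effect gained. The 'effects' here are
    bads (infections, or life years lost), so fewer is better."""
    d_cost = cost_scenario - cost_baseline
    d_effect = effect_baseline - effect_scenario  # e.g. life years saved
    return d_cost / d_effect

# Baseline totals as quoted in the Results; scenario totals are placeholders.
base_cost, new_cost = 37.7e9, 38.4e9
base_ly_lost, new_ly_lost = 68.0e6, 66.2e6
per_lys = icer(new_cost, base_cost, new_ly_lost, base_ly_lost)
print(f"Cost per life year saved: ${per_lys:,.0f}")
```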
For the purpose of probabilistic sensitivity analysis, we randomly varied several of the model parameters simultaneously. As described elsewhere 32, 100 different combinations of HIV transmission parameters and sexual behaviour parameters were identified that yielded model estimates of HIV prevalence consistent with historical South African HIV prevalence data. For each of these 100 parameter combinations we randomly sampled 5 parameter combinations from the distributions shown in Table 1, to generate a total of 500 different parameter combinations. For each of the scenarios described previously, the model was run for each of the 500 parameter combinations to generate a distribution of model outputs. From these we calculated means and standard errors, which were used to calculate 95% confidence intervals. Because ICER distributions were highly skewed, we summarized these instead using medians, with bootstrapping being used to estimate the 95% confidence intervals around the medians. To assess the sensitivity of the ICER estimates to the parameters that were allowed to vary across the 500 simulations, we fitted linear regression models to predict the change in ICER for unit changes in each of the parameters, and highlighted those parameters that had a statistically significant effect on the ICER. We also assessed the sensitivity of the model results to an arbitrary 50% reduction in the frequency of 'general' HIV testing over the 2019-39 period.
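The bootstrap step can be sketched as follows (our illustration; the lognormal draws merely stand in for the 500 per-run ICERs and are not model output):

```python
# Sketch (ours) of the percentile bootstrap used above to put a 95% CI
# around the median ICER, since the ICER distribution is highly skewed.
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_median_ci(values, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap confidence interval for the median."""
    values = np.asarray(values)
    medians = np.array([
        np.median(rng.choice(values, size=values.size, replace=True))
        for _ in range(n_boot)
    ])
    lo, hi = np.quantile(medians, [alpha / 2, 1 - alpha / 2])
    return np.median(values), lo, hi

# Illustrative skewed 'ICER' draws standing in for the 500 model runs:
icers = rng.lognormal(mean=6.0, sigma=0.6, size=500)
med, lo, hi = bootstrap_median_ci(icers)
print(f"median ${med:,.0f} (95% CI ${lo:,.0f}-${hi:,.0f})")
```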
Ethics. As this was a mathematical modelling study that did not involve human subjects, approval from an institutional review board was not necessary.
Results
The model matches the reported total numbers of HIV tests performed and the reported levels of HIV prevalence in adults tested for HIV in the period up to 2016 (Fig. 1). By 2017, the model estimates that the fraction of HIV-positive adults who were diagnosed was 88.7%, though the proportion diagnosed was substantially lower among men (84.5%) than among women (91.2%), and substantially lower in the 15-24 age group (74.5%) than in the 25-49 and 50+ age groups (90.9% and 87.8% respectively) (Fig. 2). The fraction diagnosed was also relatively low among FSWs (82.9%) and MSM (81.4%) (Fig. S3.5). The model estimates that over the 2010-2015 period, the HIV testing modalities that accounted for the greatest numbers of HIV tests were general HIV testing (63.0%), testing in STI clinics (11.4%), testing of OI patients (9.8%) and antenatal testing (7.3%), while the testing modalities that accounted for the greatest numbers of new HIV diagnoses were general HIV testing (43.6%), testing of OI patients (32.5%), testing in STI clinics (8.1%), and testing partners of newly diagnosed individuals (7.4%) (Fig. S3.3).
The model estimates that in the absence of any change to current HIV testing strategies, 12-15 million adult HIV tests will be conducted each year over the next 20 years, and the fraction of tested individuals with positive results (including retests in previously-diagnosed individuals) will decline to around 5% (Fig. 1). Although South Africa is expected to meet the 90% diagnosis target by 2020, it is less likely to meet the 95% target by 2030, in the absence of any change to current HIV testing strategies (Fig. 2a). Reaching the 95% target is particularly unlikely in the case of men (Fig. 2b). Reaching the 90% and 95% targets is also expected to be challenging in groups that have high HIV incidence rates, notably youth (Fig. 2d), FSWs and MSM (Fig. S3.5). Of the baseline testing strategies, the strategies that are expected to lead to the highest rates of new diagnosis (per test performed) are testing partners of newly diagnosed individuals (6.8%) and testing of OI patients (6.8%). Yields are also expected to be relatively high in the case of males seeking MMC (2.3%), as most of these are young men who have not previously been tested. Yields on HIV testing in prisons and antenatal clinics are expected to be relatively low (0.7% and 1.1% respectively) because of the high rates of retesting in these settings.
The new HIV testing strategies that would most significantly increase the average annual number of HIV tests performed over the 2019-39 period include home-based testing coupled with an offer of self-testing (21.5 million additional HIV tests), standard home-based testing (16.0 million) and mobile testing with community mobilization (4.9 million) (Table S3.3). Although most strategies are expected to lead to increases in the total numbers of new diagnoses, some strategies (testing in MSM, FSWs, schools and partners of pregnant women) are expected to lead to reductions in the numbers of new diagnoses (Table S3.3), as the prevention benefits of testing imply a long-term reduction in new infections and thus a reduction in the maximum number of infections that can be diagnosed. The new strategies that are expected to achieve the highest rates of new diagnosis per test performed include assisted partner notification (7.6%), testing in FSWs (3.7%), and MSM (2.6%) (Fig. 3). Although testing partners of pregnant women is usually encouraged regardless of the woman's HIV status, yields on such testing are expected to be substantially higher when the woman is HIV-positive (6.8%) than when the woman is HIV-negative (0.7%). Yields are expected to be lowest in the case of school-based testing (0.3%) and home-based testing coupled with self-testing (0.6%). Both for home-based and mobile testing, yields are expected to be similar in urban and rural areas.
In the absence of new testing strategies, the diagnosed fraction is expected to increase from 90.6% in 2020 to 93.7% by 2030 (Fig. 2a). The introduction of home-based testing with an offer of self-testing is expected to lead to the greatest increase in the fraction of HIV-positive adults diagnosed by 2030, 96.5% (Fig. S3.4). The introduction of this strategy would lead to the 95% target being met by 2030, both in men and women, although it would not be sufficient to achieve 95% diagnosis among youth (Fig. 2). Other strategies that would significantly increase the fraction diagnosed include regular home-based testing, mobile testing and testing in family planning clinics; most other new testing strategies, however, would have negligible impact on the overall fraction diagnosed (Fig. S3.4).
In the absence of any change to HIV testing policy, 5.5 million new adult HIV infections are expected in South Africa over the 2019-39 period, and 68 million life years are expected to be lost due to HIV-related deaths in this period. As a result of the modelled increases in condom use and ART initiation following diagnosis, all of the potential new HIV testing strategies are expected to lead to reductions in new HIV infections (Fig. 4a and Table S3.6). More substantial reductions are expected in the total numbers of life years lost due to HIV (Fig. 4b and Table S3.6). The greatest reductions are expected in the case of home-based testing coupled with an offer of self-testing: a median of 268 000 infections averted (95% CI: 249 000-288 000) and a median of 4.8 million life years saved (95% CI: 4.7-5.0 million).
The average cost per test is expected to be lowest in the case of self-testing, when offered through home-based testing ($3.08 per test) and when distributed to partners of pregnant women ($3.14), due to the saving in health worker time (Table S3.2). The cost per test is expected to be highest in the case of mobile testing with community mobilization ($17.46), due to the high costs of the community mobilization events, which we assumed to include some catering. Other relatively expensive strategies include testing in schools ($7.08) and assisted partner notification ($6.95). Average test costs for home-based and mobile testing are roughly 30% lower in urban areas than in rural areas, due to the greater distances that need to be travelled in the latter.
Total HIV testing costs, in the absence of any change to current testing strategy, are expected to average $64 million per year over the 2019-2039 period (Table S3.4). This is equivalent to 3.4% of the projected annual cost of all HIV programmes combined ($37.7 billion over the 2019-2039 period, or $1.88 billion per year (Table S3.5)). The greatest increases in total testing costs would be expected in the mobile testing scenario with community mobilization (a median increase of $85 million (95% CI: $82-87 million) in the average annual testing cost), the regular home-based testing scenario (median increase $74 million, 95% CI: $73-74 million) and the home-based testing with an offer of self-testing scenario (median increase $66 million, 95% CI: $66-67 million). When taking into account the total HIV programme costs, two of the potential new strategies are expected to be cost-saving (assisted partner notification and testing in FSWs), although the change in net cost is not significantly different from zero (Table S3.5). As a result, these two testing strategies had negative median ICERs, both for the cost per HIV infection averted and the cost per life year saved (Fig. 4), though the upper 95% confidence limits extended above zero (Table S3.).
In the regression analysis, ICER estimates were relatively insensitive to changes in the parameters specific to the new testing modalities (Table S3.13). However, in several scenarios the ICER reduced significantly as the relative rate of testing in previously-diagnosed individuals increased (Table S3.11), because previously-diagnosed individuals were assumed to be more likely to start ART after a retest, and similarly ICERs tended to decrease as rates of ART initiation following community-based testing increased. In contrast, the ICERs tended to increase as the relative rate of testing in treated adults increased, as there was no assumed benefit to testing individuals who were already on ART. The ICERs were also significantly negatively related to the projected future HIV incidence trend (in the absence of changes in testing policy). For example, in the home-based testing with self-testing scenario, the cost per life year saved reduced by $220 for every 1 million increase in the projected number of new infections over the 2019-39 period (Table S3.15). ICERs were also sensitive to the level of HIV testing in the baseline scenario: if it was assumed that there was a 50% reduction in the rate of general HIV testing over the 2019-2039 period, the cost per life year saved in the home-based testing with self-testing scenario decreased from $394 to $316 (Table S3.16).
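To make the cost-effectiveness figures above concrete, an ICER is simply the incremental cost of a strategy divided by its incremental effect relative to the baseline. A minimal sketch in Python, with illustrative numbers chosen only to be of the same order as the model's outputs (they are not taken from the model):

```python
def icer(extra_cost, extra_effect):
    """Incremental cost-effectiveness ratio: additional cost per
    additional unit of effect (here, life years saved) relative to
    the baseline strategy. A negative ICER arises when a strategy
    is cost-saving overall while still saving life years, as the
    model projects for assisted partner notification and FSW testing.
    """
    return extra_cost / extra_effect

# Illustrative only: a strategy adding roughly $1.9 billion in net
# programme cost over 20 years while saving 4.8 million life years
# would cost about $396 per life year saved.
print(icer(1.9e9, 4.8e6))
```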
Discussion
As noted recently by De Cock and colleagues, there is a clear tension between pursuing the HIV testing strategies that are most efficient and the strategies that have the greatest impact 10. In our analysis, strategies such as assisted partner notification, testing in FSWs and MSM, and secondary distribution of self-testing to partners of pregnant women were highly cost-effective (possibly even cost-saving) but had only a very modest impact on population-level HIV incidence and mortality. In contrast, home-based HIV testing was predicted to have the most substantial impact on HIV incidence and mortality, but at the expense of relatively low yields and cost-effectiveness. Despite this, the cost per life year saved of home-based testing coupled with self-testing was $394, well below the willingness-to-pay threshold of $547-842 previously estimated for South Africa 30. Indeed, most of the HIV testing strategies had ICERs below this threshold, with the exception of school-based testing and mobile testing combined with community mobilization. These findings are important, as it is often assumed that home-based testing is unlikely to be efficient in settings such as South Africa, where a high fraction of HIV-positive individuals already know their HIV status 10. Although South Africa is already close to the 90% diagnosis target in adults, it is unlikely to achieve the 95% target by 2030 in the absence of major changes to HIV testing programmes, and strategies such as home-based testing may be crucial to achieving this.
The challenges that South Africa faces in reaching the 95% target are to some extent a reflection of the challenges that the country faces in reducing HIV incidence, as the fraction diagnosed can only be increased substantially if the annual number of new diagnoses exceeds the annual number of new infections. This explains why the fraction diagnosed is lowest in the sub-populations in which HIV incidence is highest (FSWs, youth and MSM), and our results suggest that even with very aggressive HIV testing strategies, the 95% target is unlikely to be met in these high-incidence groups (Fig. 2 and Fig. S3 Table S3.15), making it a less favourable policy option when compared against the current willingness-to-pay threshold. Projections of progress towards the 95% targets and cost-effectiveness are thus very sensitive to future HIV incidence trends, which are difficult to predict with confidence.
This analysis suggests that assessing HIV testing based on numbers of new diagnoses may be problematic, for two reasons. Firstly, our analysis shows that some HIV testing strategies might lead to increases in new diagnoses in the short term but longer-term reductions in new diagnoses as a result of reductions in HIV incidence. Secondly, much of the modelled benefit of testing arises from retesting previously-diagnosed individuals who have either never linked to HIV care or dropped out of care. Although such retesting of previously-diagnosed individuals is often considered a 'waste' of scarce testing resources, it may be important in facilitating linkage to care and (re)initiation of ART. A recent South African study found similar rates of linkage to HIV care in newly-diagnosed adults and previously-diagnosed adults who had not previously linked to care, and higher rates of linkage in individuals who had dropped out of care 38. Our sensitivity analyses suggest that the cost per life year saved reduces significantly as the relative rate of testing in previously-diagnosed untreated individuals increases, and thus it is important to consider repeat diagnoses as well as new diagnoses.
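As a toy illustration of the earlier point that the diagnosed fraction is governed by the balance between new diagnoses and new infections, the deliberately simplified sketch below tracks only those two flows; the full network model additionally accounts for mortality, retesting and linkage, so the numbers here are illustrative assumptions, not model outputs:

```python
def diagnosed_fraction(plhiv, diagnosed, infections_per_year,
                       diagnoses_per_year, years):
    """Toy projection of the diagnosed fraction among people living
    with HIV, ignoring mortality, migration and retesting."""
    for _ in range(years):
        plhiv += infections_per_year      # new, initially undiagnosed
        diagnosed += diagnoses_per_year   # newly diagnosed individuals
        diagnosed = min(diagnosed, plhiv)
    return diagnosed / plhiv

# With diagnoses only matching infections, the undiagnosed pool never
# shrinks, and the fraction creeps from 0.90 to about 0.93 in a decade:
print(diagnosed_fraction(7.0e6, 6.3e6, 2.5e5, 2.5e5, 10))
```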
Our results point to the significant potential benefits of self-testing in improving both the impact and the cost-effectiveness of home-based testing and testing partners of pregnant women. This is in part because of the significantly higher HIV testing uptake associated with self-testing 16, and in part because of the lower cost per HIV-negative test result when there is less health worker time required 22,23. However, the lower cost assumption may be simplistic, as it does not take into consideration the cost of distributing the self-test, due to lack of data. Another potential concern is that self-testing may be less sensitive than standard testing performed by health workers, although a recent review found good agreement between self-testing and health worker-administered testing in most studies 44. We have limited our analysis to two scenarios in which testing uptake would be low in the absence of self-testing: many individuals are not at home when home-based testing teams visit, and getting partners of pregnant women to come to antenatal clinics for testing is frequently challenging, making the secondary distribution of self-testing kits an obvious strategy in such scenarios. However, in settings in which the uptake of HIV testing by health workers is already high (for example, HIV testing in antenatal clinics and TB patients), the increase in HIV testing uptake associated with an offer of self-testing is likely to be minimal. Further work is required to improve our estimates of the cost of distributing HIV self-test kits, and to model the potential cost-effectiveness of offering self-testing to FSWs, a group that has been shown to benefit significantly from this intervention 18,19. The offer of self-testing in outpatient departments could also significantly increase HIV testing rates 45, and should also be evaluated in future modelling. It may also be useful to consider a more targeted approach to the secondary distribution of self-testing kits, such as limiting distribution to pregnant women who do not know their partner's HIV status - although our current model is not capable of modelling such a scenario, and such targeting is not being considered in South Africa.
Our results show that in the context of mobile testing, including community mobilization activities substantially increases the epidemiological impact, but also substantially increases the cost per life year saved (above the willingness to pay threshold). A limitation of our analysis is that we do not consider the cost and impact of community mobilization and sensitization in the context of other new models of community-based HIV testing (particularly home-based testing), due to lack of local data. To some extent community care workers (whom we have assumed to conduct home-based testing) do already perform community sensitization, but it will be important for future analyses to consider whether additional community mobilization activities are economically justified.
A key strength of this analysis is that it is based on a network model, which means that individuals are dynamically linked to specific partners, making it possible to model realistically the effects of disclosure of HIV status to partners and the benefits of partner notification strategies. Our analysis suggests that assisted partner notification and testing partners of pregnant women are likely to be among the most cost-effective strategies, with high yields. However, the relatively small population-level impact of these testing strategies is in part because disclosure occurs less frequently in short-term relationships than in long-term marital relationships, and rates of marriage are relatively low in South Africa 46. Partner notification strategies may have relatively more impact in other African settings, in which a higher fraction of HIV-positive adults are married or cohabiting.
A disadvantage of the network modelling approach is that it is individual-based, which introduces substantial stochastic variation in model outputs. As a result, the 95% confidence intervals around the model estimates tend to be wide, and in some cases this makes the ranking of different HIV testing strategies difficult, especially when comparing ICERs, which have the widest confidence intervals.
Another limitation of this analysis is that it does not consider HIV testing strategies for children 47. We have also not included infections averted in children and life years saved in children (as a result of reduced mother-to-child transmission) when estimating the ICERs. Our analysis also does not consider couple-based testing 48, although - as noted previously - rates of cohabitation and marriage in South Africa are lower than in most other African settings, and this may present a challenge for recruiting couples in our setting. We also did not vary our input costs in the uncertainty analysis, and the costs of the novel testing modalities that have not been implemented at scale in the public sector (in particular self-test distribution) are currently based on assumptions regarding staff time, linkage and the number of tests that can be performed per day. Rates of linkage to ART after diagnosis are difficult to determine in the context of community-based models of HIV testing, which generally suffer from lower rates of linkage than facility-based testing 18,49-51. In our uncertainty analyses, we found that in most community-based testing scenarios the cost per life year saved declined substantially as the rate of linkage to ART increased (Table S3.11), and it is therefore important to quantify linkage rates more precisely. ICERs may be similarly affected by future changes in ART effectiveness and cost, but we have conservatively assumed no future changes in costs or effectiveness in our analyses.
Some of our results may not be generalizable to other African settings. The cost-effectiveness of HIV testing is sensitive to the levels of HIV prevalence in the population 52, which are relatively high in South Africa, and costs of HIV interventions tend to be higher in South Africa than in other African settings 53. However, the relative impact and cost-effectiveness of different testing strategies are likely to be more similar across settings. Possible exceptions are testing strategies that target key populations, which are likely to be relatively more cost-effective (compared to testing in the general population) in the more concentrated epidemic settings of West and Central Africa, and partner notification, which may have relatively more impact in settings that have higher rates of marriage. In this analysis we have compared HIV testing strategies in terms of the cost per life year saved, rather than the cost per disability-adjusted life year (DALY) averted, which would allow for comparison with other disease areas and be more in line with economic analyses that aim to inform decisions of international funders. The primary motivation for using this metric is to achieve consistency with the metric used in the South African HIV Investment Case, which estimated the government's 'willingness to pay' threshold for HIV interventions 30, but other metrics may be more appropriate in other African settings. Our model can be applied to other African countries, although considerable work would be required to parameterize and calibrate the model for other settings, given the large number of socio-demographic and epidemiological assumptions.
As many African countries move closer to achieving the 90% diagnosis target, there has been growing concern over whether it is feasible to sustain the same intensity of testing efforts as has been achieved in the past, and whether it may be appropriate to scale back to a more 'targeted' approach to HIV testing strategies that are associated with higher yields 10,11. Our results suggest that focusing only on yield may be simplistic, and that even HIV testing strategies with relatively low yields may be cost-effective when compared to other currently-funded HIV interventions. Although it may be efficient to reduce the frequency of screening in certain populations (for example, pregnant women and prisoners), more rather than less testing may ultimately be appropriate if substantial reductions in HIV incidence and mortality are to be achieved.
Data Availability
This analysis is based on simulated data generated by a mathematical model. The mathematical model and summaries of the data are available from the corresponding author on request.
A survey of search-based refactoring for software maintenance
This survey reviews published materials related to the specific area of Search-Based Software Engineering that concerns software maintenance and, in particular, refactoring. The survey aims to give a comprehensive review of the use of search-based refactoring to maintain software. Fifty different papers have been selected from online databases to analyze and review the use of search-based refactoring in software engineering. The current state of the research is analyzed and patterns in the studies are investigated in order to assess gaps in the area and suggest opportunities for future research. The papers reviewed are tabulated in order to aid researchers in quickly referencing studies. The literature addresses different methods using search-based refactoring for software maintenance, as well as studies that investigate the optimization process and discuss components of the search. There are studies that analyze different software metrics, experiment with multi-objective techniques and propose refactoring tools for use. Analysis of the literature has indicated some opportunities for future research in the area. More experimentation of the techniques in an industrial environment and feedback from software developers is needed to support the approaches. Also, recent work with multi-objective techniques has shown that there are exciting possibilities for future research using these techniques with refactoring. This survey is beneficial as an introduction for any researchers aiming to work in the area of Search-Based Software Engineering with respect to software maintenance and will allow them to gain an understanding of the current landscape of the research and the insights gathered.
Introduction
Search-Based Software Engineering (SBSE) concerns itself with the resolution of software engineering optimization problems by restructuring them as combinatorial optimization problems. The topic has been addressed and researched in a number of different areas of the software development life cycle, including requirements optimization, software code maintenance and refactoring, test case optimization and debugging. While the area has existed since the early 1990s and the term "search-based software engineering" was originally coined by Harman and Jones (2001), most work in this area is recent, with the number of published papers on the topic growing rapidly in the last few years (De Freitas & De Souza, 2011). Many of the papers in the area of SBSE propose using an automated approach to increase the efficiency of the area of the software process looked at. Of the papers published concerning SBSE, a relatively small number relate to software maintenance. This is despite the fact that the maintenance process is estimated to take 70-75% of total development effort (Bell, 2000; Pressman & Maxim, 2000).
Software code can fall victim to what is known as technical debt. For a software project, especially a large legacy system, the structure of the software can degrade over time as new requirements are added or removed. This "software entropy" implies that over time, the quality of the software tends towards untidiness and clutter. This degradation leads to negative consequences such as extra coupling between objects and increased difficulty in adding new features. As a result, the developer often has to restructure the program before new functionality can be added. This costs the developer time, as the overall development time for new functionality is inflated by this obligatory cleaning up of code.
SBSE has been used to automate this process, thus decreasing the time taken to restructure a program. SBSE can be applied to software maintenance by applying refactorings to the code to reduce technical debt. Using a search-based algorithm, the developer starts with the original program as a baseline from which to improve. The measure of improvement for the program is subjective and can therefore be defined in a variety of different ways. The developer needs to devise a heuristic, or more likely a set of heuristics, to inform how the structure of the program should be improved. Often these improvements are based on the basic tenets of object-oriented design, where the software has been written in an object-oriented language (these tenets consist of cohesion, coupling, inheritance depth, use of polymorphism and adherence to encapsulation and information hiding). Additionally, there are other sources of heuristics such as the SOLID principles introduced by Martin (2003). The developer then needs to devise a number of changes that can be made to the software to refactor it in order to enforce the heuristics. A refactoring action modifies the structure of the code without changing the external functionality of the program. When the refactorings are applied to the software they may either improve or impair the quality, but regardless, they act as tools used to modify the solution.
The refactorings are applied stochastically to the original software solution and then the software is measured to see if the quality of the solution has improved or degraded. A "fitness function" combining one or more software metrics is generally used to measure the quality. These metrics are very important as they heavily influence how the software is modified. There are various metric suites available to measure characteristics like cohesion and coupling, but different metrics measure the software in different ways and thus how they are used will have a different effect on the outcome. The CK (Chidamber & Kemerer, 1994) and QMOOD (Quality Model for Object-Oriented Design) (Bansiya & Davis, 2002) metric suites have been designed to represent object-oriented properties of a system as well as more abstract concepts such as flexibility.
Metrics can be used to measure single aspects of quality in a program, or multiple metrics can be combined to form an aggregate function. One common approach is to weight the metrics according to which heuristics are more important to maintain and combine them into one weighted sum (although this weighting process is often subjective). This weighting may be inappropriate since there is a possibility of metrics conflicting with each other. For instance, one metric may cause inheritance depth to be improved but may increase coupling between the objects. Another method is to use Pareto fronts (Harman & Tratt, 2007) to measure and compare solutions and have the developer choose which solution is most desirable, depending on the trade-offs allowed. A Pareto front will indicate a set of optimal solutions among the available group and will allow the developer to compare the different solutions in the subset according to each individual objective used.
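As a concrete sketch of the weighted-sum approach described above, the following Python fragment combines a few metric values into a single fitness score; the metric names, values and weights are invented for illustration and are not drawn from any particular study:

```python
def weighted_fitness(metric_values, weights):
    """Combine several metric values into one fitness score.

    Each metric is assumed to be normalized so that higher is better
    (a coupling measure, say, would be inverted first). The weights
    encode the developer's subjective view of which heuristics matter
    most, which is exactly where conflicting metrics get traded off.
    """
    return sum(weights[name] * value for name, value in metric_values.items())

# Hypothetical measurements of one candidate design:
metric_values = {"cohesion": 0.72, "inverse_coupling": 0.35, "inheritance": 0.10}
weights = {"cohesion": 0.5, "inverse_coupling": 0.3, "inheritance": 0.2}
print(weighted_fitness(metric_values, weights))  # 0.485
```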
Using the metric or metrics to give an overall fitness value, the fitness function of the search-based technique measures the quality of the software solution and generates a numerical value to represent it. In the solution, refactorings are applied at random and then the program is measured to compare the quality with the previously measured value. If the new solution is improved according to the software metrics applied, this becomes the new solution to compare against. If not, the changes are discarded and the previous solution is kept. This approach is followed over a number of iterations, causing the software solution to gradually improve in quality until an end point is reached and an optimal (or near optimal) solution is generated. The end point can be triggered by various conditions such as number of iterations executed or the amount of time passed. The particular approach used by the search technique may vary depending on the type of search-based approach chosen, but the general method consists of iteratively making changes to the solution, measuring the quality of the new solution, and comparing the solutions to progress towards a more optimal result.
This survey aims to review and analyze papers that use search-based refactoring. We apply certain inclusion and exclusion criteria to find relevant papers across a number of research databases. We highlight different aspects of the research to inspect and analyze through a set of research questions. We also identify related work and highlight the differences between those papers and this survey (e.g. many of the related reviews investigate other areas of SBSE or different aspects of refactoring like UML model refactoring or refactoring opportunities. One study that looks at search-based refactoring is investigated in more detail in Section 7.1 to compare similar aspects of the analysis conducted in the paper). Each paper is reviewed and summarized to give an overview of any experiments conducted and the results gained. The overview of the papers is organised into five different groups to cluster together related studies. The papers are then analyzed to address the research questions outlined and derive similarities and differences between the studies. Various different aspects of the papers are analyzed as well as research gaps and possible areas for future work in the area.
The remainder of the survey is structured as follows. Section 2 gives an overview of some of the more common search techniques used for refactoring in SBSE. Section 3 gives an outline of how the survey is conducted, along with an outline of aspects to be measured and analyzed, and introduces the research questions. Section 4 gives a synopsis of the analyzed papers. Section 5 analyzes the papers reviewed and measures patterns that can be derived from the work conducted in the literature. Section 6 discusses and addresses the research questions outlined in Section 3. Section 7 gives an overview of related work along with a discussion of the differences and similarities. Section 8 looks at validity threats in the survey and Section 9 concludes the survey.
Search techniques
There are numerous different metaheuristic algorithms available to use in the SBSE field. These methods are used to automate search-based problems through gradual quality increases. Random search is used as a benchmark for most search-based metaheuristic algorithms to compare against. Although most metaheuristics use a nondeterministic approach to making choices, the choice must be assessed for validity and a fitness function is used to evaluate whether the search should continue from that point or backtrack. Below, the most common metaheuristic algorithms used in the literature are discussed:
Hill climbing
Hill climbing (HC) is a type of local search algorithm. With the HC approach, a random starting point is chosen in the solution, and the algorithm begins from that point. A change is then made, and the fitness function is used to compare the two solutions. The one with the highest perceived "quality" becomes the new optimum solution and the algorithm continues in this way. Over time, the quality of the solution is improved as less optimal changes are discarded and better solutions are chosen. Eventually, an optimal or sub-optimal solution is reached with the same functionality but a better structure. This is considered a fast algorithm in relation to the other metaheuristic choices but, as with other local search algorithms, it has the risk of being restricted to local optima. The algorithm may "peak" at a less optimal solution (akin to reaching a peak after climbing a hill). There are two main types of HC search algorithm that differ in one aspect. First-ascent HC is the simpler version of the algorithm, whereas steepest-ascent HC has a slightly more sophisticated search method and is a superior choice for quality. Other variations are stochastic HC (neighbors are chosen at random and compared) or random-restart HC (algorithm is restarted at different points to explore the search space and improve the local optima reached). HC is one of the more common search algorithms used in SBSE, and has similarities to other search techniques. The HC technique may not produce solutions as effective as some others do, but it does tend to find a suitable solution faster and more consistently (O'Keeffe & Cinnéide, 2006).
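A minimal sketch of steepest-ascent HC over candidate designs is given below; `neighbours` and `fitness` are placeholders standing in for the refactoring generator and metric suite of whichever tool is used, not the interface of any published tool:

```python
def steepest_ascent_hc(solution, neighbours, fitness):
    """Steepest-ascent hill climbing: examine every neighbour at each
    step, move to the best improving one, and stop at a local optimum.

    neighbours(s) is assumed to return designs reachable from s by one
    behavior-preserving refactoring; fitness(s) is the metric-based
    quality function (higher is better).
    """
    current, current_fit = solution, fitness(solution)
    while True:
        candidates = [(fitness(n), n) for n in neighbours(current)]
        if not candidates:
            return current                     # no legal refactorings left
        best_fit, best = max(candidates, key=lambda pair: pair[0])
        if best_fit <= current_fit:
            return current                     # local optimum ("peak") reached
        current, current_fit = best, best_fit
```

First-ascent HC differs only in accepting the first improving neighbour it encounters instead of scanning them all, trading some solution quality for speed.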
Simulated annealing
Simulated annealing (SA) is a modification of the local search algorithm, used to address the problem of being trapped at a locally optimal solution. In SA, the basic method is the same as the HC algorithm: the metaheuristic moves stochastically between different variations of a solution, deciding between them with a fitness function until it reaches a higher quality. The difference is that it introduces a "cooling factor" to overcome the disadvantage of local optima in the HC approach. The cooling factor adds an extra heuristic by stating the probability that the algorithm will choose a solution that is less optimal than the current iteration. While this may seem unintuitive, it allows the process to explore different areas of the search space, giving extra options for optimization that would otherwise be unavailable. This probability is initially high, giving the search the ability to experiment with different options and choose the most desirable neighborhood in which to optimize. It is then generally decreased gradually until it is negligible. The probability given by the cooling factor is normally linked to a "temperature" value that is used to simulate the speed at which the algorithm "cools".
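The cooling factor described above is typically realized as a Metropolis-style acceptance rule; a minimal sketch, with a geometric cooling schedule chosen purely for illustration:

```python
import math
import random

def simulated_annealing(solution, neighbour, fitness,
                        t0=1.0, cooling=0.95, steps=10000):
    """SA sketch: always accept improvements; accept a worsening move
    with probability exp(delta / T), which shrinks as T cools.

    neighbour(s) is assumed to apply one random refactoring to s;
    fitness is maximized. t0, cooling and steps are illustrative values.
    """
    current, current_fit = solution, fitness(solution)
    temperature = t0
    for _ in range(steps):
        candidate = neighbour(current)
        delta = fitness(candidate) - current_fit
        if delta > 0 or random.random() < math.exp(delta / temperature):
            current, current_fit = candidate, current_fit + delta
        temperature *= cooling                 # geometric cooling schedule
    return current
```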
Although the SA process may produce a better solution than the HC process, HC is considerably more reliable, as the SA process may struggle to settle on a solution.
Genetic algorithms
Genetic algorithms (GAs) are a class of evolutionary algorithms (EAs) that, much like SA, mimic a process used elsewhere in science, namely the reproduction and mutation processes in genetics and natural selection. GAs use a fitness function to measure the quality among a number of different solutions (known as "genes") and prioritize them. At each generation (i.e. each iteration of the search), the genes are measured to determine which are the "fittest". Each generation, in order to introduce variation into the gene pool, a proportion of the population is selected and used to breed the new generation of solutions. With this selection, two steps are used to create the new generation. First, a crossover operator is used to create the child solution(s) from the parents selected. The algorithm itself determines exactly how the crossover operator works, but generally, selections are taken from each parent and spliced together to form a child. Once the child solution(s) have been created, the second step is mutation. Again, the mutation implementation depends on the GA written. The mutation is used to provide random changes in the solutions to maintain variation in the selection of solutions and prevent convergence. After mutation is applied to a selection of the child solutions, the newly created solutions are inserted back into the gene pool. At this point the algorithm calculates the fitness of any new solutions and reorders them in relation to the overall set. Generally, a population size is specified, and this ensures that the weakest solutions are weeded out of the gene pool each generation. This process is repeated until a termination condition is reached.
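The sketch below shows one generation of a simple GA over refactoring sequences, with fitness-ranked selection, single-point crossover and point mutation; the representation and operators are illustrative assumptions, since each published GA defines its own:

```python
import random

def next_generation(population, fitness, refactoring_pool, mutation_rate=0.1):
    """One GA generation over individuals represented as lists of
    refactoring operations ("genes").

    Assumes the population has at least four individuals and every
    individual has at least two genes; real implementations guard
    these cases and check behavior preservation of each sequence.
    """
    pop_size = len(population)
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[: pop_size // 2]          # keep the fitter half
    children = []
    while len(parents) + len(children) < pop_size:
        mum, dad = random.sample(parents, 2)
        cut = random.randrange(1, min(len(mum), len(dad)))
        child = mum[:cut] + dad[cut:]          # single-point crossover
        if random.random() < mutation_rate:    # point mutation
            child[random.randrange(len(child))] = random.choice(refactoring_pool)
        children.append(child)
    return parents + children
```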
Multi-objective evolutionary algorithms
When refactoring a software project, as with other areas of software engineering, there are likely numerous conflicting objectives to address and optimize. A multi-objective algorithm can be used to consider the objectives independently instead of having to combine them into one overarching objective to improve. There are numerous EAs available that are used for multi-objective problems, known as multi-objective evolutionary algorithms (MOEAs). The downside to using multi-objective algorithms for software refactoring over the mono-objective metaheuristic algorithms is that the extra processing needed to consider the various objectives can cause an increase in the time needed to generate a set of solutions. Another issue is that when a MOEA generates a population of solutions, the "best solution" is up to the interpretation of the user. Whereas a single-objective EA can rank the final population of solutions by a single fitness value, there may be numerous possible choices in the MOEA population depending on which objective fitness is more important. On the other hand, this gives the user multiple options depending on their desire or the situation.
Most MOEAs use Pareto dominance (Coello Coello, 1999) in order to restrict the population of solutions generated. If at least one objective of a solution is better than in another solution and none of the remaining objectives are worse, that solution is said to dominate the other solution. Therefore, a solution is non-dominated unless another solution in the population dominates it. When Pareto dominance is used to generate an optimal set of solutions, they can be inspected on a Pareto front. When the number of objectives is less than or equal to three, the non-dominated solutions can be visualized on the Pareto front. This allows the user to easily visualize the state of the solutions according to each independent objective and choose the most suitable one along the Pareto front. Multi-objective algorithms that are designed to handle more than three objectives are generally referred to as many-objective algorithms (Deb & Jain, 2013). These tend to avoid using only Pareto dominance as it can have difficulty successfully handling more than three objectives (Jain & Deb, 2014). As the number of objectives to measure increases, it becomes more difficult to rank the solutions into different fitness fronts (as an increasing number become non-dominated, which results in many of the solutions being given the same fitness rank). When this happens, the multi-objective algorithm becomes less useful at discerning better populations of solutions. Table 1 lists MOEAs that use Pareto dominance to choose solutions. For further information, a survey of MOEAs is given by (Coello Coello, 1999).
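Following the definition above, a dominance check and a non-dominated filter can be sketched in a few lines (objective vectors are assumed to be maximized; the two-objective values below are invented):

```python
def dominates(a, b):
    """True if a Pareto-dominates b: at least as good on every
    objective and strictly better on at least one (maximizing)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(solutions):
    """Filter a population of objective vectors down to the
    non-dominated set, i.e. the Pareto front."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other != s)]

# E.g. a cohesion/inverse-coupling trade-off: (0.5, 0.5) is dominated
# by (0.6, 0.6); the remaining three solutions form the front.
print(pareto_front([(0.8, 0.2), (0.6, 0.6), (0.5, 0.5), (0.9, 0.1)]))
```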
Survey outline
In this survey, we aim to aggregate information and patterns about the research that has been conducted related to search-based refactoring for software maintenance and the trends that have been uncovered. As such, we aim to answer the following question: "To what extent has search-based refactoring been studied in software maintenance?" In order to address this, the following research questions have been introduced:
RQ1: How many papers were published per year?
RQ2: What are the most common methods of publication for the papers?
RQ3: Who are the most prolific authors investigating search-based refactoring in software maintenance?
RQ4: What types of studies were used in the papers?
RQ5: What refactoring approaches were used in the literature?
RQ6: What search techniques were used in the refactoring studies?
RQ7: What types of programs were used to evaluate the refactoring approaches?
To answer these questions, we analyze and discuss similarities and patterns in the studies investigated. The research questions are addressed in Section 5 by inspecting the various aspects queried such as search techniques used, popular authors, relevant conferences and journals and open source programs used in the studies.
Google Scholar, IEEE Xplore, ScienceDirect, Springer and Scopus were used to find relevant papers by using the search string "search AND based AND software AND engineering AND maintenance AND refactoring AND metaheuristic". We used AND to connect the keywords as using OR or a combination of the two would have been too general, giving hundreds of thousands of results in Google Scholar. The search was conducted by looking for the words anywhere in the article, rather than the alternative of looking only within the article title or elsewhere. The most recent search was implemented in September 2016. The time period for the search in which the papers were published was unrestricted; therefore the period covers papers published up to and including 2016. The number of papers found in each search repository is given in Table 2. Of the papers found with the search, the results were analyzed and reduced to only include the papers that were relevant to software maintenance and involved one of the following:
- Refactoring with search-based techniques.
- Automated refactoring.
- Investigation of maintenance metrics with search-based techniques.
Likewise, the following papers were excluded:
- Papers that involved defect detection but not resolution.
- Literature reviews, theses, abstracts, tutorials, reports or posters.
- Papers that were written in a language other than English.
Although certain related areas captured in the search (such as modelling, defect detection, software architecture, testing etc.) could have been excluded from the search to reduce the number of hits, the papers were analyzed manually using the inclusion and exclusion criteria due to the similar nature of many of the areas in order to ensure that relevant papers weren't lost from the review. Of the papers analyzed, 34 were found to be relevant. Table 10 in the appendix gives a list of the papers, as well as the authors and year published. Extra information about the papers surveyed is given in the Additional file 1, as well as a list of literature reviews related to SBSE in general.
Refactoring in search-based software engineering
The different papers captured in the search are analyzed below. They have been further categorised into five subsections to capture commonly recurring areas, although there may be some overlap between a paper in one section with another section.
4.1 Refactoring to improve software quality
Ó Cinnéide and Nixon (1999a) developed a methodology to refactor software programs to apply design patterns to legacy code. They created a tool to convert the design pattern transformations into explicit refactoring techniques that can then be automatically applied to the code. The tool, called DPT (Design Pattern Tool), was implemented in Java and applied the transformations first to an abstract syntax tree that was used to represent the code, before changes were applied to the code itself. The tool would first work out the transformations needed to convert the current solution to the desired pattern (in the paper a plausible precursor was chosen first). It then converted the pattern transformations into a set of minipatterns. These minipatterns would then be further decomposed, if needed, into a set of primitive refactorings. The minipatterns would be reused if applicable for other pattern transformations. The authors analyzed the patterns of Gamma et al. (1994) to determine whether a suitable transformation could be built with the applicable mini transformations. They found that while the tool generally worked well for the creational patterns, structural patterns and behavioral patterns caused problems. In a different paper (Cinnéide & Nixon, 1999b), more detail was given on the tool and how it can be used to apply the Factory Method pattern, and in another subsequent paper (Cinnéide, 2000), Ó Cinnéide defined further steps of work to test the applicability of the tool. He defined plans to apply the patterns to production software to test whether behavior is truly preserved and to create a tool to objectively measure how suitably the pattern has been applied to the software.
O'Keeffe and Ó Cinnéide (2003) continued research in the area of SBSE relating to software maintenance by developing a tool called Dearthóir. They introduced Dearthóir as a prototype tool used to refactor Java code automatically using SA. The tool used two refactorings, "pullUpMethod" and "pushDownMethod", to modify the hierarchical structure of the target program. Again, the refactorings must preserve the behavior of the program in order for them to be applicable. They must also be reversible in order to use the SA method. To measure the quality of the solution, the authors employed a small metric suite to analyze the object-oriented structure of the program. The metrics, "availableMethods" and "methodsInherited", were measured for each class in the program and a weighted sum was used to give an overall fitness value for the solution. A case study was employed to test the effectiveness of the tool. A simple 6-class hierarchy was used for the experiment. The tool was shown to restructure the class design to improve cohesion and minimize code duplication.
Further work (O'Keeffe & Cinnéide, 2004) introduced more refactorings and different metrics to the tool. Along with method movement refactorings, abilities were introduced to change a class between abstract and concrete, to extract or collapse a subclass of an abstract class, and to change the position of a class in the hierarchy of the class design. A method was introduced to choose appropriate metrics to use in the metrics suite of the tool. The metrics used measured the methods and classes of the solution, counting the number of rejected, duplicate and unused methods as well as the number of featureless or abstract classes. Due to the possibility for the metrics to conflict with each other they were then given dependencies and weighted according to the authors' judgment, as outlined in the method detailed before. Another case study was used to detail the action of the tool and the outcome was evaluated using the value of the metrics before and after the tool was applied. Every metric used either improved or was unchanged after the tool had been applied, indicating that the tool had been successful in improving the structure of the solution.
O'Keeffe and Ó Cinnéide continued to work on automated refactoring by developing the Dearthóir prototype into the CODe-Imp platform (Combinatorial Optimisation for Design Improvement). They introduced it initially as a prototype automated design-improvement tool (O'Keeffe & Cinnéide, 2006) using Java 1.4 source code as input. Like Dearthóir, CODe-Imp uses abstract syntax trees to apply refactorings to a previously designed solution, but it has been given the ability to implement HC (first-ascent or steepest-ascent) as well as SA. They based the set of metrics used in the tool on the QMOOD model of software quality (Bansiya & Davis, 2002). Six refactorings were available initially, and 11 different metrics were used to capture flexibility, reusability and understandability, in accordance with the QMOOD model. Each evaluation function is based on a weighted sum of quotients on the set of metrics.
The authors then conducted a case study to test how effective each function and each search technique is at refactoring software. The reusability function was found to not be suitable to the requirements of search-based refactoring due to the introduction of a large number of featureless classes. The other two evaluation functions were found to be suitable with the understandability function being most effective. All search techniques were found to produce quality improvements with manageable run-times, with steepest-ascent HC providing the most consistent improvements. SA produced the greatest quality improvements in some cases whereas first-ascent hill-climbing generally produced quality improvements for the least computational expenditure. They further expanded on this work (O'Keeffe & Cinnéide, 2008a) to include a fourth search technique (multiple-restart HC) and larger case studies. The functionality of the CODe-Imp tool was also expanded to include six additional refactorings. Similar results were found with the reusability function found to be unsuitable for search-based refactoring and all of the available search techniques found to be effective.
They subsequently (O'Keeffe & Cinnéide, 2007a) used the CODe-Imp platform to conduct an empirical comparison of three methods of metaheuristic search in search-based refactoring: multiple-ascent (as well as steepest-ascent) HC, SA and a GA. To conduct the comparison, four Java programs were taken from SourceForge and Spec-Benchmarks and the mean quality change was measured across the program solutions for each of the three metaheuristic techniques. These results were then normalized for each metaheuristic technique and then compared against each other. They analyzed the results to conclude that multiple-ascent HC was the most suitable method for search-based refactoring due to the speed and consistency of the results compared to the other techniques. This work was also expanded (O'Keeffe & Cinnéide, 2008b) with a larger set of input programs, greater number of data points in each experiment and more detailed discussion of results and conclusions.
At a later point, Koc et al. (2012) also compared metaheuristic search techniques using a tool called A-CMA. They compared five different search techniques by using them to refactor five different open source Java projects and one student project. The techniques used were HC (steepest descent, multiple steepest descent and multiple first descent), SA and artificial bee colony (ABC), as well as a random search for comparison. The results suggest that the ABC and multiple steepest descent HC algorithms are the most effective techniques of the group, with both techniques being competitive with each other. The authors suggested that the effectiveness of these techniques may be due to their ability to expand the search horizon to find higher quality solutions.
Mohan, Greer and McMullan (Mohan et al., 2016) adapted the A-CMA tool to investigate different aspects of software quality. They used combinations of metrics to represent three quality factors: abstraction, coupling and inheritance. They then constructed an experimental fitness function to measure technical debt by combining relevant metrics with influence from the QMOOD suite, as well as the SOLID principles of object-oriented design. The technical debt function was compared against each of the other quality factors by testing them on six different open source systems with a random search, HC and SA. The technical debt function was found to be more successful than the others, although the coupling function was also found to be useful. Of the three searches used, SA was the most effective. The individual metrics of the technical debt function were also compared to deduce which were more volatile.
O'Keeffe and Ó Cinnéide used steepest-ascent HC with CODe-Imp to attempt to refactor software programs to have a more similar design to other programs based on their metric values (O'Keeffe & Cinnéide, 2007b). The QMOOD metrics suite was used to compare against previous results, and an overall fitness value was derived from the sum of 11 different metrics. A dissimilarity function was evaluated to measure the absolute differences between the metric values of the programs tested, where a lower dissimilarity value meant the programs were more similar. CODe-Imp was then used to refactor the input program to reduce its dissimilarity value to the target program. This was tested with three open source Java programs, with six different tests overall (testing each program against the other two). Two of the programs had been refactored to be more similar to the targets, but for the third, the dissimilarity was unchanged in both cases. The authors speculated that this was due to the limited number of refactorings available for the program as well as the low dissimilarity to begin with. They further speculated that the reason for the limited available refactorings was due to the flat hierarchical structure in the program.
Moghadam and Ó Cinnéide (2011) rewrote the CODe-Imp platform to support Java 6 input and to provide a more flexible platform. It now supported 14 different design-level refactorings across three categories: method-level, field-level and class-level. The number of metrics had also been expanded to 28, measuring mainly cohesion or coupling. The platform was also given the option of choosing between using Pareto optimality or weighted sums to combine the metrics and derive fitness values.
Moghadam and Ó Cinnéide used CODe-Imp along with JDEvAn (Xing & Stroulia, 2008) to attempt to refactor code towards a desired design using design differencing. The JDEvAn tool is used to extract the UML models of two solutions of code, and detect the differences between them. An updated version of the code is created by a maintenance programmer to reflect the desired design in the code and the tool uses this along with the original design to find the applicable changes needed to refactor the code. The CODe-Imp platform then uses the detected differences to implement refactorings to modify the solution towards the desired model. Six open source examples were used to test the efficiency of these tools to create the desired solutions. The number of refactorings detected and applied in each program using the above approach was collected, and in each case a high percentage of refactorings were shown to have been applied.
Seng, Stammel and Burkhart (Seng et al., 2006) introduced an EA to apply possible refactorings to a program phenotype (an abstract code model), using mutation and crossover operators to provide a population of options. The output of the algorithm was a list of refactorings the software engineer could apply to improve a set of metrics. They used class level refactorings, noting the difficulty of providing refactorings of this type that were behavior preserving. They tested their technique on the open source Java program JHotDraw, using a combination of coupling and cohesion metrics to measure the quality gain in the class structure of the program. For the purposes of the case study, they focused on the "move method" refactoring. The algorithm successfully used the technique to improve the metrics. They also tested the ability of the algorithm to reorganize manually misplaced methods, and it was successfully able to suggest that the methods are moved back to their original position.
Harman and Tratt (Harman & Tratt, 2007) argued that Pareto optimality can be used to improve search-based refactoring by combining different metrics in a useful way. As an alternative to combining different metrics using weights to create complex fitness functions, a Pareto front can be used to visualize the effect of each individual metric on the solution. Where the quality of one solution may have a better effect on one metric, another solution may have an increased value for another. This allows the developer to make an informed decision on which solution to use, depending on what measure of quality is more important for the project at that instant. Pareto fronts can also be used to compare different combinations of metrics against each other. An example was given with the metrics CBO (Coupling Between Objects) and SDMPC (Standard Deviation of Methods Per Class) on several open source Java applications.
Refactoring for testability
Harman (Harman, 2011) proposed a new category of testability transformation (used to produce a version of a program more amenable to test data generation) called testability refactoring. The aim of this subcategory is to create a program that is both more suited to test data generation and improves program comprehension for the programmer, combining the two areas (testing and maintenance) of SBSE. As testability transformation uses refactorings to modify the structure of a program, the same technique can be used for program maintenance, although the two aims may be conflicting. Here a testability refactoring will refer to a process that satisfies both objectives. Harman mentioned that these two possibly conflicting objectives form a multi-objective scenario. He explained that the problem would be well suited to Pareto optimal search-based refactoring and also mentioned a number of ways in which testability transformation may be suited to testability refactoring.
A subsequent study investigated the use of a multi-objective approach that takes into consideration the testing effort on a system. The authors used their approach to minimize the occurrence of five well known anti-patterns (i.e. types of design defect), while also attempting to reduce the testing effort. Three different multi-objective algorithms were tested and compared: NSGA-II, SPEA2 and MOCell. This approach was tested on four open source systems. Of the three options, MOCell was found to be the metaheuristic that provided the best performance.
Ó Cinnéide, Boyle and Moghadam (2011) used the LSCC (Low-level Similarity-based Class Cohesion) metric with the CODe-Imp platform to test whether automated refactoring with the aid of cohesion metrics can be used to improve the testability of a program. They refactored a small Java program with cohesive defects introduced. Ten volunteers with varying years of industrial experience constructed test cases for the program before and after refactoring, and were then surveyed on certain areas of the program to discern whether it had become easier or harder to implement test cases after refactoring. The results were ambivalent, but generally there was little difference reported in the difficulty of producing test cases in the initial and final program. The authors suggested that these unexpected results may stem from the size of the program being used. They predicted that if a larger, more appropriate application were used, the refactored program might yield easier test cases. The programmers surveyed also mentioned that the use of modern IDEs helped to reduce the issues with the initial code and alleviated any predicted problems with producing test cases for the program in this state.
Testing metric effectiveness with refactoring
Ghaith and Ó Cinnéide (2012) investigated a set of security metrics to determine how successful they could be for improving a security-sensitive application using automated refactoring. They used the CODe-Imp platform to test the 16 metrics on an example Java application by using them separately at first. After determining that only four of the metrics were affected by the refactoring selection available, they were combined together to form a fitness function to represent security. To avoid the problems related to using a weighted sum approach to combining the metrics, they instead used a Pareto optimal approach. This ensured that no refactoring would be chosen that would cause a decrease in any of the individual metrics in the function. The function was then tested on the Java program using first-ascent HC, steepest-ascent HC and SA. The results for the three searches were mostly identical except that SA caused a higher improvement in one of the metrics. Conversely, the SA solution entailed a far larger number of refactorings than the other two options (2196 compared to 42 and 57). The effectiveness of these metrics was also analyzed and it was discovered that of the 27% average metric improvement in the program, only 15.7% of that improvement indicated a real improvement in its security. This was determined to be due to the security metrics being poorly formed.
Ó Cinnéide et al. (2012) conducted an investigation to measure and compare different cohesion metrics with the help of the CODe-Imp platform. Five popular cohesion metrics were used across eight different real world Java programs to measure the volatility of the metrics. It was found that the five metrics that aimed to measure the same property disagreed with each other in 55% of the applied refactorings, and in 38% of the cases metrics were in direct conflict with each other. Two of the metrics, LSCC and TCC (Tight Class Cohesion), were then studied in more detail to determine where the contradictions were in the code that caused the conflicts. Different variations of the metrics were used to compare them in two different ways. This study was extended to use 10 real world Java programs. Two new techniques, exploring areas referred to as Iterative Refactoring Agreement/Disagreement and Gap Opening/Closing Refactoring, were used to compare the metrics and the number of metric pairs compared was increased to 90 pairs. Among the metrics compared, LSCC was found to be the most representative, while SCOM (Sensitive Class Cohesion) was found to be the least.
Veerappa and Harrison (2013) expanded upon this work by using CODe-Imp to inspect the differences between coupling metrics. A similar approach was used to measure the effects of automated refactoring on four standard coupling metrics and to compare the metrics with each other. Eight open source Java projects were used, with all but one of the programs being the same as those used in Ó Cinnéide et al.'s experiment. To measure volatility, they calculated the percentage of refactorings that caused a change in the metrics, and from these a mean value was calculated across the eight projects. The amount of spread between these values was calculated for each metric using standard deviation, as well as the correlation values between each metric.
This experiment resulted in less divergence between metrics, with only 7.28% of changes in direct conflict, but in 55.23% of cases the changes were dissonant, meaning that there was a larger chance that a change in one metric had no effect on another. They also measured the effect of refactoring with the RFC (Response For Class) metric on a cohesion metric and found that after a certain number of iterations, the cohesion will continue to increase as the coupling decreases, minimizing the effectiveness of the changes.
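The Pareto-style acceptance rule used by Ghaith and Ó Cinnéide above (a refactoring is only accepted if it degrades none of the individual metrics in the combined function) can be written down compactly. The sketch below is illustrative rather than their implementation, and the metric vectors are invented; it assumes all metrics are oriented so that higher is better.

```python
def accept_refactoring(after, before):
    """Pareto-style acceptance: reject any refactoring that decreases an
    individual metric; require a strict improvement in at least one."""
    improved = False
    for a, b in zip(after, before):
        if a < b:
            return False  # a single degraded metric rejects the refactoring
        if a > b:
            improved = True
    return improved

# Hypothetical values of four security metrics before and after a refactoring.
before = (0.41, 0.73, 0.55, 0.62)
after = (0.44, 0.73, 0.58, 0.62)
print(accept_refactoring(after, before))  # True: two metrics improve, none degrade
```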
Simons, Singer and White (2015) compared metric values with professional opinions to deduce whether metrics alone are enough to helpfully refactor a program. They constructed a number of software examples and used a selection of metrics to measure them. A survey was then conducted and responded to by 50 experienced software engineers, who were asked for their opinion of the quality of the solutions: whether they agreed or disagreed that a solution was reusable, flexible or understandable. The metrics were mapped to these quality attributes and correlation plots were produced to measure whether there was any correlation between the engineers' opinions and the metric values. Almost no correlation was found between the two, leading the authors to suggest that metrics alone are insufficient to optimize software quality, as they do not fully capture the judgments of human engineers when refactoring software.

Vivanco and Pizzi (2004) used search-based techniques to select the most suitable maintainability metrics from a group. They presented a parallel GA to choose between 64 different object-oriented source code metrics. First, they asked an experienced software architect to rank the 366 components of a software system in difficulty, from 1 to 5. The GA was then run on the set of metrics both sequentially and in parallel, using C++ for the GA and MPI for the parallel implementation. Metrics found to be more effective included coupling, understandability and complexity metrics. Furthermore, the parallel program ran substantially faster than the sequential version.

Bakar et al. (2012a) attempted to outline a set of guidelines for selecting the best metrics for measuring maintainability in open source software. An EA was used to optimize and rank the set of metrics, which were listed in previous work (Bakar et al., 2012b). An analysis was conducted to validate the quality model using the CK metric suite (Chidamber & Kemerer, 1994) of Object-Oriented Metrics (also known as MOOSE, Metrics for Object-Oriented Software Engineering). The CKJM tool proposed by Spinellis (2005) was used to calculate the values of the CK metrics for the open source software under inspection. These values were then used in the EA as ranking criteria in selecting the best metrics to measure maintainability in the software product. The proposed approach had not yet been empirically validated; the paper presented the outcome of ongoing research.
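The correlation check at the heart of the Simons, Singer and White study is simple to reproduce with standard tooling. The sketch below uses SciPy's rank correlation on invented metric values and survey scores; the weak correlation in the toy data merely illustrates the kind of result they report.

```python
from scipy.stats import spearmanr

# Hypothetical inputs: a cohesion-style metric value for each design example,
# and the mean engineer agreement score (1-5) that the design is understandable.
metric_values = [0.62, 0.55, 0.71, 0.40, 0.66, 0.48, 0.59, 0.73]
opinion_scores = [3.1, 3.4, 2.9, 3.2, 3.3, 2.8, 3.5, 3.0]

rho, p_value = spearmanr(metric_values, opinion_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.2f})")
# A rho near zero suggests the metric does not track the engineers'
# judgment of the quality attribute it is supposed to capture.
```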
Harman, Clark and Ó Cinnéide (2013) wrote about the need for surrogate metrics that approximate the quality of a system in order to speed up the search. If non-functional properties of the system (e.g. if a mobile device is used) mean limited time or power, then it may be more important for the fitness function to be calculated quickly or with little computational effort, in which case approximate metrics will be more useful than precise ones. The trade-off is that the approximate metrics still guide the search in the direction of optimality while improving the performance of the search. This ability would be useful in dynamic adaptive SBSE, where self-adaptive systems may take into account functional as well as non-functional properties. Harman et al. have also discussed dynamic adaptive SBSE elsewhere.

Refactoring to correct software defects

One approach used examples of bad design to produce rules to aid in design defect detection with genetic programming (GP), and then used these rules in a GA to help propose sequences of refactorings to remove the detected defects. The rules are made up of a combination of design metrics to detect instances of blob, spaghetti code or functional decomposition design defects. Before the GA was used, a GP approach experimented with different rules that could reproduce the example set of design defects, with the most accurate rules being returned. Once a set of rules was derived, it could be used to detect the number of defects in the correction approach. The GA could then be used to find sequences of refactorings that reduce the number of design defects in the program. The approach was compared against a different rule-based approach to defect detection on four open source Java programs and was found to be more precise in the design defects found.
Further work with this approach to design smell (defect) correction was then investigated (Kessentini et al., 2012). First, the experimental code base was extended to six different open source Java programs, with the results further supporting the approach. The GA used in the code smell correction approach was then replaced with a multi-objective GA (NSGA-II), using the previous objective function to minimize design defects as one of two separate objectives to drive the search. The second objective used a measure of the effort needed to apply the refactoring sequence, with each refactoring type given an effort value by the authors.

Kessentini, Mahaouachi and Ghedira (2012) extended the original approach by using examples of good code design to help propose refactoring sequences for improving the structure of code. Instead of generating refactoring rules to detect design defects and then using them to generate refactoring sequences with a GA, they used a GA directly to measure the similarity between the subject code and the well-designed code. The fitness function adapted the Needleman-Wunsch alignment algorithm (a dynamic programming algorithm for efficiently finding similar regions between two sequences of DNA, RNA or protein, which can also be used to compare code fragments) to increase the similarity between the two sets of code, allowing the derived refactoring sequences to remove code smells.

Ouni et al. (2012) created an approach to measure semantics preservation in a software program when searching for refactoring options to improve the structure. They used a multi-objective approach with NSGA-II to combine the previous approach for resolving design defects with the new approach, ensuring that the resolutions retained semantic similarity between code elements in the program. The new approach used two main methods to measure semantic similarity. The first measures vocabulary-based similarity by inspecting the names given to the software elements and comparing them using cosine similarity. The other measures the dependencies between objects in the program by calculating the shared method calls and shared field accesses of two objects and combining them into a single function. An overall objective for semantic similarity is derived from these measures by taking the average, and this is then used to help the NSGA-II algorithm find more meaningful solutions. These solutions were analyzed manually to derive the percentage of meaningful refactorings suggested. The results across two different open source programs were then compared against a previous mono-objective and a previous multi-objective approach and, while the number of defects resolved was moderately smaller, the proportion of meaningful refactorings increased.
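The Needleman-Wunsch scoring at the core of that fitness function is easy to sketch. The following is a minimal global-alignment scorer over token sequences, not the authors' implementation; the code fragments, tokenization and scoring parameters are illustrative assumptions.

```python
def needleman_wunsch(seq_a, seq_b, match=1, mismatch=-1, gap=-1):
    """Global alignment score between two token sequences, computed by
    dynamic programming over a (len_a+1) x (len_b+1) score table."""
    rows, cols = len(seq_a) + 1, len(seq_b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        score[i][0] = i * gap
    for j in range(1, cols):
        score[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if seq_a[i - 1] == seq_b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[-1][-1]

# Comparing two code fragments as token sequences (hypothetical input;
# a real implementation would use a proper lexer rather than split()).
frag_a = "int total = a + b ;".split()
frag_b = "int sum = a + b ;".split()
print(needleman_wunsch(frag_a, frag_b))  # higher score = more similar fragments
```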
Ouni et al. then explored the potential of using development refactoring history to aid in refactoring the current version of a software project. They used a multi-objective approach with NSGA-II to combine three separate objectives in proposing refactoring sequences to improve the product. Two of the objectives, improving design quality and semantics preservation, were taken from previous work. The third objective used a repository of previous refactorings to encourage the use of refactorings similar to those applied to the same code fragments in the past. The approach was tested on three open source Java projects and compared against a random search and a mono-objective approach. The multi-objective algorithm achieved better quality values and semantics preservation than the alternatives, although this approach did not apply the proposed refactorings to the code, leaving the refactoring sequences to be applied manually by the developer.
They further explored this approach by analyzing co-change, which identifies how often two objects in a project were refactored together at the same time, and by analyzing the number of changes applied to the objects in the past. They also explored the effect of using refactoring history on semantics preservation. Further experimentation on open source Java projects showed a slight improvement in quality values and semantics preservation with these additional considerations. Another study investigated the use of past refactorings borrowed from different software projects when the change history for the applicable project is not available or does not exist. The improvements made in these cases were as good as the improvements made when previous refactorings for the relevant project were available.

Wang et al. (2015) combined the previous defect removal approach with time series analysis in a multi-objective approach using NSGA-II. The time series was used to predict how many potential code smells would appear in future versions of the software with the selected solution applied. One objective was then measured by minimizing the number of code smells in the current version of the software and the estimated code smells in future versions. The other objective aimed to minimize the number of refactorings necessary to improve the software. The approach was tested on four open source Java programs and one industrial Java project. The programs were chosen based on the number of previous versions of the software available, as the success of the approach depends on this input. The experimental results were compared against previous mono-objective and multi-objective approaches and were found to achieve better results with fewer refactorings, although the approach took longer to run.
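The forecasting step in the Wang et al. approach can be illustrated with a toy stand-in. The study used time series models; the linear trend below is only a simple proxy, and the version numbers and smell counts are invented.

```python
import numpy as np

# Hypothetical code smell counts observed in successive released versions.
versions = np.arange(1, 8)
smell_counts = np.array([12, 14, 13, 17, 19, 22, 24])

# Fit a linear trend to past versions and extrapolate to future ones. An
# objective can then penalize both current smells and these estimates.
slope, intercept = np.polyfit(versions, smell_counts, deg=1)
for future in (8, 9, 10):
    print(f"version {future}: ~{slope * future + intercept:.0f} smells expected")
```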
Pérez, Murgia and Demeyer (2013) presented a short position paper proposing an approach to resolving design smells in software. They proposed using the version control repository to find previously effective refactorings in the code and apply them to the current design as "refactoring strategies". Refactoring strategies are defined as heuristic-based, automation-suitable specifications of complex behavior-preserving software transformations aimed at a certain goal, e.g. removing design smells. They described an approach to build a catalogue of executable refactoring strategies to handle design smells by combining refactorings that have been performed previously. The authors claimed that, on the basis of their previous work and other available tools, it would be a feasible approach.

Morales (2015) defined his aim to create an Eclipse plug-in to help with refactoring in a doctoral paper. He aimed to compare different metaheuristic approaches and use a metaheuristic search to detect anti-patterns in code. The plug-in would then use automated refactoring to help remove the anti-patterns and improve the design of the code. Morales later addressed this aim with the ReCon approach (Refactoring approach based on task Context). The approach leverages information about a developer's task, as well as one of three metaheuristics, to suggest a set of refactorings that affect only the entities of the project in the developer's context. The metaheuristics supported are SA, a GA and variable neighborhood search (VNS). To test the approach, it was applied to three open source Java programs with sufficient information to deduce developers' context. The approach was adapted to look for refactorings that can reduce four types of anti-pattern: lazy class, long parameter list, spaghetti code, and speculative generality. It also aimed to improve five of the quality attributes defined in the QMOOD model. The results showed that ReCon can successfully correct more than 50% of anti-patterns in a project using fewer resources than the traditional approaches from the literature. It can also achieve a significant quality improvement in terms of reusability, extendibility and, to some extent, flexibility, while effectiveness shows a negligible increase.
Mkaouer et al. experimented with combining quality measurement with robustness to yield refactored solutions that could withstand volatile software environments where the importance of code smells or areas of code may change. They used NSGA-II on six different open source Java programs of different sizes and domains to create a population of solutions that used robustness as well as software quality in the fitness measurement. To measure robustness, they used formulas to approximate smell severity (by prioritizing four different code smell types with scores between 0 and 1) and the importance of the code smells fixed (by measuring the activity of the modified code via the number of comments, relationships and methods), as well as measuring the number of fixed code smells. They also used a number of multi-objective performance measurements (hypervolume, inverse generational distance and contribution) to compare against other multi-objective algorithms. To analyze the effectiveness of the approach and the trade-offs involved in ensuring robustness, the NSGA-II approach was compared against a set of other techniques. For performance, it was compared to a multi-objective particle swarm algorithm (as well as a random search to establish a baseline), and was found to outperform it or show no significant difference in all but one project. It is suggested that since this was the smallest project, the particle swarm algorithm may be more suited to smaller, more restrictive projects. It was also compared to a mono-objective GA and two mono-objective approaches that use a weighted combination of metrics (the same ones used above). It was found that although the technique only outperformed the mono-objective approaches in 11% of the cases, it outperformed them on the robustness metrics in every case, showing that while it sacrificed some quality, the NSGA-II approach arrived at more robust solutions that would be more resilient in an unstable, realistic environment. This study was extended (Mkaouer et al., 2016) by testing eight open source systems and one industrial project, and by increasing the number of code smell types analyzed to seven.
They also experimented with the newly proposed evolutionary optimization method NSGA-III, which uses a GA to balance multiple objectives through non-dominated sorting. They used the algorithm to remove detected code smells in seven open source Java programs through a set of refactorings. They tested the algorithm with different numbers of objectives (3, 5, 8, 10 and 15) to measure the scalability of the approach to multi-objective and many-objective problem sets. These results were then compared against other EAs to see how they scaled compared to NSGA-III. The NSGA-III approach improved as the number of objectives was increased, whereas the other algorithms did not scale as well. Three other MOEAs were compared: IBEA, MOEA/D and NSGA-II. The other MOEAs were comparable when the number of objectives used in the search was smaller, but as the number of objectives increased, their results became less competitive with NSGA-III. The search technique was also compared against two other techniques that used a weighted sum of metrics to measure the software; these performed significantly worse than the NSGA-III approach. They extended the study by also experimenting on an industrial project and increasing the number of many-objective techniques compared against from two to four. The number of objectives was reduced to eight and changed to represent the quality attributes of the QMOOD suite as well as other aggregate metric functions.
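Non-dominated sorting, the ranking step shared by NSGA-II and NSGA-III, can be sketched in a few lines. This is a simple quadratic version rather than the optimized bookkeeping used in the published algorithms, and the two-objective values (remaining smells, refactoring effort) are invented.

```python
def non_dominated_sort(points):
    """Split objective vectors (all to be minimized) into successive Pareto fronts."""
    def dominates(p, q):
        return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Hypothetical solutions scored as (number of remaining smells, effort).
solutions = [(3, 10), (2, 14), (5, 6), (4, 9), (2, 20), (6, 5)]
for rank, front in enumerate(non_dominated_sort(solutions)):
    print(f"front {rank}:", [solutions[i] for i in front])
```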
Mkaouer et al. also looked at many-objective refactoring with the NSGA-III algorithm for remodularization. They used four open source Java systems in the experimentation along with one industrial system provided by Ford Motor Company. They compared the technique against other approaches by looking at up to seven objectives, using objectives from previous work to look at the semantic coherence of the code and the development history along with structural objectives. Again, the approach outperformed the other techniques, and more than 92% of code smells were fixed on each of the open source systems.
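Semantic-coherence objectives of this kind typically build on measures such as the vocabulary-based cosine similarity described earlier. A minimal sketch follows; the identifier vocabularies of the two classes are invented and would in practice come from splitting names in the parsed source.

```python
from collections import Counter
from math import sqrt

def cosine_similarity(tokens_a, tokens_b):
    """Vocabulary-based similarity over identifier term frequencies."""
    freq_a, freq_b = Counter(tokens_a), Counter(tokens_b)
    dot = sum(freq_a[t] * freq_b[t] for t in set(freq_a) & set(freq_b))
    norm_a = sqrt(sum(v * v for v in freq_a.values()))
    norm_b = sqrt(sum(v * v for v in freq_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical vocabularies of two classes considered for a Move Method.
print(cosine_similarity(
    ["order", "total", "price", "compute", "order"],
    ["order", "price", "discount", "compute"]))  # ~0.76: semantically close
```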
More recently, one study adapted the chemical reaction optimization (CRO) algorithm for search-based refactoring and explored the benefits of this approach. The search technique was compared against more standard optimization techniques used in SBSE: a GA, SA and particle swarm optimization (PSO). Four different prioritization measures were combined to make up a fitness function that aimed to reduce seven different types of code smells. The four measures were priority, severity, risk and importance. Each code smell type was given a priority score of 1 to 7 to represent the authors' opinion, from previous experience in the field, of which smells are more urgent. The inFusion tool (a design flaw detection tool) was used to deduce severity scores representing how critical a code smell is in comparison with others of the same type. The risk score was calculated to represent riskier code as code that deviated from well-designed code. Code smells related to more frequently changed classes were considered more important, as code smells in classes that had not undergone changes were considered to have been created intentionally. The approach was applied to five different open source Java programs and compared against a previous study and a variation of the approach without prioritization. On the relevant measures, the approach was superior to the two alternatives, and it was also shown to give better solutions in larger systems than the other optimization algorithms tested.

Amal et al. (2014) used an Artificial Neural Network (ANN) to help their approach choose between refactoring solutions. They applied a GA with a list of 11 possible refactorings to generate refactoring solutions consisting of lists of suggested refactorings to restructure the program design. They then utilized the opinions of 16 different software engineers, with programming experience ranging from 2 to 15 years, to manually evaluate the refactoring solutions generated for the first few iterations by marking each refactoring as good or bad. The ANN used these examples as a training set to develop a predictive model that evaluated the refactoring solutions for the remaining iterations; in effect, the ANN replaced the definition of a fitness function. The approach was tested on six open source programs and compared against existing mono-objective and multi-objective approaches, as well as a manual refactoring approach. The majority of the suggested refactorings were considered by the users to be feasible, efficient in terms of improving the quality of the design, and sensible. In comparison with the other mono-objective and multi-objective approaches, the refactoring suggestions gave similar scores but required less effort and fewer interactions with the designer to evaluate the solutions. The approach also outperformed the manual refactoring approach.
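The prioritized fitness in the CRO study combined the four measures per smell; a weighted score over the smells a candidate solution fixes is one plausible reading of it. The weights and scores below are invented, not those of the study, which derived severity from inFusion and importance from change frequency.

```python
def prioritized_fitness(smells, weights=(0.25, 0.25, 0.25, 0.25)):
    """Score a refactoring solution by the prioritization values of the
    code smells it fixes; all scores are assumed normalized to [0, 1]."""
    w_p, w_s, w_r, w_i = weights
    return sum(w_p * s["priority"] + w_s * s["severity"]
               + w_r * s["risk"] + w_i * s["importance"]
               for s in smells if s["fixed"])

# Hypothetical smells detected in a project, with fix status from a candidate.
smells = [
    {"priority": 7/7, "severity": 0.8, "risk": 0.6, "importance": 0.9, "fixed": True},
    {"priority": 3/7, "severity": 0.4, "risk": 0.2, "importance": 0.1, "fixed": False},
    {"priority": 5/7, "severity": 0.5, "risk": 0.7, "importance": 0.6, "fixed": True},
]
print(f"fitness = {prioritized_fitness(smells):.2f}")
```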
Refactoring tools
Fatiregun, Harman and Hierons (2004) explored program transformations by experimenting with a GA and HC approach and comparing the results against each other as well as a random search as a baseline. They used the FermaT transformation tool, and the 20 transformations (refactorings) available in the tool, to refactor the program and optimize its length by comparing lines of code before and after. The average fitness for the GA was shown to be consistently better than the random search and the HC search, while the HC technique was, for the most part, significantly better than the random search.
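Steepest-ascent hill climbing, one of the searches compared in these studies, is straightforward to sketch. The 'solutions' below are sequences of transformation ids, and the resulting program lengths come from an invented lookup table standing in for actually applying FermaT transformations.

```python
def steepest_ascent_hc(initial, neighbors, fitness, max_iters=1000):
    """Move to the best neighbor each step; stop at a local optimum."""
    current = initial
    for _ in range(max_iters):
        candidates = neighbors(current)
        best = max(candidates, key=fitness) if candidates else None
        if best is None or fitness(best) <= fitness(current):
            return current  # no neighbor improves on the current solution
        current = best
    return current

# Hypothetical lengths of the program after each transformation sequence.
length_after = {(): 100, (1,): 95, (2,): 97, (1, 2): 90, (2, 1): 92, (1, 2, 3): 91}
def neighbors(seq):  # extend the sequence by one known transformation
    return [s for s in length_after if len(s) == len(seq) + 1 and s[:len(seq)] == seq]
fitness = lambda seq: -length_after[seq]  # fewer resulting lines = fitter

print(steepest_ascent_hc((), neighbors, fitness))  # -> (1, 2)
```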
Di Penta (2005) proposed another refactoring framework, Evolution Doctor, to handle clones and unused objects, remove circular dependencies and reorganize source code files. A hybridization of HC and GAs was then used to reorganize libraries. The fitness function of the algorithm was created to balance four factors: the number of inter-library dependencies, the number of objects linked to each application, the size of the new libraries and the feedback given by developers. The framework was applied to three open source applications to demonstrate its effectiveness in each of the areas of design flaw detection and removal.
Griffith, Wahl and Izurieta (2011) introduced the TrueRefactor tool to find and remove a set of code smells from a program in order to increase comprehensibility. TrueRefactor can detect lazy classes, large classes, long methods, temporary fields or instances of shotgun surgery in Java programs and uses a GA to help remove them. The GA is utilized to search for the best sequence of refactorings that removes the highest number of code smells from the original source code. To detect code smells in a program, each source file is parsed and then used to create a control flow graph representing the structure of the software. This graph can be used to detect the code smells present: for each code smell type, a set of metrics is used to deduce whether a section of the code is an instance of that type. The tool contains a set of 12 refactorings (at class level, method level or field level) that are used to remove the code smells. A set of preconditions and postconditions is generated for each code smell to ensure beforehand that it can be resolved. The paper used an example program with inserted code smells to analyze the effectiveness of the tool. The number of code smells of each type over the set of iterations was measured, along with a set of quality metrics. In both cases, the values improved initially before staying relatively stable throughout the process. Comparison of initial and final code smells showed that the tool removes a proportion of them, and the metric values show that the surrogate metrics improved. The tool is only able to generate improved UML representations of the code and not refactor the source code itself, and removing this restriction was identified as an aim for future work.
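The metric-threshold style of detection used by tools like TrueRefactor can be illustrated with a small sketch. The per-class metrics and thresholds below are invented, not TrueRefactor's actual rules, and a real detector would compute the metrics from the parsed control flow graph.

```python
# Hypothetical per-class metrics extracted from a parsed program.
classes = [
    {"name": "ReportBuilder", "methods": 41, "loc": 1450, "incoming_calls": 3},
    {"name": "IdHolder", "methods": 2, "loc": 20, "incoming_calls": 0},
    {"name": "Invoice", "methods": 12, "loc": 300, "incoming_calls": 9},
]

def detect_smells(cls):
    """Flag large and lazy classes using illustrative threshold rules."""
    smells = []
    if cls["methods"] > 30 or cls["loc"] > 1000:
        smells.append("large class")
    if cls["methods"] <= 2 and cls["incoming_calls"] == 0:
        smells.append("lazy class")
    return smells

for cls in classes:
    print(cls["name"], "->", detect_smells(cls) or ["no smells detected"])
```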
Analysis
To address the research questions outlined in Section 3, each subsection analyzes the relevant aspect of the papers. If we compare the popularity of search-based refactoring research before 2010 with after, we can see that the average has increased from one paper a year to six. The earliest refactoring tool proposed among the papers was DPT (Cinnéide & Nixon, 1999a), in 1999. Figure 2 shows the number of papers that use each type of search technique per year. Figure 3 gives the different types of paper analyzed in the literature. All but one of the papers were published in journals or featured in conferences. The other (Koc et al., 2012) was included as a book section. Of the journal and conference papers, the majority are from conferences. Table 3 gives the list of conferences that at least two of the analyzed papers are from. GECCO has seven papers, including Harman and Tratt (2007), O'Keeffe and Cinnéide (2007a), Seng et al. (2006) and Vivanco and Pizzi (2004). This conference, which is concerned primarily with EAs, contains more papers than the conferences related to maintenance (CSMR and ICSM among them). Figure 4 gives the number of papers each author has published. The majority of authors have only published one paper. Of the remaining authors, six have published two papers. Only 11 of the 70 authors have published more than two papers. Table 5 lists these authors and gives the number of papers published for each. Mel Ó Cinnéide has published more papers than the other authors, at 21. Only the top two authors have published at least 15 of the analyzed papers each.
Types of study
Most of the studies in the papers were quantitative: 37 were quantitative, compared with three qualitative studies. A further 10 were discussion-based papers with no experimental portions. Of the quantitative papers, most tested different refactoring approaches, but a number of papers examined other aspects, such as the setup of the search process itself or the metrics used for maintenance.
Refactoring approaches
In many cases, the studies proposed and used tools to detect issues in the code, and some of these tools (Di Penta, 2005; Griffith et al., 2011) were used to find specific issues, like god classes or data classes, in the program. The studies are listed in Table 6. One of the studies also resolved the issues via refactoring, but used a different method to determine the steps needed to resolve them. Two UML models were generated, one to represent the current solution and one to represent the desired solution, created with the assistance of the programmer. Using these two models, the refactorings needed to improve the program were then calculated and could be applied. In this case the technique was concerned less with code smells detected in the software and more with the desired structure of the solution in the eyes of the programmers themselves.

This seems to isolate three main methods of automated maintenance from the analyzed literature. There is the above method of working towards a desired structure. There is the method where problems are first detected in the code and then either refactoring options are generated in order to be applied manually (Kessentini et al., 2012; Ouni et al., 2012; Wang et al., 2015; Mkaouer et al., 2016; Griffith et al., 2011) or the problems are addressed automatically (Di Penta, 2005). Finally, there is the method where the software is refactored up front to improve software metrics (O'Keeffe & Cinnéide, 2008b; Koc et al., 2012; Mohan et al., 2016; O'Keeffe & Cinnéide, 2007b; Ghaith & Cinnéide, 2012; Cinnéide et al., 2012; Cinnéide et al., 2016; Veerappa & Harrison, 2013; Fatiregun et al., 2004) or, again, where this approach is used to suggest refactorings to apply (Harman & Tratt, 2007; Seng et al., 2006). The studies experimenting with CRO and VNS are recent as well (2015 and 2016), suggesting a possibility for the CRO and VNS searches to be explored more in future research.

Figure 6 gives the dispersion of the EAs between studies that use GAs, GPs and GEAs. The majority of the EAs used are GAs, at 24 studies. The GP algorithm and the GEA were only used in three studies and two studies respectively. Three of the studies involved both GP and a GA. A big reason for the prominence of the GA among the studies is that it is present across the studies of Kessentini et al. (2012), Mkaouer et al. (2016) and Ouni et al. (2012). This group of authors regularly used the GA in their papers, amounting to 14 different instances among the 25. Of the 26 studies containing EAs, 12 used MOEAs. Figure 7 shows the dispersion of the SOAs between studies using PSO and ABC. ABC was used in one study (Koc et al., 2012), but PSO was used more frequently, in three studies (including Mkaouer et al., 2016).

Among the studies using search techniques, some were used to test or present an approach proposed in the paper, and others compared different search techniques against each other. As such, different papers contained different numbers of search techniques. Figure 8 shows the number of papers that contained a certain number of search techniques, ranging from none to a maximum of four different techniques in one paper. Eleven of the papers did not contain any search techniques. The majority used only one, at 24 papers, although 15 other papers that did use search techniques used more than one.
Of these, 12 papers (O'Keeffe & Cinnéide, 2006; O'Keeffe & Cinnéide, 2008a; O'Keeffe & Cinnéide, 2007a; O'Keeffe & Cinnéide, 2008b; Koc et al., 2012; Mohan et al., 2016; Ghaith & Cinnéide, 2012; Mkaouer et al., 2016; Fatiregun et al., 2004, among others) directly compared the different search techniques against each other to speculate on the most applicable. Only one study looked at four separate search techniques. Four of the papers (O'Keeffe & Cinnéide, 2006; O'Keeffe & Cinnéide, 2007a; O'Keeffe & Cinnéide, 2008b; Koc et al., 2012) focused mainly on comparing search techniques. These studies compared HC with GAs, HC with SA or all three with each other. One (Koc et al., 2012) also involved ABC by comparing it with HC and SA. In the studies, HC seemed to outperform the other techniques. Although it had the possibility of being trapped in local optima, the technique gave consistent results and was faster than other techniques that would take time to gain traction. SA and GAs could give high quality results in certain cases but, for both techniques, the results depend highly upon the configuration of the search beforehand. Therefore, it seems evident that while these options may be useful, there will be overhead involved in finding the parameters that yield high quality results for the problem in question. In the study that included the ABC search, it (along with the multiple steepest descent HC) did seem to be more successful. However, as this is the only study to use the ABC technique, further insight into the technique cannot be derived from the literature. Table 10 in the appendix gives the search techniques, if any, used within each paper.

Figure 9 shows the different types of program used to test the approaches examined among the studies. Most of the programs used are open source. The remaining programs consist of test programs developed for the study (O'Keeffe & Cinnéide, 2003; O'Keeffe & Cinnéide, 2004; Simons et al., 2015; Fatiregun et al., 2004; Griffith et al., 2011), in-house programs (Koc et al., 2012; Vivanco & Pizzi, 2004) and industrial programs. Four studies (including Wang et al., 2015 and Mkaouer et al., 2016) used an industrial program by the Ford Motor Company, referred to as JDI-Ford. As the vast majority of the frameworks used dealt with Java code, the open source programs used are in Java. Table 7 lists the open source programs used among the papers, along with references to the studies that used them. The project sizes are generally adequate for the experiments as they are large enough to justify representation of a real project, generally tens of thousands of lines of code with hundreds of classes. In the case of the test programs, small programs are used or constructed to demonstrate the applicability of the program or technique with an example. One study (O'Keeffe & Cinnéide, 2007a) deliberately kept the programs smaller than 51 top-level classes. In contrast, another study (Di Penta, 2005) used a large system with over one million lines of code.
Tools
There were a number of tools proposed in the literature (Cinnéide & Nixon, 1999a; O'Keeffe & Cinnéide, 2003; Koc et al., 2012; Fatiregun et al., 2004; Di Penta, 2005; Griffith et al., 2011), using various different approaches to refactoring code. They are listed in Table 8. While most of the seven tools were developed for Java code, FermaT (Fatiregun et al., 2004) is used to provide more low-level changes with wide-spectrum language (WSL) transformations. For one of the tools, TrueRefactor, plans were expressed to adapt the tool to multiple different programming languages (Griffith et al., 2011). A number of the proposed tools identified design defects first before attempting to resolve them. DPT (Cinnéide & Nixon, 1999a) was proposed to apply design patterns to the code in an automated manner. It uses mini-transformations built from refactorings to apply the patterns. Similarly, Evolution Doctor (Di Penta, 2005) is used to diagnose issues in the software first, before restructuring it to ameliorate those issues. Likewise, TrueRefactor (Griffith et al., 2011) finds instances of five different types of code smells before finding refactorings to resolve them. Other tools (O'Keeffe & Cinnéide, 2003; Koc et al., 2012; Fatiregun et al., 2004) use refactorings to improve the code according to metric functions. Instead of analyzing the code for issues beforehand, they refactor the code up front in order to resolve issues as they go along. A few of the proposed tools were used in multiple papers. The A-CMA tool was proposed by Koc et al. (2012) and then adapted and used by Mohan, Greer and McMullan (Mohan et al., 2016). The CODe-Imp tool was used in a myriad of studies (O'Keeffe & Cinnéide, 2006; O'Keeffe & Cinnéide, 2008a; O'Keeffe & Cinnéide, 2007a; O'Keeffe & Cinnéide, 2008b; O'Keeffe & Cinnéide, 2007b; Ghaith & Cinnéide, 2012; Veerappa & Harrison, 2013). A precursor to CODe-Imp, DPT, was also present in three different papers (Cinnéide & Nixon, 1999a; Cinnéide & Nixon, 1999b; Cinnéide, 2000).
Research gaps and opportunities
Although significant work has been done to test various aspects of search-based maintenance, there are numerous areas in which ongoing research is important in order to uncover further innovations in the field. A major component of search-based maintenance and SBSE as a whole is the metrics used to measure the quality of a program. Due to the highly subjective nature of the quality of a software system, the metrics can have a huge impact on the usefulness of the metaheuristic optimization technique, depending on how accurately they portray quality in the eyes of the user. We need explicit metrics to guide the optimization of a solution, but one developer's view of quality may be different to another's. Similarly, a programmer's opinion of quality may change from project to project or over time. It would be useful to have some form of explicit guidance on how to choose metrics for a search-based optimization technique. What is the general view of software quality? How is this affected from one programming language to another? Most previous research has been applied to object-oriented programs and as such most fitness functions aim to improve object-oriented behaviors like cohesion or flexibility. Even defining these aspects has proven to be difficult. The field would gain valuable insight with research into developer opinions on software quality and on how technical debt is currently handled in the business environment. Experimentation has been done to combine different software metrics together to create more useful measures of quality, typically using either weighted sums or Pareto fronts. There has also been some research into the applicability of certain metrics and into how metrics that aim to measure similar aspects differ from each other. There is an opportunity for research into using different combinations to improve the software in different ways, similarly to how a human-assisted tool can guide the improvement of the software design to a suitable solution for the user.

Table 8 (fragment). Tool | Year | Purpose:
A-CMA (Koc et al., 2012) | 2012 | Refactors Java bytecode using a selection of refactorings and metrics.
CODe-Imp | 2011 | Automated refactoring tool containing numerous metrics and refactorings.
FermaT (Fatiregun et al., 2004) | 2004 | Transformation tool for migration of legacy systems from assembly code to higher level languages.
TrueRefactor (Griffith et al., 2011) | 2011 | Identifies and removes five different design smells in Java.

Table 7 (fragment). Open source programs used in the studies include Nutch (Mkaouer et al., 2016), PDE, Pixelitor (Wang et al., 2015), Platform, Samba (Di Penta, 2005), Spec-Raytrace (O'Keeffe & Cinnéide, 2006) and Wife (Ghaith & Cinnéide, 2012).
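The weighted-sum style of metric combination discussed above can be written down directly, which also makes its subjectivity concrete: changing the weights changes the notion of quality being optimized. The metric names and weights here are illustrative assumptions, not the published QMOOD coefficients.

```python
def quality_score(metrics, weights):
    """Weighted sum of normalized design metrics, QMOOD-style."""
    return sum(weights[name] * value for name, value in metrics.items())

# Hypothetical normalized metric values for one candidate design.
metrics = {"cohesion": 0.7, "coupling": 0.3, "abstraction": 0.5, "size": 0.4}
# One developer's view of quality; another developer might weight differently.
weights = {"cohesion": 0.4, "coupling": -0.3, "abstraction": 0.2, "size": -0.1}
print(f"quality = {quality_score(metrics, weights):.2f}")
```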
Another important aspect to research is the applicability of these techniques in a company environment. There are various things to consider here. Is the refactored code actually useful for a maintenance programmer? There is a risk of the code being modified so much that it becomes incomprehensible. It has been suggested that the automated maintenance of code could be viewed in a similar way to a compiler that makes changes to the code behind the scenes that a programmer need not worry about. This further abstraction of the code may be the future of software design, as metamodels become more involved in the coding aspect of the project development cycle. In the analyzed literature, experimentation with industrial code has been lacking. An increase in the use of industrial code and in the opinions and expertise of experienced software developers may help to simulate the company environment and uncover possible issues to address. In addition, the majority of studies have been concerned with the Java programming language. As this does not accurately reflect the range of programming languages used in the software environment, increased support for other programming languages is desirable.
Of the different search techniques used to address software maintenance, a large proportion of the analyzed literature used EAs. Among these studies, a lot of recent work has looked at multi-objective approaches. Multi-objective algorithms have been applied sparsely to SBSE problems (Simons & Parmee, 2006; Yoo & Harman, 2007; Zhang et al., 2007; Simons & Parmee, 2007; Finkelstein et al., 2008; Simons & Parmee, 2008; Wang et al., 2008; Durillo et al., 2009; Maia et al., 2009; Maia et al., 2010; Bowman et al., 2010; Durillo et al., 2011; Brasil et al., 2011; Colanzi et al., 2011; Assunção et al., 2011; Yoo et al., 2013) and only recently have been used to address issues relating to maintenance. This indicates a promising evolution of automated maintenance in SBSE to generate more sophisticated solutions to the problem area. The methods address the issue by allowing multiple aspects to be taken into consideration. Further inspection of these techniques is required to discover the potential of their use and derive ways to make the approach more practical for use in a software development environment.
The survey aim was outlined in Section 3 when it was asked "To what extent has search-based refactoring been studied in software maintenance?" In order to address this, 10 research questions were proposed. RQ1 asked "How many papers were published per year?" From 1999 to 2016, there is an average of three papers published per year, with a notable increase in the number being published after a drought in 2009 and 2010. Papers involving the use of EAs also increased notably after 2010, along with studies of more obscure search techniques (ABC, CRO, VNS). The largest number of papers published in one year is eight, in 2012.
RQ2 asked "What are the most common methods of publication for the papers?" The majority of papers were published in conferences, with the evolutionary computation conference GECCO publishing the most papers at seven. Only one paper was published as a book section and 13 were published in journals, in comparison to the 36 papers (72% of those analyzed) that were published in conferences. The Empirical Software Engineering journal and the Journal of Systems and Software published the largest number of journal papers, at four each. RQ3 asked "Who are the most prolific authors investigating search-based refactoring in software maintenance?" 76% of the authors of the papers analyzed had only been involved in one of the published papers, but 17 were authors in at least two. Eleven authors published more than two papers each. Of these authors, Mel Ó Cinnéide and Marouane Kessentini authored 33 of the papers between them, and worked together on four of them. RQ4 was interested in the types of papers analyzed, asking "What types of studies were used in the papers?" The majority of the papers (37) had quantitative studies, with three papers being qualitative studies and 10 being discussion papers with no experimental portions.
Most of the studies investigated refactoring approaches, but three of the quantitative studies examined other aspects, such as the setup of the search process itself or the metrics used for maintenance. RQ5 asked "What refactoring approaches were used in the literature?" We isolated three main methods for using search-based refactoring to improve the maintenance of software. Design defects can be found in the code first and then removed, or the software can be refactored up front to improve certain software metrics or sets of metrics. The less common approach, used in only one study, refactored software towards a previously generated ideal software design using UML models. Other papers also investigated the optimization process itself or tested the metrics available to measure maintenance. RQ6 was interested in "What search techniques were used in the refactoring studies?" The majority of the studies involved EAs of some sort (mostly GAs). HC and SA were also popular, being used in 14 studies and 11 studies respectively. Other search techniques were used less frequently. PSO was used in three studies while ABC was experimented with in one study. Studies also experimented with CRO and VNS in 2015 and 2016. RQ8 asked "What tools were used for refactoring?" There were seven different refactoring tools proposed and used among the studies. Most of them worked with Java code. Half of the tools (Cinnéide & Nixon, 1999a; Di Penta, 2005; Griffith et al., 2011) used the design defect approach to refactoring the code and the other half (O'Keeffe & Cinnéide, 2003; Koc et al., 2012; Fatiregun et al., 2004) used the approach that modified code based on a software quality measure. Of the tools, A-CMA was used in two papers, and CODe-Imp was used in 12.
RQ9 asked "What types of metrics were used in the studies?" Most studies have used metrics that examine object-oriented behaviors, cohesion and coupling in particular. Some studies also used metrics to represent the class structure of a program or for inheritance-based observations. A number of the studies used metrics from the QMOOD metric suite (Mohan et al., 2016; O'Keeffe & Cinnéide, 2007b) or the quality attributes constructed by Bansiya and Davis (2002) using their QMOOD suite (O'Keeffe & Cinnéide, 2006; O'Keeffe & Cinnéide, 2008a; O'Keeffe & Cinnéide, 2007a; O'Keeffe & Cinnéide, 2008b; Simons et al., 2015; Wang et al., 2015). Finally, RQ10 asked "What are the gaps in the literature and available research opportunities in the area?" Analysis of the literature and measurement of features of the research has yielded various observations of the developments in the area and isolated a few aspects wherein there is an opportunity for future work and experimentation. Search-based refactoring to automate software maintenance has been shown to work for experimental examples, and various tools have been developed to tackle the maintenance issue using automated means, but more work needs to be done on empirical examples. It needs to be evaluated whether the search-based refactoring techniques that have been developed can carry over to the business environment or whether real-world application scenarios will bring further issues to light. The metrics used to test the experimental approaches and aid with refactoring could also be further examined: there have been a few studies investigating the metrics used, but as they can be subjective, further inspection is necessary. Likewise, recent work has experimented with the use of MOEAs for refactoring, and this is an exciting area for further research.
Related work
There have already been a number of literature reviews related to the field of SBSE as well as to various aspects of refactoring. Table 9 lists the other literature reviews related to SBSE or refactoring.
Harman, Mansouri and Zhang wrote a general review of SBSE in 2009 (Harman et al., 2009) before updating the review in 2012. Another review of the area of SBSE was also conducted. These reviews give an overview of the different areas of SBSE and discuss research done in those areas up to that point in time. The literature review conducted in this survey stands out from them as it focuses on SBSE in relation to maintenance and, in particular, refactoring. Räihä wrote a report (Räihä, 2009) in 2009 that was later released in a journal (Räihä, 2010) in 2010 and that focused on the areas of architecture design, software clustering, software refactoring and software quality. Although this review covers similar areas to maintenance in SBSE and looks at refactoring, it is still considerably more general than the current survey.
A few recent reviews were more focused and looked at various aspects of refactoring. Misbhauddin and Alshayeb looked at UML model refactoring in 2015 (Misbhauddin & Alshayeb, 2015), concluding that UML model refactoring is a highly active area of research. They noted that, while a number of approaches have been proposed in the area, there are still some important issues and limitations to be addressed. Al Dallal also reviewed studies that identified refactoring opportunities in 2015 (Al Dallal, 2015). They too concluded that the analyzed area is a highly active area of research. They found that many of the studies used open source systems and that they were limited in their size, recommending that further studies involve industrial systems and systems of greater size to improve the generality of the results. They also encouraged researchers to expand the coverage of their work to include more refactoring activities. Mariani and Vergilio gave a review of search-based refactoring in 2017 (Mariani & Vergilio, 2017). They found that most of the search-based refactoring approaches were more recent and that many of the studies used EAs. They also found that the most commonly used refactorings were part of Fowler's (Fowler, 1999) refactoring catalogue and that the most common type of solution produced in the studies is a sequence of refactorings to apply to the system.
Although the reviews of Misbhauddin and Alshayeb and of Al Dallal looked at aspects of refactoring, unlike this survey they did not focus on the use of search-based refactoring applied to software maintenance. They looked at aspects of refactoring in software engineering that were more abstract. Misbhauddin and Alshayeb investigated the use of refactoring to modify UML models, as an aspect of model-driven engineering; this survey looks more directly at refactoring to improve aspects of the software itself. Al Dallal investigated studies that identify opportunities for refactoring in object-oriented code. Again, this is less specific than the investigation in this paper, which analyzes papers that provide actual refactoring solutions in software.
On the other hand, Mariani and Vergilio did look at search-based refactoring. However, their study focuses only on the analysis of the studies, whereas this paper gives a detailed review of the studies being analyzed. Thus, this survey can work as an introduction to the area of SBSE relating to maintenance for researchers aiming to work in the area, by giving an overview of the research actually conducted as well as analyzing this research and drawing conclusions to derive gaps and possibilities for future work. Not only is the review of the literature itself more in depth, but the analysis also investigates other aspects of the literature: there is a more in-depth investigation of the tools used for refactoring in the studies and an examination of how metrics have been tested and discussed in the literature. This survey also investigates how the search techniques used in the literature have changed over time. Similar aspects of the literature that have been investigated in both papers are compared below in order to examine whether related trends have been isolated across the two sets of analyzed papers.
Comparison
The review by Mariani and Vergilio (Mariani & Vergilio, 2017) analyzed some similar aspects of the literature, finding related trends. They investigated the search techniques used in the papers they examined and observed that EAs and, in particular, the GA were the most commonly used algorithms in the studies. They observed HC in 16 studies and SA in 10, and observed ABC and CRO in a small subset of studies (two and one respectively). These trends mirror this paper, with EAs being observed most commonly in both papers, HC and SA being observed in a similar number of studies, and other search techniques (e.g. ABC and CRO) being seen less commonly.
Another aspect investigated in their review is the systems used in the experiments to validate the approaches proposed in the papers. Again, the observations they made were similar. Many of the systems noted were open source Java programs. Four of the papers they analyzed used the JDI-Ford program, and many of the most commonly used Java programs analyzed in this paper were also analyzed in their review (for example, Xerces-J, GanttProject, JHotDraw and JFreeChart were the most commonly used open source programs among the studies in both papers). They listed a smaller number of open source Java programs tested, at 14, whereas in this paper there are 40 different programs used to test the approaches in the literature. They also investigated the tools used for refactoring and found that the CODe-Imp tool was used in nine of their studies, similarly to the 12 papers analyzed in this paper that used the tool.
They examined the distribution of the studies analyzed per year, between 2005 and 2016. Multi-objective and many-objective approaches were observed from 2014, and more papers were observed in general from 2010 onwards. Before 2010, there were at most four papers per year, whereas there were at least six papers analyzed per year afterwards. Again, these trends are mirrored across their paper and this one. The majority of the papers analyzed in their review were from conferences, with the remainder published in journals (nine papers) or from workshops (two papers). Lastly, they investigated the most prominent researchers in the studies they analyzed, with each of the seven authors they listed publishing more than five papers. Six of the seven authors are also listed among the seven most prominent authors of the studies analyzed in this paper, with the top two being noted in their paper and this one as authors that stand out due to publishing at least 15 of the analyzed studies each. The one author among the top seven in their survey who did not appear in this paper was Camelia Chisalita-Cretu, who had published six of the papers they analyzed.
Threats to validity
There are a number of elements of how the literature search was conducted that may pose threats to its validity. The methods used to address the aim of the survey, "To what extent has search-based refactoring been studied in software maintenance?", may provide a validity threat to the conclusions made. In this survey, we split the aim into a set of 10 research questions, each of which investigates some element of the papers analyzed. Each research question is explored individually within the analysis and answered separately in the discussion section. The papers captured in the search can also be affected by a number of attributes related to how the search was conducted. First, the search repository or repositories used to find the papers may provide different results and could prevent the identification of certain papers. To minimize this issue, we used five popular search repositories to look for papers related to the areas of focus. We also used a snowballing approach to find further related papers by investigating references in the papers and conducting similar searches.
Another element that could affect the papers returned is the search string used in the search. The search string used in this survey was constructed to reduce the returned results from hundreds of thousands of hits to something more feasible to sift through. As such, some relevant papers may have been filtered out of the search. This is somewhat mitigated by the snowballing approach, so that other potentially related papers will likely be found regardless. The time period used to filter the results can prove a threat to the validity of the search, but the search used for this survey was conducted up to September 2016, so many of the more recent results will not be filtered out. Finally, the process used to search through and pick out relevant papers from the search results could affect the analysis, depending on which papers are chosen for investigation. To maximize the repeatability of the process and minimize the validity threat, a set of inclusion and exclusion criteria was used to aid in picking out the relevant papers for the survey.
Conclusion
This survey investigates the question "To what extent has search-based refactoring been studied in software maintenance?" and introduces a set of research questions to help address it. Five different search repositories are used, with the aid of a set of inclusion and exclusion criteria, to find 50 different papers that use search-based refactoring for purposes of software maintenance. Before the papers are examined, the most common search techniques used in the studies are briefly discussed and described. Each of the papers is examined and its research output is discussed. Then, different aspects of the set of papers are analyzed according to the research questions outlined. The research questions are addressed using each relevant aspect of the analysis. Related reviews are also discussed and compared with the survey, describing the similarities and differences between them.
This survey is a valuable resource for researchers planning to investigate the use of search-based refactoring techniques for software maintenance and gives an overview of the field as well as the relevant work in the area. The analysis made within the paper allows readers to be aware of how the research has progressed and addresses the aim of finding the extent to which search-based refactoring has been studied in software maintenance. The identified gaps and recommended areas for future work allow researchers to investigate other aspects of the research area. Work in these areas could contribute towards progression in the use of search-based refactoring for software maintenance and could aid in making the approach more feasible for use in the development of software products, therefore saving time and developer resources. | 2018-02-16T23:04:49.890Z | 2018-02-07T00:00:00.000 | {
"year": 2018,
"sha1": "927e6b52da64caf6effb71ff0e781db305aa292a",
"oa_license": "CCBY",
"oa_url": "https://jserd.springeropen.com/track/pdf/10.1186/s40411-018-0046-4",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "6189b893e028670680e5586cc5dbc4e529fb0a27",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
233246588 | pes2o/s2orc | v3-fos-license | Improved data sets and evaluation methods for the automatic prediction of DNA-binding proteins
Motivation: Accurate automatic annotation of protein function relies on both innovative models and robust datasets. Due to their importance in biological processes, the identification of DNA-binding proteins directly from protein sequence has been the focus of many studies. However, the data sets used to train and evaluate these methods have suffered from substantial flaws. We describe some of the weaknesses of the data sets used in previous DNA-binding protein literature and provide several new data sets addressing these problems. We suggest new evaluative benchmark tasks that more realistically assess real-world performance for protein annotation models. We propose a simple new model for the prediction of DNA-binding proteins and compare its performance on the improved data sets to two previously published models. Additionally, we provide extensive tests showing how the best models predict across taxonomies.

Results: Our new gradient boosting model, which uses features derived from a published protein language model, outperforms the earlier models. Perhaps surprisingly, so does a baseline nearest neighbor model using BLAST percent identity. We evaluate the sensitivity of these models to perturbations of DNA-binding regions and control regions of protein sequences. The successful data-driven models learn to focus on DNA-binding regions. When predicting across taxonomies, the best models are highly accurate across species in the same kingdom and can provide some information when predicting across kingdoms.

Code and Data Availability: All the code and data for this paper can be found at https://github.com/AZaitzeff/tools_for_dna_binding_proteins.

Contact: alexander.zaitzeff@twosixtech.com
Introduction
There are over 195 million known natural protein sequences that have not been experimentally annotated for function (Consortium, 2018). Our ability to sequence genomes and identify proteins outstrips our ability to experimentally identify their functions. As such, there is a need for tools to predict protein function directly from sequence. In this paper, we focus on predicting whether a protein binds to DNA.
DNA-binding proteins play important roles in cellular processes including chromosomal DNA organization, initiation and regulation of transcription, DNA replication, DNA recombination, and DNA modification (Jones et al., 1987;Jen and Travers, 2013). Accurate identification of DNA-binding proteins has implications in both systems and synthetic biology by, for example, reducing the hypothesis space of potential control elements in gene regulatory networks, or by expanding the collection of known binders that could elucidate principles of design and thereby improve de novo design of DNA binders. Typically gene function is inferred from a combination of experimental, phylogenetic, and/or computational evidence (Giglio et al., 2019). While experimental evidence may be the most reliable, it is also the most costly and far slower than genome sequencing. Thus, computational methods that infer annotations directly from sequence, and thereby sub-select and prioritize specific targets for experimental validation, could increase our efficacy in identifying DNA-binding elements in novel organisms and designing new binders.
Recently there has been an explosion of models to predict DNA-binding proteins from sequence-based properties (Kumar et al., 2007; Lou et al., 2014; Xu et al., 2014; Liu et al., 2014, 2015; Motion et al., 2015; Liu et al., 2016; Waris et al., 2016; Wei et al., 2017; Qu et al., 2017; Zaman et al., 2017; Chowdhury et al., 2017; Zhang and Liu, 2017; Rahman et al., 2018; Du et al., 2019; Adilina et al., 2019; Ali et al., 2019; Mishra et al., 2019; Hu et al., 2019; Wang et al., 2020). Thus a fundamental question is how to assess the relative accuracy of proposed models predicting DNA-binding proteins. One reasonable method is to take experimentally labeled proteins, divide the sequences into a training and test set, and compare all the models on this train/test task. In current literature, the most common train/test task used to compare models is to train on the set of protein sequences in the "PDB1075" data set by (Liu et al., 2014, 2015) (hereafter referred to as the "benchmark training set") and predict on the "186PDB" data set (Lou et al., 2014) (hereafter called the "benchmark test set"). The benchmark training set and benchmark test set have been used extensively (Liu et al., 2016; Wei et al., 2017; Zaman et al., 2017; Chowdhury et al., 2017; Zhang and Liu, 2017; Wang et al., 2017; Rahman et al., 2018; Hu et al., 2019; Du et al., 2019; Adilina et al., 2019; Ali et al., 2019; Mishra et al., 2019; Wang et al., 2020), with each new model showing improvements on the task. Unfortunately, there are flaws with this train/test task:
• 75 of the 93 (80.6%) DNA-binding protein sequences in the benchmark test set are also in the benchmark training set. While some papers (Liu et al., 2014, 2015; Zhang and Liu, 2017) remove sequences from the benchmark training set that are above a certain percent identity to those in the benchmark test set, it is unclear (and unlikely) that all papers do so. When a protein is present in both the training and the test set, models are rewarded for memorizing those proteins (often over-fitting) rather than learning general principles of DNA binding. Thus, the reported results may not accurately reflect the expected future performance if and when these models are applied to novel sequences not contained in the training or test sets.
• The benchmark training set contains duplicate entries for identical monomers making up homodimers and homotrimers: entries 2AY0A, 2AY0B, and 2AY0C are the same, as are 3FDQB and 3FDQA, as well as 4GNXL and 4GNXK. While this does not invalidate predictions on the benchmark test set, most of the mentioned papers (Liu et al., 2014, 2015; Ma et al., 2016; Liu et al., 2016; Wei et al., 2017; Zaman et al., 2017; Chowdhury et al., 2017; Zhang and Liu, 2017; Wang et al., 2017; Rahman et al., 2018; Adilina et al., 2019; Ali et al., 2019; Mishra et al., 2019; Wang et al., 2020) report the results of jackknife testing on the training set, where they hold out each sample in turn for testing after training on the rest of the samples in the benchmark training set. This results in the duplicated proteins being both memorizable and over-represented in the evaluation of the model.
• Some entries in the benchmark training set are not protein sequences but DNA sequences: specifically entries 4GNXL, 4GNXK, 1AOII, 3THWD, 4FCYC, 4JJNJ, and 4JJNI. These sequences were rejected from the PDB due to being DNA sequences rather than amino acid sequences, but they remain in the benchmark training set and thus might be treated as DNA-binding proteins.
Another important consideration is that the benchmark training set is very small compared to the amount of available data. Machine learning models generally perform better, and exhibit greater generalizability, when trained on more data. The train/test task should mimic situations in which users would employ the models; having an unrealistically small benchmark training set makes it unclear whether benchmark model evaluations apply to actual use.
Other studies have used different data sets to predict DNA-binding proteins, but they too are either small (fewer than 200 DNA binders in the training set (Waris et al., 2016;Xu et al., 2014)), suffer from overlap between their training and testing set (Qu et al., 2017;Hu et al., 2019), or are unavailable for download (Peled et al., 2016). The authors of (Qu et al., 2017) acknowledge the overlap and create a training set they claim has no overlap with the testing set, but they do not provide the no-overlap training set publicly.
One contribution of the work herein is the assembly and partition of larger, more reliable training and test sets. We also provide code and instructions for generation of similar data sets to address other annotation problems. These sets contain no overlap between the train and test sets, have no duplication of sequences within sets, contain only amino acid sequences, and are at least 10 times larger than the benchmark training and test set. These sets were split at random, to mimic the generation of train/test splits in the aforementioned papers. This evaluation task approximates a scenario in which many DNA binders are known, and models are tasked with identifying binders that previous efforts have missed. Using these larger training and test sets, we show that two models substantially outperform recently published results: an off-the-shelf gradient boosting model using features from a sequence embedding by a protein language model (Rives et al., 2019), and a simple nearest neighbor model using BLAST percent identity.
A fundamental question of interest to biologists is whether a model could identify DNA binding proteins in a newly sequenced species with no functional annotations. To approximate this use case with an evaluative task, we ask how well a model can predict DNA binding for proteins of a single species held out from the training data. We evaluate the gradient boosting and nearest neighbor models (the best performing in the previous random train/test task) in a variety of such settings. The results show that the gradient boosting model using the sequence embedding features gives comparable results to the nearest neighbor model. Because of the untapped potential for refinement and tuning of this model, it may be a good starting point for future work in protein function prediction.
Our analysis reveals some of the weaknesses in the current body of work for predicting DNA-binding proteins. We propose several improvements to data handling and model evaluation for the prediction of DNA binding and for protein function prediction in general.
Data sets
We propose several new data sets for DNA binding prediction tasks that address the limitations of the existing benchmark data sets. Our new data sets are large, cleaned to ensure data quality, divided into non-overlapping training and test sets, and available at the provided GitHub account. Additionally, we provide tools and instructions for making machine learning ready data sets for protein function prediction more generally.
The data are taken from UniProt (Consortium, 2018). We take all bacterial and eukaryotic proteins in the Swiss-Prot subset with a sequence length between 40 and 5600 (filters: taxonomy:2 or taxonomy:2759, reviewed:yes, length:[40 TO 5600]). We set this upper limit because the longest DNA-binding protein (defined below) in UniProt is 5,588 amino acids long. We exclude sequences that appear in multiple entries which disagree about whether the protein is DNA-binding. These filters on the UniProt database result in 419,206 unique protein sequences (as of Dec 10th, 2020). To determine whether a protein is DNA-binding, we use GO code (Ashburner et al., 2000; Consortium, 2019) 3677 (name: "DNA binding", definition: "Any molecular function by which a gene product interacts selectively and non-covalently with DNA") and any of its children through an is-a or part-of relationship.
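To make the labeling procedure concrete, the Python sketch below shows one way to implement it with pandas. The TSV column names ("Entry", "Sequence", "Gene ontology IDs") follow a typical UniProt export, and dna_binding_go is assumed to be a precomputed set containing GO:0003677 plus all of its is-a/part-of children; neither detail is prescribed by the text above, so treat both as assumptions.

```python
import pandas as pd

def label_dna_binding(tsv_path: str, dna_binding_go: set) -> pd.DataFrame:
    """Label Swiss-Prot entries as DNA-binding, mirroring the filters above."""
    df = pd.read_csv(tsv_path, sep="\t")
    # Length filter used by the paper: 40 <= length <= 5600.
    df = df[df["Sequence"].str.len().between(40, 5600)]
    # A protein is DNA-binding if any of its GO terms is GO:0003677
    # or one of its is-a/part-of children.
    df["dna_binding"] = df["Gene ontology IDs"].fillna("").map(
        lambda field: bool(set(field.split("; ")) & dna_binding_go))
    # Exclude sequences whose duplicate entries disagree on the label.
    consistent = df.groupby("Sequence")["dna_binding"].transform("nunique") == 1
    return df[consistent].drop_duplicates("Sequence")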
We use a random split of the data to build members of a suite of training/testing sets. We use these sets to compare performance between different models. A random split makes few assumptions on the data or prediction goal, and reflects most of what has been previously published on this topic. The predictive task best approximated by these data splits is a scenario in which many DNA binders are known, and models are used to identify potential additional binders that previous efforts have missed. First, we subset the data to only allow sequences with the 20 canonical amino acids. Then, we randomly select ≈ 10% of the proteins to be a base test set and the remainder to be a base training set. We derive three separate training/testing sets from this base split:
• Random, length-limited (RLL): The base training and test sets, with all sequences above length 1000 removed.
• Random, similarity-limited (RSL): The test set is the base test set, and the training set comprises only sequences for which the top five alignments (as measured by bit score) with each sequence in the testing set have ≤ 50% identity (under default BLAST parameters using BLOSUM62) (Altschul et al., 1990).
• Random, similarity- and length-limited (RS&LL): A subset of RSL, with all sequences above length 1000 removed.
Table 1 details the number of sequences in each of the training and testing sets.
We next make a suite of data sets geared towards the following use case: a new species has been sequenced and we want to use existing data to predict the molecular function of the proteins in this species. To mimic this situation, we choose the eight species in the bacterial kingdom and eight in the eukaryotic kingdom with the most annotated DNA binders. For each held-out species, we construct test sets of all amino acid sequences from that species, and three corresponding training sets: bacterial proteins, eukaryotic proteins, and bacterial and eukaryotic proteins. All sequences in the test set are excluded from all three training sets. Tables 2 and 3 enumerate each of the species we held out as well as the number of sequences that are DNA binders or non-binders in each one.
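As a sketch of the similarity-limiting step used to build RSL above, the code below assumes the test sequences have been searched against the base training set with `blastp -outfmt "6 qseqid sseqid pident bitscore"`, and takes one plausible reading of the rule: any training sequence that appears among a test sequence's five best alignments with more than 50% identity is removed.

```python
import numpy as np
import pandas as pd

def random_split(df: pd.DataFrame, test_frac: float = 0.10, seed: int = 0):
    """Base split: roughly 10% of proteins go to the test set."""
    rng = np.random.default_rng(seed)
    test_mask = rng.random(len(df)) < test_frac
    return df[~test_mask], df[test_mask]

def rsl_training_ids(blast_tsv: str, train_ids: set) -> set:
    """Drop training sequences that exceed 50% identity within any
    test sequence's five best alignments (by bit score)."""
    cols = ["qseqid", "sseqid", "pident", "bitscore"]
    hits = pd.read_csv(blast_tsv, sep="\t", names=cols)
    top5 = (hits.sort_values("bitscore", ascending=False)
                .groupby("qseqid").head(5))
    too_similar = set(top5.loc[top5["pident"] > 50.0, "sseqid"])
    return train_ids - too_similar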
Metrics
In this section, we explain the metrics we use to compare models on the aforementioned data sets. Denote the number of true positives, true negatives, false positives, and false negatives as TP, TN, FP, and FN, respectively. Define the following:

ACC = (TP + TN) / (TP + TN + FP + FN)
TPR = TP / (TP + FN), FPR = FP / (FP + TN)
Precision = TP / (TP + FP), Recall = TP / (TP + FN)
MCC = (TP × TN − FP × FN) / sqrt((TP + FP)(TP + FN)(TN + FP)(TN + FN))   (1)

Matthews correlation coefficient (MCC) is a good measure when the data set is imbalanced (Chicco, 2017). However, it and the other metrics above require a binary positive or negative prediction, and the models we test predict probabilities. To use the above metrics, we have to choose a threshold such that a predicted probability at or above the threshold is a positive prediction (DNA-binding) and a predicted probability lower than the threshold is a negative prediction (non-DNA-binding). For all models in this paper, we take the threshold to be 0.5 when calculating ACC and MCC.
In addition, we include two metrics that evaluate the raw predicted probabilities: area under the receiver operating characteristic curve (ROC AUC) and average precision (AP). To define these, allow the quantities in equation (1) to be functions of the threshold T, e.g. TPR(T) is the true positive rate when threshold T is chosen. Then the definitions of ROC AUC and AP are

ROC AUC = ∫ TPR(T) dFPR(T),   AP = ∫ Precision(T) dRecall(T),

where each integral runs over all thresholds T.
Like MCC, AP is a good metric for imbalanced data sets (Chicco, 2017).
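All four metrics are available in scikit-learn; a minimal sketch of the evaluation, using the fixed threshold of 0.5 for ACC and MCC as described above:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, average_precision_score,
                             matthews_corrcoef, roc_auc_score)

def evaluate(y_true, y_prob, threshold: float = 0.5) -> dict:
    y_prob = np.asarray(y_prob)
    y_pred = (y_prob >= threshold).astype(int)  # binarize at T = 0.5
    return {
        "ACC": accuracy_score(y_true, y_pred),
        "MCC": matthews_corrcoef(y_true, y_pred),
        "ROC AUC": roc_auc_score(y_true, y_prob),       # threshold-free
        "AP": average_precision_score(y_true, y_prob),  # threshold-free
    }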
Algorithms and Implementation
In this section, we describe the four different models that we compare. As a baseline model, we use a nearest neighbor (1-NN) model based on BLAST percent identity (Altschul et al., 1990). The nearest neighbor model predicts each sequence in the testing set to have the same label as the sequence with the highest percent identity out of the five best alignments (as measured by bit score) with sequences in the training set. To calculate the percent identity and best alignments, we use BLAST software (Altschul et al., 1990) with default parameters using BLOSUM62. For more information about local alignment, see (Altschul and Gish, 1996).
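A sketch of this baseline, again assuming tabular BLAST output (qseqid, sseqid, pident, bitscore) from querying test sequences against the training set, with train_labels mapping training identifiers to 0/1 labels:

```python
import pandas as pd

def blast_1nn_predict(hits: pd.DataFrame, train_labels: dict) -> pd.Series:
    """Predict each test sequence's label from its BLAST nearest neighbor."""
    # Restrict to the five best alignments per query, by bit score.
    top5 = (hits.sort_values("bitscore", ascending=False)
                .groupby("qseqid").head(5))
    # Among those five, the neighbor is the hit with highest percent identity.
    nearest = (top5.sort_values("pident", ascending=False)
                   .groupby("qseqid").first())
    return nearest["sseqid"].map(train_labels)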
Unfortunately, many of the previously mentioned papers do not have code available (Liu et al., 2014, 2015, 2016; Zaman et al., 2017; Wei et al., 2017; Chowdhury et al., 2017; Zhang and Liu, 2017; Xu et al., 2014; Peled et al., 2016), or have computationally expensive training procedures that would not scale well to a larger data set. For example, one of the methods in (Adilina et al., 2019) would require training on 32620 variants of the training set to determine the optimal features. As a result, we only evaluated two published modeling approaches, both neural networks: first, the model from (Qu et al., 2017), which features an embedding layer, two convolution layers, and an LSTM layer; second, the model from (Hu et al., 2019), which replaces the LSTM in (Qu et al., 2017) with 2 bi-LSTM layers.
In (Qu et al., 2017), the authors show the LSTM model performs better than the models of (Kumar et al., 2007) and (Wang et al., 2017; Rahman et al., 2018; Chowdhury et al., 2017). It bears mentioning that it is unclear if the authors of (Qu et al., 2017; Hu et al., 2019) compare the models using the same training set in all of these results.
We train and make predictions using these models following the approaches of their authors (Qu et al., 2017; Hu et al., 2019). Specifically, for the LSTM model, we train 5 models using 5-fold cross-validation and use the predictions from the best performing model (as measured on the validation set). For the bi-LSTM model, we train three different models, one using 10% of the training data for validation, one using 15%, and the other using 20%. We use the predictions from the model that achieved the highest validation performance.
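The selection protocol for the LSTM can be sketched generically; build_and_train and score_fn are placeholders standing in for the published network and its validation metric, which are not reproduced here:

```python
from sklearn.model_selection import KFold

def select_best_by_cv(X, y, build_and_train, score_fn, n_splits: int = 5):
    """Train one model per fold and keep the best on its validation fold."""
    best_score, best_model = float("-inf"), None
    for train_idx, val_idx in KFold(n_splits, shuffle=True,
                                    random_state=0).split(X):
        model = build_and_train(X[train_idx], y[train_idx])
        score = score_fn(model, X[val_idx], y[val_idx])
        if score > best_score:
            best_score, best_model = score, model
    return best_model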
We also introduce a gradient boosting model trained with XGBoost (XGB) (Chen and Guestrin, 2016) applied to features generated from sequence data by the evolutionary scale model (ESM) (Rives et al., 2019). The ESM was trained on 250 million protein sequences in a self-supervised learning task to learn a language model for proteins. The ESM embeds each protein sequence as a 1280-feature vector purported to encode biochemical properties, biological variation, remote homology, and information in multiple sequence alignments. We use the features generated from ESM as-is, not tailoring them to our particular task.
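Extracting the 1280-dimensional ESM features might look like the sketch below, which uses the public fair-esm package and mean-pools the per-residue representations into a fixed-length vector; mean pooling is a common choice, but the exact pooling used by the authors is an assumption here.

```python
import torch
import esm  # pip install fair-esm

model, alphabet = esm.pretrained.esm1b_t33_650M_UR50S()
batch_converter = alphabet.get_batch_converter()
model.eval()

def embed(named_seqs):
    """named_seqs: list of (name, sequence); returns 1280-dim vectors."""
    _, _, tokens = batch_converter(named_seqs)
    with torch.no_grad():
        out = model(tokens, repr_layers=[33])
    reps = out["representations"][33]
    # Average over residues, skipping the BOS token at position 0.
    return [reps[i, 1:len(seq) + 1].mean(0).numpy()
            for i, (_, seq) in enumerate(named_seqs)]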
For the gradient boosting model, we randomly sample 10% of the training data to serve as a validation set. The gradient boosting model sees the remainder of the training data at every step and we stop the training after the average precision fails to increase for ten rounds on the validation set; we then use the model from the best performing epoch. Positive samples in the training set are up-weighted so that positive and negative samples have equal aggregated weight. To account for the variance between individual runs, we train the model three times with different train/validation splits. The final prediction for each test sample is the median of the predictions of the three runs.
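A sketch of this training protocol with XGBoost's scikit-learn interface follows. XGBoost's built-in "aucpr" metric stands in for average precision, hyperparameters beyond those described above are left at their defaults, and the constructor-argument placement assumes XGBoost >= 1.6.

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

def train_and_predict(X, y, X_test, seeds=(0, 1, 2)):
    preds = []
    for seed in seeds:  # three runs with different train/validation splits
        X_tr, X_val, y_tr, y_val = train_test_split(
            X, y, test_size=0.10, stratify=y, random_state=seed)
        clf = xgb.XGBClassifier(
            eval_metric="aucpr",          # area under the PR curve
            early_stopping_rounds=10,     # stop after 10 stagnant rounds
            # Up-weight positives so both classes have equal total weight.
            scale_pos_weight=(y_tr == 0).sum() / (y_tr == 1).sum(),
        )
        clf.fit(X_tr, y_tr, eval_set=[(X_val, y_val)], verbose=False)
        preds.append(clf.predict_proba(X_test)[:, 1])
    return np.median(preds, axis=0)  # median over the three runs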
Results and Discussion
The nearest neighbor and gradient boosting models outperform the LSTM and bi-LSTM across all data sets. The nearest neighbor model, despite its simplicity, performs essentially as well as the gradient boosting model. We also investigate the sensitivity of each model to perturbations of DNA-binding and non-DNA-binding regions and find several different patterns. Figures 1 to 3 contain the results on the three tasks from the random train/test split. Since the nearest neighbor model does not output probabilities, we only show its ACC and MCC score.
bi-LSTM performs worse than simpler LSTM
Notably, bi-LSTM performs worse than LSTM on all data sets. These results contrast with (Hu et al., 2019), where the authors present two train/test splits on which bi-LSTM has higher accuracy than LSTM. However, their first data set, which is similar to RLL, has about a 17% overlap between the training set and testing set. The second data set, which we could not obtain, has a small testing set and may contain overlap; the testing set is 200 sequences from the species Arabidopsis thaliana and their training set also contains sequences from Arabidopsis thaliana. Furthermore, it is not specified how the authors of (Hu et al., 2019) constructed their data sets, which have a higher percentage of DNA-binding proteins (≈ 25%) than ours (≈ 8%) and are smaller (they train with 57,078 total proteins vs. our 361,605 in RLL).
A weakness of the LSTM and bi-LSTM models is the fixed limit on sequence length that was imposed to keep the models from being too large. Figure 4 compares the performance of versions of the models with input size limits expanded so they could be trained on RSL (which is not size-limited) vs. RS&LL (which is size-limited) when predicting on the RS&LL test set (which is a subset of the RSL test set). We see that raising the maximum sequence length and expanding the training set of LSTM and bi-LSTM degrades the performance even on amino acid sequences of length ≤ 1000.
Simple models outperform deep learning models
Unexpectedly, the BLAST nearest neighbor model does remarkably well on all the tasks; ACC and MCC are higher than both of the neural networks across all data sets, and often are competitive with the gradient boosting model (figures 1 to 3). One possible reason for the high performance of the nearest neighbor model is the way annotations in UniProt are generated. While for some proteins annotations reflect experimental observations, a search of Uniprot reveals that more than 97% of the DNA-binding data in the bacteria and eukaryotic kingdoms from Swiss-Prot are labeled by electronic annotation, biological aspect of ancestor, sequence or structural similarity, or sequence orthology. This could artificially boost performance: proteins are assigned a DNA-binding label due to sequence similarity, and then the nearest neighbor model predicts the same label also due to sequence similarity. To explore this possibility, we evaluated all four models on a subset of the RLL test set that included only positive examples with verified DNA binding activity (UniProt manual assertion codes EXP, IC, IDA, IPI, TAS, HDA) (figure 5). MCC dropped for all four models. The drop was particularly pronounced for the LSTM model, for which MCC decreased 25%. For the other three models, MCC decreased by 10%-14%. This test subset put bi-LSTM on par with LSTM, but the nearest neighbor and gradient boosted models remained substantially better. This suggests that the nearest neighbor model performance is not entirely explained by the electronic annotation process.
Model sensitivity to manipulation of DNA-binding regions
We next ask whether the models are identifying the DNA-binding regions of the proteins, as opposed to common patterns in DNA-binding proteins not specifically linked to DNA binding (e.g., nuclear localization sequences in eukaryotes (Görlich, 1997) and analogs in prokaryotes (Lisitsyna et al., 2020)). In the RS&LL test set, we examine 714 proteins that have a single DNA-binding region annotated. We modify their amino acid (AA) sequences in three ways to differentiate between these hypotheses (see the sketch after this list):
• DBR reversed: we reverse the DNA-binding region (DBR) subsequence and leave the rest of the AA sequence intact.
• Random region reversed: we reverse a random subsequence of the same length as the DNA-binding region that does not overlap with the DNA-binding region. This test is a control for the effect of the reversal perturbation in general on model predictions.
• Everything but DBR reversed: we reverse the subsequences on either side of the DNA-binding region. This test determines the relative importance of the DNA-binding region compared to the rest of the protein sequence in determining model predictions.
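The three perturbations are simple string operations; a minimal sketch, using 0-based, end-exclusive coordinates for the annotated binding region and assuming the DBR does not cover the whole protein:

```python
import random

def reverse_span(seq: str, start: int, end: int) -> str:
    return seq[:start] + seq[start:end][::-1] + seq[end:]

def perturbations(seq: str, dbr_start: int, dbr_end: int) -> dict:
    length = dbr_end - dbr_start
    # Control: a random span of equal length that avoids the DBR.
    starts = [i for i in range(len(seq) - length + 1)
              if i + length <= dbr_start or i >= dbr_end]
    c = random.choice(starts)
    return {
        "dbr_reversed": reverse_span(seq, dbr_start, dbr_end),
        "random_region_reversed": reverse_span(seq, c, c + length),
        # Reverse the subsequences on either side of the DBR.
        "everything_but_dbr_reversed": (seq[:dbr_start][::-1]
                                        + seq[dbr_start:dbr_end]
                                        + seq[dbr_end:][::-1]),
    }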
We then compare models' predictions on the modified amino acid sequences to those for the original protein sequences. We might expect that the nearest neighbor model would not be greatly affected by reversing the DNA-binding region or a random region, because the reversed region is a small portion of the entire sequence. In contrast, we expect that if the LSTM, bi-LSTM, and gradient boosting models have learned the importance of the DNA-binding region in particular, as opposed to its general context in the overall sequence, they would be more affected by reversing the DNA-binding region than by reversing a random region. By reversing both regions outside the DNA-binding region, we would expect the nearest neighbor model predictions to change dramatically, as much of the sequence has changed, but the impact on the other models would depend on the extent to which they had learned to identify DNA-binding regions versus other patterns present in DNA-binding proteins. The results are shown in figure 6, where we plot the fraction of perturbed binders that each model classifies as binders. We observe three distinct patterns of response. The nearest neighbor and LSTM models show virtually no sensitivity to reversing a random region, modest sensitivity to reversing the entirety of the non-binding regions, and substantial sensitivity to reversing the DNA-binding region (figures 6a and 6b). This is consistent with the DNA-binding region being more highly conserved across proteins, with the BLAST alignment reflecting this conservation and the LSTM model learning to take advantage of it. The bi-LSTM model is moderately sensitive to reversing the entirety of the non-binding regions, and barely reacts to smaller perturbations of a random region or even the DNA-binding region (figure 6c). This failure to react when the binding domain is disrupted is consistent with its failure to solve the binding prediction task. The gradient boosting model is insensitive to reversals of random non-binding sub-regions, but quite sensitive to reversing the DBR or the entirety of the non-binding regions (figure 6d). As with the LSTM, this likely reflects that the gradient boosting model has learned to focus on the binding domain. The nearest neighbor, LSTM, and gradient boosting models are all able to predict reversed DNA-binding domains as less likely to bind to DNA. The gradient boosted model, perhaps because it is built from an embedding that purports to holistically encode many features of proteins, is more sensitive to large disruptions of a protein's non-binding components.
Performance of nearest neighbor and gradient boosting models on data sets broken up by species
Next, we consider a set of tests that reflect the use case of annotating a novel proteome. Because the gradient boosting model performed the best on the previous sets and does not have a maximum length constraint, we use it in these tasks. Due to its simplicity and interpretability, we also consider the nearest neighbor baseline. We do not evaluate the LSTM or bi-LSTM models in these tasks, because their performance on the simpler tasks was substantially poorer and because the computational requirements to evaluate them on all of these tasks would have been quite large. First, we evaluate predictions for individual bacterial and eukaryotic species using training sets comprising each of the two kingdoms individually and both together. Figure 7 shows the results for nearest neighbors and the gradient boosting model. Unsurprisingly, training on the "wrong" kingdom yields poorer results; however, we see evidence of information transfer across kingdoms in most cases. Training on both kingdoms yields results comparable to those obtained from training on only the kingdom of the test species. As with the previous results, the gradient boosting model and nearest neighbor model have comparable performance. Taken together with the previous results, the gradient boosting model with language features reaches an acceptable performance floor, indicating a good area for future research.
Conclusion
Automatic labeling of protein function is an important problem for computational biologists, as our ability to sequence genomes outstrips our ability to experimentally determine function. Focusing on predicting DNA-binding proteins, this paper makes four main contributions to the field. First, we emphasize the importance of large, accessible data sets broken up into multiple training/testing tasks mirroring realistic use cases. Random train/test data splits, while common in the field of machine learning, don't necessarily provide the best evaluation of predictive models. Data sets should be chosen with consideration for the scenarios in which a model is intended to be used. While having benchmark data sets is useful, as the underlying data compiled into these sets change, the benchmarks must be revisited. In the case of the PDB1075 test set widely used as a DNA-binding protein prediction benchmark, several data quality issues have never been corrected, and the data set remains relatively small. Here we offer a larger, carefully curated data set as a new standard that reflects the explosion of new sequence data at our disposal. We have also made available code for the straightforward construction of similar data sets.
Second, we showed that a simple baseline nearest neighbor model outperforms neural networks from recent literature in predicting DNA binding proteins. This held even when restricting training set sequences to those that were "far" from the test set, as well as when only using experimentally-labeled DNA binders to control for bias introduced by homology-based inferred labels in the UniProt data. While it may be surprising that BLAST percent identity outperforms more sophisticated modeling efforts, this is not inconsistent with our understanding of protein function. One paper notes that "several studies have shown that homologous sequences that share more than 40% identity are very likely to share functional similarity as judged by E.C. (Enzyme Commission) numbers" (Pearson, 2013). Because of its performance and simplicity, the BLAST nearest neighbor model predictions should serve as a baseline for future modeling efforts, and any proposed machine learning algorithm should be able to show a significant improvement over this baseline before it is considered as a replacement.
Third, the gradient boosted model we evaluated shows that a simple model based on a complex off-the-shelf embedding has acceptable performance on both the random split and held-out species data sets. While it does not perform substantially better than the nearest neighbor model, it shows promise as a foundation for a more capable model. We leave it to future work to determine if different model architectures can better utilize ESM features, or if other protein language model embeddings (e.g. ProtTrans (Elnaggar et al., 2020) or newer versions of ESM) yield better results.
Fourth, to better understand how the different models are making their predictions, we manipulated the amino acid sequences and measured the effect of these perturbations on the models. Our results indicate that the DNA-binding region is important to the predictions of the nearest neighbor, LSTM, and gradient boosting model. This suggests that the models are using information from the DNA binding regions of the proteins to make their predictions, as opposed to learning to recognize patterns in DNAbinding proteins apart from the binding domains.
As figure 1 and figure 7 show, for prediction of DNA-binding within kingdoms or within a random subset of the data, a simple single nearest neighbor BLAST model and an out-of-the-box gradient boosting model are near the performance ceiling. Practitioners can expect precision and recall of 94% for prediction in bacteria, and precision and recall of 90% for prediction in eukaryotes. Small tweaks to these simple models (e.g. majority vote of the five nearest neighbors) may push the prediction rate up slightly higher. There may be a place for advanced models in predicting DNA-binding proteins across kingdoms, but it is unclear how realistic these more difficult tasks are.
Future work should instead focus on the prediction of other protein functions where pattern identity methods (like BLAST) are inadequate (Ashkenazi et al., 2012). In these areas, more complex models have an opportunity to shine. | 2021-04-16T13:25:14.597Z | 2021-04-11T00:00:00.000 | {
"year": 2021,
"sha1": "cc47d4eec18b8593293d00279cd6bf0ab29e2887",
"oa_license": "CCBY",
"oa_url": "https://www.biorxiv.org/content/biorxiv/early/2021/04/11/2021.04.09.439184.full.pdf",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "cc47d4eec18b8593293d00279cd6bf0ab29e2887",
"s2fieldsofstudy": [
"Biology",
"Computer Science"
],
"extfieldsofstudy": [
"Biology",
"Computer Science",
"Medicine"
]
} |
199380212 | pes2o/s2orc | v3-fos-license | Sociosexuality and Bright and Dark Personality: The Prediction of Behavior, Attitude, and Desire to Engage in Casual Sex
Research about sociosexuality, understood as differences in people's willingness to have sex without commitment, remains scarce in terms of its predictors, such as demographics, relationship status, or individual traits such as personality. Although sociosexuality was initially considered unidimensional, a tridimensional structure—with behavior, attitudes, and desire as its components—is gaining momentum in the literature nowadays. The present study proposes to develop different predictive models for each dimension, examining the role of personality (i.e., the "Big Five" and the "Dark Tetrad") and sociodemographic variables. Participants were 991 university students from a Spanish university (75.5% women, 72.0% heterosexual, Mage = 20.66). Our results provide evidence that predictors of sociosexuality vary depending on the dimension under analysis. Being female, older, not having a heterosexual orientation, and not being involved in a current relationship predicted higher scores in sociosexual behavior and attitudes. Regarding personality, psychopathy and extraversion were the only traits involved in all three components of sociosexuality. Neuroticism, agreeableness, and conscientiousness also play a role in the prediction of some of the sociosexuality dimensions. These results help to disentangle the relationship between personality and sociosexuality and to design more effective programs and policies to promote sexual health.
Introduction
It is nowadays common for university students to be involved in casual sex, which encompasses sexual behavior occurring outside a committed, romantic relationship [1][2][3]. Casual sex has been related to risky sexual behavior (i.e., less condom use), especially in party settings, and this could increase the prevalence of sexually transmitted infections and unwanted pregnancies [4,5]. Thus, having more detailed information about the predictors of casual sex could have important implications in the public health policies of different institutions, for instance in the design of sexual health programs [6].
The construct that comprises individual differences in willingness to engage in casual sex is sociosexuality [7]. Previous studies are highly widespread and diverse with regard to the conception of sociosexuality on which they are based, the methodology employed, and the results obtained. The main debate is about the dimensionality of the construct. According to the classic approach of the Sociosexual Orientation Inventory (SOI; [8]), most of the studies have considered sociosexuality as being unidimensional, a continuum with two poles: Restricted sociosexuality (i.e., preference for sex within long-term and committed relationships) and unrestricted sociosexuality (i.e., preference for short-term and no-strings-attached sex). However, years later, Penke and Asendorpf [9] proposed a tridimensional structure: (1) Sociosexual behavior (i.e., past sociosexual behavior); (2) attitudes towards sex without commitment (i.e., beliefs about casual sex); and (3) sociosexual desire (i.e., arousal due to chances of casual sex). Consequently, these authors developed the Revised Sociosexual Orientation Inventory (SOI-R). This is not the only claim for a tridimensional structure of sociosexuality: Around the same time as Penke and colleagues, Hillier et al. [10] proposed a model with desire, behavior, and attitude measures to study sexual orientations of youth. Additionally, UNESCO has recently released guidelines for sexuality and education researchers that take these three dimensions into account when addressing sexuality [11].
The tridimensional structure of SOI-R has been validated in a variety of studies in different contexts [12][13][14] but research on differences among these three facets of sociosexuality in terms of its predictors, such as demographics (i.e., sex, age, sexual orientation, religiosity), having a partner or not (i.e., relationship status), or individual traits such as personality is scarce [15,16]. This paper aims to fill this gap.
Personality and Sociosexuality
The role of personality in sexual behavior has been extensively studied over the years, mostly from a unidimensional perspective of sociosexuality [17][18][19] and using the "Big Five" traits approach (neuroticism, extraversion, openness to experience, agreeableness, and conscientiousness). Whereas the relationship between unrestricted sociosexuality and low scores on agreeableness and conscientiousness is consistent in previous research, findings about neuroticism, extraversion, and openness to experience are mixed [15,[17][18][19][20][21][22][23][24]. Additionally, it seems that sex and cultural differences should be taken into account, at least in some personality traits such as agreeableness and neuroticism [24].
In order to understand the differences regarding personality, research also explored the role of the "dark personality", mainly studied as the "Dark Triad" [25]. It consists of a set of three subclinical personality traits: Narcissism, Machiavellianism, and psychopathy. Narcissism is characterized by grandiosity, attention-seeking, profound lack of empathy, and a high sense of entitlement [26]. Machiavellianism involves cold selfishness and pure instrumentality, because manipulation and self-interest are its essential traits [27]. Subclinical psychopathy is a personality trait characterized by impulsivity, antisociality, absence of remorse, and lack of the ability to empathize or experience guilt [28].
Previous research pointed out the positive associations between Dark Triad traits and unrestricted sociosexuality [29,30]. High scores on narcissism, Machiavellianism, and psychopathy appear to be related to less serious relationships (i.e., unrestricted sociosexuality or casual sex).
A few years ago, based on its unique features (i.e., enjoyment of cruelty, subjugating nature), sadism was included in the dark side of personality, transforming the Dark Triad into the "Dark Tetrad" [31]. Chabrol, Bouvet, and Goutadier [32] demonstrated that sadism can explain unique variance over the Dark Triad. Therefore, although dark traits have common characteristics such as callousness and unreadiness for emotional involvement and there could be some overlap, they still remain different constructs [33]. However, research on the Dark Tetrad and sociosexuality has not been deeply explored. To our knowledge, Tsoukas and March [16] proposed the first study analyzing the associations between the Dark Tetrad and mating orientations (i.e., short-and long-term). The results confirmed positive associations between all dark personality traits and short-term mating orientation, with psychopathy being the most important predictor. Regarding sadism, it explained only 1% of the variance of short-term mating orientation.
Unfortunately, the multidimensionality of sociosexuality has not been taken into account till now in previous research. This can limit the scope of the results, as the associations of the three dimensions of sociosexuality with other variables seem to be different [12].
The Present Study
The goals of this study were twofold: First, to contribute to the study of sociosexuality by examining the predictors of each dimension (behaviors, attitudes, and desire); and second, to analyze the predictive value of the bright and dark traits of personality in the study of unrestricted sociosexuality among university students. In this sense, we will also evaluate whether sadism adds additional variance to the prediction of casual sex.
Several reasons justify the need for the present study. As we noted, inconsistencies in previous research about the direction of the associations between personality traits and sociosexuality could be due to the consideration of sociosexuality as unidimensional. The tridimensional structure of sociosexuality (i.e., behavior, attitudes, and desire) has only been considered by Nascimento et al. [15], but although it is undoubtedly a meritorious study, more research from different cultural contexts and with larger samples is needed.
In this sense, we want to advance existing research on the predictors of sociosexuality also considering conjointly both sides of personality, bright (i.e., the Big Five) and dark (i.e., the Dark Tetrad). To our knowledge, only Brunell et al. [20] analyzed this issue, but using only one dark trait (i.e., narcissism). In addition, we intend to provide more data to the existing literature on the differences in sociosexuality based on sociodemographic variables such as sex, age, sexual orientation, or whether the subject had a partner.
The information obtained from this study may be relevant in the implementation of sexual health prevention and promotion programs targeting young people, due to the relevance of casual sex in contemporary university life.
Participants and Procedure
The initial sample comprised 1373 participants. Of them, we selected those who met the conditions of (a) studying a university degree (81 participants excluded), (b) being aged 18 to 26 years (111 participants excluded), (c) labeling themselves as woman or man (7 participants excluded), and (d) correctly answering a control question (173 participants excluded; see below). By doing so, we followed the inclusion criteria used in other studies with the same population and a similar topic [1,12,34]. In addition, as the sample size of non-heterosexual individuals was very small, we dichotomized sexual orientation (1 = heterosexual; 2 = sexual minority). Thus, the final sample comprised 991 university-degree students aged between 18 and 26 years (M age = 20.66, SD = 2.13). Of them, 75.5% were women and 24.5% were men, 73.0% had a heterosexual orientation, and 51.1% had a partner.
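For illustration, the inclusion criteria could be applied to a response table as in the Python sketch below; all column names and response codings are hypothetical, as the survey data are not public.

```python
import pandas as pd

def apply_inclusion_criteria(df: pd.DataFrame) -> pd.DataFrame:
    df = df[df["university_student"] == "yes"]   # (a) studying a degree
    df = df[df["age"].between(18, 26)]           # (b) aged 18 to 26
    df = df[df["sex"].isin(["woman", "man"])]    # (c) self-labeled woman/man
    df = df[df["control_item"] == "Disagree"]    # (d) passed attention check
    df = df.copy()
    # Dichotomize sexual orientation: 1 = heterosexual, 2 = sexual minority.
    df["orientation_dich"] = (df["orientation"] != "heterosexual").astype(int) + 1
    return df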
Data were collected through the internet with Google Forms in November 2018. The link to the survey was distributed through the e-mail distribution lists of the students of the authors' university.
The survey remained open for 14 days. Participants provided informed consent after reading the description of the study, where the anonymity of the responses was clearly stated. Participants had to be 18 years old or older to take the survey. This procedure was approved by the Ethics Review Board for Clinical Research of the region (Ethical code: PI18/058).
Sociodemographic and Sexual Behavior Questionnaire
We developed a questionnaire based on previous research carried out in Spain [1,12,34]. Our questionnaire asked participants about their sex, age, sexual orientation (heterosexual, homosexual, bisexual, other), whether they had a partner, and whether they were current university students.
Short Dark Triad (SD3)
This is a 27 item self-report measure [37] of the Dark Triad traits of Narcissism (e.g., "People see me as a natural leader"), Machiavellianism (e.g., "Make sure your plans benefit you, not others"), and Psychopathy (e.g., "People who mess with me always regret it"). Each trait is measured with 9 items on a Likert scale ranging from 1 (Strongly disagree) to 5 (Strongly agree). We used the Spanish version of SD3 developed by Pineda, Sandín, and Muris [38].
Assessment of Sadistic Personality (SAP)
This scale is a 9-item measure of sadistic personality (e.g., "I never get tired of pushing people around"). The response format is a Likert-type scale ranging from 1 (Strongly disagree) to 5 (Strongly agree). Through a forward-backward translation procedure, the Spanish version of the SAP was translated from the original English version [39] by three of the manuscript's authors, who were native Spanish speakers. They reviewed the translation together and agreed on a single version of the scale. Subsequently, a native professional translator reviewed the correspondence between the English and Spanish versions and agreed with the translated version.
Sociosexual Orientation Inventory-Revised (SOI-R)
This instrument has 9 items that assess sociosexual orientation on the basis of three dimensions [9]: Behavioral (e.g., "With how many different partners have you had sexual intercourse without having an interest in a long-term committed relationship with this person?"), Attitudinal (e.g., "Sex without love is OK"), and Desire (e.g., "How often do you have fantasies about having sex with someone with whom you do not have a committed romantic relationship?"). These items are rated on a 9-point scale, ranging from 1 (0) to 9 (20 or more) for the Behavioral factor; from 1 (Strongly disagree) to 9 (Strongly agree) for the Attitudinal factor; and from 1 (Never) to 9 (At least once a day) in the Desire factor. We used the Spanish adaptation developed by Barrada et al. [12].
Control Question
In order to check whether the participants paid enough attention to the item wordings, we introduced an item asking the participants to respond to it with Disagree.
Statistical Analysis
Firstly, we computed means, standard deviations, and reliabilities (Cronbach's α) of the different computed total scores. The associations of variables were assessed with Pearson correlations. Predictive models of each dimension of sociosexuality were performed with hierarchical regression analysis with control variables in Step 1 (i.e., sex, age, involved in a relationship, sexual orientation), Big Five in Step 2, Dark Triad in Step 3, and sadism in Step 4. To better communicate our results, when the units of measurement were meaningful on a practical level (i.e., sociodemographic variables), we used the original metric of the variables, but for scale scores, we standardized the dependent variables before performing the regression.
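The four-step procedure can be sketched with statsmodels as below; the variable names are placeholders for the study's measures (assumed to be numerically coded), and the outcome is standardized as described above.

```python
import pandas as pd
import statsmodels.api as sm

STEPS = [
    ["sex", "age", "in_relationship", "orientation"],            # Step 1
    ["neuroticism", "extraversion", "openness",
     "agreeableness", "conscientiousness"],                      # Step 2
    ["narcissism", "machiavellianism", "psychopathy"],           # Step 3
    ["sadism"],                                                  # Step 4
]

def hierarchical_regression(df: pd.DataFrame, outcome: str) -> None:
    y = (df[outcome] - df[outcome].mean()) / df[outcome].std()  # standardize
    predictors, prev_r2 = [], 0.0
    for step, block in enumerate(STEPS, start=1):
        predictors = predictors + block
        fit = sm.OLS(y, sm.add_constant(df[predictors])).fit()
        print(f"Step {step}: R2 = {fit.rsquared:.3f}, "
              f"delta R2 = {fit.rsquared - prev_r2:.3f}")
        prev_r2 = fit.rsquared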
Results
Descriptive statistics, reliabilities, and correlations among the variables are shown in Table 1. All the reliabilities were within the range of (0.78, 0.93), except for Agreeableness, Narcissism, and Psychopathy (αs between 0.64 and 0.66).
Given the large number of computed correlations, we will restrict our attention to values over |.15|. For sociodemographic variables, older participants tended to have higher levels of sociosexual behavior (r = 0.34, p < 0.001), and men (r = 0.32, p < 0.001) and those not involved in a romantic relationship (r = −0.39, p < 0.001) tended to present higher levels of sociosexual desire. With respect to bright personality, higher levels of extraversion were associated with higher sociosexual behavior (r = 0.24, p < 0.001) and more positive attitudes (r = 0.15, p < 0.001), whereas those with higher conscientiousness showed more negative attitudes (r = −0.16, p < 0.001) and lower sociosexual desire (r = −0.15, p < 0.001). With respect to dark personality, psychopathy was related to all three dimensions of sociosexuality (rs in the range of (0.19, 0.34), ps < 0.001), whereas all four aspects of the Dark Tetrad were associated with sociosexual desire (rs in the range (0.18, 0.34), ps < 0.001). Also noteworthy was the high association between psychopathy and two of the Big Five traits: Agreeableness (r = −0.42, p < 0.001) and Conscientiousness (r = −0.24, p < 0.001).
Four-step regression analyses for each sociosexuality dimension were run (see Table 2). For the three criteria, all the increments in R 2 were statistically significant for all the steps (all ps < 0.001), except for the last one, which incorporates sadism (∆R 2 = 0.00 for the three variables, ps in the range of (0.114, 0.816)). The inclusion of the bright personality dimensions added 7.0%, 7.9%, and 2.5% of explained variance for behavior, attitude, and desire, respectively. The inclusion of the Dark Triad dimensions added 4.1%, 4.7%, and 4.4% of explained variance, respectively.
Discussion
Nowadays, casual sex is a common practice among university students. Some studies have concluded that there is a link between casual sex and the performance of risky sexual behaviors [4,5]. Therefore, in view of the design and implementation of the most effective preventive and promotional sexual health programs, it is important to know which variables are related to casual sex and sociosexuality. The role of personality in sociosexual behavior is well supported by the literature, although results about the relationship between bright and dark personality traits are mixed. Given that previous studies analyzed sociosexuality as a whole instead of considering its three dimensions (behavior, attitude, and desire), we hypothesized that the inconsistencies of the findings of prior research may be due to this issue.
The present study provides evidence that sociosexuality dimensions have different correlates, expanding the study by Nascimento et al. [15] with a sample almost four times larger from a different country, and developing three predictive models of sociosexuality. We shall discuss our findings beginning with sociodemographic variables.
The results regarding sex showed that men, compared to women, have an unrestricted orientation but only in terms of desire, not of behavior and attitude. As previous research highlighted, men have more sociosexual desire than women [1,12,40,41]. This does not mean that desire and unrestricted sociosexual behavior coincide, because only behavior is limited by competition when seeking a sexual partner [8,9]. Contrary to other studies, women showed more unrestricted sociosexual attitudes than did men. A possible explanation for this result is that women are increasingly more the owners of their sexuality in Spain, moving away from traditional gender roles and expressing positive attitudes towards sex without commitment, which used to be associated only with men. In addition, this result and tentative explanation are in line with the recent study conducted in Italy by Silvaggi and colleagues [42]. Nevertheless, we should take into account that the composition of our sample may be affecting our results (i.e., three out of four participants in the present study were women, as in [42]). Future research might benefit from exploring sex differences in all three dimensions of sociosexuality in greater depth.
Regarding age, sociosexual behavior is associated with being older and desire with being younger. In the literature, the relation of age with unrestricted sociosexuality seems to be curvilinear [43]. Our findings showed a positive association between age and sociosexual behavior, as can be expected due to the way this dimension is measured (i.e., number of relationships). More interesting are the results regarding the remaining dimensions: There was no association of age with attitude, which can be explained because attitudes, once shaped, do not change easily. Moreover, there is a negative association of age with desire, which may be due to the kind of relationships established by people at the university stage: At the beginning of this stage, they tend to be more likely to have and desire casual sex, but as time goes by, they tend to have stable committed relationships, which decreases their sociosexual desire [8,9].
We found significant associations that suggest that belonging to a sexual orientation minority is related to higher scores in sociosexual behavior, attitude, and desire. Some studies have found that gays/lesbians and bisexuals tend to have less restrictive sociosexuality, with a greater number of casual sexual partners [44]. Our study dichotomized sexual orientation (heterosexual and others) just because the non-heterosexual sensitivities were scarce in our sample. We should recognize that sexual orientation is a key variable to identify differences in the sexual behavior of individuals. Consequently, we encourage further research on sociosexuality to take this heterogeneity into account.
Lastly, being single is related to higher scores in sociosexual attitude and desire. The literature has shown that people involved in a romantic relationship showed lower sociosexuality scores than those who were not [8,9]. The explanation provided is usually the desire factor. Those involved in a romantic relationship tend to direct their sexual desire toward their partner, whereas those who are not, in the absence of commitment, are more open to casual sex.
Next, we shall discuss results regarding personality. According to our data, only extraversion and psychopathy contributed positively to explain all the dimensions of sociosexuality, whereas neuroticism, agreeableness, and conscientiousness were significant predictors of behavior, or attitude, or desire. Openness, narcissism, Machiavellianism, and sadism did not play a role in any of the predictive models.
It is interesting to note that psychopathy is the most relevant predictor of all the dimensions of sociosexuality. Consistent with previous research [16], high scores in psychopathy are associated with hypersexuality, high impulsivity, and a high need for stimulation [30,45], and this would influence desire (i.e., sexual arousal due to the chance of casual sex), attitudes, and, finally, unrestricted sociosexual behaviors (e.g., having more sex partners, not being involved in a romantic relationship). We also noted the high negative correlation between psychopathy and agreeableness and conscientiousness. According to Miller and Lynam [46], the core of psychopathy is lower scores on both bright personality traits. The main characteristics of psychopaths (i.e., coldness, lack of empathy and remorse, need for continuous stimulation, inability to delay gratification, lack of long-term goals, impulsivity, and a lack of commitment) should facilitate the performance of a short-term mating strategy [47].
As we noted before, some authors have suggested the incorporation of sadism to the Dark Triad to build a Dark Tetrad. In the light of our results, the inclusion of this trait in the predictive model did not explain an additional amount of variance beyond the Dark Triad. In fact, the correlation between psychopathy and sadism was so high that it could suggest that the two measures overlap. Additionally, neither narcissism nor Machiavellianism was a positive predictor of sociosexuality, contrary to previous research [29,30,48]. There are several explanations. First, it could be due to the common features shared by all four dark personality traits, such as being callous about the welfare of others and manipulation [31,49,50]. In fact, some authors have concluded that psychopathy may sufficiently represent the core of the Dark Triad [51]. Second, one of the main components of subclinical psychopathy is an impulsive and irresponsible behavioral style [28]. Having casual sex is related to greater impulsivity and sensation-seeking [52]. In addition, impulsivity is an important variable when it comes to performing risky behaviors in casual sex. For instance, people who are more impulsive may have a greater urgency for casual contact without being concerned about the consequences if there are obstacles to condom use at that moment [52]. Recent findings also conclude that psychopathy constitutes an adaptive trait to increase short-term mating opportunities [53]. In any event, we should emphasize that all these dark personality traits are mere personality tendencies falling in the normal or "everyday" range, and certainly they are not clinical diagnoses.
The other personality trait involved in all the predictive models was extraversion, although its relevance varies depending on the dimension under examination. Its influence is higher in the behavior dimension, which is reasonable, given that extraversion characterizes action-oriented individuals who are dominant in social settings [54]. This could explain why extraversion plays a prominent role in previous studies of sociosexuality, which are mostly behavior-focused [18,21,22,24].
Consistent with previous research [17,23,24], participants with high scores in neuroticism did not show a positive disposition toward uncommitted sex, nor were they comfortable engaging in sex without a sense of closeness or commitment. However, this personality trait was not a significant predictor of sociosexual desire, which is the motivational disposition towards short-term relationships. As Schaller and Murray [55] highlighted, neuroticism is defined, at least in part, by behavioral prudence or avoidance orientation. Taking into account that unrestricted sociosexuality is related to higher risk for sexually transmitted diseases, unintended pregnancy, and exploitation [4,5], it is plausible that those with high levels of neuroticism would tend to avoid situations that threaten their well-being.
Regarding conscientiousness, results fall in line with some previous research [24,56] reporting that this personality trait, also related to behavioral prudence, is negatively related to unrestricted sociosexual attitudes. Conscientious people control their impulses, they are generally concerned about the impact that their own behavior has on those around them and tend to delay gratification [57]. All these traits are usually present in people with a preference for a serious romantic relationship [21].
Contrary to conscientiousness and previous findings [17][18][19], in our study, agreeableness was a positive predictor of sociosexual behavior and desire. Although the effect size of our results involving this trait is low, we want to provide tentative explanations. This result may be due to the fact that this trait is related to kindness, empathy, and interpersonal trust [58], easily connecting with specific people, engaging more often in casual relationships (behavior), and increasing arousal for potential sexual partners (desire), although the general attitude toward sociosexuality is lower than that of people with lower agreeableness scores. However, given the considerable amount of research that suggests a negative relationship with sociosexuality [17,20,24], we cannot discard the existence of moderators (e.g., culture) or just a statistical artifact. Further research should explore this idea.
Limitations and Future Research
We acknowledge some limitations that might be addressed in future research. First, as mentioned, the country-level differences could be affecting our results. Prior research has demonstrated the existence of differences in personality according to culture [24]. Therefore, the present findings should be considered taking into account the cultural context, and further cross-cultural research should be performed to identify differences among countries. Second, our study simplifies the heterogeneity of sexual orientation, dichotomizing heterosexual and non-heterosexual orientations. Further research should try to go beyond the research on heterosexual participants, exploring the remaining orientations separately. Third, due to convenience sampling, our sample is not representative of the population of university students: Our participants are mostly female and heterosexual, limited to those who agreed to participate in the study, and from only one university of Spain. Additionally, the fact that our sample is composed only of university students could also be affecting our results. Not all people between 18 and 26 years are at university. Further research should explore the influence of the different backgrounds of young people on their sociosexuality. Fourth, religion has not been taken into account in our study, but it could also be a factor influencing sociosexuality, as previous research highlighted [59], and deserves further investigation in the future. Lastly, the internal consistency of some scales is lower than expected, although still within acceptable values for research. This is a common issue with the BFI scale [60], and with most combined measures to capture all the dark personality traits [61]. Further research should be carried out with more reliable instruments, when available, especially in studies involving the dark personality.
Conclusions
As Penke and Asendorpf [9] proposed more than ten years ago, sociosexuality is a construct that comprises three different components (i.e., behavior, attitude, and desire). Thus, researchers should consider each dimension in order to gain insight into individual differences in each one. Despite the aforementioned limitations, the present study has shown that two personality traits, one bright and one dark (i.e., extraversion and psychopathy), are related to all the dimensions of sociosexuality. However, the role of the other bright traits (i.e., neuroticism, agreeableness, and conscientiousness) depends on the component under examination. These results are consistent with previous literature and, at the same time, help to disentangle the relationship between individual differences in personality and sociosexuality.
In addition, these findings may have important applications for public health policies and for the sex education offered to adolescents and young people. Consideration should be given to the different possibilities and types of sexual relationships that occur today, as well as to their relationships with the personality traits of young people. Thus, more effective programs and policies to promote sexual health could be designed and implemented (e.g., sexual health programs adapted to personality characteristics, such as sensation-seeking or impulse control in young people with high psychopathy scores), which would ultimately improve people's quality of life.
Conflicts of Interest:
The authors declare no conflict of interest. | 2019-08-03T13:03:22.923Z | 2019-07-31T00:00:00.000 | {
"year": 2019,
"sha1": "418109015ba86cb5bee4bee8e5b34d79076de7ef",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/16/15/2731/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "418109015ba86cb5bee4bee8e5b34d79076de7ef",
"s2fieldsofstudy": [
"Psychology",
"Sociology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
271555171 | pes2o/s2orc | v3-fos-license | Soil Erosion Thickness and Seasonal Variations Together Drive Soil Nitrogen Dynamics at the Early Stage of Vegetation Restoration in the Dry-Hot Valley
By changing the physicochemical and biological properties of soil, erosion profoundly affects soil nitrogen levels, but knowledge about the impact of erosion on soil nitrogen (N) dynamics is still rather incomplete. We compared soil N contents at the early stage of vegetation self-restoration in response to soil erosion thickness (0, 10, 20, 30 and 40 cm) by conducting a simulated erosion experiment on sloping arable land in the dry-hot valley of Yunnan Province, southwestern China. The results showed that total nitrogen (TN), ammonium nitrogen (NH4+-N) and nitrate nitrogen (NO3−-N) contents decreased with increasing soil erosion thickness, with significant decreases relative to 0 cm first appearing at erosion thicknesses of 10, 40 and 10 cm, respectively, in the rainy season and 30, 10 and 10 cm in the dry season. Structural equation modeling (SEM) indicated that soil erosion thickness and seasonal variation were important drivers of mineral nitrogen (NH4+-N and NO3−-N) content. Soil erosion thickness indirectly affected mineral nitrogen through negative effects on TN, carbon content and diazotrophs (nifH genes). The dry-wet season change affected mineral nitrogen via arbuscular mycorrhizal fungi (AMF) and nifH genes. We also found that AMF promoted nifH genes in eroded soil, which can be expected to benefit nitrogen fixation. Our findings highlight the importance of considering soil erosion thickness and sampling time for nitrogen dynamics, and in particular for the investigation of nitrogen limitation, in the early stage of vegetation self-restoration.
Introduction
Soil erosion is a widespread, global phenomenon, and accelerating erosion is responsible for about half of all soil degradation worldwide [1]. The annual amount of topsoil lost to soil erosion may now exceed 75 billion tons globally [2], which could trigger considerable nutrient losses and exacerbate nutrient limitations to plant growth [3]. In particular, soil erosion can cause soil nitrogen (N) losses as high as 160 kg ha−1 yr−1 [4]. Many recent studies have emphasized soil erosion's role in nitrogen cycles, leading to a synthesis of how it exerts important control on nitrogen dynamics [5][6][7]. The three-step process of soil erosion, consisting of detachment, transport and deposition, can affect the availability, stock, nature and persistence of soil nitrogen [8]. Most previous studies have explicitly accounted for the influence of soil erosion on nitrogen loss in runoff plots [9,10] or on N mineralization in micro-plots through artificial rainfall simulation experiments [11], as well as the effects of erosion and deposition on the redistribution of nitrogen fractions in landscapes [12,13].
It is noted that, with increasing soil erosion thickness, both soil layer depth and soil nutrient contents decrease [14], along with soil aggregates and soil microorganisms [15,16]. Subsequently, the changes in soil physicochemical and biological properties resulting from the degree of soil erosion will inevitably impact soil N dynamics. Although many studies have focused on the effects of soil erosion on N loss and transformation, the effects of soil erosion thickness on soil N dynamics have been neglected. Since soil N is one of the essential elements for plant nutrition, it can act as a limiting factor for the growth of many plant species, such that soil N supply capacity largely determines the course and success of ecological restoration [17]. Hence, the N residing in eroded soil is crucial for plant fitness [18], and it is imperative to gain a sound understanding of the effect of soil erosion thickness on soil N dynamics.
Most (>99%) soil nitrogen is contained in dead organic matter derived from plants, animals and microbes. In natural ecosystems, most nitrogen absorbed by plants becomes available through the decomposition of organic matter. More than 90% of the nitrogen macromolecules in soil exist in the form of organic nitrogen [19]. Some studies have shown that some plants can utilize small soluble organic nitrogen compounds (DON) [20,21], but many plants tend to absorb and utilize mineral nitrogen (ammonium nitrogen and nitrate nitrogen) [22,23]. Microorganisms can help optimize resource availability. It is well known that ammonium nitrogen (NH4+-N) and nitrate nitrogen (NO3−-N) are primarily regulated by microbe-mediated mineralization, immobilization, nitrification and denitrification [24]. It has been well documented that soil N dynamics are closely related to litter quality, physicochemical soil properties and microbial activity, and that, across large climatic gradients, climatic factors can affect both substrate availability and oxygen in the soil, thus influencing soil N dynamics [24]. Although the effects of individual factors on soil N dynamics are relatively well studied, the complex interaction among these factors and soil erosion in regulating N dynamics remains unclear, hindering our ability to understand the limits of vegetation restoration.
The dry-hot valleys are a type of unique geographical region found in southwestern China [25]. These are tropical and subtropical deep river valleys, mainly distributed in the watersheds of the Jinsha River, Yuanjiang River, Nujiang River and their tributaries in Yunnan Province. Among them, the dry-hot valley area in the Yuanjiang River's catchment is the most concentrated and contiguously distributed in Yunnan Province [26]. This region has steep topography, and a substantial proportion of its land is under cultivation on steep slopes. Its extensive steep terrain and concentrated rainfall, combined with steep-slope cultivation, render the region vulnerable to severe erosion, with an average soil erosion modulus of 12.56 t ha−1 a−1 [27]. Consequently, nutrient deficits are common in dry-hot valleys, with greater soil erosion leading to a greater deficit of nutrients for plants [28]. To mitigate and control soil erosion, afforestation has been carried out in many zones of the dry-hot valley over the last 50 years, and impressive outcomes have been achieved [29,30]; however, on some seriously degraded sloping land, afforestation has proved ineffective [31]. To understand how soil erosion thickness affects N dynamics under vegetation restoration, we established field plots simulating different erosion thicknesses on sloping arable land in a typical dry-hot valley, where vegetation succession has been allowed to proceed naturally, and we carried out persistent monitoring to elucidate the effects of soil erosion thickness on N dynamics. Our objectives were (1) to explore the variation in TN, NH4+-N and NO3−-N dynamics in response to soil erosion thickness and (2) to determine whether soil erosion thickness drives N dynamics.
Study Site
The plots of simulated soil erosion thickness were established in an abandoned cornfield at the Yuanjiang Dry-Hot Valley Water and Soil Conservation Observation and Research Station of Yunnan Province (23°58′5″ N, 101°38′55″ E) in southwestern China. This station is located in a typical dry-hot valley of the Yuanjiang-Red River Basin. This region has an average elevation of 542 m and slope of 27° and a typical plateau monsoon climate, with a multi-year average temperature of 23.9 °C, precipitation of 781 mm and evaporation of 2892 mm. From October 2020 to September 2021, the total precipitation was 651.30 mm, and the average temperature was 24.64 °C (Figure S1). As such, the region is extremely arid and has two distinct seasons, namely, the dry and rainy season [32]. The latter (late May to mid-October) receives more than 80% of yearly rainfall, whereas the dry season lasts from late October to mid-May of the following year [33]. Soils of the area are classified as Ferralosols in Chinese Soil Taxonomy [34] and Ultisols in United States Soil Taxonomy [35]. The vegetation types in this study area are mainly shrubs and grass bushes. Based on our vegetation survey of the simulated erosion thickness plots, the dominant species are Bidens pilosa, Cajanus scarabaeoides, Heteropogon contortus and Taraxacum mongolicum.
Establishment of the Simulated Erosion Thickness Plots
The pre-field survey results indicated that the effective soil layer thickness of sloping arable land in the dry-hot valley was mostly around 40 cm. In April 2020, the method of "Cutting-and-Filling" was used to establish artificial simulated erosion plots. Each plot was 2 × 2 m in size, and all had the same soil, slope, aspect, elevation and geomorphic location. According to the soil profile characteristics of sloping arable land without erosion, the levels of erosion thickness were set to 0, 10, 20, 30 and 40 cm, corresponding to soil erosion intensities of no erosion (control), light erosion, moderate erosion, strong erosion and severe erosion, respectively. Each treatment was replicated 10 times, for a total of 50 simulated plots (Figure 1), where the treatments were randomly assigned [15,36,37].
Principles of the Construction and Simulation Process of the Erosion Plots
In cultivated land, surface soil in the cultivated horizon is easily lost through erosion, while the plow pan soil is usually plowed and mixed into the original cultivated horizon.
As a result, the thickness of the cultivated horizon may remain constant, but its composition continues to change. Thus, the impact of tillage on soil layers in cultivated land should be considered when simulating erosion thicknesses. The original soil profile was defined as the h_0, h_1, h_2, h_3, …, and h_i layers; h_0 represented the original cultivation horizon, defined at 20 cm. We hypothesized that the annual soil erosion thickness was equal to the thickness of soil from the plow pan to the cultivation horizon and that the annual soil erosion thickness had no interannual variation. After n years of erosion, the thickness of the original i-th soil layer remaining in the cultivated horizon can be calculated using the formula (Equation (1)) of Wang et al. (2009) and Hou et al. (2014) [36,37].
where h′_i is the thickness of the original i-th soil layer remaining in the cultivated horizon (cm) after it has been plowed and eroded for n years; h_i is the soil thickness of the original i-th layer (cm), i = 0, 1, 2, 3, …, n; d is the average annual erosion thickness, defined as 0.1 cm a−1 for the Yuanjiang River Basin based on the report of the First National Census for Water in China; and m is the thickness of the cultivated horizon (cm), defined as 20 cm. The simulated erosion thicknesses are 0 (control), 10, 20, 30 and 40 cm. The thickness of each soil layer in the original soil profile below h_0 was defined as 10 cm for the h_1, h_2, h_3, …, and h_i layers. Thus, the soil components of the cultivated horizon (cm) for the simulated erosion thicknesses (10–40 cm) were calculated using Equation (1).
After n years of erosion, the components of the new cultivated horizon at different simulated erosion thicknesses were calculated with Equation (2).
where h is 20 cm, the thickness of the new cultivated horizon (cm); j is the erosion time (in years) at which the h_(i−1) soil layer was ploughed and eroded; and d and m are as above.
The thicknesses of the original soil layers remaining in the new cultivated horizon for the different simulated erosion thicknesses are given in supplementary material Table S2. For example, for a simulated soil loss of 10 cm, the remaining thickness of the original cultivated horizon h_0 in the new cultivated horizon was computed as 12.12 cm, with the new cultivated layer now extending 7.88 cm into the original 20–30 cm layer. Similarly, the inferred thickness of the original cultivated horizon after 20 cm of soil loss and continued annual mixing of topsoil and subsoil material by tillage would be 7.34 cm. The corresponding value for an erosion thickness of 40 cm would be 2.69 cm (Table S2). Although the components of the cultivated horizon at different erosion thicknesses can be calculated with Equation (2), it is very difficult to accurately cut a thin layer of soil for collection and mixing, because many of the remaining thicknesses in the cultivated horizons are only a few centimeters. Any layer of soil has the characteristics of length, width and height. The volume of the original soil layer remaining in the new cultivated layer was (h′_i/100) × a × b (m³), where a and b are the length and width of the simulated erosion plots (each simulated erosion plot was 2 m long and 2 m wide). Thus, the thickness of the original soil layer remaining in the new cultivated layer can be converted into the volume composition of the original soil layer (Table S3). Then, h′_i was re-expressed as a block of the original layer thickness h_i, with length a′ and width b′. Using this information, we projected a solid block with a volume of (h_i/100) × a′ × b′ (m³) and set it equal to (h′_i/100) × a × b (m³). Knowing the height and volume of the soil as (h_i/100) × a′ × b′, if the value of a′ or b′ is given (with a fixed length of 2 m in this experiment), the value of the other can be calculated. In this way, the problem of cutting a thin layer of soil for collection was transformed into a problem of cutting soil with constant thickness, length of 2 m and width of b′, which was very convenient to handle (Table S4).
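The worked values above imply a simple geometric decay of the original topsoil fraction under annual erosion plus tillage mixing. The following Python sketch is a hypothetical reconstruction inferred from the quoted Table S2 numbers (12.12, 7.34 and 2.69 cm), not code from the original study; the function names and the width-conversion example are illustrative.

```python
def remaining_original_horizon(total_loss_cm, d=0.1, m=20.0):
    """Thickness (cm) of the original cultivated horizon h0 still present in
    the m-cm cultivated horizon after total_loss_cm of erosion, assuming d cm
    is eroded and replaced by mixed-in subsoil each year."""
    n = total_loss_cm / d            # number of years of erosion
    return m * (1.0 - d / m) ** n    # a fraction d/m of h0 is lost each year

def cut_width(h_prime_i, h_i, a=2.0, b=2.0, a_prime=2.0):
    """Solve (h_i/100)*a_prime*b_prime = (h_prime_i/100)*a*b for b_prime, i.e.
    convert a thin remaining layer into an equal-volume block of full layer
    thickness h_i that is convenient to cut from the donor profile."""
    return (h_prime_i * a * b) / (h_i * a_prime)

for loss in (10, 20, 30, 40):
    print(f"{loss} cm eroded: h0 remaining = {remaining_original_horizon(loss):.2f} cm")
# -> 12.12, 7.34, 4.45 and 2.69 cm, matching the values quoted from Table S2.

# Width of the 10-cm-thick block equivalent to the 7.88 cm of the original
# 20-30 cm layer entering the new cultivated horizon at 10 cm of loss:
print(f"b' = {cut_width(7.88, 10.0):.2f} m")   # ~1.58 m at 2 m fixed length
```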
Parent Materials Added
Because different thicknesses of soil were removed, differences would inevitably arise in the height of each plot relative to the original horizontal ground. This would not only cause conditions such as temperature and moisture to differ among the erosion plots, but would also lead to water accumulation during the rainy season in plots with large erosion thicknesses. Considering that the original soil profile below 40 cm was parent material, proportionate amounts of subsoil material were added above 40 cm in the simulated erosion plots to match each loss of topsoil and ensure that the surface of each erosion plot was at the same level. For example, when the erosion thickness was 10 cm, soil parent material with a thickness of 10 cm was added above the original 40 cm profile. The construction process and established plots are shown in Figure 1. After the establishment of the simulated erosion plots, vegetation was allowed to regrow through natural succession.
Soil and Plant Samples, Collection and Measurement
Soil samples were collected from within the simulated erosion plots on 8 October 2020 and 8 April 2021, respectively. Five soil cores were taken from the upper soil layer (0–20 cm) in each plot using a soil auger (5 cm diameter) and mixed into one composite soil sample after removing any visible debris and stones. Thus, a total of 50 soil samples were collected and taken to the laboratory. There, a portion of each composite sample was stored in a refrigerator at 4 °C for the determination of NH4+-N, NO3−-N and dissolved organic carbon (DOC); another portion was placed in a 2 mL centrifuge tube and stored in a freezer at −80 °C for later molecular analysis. The leftover portions of the composite samples were air-dried and then passed through 2.0 and 0.149 mm sieves for the subsequent determination of soil physical and chemical properties. One quadrat of 0.5 × 0.5 m per plot was sampled for the determination of aboveground biomass. The quadrat was chosen randomly, and all aboveground biomass within the quadrat was cut at soil level. A total of 50 samples were collected in vellum bags.
Soil particle size distribution, namely, sand (2–0.02 mm), silt (0.02–0.002 mm) and clay (<0.002 mm) [38,39], was measured by applying the hydrometer method [40]. Soil pH was measured in a 1:2.5 soil/water suspension with a pH meter (METTLER-S220, Labcan Scientific Supplies Co., Ltd., Shanghai, China). Soil moisture (SM) was determined by oven-drying the soil at 105 °C to a constant weight. Soil organic carbon (SOC) was measured by converting total organic carbon into CO2 via high-temperature combustion and catalytic oxidation with an organic carbon analyzer (VarioTOC, Elementar, Langenselbold, Germany). TN was determined using the Kjeldahl method, while total phosphorus (TP) and total potassium (TK) were determined using the molybdenum antimony colorimetric and flame photometer methods, respectively; available phosphorus (AP) was quantified using a spectrophotometer; available potassium (AK) was quantified using a flame photometer after preparing the samples with an ammonium acetate solution [41]; finally, DOC, NH4+-N and NO3−-N were measured on a continuous-flow AutoAnalyzer (AA3, Bran-Luebbe, Norderstedt, Germany). The plant samples were placed in a constant-temperature drying oven at 105 °C for 30 min, dried at 65 °C to constant weight and then reweighed to calculate the aboveground biomass.
Statistical Analysis
Shapiro-Wilk (S-W) and homogeneity tests were used to check the data for normal distribution and homogeneity of variance. If the variance was homogeneous, a one-way analysis of variance (ANOVA), followed by Fisher's least significant difference (LSD) test, was applied to determine differences in soil physicochemical properties, mineral nitrogen, aboveground biomass and microbial alpha diversity among the five erosion thickness treatments. These analyses were implemented in SPSS 24.0 for Windows (SPSS Inc., Chicago, IL, USA). The beta diversity analysis was based on unweighted UniFrac data [44]. Nonmetric multidimensional scaling (NMDS) was used to infer patterns in microbial community composition within and among the five erosion thickness treatments, using Bray-Curtis distances at the genus level, with the "vegan" package for R software (v4.1.1; http://cran.r-project.org/, accessed on 15 October 2022). Using this package as well, PERMANOVA (permutational multivariate analysis of variance), based on 999 permutations, was used with Bray-Curtis distances and the Adonis function to test for significant differences in soil microbial community composition among the different erosion thickness treatments.
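For readers without access to SPSS or the R vegan package, the sketch below reproduces the same analysis steps (normality check, one-way ANOVA, and non-metric MDS on Bray-Curtis distances) with SciPy/scikit-learn stand-ins; the toy data, group means and random seed are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import shapiro, f_oneway
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
# Toy soil-property values: 10 replicate plots per erosion thickness (cm).
groups = {t: rng.normal(3.0 - 0.02 * t, 0.3, size=10) for t in (0, 10, 20, 30, 40)}

for t, vals in groups.items():
    print(t, "cm: Shapiro-Wilk p =", round(shapiro(vals)[1], 3))
print("One-way ANOVA:", f_oneway(*groups.values()))

# NMDS on genus-level Bray-Curtis distances (toy abundance table).
abundance = rng.poisson(5, size=(50, 30)).astype(float)   # 50 plots x 30 genera
bray = squareform(pdist(abundance, metric="braycurtis"))
nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed", random_state=0)
coords = nmds.fit_transform(bray)   # 2-D ordination, one point per plot
```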
To explore the complex relationships between soil thickness, soil properties, plant characteristics and nitrogen dynamics, we built a structural equation model (SEM). The partial-least-squares (PLS) method was used for the SEM's path analysis (PLS-PA). To estimate the accuracy of the PLS parameter estimates, nonparametric bootstrapping was performed, with 95% bootstrap confidence intervals generated to determine whether the estimated path coefficients were significant. All predictors in the PLS-PA were first standardized before implementing this analysis in R v4.1.1 using the "plspm" package (v4.1.1; http://cran.r-project.org/, accessed on 4 December 2022) [45,46].
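The bootstrap logic can be illustrated without the plspm package itself; the sketch below bootstraps a single standardized path coefficient with plain least squares. The toy data and effect size are assumptions; in the actual analysis the resampling is applied to the full PLS path model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
erosion = np.repeat([0.0, 10.0, 20.0, 30.0, 40.0], 10)        # predictor
mineral_n = 3.0 - 0.04 * erosion + rng.normal(0.0, 0.4, n)    # toy outcome

def std_path_coef(x, y):
    """Slope of y on x after standardizing both (a simple path coefficient)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return np.polyfit(x, y, 1)[0]

boot = [std_path_coef(erosion[idx], mineral_n[idx])
        for idx in (rng.integers(0, n, n) for _ in range(2000))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"path = {std_path_coef(erosion, mineral_n):.2f}, "
      f"95% bootstrap CI = [{lo:.2f}, {hi:.2f}]")   # significant if CI excludes 0
```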
Soil Properties and Aboveground Biomass
Most of the soil physicochemical properties were significantly affected (p < 0.05) by soil erosion thickness. Specifically, during the rainy season, with increasing erosion thickness, the silt, SOC, DOC, TP, AP and AK contents decreased (p < 0.05), whereas the sand and TK contents increased (p < 0.05). The dry season showed the same pattern as the rainy season except for SOC (Figure 2). Aboveground biomass decreased as the erosion thickness increased, with the biomass being much lower at the 40 cm thickness than under the other treatments (p < 0.05; Figure 3).
In all five treatments, the dominant genera of microorganisms changed greatly between the dry and rainy seasons. Specifically, except for Diversispora, all the dominant genera of AMF disappeared in the dry season (Figure S3). As for nifH genes, Bradyrhizobium was the dominant genus in both the rainy and dry seasons, but Azospirillum, Methylosinus, Agrobacterium, Rubrivivax, Pseudomonas and Anaeromyxobacter disappeared in the dry season, and Zohydromonas, Skermanella, Azotobacter and Beijerinckia appeared (Figure S3). The NMDS analysis indicated that the community composition of AMF and nifH genes varied significantly in response to the different erosion thicknesses in the rainy season (p < 0.05) but not in the dry season (p > 0.05; Figure 4). An analysis of similarities confirmed this result (Table 1). The Shannon index of AMF decreased with greater erosion thickness, whereas that of nifH genes increased (p > 0.05; Figure S4).
Soil Nitrogen Dynamics
The results showed that TN content decreased with greater soil erosion thickness, decreasing significantly at 10 cm in the rainy season and 30 cm in the dry season when compared with 0 cm (p < 0.05; Figure 2D) at the early stage of vegetation self-restoration. The contents of NH4+-N and NO3−-N in soil decreased significantly in response to increased erosion thickness (p < 0.05; Figure 5).
NH4+-N content declined from 3.11 and 2.68 mg·kg−1 under no erosion (0 cm thickness) to 2.27 and 1.59 mg·kg−1 at a 40 cm thickness of soil erosion in the rainy and dry season, respectively, while the NO3−-N content correspondingly fell from 0.33 and 0.27 mg·kg−1 to 0.09 and 0.06 mg·kg−1. Compared with the control, at the 10, 20, 30 and 40 cm levels of soil erosion thickness, NH4+-N was reduced by 5.84%, 8.89%, 10.14% and 27.19% in the rainy season and 10.51%, 26.23%, 29.18% and 40.68% in the dry season, while NO3−-N was reduced by 26.30%, 53.40%, 56.86% and 72.17% in the rainy season and 17.68%, 23.38%, 68.52% and 78.99% in the dry season (Figure 6A). The NO3−-N:NH4+-N ratios were always <1, irrespective of soil erosion thickness and season (Figure 6B).
Interaction Effects of Soil Erosion, Soil and Plant Properties on Nitrogen
The SEM explained 89% of the variation in mineral nitrogen in the initial stage of vegetation self-restoration on sloping farmland (Figure 7). Soil erosion thickness and dry-wet season change were the important driving factors for mineral nitrogen content, and soil erosion thickness indirectly affected mineral nitrogen through negative effects on total nitrogen, carbon content and nifH genes. The dry-wet season change affected mineral nitrogen via AMF and nifH genes. We also found that AMF promoted nifH genes in eroded soil, which can be expected to benefit nitrogen fixation.
Effects of Soil Erosion Thickness on Soil Nitrogen Dynamics
Our results showed that both soil mineral nitrogen contents decreased with increasing erosion thickness (Figure 5). Similarly, in previous experiments simulating soil erosion, erosion thickness reduced soil mineral nitrogen in the black soil and red soil of China [15,36]. Generally, the more severe the erosion, the lower the mineral nitrogen content [47]. This impact is largely ascribed to soil resource losses, which directly reduce the TN content (Figure 2D). Both NH4+-N and NO3−-N reached their minimum values under the most severe erosion level (40 cm thickness). However, Qiu et al. (2021) reported that mineral nitrogen fell to its lowest values under moderate and severe erosion conditions in loess soils and black soils [16]. We found an increasing rate of NH4+-N and NO3−-N decline with soil erosion thickness, but the slope for NO3−-N surpassed that for NH4+-N; this strongly suggests that NO3−-N may be more susceptible to soil erosion than NH4+-N (Figure 6A). Furthermore, the fact that NO3−-N can rapidly be leached from soils could be another reason for its greater loss compared with NH4+-N. Accordingly, N limitation would tend to increase with soil erosion thickness. Our N-addition experiment showed that N limitation exists in the same simulated soil erosion plots and that it is exacerbated by greater soil erosion thickness (unpublished data). The NO3−-N:NH4+-N ratios never exceeded a value of 1 in the five soil erosion thickness treatments, also indicating that N was generally scarce (Figure 6B).
Dominant Drivers of Soil Nitrogen
Unlike the bulk of organic nitrogen, most mineral forms of nitrogen are quite soluble in water and may be easily lost from soils through leaching and volatilization [48]. Using structural equation modeling, we found that soil erosion thickness and dry-wet season change are the key drivers of soil mineral nitrogen content at the early stage of vegetation self-restoration on sloping arable land in the dry-hot valley (Figure 7). Many researchers have reported that soil erosion and runoff can lead to losses of soil chemical properties, especially total nitrogen [5]. Here, we found a pronounced negative relationship between soil erosion thickness and soil chemical properties; of particular interest was the positive relationship observed between soil chemical properties and mineral nitrogen. This may be because the activity of diazotrophs usually decreases at high N availability and increases under N deficiency. The nifH gene is often used as a molecular indicator for detecting diazotrophs because it is an important structural gene for nitrogenase [49]. Although soil erosion thickness reduces the relative abundance of nifH genes, nifH still positively impacts mineral nitrogen, implying that bolstered nifH abundance and activity may augment the available nitrogen in soil through nitrogen fixation, especially where soil erosion is severe and soil available resources are scarce.
Our study found that the effects of dry-wet seasonal changes on mineral nitrogen are mainly mediated by AMF and nifH genes (Figure 7). The relative abundance of microorganisms exhibits specificity in response to seasonal change. For AMF, soil moisture increases in the rainy season, which benefits vegetation growth and thus subsequent AMF infection [50]. In the dry season, by contrast, despite abundant mycelial colonization of roots, the inability of plant roots to obtain enough water and to absorb and transport nutrients with low mobility resulted in the decrease in the dominant AMF genera. For nifH genes, during the transition from the rainy season to the dry season, genera that are well adapted to the environment are expected to replace those that are poorly adapted.
We also found that AMF had a catalytic effect on nifH genes in eroded soil (Figure 7), which may be beneficial for nitrogen fixation. This result suggests that AMF are conducive to promoting the functional activity of nifH genes during the early stage of vegetation self-restoration on sloping arable land under differing soil erosion thickness conditions in the dry-hot valley [51]. This finding also indicates that the interaction between AMF and nifH genes is beneficial for the accumulation of available nitrogen, which is expected to mitigate the negative effects of soil erosion on mineral nitrogen. Work by Zhu et al. (2018) and Yu et al. (2021) also pointed to the interaction between AMF and nifH genes having a pivotal regulatory role in the soil nitrogen cycle [52,53]. The mycelial network of AMF can extend far from the roots to obtain more water and nutrients. Thus, even if the top layer of soil is eroded, AMF can still reach and obtain the resources they need from deeper soil [54].
We should emphasize that our simulated soil erosion thickness is not equal to a "straightforward" erosion process of topsoil. Unfortunately, it is difficult to find differing soil erosion thicknesses corresponding to different soil erosion levels at a local scale with the same soil type and topography, or even under the same climate. We also acknowledge the limitations of the simulation approach followed, which precluded us from demonstrating the actual soil erosion process and its impact on nitrogen. However, we believe our study provides a robust snapshot of the crucial role of soil erosion thickness in governing N availability, by revealing how soil erosion thickness can affect N forms in the early stage of vegetation self-restoration on typical sloping arable land in the dry-hot valley.
Conclusions
This outdoor experimental study investigated the effects of soil erosion thickness and dry-wet season change on soil mineral nitrogen early in vegetation self-restoration on sloping arable land in the dry-hot valley. Our findings provide compelling evidence that soil erosion thickness and dry-wet season change are important factors determining the possible fates of soil mineral nitrogen. We also found that AMF promoted nifH genes in eroded soil. We suggest that soil erosion thickness and sampling time should not be neglected when studying soil mineral nitrogen, and even nitrogen cycling, in the early stage of vegetation self-recovery in dry-hot valley environments. Our work highlights the importance of elucidating the drivers in nitrogen-limited ecosystems, which is beneficial for devising and adopting proper measures for the effective restoration of vegetation.
Supplementary Materials:
The following supplementary materials are available online at https://www.mdpi.com/article/10.3390/microorganisms12081546/s1: Table S1: Soil and plant properties data for different erosion thicknesses in the rainy and dry seasons; Table S2: Thickness of the original soil layer remaining in the new cultivated horizon for different simulated erosion thicknesses; Table S3: Remaining volume of the original soil layer in a cultivated horizon under different erosion thicknesses; Table S4: Soil mixing process of cultivated soil composition under different erosion thicknesses;
Figure 1.
Figure 1. Images of the simulated erosion plots with differing erosion thickness. Steps in the construction process of simulated erosion plots (A,B); side and overhead views of the 50 established plots (C,D); 10 plots per erosion thickness level.
Figure 2.
Figure 2. Soil physicochemical properties under differing erosion thicknesses. Different letters denote significant differences (p < 0.05) between the treatments according to Fisher's least significant difference (LSD). ((A) the content of sand; (B) the content of silt; (C) the content of clay; (D) the content of TN; (E) the content of SOC; (F) the content of DOC; (G) pH; (H) SM; (I) the content of TP; (J) the content of AP; (K) the content of TK; (L) the content of AK). Abbreviations: TN, soil total nitrogen; SOC, soil organic carbon; DOC, dissolved organic carbon; SM, soil moisture; TP, soil total phosphorus; AP, available phosphorus; TK, soil total potassium; AK, available potassium.
Figure 3.
Figure 3. Aboveground plant biomass under differing erosion thicknesses. Different letters denote significant differences (p < 0.05) between treatments as determined by comparisons using Fisher's least significant difference (LSD).
Figure 4.
Figure 4. Nonmetric multidimensional scaling (NMDS) of soil microbial community composition in the different soil erosion thickness treatments (based on the Bray-Curtis distance). The 10, 20, 30 and 40 cm inset labels denote the simulated erosion thickness applied, with 0 cm serving as the control ((A) rainy season, AMF; (B) dry season, AMF; (C) rainy season, nifH genes; (D) dry season, nifH genes).
Figure 6.
Figure 6. The rate of decrease (A) and the ratio (B) of mineral nitrogen across differing levels of erosion thickness. Abbreviations: NH4+-N, ammonium nitrogen; NO3−-N, nitrate nitrogen.
Figure 7.
Figure 7. Structural equation model (A) showing the direct and indirect effects between mineral nitrogen and its key drivers, and (B) the standardized total effects. Note: continuous arrows and dashed arrows indicate significant and non-significant relationships, respectively. The significance level is denoted by * (p < 0.05), ** (p < 0.01), and *** (p < 0.001). Numbers adjacent to arrows indicate actual p-values; an arrow's width is proportional to the size of its path coefficient. The red and black arrows indicate positive and negative relationships, respectively. Abbreviations: SOC, soil organic carbon; DOC, dissolved organic carbon; TN, soil total nitrogen; AMF, arbuscular mycorrhizal fungi; NH4+-N, ammonium nitrogen; NO3−-N, nitrate nitrogen.
Figure S1:
Total precipitation and average temperature during the experiment; Figure S2: Rarefaction curves of arbuscular mycorrhizal fungi (AMF) and diazotrophs (nifH genes) in the soil of the five different erosion thickness treatments ((A) rainy season, AMF; (B) dry season, AMF; (C) rainy season, nifH genes; (D) dry season, nifH genes); Figure S3: Relative abundances of the dominant genera of arbuscular mycorrhizal fungi (AMF) and diazotrophs (nifH genes) in the soil of the five different erosion thickness treatments ((A) rainy season, AMF; (B) dry season, AMF; (C) rainy season, nifH genes; (D) dry season, nifH genes); Figure S4: Soil microbial alpha diversity indices under differing erosion thickness. Different letters denote significant differences (p < 0.05) between the treatments according to Fisher's least significant difference (LSD) ((A) rainy season, AMF; (B) dry season, AMF; (C) rainy season, nifH genes; (D) dry season, nifH genes).
Table 1 .
Analysis of similarities between arbuscular mycorrhizal fungi (AMF) and Diazotroph (nifH genes) community composition in different soil erosion thicknesses (based on 999 permutations) (R 2 is the proportion of variance explained, and p < 0.05 was considered significant). | 2024-07-31T15:04:08.414Z | 2024-07-28T00:00:00.000 | {
"year": 2024,
"sha1": "cc6c53b212735dde80ef51996d163920ae397d55",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2607/12/8/1546/pdf?version=1722912984",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9301301dbd62de9b6e7a672f887cd13e0d31ebc2",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
221744555 | pes2o/s2orc | v3-fos-license | A novel press-fit minimally-invasive symphysiodesis technique
Objective Instability of the pubic symphysis often results in a poor outcome and reduced mobility of the patient. In some cases, an arthrodesis of the pubic symphysis is required. To date, no data have been published on how many of these procedures are performed annually, and there are also no data on the outcome after this extensive surgery. Methods We developed a novel surgical technique to perform arthrodesis of the pubic symphysis via a minimally invasive approach. For this purpose, we used modified instruments and transplanted a cylindrical bone substitute into the pubic symphysis, without an extensive approach and without dissecting the anterior or posterior symphyseal ligaments. Results Using this novel technique, radiological findings after the procedure confirmed a minimally invasive symphysiodesis. Conclusion Thus, this genuinely minimally invasive surgical technique appears to be a promising advancement for arthrodesis of the pubic symphysis.
Introduction
Chronic anterior pelvic ring instability has been associated with chronic pain and often results in poor patient outcome [1].
Instability of the pubic symphysis often occurs posttraumatically, for example after a symphyseal rupture related to pelvic ring injuries [1][2][3]. Chronic instability can also result from non-traumatic conditions like osteitis pubis [1] or chronic sporting overuse with repeated adductor tendon injuries [4]. Furthermore, osteitis pubis resulting from rheumatologic disorders and puerperal symphyseal rupture after delivery have been reported, with an incidence of 1 in 300 to 1 in 30,000 cases [1,[5][6][7]. The treatment of these patients is often challenging, and implant failure after plate stabilization is well described in the literature. However, these complications are usually tolerated by most authors [1]. For the treatment of non-traumatic symphyseal ruptures, there are only a few reported cases [1], but there remain some complex cases in which even plate stabilization does not result in pain-free walking. For these specific cases, an arthrodesis of the pubic symphysis (symphysiodesis) has been advocated, with promising data from the published cases [1,[8][9][10]. For this technique, usually an open modified Pfannenstiel approach is used. The anterior, posterior and cranial ligaments of the symphysis are dissected, and the bony parts of the pubic symphysis are removed on both sides. Subsequently, a tricortical autologous bone graft from the iliac crest is harvested and transplanted into the prepared area. Usually, the symphysiodesis is stabilized using one cranial plate or a cranial plate with an additional "bumper" plate [8,11]. Because this surgical procedure can result in approach-related complications such as wound infection or prolonged healing, we established a novel surgical technique which can be performed minimally invasively. For this, existing instruments were modified so that a minimally invasive symphysiodesis could be performed, using a cylindrical bone substitute on a human specimen, without further dissecting the stabilizing anterior or posterior symphyseal ligaments. To our knowledge, this is the first attempt to address symphyseal instability using a press-fit cylindrical bone substitute for arthrodesis. In the future, we presume that chronic instability of the pubic symphysis can be reliably treated using this minimally invasive technique in addition to our published minimally invasive stabilization technique of the pubic symphysis using an internal fixator [2], or even without osteosynthesis for stabilization.
Instruments
For this technical protocol, a novel additional guide sleeve for the existing Bone Block Harvesting system was created in collaboration with KARL STORZ (Tuttlingen, Germany) (Fig. 1). To fit the inner diameter of 8 mm and to allow sufficient guidance, two fins were created at the inferior end. The guide sleeve itself was also cannulated to allow the placement of a guidewire prior to the placement of the guidewire sleeve.
Specimens
We investigated one fixed, complete female human cadaver. The cadaver had no history of pelvic ring fractures or pelvic ring instability.
Surgical procedure
The human specimen was positioned in supine position on a radiolucent table. Before the skin incision, fluoroscopy was used in anterior-posterior, inlet and outlet position to gain adequate orientation during procedure.
Following these preparations, a scalpel blade was placed at the top of the pubic symphysis in both inlet and outlet view. After that, a transverse skin incision of 2 cm was performed. The subcutaneous tissue was dissected by palpating the bony pubic symphysis superior and then spreading the tissue using scissors.
Next, a 2.0 mm guide wire was positioned inside the pubic symphysis using a motor drill, and fluoroscopy in the inlet and outlet positions verified its central position (Fig. 2a, b). After correct placement of the guide wire, the guide probe was placed over it to achieve central positioning (Fig. 2c). Then the bone harvesting instrument was driven 3 cm deep into the pubic symphysis (Fig. 2d, e) in order to remove a 10 mm bone–symphysis–bone cylinder (Fig. 2f). After that, the created cylinder (diameter 10 mm × length 30 mm) and the guide wire were removed. A cylinder (diameter 13 mm × length 30 mm) made of bone substitute (Synbone, Foam Block, 5 PCF, Zizers, Switzerland) was prepared using the novel bone harvesting instrument (KARL STORZ, Tuttlingen, Germany) (Fig. 3a). The bone substitute was impregnated with zinc paint to increase visibility under fluoroscopy and for later anatomical dissection for study purposes. Subsequently, the cylinder was placed at the artificial opening superior to the pubic symphysis and then successively introduced with light hammer blows into the created cavity under fluoroscopic guidance (Fig. 3b). The final fluoroscopic imaging showed no signs of displacement of the pelvic ring and a proper press-fit position of the bone substitute within the bony confines of the pubic symphysis (Fig. 3c, d).
Anatomical dissection after the procedure
After completing the surgical procedure, we dissected the pubic symphysis to analyze the position of the implant and the perisymphyseal ligaments (Fig. 3e). There were no signs of ligament insufficiency in the specimen. The implanted bone substitute showed central positioning in both the anterior–posterior and inferior–superior axes, as shown in the drawing (Fig. 3f).
Discussion
Chronic instability of the pubic symphysis may result from inadequate posttraumatic healing of the symphyseal ligaments, degenerative changes due to chronic sporting overuse and adductor tendon injuries [4], osteitis pubis [10] or post-partum hormonal imbalance [8,9]. These changes often result in persistent pain and may cause severe immobility. In an early stage, revised stabilization of the pubic symphysis using plate fixation or novel minimally invasive methods may provide enough stability for ligamentous healing [10]. However, there are still patients with persistent chronic instability, which results in a high demand for pain medication and often poor quality of life. In such cases, arthrodesis of the pubic symphysis (symphysiodesis) is often the last option to achieve adequate stability [1,8]. Pohlemann & Tscherne described the implantation of a tricortical autologous bone block from the iliac crest, which was modified by Giannoudis et al. using a T-shaped bone graft [8,11]. To stabilize the bone block and the symphysis pubis, usually a cranial 4.5 dynamic compression plate (DCP) or 3.5 symphyseal locking plate (SLDCP), often in combination with a second anterior "bumper" plate, is used [8,11,12]. Although the surgical technique was described over 20 years ago, only a few specialists treat a small number of cases per year, and no data about complication rates or the outcome after this procedure have been published [13]. An extensive surgical approach, the modified Pfannenstiel incision, is usually used for this technique. Considering this, local regional pain syndromes (Pfannenstiel syndrome) [13] and postoperative healing problems can occur. The resection of the symphyseal bone mass usually depends on the subchondral sclerosis of the symphyseal bone and is usually performed using a chisel until the cortical bone bleeds [11]. This allows the surgeon to modify the depth of cortical bone resection, but it also results in soft tissue damage, especially of the remaining anterior or posterior symphyseal ligaments. For the treatment of osteochondral defects, the use of press-fit cylinders has been well established in past years [14,15]. However, its use in the treatment of pelvic non-unions has been described in only a few cases. Herath et al. described the use of Surgical Diamond Instruments (SDI) to create press-fit cylinders for the treatment of posterior pelvic ring non-unions [15]. Based on the results of our institution for the posterior pelvic ring, we created the novel minimally invasive symphysiodesis method described in this article. Using the cannulated and modified bone graft harvesting tool, safe resection and harvesting of the symphyseal bone was possible. At the end of the procedure, the anatomical dissection showed the symphyseal ligaments still completely intact. We estimate that leaving the symphyseal ligaments intact and performing the symphysiodesis with a press-fit cylinder will result in high stability of the bone graft and may lead to sufficient osseous healing without extensive stabilization, as described in previous studies for articular cartilage [8]. In posttraumatic cases or cases with persistent widening, reduction of the symphysis prior to harvesting the symphyseal bone cylinder may help to achieve the same effect with the press-fit bone cylinder transplant. However, in this report we did not analyze any biomechanical or clinical data.
Limitations
In this study we analyzed the feasibility of a minimally invasive surgical technique for arthrodesis of the pubic symphysis. Thus, biomechanical stability can only be predicted and has to be analyzed in further studies.
Conclusion
The novel minimally invasive surgical technique for creating a symphysiodesis described in this report is technically feasible and might be a promising alternative to the more invasive approaches. Furthermore, we hypothesize that this novel technique could in the future also be used without additional extensive stabilization, owing to the press-fit purchase of the bone graft and its suspected high biomechanical stability. | 2020-09-17T13:50:15.882Z | 2020-09-17T00:00:00.000 | {
"year": 2020,
"sha1": "b261fb3e6d0170d7663087497c1b8301a219b772",
"oa_license": "CCBY",
"oa_url": "https://jeo-esska.springeropen.com/track/pdf/10.1186/s40634-020-00284-0",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b261fb3e6d0170d7663087497c1b8301a219b772",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
203647420 | pes2o/s2orc | v3-fos-license | Ensemble and Deep-Learning Methods for Two-Class and Multi-Attack Anomaly Intrusion Detection: An Empirical Study
Cyber-security, as an emerging field of research, involves the development and management of techniques and technologies for the protection of data, information and devices. Protection of network devices from attacks, threats and vulnerabilities, both internal and external, has led to ceaseless research into Network Intrusion Detection Systems (NIDS). Therefore, an empirical study was conducted on the effectiveness of deep learning and ensemble methods in NIDS, thereby contributing to knowledge by developing a NIDS through the implementation of machine- and deep-learning algorithms in various forms on a recent network dataset that contains more recent attack types and attacker behaviours (the UNSW-NB15 dataset). This research involves the implementation of a deep-learning algorithm–Long Short-Term Memory (LSTM)–and two ensemble methods (a homogeneous method–using an optimised bagged Random Forest algorithm, and a heterogeneous method–an Averaged Probability method of Voting ensemble). The heterogeneous ensemble was based on four (4) standard classifiers with different computational characteristics (Naïve Bayes, kNN, RIPPER and Decision Tree). The respective model implementations were applied to the UNSW-NB15 dataset in two forms: as a two-class attack dataset and as a multi-attack dataset. LSTM achieved a detection accuracy rate of 80% on the two-class attack dataset and 72% on the multi-attack dataset. The homogeneous method had accuracy rates of 98% and 87.4% on the two-class attack dataset and the multi-attack dataset, respectively. Moreover, the heterogeneous model had detection accuracy rates of 97% and 85.23% on the two-class attack dataset and the multi-attack dataset, respectively. Keywords—Cyber-security; intrusion detection system; deep learning; ensemble methods; network attacks
I. INTRODUCTION
The proliferation of information and the technology used for enabling communication in everyday life has prompted an immense need for computer security [1]. The impact of Information and Communication Technology on economic growth, social wellbeing, private and public business growth, and national security is enormous, as it provides the devices that propagate digital communications among hosts. The overall protection of these hosts (computers, network devices, network infrastructures, etc. [2]), as well as of data and information, against cyber-attacks, worms, potential leakage and information theft is fundamental to cyber-security [3].
The level of research on the development of Intrusion Detection Systems (IDS) continues to increase as attacks abound and attackers continue to evolve in practice. As a result, IDSs must evolve to prevail over the dynamic malicious activities carried out over a network. The development of a Network Intrusion Detection System (NIDS) is critical for monitoring the network pattern behaviour of a computer networked system [4]. Typically, an IDS monitors network packets to facilitate the identification of attacks and is basically categorised as either misuse/signature-based or anomaly-based. A signature-based IDS matches attacks to previously known attacks, and an anomaly-based IDS uses a created normal profile of a user to flag any profile that deviates from the user's known behaviour [5].
Because of the unrelenting efforts of attackers to compromise a known network of computers and the new pattern of executing attacks and other malicious activities, the need for a robust, up-to-date IDS is imminent to adequately prevail against unknown attacks/threats or zero-day vulnerabilities.
As such, an empirical research study was conducted to develop an IDS that can address new types of attacks in modern networks using machine- and deep-learning algorithms. The contributions to knowledge produced during this research work are highlighted below:

1) The use of a more recent and complex network dataset, the UNSW-NB15 dataset, as input data for the development of an IDS;

2) Two (2) methods of implementing ensemble learning for the development of an IDS;

3) Implementation of a deep-learning technique (LSTM) for building a NIDS;

4) Development of two (2) categories of NIDS, i.e., two-class (normal and attack labels) and multi-attack (ten class labels).
Moreover, it is the intent of this research work to answer the following research questions:

1) How effective is the ensemble learning implementation of NIDS for detecting attacks, both in a two-class scenario and a multi-attack scenario?

2) How effective is the deep-learning implementation of NIDS for detecting attacks, both in a two-class scenario and a multi-attack scenario?

3) What peculiarities are found in the two-class and multi-attack datasets, and how do they affect the developed NIDS models?
II. RELATED WORKS
The research conducted by [6] presented a deep-learning method for developing a NIDS. The work proposed and implemented a Self-taught Learning (STL) deep-learning technique on the NSL-KDD dataset. When evaluated on training and test data, the STL model achieved 88.39% accuracy for the 2-class task and 79.10% for the 5-class task.
The work of [4] is closely related: the authors developed a multi-classification NIDS using the UNSW-NB15 dataset and implemented an online Averaged One-Dependence Estimator (AODE) and an online Naïve Bayes, with 83.47% and 69.60% accuracy, respectively.
Another research work, conducted by [7], reported the use of a deep neural network for developing a NIDS. The study implemented an LSTM-Recurrent Neural Network (RNN) to identify network behaviour as normal or affected based on past observations. KDDCup'99 was used as the dataset, and the work achieved a maximum efficiency of 93%.
The research work carried out by [8] developed four different IDS models using the RNN algorithm and tested them on the NSL-KDD dataset (binary and 5-class) to evaluate the models. The best binary-class model achieved 98.1% accuracy using a 1-hidden-layer BLSTM; for the 5-class task, 87% accuracy was achieved, also using a 1-hidden-layer BLSTM.
Using a deep autoencoder (AE) after extracting features via statistical analysis methods, [9] developed an IDS that achieved 87% accuracy on the NSL-KDD dataset.
The study of [10] focused on using machine learning methods for developing an IDS, employing the J48, MLP and Bayes Network (BN) algorithms to achieve an overall best accuracy of 93% with J48, 91.9% with MLP and 90.7% with BN on the KDD dataset.
III. METHODOLOGY

A. Dataset
Most research studies on the development of IDS use the KDDCup'99 dataset; however, this dataset is gradually becoming (if not already) obsolete because it does not contain most of the new forms of attack prevalent in modern computer networks. Reflection of contemporary threats and the inclusion of normal network packets are two important features of a high-quality NIDS dataset. Because attackers execute dynamic attacks daily, it is necessary to make use of a recent dataset to uncover new malicious activities in a network [11]. Thus, UNSW-NB15 was used in this study. The UNSW-NB15 data was generated using the IXIA PerfectStorm tool in the Cyber Range laboratory of the Australian Centre for Cyber Security, which captured sets of abnormal and modern-day normal network traffic. More details regarding the dataset's creation are given in [2]. Table I provides insights into the datasets used in this study.
As depicted in Table I, the dataset comprises 45 attributes, of which two (2) are dependent variables. Two subsets of data are obtainable from the original dataset according to the dependent variables: one subset was used to develop a two-class anomaly IDS, and the other was used to develop a multi-attack anomaly IDS. The distribution of the attacks is contained in the attack_cat attribute, and the label attribute comprises normal and attack instances, denoted as 0 and 1, respectively.
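As a minimal illustration of how the two subsets can be derived, the following Python/pandas sketch assumes the dataset has been exported to a CSV file; the file name is an assumption, and the column names 'label' and 'attack_cat' follow the description above. The paper itself carried out its data preparation in WEKA, so this is an illustrative alternative, not the authors' procedure.

```python
import pandas as pd

# Load the UNSW-NB15 data (file name is an assumption; point it at
# whichever CSV export of the dataset is in use).
df = pd.read_csv("UNSW_NB15.csv")

# Two-class subset: binary 'label' target (0 = normal, 1 = attack),
# with the multi-class target dropped.
two_class = df.drop(columns=["attack_cat"])

# Multi-attack subset: 'attack_cat' target (normal plus nine attack
# categories), with the binary target dropped.
multi_attack = df.drop(columns=["label"])

print(two_class["label"].value_counts())
print(multi_attack["attack_cat"].value_counts())
```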
Regarding the features, Table II presents the details of both the independent and target variables.
Moreover, in light of data pre-processing and the removal of redundant attributes, the first attribute, id, which serves as the index of the dataset, was removed because it is irrelevant, leaving the two-class and multi-attack datasets with 43 attributes each. Fig. 1 and Fig. 2 depict the data distribution for both subsets of the original dataset. Fig. 1 depicts the ten (10) class labels of the multi-attack dataset presented in Table I, with each label displayed in a different colour. Fig. 2 shows the two class labels as presented in Table I, with blue representing the normal labels and red representing the attack labels.

An ensemble method [12] combines the results produced by several contributing base learners (predictive models) via a combination method to make a final prediction based on aggregated learning. This method is typically implemented in two phases: the first phase is the construction of the various models, and the second phase is the combination of the estimates obtained from those models [13]. An ensemble method is said to be homogeneous when the contributing base learners are multiples of the same computational family; base learners in an ensemble model are standard classifiers. In this study, the homogeneous ensemble was implemented in the form of the Random-Forest (RF) algorithm. Random Forest is a bagging method that consists of a finite number of decision tree algorithms with the addition of a "perturbation" of the classifier used for fitting the base learners. In particular, RF uses "subset splitting": the ensemble makes use of only a random subset of the variables while building its trees. Because all of its base learners are decision trees, RF is a homogeneous ensemble.
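To make the homogeneous method concrete, a minimal sketch using scikit-learn's RandomForestClassifier is given below. The paper's experiments were run in WEKA with the parameter settings of its Table III; the synthetic data, hyperparameters and 80/20 split here are illustrative assumptions only.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the encoded UNSW-NB15 features (43 attributes,
# binary label); substitute the real two-class data in practice.
X, y = make_classification(n_samples=5000, n_features=43, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# A random forest bags decision trees and, at each split, considers only
# a random subset of the features -- the "perturbation" described above.
rf = RandomForestClassifier(n_estimators=100, max_features="sqrt",
                            random_state=0)
rf.fit(X_train, y_train)
print("Detection rate:", rf.score(X_test, y_test))
```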
Alternatively, a heterogeneous ensemble combines the results of base learners that have different learning methods or computational characteristics; that is, the contributing base learners belong to different categories of classification algorithms. The standard classifiers considered for the heterogeneous ensemble in this study are as follows: Bayes theory (Naïve Bayes), instance learning (k-Nearest Neighbour), rule-based learning (RIPPER) and tree methods (C4.5 Decision Tree). The voting combination method [14,15] was adopted in this study for building the heterogeneous ensemble. Voting is a straightforward method of combining the predictions of several different models, and it can be implemented in a variety of approaches, including majority vote, minority vote and average of probabilities. The Average of Probabilities method of voting [16] was selected for combining the results of the standard classifiers: the averaged class probabilities of the models provide the final prediction.
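A corresponding sketch of the heterogeneous ensemble is shown below; setting voting="soft" in scikit-learn averages the base learners' predicted class probabilities, which matches the Average of Probabilities method described above. RIPPER has no scikit-learn implementation, so only three of the paper's four base-learner families appear in this assumed, illustrative version.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, n_features=43, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# voting="soft" averages the predicted class probabilities across the
# heterogeneous base learners to form the final prediction.
vote = VotingClassifier(
    estimators=[("nb", GaussianNB()),
                ("knn", KNeighborsClassifier(n_neighbors=5)),
                ("tree", DecisionTreeClassifier(random_state=0))],
    voting="soft")
vote.fit(X_train, y_train)
print("Detection rate:", vote.score(X_test, y_test))
```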
DL is an advanced implementation of a neural network. A neural network is a simulation of the human brain, that is, a model of connected neurons, and is usually constructed with input, processing and output layers of neurons [17]. The processing layer, often referred to as the hidden layer, may contain one or more layers; a basic implementation of a neural network is the Multilayer Perceptron (MLP) [18]. DL is an advancement on the MLP [19], but with more sophisticated and densely connected neurons that are capable of representing and extracting features from data in a more advanced form and mapping them to the output [20,21]. Neural network implementations used for DL include, but are not limited to, the Convolutional Neural Network, the RNN and Long Short-Term Memory (LSTM). In this study, the deep-learning method implemented was LSTM, a type of RNN. A typical LSTM [7] consists of a cell with input, output and forget gates, with which it captures order dependence and retains values over arbitrary time intervals.
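The sketch below shows one common way to apply an LSTM to tabular intrusion records, treating each record as a length-one sequence over its features. It uses Keras rather than the tooling reported in the paper, and the layer size, epoch count and random data are assumptions for illustration only.

```python
import numpy as np
from tensorflow import keras

# Random stand-in shaped like the two-class task:
# (samples, timesteps, features), each record a length-1 sequence.
rng = np.random.default_rng(0)
X = rng.random((1000, 1, 43)).astype("float32")
y = rng.integers(0, 2, size=1000)

model = keras.Sequential([
    keras.layers.Input(shape=(1, 43)),
    keras.layers.LSTM(64),           # cell with input, output, forget gates
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print("Training accuracy:", model.evaluate(X, y, verbose=0)[1])
```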
Using the three (3) data mining methods discussed above, several predictive models were developed on the aforementioned datasets. Model development follows the dataset selection, algorithm selection and method identification phases; the percentage-split process was used in this research work. Percentage split divides a given dataset into two parts: the first part is used for the training phase, wherein each algorithm builds or fits its model, and the second part is used for testing, whereby the fitted models make predictions on the independent variables of the disjoint test set. A percentage value therefore determines how the dataset is split into the training and test portions. Moreover, with two datasets (two-class and multi-attack), each selected algorithm was fitted on each dataset type and the resulting models were tested on the corresponding test sets, producing sets of models categorised as (i) two-class attack anomaly IDS and (ii) multi-attack anomaly IDS, each comprising three (3) separate models, one per method discussed above.
To summarise how the data mining methods were implemented and the NIDS models developed in this study, the proposed empirical framework is depicted in Fig. 3; the experimental results are presented in tables and charts and discussed extensively in the sections below.
C. Performance Evaluation Metrics
Following the model development stage, the developed models were evaluated according to the category to which they belonged. The two-class anomaly IDS models were evaluated using the following metrics [17]: detection rate, Area Under Curve (AUC), True Positive (TP), False Positive (FP), True Negative (TN) and False Negative (FN). The multi-attack anomaly IDS models were evaluated using the following metrics [18]: detection rate, kappa value and weighted AUC, TP, FP and F-measure. The multi-attack models were evaluated using weighted values because of the multiple class labels (ten in number), unlike the two-class anomaly IDS, which has just two classes (normal and attack) and is therefore a binary classification model.
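These metrics correspond directly to standard library routines; the short sketch below computes them with scikit-learn on invented toy predictions (the values are placeholders, not results from the paper).

```python
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             confusion_matrix, f1_score, roc_auc_score)

y_true = [0, 1, 1, 0, 1, 0, 1, 1]                  # toy ground truth
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]                  # toy hard predictions
y_prob = [0.2, 0.9, 0.4, 0.1, 0.8, 0.7, 0.6, 0.9]  # toy attack scores

print("Detection rate:", accuracy_score(y_true, y_pred))
print("AUC:", roc_auc_score(y_true, y_prob))
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("TP, FP, TN, FN:", tp, fp, tn, fn)

# For multi-attack models, weighted variants average per-class scores by
# class support, and kappa corrects agreement for chance.
print("Weighted F-measure:", f1_score(y_true, y_pred, average="weighted"))
print("Kappa:", cohen_kappa_score(y_true, y_pred))
```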
The proposed empirical framework presented in Fig. 3 consists of the Data Pre-Processing and Re-Labelling module and the Method module, which interacts with the Model Development Process module in producing the two forms of IDS examined in this study. The Algorithm module consists of the selected algorithms, and it interacts with the Method module, which defines the data mining implementations. Finally, the Metrics component evaluates each produced model according to its form, and the evaluation results are subsequently discussed. Table III presents the parameter settings for each algorithm used in this study. All models were trained and tested using the percentage-split strategy (80% for training and 20% for testing), and their performances were evaluated using the metrics appropriate to the type of the developed IDS model.
All experiments were carried out using the Waikato Environment for Knowledge Analysis (WEKA) tool for data analysis, and the results obtained are presented in the relevant sections of this paper.
IV. RESULTS
Having implemented the proposed framework of this research, the reported results are categorised into two groups according to the model development processes. Note that testing was conducted on 20% of the dataset, resulting in 16,466 instances. First, the two-class anomaly IDS predicts whether a network packet is normal or an attack, and is thus evaluated using the metrics given in Fig. 3. For the homogeneous method, Tables IV and V present the performance scores of the model and its corresponding confusion matrix, respectively.
Last in this category, the results of the deep-learning method for developing a two-class anomaly IDS, implemented with the parameters specified in the previous section, are shown in Tables VIII and IX. Critical evaluation of the models in this category reveals that, despite all models performing well on the AUC metric, the deep-learning model is weak in the detection of normal packets and will generate more false flagging of normal packets, thereby degrading real-time network monitoring. Moreover, although the homogeneous and heterogeneous models competed fairly with each other, as both are robust models for the detection of normal and attack packets, the homogeneous ensemble model is the best model in terms of its lower FP and higher AUC values.
The second category is the multi-attack anomaly IDS, which classifies packets into normal and nine different types of attacks, a typical multi-classification problem, as discussed in the previous section. The models are evaluated as depicted in Fig. 3. For the homogeneous ensemble method in this category, Table X presents the various performance scores.
Table X reveals the model's ability to detect whether a packet belongs to any of the ten (10) classes at 87.39%. This model had a kappa value of 0.8 and a weighted AUC of 0.98. The weighted TP value is 87.4%, and the weighted FP value is 2.5%. The model also had a weighted F-measure value of 0.87.

Similarly, in Table XI, the heterogeneous model's detection rate was 85.23%, with a weighted AUC of 0.98, a weighted TP value of 0.852 (85.2% correct classification of each class label's instances), a weighted FP value of 0.031, a weighted F-measure of 0.855, and a kappa value of 0.79.
Last in this category, the deep-learning model of the multi-attack anomaly IDS was also evaluated; its scores are presented in Table XII. The deep-learning model detected and predicted the class of any packet at 72%. This result was achieved with a weighted AUC value of 0.868, a weighted F-measure score of 0.659, and a kappa value of 0.57. The model correctly detects each class instance at a weighted TP value of 72.3%, with a weighted FP value of 0.17.

In this multi-attack category, the homogeneous ensemble method again achieved the best performance, with a weighted F-measure of 0.87, a kappa value of 0.82, and an overall detection rate of 87%. Although the heterogeneous model had a weighted AUC of 0.982, it is the second best in this category. Last, the deep-learning model competed fairly well with the other models, with a weighted AUC of 0.868; however, it had a low kappa value of 0.57 and a low detection rate of 72.26%. Moreover, the confusion matrix for each model reveals the classification and misclassification of instances accordingly. The deep-learning model was found to be unable to detect many attack classes, whereas the homogeneous model was adequately robust.
A summary of the detection rates of all models in both categories is presented in Table XIII and depicted pictorially in Fig. 4.

Based on the implementation of the various machine learning and deep-learning methods in the development of several IDS models on a modern-day dataset, some generalisations can be made. First, this study supports the view that machine learning and DL are competent and effective techniques for developing IDS in various capacities, such as two-class and multi-attack anomaly detection. This work also revealed that a simple machine learning implementation can be more effective, at lower computational cost and complexity, than more elaborate alternatives: the homogeneous ensemble showed stronger predictive prowess than the heterogeneous ensemble (though the latter competed fiercely) and the deep-learning method. Moreover, it can be generally stated that the detection rate of a two-class IDS is higher than that of a multi-attack IDS, both because of the larger number of classes the machine learning must learn to predict correctly, and because of the nature of the data: imbalance is peculiar to the multi-attack dataset, whereas the two-class dataset is mostly balanced.
Generally, the best model produced by this research work for detecting either a normal or attack packet (two-class anomaly IDS) operates at a rate of 97.96%, and the best model for the multi-attack (ten-class) anomaly IDS has a detection rate of 87.39%. In direct comparison with the recent work of [4], which outperformed many past research models, their work produced an overall detection rate of 83.47% for their online AODE model and 69.60% for their online Naïve Bayes model, both of which were outperformed by the best (87.39%) and second-best (85.23%) detection rates of the multi-attack anomaly IDS models developed in this research work.
Comparatively, the research work conducted by [6] produced a NIDS with 88.39% accuracy for a two-class attack task, which was outperformed by two of the three NIDS developed in this study for two-class attack detection (with 97.96% and 96.92% detection rates); and while their work produced 79.10% accuracy for 5 classes, two of the three NIDS developed in this study achieved 87.39% and 85.23% detection rates for 10 classes. Also, their STL model yielded a 75.76% F-measure for the 5-class NIDS, while this study produced 87% for the homogeneous ensemble and 85% for the heterogeneous ensemble on a 10-class NIDS.
Additionally, the study of [10] developed IDS using the J48, MLP and Bayes Network (BN) algorithms, achieving an overall best accuracy of 93% with J48, 91.9% with MLP and 90.7% with BN on the KDD dataset. The homogeneous ensemble NIDS developed in this study outperformed their work with a detection rate of 97.96%, as did the heterogeneous ensemble NIDS with a detection rate of 96.92%.

Having implemented several machine learning and deep-learning algorithms and several techniques for combining models, the application of feature selection techniques to choose the best features from those available in the dataset is recommended as future work, and in practice, to produce an optimal model with lower cost and computational complexity. Moreover, the deep-learning method requires further investigation, because there is need for improvement in both the two-class and multi-attack anomaly IDSs.
VI. CONCLUSION
This research work revealed answers to several research questions. In response to the first question, the NIDS developed using machine learning is highly effective, with the homogeneous ensemble implementation achieving a detection rate as high as 98% in a two-class scenario and 87% in a multi-attack scenario, and its heterogeneous counterpart achieving detection rates of 97% in a two-class scenario and 85% in a multi-attack scenario.
In response to the second research question, the empirical work revealed that a deep-learning implementation can be effective at a detection rate as low as 80% in a two-class scenario, and can detect various types of attacks and normal packets in a multi-attack scenario at 72%.
Answering the third research question, it was discovered that the two-class dataset has a balanced distribution, unlike the multi-attack dataset, which is greatly imbalanced. These peculiarities affected the developed models, which fitted the balanced dataset better than the imbalanced one.
The results of this research work also revealed that it is easier to identify two classes of network packet than ten (10) different classes belonging to a network packet.
This research work also revealed a weakness of DL: it cannot produce a competitive model unless its configuration is sophisticated, i.e., comprises a high number of layers, which in turn increases computational complexity and cost.
A dataset consisting of 43 attributes is usually considered a high-dimensional dataset that requires a feature pre-processing stage, wherein redundant, irrelevant and (in some cases) highly correlated attributes are removed to develop a robust model that neither overfits nor underfits the dataset. This stage is executed by applying a feature selection technique, such as filter and wrapper methods; however, this stage was not conducted in this research work and will be considered in future work. Additionally, while developing the NIDS using the two-class dataset, some class imbalance was discovered; thus, class balancing is also considered as future work (a simple option is sketched below).
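As a pointer toward that future work, one simple balancing option, illustrated below with scikit-learn on synthetic data, is to reweight classes inversely to their frequency during training; resampling approaches such as SMOTE are a common alternative. This is an assumed illustration, not part of the reported experiments.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Imbalanced toy data: 95% of one class, 5% of the other.
X, y = make_classification(n_samples=5000, n_features=43,
                           weights=[0.95, 0.05], random_state=0)

# class_weight="balanced" weights errors inversely to class frequency,
# so the minority (attack) class is not drowned out during training.
rf = RandomForestClassifier(class_weight="balanced", random_state=0)
rf.fit(X, y)
```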
The development and deployment of the developed NIDS models for real-time detection of attacks is considered an important item of future work. | 2019-10-02T00:31:15.480Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "d919f3e281cf4f3896d3c820c3e273515b88daa6",
"oa_license": "CCBY",
"oa_url": "http://thesai.org/Downloads/Volume10No9/Paper_69-Ensemble_and_Deep_Learning_Methods.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e0e4cdb1499d11d7906d1b92375230f3a5a240e6",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
251043714 | pes2o/s2orc | v3-fos-license | Circadian Oscillations in the Murine Preoptic Area Are Reset by Temperature, but Not Light
Mammals maintain their internal body temperature within a physiologically optimal range. This involves the regulation of core body temperature in response to changing environmental temperatures and a natural circadian oscillation of internal temperatures. The preoptic area (POA) of the hypothalamus coordinates body temperature by responding to both external temperature cues and internal brain temperature. Here we describe an autonomous circadian clock system in the murine ventromedial POA (VMPO) in close proximity to cells which express the atypical violet-light sensitive opsin, Opn5. We analyzed the light-sensitivity and thermal-sensitivity of the VMPO circadian clocks ex vivo. The phase of the VMPO circadian oscillations was not influenced by light. However, the VMPO clocks were reset by temperature changes within the physiological internal temperature range. This thermal-sensitivity of the VMPO circadian clock did not require functional Opn5 expression or a functional circadian clock within the Opn5-expressing cells. The presence of temperature-sensitive circadian clocks in the VMPO provides an advancement in the understanding of mechanisms involved in the dynamic regulation of core body temperature.
INTRODUCTION
The preoptic area (POA) of the anterior hypothalamus is most commonly associated with its role in the regulation of body temperature and sleep (Tan and Knight, 2018). Increases in environmental warmth activate neurons within the ventromedial POA (VMPO), whose neural activity is correlated with behavioral and physiological mechanisms of cooling (Knox et al., 1973;Bachtell et al., 2003;Tan et al., 2016). In addition to responding to environmental temperature, the VMPO is active in regulation of body temperature changes during sleep (Rothhaas and Chung, 2021). Interestingly, this regulation of body temperature during sleep is independent of the regulation of the circadian rhythm of core body temperature by the brain's central circadian clock, the suprachiasmatic nucleus (SCN). Lesioning the VMPO actually increases the amplitude of the circadian rhythm of body temperature, suggesting a modulatory role of the VMPO on the range of normal core body temperatures over a circadian cycle (Osborne and Refinetti, 1995;Lu et al., 2000). Such lesions also diminish animals' ability to maintain set points of body temperature in response to environmental temperature challenges (Teague and Ranson, 1936). Inversely, lesioning of the SCN abolishes the circadian component of body temperature, but leaves the relation between sleep and body temperature intact (Baker et al., 2005). Although not directly connected, the VMPO receives afferent input from the SCN via the paraventricular nucleus (PVN) (Vujovic et al., 2015).
It has long been appreciated that neurons in the VMPO are activated by either temperature information relayed by dermal thermosensors or local changes in brain temperature (Nakayama et al., 1961;Boulant and Hardy, 1974). Local heating of the POA causes systemic body temperature reduction (Magoun et al., 1938;Andersson et al., 1956;Carlisle and Laudenslager, 1979). We have recently identified neurons in the POA which are directly sensitive to violet light and whose activation similarly causes a reduction in core and brown adipose tissue (BAT) temperature (Zhang et al., 2020). These neurons express the atypical opsin, Opn5 (or "neuropsin"), which has been found in both the murine and primate VMPO (Tarttelin et al., 2003;Yamashita et al., 2014). Unlike wild-type mice, Opn5-null mice do not regulate their body temperature in response to violet light (Zhang et al., 2020).
Opn5 absorbs UV/violet light (~380-400 nm) and acts as a bistable photopigment which retains its retinaldehyde chromophore through successive activation events (Yamashita et al., 2010;Yamashita et al., 2014). In birds, Opn5 is expressed in hypothalamic neurons bordering the third ventricle which act as deep brain photoreceptors regulating seasonal reproduction (Nakane et al., 2010;Nakane et al., 2014). In mammals, Opn5 has been detected in neural tissues such as the brain and retina, as well as non-neural tissues like the gonads, skin, and cornea (Tarttelin et al., 2003;Kojima et al., 2011;Buhr et al., 2015;Díaz et al., 2020). During retinal development Opn5 modulates the rate of vascular development through violet-light exposure (Nguyen et al., 2019). Opn5 also allows for the photoentrainment of local circadian clocks in the murine retina, skin, and cornea (Buhr et al., 2015;Buhr et al., 2019). Deletion of Opn5 in mice leads to a slower behavioral re-entrainment to shifted, "jet lag" light cycles (Ota et al., 2018).
The majority of mammalian cells express autonomous molecular circadian clocks (Yi et al., 2021). Interestingly, many neuronal populations throughout the brain are unique in that they do not show sustained autonomous molecular rhythmicity (Abe et al., 2002;Kriegsfeld et al., 2003). Notable exceptions to this include sustained rhythms in isolated SCN (the central timekeeper for the entire body) (Groos and Hendriks, 1982;Shibata et al., 1982;Shibata and Moore, 1988), arcuate nucleus (Abe et al., 2002), olfactory bulb (Granados-Fuentes et al., 2004), and hippocampus (Wang et al., 2009), among a few others. Here, we analyze the sustained rhythmicity and phase resetting properties of the VMPO. Many peripheral (non-SCN) oscillators can use temperature as a synchronizing time cue (Brown et al., 2002;Buhr et al., 2010). Similarly, the local circadian clocks of tissues which express Opn5 (retina, cornea, exposed skin) are directly photoentrainable (Buhr et al., 2015;Buhr et al., 2019;Díaz et al., 2020). We wished to analyze the ability of light and temperature to shift the local clocks of the VMPO, and to address any role for Opn5. We find that the cells with the strongest autonomous rhythmicity are medial to the Opn5-expressing cells, are not light sensitive, and are shifted by physiological temperature changes in an Opn5-independent manner.
Animals
All mouse procedures were carried out in accordance with regulations of the University of Washington Institutional Animal Care and Use Committee, Seattle, WA. All mice were of the background strain C57Bl/6J. Both male and female adult mice were used, between 1 month and 1 year of age. Opn5 Cre ; Ai14 mice were originally described in (Nguyen et al., 2019). These were crossed to Per2 Luciferase mice (Jax strain # 006852) as described in (Yoo et al., 2004). Opn5 −/− ; Per2 Luciferase mice are as described in (Buhr et al., 2015). Opn5 Cre mice were also crossed to Bmal1 flox mice (Jax strain # 007668) as described in (Storch et al., 2007).
Lighting and Husbandry
Mice were weaned at 3 weeks of age and transferred from standard laboratory housing with white fluorescent lighting to housing cabinets with LED lighting, which included violet (415 nm), blue (475 nm) and green (525 nm) light to activate known mouse retinal photoreceptors at a total intensity of 5 W/m 2 . Mice were exposed to a 12-h light: 12-h dark cycle. Mice were removed from the light cycle approximately 6 h into the dark phase (zeitgeber time 18, "ZT 18") and were euthanized with CO 2 asphyxiation in room light for ex vivo luminescence experiments.
Fluorescence Imaging
For fresh slices ( Figures 1A, 2A), mice were euthanized with CO 2 asphyxiation at ZT18 and brains were rapidly dissected in ice cold HBSS. Brains were sliced either coronally or sagittally with a vibroslicer (World Precision Instruments) and imaged on an Olympus IX81 microscope.
For immunohistochemistry, mice were exposed to 12-h light: 12-h dark cycles of violet (415 nm), blue (475 nm) and green (525 nm) light for 2 weeks. At the indicated ZT phases (with ZT0 being the time of lights-on and ZT12, lights-off), three mice per time point were euthanized by CO 2 asphyxiation in dim red light, and brains were rapidly removed into 4% paraformaldehyde. The brains were postfixed for 24 h and cryoprotected in 30% sucrose in PBS. Brains were sectioned in the coronal plane at 20 μm in a cryostat (Leica CM 1850; Nussloch, Germany) at −20°C. Slices were incubated for 1 h in blocking buffer (2.5% normal goat serum, 2.5% normal donkey serum, 2% gelatin, 1% BSA, 0.2% Triton X-100) and then incubated for 1 h in 5% goat serum in PB + 0.3% Triton X-100. They were then incubated in rabbit anti-PER2 primary antibody (PER21-A, Alpha Diagnostic Intl. Inc., San Antonio, TX, United States) diluted in PB + 0.3% Triton X-100 for 24 h. After incubation, samples were rinsed in PBS, incubated with secondary antibody goat anti-rabbit-Alexa 633 (Invitrogen, United States) for 1 h at room temperature, and mounted with ProLong Gold Antifade Mountant with 4′,6-diamidino-2-phenylindole (DAPI) (Invitrogen).
Immunofluorescent images were analyzed using a Leica DM6000 microscope with a Leica SP8 confocal system and processed with FIJI/ImageJ (National Institutes of Health, Bethesda, MD, United States). The researcher who analyzed the images was blinded to the groups and was not the same researcher who collected the tissues.
Luminescence Imaging
Fresh 300 μm brain slices were placed on cell culture membranes (Millipore #PICMORG50) in DMEM (Cellgro) supplemented with B-27 Plus (Life Technologies), 2 mM Glutamax (Gibco), 10 mM HEPES buffer (Life Technologies), 25 U/ml penicillin; 25 μg/ml streptomycin (Sigma Millipore), 352.5 μg/ml NaHCO 3 , and 0.1 mM D-Luciferin potassium salt (Biosynth). Cultures were maintained in 1.2 ml of media in 35 mm petri dishes, which were sealed with sterile vacuum grease. Sealed dishes were maintained at 36°C with a microscope stage warmer and imaged with a Retiga Lumo camera (Teledyne Photometrics). Images of bioluminescence were collected over 1-h exposures, and three consecutive 1-h images were averaged into one using FIJI/ImageJ software. This procedure was repeated four times for each orientation (coronal or sagittal) of imaging.
Luminescence Trace Measurements
Fresh 300 μm brain slices were cultured as above. Dishes were maintained in a Lumicycle PMT apparatus (Actimetrics) contained within an air-jacketed incubator (Sanyo MIR-262) at 36°C. POA and SCN were isolated from each mouse (n = 8). POA slices were taken approximately 600 μm anterior to the SCN.
Raw data were used to measure periods with LM-Fit, a best-fit sine harmonic regression that also accounts for natural damping of the trace. Fourier power spectrum transforms with a time window between 22 and 40 h were performed using a Blackman-Harris filter within Lumicycle Analysis software (Actimetrics). Both period and Fourier amplitude were determined on days 2-7 in culture. Initial culture phase was determined using background-subtracted luminescence traces fit with a 2-degree polynomial fit line using Lumicycle Analysis, and was defined as the time of the peak of the first full oscillation. For phase shifts, a best-fit sine wave (using LM-Fit, which also adjusts for oscillation damping) was fit to data before or after a pulse. Phase shifts were measured as observed phase subtracted from expected phase. Amplitude was restored in brain slices by replacing the culture media (Figure 1B).
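For readers reproducing this kind of analysis outside the Lumicycle software, the following minimal Python sketch illustrates the same two measurements on a synthetic trace: a damped-sine fit for period and a discrete Fourier transform for circadian-band amplitude. The functional form, sampling rate and parameter values are illustrative assumptions, not the exact settings of Lumicycle Analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_sine(t, amp, decay, period, phase, baseline):
    # Exponentially damped cosine: the general form a harmonic
    # regression with damping fits to bioluminescence traces.
    return amp * np.exp(-decay * t) * np.cos(2 * np.pi * t / period + phase) + baseline

# Synthetic trace: 7 days sampled hourly, 24.2-h period, damping, noise.
rng = np.random.default_rng(0)
t = np.arange(0, 168, 1.0)
trace = damped_sine(t, 100, 0.01, 24.2, 0.3, 10) + rng.normal(0, 5, t.size)

p0 = [100, 0.01, 24, 0, 0]  # guesses: amp, decay, period, phase, baseline
params, _ = curve_fit(damped_sine, t, trace, p0=p0)
print("Fitted period (h):", params[2])

# Fourier amplitude restricted to the circadian band (22-40 h),
# analogous to the windowed power-spectrum measure described above.
freqs = np.fft.rfftfreq(t.size, d=1.0)          # cycles per hour
power = np.abs(np.fft.rfft(trace - trace.mean()))
band = (freqs > 1 / 40) & (freqs < 1 / 22)
print("Peak circadian-band amplitude:", power[band].max())
```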
Light and Temperature Changes Ex Vivo
Cultured POA slices were transferred using an insulated shuttle box to a separate incubator which contained either violet LEDs (peak output 415 nm) at 2 × 10 14 photons cm −2 s −1 or a heating block set to warm a sealed dish to a stable 38°C. POA-containing dishes remained either exposed to violet light or on the temperature block (in darkness) for 1.5 h before being returned to the Lumicycle in darkness. For light cycles, a custom light:dark apparatus was used as described in (Buhr and Van Gelder, 2014). Briefly, a 24-h clock motor rotated a disc above two diametrically opposed culture dishes containing organotypically cultured POA. A window in the otherwise opaque disc allowed light to pass through to the culture dishes only when directly above the tissue. A 10-h light: 14-h dark cycle was administered for four full 24-h cycles, so that one POA received the oppositely phased cycle relative to the other (designated 0° and 180°). The light consisted of 415 nm at 2 × 10 14 photons cm −2 s −1 . Following the light:dark cycles, the tissues were returned to constant darkness inside a Lumicycle apparatus.
Time of the first peak after return to darkness was measured as the phase of traces after an ex vivo light:dark cycle.
Statistics
Statistics were run using SigmaPlot 11.0. Two-tailed paired t-tests were used to compare the initial phases and periods of POA and SCN from the same animals. One-way ANOVA tests were used to test for differences beyond the 95% confidence intervals among multiple groups for phase shifts, and among periods/amplitudes of more than two groups, as in the analyses of the Bmal1 flox experiments. No data were excluded.
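As a rough illustration, the tests above map directly onto standard library routines; the sketch below uses SciPy with invented toy values (the real per-animal measurements are in the figures), so the numbers carry no meaning.

```python
from scipy import stats

# Paired t-test: POA vs. SCN free-running periods from the same animals
# (toy values in hours; n = 8 pairs, as in the experiment).
poa = [23.8, 24.1, 23.7, 24.0, 23.9, 24.2, 23.8, 24.0]
scn = [23.5, 23.8, 23.4, 23.7, 23.6, 23.9, 23.5, 23.7]
t_stat, p_paired = stats.ttest_rel(poa, scn)

# One-way ANOVA across genotypes' temperature-pulse phase shifts
# (toy values in hours).
wt = [3.0, 2.5, 3.4, 2.9, 3.2]
opn5_null = [4.3, 4.0, 4.6, 4.1, 4.4]
bmal1_cko = [3.3, 3.1, 3.5, 3.2, 3.4]
f_stat, p_anova = stats.f_oneway(wt, opn5_null, bmal1_cko)

print(p_paired, p_anova)
```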
An Autonomous Circadian Clock in the VMPO
In previous work, we have observed a correlation between tissues which express the atypical opsin Opn5 and sustained clock gene expression (Buhr et al., 2015;Buhr et al., 2019;Díaz et al., 2020). We wished to analyze the molecular circadian clock properties of the VMPO where Opn5 is expressed (Zhang et al., 2020). We identified the region of interest with Opn5 Cre/+ ; Ai14 mice using tdTomato fluorescence as a marker. There is a pyramidal nucleus of Opn5-cells, approximately 500 μm in width at its base, which converges dorsally to its termination just above the anterior commissure ( Figure 1A). Although this Cre line lineage-marks cells during development, lacZ expression in Opn5 LacZ mice and in situ hybridization have confirmed expression in adult brains within this region (Zhang et al., 2020). Using an Opn5 Cre/+ ; Ai14; Per2 Luciferase mouse line, we measured PER2 bioluminescence rhythms of the Opn5-containing region of the VMPO and the SCN of the same mice ( Figure 1B) (Yoo et al., 2004). The VMPO displayed robust circadian oscillations of Per2 Luciferase bioluminescence, which could be reinitiated with a replacement of culture media. The free-running period of this region was similar to that of SCN cultures from the same mice, with POA oscillating at 23.9 ± 0.16 h and SCN at 23.6 ± 0.2 h ( Figure 1C, p > 0.05, two-tailed t-test). The initial phase of the POA was advanced compared with the SCN, with the peak of the first full cycle in culture occurring approximately 6 h earlier than in the SCN. This indicates that there is an endogenous clock within the POA that is capable of oscillating independently of the SCN.

A limitation of the above studies using photomultiplier tubes (PMT) is that the precise location of the luminescence cannot be assessed. We next imaged the POA of Opn5 Cre/+ ; Ai14; Per2 Luciferase mice using both a fluorescence microscope and a cooled CCD camera for long-exposure luminescence imaging. From the same slices, we observed the extent of Opn5 Cre ; Ai14 expression (tdTomato) and Per2 Luciferase bioluminescence, and saw that the PER2 reporter was at the midline near the third ventricle, roughly within the bounds of the Opn5-cells (Figure 2A). By slicing brains of Per2 Luciferase mice sagittally, the circadian activity of the VMPO and SCN can be observed in the same brain slice. The posterior boundary of the VMPO clock lies about 800 μm anterior to the SCN and extends rostrally for about 1200 μm ( Figure 2B). The VMPO clock shows the strongest Per2 Luciferase activity in its most ventral portion, but has rhythmic luminescence observable approximately 200 μm into the hypothalamus.
To further understand the anatomy of the relation of PER2 expressing cells to Opn5 cells, we collected brains from Opn5 Cre/+ ; Ai14 mice in a 12 h: 12 h light: dark cycle at four time points and antibody labelled for PER2. Of the times tested, the highest expression of PER2 in the VMPO was observed at ZT 6, and PER2 expression was medial to Opn5 expressing cells. Using an arbitrary threshold of PER2 expression in individual cells, 44 ± 8% of the Opn5 cells stained with the PER2 antibody at ZT 6 (n = 3, Figure 2C). In summary, the VMPO clock is contained within the boundaries of Opn5 cells with overlap, and the PER2 expression stretches anteriorly towards the rostral boundary of the hypothalamus ( Figure 2D).
The Circadian Clock in the VMPO is Not Photoentrainable
The photoentrainability of circadian clocks in close proximity to Opn5 expressing cells in other tissues inspired us to test for light sensitivity of circadian clocks in the POA (Buhr et al., 2015;Buhr et al., 2019). We made organotypic cultures of POA from Per2 Luciferase animals and placed them on opposite sides of a light apparatus that exposes two cultures to oppositely phased light:dark cycles of 415 nm, 2 × 10 14 photons cm −2 s −1 (Buhr and Van Gelder, 2014). After 4 days the tissues were returned to darkness and the bioluminescence phases were measured using PMTs. The phases of POA clocks after light:dark cycle exposure were the same as would be predicted by the phase prior to light exposure and, importantly, were not different between POAs receiving oppositely phased light:dark cycles ( Figures 3A,B). In addition to light cycles (photoentrainment), we assessed the response to acute light pulses (phase shifts). After a light pulse of 415 nm, 2 × 10 14 photons cm −2 s −1 , for 90 min, the phase of POA rhythms was unchanged ( Figures 3C,D). This was performed at phases during both the rising and descending phases of Per2 Luciferase bioluminescence. Unlike the clocks in the retina, cornea, and skin (Buhr et al., 2015;Buhr et al., 2019;Díaz et al., 2020), the circadian clocks of the VMPO are not sensitive to violet light.
The POA Clock can be Shifted by Temperature Changes, and This is not Mediated by Opn5 Cells
To further assess any influence of the POA's Opn5 cells on the local circadian clock, we collected the POAs of Opn5 −/− ; Per2 Luciferase and Opn5 Cre ; Bmal1 flx/flx ; Per2 Luciferase mice. Compared to wild-type Per2 Luciferase POAs, there were no differences in period or Fourier amplitude strength in mice lacking Opn5 (Opn5 −/− ) or lacking a functional clock within the Opn5-expressing cells (Opn5 Cre ; Bmal1 flx/flx ) ( Figures 4A-C). Neither Opn5 itself nor the molecular circadian clock within the Opn5 cells directly modulates the free-running rhythms of the VMPO.
Because cells within this region of the POA are known to be temperature sensitive, we tested the ability of temperature changes within the physiologic range to shift the circadian clocks in the VMPO. Temperature changes from 36 to 38°C for 1.5 h caused a stable change in the phase of the cultured VMPO tissue ( Figure 4D). When given during the early rising phase of Per2 Luciferase bioluminescence, a temperature pulse caused a 3.0 ± 1.1 h phase advance, and during the descending phase it caused a −2.2 ± 0.9 h phase delay ( Figure 4E).
Extra-ocular opsins have been implicated in thermal responses in mammalian spermatozoa (Pérez-Cerezales et al., 2015;Roy et al., 2020). To test whether opsin-based thermal responses regulate the phase of the circadian clock, we also tested whether the VMPO of Opn5 −/− ; Per2 Luciferase and Opn5 Cre ; Bmal1 flx/flx ; Per2 Luciferase mice responded normally to temperature pulses. We gave temperature pulses during the early rising phase of Per2 Luciferase bioluminescence traces and measured the resulting phase shifts. Opn5 −/− ; Per2 Luciferase VMPO shifted by 4.3 ± 1.2 h and Opn5 Cre ; Bmal1 flx/flx ; Per2 Luciferase shifted by 3.3 ± 0.6 h in response to 1.5 h temperature changes. These did not differ statistically from wild-type tissue (p > 0.05, one-way ANOVA). These results demonstrate that although Opn5 is expressed near the circadian clocks of the VMPO, neither Opn5 itself nor a functional clock within the Opn5-cells is necessary for the free-running rhythms or the thermal responses of the VMPO circadian clock.
DISCUSSION
Here we describe an autonomous circadian clock in the most ventral aspect of the VMPO. This brain region displays robust rhythmic expression of the Per2 Luciferase circadian reporter and can maintain oscillations for at least 1 month with fresh tissue culture media. These cells are near the third ventricle, but do not border it. Because this brain region also hosts the brain's most concentrated expression of the atypical opsin, Opn5, we were interested in the direct photosensitivity of local VMPO circadian clocks (Zhang et al., 2020). In previous work we have observed that the presence of Opn5 is sufficient for the photoentrainment of surrounding tissue. In the retina, cornea, and pinna skin, Opn5 is expressed in a small subset of cells, but Opn5-dependent circadian photoentrainment is observed across diverse cell types throughout the local niche (Buhr et al., 2015;Buhr et al., 2019;Díaz et al., 2020). In the VMPO, Opn5 expressing cells straddle and overlap the areas in which we observed the strongest circadian rhythmicity. Thus we hypothesized that the violet light sensitive Opn5 cells may regulate the phase of the rhythmic VMPO cells. However, this was not the case. In both cyclic lighting conditions and acute light exposure, the phase of the circadian clocks in the VMPO were unchanged by direct light exposure.
A unique feature of the VMPO is the direct thermosensitivity of its warm-sensitive neurons (Nakayama et al., 1961) and the systemic response to local warming (Magoun et al., 1938). Because the brain is mostly insulated from environmental temperatures, changes in brain temperature occur mainly due to circadian variations, fever, or sleep. Our finding that the VMPO clocks are set by temperature changes within the physiologic range suggests that systemic body temperature may act as a synchronizing cue for the VMPO (Buhr et al., 2010). This is in addition to potential cues coming from the SCN via the subparaventricular zone (Vujovic et al., 2015). Many cells in the POA are activated by changes in environmental temperature as detected by cutaneous thermosensors (Nakamura and Morrison, 2008). Future work would be necessary to determine if this warm-induced activity also affects the local VMPO clock. This might present a discordant signal in a nocturnal animal, for which environmental temperatures are typically cooler at night, when internal brain temperature experiences its maxima (Gao et al., 1995). It should be noted that neuronal firing is not required for temperature-based entrainment of neural tissue, so the environmental temperature response may involve entirely separate mechanisms from clock phase alignment to internal temperatures (Buhr et al., 2010;Bedont et al., 2017).

FIGURE 3 | VMPO circadian clock is not directly entrained by light. (A) Bioluminescence traces of Per2 Luciferase from two independent organotypic cultures (red and blue) of VMPO. The cultures are removed from PMT measurement and exposed to light:dark cycles from days 3 to 7. Light paradigms are indicated below for blue (0°) and red (180°) cultures. Light consisted of 415 nm, 2 × 10 14 photons cm −2 s −1 . (B) Phase of luminescence traces in constant darkness after exposure to oppositely phased light:dark cycles for 4 days. Points are average (±SEM) phases of the first peak after return to PMT measurement, with previous light:dark cycles indicated with gray and white shading. n = 8. (C) Bioluminescence traces of VMPO from Per2 Luciferase mice given 90-min light pulses where indicated with a yellow box (415 nm, 2 × 10 14 photons cm −2 s −1 ). (D) Average phase change (comparing the phase of the rhythm on days 5-7 to the phase on days 1-4) after a light pulse administered either in the early ascending phase of the luminescence rhythm (left, and upper panel of (C)) or the early descending phase (right, and lower panel of (C)). Shown are mean ± SEM. n = 5 each.
A question arises as to why the VMPO contains autonomous circadian clocks, whereas so much of the surrounding hypothalamic tissue does not. The VMPO has numerous efferent targets, including the dorsomedial hypothalamus (DMH), arcuate nucleus, and the paraventricular nucleus (PVN), which support the VMPO's role in autonomic modulation of body temperature (Tan et al., 2016). The fact that lesioning of the VMPO leads to a rhythm of core body temperature with a higher amplitude than normal suggests a potential role for the VMPO clock in rhythmically regulating temperature maxima and minima (Osborne and Refinetti, 1995;Lu et al., 2000). A future tracing study would be helpful to determine the specific output targets of the VMPO clock cells.
Finally, while this work demonstrates that these Opn5-expressing cells are not necessary for the circadian function or thermal entrainment of the VMPO clock, it was limited in determining whether the Opn5-cells themselves express functional circadian clocks. While we did observe that ~44% of Opn5 Cre ; Ai14 cells co-expressed PER2 at peak expression, because PER2 is expressed as a gradient over time, this required the use of an arbitrary threshold. It is possible that an underlying PER2 rhythm is present in Opn5-cells but is too low to be measured in the presence of the strongly rhythmic cells near the third ventricle ( Figure 2). In the future, a Cre-based reporter system could address this directly. However, our results indicate that any Bmal1-based rhythms within the Opn5 cells are not directly related to the circadian function of the primary VMPO clock. The autonomous circadian clocks within the VMPO appear to be functionally isolated from the violet-light sensitive Opn5-cells which surround them.

FIGURE 4 | Opn5, and rhythms of Opn5-cells, are not necessary for VMPO rhythmicity. (A) Bioluminescence traces of Per2 Luciferase from organotypic VMPO cultures from wild-type (Per2 Luciferase , blue), Opn5 −/− ;Per2 Luciferase (gray), and Opn5-Bmal1-null (Opn5 Cre ;Bmal1 flx/flx ;Per2 Luciferase , red) mice. (B) Free-running period of 5 days of oscillations as measured by LM-Fit (Lumicycle Analysis). (C) Fourier power-spectrum amplitude strength between 20 and 30 h with a Blackman-Harris filter of the same VMPO rhythms as in (B) (Lumicycle Analysis). n = 8. Shown are mean ± SEM. ANOVA, p > 0.05. (D) Bioluminescence traces of VMPO from Per2 Luciferase mice given 90-min 38°C temperature pulses (from a baseline of 36°C) where indicated with a yellow box. (E) Average phase change (comparing the phase of the rhythm on days 5-7 to the phase on days 1-4) after a temperature pulse occurring either in the early ascending phase of the luminescence rhythm (left, three genotypes) or the early descending phase (right). Shown are mean ± SEM. n = 8, 5, 6, and 5, respectively.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author.
ETHICS STATEMENT
The animal study was reviewed and approved by the Institutional Animal Care and Use Committee, University of Washington. | 2022-07-26T13:39:28.585Z | 2022-07-22T00:00:00.000 | {
"year": 2022,
"sha1": "b23704eef8c38c5ac52609dcc678ac8118c1b478",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "b23704eef8c38c5ac52609dcc678ac8118c1b478",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
222212155 | pes2o/s2orc | v3-fos-license | Political correctness and the alt-right: The development of extreme political attitudes
Recent studies have documented a shift from moderate political attitudes to more extreme attitudes at the ends of the political spectrum. This can be seen in Political Correctness (PC) on the left, and white identitarian (WI) attitudes on the ‘Alt-Right’ (AR). While highly covered in mainstream media, limited academic research has investigated their possible antecedents and psychological correlates. The current study investigated the prevalence and psychological predictors of these attitudes. Utilising a quota-based sample of 512 U.S. participants, we found that extreme political attitudes were associated with various personality traits, social media use, and upbringing. PC attitudes were associated with agreeableness, black-white thinking, social-media use, and perceived overprotective parenting. WI attitudes were associated with low agreeableness and openness, and high black-white thinking. Our results show that extreme left and right attitudes are separated by individual differences, and that authoritarianism can be seen on both the left and the right.
Introduction
Political attitudes are typically measured on a single continuum with 'left-wing' ideologies advocating for liberal values, and 'right-wing' ideologies promoting conservative principles. Traditionally, U.S. political opinion has been largely situated in the centre, with the electorate holding a mix of liberal and conservative attitudes [1]. However, recent political commentary has suggested an evacuation from moderate views and a move toward more extreme attitudes [2]. This migration has manifested on both sides of the political aisle, as left and right have fragmented with the inscription of Political Correctness (PC) in the so-called 'Regressive Left', and white identitarian (WI) attitudes in the 'Alt-Right' (AR) [3,4].
Mainstream news outlets have emphasized the role that adherents to such ideologies have had in violent incidents such as the Charlottesville rally [5] and numerous protests on college campuses over free speech [6,7]. Authors have argued that these movements are associated with deleterious societal outcomes, including discrimination [5] and loss of free speech [8]. Accordingly, a 2018 survey found that 80% of Americans consider PC an issue and 60% consider white supremacists a growing threat in the U.S. [9]. While these attitudes are highly covered in the media, little research has investigated their predictors. Despite this, scholars and political commentators have theorized that the increase in PC is attributable to social media usage and changes in child rearing, with increased overprotective parenting [10]. Although these claims are widely publicized, limited peer-reviewed research has directly investigated them. Only two studies have investigated predictors of PC. Firstly, Andary-Brophy found that PC is partially explained by personality traits, specifically high trait Agreeableness [11]. Further, PC was found to be a multidimensional construct, consisting of both liberal and authoritarian attitudes. Secondly, Moss & O'Connor reported that the Dark Triad traits and Entitlement explain a substantial amount of variance in the authoritarian attitudes of PC and WI [12].
While some authors have extended the effect of social media to explain the increase in extreme right attitudes [13,14], there is no published investigation into such correlates of WI attitudes. The current study assesses whether a set of hypothesized factors predict these extreme political attitudes. By surveying a nationally quota-based sample of 512 U.S. participants on a range of personality, social, and political factors, we assess elements of Lukianoff and Haidt's theory regarding PC attitudes and extend the study of political attitudes to those of the extreme right [10]. Specifically, we investigate 1) the extent to which PC and WI attitudes are held within the general U.S. population, 2) the possible antecedents of these attitudes, 3) whether 'extremists' on the left and right have traits in common, and 4) the claim that PC is a multidimensional construct. We argue that understanding the antecedents of extreme political attitudes will help clarify the etiology of ideological commitments in general and assist efforts to reduce the negative behaviours associated with these attitudes.
Political attitudes and ideologies
Political attitudes have typically been conceptualised on a single left-right (liberal-conservative) continuum. Because political views are multidimensional, several authors have considered this one-dimensional measure inadequate [15,16]; however, it demonstrates satisfactory predictive validity on voting behaviour [17]. The scale essentially measures one's tendency toward advocating or resisting social change, and accepting or rejecting inequality [18].
Left voters are more likely to reject inequality and seek out change [18]. Historically, liberals have been considered as having an optimistic view of human nature [19]. Sowell has argued that liberals assume that perfectibility in man is achievable, which motivates expansive efforts to promote the ideal [20]. Contrasting this on the right, conservatives have been considered as having a more pessimistic view of human nature [19]. Contending that civility necessitates the constraints of authority and institutions, conservatism is concerned with the preservation of tradition and stability. Conservatives are more sensitive to threats against the social order, which motivates a defence of social organizations and an insistence on hierarchy [21]. As such, conservatives are more likely to accept inequality and resist social change [18].
Although on different sides of the political spectrum, both parties had previously been situated close to the centre [2]. However, with the aforementioned evacuation from moderate attitudes, the current political spectrum shows a less populated centre (which continues to empty), defining a chasm between the left and right. In a nationally representative study of 10,013 participants, the Pew Research Center found that, over the past two decades, fewer U.S. citizens are holding a mix of liberal and conservative attitudes [2]. In this study, participants answered questions about their political attitudes and engagement via a telephone interview or online survey. The authors found that twice as many Americans ubiquitously express opinions consistent with their party's platform as did in the 1990s (from 10% in 1994 to 21% in 2014). Ideological uniformity was most pronounced among the most politically active, but the general overlap between the major parties has diminished. Beyond disconnecting from the other party, growing ideological uniformity has also increased feelings of antipathy toward the opposing party [1].
Political correctness
The term PC is generally interpreted as any effort to avoid or suppress potentially offensive content [22]. While historically defined as the unequivocal commitment to party propaganda [23], reinterpreted on the principles of egalitarianism, PC is now solely (colloquially) associated with the left [11]. Proponents of PC (sometimes referred to as 'social justice warriors') have popularized terms such as 'trigger warnings' (alerts that warn an audience of forthcoming content that may 'trigger' a remembrance of past trauma) and 'microaggressions' (small actions or statements that, though seemingly innocuous, are insensitive to another person's felt experience) that encourage people to reassess their unassuming word choices, and accommodate for the emotional experience of others [10].
While a significant proportion of PC advocates argue that their motivation is compassion for others, scholars have noted that the methods used by some advocates involve purposeful violence and intimidation [24]. In the first empirical investigation of PC, Andary-Brophy found that PC has a two-factor structure, consisting of liberal and authoritarian components [11]. PC-Liberalism (PCL) reflects concern for individual welfare. PCL proponents are primarily concerned with promoting socially disadvantaged groups. They take offense at language that undermines their goal of diversity, and protest against the use of such language. PC-Authoritarianism (PCA) focuses on purity and safety. While promoting diversity, adherents of PCA appear to be primarily concerned with censoring language that is emotionally upsetting. Like PCL proponents, PCA proponents protest against the use of offensive or exclusive language; however, they show a greater inclination toward immediate, autocratic methods, typical of the aforementioned violent protests.
Although the concept of PC has not been well-substantiated in the peer-reviewed literature, Andary-Brophy's conceptualization allows one to distinguish the socially liberal effort toward cultural inclusivity and sensitivity (PCL) from the 'cancel culture' (PCA) that makes news headlines and represents the colloquial view of PC [11]. These scales were found to be internally reliable (both reporting α > .8), which encourages the first steps toward measuring this previously abstract topic. Essentially, PC represents a sensitivity to offensive content: PCL represents sensitivity to content that challenges social inclusivity, whereas PCA represents sensitivity to content that challenges one's own feelings of emotional safety.
The 'alt-right' and white identitarian attitudes
The AR is a term used by the media and political commentators to describe several subgroups on the extreme right that reject mainstream conservative views [4]. The term has been given traction by highly influential people, such as Steve Bannon, former senior counselor to President Donald Trump. Considering traditional conservatism politically inefficient and culturally out-of-touch, advocates promote the AR as an alternative to conservative ideology [14]. While this vague definition can encompass any conservative disenchanted with their political representation, political commentators and figureheads of the ideology have specified that the AR essentially represents WI attitudes [3,25].
In the promotion of WI politics, adherents of the AR express: 1) strong feelings of white identity, 2) a strong sense of white solidarity, and 3) a belief in white victimization [3]. Such attitudes have long been recognised to manifest in voting behavior and political orientation. Kinder and Sears investigated the voting behavior of suburban white voters in two LA mayoral elections in 1969 and 1973, and examined how direct racial threats and sociocultural prejudice motivated political decisions [26]. Direct racial threats were defined in terms of how white participants considered the impact that blacks have on their personal lives (schooling, jobs, neighborhoods). Sociocultural prejudice was considered more abstractly as 'symbolic racism': moralistic resentment and alienation of blacks. While direct threats were minimally influential, symbolic racism was found to significantly skew white voters against black candidates. Furthermore, the authors found that one's proximity to and engagement with black neighbors had a non-significant effect on whether participants held symbolically racist attitudes. This suggests that WI ideology represents deeper, moral claims, rather than private/personal interactions.
Indeed, in several interviews and public statements, AR leaders have proclaimed Western culture to be an achievement of white ingenuity, and argued that a defense of Western values necessitates advancement of the white race [27]. Accordingly, Jared Taylor (a prominent head of the AR) denounces multiculturalism, arguing that non-whites cannot maintain or contribute to Western civilization [25]. As recognised by Jardina, such ideas have contributed to the political divide on issues such as immigration and were capitalised on during the 2016 Trump candidacy [28]. Though many members claim their separatist views do not assume racial superiority [3], others stridently argue for the intellectual and cultural supremacy of whites [27]. As such, critics believe the AR to be a contemporary white supremacist movement [29].
Proposed predictors of extreme political attitudes
In this section we outline a set of hypothesized predictors of PC and AR attitudes. Our choice of predictors was guided by the theoretical work on PC by Lukianoff and Haidt as well as the extensive empirical work on traditional political attitudes (i.e. moderate liberal and conservative attitudes) [10]. Specifically, Lukianoff and Haidt argue for the importance of overprotective parenting and social media use in the development of PC attitudes, whereas empirical studies on traditional political attitudes have confirmed the role of the Big Five personality traits (as outlined below) [10]. We have therefore chosen to focus on a broad set of predictors in this study; we believe this is appropriate given that our primary objective is to make an early contribution to understanding extreme political attitudes. We argue that, because so little is known about the correlates of these attitudes, a useful starting point involves mapping extreme political attitudes onto a highly generalizable personality taxonomy (the Big Five) and drawing from the existing (albeit limited and largely untested) theoretical work in this area.
News and social media
Over the past 40 years, news reports have become increasingly negatively biased [30]. Consequently, individuals tend to overestimate negative events and underestimate their personal safety [31]. Additionally, more people are retrieving news from social media sites, with 68% of American adults occasionally getting news from social media and 20% saying that they do so often [32]. Social media sites allow users to curate their social network, which has seen news consumption become increasingly self-selected [33,34]. While selectively engaging with certain outlets, users filter the newsfeed of others by 'sharing' stories to their social network. Politically consistent social media users are more likely to 'follow', 'like', and communicate with others who share their political attitudes, and to disconnect from users who express disparate opinions [35]. This has been argued to have created 'echo chambers', wherein information becomes increasingly uniform and limits one's exposure to, and delegitimizes the moral claims of, disconfirming attitudes [36]. While popular thought supposes that intergroup contact diminishes contention, Bail et al. found that when partisans are exposed to antithetical views online, they become increasingly polarized [37]. This suggests online echo chambers decrease one's leniency toward opposing views, increase confidence in pre-established viewpoints, and increase exposure to more extreme attitudes [38,39].
Accordingly, mainstream outlets and psychologists have reported considerable online activity from the extreme left and right [40,41]. While PC is widespread, several authors have observed that AR proponents sanitize their ideology for public consumption through antagonistic memes satirizing PC culture and progressivism in general [13,14]. This has been reported to have caused an influx of young subscribers who enter AR echo chambers and are introduced to racialist content. Therefore, we predict that extremists on the left and the right will spend more time on social media sites. Specifically, it is hypothesized that those reporting higher social media use will report higher levels of PCL (Hypothesis 1a), PCA (Hypothesis 1b), and WI attitudes (Hypothesis 1c).
Overprotective parenting
Lukianoff and Haidt theorised that the increase in PC can be attributed to generational changes in child rearing [10,24]. While the typical childhood of Baby Boomers and Gen Xers was largely unsupervised, their Millennial and (especially) iGen children have been subject to a significant increase in parental surveillance (with a market for phone applications that allow constant monitoring of children, creating the so-called 'electronic umbilical cord') [42]. Parents are more likely to act on behalf of their children in ambivalent situations (with parents often speaking to college administrators about the concerns of their adult children) [43], and to disengage their child from any potentially dangerous activity (at least without adult supervision) [44,45]. At school, pre-emptive safety measures have been realized through the enforcement of zero-tolerance bullying policies and the insistence on formal, adult-organized play times [46]. Further, children are provided an emotionally accommodating environment through the awarding of 'participation trophies'. Consequently, overprotected children have less opportunity to independently explore the world: to experience, learn from, and realize they can survive failure.
Previous research indicates that overprotective parenting is associated with anxiety [47], depression and lower life-satisfaction [48], worse academic performance, and more relational problems [49]. However, most problems associated with overprotective parenting can be largely explained through the child's perception of their own competence and autonomy [48,50]. In congruence with self-reports, overprotected children tend to be perceived by others as less resilient, capable, and independent [51]. As such, the ostensible increase in overprotective parenting practiced by Baby Boomers and Gen Xers may have made Millennial and iGen children more emotionally fragile, more likely to consider adverse content a threat, and more likely to seek an external authority to resolve their problems [52]. Thus, while ensuring the short-term welfare of children, overprotective parenting reduces a child's development of resilience. Accordingly, without the resilience to deal with problems, people may consider adversity of any degree of severity as synonymous with violence, and therefore demand its immediate elimination. As such, it is hypothesised that perceptions of overprotective parenting will be negatively correlated with age (Hypothesis 2a) and resilience (Hypothesis 2b). Furthermore, it is hypothesised that perceptions of overprotective parenting (Hypothesis 3a) and low resilience (Hypothesis 3b) will predict PCA.
Personality traits
We also consider personality traits as predictors of extreme political attitudes. Although the recent shift in political attitudes cannot be attributed to personality traits (i.e. overt levels of traits have not changed), it is likely that some individuals, based on their personality traits, will be more sympathetic to extreme attitudes. Indeed, the academic literature demonstrates an association between personality traits and political preferences [53]. The current study therefore utilizes the most popular and empirically supported model of personality traits: the Big Five [54]. The Big Five is a descriptive model of personality that specifies the major, broad dimensions upon which people differ: Openness to Experience (Openness-Intellect), Extraversion, Conscientiousness, Agreeableness, and Neuroticism [55]. Although the Big Five has proven useful across a range of situations and contexts, researchers have extended it to account for higher- and lower-order factors, with each of the five traits capturing the covariance of two lower-order traits (aspects) [56].
Specifically, Openness-Intellect describes one's tendency to try new things and one's engagement with ideas [56]. Openness-Intellect consists of the aspect Openness, which details one's interest in abstract content, and the aspect Intellect, which measures one's capacity for intellectual pursuits. Extraversion is the pleasure-seeking and positive affect dimension; as the behavioural mode of exploration, it manifests as social engagement. It includes the aspects Assertiveness, which captures one's sense of agency and social dominance, and Enthusiasm, which measures sociability and outgoingness. Conscientiousness details one's tendency to follow rules and pursue non-immediate goals. Its aspects are Industriousness and Orderliness, which detail one's work ethic and desire for order, respectively. Agreeableness reflects one's proclivity to be altruistic and cooperative with others. Comprising the aspects Compassion and Politeness, it explains one's sensitivity to, and interaction with, others. Neuroticism is the measure of one's sensitivity to threat and punishment. Ultimately, it is the negative affect dimension, and consists of the aspects Volatility, measuring emotional reactivity, and Withdrawal, assessing susceptibility to negative emotion.
Political attitudes are largely explained by two main traits: Openness-Intellect and Conscientiousness [17]. Liberals tend to be high in Openness-Intellect and low in Conscientiousness (conservatives showing the opposite disposition). However, analyses at the level of personality aspects have revealed that Agreeableness also contributes to political attitudes [53]. Specifically, liberals tend to be high in the aspects Openness and Compassion, and low in Orderliness. Conservatives evince high levels of the aspects Orderliness and Politeness, and low levels of Openness and Compassion.
Recently, Andary-Brophy found that PC attitudes are associated with trait Agreeableness, specifically the aspect Compassion [11]. PCL is predicted by trait Openness-Intellect, specifically the aspect Openness. PCA is predicted by high trait Conscientiousness, specifically the aspect Orderliness. Therefore, compassionate ends largely motivate PC; however, the means used to meet these ends, whether socially democratic or immediate and autocratic, are moderated by Openness or Orderliness, respectively. It is predicted that PC adherents will evince more compassion toward vulnerable groups and will differ as to whether they prioritize the consideration or the protection of these groups. Specifically, it is hypothesized that high Compassion and Openness will predict PCL (Hypothesis 3a), and high Compassion and Orderliness will predict PCA (Hypothesis 3b).
Although no research has been conducted on the Big Five and WI attitudes, previous findings have shown that AR adherents display higher scores on right-wing authoritarianism (RWA) and social dominance orientation (SDO) [57]. As both RWA and SDO have been heavily studied, it may be possible to extend such findings to inform the hypotheses regarding WI attitudes. Such investigations have found a strong relationship between personality and authoritarian attitudes [58,59]. Specifically, racially prejudiced attitudes are primarily predicted by low Openness-Intellect and Agreeableness, and high Conscientiousness [60]. Therefore, it is reasonable to conclude that people harboring prejudiced attitudes are less compassionate and have a need for conceptual and material organization. However, although manifesting as a rejection of minority groups, it remains possible that WI attitudes more appropriately represent an exaggerated acceptance (or protection) of white identity, and therefore the aforementioned results may apply to the AR only partially. Even so, these findings informed the hypotheses of the current study. It was hypothesized that WI attitudes would be predicted by low levels of the aspects Openness and Compassion, and high levels of the aspect Orderliness (Hypothesis 4).
Moral absolutism
A final trait predictor of extreme political attitudes we consider is moral absolutism. Moral absolutism is the tendency to engage in rigid, 'black-and-white' moral thinking about others' behavior [61]. Lukianoff and Haidt have argued that, in the promotion of equality of outcome, advocates of PC necessarily apply a moral framework [10]. According to this moral framework, 'good' is defined as those promoting equal outcomes across groups, while 'bad' is defined as those in positions of power. Such a moral distinction allows advocates to make claims about the goodness of others' character based on superficial characteristics. This superficial analysis then permits inferences about others' motives regardless of behavior. For example, guilt is not only attributable to those explicitly promoting inequality, but also to those deemed non-committal beneficiaries of the system of inequality. Such a simple moral framework is likely to encourage proponents to adopt a morally dichotomous, 'us versus them', worldview.
This line of reasoning is directly applicable to WI attitudes. While PC defines morality on the basis of equity, WI adherents appear to define it on the basis of group identity. Indeed, previous research has shown that morally dichotomous attitudes are associated with generalized prejudice [62]. Furthermore, those high in moral absolutism tend to regard others' behaviours and attitudes as categorically 'good' or 'bad', thereby providing moral license to directly oppose different viewpoints with force [63,64].
As such, we suggest that moral absolutism is an antecedent to the adoption of extreme political attitudes. We argue that those with a tendency to make good-bad/right-wrong judgements about complex issues are more likely to uniformly accept one position and reject/oppose all others and consequently hold extreme views (rather than seeing both sides of an issue). As such, it is hypothesized that moral absolutism will significantly predict PCA (Hypothesis 5a) and WI attitudes (Hypothesis 5b).
We note that various other factors in this study potentially impact moral absolutism. For example, overprotective parenting and lower resilience in young people may serve to enhance dichotomous thinking in such individuals. We do not specifically test indirect effects in this study, however, since our primary focus in this new research area is to identify predictors of extreme political attitudes. Nevertheless, we conduct hierarchical analyses to gain insight into the relative contribution of different predictors.
Participants and design
Participants were a quota-based sample of 512 adults (females = 264) between the ages of 18 and 84 (M = 45.7, SD = 16.9) living in the U.S. (see Table 1). This sample size was based on an a priori power analysis using G*Power 3.0 [65], specifying a regression model with a small-to-medium effect size (f² = 0.08), 18 predictors (our largest regression included 13 predictors and 5 covariates), a power level of 0.9, and a p-value of 0.001. Output from G*Power specified that a minimum of 454 participants was required.
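For readers who wish to check this figure, the power analysis can be re-derived outside G*Power. The sketch below assumes G*Power's fixed-model multiple regression convention, in which the noncentrality parameter of the F test is λ = f²·N; the function name and search loop are our own illustration, not part of G*Power.

```python
from scipy.stats import f as f_dist, ncf

def required_n(f2=0.08, n_predictors=18, alpha=0.001, power=0.90):
    """Smallest N at which the F test of R^2 = 0 reaches the target power."""
    n = n_predictors + 2                              # smallest N with positive error df
    while True:
        df1, df2 = n_predictors, n - n_predictors - 1
        f_crit = f_dist.ppf(1 - alpha, df1, df2)      # critical value under H0
        achieved = ncf.sf(f_crit, df1, df2, f2 * n)   # power under lambda = f^2 * N
        if achieved >= power:
            return n
        n += 1

print(required_n())  # should land near the reported minimum of 454
```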
The sample was recruited using the online panel research company Qualtrics. Qualtrics was chosen over other research panels and MTurk because it offered the option of recruiting a more representative sample and the high level of user control required to enhance data quality. To ensure that the sample was representative of the demographics of the U.S., quotas reflecting the demography of the national population were set on the dimensions of age, ethnicity, gender, employment status, and education level. Qualtrics recruited participants according to these nationally representative quotas. Participants were informed of the nature of the study on an information sheet and provided informed consent by clicking a button at the bottom of the page that continued them to the survey. The study utilized a cross-sectional, correlational design, whereby participants were asked to respond to an online self-report survey. For their participation, participants received 'points' from Qualtrics, which they could accrue and redeem for in-store rewards. Participants who completed the survey faster than one-third of the average completion time were removed from the sample. Ethics approval (approval number: 1800000544) was granted by the Queensland University of Technology (QUT) Human Research Ethics Committee (UHREC).
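The completion-time screen described above amounts to a simple filter. A minimal sketch follows; the data frame `responses` and the column name `duration_sec` are hypothetical placeholders, not part of the study's materials.

```python
import pandas as pd

def drop_speeders(responses: pd.DataFrame, time_col: str = "duration_sec") -> pd.DataFrame:
    """Remove respondents faster than one-third of the mean completion time."""
    cutoff = responses[time_col].mean() / 3.0
    return responses.loc[responses[time_col] >= cutoff].reset_index(drop=True)
```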
Measures
Qualtrics software was used to conduct the online survey. Participants completed the survey via an anonymous link. The measures used in the survey are detailed here.
Politically correct attitudes. To assess PC, the 36-item PC scale (short version) [11] was used. The scale consists of two subscales: PCL-S and PCA-S ('S' stands for 'short version'). The PCL-S scale is a 19-item scale that measures PCL, and the PCA-S scale has 17 items that measure PCA. Participants were asked to indicate their level of agreement with, or their assessment of, the items, which were in the format of statements or definitions. For instance, items included statements such as "There are no biologically based differences in personality, talent, and ability to reason, between racial groups". Because the questions have different response options (Likert-scale and single-choice items), a total PC score was calculated by averaging the standardized scores of the items (this procedure was repeated for the relevant items in each subscale). The current study found that the subscales PCL-S (α = .72) and PCA-S (α = .86) both reported acceptable Cronbach's alphas. White identitarian attitudes. To measure WI attitudes, the WI Scale was developed. To create this scale, statements made by mainstream outlets regarding the AR, as well as official statements made by recognized AR figureheads and endorsed by their followers, were collected [3,4,25]. Such statements had three main themes: 1) a focus on ethnic identity, 2) a strong sense of white solidarity, and 3) a belief that whites are being displaced in the U.S. This method of scale development was consistent with that used by Andary-Brophy to develop the PC scale [11]. Overall, the questionnaire consists of 12 items, such as "Race is the foundation of identity" and "Whites are being forgotten and replaced by minorities in this country". To complete the questionnaire, participants responded to statements on a 5-point Likert scale ranging from completely agree to completely disagree (the option of there being not enough background information was also provided and was treated as synonymous with 'neither agree nor disagree' responses in statistical analyses). A total WI score was calculated by averaging the responses to the items. An exploratory factor analysis confirmed the presence of one primary factor, and the internal reliability of the scale was satisfactory (α = .88).
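A sketch of this scoring rule (standardize each item, then average) and of the reliability check is given below; it is illustrative only, and the item column names are hypothetical.

```python
import pandas as pd

def scale_score(items: pd.DataFrame) -> pd.Series:
    """Mean of z-standardized items, for subscales with mixed response formats."""
    z = (items - items.mean()) / items.std(ddof=1)
    return z.mean(axis=1)

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Classical Cronbach's alpha from a respondents-by-items matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# e.g. pcl = scale_score(survey[pcl_item_columns])
#      alpha_pcl = cronbach_alpha(survey[pcl_item_columns])
```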
Demographics and general information. Demographics were obtained with questions asking for age, gender, ethnicity, employment status, and highest level of education achieved. To allow for correlational analyses, demographic variables (gender, ethnicity, employment status, and education level) were coded as numbers prior to statistical analyses (i.e., gender: male = 1, female = 2; ethnicity: Caucasian = 1, other = 2; employment status: employed = 1, unemployed = 2; education: below grade 10 = 1, grade 12 = 2, trade/cert III/IV = 3, diploma/associate diploma = 4, bachelor degree = 5, post-graduate = 6). The amount of time spent on social media was assessed using a single-item measure. Participants were directly asked, "On average, what is the approximate amount of time you spend on social media sites each day?" Participants were given a set of 9 response options ranging from 'none' to 'more than 6 hours'. Participants were told that social media sites include "Facebook, Twitter, YouTube, Snapchat, and other such sites." Overprotective parenting. To measure parental overprotection, the overprotection subscale of the Parental Bonding Instrument [66] was used. The overprotection subscale includes 13 items that measure self-reports of how controlling and invasive participants believed their parents to be up to the age of 16. Participants were required to indicate how representative of their childhood statements such as my parent/s "felt I could not look after myself unless she/he was around" were, on a 4-point Likert scale ranging from "very like" to "very unlike". Scores were calculated using the averages of the items. The internal reliability was satisfactory in the current study (α = .71).
Resilience. The Brief Resilience Scale [67] was used to assess resilience. The scale consists of six items, to which participants responded on a 5-point Likert scale ranging from strongly disagree to strongly agree. Items included "I tend to bounce back quickly after hard times". Resilience scores were calculated by averaging the responses to the items. Its authors have shown the scale to correlate positively with positive outcomes and negatively with negative outcomes. In the current study, it had satisfactory internal reliability (α = .74).
Personality traits and aspects. The 100-item self-report Big Five Aspect Scale [57] was used to assess the Big Five and their respective aspects. The Big Five includes Openness-Intellect, Extraversion, Conscientiousness, Agreeableness, and Neuroticism. The aspects of Openness-Intellect are Openness to Experience and Intellect. The two aspects of Extraversion are Enthusiasm and Assertiveness. Conscientiousness includes the aspects Industriousness and Orderliness. Agreeableness comprises the aspects Compassion and Politeness. Lastly, the aspects of Neuroticism are Volatility and Withdrawal. The instrument consists of 10 scales (one for each aspect), each containing 10 items. Using a 5-point Likert scale ranging from strongly agree to strongly disagree, participants were asked to rate the extent to which they thought each item was descriptive of them in general. After reversing the appropriate responses, scores for the 10 aspects were calculated by averaging the items. Scores for the five traits were calculated by averaging the scores of their two respective aspect scales. Previous research has validated the instrument against standard Big Five scales, such as the Big Five Inventory (mean r = .88) and the Revised NEO Personality Inventory (mean r = .82) [56]. In this study, it had satisfactory internal reliability (mean α = .83).
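The scoring pipeline just described (reverse-key, average items into aspects, average aspect pairs into traits) can be sketched as follows; the item lists, the reversed-item set, and the 1-5 keying shown here are assumptions for illustration rather than the published scoring key.

```python
import pandas as pd

def score_bfas(responses: pd.DataFrame, aspect_items: dict, reversed_items: set,
               trait_map: dict, scale_min: int = 1, scale_max: int = 5) -> pd.DataFrame:
    """Reverse-key items, average into 10 aspect scores, then into 5 trait scores."""
    keyed = responses.copy()
    for col in reversed_items:                      # reverse keying: 1<->5, 2<->4, ...
        keyed[col] = (scale_min + scale_max) - keyed[col]
    aspects = pd.DataFrame({name: keyed[cols].mean(axis=1)
                            for name, cols in aspect_items.items()})
    traits = pd.DataFrame({trait: aspects[list(pair)].mean(axis=1)
                           for trait, pair in trait_map.items()})
    return aspects.join(traits)

# e.g. trait_map = {"Agreeableness": ("Compassion", "Politeness"),
#                   "Neuroticism": ("Volatility", "Withdrawal"), ...}
```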
Black-and-white moral thinking. Black-and-white moral thinking was assessed using the Moral Absolutism/Splitting items of the Attitude Toward Ambiguity construct [62]. The items consist of seven statements, such as "There is a right and a wrong way to do almost everything". Participants expressed the degree to which they agree with the statements on a 7-point Likert scale ranging from strongly disagree to strongly agree. Moral absolutism scores were calculated after averaging the responses to the items. The scale was found to have high internal reliability in the current study (α = .91).
Traditional political attitudes. Left-right political attitudes were assessed using a single item. Participants were provided with the typical definition of liberal and conservative and were required to report their political views on a 5-point Likert scale ranging from very liberal to very conservative.
Procedure
The survey was completed online via Qualtrics software. Once participants accessed their anonymous link, they were directed to a participant information page, which provided information relevant to the study. After reading the information sheet, participants provided tacit informed consent by continuing to the survey. Participation was voluntary and anonymous, and participants were able to withdraw from the study at any time (although this would forfeit their claim to the incentives from Qualtrics).
Upon commencing the survey, participants were asked to provide demographic information, and then continued to the questionnaires. Participants completed the survey at their own pace, which generally took about 24 minutes. Once they completed the survey, participants were given the opportunity to make any comments about the survey and were thanked for their participation. The contact information of the research team was also provided in case they wanted to make any further comments. The vast majority of participants who provided open-ended comments indicated that they found the survey enjoyable and thanked the research team for the opportunity to be involved.
Data analysis
To investigate the hypotheses, several hierarchical multiple regression analyses were undertaken. Each regression was conducted using the same three-step procedure, with demographics in the first step, personality traits in the second, and parenting, resilience, moral absolutism, and social media use in the third step. This was done to control for demographics, and to assess whether the hypothesized non-Big Five predictors (parenting, resilience, moral absolutism, and social media) had incremental validity over personality traits. However, to include all participants in the analysis (i.e. employed and unemployed), employment status was not included in step 1. Also, to avoid issues of multicollinearity with the Big Five traits, personality aspects were not included in the original analyses. Instead, subsidiary hierarchical multiple regression analyses were undertaken to assess the contribution of the personality aspects: the Big Five traits were removed from the regression, and the personality aspects were introduced in the second step of these separate analyses (with the same hierarchical entrance of variables). To assess the unique contribution of each variable, the reported values were taken from the third step of the model. The reported contributions of the non-personality predictors were taken from the regression analyses with the Big Five traits unless specified otherwise.
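A minimal sketch of this three-step procedure, using ordinary least squares from statsmodels, is given below. The predictor block names are hypothetical placeholders; the point is the incremental R² per block and the step-3 coefficients from which the reported values are read.

```python
import statsmodels.api as sm

def hierarchical_ols(data, outcome, blocks):
    """Fit OLS models block by block; return (fit, delta R^2) for each step."""
    steps, predictors, previous_r2 = [], [], 0.0
    for block in blocks:
        predictors = predictors + block
        X = sm.add_constant(data[predictors])
        fit = sm.OLS(data[outcome], X, missing="drop").fit()
        steps.append((fit, fit.rsquared - previous_r2))
        previous_r2 = fit.rsquared
    return steps

# blocks = [demographic_vars, big_five_traits,
#           ["overprotection", "resilience", "moral_absolutism", "social_media"]]
# step3_fit, step3_delta_r2 = hierarchical_ols(survey, "PCA", blocks)[-1]
```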
Political opinion in the U.S.
Analysis of the responses to the single item assessing traditional political attitudes (left vs right) showed that political opinion in the U.S. is normally distributed, with the largest portion of participants identifying as politically moderate (30.9%). To determine the proportion of participants holding 'extreme' attitudes, we looked at the frequencies of high scorers on the PCL, PCA, and WI scales. Because these variables were measured on a 1-5 response scale, we classified individuals obtaining a mean score of 4 or above as holding extreme attitudes. In other words, individuals were considered extreme scorers when they either agreed or strongly agreed (on average) with all items assessing each attitude. Using this criterion, we found that 8.2% of participants held extreme PCL attitudes, whereas 6.1% held extreme PCA attitudes. We also found that 14.1% of white participants held attitudes typical of the AR (see Table 2).
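This 'extreme scorer' rule (mean item score of 4 or higher on a 1-5 scale) amounts to a one-line computation; the column name in the example below is hypothetical.

```python
def extreme_rate(mean_scores, cutoff=4.0):
    """Percentage of respondents whose mean item score is at or above the cutoff."""
    return float((mean_scores >= cutoff).mean() * 100.0)

# e.g. extreme_rate(survey["PCL_mean"]) -> ~8.2 for this sample, per the text
```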
To provide a clearer picture of the extent of extreme political attitudes in the U.S., we next provide an overview of response frequencies for a selection of questions from each scale. Regarding PCL attitudes, 2 in 10 participants agreed or strongly agreed that retail stores should avoid using the word 'Christmas' in advertising campaigns; 7 in 10 participants believed that newspapers should have some degree of screening for offensive, racist, or sexist language/ideas. Regarding PCA attitudes, approximately 8 in 10 participants believed that a professor should be punished in some form for using a racist, sexist, or homophobic slur when teaching a class; 3 in 10 participants agreed or strongly agreed that an alleged perpetrator of sexual assault should have to prove their innocence.
Regarding WI attitudes, approximately 2 in 10 white participants agreed or strongly agreed that racially or ethnically defined states are legitimate and necessary; 3 in 10 white participants agreed or strongly agreed that there was a progressive conspiracy against white identity; 3 in 10 white participants agreed or strongly agreed that whites are being forgotten or replaced by minorities in this country.
Bivariate correlations among focal variables
Bivariate correlational analyses were run to investigate relationships among the demographics, the hypothesised predictors, and the extreme political attitudes (see Table 3).
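Such a table of pairwise associations can be produced with a short routine like the sketch below, assuming Pearson correlations; the variable names passed in are placeholders.

```python
from itertools import combinations
from scipy.stats import pearsonr

def correlation_table(data, columns):
    """Pairwise Pearson r and p-value for every pair of variables."""
    rows = []
    for a, b in combinations(columns, 2):
        pair = data[[a, b]].dropna()
        r, p = pearsonr(pair[a], pair[b])
        rows.append((a, b, round(r, 3), round(p, 4)))
    return rows
```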
Discussion
Previous research has claimed increased political polarization in the U.S., with an ostensible mass attitudinal migration toward PC on the left and the AR on the right [2]. Although highly covered in mainstream media and theorized about by prominent social psychologists, academic literature on these attitudes is scarce. The current study represents the first in-depth investigation into the psychological correlates of PC and the AR. Using a large quota-based U.S. sample, we found that extreme attitudes represent a significant minority of attitudes in America. Accordingly, most participants were indifferent to, disagreed, or strongly disagreed with the extreme left and right. This study confirmed previous research [11] showing that PC is a multidimensional construct, and also provided evidence that adherents of the extreme left and right share certain traits.
We observed that the effect of social media on political attitudes differed for extreme left and extreme right attitudes. While PCL and PCA were significantly predicted by social media use, WI attitudes were not. As most participants (56.4%) reported that Facebook was their most used site, such findings are in accordance with previous research reporting a disproportionate amount of politically left content on Facebook [68]. Liberals are also more likely to use social media in general [69]. With more liberal users and content, it is likely that Facebook has more liberal echo chambers, and therefore increases the probability of engaging with leftist reverberations. Secondly, as the first investigation into the hypotheses of Lukianoff and Haidt, this study provides preliminary evidence that changes in parenting have contributed to extreme left attitudes [10]. Generational changes in overprotective parenting were observed, with younger people more likely to report having had overprotective parents. Further, overprotective parenting and low levels of resilience differentiated whether extreme attitudes on the left manifested as PCL or PCA (contributing to PCA but not PCL).
Thirdly, in line with previous research, personality aspects significantly predicted political attitudes. PC partially manifested from trait Agreeableness, with the aspect Compassion predicting PCL and PCA, and low aspect Politeness predicting PCA. However, contrary to previous findings [11], the aspects Openness and Orderliness were non-significant predictors of PCL and PCA, respectively. The discrepant findings for aspect Openness may be explained by the different regression analyses undertaken in this study compared to Andary-Brophy. After performing a correlational analysis, Andary-Brophy excluded variables that reported non-significant correlations with PC from later regression analyses to avoid issues of statistical singularity [11]. As preliminary investigation showed this was not problematic in the current study, all personality aspects were included in our regression models. Further, the current study reported a significant bivariate correlation between Openness and PCL. Therefore, while Andary-Brophy reported significance after controlling for some other aspects, in the current study the unique contribution of Openness was non-significant after controlling for all variables. However, the bivariate correlation between Orderliness and PCA was non-significant, which complicates interpretation of this discrepancy.
Fourthly, WI attitudes were negatively predicted by trait Openness-Intellect, specifically the aspect Openness, and by trait Agreeableness, specifically the aspect Politeness. However, the contribution of trait Conscientiousness (despite a positive contribution from the aspect Orderliness) was non-significant (see Table 7). As previous research has shown that RWA attitudes are largely explained by Conscientiousness [60], these results suggest WI attitudes do not follow the pattern of typical far right-wing attitudes. Indeed, at the trait level the AR displayed the opposite personality profile to PCL, with PCL positively predicted by traits Openness-Intellect and Agreeableness. As recognised by Andary-Brophy, PCL is characterised by the desire for the inclusion of all ethnic groups, and therefore truly represents the liberal compassion for recognised, disadvantaged groups [11]. Against this, the AR is marked by racial exclusivity and a belief in societal persecution of whites. It is possible that progressive advocacy of ethnic diversity serves as confirmation for AR adherents of the narrative that whites are being displaced. This narrative then motivates their defence of white identity through anti-leftism. This suggests that the AR may be more appropriately conceptualised as the extreme opposition to progressivism.
Practical implications
A major outcome of extreme political views is violence. Examples include the aforementioned Charlottesville rally and violent protests against 'controversial' speakers on college campuses. From a practical standpoint, our findings help us understand the underlying causes of such violent behaviour and may assist in reducing it. Our generally strong adjusted R² values suggest that the set of variables in this study is highly relevant to extreme political attitudes.
As seen in Table 3, PCA and WI attitudes are positively correlated. This suggests that a common trait may explain these groups' engagement in violent protests. Indeed, it is reasonable to suspect that the violent protest behaviour typical of these groups may be partially explained by the common trait of moral absolutism. Black-and-white moral thinking has been shown to be associated with a willingness to support violence [70]. Black-and-white moral thinking castigates opponents as morally decrepit and supports engagement in extreme intervention against antithetical views [64]. As moral absolutism is characterised by an unquestioning assumption of one's already established moral claims [71], this suggests that exposure to the legitimate moral claims of opposing views may reduce one's certainty. Without absolute conviction in their preconceptions, adherents of these groups may be less likely to engage in extreme behaviours in defence of them.
Limitations
A limitation of the present study is its cross-sectional design. While permitting investigation into correlations, this study cannot inform causal mechanisms. Second, this study used retrospective self-reports to assess parental overprotection. As recognised by Schwarz [72], when responding to retrospective self-reports of behaviours that occurred frequently (as in the Parental Bonding Instrument), participants do not have an accurate picture but rely heavily on estimation strategies to recount such information [69]. As such, responses may be unreliable. Third, while the total sample size is sufficient to generalise findings to the population and is consistent with previous polling data on U.S. agreement with these attitudes [9], we caution against over-interpretation of the demographic data. Because this is the first look into the demographics of these attitudes, we maintain that these data are interesting; however, it is possible that the sample sizes of the individual groups are not sufficient to generalise to the population. Finally, social media use was measured using a single self-report item. However, the use of single-item scales can be justified given the large scope of the current research [73]. Further, as recognised by Fuchs and Diamantopoulos, when assessing explicit and generally understood constructs, single-item measures can provide valid and reliable responses [74]. Because 'social media use' was specifically defined at the start of the survey and can be considered general knowledge, a single-item assessment is appropriate.

[Table 7. Step 3 of the hierarchical multiple regression of Big Five traits, parenting, and personal factors, by attitude.]
Future research
It is important that future studies address the identified limitations. It is recommended that future research assess the psychometric properties of the WI scale, and its validity in properly tapping AR attitudes. As explained in the methods section, the items were developed by collating phrases commonly used by AR adherents. Even so, some of the questions are also typical of traditional conservative ideas (for instance, "Marriage should only be allowed between a man and a woman"). This suggests that the scale may have only incompletely approximated WI attitudes, highlighting the need to properly define the construct. It is possible that those in agreement with standard conservative attitudes are more likely to engage with people espousing more extreme attitudes of the political right and therefore display a more sympathetic interpretation of their plights.
Second, while this study provides insight into the prevalence of PC and WI attitudes in the U.S., future studies should assess the prevalence of such attitudes in other countries. This would not only reveal the spread of these attitudes outside of the U.S. but may also elucidate how different cultural environments contribute to the development of such attitudes.
Third, given the scope of this study, more complicated pathways could not be investigated. For instance, the contribution of social media to adherence to extreme political attitudes may be moderated by the particular social media site. Investigation into potential moderating effects of different social media platforms is recommended.
Further research should also continue the investigation into the hypotheses of Lukianoff and Haidt [10]. Specifically, research should investigate how overprotective parenting contributes to PCA via other developmental pathways. As previously mentioned, the results of the present study suggest that overprotective parenting (while not necessarily lowering resilience) reinforces the assumption that requesting intervention from authoritative parties will solve personal adversities. It is therefore recommended that future research investigate how variables such as psychological entitlement contribute to PCA.
Conclusion
The current study is the first in-depth investigation into the prevalence and developmental correlates of PC and the AR. The study found that although PC and WI attitudes are prevalent, the majority of Americans disagree with extreme political attitudes on both the political left and the right. Further, generational changes in social media use and overprotective parenting contributed to the development of PC; however, social media did not contribute to WI attitudes. The personality trait Agreeableness was shown to significantly explain extreme political attitudes, with high Compassion separating PC from WI attitudes, and low Politeness separating PCA from PCL. At the trait level, Agreeableness and Openness-Intellect revealed PCL and the AR to be antonymous. Lastly, authoritarian attitudes on both the political left and right were predicted by moral absolutism.
These findings provide clear evidence that PC and AR are valid terms representing the far ends of political thought in the U.S. Further, this study contributes to the literature by showing that political differences are not insubstantial, but are largely explained by one's environment, upbringing, and individual disposition.
"year": 2020,
"sha1": "9101f02edb9ac7ca83053ec07ae32f05b0eef6f8",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0239259&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6d2ddc082912e1c2940240474b40987c40ddf74b",
"s2fieldsofstudy": [
"Political Science",
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
Potential of pomegranate fruit extract (Punica granatum Linn.) to increase vascular endothelial growth factor and platelet-derived growth factor expressions on the post-tooth extraction wound of Cavia cobaya
Background: Pomegranate fruit extract has several activities, including anti-inflammatory, antibacterial, and antioxidant effects, attributable mainly to its punicalagin and ellagic acid content. Pomegranate exerts its various therapeutic abilities through different mechanisms. Vascular endothelial growth factor (VEGF) functions to form new blood vessels and is produced by various cells, one of which is the macrophage. Platelet-derived growth factor (PDGF) is a growth factor with proven chemotactic activity that increases fibroblast proliferation and collagen matrix production. In addition, VEGF and PDGF synergize in their ability to vascularize tissues: PDGF functions to stabilize and regulate the maturation of new blood vessels. The activity of pomegranate fruit extract was observed by measuring the increase in VEGF and PDGF expression as markers of the wound healing process. Aim: To investigate the potential of pomegranate extract applied to the tooth extraction wound to increase the expression of VEGF and PDGF on the 4th day of the wound healing process. Materials and Methods: This study used 12 Cavia cobaya, which were divided into two groups receiving either 3% sodium carboxymethyl cellulose (control) or pomegranate extract. The 12 C. cobaya were sacrificed on the 4th day, and the lower jaw of each experimental animal was taken and decalcified for about 30 days. The expression of VEGF and PDGF was examined using immunohistochemical techniques. The differences in VEGF and PDGF expression were evaluated statistically using a t-test. Results: Statistical analysis showed that there were significant differences between the control and treatment groups (p<0.05). Conclusion: Pomegranate fruit extract administration increased VEGF and PDGF expression in the post-tooth extraction wound.
Introduction
Tooth extraction is a surgical procedure involving bone tissue and the soft tissues of the oral cavity. Tooth extraction is also one of the most frequently performed curative procedures, as described in the health profiles of East Java in 2014, which showed that 201,922 of the 305,400 basic dental services delivered in health centers were related to tooth extraction [1]. Unfortunately, tooth extraction can cause complications, such as bleeding, infection, fracture, and dry socket [2]. Impaired wound healing after tooth extraction can also be unfavorable for subsequent prosthodontic treatment.
Therefore, acceleration of the wound healing process after tooth extraction is very necessary. Periodontal tissue destruction due to tooth extraction triggers inflammation [3]. This condition activates macrophages and can also trigger the synthesis of cytokines with pro-inflammatory activities, such as interleukin (IL)-1, IL-6, IL-8, and tumor necrosis factor α [4], as well as another cytokine, IL-10, which serves as a regulator [5]. In addition to cytokines, growth factors play an important role in the wound healing process [6].
Inflammation is actually a normal response to injury, but it must be controlled since it can cause negative effects on surrounding tissues [7]. Inflammation usually occurs in response to trauma, chemical irritation, and infection caused by bacteria or viruses. The healing process occurs in three main stages, namely, the hemostatic and inflammatory stage, the proliferation stage, and the remodeling stage. Although granulation occurs at the proliferation stage, angiogenesis begins immediately after the injury and continues throughout the wound healing process as a whole [8].
Angiogenic growth factors, such as vascular endothelial growth factor (VEGF) and platelet-derived growth factor (PDGF), play an important role in the wound healing process; these signaling proteins trigger the growth of new blood vessels during healing. In other words, these proteins stimulate new blood vessel growth after injury. VEGF is produced by many cells, one of which is the macrophage. VEGF also plays an important role in bone and blood formation, which is necessary for the wound healing process [9]. In the normal wound healing process, the formation of granulation tissue containing fibrovascular components, composed of fibroblasts, collagen, and blood vessels, is an important marker of the healing response. Those vascular components depend on angiogenesis, in which new blood vessels appear on the 3rd day after the injury (wounding). Therefore, this research observed the increased expression of VEGF.
PDGF, on the other hand, is the first chemotactic growth factor for the migration of cells, such as neutrophils, monocytes, and fibroblasts, to the injured area. PDGF can increase fibroblast proliferation and the production of extracellular matrix by those cells. PDGF can also stimulate fibroblasts to produce collagen matrix and induce the myofibroblast phenotype in those cells; therefore, PDGF has a major role in wound healing [10].
Furthermore, pomegranate has been used traditionally as medicine [11,12]. Pomegranate exerts various therapeutic abilities through different mechanisms, such as anti-inflammatory, antibacterial, and antioxidant activities [12,13]. The main compounds contained in pomegranates are polyphenols, composed of punicalagin and ellagic acid (EA). EA has anti-inflammatory activity, decreasing IL-6 by inhibiting nuclear factor-kappa B (NF-kB) [14].
For those reasons, it is necessary to conduct research on the use of pomegranate extract on tooth extraction wounds to accelerate the healing process by observing the increased expression of VEGF and PDGF. Thus, this research aimed to investigate the potential of pomegranate extract (Punica granatum Linn.) to increase the expression of VEGF and PDGF in the post-tooth extraction wound.
Ethical approval
This research was approved by the Ethical Committee of the Faculty of Dental Medicine, Universitas Airlangga, Indonesia.
Experimental design
This was a laboratory experimental study with a post-test only control group design. The C. cobaya used in this research were 2-3 months old and weighed 250-350 g. The 12 C. cobaya were adapted for 1 week before the treatment and then randomly divided into two groups, namely, a control group and a treatment group. Each group consisted of six C. cobaya. During the study, all animals were given standard chow and tap water ad libitum.
Research method
The standardized pomegranate extract used in this research was dissolved in 3% sodium carboxymethyl cellulose (CMC-Na). The 3% CMC-Na gel was made by slowly dissolving 3 g of CMC-Na in 100 ml of warm water to obtain a homogeneous solution. The 2.5% pomegranate extract gel was made of 2.5 g of pomegranate extract, standardized to 40% EA, dissolved in 97.5 g of 3% CMC-Na gel. The 3% CMC-Na gel alone was applied to the post-tooth extraction sockets of the C. cobaya in the control group, whereas the treatment group was given the standardized pomegranate extract at a concentration of 2.5%.
A combination of ketamine HCl and diazepam was used to anesthetize the animals (1:1 cc, at a dose of 1 ml/kg body mass, intramuscularly) [15]. Leftover food was cleaned from their left mandibular incisors with a water spray using a syringe, and the area was dried. A sterile elevator was used to separate the right and left incisors of the C. cobaya, and the left mandibular incisor was extracted using a sterile needle holder with the direction of movement toward the lingual aspect. Pomegranate extract gel was applied into the tooth sockets of the animals in the treatment group using an insulin syringe at a dose of 0.1 ml per socket. The extraction wounds in both groups were then sutured using non-absorbable thread. The C. cobaya in both the control and treatment groups were sacrificed on day four, and their mandibles were taken. The mandibles were immersed in buffered formalin for 24 h. The buffered formalin was then replaced with 10% EDTA for decalcification for approximately 30 days, until the mandibular bone tissue and teeth became soft enough to be cut.
The tissue blocks were removed and embedded in paraffin; longitudinal serial sections of 5 µm were then cut from the tissue blocks and placed on poly-L-lysine coated slides for immunohistochemistry. The slide sections were immersed in target retrieval solution (DAKO) and heated in a microwave oven at 98°C for 20 min (maximum power 700 W) and then cooled at room temperature; anti-VEGF (Abcam) and anti-PDGF (Sigma) primary antibodies were used. The sections were incubated with each primary antibody for 1 h; after rinsing thrice in DAKO wash buffer (Tris-buffered saline [TBS]), the sections were incubated with a biotinylated secondary antibody using the kit (LSAB system 2-HRP) for 1 hour at room temperature. Following TBS rinses, the sections were incubated with streptavidin-horseradish peroxidase conjugate for an additional 30 minutes at room temperature (DAKO), followed by incubation with diaminobenzidine (DAB; DAKO) [16].
The expressions of VEGF and PDGF were measured by counting the number of cells (macrophages and fibroblasts) expressing VEGF and PDGF. The total number of immunoreactive cells was tabulated and then analyzed using a t-test to determine differences in the expressions of VEGF and PDGF.
Results
The number of cells expressing VEGF (macrophages and fibroblasts) in the control group was lower than in the treatment group (Figure-1). Similarly, the number of macrophages expressing PDGF in the control group was lower than in the treatment group (Figure-2). Analysis of VEGF and PDGF expression in the control and treatment groups using the t-test showed significant differences between the groups (p<0.05). The means and standard deviations of VEGF and PDGF expression in the control and treatment groups were 14.50±2.258, 22.33±3.077, 12.50±2.588, and 22.67±3.141, respectively (Table-1).
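As a check, the reported group comparison can be reproduced from these summary statistics alone. The sketch below uses scipy's ttest_ind_from_stats with n = 6 per group (as stated in the experimental design) and assumes a Student's t-test with equal variances; it is an illustrative re-computation, not the authors' analysis script.

```python
from scipy.stats import ttest_ind_from_stats

# VEGF: control (14.50 +/- 2.258) vs treatment (22.33 +/- 3.077), n = 6 each
t, p = ttest_ind_from_stats(14.50, 2.258, 6, 22.33, 3.077, 6, equal_var=True)
print(f"VEGF: t = {t:.2f}, p = {p:.4f}")

# PDGF: control (12.50 +/- 2.588) vs treatment (22.67 +/- 3.141), n = 6 each
t, p = ttest_ind_from_stats(12.50, 2.588, 6, 22.67, 3.141, 6, equal_var=True)
print(f"PDGF: t = {t:.2f}, p = {p:.4f}")
```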
Discussion
A wound after tooth extraction can provide a suitable environment for the growth of microorganisms. This condition then contributes to impeding or prolonging the healing process. During the inflammation phase, cells (neutrophils, macrophages, fibroblasts, and endothelial cells) produce reactive oxygen species (ROS), thereby generating inflammatory signals and protecting the wound from the invasion of microorganisms. Excessive ROS, however, will activate IkB kinase (IKK); IkB is then phosphorylated and degraded, and the NF-kB protein is released. NF-kB translocates into the nucleus and drives target gene expression, such as that of inflammatory cytokines [17]. ROS can decrease the rate of wound healing and cause damage to the surrounding cells.
Pomegranate extract can decrease IKK activation, so that IkB degradation is inhibited after tooth extraction. This condition then leads to a decrease in NF-kB translocation. The decreased NF-kB translocation is accompanied by decreased production of the pro-inflammatory cytokine IL-6 [18]. This indicates that pomegranate extract, which contains EA and punicalagin, has anti-inflammatory activity. Consequently, the inflammatory phase of the wound healing process can be regulated properly.
Angiogenesis, the growth of new blood vessels from existing blood vessels, is an important aspect of the healing process. The improvement of blood flow to the damaged tissue can supply the oxygen and nutrients needed to support the growth and function of reparative cells. Therefore, the success of wound healing depends on angiogenesis, the growth of new capillary blood vessels. The new capillaries usually first appear in the wound 3-5 days after the injury [19].
The results of this research showed that there was a significant increase in VEGF expression on the 4th day in the treatment group given pomegranate extract compared to the control group. The increase in VEGF is essential for angiogenesis during the wound healing process since it can produce a vigorous angiogenic response. Thus, it can be said that pomegranate extract has pro-healing effects on the injured area.
On the 4th day after administration of pomegranate extract, there were many macrophages in the injured area. Macrophages secrete VEGF, which stimulates endothelial cells to migrate into the clot area, proliferate, and form new blood vessels [10]. These findings are consistent with research conducted by Johnson and Wilgus [20], who explain that VEGF expression increases on days 3-5 and decreases from day 7 to 14 after injury. These conditions indicate a normal healing process. Results of some previous researchers even reveal that there is a positive correlation between VEGF level and activity and the amount of granulation tissue formed in healthy rats [21]. Different conditions occur in diabetic animals, where the levels and activity of VEGF decrease, followed by a decrease in the formation of granulation tissue [22,23]. VEGF is also considered a major angiogenic agent stimulating the migration, proliferation, and differentiation of endothelial cells. VEGF, thus, is known as a strong positive regulator of angiogenesis and of the endothelial cell functions needed for new blood vessel formation, such as proliferation, migration, differentiation, and survival [24].
In this research, the increased VEGF expression was accompanied by increased PDGF expression. VEGF and PDGF synergize in their ability to vascularize tissues [25]. PDGF appears to transduce its signal through macrophages and may also trigger the activation of feedback loops and the synthesis of both endogenous PDGF and other growth factors, thereby enhancing the cascade of tissue repair processes required for a fully healed wound. Newly forming blood vessels must be stabilized, or matured, under the influence of PDGF [26].
In the normal wound healing process, the expression of PDGF is a very important factor since its deficiency leads to abnormal and poorly formed immature blood vessels [8,27]. Similarly, previous research on diabetic experimental animals shows that decreased PDGF expression can result in a delayed wound healing process.
Conclusion
The provision of pomegranate extract can increase VEGF and PDGF expression, leading to the acceleration of the healing process.
"year": 2017,
"sha1": "343cfe078f3bbca84cd9ebf10a990a03e5b93151",
"oa_license": "CCBY",
"oa_url": "http://www.veterinaryworld.org/Vol.10/August-2017/27.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "343cfe078f3bbca84cd9ebf10a990a03e5b93151",
"s2fieldsofstudy": [
"Agricultural And Food Sciences",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
An Adaptive Sparse Subspace Clustering for Cell Type Identification
The rapid development of single-cell transcriptome sequencing technology has provided us with a cell-level perspective to study biological problems. Identification of cell types is one of the fundamental issues in the computational analysis of single-cell data. Due to the large amount of noise from single-cell technologies and the high dimensionality of expression profiles, traditional clustering methods are not well suited to this task. To address the problem, we have designed an adaptive sparse subspace clustering method, called AdaptiveSSC, to identify cell types. AdaptiveSSC is based on the assumption that the expression of cells of the same type lies in the same subspace; one cell can be expressed as a linear combination of the other cells. Moreover, it uses a data-driven adaptive sparse constraint to construct the similarity matrix. The comparison results on 10 scRNA-seq datasets show that AdaptiveSSC outperforms the original subspace clustering method and other state-of-the-art methods in most cases. Moreover, the learned similarity matrix can also be integrated with a modified t-SNE to obtain an improved visualization result.
INTRODUCTION
Cells are the basic functional unit that all organisms are made of, and they play significant roles in the different stages of life. Through various DNA and RNA sequencing data, researchers have gained a comprehensive and deep understanding of cell biology. However, traditional sequencing data are obtained from bulks of cells; they represent the mixed effect of numerous cells and ignore cell heterogeneity. Such bulk-seq data will lead to deviations in downstream analysis if a specific type of cell is of interest. Recently, single-cell sequencing techniques have developed rapidly and compensate for the shortcomings of bulk sequencing data. Although single-cell sequencing cannot capture all cell information, it provides a great opportunity to reveal the characteristics of an individual cell.
The fundamental step in analyzing single-cell data is to identify the cell types. Utilizing single-cell RNA-seq (scRNA-seq) data to obtain cell clusters is one of the most efficient methods available. A number of clustering methods based on scRNA-seq data have been proposed. One group of methods focuses on calculating more accurate and robust similarity scores between cells. SNN-cliq (Xu and Su, 2015) constructed the distance matrix, counted the number of common neighbor cells for each pair of cells as the similarity score, and then incorporated these scores within a clique-based clustering method. Seurat (V3.0) was inspired by SNN-cliq and applied the SNN graph with a Louvain algorithm (Butler et al., 2018; Stuart et al., 2019). Seurat is one of the most widely used methods. SIMLR and SC3 (Kiselev et al., 2017) adopted multiple similarity metrics from different aspects: SIMLR learns the inherent similarity matrix from Gaussian kernels at different resolutions, while SC3 combines multiple sub-clustering results to build a consensus matrix. Random forests (Pouyan and Kostka, 2018) are another way to calculate the similarity. The correlation coefficient has been proven effective for estimating the pairwise similarity of cells, and high-order correlation coefficients have also been applied in scRNA-seq data analysis (Jiang et al., 2018; Tang et al., 2019). Compared to methods based on pairwise distance or correlation measurements, SinNLRR (Zheng et al., 2019b) considered the subspace characteristics of cells' expression and assumed low-rank and non-negative properties of the similarity matrix. Besides, several methods, including non-negative matrix factorization (NMF) (Shao and Höfer, 2017; Zhu et al., 2017) and imputation- and dimensionality-reduction-based methods (Yau et al., 2016; Lin et al., 2017), have been widely used in assessing cellular heterogeneity. On the other hand, the increasing number of well-annotated scRNA-seq datasets has also driven the appearance of supervised methods. These methods depend on labeled training datasets or on prior biological knowledge, such as gene markers (Wagner and Yanai, 2018; Pliner et al., 2019). According to a recent study (Abdelaal et al., 2019), most supervised methods are sensitive to prior knowledge, dataset complexity, and input features. Moreover, this kind of method has a fixed resolution and cannot find detailed subtypes within a rough cell group. In this study, we have focused on unsupervised clustering methods to identify cell types. As in previous methods, calculating the distance or similarity matrix of cells is a critical step. To recognize more accurate similarities of cells from high-dimensional expression profiles, we propose an adaptive sparse subspace clustering method called AdaptiveSSC. AdaptiveSSC follows the subspace assumption and retains the nearest neighbors of a cell through a data-driven adaptive sparse constraint. The derived similarity matrix is used to obtain the clustering result and the visualization. AdaptiveSSC obtains improved performance on multiple experimental datasets.
MATERIALS AND METHODS
The pipeline of AdaptiveSSC is shown in Figure 1. Taking the scRNA-seq expression matrix as the input, AdaptiveSSC constructs the sparse cell-to-cell similarity matrix by keeping the most similar cells for each cell before then applying it to spectral clustering and modified t-distributed stochastic neighbor embedding (t-SNE) to obtain cell groups and the visualization result.
Data Pre-processing
The quantified scRNA-seq data contain thousands of genes, and the sparsity of gene expression is usually high. Therefore, AdaptiveSSC filters out genes expressed in fewer than 10% of the cells (with this threshold capped at 100 cells), as such genes are not regarded as informative. AdaptiveSSC investigates the linear effect of other cells on the target cell. To remove the scale of each cell's expression, L2 normalization is carried out on the original gene expression matrix:
X_ij = G_ij / sqrt(Σ_i G_ij²),   (1)

where G is the original expression matrix with M genes and N cells. The normalized matrix X is used in the following calculations.
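As a sketch of this pre-processing step, the gene filter and per-cell L2 normalization described above could be implemented as follows; the function name and interface are ours, not the authors'.

```python
import numpy as np

def preprocess(G, min_frac=0.10, max_cells=100):
    """Filter uninformative genes and L2-normalize each cell.

    G: (M genes x N cells) raw expression matrix. A gene is kept if it is
    expressed in at least min(min_frac * N, max_cells) cells, matching the
    filtering rule described above.
    """
    M, N = G.shape
    threshold = min(min_frac * N, max_cells)
    keep = (G > 0).sum(axis=1) >= threshold
    X = G[keep, :].astype(float)
    # L2-normalize each column (cell) to remove the expression scale
    norms = np.linalg.norm(X, axis=0)
    norms[norms == 0] = 1.0
    return X / norms
```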
Adaptive Sparse Subspace Clustering
Most clustering methods depend on the calculation of a similarity or distance matrix. The most popular similarity measurements include Euclidean distance, Pearson or Spearman correlations, and cosine similarity, all of which are based on pairwise estimation. scRNA-seq data usually contain thousands of genes; however, only a subset of genes determines the cell type, which corresponds to a low-dimensional manifold surface. According to the common strategy in manifold learning, only local measurements of similarity or distance are reliable, so previous scRNA-seq clustering methods (Xu and Su, 2015; Wang et al., 2017) usually apply k-nearest neighbors (KNN) to preserve locality. However, KNN arbitrarily selects the same number of neighbors for each cell, and the selection of k can have a great influence on the final result in some situations. In order to overcome these shortcomings, we propose an adaptive sparse subspace clustering method, which we call AdaptiveSSC.
AdaptiveSSC is developed from the sparse subspace clustering (SSC) method. SSC was proposed to solve motion segmentation and face clustering problems (Elhamifar and Vidal, 2013). SSC assumes that the feature vector of a sample can be expressed as a linear combination of other samples in the same subspace or type. Based on this assumption, the expression of a cell is X_i = c_1 X_1 + c_2 X_2 + ... + c_{i-1} X_{i-1} + c_{i+1} X_{i+1} + ... + c_N X_N, where c_k is the subspace coefficient denoting the similarity score between cells. If cells i and k are of the same type, c_k > 0; otherwise it is 0. By adding an l1 term, only the most similar cells lying in the same subspace are retained. Extending this to all cells, the calculation of the subspace coefficient matrix is defined as Equation (2):

min_C ||C||_1  subject to  X = XC, diag(C) = 0,   (2)

where X is the normalized expression matrix, C is the coefficient matrix, and C_ij denotes the similarity between cells i and j; ||·||_1 denotes the l1 norm. Larger values in C indicate more similar cells. The relaxed form of the optimization problem is:

min_{C,J} (1/2)||X − XJ||_F² + λ||C||_1  subject to  J = C − diag(C),   (3)

where ||·||_F denotes the Frobenius norm, λ is the l1 penalty factor, which controls the sparsity of the coefficient matrix, and J is an auxiliary matrix.
In Equation (3), the coefficient matrix C is sensitive to the selection of the l1 penalty factor. Another problem is that using the same penalty factor for all coefficients leads to a loss of consistency between estimation and variable selection (Zou, 2006). Therefore, we introduce a data-driven adaptive strategy to address these problems. As the Pearson correlation has been proven effective for measuring similarity in previous studies (Kiselev et al., 2017; Wang et al., 2017), we utilize it to adjust the penalty factor for each coefficient: if the correlation between two cells is high, the penalty factor is decreased, and vice versa. The modified optimization problem, Equation (4), therefore weights the l1 term element-wise by the Pearson correlation matrix W; that is, the penalized quantity is the element-wise division of J by W. We set negative values of the Pearson correlation to 0, because we regard two cells as similar only when the trends of their expression are positively correlated. Zero values in W consequently force the corresponding values in J to zero during the optimization.
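The correlation-based weight matrix can be sketched as below; the clipping of negative correlations and the zeroed diagonal follow the description above, while the function name is our assumption.

```python
import numpy as np

def adaptive_weights(X):
    """Pearson-correlation weights used to rescale the l1 penalty (sketch).

    W[i, j] is the non-negative part of the Pearson correlation between
    cells i and j; the effective penalty on coefficient (i, j) is
    lambda / W[i, j], so highly correlated cells are penalized less, and
    W[i, j] == 0 forces the corresponding coefficient to zero.
    """
    W = np.corrcoef(X.T)          # correlation between cells (columns of X)
    W = np.clip(W, 0.0, None)     # negative correlations -> 0
    np.fill_diagonal(W, 0.0)      # no self-representation
    return W
```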
The alternating direction method of multipliers (ADMM) (Boyd et al., 2011) is an efficient method for solving Equation (4). Following ADMM, the augmented Lagrangian is defined in Equation (5), where Y is a dual variable, γ is an augmented Lagrangian penalty parameter, and tr denotes the trace of a matrix. ADMM updates each of C, J, and Y while fixing the others. In iteration k+1, the optimized forms of C^{k+1}, J^{k+1}, and Y^{k+1} are shown in Equations (6-8), where sign() denotes the sign function. The convergence of ADMM is assessed mainly through the primal and dual residuals. In the updating process, the penalty parameter γ affects the speed of convergence. In AdaptiveSSC, we apply a balancing strategy (Boyd et al., 2011) between the primal and dual residuals to adjust γ:

γ^{k+1} = 2γ^k, when ||s^k||_2 > μ||r^k||_2;  γ^{k+1} = γ^k, otherwise,
where r^k = C^k − J^k is the primal residual and s^k = (1/γ)(J^k − J^{k−1}) is the dual residual. μ is set to 50 by default. To reduce the computational complexity, γ is updated every 10 iterations. The optimization stops when max(abs(C − J)) < 0.0001 or the number of iterations exceeds a preset maximum. Spectral clustering (SC) is then applied to the learned similarity matrix. SC takes the viewpoint of graph cuts and utilizes the characteristics of the corresponding Laplacian matrix to divide the graph into several clusters. In AdaptiveSSC, we use the normalized Laplacian matrix L_norm = I − D^{−1/2} S D^{−1/2}, where S is the learned similarity matrix and D is its degree matrix, and obtain the k eigenvectors corresponding to the smallest k eigenvalues. Then, k-means is used to obtain the final clusters.
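Putting the pieces together, a simplified ADMM loop and the spectral clustering step might look like the following sketch. This is our reading of Equations (4)-(8) under the quoted stopping and balancing rules; details such as the exact residual scaling and update order may differ from the authors' implementation.

```python
import numpy as np
from scipy.linalg import solve
from sklearn.cluster import KMeans

def adaptive_ssc(X, W, lam=0.03, gamma=1.0, mu=50.0, max_iter=200, tol=1e-4):
    """ADMM sketch for min 0.5*||X - X J||_F^2 + lam * sum_ij |C_ij| / W_ij
    subject to J = C, diag(C) = 0 (a simplified reading of Eqs. 4-8)."""
    N = X.shape[1]
    XtX = X.T @ X
    C = np.zeros((N, N))
    U = np.zeros((N, N))                    # scaled dual variable, Y / gamma
    with np.errstate(divide="ignore"):
        inv_w = np.where(W > 0, 1.0 / W, np.inf)   # W == 0 forces C_ij = 0
    J_prev = np.zeros((N, N))
    for k in range(max_iter):
        # J-update: ridge-type linear system (analogue of Eq. 7)
        J = solve(XtX + gamma * np.eye(N), XtX + gamma * (C - U))
        np.fill_diagonal(J, 0.0)
        # C-update: weighted soft-thresholding (analogue of Eq. 6)
        V = J + U
        thr = (lam / gamma) * inv_w
        C = np.sign(V) * np.maximum(np.abs(V) - thr, 0.0)
        np.fill_diagonal(C, 0.0)
        U += J - C                          # dual update (analogue of Eq. 8)
        r = C - J                           # primal residual r_k
        s = (J - J_prev) / gamma            # dual residual s_k, as quoted above
        J_prev = J
        if np.max(np.abs(C - J)) < tol:     # stopping rule quoted above
            break
        # balancing rule for gamma, applied every 10 iterations
        if k % 10 == 0 and np.linalg.norm(s) > mu * np.linalg.norm(r):
            gamma *= 2.0
            U /= 2.0                        # keep the scaled dual consistent
    return C

def spectral_clusters(C, n_clusters):
    """Normalized spectral clustering on the learned coefficients (sketch)."""
    S = 0.5 * (np.abs(C) + np.abs(C).T)     # symmetric similarity matrix
    d = S.sum(axis=1)
    d[d == 0] = 1.0
    D_isqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(S)) - D_isqrt @ S @ D_isqrt   # I - D^{-1/2} S D^{-1/2}
    _, vecs = np.linalg.eigh(L)             # eigenvalues in ascending order
    emb = vecs[:, :n_clusters]              # k smallest eigenvectors
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(emb)
```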
scRNA-seq Datasets
We collected 10 scRNA-seq datasets to evaluate the performance of AdaptiveSSC. These datasets are based on different single-cell techniques or protocols, such as Smart-seq-, SMARTer-, and Drop-seq-based methods. Meanwhile, the scale of these datasets ranges from tens to tens of thousands of cells. The variety of the datasets can indicate the generalization ability of AdaptiveSSC comprehensively. The details of these datasets are shown in Table 1. All datasets contain the real cell types from the original studies.
Evaluation Metrics
In order to compare the performance of different clustering methods, we selected two popular metrics: normalized mutual information (NMI) and adjusted Rand index (ARI). Both NMI and ARI quantify the consistency between the clustering results and the real labels. Their definitions are given in Equations (10) and (11), where T and P denote the real labels and the clustering labels, respectively. In Equation (11), n_ij denotes the number of cells belonging to group i in the real labels and group j in the clustering labels; n_i denotes the number of cells belonging to group i in the real labels, while n_j denotes the number of cells belonging to group j in the clustering labels.
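Both metrics are available off the shelf; for example, scikit-learn implements the same NMI and ARI definitions (the label vectors below are hypothetical).

```python
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

true_labels = [0, 0, 1, 1, 2, 2]      # hypothetical real cell types T
pred_labels = [0, 0, 1, 2, 2, 2]      # hypothetical clustering labels P

print("NMI:", normalized_mutual_info_score(true_labels, pred_labels))
print("ARI:", adjusted_rand_score(true_labels, pred_labels))
```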
Parameter Analysis
Although an adaptive strategy is used in AdaptiveSSC, there are still some hyperparameters to set. The most important hyperparameter is the l1 penalty factor λ. Owing to the adaptive adjustment, the learned similarity matrix is not very sensitive to it. We evaluated the NMI and ARI of AdaptiveSSC on eight small datasets (fewer than 5,000 cells) with λ ranging from 0.01 to 0.19 at intervals of 0.02. The results for the eight small datasets are shown in Figure 2. Based on these results, when λ was in the range 0.01-0.05, both NMI and ARI were in their best range and were more stable. Therefore, we used λ = 0.03 as the default in AdaptiveSSC. During the experiments, we also found that the optimal λ was not consistent for large datasets (0.01 for Baron and 0.007 for Shekhar and Vento). We recommend that users select a proper λ by grid searching with the following rule: a λ should be selected if the corresponding sparsity of C is between 0.02 and 0.05. For Baron and Shekhar, we selected the λ corresponding to a sparsity of C of 0.03.
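The recommended sparsity-based grid search can be sketched as follows, reusing the adaptive_ssc() sketch above (the authors' actual selection code is not shown here).

```python
import numpy as np

def select_lambda(X, W, candidates, target=(0.02, 0.05)):
    """Pick a lambda whose coefficient matrix C has a fraction of nonzero
    entries (sparsity) inside the recommended 0.02-0.05 window."""
    for lam in candidates:
        C = adaptive_ssc(X, W, lam=lam)
        sparsity = np.count_nonzero(C) / C.size
        if target[0] <= sparsity <= target[1]:
            return lam, sparsity
    return None, None
```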
Comparison Analysis of Clustering Methods
To validate the effectiveness of AdaptiveSSC, we selected seven competitive methods, including SIMLR, MPSSC (Park and Zhao, 2018), SNN-cliq (Xu and Su, 2015), RAFSIL (Pouyan and Kostka, 2018), Seurat (V3.0) (Butler et al., 2018; Stuart et al., 2019), SinNLRR (Zheng et al., 2019b), and sparse subspace clustering (SSC) (Elhamifar and Vidal, 2013). All these methods are based on the construction of a similarity matrix. SNN-cliq and Seurat recalculate similarities based on shared neighbors. SIMLR and MPSSC focus on Gaussian kernels at different resolutions, while RAFSIL applies random forests. SinNLRR is based on the subspace assumption with a low-rank constraint. The original SSC was selected as the baseline method. The NMI and ARI results on the 10 datasets are shown in Figure 3. Compared to SSC, AdaptiveSSC improved NMI and ARI on six datasets. In Treutelin, Kumar, Vento, and Shekhar in particular, AdaptiveSSC exhibited a significant improvement over SSC, which means the adaptive penalty factor leads to a more accurate similarity matrix. On Kolod and Ting, AdaptiveSSC achieved the same performance as SSC.
Overall, AdaptiveSSC exhibited better performance than SSC in most cases. Besides, AdaptiveSSC achieved the best performance (or a tie for first place) on seven datasets by NMI and on eight datasets by ARI compared with the other six state-of-the-art methods. It is worth noting that only AdaptiveSSC obtains a perfect result on Treutelin. The results on Baron and Shekhar also verify AdaptiveSSC's effectiveness on large datasets. Estimating the number of cell types is another important aspect in application. In AdaptiveSSC, we also used the eigengap heuristic to determine the number of clusters, as was popular in previous studies. The results can be found in the Supplementary Material. As the results show, none of the methods predicts the correct number of clusters on all datasets. However, AdaptiveSSC obtains the correct number of clusters on three datasets and the closest number on five datasets, which is the better selection overall. Moreover, we selected five datasets of different scales to evaluate the computational efficiency of these methods. The running times can be found in the Supplementary Material. AdaptiveSSC is faster than SSC but is still time-consuming on large datasets compared with SIMLR and Seurat. All experiments were run on a server with 24 cores and 512 GB of memory. Methods with running times of more than 36 h were excluded, such as RAFSIL, SNN-cliq, and SinNLRR on the large-scale datasets; MPSSC ran out of memory on Shekhar.
Comparison Analysis of Visualization
Visualization of scRNA-seq data is another important issue. A previous study proposed a modified t-distributed stochastic neighbor embedding (t-SNE) to validate the performance of a learned similarity matrix. We adopted this evaluation for AdaptiveSSC and generated 2D-embedding images on Darmanis and Treutelin with the learned similarity matrices of t-SNE, SIMLR, MPSSC, and AdaptiveSSC, respectively. The results are shown in Figure 4. Points with the same color have the same cell type. Compared to the other methods, AdaptiveSSC groups cells of the same type together and exhibits good silhouettes. Although SIMLR and MPSSC produce denser groups, they divide cells of the same type into different cliques, which are usually far away from each other. This can give researchers the misconception that those cells belong to entirely different types. Therefore, AdaptiveSSC has better performance and potential in the visualization of scRNA-seq data.
Discussion and Conclusion
The identification of cell types is a fundamental problem in scRNA-seq data analysis. In recent years, many clustering methods have been proposed to solve it. However, most of these methods do not generalize well across different datasets. In this study, we proposed a subspace clustering method with an adaptive sparse constraint, called AdaptiveSSC. AdaptiveSSC assumes that the expression of a cell can be expressed as a linear combination of the expression of other cells of the same type. A data-driven adaptive sparse strategy is applied to keep the locality of cells in the original dimension and to decrease the sensitivity to the penalty factor. Ten scRNA-seq datasets were used to evaluate the performance of AdaptiveSSC. Compared with SSC, AdaptiveSSC improves the clustering results significantly in some cases, which indicates the effectiveness of our strategy. Moreover, six state-of-the-art methods were selected for comparison. In terms of NMI and ARI, AdaptiveSSC achieves the best performance on most of the datasets. Finally, we further integrated the learned similarity matrix with a modified t-SNE, which also shows the powerful potential of AdaptiveSSC in visualization.
However, the computational efficiency of AdaptiveSSC is still low for large datasets and should be improved in the future. Some strategies used in fast clustering methods could be considered to make AdaptiveSSC more efficient (Ren et al., 2019). Moreover, AdaptiveSSC explores cell heterogeneity at the gene level, but it is also important to study the different biological functions of cells. Regulatory modules (Aibar et al., 2017) have been proven effective in revealing the functional heterogeneity of cells. It is possible to identify cell types from a whole gene regulatory network perspective (Li et al., 2017; Zheng et al., 2018, 2019a). Besides, motivated by previous studies (Lan et al., 2018; Chen et al., 2019; Shi et al., 2019), multi-view learning and integration with prior knowledge are promising directions to improve the accuracy of clustering and provide a higher resolution of cell types.
DATA AVAILABILITY STATEMENT
Publicly available datasets were analyzed in this study. This data can be found here: https://github.com/zrq0123/AdaptiveSSC.
AUTHOR CONTRIBUTIONS
RZ and CC designed the methodology. RZ, ZL, XC, and YT ran the comparison experiments on the datasets. RZ and ML wrote the paper. All authors revised and approved the manuscript.
"year": 2020,
"sha1": "0f25848094a58321a82df35aaed08da2d84a962a",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fgene.2020.00407/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0f25848094a58321a82df35aaed08da2d84a962a",
"s2fieldsofstudy": [
"Computer Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
Association between Obesity and Atrial Function in Patients with Non-Valvular Atrial Fibrillation: An Echocardiographic Study
Background: Obesity is a public health problem whose prevalence has increased worldwide and which is associated with different degrees of hemodynamic alterations and structural cardiac changes. The aim of the study is to investigate the impact of body mass index (BMI) on left atrial function using standard and advanced echocardiography in a population of patients with non-valvular atrial fibrillation (AF). Methods: 395 adult patients suffering from non-valvular AF, divided into three tertiles based on BMI value, underwent a cardiological examination with standard and advanced echocardiography. Results: Peak atrial longitudinal strain (PALS), a measure of left atrial function, is lower in the tertile with the highest BMI (14.3 ± 8.2%) compared to both the first (19 ± 11.5%) and the second tertile (17.7 ± 10.6%) in a statistically significant manner (p < 0.002). Furthermore, BMI is significantly and independently associated with PALS by multilinear regression analysis, even after correction of the data for CHA2DS2-VASc score, left ventricular mass index, left ventricular ejection fraction, E/E' ratio and systolic pulmonary arterial pressure (standardized coefficient β = −0.127, p < 0.02; cumulative R2 = 0.41, SEE = 0.8%, p < 0.0001). Conclusions: BMI could be considered an additional factor in assessing cardiovascular risk in patients with non-valvular atrial fibrillation, in addition to the well-known CHA2DS2-VASc score.
Introduction
Obesity represents a serious and constantly growing public health problem [1]. In recent decades, the prevalence of obesity has increased worldwide [2]; the average BMI (Body Mass Index) has grown globally, both in men and women [3]. The increase in body weight is due to a positive energy balance; however, despite studies suggesting that the main cause of obesity is an abnormal caloric intake [4], it must be considered that there are also behavioral factors, which have recently acquired greater importance, such as the spread of increasingly sedentary behaviors and work activities [5]. Furthermore, it must be considered that the development of obesity is also associated with socio-economic status, as well as environmental factors, which can determine epigenetic modifications through gene-environment interaction [6]. In obese subjects, adipose tissue dysfunction occurs, causing a pro-inflammatory state and an imbalance in the production of adipokines, which are associated with the development of cardiovascular diseases (CVD), as well as some types of cancer [7]. Furthermore, much recent evidence suggests a role of epicardial adipose tissue in the development of atrial fibrillation; in fact, several potential arrhythmogenic mechanisms have been attributed to epicardial adipose tissue, including myocardial inflammation, fibrosis, oxidative stress, and fat infiltration. Collagen deposition and subsequent fibrosis may be the substrate for atrial fibrillation [8]. Altered myocardial metabolism may be one of the potential mechanisms underlying obesity-related heart disease. Obesity, in fact, causes an increase in the oxidation of fatty acids and a reduction in the oxidation of glucose [9]. These metabolic alterations modify cardiac metabolism, resulting in an alteration of the efficiency of the cardiac pump and reduced energy production, with an increased incidence of heart failure [10]. The various classes of obesity are associated with different degrees of hemodynamic alterations and structural changes [11]. The ejection fraction of the left ventricle, an expression of systolic function, is usually preserved in obese subjects. However, new sensitive measures of myocardial function, such as strain imaging in echocardiography and magnetic resonance imaging, have allowed the detection of subclinical alterations even in the presence of a preserved left ventricular ejection fraction [12]. The prevalence of altered myocardial strain in obese patients varies from 37% to 54%. Subclinical functional abnormalities of the right ventricle may also be present [13]. It has also been observed that higher values of left ventricular mass are found in obese subjects, as well as high diastolic filling pressures in class II and III obese subjects [14]. The prevalence of diastolic dysfunction in obese patients varies from 23% to 75%, depending on the diagnostic criteria used [15]. BMI appears to be a significant predictor of left atrial size [16]. It has been demonstrated that left atrial dilatation is present in obese patients, even when they do not have other cardiovascular diseases. The importance of accurate evaluation of the size of the left atrium in obese patients was also underlined in a recent paper that analyzed the difference between indexing the atrial volume for height or for body surface area in patients with moderate and severe obesity, in order to identify the reduction in atrial function assessed with speckle-tracking echocardiography [17]. In one study it was demonstrated that obese and hypertensive patients have a greater
risk of having an increased atrial volume compared to normal-weight hypertensive patients, indicating that obesity may be a predictive factor for increased atrial size, both individually and in the presence of arterial hypertension, whose effect it amplifies [18,19]. A large study of 5282 patients over a follow-up of 13.7 years showed a 52% increase in the risk of new-onset atrial fibrillation due to obesity [20]. A meta-analysis demonstrated a 29% increased risk of incident atrial fibrillation for every 5-unit increase in body mass index (BMI) [21]. A recent study demonstrated that in patients with heart failure with preserved ejection fraction (HFpEF), abdominal obesity was associated with an increased incidence of atrial fibrillation; in particular, AF incidence rose by 18% per centimeter increase in abdominal circumference [22]. Obesity increases the thromboembolic risk, probably due to the chronic inflammatory state, which in itself activates the thrombotic process and plays a role in the pathogenesis of the "atrial myopathy" responsible for both HFpEF and AF. Unfortunately, the thromboembolic risk in obese patients is often undertreated, which makes the evaluation of this type of patient highly relevant to allow for more accurate risk stratification and consequently adequate treatment [23].
Role of Echocardiography in Atrial Fibrillation
Echocardiography, together with the medical history, physical examination, electrocardiogram and evaluation of some laboratory parameters, is an examination indicated in all patients with atrial fibrillation (AF). While in valvular forms of AF the thromboembolic risk is high by definition, in non-valvular forms it is estimated through a score known as the CHA2DS2-VASc score.
Echocardiography is useful for assessing thromboembolic risk in patients with atrial fibrillation (AF) as it contributes to the estimation of the CHA2DS2-VASc score through the evaluation of congestive heart failure, which is responsible for the letter C in the acronym CHA2DS2-VASc. Chronic systolic heart failure with an ejection fraction ≤40%, or any form of acute heart failure, both systolic and diastolic (i.e., with preserved ejection fraction), is in fact considered a risk factor. It is clear that, in each of these cases, echocardiography is decisive, being the technique generally used first in the clinical setting to establish the ejection fraction of the left ventricle and to evaluate diastolic function.
Many believe that the contribution of echocardiography to the estimation of thromboembolic risk in patients with AF is limited to the evaluation of the ejection fraction. On closer inspection, however, this method also provides further information in the evaluation of vascular disease, which constitutes another significant risk factor in the CHA2DS2-VASc score. Vascular disease includes three entities: previous myocardial infarction, aortic plaques and peripheral arterial disease. In the first two cases, echocardiography is able to make a significant contribution.
As regards previous myocardial infarction, it should first of all be remembered that the standard ECG has a low sensitivity in identifying a stabilized or healed myocardial infarction, whether subendocardial or transmural, especially when it has an inferior, posterior or lateral location [24]. Cardiac imaging methods, and among these echocardiography, are particularly useful in revealing the presence of structural and/or functional alterations compatible with a previous myocardial infarction and can therefore allow the diagnosis even when the ECG and clinical history are silent.
As regards aortic plaques, it should be remembered that transthoracic echocardiography allows, in a high percentage of patients, the study of the aortic root and arch and, in some cases, also of the descending thoracic and abdominal aorta. Therefore, it is possible to use this method to detect the presence of aortic plaques, even complex ones.
Ultimately, therefore, it is clear that echocardiography can contribute to the estimation of thromboembolic risk in the context of the CHA2DS2-VASc score, both for estimating the ejection fraction and for the recognition of vascular, coronary or aortic disease, although dedicated tests such as color Doppler ultrasound of the supra-aortic trunks and of the upper and lower limbs are necessary for the evaluation of vascular disease.
Purpose of the Study
The aim of our study was to investigate the impact of BMI on left atrial function using standard and advanced echocardiography in a population of patients with non-valvular AF.
Study Population
The study is an observational, non-prospective, monocentric study. We enrolled 395 adult patients suffering from non-valvular AF referred to our EchoLab at AOU Federico II, who underwent a cardiological examination and echocardiographic evaluation (NeAfib-echo registry).
Regarding the clinical characteristics of the enrolled patients, the exclusion criteria were: known coronary artery disease, known cardiomyopathies, significant valvular diseases, previous cerebral ischemic episodes and known oncological pathologies.
At baseline, anthropometric parameters were recorded for each patient; systolic and diastolic blood pressure values and heart rate were measured at the beginning of each echocardiographic exam; for each patient, the BMI and the CHA2DS2-VASc score were obtained.
Echocardiography
Echo Doppler exams were performed with a Vivid E95 ultrasound machine (Horten, Norway) equipped with a 2.5 MHz phased-array transducer, according to the American Society of Echocardiography (ASE)/European Association of Cardiovascular Imaging (EACVI) standardization [25].
Evaluation of the ejection fraction (EF) of the left ventricle, the left ventricular mass indexed to height raised to the power of 2.7 (LVMi), the global longitudinal strain (GLS) of the left ventricle, and the diastolic function by Doppler evaluation of the transmitral pattern and tissue Doppler of the septal and lateral mitral annulus was carried out. Particular attention was paid to the quantification of left atrial dimensions and function. All echocardiographic measurements were performed on an average of three consecutive cardiac cycles of good quality, at a high frame rate (40-80 frames/s).
Atrial Dimension
Atrial dimensions are measured at the end of ventricular systole, when the chamber is at its largest. Methods based on measuring left atrial (LA) size as an anteroposterior linear dimension by M-mode or 2D echocardiography in the parasternal long-axis view do not represent the true LA size, particularly when LA dilatation is present, resulting in an asymmetric shape of the LA. For this reason, LA-indexed volumes should be assessed for quantification of LA size [26,27].
In our study we used the modified Simpson method (biplane disk summation): the atrial volume is calculated by summing the volumes of a stack of disks of height h, whose areas are calculated from the orthogonal transverse axes (Figure 1). The size of the left atrium depends on gender. However, gender differences are still taken into account when indexing values for body size [28]. The most recommended indexing method is by body surface area (BSA) [29]. Currently, the upper limit of left atrial volume indexed for BSA is set at a value of 34 mL/m² [30].
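A numerical sketch of the biplane disk summation is given below; the function names are ours, and in practice the disk diameters would come from endocardial tracings in the two orthogonal apical views.

```python
import numpy as np

def la_volume_biplane(d4ch, d2ch, length):
    """Biplane method of disks (modified Simpson): each elliptical disk has
    area pi/4 * a_i * b_i (diameters from the two orthogonal apical views)
    and height h = length / N; the volume is the sum over the stack."""
    d4ch, d2ch = np.asarray(d4ch, float), np.asarray(d2ch, float)
    h = length / len(d4ch)
    return float(np.sum(np.pi / 4.0 * d4ch * d2ch) * h)

def la_volume_index(volume_ml, bsa_m2):
    """Index the volume by body surface area (mL/m^2)."""
    return volume_ml / bsa_m2

# Toy example with three disks, measurements in cm (so the volume is in mL);
# the 34 mL/m^2 cutoff is the upper limit quoted above.
vol = la_volume_biplane(d4ch=[3.2, 3.8, 3.5], d2ch=[3.0, 3.6, 3.3], length=5.1)
lavi = la_volume_index(vol, bsa_m2=1.9)
print(f"LA volume {vol:.1f} mL, LAVI {lavi:.1f} mL/m^2, dilated: {lavi > 34.0}")
```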
Estimation of Left Atrial Function by Speckle Tracking Echocardiography
The left atrium modulates left ventricular filling through its functions as reservoir, conduit and pump. During ventricular systole and isovolumetric relaxation, when the atrioventricular (AV) valves are closed, the atrial chambers function as distensible reservoirs that receive blood flow from the venous circulation. The reservoir function depends on the starting atrial volume (end-diastolic), on the relaxation of the atrial fibres, on the distensibility or compliance of the atrial walls and on the descent of the atrio-ventricular plane during ventricular systole [31]. Normally, the considerable capacitance of the left atrium makes it capable of receiving and withstanding increases in venous return, which are then transferred to the ventricle through the functions of conductance and contractility. Therefore, the reservoir function of the left atrium-pulmonary vein system may contribute, together with the Frank-Starling mechanism at the level of the underlying ventricle, to the adaptation of the left sections to sudden changes in the flow rate of the left ventricle, which would otherwise lead to the development of stasis and pulmonary congestion. It is important to recognize the interaction between these atrial functions and ventricular performance throughout the cardiac cycle. For example, although the reservoir function is mainly determined by atrial compliance during ventricular systole (and, less obviously, by atrial contractility and relaxation), it is also influenced by the descent of the base of the left ventricle during systole and by the end-systolic volume [32]. The conduit function is influenced by atrial compliance and is reciprocally linked to the reservoir function, but is necessarily related to the relaxation and compliance of the left ventricle. Finally, the atrial pump function reflects the intensity and duration of atrial contractility, but it also depends on venous return (atrial preload), as well as on the end-diastolic pressures of the left ventricle (atrial afterload) and its systolic reserve. In this context, the measurement of atrial dimensions, better defined by volume rather than by atrial diameters, can provide useful elements for obtaining information on the value of atrial pressures and for evaluating the diastolic function of the left ventricle. An echocardiographic method for the evaluation of left atrial function is speckle tracking echocardiography (STE), a system for observing "speckles" during the cardiac cycle: speckles are sets of pixels (dots) that form the standard 2D grayscale echocardiographic image [33]. Speckles can be recognized in an entire region and followed throughout the cardiac cycle. This is possible by recording 2D images with a temporal resolution of approximately 40-80 frames per second (fps) [34]. The principle is that if two consecutive frames are temporally close, the small variation in the position of a speckle can be easily recognized by the software, which in this way manages to encode and follow the deformation of the selected myocardial segments during the entire cardiac cycle using probabilistic approaches. The system's computing power allows this operation to be performed for dozens of regions simultaneously along the profile of a 2D image. After acquiring left atrial images in the apical 4-chamber and 2-chamber views, a region of interest (ROI) is identified which draws 6 different segments. To trace the ROI at the auricle and the outlet of the pulmonary veins, the direction between the endocardial and epicardial surfaces is
extrapolated by the software. We therefore obtain 12 total segments with 12 corresponding curves, for which the software then generates an average curve [35]. Two parameters of longitudinal strain of the left atrium are recognized: - Peak atrial longitudinal strain (PALS), measured at the end of the atrial reservoir phase, which in normal subjects is greater than 40%. In patients with atrial fibrillation it has been seen that the reduction in strain reflects structural alterations of the left atrium (wall fibrosis) [36] (Figure 2). - Peak atrial contraction strain (PACS), identified just before the onset of the active phase of atrial contraction [37], which indicates the contribution of the active contraction of the left atrium to the filling phase of the left ventricle [38] and which is lacking in patients with permanent atrial fibrillation.
CHA2DS2-VASc Score Assessment
Regarding the assessment of the thromboembolic risk of patients enrolled in the study, the CHA2DS2-VASc score was used according to the 2014 AHA/ACC/HRS guideline for the management of patients with atrial fibrillation [39]. In detail, one point was assigned in the case of congestive heart failure or left ventricular dysfunction, one point in the case of arterial hypertension, one point in the case of diabetes mellitus, 2 points in the case of a positive history of stroke, transient ischemic attack (TIA) or thromboembolism, 1 point in the case of vascular pathology, 1 point in the case of age between 65 and 74 years, 2 points in the case of age greater than or equal to 75 years, and 1 point in the case of female sex.
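Because the point assignment is fully enumerated above, it translates directly into code; the function below is a sketch with argument names of our own choosing.

```python
def cha2ds2_vasc(chf, hypertension, age, diabetes, stroke_tia_te,
                 vascular_disease, female):
    """CHA2DS2-VASc score, following the point assignment enumerated above."""
    score = 0
    score += 1 if chf else 0                              # C: CHF / LV dysfunction
    score += 1 if hypertension else 0                     # H: hypertension
    score += 2 if age >= 75 else (1 if age >= 65 else 0)  # A2 / A: age
    score += 1 if diabetes else 0                         # D: diabetes mellitus
    score += 2 if stroke_tia_te else 0                    # S2: stroke/TIA/thromboembolism
    score += 1 if vascular_disease else 0                 # V: vascular disease
    score += 1 if female else 0                           # Sc: sex category
    return score

# e.g., a 72-year-old woman with hypertension: 1 (H) + 1 (A) + 1 (Sc) = 3
assert cha2ds2_vasc(False, True, 72, False, False, False, True) == 3
```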
Statistical Analysis
Statistical analysis was performed with the SPSS package, release 12 (SPSS Inc., Chicago, IL, USA). Data are expressed as mean value ± standard deviation (SD) for normally distributed continuous variables; categorical variables are expressed as number of patients (%). Continuous variables were compared with the Student t-test. Discrete variables were compared with chi-square (χ2) statistics or Fisher's exact test when appropriate. Multilinear regression analysis was also performed, including all the variables found to be statistically significantly associated with PALS and adjusting for the CHA2DS2-VASc score. Quantitative data were checked for normality of distribution by the Shapiro-Wilk and Kolmogorov-Smirnov tests with a p-value < 0.001. Considering that the data did not follow a normal distribution, the non-parametric Kruskal-Wallis test was performed.
The null hypothesis was rejected at 2-tailed p < 0.05.
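As an illustration, the tertile comparison and the adjusted regression could be run as follows in Python; scipy/statsmodels stand in for SPSS, and the synthetic dataframe and its column names are assumptions for demonstration only.

```python
import numpy as np
import pandas as pd
import scipy.stats as st
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 395
df = pd.DataFrame({                  # synthetic stand-in data, not study data
    "PALS": rng.normal(17, 10, n),
    "BMI": rng.normal(27.8, 5.6, n),
    "CHA2DS2_VASc": rng.integers(0, 9, n),
    "LVMi": rng.normal(45, 10, n),
    "EF": rng.normal(55, 8, n),
    "E_e_ratio": rng.normal(10, 3, n),
    "PAPs": rng.normal(30, 8, n),
})
df["tertile"] = pd.qcut(df["BMI"], 3, labels=[1, 2, 3])  # BMI tertiles

# Kruskal-Wallis test of PALS across the three BMI tertiles
h_stat, p_kw = st.kruskal(*[df.loc[df.tertile == t, "PALS"] for t in (1, 2, 3)])
print(f"Kruskal-Wallis p = {p_kw:.3f}")

# Multilinear regression of PALS adjusted for the listed covariates
model = smf.ols("PALS ~ BMI + CHA2DS2_VASc + LVMi + EF + E_e_ratio + PAPs",
                data=df).fit()
print(model.params)   # standardized betas would require z-scoring the columns
```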
Results
Among the patients considered, 175 were female and 220 were male; the mean age was 70.6 ± 11 years; the mean BMI was 27.8 ± 5.6 kg/m². Of these, 54.1% (214 patients) had permanent/persistent AF and 45.9% (181 patients) had paroxysmal AF.
The population was divided into three subgroups (tertiles) depending on BMI value: • First tertile: patients with BMI less than 25.3 kg/m² (n = 127).
No statistically significant differences were found between the three tertiles regarding sex, age, systolic blood pressure and heart rate. However, a statistically significant difference was highlighted regarding the diastolic blood pressure value, which was higher in the third tertile (p < 0.001). The CHA2DS2-VASc score was not statistically significantly different between the three groups. As regards the echocardiographic functional parameters, the following was noted: in the population examined, LVMi increased progressively from the first to the third tertile (p = 0.001); however, a similar trend was not observed for the indexed left atrial volume, the ejection fraction, the global longitudinal strain of the left ventricle or the E/E' ratio; in fact, these values were not statistically significantly different between the three groups. As regards peak atrial longitudinal strain, its value was lower in the third tertile (14.2 ± 8.3%) compared to both the first (19 ± 11.5%) and the second tertile (17.8 ± 10.6%) in a statistically significant manner (p < 0.02) (Table 1).
However, in a subanalysis based on the type of AF, we found that this reduction in PALS as BMI increases was observed between the first and third tertiles (p = 0.001) of the group of patients with paroxysmal AF, while the difference was not statistically significant in patients with persistent/permanent AF (p = 0.158), as shown in Figure 3. Furthermore, in the population under examination, PALS was associated in a statistically significant manner with BMI (p < 0.001) but also with age, heart rate, LVMi, left ventricular EF, GLS, E/E' ratio and systolic pulmonary arterial pressure (PAPs). However, after conducting a multilinear regression analysis, adjusting these data for the CHA2DS2-VASc score, LVMi, left ventricular EF, E/E' ratio and PAPs, we found that BMI remained significantly and independently associated with PALS (standardized coefficient β = −0.127, p < 0.02; cumulative R² = 0.41, SEE = 0.8%, p < 0.0001), as shown in Table 2 and Figure 4.
Discussion
In our study we investigated the role of obesity, assessed by the anthropometric parameter of BMI, on left atrial function, assessed by baseline echocardiography and speckle tracking, in patients with non-valvular AF.
Risk factors associated with atrial fibrillation (including obesity) [40] are associated, in some studies, with the onset of AF in patients with normal atrial dimensions, presupposing the existence of etiopathological mechanisms that are not expressed in atrial dilatation [41], thus giving rise to the need for a rigorous evaluation of left atrial function. The new echocardiographic techniques and, in particular, speckle tracking echocardiography allow greater accuracy in the functional evaluation of the left atrium [37].
The PALS, which constitutes a marker of the atrial reservoir function, representing the point of maximum positivity of the strain curve, correlates better than the PACS with the prognostic data [42,43].
In our population of patients with non-valvular AF, the gradual reduction in PALS found when examining tertiles with increasing BMI shows us how overweight and obesity could have a detrimental effect on left atrial function [44].
Furthermore, BMI correlates with PALS even when this trend is analyzed independently of numerous confounding factors (including the CHA2DS2-VASc score), which could have an impact especially on obese categories with a higher BMI.
Overweight and obesity could increase the risk of AF through multiple mechanisms, such as structural and electrical atrial remodeling, which contributes to the creation of a proarrhythmic substrate. Ectopic deposition of adipose tissue in the heart has been associated with the prevalence and severity of atrial fibrillation. Epicardial adipose tissue, in particular, appears to constitute an important arrhythmogenic substrate that could explain the increased risk of AF related to obesity [21]. Furthermore, the amount of epicardial adipose tissue is a predictor of AF persistence [45]. The mechanisms by which epicardial fat contributes to the onset and progression of AF are not fully understood [46]. The pathophysiological mechanisms involve the infiltration of adipose tissue, the profibrotic and proinflammatory paracrine effects exerted by epicardial fat, as well as oxidative stress [47]. In fact, adipocytes, under both physiological and pathophysiological stimuli, are capable of producing over 50 cytokines, hormones and peptides, grouped under the definition of "adipokines", which play a very important role in the regulation of energy homeostasis and inflammation [48]. In obese individuals, hypertrophic adipocytes produce a series of molecules such as resistin, leptin, interleukin-6 (IL-6) and tumor necrosis factor-α (TNF-α) which determine a switch towards a proinflammatory state [49]. Increased sympathetic tone related to the dense innervation of cardiac adipose tissue depots contiguous with the atrium and pulmonary veins may also play a role [50]. The phenomena listed certainly contribute, to varying degrees, to the myocardial dysfunction characteristic of these patients.
Even though various mechanisms may play a role in the pathogenesis of cardiac dysfunction in obesity, early detection of these myocardial abnormalities may be important in the management of the disease.
Limitations of the Study and Future Prospects
Among the limitations of our study, it must certainly be recognized that it is an observational, non-prospective study, and secondly that it is a monocentric, not multicentric, study, despite the number of patients enrolled being 395. The study also offers various avenues for further investigation; in fact, it could be interesting to investigate the relationship between the atrial function indices analyzed by speckle tracking echocardiography and the risk of recurrence of atrial fibrillation, not only in patients with obesity but also in other clinical settings, with the aim of carrying out an accurate evaluation of patients' risk profiles for optimal therapeutic management.
Conclusions
In patients with non-valvular AF, overweight and obesity have a deleterious effect on left atrial function. This is demonstrated by the gradual reduction of PALS with increasing BMI across the various tertiles. BMI remains independently associated with PALS even after correction for various confounding factors including the CHA2DS2-VASc score. Therefore, BMI could be considered an additional factor in assessing cardiovascular risk in patients with non-valvular atrial fibrillation, in addition to the well-known CHA2DS2-VASc score.
Figure 1. Modified Simpson method (biplane disk summation) for quantification of the left atrial volume index.
Figure 2. Evaluation of left atrial function using longitudinal strain: peak atrial longitudinal strain (PALS) and peak atrial contraction strain (PACS).
Figure 3. PALS reduction in a subanalysis based on the type of AF (permanent vs. paroxysmal). Abbreviations: PALS Peak Atrial Longitudinal Strain; BMI body mass index; AF atrial fibrillation.
Figure 4. Relation between Body Mass Index and Peak Atrial Longitudinal Strain by multilinear regression analysis.
Table 1. Anthropometric, clinical and echocardiographic characteristics of patients with non-valvular atrial fibrillation divided into three tertiles based on body mass index values.
Abbreviations: SBP: Systolic Blood Pressure; DBP: Diastolic Blood Pressure; BMI: Body Mass Index; LV Mass Index: left ventricular mass indexed by height^2.7; EF: Ejection Fraction; GLS: Global Longitudinal Strain; E/e': ratio of transmitral pattern; LA volume index: left atrial volume indexed by BSA (body surface area); PALS: Peak Atrial Longitudinal Strain; Pa: p-value of I tertile compared to III tertile; Pb: p-value of II tertile compared to III tertile; Pc: p-value of III tertile compared to I tertile; NS: not statistically significant.
Table 2. Multilinear regression analysis of variables with main prognostic value. Abbreviations: PALS Peak Atrial Longitudinal Strain, LV left ventricular, EF Ejection Fraction, E/e' ratio of transmitral pattern, PAPs systolic pulmonary arterial pressure, BMI Body Mass Index.
"year": 2024,
"sha1": "30ebf504cb406cf8e24791ecae97785212e6b78f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/13/10/2895/pdf?version=1715677713",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e2c959544b24aa0a1a8afa58f4da42d2647259bd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Adolescents' use of the built environment for physical activity
Background Physical activity is a health-enhancing behavior, but few adolescents achieve the recommended levels of moderate-to-vigorous physical activity. Understanding how adolescents use different built environment spaces for physical activity and how activity varies by location could help in designing effective interventions to promote moderate-to-vigorous physical activity. The objective of this study was to describe the locations where adolescents engage in physical activity and compare traditional intensity-based measures with continuous activity when describing built environment use patterns among adolescents. Methods Eighty adolescents aged 11–14 years were recruited from community health and recreation centers. Adolescents wore accelerometers (Actigraph GT3X) and global positioning system receivers (QStarz BT-Q1000XT) for two separate weeks to record their physical activity levels and locations. Accelerometer data provided a continuous measure of physical activity and intensity-based measures (sedentary time, moderate-to-vigorous physical activity). Physical activity was mapped by land-use classification (home, school, park, playground, streets & sidewalks, other) using geographic information systems, and this location-based activity was assessed for both continuous and intensity-based physical activity derived from mixed-effects models which accounted for repeated measures and clustering effects within person, date, school, and town. Results Mean daily moderate-to-vigorous physical activity was 22 minutes; mean sedentary time was 134 minutes. Moderate-to-vigorous physical activity occurred in bouts lasting up to 15 minutes. Compared to being at home, being at school, on the streets and sidewalks, in parks, and in playgrounds were all associated with greater odds of being in moderate-to-vigorous physical activity and achieving higher overall activity levels. Playground use was associated with the highest physical activity level (β = 172 activity counts per minute, SE = 4, p < 0.0001) and the greatest odds of being in moderate-to-vigorous physical activity (odds ratio 8.3, 95% confidence interval 4.8-14.2). Conclusion Adolescents were more likely to engage in physical activity, and achieved their highest physical activity levels, when using built environments located outdoors. Novel objective methods for determining physical activity can provide insight into adolescents' spatial physical activity patterns, which could help guide physical activity interventions. Promoting zoning and health policies that encourage the design and regular use of outdoor spaces may offer another promising opportunity for increasing adolescent physical activity.
Background
Physical activity has many associated health benefits, decreases risk of obesity, and independently decreases morbidity and mortality. There is mounting evidence on the association between certain built environments, particularly outdoor spaces such as parks and playgrounds, and higher physical activity levels [1–5]. Investigators are beginning to use Global Positioning System (GPS) receivers along with accelerometers to better understand how youth use the built environment [6]. However, there are three major gaps in this field, which hinder optimizing adolescents' use of the built environment. First, no standardized method exists for measuring physical activity in the built environment. Most prior work in this emerging area has assessed physical activity using discrete activity levels, such as sedentary and moderate-to-vigorous physical activity (MVPA) [3,7–9], with continuous physical activity measures used less often [10], and little is known about patterns and variations in built environment use over time [2]. Second, although national adult health guidelines recommend physical activity bouts of at least 10 consecutive minutes [11–13], pediatric guidelines do not address bout length, recommending only that children achieve a minimum of 60 minutes of MVPA per day [13]. Third, while strategies for assessing and analyzing use of the built environment for physical activity are beginning to offer a better understanding of how much time youth spend using various built environment attributes and how activity levels vary by location [7,8], questions remain about the most useful way to analyze use of the built environment for physical activity.
In this study, we sought to address these gaps in the current knowledge base by first describing the different locations where adolescents spend time as well as the patterning of adolescent physical activity, and second, by comparing the use of a traditional intensity-based physical activity (MVPA) measure with continuous physical activity data when analyzing use of the built environment among youth.
Participants
From April 2011 to April 2012, we recruited a convenience sample of 96 non-Hispanic white, non-Hispanic black, and Hispanic 11-14 year olds who sought care at a community health center and/or attended a community recreation center and who lived in three surrounding, predominantly urban, middle- and low-income communities in the greater Boston area. Age-eligible subjects without physical impairments limiting ambulation were recruited sequentially during the study enrollment period. Signed parental informed consent along with signed child assent was obtained from each family. For inclusion in this study, subjects had to have accelerometer and GPS data available from at least one of the two measurement periods. During the first measurement period, five subjects dropped out of the study, two were lost to follow-up, and eight provided inadequate data. During the second measurement period, four subjects provided insufficient data, sixteen subjects declined participation, two subjects relocated, and twelve subjects were unable to be contacted. A total of 80 (83.3%) participants completed the study with sufficient data and were included in the final study sample. The Partners HealthCare Institutional Review Board approved the study.
Data collection and measures
A researcher provided each subject with a hip-worn belt equipped with an accelerometer (GT3X; ActiGraph LLC) to record physical activity and a GPS receiving unit (QStarz BT-Q1000XT) to record location. The accelerometer and GPS devices were both set to record at 30-second intervals, with their internal clocks synchronized. Subjects were asked to wear the belt at all times during waking, non-water-activity hours for at least 5 weekdays and 2 weekend days and to recharge the GPS overnight. Data were collected in two separate seasons, one warm (September through mid-November, April through June) and one cold (mid-December through March), to account for known seasonal variations in physical activity levels and built environment use [7]. Study staff recorded the temperature (high, low, average; recorded in degrees Fahrenheit to the nearest whole unit) and weather condition (sun; overcast; rain; snow) on each study day.
At entry, trained research staff measured height and weight using a stadiometer (SECA) and digital scale (LifeSource MD) with participants wearing indoor clothing, pockets emptied, and shoes removed. Measurements were taken in duplicate and then averaged, from which body mass index (BMI) was calculated (kg/m²) and then used to determine participants' weight status (healthy weight, overweight, obese) according to CDC age- and sex-specific percentile cut-offs [14]. Self-reported age (date of birth), sex, and race/ethnicity were obtained at baseline, along with the highest parent-reported level of parental education.
Data processing
Data merging and processing
Study personnel reviewed GPS and accelerometer output to ensure each subject had at least 2 hours of daily time-matched data, on at least 2 week days and 1 weekend day, with accelerometer data having at least 10% non-zero epochs per hour [15]. Datasets that met minimum inclusion criteria were then cleaned to exclude days and/or hours of non-wear as defined by the validation criteria above (e.g., hours with <10% non-zero epochs, days with <2 hours of valid time-matched data). Data occurring during overnight hours (12 AM-5 AM) were removed, along with accelerometer datapoint(s) at the start of a day without a matching GPS datapoint. Furthermore, to avoid misclassification of imputed GPS data due to GPS malfunction (signal loss, excessive cold- and warm-start-up times, jitter, drift) or battery depletion, protracted missing GPS data (>2 hours) during non-school hours were also removed. Review of the data revealed signal loss occurring frequently in larger buildings, most prominently schools, and infrequently in residential buildings. GPS data missing for >2 hours during school hours were thus imputed, as this was felt to represent appropriate indoor signal loss. Accelerometer and GPS data were joined according to the date and time information in each unit based on a previously published software program written in 2011 [7]. For accelerometer datapoints without a corresponding GPS point, the missing latitude and longitude were imputed using the last previously recorded values. The joined data were collapsed into 1-minute epochs for all data analyses, to align with recent GPS-accelerometer studies and physical activity guidelines [2,3,7,10,16].
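As an illustration of the merging and epoching logic described above, a minimal Python/pandas sketch follows. This is not the authors' previously published software; the column names (counts, lat, lon) and the DataFrame layout are assumptions for illustration.

```python
import pandas as pd

def merge_and_epoch(accel: pd.DataFrame, gps: pd.DataFrame) -> pd.DataFrame:
    """Join 30-second accelerometer and GPS records and collapse to 1-minute epochs.

    Both frames are assumed to be indexed by synchronized timestamps; `accel`
    carries a 'counts' column and `gps` carries 'lat'/'lon' columns.
    """
    merged = accel.join(gps, how="left")                     # match records on timestamp
    merged[["lat", "lon"]] = merged[["lat", "lon"]].ffill()  # impute last recorded fix
    merged = merged.dropna(subset=["lat", "lon"])            # drop leading unmatched points
    merged = merged[merged.index.hour >= 5]                  # remove overnight (12 AM-5 AM)
    # Collapse 30 s records into 1-minute epochs: sum counts, keep last fix.
    return merged.resample("1min").agg({"counts": "sum", "lat": "last", "lon": "last"})
```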
Land use classifications
Each subject's home address and school, and all joined data, were geo-classified using a geographic information system (GIS) (ArcGIS 9.2). Each GPS datapoint was assigned one of six distinct land-use classifications: home, school, park, playground, street/sidewalk, or other [7], using the Commonwealth of Massachusetts' Office of Geographic Information (MassGIS) Land Use 2005 GIS layer. Location data were further categorized as indoor (home and school), outdoor (park, playground, street/sidewalk), or other. MassGIS classifies land use into 40 different types as observed from 50 cm pixel resolution color orthoimagery, with playground including areas built specifically for public recreation (soccer, football, and baseball fields, golf courses) and park including green spaces and open land (forest, town green, beaches, open park and public space). MassGIS land-use assignments were verified using 30 cm pixel resolution 2008 color orthoimagery. Forty-meter buffers were drawn around the center of each subject's house and around the school building perimeter, and a hierarchical land-use classification system was used to account for inherent GPS error and reduce possible misclassification.
Physical activity data
Accelerometers provided activity counts per minute. Due to skewness, count data were log transformed for analysis as continuous data. In addition to using continuous physical activity data for analyses, to relate the findings to current physical activity guidelines and compare our findings to the literature, we also classified activity data into intensity-based categories, with <100 counts per minute as sedentary and ≥2296 counts per minute (corresponding to 4 metabolic equivalents) as moderate-to-vigorous physical activity (MVPA) [17].
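The cut-point classification can be made concrete with a short sketch. The thresholds follow the values above; the label for the intermediate range is an assumption, since only the sedentary and MVPA thresholds are defined here.

```python
import numpy as np

def classify_intensity(cpm: np.ndarray) -> np.ndarray:
    """Label 1-minute epochs by accelerometer counts per minute (cpm)."""
    labels = np.full(cpm.shape, "light", dtype=object)  # assumed middle label
    labels[cpm < 100] = "sedentary"   # <100 counts/min
    labels[cpm >= 2296] = "MVPA"      # >=2296 counts/min (~4 METs)
    return labels

print(classify_intensity(np.array([50, 800, 3000])))
# ['sedentary' 'light' 'MVPA']
```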
Analysis
Univariate analyses included deriving the mean daily minutes of total time, MVPA, and sedentary time that youth spent in each land-use category. We also plotted physical activity counts per minute versus location data separately by person-date and visually inspected the plots for patterns in physical activity and variations in land-use. Using these longitudinal plots, we compared total daily energy expenditure (kcal) from MVPA to energy expenditure achieved from all daily activity levels below the MVPA threshold [18].
Bivariate analyses tested for associations of physical activity counts and minutes of each physical activity category (sedentary and MVPA) by grouped land-use categories (indoor, outdoor, other). Nonparametric tests (Mann-Whitney U and Kruskal-Wallis) were used due to skewed distributions. Because these are time-series data, autocorrelation (where activity at one minute is not independent from activity at previous minutes) is a concern. When autocorrelation is present, it can bias the estimates. To account for this, time-series analysis (SAS ARIMA procedure) was used to test for autocorrelation up to the fifteenth order separately for each person-date, and autocorrelation was removed prior to subsequent longitudinal analyses. Time-series analyses by person-date also allowed us to test for bout length. Finally, we performed multivariable modeling adjusting for potential confounders (age, sex, race/ethnicity, weight status, parental education, valid hours of combined data, season, temperature, and weather) to assess the relationship between land-use categories and physical activity. We first ran longitudinal mixed-effects models that accounted for repeated measurements and clustering effects, with continuous physical activity (log counts per minute) as the dependent variable, built environment variables as fixed predictors, polynomial contrasts of time as random effects, and the covariates noted above. A backward elimination algorithm (with a cutoff of p ≤ 0.01) was applied. We performed interaction analyses for location and weather to explore possible effect modification of activity by weather. We then used generalized estimating equation logistic regression models to test the association of land-use categories and intensity-based measures of physical activity, accounting for repeated measures and clustering effects, and adjusting for potential confounders. Dependent variables for these logistic regression analyses were whether a minute was classified as MVPA (yes/no) or sedentary (yes/no), respectively.

Results

Table 1 describes the study sample. The mean age of participants was 12.6 years, with 44% male, 40% white, 23% black, 36% Hispanic or Latino, and 49% overweight or obese. The majority of adolescents resided in the town where the community health and recreation center were located. Most (80%) adolescents lived in households where one or more parents had a high school education or higher, with 6% of parents reporting less than a high school education, and 14% missing or not reporting parental education.
An average of 277 minutes of data were collected per adolescent per day at home, along with 296 minutes at school, 45 minutes on streets and sidewalks, 25 minutes at playgrounds, 17 minutes at parks, and 99 minutes in all other locations (p < 0.001 for indoor vs. outdoor vs. other locations). Visual inspection of the longitudinally plotted data revealed substantial variation of youth physical activity over time, both within and between land-use categories. Figure 1 provides an illustrative example of a single subject's activity levels across time of day for three different days. The adolescent's location and physical activity intensity in each location varied by type of day (school vs. non-school day) and time of day. The figure also reports the total daily energy expenditures calculated for each day, with daily MVPA tallies ranging from one-half to one-twentieth of the daily energy expenditure approximations derived from activity data below the MVPA cut-off.
Overall, physical activity levels rarely reached the MVPA threshold; only 3.4% of all minutes collected were categorized as moderate-to-vigorous physical activity.
Adjusted associations of physical activity with the built environment
In longitudinal mixed-effects models using continuous physical activity counts rather than MVPA/sedentary time, the built environment showed broad associations with physical activity (Table 2). Compared to being at home, all other locations, including all outdoor land-use categories, were associated with higher recorded physical activity counts. Being in a playground was associated with the highest levels of physical activity, with each additional minute of playground use resulting, on average, in an additional 172 counts per minute of activity. Boys averaged 34 more counts per minute than girls. Temperature significantly predicted physical activity level, while weather modified physical activity across all locations, with snow having the greatest influence on activity. During snow days, adolescents had increased activity at playgrounds and parks compared to other locations or other weather conditions, while adolescents had increased activity on streets and sidewalks during sunny and overcast weather compared to rainy and snowy days (Table 3). Multivariable analyses of physical activity separated into MVPA and sedentary time showed all land-use types predicting higher odds of a minute being in MVPA compared to a minute at home (Table 4). Adolescents had nearly 7 times the odds of a minute being in MVPA on streets and sidewalks, over 8 times the odds of a minute being in MVPA in playgrounds, and 5 times the odds of a minute being in MVPA in parks compared to being at home. Black compared to white adolescents had increased odds of a minute being in MVPA, as did boys compared with girls. All land-use types were associated with higher odds of one minute being in sedentary time compared to one minute spent at home, with all land-uses having less than twice the odds.
Discussion
In this study which used GPS and accelerometer data, we found that intensity-based and continuous physical activity analysis methods revealed a statistically significant role for the built environment in adolescent physical activity. This study is among the first to compare intensity-based to continuous physical activity data when assessing physical activity by location in youth. Both analysis methods identified outdoor spaces as superior to indoor spaces for promoting physical activity. Playgrounds were the location with the greatest predictive value for being in MVPA, and were also the location with the highest recorded physical activity levels. We also plotted physical activity and built environment use patterns over time to delineate the temporal and sporadic nature of adolescent physical activity. Though schools have received much policy and research attention as desirable venues for promoting physical activity, similar to a study in English children, we found physical activity levels among adolescents to be higher in outdoor than indoor spaces [2]. These findings, along with our findings that adolescents obtained only about one third the daily recommended levels of physical activity, would argue for increased efforts by urban planners and public officials to create more outdoor environments which are 'activity friendly' and will attract youth outdoors.

Figure 1. Illustrative examples of longitudinal plotting of a subject's daily physical activity data versus time, with different symbols (triangle, diamond, circle, heart, star, square) representing various locations. *Activity counts >2296 constitute moderate-to-vigorous physical activity (MVPA). **Physical activity data presented over time in 1-minute intervals.
Using intensity-based (MVPA and sedentary time) and continuous physical activity data, we confirmed previous research identifying parks and playgrounds as spaces that promote physical activity [4,5,19]. However, prior studies measuring physical activity at playgrounds have tended either to assess only specific facilities or not to simultaneously record activity occurring at other locations to allow for direct comparison of activity levels by land-use category. A recent similar study of younger children found that only 2% of children's total daily physical activity occurred at parks or playgrounds [10]. This study did not disassociate the physical activity time spent in parks, which have been the most well-studied location [3,20], from that spent in playgrounds, nor did it compare activity intensity in parks and playgrounds to activity levels in other land-uses.

Though we found that both parks and playgrounds were important locations for obtaining physical activity, playgrounds, streets, and sidewalks were the most likely locations for recorded physical activity in adolescents, more so even than parks. This finding was consistent using both continuous and intensity-based physical activity analysis methods. The planning literature has long identified the importance of a well-connected street system to encourage pedestrian activity [21,22], yet less is known about the potential benefits of street and sidewalk systems as direct promoters of higher-level physical activity. We found all land-use categories to be associated with sedentary time relative to time spent at home, yet the odds for being sedentary were considerably lower and more uniform than the odds for being in MVPA in these locations. This suggests that sedentary time may be more of a common daily function that occurs throughout the day and that has less potential to be impacted by urban form or location than higher-level physical activities, which showed greater variations by land-use type.
To our knowledge, this is one of the first built environment studies to test for bout lengths. We found that in a free-living scenario, when adolescent physical activity was recorded throughout the entire day and over all locations, physical activity occurred in bouts lasting up to fifteen minutes. Further research should assess whether bouts vary by time of day, activity, and location, given our observed variations in activity levels. The health benefits of prolonged versus intermittent physical activity also remain unclear [23,24]. Adult physical activity guidelines are based on 10-minute bouts [13], and prior pediatric studies looking at physical activity bout lengths have used 5-minute bins to study associations between physical activity and health outcomes [25,26]. One study assessing weight status outcomes found longer bouts of MVPA to protect against risk of overweight [25], while another study reported equal cardiovascular benefits from sporadic (<5 consecutive minutes) versus consecutive physical activity [26]. Our finding that physical activity occurs in bouts lasting three times longer than the bout estimate used in prior pediatric health outcome studies provides useful information for the design of future adolescent physical activity interventions.
We also found that using traditional measures for quantifying physical activity (summing total minutes of daily MVPA) grossly undervalues the relative benefits adolescents derive from different built environments, underestimating caloric expenditure by as much as 90 percent. Adolescents in this study achieved on average the greatest total daily MVPA in schools, a mean daily MVPA of approximately 8 minutes, due to the large total daily time adolescents spent at school. However, schools were a location with generally low recorded activity levels, second only to the level recorded at home, and when physical activity was adjusted for total time spent in each location, school time was associated with low odds of being in MVPA. Comparing energy expenditure calculations derived using continuous activity data with energy expenditure calculations obtained using intensity-based activity data illustrates another way in which using MVPA to assess the health benefits of physical activity likely substantially underestimates the actual health benefits adolescents accrue from their total daily physical activity. In measuring adolescent physical activity with traditional intensity-based methods, one misses a large degree of daily activity below the moderate activity threshold with known health benefits. In addition, daily MVPA minutes may not provide actionable information, as these minutes may be separated throughout the day and may not occur in the same location. Future pediatric physical activity and obesity interventions aimed at increasing energy expenditure might consider targeting physical activity in the ranges and locations where it is most common and achievable, rather than focusing primarily on MVPA.
Our study has limitations. Subjects are from one metropolitan region and findings may not be generalizable to other populations. Subjects were not randomly selected and our sample may differ from a sample that is randomly selected. Our subjects were predominantly from one town, and while this mirrors the populations attending the community health and recreation centers, the possibility of systematic bias exists. Validity criteria for combined data were set at 2 hours to minimize data loss given known difficulties in obtaining simultaneous GPS and accelerometer data, including GPS signal loss, start-up time, jitter and drift, and battery depletion. To account for this low threshold and adjust for days with more limited physical activity, we included a covariate for valid hours of day, a commonly used method in accelerometry studies. We imputed missing GPS data to re-capture data loss due to indoor signal loss, a well-known limitation of using satellite signals. Despite taking steps to avoid location misclassification, including only imputing data falling within the same day and limiting the amount of consecutive imputed data, misclassification bias remains possible. In subanalyses we found location misclassification to be minimal (<2%). Though we identified location and temporal patterns in physical activity, we did not test for spatial autocorrelation [27,28], nor did we include health outcomes. We identified fifteen minutes as the ceiling for continuous physical activity among adolescents. However, without further information on the distribution of adolescent bout lengths, it is difficult to interpret the relative frequency of these findings and caution should be employed when attempting to use this finding to counsel adolescents on activity or in formulating physical activity recommendations.
Conclusion
An objective assessment of adolescent physical activity patterns highlighted the importance of outdoor spaces among the cohort participating in this study. Knowledge about the locations and patterns of physical activity among targeted populations can help guide physical activity interventions and the design of outdoor spaces, to ensure that location, duration and intensity of physical activity are taken into consideration. Targeting activity interventions for adolescents solely in the MVPA range may represent a lost opportunity for physical activity promotion.
Abbreviations
BMI: Body mass index; MVPA: Moderate-to-vigorous physical activity; GPS: Global positioning system; GIS: Geographic information systems.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
NMO conceptualized and designed the study, drafted the initial manuscript, assisted with the analysis and interpretation of the data, and approved the final manuscript as submitted. JMP, JPW, AEF, and EG assisted in designing the study, assisted in interpreting the data, reviewed and revised the manuscript, and approved the final manuscript as submitted. AIR and CG coordinated and supervised data collection, critically reviewed the manuscript, and approved the final manuscript as submitted. JJL and MLC carried out the data analyses, assisted in interpreting the data, reviewed and revised the manuscript, and approved the final manuscript as submitted. JB assisted in designing the study, carried out the geographic information system data analysis, reviewed and revised the manuscript, and approved the final manuscript as submitted. | 2016-05-12T22:15:10.714Z | 2015-03-15T00:00:00.000 | {
"year": 2015,
"sha1": "71f2e9d38e1ca04fca02527014b06c15d64dbee2",
"oa_license": "CCBY",
"oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/s12889-015-1596-6",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ecdc789d49ac1735ba8344247feac20b022b325d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
254349580 | pes2o/s2orc | v3-fos-license | CITIZEN PARTICIPATION IN ELECTRONIC PUBLIC ADMINISTRATION: THE CONSIDERATIONS OF FUNCTIONALITY AND THE TECHNOLOGY ACCEPTANCE MODEL
ABSTRACT
The adoption of technology-based public administration has revealed the ability of virtual spaces to accelerate public services, making them more effective and accessible. However, technology-based public services also have negative effects relating to the lack of citizen supervision and the possible misuse of personal data. This study analyzed the effects of functionality and data privacy on the perceived usefulness and perceived ease of use of technology in Indonesian public administration, using administrators as respondents. An empirical analysis was conducted using quantitative methods with the help of the WarpPLS (Partial Least Squares) program. The results showed that data privacy and functionality have positive effects on perceived ease of use (PEU) and perceived benefits (PB). Furthermore, the statistical results showed that PEU and PB had positive effects on citizen participation in electronic public administration. The results of testing the mediating effects of PB revealed its role in strengthening the positive and significant effects of the functionality and data privacy variables on citizen participation in use. On a theoretical level, these results contribute to an explanation of the application of the technology acceptance model in the online public sector and underline the crucial elements that influence technology acceptance.
INTRODUCTION
Technology use in the public sector has developed rapidly in recent years. The Covid-19 pandemic has driven a massive and intensive increase in technology use in public administration, moving public services from the physical to the virtual space. Several studies have confirmed the important role of technology while revealing its effects, both positive and negative, on government achievement and delivery of public services (Heflin, Shewmaker, & Nguyen, …). Data privacy limits the disclosure of personal information to unauthorized parties. This, in turn, ensures that data is used appropriately between users and service providers and limits the uncontrolled sharing of data. Since data security and privacy are closely related to the use of the internet and digital technologies, data privacy positively affects perceived benefits (Dimitropoulos et al., 2011; Ziefle et al., 2016) and perceived ease of use (Distler, Lallemand, & Koenig, 2020; Schnall, Higgins, Brown, Carballo-Dieguez, & Bakken, 2015).
H3. Data privacy has a positive and significant effect on perceived benefits.
H4. Data privacy has a positive and significant effect on perceived ease of use.
Perceived Ease of Use, Perceived Benefits, and Citizen Participation Behavior
According to Chen and Aklikokou (2020), perceived usefulness and perceived ease of use affect technology adoption, and this is reflected by the citizen participation behavior that ensues. According to Davis et al. (1989), perceived usefulness refers to the user's expectations of the technology's ability to increase potential performance.
Furthermore, according to Mois and Beer (2020), perceived ease of use refers to the perceived amount of effort required to use the technology effectively (Davis, 1989). Several previous studies have confirmed the effect of perceived ease of use on perceived benefits, where the greater the user's perception of the convenience of the product, the greater their perception of the potential benefits they can obtain (Abdullah, Ward, & Ahmed, 2016; Moslehpour, Pham, Wong, & Bilgiçli, 2018). This is because users will be more inclined to use technology that is easy to use, compared with technology that does not offer the same convenience. Based on these considerations, the following hypothesis is proposed in this study:

H5. There is a positive and significant effect of perceived ease of use on perceived benefits.

Hsu, Chen, and Ting (2018) found that the two main components of TAM, perceived ease of use and perceived benefits, have a positive effect on technology adoption. Likewise, Al-Rahmi et al. (2021) found that factors such as compatibility, enjoyment, and relative advantage have a substantial impact on perceived usefulness, which ultimately affects citizen participation in public sector technology use. Al-Adwan (2020) found that perceived ease of use and perceived usefulness significantly influence citizens' participation behavior in online public sector courses.
Several other studies have found an effect of perceived ease of use and perceived benefits on citizen participation in the use of public sector technology (Berque & Newman, 2015;Dehghani, 2016).
METHOD
This study analyzes the effects of functionality and data privacy on perceived benefits and perceived ease of use of electronic technology in Indonesian public administration. In addition, an empirical analysis was carried out to analyze the effect of perceived ease of use on the perceived benefits of and citizen participation in electronic public administration, as well as the effect of perceived benefits on citizen participation in the use of electronic public administration, see Figure 1. To analyze these various effects, empirical tests were conducted using a quantitative approach. The model used replicates that of AlHamad et al. (2021).
In this study, functionality is defined operationally as the degree to which the service offered by the provider has a practical value in accordance with the intended purpose of the user, using a 3-item measure adopted from Sovacool and Del Rio (2020). The data privacy variable is defined as limiting the disclosure of personal information to unauthorized parties and ensuring that data is used appropriately between users and service providers, as well as limiting uncontrolled data sharing; the variable's measure was adopted from Dhagarra et al. (2020). Furthermore, the perceived benefits variable is defined as the user's expectation of the technology's ability to increase potential performance, which refers to the theoretical conceptualization of Davis (1989). Perceived ease of use in this study refers to the amount of effort required to use the technology effectively (Davis, 1989). Finally, the citizen participation variable is defined as the willingness of users to use digital technology to meet their specific needs, with the measure used adopted from Kasilingam (2020).
To enable the empirical analysis to be carried out, this study used administrators as respondents. Sampling was performed using a purposive random sampling technique. Questionnaires were distributed to respondents in Bandung Regency, West Java, Indonesia. A total of 282 questionnaires were distributed, and 158 questionnaires were returned, entailing a response rate of 56 percent. The scale used in the questionnaire was a 5-point Likert scale. Data analysis was conducted using WarpPLS.
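As a rough illustration of the indicator-validity check reported in the next section, the following sketch approximates loadings as the correlation of each Likert item with its block's composite (mean) score and flags items below 0.7. This is a simplified proxy, not WarpPLS itself, and the item names and data are hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Hypothetical 5-point Likert responses for three functionality items.
df = pd.DataFrame(rng.integers(1, 6, size=(158, 3)), columns=["fn1", "fn2", "fn3"])

def approximate_loadings(data: pd.DataFrame, blocks: dict) -> pd.Series:
    out = {}
    for latent, items in blocks.items():
        composite = data[items].mean(axis=1)  # crude stand-in for the latent score
        for item in items:
            out[(latent, item)] = data[item].corr(composite)
    return pd.Series(out)

loadings = approximate_loadings(df, {"FN": ["fn1", "fn2", "fn3"]})
print(loadings.round(2))
print(loadings[loadings < 0.7])  # indicators that would fail the 0.7 rule
```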
RESULTS
The results of the outer loadings are shown in Table 1. Outer loadings give the loading factor, which shows the magnitude of the correlation between indicators and latent variables. A loading factor value greater than 0.7 for each indicator is considered valid. The test results show that all variables of functionality (FN), data privacy (DP), perceived ease of use (PEU), perceived benefits (PB), and citizen participation in use (IU) had indicators with loading factor values greater than 0.7. Thus, the indicators used in this study were all valid. Furthermore, the R-squared test was carried out to analyze the proportion of variation in the dependent variable that could be predicted from each independent variable. The test results in Table 4 show the adjusted R-squared values for the dependent variables in this study, namely PEU, PB, and IU, which were 0.227, 0.240, and 0.616, respectively. This shows that the independent variables of functionality and data privacy can predict 22.7 percent and 24 percent of the variation in perceived ease of use (PEU) and perceived benefits (PB), respectively.
Furthermore, the dependent variable of citizen participation in use (IU) was predicted with a higher variation value, namely 0.621, than the other dependent variables in this study. The results of the model fit test, shown in Table 6, reveal that the model's fit was acceptable for both the saturated model and the estimated model. This indicates that the model used in this study satisfied all the set indices and confirms that the formulated model could be applied in further tests. Furthermore, hypothesis testing was carried out, and the results are detailed in Table 7. … (2017), who found that functionality was a determinant of the benefits perceived by users. Thus, the second hypothesis, stating a positive and significant effect of functionality (FN) on perceived ease of use (PEU), is empirically proven and accepted. This result is consistent with previous research that underscored the important role of data security and privacy in users' ease-of-use perceptions (Abdullah et al., 2016; Brown, 2002; Rao et al., 2011; Ryan & Rao, 2008; Tandon et al., 2018).
The results of testing the third hypothesis showed a positive and significant effect of data privacy (DP) on perceived benefits (PB) … convenience (Distler et al., 2020; Schnall et al., 2015). The more users trust in the security of their personal data, the more intensively they will use a technology product. … accepted. This is in accordance with Hsu et al. (2018), who found that perceived ease of use and perceived benefits had a positive effect on technology adoption, and with Al-Rahmi et al. (2021), who found that items such as compatibility, enjoyment, and relative advantage had a substantial impact on perceived usefulness, which ultimately influenced citizen participation in public sector technology use. It is also consistent with … (2017), Berque and Newman (2015), and Dehghani (2016), who found that perceived ease of use and perceived usefulness significantly influenced citizens' participation in online public sector courses.
The indirect effects were then tested to analyze the mediating role of the perceived ease of use (PEU) and perceived benefits (PB) variables in bridging the relationship between the functionality and data privacy variables and citizen participation in use. The test results for the indirect effect are shown in Table 8 and those for the total effect in Table 9. Overall, the results show that perceived ease of use (PEU) and perceived benefits (PB) mediate the relationship between the functionality and data privacy variables and citizen participation in use. Figure 2 illustrates that perceived ease of use (PEU) and perceived benefits (PB) are likely to strengthen the positive and significant effects of functionality (FN) and data privacy (DP) on citizen participation in electronic public administration. In general, today's increasingly intensive adoption of technology in the public sector requires administrators to improve citizen involvement in digital public services. Moreover, administrators need a high level of digital knowledge and skills to deliver public services professionally (Lauermann & König, 2016). Public service administrators can implement and apply digital and online technologies to improve appropriate public services, but they must develop reliable and safe electronic services in the public sector to increase citizen engagement with digital technology in public services (Bakar, 2018). Citizens must thus be encouraged to increase their willingness to participate in technology when obtaining public services (Kunter et al., 2013; Wijiastuti & Nurhayati, 2021). To monitor the progress of public services, citizens can appropriately supervise the public sector by using the available channels (Hallström & Schönborn, 2019). Another indicator that shows that a person is engaged is being able to equip others with knowledge of appropriate public administration techniques by combining time, energy, materials, and abilities (Blau, 1964).
CONCLUSION
The results of the study have shown that data privacy and functionality have a positive effect on the perceived ease of use (PEU) and perceived benefits (PB) of technology. Furthermore, the statistical results have shown that PEU and PB have a positive effect on citizen participation in electronic public administration. The results of testing the mediating effects of PB revealed its role in strengthening the positive and significant effects of the functionality and data privacy variables on citizen participation in electronic public administration.
On a theoretical level, these results contribute to an explanation of the application of the technology acceptance model in the online public sector and underline the crucial elements that influence technology acceptance among citizens. Practically, the results are useful for encouraging public sector administrators, when delivering public services in a technology-laden era, to ensure citizens are more aware and involved in electronic public services. A limitation of this study is that more respondents would be required to generalize the findings. In addition, the variables used in this study need to be further developed in future research. Therefore, it is recommended that future research use more respondents and investigate variables that were not tested in this study to analyze the factors affecting technology acceptance in the public sector.
Funding: This study received no specific financial support. | 2022-12-07T19:09:33.388Z | 2022-11-28T00:00:00.000 | {
"year": 2022,
"sha1": "b8b78b5153b0cd5d6ed088f43cd022d3f9db7110",
"oa_license": null,
"oa_url": "https://archive.conscientiabeam.com/index.php/74/article/download/3206/7217",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4c603841e81d353507197717917fd262ab39fdae",
"s2fieldsofstudy": [
"Computer Science",
"Political Science"
],
"extfieldsofstudy": []
} |
257131781 | pes2o/s2orc | v3-fos-license | Diagnostic Performance Evaluation of Multiparametric Magnetic Resonance Imaging in the Detection of Prostate Cancer with Supervised Machine Learning Methods
Prostate cancer is the second leading cause of cancer-related death in men. Its early and correct diagnosis is of particular importance to controlling and preventing the disease from spreading to other tissues. Artificial intelligence and machine learning have effectively detected and graded several cancers, in particular prostate cancer. The purpose of this review is to show the diagnostic performance (accuracy and area under the curve) of supervised machine learning algorithms in detecting prostate cancer using multiparametric MRI. A comparison was made between the performances of different supervised machine-learning methods. This review study was performed on the recent literature sourced from scientific citation websites such as Google Scholar, PubMed, Scopus, and Web of Science up to the end of January 2023. The findings of this review reveal that supervised machine learning techniques have good performance with high accuracy and area under the curve for prostate cancer diagnosis and prediction using multiparametric MR imaging. Among supervised machine learning methods, deep learning, random forest, and logistic regression algorithms appear to have the best performance.
Introduction
Prostate cancer (PCa) is the most common cancer in men and the second leading cause of cancer-related death in them [1–3]. Various methods are used for PCa screening, though these methods are invasive or have low accuracies, such as digital rectal examination, prostate-specific antigen (PSA) tests, and transrectal ultrasound (TRUS)-guided prostate biopsy [4–7]. New biomarkers, named 8-hydroxy-2-deoxyguanosine (8-OHdG) and 8-isoprostaglandin F2α (8-IsoF2α), have been reported. Increased levels of these biomarkers, measured through urine tests, indicate prostate cancer in the patient. Of course, validating these urinary biomarkers in relation to prostate cancer still requires significant research [8]. Meanwhile, prostate MRI plays a crucial role before a biopsy in patients with raised PSA. Multiparametric magnetic resonance imaging (mp-MRI) is a commonly used imaging procedure for diagnosing PCa. Mp-MRI is recognised as the combination of conventional anatomical MRI and at least two functional magnetic resonance sequences: diffusion-weighted imaging (DWI), dynamic contrast-enhanced MRI (DCE-MRI), and, optionally, MR spectroscopy (MRS) [9,10]. Various studies have noted that mp-MRI has good accuracy for diagnosing or determining the grade of prostate cancer [11,12]. Of course, it is more challenging to determine the aggressiveness of cancer using MRI than when it is detected by a physician with good reliability. Recently, various studies have used artificial intelligence and MRI images to diagnose or assess the characterization and severity of cancers, including prostate cancer, to reduce human error, increase the speed of diagnosis and classification, and improve overall efficiency and accuracy [13–15]. Indeed, artificial intelligence is beneficial in acquiring important clinical information that can help physicians to provide key and critical opinions about clinical prognosis, diagnosing diseases, and treatment outcomes [16,17].
Artificial Intelligence (AI) describes the capability of a computer to model intelligent behaviour, with minimal human intervention, and to reach a certain goal based on provided data. AI has multiple branches. One of these branches is machine learning (ML). ML describes algorithms used to incorporate intelligence into machines by automatically learning from data [16,18]. There are different types of ML. In general, ML types are branched into four groups: unsupervised learning, semi-supervised learning, supervised learning, and reinforcement learning [19–21]. In supervised learning, an observer provides data to the machine and labels the data types. Input and output are specified, and the machine attempts to learn a pattern from the input to the expected output [22,23]. In unsupervised learning, the computer finds connections between data and discovers patterns without the help of a trainer and without the use of labels that define the type of data [24,25]. Semi-supervised learning is a learning paradigm that studies how computers learn in the presence of labeled and unlabeled data. During semi-supervised learning, the aim is to design algorithms using combinations of labeled and unlabeled data [26]. Reinforcement learning is conducted by encouraging desirable behaviour and punishing undesirable behaviour. In this way, the computer can understand and interpret various issues by trial and error, according to the feedback it receives as a result of its actions [27,28].
The most common categories of ML algorithms are classification and regression. Examples of supervised learning algorithms include linear and logistic regression, support vector machines (SVMs; classification), K nearest neighbours (KNN; classification and regression), naive Bayes (classification), decision tree and random forests (DT and RF, respectively; both classification), and deep learning techniques (classification) [16,25].
The goal of this review study is to show the diagnostic performance (accuracy and area under the curve) of mp-MR images for predicting prostate cancer with and without supervised ML algorithms. In this review, for a better comparison of the methods' results, studies were used whose input data included T1-weighted imaging (T1WI) or T2-weighted imaging (T2WI), DWI, DCE-MRI, and, optionally, MRS.
mp-MRI in the Detection of PCa
Mp-MRI primarily contains at least three sequences: T2WI or T1WI, DWI, and DCE imaging [29]. T1WI is used to detect bleeding after a biopsy. T2-weighted images can detect the anatomical shape of the peripheral and transitional zones, where 70% and 30% of cancers are found, respectively [9]. DWI measures the Brownian movement of free water protons inside a tissue. Malignant tissue is denser than normal tissue, restricting free water movement inside the cancerous tissue and thereby decreasing the diffusion of water [30–32]. DCE assesses the perfusion and vascular permeability throughout the prostate and within a cancerous tissue through the rapid administration of gadolinium chelates and the use of fast T1-weighted images. Unlike normal tissue, malignant tissue has more penetrable, heterogeneous, and disordered vessels due to neoangiogenesis [9,33].
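Since several of the results below are reported on ADC maps derived from DWI, a worked version of the standard mono-exponential ADC estimate may help; the signal values in the example are hypothetical.

```python
import numpy as np

def adc(s0: np.ndarray, sb: np.ndarray, b: float) -> np.ndarray:
    """Mono-exponential ADC from a b=0 signal s0 and a diffusion-weighted
    signal sb acquired at b-value b (s/mm^2): ADC = -ln(sb/s0) / b.
    Restricted diffusion in dense malignant tissue attenuates the signal
    less, so the computed ADC is lower than in normal tissue."""
    return -np.log(sb / s0) / b

print(adc(np.array([1000.0]), np.array([300.0]), b=800.0))  # ~1.5e-3 mm^2/s
```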
Kam et al. [36] assessed the accuracy of mpMRI to predict PCa pathology. In their work, 235 patients underwent mpMRI with a 1.5 T or 3 T MRI. The results of mpMRI were compared with the final radical prostatectomy specimen to analyze the performance of mpMRI for significant prostate cancer (sPCa) detection. They reported the accuracy of mpMRI for the prediction of sPCa. Overall, the sensitivity, specificity, and positive predictive value (PPV) of mpMRI for the detection of sPCa were 91%, 23%, and 95%, respectively. In 2020, Ippolito et al. [37] reported the diagnostic accuracy of mp-MRI for PCa detection in 201 patients. Patients underwent mp-MRI examination with a 3 T MR scanner and a body coil, with T2WI, DWI, and DCE sequences. The sensitivity, specificity, and accuracy of PI-RADS for the detection of PCa were 65.1%, 54.9%, and 64.2% (55.1-72.7%), respectively.
Consequently, in a systematic review and meta-analysis, Zhao et al. [38] reported the diagnostic performance of mp-MRI. The meta-analysis included 10 articles. At a per-patient level, the pooled sensitivity, specificity, and AUC values for mpMRI were …

Figure caption (recovered): an early and clear enhancement on the DCE-MRI (a); a midline apex TZ lesion (b) was recognized as a high-possibility lesion, with low signal on ADC maps (d). This lesion was proven by transperineal biopsy (Gleason 5 + 4). A systematic TRUS biopsy was performed with negative cores. "Reprinted with permission from Ref. [34]. 2020, Springer". More details on "Copyright and Licensing" are available via the following link: https://link.springer.com/article/10.1007/s00330-020-06782-0 (accessed on 12 March 2020).
Machine Learning (ML)
ML includes unsupervised, semi-supervised, supervised, and reinforcement learning. In this study, the emphasis is placed on supervised methods that can be employed on class-labeled imaging data. There are three primary applications of ML in medical imaging for tumor diagnosis: localization, segmentation, and classification [16]. The use of a classification model usually includes three stages: training, validation, and testing. Figure 3 shows the flow diagram of a computer-aided diagnosis system, which begins with MRI procurement and finishes with ML analysis [39]. The purpose is first to develop a computer-aided diagnosis system based on regions of interest (ROIs) drawn by the radiologist. The radiologist then questions the system about a suspicious area, and the system returns a probability estimation of malignancy as a reply. For most computer-aided diagnosis systems, this approach can be partitioned into five fundamental steps: MRI acquisition, image segmentation, image processing (resampling, normalization, and discretization), feature extraction (extraction of multiple structural, statistical, and functional parameters), and classifier construction and evaluation (classifiers include linear and logistic regression, SVM, KNN, naive Bayes, ANN, DT, and RF) [40].
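To make the fifth step concrete, the following is a minimal scikit-learn sketch of classifier construction and cross-validated evaluation. The feature matrix and labels are synthetic placeholders, not data from any cited study.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 20))      # 200 ROIs x 20 extracted features (synthetic)
y = rng.integers(0, 2, size=200)    # stand-in for biopsy-confirmed labels

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"mean cross-validated AUC: {auc.mean():.2f}")  # ~0.5 on random data
```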
In this study, supervised machine learning algorithms were evaluated and their performances compared, in terms of accuracy and ROC-AUC, for prostate cancer diagnosis: classifying cancerous versus normal tissue and grading cancer.

The definitions and diagnostic performances of the supervised machine learning algorithms used for prostate cancer detection and prediction are provided below.
Detecting/Predicting PCa with mp-MRI Using Linear/Logistic Regression
The function of linear regression is to model a linear relationship between a numeric dependent variable and one or more independent variables. In logistic regression, instead of a linear relationship, a "logistic function" that varies from 0 to 1 is used to specify the model of the relationship between the dependent and independent variables. This technique is used for data classification. The main feature that distinguishes logistic regression from linear regression is that the dependent variable has two or more classes [16].
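A worked illustration of the logistic function may help; the coefficients below are hypothetical.

```python
import numpy as np

def logistic(x: float, b0: float = -2.0, b1: float = 0.8) -> float:
    """Map a linear predictor b0 + b1*x to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))

for x in (0.0, 2.5, 5.0):
    print(x, round(float(logistic(x)), 3))  # probabilities rise from ~0.12 to ~0.88
```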
Iyama et al. developed a logistic regression model to differentiate transition zone (TZ) cancers from benign prostatic hyperplasia (BPH) on mp-MRI. In this study, 60 patients with BPH or TZ cancer were enrolled. Patients were imaged on a 3 T MR scanner with a surface coil and underwent radical prostatectomy. The authors calculated the AUC of the logistic regression models with a leave-one-out cross-validation procedure [41]. In 2019, Kan et al., in a study of 346 patients with PI-RADS 3 lesions at two institutions, retrospectively collected data and performed external validation using a hospital dataset. Patients underwent prostate mpMRI with a 3 T scanner and a surface coil. Two radiologists interpreted the images using PI-RADS v2.1 standards. All PI-RADS 3 lesions were confirmed by another radiologist. Finally, all patients underwent both targeted and systematic biopsies to correctly classify the lesions seen in the MRI report. Subsequently, they reported the diagnostic performance of the logistic regression classifier [42].
Alam et al. reported the sensitivity and specificity of mp-MRI for PCa diagnosis and prediction using a modified logistic regression. A total of 387 patients (193 PCa patients and 194 who did not have PCa) were enrolled. Accuracy values were obtained for logistic regression in distinguishing cancerous tissue from normal tissue and in cancer prediction [43]. Tang et al. investigated the value of logistic regression combined with mp-MRI in detecting PCa. This study included 64 cases of PCa confirmed by biopsy. After PCa diagnosis, patients underwent radical prostatectomy. Cross-validation of the model was conducted on external or newly arrived data [44].
Detecting PCa with mp-MRI Using Support Vector Machines
Support Vector Machines (SVMs) are a supervised learning method used for regression and classification. The working basis of the SVM classifier is the linear classification of the data: among all separating boundaries, the one with the maximum margin of confidence is chosen. Assuming that the categories are linearly separable, the method obtains hyperplanes with the maximum margin separating the categories. In cases where the data are not linearly separable, the data are mapped to a space with larger dimensions using a suitable kernel function so that they can be linearly separated in this new space [45].
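The kernel idea can be sketched briefly. The data below are synthetic, not mp-MRI features; the example only illustrates why an RBF kernel can separate classes that a linear kernel cannot.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 5))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 2).astype(int)  # not linearly separable

for kernel in ("linear", "rbf"):
    acc = cross_val_score(SVC(kernel=kernel), X, y, cv=5).mean()
    print(kernel, round(acc, 2))  # RBF should clearly outperform linear here
```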
Niaf et al. [40] evaluated the performance of SVM for detecting PCa in the peripheral zone (PZ) based on mp-MRI. This study included a series of 42 cancer ROIs, 49 suspicious benign ROIs, and 124 nonsuspicious benign ROIs. Radical prostatectomy was used as the gold standard. The classifier's performance was assessed using a cross-validation method. The quantitative evaluation of the diagnostic performance of the SVM classifier was obtained for the differentiation of cancerous versus noncancerous tissues and the differentiation of cancerous versus suspicious tissues. Tang et al. [44] retrospectively evaluated 64 patients who underwent mp-MRI before radical prostatectomy to assess the value of AI-combined mp-MRI in PCa detection. SVM-mpMRI achieved a detection rate with an accuracy of 74.9% and an AUC of 0.82.
A recently reported study by Gravina et al. [46] evaluated ML procedures in the diagnosis of PCa in 109 patients with PI-RADS score 3 lesions, focusing on clinical-radiological features. Patients received mp-MRI and transrectal prostate biopsy. The SVM algorithm used a linear kernel. Ten-fold cross-validation was conducted to evaluate the classifier's performance.
Detecting PCa with mp-MRI Using k-Nearest Neighbors
The k-nearest neighbor (KNN) algorithm can be used for regression and classification problems, although it is most often used for classification. The KNN algorithm requires training data and a specified value of K to search for the K nearest data points using distance computations. In the classification mode, the algorithm calculates the distances between the point that needs to be labeled and its closest points, according to the specified value of K. The label of the desired point is then determined by the majority vote of these neighboring points. Different procedures can be used to compute this distance; one of the most well-known is the Euclidean distance. In the case of regression, the output is the average of the values of the K nearest neighbors [47].
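A minimal KNN sketch follows, with synthetic stand-ins for mp-MRI-derived features; K = 5 and the Euclidean metric are illustrative choices.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X_train = rng.normal(size=(100, 4))        # synthetic feature vectors
y_train = (X_train[:, 0] > 0).astype(int)  # synthetic labels

knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")
knn.fit(X_train, y_train)
print(knn.predict(rng.normal(size=(3, 4))))  # labels by 5-neighbor majority vote
```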
Anderson et al. trained and tested three diagnostic models: a logistic regression model, a KNN classification algorithm, and a combination of the two. The input data, generated from the multiparametric images of PCa patients, included the apparent diffusion coefficient (ADC), the volume transfer constant (Ktrans), a conventional average of T2 values, and the MRS score. They used leave-one-out cross-validation to separate the data set into a training set and a test set. Finally, they investigated the performance of the three models in detecting and classifying the degree of malignancy of PCa [48].
Various studies have used the KNN algorithm to diagnose prostate cancer using multiparametric images and noted its diagnostic performance. A study reported the diagnostic performance of the KNN with an accuracy of 78.75% [43]. Another study reported an AUC value of 0.88 (0.81-0.92) for the differentiation of cancerous from noncancerous tissues when combining the t-test property selection procedure with a KNN classifier [40].
Detecting PCa with mp-MRI Using Decision Tree/Random Forest
Decision trees (DT) and random forests (RF) can be used for both classification and regression problems. One of the advantages of a DT is that the entire trained model can be drawn; it is drawn upside down, with its root at the top. The DT is one of the most interpretable models in ML. It uses a series of decision rules, expanding from the first decision rule at the top (the root of the tree) to subsequent decision rules below, which are called nodes. In a DT, a decision rule is applied at each decision node, which then leads to new nodes. At the end of each branch, the tree reaches a 'leaf', which is the goal of the problem and determines the class. Meanwhile, an RF creates a forest randomly: the built 'forest' is a group of decision trees, often constructed using the 'bagging' method. The principal idea of bagging is that combining learning models improves the overall results of the model. Simply put, an RF builds multiple DTs and merges them to produce more accurate and stable predictions [49,50].
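A minimal random-forest sketch of the bagging-and-voting idea described above; the features and labels are synthetic placeholders for mp-MRI-derived inputs.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 10))
y = (X[:, :3].sum(axis=1) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=7).fit(X, y)
print(rf.predict_proba(X[:2]))           # class probabilities from the tree vote
print(rf.feature_importances_.round(2))  # which features drive the forest
```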
In 2021, Peng et al. [51] developed ML models, including a DT and an RF, for the diagnosis of PCa with a Gleason score ≥7 using mp-MRI, texture analysis, DCE-MRI quantitative analysis, and clinical parameters. In their study, a dataset of 194 patients was collected, and mp-MRI on a 1.5 T system with a combined spine and body-array coil was performed before the targeted biopsy. The ROIs in temporal validation were independently delineated by two radiologists (Doctor A and Doctor B, with 10 and 3 years of experience in prostate MRI, respectively). They then reported the accuracy values for Doctor A and Doctor B with the DT and RF models [51].
In several studies, the diagnostic performance of DT and RF using mp-MRI in PCa patients has been reported as follows: sensitivity, specificity, overall accuracy, and AUC under the default threshold for the lesion-based RF classifier of 0.613, 0.952, 0.860, and 0.832, respectively [42]; DT and RF accuracies of 77.95% and 92.84%, respectively [43]; and RF accuracy and AUC in 10-fold cross-validation of 77.98% and 83.32%, respectively [46].
Detecting PCa with mp-MRI Using Naive Bayes
The Naïve Bayes algorithm is a classification method based on the application of Bayes' theorem with the strong assumption that all predictors are independent of each other. It is considered one of the simplest predictive algorithms, and this simplicity, combined with very acceptable accuracy, is its main advantage [52].
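A minimal sketch of the idea, assuming Gaussian class-conditional distributions and invented feature values: scikit-learn's GaussianNB applies Bayes' theorem under the independence assumption described above and returns posterior class probabilities.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Tiny illustrative dataset: two features per ROI, treated as independent
# within each class (the "naive" assumption).
X = np.array([[0.8, 95], [0.9, 100], [0.85, 97],
              [1.6, 140], [1.7, 150], [1.65, 145]], dtype=float)
y = np.array([1, 1, 1, 0, 0, 0])      # 1 = cancerous, 0 = benign

nb = GaussianNB().fit(X, y)
# Posterior P(class | features) from Bayes' theorem with independent features
print(nb.predict_proba([[0.9, 98]]))  # probabilities for classes 0 and 1
print(nb.predict([[0.9, 98]]))        # -> [1]
```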
Alfano et al. developed a radiomics-based Naïve Bayes algorithm to distinguish cancerous from noncancerous prostate tissue using mp-MRI. They reported an AUC of 0.80 for a 5-feature Naïve Bayes classifier, validated using leave-one-patient-out cross-validation. Classifiers built to be invariant to shape differences between cancerous and noncancerous tissue performed similarly (AUC: 0.82). Performance for models trained and tested in the PZ (AUC: 0.75) was lower than in the central gland (AUC: 0.95) [53]. In the study mentioned in the previous sections, Niaf et al. [40] also assessed a Naïve Bayes classifier.
Detecting PCa with mp-MRI Using Artificial Neural Network
Artificial neural network (ANN) training can be divided into unsupervised, supervised, and reinforcement learning [24]. The ANN is one of the primary ML methods used for data classification. Its central premise is inspired by the way the biological nervous system works, such as the way the brain processes data and information to learn and create knowledge. An ANN consists of three layers: input, data-transforming (hidden), and output. Each layer contains a group of connected computational units (neurons). The network is trained to make accurate predictions by recognizing predictive properties in a set of labeled training data, while the outputs are compared with the true labels by an objective function [16]. Deep learning (DL), or deep neural networks (DNNs), are the more complex form of ANN with multilayer perceptrons. DNN algorithms require a significant amount of data and hardware with exceptionally high computing power to handle these data. In this approach, features are automatically and directly derived from the raw imaging data and optimally adjusted for the desired result. In medical imaging, DL is most often implemented with convolutional neural networks (CNNs) [16,54].
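The following toy forward pass illustrates the three-layer structure described above (input, hidden, output) with randomly initialized weights; the four input features are placeholders for measures such as ADC or Ktrans, and no training is performed.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One forward pass through a 3-layer network: input -> hidden -> output.
x  = rng.normal(size=4)                           # 4 input features
W1 = rng.normal(size=(8, 4)); b1 = np.zeros(8)    # input -> hidden weights
W2 = rng.normal(size=(1, 8)); b2 = np.zeros(1)    # hidden -> output weights

hidden = relu(W1 @ x + b1)            # hidden "neurons" transform the input
output = sigmoid(W2 @ hidden + b2)    # probability-like score for "cancer"
print(output)
# Training would compare `output` with the true label via an objective
# function (e.g., cross-entropy) and adjust W1, b1, W2, b2 by backpropagation.
```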
Kiraly et al. [55] proposed a deep convolutional neural network (DCNN) for PCa detection and classification. The conclusive diagnosis was determined by histopathological examination of tissue biopsies. The input data consisted of T2W, ADC, high b-value DWI, and Ktrans parameter maps (obtained using DCE) from the ProstateX challenge database for 202 patients. The AUC was obtained with 5-fold cross-validation. In 2018, Wang et al. developed a fully convolutional neural network using mp-MRI, including T2W, DWI, ADC, and Ktrans, for PCa detection. The dataset used for the development of tumor detection contained volumetric prostate MR images acquired from 79 patients, and the CNN achieved an accuracy of 0.85 [56]. In 2022, Pellicer Valero et al. [57] employed an automatic DL-based system that performed localization, segmentation, and Gleason grade group (GGG) prediction of PCa on mp-MRI. The prostate mp-MRI datasets of the Valencian Oncology Institute Foundation (IVO) and ProstateX, which is part of an ongoing online challenge, were used for the development and validation of the model; the ProstateX and IVO data comprised a total of 204 and 221 mp-MRIs, respectively. The lesion locations from the physicians' reports were confirmed by MR-guided biopsy. Data were divided into four PCa categories: GGG0 or benign, GGG1 (Gleason score (GS) 3 + 3), GGG2 (GS 3 + 4), and GGG3+ (GS ≥ 4 + 3). In the test dataset, at both lesion and patient level on the ProstateX and IVO datasets, the DL model achieved good diagnostic performance.
Discussion
In this review, we summarized the diagnostic performance of mp-MRI for diagnosing or predicting PCa with and without supervised ML algorithms. In recent years, many studies have applied ML methods to mp-MRI to detect and classify PCa, to compare the diagnostic performance of ML methods and radiologists, to provide intelligent methods that increase the speed of diagnosis and classification, reduce human error, and prevent unnecessary biopsies, and to assist radiologists in their diagnostic workflow. Diagnosing PCa and its grade is vital for controlling and treating the disease and preventing its spread to other tissues. Successful treatment of this cancer requires early diagnosis, for which pathological examination of a tissue sample is necessary [58]. According to the literature, 70% of PCa occurs in the PZ and 30% in the TZ. The TZ is also the site of BPH, a noncancerous growth that can result in urinary obstruction. BPH can mimic PCa on mp-MRI, rendering the diagnosis of PCa in the TZ difficult; this leads to unnecessary biopsies and to cancerous lesions that remain undiagnosed until they progress [1]. According to studies, PSA value, biological information, and mp-MRI without machine learning have low specificity (significant overdiagnosis) in PCa diagnosis, leading to unnecessary biopsies with their attendant infectious complications, psychological harm, and financial costs [59-61]. A radiologist's diagnostic performance is influenced by skill level, possibly by experience (the evidence is not definitive), and possibly by the PI-RADS version used.
In a study involving three radiologists with 7, 3, and 1 years of experience in diagnosing PCa, Campelli et al. reported no significant differences between the ROC curves for each protocol between the most experienced radiologist and the others [35].
Kam et al. compared mp-MRI cases using the technical and reporting specifications of PI-RADS version 1 and version 2. The sensitivity for prediction of significant PCa was lower in the PI-RADS version 1 (87%) cases when compared with PI-RADS version 2 (99%, p = 0.005) [36].
Therefore, given the factors that can affect radiologists' diagnostic performance, the need to reduce unnecessary biopsies, and the time spent on recognition and diagnosis, it is advisable to make use of AI with good performance. As noted above, AI describes the capability of a computer to model intelligent behavior and to reach a specific goal based on provided data. ML is one of its branches. In supervised machine learning, an observer provides data to the machine and labels the data; the input and output are specified, and the machine tries to learn a pattern from the input to the expected output. The supervised learning algorithms whose diagnostic performance was examined in this review include linear and logistic regression, k-nearest neighbors (KNN), support vector machines (SVMs), naive Bayes, decision trees (DT) and random forests (RF), and artificial neural network (ANN) techniques. Table 1 shows the diagnostic performance (accuracy or AUC) of mp-MRI in the detection of PCa with supervised machine learning algorithms as reported in several studies. Table 2 shows the results of several works that identified the algorithms with the best diagnostic performance among those studied. In one study, Kan et al. [42] used prostate mp-MRI and clinical-radiological features-based ML to determine whether a PI-RADS 3 lesion was benign, using logistic regression (LR), SVM, and RF classifiers. They reported that the RF classifier performed best on both lesion-based and patient-based datasets, with sensitivities of 0.613 and 0.857, overall accuracies of 0.860 and 0.713, and AUCs of 0.832 and 0.771, respectively. In another study, Gravina et al. [46] compared the performance of RF, SVM, and neural networks (NN) in the diagnosis of PCa in patients with PI-RADS score 3 lesions, with attention to clinical-radiological features; the patients underwent mp-MRI. Overall, they reported that RF performed best, with an AUC of 83.32%, while the AUCs of the NN and SVM were 74.51% and 72.76%, respectively. Peng et al. [51] established ML models, including LR, a classical decision tree (cDT), RF, and SVM, for the diagnosis of clinically significant prostate cancer (csPC) using mp-MRI and clinical parameters, and reported that the RF and LR models had better classification performance in the imaging-based diagnosis of csPC. Donisi et al. combined radiomics and ML approaches (RF, NB, and KNN) to distinguish csPC lesions on T2W and ADC map images. They concluded that the best algorithm was RF, due to its high accuracy (77.9%) and AUC (0.73); however, NB obtained the highest sensitivity (56.6%), while KNN had the highest specificity (91.9%) [39]. Chiu et al. [62] compared the AUC values of PSA, PSA density, and logistic regression, SVM, and RF models built from PSA, digital rectal examination (DRE), and transrectal ultrasound (TRUS) prostate volume information in the prediction of any-grade PCa. Their results revealed that, in PCa prediction, all ML techniques achieved better AUCs than PSA (AUC 0.67, 95% CI 0.63-0.71) or PSA density (AUC 0.75, 95% CI 0.71-0.80). ML techniques using the same clinical parameters can thus improve PCa prediction compared with PSA and PSA density and prevent up to 50% of unnecessary biopsies. The RF model achieved the best AUC among the ML models (AUC 0.82, 95% CI 0.78-0.86).
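The kind of model comparison summarized above can be reproduced in outline with scikit-learn; the sketch below uses synthetic data (not any of the cited cohorts) to compare 10-fold cross-validated AUCs of LR, SVM, and RF, the three classifiers most often benchmarked in these studies.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for a clinical-radiological feature table
X, y = make_classification(n_samples=300, n_features=12, n_informative=6,
                           random_state=1)

models = {
    "LR":  make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "SVM": make_pipeline(StandardScaler(), SVC(probability=True)),
    "RF":  RandomForestClassifier(n_estimators=200, random_state=1),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```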
Therefore, according to the studies in the literature, the RF model is likely the best ML model compared with LR, SVM, NB, NN, KNN, and cDT. The reasons for this superiority can be found in the RF model's features and advantages compared with other ML models. RF is less susceptible to overfitting and reduces the overfitting seen in single DTs, helping to improve accuracy. It provides feature importance as an output, which is very useful for model analysis. It performs well with both categorical and continuous values for classification and regression problems, handles missing values in the data automatically, and, unlike a single DT, its output does not change significantly with minor changes in the data. Meanwhile, SVM and KNN are prone to overfitting and sensitive to noise; in addition, KNN is sensitive to missing data, and SVM is sensitive to large datasets. NB has difficulties with complex datasets, as it is effectively a linear classifier. Logistic regression assumes linearity between the dependent and independent variables, and truly linear relationships between variables are rare [16]. Despite this, in a study on the prediction of PCa, it was shown that RF and LR both have better accuracy (90%) than other ML algorithms, including SVM, KNN, and NB [63]. In another previously mentioned study, it was reported that RF and LR perform better than SVM and cDT in the diagnosis of csPC using mp-MRI [51].
Currently, deep neural networks (DNNs) are the most advanced ML models in various domains, outperforming other established modeling methods on several important metrics [16]. Wang et al. compared the performance of a DL (DCNN) algorithm and a non-DL (SVM) method for differentiating cancerous from noncancerous prostate tissue using T2WI, T1WI, DCE-MRI, and DWI images. The AUC, sensitivity, and specificity for the DL and non-DL methods were 0.84, 69.6%, and 83.9% versus 0.70, 49.4%, and 81.7%, respectively [56]. In a non-medical study, the performance of DL (DNN and CNN) and RF methods was compared for predicting monthly pan evaporation rates. The results revealed that higher accuracy was obtained with the DL models (DNN and CNN) than with the RF model. This can be attributed to the DL characteristic of revealing hidden features, meaning that DL can be regarded as a better method for prediction [64]. Tabares-Soto et al. compared the performance of ML and DL algorithms for the classification of cancer types based on microarray gene expression data. They obtained the highest tumor identification accuracies for CNN (94.43%) and LR (90.6%) using 10-fold cross-validation [65]. Non-deep-learning methods such as KNN, BNs, SVM, DTs, and ANNs depend on a feature extraction step that typically describes images using texture, gradient, Gabor filters, and so on. In contrast, the superior performance of deep learning stems from learning image features automatically in deep networks.
In recent years, high-frequency ultrasound imaging (micro-ultrasound) has been introduced in the urological field, and many advances have been made as a result. Various studies have shown that high-resolution micro-ultrasound has the same or even higher prostate cancer detection ability as mp-MRI [66,67]. Compared with mp-MRI, this method has advantages such as easy access, simplicity, and lower cost [67]. We suggest a review showing the diagnostic performance of micro-ultrasound in the detection of prostate cancer with supervised or unsupervised machine learning methods and comparing it with mp-MRI.
Conclusions
PSA value, biological information, and mp-MRI without machine learning have low specificity, causing significant overdiagnosis in prostate cancer diagnosis and leading to unnecessary biopsies with side effects such as infectious complications, psychological harm, and financial costs. However, when this information is given as input data to supervised machine learning algorithms, it increases the sensitivity, specificity, and accuracy of prostate cancer diagnosis and prediction. Deep learning, random forest, and logistic regression algorithms appear to have the best performance among supervised machine learning techniques.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare that they have no conflict of interest.
"year": 2023,
"sha1": "f9fcf82dea28c9679ed344fc24f6b3eb610feb8c",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "80c0788568c088fabbbc8e7990052ca2baff4665",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Green synthesis, characterization and catalytic activity of natural bentonite-supported copper nanoparticles for the solvent-free synthesis of 1-substituted 1H-1,2,3,4-tetrazoles and reduction of 4-nitrophenol
In this study, Cu nanoparticles were immobilized on the surface of natural bentonite using Thymus vulgaris extract as a reducing and stabilizing agent. The natural bentonite-supported copper nanoparticles (Cu NPs/bentonite) were characterized by FTIR spectroscopy, X-ray diffraction (XRD), X-ray fluorescence (XRF), field emission scanning electron microscopy (FE-SEM), energy dispersive X-ray spectroscopy (EDS), transmission electron microscopy (TEM), selected area electron diffraction (SAED) and Brunauer–Emmett–Teller (BET) analysis. Afterward, the catalytic performance of the prepared catalyst was investigated for the solvent-free synthesis of 1-substituted 1H-1,2,3,4-tetrazoles and reduction of 4-nitrophenol (4-NP) in water. It was found that the Cu NPs/bentonite is a highly active and recyclable catalyst for related reactions.
Introduction
The development of new methodologies for the preparation of heterogeneous catalysts is of great interest in organic synthesis [1]. Metal nanoparticles immobilized on supports such as carbon, zeolites, clay, metal oxides, graphene, etc., have been successfully applied as heterogeneous catalysts due to their interesting structures and properties [2-6]. The extremely small scale of nanoparticles (NPs) is the main factor behind their remarkable reactivity compared with the corresponding bulk metals [7]. However, most of these supports fail to provide highly dispersed and stable metal NPs.
In recent decades, the use of natural bentonites has been studied due to their high specific surface area, low cost, ordered structure, thermal stability, high safety, high exchange capacity and intercalation abilities [8]. Smectites are major clay minerals in bentonite with an aluminum octahedral sheet sandwiched between two silica tetrahedral sheets [9]. These layered materials are very promising supports for the design and preparation of green catalysts [10].
Tetrazoles are among the heterocycles most widely applied in medicine and industry owing to their structural versatility, such as their use as an isosteric substituent for carboxylic acids [11], as analytical reagents, and in biological applications [12,13]. Therefore, progress in the synthesis of these heterocycles plays an essential role in organic, medicinal, and synthetic chemistry [14-16]. Among tetrazoles, 1-substituted 1H-1,2,3,4-tetrazoles have attracted particular interest due to their biological activity [16].
The plant biosynthesis of nanoparticles immobilized on natural supports is a subject of new research as little has been published on this topic [17,18]. Therefore, the use of plants as a natural and biological source for biosynthesis of nanoparticles should be explored.
As a continuation of our research on heterogeneous catalysts [20,21], we report a new protocol for the preparation of Cu NPs/bentonite using Thymus vulgaris extract and its catalytic application as a novel heterogeneous catalyst for the synthesis of 1-substituted 1H-1,2,3,4-tetrazoles and the reduction of 4-nitrophenol (4-NP). Cu NPs/bentonite was found to be a highly active and recyclable catalyst for these reactions. The results obtained are presented and described here.
Experimental
Instruments and reagents
All reagents were purchased from Merck and Sigma-Aldrich and used without further purification. The bentonite and the Thymus vulgaris plant used in this work were collected from the Vartoon region (Isfahan, Iran). IR spectra were recorded on a JASCO FT/IR-6300 instrument in KBr pellets. NMR spectra were obtained on a Bruker Avance 90 MHz spectrometer, using tetramethylsilane (TMS) as an internal standard. Melting points were taken in open capillary tubes with a Büchi 510 melting point apparatus and were uncorrected. Thin-layer chromatography (TLC) was performed on silica gel Polygram SIL G/UV 254 plates. XRD analysis was performed on a Philips powder diffractometer (type PW 1373 goniometer) equipped with a graphite monochromator crystal. XRF analysis of the catalyst was performed with a Bruker S4 instrument. The morphology and particle dispersion were investigated by FE-SEM (Cam Scan MV2300). The chemical composition of the modified bentonite was measured by EDX performed in the SEM. TEM images were obtained using a Philips EM-2085 transmission electron microscope at an accelerating voltage of 100 kV. Nitrogen adsorption isotherms were recorded on a volumetric gas adsorption apparatus (BEL Japan, Belsorp-max). Pore distributions and pore volumes were calculated from the adsorption branch of the N2 isotherms using the Barrett-Joyner-Halenda (BJH) model, and the specific surface area was calculated from the BET equation.
Preparation of Thymus vulgaris extract
5 g of dried Thymus vulgaris leaf and pedicle powder was extracted by boiling in 30 mL of double-distilled water for 15 min, and the aqueous extract was centrifuged at 7000 rpm to obtain the supernatant as the extract. This extract solution was used for the synthesis of Cu NPs/bentonite.
Preparation of Cu NPs/bentonite
For the green synthesis of the Cu NPs/bentonite composite, 10 g of natural bentonite was dispersed in 100 mL of 0.2 M CuSO4·5H2O under continuous stirring. After separation of the quartz and feldspar precipitated in the container, the above extract was added. This dispersion was stirred at 80 °C for 4 h. The prepared Cu NPs/bentonite was separated by filtration, washed several times with deionized water and absolute ethanol, and dried at 100 °C for 2 h.
General experimental procedure for the synthesis of 1-substituted 1H-1,2,3,4-tetrazoles
A mixture of amine (2 mmol), NaN3 (2 mmol), triethyl orthoformate (2.4 mmol) and Cu NPs/bentonite (0.05 g) was stirred and heated at 120 °C for 3 h. After completion of the reaction (monitored by TLC), the reaction mixture was cooled to room temperature, diluted with cold water (5 mL) and extracted with ethyl acetate (3 × 10 mL). The catalyst was filtered off and washed with water and ethanol. The combined organic layers were washed with brine, dried over anhydrous MgSO4, concentrated and crystallized from EtOAc-hexane to give the different tetrazoles. All compounds were known and were characterized by spectral analysis or melting points [14].
Catalytic reduction of 4-NP
Typically, 25 mL of 4-NP aqueous solution (2.5 mM) was mixed with 15 mg of the Cu NPs/bentonite in a beaker, stirring constantly for 2 min. Next, 25 mL of freshly prepared aqueous NaBH 4 (0.25 M) was added and the mixture was allowed to stir at room temperature until the deep yellow solution became colorless. 1.0 mL of the solution was extracted and diluted to 25 mL for further UV-vis absorption analysis at certain intervals. The concentration of 4-NP was determined spectrophotometrically at a wavelength of 400 nm.
Results and Discussion
Preparation and characterization of Cu NPs/bentonite
The Cu NPs/bentonite composite was prepared by a simple and inexpensive method involving the immobilization of Cu NPs on bentonite using an aqueous extract of Thymus vulgaris, without the use of any special capping agents or surfactant templates. The plant not only functioned as a reductant but also served as a stabilizer for the formation of Cu NPs on the surface of natural bentonite. The obtained Cu NPs/bentonite was fully characterized by XRF, FTIR, XRD, FE-SEM, EDX, TEM, SAED and BET analyses.
As depicted in Figure 1, the UV-vis spectrum of the plant extract showed π→π* transitions of aromatic compounds. Given the presence of monoterpenes such as thymol and carvacrol, these transitions are probably related to the compounds involved in the reduction process and in the formation of copper nanoparticles deposited on the bentonite surface through π-electron interactions [17,21]. Hence, the extract of Thymus vulgaris acts as both the reductant and the stabilizer.
The XRF results in Table 1 show the variations in the chemical composition of bentonite upon successful modification with Cu (4.3 wt %). The presence of Cu and the absence of Na2O in these results, as compared with the XRF results of natural bentonite [22], reflect ion exchange of Na+ ions from the bentonite with Cu2+ ions, followed by reduction by the Thymus vulgaris extract; the reduction and ion exchange processes occur simultaneously. FTIR spectra of the natural (a) and modified (b) bentonite are presented in Figure 2. As can be seen, four typical groups of IR bands can be clearly distinguished in both spectra. These have been previously reported and assigned to the following major vibrations: lattice vibrations occur below 400 cm−1, pseudo-lattice vibrations are observed at about 500-700 cm−1, and internal vibrations of Si-O(Si) and Si-O(Al) in tetrahedral or alumino- and silico-oxygen bridges emerge at higher wavenumbers (Figure 2b). In the FTIR spectrum of the modified bentonite sample (Figure 2b), it is notable that the broad water band near 3430 cm−1 has shifted to higher frequencies in comparison with the natural bentonite (Figure 2a). Also, a new peak at 1385 cm−1 is observed for the modified bentonite, the origin of which was interpreted in terms of substitution of the naturally present alkali metals with Cu(II) ions as a result of the modification [28]. These results confirm that structural changes occurred during the modification of the bentonite. Further proof was obtained from the XRD and EDS results.
The XRD patterns of the raw bentonite (Figure 3a) and modified bentonite (Figure 3b) are shown in Figure 3. The XRD pattern in Figure 3a shows the characteristic reflections of montmorillonite; the other peaks are impurities corresponding to cristobalite, feldspar and illite [25]. It would be expected that, after modification of bentonite with Cu, a number of new peaks would appear at 2θ = 40-55° [29]. However, as can be seen in the XRD pattern of Figure 3b, no specific peaks were obtained for the Cu NPs. This might be due to the low loading or high dispersion of Cu NPs in the bentonite matrix; the existence of Cu NPs can, however, be confirmed by the EDS technique. Figure 4 shows the FE-SEM images of Cu NPs/bentonite. The sheet-like structure of montmorillonite can be seen in the SEM images, which show that the adsorption of Cu NPs can occur on both the external surface and the interlayer spaces. Indeed, in the EDS spectrum (Figure 5), peaks related to Cu (6.07% w/w), Si, Al, Mg and O were observed.
The size of the as-prepared Cu NPs/bentonite was further examined by TEM. The histogram of the particle size distribution of Cu nanoparticles on the surface of bentonite is given in Figure 6a-c. The average size of the Cu NPs on bentonite was 56 nm, and the particles exhibited spherical morphology with a low tendency for agglomeration. Figure 6d shows the selected area electron diffraction (SAED) pattern of the as-prepared Cu NPs/bentonite, which indicates that the nanoparticles are crystalline and mainly composed of fcc Cu. The SAED patterns of the Cu NPs/bentonite sample were used to characterize the planes, interplanar spacing, nanostructure and zone axis. These results verified the successful synthesis of Cu NPs on bentonite: in the presence of the Thymus vulgaris extract as a reducing and stabilizing agent, the Cu2+ ions are converted to Cu NPs and immobilized on the surface of bentonite [6,20,30-33]. Surface area measurements were also performed on Cu NPs/bentonite. Figure 7 shows the N2 adsorption-desorption isotherm and BJH pore size distribution plot of Cu NPs/bentonite. The results indicate that the surface area, total pore volume and average pore diameter were 19.1 m2/g, 0.071 cm3/g and 14.79 nm, respectively. For a similar Iranian bentonite [34], the surface area, total pore volume and average pore diameter were 31.8 m2/g, 0.093 cm3/g and 11.7 nm, respectively. The surface area and total pore volume of Cu NPs/bentonite thus decreased compared with the natural bentonite, whereas its pore size increased.
As proved by control experiments, no reaction occurs in the absence of Cu NPs/bentonite. The best result was obtained with a 2.0:2.0:2.4 molar ratio of aniline/sodium azide/triethyl orthoformate in the presence of Cu NPs/bentonite (0.05 g) under solvent-free conditions at 120 °C.
A series of primary aromatic amines were converted into the corresponding 1-substituted tetrazoles with sodium azide and triethyl orthoformate using Cu NPs/bentonite in high yields under thermal and solvent-free conditions (Table 2). The influence of various substituents in the ortho, meta or para positions on the products was examined; amines containing both electron-releasing and electron-withdrawing groups underwent the conversion in good to excellent yields.
The Cu NPs/bentonite likely plays an important role in the preparation of 1-substituted 1H-1,2,3,4-tetrazoles as a Lewis acid, and a plausible mechanism is shown in Scheme 3 [14]. The breakdown of the C-OEt bond in triethyl orthoformate facilitates the elimination of EtOH, and the final 1-substituted 1H-1,2,3,4-tetrazole is produced. The energy required for these cleavages and for product formation is provided by the heat (Scheme 3).
The recyclability of Cu NPs/bentonite in the preparation of 1-phenyl-1H-1,2,3,4-tetrazole was also investigated. As shown in Figure 8, no significant decrease in catalytic activity was observed for the recovered catalyst after four catalytic cycles. Compared with other literature methods for the synthesis of 1-substituted 1H-1,2,3,4-tetrazoles [35], the present work is competitive because it was carried out without solvent, in a short reaction time, and with a catalyst prepared from bentonite, an inexpensive natural material. The concentration of 4-NP was monitored at given intervals using UV-vis spectroscopy, and the results are shown in Figure 9. The original absorption peak of 4-NP is centered at about 320 nm and shifts to 400 nm after addition of the NaBH4 solution, owing to the formation of p-nitrophenolate ions under the alkaline conditions created by NaBH4 [36]. The absorption peak at 400 nm fully disappeared in the presence of 15 mg of catalyst after a 90 s induction period. Under similar conditions, the study was carried out at lower concentrations of NaBH4; the reaction time was 420 and 515 s in the presence of 2.5 × 10−2 and 1.3 × 10−2 M NaBH4, respectively. In addition, a reference experiment of Cu NPs/bentonite with only 4-NP was carried out: after 12 h, 4-NP could not be reduced to 4-AP by the catalyst in the absence of NaBH4. The catalyst could be recovered and reused several times without significant loss of catalytic activity. In the presence of natural bentonite alone (15 mg), no significant color change was observed in a similar reduction process within 3 h. The catalytic reduction process using Cu NPs/bentonite can be summarized as follows: diffusion and adsorption of both BH4−, as a strong nucleophile, and 4-NP onto the Cu surface; electron transfer from BH4− to 4-NP; and, finally, desorption of the generated 4-aminophenol from the surface of the catalyst [37,38].
Since catalysis takes place on the Cu surface, Cu NPs/bentonite is much more reactive than the unmodified natural bentonite. Furthermore, the reduction of 4-NP over the catalyst in the presence of a large excess of NaBH4 relative to 4-NP can be treated with a pseudo-first-order equation [18]: ln(Ct/C0) = ln(At/A0) = −kt, where Ct is the concentration of 4-NP at reaction time t, C0 is the initial concentration of 4-NP, At is the absorbance at time t, and A0 is the absorbance at t = 0. From the linear relation of ln(At/A0), shown in Figure 10, we found that the rate constant (k) for this reaction is 0.041 s−1, which is comparable to that previously reported [39-
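As a hedged illustration of how k is obtained from such data, the sketch below fits the linearized pseudo-first-order model to made-up absorbance readings (not the measured values of this study); the slope of ln(At/A0) versus t gives −k.

```python
import numpy as np

# Hypothetical absorbance readings at 400 nm (A0 at t = 0), for illustration only
t = np.array([0, 15, 30, 45, 60, 75, 90], dtype=float)       # seconds
A = np.array([1.00, 0.54, 0.29, 0.16, 0.086, 0.046, 0.025])  # a.u.

# ln(At/A0) = -k*t  ->  the slope of a linear fit gives -k
slope, intercept = np.polyfit(t, np.log(A / A[0]), 1)
print(f"k = {-slope:.3f} s^-1")   # ~0.041 s^-1 for these made-up data
```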
Conclusion
In this study, copper nanoparticles supported on natural bentonite were prepared using a Thymus vulgaris extract as a reducing and stabilizing agent and were characterized. This catalyst was found to be an efficient and recyclable heterogeneous catalyst for the synthesis of 1-substituted 1H-1,2,3,4-tetrazoles and the reduction of 4-NP under mild conditions. The Cu NPs/bentonite composite remained stable over several reaction runs.
"year": 2015,
"sha1": "06f3765111b7fae90464ff251ab9c6e497f32d4f",
"oa_license": "CCBY",
"oa_url": "https://www.beilstein-journals.org/bjnano/content/pdf/2190-4286-6-236.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "06f3765111b7fae90464ff251ab9c6e497f32d4f",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
Serum protein electrophoretic pattern in one-humped camels (Camelus dromedarius) in Tripoli, Libya
The aim of this study was to characterize the serum protein capillary electrophoretic pattern in apparently healthy adult male (age: 3-7 years) dromedary camels and also to evaluate total protein and albumin levels using an automated analyzer. Blood samples were taken from 20 camels; 5 ml of blood was collected from the jugular vein, and serum was separated from the samples by centrifugation. Capillary electrophoresis of serum proteins identified six protein fractions in adult camels, comprising albumin, alpha1, alpha2, beta1, beta2 and gamma globulins; serum levels of these fractions were 3.9±0.04 g/dl, 0.16±0.01 g/dl, 0.39±0.03 g/dl, 0.515±0.03 g/dl, 0.205±0.01 g/dl and 0.61±0.04 g/dl, respectively. The total protein concentration was 65.42±0.62 g/L, while the albumin/globulin (A/G) ratio was 2.4±0.14. The present study thus demonstrates six peaks with minicapillary electrophoresis, and the results obtained were compared and interpreted in the light of findings reported by other investigators in camels.
Introduction
Camels represent an important sector of the livestock in Libya. They are valued mainly for their milk and meat production; the total number of camels in Libya is 62,125 (FAO, 2016). For a long time, the camel has been a neglected animal in terms of science and research (Wernery and Kaaden, 1995). Hematological and biochemical analysis of blood often provides valuable information for the diagnosis and surveillance of general health (Nyang'ao et al., 1997). Blood serum is composed of hundreds of different proteins, and the concentrations of total protein and several specific proteins are of clinical value (Joliff, 1992). Protein electrophoresis is a standard technique in clinical biochemistry to separate and determine the protein components in plasma or serum. Analysis of serum proteins by electrophoresis resolves six bands, comprising albumin, alpha1, alpha2, beta1, beta2 and gamma globulins (Vaden et al., 2009). The variability in the concentration of serum proteins poses significant challenges in understanding different diseases. Chaudhary et al. (2003) reported that serum protein electrophoresis on agarose gel in camels produced six peaks comprising one albumin fraction and α1, α2, β1, β2 and γ-globulin fractions. However, others (Elkhair and Hartmann, 2014) reported that the serum electrophoresis pattern of the camel obtained by capillary electrophoresis showed five peaks (one albumin and α1, α2, β and γ-globulin fractions) and is influenced by age. These variations likely arise from genetic or environmental factors or from differences in the sensitivity of the techniques. This study was carried out to determine the normal electrophoretic pattern, total serum proteins, and albumin in blood collected from apparently healthy adult male dromedary camels (Camelus dromedarius).
Animals and blood sample collection
Twenty adult (3-7 years old) healthy male camels were chosen for the study from a herd with good management and feeding, located in the suburban area of Tripoli, Libya. Five ml of blood was collected from the jugular vein of each camel into silicon-coated vacuum containers for biochemical studies. Serum was separated by centrifugation at 3000 rpm for 10 minutes and stored at 2-8 degrees Celsius until further analysis after five days.
Laboratory methods
Albumin and total protein were analyzed on an autoanalyzer Cobas c 311 (Roche/Hitachi cobas c systems). Serum proteins were analyzed on a Sebia minicapillary system (Sebia, France). The Sebia system uses the principle of capillary electrophoresis in free solution, in which charged molecules are separated by their electrophoretic mobility at a specific pH in an alkaline buffer. For separation, each sample is diluted in a dilution buffer and the capillaries are filled with the separation buffer; samples are then injected by aspiration into the anodic end of the capillary. Protein separation is then performed at high voltage, and direct detection and quantification of the different protein fractions is carried out at a specific wavelength at the cathodic end of the capillary (Landers, 1995; Gay-Bellile et al., 2003; Sam and Larry, 2006; Tariq et al., 2016). The capillary instrument enables the separation of serum protein into six major fractions (albumin, alpha1, alpha2, beta1, beta2, and gamma) and calculates the albumin:globulin (A/G) ratio.
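For illustration, the sketch below shows how fraction percentages and the A/G ratio follow from the measured fraction concentrations. The numbers used are the group means quoted in the abstract, so the resulting ratio need not coincide exactly with the mean of the per-animal ratios reported in the Results.

```python
# Illustrative only: fraction concentrations (g/dL) of the kind reported in
# this study; the actual per-animal values come from the instrument output.
fractions = {"albumin": 3.9, "alpha1": 0.16, "alpha2": 0.39,
             "beta1": 0.515, "beta2": 0.205, "gamma": 0.61}

globulins = sum(v for k, v in fractions.items() if k != "albumin")
total = fractions["albumin"] + globulins
for name, value in fractions.items():
    print(f"{name}: {100 * value / total:.1f} %")
print(f"A/G ratio = {fractions['albumin'] / globulins:.2f}")
```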
Statistical Analysis
The means (x̄) and standard errors of the mean (SEM) were calculated using descriptive statistical procedures with SPSS for Windows version 11.0 (SPSS, Chicago, USA).
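A minimal sketch of the computed statistics, with invented readings in place of the study data: the SEM is the sample standard deviation divided by the square root of the number of samples.

```python
import numpy as np

def mean_sem(values):
    """Mean and standard error of the mean (sample SD / sqrt(n))."""
    v = np.asarray(values, dtype=float)
    return v.mean(), v.std(ddof=1) / np.sqrt(v.size)

# Hypothetical total protein readings (g/L) from 6 camels, for illustration only
m, sem = mean_sem([64.1, 66.3, 65.0, 67.2, 64.8, 65.9])
print(f"{m:.2f} ± {sem:.2f} g/L")
```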
Results and Discussion
Sebia minicapillary electrophoresis of serum protein revealed six fractions: albumin, two alpha globulins (alpha1 and alpha2), two beta globulins (beta1 and beta2), and a gamma globulin fraction in adult male camels. The percentages and concentrations of the six protein fractions and the A/G ratio are shown in Fig. 1. The protein fractions, total protein and albumin levels are presented in Tables 1 and 2.
The mean±SEM values of the various serum protein fractions, total protein, and albumin are discussed here in relation to other findings reported in camels. Serum proteins have been studied intensively in many animal species (Osbaldiston, 1972; Keay and Doxey, 1982), yet little work has been done on the serum protein electrophoresis of camels. The current study focused on biochemical analysis of serum proteins, which provides valuable information for the diagnosis of some diseases. At birth, both albumin and globulin are generally low; following ingestion of colostrum, globulin concentration increases because of absorption of immunoglobulins. In addition, serum protein levels have often been reported to decrease during pregnancy and lactation (Jain, 1993). Albumin and globulin production increases as the animal matures and reaches adulthood (Jain, 1993). Total protein is generally higher in old animals because of slight decreases in albumin (Kaneko, 1989) and increases in alpha and gamma globulins (Batamuzi et al., 1996). Because the proteins of an individual or a species are synthesized under genetic control, variations in proteins are to be expected between individuals and between species (Keay and Doxey, 1982). Inflammatory disorders have significant effects on plasma proteins: protein uptake from plasma increases during tissue repair after injury, and tissue inflammation causes increased vascular permeability and leakage of protein (primarily albumin) into the extravascular space (Kaneko, 1989). With both stress and inflammation, there is increased production of an assortment of globulins that migrate in the α and β fractions as part of the acute phase response (Harvey and West, 1987; Eckersall, 1995). Decreased serum protein may also occur in patients whose nutritional requirements are not met as a result of malabsorption or maldigestion syndromes (Werner et al., 1994). The concentration of total serum proteins (mean 65.42±0.62, range 59.59-70.29) recorded in this study is in agreement with earlier reports (Higgins and Kock, 1986; Wernery et al., 1986; Chaudhary et al., 2003; AL-Busadah, 2007). Electrophoresis of serum samples produced six peaks comprising albumin, alpha1, alpha2, beta1, beta2 and gamma globulins. The results of the present study on mature camels were compared with previous studies (Chaudhary et al., 2003; Al-Sultan, 2008) in which a significant difference between adult and young camels was reported; this difference may be due to physiological factors, because the concentration of total proteins and albumin increases with age owing to a progressive increase in globulins.
This study validated the use of the Sebia minicapillary electrophoresis technique for the fractionation of serum proteins in dromedary camels. The minicapillary electrophoresis revealed six peaks comprising one albumin fraction and alpha1, alpha2, beta1, beta2 and gamma globulin fractions, in accordance with the results of Chaudhary et al. (2003), who reported that serum protein electrophoresis on agarose gel in camels produces six peaks, but in contrast with Elkhair and Hartmann (2014) and Ahmadi-Hamedani et al. (2014). This difference can be attributed to the electrophoresis method used. The reference range of serum total proteins obtained in the present study for adult camels is comparable to values reported previously (Abdalla et al., 1988; Mohamed and Hussein, 1999; Elkhair and Hartmann, 2014). However, the range obtained in the current study is lower than that reported by Bogin (2000).
In the present study, the A/G ratio range was 1.36 -3.52 (mean 2.24±0.14). This result is in agreement with the results of Ahmadi-Hamedani et al. (2014) recorded in adult camels, but wider than previous reports published by others (KHadjeh, 1998;Chaudhary et al., 2003;Patodkar et al., 2010;Elkhair and Hartmann, 2014). Conclusion Serum protein electrophoresis and determination of absolute values of serum protein fractions may be useful diagnostic tool. Yet little work has been done to study the serum proteins in Libyan camels. The present study use fully automated analyzer to determined total proteins and albumin. Capillary electrophoresis identified six peaks of protein fractions and A/G ratio in adult male dromedary camels in Libya. This result is in agreement with previous reports published and need more extensive studies. | 2018-04-03T01:46:04.745Z | 2018-01-16T00:00:00.000 | {
"year": 2018,
"sha1": "bedbde4dcd2f885d1b555b3573f8dda8a6929ae3",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.ajol.info/index.php/ovj/article/download/165482/154941",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bedbde4dcd2f885d1b555b3573f8dda8a6929ae3",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Prostaglandin production by rheumatoid synovial cells: stimulation by a factor from human mononuclear cells.
Human peripheral blood mononuclear cells (lymphocyte-monocyte) in culture release a soluble factor which can stimulate, up to 200-fold, the production of prostaglandin E2 by isolated, adherent, rheumatoid synovial cells. Production of the factor by the mononuclear cells is enhanced by phytohemagglutinin. This factor is similar in apparent mol wt (10,000-20,000) to that which also stimulates collagenase production by the same cells.
Human peripheral blood mononuclear cells were separated on sodium metrizoate/Ficoll gradients (Nyegaard and Co., Oslo, Norway) (11). The mononuclear cells were cultured at 37°C, in an atmosphere of 95% air and 5% CO2, at 2 × 10⁶ cells/ml in 6-cm diameter polystyrene Petri dishes in Dulbecco's modification of Eagle's medium with 10% fetal bovine serum (DMEM 10% FCS). After 24 h the nonadherent lymphocytes were separated from the adherent monocytes, centrifuged, resuspended in DMEM 10% FCS, and incubated for a further 3 days with or without phytohemagglutinin (PHA) (Wellcome Research Laboratories, Kent, England). Although most monocyte-macrophages were removed by this procedure, it was not determined to what extent the nonadherent population, here referred to as lymphocytes, was still contaminated by monocyte-macrophages. The lymphocyte-free supernatant medium (LM), prepared by centrifugation (300 g for 10 min), was used as the source of the stimulating factor.
Chromatography of LM. Before gel filtration, cell-free supernatant media from 9 unstimulated lymphocyte cultures and 10 cultures of lymphocytes stimulated with PHA were pooled (total vol 450 ml). Cells were from both normal and rheumatoid subjects. The pooled medium was dialyzed for 3 days against 100 volumes of 0.04% sodium azide in distilled water, followed by distilled water alone. The retentate was removed, freeze-dried, and kept at -30°C until use. A solution (2.5 ml) of the dried powder in the column buffer (60 mg/ml) was then applied to a calibrated column (36.5 × 2.5 cm) of Ultrogel AcA 54 (LKB Instruments, Inc., Rockville, Md.), equilibrated and eluted with buffer (10 mM Tris-HCl, pH 7.5, 165 mM NaCl, and 5 mM CaCl2). The eluant fractions were diluted in DMEM 10% FCS and sterilized by Millipore filtration before incubation with ASC.
Cell Counting and [3H]Thymidine Uptake. Cells were counted using a Coulter counter (Coulter Electronics Inc., Hialeah, Fla.). Lymphocyte [3H]thymidine uptake was measured with and without PHA (5 μg/ml) by adding 5 μCi/0.5 × 10⁶ cells/0.2 ml medium for the last 16 h of a 72-h incubation period. Isolation of [3H]DNA was performed on glass fiber strips after filtration using the MASH-II apparatus (Microbiological Associates, Bethesda, Md.).
Results
The response of production of PGE2 and collagenase by ASC to LM in the presence and absence of indomethacin is shown in Table I. Marked stimulation by LM of both PGE2 and collagenase was observed in the absence of indomethacin. Whereas indomethacin blocked production of PGE2, this drug had no effect on total collagenase production. Cellular proliferation was decreased in the presence of LM but was increased when indomethacin was present in addition to LM.
It was possible to establish a dose-response for the production of PGE2 (Fig. 1A) and collagenase (Fig. 1B) by ASC stimulated by LM, with and without indomethacin. At all doses of LM, indomethacin blocked PGE2 production but did not affect total collagenase production. LM prepared from cultures of lymphocytes that had been incubated in the presence of PHA had a greater stimulatory effect on the production of both PGE2 and collagenase by ASC than LM from cultures incubated without PHA (Figs. 2A and 2B). PHA alone, at the concentrations present in diluted LM incubated directly with ASC, had no effect on the production of either PGE2 or collagenase.
Medium pooled from several lymphocyte cultures was chromatographed on columns of Ultrogel AcA 54, as shown in Fig. 3. The peaks of stimulatory activity for collagenase and PGE2 were similar and were eluted in the region between the carbonic anhydrase and ribonuclease A markers, suggesting an approximate molecular weight of the stimulating factor of 10,000-20,000. The addition of lymphocyte media increased PGE release by all 20 rheumatoid synovial cell cultures so far tested. In some synovial cell cultures, even after several passages, exposure to cell-free LM increased PGE concentrations to levels approaching those found in freshly isolated synovial cells (~1,000 ng/10⁶ cells/day) (6). The variability in response probably depended on several factors, including the passage and dilution of ASC and the source of the LM. However, each of the 45 lymphocyte preparations so far tested (from control and rheumatoid subjects) stimulated PGE production by synovial cells.
Prostaglandins from representative stimulated culture media were extracted with ethyl acetate and separated by thin-layer chromatography (13). The areas corresponding to PGE2 and to PGA2 and PGB2 were identified by tritiated marker compounds and eluted from the chromatographic strips. In four separate stimulated cultures, PGE2 was found to account for 85-95% of the total PG assayed (PGE2 plus PGB2 plus PGA2).
Discussion
In searching for substances which might influence collagenase production by synovial cells, we found that lymphocytes-monocytes in culture released a factor which markedly increased the collagenase produced by these cells (10). A factor which had a similar pattern of elution from gel filtration columns also stimulated PGE2 production by these cells. Indomethacin, the only nonsteroidal anti-inflammatory drug so far tested, inhibited PGE production but in most cultures did not inhibit collagenase production; in some cultures, stimulation of collagenase production by LM in the presence of indomethacin was even observed, as was found with freshly isolated synovial cells. In the presence of lymphocyte factor, while there was a striking stimulation of the production of both PGE2 and collagenase, there was a significant decrease in the number of synovial cells. This suppression of cell growth in the presence of LM was reversed by indomethacin, under which PGE2 production was also suppressed. It is possible that the decrease in cell number is due to an increase in cellular cAMP (14). In preliminary experiments we have found that PGE2, at concentrations similar to those measured in these experiments, acutely increases levels of cAMP in these synovial cells, and it is known that cAMP inhibits cell division in several different systems (15). The prostaglandins released in increased concentration under LM stimulation could also function in a negative feedback capacity in the immune system, as has been shown for several lymphokines (16).
The mechanism of the stimulation of two separate factors, collagenase and prostaglandin, by the lymphocyte factor is not known. It is conceivable that the increase in PGE2 is accounted for either by increased availability of arachidonate or by increased activity of one or more enzymatic steps in the pathway of prostaglandin biosynthesis.
Summary
Human peripheral blood mononuclear cells (lymphocyte-monocyte) in culture release a soluble factor which can stimulate, up to 200-fold, production of prostaglandin E2 by isolated, adherent, rheumatoid synovial cells. Production of the factor by the mononuclear cells is enhanced by phytohemagglutinin. This factor is similar in apparent mol wt (10,000-20,000) to that which also stimulates collagenase production by the same cells.
"year": 1977,
"sha1": "4da3f33f42148185751100c75d2e51db794f5ee8",
"oa_license": "CCBYNCSA",
"oa_url": "http://jem.rupress.org/content/145/5/1399.full.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "4da3f33f42148185751100c75d2e51db794f5ee8",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Early Feeding Strategy Mitigates Major Physiological Dynamics Altered by Heat Stress in Broilers
Simple Summary
In order to ensure the profitability of the broiler industry, it is essential to maintain optimal performance, especially under undesirable environmental conditions. Elevation in environmental temperatures and its subsequent negative impacts on broilers' physiological and metabolic homeostasis deleteriously affect production performance. Early adaptation is an effective strategy for ensuring sustainable broiler production. We investigated three early feed withdrawal (FWD) regimes for 24 h, either continuous or equally distributed over two or three days, as potential thermal stress-mitigating strategies. The results demonstrated a positive adjustment in metabolic hormones and biochemical metabolite markers in addition to an elevation in antioxidant enzyme activity and immune response in the FWD groups. Finally, we established that the investigated FWD strategies can promote broiler thermotolerance adaptation, reflected in the significant enhancement in broiler production performance, immunomodulation response, and recovery of the antioxidant balance.
Abstract
Heat stress is one of the stressors that negatively affect broiler chickens, leading to a reduction in production efficiency and profitability. This reduction affects the economy in general, especially in hot and semi-hot countries. Therefore, improving heat tolerance of broiler chicks is a key to sustained peak performance, especially under adverse environmental heat stress conditions. The present study investigated three early feed withdrawal regimes (FWD) as a potential mitigation for thermal stress exposure. A total of 240 unsexed one-day-old Cobb-500 chicks were randomly recruited to one of four experimental groups using a completely randomized design (10 birds × 6 replicates). The experimental groups included the control group with no feed withdrawal (control), while the other three groups were subjected to early feed withdrawal for either 24 h on the 5th day of age (FWD-24), 12 h on the 3rd and 5th day of age (FWD-12), or 8 h on the 3rd, 4th, and 5th day of age (FWD-8), respectively. Production performance was monitored throughout the experiment. Meanwhile, blood and liver samples were taken at the end of the experimental period to evaluate major physiological dynamic changes. Our findings demonstrated that under chronic heat stress conditions, FWD treatments significantly improved broilers' production performance and enhanced several physiological parameters compared with the control. Serum levels of thyroid hormones were elevated, whereas leptin hormone was decreased in FWD groups compared with the control. Moreover, serum total protein, globulin, and hemoglobin levels were higher, while total cholesterol and uric acid were lower in the FWD groups. Furthermore, FWD groups showed significantly higher antioxidant marker activity with a significantly lower lipid peroxidation level. Immunoglobulin levels, lysozyme, complement factor C3, and liver heat shock protein 70 (HSP70) concentration were also elevated in FWD compared with the control. Also, serum interleukin-1β (IL-1β) and interferon-gamma (IFN-γ) significantly increased with FWD. Based on our findings, early feed withdrawal can be applied as a promising non-invasive nutritional strategy for broilers reared under chronic heat stress conditions.
Such a strategy promotes the alleviation of the deleterious effects of heat stress on broiler performance, immunity, and redox status, owing to the onset of physiological adaptation and the development of thermotolerance ability.
Introduction
Broiler chicken production is a vital sector of the global poultry industry, providing a major protein source for human nutrition [1]. However, the optimum productivity of broilers is often hindered by various environmental stressors. In tropical and subtropical regions, heat stress is considered a major challenge confronting the modern poultry industry, as it can compromise broilers' growth rate, immune response, redox status, and the overall welfare of the birds [2-5]. High temperatures, coupled with humidity, can result in heat stress, which disrupts normal physiological processes and impairs the performance of broiler chickens [6]. Awad et al. [7] stated that the deleterious effects of heat stress on broilers' performance and immune responses were consistent across different commercial strains. Thus, overcoming such a challenge is essential for obtaining sustainable poultry meat production that covers the growing global demand. Early-life adaptation has been proposed as a potentially potent strategy to mitigate the negative consequences of stress exposure during the birds' lifespan [8-10].
In recent years, researchers have explored innovative strategies to mitigate the adverse effects of heat stress and recover broiler performance, immunity, and redox balance. Strategies to mitigate the negative impacts of heat stress are of paramount importance for the sustainable and efficient production of broilers [11]. Feeding strategies targeting increased dietary intake, reduced heat production, and improved general health are highly necessary for the nutritional management of chickens under heat stress [12]. Wet feeding, feed form, feed structure, feed additives, dual feeding, and feed restriction are some of the promising strategies proposed to reduce the deleterious effects of heat stress on poultry production [13-20]. One such promising strategy that has gained attention recently is feed withdrawal at an early age. Previous research has explored the theory of applying mild stress at early ages to potentially mitigate the negative effects of stress exposure later in life; these programs include thermal conditioning and feed manipulation techniques, such as withdrawal or restriction. El-Moniary et al. [21] suggested that feed withdrawal for 24 h on the fifth day of age could improve the productivity of broiler chicks under summer stress conditions. Early feed withdrawal has emerged as a potential solution to combat heat stress in poultry [22]. This practice involves temporarily restricting access to feed during the critical early stages of broiler development, with the goal of enhancing the birds' ability to cope with heat stress exposure later in life [23]. This approach is rooted in the idea that early nutritional management can have lasting effects on the birds' thermotolerance and overall performance under challenging environmental conditions [12,24]. Zhan et al. [25] suggested a prolonged metabolic programming induction in broiler chicks exposed to early feed restriction for four hours a day during the first 21 days of age. Accordingly, early feed withdrawal can be used to induce heat tolerance in broilers through early metabolic adaptation, which consequently optimizes nutrient utilization and improves performance during heat stress exposure. The optimal and effective feed withdrawal regimes under thermal stress, and their potential for physiological adaptation, are not yet fully defined.
The present research was designed to provide an in-depth exploration of feed withdrawal at an early age as a potential nutritional intervention to reduce the deleterious influences of heat stress on broilers' production, immunological parameters, and redox status. By investigating the effectiveness and underlying mechanisms of early feed withdrawal at different intensities, we aimed to contribute valuable insights to the ongoing efforts to enhance the resilience and productivity of broiler chickens facing the challenges of heat stress.
Bird Management and Experimental Design
Two hundred and forty unsexed one-day-old Cobb-500 chicks were randomly divided into four experimental groups in a completely randomized design (6 replicates × 10 birds). After distribution, the sex ratio in each group replicate was found to be 1 male to 9 females. Chicks were reared in a hierarchically designed cage system divided by wire mesh barriers, with compartment dimensions of 1.0 m × 0.50 m × 0.40 m. Each compartment housed ten birds and was identical to the others. The experimental groups were a control group with no feed withdrawal (control) and three groups subjected to early feed withdrawal for either 24 h on the 5th day of age, from 7:00 a.m. to 7:00 a.m. the next day (FWD-24); 12 h on the 3rd and 5th days of age, from 7:00 a.m. to 7:00 p.m. (FWD-12); or 8 h on the 3rd, 4th, and 5th days of age, from 7:00 a.m. to 3:00 p.m. (FWD-8). Afterward, all experimental birds were fed ad libitum with free access to water until the end of the experiment. The experimental diets were formulated to cover the nutritional requirements of broilers, following the recommendations of the NRC [26] and the Cobb-500 broiler management guide, with two formulas: starter (1-21 days of age) and grower-finisher (22-35 days of age) (Table 1). All birds were housed under the same management conditions. The lighting regime was set at 23 h light and 1 h dark for the first three days of the experiment; afterward, it was adjusted to 16 h light and 8 h dark for the rest of the experimental period.
The Experimental Environment Conditions
The present study was carried out during the summer season. Throughout the experimental period, the minimum and maximum ambient temperatures as well as the humidity percentage were monitored every day, and the average weekly readings were then calculated.
Accordingly, the corresponding temperature-humidity index (THI) was calculated [27]. The average weekly changes in ambient temperature, relative humidity percentage, and the calculated THI are presented in Table 2. The THI value ranged from 28.3 to 33.8, indicating that the birds were subjected to chronic heat stress conditions.
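The exact THI formula from [27] is not reproduced in the text; the reported range (28.3-33.8) and the severity classes cited in the Discussion are consistent with the Marai et al. formulation of this index, so the following minimal Python sketch assumes that form (illustrative values only, not the study's Table 2 data):

def thi(temp_c, rel_humidity_pct):
    # Assumed Marai et al. form: THI = T - (0.31 - 0.31*RH) * (T - 14.4),
    # with T in degrees C and RH as a fraction (0-1). This formula is an
    # assumption; the paper cites [27] without showing it.
    rh = rel_humidity_pct / 100.0
    return temp_c - (0.31 - 0.31 * rh) * (temp_c - 14.4)

# Hypothetical weekly averages for illustration:
for t, rh in [(30.0, 55.0), (35.0, 60.0)]:
    print(f"T = {t} C, RH = {rh}% -> THI = {thi(t, rh):.1f}")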
Production Performance
Chicks were individually weighed on the 1st, 21st, and 35th days of age, and body weight gain (BWG) was then calculated. Meanwhile, feed intake (FI) was recorded on a replicate basis at the end of the starter period and the finisher period. Accordingly, the feed conversion ratio (FCR) was calculated as g feed intake per g weight gain, as sketched below.
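A minimal illustration of these calculations (hypothetical numbers, not the study's data):

# Performance calculations as defined above, per bird over the whole trial.
initial_bw_g, final_bw_g = 42.0, 2100.0  # body weights on day 1 and day 35, g
feed_intake_g = 3200.0                   # cumulative feed intake, g

bwg_g = final_bw_g - initial_bw_g        # body weight gain (BWG), g
fcr = feed_intake_g / bwg_g              # FCR = g feed intake per g weight gain
print(f"BWG = {bwg_g:.0f} g, FCR = {fcr:.2f}")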
Blood Samples Collection
At the end of the experimental period, blood samples were collected from each experimental group, with one sample per group replicate (n = 6, all from females). For serum collection, blood samples were taken in clean, dry vials and centrifuged at 4000 rpm for 15 min at room temperature. The collected serum was stored at −20 °C until further analysis. A separate fresh blood sample (n = 6, all from females) was used to estimate blood hemoglobin (Hb) according to the method described by Jain [29].
Serum Antioxidant Markers
Serum total antioxidant capacity (TAC) was determined according to the method of Janaszewska and Bartosz [35]. Catalase (CAT) and superoxide dismutase (SOD) enzyme activities were assessed according to the methods of Aebi [36] and Sun et al. [37], respectively. Malondialdehyde (MDA) concentration was determined according to the method described by Placer et al. [38].
Liver Heat Shock Protein 70
To induce the production of heat shock protein 70 (HSP70), birds were subjected to 42 °C for one hour before slaughter [41]. Afterwards, the liver was instantly dissected out (n = 6, all from females), vacuum packed, and kept at −20 °C. HSP70 was measured using ELISA following the method of Anderson et al. [42].
Statistical Analysis
The data were subjected to one-way analysis of variance (ANOVA) using IBM SPSS Statistics 20 (IBM Corp., Chicago, IL, USA). Replicates (n = 6) served as the experimental unit for production performance parameters, while the individual bird was the experimental unit for the physiological parameters. Group means were assessed for significant differences using Duncan's multiple range test at a confidence level of 95% (p ≤ 0.05). The findings were reported as mean ± standard error of the mean (SEM).
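A Python analogue of this analysis is sketched below on synthetic data; since Duncan's multiple range test is not implemented in SciPy or statsmodels, Tukey's HSD is shown as a commonly available stand-in for the post hoc step:

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic final body weights for 6 replicates per group (illustrative means).
rng = np.random.default_rng(0)
means = {"control": 1500, "FWD-24": 1800, "FWD-12": 1640, "FWD-8": 1700}
data = {g: rng.normal(mu, 60, size=6) for g, mu in means.items()}

f_stat, p_val = stats.f_oneway(*data.values())  # one-way ANOVA
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4g}")

values = np.concatenate(list(data.values()))
labels = np.repeat(list(data.keys()), 6)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))  # post hoc comparison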
Production Performance
Table 3 presents the growth performance of broiler chicks under the various early feed withdrawal (FWD) treatments. The results highlight the impact of the different FWD strategies on key growth parameters. The initial body weights (BW) of the chicks were similar across all treatments, with no significant differences observed. Meanwhile, BW at 21 days varied significantly among treatments: FWD-24 had the highest BW and differed significantly from the other experimental groups. Furthermore, FWD-24 resulted in the highest final BW, followed by FWD-8 and FWD-12, while the control had the lowest weight. The increase in final BW relative to the control group was 20, 9, and 13% for FWD-24, FWD-12, and FWD-8, respectively. Consistently, during the period from 1 to 21 days of age, FWD-24 led to the highest body weight gain (BWG), followed by FWD-8 and FWD-12. The same significantly higher BWG persisted over the overall experimental period (1-35 days of age), with FWD-24 showing the highest BWG, followed by FWD-8 and FWD-12, respectively. Meanwhile, feed intake (FI) showed no significant differences across treatments during the starter period (1-21 days of age). However, during the period from 22 to 35 days of age and over the overall experimental period, the FWD groups had a significantly higher FI than the control group. Accordingly, the FWD-24 group had the best FCR over the experimental period, followed by the FWD-8 and FWD-12 groups, respectively, compared with the control group. The current results indicate that early feed withdrawal strategies significantly influenced the growth performance of broiler chicks. FWD-24, with feed withdrawal for 24 h on the fifth day of age, resulted in the highest body weights and the most favorable FCR across the growth periods.
Blood Metabolic Hormones and Biochemical Markers
The concentrations of metabolism-related hormones in broilers subjected to the early feed withdrawal (FWD) strategies are presented in Table 4. Triiodothyronine (T3) and thyroxine (T4) levels were higher in the FWD groups than in the control group. The FWD-24 group had the highest T3 and T4 concentrations, elevated by 12 and 13%, followed by FWD-8 (9 and 9%) and FWD-12 (5 and 8%), respectively. Meanwhile, the FWD groups showed a significant reduction (24-32%) in the concentration of leptin, an appetite-regulating hormone, compared to the control group. The impact of the various early feed withdrawal (FWD) strategies on blood serum metabolite concentrations is also presented in Table 4. The results demonstrate significant differences among treatments, indicating the influence of FWD on metabolic parameters. Significantly higher serum total protein (18-24%), globulin (27-31%), and Hb (10-12%) levels were observed in the FWD treatment groups compared with the control group. Conversely, serum total cholesterol and uric acid were significantly lower in the FWD treatment groups, by 7-10% and 10-11%, respectively. Thus, it appears that the early feed withdrawal strategies significantly enhanced thyroid hormone levels and ameliorated blood serum metabolite concentrations while decreasing leptin concentration in heat-stressed broiler chickens. These findings offer insights into the metabolic adaptation responses of broilers to different early feed withdrawal practices.
HSP70 and Oxidation Markers
The heat shock protein 70 (HSP70) and redox status markers under the different early feed withdrawal (FWD) strategies are presented in Table 5. The results reveal significant differences in the investigated antioxidant markers and HSP70 level among the experimental groups. FWD-24 showed the highest HSP70 level, followed by FWD-12 and FWD-8, while the control had the lowest level. HSP70 levels in the FWD groups were approximately 1.8-fold higher than that observed in the control group. Meanwhile, the redox status improved significantly in response to the various FWD treatments. Total antioxidant capacity (TAC), catalase, and superoxide dismutase (SOD) activities were significantly higher in the FWD groups, by approximately 1.3-, 1.9-, and 1.1-fold, respectively, compared with the control group. Meanwhile, the MDA level, an oxidative stress indicator, was significantly lower, by 45 to 50%, in the FWD groups compared with the control group. Overall, the antioxidant status of broiler chickens was significantly improved by the different early feed withdrawal strategies. These findings have implications for broilers' general health and can guide optimal feeding practices under heat stress challenges.
Innate and Humoral Immunity Marker Levels
Table 6 presents the effects of the various early feed withdrawal strategies on innate immunity markers and immunoglobulin (IgA, IgG, and IgM) levels in heat-stressed broiler chickens. The immunoglobulin levels exhibited significant differences among treatments. Compared with the control group, IgA showed the greatest increase in FWD-8 (1.59-fold), followed by FWD-12 (1.56-fold) and FWD-24 (1.39-fold). Moreover, the IgG level was significantly higher, by 1.8- to 2.0-fold, in the FWD treatment groups. Meanwhile, the IgM level was significantly higher in FWD-24 and FWD-8, by 1.9- and 1.8-fold, respectively. On the other hand, the serum pro-inflammatory cytokines IL-1β and IFN-γ increased significantly in the FWD groups compared with the control group, by 8-9% and 49-61%, respectively. Furthermore, two distinguished innate immunity factors were significantly increased with FWD: lysozyme and complement C3 protein showed 26-27% and 11-15% increases, respectively, in response to the different FWD regimes compared to the control group. These results illustrate the significant positive impact of FWD on broilers' immune and inflammatory responses, with substantial progressive effects on innate and humoral immunity in chronically heat-stressed broiler chickens.
Discussion
There is general agreement on the deleterious impact of heat stress on production performance parameters [3,13,18,43,44]. The average minimum and maximum recorded brooding temperatures in the present study were 30 and 35 °C, respectively, which falls within the critical temperature zone (26-35 °C) for growing broilers [44,45]. Moreover, the calculated THI values averaged from 29 to 33.8, which is classified as severe to very severe heat stress [28]. The primary factor behind the impaired performance of chickens is the drop in feed intake observed under heat stress. Other factors contributing to performance deterioration under thermal challenge are impaired digestibility and the occurrence of physiological and metabolic changes that negatively influence feed efficiency [3,17]. Brugaletta et al. [46] stated that heat stress exposure stimulates tissue catabolism and, subsequently, weight loss in chickens. Mohamed et al. [47] introduced feed restriction for three hours a day as an effective nutritional tool to improve the production performance of broilers raised in hot climates. The present study likewise indicated significant improvement in the production performance parameters (i.e., BWG, FI, and FCR) of heat-stressed broilers with early feed withdrawal compared with the ad libitum-fed group. Thus, early feed withdrawal seems to improve feed intake and re-establish the physiological homeostasis of the birds, resulting in improved BWG, FI, and FCR.
Serum metabolite levels in birds can be significantly altered under heat stress and can serve as physiological markers. Lu et al. [48] indicated a wide array of serum metabolite alterations in heat-stressed broilers, reporting a significant increase in serum urea, uric acid, and cholesterol. Lu, He, Ma, Zhang, Li, Jiang, Zhou, and Gao [48] linked the increase in serum urea and uric acid with protein degradation and muscle atrophy under heat stress. Heat stress was also reported to induce hypercholesterolemia in broiler chickens [49]. Meanwhile, a reduction in serum total cholesterol was achieved with feed restriction for three hours a day in heat-stressed broilers [47]. Moreover, the higher blood hemoglobin concentration observed in the FWD groups can be an additional physiological indicator of the improvement achieved in body condition and general physiological fitness of heat-stressed broilers [50]. Our results showed significantly higher serum Hb, TP, and globulin levels, with lower serum total cholesterol and uric acid, in the FWD groups, which physiologically reflects the onset of metabolic adaptation to heat stress.
Another major piece of evidence that FWD induced physiological adaptation was the significant reduction in circulating leptin and the increase in circulating thyroid hormone levels observed in the FWD groups. Leptin is an energy balance-regulating hormone that plays a key role in the regulation of nutrient intake, and its high level suppresses feed intake in chickens [51,52]. Food deprivation or restriction has been reported to induce a reduction in circulating leptin, causing a short-term reduction in energy expenditure and an increase in food intake [53]. Moreover, birds adapt to heat stress by lowering blood thyroid hormone levels, thereby reducing metabolic heat production, which consequently reduces the chickens' performance [3]. Kpomasse et al. [54] stated that heat stress exposure induces a reduction in thyroid hormone levels in broilers. Beckford et al. [55] also reported a significant reduction in plasma T3 levels in heat-exposed broilers, with a substantial alteration in the hypothalamus-pituitary-thyroid axis. They concluded that chronic heat stress exposure induces a prolonged decrease in circulating T3 that contributes to the reduction in broilers' growth rate. The reduction in circulating T3 under heat stress has been reported to be associated with impaired production performance (i.e., final BW, BWG, FI, and FCR) [56]. A reduction in production performance and serum T4 was also reported in broilers exposed to constant heat stress [57]. Based on the observed reduction in leptin together with the increase in T3 and T4 hormone levels, it can be suggested that early feed withdrawal induced a metabolic modulation that improved production performance in heat-stressed broilers. Hence, the reduced leptin circulation and increased T3 and T4 in the FWD groups may be the leading factors behind the observed upward shift in feed intake, which is directly reflected in higher BWG and better feed efficiency.
An additional physiological marker demonstrating the onset of broiler thermotolerance adaptation induced by early feed withdrawal was the increased level of HSP70 observed with the FWD treatments. HSP70 is one of the HSP families responsible for protecting various cellular processes and ensuring cell survival during stress [58]. Researchers have demonstrated that HSP70 gene expression is associated with increased thermotolerance in native chicken breeds [59,60]. Furthermore, early feed restriction was reported to increase HSP70 levels in heat-stressed broilers [61]. Early feed restriction induced thermotolerance in broiler chickens, with an associated improvement in the HSP70 response [18,58]. Goel, Ncho, Gupta, and Choi [8] stated that, after early thermal manipulation, the up-regulation of HSP gene expression in chicks exposed to heat stress reflects the acquisition of thermotolerance. On the other hand, the negative effects of heat stress exposure on redox balance are well documented [62,63]. Heat stress has been reported to induce oxidative stress that subsequently increases reactive oxygen species (ROS) formation, causing redox imbalance and immunosuppression [64]. In the present study, the control group exhibited lower antioxidant marker activity and higher lipid peroxidation levels. Al-Otaibi, Abdellatif, Al-Huwail, Abbas, Mehaisen, and Moustafa [49] reported a substantial reduction in total antioxidant capacity with an elevation in plasma MDA levels in laying hens subjected to heat stress. Alaqil, Abd El-Atty, and Abbas [56] also reported a significant 4.6-fold increase in MDA levels in broilers exposed to chronic heat stress compared with those reared under thermoneutral conditions. In contrast, FWD improved the redox status, with a significant reduction in MDA levels and elevation in TAC and antioxidant enzyme activities. The favorable mitigation effect of FWD observed in our investigation can be explained by the improvement in antioxidant activity together with the reduction in lipid peroxidation, which is reported to be directly related to broilers' muscle quality and general growth [65,66].
Heat stress exposure is reported to induce immunosuppression in poultry [67,68]. Furthermore, immunosuppression with a significant reduction in immune cytokine levels, such as IL-1β, was reported in heat-stressed broilers [69]. Moreover, a significant drop in serum IgG and IgM concentrations was reported in two commercial broiler strains subjected to heat stress [7]. Korkmaz et al. [70] also reported a general suppression of immune functions, with reductions in IgA, IgG, and IgM, in broilers subjected to heat stress. In the present study, low levels of immune-modulating cytokines, IgA, IgG, and IgM were observed in the control group. Interestingly, a significant increase in pro-inflammatory cytokine secretion levels (i.e., IL-1β and IFN-γ), as well as increases in serum globulin concentration and immunoglobulin levels, was observed in response to the FWD regimes. Interferon-gamma (IFN-γ) is a cytokine that plays a fundamental role in the regulation, maturation, and differentiation of cell-mediated immune responses in birds [71]. Saleh and Al-Zghoul [72] reported an increase in plasma IFN-γ and splenic IFN-γ gene expression in broilers subjected to acute heat stress after embryonic thermal adaptation, indicating the onset of heat tolerance acquisition. Thus, it can be inferred that the observed increase in pro-inflammatory cytokine levels illustrates thermotolerance adaptation in the FWD groups.
In addition to immune cells, the innate immune system uses a diverse range of soluble proteins to directly combat infections, identify pathogens, and trigger further immune responses [73,74]. The complement system is an important component of innate immunity, which fights infection by tagging pathogens for destruction, promoting inflammation, and directly killing cells. A key element of the avian innate complement system is the complement component C3 protein, which is up-regulated upon pro-inflammatory cytokine stimulation [74,75]. Lysozyme is an enzyme with antibacterial properties that is part of the innate immunity of animals [76]. Recently, exogenous lysozyme supplementation of broiler diets was reported as a promising antibiotic-alternative growth promoter as well as an immunomodulating agent [76-78]. Abdel-Latif, El-Hamid, Emam, Noreldin, Helmy, El-Far, and Elbestawy [77] reported a significant up-regulation of IFN-γ mRNA expression in lysozyme-supplemented broilers. This finding is consistent with our observation of increased lysozyme and IFN-γ levels in the FWD groups compared to the control. These results suggest a positive influence of FWD on key physiological processes related to both innate and humoral immune responses.
The investigated FWD regimes appear to mitigate the undesirable impacts of heat stress on redox status and to exhibit an immunomodulating effect. These effects can be explained by the onset of physiological adaptation to heat stress and the induction of heat tolerance, which restore the redox balance and, subsequently, immune activation [68]. Furthermore, the increased level of HSP70 with FWD beneficially enhanced the immune responses, as it has been reported to boost infectious bursal disease (IBD) resistance in heat-stressed broilers [61]. Finally, early feed withdrawal proved to reverse the homeostatic and metabolic disturbances induced by heat stress and appears to be a promising approach to overcoming this growing threat to the broiler industry's sustainability.
Conclusions
It can be concluded from the present study that early feed withdrawal for 24 h, either continuously over one day or equally distributed over two or three days, can be applied as a non-invasive nutritional strategy to protect broiler chickens from the deleterious impacts of heat stress. Early feed withdrawal promotes thermotolerance development in broilers by inducing physiological changes that enhance performance under heat stress. Thus, in light of the global rise in environmental temperatures, the currently proposed nutritional strategy holds promise for enhancing broilers' production performance, strengthening their immune response, and maintaining redox balance under challenging heat stress conditions.
Table 1 .
Basal diet ingredients and chemical composition.
Table 3 .
Production performance parameters of broiler chickens exposed to different early feed withdrawal strategies. Different superscript letters within a row denote significant differences (p ≤ 0.05). * Control: group with no feed withdrawal; FWD-24: feed withdrawal for 24 h on the 5th day of age; FWD-12: feed withdrawal for 12 h on the 3rd and the 5th day of age; FWD-8: feed withdrawal for 8 h on the 3rd, 4th, and 5th day of age. FCR: feed conversion ratio.
Table 4 .
Serum metabolic hormones and metabolites concentration of broilers affected by different early feed withdrawal strategies.
Table 5 .
Liver HSP70 and serum oxidation markers of broilers affected by different early feed withdrawal strategies.
Different superscript letters within a row denote significant differences (p ≤ 0.05). TAC: total antioxidant capacity; CAT: catalase; SOD: superoxide dismutase; MDA: malondialdehyde; SEM: standard error of the mean. * Control: group with no feed withdrawal; FWD-24: feed withdrawal for 24 h on the 5th day of age; FWD-12: feed withdrawal for 12 h on the 3rd and the 5th day of age; FWD-8: feed withdrawal for 8 h on the 3rd, 4th, and 5th day of age.
Table 6 .
Serum levels of immunoglobulin, pro-inflammatory cytokines, and innate immunity factors of broilers affected by different early feed withdrawal strategies. | 2024-05-19T15:35:00.124Z | 2024-05-01T00:00:00.000 | {
"year": 2024,
"sha1": "905417f907695bf9f21dd19c6c553ba2f8a7ed3c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2615/14/10/1485/pdf?version=1715871264",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c5a7ebfeaa247d830c2193b52cc9da1be3cc439a",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": []
} |
210042080 | pes2o/s2orc | v3-fos-license | Metabolic Reprogramming and Vulnerabilities in Cancer
Metabolic programs are rewired in tumors to support growth, progression, and immune evasion. A wealth of work in the past decade has delineated how these metabolic rearrangements are facilitated by signaling pathways downstream of oncogene activation and tumor suppressor loss. More recently, this field has expanded to include metabolic interactions among the diverse cell types that exist within a tumor and how this impacts the immune system. In this special issue, 17 review articles discuss these phenomena, and, alongside four original research manuscripts, the vulnerabilities associated with deregulated metabolic programming are highlighted and examined.
The reprogramming of cellular metabolism is a hallmark feature observed across cancers [1]. Contemporary research in this area has led to the discovery of tumor-specific metabolic mechanisms and illustrated ways that these can serve as selective, exploitable vulnerabilities. In this Editorial, we provide a high-level overview of the central themes from among the 21 review articles and original research studies in the Special Issue on Metabolic Reprogramming and Vulnerabilities in Cancer.
Nutrient acquisition and cancer growth: Metabolic programs are rewired in cancer cells to facilitate macromolecular biosynthesis required for cellular proliferation and tumor growth. These programs are frequently driven by oncogenic signaling pathways. This observation gave rise to the notion that metabolic pathways may exist that cancer cells are over-reliant or uniquely dependent upon and could therefore serve as drug targets. Accordingly, there has been considerable interest in mapping the regulation and activity of metabolic pathways across virtually every type of cancer. In this Special Issue, a detailed review on the regulation of glucose metabolism in pancreatic cancer is provided by Yan et al. [2], nucleotide metabolism by Villa et al. [3], and amino acid metabolism by Choi and Coloff [4].
In order to fuel such biosynthetic pathways, cancer cells employ a variety of mechanisms to enhance the uptake and utilization of nutrients, including the over-expression of carbohydrate, amino acid, and lipid transporters, as well as the activation of other bulk nutrient uptake programs (Figure 1A). Based on their abundance in circulation and the ubiquity of metabolic pathways into which they can integrate, glucose and glutamine are two of the primary nutrient inputs that support the growth of cancer cells. However, much of the work on these important fuels has been performed using cell culture models, where nutrient and oxygen concentrations, matrix effects, and inter-cellular interactions, among other factors, do not accurately reflect the physiochemical makeup of a tumor. Accordingly, the relevance of in vitro described glucose and glutamine pathways in tumors in vivo is now being delineated. Here, several reviews tackle this challenging topic as it relates to glutamine [5-7].
In the research article from Guda et al., the authors illustrate that glucose uptake correlates with aggressive features of brain cancer and describe new strategies to target this axis [8]. Outside of the brain, access to glucose for some tumors can be more restricted, and alternate pathways must be employed to support growth in its absence. For example, in the research article by Hodakoski et al., the authors found that non-small cell lung cancers employ macropinocytosis, a process of bulk extracellular engulfment or "cell drinking", to obtain nutrients to support glucose independence [9]. Notably, Hodakoski et al. found that protein-derived alanine obtained via macropinocytosis and released upon lysosomal protein breakdown served as a gluconeogenic intermediate.
Stress resistance: Metabolic pathways are also reprogrammed in cancer cells to enable resistance to intrinsic stressors, including oxidative stress and apoptosis, as well as to promote resistance to therapies. Reactive oxygen species (ROS) are byproducts of metabolism that can activate signaling pathways or damage biomolecules, including DNA. In cancer cells, the production of ROS is often elevated as a consequence of metabolic rearrangements and is selected for to promote genetic mutations (Figure 1B). Petronek et al. provide an up-to-date review on the role of iron as a central player in these processes [10]. However, if unchecked, excessive ROS can also be toxic to cancer cells. As such, cancer cells simultaneously drive antioxidant pathways that quench ROS. These are detailed from a metabolic perspective in the reviews by Purohit et al. [11] and Cockfield and Schafer [12]. Similarly, the role of cysteine in mitigating ROS is reviewed by Combs and DeNicola [13].
The Nrf2 transcription factor responds to oxidative stress by activating an antioxidant signaling program. In a research article by Haley et al., the authors found that inhibition of Nrf2 promoted epithelial to mesenchymal transition (EMT) of non-small cell lung cancer cells and facilitated metastasis [14]. These studies provide new insights into the role of this important antioxidant signaling program in cancer cell dissemination. Finally, cancer cells must also protect themselves against cell death programs. Sharma et al. present an argument for the rewiring of metabolism as an active player that directly maintains survival and defends against apoptosis [15].
Metabolism in the tumor microenvironment: Tumors are composed of a complex myriad of malignant and non-malignant cells that cooperate to support the growth of the tumor and evasion of the anti-tumor immune system [16]. This collective group of cells is known as the tumor microenvironment (TME) and can play deterministic roles in directing the dysregulated metabolic traits of the cancer cells (Figure 1C). For example, Loponte et al. discuss how differential nutrient access, cell-cell interactions, and intrinsic genetic programs lead to metabolic heterogeneity among the different malignant cells in a tumor [17]. These sorts of interactions also impact the redox state, for example, depending on the local oxygenation state, and Weinberg et al. discuss how this impacts metabolic programs in malignant cells [18].
The TME can also reprogram the metabolism of non-tumorigenic cells within solid tumors, which supports their survival and growth and that of the tumor. For example, a major component of many tumors is the activated stroma, comprised of cancer-associated fibroblasts (CAFs). These cells have been reported to play an ever-increasing role in tumor metabolism, as detailed in the review by Sanford-Crane et al. [19]. Finally, outside of the TME, Ramteke et al. describe how systemic factors, including blood glucose levels, circulating insulin, growth factors, and other nutrients, impact tumor growth [20].
Therapeutic targeting of metabolic vulnerabilities: The detailing of metabolic programs in cancer cells and the TME has suggested numerous vulnerabilities and opened the door to new drug targets and therapeutic options [21]. This topic takes center stage across the reviews and research articles in this Special Issue. In this vein, it is important to remember that metabolism-targeted therapies are among the first and most successful chemotherapeutic paradigms [22]. As a prime example, Naffouje et al. detail the storied history of inosine monophosphate dehydrogenase (IMPDH) inhibitors, an enzyme that acts at a central node in nucleotide biosynthesis [23]. Deregulated metabolic programs also promote resistance to many established therapies [24]. In the research work by Luanpitpong et al., the authors illustrate that upregulation of lipid import and the formation of lipid droplets promotes resistance to the proteasome inhibitor bortezomib in mantle cell lymphoma [25].
New insights into metabolism have also been leveraged to expand current modalities or design new drug targets. For example, Zhou and Wahl provide a contemporary perspective on the deregulated metabolic state in brain tumors, detailing insights into how these vulnerabilities can sensitize tumor cells to radiation and epigenetic therapies [26]. Of note, the authors describe how the oncometabolite 2-hydroxyglutarate (2HG), a product of the mutant isocitrate dehydrogenase 1 (IDH1) enzyme, rewires the epigenome, creates a metabolic vulnerability, and sensitizes these tumors to DNA damage-targeted combination therapy. The discovery of mutant IDH1, and the function of the oncometabolite 2HG, has paved the way for one of the most successful metabolism-targeted modalities in oncology in the modern era [27].
Knowledge about the metabolic programs operative in cancer has grown tremendously in the past 15 years. Many of the emerging themes in this rapidly growing field are comprehensively detailed in this collection of up-to-date reviews and research articles. They also contain insightful perspectives on the emerging areas of study and the associated therapeutic opportunities. It is our hope that readers find them to be timely and informative.
Conflicts of Interest:
The authors declare no conflicts of interest. | 2020-01-08T14:04:48.166Z | 2019-12-30T00:00:00.000 | {
"year": 2019,
"sha1": "38ec02d777c299138f32f080e7e004831a91dfcd",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6694/12/1/90/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "53da952f8feaa4578fe5fe0e977e6b23b43001fb",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
253557444 | pes2o/s2orc | v3-fos-license | Liangxue Jiedu Runzhi ointment in the treatment of mild and moderate psoriasis with blood-heat syndrome: A double-blind randomized controlled trial
Introduction: Psoriasis is a kind of chronic inflammatory skin disease characterized by erythema, skin hyperplasia, scales, and keratinocyte hyperproliferation. Psoriasis vulgaris, the most common kind of psoriasis, severely deteriorates patients' quality of life. Traditional Chinese Medicine (TCM) is a good choice for the treatment of psoriasis; it has been proved to be safe and effective and may reduce the recurrence rate. In clinical practice, Liangxue Jiedu Runzhi (LJR) ointment can effectively treat mild and moderate psoriasis with blood-heat syndrome, but evidence-based medical evidence is lacking. This trial aims to evaluate the efficacy and safety of LJR ointment for the treatment of mild and moderate psoriasis with blood-heat syndrome. Methods: A multicenter, randomized, double-blind, placebo-controlled, self-controlled clinical trial was carried out. The symmetrical rashes of each subject were regarded as the target lesions and were randomly divided into a treatment group (LJR ointment group) and a control group (placebo group). LJR ointment or placebo ointment was externally applied to the bilateral symmetrical rashes, twice a day for eight weeks, and subjects were followed up every two weeks. The primary outcome was the Psoriasis Area and Severity Index (PASI) at week 8; secondary outcomes included adverse events. Results: 46 subjects completed this research project. The difference in the PASI scores of the target lesions between the treatment group and the control group was statistically significant at week 8 (P < .001). The percentages of PASI 75 in the treatment group and control group at week 8 were 48% and 15%, respectively (χ² = 11.33, P < .05). No severe adverse events were reported. Conclusions: LJR ointment proved efficacious in the treatment of mild and moderate psoriasis with blood-heat syndrome.
Introduction
Psoriasis is a kind of chronic inflammatory skin disease characterized by erythema, skin hyperplasia, scales, and keratinocyte hyperproliferation. Dysfunction of the skin barrier is also one of the characteristics of psoriasis and is correlated with disease severity. [1] According to an epidemiological study, there are 64.6 patients with psoriasis per million persons. [2] Patients with psoriasis account for about 0.12% [3] of the total population in China. Clinically, psoriasis falls into four types, namely the Vulgaris, Erythrodermic, Pustular, and Arthritic types; psoriasis vulgaris is the most common type. [4] Disease progression takes a rapid pace in the active stage, during which new lesions constantly appear, and any acupuncture, surgery, or scratching results in psoriatic lesions in the affected area, which is known as the Koebner phenomenon ("homo-reaction"). According to recent research findings, the total financial burden of psoriasis is estimated at USD 35.2 billion, with incremental medical costs of USD 12.2 billion (35%). [5,6] For the treatment of mild-to-moderate psoriasis, ointments such as corticosteroids, retinoic acid, vitamin D3 derivatives, and calcineurin inhibitors are commonly used externally in Western medicine. [7] Oral and/or topical use of TCM is a good choice for the treatment of psoriasis; it has been proved to be safe and effective and may reduce the recurrence rate. [8,9] Recently, emerging evidence has suggested that natural products, such as Chinese herbal medicines, contain many active ingredients that can ameliorate disease. For example, cucurbitacin B, isolated from Luffa operculata, has antiproliferative and genotoxic activities [10], and convulxin, isolated from snake venom, stimulates platelet aggregation. [11] Furthermore, in silico studies based on natural sources have also indicated that some derivatives isolated and identified from herbs have anti-disease potential. [12-15] Therefore, exploring the various active ingredients of TCM and combining them into different derivatives may become a popular complementary and alternative therapy.
TCM ointment is favored in the treatment of psoriasis across Asia. [16-21] LJR ointment is rooted in basic TCM theory, with Chinese herbal medicines as its main ingredients. According to the trial design, the bilateral symmetrical rashes of each patient were randomly assigned to a treatment group and a control group under the inclusion and exclusion criteria: lesions in the treatment group were treated with LJR ointment and lesions in the control group with placebo ointment, twice a day (morning and evening) for eight weeks, with visits every two weeks. Beijing Hospital of Traditional Chinese Medicine has used topical Liangxue Jiedu Runzhi (LJR) ointment to treat mild and moderate psoriasis with blood-heat syndrome with good clinical efficacy, but evidence-based medical evidence is lacking.
This was a multicenter, randomized, double-blind, placebo-controlled, self-controlled clinical trial designed to observe the improvement of patients' target skin lesions and to evaluate the efficacy and safety of LJR ointment in the treatment of mild-to-moderate psoriasis vulgaris with blood-heat syndrome in the active stage, thereby marking a new attempt at the external treatment of psoriasis vulgaris.
Trial design and ethics
This research project was a multicenter, randomized, double-blind, placebo-controlled, self-controlled trial, registered at the Chinese Clinical Trial Registry (Registration ID: ChiCTR-INR-16007941) on February 17, 2016. Participants were recruited from the Dermatology Clinics of Beijing Hospital of Traditional Chinese Medicine affiliated to Capital Medical University, the Dermatology Clinics of Shunyi Hospital of Traditional Chinese Medicine, and the Dermatology Clinics of Beijing Gulou Traditional Chinese Medicine Hospital, from February 2016 to December 2017. A washout period refers to a period during which no intervention is administered; it may be applied between treatment periods to "wash out" the effects of a treatment before the next is administered. [22] All patients were recruited after a two-week washout period. Written informed consent was collected from all subjects. All researchers received training on the research protocol, research program, and researchers' responsibilities before the start of this research project, which was approved by the relevant hospital ethics committees (Ethical Approval Number: 2016BL-006-03).
Patients
2.2.1. Diagnostic criteria. The Western medicine diagnostic criteria for psoriasis vulgaris [23,24] and the Guidelines for the Diagnosis and Treatment of Psoriasis of the Chinese Medical Association Psoriasis Research Panel [25,26] were applied. Typical clinical manifestations of psoriasis vulgaris include papules and macules with red lesions, which can merge into plaques. The edges of the lesions are well defined and covered with multiple layers of silvery-white scales. After the scales are scraped off, a shiny membrane is exposed, and punctate bleeding appears when the membrane is scraped off. The judgment of the active stage was made based on the course of the disease.
Inclusion criteria.
The inclusion criteria for the study were as follows: 1. meeting the above diagnostic criteria for psoriasis vulgaris, meeting the TCM diagnostic criteria for psoriasis vulgaris with blood-heat syndrome, and being in the progressive stage of psoriasis vulgaris; 2. patients diagnosed with psoriasis vulgaris, aged between 18 and 65 years, regardless of gender and disease duration; 3. severity of disease: mild-to-moderate skin lesions (5% < Body Surface Area (BSA) < 20%); 4. signing the Good Clinical Practice (GCP) informed consent form and volunteering to participate in this research project.
Exclusion criteria.
Participants were excluded if they met any of the following conditions: 1. arthropathic, pustular, or erythrodermic psoriasis; 2. a Self-rating Anxiety Scale (SAS) score > 50 points, a Self-rating Depression Scale (SDS) score > 53 points, or other mental disorders; 3. pregnant or lactating women, or women planning a pregnancy within the next three months; 4. patients who had taken glucocorticoids and/or immunosuppressive drugs and retinoids in the past month, or glucocorticoid preparations, retinoids, or vitamin D3 derivatives in the past two weeks; 5. serious primary diseases of the cardiovascular and cerebrovascular system, liver, kidney, or hematopoietic system; 6. malignant tumor with concurrent infection, electrolyte imbalance, or acid-base disorder; 7. allergy to the drugs used in this research project; 8. participation in clinical trials of other drugs; 9. psoriasis requiring systemic treatment; 10. any other condition that the investigators judged likely to make the patient unable to complete or comply with, or unsuitable for, the clinical trial.
2.2.4.
Sample size calculation. The sample size was calculated through the following formula: n = [(Z1−α/2 + Z1−β)² × (σ1² + σ2²)]/δ², where n is the sample size, Z1−α/2 and Z1−β are standard normal quantiles, σ1 is the standard deviation of the treatment group, σ2 is the standard deviation of the control group, and δ is the difference between the two groups considered clinically significant. With α = 0.05 (two-sided) and β = 0.10 (power 1 − β = 0.90), Z1−α/2 = 1.96 and Z1−β = 1.282. δ, the required difference between the treatment group and the control group, was set at 1.5 based on previous research results, and the PASI standard deviation of each group was taken as 2. It was required that the probability of a Type I error be controlled below 0.05 (two-sided) and the probability of a Type II error below 0.1. Consequently, n = [(1.96 + 1.282)² × (2² + 2²)]/1.5² ≈ 38. Considering a 15% loss to follow-up rate, the sample size was set at 50 patients.
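A short sketch verifying this arithmetic (the stated inputs give n ≈ 38 before the dropout allowance):

import math

z_alpha, z_beta = 1.96, 1.282  # two-sided alpha = 0.05, power = 0.90
sd1 = sd2 = 2.0                # assumed PASI standard deviations
delta = 1.5                    # clinically meaningful difference

n = (z_alpha + z_beta) ** 2 * (sd1 ** 2 + sd2 ** 2) / delta ** 2
print(f"n = {n:.1f} -> {math.ceil(n)}")                  # 37.4 -> 38
print(f"with 15% dropout: {math.ceil(n / (1 - 0.15))}")  # ~45, set to 50 in the trial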
Randomization and masking.
Since psoriasis rashes are characterized by a symmetrical distribution, the clinical research was carried out through a self-matching method using the bilateral symmetrical rashes of the patients. Drug grouping adopted the random number table assignment method, in which the left and right rashes of each subject were randomly assigned. Ointment A and Ointment B were applied to the treatment group and control group, respectively, assigned to the symmetrical rashes on the left and right sides under the random number table method. Statistical professionals provided a blind table of random numbers, generated with SAS 9.3 software using a fixed seed number. The specific grouping was determined by persons unrelated to the clinical research based on the random number table. Sealed, opaque envelopes were used to achieve allocation concealment. At the end of the trial, the allocation forms were unsealed during a face-to-face meeting. If a seal was damaged, an explanation was required; without a valid explanation, the trial would be considered invalid. According to the random information in the random number table, the physicians prescribed the corresponding drug treatment for the corresponding sides.
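A conceptually equivalent sketch in Python (the study used SAS 9.3; the seed and output format here are hypothetical):

import numpy as np

rng = np.random.default_rng(20160217)  # hypothetical fixed seed
n_patients = 50
# For each patient, randomly assign Ointment A to the left or right lesion;
# Ointment B then goes to the opposite side.
a_side = rng.choice(["left", "right"], size=n_patients)
allocation = [{"patient": i + 1, "ointment_A": s} for i, s in enumerate(a_side)]
print(allocation[:3])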
2.2.6.
Blinding. Both the patients and the researchers were blinded in this research project. In the drug formulation process, the Beijing Chinese Medicine Research Institute prepared the test drug and the placebo. The placebo was identical to the test drug in appearance, smell, and packaging. The blinding process was recorded on the blinding record certificate. The person in charge of the blinding process evaluated the consistency of appearance between the drugs of the treatment group and the control group to ensure that the patients remained blinded throughout the study.
Preparation of LJR ointment and placebo
ointment. LJR ointment: the formula contains 150 g Chinese angelica (Radix Angelicae Sinensis), 30 g arnebia root (Radix Arnebiae), 9 g rhubarb root and rhizome (Radix et Rhizoma Rhei), 9 g natural indigo (Indigo Naturalis), 9 g belvedere fruit (Fructus Kochiae), 9 g light yellow sophora root (Radix Sophorae Flavescentis), 9 g glabrous greenbrier rhizome (Rhizoma Smilacis Glabrae), 110 g sesame oil, 162.5 g vaseline, and 12.5 g beeswax (Cera Flava). The preparation process was as follows: soak the above ingredients in sesame oil for two days; fry over a gentle heat until the Chinese angelica is slightly boiled; turn off the heat, filter out the dregs, and retain the decoction; mix the vaseline and beeswax into the decoction and stir evenly; fill medicine boxes (20 g each) and allow to solidify naturally. Placebo ointment: the placebo ointment, 20 g per box, was prepared by mixing vaseline and beeswax and stirring evenly. Pigment was added to the placebo to make its physical properties, such as appearance, color, dosage form, weight, and smell, similar to those of the LJR ointment, but it contained none of the active herbal ingredients listed above.
Supervision and administration of LJR ointment
and placebo. In this research project, each patient's rashes were symmetrical on both sides, with the rash on one side assigned to the treatment group and the rash on the other side to the control group. When distributing the ointments to the patients, the left and right sides were marked on the ointment covers, and patients were told to use the ointments according to the marks and not to swap the ointments between sides. Subjects were asked to apply the ointment to the designated side of the lesions twice a day, once each in the morning and evening, for eight weeks. The dosage of ointment was evaluated by estimating the surface area of the target lesion in each subject. To facilitate the calculation, the fingertip unit (FTU) was used: one fingertip unit is the amount of ointment extruded from a standard tube with a 5 mm nozzle that covers the length from the distal skin crease to the tip of the index finger; 1 FTU = 0.5 g, covering 100 cm² of surface area. Subjects were asked to apply the ointment to an area extending 2 mm beyond the target lesion border, which ensured adequate coverage of the target lesion. The ointment was gently dabbed onto the target lesion to form a thin layer and then rubbed in one direction until fully absorbed.
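A small sketch of this dosing rule (approximating the lesion as a circle to apply the 2 mm margin is a simplifying assumption for illustration):

import math

def dose_per_application(lesion_area_cm2, margin_cm=0.2):
    # Approximate the lesion as a circle and widen its radius by the margin.
    r = math.sqrt(lesion_area_cm2 / math.pi)
    applied_area = math.pi * (r + margin_cm) ** 2
    ftu = applied_area / 100.0       # 1 FTU covers 100 cm2
    return ftu, ftu * 0.5            # (FTU, grams); 1 FTU = 0.5 g

ftu, grams = dose_per_application(25.0)  # hypothetical 25 cm2 target lesion
print(f"{ftu:.2f} FTU ~= {grams:.2f} g per application")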
Primary outcome measures. Psoriasis Area and
Severity Index (PASI) of the target lesions was the primary outcome indicator: PASI = (erythema + scale + infiltration) × area score (if the target lesion area was larger than 5 cm², the area score before treatment was set at 3 points). [27] The severity of each patient's condition was assessed according to the PASI and the Guidelines for the Diagnosis and Treatment of Psoriasis Vulgaris 2018: a PASI score < 3 indicates mild psoriasis, a PASI score of 3-10 indicates moderate psoriasis, and a PASI score ≥ 10 indicates severe psoriasis.
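The target-lesion score can be sketched as follows (the conventional 0-4 grading of each sign is an assumption, as the text does not state the grading scale):

def target_lesion_pasi(erythema, scale, infiltration, area_score):
    # PASI of the target lesion = (erythema + scale + infiltration) * area score.
    for sign in (erythema, scale, infiltration):
        assert 0 <= sign <= 4, "each sign assumed graded 0-4"
    return (erythema + scale + infiltration) * area_score

# Example: moderate signs and a lesion > 5 cm2, so the baseline area score is 3.
print(target_lesion_pasi(erythema=2, scale=2, infiltration=2, area_score=3))  # 18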
Statistical analysis
The per-protocol set was used for data analysis with SPSS 21.0 software. Under the per-protocol principle, subjects who were randomized and treated according to the protocol were included in the analysis. The per-protocol analysis set was a subset of the full dataset comprising the subjects most compliant with the protocol (those who took 80% to 120% of the drug dose specified in the protocol, for whom primary outcome data were available, and who had no severe protocol violations). Missing values were handled by carrying the most recent observation forward to the endpoint. Professionally trained dermatologists collected all data, and members of the research team performed the data collection and statistical analyses.
The safety analysis refers to the safety analysis set and follows the exposure principle: for all subjects who used the ointment at least once, adverse events were observed and safety data were recorded to evaluate the correlation between any adverse event and the test drug.
Quantitative indicators were expressed as mean ± standard deviation. If the sample was normally distributed and the variances were homogeneous, the t-test was performed: the paired-sample t-test for intra-group comparisons and the independent-sample t-test for inter-group comparisons. If the variances were unequal, the approximate t-test was used. If the sample was not normally distributed, the (nonparametric) rank-sum test was applied. The Chi-square test was used for count data. P values < .05 were considered statistically significant.
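The test-selection logic for the paired (intra-group) comparison can be sketched as follows, on synthetic data:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
before = rng.normal(18, 2, size=46)          # e.g., baseline PASI scores
after = before - rng.normal(13, 2, size=46)  # e.g., week-8 PASI scores

if stats.shapiro(before - after).pvalue > 0.05:  # normality of the differences
    stat, p = stats.ttest_rel(before, after)     # paired-sample t-test
    name = "paired t-test"
else:
    stat, p = stats.wilcoxon(before, after)      # rank-based alternative
    name = "Wilcoxon signed-rank test"
print(f"{name}: statistic = {stat:.2f}, p = {p:.2e}")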
Study population
A total of 50 participants with psoriasis vulgaris of the blood-heat syndrome were recruited for this study. The left and right bilaterally symmetrical rashes of the 50 subjects were randomly assigned to the treatment group or the control group. During the study, four subjects who did not follow the schedule dropped out. A total of 46 subjects completed this research project, for a completion rate of 92%. Figure 1 gives the flow chart of the clinical trial.
The subjects were mainly aged 24-35 and 45-64 years. The male-to-female ratio was about 4:1. Patients without a family history of psoriasis accounted for about 78%. The disease duration was 1-34 years. About 63% of the subjects showed an abnormal body mass index, 37% had a history of smoking, and 35% a history of alcohol drinking. The baseline characteristics of the subjects are shown in Table 1.
PASI score
The PASI scores of the treatment group and control group at baseline were 18.20 ± 6.84 and 18.04 ± 6.75 (mean ± SD), respectively, with no statistically significant difference (P = .89, P > .05). After 8 weeks of treatment, the PASI scores of the treatment group and control group were 5.24 ± 3.03 and 9.24 ± 5.30 (mean ± SD), respectively; the improvement in the treatment group was greater than in the control group. The rank-sum test showed that the difference between the two groups was statistically significant (P < .001). Significant inter-group differences started at week 6 (P = .005, P < .01) (Table 2).
PASI 75
PASI 75 denotes a reduction of at least 75% in the PASI score from baseline. There was no significant difference in PASI 75 between the treatment group and the control group during the first four weeks. After four weeks, the percentage of PASI 75 in the treatment group was significantly higher than that of the control group. At week 8, the percentage of PASI 75 was 48% in the treatment group and 15% in the control group. The difference between the two groups in PASI 75 was statistically significant (χ² = 11.33, P < .05) (Fig. 2).
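The reported χ² can be reproduced from responder counts back-calculated from the stated percentages (48% and 15% of 46 lesions per side, i.e., 22 and 7 responders; these counts are inferred, not taken from a table):

from scipy.stats import chi2_contingency

table = [[22, 46 - 22],  # treatment: PASI 75 responders / non-responders
         [7, 46 - 7]]    # control:   PASI 75 responders / non-responders
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")  # chi2 ~= 11.33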
PGA
At week 8, the Physician Global Assessment (PGA) in the treatment group and control group was 73.9% and 41.3%, respectively. The difference between the two groups in PGA was statistically significant (χ² = 10.015, P < .05). The PGA changes in the two groups are shown in Figure 3.
Skin barrier function
There was no statistically significant difference between the two groups at baseline in any indicator of skin barrier function (P = .901, P > .05). At week 8, the difference between the two groups in skin pH value was statistically significant (P = .011, P < .05), as was the difference in skin humidity (P = .003, P < .05). The differences between the two groups in skin temperature (P = .211, P > .05) and lipids (P = .16, P > .05) were not statistically significant after 8 weeks. Skin temperature, pH value, skin humidity, and lipids differed significantly in the treatment group before versus after treatment (P < .05), whereas the differences in these indicators in the control group before versus after treatment were not statistically significant (P > .05) (Table 3).
Safety
A total of 12% (n = 6) of subjects experienced adverse events. The rates of adverse events in the treatment group and control group were 4% (n = 2) and 8% (n = 4), respectively. Adverse events included itching, burning sensation, desquamation, rash, and dryness. There was no statistical difference between the two groups in adverse events (χ² = 0.707, P > .05) (Table 4).
Discussion
After 8 weeks of treatment, the treatment group performed better than the control group in the PASI score, the percentage of PASI 75, and the PGA changes (P < .05). In terms of the skin barrier, the differences between the two groups in the improvement of humidity and lipids were statistically significant (P < .05). This indicates that LJR ointment may improve the PASI score, PASI 75, and PGA in patients with mild and moderate psoriasis with blood-heat syndrome, and may reconstruct the damaged skin barrier to some extent. LJR ointment contains Chinese angelica (Radix Angelicae Sinensis), arnebia root (Radix Arnebiae), rhubarb root and rhizome (Radix et Rhizoma Rhei), natural indigo (Indigo Naturalis), belvedere fruit (Fructus Kochiae), light yellow sophora root (Radix Sophorae Flavescentis), and glabrous greenbrier rhizome (Rhizoma Smilacis Glabrae). Angelica polysaccharides can promote the apoptosis of keratinocytes in psoriasis-like lesions in guinea pigs [29] and significantly reduce the positive expression rate of PCNA, [30] thereby inhibiting the proliferation of psoriatic epidermal cells. In addition, Angelica polysaccharide has immunological activity and a good curative effect in tumor treatment and radiation damage. [31] Arnebia root (Radix Arnebiae) inhibits the proliferative effects of IL-17A [32] and EGF [33] on HaCaT cells, including their induced proliferation and secretion of related cytokines, and may treat psoriasis by inhibiting chemokine-mediated recruitment of leukocytes. In addition, the imiquimod-induced mouse model of psoriasis was restored by shikonin treatment, which ameliorated excessive keratinocyte proliferation. [34] Rhubarb root and rhizome (Radix et Rhizoma Rhei), natural indigo (Indigo Naturalis), belvedere fruit (Fructus Kochiae), and light yellow sophora root (Radix Sophorae Flavescentis) inhibit, in a dose-dependent manner, the proliferation of HaCaT cells treated with serum immunoglobulin and peripheral blood mononuclear cells; in particular, Fructus Kochiae has the strongest inhibitory effect. [35] Indirubin, an active component of natural indigo (Indigo Naturalis), can inhibit the activation of cyclin-dependent kinases and signal transducer and activator of transcription 3 (STAT3). [36,37]
Treatment group Control group Total
Target lesions (n) 50 50 100 Pruritus (n%) 1 (2%) 2 (4%) 3 (6%) Burning (n%) 1 (2%) 2 (4%) 3 (6%) Scales (n%) 0 (0%) 0 (0%) 0 (0%) Rash (n%) 0 (0%) 0 (0%) 0 (0%) Dryness (n%) 0 (0%) 0 (0%) 0 (0%) Adverse events (n%) 2 (4%) 4 (8%) 6 (12%) All 50 patients report no significant abnormalities in blood routine examination, urine routine examination, liver function, renal function, and electrocardiogram before and after this research project. www.md-journal.com kinase, signal transduction, and transcription-3 (STAT3). [36,37] Therefore, it inhibits the excessive proliferation of keratinocytes in vitro. [38] Glabrous greenbrier rhizome (Rhizoma Smilacis Glabrae) can significantly reduce serum TNF-α and IL-8 levels in patients with blood-heat-type psoriasis, inhibit chemotherapeutic effects of VEGF on inflammatory mediators and vascular endothelial cells and alleviate inflammation of the psoriasis lesions. [39] Other studies have shown that: Topical indigo naturalis ointment is clinically proved to be an effective therapy for plaque-type psoriasis. Indirubin, as the active component of indigo naturalis, inhibits cell proliferation of epidermal keratinocytes. [40] And indirubin ameliorated psoriasiform dermatitis by breaking CCL20/CCR6 axis-mediated inflammatory loops. [41] The working mechanism of LJR ointment for mild and moderate psoriasis vulgaris with the blood-heat syndrome remains unknown. Due to the functions of single Chinese herbs in LJR Ointment, it may explain the reason why LJR ointment can significantly improve skin pathological changes, inhibit related immune factors, and improve target lesions. More research is needed to explore the mechanism of such compound preparations as LJR ointment for mild and moderate psoriasis vulgaris with blood-heat syndrome. The skin barrier function refers to a "brick wall" structure composed of lipids and natural moisturizing factors between the stratum corneum cells and cells. The skin surface is covered with a sebum membrane, which retains moisture and has anti-inflammatory effects, forming a natural protective barrier for the human body. [42,43] LJR Ointment functions in a two-pronged manner: Firstly, it prevents the loss of moisture from human skin and stops the outside water from entering the human body easily. Secondly, it prevents bacteria and fungi on the surface of the skin from invading the human body. The skin barrier function of patients with psoriasis is more vulnerable than that of healthy people Even skin areas are not affected by lesions, while the recovery rate of barrier function sharply declines. [44] Abnormal skin barrier function may be the underlying pathogenesis of psoriasis.
The function of the skin barrier depends on skin temperature, sebaceous gland content, pH value, and the water content of the stratum corneum. Under normal circumstances, the skin pH value ranges between 4.5 and 6.5, [45] and the normal stratum corneum water content ranges between 20% and 35%. [46] A decrease in the content of ceramide, the main lipid component associated with the sebaceous glands, can leave the keratinocytes disorderly arranged and dislocated, with water loss resulting in dry, desquamating, scale-like skin. [47] Patients with psoriasis have dry skin and severe water loss, and may show an elevated pH value, high transepidermal water loss, and a drop in the water content of the stratum corneum.
Telangiectasia in the dermis accelerates blood flow, raises the skin temperature, and aggravates transepidermal water loss, compounded by chronic inflammation. As a result, the normal metabolism of the skin fails: lipids, natural moisturizing factors, and anti-inflammatory factors are reduced, resulting in dry skin and desquamation. [48] Based on previous findings, this paper further explores the skin barrier function of patients with psoriasis. It was found that the temperature, moisture, lipids, and pH value of the skin barrier in the treatment group were significantly improved after treatment, while the control group showed no significant changes. This indicates that LJR Ointment can restore skin barrier function.
The reasons why no significant between-group difference was found in skin temperature and lipids after treatment are analyzed as follows. Firstly, participants with mild and moderate psoriasis with the blood-heat syndrome were in the active stage, so lesions may have worsened during the 8-week follow-up period. Secondly, the placebo ointment contains beeswax and vaseline, and vaseline has some therapeutic effects on psoriasis. [49] There are several limitations to the findings of this paper. Firstly, psoriasis vulgaris is a chronic, recurrent inflammatory skin disease, so the long-term effect and safety of a treatment are very important for patients; this research project lasted 8 weeks, which cannot provide a long-term evaluation of the effect and safety of LJR Ointment. Secondly, the sample size of this research project was relatively small, which may influence the outcomes. More clinical studies with larger sample sizes and longer treatment and follow-up periods are needed.
Conclusion
LJR Ointment can improve the PASI score, PASI 75, and PGA in patients with mild and moderate psoriasis with the blood-heat syndrome, and it may reconstruct the skin barrier function. LJR Ointment appears safe for use in patients with mild and moderate psoriasis with the blood-heat syndrome. | 2022-11-17T15:39:47.123Z | 2022-11-11T00:00:00.000 | {
"year": 2022,
"sha1": "315915b75cfb5357364978c4fd7cee440a29b8e6",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "WoltersKluwer",
"pdf_hash": "315915b75cfb5357364978c4fd7cee440a29b8e6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236513575 | pes2o/s2orc | v3-fos-license | Large Right Atrial Myxoma Presenting As Bilateral Pulmonary Embolism
Myxoma is a rare benign tumor of the heart. Cardiac myxomas are the most common primary cardiac tumor in adults and are most commonly found within the left atrium. They can occur at any age and are more common in females than males. This case report aims to identify the clinical symptoms of cardiac myxoma, which can be life-threatening if neglected. Here, we present the case of a 30-year-old female with a past smoking history. Over the three to four weeks before this hospitalization, her symptoms had worsened, including shortness of breath with exertion, dry cough, and pleuritic chest pain. Outpatient treatment with antibiotics and nebulizers did not relieve her symptoms. She went to the emergency room and underwent computed tomography of the chest with contrast, which showed bilateral lower lobe pulmonary emboli and a large mass in the right atrium. Intravenous unfractionated heparin was initiated. A transthoracic echocardiogram confirmed a 3.76 cm × 4.95 cm mass in the right atrium. The patient underwent surgical resection of the right atrial mass the following day and was discharged four days later in a stable condition. Pathology of the mass confirmed atrial myxoma.
Introduction
Primary cardiac tumors are extremely rare, with an incidence of less than 0.1% found in 12,000 autopsies [1]. Primary cardiac tumors are less common than secondary (metastatic) tumors, with incidence for primary tumors of 0.056% versus 1.23% for secondary tumors found in 12,485 autopsies [2]. Cardiac myxomas are more common in the left atrium as opposed to the right atrium [3]. The mean age of patients diagnosed with cardiac myxoma is 50 years, occurring more commonly in females [4].
Case Presentation
A 30-year-old woman with a past medical history of tobacco use presented to our facility with worsening shortness of breath on exertion, dry cough, and pleuritic pain. Outpatient treatment with antibiotics and nebulizers did not relieve her symptoms. She then presented to the emergency room (ER). In the ER, she was in no apparent distress; her blood pressure was 122/75 mmHg, heart rate 90 beats per minute, respiratory rate 18 breaths per minute, temperature 98.2°F, and pulse oximetry was 95% on room air. Heart and lung examinations were normal.
FIGURE 1: Chest X-ray showing normal lungs, with no consolidation or pleural effusion and normal cardiac silhouette.
Given the persisting symptoms despite outpatient medical management and a clear chest X-ray, she underwent computed tomography (CT) of the chest with contrast, which showed bilateral lower lobe pulmonary emboli (Figures 2, 3) and a 4.5 cm mass in the right atrium (Figure 4).
Intravenous unfractionated heparin was started. A subsequent transthoracic echocardiogram confirmed a 3.76 cm × 4.95 cm mass in the right atrium (Video 1, Figures 5, 6).
VIDEO 1: Transthoracic echocardiogram (RV inflow window) showing the large and highly mobile mass in the RA protruding through the tricuspid valve and obstructing the RV inflow. RA: right atrium; RV: right ventricle.
A discussion was held with the cardiothoracic surgery team. The patient underwent surgical excision of the right atrial mass the following day. An intraoperative transesophageal echocardiogram revealed the massive right atrial mass with a stalk attached to the interatrial septum (Video 2). Pathology of the mass confirmed right atrial myxoma (Figure 7). The patient was discharged four days later to a rehab facility in a stable condition.
Discussion
Atrial myxomas are the most common benign primary cardiac tumors, with a prevalence of around 0.01% [1]. Most cases occur between the third and sixth decade of life, with a high prevalence among females. Although most myxomas arise from the border of the fossa ovalis in the left atrium, they can rarely emerge from the right atrium, as in our case [5]. Morphologically, they are pedunculated, smooth, polypoid, round or oval structures, with their mobility dependent on their attachment site and stalk length. Clinical presentation can vary from an asymptomatic state to systemic symptoms or sudden cardiac death [6]. Due to the nonspecific presentation, diagnosis can be challenging. Left-sided myxomas with a racemose structure and a size over 5 cm in diameter are more likely to produce symptoms [7]. Myxoma-related symptoms are usually produced by mechanical interference with valvular function, leading to stenotic or regurgitant lesions, or by embolization due to their highly vascular and friable nature.
For diagnostic evaluation, a transthoracic echocardiogram is widely available and provides essential information such as tumor location, size, mobility, and attachment. Color Doppler can provide further details regarding hemodynamic consequences. Transesophageal echocardiogram has 100% sensitivity and is more specific than transthoracic echocardiogram. Cardiac magnetic resonance imaging delineates the point of attachment most precisely, with a postsurgical correlation of 83% [8].
Surgery is the treatment of choice, with a perioperative mortality of around 2.2%. Postoperative atrial arrhythmia has been reported in 26% of cases [9]. Recurrent myxoma occurs in 2-5% of sporadic cases, whereas recurrence rates are 12% in familial cases and 22% in complex atrial myxomas. Currently, there is no consensus on the adequate follow-up interval for recurrence or on screening family members.
Conclusions
This case emphasizes the varying presentation of atrial myxoma and the diagnostic challenge it may present. Echocardiography is a widely available noninvasive tool that can identify the tumor and should be performed early in patients in whom it is suspected. Surgical resection is usually curative with an excellent success rate but is associated with a risk of developing atrial arrhythmias. Currently, there is a lack of guidelines on determining the optimal time for surgical intervention and the decision to initiate anticoagulation therapy, as there is an increased risk of thrombotic complications in addition to embolization of the tumor mass. Further research is needed to determine the optimal follow-up duration and frequency for tumor recurrence, particularly when it occurs in younger patient populations and at atypical locations.
Additional Information Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2021-07-31T05:15:50.441Z | 2021-06-01T00:00:00.000 | {
"year": 2021,
"sha1": "adb7b0477b78fde672c4e8333e16cecf4b9c03f9",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/61102-large-right-atrial-myxoma-presenting-as-bilateral-pulmonary-embolism.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "adb7b0477b78fde672c4e8333e16cecf4b9c03f9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255081886 | pes2o/s2orc | v3-fos-license | Effectiveness of glycopyrronium bromide in the treatment of small airway dysfunction: A retrospective study
Objective: Glycopyrronium bromide has a quaternary ammonium structure and a low oral bioavailability, which reduces its systemic effects; it acts through a bronchodilating blockade of muscarinic receptors. The aim of this retrospective study was to analyze a possible relationship between the changes in the small airways and the efficacy of a bronchodilation with glycopyrronium bromide; exercise tolerance was also assessed, by performing the six-minute walking test. Methods: Forty-one patients were identified (23 females/18 males; mean age 66.82 ± 9.75 years), with a normal forced expiratory volume in 1 s (FEV1)/forced vital capacity ratio of 77.45% ± 4.86%, a reduced forced mid-expiratory flow between 25% and 75% of forced vital capacity (FEF25–75) of 42.9% ± 10.5%, with an increased residual volume/total lung capacity ratio of 132.68% ± 6.41%, FEV1 1.85 ± 0.54 L, forced vital capacity 2.39 ± 0.71 L, airway resistance (sR tot) 168.18% ± 42.5%, total lung capacity 98.28% ± 8.9%, six-minute walking test distance 318.3 ± 36.6 m, modified British Medical Research Council dyspnea scale 1.48 ± 0.77. All patients were initiated with glycopyrronium bromide 50 μg/die and reassessed after 4 months. Results: After the treatment with glycopyrronium bromide, a significant improvement was noted regarding forced vital capacity (p = 0.04), FEF25–75 (p < 0.001), sR tot (p < 0.001), residual volume/total lung capacity ratio (p < 0.001) with reduction of dynamic hyperinflation, the significant increase of the distance covered during the six-minute walking test (p < 0.001), and modified British Medical Research Council (p < 0.001) showed enhanced exercise tolerance. FEV1 improved, but the difference was not statistically significant. Conclusions: Small airway dysfunction is associated with bronchodilator responsiveness. Glycopyrronium bromide has proven to be capable of inducing favorable effects on lung hyperinflation and its functional and clinical consequences, with a decrease in dyspnea and an increase in exercise capacity. The use of anticholinergic drugs is useful in the management of small airway disease.
Introduction
Glycopyrronium bromide (GB) is an inhaled, long-acting muscarinic receptor antagonist (LAMA) with a quaternary ammonium structure and a low oral bioavailability, which reduces its systemic effects. It acts as a bronchodilator by blocking the muscarinic receptors M1 and M3. It does not cross the blood-brain barrier and therefore has few or no central effects. In clinical trials, patients who received GB as a symptom controller for moderate or severe chronic obstructive pulmonary disease (COPD) 1,2 experienced a relevant amelioration in lung function, associated with an increased control of COPD symptoms, as well as with a lower need for rescue medication inhalers. An improvement in the quality
of life was also reported. Quality of life represents an important aspect of COPD management. 3,4 Published studies have shown that the damage caused by the inhalation of toxic particles, such as cigarette smoke and environmental pollutants, affects the "small airways." Small airways, the quiet zone between the conducting and the respiratory lung zones, consist of respiratory bronchioles, which have a partially alveolated wall, and terminal bronchioles, which are devoid of cartilage and mucus-secreting glands. 5 Small airway disease is characterized by inflammation of the smallest bronchi and bronchioles, with inflammatory cellular infiltration, goblet cell metaplasia, and fibrosis, which lead to an increased thickness and tortuosity of the walls, as well as to an enhanced airway resistance due to bronchial obstruction. This produces airflow limitation during expiration, resulting in lung hyperinflation and air trapping. 6,7 We have studied a particular phenotype of patients, in whom the index of bronchial caliber, the forced expiratory volume in 1 s (FEV 1 )/forced vital capacity (FVC) ratio, was in the normal range, but the forced expiratory flow in the middle half of the patient's exhaled volume, the forced mid-expiratory flow between 25% and 75% of FVC (FEF 25-75 ), was reduced. FEF 25-75 may be a more sensitive index for identifying obstruction at the level of the small airways. We recorded an increased value of the residual volume (RV)/total lung capacity (TLC) ratio as a marker of lung hyperinflation, and assessed exercise tolerance by measuring the six-minute walking test (6MWT) distance. The aim of this study was to determine whether there is a relationship between the changes in the small airways and the efficacy of bronchodilation with GB.
Methods
This study included consecutive patients without acute manifestations of any disease and in a stable clinical state, whose dyspnea and exercise intolerance induced a referral to the Department of Respiratory Pathophysiology of "Mariano Santo" Hospital in Cosenza, Italy. Participants were evaluated by respiratory physicians from January 2018 to December 2018. All data were collected retrospectively. A detailed clinical history was taken, and a physical examination was performed. Lung function was measured through spirometry, carried out according to the American Thoracic Society/European Respiratory Society guidelines 8 ; exercise tolerance was assessed by performing the 6MWT. 9 Symptoms were also evaluated using the modified British Medical Research Council (mMRC) scale. 10 Patients were excluded if they had reported exacerbations in the previous 4 weeks, or in case of other lung diseases and uncontrolled comorbidities, such as severe cardiovascular diseases and malignant disorders.
Small airway obstruction is characterized by premature airway closure and air trapping, regional heterogeneity, and exaggerated volume dependence of airflow limitation. Therefore, tests that focus on these functional aspects can be useful surrogates in order to detect and quantify small airway disease. High levels of RV can be detected in the presence of premature airway closure and air trapping, TLC is commonly increased in obstructive disease, and the RV/TLC ratio is the best measure of RV increase, being also considered the first indicator of hyperinflation. FEF 25-75 is the most cited functional measure of small airway obstruction. By excluding the initial peak of expiratory flow and averaging the flow rate over the mid-quartile range of FVC, FEF 25-75 is very sensitive to the same small airway characteristics that result in the concavity of the expiratory flow-volume curve.
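As a worked illustration of this definition, a minimal sketch computing FEF 25-75 from a forced-expiration volume-time curve follows; the curve below is a hypothetical toy example, not study data. The mean flow over the mid-quartile range equals half the FVC divided by the time taken to exhale from 25% to 75% of FVC:

```python
# Minimal sketch: FEF25-75 from a forced-expiration volume-time curve.
# The data below are hypothetical, for illustration only.
import numpy as np

t = np.linspace(0.0, 4.0, 401)                 # time (s)
volume = 3.2 * (1.0 - np.exp(-2.0 * t))        # exhaled volume (L), toy curve

fvc = volume[-1]                               # forced vital capacity
t25 = np.interp(0.25 * fvc, volume, t)         # time at 25% of FVC exhaled
t75 = np.interp(0.75 * fvc, volume, t)         # time at 75% of FVC exhaled

fef_25_75 = 0.5 * fvc / (t75 - t25)            # mean flow over mid-half of FVC (L/s)
print(f"FVC = {fvc:.2f} L, FEF25-75 = {fef_25_75:.2f} L/s")
```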
As a measure of airflow limitation, FEF 25-75 is highly correlated with the FEV 1 /FVC ratio, but non-linearly, so that FEF 25-75 decreases more steeply than FEV 1 /FVC at mild obstruction levels. We considered a FEV 1 /FVC ratio <70% as a marker of obstruction, 11 a FEF 25-75 <60% of the predicted value as an expression of small airway dysfunction, 12 and an increase in the RV/TLC ratio >20% as an indicator of lung hyperinflation. All subjects who exhibited a FEF 25-75 reduction, a normal FEV 1 /FVC ratio, and an increased RV/TLC ratio with symptoms of dyspnea upon exertion were treated with a bronchodilator therapy consisting of GB at the dosage of 50 μg once a day for 4 months. Patients did not receive other inhaled therapies during the study period. Clinical and functional parameters were collected at baseline and after 4 months of GB treatment. Because of the mild impairment of lung function detected in these patients, we decided to treat them only with an effective and safe LAMA, and our choice thus pointed to GB. Therefore, this therapeutic approach was part of an institutional protocol aimed at avoiding the use of albuterol or salbutamol on demand. Indeed, we think that the use of short-acting β 2 agonists as rescue medications is not able to improve lung hyperinflation in patients such as those recruited in our retrospective study. All subjects provided written informed consent to the study, in accordance with the Helsinki Declaration.
Statistical analysis
The statistical analysis was performed using the SPSS program for Windows, version 9.0.0 (SPSS, Inc., Chicago, IL, USA). Data are shown as mean ± standard deviation (SD). Comparisons between data before and after treatment, within each group of patients, were done by paired Student's t-test. The level of significance was set at p < 0.05.
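The same before/after comparison can be reproduced outside SPSS. This is a minimal sketch; the values below are illustrative placeholders, not the study data:

```python
# Minimal sketch: paired Student's t-test on before/after measurements.
# The arrays below are illustrative placeholders, not the study data.
import numpy as np
from scipy import stats

fvc_before = np.array([2.1, 2.6, 1.9, 2.4, 2.8, 2.2])  # FVC (L) at baseline
fvc_after  = np.array([2.4, 2.8, 2.1, 2.6, 3.0, 2.5])  # FVC (L) after 4 months of GB

t_stat, p_value = stats.ttest_rel(fvc_after, fvc_before)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # significant if p < 0.05
```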
Results
Overall, 115 patients were screened. Among these, 41 subjects met the inclusion criteria (23 females/18 males; mean age 66.82 ± 9.75 years), who had a normal FEV 1 /FVC ratio of 77.45% ± 4.86% and a reduced FEF 25-75 of 42.9% ± 10.5%, with an increased RV/TLC ratio of 132.68% ± 6.41%, FEV 1 1.85 ± 0.54 L, and FVC 2.39 ± 0.71 L. Their main demographic and clinical features are shown in Table 1. All patients had a diurnal respiratory function characterized by an obstruction of the small airways, associated with air trapping. Twenty-eight patients were smokers, while 13 subjects never smoked. A decreased exercise tolerance was shown by both the baseline 6MWT (318.3 ± 36.6 m) and the information provided in the mMRC questionnaire (1.48 ± 0.77).
No side effects were recorded after the administration of GB.
Discussion
COPD is currently recognized as a complex clinical syndrome, rather than a specific disease entity, and is thus considered a broad term comprising a heterogeneous group of phenotypes that may have different treatment responses. In light of this understanding, the approach based on "one treatment to fit all" may not be appropriate, and there is a trend toward the individualization of COPD therapy based on distinct phenotypes. More specifically, a key contributing factor to poor disease control might be the fact that such patients express a "small airway phenotype," in the presence of an ongoing and unopposed small airway inflammation which is not being targeted nor controlled. 13,14 Many patients who come to our attention with symptoms such as shortness of breath and recurrent cough display small airway parameter changes (decreased FEF 25-75 ) and significant hyperinflation (increased RV/TLC ratio), with a FEV 1 /FVC ratio within the normal range as observed with spirometry testing. It can be useful to treat patients with this phenotype. This is the first study that highlights the efficacy of LAMA treatment in patients with a FEV 1 /FVC ratio within the normal range but with a significant obstruction of the small airways. Generally, small airway dysfunction is associated with bronchodilator responsiveness. Bronchodilation effectiveness can be demonstrated as a significant improvement in both FEV 1 and FVC, when compared to the values measured before bronchodilator use. 15,16 In our study, after LAMA treatment, we recorded an improvement in FVC. Lung hyperinflation is more closely associated with symptoms and exercise performance than spirometric assessments of reduced maximal expiratory flow rates. The progressive increase in resting hyperinflation as the disease advances has major implications for dyspnea and exercise limitation in COPD. 17,18 It is well established that widespread inflammatory damage to the peripheral airways, lung parenchyma, and pulmonary vasculature can occur with only minor airflow obstruction. 19 Gas trapping, as assessed by expiratory CT scans, can exist in the absence of structural emphysema, and it is believed to indirectly reflect small airway dysfunction in mild COPD. 20,21 Corbin et al., 22 in a 4-year longitudinal study on smokers with chronic bronchitis, reported a progressive increase in lung compliance leading to gas trapping, manifested by an increase in RV without significant changes in FEV 1 . 22 The net effect is that resting lung hyperinflation contributes to an increased elastic load on the inspiratory muscles, while simultaneously impairing their force-generating capacity. 22,23 Small airway closure can lead to expiratory flow limitation through the increase in airway resistance, a consequence of the bronchial-bronchiolar caliber reduction due to structural remodeling and augmented vagal tone, together with the destruction of elastic pulmonary tissue. In these conditions, the RV increases, because the volume at which the balance between the elastic pressures of the lung and chest wall occurs is increased, leading to static lung hyperinflation. 24,25 Thus, lung hyperinflation and the consequent changes in respiratory mechanics cause an increased work of breathing, which in turn leads to respiratory muscle fatigue, increased load, and inefficient respiration.
Dyspnea usually arises when the gas exchange is inefficient, as in ventilation/perfusion mismatching, exercise-induced hypoxemia, and impaired respiratory mechanics, where an uncoupling occurs between the increased ventilatory stimulus and the decreased mechanical performance. 26,27 Bronchodilators, the cornerstone of COPD therapy, reduce the airway obstruction, leading to a decrease in the RV, and allow patients to exercise longer, thus positively impacting their quality of life. 28 The majority of the published studies aim to estimate the effectiveness of the drug through the improvement in FEV 1 , which correlates only weakly with changes in symptoms and exercise tolerance, as these are more closely linked to the reduction in lung inflation. In this study, we showed that GB can induce favorable effects on lung hyperinflation and its functional and clinical consequences, leading to a decrease in dyspnea and an increase in exercise capacity. In these patients, the most important effect was the increase in FVC, which sustained symptom improvement, even in the absence of FEV 1 /FVC ratio changes. 29,30 The FEV 1 increase after bronchodilation is generally small, if any. The most significant change after bronchodilation is the decrease in the RV/TLC ratio, with a subsequent RV reduction, reflecting a lowered lung hyperinflation, regardless of the basal FEV 1 value. 27 Interestingly, the therapeutic efficacy of anticholinergic bronchodilation was primarily shown in our patients by the decrease in peripheral airway resistance. This appears to be a class effect due to LAMA utilization. In particular, we decided to use GB rather than the more referenced tiotropium bromide, because GB is characterized by a faster onset of action, which provides a more rapid relief of dyspnea. Antimuscarinic bronchodilators, such as GB, have proven to be very useful in terms of lung deflation and exercise tolerance. 30,31 Indeed, after treatment our patients experienced a better 6MWT performance. Evidence is growing in support of the concept that airway dysfunction and inflammation in the small airway region of the lung may be contributing to distinct patient phenotypes. Besides the well-known concept that non-neuronal acetylcholine plays a relevant inflammatory role, the muscarinic receptor antagonism of non-neuronal acetylcholine released from the airway epithelium is also important, 32,33 thus potentially contributing to GB effects at the level of the small airways. [34][35][36] Taken together, the results of the present study suggest that GB is capable of positively and significantly affecting lung hyperinflation, respiratory symptoms, and the psycho-physical status of patients with mild lung function impairment, likely allowing them to better use tidal volume and improve their ventilatory performance. 37,38 Bronchodilation induced by GB is maintained throughout 24 h after the administration of a single daily dose. This certainly represents a considerable advantage with regard to therapy adherence, because it is well known that the efficacy of a treatment also depends on the patient's compliance. 29,31 An increased awareness of the importance of small airway dysfunction is an obvious first step, but the therapeutic challenge is to reverse the damage or at least to prevent disease progression. Earlier diagnosis could allow a more effective understanding of the role of small airways in triggering COPD pathogenesis, thereby driving a more rational approach to treatment, which may ultimately affect the prognosis of this disabling respiratory disorder.
However, the small size of the patient population, the lack of an appropriate sample size/power calculation, the absence of both a control arm and a randomization procedure, as well as the single-center nature of this study, represent relevant limitations.
Conclusions
In summary, based on the above-mentioned results, this study shows that GB can provide a beneficial effect on lung hyperinflation and its clinical and functional consequences. Anticholinergic bronchodilators such as GB can play an important role in the management of small airway dysfunction. There is currently a great interest in identifying clinical phenotypes of COPD that will ultimately guide toward a more personalized approach to disease management. This is a retrospective study which suggests, without definitely establishing, that LAMA may be potentially effective in patients with small airway impairment. Within this context, our limited clinical experience implies that further studies are needed to determine whether the patient with lung hyperinflation due to diseases of small airways will qualify as a distinct phenotype, possibly susceptible to specific therapeutic approaches. | 2022-12-25T16:17:36.517Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "d6188c4bf9faa652f33e3254465aa32e0dc11e45",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "0da5a0f8d87f37b4a0659f67024a7e8d49e1841b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
202214867 | pes2o/s2orc | v3-fos-license | Properties of self compacting concrete exposed to wetting and drying cycles in oil products
Although early deterioration of concrete exposed to petroleum products has been observed, the behavior of Self Compacting Concrete (SCC) exposed to oil products is still unknown. In the present investigation, the mechanical and physical properties of hardened SCC cyclically exposed to oil products (crude oil, gas oil, motor oil, and fuel oil) have been tested and discussed. The numbers of exposure cycles are 2, 3, 4, 5, and 6. Each exposure cycle consists of twenty days of immersion in water or the different oil products, followed by air drying at ambient temperature for a period of ten days. The results have been compared with those of other specimens cured in water or continuously exposed to oil products for the same period. The results show that the effect of continuous exposure to oil products on the mechanical properties is more severe than that of cyclic exposure of SCC specimens. The compressive strength, splitting tensile strength, modulus of rupture, and static modulus of elasticity of SCC specimens deteriorated only slightly up to the fifth cycle of exposure for all types of oil products, compared to the reference specimens (cured in water). At the sixth cycle, all properties decreased.
Introduction
Oil has become one of the most vital energy resources since the beginning of the previous century owing to its unique economic and operative characteristics. This has enabled it to surpass the other available power resources, and its importance has increased rapidly with its widespread use and the discovery of huge oil reservoirs in different parts of the world [1]. There is limited information, and few studies, on the behavior of concrete exposed to petroleum products, especially for new types of concrete such as self-compacted concrete (SCC). In spite of the advantages of concrete structures, such as shock and fire resistance and low maintenance cost, the use of concrete structures for oil storage is still limited due to several restrictions: the unknown behavior of concrete exposed to oil products, the penetration of oil through concrete or concrete cracks, and the difficulty of modification or repair [2]. Given the significant improvements in the properties of concrete containing different types of admixtures, it is very important to study the effect of admixtures on the behavior of concrete exposed to oil products in order to improve their properties [3]. The behavior of SCC after exposure to oil products is still a poorly understood area and requires further investigation.
Coarse Aggregate
Iraqi natural crushed gravel is used. Its grading conforms to the requirements of ASTM C33-03 [5] size No. 7. The specific gravity, sulfate content, and absorption of the coarse aggregate are 2.64, 0.096%, and 0.7%, respectively.
Fine Aggregate
Iraqi natural sand is used as fine aggregate. Its gradation was within the requirements of ASTM C33-03 [5]. The specific gravity, sulfate content and absorption of the fine aggregate are 2.6, 0.19%, and 0.75% respectively.
Superplasticizer
The superplasticizer used throughout this study is commercially known as "structure 520" [6]. It is based on a unique carboxylic ether polymer with long lateral chains and is suitable for the production of SCC. It is free from chlorides and complies with ASTM C494 type F [7]. Its specific gravity is 1.1, with a pH value of 6.5 and an alkali content of less than 1.5 g of Na2O equivalent per liter of admixture.
Water
Potable water is used throughout this experiment for mixing the SCC and curing of the hardened SCC specimens.
Silica Fume
Condensed silica fume is used to produce SCC with reliable fresh concrete properties. Its accelerated pozzolanic strength activity index with Portland cement at 7 days is 106%. The chemical composition and physical requirements shown in Tables (1) and (2) respectively indicate that the silica fume conforms to the chemical and physical requirements of ASTM C1240 specifications [8].
Oil Products
Oil products from Iraqi ministry of oil are used in this investigation. Table (3) shows the chemical analysis of the oil products used in this investigation to expose different SCC specimens for specific period.
Concrete Mixes
The self-compacted concrete mix has been designed according to the European guidelines for testing fresh SCC (EFNARC) [5] to obtain a minimum cube compressive strength of 65 MPa at 28 days. The mix proportions are 1:1.72:1.97 by weight, with 10% silica fume as an addition by weight of cement and a w/c ratio of 0.38. Several trial mixes were carried out in order to select the optimum dosage of superplasticizer (SP) (2.8 liters per 100 kg of cementitious materials), which was determined by using all the workability tests of SCC (slump flow test, V-funnel test, L-Box test, and J-Ring test).
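To make the reported ratios concrete, a small sketch converting them into batch masses follows. The cement content of 450 kg/m³ is a hypothetical assumption, since the paper reports only ratios, and the w/c ratio is here applied to cement alone, which is also an assumption:

```python
# Minimal sketch: batch quantities from the reported mix ratios.
# Assumption: cement content of 450 kg/m3 (hypothetical; the paper gives only ratios).
cement = 450.0                      # kg per m3 of concrete (assumed)
sand   = 1.72 * cement              # fine aggregate, ratio 1:1.72 by weight
gravel = 1.97 * cement              # coarse aggregate, ratio 1:1.97 by weight
silica_fume = 0.10 * cement         # 10% addition by weight of cement
water  = 0.38 * cement              # w/c = 0.38 (applied to cement only, assumed)
sp_liters = 2.8 * (cement + silica_fume) / 100.0  # 2.8 L per 100 kg cementitious

print(f"sand={sand:.0f} kg, gravel={gravel:.0f} kg, SF={silica_fume:.0f} kg, "
      f"water={water:.0f} kg, SP={sp_liters:.1f} L")
```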
Mixing of SCC
SCC was mixed in a rotary mixer with a capacity of 0.1 m3. Fine aggregate was added to the mixer with one third of the water, and mixing lasted for 1.5 minutes. Cement and silica fume were then added with another third of the water, and mixing continued for 3 minutes. Half the quantity of coarse aggregate was then added with the remaining water and one third of the superplasticizer dosage (2.8 liters per 100 kg of cementitious materials), and the batch was mixed for another 1.5 minutes. The mixture rested for about 0.5 minute, after which the remaining coarse aggregate and superplasticizer were added and remixed for 1.5 minutes. After that, the fresh-property tests were performed to verify that the self-compactability requirements were met.
Casting and Curing of SCC Specimens
Standard moulds were prepared for SCC casting. They include 100 mm cubes for compressive strength according to BS 1881: Part 116 [9], 100 × 200 mm cylinders for splitting tensile strength according to ASTM C496 [10], 150 × 300 mm cylinders for modulus of elasticity according to ASTM C469 [11], and 100 × 100 × 400 mm prisms for modulus of rupture according to ASTM C78 [12]. The SCC mixture does not require any compacting procedure, so the mix was simply poured into the tight steel moulds until they were completely filled, and it leveled easily. The moulds were covered with polyethylene sheet for about 24 hours, and then the specimens were demoulded. To develop the strength of the specimens, they were cured in water tanks for 27 days. At 28 days of age, the specimens were divided into five groups according to the goal of the study. These groups were cyclically (six cycles) exposed to the following fluids:
Group 1: exposed to water.
Group 2: exposed to crude oil.
Group 3: exposed to gas oil.
Group 4: exposed to motor oil.
Group 5: exposed to fuel oil.
Each exposure cycle consisted of twenty days of immersion in water or the various oil products, followed by air drying at ambient temperature for a period of ten days. To examine the effect of such conditions, the mechanical and physical properties of the SCC specimens were assessed through the experimental tests detailed in Section 3.3.2.
Table 3. Properties of different oil products. *All oil product tests were carried out by the laboratory of the Ministry of Oil.
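As a quick check on the exposure timeline (an inference from the stated scheme, not an explicit figure in the paper): each cycle spans 20 days of immersion plus 10 days of air drying, i.e., 30 days, so 5 cycles correspond to 150 days of exposure, matching the period cited in the results, and 6 cycles to 180 days.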
Fresh Properties of SCC
Fresh properties of SCC were tested according to the procedures of the European guidelines for testing fresh SCC (EFNARC) [13]. Three characteristics were assessed by conducting three tests: flowability, passing ability, and segregation resistance. After specifying the weight mix proportions of 1:1.72:1.97 (cement:sand:gravel), several trial mixtures were carried out to select the optimum dosage of superplasticizer producing SCC; it is 2.8 liters per 100 kg of cementitious materials, with 10% silica fume as an addition by weight of cement and a w/c ratio of 0.38. The ingredients of the mixture producing SCC with acceptable criteria are shown in Table (4).
Table 4. Mix proportions and fresh properties for the selected SCC mix.
Compressive Strength
The results summarized in Table (5) and Figure (1) represent the compressive strength values of SCC exposed to the various types of immersion and drying cycles in water or oil products. The compressive strength of SCC specimens increases as the number of wetting and drying cycles in water increases. This may be due to the continuous hydration process of the cement paste, which forms new hydration products within the SCC mass and improves the bond between cement paste and aggregate [14,15]. The presence of silica fume in SCC also improves the mechanical properties and durability of concrete [16]. Generally, the results indicate that the compressive strength of SCC specimens exposed to cycles of immersion and drying in oil products (all types) decreases compared to the reference specimens (exposed to the same number of wetting and drying cycles in water). The reduction in compressive strength is probably due to the extension of gel pores and the dispersion of solid hydration components caused by the penetration of oil products into the microstructure of SCC, which reduces the adhesion and cohesion forces in the cement paste matrix, in addition to the effects of oil products on SCC surface interactions, which have been confirmed by other investigations [17,18,19,20]. There is only a slight decline in the compressive strength of SCC specimens exposed to immersion and drying cycles in oil products up to an exposure period of 150 days (5 cycles) relative to specimens exposed to the same cycles in water, while specimens exposed to six cycles show a greater reduction in compressive strength. This could be attributed to the pores of the SCC, which are still partially filled with water, leading to further hydration that delays the deterioration of SCC [21,22]. The presence of silica fume leads to a modification of the microstructure of concrete, especially at later ages, that significantly reduces the permeability of concrete; this delays the deterioration of concrete subjected to oil products [23]. It is observed that specimens exposed to six cycles of immersion in crude and gas oil show a greater reduction in compressive strength than those exposed to motor and fuel oil. This is due to the difference in the viscosity of these products, which has a great effect on concrete properties [24]: the higher the viscosity of the oil, the less harmful it is to concrete [22]. Table (3) shows that crude and gas oils have lower viscosity than motor and fuel oils.
The results of the present study have been compared with those of a study conducted by [25], which investigated the properties of SCC continuously exposed to oil products. Figure (2) shows that the compressive strength of SCC specimens continuously immersed in water is higher than that of specimens exposed to wetting and drying cycles in water for the same period. This can be attributed to the continuous hydration of cement in specimens continuously immersed in water, which improves compressive strength. Figure (3) shows the decline in compressive strength of SCC specimens continuously exposed to oil products and of specimens immersed in oil products for different numbers of cycles, relative to reference specimens (exposed to water for the same period). It can be concluded that continuous exposure to oil products is more severe than exposure to wetting and drying cycles. This may be due to the deterioration of the surface of specimens continuously exposed to oil products [17]. Moreover, the compressive strength of high-density concrete depends to a large extent on cement hydration. When the oil film coats the concrete surface or unhydrated cement grains within the concrete, it blocks moisture from reaching the coated surfaces and grains, arresting their further hydration. Therefore, the increase in concrete strength slows down or stops altogether [26].
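Throughout the results, "decrease ratio" denotes the relative reduction with respect to the water-exposed reference. A one-line helper makes the convention explicit; the example strengths below are hypothetical, chosen only to reproduce the order of magnitude of the reported six-cycle crude-oil loss:

```python
# Minimal sketch: relative decrease with respect to the reference specimen.
def decrease_ratio(reference: float, exposed: float) -> float:
    """Percentage reduction of a property relative to the reference value."""
    return (reference - exposed) / reference * 100.0

# Hypothetical compressive strengths (MPa), for illustration only:
print(f"{decrease_ratio(75.0, 55.4):.2f}%")  # -> 26.13%, the order of the 6-cycle crude-oil loss
```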
Splitting Tensile Strength
Splitting tensile strength results of SCC specimens exposed to wetting and drying cycles in water or the different oil products are shown in Table (5) and Figure (4). Generally, the test results indicate that the behavior of the splitting tensile strength of SCC follows a pattern similar to that of the compressive strength. The results for SCC specimens subjected to wetting and drying cycles in water show a continuous increase in splitting tensile strength as the number of cycles increases.
It is clear that increasing compressive strength leads to an increase in splitting tensile strength [27]. The results show that the splitting tensile strength of SCC specimens exposed to immersion and drying cycles in the different oil products decreases only slightly for 2, 3, 4, and 5 cycles compared to the reference specimens (exposed to water for the same number of cycles). The decrease is only 0.36%, 0.18%, 0.17%, and 0.86% for crude oil and 0.36%, 1.4%, 0.34%, and 0.86% for gas oil, while it is about 2.14%, 1.58%, 0.52%, and 0.17% for motor oil and 2.14%, 1.93%, 0.52%, and 0.69% for fuel oil, respectively. The logical explanation of this phenomenon is that the pores in the concrete are still partially filled with water, which causes further hydration of the cement and increases the strength of the concrete. Furthermore, the low permeability of the SCC produced in this investigation (containing silica fume) limits the penetration of oil products, preventing loss of bond and greatly enhancing the durability of concrete subjected to oil products. The splitting tensile strength of SCC specimens exposed to crude oil and gas oil decreased significantly as the number of immersion and drying cycles increased to six, compared to the reference; the decrease is about 24.1% and 14.85%, respectively. Specimens exposed to motor oil and fuel oil show only a slight decrease in splitting tensile strength in comparison with the reference (2.56% and 2.39%, respectively). This is due to the low viscosity of crude and gas oil. As for the compressive strength, the splitting tensile strength of SCC specimens continuously immersed in water is higher than that of specimens exposed to wetting and drying cycles in water, as shown in Figure (5). Continuous exposure to oil products is more severe than cyclic immersion and drying exposure with respect to the splitting tensile strength of SCC, as shown in Figure (6).
Fig. 1. Compressive strength for SCC specimens exposed to wetting and drying cycles in water or oil products.
Fig. 4. Splitting tensile strength of SCC specimens exposed to wetting and drying cycles in water or different oil products.
Modulus of Rupture
Modulus of rupture results of SCC exposed to wetting and drying cycles in water or the different oil products are summarized in Table (5) and Figure (7). The results indicate a continuous increase in the modulus of rupture as the number of wetting and drying cycles in water increases; the increase is about 1.09%, 1.37%, 2.74%, 3.15%, and 6.86% for exposure to 2, 3, 4, 5, and 6 cycles, respectively, compared to the reference (not exposed to wetting and drying cycles). The modulus of rupture of SCC specimens exposed to wetting and drying cycles in the different oil products decreases slightly for 2, 3, 4, and 5 cycles in comparison with the reference specimens (exposed to water for the same number of cycles). This is due to the closing and autogenous healing of cracks and flaws in the SCC as a result of possible volume changes caused by the oil products [28]. The modulus of rupture of SCC specimens exposed to the different oil products begins to decrease as the number of wetting and drying cycles increases to six, relative to the reference; the decrease for specimens exposed to six cycles in crude oil, gas oil, motor oil, and fuel oil is 15.17%, 7.46%, 5%, and 5%, respectively. Figure (8) shows that the modulus of rupture of SCC specimens continuously immersed in water is higher than that of specimens exposed to wetting and drying cycles for all exposure periods. This may be attributed to the continuous hydration of cement due to the continuous supply of water, which increases the modulus of rupture. The comparison between the decrease in modulus of rupture of SCC specimens continuously exposed to oil products and that of specimens exposed to wetting and drying cycles in oil products, relative to the reference specimens, is shown in Figure (9). It can be concluded that continuous exposure to oil products is more severe than wetting and drying exposure. This might be attributed to the continuous loss of some oil by evaporation during the drying cycles, which reduces its negative effect [22]. During continuous exposure, in contrast, the oil film coating the unhydrated cement grains blocks moisture from reaching the coated surfaces and grains, arresting further hydration; therefore, the strength development of the concrete slows or stops [26].
Static Modulus of Elasticity
The results of the static modulus of elasticity of SCC specimens exposed to wetting and drying cycles in water and oil products are summarized in Table (5) and Figure (10). The results illustrate a continuous increase in the static modulus of elasticity as the number of wetting and drying cycles in water increases. The static modulus of elasticity of SCC specimens exposed to immersion and drying cycles in the different oil products decreases slightly for 2, 3, 4, and 5 cycles relative to the reference specimens (exposed to water for the same number of cycles); the pattern of decrease in compressive strength leads to a corresponding decrease in the modulus of elasticity. SCC specimens exposed to crude oil and gas oil show the highest reduction in static modulus of elasticity when exposed to six cycles in comparison with the reference; the decrease is about 27.6% and 19.6%, respectively. This is because crude and gas oils have lower viscosity than motor and fuel oils, so their aggressivity toward concrete is higher. This reduction in the modulus of elasticity is attributed to: a- the weakening effect of the oil on the cement paste components, due to the dilution of the cement paste and the reduction of the adhesive forces between the gel particles; b- the presence of micro-cracks at the aggregate-matrix interfaces due to the volume changes during setting and hardening [22]. Figure (11) shows that the static modulus of elasticity of SCC specimens continuously immersed in water is higher than that of specimens exposed to wetting and drying cycles in water for all exposure periods. From Figure (12), it can be concluded that the continuous exposure of SCC to the different oil products is more severe than wetting and drying exposure. This may be due to the deterioration of the surface of specimens continuously exposed to oil products [17].
Total Absorption
The results show that the total absorption of all SCC specimens exposed to water or the different oil products is below 10% by weight, as shown in Table (5). This is an indication of concrete with low permeability [29]. The total absorption of SCC specimens exposed to wetting and drying cycles in water decreases as the number of cycles increases. This is attributed to the continuous hydration of cement, which decreases the absorption of SCC, and to the use of a low water/cement ratio and silica fume, which modifies the microstructure of concrete and reduces the capillary porosity, leading to better packing, increased density, and therefore a lower absorption ratio [30]. The results also show that the total absorption of SCC specimens exposed to wetting and drying cycles in crude oil, motor oil, and fuel oil decreases as the number of cycles increases. Specimens exposed to wetting and drying cycles in water show higher total absorption than those exposed to wetting and drying cycles in the different oil products for all exposure cycles. This may be related to the large molecular size of the oil products and their viscosity compared to water. Moreover, the SCC produced in this investigation has small pores in its microstructure, since silica fume leads to pore-size refinement, so the oil products need more time to penetrate in comparison with water. Generally, the total absorption of SCC exposed to wetting and drying cycles in crude oil is lower than that of SCC exposed to cycles in gas oil, motor oil, and fuel oil. This can be attributed to the deposits and wax found in the chemical composition of crude oil, as shown in Table (3), which can block some pores.
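Although the paper does not state the exact formula, total absorption by immersion is conventionally computed from specimen masses; a minimal sketch under that assumption, with hypothetical masses:

```python
# Minimal sketch: total absorption as percentage mass gain after immersion.
# Conventional mass-based definition assumed (not stated explicitly in the paper).
def total_absorption(dry_mass_g: float, saturated_mass_g: float) -> float:
    """Return absorption as % of dry mass."""
    return (saturated_mass_g - dry_mass_g) / dry_mass_g * 100.0

# Hypothetical specimen masses, for illustration only:
print(f"{total_absorption(2400.0, 2474.4):.1f}%")  # -> 3.1%, the order of the 2-cycle water value
```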
Density
The results of the density of SCC specimens exposed to wetting and drying cycles in water and oil products are shown in Figure (13). The results indicate a slight increase in density as the number of wetting and drying cycles in water increases. This can be attributed to the continuous hydration process of the cement paste, which forms new hydration products within the SCC mass and thereby increases the bond between cement paste and aggregate [29,14]. From the results, it can be concluded that there is no significant effect on the density of SCC exposed to 2, 3, 4, and 5 wetting and drying cycles in the different oil products; however, as the exposure increases to six cycles, the density decreases slightly relative to the reference specimen (exposed to water for the same number of cycles). SCC specimens exposed to six wetting and drying cycles in crude oil and gas oil show the highest reductions in density (1.42% and 1.1%) compared to the reference. This may be due to the slight deterioration of the surface of specimens exposed to the oil products [17].
Conclusions
This research presents an experimental study of the properties of SCC exposed to oil products for different numbers of cycles. The following conclusions are drawn from the test results:
1. The compressive strength, splitting tensile strength, modulus of rupture, and static modulus of elasticity of SCC specimens exposed to wetting and drying cycles in water increased as the number of cycles increased. The percentage of increase is between 1.77-16.52% for compressive strength, 2.18-6.54% for splitting tensile strength, 1.09-6.86% for modulus of rupture, and 9.50-18.96% for static modulus of elasticity, for 2-6 cycles respectively.
2. The compressive strength, splitting tensile strength, modulus of rupture, and static modulus of elasticity of SCC specimens exposed to wetting and drying cycles in all oil products generally decreased in comparison with the reference specimens (exposed to water for the same number of cycles). Specimens exposed to crude oil and gas oil show a significant decrease as the number of wetting and drying cycles increases to six. The decrease ratio is about 26.13% and 18.7% for compressive strength, 24% and 14.85% for splitting tensile strength, 15.2% and 7.46% for modulus of rupture, and 27.6% and 19.6% for static modulus of elasticity, respectively, relative to the reference (exposed to water for the same number of cycles).
3. Continuous exposure to oil products is more severe than wetting and drying exposure cycles with respect to the compressive strength, splitting tensile strength, modulus of rupture, and static modulus of elasticity of SCC specimens.
4. The total absorption of SCC specimens exposed to wetting and drying cycles in water decreases as the number of cycles increases. Water absorption is about 3.1%, 2.7%, 2.5%, 2.46%, and 1.9% for 2, 3, 4, 5, and 6 wetting and drying cycles, respectively, while the total absorption of SCC specimens exposed to wetting and drying cycles in oil products is lower than that of specimens exposed to wetting and drying cycles in water.
5. The density of SCC increases slightly as the number of wetting and drying cycles in water increases. There is no significant effect on the density of SCC exposed to 2, 3, 4, and 5 wetting and drying cycles in the different oil products, but the density decreases slightly as exposure increases to six cycles, relative to SCC exposed to water for the same number of cycles. | 2019-09-11T02:02:49.259Z | 2019-08-16T00:00:00.000 | {
"year": 2019,
"sha1": "3da40d573cfa2490d2e94117a016682d49b8413e",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/579/1/012047",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "b0fa9377aa098be55391dece8a011ceb4b4471bf",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
240039449 | pes2o/s2orc | v3-fos-license | Comparison of the Chemical and Sensorial Evaluation of Dark Chocolate Bars
: As it mimics olfactory perception, headspace analysis is frequently used for examination of products like chocolate, in which aroma is a key feature. Chemical analysis by itself, however, only provides half the picture, as final consumer’s perception cannot be compared to that of a Gas Chromatography-Mass Spectrometry (GC-MS) port, but rather to a panel test assessment. The aim of the present study was the evaluation of combined chemical (by means of headspace solid-phase microextraction and GC-MS) and panel test data (by means of a sensory evaluation operated by 6 untrained panelists) obtained for 24 dark chocolate bars to assess whether these can discriminate between bars from different brands belonging to different commercial segments (hard discount, HD; supermarket, SM; organic bars, BIO). In all samples, with the only exception of one supermarket bar (in which esters exhibited the highest relative abundance), pyrazines were detected as the most abundant chemical class (HD: 56.3–74.2%; BIO: 52.0–76.4%; SM: 31.2–88.9%). Non-terpene alcohols, aldehydes, and esters followed as quantitatively relevant groups of compounds. The obtained data was then subjected to hierarchical cluster (HCA) and principal component (PCA) analysis. The statistical distribution of samples obtained for the chemical data did not match that obtained with panelists’ sensorial data. Moreover, although an overall ability of grouping samples of the same commercial origin was evidenced for hard discount and supermarket bars, no sharp grouping was possible.
Introduction
Flavor is undoubtedly the most important chocolate attribute in consumers' experience. As a complex combination of taste, aroma, and chemesthetic perceptions originating from the mouth and nasal area, many parameters contribute to its development [1,2]. The quality of the starting raw material is of the utmost importance; cocoa varieties are, indeed, characterized by enormous genetic diversity, which ultimately confers peculiar aromatic notes on the different cultivars, classified based on their overall quality as "bulk" and "fine aroma" beans [3,4]. The cocoa market for chocolate mass production is dominated by "bulk" cocoa, since these Theobroma cacao L. varieties are more resilient to diseases and have larger yields. However, in recent years, especially in central Europe, chocolate is increasingly consumed as a "gourmet" food, with higher quality standards demanded by the average consumer market as well [5]. In general, consumers tend to perceive cocoa beans coming from a single geographical region as an indication of higher product quality, but many chocolate manufacturers consider this misleading, since the terroir varies greatly within a single country as well [5].
The subsequent manufacturing phases of the primary and secondary processing have a great influence on the quality of the final product. The flavor precursors are, indeed, developed during the primary processing phases (pulp management, fermentation, and drying).
For headspace sampling, the samples were individually put in 4 mL glass vials (up to 1/3 of their total volume), which were then covered with aluminum foil and left to equilibrate at room temperature for 30 min. A Supelco SPME (Solid-Phase Microextraction) device coated with polydimethylsiloxane (PDMS, 100 µm) was used to sample the headspace of the samples. SPME sampling was performed using the same new fiber, preconditioned according to the manufacturer's instructions, for all the analyses. Sampling was accomplished in an air-conditioned room (22 ± 1 °C) to guarantee a stable temperature. After 30 min of equilibration time, the fiber was exposed to the headspace for 30 min at room temperature. Both the equilibration and sampling times were experimentally determined to obtain an optimal adsorption of the volatiles, and to avoid both under- and over-saturation of the fiber and of the mass spectrometer ion trap. Once sampling was finished, the fiber was withdrawn into the needle and transferred to the injection port of the GC-MS system. The desorption conditions were identical for all the samples (indicated in Section 2.3). Furthermore, blanks were performed before each first SPME extraction and randomly repeated during each series. Quantitative comparisons of relative peak areas were performed between the same chemicals in the different samples.
Gas Chromatography-Mass Spectrometry (GC-MS) Analyses and Peaks Identification
As reported in Ascrizzi et al. [4], the GC/EI-MS analyses were performed with a Varian CP-3800 apparatus equipped with a DB-5 capillary column (30 m × 0.25 mm i.d., film thickness 0.25 µm) and a Varian Saturn 2000 ion-trap mass detector. The oven temperature was programmed to rise from 60 °C to 240 °C at 3 °C/min; the injector temperature was 220 °C, the transfer-line temperature 240 °C, and the carrier gas He (1 mL/min). The acquisition parameters were as follows: full scan; scan range: 35-300 m/z; scan time: 1.0 s; threshold: 1 count.
The identification of the constituents was based on the comparison of their retention times (t_R) with those of pure reference samples and on their linear retention indices (LRIs), determined relative to the t_R values of a series of n-alkanes. The mass spectra were compared with those listed in the commercial libraries NIST 14 and ADAMS and in a home-made mass-spectral library, built up from MS literature [18,19] combined with data experimentally obtained from pure substances and commercial essential oils of known composition.
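For temperature-programmed GC runs such as this one, linear retention indices are conventionally computed with the van den Dool-Kratz equation, LRI = 100 * (n + (t_x - t_n) / (t_(n+1) - t_n)). The short sketch below illustrates the calculation; the retention-time values are hypothetical and the function name is ours, not from the paper.

```python
def linear_retention_index(t_x, t_n, t_n1, n):
    """van den Dool-Kratz linear retention index for temperature-programmed GC.

    t_x  : retention time of the analyte
    t_n  : retention time of the n-alkane eluting just before the analyte
    t_n1 : retention time of the n-alkane eluting just after the analyte
    n    : carbon number of the earlier-eluting n-alkane
    """
    return 100 * (n + (t_x - t_n) / (t_n1 - t_n))

# Hypothetical example: an analyte eluting between C9 (9.80 min) and C10 (12.10 min)
print(round(linear_retention_index(10.95, 9.80, 12.10, 9)))  # -> 950
```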
Panel Test
The organoleptic chocolate bar profiles were evaluated by a panel of 6 "naive assessors" (3 women, 3 men), aged 20 to 35 years, selected by a panel leader. The sensory evaluation was set up before the tasting session, leading to a guided assessment (Table 1) that included all five senses, developed by merging, with slight modifications, two lists of attributes previously reported for chocolate tasting sessions [2,20]. A final set of descriptors was presented to the panelists to evaluate, for each sample, whether they applied or not, with a yes/no selection. The tasting was carried out in the morning, in a well-ventilated, quiet room, in a relaxed atmosphere. To avoid cross contamination, different bars were assessed at different moments of the same session by the same group of panelists. For each tasting session, each panelist was provided with 2 chocolate squares (2.5 × 2.5 cm²), without any indication of the sample identity. All samples were presented to the panel in the same size to avoid the impact of chocolate shape on the sensory attributes of the bars [21]. Water, apple cubes, and unflavored salty crackers were provided to the panelists between each tasting.
Statistical Analyses
All statistical analyses were carried out with the JMP Pro 13.2.1 software package (SAS Institute; Cary, NC, USA).
The hierarchical cluster analyses (HCA) were performed using Ward's method on unscaled, normalized data for both the chemical composition of sample complete headspaces and panel test evaluation responses.
Data used for the principal component analysis (PCA) of the complete headspace compositions formed a 97 × 24 matrix (97 individual compounds × 24 samples = 2328 data points), analyzed via its covariance matrix. The chosen PC1 and PC2 explained 62.1% and 13.5% of the variance, respectively, for a total explained variance of 75.6%.
The principal component analysis (PCA) was performed by selecting the two highest-ranked principal components (PCs) obtained from mean-centered, unscaled data; as an unsupervised method, this analysis aimed at reducing the dimensionality of the multivariate data matrix whilst preserving most of the variance [22,23]. Both the HCA and the PCA methods can be applied to observe groups of samples even when there are no reference samples that can be used as a training set to establish the model.
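As an illustration of this workflow, the sketch below runs Ward-linkage HCA and a two-component PCA on a compounds × samples matrix of relative abundances. The array shape mirrors the 97 compounds × 24 samples layout described above (transposed so that samples are rows); the random data are placeholders, and scikit-learn and SciPy are our tooling choices, not stated in the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.random((97, 24))          # placeholder: 97 compounds x 24 samples
X = X.T                           # samples as rows: 24 x 97
X_centered = X - X.mean(axis=0)   # mean-centered, unscaled (no variance scaling)

# Hierarchical cluster analysis with Ward's method
Z = linkage(X_centered, method="ward")
# dendrogram(Z)  # plot with matplotlib to obtain a figure like the paper's Figure 1

# PCA keeping the two highest-variance components
pca = PCA(n_components=2)
scores = pca.fit_transform(X_centered)   # 24 x 2 score-plot coordinates
loadings = pca.components_.T             # 97 x 2 loading-plot coordinates
print(pca.explained_variance_ratio_)     # fraction of variance explained per PC
```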
Headspace Compositions
The headspace (HS) compositions of all the hard discount (HD), organic (BIO), and supermarket (SM) samples are reported in Tables 2-4, respectively. Pyrazines were the most represented chemical class of volatile organic compounds (VOCs) in all the HD samples, ranging from a minimum of 56.3% in HD3 to a maximum of 74.2% in HD4. Pyrazines were detected as the most abundant VOC chemical class in the BIO samples as well, where they accounted for 52.0% and 76.4% in BIO1 and BIO2, respectively. With the only exception of sample SM13, pyrazines were detected as the most abundant chemical class of VOCs in all SM samples, too, accounting for up to almost 90% in SM10. In the SM4 headspace (HS), however, their relative content was only slightly higher (31.2%) than that of oxygenated sesquiterpenes (28.8%). Within this chemical class, tetramethylpyrazine and 2,3,5-trimethylpyrazine were the most quantitatively relevant in all samples. Pyrazines are heterocyclic volatiles produced during the Maillard reaction, whose aroma contribution is of the utmost importance in chocolate, as they are responsible for its typical pleasant flavor [1,24,25]. Tetramethylpyrazine was found to be the most abundant compound in all HSs, with relative abundances always higher than 35%. Its odor is described as reminiscent of coffee, with a green and roasted aroma [2,12,15]. 2,3,5-Trimethylpyrazine followed; its aroma contribution is similar to that of tetramethylpyrazine, but with a slightly earthier, roasted-nut-like flavor [15,26,27].
Non-terpene esters followed as the second most abundant VOC chemical class in samples HD3 to HD8, with relative abundances varying between 9.4% and 22.1% in HD7 and HD8, respectively. Their quantitative importance was also evidenced for the BIO (33.3% in BIO1, 8.8% in BIO2) and SM (from 2.6% in SM5 up to 36.8% in SM13) samples, although with great variation depending on the bar brand. Among them, ethyl acetate and 2-phenylethyl acetate were the most represented. The pyrazine decrements in SM13 and BIO1 were coupled with an increment in non-terpene esters, in particular ethyl acetate. Ethyl and acetate esters are indeed reported as important VOCs involved in chocolate aroma [12,13]. In SM4, instead, their decrement was coupled with an increment in sesquiterpene hydrocarbons, which confer green and spicy notes on the product [28].
Among the HD bars, the HD1 HS demonstrated the highest relative content of non-terpene aldehydes, of which nonanal was the most quantitatively important. These compounds also exhibited a quantitatively relevant presence in SM4 and SM12, where they reached up to 18.1% and 13.8%, respectively; the former was mainly rich in tetradecanal, while pentanal reached 13.5% in the latter. In the BIO bars, they were detected with a 2.5% relative abundance in BIO1, while in BIO2 they only accounted for 0.3%. Fruity and flowery notes are associated with aliphatic volatile aldehydes [15], among which the most abundant detected in the present study were nonanal and pentanal. The former is associated with citrus-like aroma notes [29], while the latter is more pungent and bitter [1]. A citrus peel-like flavor is also described for tetradecanal [29], which was only detected in sample SM4, but with a significant relative abundance. Benzaldehyde followed among the most detected aldehydes in this study, with its highest relative concentration registered in sample HD1; it is listed among the undesirable notes, as it is pungent and bitter [11,12,14,15].
Non-terpene alcohols, another typical chemical class of VOCs found in chocolate headspaces, were identified in relevant relative concentrations in all samples (HD: 1.9-10.6%; BIO: 2.7-4.2%; SM: 1.4-8.5%). Among them, phenylethyl alcohol and 2,3-butanediol were the most represented. The presence of non-terpene alcohols is commonly reported as a desirable attribute of chocolate aroma, as they are responsible for pleasant flowery [1,4,9,11] and sweet [11] notes. Among those detected in the present study, 2,3-butanediol, whose highest relative abundances were identified in samples SM12 and SM14, and HD3, HD5, and HD7, confers cocoa butter aroma notes [4,14,30]. Phenylethyl alcohol is reported as a pleasant odor in chocolate samples, reminiscent of honey [11]: its highest relative quantities were found in samples SM13 and HD2. Dodecanol, whose presence was detected only in sample HD2, is described as a delicate floral note when diluted, but it turns into an unpleasant aroma when present at significant levels [29].
(E)-Anethole was the most represented phenylpropanoid compound detected, varying from samples in which it was not detected at all to headspaces in which its presence was quantitatively relevant (6.2% in HD3; 6.9% in SM13).
In the HD HSs, relative terpene concentrations over 5% were only found for HD2 and HD8; in the former, monoterpene hydrocarbons, represented only by limonene, reached up to 7.2%, while in the latter sesquiterpene hydrocarbons prevailed, with β-patchoulene as the most represented. Among the terpenes, (E)-dehydroapofarnesol and epi-cedrol, two oxygenated sesquiterpenes undetected in all the other SM samples, were detected at relative abundances of up to 19.4% and 9.4%, respectively, in SM4.
Unsurprisingly, all the samples demonstrated volatile emissions mainly composed of pyrazines, with tetramethylpyrazine as the main compound in all HSs. Moreover, esters, especially acetates, followed as expected, being the second most abundant chemical group of volatile compounds commonly reported in chocolate products. The other compounds demonstrated an overall higher degree of variation between the samples, especially in quantitative terms. The typical aroma of a chocolate product is, indeed, the result of a complex interaction between the quality of the starting material and the subsequent processing phases, which add a "matrix effect" to the release dynamics of the very same volatile compounds, thus conferring a unique aroma bouquet on each bar.
Statistical Analysis of Chemical Data
The hierarchical cluster analysis (HCA) dendrogram is reported in Figure 1. Two main macro-groups were identified by the HCA; the first one (red group, Figure 1) mainly composed of SM bars, and a larger, more varied one composed of two subgroups (green and blue, Figure 1). Although no completely sharp distribution was evidenced among the samples, an overall grouping of bars sharing the same commercial origin was visible for SM and HD samples. BIO bars were not clustered very close; this could be due to the high variability induced by the raw starting material and processing method, as well as to their smaller sample size.
The score and loading plots of the principal component analysis (PCA) are reported in Figure 2. Half of the HD bars (HD4, 5, 6, and 8) were plotted in the upper left quadrant (PC1 < 0, PC2 > 0) of the PCA score plot (Figure 2, left); this positioning was due to their higher content in tetramethylpyrazine, whose vector was sharply directed towards the left side of the loadings plot (Figure 2, right). The other 4 HD bars were all distributed in the right quadrants (PC1 > 0), between the bottom area of the upper quadrant (PC2 > 0) and the upper area of the bottom quadrant (PC2 < 0); compared to the other HD bars, their compositions were richer in nonanal, whose vector pointed towards the right quadrant of the loadings plot (Figure 2, right). Samples previously grouped by HCA in the first macro-cluster (samples in red, Figure 1) were all plotted in the right (PC1 > 0) score plot quadrants, where only a few samples of the other macro-cluster were positioned, in the bottom right quadrant (PC2 < 0). All other samples from the second HCA macro-cluster were grouped quite close to each other towards the central area of the left quadrants (PC1 < 0), mainly due to the tetramethylpyrazine vector (Figure 2, right).
Samples plotted quite distant from all others were BIO1 and SM13 in the upper right quadrant (PC1 and PC2 > 0), due to their higher relative content in ethyl acetate (Figure 2, right), and SM4 in the bottom right quadrant (PC1 > 0, PC2 < 0), whose position was due to the tetradecanal vector (Figure 2, right).
The VOC emissions, used as a tool to evidence the overall distribution of samples by statistical means, could not sharply divide samples based on their commercial origin; however, they demonstrated a general ability to group SM and HD bars. The 2 BIO samples, however, were clustered and plotted as two very distant samples; a larger difference in raw material quality and/or processing technology might thus be hypothesized for these samples compared to their SM and HD counterparts. However, the size of the BIO group is too small to draw definitive conclusions on the matter. In accordance with the results that emerged from the HS analyses, each bar, no matter its commercial segment of origin (HD, SM, or BIO), displayed a unique profile, based on all the phases involved in the development of an aroma bouquet as complex as that of chocolate. The overall aroma profiles of all samples were, indeed, composed of the expected main chemical groups (pyrazines and esters). The secondary chemical classes, such as aldehydes, ketones, and alcohols, displayed higher variability among the samples, but without evidencing any pattern that might be attributed to the commercial segment of the product. This is quite interesting, as the commercial origin of chocolate might influence consumers' preference towards bars perceived as having a higher quality (i.e., organic and supermarket bars over the hard discount ones).
Panel Test
The average score for each descriptor, as perceived by the six untrained panelists for all the samples, is reported in Table 5. Each descriptor had a yes/no response; thus, the scores in Table 5 were calculated by counting how many panelists had attributed each descriptor to the bars of the same commercial origin (hard discount, HD; organic, BIO; supermarket, SM), averaged over the number of bars from each provenance (8 HD, 2 BIO, 14 SM).
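A minimal sketch of this aggregation step is shown below, assuming the raw responses are stored as one row per panelist × bar with 0/1 descriptor columns; the column and descriptor names are invented for illustration, and pandas is our tooling choice, not stated in the paper.

```python
import pandas as pd

# Hypothetical raw responses: one row per (panelist, bar), 1 = descriptor applies
df = pd.DataFrame({
    "bar":      ["HD1", "HD1", "SM1", "SM1", "BIO1", "BIO1"],
    "segment":  ["HD",  "HD",  "SM",  "SM",  "BIO",  "BIO"],
    "panelist": [1, 2, 1, 2, 1, 2],
    "fruity":   [1, 0, 1, 1, 0, 1],
    "bitter":   [0, 1, 1, 0, 1, 1],
})

descriptors = ["fruity", "bitter"]
# Count, per bar, how many panelists attributed each descriptor...
per_bar = df.groupby(["segment", "bar"])[descriptors].sum()
# ...then average over the bars of each commercial segment
per_segment = per_bar.groupby("segment").mean()
print(per_segment)
```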
The brown color intensity was rated higher for the HD and SM bars, while the BIO bars were perceived as lighter in color. The HD bars displayed the highest gloss rating, as well as the highest number of samples with air bubbles, while the SM and BIO bars were described as matte in appearance and showed the highest average scores for the presence of stripes (the "sugar bloom" effect) on the bar surfaces.
The appearance of chocolate is an important hedonic parameter in determining consumers' preference. The color intensity is mainly determined by the tempering regime; under-tempered chocolate, in particular, develops a lighter color as fat blooming creates fat crystals that scatter the light, resulting in a paler appearance [31]. Among the analyzed chocolates, the organic bars exhibited a lighter shade (medium brown) compared to the dark brown described by the panelists for the HD and SM bars. However, the oily descriptor for the in-mouth perception scored higher in the HD bars. The gloss parameter is also influenced by the tempering process, together with the particle size; a higher degree of fat blooming in under-tempered chocolates, as well as an increase in particle size, reduces the desirable glossy appearance of the final product [31]. Among the visual markers of poor chocolate production, the HD bars exhibited the highest number of samples with air bubbles, indicating that the molds into which the tempered chocolate was poured were too cold [32]. The BIO and SM bars, instead, scored higher average values for surface stripes, signs of the "sugar bloom" effect, which occurs when either pods are stored in places that are too humid or an intermediate product is transferred too rapidly from low to high temperatures; this ultimately causes water to reach the surface, where it dissolves sugar and then evaporates, leaving a white appearance due to the remaining sugar crystals [33]. The organic and supermarket bars were reported as having harder and crunchier snap sounds, while lighter and more mellow sounds were described for the HD bars. The lighter and more mellow sound perceived by the panelists when breaking the HD bars, compared to the harder notes reported for the organic and supermarket bars, might indicate a higher fat content, and especially a higher degree of unsaturation, since this parameter correlates positively with unpleasant softness and lack of snap in the final product [34]; on the contrary, a good, high-pitched snap at ambient conditions is a desirable character in chocolate [2].
The reported odor attributes were mainly fruity for all bars; in the BIO and SM bars the caramel-like odor followed, while for the HD bars the herbaceous odor was the second most attributed descriptor overall. The aroma, perceived in the retro-nasal area once the bars were put in the mouth, was quite consistent with their odor perception; high scores for fruity aroma notes were described for all bars, but the BIO bars scored higher on the nutty descriptor, while the caramel-like contribution was attributed more to the HD and SM bars. The negative odor and aroma attributes of dairy and animal-like notes were, overall, perceived more in the HD bars. The fruity notes perceived in all bars can be attributed to the quantitatively relevant presence, in all samples, of 2-phenylethyl acetate, whose aroma contribution is reported as pleasantly sweet and fruity [1,11,12]. The more intense herbaceous odor perceived by the panelists in the HD bars could be ascribed to their overall higher relative content of 2,5-dimethylpyrazine, whose olfactory contribution is described as a "green" note [4,35]. The HD bars were also richer in nonanal, a non-terpene aldehyde to which their higher scores in the dairy and animal-like aroma and odor notes can be attributed, since its contribution is reported as "fatty" [29]. In both odor and aroma perception, the nutty descriptor was mainly attributed to the HD bars; this might be due to their overall higher relative content of benzaldehyde [36]. Aldehydes, in general, are released faster during mastication, as they have less interaction with the oral mucosa [37].
The texture of the chocolate matrix is a result of its particle distribution and size; solid particles over 35 µm are, indeed, detected by the tongue during mastication, causing a rough and gritty in-mouth perception [2]. Most of the chocolate bars were perceived as smooth on the lips; the textural perception then turned velvety once the product reached the tongue, due to the fat melting triggered by the higher in-mouth temperature. Moreover, the majority of the bars exhibited delayed in-mouth melting.
The bitter taste attribute prevailed in the panelists' descriptions of most of the bars, especially the BIO samples. The sweet descriptor followed, and it was mainly attributed to the HD bars. The in-mouth sensation after mastication is an after-taste perception that persists even after swallowing. The astringency perceived by the panelists when tasting the BIO bars is a sharp and, to a certain degree, pleasant sensation, related to the origin of the cocoa used [2,4,38]; however, if it is too strong and turns sour, it might be an indication of over-fermentation of the beans [2].
The two in-mouth sensations most attributed to BIO bars were astringent and oily; for HD and SM bars the warm and oily in-mouth sensations prevailed.
Overall, the HD bars scored the highest number of negative characteristics, whose presence might be attributed to the use of a higher fat content. Their appearance was, indeed, glossier, and the in-mouth oily feeling was higher in these bars, which leads us to hypothesize that the tempering phase was conducted with higher fat concentrations. This was also supported by the lighter and more mellow snap sound and by the higher scores of perceived negative aroma attributes (dairy and animal notes). The BIO bars were characterized more by a higher degree of acidic notes and, thus, astringency, and also by a lighter color, which might indicate that those bars were under-tempered. The SM bars displayed intermediate characteristics, with an overall predominance of a sweet taste and fruity aroma.
Statistical Analysis of Panel Test Data
The dendrogram of the hierarchical cluster analysis (HCA) performed on the panel test data is reported in Figure 3. Two macro-groups were distinguished by the HCA: the first one comprising two sub-groups (red and green samples, Figure 3), and the second one comprising only one group of bars (blue samples, Figure 3). Although no completely sharp classification of the bars based on their commercial origin was evidenced by the panel test evaluation, SM and HD bars were, overall, grouped close to samples of similar origin. For the BIO bars, as previously noted for the chemical analysis, the low sample number might have played a role in their completely different clustering, as evidenced in the HCA performed on the chemical composition of their headspaces as well. As for the chemical data, the statistical analyses performed on the panel test results could not provide a strict distribution of samples based on their commercial origin, although a general grouping of SM and HD samples was obtained.
Conclusions
Chemical data obtained by the HS-SPME analysis of chocolate bars represent a valuable tool to evaluate the emitted odor-active compounds; among these, unpleasant odors are especially useful to assess evident flaws due to the starting raw material and/or the production process. Although a general ability of these data to group samples, by statistical means, based on their commercial origin was evidenced for both SM and HD bars, the clustering and plotting were not able to sharply define bar provenance.
The panel test results, whose evaluation includes more parameters, were likewise unable to clearly separate chocolate bars based on their commercial origin; moreover, the sample distribution provided by the hierarchical cluster analysis performed on these data did not match that obtained when evaluating the VOC emission.
Chemical and panel test data evidenced that the commercial origin of a chocolate bar sample does not provide a certain key to assess the quality of the product, although they appeared to point out a low degree of variability among HD and SM samples. | 2021-10-28T15:15:45.976Z | 2021-10-25T00:00:00.000 | {
"year": 2021,
"sha1": "3b0b46504ee25f6a73324b573d38167048851c28",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/11/21/9964/pdf?version=1635166862",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "83550c7bdeef54b94a61775160d647f7497e6a7c",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
26220141 | pes2o/s2orc | v3-fos-license | Nonuniform subduction of the Indian crust beneath the Himalayas
Himalayan tectonic activity is triggered by the downward penetration of the Indian plate beneath the Asian plate. The subsurface geometry of this interaction has not been fully investigated. This study presents novel constraints on this geometry provided by two newly obtained deep seismic reflection profiles. The profiles cover 100- and 60-km transects across the Yarlung-Zangbo suture of the Himalaya-Tibet orogen at c. 88°E. Both profiles show a crustal-scale outline of the subducting Indian crust, which clearly shows the Indian crust underthrusting southern Tibet, but only to a limited degree. When combined with a third seismic reflection profile of the western Himalayas, the new profiles reveal progressive eastward steepening and shortening in the horizontal advance of the subducting Indian crust.
The Himalayan-Tibetan Plateau is currently the world's largest example of an active continent-continent collisional orogen. The plateau formed primarily due to the collision between the Indian and Eurasian tectonic plates after subduction and closure of the intervening Neotethyan Ocean in the past 55 Ma 1 . The plateau rises 4-5 km above sea level and spans 2900 km in an east-west direction (inset in Fig. 1). The plateau is scientifically significant due to its size, elevation, and crustal thickness, which is twice the global average. Due to its position, this feature also exerts a strong influence on the regional climate of South Asia 2 . In addition, the plateau and its formation generate significant regional seismic activity, including the 2008 Mw 7.9 Wenchuan earthquake in eastern Tibet 3 and the 2015 Mw 7.9 Gorkha earthquake (Nepal) in the southern Himalayas 4 . These events caused numerous fatalities and significant property damage.
Over the past few decades, geoscientists have analyzed the India-Eurasia collision using a range of different geological and geophysical tools. This research has generated several different models for explaining the Tibetan Plateau's unusual style of uplift 1,[5][6][7][8] . Models attribute plateau construction to either a doubling of crustal thickness that occurred when the Indian crust had underthrust most of Tibet 5 or to crustal shortening that did not require the presence of the Indian crust beneath the plateau 1 . The extent of the subducted Indian crust beneath the Tibetan Plateau is thus a critical assumption in models seeking to explain plateau history. Previous receiver function studies have provided unequivocal constraints on the northward extent of the downwelling Indian continental lithosphere [8][9][10][11][12][13] . Given the data resolution of these studies however, crustal geometry within the collision zone remains ambiguous. Multichannel seismic reflection profiling experiments in the 1990s recorded lithosphere-scale structure of the Himalayas 14-16 , but the exact geometry of the collisional suture remains poorly constrained. Previous interpretations have been based on single south-north cross-sections and assumed uniform subduction of the Indian crust. It is not certain however that the Indian crust does subduct in a uniform manner across the entire 2900 km-wide orogenic belt. Results from individual seismic profiles integrate consistently with local structural interpretations but have not provided consistent interpretations for crustal geometry at orogen-scale.
In this study, we present two recently acquired, high-resolution, deep seismic reflection profiles that span the Yarlung-Zangbo suture zone of the central Himalayas (Fig. 1a). Emerging data collection and processing methods enabled acquisition of high quality reflection images showing deep subsurface structure. The overall image clearly shows the geometry of the Indian crust beneath Tibet and critical spatial variation in its subducted/underthrust areas.
Two new deep seismic reflection profiles across the Yarlung-Zangbo suture
The two profiles run 120 km in an N-S direction and overlap by ~40 km around the Yarlung-Zangbo suture (Fig. 1a). These transects therefore cover a swath of the Tethyan Himalaya to the south and a broad swath to the north that includes the west-northwest-trending Yarlung-Zangbo suture and the Gangdese magmatic belt (Fig. 1b). The Tethyan Himalaya in the study area consists of thick marine sedimentary and ophiolitic sequences that range from Precambrian to Cretaceous in age (Fig. 1b) 17 . The Mabja metamorphic core complex (Fig. 1b), near the southern terminus of the seismic profiles, has been exposed at the surface by mid-crustal ductile extension initiated at ~35 Ma and lasting ~12-19 m.y. 18 . The profiles interpreted in this report also span the Great Counter Thrust (GCT) and Gangdese Thrust (GT) (Fig. 1b) 19 . Structural field investigations show top-to-the-north displacement along the GCT 18 . Age constraints suggest an early Miocene initiation 18,20-22 for the GCT. The GT bounds the southern margin of the Gangdese magmatic belt (Fig. 1b) and has been constrained at 27-23 Ma by 40Ar/39Ar methods 23 .
Data collection and processing steps followed those described in our previous seismic studies of the western Himalaya 24 . We deployed three types of explosive sources: 50 kg shots in single shot holes at 250 m intervals, 200 kg shots in pairs of shot holes at intervals of 1 km, and 2000 kg shots in clusters of 10 shot holes at 50 m depth at intervals of 50 km. Data were recorded by 600 receiver traces over a 60 s two-way travel time window. This experimental setup provided a nominal 60-fold common mid-point (CMP) stacked section. Processing steps also used Kirchhoff pre-stack time migration (see Methods for details). Figures 2a and 3a show the near-vertical, uninterpreted seismic reflection images for the east and west lines, respectively. These extend to about 30 s two-way time (t.w.t.) (Figs 2a and 3a), or ~90 km depth (assuming an average crustal velocity of 6 km/s). To visualize the overall structure of the imaged area, we traced high-amplitude reflections in the seismic transects (Figs 2b and 3b) and constructed composite line drawings for the east and west lines, shown in Figs 2c and 3c, respectively. Figures 2d and 3d present structural interpretations that integrate the seismic reflection data and previous geological interpretations for the east and west lines, respectively.
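As a quick check on the time-to-depth conversions quoted here, two-way travel time t converts to depth as z = v*t/2 for an average velocity v. A minimal sketch using the 6 km/s average crustal velocity assumed in the text:

```python
def twt_to_depth_km(twt_s: float, v_km_s: float = 6.0) -> float:
    """Convert two-way travel time (s) to depth (km) for a constant average velocity."""
    return v_km_s * twt_s / 2.0

print(twt_to_depth_km(30.0))   # ~90 km, matching the depth quoted for 30 s t.w.t.
print(twt_to_depth_km(14.5))   # ~43.5 km, approximate depth of the 14-15 s reflectors
```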
The two seismic profiles clearly outline the structural geometry of the collision zone (Figs 2c and 3c). The base of the lower crust in both seismic transects appears as a zone of high-amplitude reflectors at depths of ~22-27 s (t.w.t.) (Figs 2c,d and 3c,d). This structure (interpreted as a mantle suture) appears in the single-shot, 2000 kg source datasets from both the east and west lines (Fig. 4a,b, respectively; 2000 kg shots shown as black stars in Fig. 1b).
[Figure 2 caption: (a) raw seismic reflection data collected in the field; the superimposed line drawings in (b) and (c) and the structural interpretation in (d) were drafted by X.G. and X.X. using CorelDRAW X5.]
Upper crustal areas of the seismic images show a series of steep, south-dipping reflectors truncated at their base by two sets of reflections exhibiting a ramp-flat geometry (Figs 2c and 3c). In accordance with previous studies of surface geological features, we interpret these basal, southerly-dipping reflections (0-4 s, t.w.t.) as corresponding to the GCT, which represents thrust emplacement of the Tethyan Himalayas over the Kailas succession 17 (Figs 2d and 3d), and the GT, which marks the southern edge of the Gangdese belt 17 (Fig. 2d). The GCT and GT appear to displace the base of the Yarlung-Zangbo suture complex in a low-angle thrust, giving each element a southerly dip (Figs 2d and 3d). It is noteworthy that, instead of being top-to-the-south, the whole-scale geometry of the GT in the seismic transect appears top-to-the-north, which is opposite to previous findings from surface geological investigations 23 . In addition, surface structural investigations have documented top-to-the-north displacement bounding the ophiolitic mélange 17 , which also appeared in the INDEPTH seismic reflection profile across the Yarlung-Zangbo suture zone 16 . This observation runs contrary to the assumption that faults bordering the exposed suture zone trend parallel to the direction of subduction. The bivergent structure appearing in the upper crust indicates progressive thickening within the accretionary prism and the area to the south. The suture zone within the upper crust's accretionary prism thus steepened and developed into a retrowedge with southward truncation of the GCT and GT at its base.
Along the southern segment of the seismic transect, a group of prominent reflectors appear at depths of 14-15 s (t.w.t.) and extend continuously downward to a depth of 28 s (t.w.t.) (Figs 2a and 3a; see additional review material for higher-resolution seismic images). Comparison with features identified by previous seismic reflection and wide-angle studies of the southern Himalayan Mountain Range 12,14,25 indicates that the laterally extensive reflectors at 14-15 s (t.w.t.) depth represent the Main Himalayan Thrust (MHT).
[Figure 3 caption: (a) raw seismic reflection data collected in the field; the superimposed line drawings in (b) and (c) and the structural interpretation in (d) were drafted by X.G. and X.X. using CorelDRAW X5.]
The Main Central Thrust (MCT), the Main Boundary Thrust (MBT), and the Main Frontal Thrust (MFT) are thought to branch from the MHT, a major regional detachment fault 12,14,25 . An additional set of high-amplitude reflectors (a "bright spot") appears just above the MHT at depths of 6-15 s (t.w.t.) in the Tethyan accretionary complex (TAC) (Figs 2c and 3c). These sub-horizontal reflectors terminate to the north against a set of south-dipping "bright-spot" reflections. This latter feature differs from the overlying crust and the accretionary prism to the north. Given the well-documented crustal-scale duplexing that has transferred material from the lower to the upper plate in the western Himalayas 24 , we interpret this anomalous zone (bright spot) as consisting of fragments of ophiolitic mélange from the Yarlung-Zangbo suture (Figs 2d and 3d), dislocated during crustal-scale duplexing of the subducted Indian crust along the MHT at its base 24 (Figs 2d and 3d).
In northerly areas of both seismic transects (Figs 2a and 3a), the zones beneath 8 s (t.w.t.) in Fig. 2a and beneath 6 s (t.w.t.) in Fig. 3a appear as regions with only a few short-wavelength, south-dipping reflectors. The basal geometry of the Moho and the orientation of its offsets indicate that this zone represents the crystalline basement. The continuous trace of the MHT and prominent Moho reflections (Figs 2c and 3c; see additional review material for higher-resolution seismic images) highlight the geometry of the subducting Indian crust (Figs 2 and 3). The images clearly show the limited extent of Indian crust subduction beneath the Lhasa terrain. Figure 1a (blue stars) shows surface projections of the Moho offsets (mantle suture). The subducting Indian crust extends beneath the Gangdese batholith along the western profile but only advances to a limited extent beyond the northerly Yarlung-Zangbo suture zone along the eastern one. The western profile thus shows greater horizontal advance for the subducting Indian crust than that indicated by the eastern profile (Fig. 1a). The leading edge of the subducting Indian crust within the western profile exhibits a dip angle of about 27°NE (Fig. 3d), while the eastern profile indicates a dip angle of about 45°NE (Fig. 2d). Previous seismic reflection studies of the western Himalaya (solid black line C in Fig. 1a) detected relatively flat Moho reflections and no offset related to continent-continent collision beneath the dominant collision zone 24 . The overall crustal geometry imaged by these seismic reflection profiles (seismic lines A, B, and C in Fig. 1a) affirms lateral variation along the subduction margin and indicates progressive eastward steepening of the down-going Indian crust. Figure 5 presents a sketch showing variation in the regional-scale geometry of the Indian crust as it subducts beneath southern Tibet. The new high-resolution seismic reflection profiles described here reveal that India is underthrusting southern Tibet, but only to a limited degree. When interpreted in combination with a previous profile of the western Himalayas, the new profiles indicate progressive eastward steepening and shortening, as well as only limited horizontal advance, of the subducting Indian crust (Fig. 5). Our high-resolution seismic datasets provide a consistent image of crustal-scale geometry within the collision zone. Along with previous teleseismic receiver function studies 26 and body wave tomographic studies 27 , the new data presented here clearly demonstrate west-to-east spatial variation in the Indian lithosphere subducting beneath southern Tibet.
Discussion
Together with the deep seismic reflection profile in the western Himalayas, the results also outline the limited thickness of Indian crust subduction beneath southern Tibet. We tentatively interpret this observation as evidence of continental material resisting subduction due to crustal buoyancy. This process partly contributed to initiation of the Main Himalayan Thrust, which may act as an intracrustal ductile detachment fault decoupling deformation between the lower crust and the overlying crust during continent-continent collision (Fig. 6a). As a result, the lower crust of the Indian plate was dragged beneath southern Tibet while the rest of the Indian crust peeled off and has experienced top-to-the-south duplexing or exhumation along the Main Himalayan Thrust, contributing to the rapid uplift of the Himalayan orogen 28 (Fig. 6b). Consequently, northward compressive forces increased due to the rapid uplift of the northern Himalayas 28 . The accretionary prism between the northern Himalayas and the Lhasa terrain experienced dramatic shortening and thickening (Fig. 6b). As a result, extensional collapse of the wedge under its own weight drove progressive development from the forewedge to the retrowedge. The ophiolitic suture zone experienced simultaneous reversal during this process and displays top-to-the-north displacement bounding the ophiolitic mélange (Fig. 6c). On the other hand, the contrasting material mechanics of the Tethyan sedimentary sequence and the relatively rigid Gangdese magmatic belt caused their differing responses to regional crustal shortening and the rapid uplift of the northern Himalayas. They specifically experienced differing amounts of shortening, while a detachment fault, the Gangdese thrust system (GT) 17 , developed along the margin between them (Fig. 6b and c). As northward compression continued, the Tethyan sediments and the ophiolitic suture were thrust atop the Gangdese magmatic belt.
Overall, the observations of both the limited extent of the Indian crust and the lateral variation of the subducting Indian lithosphere require revision of interpretations regarding uplift and collapse of the Tibetan Plateau. The observations are specifically not consistent with the doubling of crustal thickness between the Indian crust and the Lhasa terrain invoked to explain the unusual crustal thickness of the Tibetan Plateau. Additionally, the timing and development history of the lateral variation in the Indian lithosphere may contain important information for explaining the sudden eastward plateau-wide collapse since the mid-Miocene. Additional observation and interpretation are needed, however, in order to address this issue.
Methods
Data processing for producing the deep seismic reflection profile. Software modules from the CGG, Omega and GeoEast packages were used in seismic data processing. Processing procedures included static correction, true-amplitude recovery, frequency analysis, filter-parameter tests, surface consistent amplitude corrections, surface consistent deconvolution, coherent noise suppression, random noise attenuation, human-computer interactive velocity analysis, residual statics correction, Kirchhoff time migration for incorporating rugged topography, DMO stack and post-stack filtering to remove noise.
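To make one of these steps concrete, the sketch below applies a zero-phase bandpass filter and trace-amplitude balancing to a synthetic shot gather. This is a generic illustration of pre-stack noise suppression and amplitude correction, not the authors' actual CGG/Omega/GeoEast workflow; the sampling rate, corner frequencies, and random data are all invented.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 250.0                                   # sampling rate in Hz (assumed)
n_traces, n_samples = 600, int(60 * fs)      # 600 traces, 60 s records (as in the text)
rng = np.random.default_rng(0)
gather = rng.standard_normal((n_traces, n_samples))   # placeholder shot gather

# Zero-phase Butterworth bandpass; corner frequencies are illustrative only
sos = butter(4, [5.0, 60.0], btype="bandpass", fs=fs, output="sos")
filtered = sosfiltfilt(sos, gather, axis=1)

# Simple trace-by-trace amplitude balancing: scale each trace by its RMS
rms = np.sqrt(np.mean(filtered**2, axis=1, keepdims=True))
balanced = filtered / np.where(rms > 0.0, rms, 1.0)
```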
After fully analyzing and testing results of the raw data, targeted techniques were used to resolve several imaging problems. Tomographic static correction without ray tracing and multi-reflector interface residual statics were used to resolve static problems caused by irregular topographic relief and the low-velocity structure of near-surface layers. Pre-stack multi-domain processing was combined with noise attenuation techniques to suppress a range of noise sources. Human-computer interactive velocity analysis methods provided relatively accurate RMS (root-mean-square) velocity estimates. Treatment of rugged topographic areas with Kirchhoff time migration improved the quality of the seismic images. Four large dynamite shot gathers (charge ≥ 2000 kg) with high signal-to-noise ratios were processed to generate a single-fold profile. This profile revealed the main subsurface structures of the study area (Fig. 4 in the main text). The final migrated profile contained abundant reflection events from the surface to the Moho discontinuity and provided a consistent image of the collision zone's complex subsurface structure. | 2018-04-03T02:37:38.666Z | 2017-10-02T00:00:00.000 | {
"year": 2017,
"sha1": "d554ce1e2bceb0c9c4a73271baabd908554c2271",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-017-12908-0.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d554ce1e2bceb0c9c4a73271baabd908554c2271",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"Geology",
"Medicine"
]
} |
198932739 | pes2o/s2orc | v3-fos-license | Association of metabolic syndrome with the incidence of hearing loss: A national population-based study
Background & aims Sensorineural hearing loss (HL) is one of the most common public health problems, and its prevalence increases with increasing life expectancy. An association between HL and metabolic syndrome (MetS) is suspected. Although previous epidemiological studies have investigated the association between the two variables, there have been conflicting conclusions. Therefore, we aimed to evaluate the association between the presence of MetS—and individual components of MetS—and HL, using a longitudinal design and a large-scale population. Methods A total of 17,513,555 individuals who underwent national health screening between January 2009 and December 2010 were identified. Subject data from the Korean Health Insurance Review and Assessment Service were reviewed. A total of 11,457,931 subjects were ultimately included in the analysis. Baseline comorbidities were defined according to the ICD-10 code from the Korean Health Insurance Review and Assessment Service data. If the participants had an ICD-10 code for HL during the follow-up, they were defined as having incident HL. Criteria for MetS adhered to the revised National Cholesterol Education Program Adult Treatment Panel III. Results There were 7,574,432 subjects without MetS and 3,883,499 with MetS. The incidence of HL in subjects without MetS and with MetS was 1.3% and 1.8% at 1 year, 4.1% and 5.2% at 3 years, and 6.8% and 8.6% at 5 years, respectively (P < 0.001). However, multivariate analyses revealed a negative association. Analyses according to the components of MetS demonstrated a positive association for those associated with dyslipidemia; however, the others exhibited an inverse association with HL. We also performed analyses using 4 groups according to the presence of MetS and the components of dyslipidemia. Univariate analysis revealed a positive association between the presence of MetS and HL; however, multivariate analysis revealed a positive association between the presence of dyslipidemia components and HL, regardless of the presence of MetS. Conclusion Among the components of MetS, the association between low HDL or high TG levels and HL was most apparent. It is useful to evaluate each MetS component in isolation, such as the presence of low HDL or high TG levels, rather than the presence of MetS as a cluster of components.
Results
There were 7,574,432 subjects without MetS and 3,883,499 with MetS. The incidence of HL in subjects without MetS and with MetS was 1.3% and 1.8% at 1 year, 4.1% and 5.2% at 3 years, and 6.8% and 8.6% at 5 years, respectively (P < 0.001). However, multivariate analyses revealed a negative association. Analyses according to the components of MetS demonstrated a positive association for those associated with dyslipidemia; however, the others exhibited an inverse association with HL. We also performed analyses using 4 groups according to the presence of MetS and the components of dyslipidemia.
Introduction
Sensorineural hearing loss (HL) is one of the most common public health problems, and its prevalence increases with increasing life expectancy. Results from the Global Burden of Disease Study revealed that the prevalence of HL increased from 14.33% in 1990 to 18.06% in 2015 [1]. In addition, the study reported that HL was ranked fifth highest for years lived with disability in both developed and developing countries [2]. Age, excessive noise, and ear diseases are established risk factors for HL [3]. However, recent studies have demonstrated the association of HL with various cardiovascular and/or metabolic disorders that would cause injury to the nerves or vessels within the cochlea [4]. The World Health Organization has reported increasing trends in aging populations and in the prevalence of various cardiovascular and/or metabolic diseases, which are projected to be associated with further increases in the prevalence of HL [3]. Metabolic syndrome (MetS), also known as syndrome X, is the term attributed to a cluster of 5 medical conditions: large waist circumference (WC); high blood pressure (BP); high fasting blood glucose (FBG); high triglyceride (TG) levels; and low levels of high-density lipoprotein (HDL) [5]. Individuals with ≥ 3 of the 5 components are usually diagnosed with MetS. The presence of MetS is closely associated with cardiovascular and/or metabolic diseases including diabetes mellitus (DM), hypertension (HTN), coronary artery disease, and stroke.
Many researchers have suggested that there are significant associations between cardiometabolic disturbances and HL or MetS. Accordingly, an association between HL and MetS is also suspected. Although previous epidemiological studies have investigated the association between the two variables, they have reached conflicting conclusions [6-12]. Most studies investigating the association, however, have been cross-sectional in design. Therefore, we aimed to evaluate the association of HL with the presence of MetS, as well as with individual components of MetS, using a longitudinal design and a large-scale population.
Subjects
A total of 17,513,555 individuals who underwent national health screening between January 2009 and December 2010 were identified (Fig 1). Subject data from the Korean Health Insurance Review and Assessment Service (KHIRAS) were reviewed. Our study involved the analysis of two databases, namely national health screening records collected between January 2009 and December 2010 and KHIRAS records collected between January 2008 and December 2016. We first identified the baseline characteristics of the participants, including the presence or absence of MetS, MetS components, and comorbidities, from the national health screening data. Therefore, the presence or absence of MetS was evaluated at baseline alone. This database did not include data regarding ICD-10 codes, hearing thresholds, or questionnaires for HL. We subsequently merged the data with KHIRAS. KHIRAS did not include data for hearing thresholds, but included all ICD-10 codes accompanied by the dates when the relevant diagnoses were made during the follow-up period. Comorbidities were identified by reviewing International Classification of Diseases, Tenth Revision (ICD-10) codes during the year before the time of health screening. Among all subjects, those < 40 years of age (n = 4,789,159), those with insufficient data (n = 317,181), and those with an ICD-10 code for HL during the previous year, before or at the time of health screening, were excluded. A total of 11,457,931 subjects were ultimately included in the analysis. This study was not a controlled study, and the frequency of follow-up was not defined. In our study, the next follow-up point of a participant was defined as the point of a hospital visit, either for admission or to an outpatient department. The follow-up interval and length of follow-up were variable; however, the last follow-up date could be identified. The study was approved by the Institutional Review Board (IRB) of Kyungpook National University Hospital (Daegu, South Korea; IRB No 2017-11-014-001). All personal identifiers were deleted prior to analysis. Therefore, the board waived the need for informed consent. The study was conducted in accordance with the principles that have their origin in the Declaration of Helsinki.
Baseline comorbidities were defined according to ICD-10 codes from the Korean Health Insurance Review and Assessment Service data. We defined DM as an FBG level ≥ 126 mg/dL or ICD-10 codes E11-14, and HTN as systolic BP ≥ 140 mmHg, diastolic BP ≥ 90 mmHg, or ICD-10 codes I10-13 or I15. Further, cerebrovascular accident and heart disease were defined based on medical history; chronic kidney disease (CKD) was defined as eGFR < 60 mL/min/1.73 m².
In Korea, ICD-10 codes are usually assigned by a medical doctor. Therefore, HL was defined using the ICD-10 codes for sensorineural HL (H90.3-8, H91.1, and H91.8-9), and if a subject had HL during the year before health screening, the subject was excluded from the analysis. Subjects were followed from health screening to December 2016. If a participant had an ICD-10 code for HL during the follow-up, they were defined as having incident HL.
Criteria for MetS adhered to the revised National Cholesterol Education Program Adult Treatment Panel III [5]. The 5 components of MetS were defined as follows: WC > 90 cm for men and > 80 cm for women; FBG ≥ 100 mg/dL or a diagnosis of DM; BP ≥ 130/85 mmHg or a diagnosis of HTN; TG ≥ 150 mg/dL; and HDL < 40 mg/dL for men and < 50 mg/dL for women. MetS was considered present in subjects with ≥ 3 of the 5 components.
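A minimal sketch of this classification rule, transcribing the five cut-offs above directly into code (the function and argument names are ours, for illustration only):

```python
def count_mets_components(sex, wc_cm, fbg_mg_dl, sbp, dbp, tg_mg_dl, hdl_mg_dl,
                          has_dm=False, has_htn=False):
    """Count revised NCEP ATP III components; MetS is present when >= 3 apply."""
    components = [
        wc_cm > (90 if sex == "M" else 80),           # large waist circumference
        fbg_mg_dl >= 100 or has_dm,                   # high fasting blood glucose
        sbp >= 130 or dbp >= 85 or has_htn,           # high blood pressure
        tg_mg_dl >= 150,                              # high triglycerides
        hdl_mg_dl < (40 if sex == "M" else 50),       # low HDL cholesterol
    ]
    return sum(components)

def has_mets(**kwargs):
    return count_mets_components(**kwargs) >= 3

# Example: a man with 3 positive components (WC, FBG, TG) is classified as having MetS
print(has_mets(sex="M", wc_cm=95, fbg_mg_dl=105, sbp=125, dbp=80,
               tg_mg_dl=160, hdl_mg_dl=45))  # True
```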
Statistical analysis
The data were analyzed using SAS version 9.4 (SAS, Cary, NC, USA). Categorical data are expressed as number (percentage), and continuous data are expressed as mean ± standard deviation. The distributions of continuous variables were evaluated according to the Kolmogorov-Smirnov criteria. Because the distribution of TG values was skewed, log-transformed values were used in this study. Distributions of categorical data were analyzed using the chi-squared test, and the mean values of continuous data were compared using Student's t-test. The incidence of HL was compared using Kaplan-Meier curve and Cox regression analyses; P < 0.05 was considered statistically significant. Model 1 was adjusted for age and sex; model 2 was adjusted for age, sex, smoking habits, alcohol intake, exercise, and low income; model 3 was adjusted for age, sex, smoking habits, alcohol intake, exercise, low income, and BMI; and, finally, model 4 was adjusted for age, sex, smoking habits, alcohol intake, exercise, low income, BMI, and the presence of ear disease.
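For readers who want to reproduce this kind of model, the sketch below fits a Cox proportional hazards model and Kaplan-Meier curves with the lifelines package, using model 2's adjustment set as an example. The data frame and its column names are hypothetical, and lifelines is our tooling choice, not the SAS software actually used in the study.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

# Hypothetical cohort table: one row per subject, with columns
# follow_up_years, incident_hl (0/1), mets (0/1), age, sex,
# smoking, alcohol, exercise, low_income
df = pd.read_csv("cohort.csv")  # placeholder path

# Model 2: adjusted for age, sex, smoking, alcohol, exercise, and low income
cph = CoxPHFitter()
cph.fit(
    df[["follow_up_years", "incident_hl", "mets", "age", "sex",
        "smoking", "alcohol", "exercise", "low_income"]],
    duration_col="follow_up_years",
    event_col="incident_hl",
)
cph.print_summary()  # hazard ratios with 95% CIs for each covariate

# Kaplan-Meier curves by MetS status
km = KaplanMeierFitter()
for label, grp in df.groupby("mets"):
    km.fit(grp["follow_up_years"], grp["incident_hl"], label=f"MetS={label}")
    km.plot_survival_function()
```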
Baseline characteristics
There were 7,574,432 subjects without MetS and 3,883,499 with MetS. The mean age of the subjects without MetS was 52.3 ± 10.0 years, and that of those with MetS was 58.1 ± 10.6 years (Table 1). Follow-up periods for participants without MetS and those with MetS were 6.5 ± 1.4 and 6.4 ± 1.6 years, respectively. The prevalence of comorbidities, including DM, HTN, dyslipidemia, cerebrovascular accident, heart disease, CKD, and ear disease, was greater in those with MetS than in those without MetS. Cardio-metabolic parameters, including BMI, eGFR, FBG, SBP, DBP, total cholesterol, HDL-C, LDL, and log(TG), were greater in those with MetS than in those without MetS. Risks for HL according to each MetS component are presented in Table 3: subjects with the WC, BP, or FBG components demonstrated a lower risk for HL compared with those without these components, whereas those with the TG or HDL components had a higher risk for HL.
Subgroup analyses for identifying an association between MetS and HL
Subgroup analyses according to sex and age were performed. Model 4 revealed statistically significant results for men, and for women 40-64 years of age (S1 Table): there was an inverse association between the two variables in men, and a positive association in women 40-64 years of age. The associations between the number of MetS components and HL were also analyzed (S2 Table). Model 4 revealed that men with 1, 2, 3, or 4 MetS components exhibited an inverse association with HL compared with men without any MetS components. A positive association between the two variables was shown in women 40-64 years of age, but not in women ≥ 65 years of age. The presence of the large WC component was inversely associated with the risk for HL in men ≥ 65 years of age and in women of all ages (S3 Table). The presence of the high BP or high FBG component was inversely associated with the risk for HL in men and women of all ages. The presence of the high TG or low HDL component was positively associated with the risk for HL in men and women of all ages.
Association between dyslipidemia components and HL
Kaplan-Meier curve analysis revealed that the presence of MetS was associated with a high incidence of HL, regardless of the presence of high TG or low HDL levels (S1 Fig). For subjects with high HDL and low TG levels and without MetS, the incidence of HL was 1.3% at 1 year, 4.1% at 3 years, and 6.8% at 5 years. For subjects with low HDL or high TG levels and without MetS, the incidence of HL was 1.3% at 1 year, 4.1% at 3 years, and 6.8% at 5 years. For subjects with high HDL and low TG levels and with MetS, the incidence of HL was 1.7% at 1 year, 5.2% at 3 years, and 8.4% at 5 years. For subjects with low HDL or high TG levels and with MetS, HL incidence was 1.8% at 1 year, 5.3% at 3 years, and 8.6% at 5 years. However, multivariate analysis revealed that subjects without MetS, and with high TG or low HDL levels, exhibited the highest risk for the incidence of HL (S6 Table). Those with MetS but without high TG and low HDL levels demonstrated the lowest risk for the incidence of HL. The presence of MetS had an inverse association with HL; however, the presence of high TG or low HDL levels demonstrated a stronger positive association with HL, regardless of the presence of MetS.
Discussion
The present study was a longitudinal investigation using a large-scale population, aimed at identifying the association between MetS and HL. First, we analyzed the association between the two variables using all participants, and multivariate analyses revealed a negative association. Analyses according to the components of MetS demonstrated a positive association for the dyslipidemia-related components, whereas the others exhibited an inverse association with HL. Second, we performed subgroup analyses according to age, sex, and the presence of ear disease. There was a positive association between the presence of MetS and HL in women aged 40-64 years, and an inverse association between the two variables in men of all ages. Analyses according to MetS components yielded similar results for all subjects. We also performed analyses using four groups defined by the presence of MetS and of the dyslipidemia components. Univariate analysis revealed a positive association between the presence of MetS and HL; however, multivariate analysis revealed a positive association between the presence of dyslipidemia components and HL, regardless of the presence of MetS.
Although some previous studies have reported a negative association between the presence of MetS and HL, most have demonstrated a positive association between the two variables. Most studies, however, used a cross-sectional design. Some studies analyzed the association between individual MetS components and HL [7,9,11,12]. Han et al. reported a positive association between WC, HDL, or FBG and HL, while other studies reported a positive association with FBG only, or with HDL only [7,9,12]. Results regarding the association between individual MetS components and HL have thus been inconsistent. Our results demonstrated a negative association between the presence of MetS and HL. The two dyslipidemia components among the five MetS components were associated with the incidence of HL, similar to the results reported by Sun et al. [7]; the other components were inversely associated with the incidence of HL. WC, FBG, and BP are closely associated with DM or HTN, and epidemiological studies investigating the association between DM or HTN and HL have reported inconsistent results [13][14][15][16]. The inverse association found in our study may be consistent with these investigations. We suggest that individuals with these conditions are more likely than others to receive regular medical support, and may take various medications that are protective or hazardous with respect to HL. These factors may underlie the inverse associations between WC, FBG, or BP and the incidence of HL.
Some clinical studies have reported an association between dyslipidemia and HL [12,17]. Previous research involving guinea pigs fed a lipid-rich diet showed vacuolar edema and degeneration of the stria vascularis as possible pathognomonic changes in dyslipidemia [18,19]. A decrease in nitric oxide production and an increase in reactive oxygen species levels caused by dyslipidemia can induce hearing impairment [20][21][22]. In addition, previous research has demonstrated that HDL has anti-inflammatory, anti-oxidant, and anti-apoptotic effects, which may attenuate the pathological changes induced by dyslipidemia [23,24]. These data suggest that dyslipidemia is associated with the development of HL. Our study revealed no association between MetS, defined as a group, and HL, although this may reflect conflicting effects: a hazardous effect of dyslipidemia and a protective effect of the other components.
Nevertheless, our study had some inherent drawbacks that should be addressed, the first of which was its retrospective design. Second, comorbidities such as DM, HTN, heart disease, and cerebrovascular accident were defined using ICD-10 codes or based on simple histories or laboratory findings collected at baseline. We also did not collect data regarding hearing thresholds, and HL was defined simply by ICD-10 code. Although the ICD-10 code was assigned by a medical doctor, the basis for the diagnosis could range widely, from a simple patient complaint to a thorough evaluation for HL. In addition, we did not collect data regarding subjects' medications or drug classes. Third, our data did not include noise exposure, occupation, or history of ear disease. HL is associated with excessive noise exposure and/or a history of ear disease, which are important risk factors. We attempted to mitigate these problems by performing subgroup analyses and multivariate analyses including ear disease.
Although our study had some important limitations, it was based on a population that was virtually all ≥ 40 years of age. In addition, our study design was longitudinal, with a median follow-up duration of 6.4-6.5 years, which may have partially overcome these limitations. Further well-designed, prospective longitudinal studies are, however, needed to evaluate the association between the presence of MetS, or individual MetS components, and HL.
In conclusion, the results of our study suggest that MetS does not demonstrate a definite positive association with the incidence of HL. Among the components of MetS, the association between low HDL or high TG levels and HL was most apparent. It may therefore be more useful to evaluate each MetS component in isolation, such as the presence of low HDL or high TG levels, rather than the presence of MetS as a cluster of components. | 2019-07-28T13:03:20.738Z | 2019-07-26T00:00:00.000 | {
"year": 2019,
"sha1": "b910dc0db72942c4ce2fb09be61312670c268dd2",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0220370&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b910dc0db72942c4ce2fb09be61312670c268dd2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
231662301 | pes2o/s2orc | v3-fos-license | Angstrom-wide conductive channels in black phosphorus by Cu intercalation
Intercalation is an effective method to improve and modulate the properties of two-dimensional materials. Even so, spatially controlled intercalation at the atomic scale, which is important for introducing and modulating properties, has not been achieved owing to difficulties in controlling the diffusion of intercalants. Here, we show the formation of angstrom-wide conductive channels (~4.3 Å) in black phosphorus by Cu intercalation. The atomic structure, resultant microstructural effects, intercalation mechanism, and local variations of the electronic properties modulated in black phosphorus by Cu intercalation were investigated extensively by transmission electron microscopy, including in situ observation, DFT calculations, and conductive atomic force microscopy.
Owing to the anisotropic atomic structure of BP, Cu atoms intercalate in BP along the zigzag direction, occupying angstrom-wide spaces, with Cu intercalated regions sandwiched between pristine BP regions. Using in situ experimental techniques combined with theoretical calculations, we reveal the intercalation process in detail with respect to atomic structure and microstructural modifications such as strain and deformation, and we elucidate the mechanism of intercalation.
Conductive atomic force microscopy results point to localized differences in electronic properties, with Cu intercalated regions having enhanced conductivity and showing semimetallic behavior. Our findings shed light on the fundamental relationship between microstructural changes and properties in intercalated 2D materials. Our study thus provides a new way to tailor the properties of anisotropic 2D materials at the angstrom scale.
Main Atomic structure of angstrom-wide Cu intercalated black phosphorus
Two-dimensional (2D) black phosphorus (BP) has an anisotropically puckered structure in which the structural organization is distinctly different along the zigzag and armchair directions [10][11][12]. We anticipated that this structural anisotropy of BP would profoundly influence intercalation. To intercalate Cu into BP, Cu was first deposited on BP, followed by a heat treatment to diffuse the deposited Cu into the BP matrix (Fig. 1a). The Cu intercalated BP sample was imaged by high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM). Fig. 1b and c show, respectively, the plan view and cross-sectional view of Cu intercalated BP, where we identify Cu intercalated regions by the vertical white lines in the two figures; the higher intensity of the HAADF-STEM image in these regions is due to the higher atomic number of Cu. The white lines indicate that Cu atoms are anisotropically intercalated along the zigzag direction of BP.
Apart from atomic resolution HAADF-STEM analysis, energy dispersive X-ray spectroscopy (EDS) and density functional theory (DFT) calculations were used to further elucidate the atomic structure of Cu intercalated BP (Fig. 2). First, the Cu intercalated BP structure was analyzed along the four different zone axes of BP (as shown in the reconstructed image in Fig. 2a), after which the corresponding atomic resolution EDS maps were obtained. The calculated structures show an undulating morphology, although undulations could intermittently be observed in some experimental images too. We attribute this difference to the following: in our DFT calculations, all boundaries were allowed to move freely to obtain a fully relaxed structure. Under real conditions, however, most intercalated regions cannot fully relax and hence show a straight rigid morphology in HAADF images, unlike the calculated structures. In the unrelaxed structure, we expect induced local strain in the vicinity of the intercalated regions. We discuss this strain effect further in the following section.
In the Cu intercalated structure, we see periodically missing Cu atoms along the [101] zone axis of BP (Fig. 2c); every fifth Cu atom is missing, and the column of missing Cu makes an angle of ~73° with respect to the BP layer (Fig. 2g). Besides the every-fifth-Cu periodicity, other missing-atom periodicities were also occasionally observed. The coexistence of different periodicities indicates that Cu intercalated BP cannot be considered a compound with a fixed atomic ratio of the two constituent elements. To explain the above observation, we calculated Cu binding energies for several different periodicities of missing Cu; the results show that the structure with every fifth Cu atom missing is energetically more favorable than the others.
Microstructural effects of Cu intercalation in black phosphorus: strain, deformation and atomic configurations in single crystalline black phosphorus
In the previous section on the structure of Cu intercalated BP, the presence of strain was inferred by comparing the undulating and straight morphologies of the calculated and experimental images of Cu intercalated structures (Fig. 2c, d). The Cu intercalated structure has larger atomic distances than pristine BP, the increase being ~6.65% in the zigzag direction and ~4.06% in the interlayer distance (Fig. 3a). Since the BP matrix is constrained, this expansion cannot be accommodated by a full relaxation of the structure, resulting in a straight rigid morphology in contrast to the undulating morphology derived from DFT calculations based on a fully relaxed structure (Fig. 2c, d). The strain induced by the unrelaxed Cu intercalated structure could be readily identified in and around the Cu intercalated regions (Fig. 3a), where the BP layers appear to fluctuate and the atomic structure is not clearly resolved.
Interestingly, it turns out that the strain can be relaxed to some extent by the formation of kinks in the Cu intercalated structure. Thus, fluctuations and a less clearly resolved atomic structure of BP are predominantly observed in areas preceding the formation of a kink (the area just above the kink point in Fig. 3a) rather than after the kink (below the kink point in Fig. 3a). Consequently, as the density of Cu intercalation increases, kinks are observed more frequently, since strain readily accumulates in high-density regions of the Cu intercalated structure and is then relaxed. These observations demonstrate that Cu intercalation induces strain in BP and that this strain can be relaxed by the formation of kinks in the Cu intercalated regions.
In addition to structural strain, we found that Cu intercalation of BP introduced a layer
Rate-limiting Cu intercalation mechanism: Top-down intercalation
To elucidate the mechanism of Cu intercalation in BP, we investigated truncated Cu intercalated structures in which intercalation was interrupted before reaching the bottom of the BP flake (Fig. 4a and b). Two types of truncated structures were seen: in the first, two Cu atoms are fully intercalated at the end of the truncated structure (Fig. 4a); in the second, there is one additional intercalated Cu atom in the partially intercalated next layer compared with the structure in Fig. 4a (Fig. 4b). By repeating these two steps, Cu atoms can be intercalated from the top to the bottom of the BP crystal. In addition, as shown in Fig. 4c, Cu intercalation that does not contact the edges of the BP flake was also frequently observed. This is a further indication that top-down intercalation is the dominant mechanism for Cu intercalation in BP.
We calculated the energy barriers for Cu diffusion along the zigzag and armchair directions of a BP layer, and for crossing a BP layer, by DFT calculations. The energy barriers for Cu diffusion along the armchair and zigzag directions (0.18 eV and 0.38 eV, respectively) are much lower than that for crossing a BP layer (1.78 eV), although the latter energy barrier is comparable to that observed for vacancy-assisted top-down intercalation of MoS2 [13]. When strain was taken into account, the calculated diffusion barrier for crossing a BP layer was reduced (1.48 eV) (Fig. 4d-f), indicating that top-down intercalation of Cu is more favorable near an intercalated region than in a pristine BP region. We carried out in situ HAADF-STEM to experimentally investigate the mechanism of Cu intercalation (Fig. 4g, h). The intercalation rate follows an Arrhenius relation, $\nu = \nu_0 \exp(-E_a / k_B T)$, where $\nu$ is the rate of intercalation, $\nu_0$ is a constant, $k_B$ is the Boltzmann constant and $T$ is the temperature (Fig. 4h). The estimated activation energy (1.24 eV) agrees well with the calculated energy barrier for Cu diffusion crossing the BP layers (1.48 eV), validating the proposed top-down mechanism of intercalation. This result implies that the overall intercalation process is limited by the diffusion of Cu atoms crossing BP layers, and we therefore conclude that top-down Cu intercalation is the rate-limiting process.
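As a minimal illustration of how the activation energy is extracted (the standard Arrhenius analysis; the two-temperature form below is our own rearrangement, not the authors' fitting procedure): taking the logarithm of the rate gives

  \ln \nu = \ln \nu_0 - \frac{E_a}{k_B} \cdot \frac{1}{T}

so E_a follows from the slope of ln ν versus 1/T; equivalently, from rates measured at two temperatures,

  E_a = k_B \, \frac{\ln(\nu_1 / \nu_2)}{1/T_2 - 1/T_1}.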
Angstrom-wide conductive channels by Cu intercalation in black phosphorus
According to previous reports, doping with a transition metal such as Cu in BP generally causes a Fermi level shift without significantly changing the electronic band structure [14,15].
These results were obtained, however, by assuming a random distribution of the doped metal atoms in BP, resulting in a low estimate of the metal atom concentration in the BP matrix. Our work shows that Cu atoms intercalate periodically into BP; we therefore expect the electronic band structure to be significantly changed in the Cu intercalated regions. Fig. 5b-c show the calculated density of states (DOS) and electronic band structure of Cu intercalated BP. Semimetallic behavior is clearly evident in Cu intercalated BP, in contrast to the semiconducting nature of pristine BP.
To experimentally investigate local variations in the electronic properties of Cu intercalated BP, we performed conductive atomic force microscopy (C-AFM) (Fig. 5d and e). The Cu intercalated regions were easily identified in the morphology images, since Cu intercalation leads to a deformation-induced height difference, as already noted in our microstructure studies (Fig. 2f and Fig. 3b). In the current-mapping image, higher conductivities were measured at the Cu intercalated regions than in the pristine areas of BP (Fig. 5d), and this was observed repeatedly in all Cu intercalated regions. Comparing the I-V curves at the Cu intercalated and pristine regions also confirmed the higher conductivity at the Cu intercalated regions (Fig. 5e). The locally increased conductivity agrees with the calculated electronic properties, in which semimetallic behavior appears in the Cu intercalated regions while pristine BP remains semiconducting. These results demonstrate that angstrom-wide conductive channels can be formed in anisotropic 2D materials by a simple intercalation method.
In this study, we reveal that Cu atoms are anisotropically and periodically intercalated in BP | 2021-01-22T02:15:46.324Z | 2021-01-21T00:00:00.000 | {
"year": 2021,
"sha1": "006a2c4eb7774b2f59f535febcfacf79a54c2d84",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2101.08447",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "006a2c4eb7774b2f59f535febcfacf79a54c2d84",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
4981703 | pes2o/s2orc | v3-fos-license | The complete replicons of 16 Ensifer meliloti strains offer insights into intra- and inter-replicon gene transfer, transposon-associated loci, and repeat elements
Ensifer meliloti (formerly Rhizobium meliloti and Sinorhizobium meliloti) is a model bacterium for understanding legume–rhizobial symbioses. The tripartite genome of E. meliloti consists of a chromosome, pSymA and pSymB, and in some instances strain-specific accessory plasmids. The majority of previous sequencing studies have relied on assemblies generated from short-read sequencing, which leads to gaps and assembly errors. Here we used PacBio-based long-read assemblies and were able to assemble, de novo, complete circular replicons. In this study, we sequenced, de novo assembled and analysed 10 E. meliloti strains. Sequence comparisons were also done with data from six previously published genomes. We identified genome differences between the replicons, including mol% G+C and gene content, nucleotide repeats, and transposon-associated loci. Additionally, genomic rearrangements both within and between replicons were identified, providing insight into evolutionary processes at the structural level. There were few cases of inter-replicon transfer of core genes between the main replicons. Accessory plasmids were more similar to pSymA than to either pSymB or the chromosome with respect to gene content, transposon content and G+C content. In our population, the accessory plasmids appeared to share an open genome with pSymA, which contains many nodulation- and nitrogen fixation-related genes. This may explain previous observations that horizontal gene transfer has a greater effect on the content of pSymA than of pSymB or the chromosome, and why some rhizobia show unstable nodulation phenotypes on legume hosts.
INTRODUCTION
Rhizobia are an important group of bacteria because of the symbioses they form with legume plants. These bacteria provide the plant with fixed nitrogen by converting atmospheric N2 into a plant-usable form. In exchange, the plant provides carbon to the rhizobia located within root or stem nodules, thereby supporting greater bacterial growth and reproduction. The Medicago truncatula-Ensifer meliloti (Sinorhizobium meliloti) symbiosis is a model system for understanding the genetic basis and evolution of rhizobial-legume symbioses and the N2-fixation process [1]. E. meliloti strains contain more than one large replicon, a feature shared by roughly 10% of assayed bacterial species [2]. In E. meliloti, large non-chromosomal replicons are referred to as megaplasmids or symbiotic (Sym) plasmids [2].
The E. meliloti reference genome, Rm1021, has a chromosome and two megaplasmids, pSymA and pSymB, but no other small accessory plasmids [1]. Previous work has shown that genes involved in similar functions tend to be concentrated on particular replicons: pSymA contains genes playing essential roles in symbiosis, including nodule formation and symbiotic nitrogen fixation [1]; pSymB contains a large proportion of genes involved in import/export functions [3]; and the chromosome contains most of the housekeeping genes [4][5][6]. These replicon-specific gene functions have been hypothesized to be the result of the initial acquisition of plasmids followed by horizontal gene transfer events [6]. Some strains also contain smaller accessory plasmids [7], some of which have been shown to affect nodulation and metabolic potential [8,9].
Previous studies have shown that the three main replicons of E. meliloti have distinct evolutionary histories [10], and inter-replicon differences can be seen in: (1) levels of standing nucleotide variation [10], (2) effects of purifying and positive selection [5,11], (3) the proportion of duplicated and horizontally transferred genes [11], and (4) structural rearrangements and core gene content [5]. These studies have relied primarily on short-read sequencing (mainly Illumina HiSeq or MiSeq platforms) and a limited number of complete genomes. However, it is well known that mapping short reads back to a single reference genome biases assembly and downstream analyses toward what was found in the reference genome, and thus provides limited insight into large structural variation, genes missing from the reference genome, intra-replicon gene movement and the quantification of repeated sequences [12,13].
Fully assembled, reference-quality genomes generated from long-read technologies, such as Pacific Biosciences (PacBio) sequencing [14], allow for better assignment of genome rearrangements, repetitive sequences, gene content and the evolutionary histories of populations. This enables characterization of transposon-associated loci (TALs), genes encoding proteins that may mediate the transposition and duplication of DNA within the genome [15], and repeat elements (REs), sequences of DNA repeated one or more times within a single genome. These features are difficult to assemble and differentiate using short-read data. TALs and REs have been inferred to play roles in gene movement among bacterial lineages [15], and presumably facilitate the movement of genic regions between replicons [16].
Accessory plasmids are small replicons present in some, but not all, E. meliloti strains [7,9,13], and some have been identified as facilitating important biological roles such as metabolic potential, host incompatibility and nodulation competitiveness [9]. Relatively little is known about the origin or evolution of accessory plasmids in Ensifer, although they are generally thought to be transient components (easily gained or lost) of the species pan-genome. The limited data on E. meliloti accessory plasmids suggest that genes found on these plasmids are similar to those found on pSymA or pSymB. However, no extensive genomic analyses of multiple plasmids have been done [but see 7,13,17], and reference-quality genomes can provide data for such analyses.
Here we describe 16 reference-quality E. meliloti genome sequences, 10 of which were newly sequenced in this study using high-coverage PacBio data. The fully assembled genome sequences were used to characterize: (1) the diversity of gene content, TALs and REs found in E. meliloti, (2) gene transfer events between replicons, and (3) genomic composition and relationships among E. meliloti accessory plasmids. Our research shows that there is benefit in analysing E. meliloti in a replicon-independent manner, and describes how E. meliloti can gain and lose genes in a manner consistent with maintenance of overall genome stability and functionality.
IMPACT STATEMENT

This article provides evidence that the three main replicons of Ensifer meliloti have distinct gene content and gene patterns that are maintained even as strains genetically diverge. The symbiotically important replicon, pSymA, appears to preferentially recombine or exchange genes with the accessory plasmids. Some accessory plasmids were previously shown to be involved in host specificity. This suggests a mechanism for gaining or losing genes involved in symbiosis and host selection. The replication protein for the accessory plasmids, RepA, is found in numerous other bacterial species and may have been acquired by plasmid transfer from other soil microbiota. Our results reveal that some structural rearrangements of replicons in E. meliloti are common, but that gene translocation between replicons is relatively rare, or selected against. Intra-replicon gene transfer is associated with repeat elements, but not transposon-associated loci. The gene transfer events that occurred between the accessory plasmids and pSymA in E. meliloti demonstrate one mechanism by which the Ensifer-legume symbiosis is constantly evolving. Overall, and despite what has been found in other rhizobia, E. meliloti has a fairly stable genome structure on an evolutionary timescale.

METHODS

Genome sequencing, assembly and annotation

Genomic DNA from the 10 E. meliloti strains was isolated using UltraClean Microbial DNA Isolation Kits from MoBio Laboratories. Strains were previously obtained [18] or newly acquired from the USDA culture collection. Cultures were grown at 30 °C in TY medium [19]. Genomic sequence data were generated using a Pacific Biosciences (PacBio) RS II sequencer at the Mayo Clinic, with one PacBio single molecule real time (SMRT) cell per strain. Genomes were assembled using HGAP version 3.0 [20]. Each genome was assembled multiple times, adjusting the predicted size of the genome during each subsequent assembly run. The genomes were circularized and replicons were confirmed using gepard version 1.3.1 [21] (coverage range, 33.3-153.6; read range, 20,333-100,711; N50 range, 17,362-20,321; average read length range, 10,826-20,341). The assembled genomes were individually polished with Pilon 1.16 [22], using Illumina reads from previous studies that were mapped to the PacBio assemblies using BWA version 0.7.12-r1039 with the 'mem' algorithm [23]. Files were converted from SAM to BAM files using samtools view 1.3 [24]. Pilon was run with the required arguments plus '--changes --fix "bases"'. The complete commands for running BWA, samtools and Pilon on the M2 genome are available at: https://github.com/jguhlin/pacbio-paper-code/blob/master/pilon-M2-genome/runpilon.sh. Base pair changes during the polishing stage for each assembly ranged from 4 to 624 (<0.01% of the genome) [22]. The protein-coding genes of each assembled genome were predicted using Prodigal with no specialized parameters [25]. Replicon names were assigned based on sequence similarity to the reference strain E. meliloti Rm2011 [26]. The six previously sequenced strains were imported from NCBI, and all genomes were also independently assembled and annotated using MaGe and the MicroScope platform (http://www.genoscope.cns.fr/agc/microscope/home/index.php). Annotations are also available at: https://github.com/jguhlin/pacbio-paper-code/tree/master/gene-prediction.
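In outline, the polishing step proceeds along the following lines; this is a minimal sketch with placeholder file names (assembly.fasta, reads_R1/R2.fastq.gz) and thread/memory settings, not the exact commands from the repository linked above:

  # Map Illumina reads to the PacBio assembly with the 'mem' algorithm
  bwa index assembly.fasta
  bwa mem -t 8 assembly.fasta reads_R1.fastq.gz reads_R2.fastq.gz > aln.sam
  # Convert to BAM, sort and index
  samtools view -b aln.sam | samtools sort -o aln.sorted.bam -
  samtools index aln.sorted.bam
  # Polish base calls only, recording the changes made
  java -Xmx16G -jar pilon.jar --genome assembly.fasta --frags aln.sorted.bam \
    --output polished --changes --fix bases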
Identification of syntenic regions, core and pan genomes

Synteny analysis was performed using NUCMER from the MUMMER package (version 3.1) [27]; plots and downstream analyses used custom code. Core and pan genomes were generated by performing an all-vs-all BLAST+ comparison of the predicted protein sequences and clustering based on BLAST+ bitscores using MCL with an inflation value of 10.0; because the strains belong to a single species, orthology-based approaches are not appropriate for this analysis [28,29]. Identification of single-copy core genes was performed using an ODG database and a Cypher query, available at the GitHub repository referenced below, with additional analysis using custom code [30]. Custom code, commands and scripts are available at: http://github.com/jguhlin/pacbio-paper-code. The 'Pan/Core-Genome' and 'Gene Phyloprofile' tools of the MicroScope platform were also used to identify similar genes between bacterial strains and individual replicons, with thresholds set at the recommended MicroScope specifications of 80% amino acid identity and 80% alignment coverage [31,32]. The MicroScope protein-coding gene annotations were used to generate core-genome and core-replicon gene content. Inter-replicon gene movements were defined as genes present in the core genome but absent from the core replicon of the chromosome, pSymA or pSymB.
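A minimal sketch of the clustering step, assuming the predicted proteins from all strains are concatenated in a placeholder file all_proteins.faa; the e-value cutoff here is our own placeholder, and only the inflation value of 10.0 is taken from the text:

  # All-vs-all protein comparison with BLAST+
  makeblastdb -in all_proteins.faa -dbtype prot
  blastp -query all_proteins.faa -db all_proteins.faa -num_threads 8 \
    -evalue 1e-5 -outfmt "6 qseqid sseqid bitscore" > allvall.abc
  # Markov clustering of the bitscore graph; a high inflation value (10.0)
  # yields fine-grained clusters appropriate for a single-species sample
  mcl allvall.abc --abc -I 10.0 -o gene_clusters.txt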
Identification of transposon-associated elements and repetitive elements
To predict TALs, de novo gene prediction was performed on each genome using Prodigal v2.6.3, exporting coding sequences, peptide sequences and a GFFv3 file detailing predicted genes; the specific commands used are available at: https://gist.github.com/jguhlin/67811311c36e35b0c1ac2ef772c129cb [25]. Functional predictions were generated via orthologous functional prediction with eggNOG v4.5.1, matched using HMMER v3.1b2 [33,34]. TALs were identified based on matching one or more eggNOG-based annotations associated with transposable elements, and were subsequently clustered using MCL v14-137 based on sequence-similarity bit scores determined by BLAST+ [28,29,35].
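Sketched concretely (the HMM database name and keyword list below are illustrative assumptions, not the authors' exact resources):

  # De novo gene prediction: nucleotide CDS, peptides and GFFv3 output
  prodigal -i genome.fasta -d genes.fna -a proteins.faa -f gff -o genes.gff
  # Profile-HMM search of predicted proteins against eggNOG models
  hmmscan --cpu 8 --tblout eggnog_hits.tbl eggnog_v4.5.1.hmm proteins.faa
  # Flag hits whose annotation matches transposon-associated terms
  grep -iE 'transposase|integrase|recombinase|phage integrase|IS family' \
    eggnog_hits.tbl > tal_candidates.tbl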
Repeats and repeated elements were identified using NUCMER from the MUMMER version 3.23 package [27], as follows. The total genomic content of each strain was compared against itself (using the command 'nucmer -p ID-vs-ID -maxmatch -nosimplify ID.fasta ID.fasta', where ID is the strain ID), and the sequences of matches were extracted and combined with the matches from all strains. NUCMER was subsequently used to compare all repeats to each other, and MCL was used to cluster repeats based on pairwise sequence-match coverage [35]. Repeats were aligned using MAFFT (with arguments --maxiterate 1000 --localpair --adjustdirection), and poor alignments were trimmed or removed using trimAl (-resoverlap 0.6 -seqoverlap 60). Repeats were then re-aligned with MAFFT using the same arguments [36,37]. HMM profiles were built from these alignments using HMMER [34], and the genome of each strain was scanned for repeat content using nhmmscan with an e-value cutoff of 0.0001 [34]. Regions matching multiple HMM profiles were assigned to the best match, by length, identity percentage or score, in that order. Principal component analyses (PCAs) were performed on both the repetitive element and the transposon-associated element content of each replicon in each strain [38].
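The per-family profile construction can be sketched as follows for one strain and one repeat family (file names are placeholders, and the extraction, pooling and clustering between the two stages is summarized in a comment rather than shown in full):

  # Self-comparison of one genome to find internal repeats
  nucmer -p ID-vs-ID -maxmatch -nosimplify ID.fasta ID.fasta
  show-coords -rcl ID-vs-ID.delta > ID_repeats.coords
  # (extract matched sequences, pool across strains, cluster into families with MCL)
  # Align one family, trim poorly aligned sequences, build a nucleotide HMM
  mafft --maxiterate 1000 --localpair --adjustdirection family.fasta > family.aln
  trimal -in family.aln -out family.trim.aln -resoverlap 0.6 -seqoverlap 60
  hmmbuild family.hmm family.trim.aln
  # Press all family profiles together and scan each genome for repeat content
  hmmpress repeat_profiles.hmm
  nhmmscan --tblout ID_repeat_hits.tbl -E 0.0001 repeat_profiles.hmm ID.fasta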
Mantel test
Correlations among pairwise genetic distance matrices were tested for each replicon. A genetic distance matrix for each replicon was constructed from the concatenated alignments of single-copy core genes using the dist.dna function of the ape package in R [39], under the TN93 model of evolution [40]. A Mantel test [41], implemented in the ade4 package [39] with 10,000 permutations, was used to calculate the correlation between distance matrices from different replicons and to test for significance.
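For reference, the statistic computed is the standardized Mantel correlation over the off-diagonal entries of the two matrices (a textbook definition, not the authors' notation); with d strain pairs,

  r_M = \frac{1}{d-1} \sum_{i<j} \left( \frac{x_{ij} - \bar{x}}{s_x} \right) \left( \frac{y_{ij} - \bar{y}}{s_y} \right)

where x_{ij} and y_{ij} are the TN93 distances between strains i and j on the two replicons, and significance is assessed by recomputing r_M after each of 10,000 random permutations of the strain labels of one matrix.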
RESULTS

Replicon overview
All replicons in the 10 new strains sequenced in this study were de novo and completely assembled, and fully circularized, using both PacBio and Illumina sequencing data. Our analyses were supplemented with six previously published complete genomes [1,13,26,49,50]. The total sizes of genomes from these 16 strains ranged from 6.68 to 7.27 Mb (Tables 1 and S1, available in the online version of this article). The strains were obtained from geographically diverse regions of the USA, Europe, Australia and the Middle East (Table S1). Each genome contained three main replicons, a chromosome and the two megaplasmids pSymA and pSymB, and in many cases from one to three accessory plasmids. The chromosomes had an average G+C content of 62.72 mol%, similar to pSymB, which had an average of 62.40 mol% (pairwise t-test for difference in G+C content, P = 0.34 after Bonferroni correction). In contrast, replicon pSymA had a substantially lower G+C content, 60.31 mol% (t-test, P < 0.001 compared to the chromosome and pSymB), similar to the 59.18 mol% average of the accessory plasmids. The accessory plasmids showed great variance in G+C content, with s²(accessory) = 0.888, s²(pSymA) = 0.020, s²(pSymB) = 0.119 and s²(main) = 0.0052, perhaps reflecting their diverse origins.
General core genome and replicon-specific core genomes

The core genome of E. meliloti was defined as the intersection of all gene families (clusters) found in each of our assayed strains. We clustered genes based on predicted protein similarity from strains AK83, SM11, BL225c, GR4, RM2011, RM1021, HM006, KH35c, KH46c, M270, RM41, T073, USDA1106, USDA1157, M162 and USDA1021. The core genome of our sample comprised 4315 gene clusters (Fig. 1). We also performed this analysis for individual replicons, identifying 2472 core gene clusters for the chromosome, 308 for pSymA and 1242 for pSymB (Figs S1-S3). The core genome of the chromosome segments into three distinct patterns due to the inclusion of strains M270 and USDA1021. The pattern resulting from M270 is probably due, for the most part, to a deletion in the chromosome; the chromosome of M270 is the second smallest in our population, at 3.5 Mbp. USDA1021 contains a translocation from the chromosome to pSymB, as seen in Fig. 2(b). Fig. S2 segments into two patterns because of a likely split of the pSymA plasmid in strain M162. Because of the large translocations present in strains M162 and USDA1021 (Fig. 2), a separate analysis of core genome structure was done excluding these two strains. For this slightly smaller population, 4389 core gene clusters were identified, accounting for 69-75% of all genes in the 16 strains (Table S2).
Core genes specific to the chromosome (2734 genes) accounted for 79-87% of the total genes on each chromosome. Core genes on pSymA (536 genes) accounted for 40-53%, and those on pSymB (1255 genes) for 85-90%, of the total genes on each replicon (Table S2). The percentages of genes that were part of the core differed significantly between replicons: the chromosome and pSymA (pairwise t-tests, Bonferroni-corrected: P < 2 × 10⁻¹⁶), the chromosome and pSymB (P = 0.023), and pSymA and pSymB (P < 2 × 10⁻¹⁶).
Inter-replicon gene movement
The complete genome assemblies allowed for the identification of inter-replicon gene movement that would not have been identifiable using short-read sequencing, due in large part to gaps, potentially unplaced contigs, and contigs misplaced because of insertion sequence (IS) elements.
To identify inter-replicon gene movement, the gene content of each replicon was compared to that of all strains via BLAST. Based on the analysis of single-copy core genes, 102 gene translocation events were identified, primarily from pSymB to pSymA (62 genes) and from the chromosome to pSymB (33 genes). In all cases, the genes were found on an equivalent replicon in 15 strains and on a different replicon in a single strain. In addition, there was one case in which a gene moved from pSymA to pSymB in one strain, but moved to an accessory plasmid in another strain.
Two large sequence translocation events were identified, in E. meliloti strains M162 and USDA1021 (Fig. 2). In strain M162, we identified a translocation of 300 kbp from pSymA into a large accessory plasmid. The MaGe annotations for the genes on the M162 accessory plasmid that match pSymA genes are pA0019-pA0464; none of these genes was annotated as a nod, nif or fix gene. Moreover, pSymA in strain M162 had a replication protein (RepA) identical to that found on an accessory plasmid in Ensifer medicae strains M2 and WSM419, suggesting that a horizontal gene transfer event also occurred. RepA is a main replication protein found on non-chromosomal plasmids [51]. Furthermore, the RepA from the M162 accessory plasmid was identical to those found on the pSymA replicons of the other E. meliloti strains. This indicates that a genomic translocation event involving pSymA and an accessory plasmid occurred in strain M162.
A second large gene transfer event was detected in strain USDA1021, where a 325 kbp region containing 337 genes moved from the chromosome to pSymB. The MaGe gene annotations (mpb0525-mpb0862) show that these genes are involved in flagellar biosynthesis and ATP production. This resulted in a 325 kb increase in the size of pSymB in strain USDA1021, and a similarly sized reduction in the chromosome, relative to the other strains (Table S1).
Transposon-associated loci and repeat elements by replicon
TALs were identified by running a motif search on the predicted protein sequences of our strains and identifying related eggNOG categories. These 1768 eggNOG classifications included transposase, integrase, recombinase, phage integrase family and IS family members (Table S3). Despite being much smaller than the other replicons, the accessory plasmids contained almost half of all the TAL families (Table 1). The TALs on accessory plasmids were also much more likely to be found in only a single strain; more than half of the accessory plasmid TALs were found in only a single strain, whereas <25% of the chromosomal TALs were found in a single strain. In contrast, >50% of the chromosomal TALs and >40% of pSymB TALs were found in nearly all of the strains (Fig. 3).
A total of 327, 83 and 49 of the TAL families were found on only one, two or three of the main replicons, respectively. If the accessory plasmids were also counted as a single replicon class, 232, 156 and 39 TAL families were found on one, two or three main replicons, respectively. Moreover, 47 TALs were found on all three main replicons and one or more of the accessory plasmids. The presence or absence of specific TALs was more related to a replicon than to a specific strain (Fig. 4).
We also identified repeat sequence elements (REs) by first searching for regions of high similarity within each genome [27], and then clustering these sequences, based on sequence identity, across all genomes to form hidden Markov models (HMMs). This process identified 48 133 repeated elements, ranging in size from 66 to 12 593 bp, that clustered into 688 repeat families by similarity Markov clustering (MCL) [52].
Each strain contained from 2702 to 3132 REs, accounting for 1.46-2.04 Mb of DNA and comprising ~22-28% of the total sequence content of each strain. The REs were more evenly distributed amongst the three main replicons than were the TALs: the chromosome contained 417-446 repeats/Mb, pSymA 380-561 repeats/Mb, and pSymB 390-408 repeats/Mb (Table 1). In contrast to the TALs (Fig. 3), approximately 50% of the repeat families were found in all strains, and less than 5% were found in only a single strain (Fig. 3).
Although the majority of REs were not replicon-specific, the distribution of REs among all replicons and strains was more related to a replicon than to a strain (Fig. 4). Like TALs, the REs could be clustered into families with 78, 22 and nine found only on pSymA, pSymB or the accessory plasmids, respectively. The accessory plasmids and pSymA shared 68 RE families that were not found on the chromosome or on pSymB, whereas there were 14 RE families shared by the accessory plasmids and the chromosome but not with pSymA, and five RE families were shared by the accessory plasmids and pSymB but not with pSymA.
Accessory plasmids as part of the pSymA pan-replicon

The 14 accessory plasmids identified in our sample contained 3215 predicted genes. These accessory plasmids had a ratio of predicted genes (including TALs) to sequence length (1.01 genes/kb) similar to those of the other replicons: the chromosome (1.03 genes/kb), pSymB (1.01 genes/kb) and pSymA (1.17 genes/kb). As expected, there were no genes in common among all the accessory plasmids, and several of these plasmids shared little sequence identity with any of the others. Adding the accessory genome to the pSymA core and pan genome increased the number of core genes from 536 to 553 (excluding M162 and USDA1021; Fig. S4). The RepA protein was not identical across all accessory plasmids based on amino acid comparison (Fig. S5), indicating a potentially diverse source of genetic material [53]. RepA proteins highly similar to those found on our accessory plasmids were also found in other rhizobial species, including E. medicae, Ensifer fredii and Agrobacterium tumefaciens.
The gene content of the accessory plasmids was compared to the pan-replicon of each of the three main replicons for all 16 strains, based on gene annotation from MaGe (Table 2). This analysis indicated that more genes from the accessory genome were found exclusively on pSymA than on pSymB or the chromosome, suggesting that the accessory plasmids and pSymA exchange genetic elements more readily than the other replicons.
TALs accounted for about 12 % of the total gene content of the accessory plasmids. Less than 2 % of TALs were exclusive to the chromosome or pSymB, whereas 32 % of the accessory plasmid genes were also found exclusively on pSymA.
Further evidence of a shared pSymA/accessory plasmid genome was found by performing a BLAST analysis against the NCBI NT database using each of the accessory plasmids as a query sequence. Many of the accessory plasmids had BLAST alignments to plasmids found in other rhizobial species, and many contained genes similar to those found on pSymA (Table S4). For example, the accessory plasmid from strain T073 contained a 13 kb gene region matching E. fredii, an accessory plasmid from Ensifer sojae, and plasmids A and C of Ensifer americanum at ~95% identity.
Perhaps the most interesting match was between accessory plasmid B in strain M270, which contained 64 genes (>20% of the total) that were also found on the Ti-plasmid of A. tumefaciens strain C58 (Fig. S6, Table S5). Genes involved in agrocinopine synthesis, transport and catabolism were present, but those required for T-DNA transfer [54] were missing. Agrocinopines are a sugar-phosphodiester subclass of opines (amino acid derivatives) typically found in tumours induced by A. tumefaciens [55]. While agrocinopines were originally thought to be synthesized only in crown gall tumours, a wide variety of bacteria are capable of utilizing them [56], and E. meliloti strain M270 probably gained the ability to synthesize and catabolize these opines via horizontal gene transfer. The genes involved in conjugation were also present; these were found on other accessory plasmids as well, which may indicate a transmission advantage for accessory plasmids carrying a conjugation gene cluster.
Replicons have distinct evolutionary lineages
A Mantel test was used to determine whether the nucleotide content of the replicons (chromosome, pSymA and pSymB) in a given strain diverged together or separately. The Mantel test revealed no statistically significant correlations in pairwise divergence among strains for any pair of replicons (all P > 0.3, Table S6, Fig. S7). This shows that two strains might have similar chromosomes but distinct pSymA or pSymB replicons. The results in Fig. S8 show that the phylogenetic trees of single-copy core genes for each replicon were distinct. Additionally, the MicroScope Gene Phyloprofile tool was used to examine the proportion of genes found in each strain relative to the reference strain Rm2011 (Table S7). The lack of correlation in the rates of divergence can be seen by comparing the rank order of strains. For example, strain T073 had the fourth most similar chromosome to Rm2011, but was only 14th closest for pSymA; in contrast, strain Rm41 had the 12th most similar chromosome and was fourth closest for pSymA.
DISCUSSION
Characterizing genomic diversity is an important step in identifying genes responsible for naturally occurring variation, as well as gaining insight into past adaptation and evolutionary processes. The E. meliloti genome consists of three large replicons found in every strain, as well as smaller accessory plasmids that were present in only some strains. Previous analyses found that the replicons differ in their evolutionary histories, due to the strength of purifying selection to which they are subjected, the extent of horizontal gene transfer and the proportion of core versus accessory genes [5,10]. Here, we show that the three primary replicons also do not often exchange genes with each other, leading to replicons with distinctive G+C, gene, RE and TAL contents.
Perhaps most strikingly, we saw evidence for only 40 non-transposable-element core genes having moved between replicons (fewer than 1% of the 4600 core genes), with the exception of a single large translocation event involving the movement of a ~300 kb region from the chromosome to pSymB. The nearly complete lack of core gene movement between replicons is puzzling, given that there is experimental evidence for frequent genome rearrangements in Ensifer under laboratory conditions [57], and that some closely related rhizobia have shown genome instability and plasticity. Additionally, essential genes from an Ensifer ancestor are thought to have moved from the chromosome to pSymB [58]. TALs and REs, both of which can contribute to gene movement through the translocation of genes between cells and can mediate horizontal gene transfer, are abundant on each replicon and often found on multiple replicons [15,[59][60][61][62].
We identified relatively large numbers of REs and TALs compared to many bacterial genomes; the main chromosome carried ~18% repeated sequence in our population. In another study, prokaryotic genomes with 20% or more repeated sequence had the highest repeat coverage in a sample of 720 genomes [63]. Our study used different metrics than those of Treangen et al. [63], and probably identified more repeated sequences, including more diverged repeat elements.
The presence of TALs and REs on multiple replicons indicates that movement between replicons is possible, as do the two large translocation events we detected: a ~300 kb region that moved from pSymA to an accessory plasmid, and a 325 kb region that moved from the chromosome to pSymB. Although the G+C content consistently differed among replicons, a phenomenon also found for the multiple chromosomes of Burkholderia cenocepacia [64], the magnitude of the difference (<3%) is not expected to act as an appreciable barrier to gene exchange. The TALs, unlike the REs, did cluster more closely between pSymA and the accessory plasmids (Fig. 1, Table 1), suggesting that TALs may be involved in inter-replicon gene transfer between pSymA and the accessory plasmids.
Understanding inter-replicon gene movement is also important to understand the evolution of Ensifer strains as foreign DNA inside the cell will either be eliminated, persist autonomously as a plasmid, or become co-integrated into an existing plasmid through inter-replicon gene transfer [65]. Although TALs can clearly have important roles in gene movement, only seven of 40 possible inter-replicon translocation events we detected had TALs in the regions flanking the translocated regions. In contrast, a high frequency of rearrangements has been found in Rhizobium etli [66,67], and all of the genes showing evidence of inter-replicon movement were found in multiple copies, in at least some strains. This suggests that gene duplication may be involved in inter-replicon gene transfer between replicons in Ensifer. Another possibility is that gaining a functionally redundant gene on a different replicon allows for the loss of the original gene without loss of critical function.
Despite finding that <1% of core genes were located on different replicons in different strains, we detected evidence for extensive gene movement within replicons, both within the chromosome and within the two megaplasmids. We also found that movement of genes between pSymA and the accessory plasmids was extensive, with >40% of accessory plasmid genes also found on pSymA. Given that small plasmids can play central roles in inter-strain gene transfer through conjugation [62], the high rate of gene sharing between pSymA and the accessory plasmids suggests that these plasmids may be an important mechanism by which genes move between strains, and potentially between bacterial species. This is particularly important from a symbiotic and host-range perspective, given that many of the genes essential for establishing a functional symbiosis are found on pSymA [68].
Accessory plasmids in E. meliloti have been shown to cause host incompatibility or increase nodulation competitiveness [9]. Indeed, the type IV secretion system, which has been shown to have a variety of effects on nodulation [69,70], is found on an accessory plasmid or pSymA [69]. Because accessory plasmids can exchange gene content with pSymA, this may allow for the rapid gain or loss of symbiosis-related genes. This phenomenon may lead to some of the symbiotic instability noted for these and other fast-growing rhizobia, where symbiosis genes are plasmid-borne.
While genes on the accessory plasmid were also found as part of the pSymA pan-genome, this was not the case for all genes and some were found in bacterial species other than rhizobia, presumably the result of horizontal gene transfer.
Most striking was that one of the E. meliloti accessory plasmids carried 64 A. tumefaciens-like genes. This indicates that rhizobia can probably obtain genes from Agrobacterium and other soil microbiota in their free-living, saprophytic, soil phase of existence. Agrobacterium and Ensifer are closely related, representing different genera within the family Rhizobiaceae [71]. Agrobacterium Ti-plasmids can be maintained and expressed by E. meliloti, although this rhizobial transconjugant is still unable to form tumours on plants [72]; Rhizobium trifolii, by contrast, can induce tumours with the addition of a Ti-plasmid [73]. Although the Ensifer M270 accessory plasmid B did not contain the tumour-inducing genes, it did contain opine metabolism genes that are found on A. tumefaciens Ti-plasmids [74]. This may give the bacterium a selective advantage for growth in some soils and in association with plant roots.
Conclusion
The complete sequences and analyses of 16 E. meliloti genomes offer important insight into the evolution of symbiosis-related loci in this bacterium. Our analyses, done using de novo assembled long read sequence data, revealed that the three main replicons have different characteristics with respect to gene content, REs, TALs and G+C content. Ten of the strains harboured accessory plasmids, often with distinct replication proteins, and their gene content was more similar to that of pSymA than to the other replicons. Further studies should investigate this phenomenon, which may give insight into how accessory plasmids form and interact in populations of rhizobia in soils. Intra-replicon gene transfer is associated with REs, but not TALs. The gene transfer events that occurred between the accessory plasmids and pSymA demonstrate one mechanism by which the Ensifer symbiosis is constantly evolving.
Funding information
These studies were funded, in part, by grants DBI-1237993 from The National Science Foundation and USDA-HATCH award number MIN-71-030. | 2018-04-27T04:32:23.861Z | 2018-04-19T00:00:00.000 | {
"year": 2018,
"sha1": "ef87e7c695c08b0cf5ce0800099e6dba69be08fb",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1099/mgen.0.000174",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a953098461cc4623ba85dbbca88ba7ae1eaff79b",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
8479034 | pes2o/s2orc | v3-fos-license | Fragility, fluidity, and resilience: caregiving configurations three decades into AIDS
ABSTRACT HIV and AIDS have impacted on social relations in many ways, eroding personal networks, contributing to household poverty, and rupturing intimate relations. With the continuing transmission of HIV particularly in resource-poor settings, families and others must find new ways to care for those who are living with HIV, for those who are ill and need increased levels of personal and medical care, and for orphaned children. These needs occur concurrently with changes in family structure, as a direct result of HIV-related deaths but also due to industrialization, urbanization, and labor migration. In this special issue, the contributing authors draw on ethnographies from South Africa, Swaziland, Lesotho, Zambia, and – by way of contrast – China, to illustrate how people find new ways of constituting families, or of providing alternatives to families, in order to provide care and support to people infected with and afflicted by HIV.
For nearly half a century, scholars have recognized the various ways that families are constituted, ways that go beyond rigid arrangements and grand theories about marriage and reproduction that characterized the structural approach to kinship prior to the feminist turn in the 1970s (Lévi-Strauss, 1949/1969; Needham, 1971; Radcliffe-Brown, 1952; Schneider, 1984). Anthropologists, in particular, have made great strides in rethinking how families are formed through various processes of kin-making and unmaking, drawing on the rich data derived from Western examples, where rapid social change necessitated and facilitated a move away from heteronormative, biogenetic, and strictly patriarchal modes of living and caring (see, e.g., Carsten, 2004; Ginsburg & Rapp, 1995; Inhorn & Birenbaum-Carmeli, 2008; Strathern, 1992, 2005). Societies in the Global North are increasingly fluid, and the nature of the family has changed dramatically. Despite what is often reported as firmly entrenched and reified cultural beliefs, and continued emphasis on lineality and patriarchal structures in local rhetoric (Block, 2014), kinship in societies in the Global South is equally fluid. Yet as articulated in state policy and programs, or as tacitly implied by the lack of other options, there is still a common assumption that kin will provide care for those in need. This is especially so in low-resource settings, where states lack the capacity to provide institutional alternatives, and where it is frequently assumed that kinsfolk are morally and socially obligated and thus will step up to provide care (Akintola, 2006; Foster, 2000; Seeley et al., 1993; Shawky, 1972). HIV/AIDS, however, combined with demographic and economic changes to families, has brought into sharp relief the ways in which families are made and unmade in unexpected ways. As we illustrate in this issue, families everywhere are flexible, and have to be so to be resilient in the face of economic, political, social, and personal forces.
The unifying concern of the families and communities represented in this issue is the impact of HIV/AIDS on their everyday lives. However, HIV is now a chronic illness, co-existing with other conditions, and AIDS is not most people's primary concern. Rather, people are concerned with a wide range of questions around livelihood, sociality, and relatedness; HIV and AIDS shape and underpin many of these relationships. Our aim in this collection is to tease out the different ways in which care is distributed, so as to illustrate how families and communities provide care and support to people infected with and afflicted by HIV.
Care is a deeply personal and intimate labor of love, obligation, and social reproduction, and we begin this introduction by grounding our interests in the lives of people as they are influenced by HIV. The following excerpt is from Ellen Block's field notes about a young child whom she came to know in Lesotho. All personal names are pseudonyms.
Kananelo Mohlomi was born in South Africa in 2005 to a Mosotho woman called Lebo. Lebo had moved to KwaZulu Natal four years prior to find work; her husband, 20 years her senior, stayed behind in the highlands of Lesotho. Kananelo was only 21 months of age when Lebo died of AIDS. Although the family did not have the money to bring Lebo's body home for burial, an uncle travelled to South Africa to bring Kananelo home to her maternal grandmother in rural Lesotho. When she arrived, Kananelo weighed less than 5 kg; her skin was wrinkled and desiccated; her eyes deeply set in their sockets. She showed signs of both advanced AIDS and TB. She was wasted and despondent. She had no appetite, and had consumed only water for the previous few days. What little food she ate gave her diarrhea. Her breathing was shallow. She had an ear infection that produced a putrid smell. Her grandmother thought her death was inevitable, and she simply wanted the child to return to Lesotho to die in the midst of love and care from her family. But other family members convinced the grandmother to let her go to the hospital. There, Kananelo started ART and TB treatment regimens. She slowly regained her health and returned to her family after several months.
While she was in hospital, Kananelo's grandmother died. Kananelo's mother's estranged husband, Ntate Rorisang, was not her biological father, and he was in his 60s, but he accepted Kananelo as his daughter because his social role as father outweighed any questions of paternity. At marriage, he had paid Lebo's family lobola (bridewealth payment) of seven cows, giving him both rights and responsibilities to the children born to her.
For almost a decade now, Kananelo has been living with Ntate Rorisang in a village in the mountains, with other grandmothers, aunts, and uncles, some consanguineal kin, others not, helping to care for her.
Everywhere, people negotiate with others as they determine who should provide care for those living with or otherwise affected by HIV. In this issue, we are concerned with how care responsibilities fall to some family members and not to others, and the mix of pragmatic decision-making and fortuity that influences this. We examine how families work around the limits of their personal capacities and economic contingencies to provide care, and how institutional and structural factors define what can be provided under the rubric of care in different environments.
Kananelo's story illustrates only one way in which kin might respond to a crisis precipitated by AIDS. Her story has an apparently happy ending, as she is embraced by her mother's consanguineal and affinal kin, but it need not have taken this turn. The circumstances of Lebo's migration, infection, and death, and Kananelo's near death, draw attention to the ways that demographic shifts and socio-economic realities in low- and middle-income countries have altered family relations and social dynamics. These effects reverberate meaningfully, and sometimes painfully, through relations of care and responsibility. In southern Africa, from which most of the articles in this issue derive, social and economic life over the past several decades has been characterized by high rates of labor migration, increased marital dissolution, a rise in both the numbers and acceptance of children born outside of recognized unions, and, as a direct result of HIV infection and AIDS, lowered life expectancy. The exceptionally high incidence of HIV and its continued transmission, despite three decades of public health and medical interventions, continue to have a significant impact on families. HIV has cut into family life wherever people are infected, requiring individuals, households, and communities to reimagine caregiving and social responsibility to others (Majumdar & Mazaleni, 2010). As Dawson (2013) illustrates, the extended family, where it is sustained, can no longer be regarded as a certain "safety net" for caregiving or for other social or economic purposes.
Kananelo's situation also captures some of the tensions that exist between idealized forms of social relations and the increasing flexibility required to respond to the care needs of extended kin. Kananelo's family was able to respond flexibly, in ways that illustrate the richness of the local kinship networks in the study setting, but also the capacity of people to interpret and use cultural conventions to their own ends. Here, the bridewealth payment at marriage established the responsibilities of kin for a small child who was effectively orphaned. In other settings, motives to provide care may be determined by family circumstances and individual family and gender-based values (Akintola, 2006;Opiyo, Yamano, & Jayne, 2008;Orner, 2006;Shefer et al., 2008). Although women may be expected more often than men to take on care work, shifting circumstances shape dependency, care needs, and caregiving. In poor urban and rural environments, and in households where parentage may be unknown or unacknowledged, there may be less robust networks of kin or other available and willing adults (Kang'ethe, 2009;Moyer & Igonya, 2014). In settings where everyday circumstances are shaped by physical distance from services, social isolation, poor infrastructure, and absolute poverty, people may lack the networks and the resources to assist in providing the most basic care and support.
Worldwide, HIV and AIDS affected populations enormously in the first two decades of the epidemic, as men and women of productive age were diagnosed and rapidly fell ill and died. Many of those who died were never counted as having died from AIDS, because of the stigma to the individual and family, and because a positive test was seen as a certain death sentence. Antiretroviral therapy (ART) has proven a turning point in the trajectory of the illness, and while effective prevention appears still to be elusive, HIV is (in 2016) in most cases a chronic condition. The increasingly routine introduction of ART at the point of diagnosis means that people can potentially live long lives despite their infection, as long as access to treatment continues to expand, and those who accept and remain on treatment are unlikely to transmit the infection by any means. Yet while the health outcomes of those living with HIV have much improved, the disease continues to have a great impact on populations, including nuclear and extended family systems, informal care networks that extend beyond families, and formal institutions and health care providers.
The necessary medicalization of everyday life with ART means that there is an expectation that the person with HIV will require self-care and care by others within his or her network: to monitor adherence to medication, to monitor general health, and to treat other diseases (tuberculosis especially). Personal behavior is also part of the field of surveillance of those with AIDS, so as to reduce risks to self and others (of reinfection, or of other infections). But as Whyte (2009) has argued, they are not "simply" patients; their lives extend beyond clinic doors. People living with HIV are also members of families, support groups, and communities, and their health impacts the lives of those in their close social networks and beyond. From within these webs of supportive kin and non-kin, formal and informal arrangements emerge for the provision of care, with labor divided depending on the specific capacities of its members. Care work for those living with HIV can be arduous, depending on the health status of the individual; it can include personal bodily care, child care, the provision and preparation of food, household chores and maintenance, assistance with transportation, medical appointments, assistance with finances (paying bills and lending money), and social activities. Patients and families must extend and redefine caring relationships to provide a range of support and assistance.
As the authors in this issue illustrate, kin relationships that contribute to such care are neither prescriptive nor predictive. Family caregiving may be an extension of the reciprocity that shapes family relationships, but as we have noted above, gender and specific kinship relationships influence who takes on care work. Wives are more likely to provide care for a spouse than husbands, though some men do care for their partners. Daughters are more likely to take on care work than sons, and grandmothers provide substantial care for their sick children and their grandchildren, whether or not infected. Children too often take on the responsibility of providing personal care (Andersen, 2012). Wider networks are also often brought into the fold, including the siblings of primary caregivers or care recipients, grandchildren, and other older people; we illustrated this above for Kananelo. However, as demonstrated by some of the authors in this issue, sometimes there are no kin to care for people who are sick, or none able or willing to take on the long-term care work of children orphaned by HIV. Sometimes, families consider the cost of time out of work to be too great, and so they must privilege the continued well-being of all household members over the everyday needs of one person. And sometimes friends, community members, or health workers are asked to, or attempt to, fill this niche. Churches throughout the region have, paradoxically, both stigmatized and excluded people with HIV (Campbell, Nair, Maimane, & Sibiya, 2008; Goudge, Ngoma, Schneider, & Manderson, 2009) and provided many of the volunteers working with people in greatest need (Akintola, 2010, 2011). Klaits (2010) has shown how church members in Botswana can sometimes take the place of kin, and at times act like kin, to perform the many tasks required for a dying person. Igonya and Moyer (2016) illustrate this mutual support in their description of a group of Kenyan HIV-positive men who have sex with men. These men support each other during illness, yet they also care for each other due to other social vulnerabilities, particularly in the absence of supportive families and in the context of a hostile wider community. A number of papers in this issue also emphasize this point. For example, Casey Miller examines support networks of gay men in China who willingly provide support and care for each other. Likewise, Nonhlanhla Nxumalo, Jane Goudge and Lenore Manderson show how sometimes no kin, whether fictive, affinal, or consanguineal, are available, and such people may be dependent on community health workers to fill the gaps.
This issue begins with a unique contribution to the growing literature on kinship and care in the context of the continuing transmission of HIV. Fortunate Shabalala, Ariane de Lannoy, Eileen Moyer, and Ria Reis present five case studies that illustrate the diverse experiences of family, care, and belonging among HIV-positive adolescents. These young people live in Swaziland, a country with one of the highest HIV prevalence rates in the world and where more than two-thirds of the population lives below the poverty line. What emerges from their study is the importance of biogenetic connections: these shape who is considered "family" and who is not. In contrast to most of the papers in this issue, Shabalala and her co-authors show how kinship in this context is interpreted as inherently inflexible, narrowing instead of broadening the possibilities for appropriate care. In teasing this out, the authors highlight the methodological as well as ethnographic significance of their work, as they point to the importance of working with and learning from young people about their experiences and desires regarding family, belonging, and care. It is perhaps a caution against romanticizing the willingness to care, and against ascribing the values and beliefs of older generations to younger ones.
In southern Africa and elsewhere, the voices of adults dominate discussions of care, including in deciding the best interests of the child or children. Sometimes kin take on this task with apparent enthusiasm, but often this has multiple motives. Dahl (in press) and Reynolds (in press), for example, both draw attention to the significance of the monetary compensation that kin may receive to care for orphans. In both their studies, state subventions provide a motive to care as much as they compensate sometimes distant kin for the "costs" of care. Further, as Sonja Merten illustrates in this issue, in Zambia, those who are available to provide care, the nature of the care they can provide, and the conditions under which this occurs varies on the basis of pragmatism, sentimentality, and happenstance.
As Shabalala and her co-authors show, young people in Swaziland emphasized biogenetic connections in creating family and belonging. But the responsibility of caring for young people varies by social and economic context and cultural convention. Most societies have rules about who should care for children and in the majority of cases, everyday caregiving is relegated to mothers and other female kin. Southern Africa increasingly has a feminized labor force, with women migrating away from their families in search of work and remitting funds to support those left behind, including their own children and others. In such contexts, resources are central to decisions about who provides, and if, when, and how care is provided. Knight, Hosegood, and Timaeus (in press) illustrate this in considering the different motives to care for kin in KwaZulu Natal. By extending the definition of care to include financial support and in-kind assistance, these authors show that people still uphold particular kinship-related care obligations. In their rural research setting, where employment opportunities are scarce, out-migration is both common and necessary, and families must balance the decision to earn an income to support all members of the family against the needs of particular individuals for everyday health supervision, personal care, and social support. Knight and colleagues are concerned with evolving ideas of responsibility and obligation that underlie decisions around caregiving, and highlight that while some decisions may derive from physical and relational proximity within webs of relatedness, other decisions are strategic and conditional.
We know little of what happens in families when women are not around to provide care for children because they have died from AIDS or have migrated for work. Ellen Block shows how in Lesotho, in the absence of female caregivers, men are increasingly taking on the role of providing care for children affected by HIV/AIDS. This deviation from the ideal that women should care for their children has not translated to a change in the discourse about care responsibility; women remain the preferred (and primary) caregivers and care is often spoken about using feminized discourse. The emergence of male caregivers, Block suggests, is a consequence of larger structural issues and demographic shifts which have manifested in the increasing difficulty men face in finding work and their diminished ability to provide for their families (Shefer et al., 2008;Spiegel, 1981). It also occurs in the context of the dwindling numbers of AIDS-free grandmothers, as the HIV-affected population ages (Negin & Cumming, 2010;Negin, Mills, & Bärnighausen, 2012).
The entanglement of health and care with global politics and larger socio-economic issues runs thematically through all articles in this issue, but it is particularly pertinent in Sonja Merten's consideration of family and care in Zambia. With the changing salience of kinship and changes in household structure consequent to urbanization and Westernization, Merten writes that sibling obligations no longer pertain, and siblings are far less likely now than in the past to provide care to someone with AIDS, or for any other reason of frailty, illness, or limited capacity to self-care. This, she suggests, is not because of changes in how families are created, but because of the dilution of mechanisms of accountability, which previously compelled family members to assume care responsibilities. There has been a presumption that families will provide care for people living with AIDS, but relatives are not always willing or able to do so. Given the unpredictability of care, people attempt to build up care capital by shaping interpersonal relationships through economic, social, and affective bonds.
From these examples, we turn to other relations of care, as provided through communities, health services, and self-care. In his article on HIV-positive gay men in China, Casey Miller illustrates the importance of care provided by people outside of kinship networks. Stigma, exclusion, and discrimination continue to shape the everyday life of gay men in much of China, and homophobia remains widespread and institutionalized. Men with HIV infection necessarily have to manage the double burden of stigma as a result of sexual identity and same-sex unions, as well as their HIV status. These men need to make their own families, and various NGOs function as alternative families of care. The networks that arise from these NGOs extend beyond the practical provision of everyday care and support; they also take on advocacy around moral and political agendas. Like the gay men's community-based organizations and social networks that emerged, notably in Australia, in the first decade of AIDS (Edwards, 1990/2004; Power, 2011; Whittaker, 1992), in China the voluntary caregiving that gay men undertake for each other becomes simultaneously a process of caring for others and caring for the self. Caregivers and care recipients provide reciprocal support; caregiving is never unidirectional.
The care that people provide to each other flows from the affective quality of their personal relationships: as husband and wife, for instance, or mother and son. Relationships of care largely exist because of the emotional commitment that people have to each other, even (or especially) in the context of calculations of the costs and benefits of caregiving, and the exigencies of time, proximity, and resources. But the kind of care people provide to each other is largely determined by capacity, as illustrated in the examples presented by Nxumalo, Goudge, and Manderson. In their cases, people are limited in their ability to do the work of care because of geographic isolation, material deprivation, acute poverty, lack of access to goods and services, and their limited capacity to negotiate the bureaucracies that might provide resources.
Where resources are limited, community health care workers play a critical role. However, in South Africa, bureaucratic and structural obstacles limit the assistance they can provide (Akintola, 2008; Schneider, Hlophe, & van Rensburg, 2008). Ironically, in the better-provisioned of the two provinces that these authors discuss, health workers did not feel that they had the resources at their disposal to address the needs of those most marginalized; in the poorer of the provinces, the health workers were able to mobilize support at either community or government level to draw attention to people's problems or to solve them directly. To some extent, this was possible because of the willingness of the community health workers to invest in building relationships with their clients, so as to establish relationships of trust. In public health terms, this increased the possibility that people would take their advice, thus increasing their effectiveness in providing primary health care. The need for community health workers to mediate between the formal health care system and marginalized populations is particularly important, given that structural and human resource challenges limit the effectiveness of the primary health care system. Community health workers can and often do provide important services that the formal health care system is not able to. At the same time, as Kalofonos (2014) has illustrated in Mozambique, the demands placed on community health workers can often be overwhelming, leaving them frustrated and with a sense of exploitation.
The final contribution of this supplement takes us back to people living with HIV, or those "at risk" (that is, all people), and so explores a different kind of care: care through empowerment. Jessica Ruthven discusses the attempt of theater makers in South Africa to heal what she sees as a "disconnect" between policy, programs, and target audiences. Her argument is based on a critique of the persistently didactic nature of HIV health education and health promotion programs in South Africa and elsewhere. People are admonished to use condoms, be faithful or abstain, and by so doing, they care for themselves and for others. However, the continued transmission of HIV demonstrates the limits to the success of these directives. Ruthven locates this disconnect between policies and everyday self-care in the inability and unwillingness of those shaping such policies and implementing programs to enable people to make their own decisions about health-related action. The majority of programs, she argues, are premised on the idea of a responsible individual who invests in self-care and self-regulation: a neoliberal ideal inadequately transported to other, often marginalized, populations and settings. Because these messages lack coherence and fail to speak to people's everyday experiences, the "care" that is delivered through these programs has the potential to be uncaring. According to the theater educators and advocates with whom she has worked, people are more likely to follow a course of action after weighing up their options and bringing their lived experiences to bear.
The authors of these articles illustrate a number of trends in care for family members impacted by AIDS. Women are most likely to provide continuous care, and non-related women are most likely to provide other support. However, men also provide active support and care for partners, siblings, and fostered or biological children. In the context of HIV, informal caregiving within the family and from wider kinship networks is supplemented by the engagement of community health workers and others in social networks. In doing so, caregivers often experience considerable pressure: a combination of emotional strain and the time and physical effort required for personal care. The immensity of these pressures helps to explain the significance some people give to state support for care, in low- and middle-income settings as much as in high-income countries, and reinforces the need to develop public policies and practices to enhance the welfare of caregivers and care recipients. Yet more important than these specific trends in care are the ways in which the many ethnographic examples in this issue point to people's context-specific interpretations and flexibility in managing care relations: people draw on kin and non-kin, they interpret relations as more or less flexible, and they navigate their social environments in anticipation of the future possibility of needing to provide or receive care. By attending to the relationships of care giving and receiving that are formed around HIV, and the limits to both formal and informal care in many settings, we gain a better understanding of the continuing reverberations of, and resilience required by, households and communities in the ongoing and aging epidemic.
Disclosure statement
No potential conflict of interest was reported by the authors.
"year": 2016,
"sha1": "305a5fc3fff073754528a7ae2734265357c30131",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/09540121.2016.1195487?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "6c99196afc8b9cdf22daddf359418bce32a152fc",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Medicine",
"Sociology"
]
} |
Role of the blue light receptor gene Icwc-1 in mycelium growth and fruiting body formation of Isaria cicadae
Isaria cicadae is a well-known, highly prized medicinal mushroom in great demand in the food and pharmaceutical industries. Because of its economic value and therapeutic uses, natural sources of wild I. cicadae are over-exploited and continue to shrink. Commercial cultivation in a controlled environment is therefore essential to meet consumer demand. Owing to the lack of knowledge on fruiting body (synnemata) development and its regulation, however, commercial cultivation is currently in a difficult position. In the growth cycle of macrofungi such as mushrooms, light is a main factor affecting growth and development, but the specific effects of light on the growth and development of I. cicadae have so far been unknown. In this study, we identified a blue light receptor white-collar-1 (Icwc-1) gene homologue with well-defined functions in the morphological development of I. cicadae, based on gene knockout technology and transcriptomic analysis. The Icwc-1 gene was found to significantly affect hyphal growth and fruiting body development. This study confirms that Icwc-1 acts as an upstream regulator of genes associated with fruiting body formation, pigment formation, and related enzyme synthesis. Transcriptome analysis also found that Icwc-1 affects many important metabolic pathways of I. cicadae, e.g., amino acid metabolism and fatty acid metabolism. These findings not only provide a comprehensive understanding of the molecular mechanism of light regulation in I. cicadae, but also offer new insights for future breeding programs and for improving production of this functional food.
Introduction
Fungi constitute the second largest group of species on earth after insects, with abundant resources. Entomopathogenic fungi are an important branch among them, with special value for human health. Isaria cicadae is an edible and potent medicinal entomopathogenic fungus with a range of immunogenic properties. The fruiting bodies (synnemata) of I. cicadae are collected for their multiple pharmacological attributes and unique flavor. These fruiting bodies contain abundant important constituents such as cordycepic acids (Sun et al., 2017), cordycepin (Olatunji et al., 2016a; Zhang et al., 2019), polysaccharides (Jike et al., 2016; Wang et al., 2019), adenosine (Latini and Pedata, 2010; Olatunji et al., 2016b), ergosterol peroxide (Kuo et al., 2003), and myriocin (ISP-1; Yu et al., 2009; Fujita et al., 2010). These ingredients show important pharmaceutical properties such as antitumor (Kuo et al., 2003), anti-influenza (Lu et al., 2015), and anti-inflammatory activities (Smiderle et al., 2014; Jiao et al., 2021). Over the past few decades, I. cicadae has become one of the most interesting research topics in the field of natural traditional medicines worldwide. Studies have also shown that some ingredients of I. cicadae have a significant clinical effect in treating nephropathy (Huang et al., 2020).
I. cicadae has many similarities with Ophiocordyceps sinensis in its components and functions, and can be used as a substitute for that expensive traditional Chinese medicine (Zeng et al., 2014). In addition, I. cicadae was recently listed as a novel food by the Ministry of Health of the People's Republic of China (http://law.foodmate.net/show-206341.html). However, owing to the rapid reduction of natural resources and the long life cycle, there is a serious shortage of wild I. cicadae and market needs are not met (Liu, 2008). Moreover, the lack of knowledge on the developmental regulation and growth conditions of I. cicadae fruiting bodies has negatively impacted large-scale commercial production. Given these constraints, there is an urgent need to focus on the developmental regulation of the fungus to meet the demand for artificial culture. The mechanism of fruiting body formation has great significance for the cultivation and breeding of artificially grown edible mushrooms, and the mechanism of morphogenesis in fungi, especially large edible basidiomycetes, has long been one of the hot topics in mycological research (Wu et al., 2019). To gain further mechanistic insight into this phenomenon, researchers have worked on the model fungi Schizophyllum commune and Coprinopsis cinerea (Terashima et al., 2005; Ohm et al., 2013). Yet the mechanism of I. cicadae fruiting body formation and development remains unclear, which limits scaling production up to the commercial level.
Light is one of the most important environmental factors in the life cycle of fungi and plays an important role in growth and metabolism. A majority of fungi respond to light, eliciting changes in several physiological characteristics including pathogenesis, development, and secondary metabolism. Fungi respond to light through photoreceptor proteins, in which the light-absorbing component undergoes photochemical and structural changes. In Neurospora crassa, light-induced changes are transduced from photoreceptor proteins to a signaling cascade that modulates downstream pathways by the White Collar-1 (WC-1) protein. N. crassa responds to blue light, and this response is mediated by the WC-1 and WC-2 proteins acting in a complex called the White-Collar-Complex (WCC), which functions as a light-activated transcription factor (Dasgupta et al., 2016). In recent years, many researchers have focused on the signal transduction pathways underlying fungal photosensitivity. In filamentous fungi, the mechanism of action of the wc-1 gene has been investigated most clearly (Estrada et al., 2008). Knockout mutants of the wc-1 and wc-2 genes in N. crassa are incapable of a light response, showing defects in carotenoid synthesis in mycelia, circadian clock dysregulation, and loss of phototropism of perithecial beaks (Linden et al., 1997; Liu et al., 2003). Subsequent experiments have shown that homologs of wc-1 are also found in macrofungi, including Coprinus cinereus (Terashima et al., 2005), Schizophyllum commune (Ohm et al., 2013), Cordyceps militaris (Yang and Dong, 2014), and C. sinensis (Yang et al., 2013). In some edible fungi, the functions of WC-1 homologs are related to fungal growth and development, implying that the wc-1 gene may have an important influence on growth and development (Ohm et al., 2013).
In this study, using genetic approaches and transcriptomic analysis in I. cicadae, we confirmed that the photoreceptor gene Icwc-1 plays a positive role in regulating growth and development. This work will also provide important information for I. cicadae breeding, improvement of production processes, and development of related functional foods.
Strains and growth conditions
The wild strain of I. cicadae (WT) was provided by the College of Agriculture and Biotechnology, Zhejiang University, PR China. The Icwc-1 knockout mutant strain (∆Icwc-1) and the complemented strain (∆Icwc-1-C) were developed in this study. Plasmid pCAMBIA1300 (Liu et al., 2015), used for the knockout vector, and PKD5-GFP (Qu et al., 2022), used for the complementation vector, were procured from Zhejiang University. Competent cells of Escherichia coli DH5α and Agrobacterium tumefaciens AGL-1 were purchased from Qingke (Hangzhou) Biotechnology Co., Ltd., PR China.
Bioinformatics analysis
The nucleotide sequence of the Icwc-1 gene was obtained from the genome of I. cicadae strain WT, and nucleotide sequence similarity was analyzed using BLAST from NCBI. Protein motifs were identified using the NCBI Conserved Domain Database. The amino acid sequence of the IcWC-1 protein was predicted from the genome of the wild-type (WT) strain, and WC-1 protein sequences of other species were obtained from the NCBI database. The neighbor-joining method was used to generate the phylogenetic tree of WC-1 protein sequences in MEGA7 (Kumar et al., 2016).
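For readers who want to sketch a comparable distance-based tree outside MEGA7, the snippet below shows one minimal way to do so with Biopython. It is only an illustration: the input file name wc1_aligned.fasta and the identity distance model are assumptions, not the settings used in this study, which ran neighbor-joining in MEGA7 with 1,000 bootstrap replicates.

```python
# Minimal neighbor-joining sketch with Biopython (pip install biopython).
# The file name "wc1_aligned.fasta" is a hypothetical placeholder for an
# aligned FASTA of WC-1 protein sequences.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Load a multiple sequence alignment of WC-1 homologs.
alignment = AlignIO.read("wc1_aligned.fasta", "fasta")

# Pairwise distance matrix based on the fraction of mismatched positions.
calculator = DistanceCalculator("identity")
distance_matrix = calculator.get_distance(alignment)

# Build the neighbor-joining tree and print it as ASCII art.
constructor = DistanceTreeConstructor()
nj_tree = constructor.nj(distance_matrix)
Phylo.draw_ascii(nj_tree)
```

Bootstrap support, as reported in Figure 1B, would be computed separately (e.g., by resampling alignment columns); the sketch covers only the core NJ step.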
Disruption of Icwc-1 gene in Isaria cicadae
Genomic DNA was prepared using the cetyltrimethylammonium bromide (CTAB) method (Doyle and Doyle, 1987). Primers Icwc-1-F/R were used to amplify the full-length Icwc-1 gene. A homologous recombination strategy was employed to delete Icwc-1 in I. cicadae. DNA fragments of 1,185 bp and 1,286 bp upstream and downstream of the Icwc-1 gene were amplified from genomic DNA with primers Icwc-1-UP-F/R and Icwc-1-DOWN-F/R, respectively. The hygromycin phosphotransferase gene (hph) fragment was cloned from plasmid pBHt2. These three fragments were joined with a fusion enzyme kit (Vazyme, China) and inserted into the vector pCAMBIA1300 digested with XhoI and HindIII to generate the pCAMBIA1300-Icwc-1 knockout vector.
The constructs were introduced into I. cicadae by ATMT following the method of Khang et al. (2006) with slight modifications. Conidia for transformation were harvested, suspended in sterile 0.05% Tween 80, and adjusted to a concentration of 10^5 spores/ml. Then, 100 μl of I. cicadae conidial suspension and 100 μl of A. tumefaciens were mixed, spread on an AIM agar plate, and co-incubated at 23°C for 2 days. The co-culture of A. tumefaciens and I. cicadae was overlaid with PDA agar supplemented with 300 μg/ml cefotaxime and 350 μg/ml hygromycin (hygB) and incubated at 23°C for 3-6 days. Primers Icwc-1-CK-F/R were used to identify transformants by PCR. All primers used in this study are listed in Supplementary Table S1.
Complementation of the Icwc-1 disruption mutant
To investigate the function of the Icwc-1 gene, a complementation experiment was carried out. Primers Icwc-1-HB-F/R (containing the PKD5-GFP vector linker) were used to amplify the Icwc-1 gene from the genome of the wild-type strain of I. cicadae. The full-length Icwc-1 gene, a 2,940 bp fragment, was inserted into the PKD5-GFP vector digested with XbaI and SalI, at the sites adjacent to GFP, to generate PKD5-GFP-Icwc-1. For complementation, PKD5-GFP-Icwc-1 was introduced into the ∆Icwc-1 strain by the ATMT method. Transformants were selected on DCM plates supplemented with 200 μg/ml SUR at 25°C. The complemented strain ∆Icwc-1-C was confirmed by PCR amplification using the primer pair Icwc-CoF and Icwc-CoR.
Validation of mutant transformants by Southern blotting and q-PCR
For Southern blot analysis, 50 μg of genomic DNA extracted from each of the three mutant strains and the wild-type strain was digested with the XhoI restriction enzyme and separated on a 0.7% agarose gel. The hph fragment, amplified from the pCAMBIA1300 plasmid using primers Southern-F and Southern-R, served as the probe. Probe labeling, hybridization, and signal detection were performed with a DIG DNA Labelling and Detection Kit (Cat. No. 11745832910, Roche, Germany).
In addition, single-copy insertion in the knockout mutants was confirmed by q-PCR according to the assay described by Lu et al. (2014). Briefly, with tubulin as the reference gene, a single copy of the target gene was indicated by ∆∆CT = 0.9-1.3, where ∆∆CT = (CT_HPH − CT_tubulin-m) − (CT_gene − CT_tubulin-w); CT_HPH is the CT value of hph in the mutant, CT_tubulin-m is the CT value of tubulin in the mutant, CT_gene is the CT value of the target gene in the wild-type strain, and CT_tubulin-w is the CT value of tubulin in the wild-type strain.
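The copy-number criterion above is simple arithmetic, sketched below in Python; the CT values are illustrative placeholders, not measurements from this study.

```python
# Sketch of the single-copy q-PCR check described above: with tubulin as the
# reference gene, a transformant is scored as single-copy when ddCT falls in
# the 0.9-1.3 window. All CT values here are illustrative placeholders.
def delta_delta_ct(ct_hph_mut, ct_tubulin_mut, ct_gene_wt, ct_tubulin_wt):
    """ddCT = (CT_HPH - CT_tubulin,mutant) - (CT_gene - CT_tubulin,wild-type)."""
    return (ct_hph_mut - ct_tubulin_mut) - (ct_gene_wt - ct_tubulin_wt)

def is_single_copy(ddct, low=0.9, high=1.3):
    """Apply the 0.9 <= ddCT <= 1.3 single-copy window used in the assay."""
    return low <= ddct <= high

ddct = delta_delta_ct(ct_hph_mut=22.1, ct_tubulin_mut=21.0,
                      ct_gene_wt=23.2, ct_tubulin_wt=23.0)
print(f"ddCT = {ddct:.2f}; single copy: {is_single_copy(ddct)}")  # ddCT = 0.90; True
```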
Measurement of hyphal growth rate, spore production, and biomass

Three I. cicadae strains (WT, ∆Icwc-1, and ∆Icwc-1-C) were maintained at 25°C on PDA medium. Spores were gently washed with ultra-pure water and diluted to 1 × 10^6/ml. A 10 μl spore suspension was inoculated at the center of PDA plates, which were incubated at 25°C; after 4 days of growth, the colony edge was marked and then measured every 24 h for the next 7 days under alternating light and dark (12 h:12 h) culture conditions. At these intervals, spores were washed off and counted. This experiment was repeated three times independently. For biomass, a 5 μl spore suspension was added to PDA medium overlaid with cellophane and incubated for 7 days under white light; hyphae were then collected, dried to constant weight in a 65°C oven, and weighed.
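Because the colony edge was marked daily, the growth rate can be summarized as the slope of a diameter-versus-time fit. The sketch below shows this with an ordinary least-squares line; the numbers are made-up placeholders, not data from this study.

```python
# Illustrative sketch: estimating linear colony expansion (mm/day) from the
# daily colony-diameter marks by ordinary least squares. All values are
# made-up placeholders, not measurements from the study.
import numpy as np

days = np.array([4, 5, 6, 7, 8, 9, 10])  # days after inoculation
diameter_mm = np.array([18.0, 24.0, 31.0, 37.0, 44.0, 50.0, 57.0])

# Fit diameter = slope * day + intercept; the slope is the expansion rate.
slope, intercept = np.polyfit(days, diameter_mm, 1)
print(f"colony expansion ~ {slope:.2f} mm/day "
      f"(radial growth ~ {slope / 2:.2f} mm/day)")
```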
RNA preparation and RT-PCR analysis
A 10 μl spore suspension was inoculated in PDB liquid medium and cultured at 25°C and 140 rpm under alternating light and dark (12 h:12 h) conditions, and samples were collected after 7 days. Mycelium (0.2 g) was ground to a powder in liquid nitrogen, and RNA was extracted using RNAiso Plus (TaKaRa, Japan) according to the manufacturer's instructions. Reverse transcription of total RNA was carried out using the PrimeScript RT reagent Kit with gDNA Eraser (TaKaRa, Japan). qRT-PCR was then performed using TB Green Premix Ex Taq (Tli RNaseH Plus; TaKaRa, Japan) to analyze the expression levels of development-related genes. All primers used in this study are listed in Supplementary Table S1.
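The paper does not spell out how CT values were converted to the relative expression levels reported later; a common convention for such qRT-PCR data is the Livak 2^(-ddCT) method, sketched below with purely illustrative CT values.

```python
# Generic sketch of the Livak 2^(-ddCT) fold-change calculation often used
# for qRT-PCR data; the normalization scheme and CT values here are
# illustrative assumptions, not details taken from the study.
def fold_change(ct_target_sample, ct_ref_sample,
                ct_target_control, ct_ref_control):
    """Relative expression of a target gene in a sample versus a control,
    each normalized to a reference gene (e.g. tubulin)."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    return 2 ** -(d_ct_sample - d_ct_control)

# Example: a Car1-like target in the mutant versus the wild type; a value
# of 0.50 would correspond to ~50% down-regulation.
print(f"fold change: {fold_change(26.0, 21.0, 25.0, 21.0):.2f}")  # 0.50
```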
Cultivation methods for fruiting bodies
For culturing I. cicadae, wheat medium as described by Guo et al. (2016) was used. Ten microliters of spore suspension of the wild-type strain WT, the mutant strain ∆Icwc-1, and the complemented strain ∆Icwc-1-C were inoculated separately on wheat medium and cultured in the dark at 25°C until the mycelium covered the medium. The mycelium-colonized substrate was then transferred to a culture box with alternating light and dark (12 h:12 h) for 1 month. The growth status of the strains was observed and photographed.
RNA-Seq analysis
For RNA-Seq analysis, the Illumina NovaSeq platform was used for paired-end sequencing of the wild type and the null mutant ∆Icwc-1. Raw RNA-Seq reads were processed to remove adaptors and low-quality bases using Trimmomatic v0.39 (Bolger et al., 2014) with default parameters (ILLUMINACLIP:adapters:2:30:10 SLIDINGWINDOW:4:20 MINLEN:50). To capture genes with low expression or only partial fragments in individual RNA-Seq samples, the clean reads of all samples were assembled into a reference sequence using Trinity v2.8.5 (Grabherr et al., 2011). Corset v1.09 (Davidson and Oshlack, 2014) was then used to aggregate transcripts into clusters according to the reads shared between transcripts, combining transcript expression levels across samples with the H-Cluster algorithm to classify expression differences between samples. The longest transcript in each cluster was selected as the representative sequence of the cluster and defined as the unigene sequence.
To obtain comprehensive gene function information, the transcripts were functionally annotated against several databases, including Nr (NCBI non-redundant protein sequences), Nt (NCBI nucleotide sequences), KOG (Clusters of Orthologous Groups of proteins), Swiss-Prot (a manually annotated and reviewed protein sequence database), UniProt (Universal Protein resource), KEGG (Kyoto Encyclopedia of Genes and Genomes), and GO (Gene Ontology), using eggNOG-mapper (emapper) v2.0.0 (Cantalapiedra et al., 2021). The clean reads were subsequently mapped to the reference sequences assembled by Trinity using bowtie2 (Langmead and Salzberg, 2011). To quantify gene expression levels, the number of clean reads mapped to each gene in each sample was counted as the read count using RSEM v1.2.28 (Li and Dewey, 2011). The R package DESeq2 was used to identify differentially expressed genes (DEGs) under the thresholds of FDR ≤ 0.05 and |log2FC| ≥ 1. GO enrichment was performed using the R package clusterProfiler.
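The DEG calling itself was done in R with DESeq2; the pandas sketch below only mirrors the final thresholding step (FDR ≤ 0.05 and |log2FC| ≥ 1) on an exported results table, and the file and column names are assumptions for illustration.

```python
# Mirrors only the DEG thresholding step described above (FDR <= 0.05 and
# |log2FC| >= 1). "deseq2_results.csv" and its column names are assumed,
# hypothetical exports of a DESeq2 results table, not files from this study.
import pandas as pd

res = pd.read_csv("deseq2_results.csv")  # hypothetical DESeq2 export

degs = res[(res["padj"] <= 0.05) & (res["log2FoldChange"].abs() >= 1)]
up = (degs["log2FoldChange"] > 0).sum()
down = (degs["log2FoldChange"] < 0).sum()
print(f"{len(degs)} DEGs: {up} up-regulated, {down} down-regulated")
```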
Results
The structural characters of the Icwc-1 gene and phylogenetic analysis of WC-1 proteins in related fungal species

The nucleotide sequence of the Icwc-1 gene, with a length of 2,940 bp, was identified from the genome of I. cicadae. The Icwc-1 gene contains an open reading frame of 2,878 bp, interrupted by one intron of 62 bp (Supplementary Material), and encodes a protein of 959 amino acid residues (see Supplementary Material for the amino acid sequence of the I. cicadae WC-1 protein). The sequence of the Icwc-1 gene was deposited in GenBank under accession number OP675621. Domain prediction with ExPASy showed that the IcWC-1 protein has three PAS (Per-Arnt-Sim) domains, the first of which is a LOV (light, oxygen, voltage) domain (Figure 1A), together with a GATA-type ZnF (zinc-finger DNA-binding motif) domain. In fungi, the LOV domain is conserved and can receive light signals as a receptor, while the GATA-type ZnF domain acts as a transcription factor that specifically recognizes GATA sequences in the promoters of downstream regulatory regions, suggesting that in I. cicadae, IcWC-1 may be a receiver of light signals that in turn regulates the expression of downstream genes.
In addition, we performed phylogenetic analysis of the white collar-1 (WC-1) proteins of 19 fungi, which showed that the WC-1 proteins of these species are evolutionarily similar, with the highest homology between I. cicadae and Cordyceps fumosorosea ARSEF 2679 (Figure 1B). The results indicate that I. cicadae is most closely related to C. fumosorosea ARSEF 2679 (ISF_00644) in evolutionary terms.
The single-copy knockout mutant was also verified by q-PCR, where the ∆∆CT value was 0.9. In short, PCR identification, Southern blotting, and q-PCR results all showed that we had successfully obtained a homokaryotic mutant of Icwc-1. The mutant ∆Icwc-1-40 was used to analyze phenotypes during fungal development (Figure 2D). To further confirm the biological role of the Icwc-1 gene, we generated the complemented strain ∆Icwc-1-C, in which the full-length Icwc-1 gene was introduced into the ∆Icwc-1 mutant, and verified it by PCR and RT-PCR. The 0.8 kb Icwc-1 fragment was amplified in the ∆Icwc-1-C strain and the wild strain (Figure 2D), indicating that Icwc-1 was successfully reintroduced into the ∆Icwc-1 mutant. RT-PCR analysis showed that Icwc-1 was not expressed in the mutant ∆Icwc-1, while expression of the wc-1 gene was restored in the complemented strain ∆Icwc-1-C (Figure 2E). Together, these results confirm both the complementation of the mutant strain and the correctness of the mutant itself. GFP observation confirmed that the IcWC-1 protein localizes to the nucleus (Figure 2F).
Figure 1 caption: The domain organization of the IcWC-1 protein and the phylogenetic tree of IcWC-1. (A) The regions corresponding to the LOV and PAS domains, the nuclear localization signal (NLS), and the ZnF domain are shown. (B) Phylogenetic tree of WC-1 homologous proteins from fungi. The amino acid sequences of WC-1 homologous proteins from different species were downloaded from the NCBI database for phylogenetic analysis (protein sequences in Supplementary Material). The topology was generated using the neighbour-joining (NJ) method in MEGA7 with 1,000 bootstrap replicates; the numbers represent the percentage of replicate trees in which the associated taxa clustered together in the bootstrap test.

Role of the Icwc-1 gene in growth characters and colony morphology

To investigate growth characters and the pattern of fungal growth, the colony diameters of the knockout strain ∆Icwc-1, the complemented strain ∆Icwc-1-C, and the wild type were measured 10 days after inoculation, and the results of triplicate experiments were compared and statistically analyzed (Figure 3). On the 10th day of incubation, significant differences (p < 0.05) in colony morphology and diameter were observed among the three strains. The colony diameter of ∆Icwc-1 was smaller and its spore production was reduced compared with the WT and the complemented strain ∆Icwc-1-C, indicating that the Icwc-1 gene of I. cicadae regulates both the growth rate of the fungal mycelium and sporulation (Figure 3B). In terms of colony morphology, the mycelium of the wild-type strain WT was sparse, thin, yellowish, and mainly creeping on the medium, whereas the mycelium of the ∆Icwc-1 strain was thicker, mound-like, and mainly aerial on the medium, with white mycelium (Figure 3A), indicating that the Icwc-1 gene has a pronounced effect on aerial hyphal growth and colony morphology.
Sporulation is an important means of reproduction in I. cicadae. When the Icwc-1 gene was knocked out, the number of spores of the mutant strain ∆Icwc-1 was significantly reduced compared with the wild-type strain WT and the complemented strain ∆Icwc-1-C, while the spore numbers of WT and ∆Icwc-1-C did not differ significantly (Figure 3C), indicating an influence of the Icwc-1 gene on sporulation of the fungus. The reduction in spores in the null mutant may result from a decrease in conidiophores relative to the WT strain following deletion of the Icwc-1 gene, indicating that Icwc-1 positively regulates the conidial spore formation pathway in I. cicadae.
Biomass is an essential component of large-scale production of I. cicadae. Biomass determination experiments showed that when the Icwc-1 gene was knocked out, the biomass of the strain increased significantly, possibly due to the increase in aerial hyphae in the mutant strain ∆Icwc-1, indicating that the Icwc-1 gene may significantly inhibit hyphal biomass production in I. cicadae (Figure 3D). In fungi, carotenoids are an important class of metabolites, and light is an important environmental factor inducing carotenoid biosynthesis. As the carotenoid synthesis pathway in this fungus is still unclear, we screened the genome of I. cicadae and found four homologous genes with crucial roles in carotenoid biosynthesis. Among them, Car1 is homologous to ylo-1 in Neurospora crassa (Estrada et al., 2008), while Car2, Car3, and Car4 are homologous to CCM_03059, CCM_03697, and CCM_06355 in Cordyceps militaris, respectively (Lou et al., 2019), which are key genes in the carotenoid biosynthesis pathway. The expression levels of these key genes may be reflected in the carotenoid content in vivo. RT-PCR results showed that the expression of Car1 and Car2 in the mutant strain ∆Icwc-1 was down-regulated by about 50% compared with the wild type, whereas no significant change in expression was observed for Car3 and Car4 (Figure 4A).
Impact of Icwc-1 on protease and amylase
Proteases and amylases are important enzymes in fungal physiology and play an important role in the invasion of hosts by I. cicadae and in its parasitic growth. Both enzymes help I. cicadae break down host proteins and carbohydrates for better nutrient absorption. RT-PCR was used to detect the expression levels of the pr1 and amy genes in the wild-type and mutant strains; the expression level of the pr1 gene in the mutant strain ∆Icwc-1 was 1.37 times that of the wild-type strain, whereas there was no significant difference in the expression of the amy gene (Figure 4A). These results suggest that Icwc-1 may mainly affect protease expression during infection in the dark, which may suit the habit of this entomopathogenic fungus, which infects cicada nymphs underground.
Figure 3 caption: Effect of the Icwc-1 gene on growth characters in I. cicadae. (A) Colony characters of the different strains (wild-type, ∆Icwc-1, and ∆Icwc-1-C) of I. cicadae, photographed and recorded after 10 days of alternating light and dark culture on PDA medium. (B) Colony diameter recorded after inoculating 5 μl of spore suspension on PDA medium and culturing under white light for 10 days. (C) Spore counts: strains were cultured on PDA plates for 10 days at 25°C, and spores were washed off with water, diluted to a set concentration, and counted under a microscope. (D) Effect of Icwc-1 gene deletion on the mycelial biomass of I. cicadae. The symbol '*' indicates a significant difference (p < 0.05) and '**' a highly significant difference (p < 0.01) compared with the wild-type strain.

Role of the Icwc-1 gene in fruiting body formation

Because I. cicadae is rich in medicinal properties, commercial production of its fruiting body requires attention to the mechanism of fruiting body formation and the factors responsible for growth and development. To investigate whether the Icwc-1 gene influences the formation of I. cicadae fruiting bodies, cultivation experiments were performed on artificial flour medium. The wild-type, mutant, and complemented strains were inoculated on flour medium and incubated in the dark until the mycelium covered the medium; the strains were then placed in the light for pin-head formation and development of fruiting bodies. The complemented strain ∆Icwc-1-C could form fruiting bodies like the wild-type strain, but when the Icwc-1 gene was knocked out, the strain formed only some aerial hyphae on the flour medium and could not differentiate into fruiting bodies (Figure 4B). These results show that the Icwc-1 gene is necessary for the growth and development of fruiting bodies in I. cicadae.
Discussion
Fruiting body development is a crucial phase of the mushroom life cycle and depends on various phenotypic and genotypic traits. Fruiting body formation is an important manifestation of the production value of mushrooms, and it depends mainly on environmental and genotypic factors. Light is one of the most prominent abiotic factors affecting overall mushroom growth and development, especially fruiting body development and pigment formation. Previous studies have focused on the effects of light quality on different stages of fungal development, but few studies have been undertaken at the genetic level. The light response of the model fungus Neurospora crassa is mediated by the WC-1 and WC-2 proteins acting in a complex called the White-Collar-Complex (WCC). The WC-1 protein utilizes FAD (flavin adenine dinucleotide) as a chromophore, the light-absorbing component of the photoreceptor complex. The LOV (light-oxygen-voltage) domain of WC-1 covalently binds FAD at an active cysteine residue upon light exposure. WC-1 contains a Zn finger domain (a GATA-like transcription factor domain), two PAS domains that modulate protein interactions, and a putative transcriptional activation domain. The WC-1 homolog can participate not only in blue light but also in red light responses in Aspergillus nidulans (Dasgupta et al., 2016).

Figure 4 caption: Effect of Icwc-1 gene deletion on the expression of genes related to mycelial metabolism and fruiting body formation of I. cicadae. (A) Relative expression of genes in the wild-type and Icwc-1 mutant strains of I. cicadae detected by RT-PCR. (B) Fruiting body formation of I. cicadae (WT, ∆Icwc-1, and ∆Icwc-1-C). The symbol '*' indicates a significant difference (p < 0.05) compared with the original strain, while '****' indicates an extremely significant difference (p < 0.01). Gene expression profile regulated by the Icwc-1 gene.

In this study, we cloned the Icwc-1 gene, the homolog of the blue-light photoreceptor gene of N. crassa, and investigated its effect on the growth and morphological development of the mushroom I. cicadae (Ponting and Aravind, 1997; Yang and Dong, 2014). The results indicated that the Icwc-1 gene is highly conserved across fungal species. Here, we first used genetic methods to obtain mutants of the blue light gene Icwc-1 of I. cicadae and carried out the related biological experiments. The decreased number of conidia after knockout of the Icwc-1 gene indicates that Icwc-1 promotes conidium production during the development and asexual reproduction stages of I. cicadae; higher conidium production might help the hyphae kink to form fruiting bodies. The aerial mycelium of the mutant strain ∆Icwc-1 was denser but less likely to form the pin-head structural primordium of synnemata, whereas the complemented strain ∆Icwc-1-C showed recovered phenotypes in fruiting body formation and mycelial growth. Our results are consistent with the study of fruiting body development in S. commune by Ohm et al. (2013).

Figure caption: Comparative gene ontology (GO) analysis of the differentially expressed genes (DEGs) between the transcriptomes of the wild type and Icwc-1 mutants. (A) The volcano plot is two-dimensional, with the y-axis representing the negative log10 of the adjusted p-value and the x-axis displaying the variation ratio of each gene (log2 FC). Green spots represent significantly down-regulated genes in the mutants, red spots indicate up-regulated genes [false discovery rate (FDR) < 0.05], and gray spots represent genes with no differential expression.

Those authors proposed that Scwc-1 plays an important role in mycelial aggregation and fruiting body maturation. In their model of fruiting body formation (Ohm et al., 2013), the ScWC-1 protein acts as a photoreceptor that receives light signals to regulate the activity of downstream transcription factors, promoting the synthesis of related proteins during fruiting body development. The same function of wc-1 homologous genes has also been found in large fungi such as C. militaris and C. sinensis (Yang et al., 2013; Yang and Dong, 2014). Cordyceps carotenoids have various bioactive medicinal properties (anticancer, antioxidant, immunomodulatory, etc.) and are also utilized in the food industry. In C. militaris, carotenoid content is considered a parameter of the quality standard (Lou et al., 2020). It has previously been observed that carotenoid biosynthesis is affected by light and imparts color to fungal mycelium and fruiting bodies. Our experimental results showed that the mycelium color of the Icwc-1 mutant strain differs significantly from that of the wild-type strain, which we speculate is caused by blockade of the carotenoid synthesis pathway. We therefore measured the transcription levels of the carotenoid synthesis genes by RT-PCR.
The results showed that expression of Car1 and Car2 in the mutant strain was significantly reduced, while expression of Car3 and Car4 did not change significantly. This suggests that in I. cicadae the Icwc-1 gene may control the amount of carotenoid synthesized by regulating the expression of the Car1 and Car2 genes. A study conducted on C. militaris showed similar results: researchers prepared a mutant of the Cmfhp gene (a light-responsive gene) and found that it affected fruiting body and conidia formation along with reduced production of carotenoids (Lou et al., 2020). The carotenoid biosynthesis pathway of this important fungus needs to be analyzed in depth in the future.
During the development of macrofungi, many genes regulate the development of the fruiting body, and their expression levels can serve as an important reference for mushroom development. The KEGG enrichment analysis conducted in our study revealed that the inactivation of Icwc-1 significantly affected regulatory genes participating in various biosynthetic pathways of I. cicadae, such as amino sugar and nucleotide sugar metabolism, ubiquitin-mediated proteolysis, ubiquinone and other terpenoid-quinone biosynthesis, and arginine and proline metabolism (Supplementary Tables S3, S4). These pathways supply crucial metabolites for fungal growth and development. Previous research showed that several signaling pathways and transcription factors participate in the fungal light response (Dasgupta et al., 2016; Yang et al., 2016). Transcriptome analysis in the present study showed that genes related to the MAPK signaling pathway and the two-component system signaling pathway were strongly down-regulated (Table 1). In our transcriptome data, four genes related to the two-component system signaling pathway, including UM03949 (encoding a transcription initiation factor TFIID subunit) and UM08613 (encoding RPN3, a proteasome regulatory particle subunit), were down-regulated. Two-component signal transduction (TCST) pathways are considered to act upstream of MAP kinase cascades and play crucial roles in hyphal growth and asexual development in filamentous fungi (Furukawa et al., 2005; Yu et al., 2016; Mohanan et al., 2017). Moreover, six genes involved in the MAPK signaling pathway were down-regulated in the ∆Icwc-1 mutant, including UM03948 [encoding a plasma membrane osmosensor that activates the high-osmolarity glycerol (HOG) MAPK signaling pathway] and UM06752 (encoding a MAPK component of the HOG pathway). The literature indicates that their orthologs regulate secondary metabolism and fruiting body formation (hyphal growth and conidiation) in fungi such as Magnaporthe oryzae (Mehrabi et al., 2008; Liu et al., 2011), Aspergillus fumigatus (Rocha et al., 2020), Trichoderma brevicrassum, and Neurospora crassa (Park et al., 2008). Expression of some transcription factors was also down-regulated in the ∆Icwc-1 mutant, such as UM07148 (encoding a GAL4-like Zn2Cys6-type factor) and UM01575 (encoding a basic leucine zipper (bZIP) family factor). Homologs of these two types of transcription factors are involved in important biological processes during growth and fruiting body formation in fungi, including conidium maturation in Beauveria bassiana (Chen et al., 2022), conidiation in Neurospora crassa (Sun et al., 2012), sexual development and stress responses in Aspergillus nidulans (Yin et al., 2013), asexual development in Aspergillus nidulans (Etxebeste et al., 2008), differentiation processes and phytotoxin production in Botrytis cinerea (Temme et al., 2012), and carotenoid synthesis and fruiting body formation in Cordyceps militaris. The homolog of UM01321 (encoding a Vivid protein) is a component of the transcription factor complex that initiates light-regulated transcriptional responses in Neurospora (Chen et al., 2010) and regulates fruiting body formation in Cordyceps militaris. UM02902 encodes a guanine nucleotide exchange factor (RhoGEF); RhoGEFs activate small GTPases of the Rho family (Fort and Blangy, 2017), which in turn activate a set of downstream effectors to control cell morphology in eukaryotes. In yeast, a RhoGEF regulates the pheromone response pathway (Hoffman et al., 2000), but RhoGEFs are rarely investigated in filamentous fungi.
Because transcriptome analysis showed that the expression of UM02902 was sharply reduced in the ∆Icwc-1 mutant, we infer that it is a putative gene related to fruiting body formation. Other putative genes related to fruiting body formation based on the transcriptome analysis are listed in Table 1 and need to be investigated in the future. The results of the present study are consistent with previous studies showing that light may stimulate multiple signaling pathways leading to the expression of specific transcription factors, which regulate fungal growth, fruiting body formation, and secondary metabolite synthesis. The synnema is the main part of this edible fungus used for human consumption, yet the molecular mechanisms of synnematal formation remain unclear. Our study provides candidate genes that need to be assessed in the future for a better understanding of the molecular mechanisms of synnematal formation, which may also be useful for molecular breeding improvement of this important fungus.
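The KEGG/GO enrichment discussed above rests on an over-representation test; a minimal sketch of the standard hypergeometric formulation follows, where all counts are illustrative assumptions, not values from this study:

```python
from scipy.stats import hypergeom

# Minimal sketch of a pathway over-representation test of the kind behind
# KEGG/GO enrichment. All counts are illustrative, not this study's data.
M = 8000  # background: all annotated genes in the genome
K = 120   # genes annotated to one pathway (e.g., MAPK signaling)
n = 600   # differentially expressed genes (FDR < 0.05)
k = 25    # DEGs that also belong to the pathway

# P(X >= k): chance of drawing at least k pathway genes among n DEGs
p_value = hypergeom.sf(k - 1, M, K, n)
print(f"enrichment p = {p_value:.3g}")  # would then be FDR-corrected
```

Here the expected overlap by chance is 600 × 120 / 8000 = 9 genes, so an observed overlap of 25 yields a very small p value, which is what flags a pathway as enriched.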
Here, we identified the blue-light receptor gene Icwc-1 in the ascomycete fungus I. cicadae as a novel regulator of synnematal development by providing the following lines of evidence: (1) the transcription of Icwc-1 was highly induced during the development of the fruiting body; (2) some genes related to fruiting body development were positively regulated by Icwc-1, such as those annotated to anchored component of membrane (54 related genes), apical part of cell (41 related genes), and division septum (40 related genes). Taken together, our results demonstrate that the Icwc-1 gene plays a critical role in the growth and development of I. cicadae, especially during the formation of fruiting bodies and carotenoid biosynthesis. Knockout of the Icwc-1 gene renders I. cicadae unable to form synnemata and impairs conidium formation, which is consistent with previous studies in C. cinerea (Terashima et al., 2005), S. commune (Ohm et al., 2013), and C. militaris (Yang and Dong, 2014; Lou et al., 2020). In future work, exploring the specific mechanisms and factors responsible for fruiting body development, especially the roles of other light-responsive genes alongside Icwc-1, will be an important direction.
Data availability statement
The original contributions presented in the study are publicly available. These data can be found at: GenBank, OP675621.
Author contributions
LS: conceptualization, methodology, data curation, investigation, writing - original draft, review and editing. NS: methodology, data curation, writing - review and editing. YG: methodology, data curation, investigation, writing - review and editing. DL: methodology, data curation, writing - review and editing. WC: supervision, resources, funding acquisition. YS: methodology, writing - review and editing. FL: supervision, writing - review and editing. JL: conceptualization, supervision, writing - review and editing. HW: conceptualization, supervision, writing - review and editing, funding acquisition. All authors contributed to the article and approved the submitted version.
Funding
This work was supported by a grant (Organism Interaction) from Zhejiang Xianghu Laboratory to FCL. This research was also supported by the Zhejiang Science and Technology Major Program on Agricultural New Variety Breeding (grant no. 2021C02073-9).
"year": 2022,
"sha1": "cb0625d4a8c46c3bd96ab443347496f3a09dd91e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "cb0625d4a8c46c3bd96ab443347496f3a09dd91e",
"s2fieldsofstudy": [
"Biology",
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
Vaginal neutrophil infiltration is contingent on ovarian cycle phase and independent of pathogen infection
The mucosa of the female reproductive tract must reconcile the presence of commensal microbiota and the transit of exogenous spermatozoa with the elimination of sexually transmitted pathogens. In the vagina, neutrophils are the principal cellular arm of innate immunity and constitute the first line of protection in response to infections or injury. Neutrophils are absent from the vaginal lumen during the ovulatory phase, probably to allow sperm to fertilize; however, the mechanisms that regulate neutrophil influx to the vagina in response to aggressions remain controversial. We used mouse insemination models and infection models of Neisseria gonorrhoeae, Candida albicans, Trichomonas vaginalis, and HSV-2. We demonstrate that neutrophil infiltration of the vaginal mucosa is distinctively contingent on the ovarian cycle phase and independent of the sperm and pathogen challenge, probably to prevent sperm from being attacked by neutrophils. Neutrophil extravasation is a multi-step cascade of events, which includes adhesion through selectins (E, P, and L) and integrins of the endothelial cells. We discovered that cervical endothelial cells expressed selectin-E (SELE, CD62E) to favor neutrophil recruitment and that estradiol down-regulated SELE expression during ovulation, which impaired neutrophil transendothelial migration and orchestrated sperm tolerance. Progesterone up-regulated SELE to restore surveillance after ovulation.
Introduction
Multiple types of bacteria, viruses, fungi, and parasites are transmitted through sexual contact. Among these, several pathogens have a high prevalence in the population, of which only some are currently curable (Treponema pallidum, Neisseria gonorrhoeae, Chlamydia trachomatis, and Trichomonas vaginalis), whereas others are incurable viral infections with only symptomatic treatment (hepatitis B virus (HBV), herpes simplex virus (HSV), human immunodeficiency virus (HIV), human papillomavirus (HPV)). The cases of sexually transmitted infections (STIs) and antimicrobial resistance increase yearly (CDC STD surveillance), and STIs have a substantial impact on the population of reproductive age, with overwhelming economic effects, working hours lost, and serious health consequences (infertility) beyond the infection itself (WHO STI surveillance 2015). Indeed, the World Health Organization (WHO) estimates that more than 1 million people acquire STIs worldwide every day despite all prevention efforts. Therefore, there is an unmet need to understand the interplay between pathogens causing STIs and host immune responses in order to design more efficient prevention strategies.
The female reproductive tract is highly susceptible to sex hormone variations during the menstrual cycle, which comprises three main phases: (a) the follicular phase (proestrus in mice), (b) the ovulatory phase or estrus, and (c) the luteal phase (metestrus). Estradiol (E2) increases during proestrus, reaches a peak during estrus, and gradually falls toward the end of estrus. After ovulation, progesterone (P4) levels peak and then fall during metestrus. Hormone level fluctuations during the menstrual cycle have well-known effects on tissue-resident leukocytes (1-4), including neutrophils, the predominant phagocytic immune cells in the vaginal lumen.
Neutrophils are essential players in the innate immune response to invading bacteria (5), unicellular parasites (6), and fungi (7). In steady-state conditions, neutrophil trafficking is tightly controlled by circadian rhythms, as aberrant leukocyte recruitment contributes to tissue damage or inflammatory diseases (8). In response to infections or injuries, however, neutrophils are rapidly recruited to the site of inflammation to clear pathogens (9).

[Figure legend: (A) Neutrophils infiltrate tissues following circadian rhythms as part of normal repair, microbiota regulation, or immune surveillance in the absence of inflammatory stimuli. (B) Some tissue infiltration is independent of circadian rhythms, as in the intestine, liver, and white adipose tissue; inflammatory signals, however, up-regulate neutrophil extravasation into the tissues. Cervical tissue mediates an exceptionally high and continuous neutrophil infiltration in the absence of inflammatory stimuli (diestrus, 2 to 4 days) to maintain constant neutrophil immune surveillance and control of the commensal microbiota balance. During proestrus (12 to 20 h) and ovulation (12 to 20 h, dark period), estradiol (C) induces neutrophil withdrawal from the vaginal mucosa (D), favoring reproduction at the cost of a potential impact on the onset of STIs. Surprisingly, vaginal challenges do not induce increased neutrophil infiltration during either the high-neutrophil (metestrus) or the low-neutrophil (estrus) stages in the vaginal lumen.]

Neutrophil recruitment to the tissues requires adhesion and transmigration through endothelial cell walls. The initial step is neutrophil rolling, which starts with the tethering of PSGL-1 and CD44 to the endothelial selectins E and P (SELE and SELP) on the endothelial cells (EC). This is followed by the firm interaction of EC adhesion molecules (ICAM1 and VCAM1) with the neutrophil integrins (CD11a and CD49d), which ultimately allows extravasation into tissues [for review, see (10)]. The vaginal lumen typically contains neutrophils to maintain vaginal immune surveillance and STI protection. Importantly, neutrophils, which are very efficient at killing exogenous sperm, disappear from the vaginal lumen during the ovulatory phase, facilitating exogenous spermatozoa survival and transit (11, 12). How the vaginal mucosa deals with pathogens during ovulation to avoid inflammatory reactions that may harm sperm quality and cause infertility (13, 14) is still unknown. Here, we show that neutrophils are constitutively recruited in high numbers to the vaginal lumen, regulated by sex hormones but independent of infection stimulus or insemination, globally favoring reproduction at the cost of increasing the risk of the onset of STIs.
Materials and methods
Animals, vaginal cytology, and in vivo hormonal treatment

The IiSGM Animal Care and Use Committee and Comunidad de Madrid approved all the animal procedures (PROEX-188/18 and 198/19).
Eight-week-old female BALB/c (H-2d) mice were maintained with environmental enrichment, under specific pathogen-free, 12 h light/dark, temperature- and humidity-controlled conditions in the Animal Facility of IiSGM. To determine the ovarian cycle stages, we gently placed 5 mL of PBS at the vaginal entrance, flushed, collected, and observed under a light microscope (15).
To mimic the female ovarian cycle, mice previously treated with hormones were bilaterally ovariectomized under anesthesia (16). After a two-week recovery, females were injected subcutaneously with 0.006 mg of 17β-E2 (Calbiochem, Germany) dissolved in 100 µl of sesame oil (Sigma-Aldrich, USA) for 48 h. Then, mice were treated with 0.2 mg of P4 (Calbiochem, Germany) or 17β-E2 for 12 h. After that, mice were challenged with 2×10⁶ C. albicans blastoconidia in 20 µl of PBS into the vagina (17, 18). Finally, vaginal samples were obtained 12 h later.
Vaginal and peritoneal secretions collection
Peritoneal cells were harvested by flushing 5 ml of PBS into the peritoneal cavity (23). Vaginal secretions were gently collected by flushing the vagina four times with 50 µl of sterile PBS (16, 19). In both cases, the flushed medium was centrifuged, the cells were re-suspended in PBS with 5% FBS, and they were kept at 4°C for analysis.
SELE inhibitor
In vivo SELE blockade was performed by intravenous injection of 200 mg of rat monoclonal anti-mouse SELE antibody (9A9) or isotype control antibody (Bio-X-Cell, USA) in 0.1 ml of sterile PBS 12 h prior to the experiment (24).
Blockade of SELE in endothelial cells was confirmed by confocal microscopy with the anti-mouse CD62-E-REA369 antibody (Miltenyi Biotec).
Statistical analysis
The test used to determine significance between the treatments in each experiment can be found in the figure legends. GraphPad Prism 5 (GraphPad Software, Inc, USA) was used for statistical analysis. A p value <0.05 was considered statistically significant.
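As one concrete instance of the stated significance criterion (the authors ran their tests in GraphPad Prism and the specific test varies by figure), here is a hedged sketch with SciPy, assuming a nonparametric two-group comparison of neutrophil counts; the test choice and all numbers are illustrative:

```python
from scipy import stats

# Illustrative two-group comparison mirroring the p < 0.05 criterion.
# The Mann-Whitney U test and the counts below are assumptions, not the
# study's actual tests or data.
estrus = [120, 95, 140, 110, 80]           # hypothetical neutrophil counts
metestrus = [2100, 1800, 2500, 1950, 2300]

u_stat, p_value = stats.mannwhitneyu(estrus, metestrus, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f}, significant: {p_value < 0.05}")
```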
Results
Vaginal neutrophil infiltration depends on the ovarian cycle phase in steady-state conditions

We studied vaginal lumen immune cells and peritoneal cells in normal (non-infected) mice by 10-color flow cytometry (Figure 1A). We found that neutrophils (CD11B+, LY6G+, F4/80−, CCR2−; Supplementary Figure 1A; Supplementary Table 1) were the predominant immune cells (~95%) in the vaginal lumen, which contrasted with their scarce presence in the bladder and peritoneum (Figure 1B; Supplementary Figure 1B). The number of vaginal lumen neutrophils, however, changed during the menstrual cycle. During the proestrus (12 to 20 h) light period, neutrophil numbers in the vaginal lumen decrease; during the estrus (12 to 20 h) dark period, neutrophils disappear from the vaginal lumen and slowly start to infiltrate the tissue at the end of that period. Later, during the metestrus (12 to 20 h) light period, neutrophils heavily infiltrate the vaginal tissue and lumen (metestrus I ~11-fold, metestrus II ~17-fold). Next, during the diestrus stage (2 to 4 days), the neutrophil number in the vaginal lumen is very high (~35-fold) (Figure 1C; Supplementary Figure 1C). For these analyses, we used a standard gating strategy (25, 26) that allowed us to determine the vaginal lumen resident immune cells (27). Our results confirmed that, in steady-state conditions, neutrophils are constitutively recruited in high numbers to the vaginal lumen, and that this recruitment was drastically influenced by the ovarian cycle phase (28) and independent of circadian rhythm.
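The neutrophil gate above (CD11B+, LY6G+, F4/80−, CCR2−) is, in effect, a boolean combination of marker thresholds; a minimal sketch of that logic on a hypothetical per-cell intensity table follows (thresholds, values, and column layout are illustrative, not the authors' gating settings):

```python
import pandas as pd

# Sketch of the neutrophil gating logic (CD11B+, LY6G+, F4/80-, CCR2-)
# applied to a hypothetical per-cell intensity table. All values are
# illustrative; real gates are set per experiment against controls.
cells = pd.DataFrame({
    "CD11B": [5200, 300, 4800, 6100],
    "LY6G":  [7100, 150, 6800, 90],
    "F4/80": [120, 4000, 80, 5200],
    "CCR2":  [90, 3500, 110, 4100],
})
THRESHOLD = 1000  # assumed positivity cutoff on compensated intensities

neutrophils = cells[
    (cells["CD11B"] > THRESHOLD) & (cells["LY6G"] > THRESHOLD)
    & (cells["F4/80"] < THRESHOLD) & (cells["CCR2"] < THRESHOLD)
]
print(f"{len(neutrophils)} of {len(cells)} events fall in the neutrophil gate")
```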
Vaginal neutrophil influx is independent of insemination
We wondered whether sperm challenge could induce neutrophil influx into the vaginal lumen. Notably, we did not detect significant differences between inseminated and non-inseminated mice: the numbers of neutrophils in the vaginal lumen were similarly low during estrus and similarly high during metestrus (Figure 1D). To validate the inoculum, we challenged mice with sperm in the peritoneal cavity. The same dose of sperm induced a significant increase in neutrophil influx into the peritoneum in estrus (~200-fold) and metestrus (~80-fold) compared with non-challenged mice (Figure 1D). These results revealed that, in contrast to the peritoneum, the vaginal lumen neutrophil content was dependent only on the ovarian cycle phase and was not affected by insemination. Our data indicate that during ovulation neutrophils disappear from the vagina and sperm do not induce a rapid neutrophil influx.
Neutrophil influx in the vagina is independent of local microbial aggressions
To test whether pathogens trigger a rapid wave of neutrophil infiltration into the vaginal lumen, we challenged mice intravaginally with the bacterium N. gonorrhoeae, the protozoan T. vaginalis, or the fungal pathogen C. albicans. Two hours later, the number of neutrophils was similarly low during estrus and similarly high during metestrus in the vaginal lumen of challenged and non-challenged mice (Figures 2A-C). Of note, intraperitoneal administration of the same inocula of N. gonorrhoeae, T. vaginalis, or C. albicans induced significant increases in neutrophil influx into the peritoneum in estrus and metestrus (~40-fold, ~350-fold, and ~200-fold, respectively) compared with non-challenged mice. Furthermore, we also failed to detect vaginal neutrophil infiltration during estrus in mice that were vaginally challenged with a five-fold higher concentration of pathogens (Figures 2A-C). Our data indicated that the presence of neutrophils in the vagina was contingent only on the ovarian cycle phase and independent of the presence of N. gonorrhoeae, T. vaginalis, or C. albicans.
We next wondered whether sexually transmitted viruses such as HSV-2 induce a differential response. Again, we did not detect a difference in neutrophil numbers in the vaginal lumen between HSV-2-challenged and non-challenged mice (Figure 2D), suggesting that pathogen challenge also does not change the ovarian cycle pattern of vaginal leukocyte influx (Supplementary Figures 2A, B).
Estrus prevents neutrophil recruitment into the exocervical tissue
Inhibition of neutrophil influx into the vaginal lumen during estrus could be due to diminished extravasation or impeded transmucosal migration. To discriminate between these two possibilities, we analyzed neutrophil numbers in the vaginal tissue of estrus and metestrus mice by flow cytometry and in cervix sections (19). We detected lower neutrophil numbers in the vaginal tissues (~10-fold), in the cervical epithelium (~20-fold), and in the stroma (~7-fold) in estrus compared with the respective metestrus samples. Notably, cervical neutrophil numbers in estrus and metestrus in mice infected with N. gonorrhoeae or C. albicans were similar to those of non-challenged mice (Figures 3A, B). These data indicated that the presence of neutrophils in the cervical tissue was also dependent on the ovarian cycle phase and independent of the presence of pathogens. Furthermore, the absence of neutrophil accumulation in the stroma or epithelium during estrus pointed to diminished recruitment from the vascular beds rather than impeded transepithelial migration as the potential operative mechanism. Therefore, we evaluated the number of capillary and venular beds, which was similar in estrus and metestrus cervical tissues (Figure 3C). However, the number of neutrophils attached to venular beds in metestrus was higher (~4-fold) than in estrus (Figure 3D). These data suggested that estrus diminished neutrophil recruitment from the vascular beds in the cervix, which was consistent with the lower presence of neutrophils in the vaginal lumen during estrus. Thus, we next focused on the regulation of the neutrophil-endothelial cell (EC) interaction by the menstrual cycle.
Ovarian cycle regulation of neutrophil adhesion molecule expression on endothelial cells in the vaginal tissue
The expression of PSGL-1, CD44, CD11a, and CD49d in neutrophils has strong circadian oscillations, resulting in increased efficacy of the leukocyte-endothelium interaction and transmigration process during nighttime (8, 29). We observed that expression of PSGL-1, CD44, CD11a, and CD49d at the same diurnal time (8, 29) in mice was similar in estrus and metestrus (Supplementary Figure 3A), suggesting that differential recruitment might be associated with changes in the endothelium. We then evaluated the expression of ICAM1, VCAM1, SELP, and SELE in the EC of the cervical venular beds in estrus and metestrus. As assessed by confocal microscopy and flow cytometry, only SELE showed differential expression in the EC of the cervical venular beds (~3-fold higher in metestrus) (Figures 4A-C; Supplementary Figures 3B, C). Blockade of SELE (24) in metestrus decreased the number of neutrophils in the ectocervical venular beds (Figure 4D). Next, we wondered whether the differential SELE expression between estrus and metestrus could be due to neutrophil attachment to the venular beds. After depleting neutrophils (11), however, we still detected higher expression of SELE in venular beds in metestrus than in estrus (Supplementary Figure 3D). Therefore, we concluded that SELE expression in EC could play a key role in neutrophil extravasation during metestrus, independently of neutrophil attachment.
Sex hormone regulation of SELE expression on cervical endothelial cells
Our data globally suggested that SELE expression could be a key regulator of cervical neutrophil extravasation during the ovarian cycle. To mimic the ovarian cycle and assess whether sex hormones regulate SELE expression in the cervix, we treated ovariectomized mice consecutively with E2 and P4 (17). We detected higher SELE expression (~2-fold) and higher neutrophil numbers in the venular beds of E2/P4-treated mice than in E2/E2-treated ones (Figure 4E), suggesting that E2 could down-regulate SELE expression and/or that P4 could promote it. To discriminate between the two possibilities, we treated mice at proestrus with Faslodex (an ESR1 inhibitor) or Prolutex (P4) 12 hours before euthanasia. Both treatments increased SELE expression (~2-fold) in the venular ECs during estrus compared with proestrus vehicle-treated mice (Figure 4F), suggesting that SELE expression on the venular ECs of the cervix is down-regulated by E2 through ESR1 during estrus and up-regulated by P4 during metestrus.
Discussion
In the present study, we show that neutrophil influx to the vagina of mice was exclusively contingent on the ovarian cycle stage, regardless of the presence of infectious agents or sperm. In normal conditions, cervical tissue mediated an exceptionally high and continuous neutrophil infiltrate. During ovulation, however, E2 reduced neutrophil extravasation by downregulating the expression of SELE on the venular ECs of the cervical tissue, favoring reproduction over the immune defense. After ovulation, P4 up-regulated the expression of SELE and restored immune surveillance. All these ovarian cycle-dependent changes were independent of and not influenced by the presence of various microbial invaders or sperm.
Neutrophils increase in the blood during the day due to a higher release from the bone marrow. At night, circulating neutrophils decrease because they infiltrate tissues (spleen, lung, skin, skeletal muscle, lymph nodes, kidneys, heart, and others) as part of routine repair, microbiota regulation, or immune surveillance in the absence of inflammatory stimuli (29). Thus, most tissues barely express SELP, ICAM-1, or VCAM-1 in the ECs of their venular beds, but their expression peaks in the evening to slightly increase tissue homing of old neutrophils for surveillance, repair, and death (8, 30). Likewise, ECs do not express SELE in normal conditions, but inflammatory signals up-regulate its expression to promote neutrophil extravasation (31-34) and guide neutrophils through the ECs (35). The cervical mucosa is a unique site critical for the conciliation of reproductive success and vaginal immunology. Importantly, it must control the commensal microbiota and protect against sexually transmitted and opportunistic pathogens that induce infertility (36). In contrast to other tissues, we observed high constitutive expression of SELE, SELP, ICAM1, and VCAM1 in ECs of the cervical venular beds, resulting in exceptionally high and continuous neutrophil infiltration in basal conditions. Therefore, neutrophil infiltration of the vaginal lumen and cervical tissues was independent of circadian rhythms, as is that of the intestine, liver, and white adipose tissue (37). Nevertheless, neutrophils are recruited in large numbers into the vaginal lumen to maintain constant immune surveillance against pathogens and to control the commensal microbiota balance. The constitutively high neutrophil content in the vagina could harm sperm quality (11, 12), but the cervical mucosa must allow sperm to swim up during the ovulatory phase of the ovarian cycle. It is well known that neutrophils do not firmly arrest on ECs that do not express SELE (38). Here, we suggest that, during ovulation, E2 prevents neutrophil entry into the vaginal lumen by down-regulating SELE transcription in venular ECs (39) through the E2 receptor (ESR1), thereby protecting sperm from neutrophil attack (11). Then, during the early luteal phase, P4 peaks and up-regulates SELE expression, probably by inhibiting the ESR1 effect, and quickly refreshes neutrophil influx to restore immunity.
Evaluation of our findings in human tissues is limited due to the lack of an appropriate model to assess infections in a standardized way, as well as the difficulty of obtaining samples at early stages before infections are established. Despite the differences between humans and mice, our results challenge the current universal model of a quick and high neutrophil influx to tissues under infection conditions, because the neutrophil influx to the vaginal lumen was exclusively dependent on the ovarian cycle phase and high in non-infected mice, except during ovulation. These changes in neutrophil infiltration were not influenced by circadian rhythms or exposure to sperm or infectious agents. During ovulation, neutrophils withdrew from the cervical mucosa to avoid jeopardizing reproductive function. Therefore, basic mucosal protective innate strategies against vaginal STIs (40) have evolved to coexist with sperm and to create the estrogen-dependent secretory milieu (41) in the vagina. Clinically, hormonal deregulation may lead to infertility due to the attack of sperm by neutrophils and may compromise vaginal immunity, making it more vulnerable to STIs.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding authors.
Ethics statement
The animal study was reviewed and approved by The IiSGM Animal Care and Use Committee and Comunidad de Madrid.
Funding
This work was partially supported by the Ministry of Economy and Competitiveness ISCIII-FIS grants (PI19/00078 and PI19/00132), co-funded by European Union Funds from the European Commission, "A way of making Europe", and by IiSGM intramural grant II-PI-MRC-2017. ML holds an IiSGM intramural contract.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Supplementary material
Religious Innovation: The Meaning of Rituals in Shan the Rising Light
Introduction
The new religions, each in their own way, present a belief system unknown to the surroundings, thereby dissociating themselves from the familiar, while developing their own identity. Religious innovations, however, can also be studied via the development or change that takes place in the rituals. Now and then a new ritual orientation or ritual structure is a very significant expression of the new religious identity that is gradually developing, and at the same time informative with regards to the underlying belief system. In this paper it is my intention to present an example of how rituals may play an important role in the birth of a new religion, and how this religious innovation can be interpreted through the rituals. This example concerns a religious group - Shan the Rising Light - that has managed to introduce a comprehensive body of rituals into a belief system otherwise characterized by its general lack of rituals and ceremonies (namely the theosophy of Madame Blavatsky), thereby setting the standards for a virtually new religion (another example of the new religions, focusing on the rituals and their function during the formation of the religion, is given in Rothstein 1991).
Theoretical and Methodological Notes
The study of new religious groups in the Western world has primarily been sociological. Of course this approach is absolutely necessary, but any sociological (and that very often means comparative) analysis needs monographical descriptions of the groups in order to do them justice. This contribution concerns one aspect of such a monographical outline and stresses historical and phenomenological themes rather than sociology.
This exposition is based on minor fieldwork conducted during 1990 and 1991 in Shan the Rising Light and related groups: interviews, less formal conversations, participant observation, and literature of various kinds all form part of the source material. It is important to observe, though, that my research to this point is rather limited. I have not had the opportunity to study the group more intensively and therefore I have restricted my interest to the rituals and their use. As the rapid process of change and construction in Shan the Rising Light seems to go on and on, one can understand that the results presented here are nothing but the status quo in the autumn of 1991.
However relevant a monographical study may be, another aspect of the analysis may prove no less interesting: it is my suggestion that the study of contemporary developments in minor religions and religious sects may cast some light on the formation of previous religious innovations. In principle I find it possible to use the vast modern material as a comparison in our efforts to understand similar phenomena in ancient sects or new religions, of which only a little is known. This, of course, is not the intention here, but one should not ignore the possibility of such an undertaking.
Shan the Rising Light -Background and Belief
Shan the Rising Light was founded by the Danish woman Jeanne Morashti (b. Jeanne Ruben in 1946) in 1987. In the esoteric context her name is Ananda Tara Shan, "The Blissful Mother of the Earth". In fact the group has existed since 1982, but with other names and other spiritual functions according to the divine instructions that are followed. A less formalized group around Jeanne Morashti is likely to have existed even earlier, especially because her spiritual work, according to her present followers, started as early as 1977. Thus the name Shan the Rising Light indicates the current phase in the life of the group and indeed in the spiritual career of its charismatic leader Jeanne Morashti. The names The Society for Maitreya Theosophy, Rosenhaven (The Rose Garden), The Church of the Sacred Heart of Maitreya and Right Human Relations are those (of which some may cover special sections of the organization) used during the last ten years. It is predicted that a new name will be introduced in 1992 (when new, epoch-making revelations in the shape of books are expected to be received from the divine beings), and another during the final spiritual revolts expected to happen in 1997. This development, with its significant millennial perspectives, can easily be traced through the bulky writings of Ananda Tara Shan's closest associate and disciple Asger Lorentsen, and in the magazine "Shan" until recently published by the group.
Shan the Rising Light in Denmark has around 60 inner members, while another 60 are more loosely connected. Many people from the broader non-institutionalized New Age milieu may visit the group, but such persons can hardly be considered actual adepts. Together with Ananda Tara Shan, seven individuals lead the group (Beckford 1985 provides a good model for analyzing such internal differentiation, although the Shan movement is not mentioned). The group runs a "spiritual community", a semi-monastic commune called Ananda Ashram in Gundsølille near Roskilde, but Ananda Tara Shan herself has lived in Australia since 1982. At that time she established what was then called The Church of the Sacred Heart of Maitreya in Melbourne. This Australian group today has around 200 inner members, according to my Danish informants. Contact between Denmark and Australia is constantly maintained, especially by telex, which provides an excellent possibility for Ananda Tara Shan to communicate her message to the believers and various practical orders to her second-in-command Asger Lorentsen. As a matter of fact Ananda Tara Shan seems very present in Gundsølille, even though she lives on the other side of the globe. The use of audio- and video-tapes has certainly found its place in the religious communities of the modern world. Normally Ananda Tara Shan only visits her Danish followers once a year during the recurring "Summer Ashrams".
Until 1981, when Jeanne Morashti was excommunicated, she was an active member of the Theosophical Society in Denmark, although she was never elected to the actual leadership. Since the early 1970's she had become known as "The Clairvoyant One" in the occult milieus of Copenhagen and was "gradually maturing to her present position as one of the leading figures of the New Age", as one of her disciples explained to me.
No doubt Jeanne Morashti's strong feeling of spiritual competence, and a number of concrete spiritual experiences, led her to challenge the leadership of the Theosophical Society in Denmark. According to her teachings, the Theosophical Society has deserted the obligations imposed on it by the Mahatmas of the Great White Lodge (the brotherhood of enlightened souls, guiding humanity towards perfection from secret sites around the world [Barborka 1973]), and consequently her endeavours must be understood as a project of reconstruction and fulfilment. The intention is to "lift up Theosophy so that it may contain the vibration from the Hierarchy's Inner Ring" (the Hierarchy is synonymous with the Brotherhood of the Mahatmas) (AA 1), and to "bring about new developments within the theosophy, so that it may truly serve humanity". Her Jewish background does not show in her theology. As a matter of fact Judaism (and Islam) are considered rather primitive or even "dark" religions with little or no value to spiritually developed persons. Individuals in the Jewish milieu of Copenhagen remember Jeanne Morashti as an odd person: "only eating vegetables, and practising all sorts of exotic humbug. I felt so sorry for her mother" (AA 2). To her followers, however, Jeanne Morashti is unique. The soul now incarnating as her originally came from Venus 16 million years ago, where it lived in close association with (other) celestial and divine beings.
This perspective of split and internal antagonisms is by no means unusual in the history of the Theosophical Society. Since it was established by Helena Petrovna Blavatsky and Henry Steel Olcott in 1875, the Society has faced numerous discords, and a multitude of theosophically inspired factions still exist. The more prominent include such occult personalities as Alice A. Bailey, William Q. Judge, Francia LaDue, Guy Ballard, Robert Crosbie and more recently Elizabeth Clare Prophet, among others (Melton 1986: 45-52, 87-92; Melton, Clark and Kelly 1991: 1-37). Some of these have been restored to favour by Ananda Tara Shan, whereas others have to face severe accusations. Thus Alice A. Bailey and her writings (believed to be transmissions from the Mahatma Djwhul Khul) are considered authentic and most valuable ("the highest wisdom given on this planet until Ananda Tara Shan came", according to one informant), while Elizabeth Clare Prophet and her organization Summit Lighthouse (operating in close association with another organization set up by Elizabeth Clare Prophet, The Church Universal and Triumphant) is more or less outright rejected. This dissociation from the latter seems very understandable, as Shan the Rising Light and Summit Lighthouse resemble each other in so many ways that a mutual rejection is needed as an element in the ongoing missionary work of the two organizations. In this connection it should be observed that Shan the Rising Light officially claims that no competition or hostility exists between the two organizations (Lorentsen 1990: 51). In Denmark, where the first theosophical lodge was established in 1908, these internal controversies culminated a few years ago when the Theosophical Union (Teosofisk Forening) split from the traditional Theosophical Society (Teosofisk Samfund). To the rather few members of the strongly reduced Theosophical Society (about 30 individuals) this rupture is fatal. They believe themselves to support the true theosophical organization, but they recognize that their influence is now very weak (even if the international leadership of the Theosophical Society in Adyar, India - so they say - supports the group). Ananda Tara Shan, although not representing the new Theosophical Union (counting about 300 individuals), is partly to blame, they say. To my informants it seems likely that the discords only became possible because Ananda Tara Shan started to undermine the organization more than ten years ago. I cannot judge in this matter, but it seems likely that the upheavals caused by her made a general insurrection possible. After all, her excommunication is still remembered as the "necessary purging of a person associated with the dark forces of Evil" (AA 3). It also became known among the theosophists that she had practised black magic at Atlantis, and that her task was to destroy humanity. She proudly confirms that she previously incarnated on Atlantis (not to forget Lemuria), but rejects their harsh attack. No less profoundly, Ananda Tara Shan (at that time still Jeanne Morashti) exclaimed in an "open letter": This letter I especially address to you that hate, condemn, slander, manipulate, envy, "throw stones", pull wires, are jealous, call yourself spiritual without being it, are hypocrites ..., lie ... i.e. all you that knowingly or not work for the Lord of Darkness - the negative powers. I thank you for having led me to this, my Great Day, the 8th of May 1982, when I - because of the adversity and hatred you have shown me - have had the strength and courage back, which was mine in former incarnations on Earth. With this strength, that courage and that power God today has bestowed on me, I am able to say the words: I swear that I will defeat all negative on Earth with the power you have given me today, Oh God. (AA 4:1)

One thing in particular gave rise to the conflict. In 1980 in New York, during a meal with her husband, Jeanne Morashti received a revelation from the Mahatma Morya who told her that she was the "direct incarnation" of the founder of the Theosophical Society, Madame Blavatsky. At the same time Jeanne Morashti's husband realized through revelation that he himself was a corresponding incarnation of Madame Blavatsky's close companion and co-founder of the Society, Henry Steel Olcott. They experienced what is termed a "soul recall", and Jeanne Morashti now remembered everything from her previous incarnations. The personality of Madame Blavatsky started to develop and grow within her, and she had to go through what is described as an immense spiritual battle in order to prepare herself for her final mission: "The ignorant can destroy the body, but the soul will always return to complete the task imposed on it by God" (AA 4:4). In trying to legitimize her alternative ideas through Madame Blavatsky (understanding herself to be her direct incarnation), Jeanne Morashti violated one of the major theosophical doctrines, that of Madame Blavatsky's spiritual sovereignty, and could not expect anything but excommunication. The fact that Jeanne Morashti gradually came to physically resemble Madame Blavatsky was taken as a proof of the postulated "direct incarnation" by her supporters, while one member of the remaining Theosophical Society interpreted the change as an "element in her vulgar plans".
To her disciples, Ananda Tara Shan is considered an avatar, and her mere existence is interpreted as an ongoing epiphany. She is a prominent member of a group of twenty avatars especially chosen by the Mahatmas to prepare the basis for the "new World Religion and the new spiritually clean society" (AA 5). Through her the Mahatmas speak, and through her the healing light and energies and the enlightening gnosis flow into Earth (Shan being the esoteric or occult name of the planet, according to Ananda Tara Shan). In short, one can say that Ananda Tara Shan is the physical precondition for the divine powers' work on planet Earth. In turn, so it is believed, her disciples will gain the same competence, and the rituals, introduced by Ananda Tara Shan herself, are the means in every respect.
Put bluntly, the situation is this: Ananda Tara Shan appears as the direct incarnation of Madame Blavatsky, and her task in this life is to complete the process inaugurated by Madame Blavatsky (i.e. herself) in 1875. As the Theosophical Society has disregarded its divinely sanctioned mission, a new organization has been set up (namely Shan the Rising Light), and by introducing a comprehensive set of rituals, unknown to traditional theosophy, world-wide success is expected.
In the following this process of innovation through the introduction of new rituals will be analyzed, but for the purpose of comparison, I shall first give a short account of the traditional theosophical idea of rituals and ritualism -or rather the theosophical lack of rituals.
The Theosophical Society and Rituals
The theosophical belief system is syncretistic and eclectic indeed. Based on a conglomerate of traditions including Western occultism, spiritualism, parapsychology, Hinduism, Buddhism, Christianity and even elements from the sciences, it is one of the first modern alternatives to the traditional religious belief systems (Ahlbäck 1990: 49-60; Ahlbäck 1989: 36-44). The basic assumption is that there is "no religion higher than Truth", and it is believed that this "Truth" is the core of every religion. Access to the "Core of Truth" is given through the esoteric schools of the religions, and this is what theosophy is all about.
One of the theses in Madame Blavatsky's comprehensive Collected Writings is that every ritual and ceremony, regardless of its complexity, symbols and structure, can be traced to a common origin: "the actual occult rituals and ceremonies" (Blavatsky 1950-1969). Some of these archetypical rituals, she claims, were meant to manage the positive and benevolent "White Magic", while others served the negative forces of "Black Magic". In this way she recognizes the efficacy of the rituals, and relates the ritual practices to the powers ruling the Universe and our lives. As Madame Blavatsky claims the historical sources to be more or less useless, her precise knowledge of the ancient rituals seems to be derived from her spiritual contact with the entities of the "Great White Lodge".
It is obvious, then, that a clear awareness of rituals and ceremonies in principle forms a part of the theosophical belief system. Nevertheless it is a fact that almost no rituals are seen within the frames of the Theosophical Society. On the contrary, the religious awareness constantly concentrates on internal transformations, the development of higher levels of consciousness and the non- or meta-physical through speculations, intellectual studies, education and instructions. The theosophical writings rarely comment on rituals, but when this does happen, the perspective is usually (if not always) negative. The classical Danish theosophical dictionary does not include any entry on ritual, ceremony, etc. (Kapel 1925). Answering the rhetorical question why people in the West have until now (i.e. 1889) been unaware of the perennial theosophical teachings, Madame Blavatsky says that "their long-lasting slavery under dogmatic belief in words and ritualism" is an important reason (Blavatsky s. a.: 6). The only common practice is meditation, and even this is not strongly emphasized. At this point we may ask where the rituals are in relation to the Theosophical Society. Within the frames of the Society as an institution we cannot see them.
A leading person in the Theosophical Society in Denmark gave us the answer: The Theosophical Society does not make use of any rituals or ceremonies whatsoever, but supports two great sections that use rituals and ceremonies indeed. (AA 6) The two organizations he refers to are The Liberal Catholic Church and the Co-Masonic Order [Co-Frimurer-Ordenen], both of which are independent organizations, although fundamentally inspired by theosophy and theosophists.
The Liberal Catholic Church and the Co-Masonic Order

Both organizations were founded in the beginning of the 20th century on the initiative of prominent theosophists. The theosophists had originally sought collaboration with the traditional freemason milieu and the Christian churches, but with no success. On the contrary, they met rejection or even hostility everywhere. The masonic orders refused to accept women, a fact the theosophists (with their female founder) could not accept, and the traditional Christian congregations considered the theosophists' esoteric understanding of Christ (as one of the Mahatmas) and the Gospel absurd or heretical. According to my informants, however, Madame Blavatsky declared that the rituals of the masonic orders were in the service of the "Good Forces", and the activities of the freemasons were therefore designated "White Magic". What was needed was an even better management of the rituals, and therefore the Co-Masonic Order was set up, but with nothing formal or institutional in common with the Theosophical Society: here the rituals could be conducted by qualified theosophists for the benefit of all. In addition (in Denmark somewhat later) the theosophists established a special freemason order for children, the Order of the Round Table, an institution equally important to the analysis of Shan the Rising Light.
In sum, we may conclude that the Masonic orders refused to co-operate with the theosophists and therefore the theosophists took over the freemasons' rituals. As an element in the soteriological endeavours of the theosophists this was simply necessary. Unfortunately only very little information has been obtainable regarding the rituals of the Co-Masonic Order. This, of course, is due to its esoteric and secret nature. Later, in the comparison with Shan the Rising Light, the little I know will be mentioned.
The same idea regarding the proper conduct of the ritual practice underlies the Liberal Catholic Church and its bulk of rituals, founded by James Ingall Wedgwood and C. W. Leadbeater, one of the leading theosophists in the generation after Blavatsky and Olcott. Leadbeater presided as Bishop of the Liberal Catholic Church for many years, and his influence on the occult interpretations and practices is still evident. The belief system of the Liberal Catholic Church is - like theosophy itself - syncretistic indeed, with ancient Catholic and Buddhist ideas being the most prominent. The cult, however, carefully follows a Catholic ideal of form and structure, although the interpretation of the rites is very different (Frick 1978: 314).
A hitherto much too narrow understanding of the conventional Churches' rituals led to the foundation of the Liberal Catholic Church, theosophists say - a statement echoed in the preface to one of the most striking sources on the Liberal Catholic liturgy, "The Science of the Sacraments" by Leadbeater (Leadbeater 1957: xiii-xiv). Contrary to the Co-Masonic rituals, the rites of the Liberal Catholic Church are described in minute detail. Again the idea of actually functioning rituals is to the fore. Under the heading "A New Idea of Church Worship" Leadbeater writes: The sacrament of the Eucharist benefits not only the individual, as do the other Sacraments, but the entire congregation; it is of use not once only, like Baptism or Confirmation, but is intended for the helping of every churchman all his life long: and in addition to that, it affects the whole neighborhood surrounding the church in which it is celebrated. (Leadbeater 1957: 1) And later in the text: The temple or church is meant to be not only a place of worship, but also a center of magnetic radiation through which spiritual force can be poured out upon a whole district. (Leadbeater 1957: 3) But the rituals are not considered absolutely necessary to everybody: The rituals and ceremonies [of the Liberal Catholic Church and the Co-Masonic Order] are put at our disposal because some people need it to develop spiritually. Very often such people are not in the habit of studying, and their ability to understand esoteric instructions is bad. Through the rituals they get their chance. I myself also used to participate in some ceremonies from time to time, but it was never my main activity. But I am still a member of the Church. [The Liberal Catholic Church, i.e.] (AA 7) The last information expresses the exception, not what is common. In Denmark only very few organized theosophists (maybe as few as four or five) are initiated members of the Liberal Catholic Church or the Co-Masonic Order. This is quite contrary to the situation when these organizations were established. At that time the founders placed persons from the elite of the Theosophical Society (members of the so-called Esoteric Training School (E.S.) within the Theosophical Society) in the new organizations to secure a genuine theosophical influence. Today the 100 Danish members of the Liberal Catholic Church and the (at most) 150 members of the Co-Masonic Order are - with very few exceptions - not members of the Theosophical Society.
If we keep to the Danish context, and so we must in order to analyze Shan the Rising Light, it seems obvious that the total theosophical milieu is split into a number of inharmonious groups. There are severe theological and personal antagonisms between the Theosophical Society and the Theosophical Union, and neither of the two has formalized collaboration with the strongly ritualised groups described above, the way they used to in the first third of this century. The only organization in Denmark that seems to cover every aspect is the relatively new-born Shan the Rising Light. In the following I shall describe how the various aspects form a whole in this organization and in the teachings of Ananda Tara Shan, thus focusing on some of the premises for the development of this new religion.
Comparisons with Shan the Rising Light
The rituals of Shan the Rising Light may be described in three categories: 1) The transmissions of messages from the Mahatmas through Ananda Tara Shan, 2) The collective healing rites designated "Cosmic Peace Services" or "Healing Services" and 3) Individual meditation and contemplation of various kinds.The first category obviously requires the presence of Ananda Tara Shan herself.Consequently these very important rituals are only performed in Denmark occasionally, as Ananda Tara Shan lives in Australia.The rituals classified under category 2. are in Denmark led by the more prominent members of the group.The last category may imply group performance, but very often the meditations and therapeutic techniques are conducted in private.This elaboration is sufficient for our purpose, although a more detailed systematism could be developed.
Only the third category is identifiable within the context of the Theosophical Society. It is true that Madame Blavatsky (and a few others) during the early days of the Theosophical Society received revelations from the Mahatmas, but the structure of this communication was different from that of Ananda Tara Shan and her Masters. Usually the Mahatmas would by supernatural means communicate their message to Madame Blavatsky in writing (Ahlbäck 1990: 57). Ananda Tara Shan is rather a medium who speaks out the divine messages. She is believed to form a unity with the Mahatma that "transmits", and it is explained that she herself formulates what is told her "in a spiritual language" during the "Hierarchical Transmissions". Very often Ananda Tara Shan's followers are present when the Mahatmas speak. They can see and hear the actual communication, and to many of the believers this experience is a proof of the truth of Ananda Tara Shan's teachings. When the Mahatmas "transmit", Ananda Tara Shan, wearing white or coloured robes, experiences some kind of altered state of consciousness, and to the sound of specifically chosen music, she will lift up her "Magic Wand" with its crystal knob and exclaim the message. In other words, the "transmissions" only occur during carefully planned ritual sessions, quite contrary to the precipitated correspondences in the days of Madame Blavatsky (Ahlbäck 1990: 57). Very often the theme of an approaching "transmission" is known in due time: During Ananda Tara Shan's next visit to Denmark the Hierarchical Transmissions will [last for] two evenings (...). During these evenings the Lord Maitreya and other cosmic entities, together with the Hierarchy, will address the Danish people and carry on the preparation of the Danish national mind, in order to make it ready for the task it is meant to carry out during the next centuries. (AErkeenglen Michael 1988-89: 17) All "transmissions" are taped, and when a session is completed, the tape is transcribed. Consequently Shan the Rising Light produces a lot of religious texts on the basis of Ananda Tara Shan's "transmissions". These texts form the core of the sacred writings, and together with Ananda Tara Shan's personal commentaries and those of Asger Lorentsen, they represent the actual canon of the group. This material can of course be compared to the "Mahatma Letters" of the Theosophical Society and the inspired writings of Alice A. Bailey, and it is true that their content and their structure resemble the older material in many ways (see as an example Humphreys and Benjamin 1979; Barborka 1973). It is interesting, then, that the followers of Ananda Tara Shan can participate in the "transmissions", contrary to the members of the Theosophical Society in Madame Blavatsky's days. In Shan the Rising Light the congregation is meant to support Ananda Tara Shan while she prepares herself for the communication. The group will form "energy circles", conduct "spiritual purification" of the location, meditate and pray. In this way the leader and the group work together, and every participant knows that the divine interference is partly due to his or her efforts.
As far as I can see, the rituals introduced in close connection with the theosophical belief system are derived from the traditions of the Co-Masonic Orders and the Order of the Round Table. Two informants who are initiated members of the Co-Masonic Order in Copenhagen have confirmed that the ritual garments, the "Magic Wand", a sword that is used during the "transmissions", and the geometrical formations formed by the congregation during these rituals in Shan the Rising Light are copied from the rituals of the Co-Masons. My only informant with contact to the Order of the Round Table likewise certifies that the round piece of marble and various round tables used by Ananda Tara Shan are taken from their rituals. Referring to the beliefs of the Shan movement, I told a former leader of the Order of the Round Table in Copenhagen about the function of the marble piece: It is believed to possess a "magnetic force of energy" that is carried from ritual to ritual, constantly accumulating "energy" derived from the rituals (Lorentsen 1983: 239). His reaction was the following: "Yes. It ought to. But it does not work in the hands of Morashti". He also declared that the interpretation of the round marble piece as a symbol of the earth, which is the case in Shan the Rising Light, is quite unknown to his Order. To my knowledge Ananda Tara Shan was never a member of the Co-Masonic Order, but she was indeed involved with the Order of the Round Table. As a matter of fact she was expelled from the Order by its leader. At that time the accusation was that she tried to inflict the children of the Order with her "dark powers by reversing the rituals". Today the same person declared: People in the Shan movement play with low-astral phenomena, something Blavatsky always warned against. It can be fatal. You become unresisting and unable to act and you may risk obsessions. It is straightforward psychism. She [Ananda Tara Shan] is harmless to those who know. Only weak persons are in danger. Some of the worst experiences in my life were the psychic battles I fought against her. The tension can be witnessed by many people who were there. (AA 8) Asger Lorentsen of the Shan movement in Denmark explains such accusations as misunderstandings: Two prominent theosophists have seen Ananda's "Plutonic Aura", and they have interpreted it as an expression of black magic. It is not. It is an expression of God's Will, a force that gives her tremendous ability to act and perform.
(AA 9) The members of the Liberal Catholic Church are less worried. "The rituals of the Shan movement do not work at all," they say. The problem, according to my informants, is that Jeanne Morashti (always called by her original name by her opponents) has not received the proper initiation, and therefore does not have the authority to conduct the rituals. The Liberal Catholics see themselves as representatives of the "Old Catholic Union of Utrecht", and they claim to pass on the only true succession, which has never been offered to Jeanne Morashti. Nevertheless she was indeed an initiated (baptized) member of the Liberal Catholic Church in Copenhagen for some years, and I see no reason to doubt that she has used her knowledge from there in her own religious organization.
In their critique of the rituals of Shan the Rising Light, the Liberal Catholics mainly point to the rituals classified under category 2 above. These rites of healing and "energy channelling" truly resemble the rites of the Liberal Catholic Church; not all of them, but those concerned with precisely these functions.
Shan the Rising Light is very concerned with "Earth-healing". Through the "injection of divine light that purifies and renews creation" they seek to prepare the Earth for the coming of Maitreya, the redeemer of this age (the Age of Aquarius): The Cosmic Earth-Healing Service is the divine service, where the associates of the Hierarchy consciously put themselves at the disposal of a higher rhythm. In this way the energies from the World of God are canalized to the surrounding astral and mental atmosphere so that the vibrations are lifted. In this way the stronger energies of the New Age can meet a better response. (Lorentsen 1983: 238) During the Liberal Catholic celebration of the Mass, a "wave of peace and strength goes into the Earth", very similar to what is described above. This function Leadbeater calls "the primary object of the service" (Leadbeater 1957: 3), an interpretation identical with the Shan understanding of their service. It is also possible to identify the Liberal Catholic self-esteem in Shan the Rising Light. Leadbeater writes: "If we be truly religious [we] must be unselfish; we must be working together with Him, our Lord" (Leadbeater 1957: 10). In an internal note from Shan the Rising Light it says: The servants of the World constitute a divine, non-selfish relief squad. It is our task to calm the roaring waters through our services, for the benefit of all and on behalf of our Lord. (AA 10) In both cases it is emphasized that Man is obliged to provide the facilities necessary for the transmission of divine energy. These facilities are the rituals. It is further stated that the congregations should know the liturgy and the rituals very well. For that reason the Liberal Catholics present their rituals in minute detail in books and lectures. Shan the Rising Light distributes ritual manuals to the participants so that the ritual structure and the wording will become known. An example is given in the following:
Cosmic Service for the Earth
All the participants (sitting on chairs) form a circle. Music according to the purpose of the service. Invocation of the Archangels (the leader): We call upon the Archangels: Michael, Jophiel, Chamuel, Gabriel, Raphael, Uriel and Zadkiel. And the female Archangels: Faith, Constance, Charity, Hope, Maria, Donna Grace and Amethyst. The Leader continues: Ruler of the Universe. We ask you to abolish the negative aspects today, and strengthen the good aspects so that we may co-operate for the good of the Light.
Invocations
The Leader: Shining circle, thou art the beginning and the end. Flaming sword, thou art the pioneer and the path. Protect us and help us in this service. The ceremony of the circle and the sword is performed [by the leader]. (AA 11) This section is followed by further invocations stressing the role of the adepts and the quest for unity with the divine. The invocations are always made by the leader, but from this point the congregation joins in by adding "Let us work for the Light", "Let us work for the Earth and Humanity" and "Let us work for all that lives". At this point all join hands and "seek God within themselves". The leader then says: The heart of the universe pulsates and glows. We are that heart.
In the following, the participants are asked "to visualize the power and energy that is sent into the Earth and everything that lives on Earth", and to follow the energy as it returns to the cosmos. Finally the leader exclaims: "The Earth is shining", which everybody repeats. Then the circle is opened and the ritual (which all in all has lasted for about 30 minutes) ends (AA 12).
It is believed that this ritual (and similar ones) are the preconditions for any further spiritual development.
Of course this kind of ritual guide is less elaborate than those of the Liberal Catholics as produced by Leadbeater. Shan the Rising Light, however, refers to the writings of Leadbeater concerning the interpretation of rituals and ceremonies (Lorentsen 1989: 334).
Another striking feature that links Shan the Rising Light and the Liberal Catholic Church is the idea of "Thought-Forms": spiritual constructions that are gradually built up during the rituals. This kind of "spiritual architecture" can only be seen by especially initiated persons and individuals with psychic abilities. The structures, depicted in Leadbeater's book, are magnificent "buildings" with spires, domes, geometrical patterns etc. developing in the same place where the ritual is in progress. The physical church building is understood to be a symbol of the "actual church" appearing as the "Thought-Form".
In Shan the Rising Light the Ananda Ashram of Gundsolille functions in much the same way as the cult-room of the Liberal Catholics. On 7 July 1990, the Mahatma Count of Saint Germain initiated the place by constructing an ideal "Thought-Form" (although this term, to my knowledge, is never used in Shan the Rising Light): I place an angel of Violet fire here. I have already placed one over the main building, but I shall place one over the whole area (...) A new castle of the Grail has been formed around his Church. It is up to you to make it physical. It is up to you to enliven the Grail once more. (AA 13) It is a well-known phenomenon in Shan the Rising Light that people may envision these spiritual buildings: When a group conducts this service of peace, you can see a lot of light-beings and beautiful thought-forms arise between the participants. It is a mighty ocean of light that is being established. It may look like an aura of flames and light, not only covering the house in which the service is celebrated, but also pouring out over the landscape. (Healing 1984: 7) The church of the Liberal Catholics is considered a "retreat from the confusing world", and exactly the same formulation is used to describe Ananda Ashram.
Further, it is possible to link Ananda Tara Shan's use of music during the rituals to that of the Liberal Catholic Church. Leadbeater urged his followers to develop a special Liberal Catholic musical liturgy (Leadbeater 1957: 23), and in Shan the Rising Light such a thing has existed for years. The arguments given by Leadbeater seem to be fulfilled by Ananda Tara Shan, even if she does not refer to him personally (Musik 1983: 10-14). It is also possible that the important "Magic Wand" of Ananda Tara Shan, or rather the way it works, is partly inspired by the crosier of the Bishop in the Liberal Catholic Church. I have been told that the Liberal Catholics can feel the divine energies by touching their spiritual leader's (the Bishop's) crosier, and a similar account was given in a talk-show on the "spiritual local radio" of Copenhagen: Whenever you approach a person with spiritual power, you can feel it! When I approach Ananda, I can virtually see the magnetic radiation of her stick. The crystal seems to glow. It is very beautiful. (AA 14)
Conclusion
Jeanne Morashti was excommunicated from the organization she felt obliged to save, and started her own religious group. During her religious career, she had encountered numerous groups within the broader limits of the theosophical milieu, and when establishing her own group, she formed a synthesis of the various elements. The rituals of the Co-Masonic Orders and the Order of the Round Table, along with the comprehensive ceremonial of the Liberal Catholic Church, were related to the otherwise non-ritualistic theology of Madame Blavatsky. In the words of Ananda Tara Shan, her contribution (the setting up of Shan the Rising Light) "was the final precondition to the Age of Aquarius, when Maitreya is to arrive" (AA 15). As she is believed to be a direct incarnation of Madame Blavatsky, it is only natural that she is trusted to finish her assignment in this incarnation. What interests us, however, is that this fulfilment is explicitly carried out by introducing the rituals: The philosophy has been known for a long time. Now we eventually have the means to realize it. It is the destiny of us, the Healers of the Earth, to enliven the will of the Masters, to let the Light of The Hierarchy shine on every man and woman on Earth. It is our destiny to create this Great Focus of Light and Love. It is for us to see the Castle of the Grail grow. Everybody that participates in our circles shall be blessed in all future incarnations. (AA 16) It is through the rituals that the preconditions for these goals are created, and it is through the rituals that the contact with the divine beings is maintained. It is also through the rituals that the new elements in the belief system are given, and it is through the rituals that Ananda Tara Shan confirms her religious authority. Further, the rituals provide the believers with the experience of contact with a higher reality. This is why I consider the introduction of rituals into a traditionally non-ritualistic belief system an important element in the establishment of a (this) new religion. The introduction of rituals affects the beliefs and the sociological conditions alike. Ananda Tara Shan has managed to form a synthesis of institutions and structures otherwise separated.
While a sectarian split from the traditional theosophical body is nothing unique, the case of Shan the Rising Light seems to present something new in the history of theosophical offshoots. As pointed out by Melton, the various reformers of the theosophical ideas have each claimed their competence through their own selected Mahatma: Bailey and Mahatma Djwul Khul, Ballard and Mahatma St. Germain, LaDue and Mahatma Hilarion, etc. (Melton 1986: 90). Ananda Tara Shan communicates with all of them, including Jesus, although the Archangel Michael seems to be especially fond of her. Her ability to "canalize energies" from all members of the White Brotherhood was explained to me as the result of her "formidable ability to open the gates so that the energies may flow". This ability, one informant told me (just after the presentation of a video-tape showing Ananda Tara Shan transmitting a message from the Archangel Michael), has been developed "thanks to the rituals rediscovered by Ananda Tara Shan herself". After a while my informant corrected herself: "It may be that she actually did not discover them herself. Maybe they were given her by the Masters," she said.
For the purpose of our analysis this changes nothing. The fact that rituals are introduced in a certain way to obtain certain results is what interests us, because it seems that this is something new to the theosophical sects. The old process of syncretism and eclecticism has, in the case of Shan the Rising Light, managed to include the rituals too.
I think this conclusion is supported by Bruce F. Campbell when he states the following: Ritual is a central element in the religious life. The symbolism and activity of group worship are powerful means both for making religious experience real and for creating a feeling of fellowship and community. Theosophy is therefore weakened as a vital movement by its lack of official ritual. (Campbell 1980: 196) Finally, as a curiosity, it is interesting to observe that the Mahatma Koot Hoomi, in February 1882, in a letter to the medium A. P. Sinnett, commented on the young Theosophical Society and its internal problems, and said: How will you do it? How can you do it? Think of it well, if you care for further intercourse. They want something new. A ritual to amuse them. (Humphreys and Benjamin 1979: 262) And this was what they got through the Shan movement, exactly one hundred years later.
Final comment
During the symposium on rituals in Åbo when this paper was originally presented, Dr. Tore Ahlbäck, being my co-referent, gave some very interesting pieces of information, especially regarding the history of the Theosophical Society. One thing in particular was of interest in relation to my analysis. Dr. Ahlbäck revealed that C. W. Leadbeater actually was the occult genius among the first theosophists, and he indicated that the influence of Leadbeater upon the occult traditions may very well be more important than that of Madame Blavatsky. This observation led Dr. Ahlbäck to suggest that Shan the Rising Light is primarily in tune with the occultism, and thus the tradition of ritualism, of Leadbeater. Only secondarily, he suggested, is the movement in tune with the teachings of Madame Blavatsky.
As the resemblances between the ritualized occult traditions and Shan the Rising Light are obvious, this may very well be so. On the other hand, I find it hard to ignore the fact that the belief system of Ananda Tara Shan to a very high degree resembles that of Madame Blavatsky and other theosophical thinkers. As far as I can see, the judgement depends on where the emphasis is laid. By focusing on the belief system, the theosophical heritage dominates; by concentrating on the rituals, the other occult traditions or disciplines show themselves. One way or the other: the systematized mixing of strongly ritualized traditions with a non-ritualized belief system has led to a religious innovation.
References Cited
Unpublished sources
Copenhagen. Archive of the author.
AA 1-16. Documents, interviews, and tape-recordings in connection to Shan the Rising Light.
AA 1. Maitreya Theosophy. Booklet distributed by Shan the Rising Light for internal use, 1987.
AA 2. Private conversation with an elderly woman who had known Jeanne Morashti before her occult career began, July 1990.
AA 3. Interview with a dominant figure in the Theosophical Society in Denmark, May 1991.
AA 4. Copy of internally distributed letter by Jeanne Morashti under the heading "Jeg anklager det Teosofiske Samfund of idag og de pseudo åndelige". No date.
AA 5. Mailout issued during spring 1990. No date.
AA 6. Tape-recording made by the author with a leading person in the Theosophical Society in Denmark, 27.6.1991.
AA 7. Tape-recording made by the author with a Danish theosophist with contact to the Liberal Catholic Church, July 1991.
AA 8. Interview by the author with the former leader of the Order of the Round Table, 4.7.1991.
AA 9. Interview by the author with Asger Lorentsen, February 1991.
AA 10. Internal note from Shan the Rising Light under the heading Fremtiden er her allarede. No date.
AA 11. Ritual-manual distributed by Shan the Rising Light for internal use. No date.
AA 12. Cosmic Service for the Earth. Ritual-manual distributed by Shan the Rising Light for internal use. No date.
AA 13. Speech transmitted through Ananda Tara Shan, issued under the heading The Count Speaks, 7.7.1991.
AA 14. Conversation between representatives of four new religions in Denmark. Radio Lotus, spring 1991.
AA 15. Quotation from a video-tape, shown to visitors at Ananda Ashram, spring 1991.
AA 16. "Hvad er en fredstjeneste?" Leaflet distributed by Shan the Rising Light for internal use, 1987.
"year": 1993,
"sha1": "d1a75a72c084488debbb7bc3ec15eec10f880b5c",
"oa_license": "CCBY",
"oa_url": "https://journal.fi/scripta/article/download/67217/27515",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d1a75a72c084488debbb7bc3ec15eec10f880b5c",
"s2fieldsofstudy": [
"Philosophy"
],
"extfieldsofstudy": [
"Sociology"
]
} |
59427037 | pes2o/s2orc | v3-fos-license | Differential Evolution for Lifetime Maximization of Heterogeneous Wireless Sensor Networks
Maximizing the lifetime of wireless sensor networks (WSNs) is a hot and significant issue. However, no approach using differential evolution (DE) to research this problem has appeared so far. This paper proposes a DE-based approach that can maximize the lifetime of a WSN by finding the largest number of disjoint sets of sensors, with every set being able to completely cover the targets. Different from other methods in the literature, we first introduce a common method to generate the test data set and then propose an algorithm using differential evolution to solve the disjoint set covers problem (DEDSC). The proposed algorithm includes a recombination operation, which is performed after initialization and guarantees that the sensors of at least one critical target are divided into different disjoint sets. Moreover, the fitness computation in DEDSC contains both the number of complete cover subsets and the coverage percentage of incomplete cover subsets. Applications for sensing a number of target points, named point-coverage, have been used for evaluating the effectiveness of the algorithm. Results show that the proposed algorithm DEDSC is promising and simple; its performance outperforms or is similar to other existing excellent approaches in both optimization speed and solution quality.
Introduction
The past two decades have witnessed the boom of wireless sensor networks (WSNs) and their applications, such as battlefield surveillance, environment supervision, traffic control, animal tracking, and home applications [1][2][3][4][5]. A key technology for various applications that involve long-term and low-cost monitoring, in other words, the fundamental criterion for evaluating a WSN, is the network lifetime [6], which is defined as the period during which the network satisfies the application requirements. Since most devices of WSNs are powered by nonrenewable batteries, studies for prolonging the network lifetime have become one of the most significant and challenging issues in WSNs. Thus, how to maximize the network's lifetime is a critical research topic in WSNs.
Various methods have been proposed for prolonging the lifetime of WSNs, focusing on the issues of data processing [7], routing [8][9][10][11], device placement [12], topology management [13], and device control [14][15][16]. In a WSN where devices are densely deployed, a subset of the devices can already address the coverage and connectivity issues. Since it has been proven that the network connectivity of active sensors in complete coverage is guaranteed by having the communication range of each sensor be at least twice its sensing range [17,18], only the sensing coverage problem is researched in this paper.
In a WSN, the coverage problem is an important issue, which determines how well an area of interest is monitored or tracked by sensors, and the device control approach that schedules the devices' sleep/wakeup activities has been shown to be promising [19,20]. In general, random deployment is performed to place the sensors (dropping from a plane) in the target area. A sensor generally has two operation modes, active mode and sleep mode [4]. When in active mode, a sensor can carry out its full operations, such as sensing, computation, and communication. To maintain those operations, sensors need to consume a relatively large amount of energy. In contrast, a sensor in sleep mode uses only a small amount of energy and can be awoken in a scheduled working interval for full operations. When a subset of sensors in the area can already cover the target area completely, the other sensors can be scheduled into sleep mode to save energy; thus, if the number of such subsets is maximized, the lifetime of the WSN is prolonged. In other words, maximizing the number of complete cover subsets is a more direct way to maximize the network lifetime [21]. Note that targets include point targets and area targets, so the coverage problem is divided into point-coverage and area-coverage, respectively. Because the point target is the most common phenomenon in practical applications, in this paper the point-coverage problem is considered, maximizing the lifetime of the WSN by maximizing the number of complete cover subsets.
The problem of finding the maximum number of complete cover subsets is difficult because each subset must fulfill complete coverage of the target area, and the WSN can satisfy the surveillance task with only one subset of sensors active at any time. This problem in a WSN is called the disjoint set covers problem or the SET k-cover problem, which has been proven to be a nondeterministic polynomial complete (NPC) problem [22]. In [22], a maximum cover using mixed integer programming (MC-MIP) algorithm is proposed to find the maximum number of complete cover subsets. In [23], Slijepcevic and Potkonjak proposed a greedy algorithm, named the most constrained minimally constraining covering (MCMCC) heuristic, to completely cover the target area. In [24], Lai et al. first introduced a genetic algorithm to solve point-coverage problems for WSNs by finding the maximum number of complete cover subsets, termed GAMDSC. Recently, a novel hybrid genetic algorithm using a forward encoding scheme (STHGA) was proposed by Hu et al. [25] for lifetime maximization of WSNs, which adopts a forward encoding scheme for chromosomes and sensor schedule transition operations.
Besides, researching wireless sensor networks from other perspectives, such as evolutionary algorithms (EAs) [26,27], is a promising direction. Furthermore, using other genetic-style algorithms to research this problem is worth considering. Among the popular algorithms and their applications in recent years [28][29][30][31], the differential evolution (DE) algorithm [32,33] appears to be a competent candidate. Since its original definition by Storn and Price [34], the DE algorithm and its variants have been perceived as a reliable and versatile population-based heuristic optimization technique, which exhibits remarkable performance in a wide variety of application problems [35][36][37][38][39][40].
In this paper, a DE-based algorithm is presented aiming at solving the disjoint set covers problem for maximizing the WSN lifetime. The proposed algorithm, termed differential evolution for solving disjoint set covers (DEDSC), comprises the following features. Firstly, before running DEDSC, the test data set is produced by a common generation method, which includes the target and sensor coverage information and the upper bound of the number of disjoint set covers (C_max). Secondly, at the initialization of DEDSC, the population is generated randomly; in each chromosome the value of a gene is a random integer in [1, C_max]. Thirdly, a recombination operation is performed to guarantee that the sensors of at least one critical target are divided into different disjoint subsets. Fourthly, there are just two parameters in DEDSC; the mutation strategy of DEDSC is "DE/best/1" differential evolution, and the rounding style is ceiling. Lastly, the fitness computation in our proposed algorithm considers both the number of complete cover subsets and the coverage percentage of incomplete cover subsets. In general, these distinct features make DEDSC simple and effective to implement.
The rest of this paper is organized as follows: Section 2 defines the problem addressed in this paper. Section 3 describes the implementation of the proposed algorithm in detail, including the encoding method of chromosomes, the design of the fitness function, the recombination operation, and the crossover and mutation operations. Experimental results and discussions are presented in Section 4. Finally, Section 5 draws a conclusion and provides guidelines for future research.
Disjoint Set Covers Problem
In this section, the problem of finding the maximum number of disjoint set covers in a WSN is defined. Then we introduce a method for estimating an upper bound on the number of disjoint set covers.
Problem Definition.
Suppose that, in an L × W area, there is a set of targets T = {t_1, t_2, t_3, ..., t_m}, and a set of sensors S = {s_1, s_2, s_3, ..., s_n} is randomly deployed in this area to monitor the targets. All of the sensors have a sleep mode and an active mode: in the active mode sensors can sense information about the targets (all sensors are assumed to have the same sensing region), but in sleep mode they cannot sense, in order to save energy. A target is said to be covered by a sensor if it lies within the sensing region of the sensor. In order to prolong the lifetime of the WSN, we need to find the maximum number of disjoint sensor covers. The problem can be solved via transformation to the DSC problem [22], which is defined as finding the maximum number C of disjoint complete cover sets such that the following is satisfied [24,25]:
C ≤ C_max, with covers S_1, S_2, ..., S_C, where C_max is the upper bound on the number of disjoint set covers C. Namely, each S_i ⊆ S, and each S_i can completely cover all of the targets. Beyond that, every target of T is covered by at least one member of each S_i, and for any two different covers S_i and S_j, S_i ∩ S_j = ∅. Take Figure 1 as an example: there are five sensors S1, S2, S3, S4, and S5 and four targets t1, t2, t3, and t4; every sensor has the same circular sensing region. Note that in real applications, the sensing region of a sensor can be an irregular shape [41].
According to Figure 1
Upper Bound of Disjoint Set Covers Number C.
In WSNs, the maximum number of disjoint set covers cannot exceed the maximum number of full cover subsets that satisfy the coverage constraint. Thus, the maximum number of full cover subsets (C_max) can be used as the upper limit of the number of disjoint set covers. It has been proven that finding the maximum number of full cover subsets is an NPC problem [21,42], but we can compute the upper bound of the number of full cover subsets with the following method on test data sets. When all the deployed sensors are active, all the targets are covered; each target is covered by at least one sensor. The targets that are covered by the minimum number of sensors are named critical targets, and their corresponding sensors are named critical sensors. It is worth noting that in a WSN there may exist one or many critical targets, and each critical target has its own corresponding critical sensors. In the literature [24], the authors proposed the term critical sensors for what is just a special case; namely, there is just one critical target in the WSN. The number of sensors covering a critical target can be used to estimate the upper bound of the number of full cover subsets [23,25]. Therefore, the minimum number of sensors covering any target, which is denoted by C_max, can be used as the upper bound of C. Take the case in Figures 1 and 2 as an example; for these targets, the covering sensor sets are t2 ∈ (s2, s3), t3 ∈ (s1, s4), and t4 ∈ (s3, s4). According to the above analysis, there are three critical targets (t2, t3, and t4), and each critical target is covered by 2 sensors; thus the upper bound of C is 2 (C_max = 2).
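To make this estimate concrete, here is a minimal Python sketch of the upper-bound computation (not the paper's Matlab code; the coverage of t1 is an assumed illustration, since only the critical targets are listed in the text):

```python
from typing import Dict, List

def upper_bound_cmax(cover: Dict[str, List[str]]) -> int:
    """cover maps each target to the list of sensors able to sense it.

    C_max is the minimum, over all targets, of the number of covering
    sensors, since every disjoint complete cover must use one of them.
    """
    return min(len(sensors) for sensors in cover.values())

# Worked example matching Figures 1-2 (t1's sensors are assumed here;
# t2, t3, and t4 are the critical targets, each covered by 2 sensors):
cover = {"t1": ["s1", "s2", "s5"], "t2": ["s2", "s3"],
         "t3": ["s1", "s4"], "t4": ["s3", "s4"]}
print(upper_bound_cmax(cover))  # -> 2, so C_max = 2
```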
Differential Evolution for Maximizing the Disjoint Set Covers Problems
In this section, we describe the DEDSC approach for maximizing the number of disjoint set covers in point-coverage WSNs. Firstly, the test data set generation algorithm is given. Then the implementation of the proposed DEDSC is presented step by step, including the initialization, the evaluation of the population, the recombination operation, and the crossover, mutation, and selection operations.
For easy understanding, the notations used in this paper are listed in Table 1.
The Test Data Set Generation Algorithm.
According to real-world applications of WSNs, sensors and targets are randomly deployed in a specific area. Thus the test data set needs to be randomly generated by an algorithm; however, in all of the above literature [23][24][25], the test data set generation algorithm is not introduced. Therefore, a general test data set generation algorithm is given in this subsection. The pseudocode of this algorithm is shown in Pseudocode 1.
Note that the code is described in the Matlab language, and the bold words are inner functions or keywords in Matlab. For example, cell is the cell structure type.
The test data set obtained after running this algorithm contains the sensing information of every sensor (Scell), the information on how every target is covered (Tcell), the matrix of all critical targets and their corresponding critical sensors (Critial Target Matrix), and the upper limit of the number of disjoint set covers (C_max).
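Since Pseudocode 1 itself is not reproduced in this excerpt, the following is a hedged Python sketch in its spirit; the function name, the disc sensing model, and all variable names are illustrative assumptions rather than the paper's identifiers:

```python
import random

def generate_test_data(n_sensors, n_targets, area=50.0, r_sense=10.0, seed=0):
    """Randomly place sensors and targets and derive coverage information.

    Returns sensor coordinates, per-target covering sensors (a Tcell-like
    list), the critical targets, and the upper bound C_max. Assumes a
    circular sensing region and that every target is covered at least once.
    """
    rng = random.Random(seed)
    sensors = [(rng.uniform(0, area), rng.uniform(0, area)) for _ in range(n_sensors)]
    targets = [(rng.uniform(0, area), rng.uniform(0, area)) for _ in range(n_targets)]
    tcell = [[i for i, (sx, sy) in enumerate(sensors)
              if (sx - tx) ** 2 + (sy - ty) ** 2 <= r_sense ** 2]
             for (tx, ty) in targets]
    c_max = min(len(cov) for cov in tcell)                 # upper bound of C
    critical = [j for j, cov in enumerate(tcell) if len(cov) == c_max]
    return sensors, tcell, critical, c_max
```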
The Representation of Chromosomes.
It is known that finding disjoint covers amounts to having each sensor randomly join one group among a prescribed number of groups. A group forms a cover if it can cover all targets. Based on this idea, we use an integer representation to encode a grouping combination of sensors. The value of a gene indicates the index of the subset that the sensor joins [24], and thus the sensors with the same index number form a disjoint cover set.
For example, suppose a chromosome is CH1 = (2, 1, 2, 3, 1, 3, 2, 1). It means that there are eight sensors and three disjoint subsets (set 1, set 2, and set 3) in the WSN. Set 1 contains sensors s2, s5, and s8; set 2 contains sensors s1, s3, and s7; and set 3 contains s4 and s6. If each of the above three sets can completely cover all targets, the number of disjoint set covers is 3. In application, the sensors with the gene value 1 are scheduled to be activated first, the other sensors will be kept in a sleep mode until the second set of sensors is activated, and the sensors in set 3 are activated in the last scheduling round.
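As a quick illustration of this encoding, the sketch below (an illustrative helper, not taken from the paper) decodes an integer chromosome into its subsets, using 0-based sensor indices:

```python
from collections import defaultdict

def decode(chromosome):
    """Group 0-based sensor indices by their gene value (subset index)."""
    subsets = defaultdict(list)
    for sensor, subset_idx in enumerate(chromosome):
        subsets[subset_idx].append(sensor)
    return dict(subsets)

print(decode([2, 1, 2, 3, 1, 3, 2, 1]))
# -> {2: [0, 2, 6], 1: [1, 4, 7], 3: [3, 5]}
```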
Initialization and Recombination.
According to the representation of chromosomes, in the initialization we randomly generate an integer between 1 and C_max as each gene value (subset index). Sensors with the same gene value form a subset. Take Figure 1 as an example: there are five sensors and C_max = 2; therefore, after initialization a chromosome may be CH2 = (2, 1, 1, 1, 2), where each gene value is random; sensors s2, s3, and s4 form subset 1, and sensors s1, s5 form subset 2.
After initialization, a recombination operation is performed, which guarantees that the sensors of at least one critical target are divided into different disjoint sets (subsets). Note that the recombination operation is different from the scattering in [24]; in that work the authors assume there is only one critical target and scatter the corresponding sensors into different subsets. However, there may exist many critical targets in a WSN, and it is very difficult to scatter all of the corresponding sensors into different subsets; this is likely an NP problem. Thus, our recombination operation considers every chromosome, randomly chooses a critical target, and assigns different gene values to its corresponding sensors. Take the chromosome CH2 as an example: after initialization the chromosome is CH2 = (2, 1, 1, 1, 2); then the recombination operation is performed. According to the above analysis in Section 2.2, there are three critical targets (t2, t3, and t4), where t2 ∈ (s2, s3), t3 ∈ (s1, s4), and t4 ∈ (s3, s4). Thus one critical target is randomly chosen from t2, t3, and t4; suppose target t4 is chosen, and its corresponding sensors are scattered into different subsets, one case being the genes of s3 and s4 set to 2 and 1, respectively. Namely, the chromosome becomes CH2 = (2, 1, 2, 1, 2); it is obvious that after recombination CH2 has a better fitness. The pseudocode of the recombination operation is given in Pseudocode 2. Note that the code is described in the Matlab language, and the bold words are inner functions or keywords in Matlab.
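A hedged Python sketch of this recombination idea follows; it is a reading of Pseudocode 2 rather than a transcription of it, and the helper names are assumptions:

```python
import random

def recombine(chromosome, critical_targets, cover, c_max, rng=random):
    """Force the sensors of one random critical target into distinct subsets.

    cover[t] lists the 0-based sensor indices covering target t; a critical
    target is covered by exactly c_max sensors, so distinct subset indices
    drawn from 1..c_max always suffice.
    """
    t = rng.choice(critical_targets)
    distinct = rng.sample(range(1, c_max + 1), k=len(cover[t]))
    for sensor, subset_idx in zip(cover[t], distinct):
        chromosome[sensor] = subset_idx
    return chromosome
```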
Objective Function.
The objective fitness function of a chromosome CH in the population is defined as follows:

fitness(CH) = C + e,

where C (C ≥ 1) is the number of disjoint complete cover sets and e (e ∈ (0, 1)) is the average coverage percentage of the incomplete cover subsets. Actually, it is not difficult to calculate the average coverage percentage of the incomplete cover subsets in the point-coverage problem. Assume that in a chromosome the number of disjoint complete covers is C; then the number of incomplete cover subsets is (C_max - C). For the i-th incomplete cover, if it can cover k of the targets, then the coverage percentage of the i-th incomplete cover subset is k divided by the total number of targets.

3.5. Differential Evolution for the DSC Problem. Combined with the above analysis, this subsection introduces the details of using differential evolution to solve the DSC problem. Differential evolution [34] is a novel evolutionary algorithm with the following mutation strategies frequently used in the literature:

(1) "DE/rand/1": v_i,g = x_r1,g + F · (x_r2,g - x_r3,g)
(2) "DE/best/1": v_i,g = x_best,g + F · (x_r1,g - x_r2,g)
(3) "DE/current-to-best/1": v_i,g = x_i,g + F · (x_best,g - x_i,g) + F · (x_r1,g - x_r2,g)

where the indices r1, r2, and r3 are distinct integers uniformly chosen from the set {1, 2, ..., NP} \ {i}, (x_r1,g - x_r2,g) is a difference vector used to mutate the corresponding parent x_i,g, x_best,g is the best vector at the current generation g, and F is the mutation factor, which usually ranges on the interval (0, 1). In classic DE, F is a fixed parameter used to generate all mutation vectors at all generations, while in many adaptive DE algorithms each individual is associated with its own mutation factor F_i. Following the mutation stage, a binomial crossover operator is implemented to enhance the diversity of the population. In the crossover operation, the target vector x_i,g is combined with its corresponding mutant vector v_i,g by binomial crossover to generate a trial vector u_i,g. The aforementioned scheme can be outlined as

u_j,i,g = v_j,i,g if rand_j[0, 1] ≤ CR or j = j_rand; otherwise u_j,i,g = x_j,i,g,

where rand_j[0, 1] is a uniformly distributed random number in the range [0, 1], drawn anew for each j. CR denotes a user-defined parameter, called the crossover probability, which determines how similar the trial vector will be to the mutant vector. j_rand ∈ [1, D] indicates a randomly selected integer which ensures that at least one dimension of the trial vector u_i,g will differ from its associated target vector x_i,g. It has been proven that large values of CR improve variation and hence accelerate convergence speed, while small values promote exploitation.
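The fitness just defined can be sketched as follows; this is a minimal Python illustration, assuming cover maps each target to the 0-based indices of the sensors covering it and gene values lie in 1..C_max (it is not the paper's Matlab code):

```python
def fitness(chromosome, cover, c_max):
    """Number of complete covers plus the mean coverage of incomplete ones."""
    n_targets = len(cover)
    covered_counts = []
    for idx in range(1, c_max + 1):
        members = {s for s, g in enumerate(chromosome) if g == idx}
        covered = sum(1 for sensors in cover.values() if members & set(sensors))
        covered_counts.append(covered)
    complete = sum(1 for c in covered_counts if c == n_targets)      # C
    partial = [c / n_targets for c in covered_counts if c < n_targets]
    e = sum(partial) / len(partial) if partial else 0.0              # e in (0, 1)
    return complete + e
```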
Finally, in the selection operation, the selection operator is a one-to-one spawning strategy. In selection, the target vector competes against the trial vector according to their fitness values, and the better one is selected to enter the next generation:

x_i,g+1 = u_i,g if fitness(u_i,g) ≥ fitness(x_i,g); otherwise x_i,g+1 = x_i,g.

For using DE to solve the disjoint set covers problem systematically and concisely, we provide the flow chart and the pseudocode of the basic DEDSC in Figure 3 and Pseudocode 3, respectively. Figure 3 contains all operation modules, and Pseudocode 3 shows the corresponding pseudocode.
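Putting the operators together, one generation of DEDSC might look like the hedged sketch below; the clamping of mutated values into [1, C_max] after the ceiling is an assumption made so the sketch stays self-contained, and fit is the fitness function sketched above:

```python
import math
import random

def dedsc_generation(pop, fit, cover, c_max, F=0.5, CR=0.3, rng=random):
    """One DEDSC generation: DE/best/1 mutation, ceiling rounding,
    binomial crossover, and one-to-one (greedy) selection."""
    n, d = len(pop), len(pop[0])
    best = max(pop, key=lambda ch: fit(ch, cover, c_max))
    new_pop = []
    for i, x in enumerate(pop):
        r1, r2 = rng.sample([j for j in range(n) if j != i], 2)
        # Mutate, take the ceiling, and clamp back into valid subset indices.
        v = [min(c_max, max(1, math.ceil(best[j] + F * (pop[r1][j] - pop[r2][j]))))
             for j in range(d)]
        j_rand = rng.randrange(d)  # guarantees at least one mutated gene
        u = [v[j] if (rng.random() <= CR or j == j_rand) else x[j] for j in range(d)]
        new_pop.append(u if fit(u, cover, c_max) >= fit(x, cover, c_max) else x)
    return new_pop
```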
From Figure 3 and Pseudocode 3, we summarize the distinct features of DEDSC as follows. Firstly, the test data set is produced by the generation algorithm before running DEDSC, and the overall idea of DEDSC is the differential evolution algorithm. [Pseudocode 3, fitness-evaluation excerpt: (17) for i = 1: popsize % for each chromosome (18) for idx = 1: C_max (19) find those sensors with the same subset index idx (20) for every such sensor (21) count the targets it covers and union them (22) end (23) if the number of covered targets equals the number of targets (24) this subset achieves complete coverage; (25) else (26) the subset coverage percentage is k/n.] Secondly, at initialization the population is generated randomly, and thus the value of each gene in a chromosome is a random integer between 1 and C_max. After that, a recombination operation is performed to guarantee that the sensors of at least one critical target are divided into different disjoint subsets.
Fourthly, there are just two parameters in DEDSC; the mutation strategy of DEDSC is "DE/best/1" differential evolution, and the rounding style is ceiling. In addition, the fitness computation in our proposed algorithm considers not only the number of complete cover sets but also the coverage percentage of incomplete cover sets. Lastly, the termination condition of DEDSC is that the best fitness in the current population equals C_max or the number of evolution generations exceeds m * 100.
Runtime Complexity Analysis of DEDSC Algorithm.
Runtime complexity analysis of population-based stochastic search techniques, like DE, GA, and so forth, is a critical issue in its own right. In this section, we examine the computational complexity of the proposed DEDSC algorithm, which can be estimated based on the number of calculations of metrics and reproduction that are required.
Assuming NP is the number of individuals, namely the size of the population, and D is the dimension of the problem, the serial DE model needs O(NP · D) time per generation for updating the population and computing the values of the fitness objective function. Hence, if the algorithm is terminated after a fixed number of generations G_max, the overall runtime is O(NP · D · G_max).
In DEDSC, besides the fundamental operations on the vectors, we have to take the recombination operation into account. In our DEDSC, a recombination operation after initialization is proposed, which is performed only one time in every run. For each chromosome among the NP individuals, the best-case and worst-case run complexities are O(1) and O(D), respectively. Thus, the average complexity of recombination is O(0.5 · (1 + D) · NP). Therefore, the time complexity of DEDSC is O(0.5 · (1 + D) · NP) + O(NP · D · G_max).
The above formula reveals that the runtime complexity of the proposed DEDSC is only slightly higher than that of the original algorithm.
Simulation Experiment and Analysis
In this section, a series of experiments is performed to evaluate the performance of DEDSC. Since the proposed approach is for maximizing the number of disjoint set covers in point-coverage WSNs, the state-of-the-art algorithms, that is, GAMDSC [24] and STHGA [25], are used for comparison.
Parameter and Test Environment
Setting. Unless specially stated, the experiments for DEDSC use the same parameter settings: the population size popsize = 10; the mutation factor and crossover rate are initialized to F = 0.5 and CR = 0.3, respectively, where the values of F and CR are empirical values. In DEDSC, the mutation strategy is DE/best/1, the maximum number of evolution generations (function evaluations, FES) is m × 100, and if the number of disjoint complete cover sets reaches C_max, the algorithm also terminates. These parameters influence the performance of DEDSC, which will be analyzed in Sections 4.3 and 4.4.
Parameter settings of GAMDSC and STHGA can be found in [24,25]. For DEDSC and the other referred algorithms, each case is tested 100 times independently; the sensors are deployed in a 50 × 50 rectangle area, and the coordinates are generated randomly. In the result tables, "ok%" indicates the percentage of runs that find C_max among the 100 runs, "Best" is the best result of the algorithm, "Mean" indicates the mean value of the 100 runs, and "avgE" denotes the average number of evolution generations needed to find C_max.
From Table 3, it can be observed that the proposed DEDSC shows a significant improvement over GAMDSC in cases 1-2 and 4-7; in case 3 they have equal performance. Moreover, the proposed DEDSC algorithm has almost the same performance as STHGA, and it reaches a hundred percent success in finding C_max in all cases except case 6. Furthermore, the results show that our proposed DEDSC needs the fewest evolution generations to find C_max in cases 1-5 and case 7, which implies that DEDSC has the fastest convergence rate.
In order to analyze the performance of DEDSC in detail, the average optimization curves of DEDSC in cases 1 and 6 are provided in Figures 4 and 5, respectively. From Figures 4 and 5 we can conclude the following: (1) DEDSC finds the best result much faster than the other two methods, which benefits from differential evolution and the comprehensive evaluation of fitness; (2) due to the recombination operation, DEDSC has the best initial population. In general, the traditional intelligence method, that is, the genetic algorithm, has a better initial population than STHGA; that is because GA randomly generates group numbers, while STHGA generates indices increasing by degrees. It can be seen from Figures 4 and 5 that STHGA has the smallest number of complete cover sets at initialization; (3) STHGA shows a stable, progressive increase and easily finds the global optimum because of its forward encoding scheme. DEDSC has a better convergence rate and initial population than STHGA; however, in the later stage of evolution it may fall into a local optimal solution. That is the reason why our proposed DEDSC is not better than STHGA in case 6.
Test for the Method of Round Numbers.
In this subsection, different methods of rounding numbers and different mutation strategies are tested. The test data used is cases 1 to 7, shown in Table 2.
After the mutation operation, the gene values may be of float type, but the index of a subset is of integer type; therefore, a rounding operation needs to be performed on the population. There are three popular ways to round numbers: ceiling, floor, and round. In order to find which method is more suitable for DEDSC, all three ways are tested and compared in Table 4.
Table 4 reports the experimental results of the three rounding methods in 100 independent runs. Here, we also provide an overall performance comparison between the mutation-strategy and parameter tests in Tables 5, 6, and 7, where the better results are indicated in boldface. Storn and Price in [34] have indicated that a reasonable value for F is usually between 0.4 and 1, and that a good initial choice of F is 0.5. The parameter CR controls how many parameters in expectation are changed in a population member. Feoktistov and Janaqi [43] claimed that a plausible choice of the crossover rate CR is within [0.3, 0.9]. Recently, the authors in [44] stated that CR should lie in (0, 0.2) when the function is separable, and in (0.9, 1) when the function's parameters are dependent. The disjoint set covers problem likely belongs to the separable functions; so we apply CR = 0.3 and CR = 0.5 (with F = 0.5 and F = 0.9, respectively) for comparison and provide the detailed results in Tables 5-7.
From Tables 5-7, it can be seen that the "DE/best/1" strategy with F = 0.5 and CR = 0.3 provides the best performance on cases 1-7. To be specific, considering the values of F and CR for each mutation strategy, it is observed that using F = 0.5 and CR = 0.3 the results are better than those using F = 0.9 and CR = 0.5. Especially in cases 1, 5, and 6, the performance with the former parameter combination improves significantly.
On the other hand, consider the mutation strategies with different values of F and CR. Tables 5-7 show that only the "DE/best/1" strategy can reach a 100 percent success rate in finding C_max in case 5. And in case 6 the success rate of the "DE/best/1" strategy is 87%, which is the best among the three mutation strategies.
As a conclusion, according to the experimental results in Tables 5-7 and the above analysis, the "DE/best/1" mutation strategy with F = 0.5 and CR = 0.3 has the best performance in DEDSC.
Test for DE Variants with Self-Adaptive Parameters.
In this subsection, we study the effect of DE variants with self-adaptive parameter control strategies.
Inspired by "jDE" [39], we employ the self-adaptive scheme to update the parameters F and CR while solving the DSC problem. Moreover, various mutation strategies are tested, and the comparison results are provided in Table 8.
In this subsection's test, we not only apply the self-adaptive idea of "jDE" to update F and CR, but also test some variants of "jDE", in that various mutation strategies within "jDE" are tested. Note that in the original "jDE", the initial values are F = 0.5 and CR = 0.5, and the mutation strategy is "DE/rand/1". Therefore, in this test, F = 0.5 and CR = 0.5 are used in "jDE" and all its variants, but different mutation strategies are applied in the different variants. Table 8 shows that in case 5, DEDSC outperforms "jDE" and its variants significantly; only DEDSC can attain a 100 percent success rate in finding C_max. In case 1, both DEDSC and "jDE/best/1" can achieve a 100% success rate; "jDE/current-to-best/1" and the original "jDE" only reach 84% and 94%, respectively. This means that the original "jDE" may not be very suitable for this problem, and that the mutation strategy "/best/1" is better than the others. Considering case 6, none of these algorithms can reach a 100% success rate; the original "jDE" shows the best performance, and our proposed DEDSC ranks second. This implies that our algorithm may get trapped in a local optimum and needs more suitable control parameters to improve its performance.
As a conclusion, DEDSC is the best among the three algorithms in the comparisons on cases 1, 5, and 6. However, because the performance of DEDSC is not better than STHGA in case 6, we intend to study more suitable parameters in future work.
Conclusion
Considering the maximization of the lifetime of wireless sensor networks, this paper proposes using differential evolution to solve this optimization problem; the proposed algorithm is termed DEDSC.
In DEDSC, there are just two parameters, the mutation factor F and the crossover rate CR. The major features of DEDSC are the recombination operation and the fitness computation; the former guarantees that the sensors of at least one critical target are divided into different disjoint sets. This is a very important operation, especially when there are many critical targets in the test data set. Our method for computing fitness not only considers the number of complete cover subsets, but also contains the coverage percentage of the incomplete cover subsets. To verify DEDSC's effectiveness, an extensive performance comparison has been conducted over 7 commonly used test cases.
The experimental results suggest that its overall performance is better than GAMDSC and almost the same as STHGA. Considering the convergence rate, DEDSC needs the fewest evolution generations to find C_max among the above algorithms. Moreover, a scalability study was carried out using different parameters and mutation strategies for DEDSC. In addition, for self-adaptive parameter updating, we tentatively combined "jDE" with DEDSC for comparison, and three "jDE" mutation strategies were tested. From the above results we conclude that DEDSC with the "DE/best/1" mutation strategy and F = 0.5, CR = 0.3 has promising performance in maximizing the lifetime of WSNs.
However, using other self-adaptive parameter schemes, such as "JADE" [29], to improve the performance of DEDSC is a future direction for the algorithm.
Figure 4: Average optimization curves of DEDSC, STHGA, and GAMDSC in case 1. The curves are sampled once every 50 generations.
Figure 5: Average optimization curves of DEDSC, STHGA, and GAMDSC in case 6. The curves are sampled once every 20 generations.
Pseudocode 3: The pseudocode of the proposed DEDSC algorithm. [Surviving fragments: "Crossover between v_i,g and x_i,g by (5)"; "% Evaluate fitness for u_i,g and x_i,g".] Note that the pseudocode is described in the Matlab language, and the bold words are inner functions or keywords in Matlab.
Table 2: The generated test cases 1-7.
All cases are run on a computer with a Core i3 2.8 GHz CPU. We use the proposed test data set generation algorithm to obtain test cases 1-7 and show the details of these test cases in Table 2.
4.2. Overall Test for DEDSC and Referred Algorithms. In this subsection, the overall test for DEDSC and a comparison with the referred optimization algorithms are provided in Table 3. The best result among those methods is shown in boldface in the table.
Table 4: Experimental results of cases 1-7, averaged over 100 independent runs.
Meanwhile, there are two important parameters, F and CR, in DE. Therefore, finding a suitable mutation strategy and suitable values of F and CR is significant for the performance of DEDSC. In this subsection, we apply several popular mutation strategies and values of F and CR for testing DEDSC. The test data sets used are cases 1 to 7, each test is averaged over 100 independent runs with m * 100 FES, and the rounding method used is ceiling.
Table 8: Effects of parameters on the performance of jDE, averaged over 100 independent runs with m * 100 FES.
"year": 2013,
"sha1": "9dd736fdbd76a2d2211abeb4cc97a4762b3c6968",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/mpe/2013/172783.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9dd736fdbd76a2d2211abeb4cc97a4762b3c6968",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
234487890 | pes2o/s2orc | v3-fos-license | Multicohort Analysis Identifies Monocyte Gene Signatures to Accurately Monitor Subset-Specific Changes in Human Diseases
Monocytes are crucial regulators of inflammation, and are characterized by three distinct subsets in humans, of which classical and non-classical are the most abundant. Different subsets carry out different functions and have been previously associated with multiple inflammatory conditions. Dissecting the contribution of different monocyte subsets to disease is currently limited by samples and cohorts, often resulting in underpowered studies and poor reproducibility. Publicly available transcriptome profiles provide an alternative source of data characterized by high statistical power and real-world heterogeneity. However, most transcriptome datasets profile bulk blood or tissue samples, requiring the use of in silico approaches to quantify changes in cell levels. Here, we integrated 853 publicly available microarray expression profiles of sorted human monocyte subsets from 45 independent studies to identify robust and parsimonious gene expression signatures, consisting of 10 genes specific to each subset. These signatures maintain their accuracy regardless of disease state in an independent cohort profiled by RNA-sequencing and are specific to their respective subset when compared to other immune cells from both myeloid and lymphoid lineages profiled across 6160 transcriptome profiles. Consequently, we show that these signatures can be used to quantify changes in monocyte subsets levels in expression profiles from patients in clinical trials. Finally, we show that proteins encoded by our signature genes can be used in cytometry-based assays to specifically sort monocyte subsets. Our results demonstrate the robustness, versatility, and utility of our computational approach and provide a framework for the discovery of new cellular markers.
INTRODUCTION
Monocytes, together with macrophages and dendritic cells (DCs), are part of the mononuclear phagocyte system. Monocytes and monocyte-derived cells play important roles in the regulation of inflammation, both as precursors as well as effector cells (1)(2)(3). Monocytes are a heterogeneous group of cells, and since Passlick and colleagues showed that the combined use of CD16 (FcγRIII) and the LPS co-receptor CD14 identified distinct subsets of monocytes in humans (4), three major subsets have been defined, as well as their murine counterparts. These subsets are: a classical (CD14+CD16− in humans and Ly6C^hi in mice), a nonclassical (CD14−CD16+ in humans and Ly6C^lo in mice), and an intermediate subset (CD14+CD16+ in humans and Ly6C+Treml4+ in mice) (5)(6)(7). Although plasticity is an important characteristic of monocytes (8,9), surface marker expression and functional studies have shown each monocyte subset to be functionally distinct. Consistent with this, transcriptome analyses of sorted monocyte subsets have revealed different gene expression profiles ascribed to each subset (7,10,11). Classical monocytes, around 90% of total monocytes in humans, are efficient phagocytic cells important for the initiation of the inflammatory response, with high expression of scavenger and chemokine receptors, and elevated cytokine production (12,13). Earlier work on nonclassical monocytes, around 5% of total monocytes, emphasized their capacity to produce inflammatory cytokines, especially TNF (14). However, recent work has shown that nonclassical monocytes are also involved in immune surveillance of the vasculature and have pro- and anti-inflammatory functions (9,15). Intermediate monocytes are considered efficient antigen presenting cells (12). Consequently, altered frequencies of different subsets have been associated with inflammatory conditions, such as infections and autoimmune disorders including lupus, rheumatoid arthritis, and inflammatory bowel disease (13,(16)(17)(18)(19), and more recently, COVID-19 (20,21).
Dissecting the contribution of different monocyte subsets to disease is currently limited by the samples and cohorts that can be profiled experimentally using cytometry and cell-staining-based assays. These limitations often result in underpowered studies and, consequently, poor reproducibility (22). Public transcriptomes provide an alternative source of data characterized by high statistical power and real-world biological, clinical, and technical heterogeneity, resulting in increased reproducibility (23-30). However, most transcriptome datasets profile bulk blood or tissue samples, requiring the use of in silico approaches to quantify changes in the levels of specific cell types (31-36).
Here, we integrated 853 publicly available microarray expression profiles of sorted human monocyte subsets from 45 independent studies to identify robust and parsimonious gene expression signatures, consisting of 10 genes specific to each subset. These signatures, although derived using only datasets profiling healthy individuals, maintain their accuracy independent of disease state in an independent cohort profiled by RNA-sequencing. Furthermore, we demonstrate that these signatures are specific to monocyte subsets compared to other immune cells such as B cells, T cells, DCs, and natural killer (NK) cells. This increased specificity results in estimated monocyte subset levels that are strongly correlated with cytometry-based quantification of cellular subsets. Consequently, we show that these monocyte subset-specific signatures can be used to quantify changes in monocyte subset levels in expression profiles from patients in clinical trials. Finally, we show that proteins encoded by our signature genes can be used in cytometry-based assays to specifically sort monocyte subsets. Our results demonstrate the robustness, versatility, and utility of our computational approach and provide a framework for the discovery of new cellular markers.
MATERIALS AND METHODS

Public Data Collection, Annotation, and Analysis
Unless otherwise noted, we obtained all gene expression data used in this study from the Gene Expression Omnibus (GEO) database (www.ncbi.nlm.nih.gov/geo/) using the MetaIntegrator R package from CRAN (35) (Supplemental Table 1). All data were manually annotated using the available expression metadata. We normalized each expression dataset using quantile normalization and computed gene-level expression from probe-level data using the original probe annotation files available from GEO as described previously (31). We performed co-normalization, effect size calculation, and gene ranking as previously described (31). We performed gene set selection to identify parsimonious gene signatures using the following criteria: (a) we ranked genes based on effect size; (b) we filtered for genes that were up-regulated in the cell subset of interest; (c) we filtered for genes with a mean expression difference of 32 expression units or above; and (d) we selected the top 10 genes for each subset. We chose these criteria to increase the likelihood of successful independent experimental validation of each marker gene. Signature scores were computed by calculating the geometric mean of the expression levels of the signature genes in the dataset of interest, as described previously (17-23). All follow-up analyses were performed using R (v. 3.4.1). Analysis scripts are included as Supplemental Materials and available online (https://www.biorxiv.org/content/biorxiv/early/2020/12/22/2020.12.21.423397/DC1/embed/media-1.gz?download=true).
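As a minimal sketch of the geometric-mean scoring step described above (the toy matrix, gene names, and pseudo-count are illustrative assumptions, not the published 10-gene signatures):

```r
set.seed(1)
# Toy genes-x-samples matrix standing in for normalized expression values.
expr <- matrix(rexp(5 * 4, rate = 0.01), nrow = 5,
               dimnames = list(paste0("GENE", 1:5), paste0("S", 1:4)))

signature_score <- function(expr, genes) {
  genes <- intersect(genes, rownames(expr))  # keep genes present on the platform
  # Geometric mean of expression per sample; +1 guards against log(0).
  exp(colMeans(log(expr[genes, , drop = FALSE] + 1)))
}

signature_score(expr, c("GENE1", "GENE2", "GENE3"))  # one score per sample
```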
Pathway Analysis
We performed gene set enrichment analysis (GSEA) as previously described (37). Briefly, for each monocyte subset we computed an effect size vector across all genes as described above. We then applied GSEA to the effect size vectors by comparing them to MSigDB, a collection of molecular signatures derived from pathway analysis databases and published molecular data (http://software.broadinstitute.org/gsea/msigdb/index.jsp). We corrected for multiple hypothesis testing across all pathways using the Benjamini-Hochberg FDR correction. We performed this analysis using the 'fgsea' and 'MSigDB' packages in R.
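A hedged sketch of this step is shown below; the effect-size vector is simulated, and the msigdbr package is used here as one common way to fetch MSigDB gene sets (the paper's exact retrieval code may differ):

```r
library(fgsea)
library(msigdbr)

msig <- msigdbr(species = "Homo sapiens", category = "C2")  # curated gene sets
pathways <- split(msig$gene_symbol, msig$gs_name)

set.seed(1)
genes <- unique(msig$gene_symbol)
es <- setNames(rnorm(length(genes)), genes)  # placeholder per-gene Hedges' g values

res <- fgsea(pathways = pathways, stats = sort(es), minSize = 15, maxSize = 500)
res[res$padj < 0.05, ]  # fgsea reports Benjamini-Hochberg-adjusted p-values
```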
Samples
For the monocyte sorting and expression profiling, de-identified blood samples from 3 healthy adult donors were obtained from the Stanford Blood Center (SBC) and from 3 rheumatoid arthritis (RA) patients at UCSF (Supplemental Table 6). For the flow cytometry, 8 de-identified blood samples from healthy adults from the SBC and 12 samples from patients with systemic juvenile idiopathic arthritis (sJIA) from Lucile Packard Children's Hospital were tested (Supplemental Table 7). The work was conducted with approval from the Administrative Panels on Human Subjects Research of Stanford University and UCSF.
Fluorescent-Activated Cell Sorting of Monocytes
Venous blood from healthy controls and RA patients was collected in heparin tubes (BD Vacutainer, BD, Franklin Lakes, NJ); peripheral blood mononuclear cells (PBMCs) were isolated by density gradient centrifugation using LSM Lymphocyte Separation Medium (MP Biomedicals, Santa Ana, CA). PBMCs were enriched for total monocytes using the Pan Monocyte Isolation Kit (Miltenyi Biotec, San Diego, CA). Enriched monocytes were stained for surface antigens as previously described (38). Briefly, cells were stained with LIVE/DEAD Fixable Aqua Dead Cell Stain (Life Technologies, Eugene, OR). Antibodies against CD3, CD19, CD56, and CD66b, all labeled with PerCP-Cy5.5, were used to exclude T cells, B cells, NK cells, and neutrophils, respectively ('dump'); antibodies against CD1c and CD141 were used to identify and exclude dendritic cells. Antibodies against HLA-DR (APC-Cy7, clone L243), CD14 (Pacific Blue, clone M5E2), and CD16 (PE-Cy7, clone 3G8) were used to identify monocytes and their three subsets. All antibodies were from BioLegend (San Diego, CA). Fluorescence minus one (FMO) controls were used for gating cell populations. Sterile flow cytometry sorting was performed using a BD FACSAria II (BD Biosciences) at the Stanford Shared FACS Facility (SSFF) using a 100 µm nozzle, yielding monocyte subset purity of over 98% (verified using the classical subset). Sorted cells were collected into polypropylene tubes containing RPMI media (RPMI + 10% heat-inactivated fetal bovine serum + 1% penicillin-streptomycin), counted, and spun down for total RNA extraction.
Total RNA Extraction and RNA-Seq
Total RNA extraction was performed using the Qiagen RNeasy Micro kit (Qiagen, Germantown, MD). Total RNA concentration and quality were determined using a NanoDrop 1000 Spectrophotometer (ThermoFisher Scientific, Waltham, MA) and a BioAnalyzer 2100 (Agilent Technologies, Santa Clara, CA) at the Stanford PAN Facility. RNA sequencing (RNA-seq) was performed by BGI Americas (Cambridge, MA). Total RNA was enriched for mRNA using oligo(dT), and the RNA was fragmented. cDNA synthesis was performed using random hexamers as primers. After purification and end repair, the short fragments were ligated to adapters. Suitable fragments were selected for PCR amplification. The libraries were then sequenced on an Illumina HiSeq 2000.
Gene Expression by Real Time RT-PCR
Total RNA from sorted monocytes was converted into cDNA using the iScript Reverse Transcription Supermix (Bio-Rad, Hercules, CA), and cDNA was amplified with SsoAdvanced™ SYBR® Green Supermix (Bio-Rad); reactions were performed in a CFX384 real-time PCR instrument (Bio-Rad). Primer sequences were obtained from qPrimerDepot (http://primerdepot.nci.nih.gov/), synthesized at the Stanford University Protein and Nucleic Acid Facility (PAN), and validated in our laboratory as previously described (39) using unsorted total monocytes. Primers are listed in Supplemental Table 8. Relative starting amounts of each gene of interest were determined using the delta-delta Cq method.
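For illustration, the delta-delta Cq calculation can be expressed as below; the Cq values and the choice of reference gene are hypothetical, not taken from the study:

```r
# Fold change of a target gene in a treated vs. control sample, each
# normalized to a reference (housekeeping) gene.
ddcq_fold_change <- function(cq_target_treat, cq_ref_treat,
                             cq_target_ctrl, cq_ref_ctrl) {
  d_treat <- cq_target_treat - cq_ref_treat  # delta Cq in the treated sample
  d_ctrl  <- cq_target_ctrl - cq_ref_ctrl    # delta Cq in the control sample
  2^-(d_treat - d_ctrl)                      # delta-delta Cq -> fold change
}

ddcq_fold_change(24.1, 18.0, 26.3, 18.2)  # ~4-fold higher in the treated sample
```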
RESULTS

Robust Subset-Specific Monocyte Signatures From Heterogeneous Gene Expression Datasets
We hypothesized that integrating heterogeneous transcriptome profiles of sorted human monocyte subsets across multiple cohorts would allow us to identify robust subset-specific gene expression signatures. To test this hypothesis, we collected and annotated 853 publicly available gene expression profiles of sorted human monocytes across 45 studies. These datasets spanned 22 microarray platforms for transcriptome profiling of samples acquired from healthy donors (Supplemental Table 1). After sample annotation, we co-normalized and integrated all expression data as previously described (31) (Figure 1). For each gene, we calculated effect sizes as Hedges' g between the samples from the cell subset of interest and all other samples. We then characterized the underlying biological functions represented within the transcriptional data for each monocyte subset by performing Gene Set Enrichment Analysis (GSEA) (see Methods). We identified 91 significantly enriched pathways in classical monocytes and 1737 for the nonclassical subset (FDR < 5%). Our analysis revealed pathways associated with known functions of classical monocytes, such as wound healing, cytoskeleton remodeling, and phagocytosis, positively enriched in the classical subset (Supplemental Table 2). In contrast, our analysis of the nonclassical subset revealed categories of disease-associated gene expression changes, cell cycle, and metabolism (Supplemental Table 3). Notably, our most significant enrichment consisted of a gene signature previously reported to be down-regulated in Alzheimer's disease (pathway: 'BLALOCK_ALZHEIMERS_DISEASE_DN', p = 9.9e-6). This is in agreement with previous reports showing that nonclassical monocytes are reduced in patients affected by Alzheimer's disease (40). These results suggest that our data integration strategy allowed us to preserve and capture previously described biological functions of monocyte subsets irrespective of technical and biological confounders within our collection of datasets. We then applied our multi-cohort analysis framework to identify robust cell-subset specific genes (see Methods) (23, 31, 41). We considered the classical and nonclassical monocyte subsets for our analysis because of their functional importance and the number of available datasets for each subset that could be integrated into our analysis (n ≥ 4). There were 30 genes significantly over-expressed in classical monocytes and 268 genes over-expressed in nonclassical monocytes. We created classical and nonclassical monocyte-specific gene signatures using the top 10 genes for each monocyte subset that were consistently elevated within the subset of interest across all our discovery cohorts (Figures 2A, B and Supplemental Table 4). Among the classical signature genes, five have been previously associated with classical monocytes in a single-cell RNA-seq study of healthy monocytes, and SIGLEC10 was identified in non-classical monocytes as well (42).

FIGURE 1 | Generation of monocyte-specific signatures: Workflow depicting collection and annotation of publicly available discovery datasets from NCBI GEO profiling sorted human monocyte cell subsets (classical and nonclassical). Data were then combined and co-normalized to identify subset-specific signatures. Signatures were validated on an independent RNA-seq cohort and on PBMC expression profiles with paired flow data. After validation, signatures were applied to disease-affected cohorts and tested for their viability as phenotypic markers in cytometry-based assays.
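A minimal sketch of the per-gene effect size used above (standard Hedges' g with the usual small-sample correction; the input vectors are illustrative expression values for one gene):

```r
hedges_g <- function(x, y) {
  nx <- length(x); ny <- length(y)
  sp <- sqrt(((nx - 1) * var(x) + (ny - 1) * var(y)) / (nx + ny - 2))  # pooled SD
  d  <- (mean(x) - mean(y)) / sp                                       # Cohen's d
  d * (1 - 3 / (4 * (nx + ny) - 9))                                    # small-sample correction
}

set.seed(1)
hedges_g(rnorm(20, mean = 1), rnorm(40))  # positive g: higher in the first group
```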
Monocyte Signatures Are Robust in an Independent Validation Cohort and Independent of Disease State
Although the gene signatures for classical and non-classical monocytes were derived by integrating independent datasets with substantial technical and biological heterogeneity, they only included healthy human subjects. We have previously shown that the accuracy of cell-type specific genes can be significantly affected by disease-induced changes in gene expression (31). Therefore, we investigated whether these monocyte subset-specific signatures are confounded by disease state in an independent cohort of healthy controls (n=3) and patients with rheumatoid arthritis (RA; n=3). We sorted monocyte subsets (classical, nonclassical, and intermediate) from peripheral blood samples and measured their transcriptomic profile using RNA-seq (see Materials and Methods).
Hierarchical clustering of the RNA-seq data using all genes in our monocyte subset-specific signatures accurately separated samples according to their cell subset identity, but not by their disease status (Figure 3A). Importantly, the signature genes showed variable expression levels in the intermediate monocytes, suggesting that intermediate monocytes may represent a transitional cellular state rather than a stable state. All genes except two from the non-classical monocyte signature (IER2 and CTSA) were correctly over-expressed in their respective subset, and most (15 of 20) were independently confirmed by RT-PCR in sorted monocytes from healthy controls and patients with RA (Supplemental Figure 1). Next, we defined the classical monocyte subset score (cMSS) of a sample as the geometric mean of expression of the genes in the classical monocyte-specific signature, and the nonclassical monocyte subset score (ncMSS) of a sample as the geometric mean of expression of the genes in the nonclassical monocyte-specific signature. We computed cMSS and ncMSS for each sample. In the independent cohort of healthy controls and patients with RA, we found that cMSS was highest in classical monocytes and ncMSS highest in nonclassical monocytes, irrespective of disease status (Figures 3B, C).
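As an illustration of the clustering step, a minimal sketch is shown below; the signature-gene expression matrix is simulated, and real data would come from the RNA-seq cohort:

```r
set.seed(1)
# Toy matrix: 20 signature genes x 18 sorted-monocyte samples.
expr_sig <- matrix(rnorm(20 * 18), nrow = 20,
                   dimnames = list(paste0("SIG", 1:20), paste0("sample", 1:18)))

d  <- dist(scale(t(expr_sig)))        # z-score each gene, then sample-to-sample distances
hc <- hclust(d, method = "complete")  # hierarchical clustering of samples
plot(hc)                              # dendrogram; labels would carry subset and disease status
```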
Overall, our results provide further independent evidence that our monocyte subset-specific signatures are consistently accurate across healthy and disease-affected samples.
Monocyte Signatures Are Highly Specific Across All Immune Cells
Although our transcriptional signatures are accurate and specific within monocyte subsets, irrespective of disease state and gene expression platform, they were obtained using gene expression data derived solely from monocytes. Therefore, we asked whether our signatures maintained their specificity and accuracy when compared across all immune cell lineages, including B cells, T cells, and NK cells. This is important, as their direct application to blood or biopsy-derived expression profiles, which contain multiple and diverse cell populations, would otherwise produce confounded results (43). To answer this question, we compared effect sizes for each gene in our monocyte subset signatures across 20 sorted human immune cell types using 6160 transcriptomes from 42 different microarray platforms described before (31). Hierarchical clustering of the Hedges' g effect sizes (see Methods) of the genes in the monocyte subset-specific signatures distinguished the myeloid and lymphoid lineages (Supplemental Figure 3). Further, within the myeloid cluster, both the CD14+ and CD16+ subsets clustered separately from other myeloid cells (Supplemental Figure 3). Next, we calculated cMSS and ncMSS scores for each sample and evaluated their ability to accurately distinguish each subset among all immune cells. As expected, cMSS scores were significantly higher in classical monocytes compared to nonclassical monocytes (t-test p = 1.2e-7) and other immune cell types (t-test p < 2.2e-16, Figure 4A). Similarly, ncMSS scores were significantly higher in nonclassical monocytes compared to classical monocytes (t-test p = 4.6e-9) and other immune cell types (t-test p < 2.2e-16, Figure 4B).
Monocyte Signatures Reveal Changes Associated With Disease and Treatment
Next, we hypothesized that the monocyte subset-specific signatures and their corresponding scores, cMSS and ncMSS, could be used to monitor changes in monocyte subset levels associated with disease. To test this hypothesis, we analyzed transcriptome profiles of whole blood samples (GSE93272) from healthy controls (n=43) and patients with RA (n=232) (44). We computed cMSS and ncMSS for each sample. We observed a significant increase in both cMSS (p = 4.2e-4) and ncMSS (p = 4.4e-8) in RA patients compared to healthy controls, in line with the increased monocyte proportions previously observed in patients with RA (Figure 5A) (45). Finally, we assessed whether our signatures could detect changes in cellular composition induced by treatment. To this end, we analyzed a longitudinal dataset, GSE80060, profiling whole blood samples of patients affected by sJIA before and after treatment with canakinumab, a monoclonal antibody against IL-1 beta. Changes in levels of circulating monocytes in sJIA have been described, with higher levels during active disease (46, 47). When comparing pre- and post-treatment samples, we measured a significant decrease in both the classical (p = 2.9e-12) and nonclassical (p = 1.5e-6) signatures post-treatment, irrespective of response (Figure 5B). Our results indicate that our signatures can be used to specifically monitor changes in monocyte subsets occurring during disease and treatment.
Validation of Monocyte Signature Genes as Novel Cell Surface Markers for Subset Quantification Using Cytometry
Cell type-specific gene signatures have been shown to enable accurate in silico estimation of corresponding cell types from expression data of mixed-cell samples, such as whole blood or peripheral blood mononuclear cells (PBMCs) (31). We therefore tested whether these signatures could be used to accurately quantify monocyte subsets within human samples by using publicly available expression profiles from healthy human PBMCs with paired flow-cytometry [GSE65316 (48)]. We indeed found the cMSS to be strongly and significantly correlated with cytometry-measured monocyte proportions across all samples (r = 0.69, p = 6.7e-4; Figure 6). Next, we hypothesized that our large-scale transcriptome analysis would enable identification of cell surface markers to improve cellular phenotyping by cytometry using FACS or CyTOF. To test this hypothesis, we selected an extended set of genes that were significantly highly expressed in either classical or non-classical monocytes in both discovery and validation samples with an absolute effect size ≥ 1, had documented surface expression, and for which an antibody for follow-up protein quantification by cytometry was commercially available (Supplemental Table 5). We selected CD114 (gene name: CSF3R, ES=-1.28, p=2.84e-7), CD32 (gene name: FCGR2A, ES=-1.21, p=9.48e-7), CD36 (ES=-1.17, p=3.63e-6), and IL17RA (ES=-1.10, p=4.39e-6) as markers with higher expression in classical monocytes, and SIGLEC10 (ES=4.93, p=2.2e-70) as a marker with higher expression in nonclassical monocytes compared to classical monocytes (Figures 7A, B).
We profiled cell surface proteins corresponding to these differentially expressed genes in PBMCs from healthy adult donors (n=8) and pediatric patients with sJIA (n=12). In both healthy adults and sJIA subjects, expression of the selected markers was higher in the corresponding monocyte subset, as predicted by the transcriptome analysis (Figures 7C, D and Supplemental Figure 5). All of our predicted proteins had significantly different levels on the cell surface between classical and nonclassical monocytes in both healthy controls and sJIA patients (p < 0.05).
In summary, we have developed monocyte subset-specific robust and parsimonious gene expression signatures. Our results highlight their specificity and accuracy irrespective of technical and biological confounders and show their utility in translational applications. More importantly, our approach demonstrates that genes differentially expressed between two groups despite biological and technical heterogeneity across multiple independent datasets can be robust differentiators of the two groups at the protein level as well.
DISCUSSION
Here, we describe the generation and application of robust and parsimonious gene expression signatures to accurately and specifically quantify changes in monocyte subset levels from existing publicly available datasets. The analysis presented here builds upon an existing framework that was previously applied to create a new and unbiased basis matrix for cell-mixture deconvolution of gene expression data. By applying this computational framework, which integrates existing heterogeneous public expression data from sorted human monocytes, we identified gene signatures for the classical and nonclassical subsets, each consisting of ten over-expressed genes. We then validated our signatures using transcriptome profiles of 6661 sorted immune cell samples across 168 studies, including samples from patients with various diseases, to demonstrate their generalizability despite biological, clinical, and technical heterogeneity. In addition, we profiled two independent validation cohorts by RNA-seq and flow-cytometry, respectively, to validate our signatures at the individual gene level. Our current work differs from previous efforts in several meaningful ways. First, our previous work on deconvolution aimed at building a basis matrix, immunoStates, that would account for 20 immune cell types. As a result, immunoStates is composed of more than 300 genes and is applied in its entirety to deconvolve a sample of interest. Other studies, such as the one by Monaco et al. (36), established deconvolution approaches to quantify different cell types with high accuracy, as they rely on larger gene sets and statistical approaches tailored toward a specific data type or platform (e.g., RNA-seq).
In contrast, here we focused on creating cell-type specific signatures consisting of only a small set of genes to be used independent of any other signatures or deconvolution framework, while retaining high specificity and accuracy across multiple platforms. This strategy allows the researcher to specifically measure our parsimonious signature genes in a sample of interest using targeted assays such as qPCR or nanoString, which can be useful and cost-effective in pilot studies and clinical settings and is therefore complementary to tailored deconvolution approaches (36).
Second, our current gene selection strategy was chosen to prioritize genes that could be easily used as individual biomarkers for cytometry-based assays. Such strategies take into account the directionality of the markers and their expression difference, to increase the likelihood of validation by flow-cytometry. Indeed, a number of genes in our signatures correspond to surface markers with commercially available antibodies. Using this set of markers, we confirmed the subset specificity of the markers in both healthy and disease samples at the protein level. Among the markers identified and validated by flow cytometry, only CD16 and CD36, in addition to CD14 and HLA-DR, have been commonly used to identify monocyte subsets (49,50). The additional markers we identified could thus be potentially useful in further probing the heterogeneity of monocyte subsets, as revealed by recent studies utilizing the high dimensionality of mass cytometry and single cell sequencing to tease out the heterogeneity of the human monocyte population (42,43,51).
Finally, our analysis leveraged only samples profiled from healthy individuals, whereas our previous work included expression data from disease-affected samples as well. Our rationale for this decision was based on having on average 22 studies per targeted cell type in our discovery set, which roughly triples both the statistical power and the amount of accounted heterogeneity in this study compared to our previous work [8 studies per cell type, (31)]. We hypothesized these increases would result in more robust signatures, and our validation cohort confirmed the accuracy of our signatures irrespective of disease state. Analysis of our cohort also revealed that the levels of our signature scores in intermediate monocytes were intermediate between the classical and non-classical subsets. It has long been debated whether intermediate monocytes exist as a stable subset or represent a transitional state between classical and non-classical monocytes (52). Another alternative, not necessarily mutually exclusive, is that intermediate monocytes may comprise a heterogeneous cell mix (42, 53). Our results are consistent with evidence that human intermediate monocytes, like similar cells in mice (54), are an intermediate, transitory subset between classical and non-classical monocytes rather than a fixed, independent population (55). To test this hypothesis, we generated an additional expression signature from our discovery data, identifying genes that could specifically distinguish intermediate monocytes compared to all other subsets. Using the same criteria applied for the other signatures, we identified 10 genes intended to distinguish intermediate monocytes from all other subsets (ATG2A, ATP5O, DDX39A, EVL, GPR183, LPCAT1, POU2F2, TSC22D4, ZNF14, ARHGAP27; Supplemental Figure 6). Unlike our previous signatures, however, this signature did not yield a good distinction of intermediate monocyte samples, either in our own discovery set or in our validation cohort. Similarly, although gene expression analyses, both from microarray and from single-cell RNA-seq data, generally support the concept of three genetically separate monocyte subsets, the exact nature of the intermediate subset, and its relationship to the other monocyte subsets, could not be fully determined (56).
Our work has several limitations. First, our discovery data sets consist exclusively of microarray datasets, which can limit the number and type of cell-type specific markers that can be discovered compared to sequencing-based transcriptome profiling. This is particularly relevant in the context of intermediate monocytes. For example, HLA-DM has been identified as a strong discriminating marker that separates the intermediate subset from classical and non-classical (37). However, this marker is not usually profiled on microarrays, limiting our discovery potential. Secondly, we identified our signature genes by simply selecting the top 10 from a ranked list. While this approach is simple and intuitive, it prevents the consideration of other high-ranking genes as potential biomarkers. This potential can be explored in future work, where additional gene set selection strategies can be applied to this data.
Finally, to increase the robustness and power of our signatures, our work leverages solely transcriptomic data, without accounting for differences occurring post-transcriptionally that may affect final protein levels (57). This concern is especially relevant when translating our results into cytometry/staining-based assays that leverage protein expression of surface markers. To date, high-throughput proteomics data are limited by technical constraints on the number of protein markers that can be simultaneously profiled on a single sample. Advances in mass-cytometry-based techniques (58), as well as proteomics (59), can in principle extend our ability to profile multiple markers expressed in a single cell, but at scales substantially lower than transcriptomics-based assays. In conclusion, we present a collection of robust and parsimonious gene expression signatures that can distinguish and quantify monocyte subsets across disease-affected samples and can be used to identify cytometry biomarkers. Our work provides several applications and highlights the potential for our signatures and markers to be used in clinical and translational settings.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository and accession numbers can be found in the Supplemental Material.
AUTHOR CONTRIBUTIONS
FV, EM, and PK designed research, interpreted the results, and wrote the manuscript. FV and LZ performed bioinformatic analysis. CM contributed to experimental design, drafting of and revising manuscript. CM, SH, NR-a, and SM performed experiments and analyzed data. MN and JG performed patient sample collection and processing. EM and PK supervised the study. All authors contributed to the article and approved the submitted version.
"year": 2021,
"sha1": "3e346f8db30589dcf1e4234b19afa2e5fd48cda2",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2021.659255/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3e346f8db30589dcf1e4234b19afa2e5fd48cda2",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Synthesis of Azolines and Imidazoles and their Use in Drug Design
Heterocycles are very important functional groups, especially in medicinal chemistry. They are not only pivotal in the synthesis of drugs, but also form part of the structure of a diversity of drugs, vitamins, natural products and biomolecules. The importance of azolines and imidazoles among heterocycles lies in the fact that their derivatives are known for analgesic, antifungal, antihypertensive, antiobesity, anticancer, and other biological activity. Additionally, they can inhibit butyrylcholinesterase, acetylcholinesterase, carboxylesterase and quorum sensing. Due to these properties, the present contribution reviews the use of azoline and imidazole moieties in recent drug synthesis based on classic as well as non-classic methods, the latter employing microwave and sonication energies. Also included is the preparation from oxazoline of nanostructured material having biomedical applications. Hence, the present focus is on the synthesis of azolines and imidazoles that are directly involved in the preparation of drug precursors and potential drugs.

Compound 1 acts as a COX-2 inhibitor precursor [11]; compound 2, methoxy-idazoxan/RX821002, as an α2-adrenergic antagonist [1]; compounds 3 and 7 as quorum sensing inhibitors [12,13]; compound 4, epi-oxazoline halipeptin D, as a potent anti-inflammatory agent [14]; compound 5, (-)-spongotine A, as an antitumor agent with moderate cytotoxicity against human leukemia K-562 [15,16]; and compound 6, brasilibactin A, as a cytotoxic siderophore [17].

Synthesis and pharmacological activity of azolines

The synthetic routes for building azolines can be divided into classic methods that use conventional energy and non-classic methods accomplished with microwave (MW) or ultrasound energy.

Classical methods of synthesis: One commonly used method for the synthesis of azolines starts from an aldehyde and a source of heteroatoms, usually an ethylenediamine for imidazolines, an ethanolamine for oxazolines, and a cysteamine for thiazolines. After obtaining azolidines in this way, oxidants (e.g., I2, tert-butyl hypochlorite (t-BuOCl), N-chlorosuccinimide (NCS), N-bromosuccinimide (NBS) and N-iodosuccinimide (NIS)) are utilized to achieve azolines. For instance, NCS and t-BuOCl have been applied to the total synthesis of (-)-spongotine A, 5 [15]. In other reactions, oxidation with NCS gave an 88% yield, but spongotine was obtained at 52% yield (Scheme 1).

Scheme 1: Total synthesis of (-)-spongotine A, which displays moderate cytotoxicity against human leukemia K-562.
Introduction
Heterocycles are a very important functional group, especially in medicinal chemistry, because they constitute a common structural moiety in many drugs [1-4]. Azolines and imidazoles are key groups within heterocycles, as they are not only a cornerstone of the synthesis pathway of these compounds, but also form part of potential [5] and marketed drugs [2]. Furthermore, in some cases azolines and imidazoles are the pharmacophore in drugs [6,7]. Consequently, azoline and imidazole synthesis is of great importance in drug research.

The various interesting reviews of azolines and imidazoles [8-10] either focus on synthetic methods for obtaining these compounds or on their biological activity, or on both of these topics but with greater emphasis given to one of them [10]. In the current review, we focus on the synthesis of azolines and imidazoles that are directly involved in the preparation of drug precursors and the synthesis of potential drugs. We first discuss azolines and then imidazoles.
Azolines
Azolines are five-membered heterocycles with one double bond in the ring and two heteroatoms at positions 1 and 3, one of which is always nitrogen. In imidazolines the second heteroatom is also nitrogen, in oxazolines it is oxygen, and in thiazolines it is sulfur. Figure 1 shows some imidazolines, oxazolines and thiazolines that are either potential drugs or have important biological activity.
NBS-mediated reactions have in some cases required some of the longest reaction times. Nevertheless, NBS was used to synthesize 30 imidazoline inhibitors of cyclooxygenase-2 (COX-2), with consequent anti-inflammatory activity [11]. Scheme 2 shows an example of the synthesis of a COX-2 inhibitor precursor 13, which is then oxidized to the COX-2-inhibiting imidazole [11]. Amino alcohols and 4-ethoxy-4-iminobutanenitrile have also been employed to obtain new enantiomerically pure 2-cyanoethyloxazolines 18 in one step with good to excellent yields (73-96%). This was accomplished by following a procedure appropriate for the selective synthesis of mono-oxazolines, which display antioxidant, antimicrobial and analgesic activities [18] (Scheme 3). Padmavathi et al. [19] reported a new class of sulfone-linked pyrrolyl oxazolines and thiazolines, 20 and 21, the synthesis of which was carried out with a one-pot methodology using trans-arylsulfonylethenesulfonyl-acetic acid methyl ester and ethanolamine or cysteamine, respectively. In the presence of a lanthanide chloride, SmCl3, the aromatic compounds 22 and 23 were also obtained (Scheme 4). Compounds containing pyrrole and thiazoline possess excellent antimicrobial activity, while those including pyrrole and oxazoline show good antioxidant properties. Altintop et al. [20] synthesized 40 compounds 29 by reacting α-thiophenoxy esters 25 with hydrazine, followed by phenyl isothiocyanate. Afterwards, α-bromoacetophenone derivatives 28 were added to achieve 40 thiazolines having one exo double bond (Scheme 5). The compounds were evaluated as antibacterial or antifungal agents, and one compound was tested for anticancer activity against C6 glioma cells. All compounds exhibited significant antifungal activity against T. harzianum, A. ochraceus, F. solani, F. moniliforme, and/or F. culmorum. The compound bearing 1-phenyl-1H-tetrazole and p-chlorophenyl moieties displayed an inhibitory effect against P. aeruginosa, while the compound having 1-phenyl-1H-tetrazole and non-substituted phenyl moieties (IC50 = 8.3 ± 2.6 mg/mL) was found to be more effective than cisplatin (IC50 = 13.7 ± 1.2 mg/mL). Thiazoline hydrohalides were also synthesized from N-allylthiourea, which is obtained from allyl isothiocyanate 30 and the corresponding amine 31, as shown in Scheme 6. It is known that thiazines are formed in this type of reaction under conditions of polar solvents and high temperature. Under the conditions used here, however, thiazoline hydrohalides 34 were obtained in good yields (Scheme 6) [21]. They were evaluated as acetylcholinesterase (AChE), butyrylcholinesterase (BChE), and carboxylesterase (CaE) inhibitors. Of the 30 compounds evaluated, only 5 were selective inhibitors of BChE, of both BChE and AChE, or of CaE [21].
Non-classic methods of synthesis:
• Microwave energy: When synthesizing 2-aryloxymethyl oxazolines 37 from carboxylic acid derivatives 36 and ethanolamine via MW energy operating at 20% power for 5-10 min, good yields were obtained [22] (Scheme 7). Since the time is very short and the MW power low, this can be considered a very good method (even though the authors did not mention the total MW power). Compounds 37a-n were screened for anti-inflammatory and ulcerogenic activities, finding that compounds 37b (48.2%), 37h (48.5%) and 37l (46.5%) display significant anti-inflammatory activity. On the other hand, these compounds had lower ulcerogenic activity than the standard drugs aspirin and phenylbutazone [22].
• Ultrasound energy: Ultrasound energy has proven to minimize waste, decrease reaction times, and sometimes eliminate the use of solvents in organic synthesis. Since these are good properties for green chemistry [23], its use will undoubtedly increase. Ultrasound was utilized in one study of imidazoline synthesis by reacting aromatic aldehydes and ethylenediamine, with NBS as oxidant and water as solvent, to achieve good to excellent yields in 12-18 min [24] (Scheme 8). The resulting imidazolines 40 were evaluated as monoamine oxidase (MAO) inhibitors, finding that they have activity in the micromolar (µM) range with good selectivity [24]. When applying ultrasound energy to mediate azoline synthesis, we verified the high efficiency of this methodology, attaining imidazoline 40j in 12 min (81% yield) and alkylphenoxy imidazolines in 20 min with moderate to good yields. Our interest in azolines lies in their anti-quorum sensing activity [12,13].
Nanoparticles from azolines
Recently, nanoparticles have come to light as entities with medical applications, including their possible use as carriers in drug delivery to the target site or gene delivery to tumors, as well as contrast agents in imaging [25]. In this context, 2-methyl-2-oxazoline was taken as the raw material for a nanostructural material (PMeOX-silica hybrid nanoparticles) [26] that can be used in biomedical applications.
PMeOX-silica hybrid nanoparticles were prepared using the "grafting to" method and either click chemistry or silane coupling. The first step of the method is the synthesis of the 2-methyl-2-oxazoline polymer functionalized with azide. Afterwards, a water-in-oil microemulsion containing SiO2 is prepared separately, followed by the addition of the oxazoline polymer. Finally, the PMeOX-silica hybrid nanoparticles are obtained.
Imidazoles
Imidazole (1,3-diaza-2,4-cyclopentadiene) is a five-membered organic compound having the formula C3H4N2. As can be appreciated, it has three carbons and two nitrogens, the latter two atoms at positions 1 and 3. Its derivatives include an extensive variety of natural products such as histamine, histidine, biotin, alkaloids and nucleic acids [27].
Various imidazole derivatives had been discovered as early as the 1840s. However, it was not until 1858 that Heinrich Debus carried out the first imidazole synthesis, which was done by using glyoxal and formaldehyde in ammonia to yield imidazole [29]. Although imidazoles are of great pharmacological importance, there is no recent compendium of their current applications in the synthesis of new pharmacologically active compounds.
Pharmacological activity
The imidazole moiety is contained in the backbone of many drugs that have anticancer, antifungal, antibacterial, antitubercular, anti-inflammatory, antineuropathic, antihypertensive, antihistaminic, antiparasitic, antiobesity and antiviral activity. Sharma et al. reported the utility of an imidazole moiety as the precursor of 2-(substituted phenyl)-1H-imidazoles 47, which have antibacterial activity. During the synthesis, the last step involves a direct acylation of a 2-phenylimidazole 45 with a substituted acyl chloride 46 prepared in situ [30] (Scheme 9). Blunden et al. developed an amphiphilic block copolymer capable of self-assembling into polymeric micelles, which was successfully tested as a drug carrier for NAMI-A. This antimetastatic agent, now in Phase II clinical trials, has low cytotoxicity and is inactive against primary tumors. A polymeric form of NAMI-A was synthesized by combining an excess of polyvinylimidazole (Mn,theo = 14,300 g·mol−1) with the Ru complex in the appropriate alcohol. In the last step, a water-soluble block copolymer was designed in order to improve biocompatibility and cell uptake through the formation of micelles. It turns out that, compared to the NAMI-A molecule, NAMI-A copolymer micelles are about 1.5-fold more active on cancer cell lines [33] (Scheme 11). Kantevari et al. synthesized 2-butyl-4-chloro-1-methylimidazoles 60 with embedded aryl- and heteroaryl-derived chalcones. These compounds were tested as inhibitors of angiotensin converting enzyme (ACE). The synthesis strategy involved the Claisen-Schmidt condensation of several aryl or heteroaryl methyl ketones of type 59 with imidazole-5-carbaldehyde 58. This procedure was catalyzed by means of 10% aqueous NaOH in methanol for 3.0-5.0 h at room temperature. When screening all the new compounds with the colorimetric ACE inhibition assay, the most active inhibitors, of type 60, were found to have an IC50 of 2.24-3.60 µM. These values show that some derived chalcones are ~100 times more active than various chalcones and flavonoids of synthetic and natural origin [35] (Scheme 13). Recently, Patil et al. carried out the synthesis of 2,3-disubstituted-imidazolyl-quinazolin-4(3H)-one derivatives 63a,b in good yields. The synthesized compounds exhibited anti-inflammatory and antimicrobial activity. Through in vitro experiments, some 2,3-disubstituted-imidazolyl-quinazolin-4(3H)-one compounds were found to be as active as prednisolone. Interestingly, the presence of electron-withdrawing groups on the quinazolinone nucleus was related to biological activity. An in vivo anti-inflammatory assay showed that compounds having an electron-withdrawing group produced an important pharmacological response. The synthesis is based on the reaction between 6,8-substituted-2-methyl-4H-3,1-benzoxazin-4-ones 61a or 6,8-substituted-2-phenyl-4H-3,1-benzoxazin-4-ones 61b and aminoimidazole 62, in refluxing glacial acetic acid and sodium acetate or in refluxing dry pyridine [36] (Scheme 14).
Conclusions
Based on their wide range of therapeutic activities, azolines and imidazoles are attractive molecules for the challenges that exist in medicinal chemistry. We herein show that azoline and imidazole rings are present as a core structural component in an array of medicinal applications, including antibacterial, antimicrobial, anti-inflammatory, analgesic, antiviral, antihypertensive, antifungal, anticancer, antioxidant and antidiabetic activity. These heterocycles have also been useful as quorum sensing inhibitors, as cytotoxic agents against human leukemia, in the treatment of Alzheimer's disease, and as inhibitors of BChE, AChE and CaE. Whereas some of the methods for obtaining these heterocycles are based on conventional strategies, others employ microwave and ultrasound energies. Finally, we have mentioned the use of oxazolines for the preparation of nanostructured material with biomedical applications.
"year": 2016,
"sha1": "d240589beae4d18fc651f6219628a139a287be7b",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.4172/2161-0444.1000400",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "fbd434f0de572e420c8f6694e8e5e8d8ddedf09b",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Effects of early preventive dental visits and its associations with dental caries experience: a cross-sectional study
Objectives Limited information is available about preventive dental visits (PDVs) before seven years of age among children in China. This study aimed to examine the early PDV rate, identify the impact of PDV on dental caries and untreated dental caries, and explore the factors related to PDV among sampled Chinese children under seven years old. Methods A cross-sectional survey was conducted in five selected primary health care facilities in Chengdu, China, from May to August 2021. Parent–child dyads attending regular systematic medical management were recruited to participate. Children's dental caries were identified through dental examinations and documented with the decayed, missing and filled teeth index (dmft) by trained primary care physicians. Dental-related information was collected through a questionnaire. Zero-inflated negative binomial (ZINB) regression was used to test the effect of early PDV on the dmft value, and logistic regression was used to analyse factors influencing early PDV. Results A total of 2028 out of 2377 parent–child dyads were qualified for analysis. Half of the children (50.4%) were male, with a mean age of 4.8 years. Among all the children, 12.1% had their first dental visit for preventive purposes, 34.4% had their first dental visit for symptomatic purposes, and more than half had never visited a dentist. The results showed that a lower dmft value (adjusted OR: 0.69, 95% CI: 0.48–0.84), a higher rate of being caries-free (aOR: 6.5, 95% CI: 3.93–10.58), and a lower rate of untreated dental caries (aOR: 0.40, 95% CI: 0.21–0.76) were associated with early PDV utilization. A higher rate of PDV was positively associated with better parental oral health behaviours (aOR: 2.30, 95% CI: 1.71–3.08), better parental oral health perception (aOR: 1.23, 95% CI: 1.06–1.32), fathers who had no untreated caries (aOR: 0.68, 95% CI: 0.47–0.97), higher family socioeconomic status (aOR: 1.09, 95% CI: 1.04–1.16), and dental health advice received from well-child care physicians (aOR: 1.47, 95% CI: 1.08–2.00). Conclusions Early PDV was associated with a lower prevalence of dental caries and untreated dental caries among sampled children younger than seven in Western China. Underutilization and social inequities existed in PDV utilization. Public health strategies should be developed to increase preventive dental visits and eliminate social disparities that prevent dental care utilization. Supplementary Information The online version contains supplementary material available at 10.1186/s12903-022-02190-6.
Introduction
Dental caries is one of the most common chronic diseases among children [1]. Untreated caries in deciduous teeth is the 10th-most prevalent condition, affecting 621 million children worldwide, especially in low-income families [2,3]. Multiple risk factors were associated with early childhood caries (ECC), such as a high number of cariogenic oral bacteria, frequent intake of high sugar food, insufficient saliva flow, lower fluoride exposure, poor oral hygiene, and low socioeconomic demographic status [4]. Therefore, use of comprehensive methods for ECC prevention may be more effective.
The specific preventive interventions can be categorized into passive and active interventions [5]. A well-known passive preventive intervention is community water fluoridation. Compared to passive prevention, active preventive intervention requires a person to change their lifestyle by adopting or giving up particular habits [6].
ECC remains a substantial public health issue in China, as in other countries in the world. According to the Fourth Chinese National Oral Health Survey, the prevalence of dental caries was 50.8%, 63.6%, and 71.9% for 3-, 4-and 5-year-olds, respectively [18]. Untreated dental caries were reported in 16.2% to 83.6% of different cities [19,20]. Children's oral health has not been effectively improved in recent decades [18]. In 2019, China's National Committee of Health issued the Medium-to-Long Term Plan for the Prevention and Treatment of Chronic Diseases [21]. In this plan, early dental health management for children was proposed for the first time.
There have been an increasing number of Chinese studies on children's dental service utilization in recent years. All-cause dental care utilization among children was reported to range from 17.5 to 45.5% [22][23][24]. However, most of these studies indicated that more dental utilization was associated with a higher prevalence of dental caries. One reason to explain this was that those studies could not distinguish the time sequence between dental utilization and dental caries onset within a cross-sectional design. The other reason was that those studies did not differentiate between dental services for prevention and treatment purposes. This information is not good for supporting public health strategies targeting ECC.
This study aimed to investigate the current status of early PDV utilization among children under seven years old in western China. It seeks to address the following research questions: 1) what is the rate of early PDV among young children in a metropolitan city in Western China; 2) can early PDV play a positive role in reducing ECC and untreated dental caries; and 3) what family background characteristics are associated with PDV?
Methods

Study participants
A cross-sectional survey was conducted in selected primary care settings in Chengdu, China, from May 2021 to August 2021. Following a judgemental sampling method, we selected primary care facilities that each cover at least 80 thousand persons in their serving area (the average coverage is approximately 30 to 50 thousand) and have two or more paediatric primary care physicians (PCPs) working in the facility [25]. Nine facilities satisfied the selection criteria, and five facility directors agreed to participate. Parent–child dyads were invited to participate by fifteen trained PCPs during regular systematic medical management. Children under seven years old who had no systemic disease were eligible for the study. Children whose parents or other caregivers refused to participate were excluded from the study.
The sample size was calculated via the formula n = Z^2 × p(1 − p) / E^2, where Z is the value from the standard normal distribution reflecting the confidence level that will be used (Z = 1.96 for 95%) and E is the desired margin of error (E = 0.05). p is the caries prevalence referred from the fourth national oral health survey [18]; the value of p was 0.508. Therefore, the sample size should be 385 in each facility. The minimum total sample size was 385 × 5 facilities = 1925. Considering a 10% error rate, the total sample size should be 2118. During the systematic examination, children's parents or other caregivers were invited to enrol in the survey by the PCP in the office. When they agreed, they were asked to scan a QR code to electronically sign a consent form and subsequently complete an online questionnaire. The questionnaire included demographic status, parental oral health-related behaviours, and the oral health status of the parents. A pretest and automatic skip logic were used to avoid logical errors in the questionnaire. Then, a dental examination was performed for the enrolled children.
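A quick check of the sample-size arithmetic above (a sketch; the rounding convention is an assumption):

```r
z <- 1.96; p <- 0.508; e <- 0.05
n_facility <- ceiling(z^2 * p * (1 - p) / e^2)  # 385 participants per facility
n_total    <- ceiling(n_facility * 5 * 1.10)    # five facilities plus a 10% error rate
c(n_facility, n_total)                          # 385, 2118
```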
Primary outcome: dental caries and untreated dental caries
Children's dental caries were measured with the decayed, missing and filled teeth (dmft) index. Considering that dental caries were examined by PCPs, the decayed, missing and filled (DMF) criteria were used to identify evident caries. Compared to the International Caries Detection and Assessment System (ICDAS) and the Caries Assessment Spectrum and Treatment (CAST), the DMF may underestimate the occurrence of caries lesions in individuals but is the fastest method to apply [26]. The dental examination was performed and documented by the PCP during systematic medical management. For children under 18 months, the oral examination was conveniently performed in a "knee-to-knee" position. For children nearing two years of age, the PCP performed the dental exam with a flashlight while the child sat on a baby stool assisted by parents. For children over 30 months old, the PCP performed the dental examination with a flashlight with the child lying on the examination bed.
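For illustration, the dmft tally reduces to counting teeth coded as decayed, missing (due to caries), or filled; the codes and 20-tooth layout below are assumptions for the sketch, not the study's examination form:

```r
# One child's 20 deciduous teeth: "d" decayed, "m" missing due to caries,
# "f" filled, "s" sound.
teeth <- c("s","s","d","s","f","s","s","m","s","s",
           "s","s","s","s","s","s","d","s","s","s")
dmft <- sum(teeth %in% c("d", "m", "f"))  # dmft = 4 for this child
```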
PCPs were required to perform a comprehensive evaluation of children following the guidelines of the National Basic Public Health Service for Children in China. This evaluation includes children's physical development, vision health, and oral health. The PCPs who performed the evaluation were also expected to give appropriate advice to parents. All the PCPs had studied basic oral health during their college education and in continuing education after graduation. In the present study, to strengthen the PCPs' knowledge and update them on new paediatric dental knowledge, licensed paediatric dentists conducted two-hour face-to-face training for all participating PCPs [27]. The training covered oral health knowledge and dental caries examination skills. The purpose of the training was to deepen their dental examination ability and ensure the accuracy of dental examinations. All PCPs passed a ten-question dental examination quiz after the training [28]. The average score was 81 out of 100, and all individual scores were above 70.
Untreated dental caries were assessed both by examination and by the questionnaire. If dmft > 0, parents were asked in the questionnaire, "Did you take your child to treat the dental caries?" ("Yes" or "No"). A child who had dental caries but had received no treatment was defined as having untreated dental caries.
Secondary outcomes: early preventive dental visits
Information on early preventive dental visits was collected using a self-report questionnaire. The first dental visit experience and age were collected with the questions "Did you ever take your child to visit a dentist?" ("Yes" or "No") and, if "Yes", "At what age did you take him or her to the first dental visit?". We asked the question "Why did you take your child to the dental office at the first dental visit?" to assess the information for early preventive dental care. There were four options to answer this question: 1) had preventive care, e.g., regular check-ups, fluoride varnish, or pit and fissure sealant; 2) found the onset of tooth decay; 3) the child reported a toothache or chewing issue; or 4) for other dental issues. If parents or other caregivers chose option 1, we considered the child to have had an early PDV. PDV was used as the independent variable in the first analytic step and as the outcome variable in the second. If they chose any of options 2 to 4, we considered the child to have had a symptomatic dental visit (SDV). If they selected "No" in the first question, we considered the child to have had no dental visits.
Covariates
We adjusted for selected covariates associated with PDV in order to explore both expected and unexpected factors. All covariates were selected according to previous studies. The covariates included children's demographic information, parental oral health status, parent-performed dental hygiene behaviours, and family functioning.
Demographic information collected in this study included the child's sex, age, family income level, living area, and distance from a dental clinic with paediatric dentistry. According to the Average Wage of Urban Workers of 2020 published by the Sichuan Provincial Bureau of Statistics [29], the average annual wage was Renminbi (RMB) 74,520, equivalent to $11,291 per person at the 2020 exchange rate of 6.6. The average annual family income was therefore estimated to be approximately $22,582 in families with two working parents. In this study, we defined "low family income" as a family income lower than the average level of $22,582; "middle family income" as up to two times the average level ($45,164); and "high family income" as higher than three times the average ($67,746). The family living area was recorded as "urban" or "rural". Dental clinic distance from home [30] was measured by driving time (within half an hour or more than half an hour).
Parental dental health perceptions and behaviours may impact children's dental status and dental utilization [31,32]. Parents' caries status was measured with the following question: "Does either parent have dental caries left untreated? (Nobody, mother, father)". The American Academy of Pediatric Dentistry recommends "cleaning baby's mouth and gums with a soft cloth or infant toothbrush at bath time before the teeth erupt, and parents should brush children teeth and supervise the brushing for school-age children until they are 7 to 8 years of age" [33]. We used the following two questions to represent parents' dental care behaviour and perception. Dental care behaviour was represented by the question: "When did you start to brush or wipe your baby's gums/teeth?" ("Before one year old" or "After one year old"); "Before one year old" represents better parental dental behaviour. Parental dental health perception was represented by the question: "At what age do you think your child could independently brush their teeth without parental supervision?", with options of 2-8 years; an older age indicates better parental dental perception and knowledge. Parents' sources of dental knowledge [24] were also collected (the internet, books, friends, kindergarten teachers, and PCPs).
We also controlled for toothache status, which has been reported to be an important indicator of dental health utilization among children in China [22]. The question was "Did your child have a toothache in the past six months?" ("Yes" or "No").
Statistical methods
A three-step analysis was performed to achieve the study aims. First, descriptive information on the sampled population was summarized. Chi-square analysis was used to identify differences in the proportion of PDV and the other service types across subgroups. Second, to test the PDV effect on dmft, the zero-inflated Poisson model (ZIP) and zero-inflated negative binomial model (ZINB) [35] were employed because the dmft index was not normally distributed (Kolmogorov-Smirnov test, P < 0.001) and did not meet the assumptions of conventional regression models. The Vuong test [36] was used to determine whether estimating a zero-inflation component was more appropriate than a standard count model. The variance inflation factor (VIF) was calculated to assess multicollinearity across independent variables before the ZIP and ZINB models were fitted. The Akaike information criterion (AIC) and the Bayesian information criterion (BIC) were used to report model fitness, with smaller values indicating better fit. Then, logistic regression was used to test the PDV effect on untreated dental caries among the children who already had dental caries. Third, stepwise logistic regression was used to test the factors associated with PDV.
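The analyses were run in Stata (see the following paragraph), but as a rough illustration the same modelling step could be sketched in Python with statsmodels; the dataset and column names here are hypothetical, and the Vuong statistic is not built into statsmodels and would have to be computed from the paired log-likelihoods.

```python
# Hedged sketch of step 2: VIF check, then ZIP vs. ZINB models of dmft counts.
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.count_model import (
    ZeroInflatedPoisson, ZeroInflatedNegativeBinomialP)
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("children.csv")                        # hypothetical data
X = sm.add_constant(df[["pdv", "age", "sex", "income"]])
y = df["dmft"]

# Multicollinearity check (the paper reports VIFs of 1.02-1.31)
vifs = {c: variance_inflation_factor(X.values, i)
        for i, c in enumerate(X.columns) if c != "const"}
print(vifs)

zip_fit = ZeroInflatedPoisson(y, X, exog_infl=X, inflation="logit").fit()
zinb_fit = ZeroInflatedNegativeBinomialP(y, X, exog_infl=X,
                                         inflation="logit").fit()

# Smaller AIC/BIC indicates better fit; in the paper ZINB won on both criteria.
print("ZIP :", zip_fit.aic, zip_fit.bic)
print("ZINB:", zinb_fit.aic, zinb_fit.bic)
```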
Statistical significance was set at 0.05. All analyses were performed by STATA 14.0 (Stata Corporation, College Station TX, USA).
Descriptive information
A total of 2377 parent-child dyads were invited to the survey. A total of 135 children older than seven years and 214 parents who did not complete the questionnaire were excluded as missing completely at random. Finally, 2028 (85.3%, 2028/2377) children were included for analysis. Among the total population (Table 1), the mean age of the children was 4.8 years (SD: 1.18), 50.4% (N = 1023) were male, and 39.9% (809) had siblings. Of the total population, 42.2% (855) of children had dental caries, the mean dmft was 1.51 (range 0-17), and 18.1% (368) had toothache experience within six months. A total of 12.1% (245) of children had their first dental visit for prevention, 34.4% (698) had their first dental visit for treatment, and 53.5% had never visited a dentist (P < 0.01). The mean age at the first visit was 5.3 years for SDV and 4.8 years for PDV. Only 1.1% (23) of children had a PDV before three years of age.
Effect of PDV on dental caries level and untreated dental caries
The collinearity test before the regression modelling showed that the VIFs of the independent variables ranged from 1.02 to 1.31, indicating no collinearity. ZIP and ZINB were both performed. Since the Vuong test statistic was 12.74 for ZIP and 10.73 for ZINB, zero-inflated regression was more appropriate than the standard count models for this dataset. According to the AIC and BIC of the two models (ZIP: 6057.9 and 6237.6; ZINB: 5694.5 and 5879.8), ZINB fit the data better. Table 2 shows that preventive dental service utilization was significant in both parts of the ZINB model (p < 0.05), implying a lower level of dental caries among children who had a PDV (aOR: 0.60, 95% CI: 0.58-0.82) and more caries-free cases among children who had a PDV (aOR: 5.21, 95% CI: 3.59-7.57) after covariates were controlled.
Among children who had dental caries, the association between untreated dental caries and PDV was estimated by multivariable logistic regression. As shown in columns 3-5 of Table 2, Model 1 gives the crude odds ratio of the PDV effect on untreated dental caries. Model 2 added to Model 1 the covariates of child sex, age, family child count, dmft, toothache, age at which toothbrushing started, and age at first dental visit. Model 3 was the final model, adding to Model 2 the covariates of parents' perception of the age at which supervised toothbrushing could stop, whether the mother or father had untreated dental caries, children's main caregivers, rural or urban living area, parents' marital status, mother's age and education, and family income. Model 3 showed that preventive dental utilization was steadily associated with lower untreated dental caries (aOR: 0.4, 95% CI: 0.21-0.76). (Table 2 presents the ZINB model of dmft in columns 1-2, with a zero-inflated logit part predicting caries-free status and a negative binomial part modelling dmft counts, alongside Models 1-3 for untreated dental caries in columns 3-5; full tables are given in Additional file 1: S1-S2.)
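A sketch of this nested-model strategy follows; the column names and covariate coding are assumptions for illustration and may differ from the study's actual variables.

```python
# Hedged sketch of Models 1-3 for untreated caries among children with caries.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("children.csv")
carious = df[df["dmft"] > 0]              # restrict to children with caries

models = [
    "untreated ~ pdv",                                              # Model 1
    ("untreated ~ pdv + sex + age + n_children + dmft + toothache"
     " + brush_age + first_visit_age"),                             # Model 2
    ("untreated ~ pdv + sex + age + n_children + dmft + toothache"
     " + brush_age + first_visit_age + parent_perception"
     " + parent_untreated + caregiver + urban + marital"
     " + mother_age + mother_edu + income"),                        # Model 3
]
for formula in models:
    fit = smf.logit(formula, data=carious).fit(disp=False)
    or_pdv = np.exp(fit.params["pdv"])
    lo, hi = np.exp(fit.conf_int().loc["pdv"])
    print(f"OR(pdv) = {or_pdv:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```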
Factors associated with early preventive dental visits
Among all the children (Table 3), multivariable logistic regression showed that having a father without untreated caries (aOR: 0.7, 95% CI: 0.47-0.97), a longer distance from the dental clinic (aOR: 4.8, 95% CI: 3.54-6.57), and dental health knowledge from well-child care physicians (aOR: 1.5, 95% CI: 1.08-2.00) were associated with PDV, in addition to younger child age, single-child families, better parental oral health perception, and higher socioeconomic status. On the other hand, the relationship was not significant for children's sex, caregivers, dental health information from the internet, books, friends, or schoolteachers, or parental marital status.
Discussion
The primary finding of this study is that early PDV was associated with a lower rate of dental caries and of untreated dental caries among children younger than seven in western China. The findings also indicate that early PDV was underutilized in the study sample and that social inequities existed in preventive dental care utilization: a lower rate of PDV was associated with children who lived in families with more children, poorer parental oral care behaviours, fathers with untreated caries, and lower socioeconomic levels. This study identified that the rate of first dental visits for prevention purposes was 12.1% among children younger than seven in Chengdu, China, whereas the rate of preventive dental care utilization was 24.3% in Beijing [24] and the average all-cause dental utilization rate at the national level was 17.6% [22]. The difference could be explained by socioeconomic-related inequality in dental utilization across places [37]. In addition, we identified that a higher education level of the mother, higher family income [14], better parental oral health perception and behaviours [15], and being a single child in the family [38] were associated with a higher rate of PDV, which is consistent with earlier research. This finding indicates that there were social inequalities in utilizing early preventive dental care in western China. The children who lived in families with more highly educated mothers and higher family income had a significantly higher rate of preventive dental care utilization in the present study, consistent with previous studies [39][40][41]. Many social determinants of health tend to cluster among individuals living in underprivileged conditions and to interact with each other. According to the framework for tackling social determinants of health, inequities [42] are important predictors of health, especially in developing countries. More comprehensive actions should be taken to eliminate the social disparities of dental care among young children; for example, future work should clarify the mechanisms by which social determinants generate oral health inequities and provide a framework for evaluating which social determinants are the most important to address among children in different areas.
The findings revealed that health advice from PCPs had a positive effect on children's PDV, which is consistent with previous international studies [43]. This result highlights the potential oral health benefits of the near-universal coverage of the National Basic Health Service Program in China, which covered 92.7% of children under seven in 2018 [44]. Dental interventions delivered by PCPs have been reported to be rare in China until now. More research should evaluate the advantages and cost-effectiveness of reducing caries by controlling common risk factors from the perspective of primary care. In addition, we unexpectedly found that higher PDV was associated with living farther from dental clinics. Similar to patients who travel to more distant places to find a specific dentist for orthodontic and dental implant services [45], this unexpected result may reflect an extreme imbalance between the paediatric dentistry supply and the demand for preventive dental care. Parents who lived near the central city could access dental care knowledge through the Internet or social media, yet a shortage of paediatric dentists broadly exists in China [46]. Children who live in suburban areas may travel a long distance to access preventive dental care in the central city. Therefore, according to these results, we propose that Chinese health policymakers consider improving preventive dental visits by encouraging interdisciplinary cooperation between primary care and paediatric dentistry. In the long term, however, more paediatric dentists are needed to satisfy children's dental care needs.
There are several limitations to this study. First, we could not observe continuous preventive dental service utilization, which would be a better independent variable than a one-time measure [47]. Panel data should be collected in future studies to evaluate the results more accurately. Second, hidden caries may have been missed because examinations by PCPs were performed without mouth mirrors, resulting in an undercount of dental caries. This may cause the association between dental caries and PDV to be underestimated. Third, sample selection bias was inevitable. The sample can only partially represent the children who lived around the large primary care centre in a metropolitan city in western China. In addition, disparities between urban and rural areas could not be identified, and the true differences may be larger. Fourth, we used a single parental-report question to measure toothache, considering the time cost of responding, which may underestimate the actual proportion of children who suffer from toothache [48]. Future research is warranted to generalize our findings.
Despite these limitations, the present study makes several contributions. Its major strength is that it is the first study to find that early preventive dental utilization was associated with lower dental caries prevalence and lower untreated dental caries, which is inconsistent with some international studies [10,49] and opposite to some domestic studies [18,22]. The explanation is that we identified the time sequence and pattern of dental care services by using the first dental visit and its purpose as the independent variable. Dental caries can occur throughout life, in both the primary and permanent dentitions [50]. Therefore, it is challenging to evaluate the real association between preventive dental care utilization and dental caries prevalence in a cross-sectional study. The advantage of using the first dental visit to identify the time sequence is that the first dental visit occurred before or simultaneously with dental caries onset. The first dental visit is usually not prompted by a dentist's examination but is made spontaneously, relying on parents' oral health knowledge, attitudes, and perceptions. Therefore, the purpose of the first dental visit can appropriately reflect parents' active prevention perception. Notably, this variable is best used in surveys targeting young children, to reduce memory bias.
Conclusions
Early PDV utilization was insufficient, and huge socioeconomic disparities existed among children. Preventive health policy strategies should focus on eliminating these disparities and increasing PDV access to promote dental health for young children. More longitudinal studies should evaluate the effect of PDV on dental health outcomes in China.
Additional file 1. S1. ZINB regression results of the association between tooth decay and preventive dental visits among children. S2. Multivariable regression results of the association between untreated dental caries and preventive dental visits among children who had dental caries.
In Defense of Materialism: The Haitian Zonbi Vs. the Philosophical Zombie
This work contrasts the concept of the philosophical zombie, p-zombie, with its conception as found in Haitian ontology, zonbi, to understand the material nature of consciousness constitution in the multiverse. Whereas the former is utilized to substantiate mind-body dualism, the latter, as I see it, negates the dualism to offer a complete materialist understanding of consciousness constitution, which eliminates the mind-body dualism.
Introduction
This work contrasts the concept of the philosophical zombie, p-zombie, with its conception as found in Haitian ontology, zonbi, to understand the material nature of consciousness constitution. Whereas the former is utilized to substantiate mind-body dualism, the latter, as I see it, negates the dualism to offer a complete materialist understanding of consciousness constitution, which eliminates the mind-body dualist position.
Background of the Problem
Consciousness here refers to subjective awareness of phenomenal experiences (ideology, language, self, feelings, choice, control of voluntary behavior, thoughts, etc.) of internal and external worlds. The academic literature "describes three possibilities regarding the origin and place of consciousness in the universe: (A) As an emergent property of complex brain neuronal computation, (B) As spiritual quality of the universe, distinct from purely physical actions, and (C) As composed of discrete 'proto-conscious' events acting in accordance with physical laws not yet fully understood" [1]. The latter position, (C), represents the ORCH-OR ("orchestrated objective reduction") theory of Stuart Hameroff and Roger Penrose [1], which includes aspects of (A) and (B), and posits that "consciousness consists of discrete moments, each an 'orchestrated' quantum-computational process terminated by… an action [, objective reduction or OR,] rooted in quantum aspects of the fine structure of space-time geometry, this being coupled to brain neuronal processes via microtubules" (pg. 70). In this view, the understanding is that a proto-conscious experience existed in the early universe, panpsychism, and as a result of emergent structures of the brain it (proto-conscious experience, psychion) became embodied and evolved as a result of quantum neuronal computations of "brains". The philosophical zombie, p-zombie, is a thought experiment in the philosophy of mind conceived by mind-body dualists such as David Chalmers to refute (A), which offers a complete materialist account of consciousness, in favor of (B). For such dualists, zombies, beings physically and behaviorally identical to conscious humans, are conceivable, and there is no reason that they would possess conscious experience, which substantiates his mind-body position. In this article, I utilize the notion of zombies as found in Haitian ontology and practice, zonbi, to refute the dualist position in favor of a complete materialist account of consciousness constitution as found in (C).
In other words, Paul C. Mocombe's [3] structurationist theory of phenomenological structuralism, in keeping with the logic of structurationist sociology, assumes practical activity and consciousness, i.e., practical consciousness, to be the basis for understanding human behavior and consciousness in the world. For Mocombe, this consciousness is neither an emergent illusion of the brain, nor one that comes from a simulation run by species-Beings with "higher consciousness" than our human form, nor a God which animates our species-being with its essence that is our human soul/consciousness. The aforementioned positions, a simulation/virtual reality, an emergent property of the mechanical brain, or the essence of God, presuppose the existence of consciousness as fundamental to the multiverse prior to its embodiment as the "I," the Cartesian thinking subject, of the human actor. In Mocombe's theory of phenomenological structuralism, consciousness, like the other forces of the multiverse, is presupposed as a proto-evolutionary force with a subatomic field whose particles become embodied via microtubules of brains. In other words, consciousness is an emergent fifth force of nature, a psychion of a psychonic/panpsychic subatomic field, which evolves via experience of the macro-world as embodied aggregated neuronal energy, in microtubules of brains, recycled/entangled/superimposed throughout the multiverses. Hence, it is not solely an emergent property of the mechanical brain; nor a simulation (virtual reality) wherein sentient beings with consciousness are the pawns in the conscious scenarios of a species-being with higher consciousness; nor is it a product of a God, in the Christian sense, animating it in its consciousness. Even if the latter two (which make up the virtual reality hypothesis in some physics circles) were the case, neither would deny the fact that we are able to know the laws of the "material" simulation by which we become conscious or have consciousness, which appears to be fundamental prior to the time and space of the macro-worlds. I disagree with this virtual reality hypothesis of the multiverse. For me, the multiverse is real and objective, and consciousness is not fundamental to it. Instead, consciousness is, like time and space, an emergent property of the macro-world, which evolves as a force of nature akin to the evolution of gravity. In other words, it becomes an emergent force of nature, which is recycled/entangled/superimposed throughout the multiverse, after the constitution of the macro-world: consciousness is the product of neuronal energies, psychion, of a psychonic/panpsychic subatomic field, the phenomenal properties of which aggregate as matter, via the other forces of nature, and manifest themselves in the multiverse as embodied practical activity, i.e., practical consciousness, of species, whose consciousness, once disaggregated as matter in one universe, either collapses onto other versions of itself that exist in other multiverses, or is recycled into the psychonic/panpsychic subatomic field as particles with phenomenal properties, i.e., qualia. So the phenomenal properties, qualia, of subatomic particles are the binding elements that give unity to consciousness in the brains of sentient beings, which experience this unity as neuronal phenomenal experience of an "I" in order to experience and exist in a material resource framework.
Thus the understanding here is that what accounts for the unity of experience is the psychion, a subatomic particle of an emergent psychonic/panpsychic subatomic field of the multiverse that has phenomenal properties, which gets embodied as neuronal particles of the aggregated brain, which experiences a material resource framework as an "I" whose phenomenal properties, following matter disaggregation, either return to the field or collapse in other worlds where the same matter exists. This Mocombeian [3] materialization parallels the concept and practice of nanm (soul/consciousness) and zonbi found in Haitian metaphysics.
Theory and Method
Paul C. Mocombe's [3,4] structurationist sociology, phenomenological structuralism, which attempts to resolve the structure/agency problematic of the social sciences, builds on the ORCH-OR theory and panpsychism of Hameroff and Penrose, while holding on to the multiverse hypothesis of quantum mechanics and Haitian ontology/epistemology, which the authors reject, the former because it is not "a more down-to-earth viewpoint" [1]. For Mocombe [3], quantum superposition, entanglement, and evidence in Haitian Vodou of spirit possession, which represents ancestors from a parallel world of the earth's, Vilokan, on which we ought to pattern our behaviors and structures, are grounding proofs for the acceptance of the multiple-worlds hypothesis of quantum mechanics within an M-theory interpretation of the constitution of the multiverse [3,5-8]. Within the latter hypothesis, the understanding is that "each possibility in a superposition evolves to form its own universe, resulting in an infinite multitude of coexisting 'parallel' worlds. The stream of consciousness of the observer is supposed somehow to 'split', so that there is one in each of the worlds-at least in those worlds for which the observer remains alive and conscious. Each instance of the observer's consciousness experiences a separate independent world, and is not directly aware of any of the other worlds" [1]. It is within this multiple-worlds hypothesis, in which the worlds are materially real, that Mocombe constitutes the notion of consciousness in the universe according to his theory of phenomenological structuralism. For Mocombe [3], the material world is real and objective, and the informational content of consciousness is epiphenomenal content recycled/entangled/superimposed throughout the multiverse after matter aggregation and experience. Consciousness is an emergent fifth force of nature, a quantum material substance/energy, psychion, the phenomenal property of which is recycled/entangled/superimposed throughout the multiverse and becomes embodied via the microtubules of brains. It is manifested in simultaneous, entangled, superimposed, and interconnecting material resource frameworks as embodied praxis or practical consciousness, which in turn becomes the phenomenal properties of material (subatomic particle energy, psychion) consciousness that is recycled/entangled/superimposed throughout the multiverses via (self-aware or not) practical activity and the phenomenal properties of subatomic particles of a psychonic/panpsychic field, which goes on to produce aggregated matter with consciousness.
Discussion and Conclusion
Mocombe [9] builds on Haitian ontology's notion of the nanm and zonbi to understand the constitution of consciousness as a proto-material substance of the multiverse whose phenomenal properties, once disaggregated from aggregated matter, constitute a psychonic/panpsychic field, the psychion of which gives unity to our experiences and felt experiences once embodied. Hence, like Haitian/Vilokan Idealism, which posits that the nanm, which provides unity to our experiences, is a material thing, a Cartesian I composed of three distinct entities (sometimes more, as Haitian metaphysics suggests that a fourth entity, lwa met tet, may constitute the nanm of serviteurs in order to guide them in their decision-making) that are also tied to the natural world and can be manipulated in life as well as death, I also view consciousness as an emergent proto-material substance, which becomes reified and embodied as the phenomenal properties of subatomic particles following the constitution of the macro-world.
As the late ATI-oungan of the religion of Haitian Vodou, Max Beauvoir [5], highlights about the nanm and zombification, in Haitian ontology, the human being is a sentient being, which is constituted as three distinct material entities, the physical body, the gwo bon anj (sé médo), and the ti bon anj (sé lido). The latter two constitute our nanm (soul), and the physical body is aggregated matter that eventually dies and rots. It is animated by the energy force of Bon-dye or the universe, the gwo bon anj, which is not active in influencing personality or the choices that the human subject makes in life. Instead, it is simply the spark of life or the energy force that keeps the body living or activated. In other words, metaphorically speaking, imagine the body as an electrical cord, Bon-dye as the socket, and the spark of energy from the socket that animates the appliance as the gwo bon anj.
The animated body, the physical body and the gwo bon anj, gives rise to consciousness and the personality through the ti bon anj. The most important part of the body is the head, which is the seat of consciousness and the space where sight, hearing, smell, and taste all reside. The five senses of the head, and the brain's reflection on what is smelled, heard, seen, tasted, and touched gives rise to the ti bon anj, which is consciousness, intellect, reflection, memory, will, and the personality. That is to say, it is the ti bon anj that houses the ego, self, personality, and ethics of the person from experiences in life. So the gwo bon anj animates the physical body, which gives rise to the ti bon anj, i.e., the individual ego or I of a human subject as they experience being in the world with others.
The three aforementioned distinct (materialist) entities constitute the average individual and can be separated at various points throughout their life cycle and at the time of death. People who are called to work with lwa yo (spirits and concepts in Haitian Vodou) also have a fourth entity, a personal lwa, mét tét, who permanently resides within their head, i.e., a sort of split personality that guides the individual in making important and daily decisions. For the average individual, at the time of death the physical body dies and rots, the ti bon anj, the ego, personality, etc., returns to Ginen (Africa), Vilokan, and the gwo bon anj lingers around seeking to animate a new body. Serviteurs, oungan yo, Manbo yo, and Bokor yo can work to bring the ti bon anj of elders back across the waters from Africa so that they can be an active and honored ancestor. This process of ancestor retrieval is usually done a day and a year after the death of the person, and requires an animal sacrifice, i.e., the taking of a life to feed lwa yo in order to retrieve the deceased ancestor from Ginen. Upon retrieval, the ti bon anj of the ancestor is kept in a govi, a small clay bottle.

Bokors, sorcerers, who are members of secret societies in Vodou, and stand apart from oungan yo (priests) and Manbo yo (priestesses) as sorcerers who serve Petwo lwa yo, can also capture the lingering ti bon anj to do spiritual work aimed at healing, ascertaining money, love relationships, work, political power, i.e., pwen, or other desires. This latter act is one form of zombification, wherein the ti bon anj of a deceased person is captured in a bottle, govi, and directed to serve either the Bokor or an individual seeking wealth, love, political power, or to do harm to another person, etc.

Aside from separation in death, separation can also take place during a person's life cycle. During a person's life cycle, the gwo bon anj can be displaced by a lwa during possession or by a Bokor for zombification. The lwa utilizes the animated body (the person possessed is called a chwal, or horse, for the lwa) to experience the world, heal, protect, etc. The ti bon anj can be displaced during a person's life cycle by a Bokor for the mitigation of punishment through zombification. This latter action is essentially the death penalty in Vodou, applied when individuals morally violate nature, communal life, or an individual. Bokor yo are called upon by oungan yo and manbo yo to punish the transgressor through the removal of their ti bon anj from their bodies. During this process, the ego and personality, ti bon anj, which is viewed as a material thing, is removed, and the person is left with the material body and the gwo bon anj. The purpose of this act is to render the transgressor without the desire and drive (will) to commit any further acts, which arose from their ti bon anj. The person is not killed, but the desire and passion that caused them to commit the initial transgression is removed. Hence the person is left alive as a mindless zombie, i.e., zonbi. Essentially, whereas oungan yo, manbo yo, and gangan/dokté fey are the readers, judges, and healers, Bokor yo are the sorcerers and police force of the village. They are practitioners of black magic, and are visited by people seeking to do harm to someone, or seeking wealth, power, luck, revenge, etc. There are three other, external, cosmic forces and lwa yo that impact the individual. They are the zetwal, i.e., the star of a person, which determines their fate; the lwa rasin, or lwa eritaj, the spirit of the ancestors "who enter the path of the unconscious to talk to him or her in dreams, to warn of danger, and to intervene at the many levels of his [or her] life"; and the wonsiyon, "these are a series of spirits that accompany the lwa mét tét and modify somewhat the amplitude and the frequencies of its vibration or presence" [5].

Consciousness, learning, and development within Mocombe's phenomenological structural ontology are the product of the embodiment of the phenomenal properties of recycled/entangled/superimposed subatomic neuronal energies/chemicals, psychion, of the multiverse, objectified in the space-time of multiverses via the aggregated body and the microtubules of the brain. Once objectified and embodied, the phenomenal properties of the neuronal energies/chemicals encounter the space-time of physical worlds via a transcendental subject of consciousnesses (the aggregation of a universal-self superimposed and entangled across the multiple worlds of the multiverse) and the drives and sensibilities of the aggregated body and brain, in reified structures of signification, language, ideology, ideological apparatuses, and communicative discourse defined and determined by other beings that control the resources (economics), and modes of distributing them, of a material world required for physical survival in space-time. So in my view, contrary to Daniel Dennett, this transcendental subject of consciousnesses is not an emergent illusion of the brain. It (the transcendental subject of consciousnesses) is, following the constitution of macro-worlds throughout the multiverse, an emergent material substance superimposed/entangled/recycled via a psychonic/panpsychic subatomic field.
Like the Haitian ontology highlighted by Max Beauvoir, I am a materialist and view consciousness in similar material terms. For me, all aggregated matter in our dispensation of spacetime is composed of subatomic particle energies. Thus, to understand the constitution and origins of human practical consciousness, one must begin not only with the actions (phenomenal properties, i.e., qualia) associated with these particles, but with their essence or intrinsic nature, which is their inner conscious life (i.e., panpsychism), prior to understanding the sociocultural factors that emerge as a result of matter aggregation and being-in-the-world.
So for me the multiverse is objective and real. There is no God in the multiverse (even if there were one, who created us as part of a simulation (virtual world) that is the multiverse, it would not prevent us from understanding the rules and laws explaining the emergence and role of consciousness in the simulation), just consciousness, emanating from a psychonic or panpsychic subatomic field, becoming and being in simultaneously existing present/past/future layered worlds, which are entangled and superimposed. The initial superverse, which created the multiverse, is a product of quantum fluctuation of dark matter and energy, which funneled or exploded to create multiverses via the first four forces of nature, with consciousness being a later (evolutionary/emergent) force that emerged following species formation (matter aggregation) and death. That is, the superverse creates layered multiverses, each interconnected via subatomic particles, which aggregated, via the initial four forces of nature, to form macro-worlds. Over time, sentient beings experiencing these objective worlds emerged, and the phenomenal properties of their subatomic particles were, and are, recycled upon matter disaggregation to constitute a psychonic/panpsychic field of the superverse, making consciousness an emergent (evolutionary) fifth force of nature that endows future sentient beings with consciousness. This consciousness is a neuronal energy field, which is not destroyed when matter is disaggregated; instead, it is either recycled into the psychonic/panpsychic subatomic field of the superverse, or entangled and superimposed into its counterparts where the disaggregated matter still exists in its aggregated forms in the multiverse. In the human ethos of the macro-world, the psychonic/panpsychic subatomic field that is consciousness becomes God, which is associated with attributes that we embody or must embody in order to reproduce our being in material resource frameworks.
Are vocalists prone to temporomandibular disorders?
Abstract Background As vocalists place high physical strains on the masticatory system, singing is frequently mentioned as a risk factor for temporomandibular disorders (TMDs). Objectives This study investigated whether vocalists report a higher prevalence of two types of TMDs (viz., TMD pain and temporomandibular joint sounds) compared with instrumentalists who do not load their masticatory system while performing. In addition, we examined which risk indicators are associated with the presence of these TMDs among musicians. Methods A total of 1470 musicians from 50 different music ensembles completed a questionnaire. Of these musicians, 306 were vocalists (mean age ± SD 37.5 ± 17.7 years; 63.9% female) and 209 musicians were enrolled in the control group (mean age ± SD 42.7 ± 18.0 years; 40.7% female). Results The prevalence of self-reported TMD pain among vocalists was 21.9%, as compared to 12.0% in the control group. In total, 20.0% of the vocalists reported TMJ sounds versus 15.1% of the controls. The multiple regression models indicated that being a vocalist was not a risk indicator for the presence of self-reported TMD pain or for self-reported TMJ sounds. Instead, it appeared that the report of TMD pain among musicians was positively associated with female gender, together with the level of physical workload, depicted as the frequency of oral behaviours and the hours of daily practice. Musicians' report of TMJ sounds was associated with oral behaviours. Conclusion This study shows that singing is not associated with the reports of TMD pain and TMJ sounds, after adjusting for potentially confounding variables included in the models.
| INTRODUCTION
Work-related musculoskeletal disorders (WMSDs) represent the progression of an overuse injury that can occur in any part of the body associated with movement. WMSDs can be found in a wide array of occupations demanding high physical strains of the employees, ranging from health care to the catering industry. 1,2 Causative mechanisms behind WMSDs include repetitive motion, forceful exertions, and non-neutral body postures. Other factors that can aggravate such disorders are high psychosocial work demands and perceived stress. 3,4 In terms of population exposure, it can be expected that many musicians suffer from WMSDs, because regular training loads resulting from daily practice, rehearsal, and performance place great demands on the neuromusculoskeletal systems of the body. It is therefore not surprising that playing a musical instrument that loads the masticatory system is frequently mentioned as a risk factor for temporomandibular disorders (TMDs) 5,6 .
TMDs are a group of musculoskeletal disorders that comprise clinical problems affecting the masticatory muscles, the temporomandibular joints (TMJs) and associated tissues. 7 With this in mind, it is frequently suggested that singing is a predisposing factor for TMDs as well. [8][9][10] The basic idea behind this is that vocalists repetitively submit their masticatory system to unnatural positions during singing, aiming to achieve the desirable sound. 9 In addition, it can be expected that vocalists are more sensitive to, and aware of, signs and symptoms of TMDs due to the close proximity to their 'instrument', the vocal box. Surprisingly, the actual evidence that supports the idea of singing as a predisposing factor for TMD is very limited. 11 To the best of our knowledge, only Vaiano et al investigated the presence of 13 types of bodily pains, including TMJ pain, in a group of 50 classical choral singers. 12 Although there was no significant difference in the presence of this pain when compared to a control group consisting of 150 persons from the general population (non-singers), it is difficult to extrapolate this specific type of pain to TMD pain in general.
As a part of a large study among musicians in The Netherlands, the aims of the present study were (a) to investigate whether vocalists experience more TMDs (viz., pain-related forms of TMDs and TMJ sounds) as compared to musicians for whom loading of the masticatory system is not required for the musical performance (eg cellist, percussionist, pianist), and (b) to assess which risk indicators are associated with the presence of these TMDs among musicians.
| Data collection
This study was conducted among adult (>18 years) musicians of several music ensembles (symphony orchestras, chamber music ensembles, brass bands, fanfares, and choirs) of different levels of professionalism (from amateur to professional) throughout The Netherlands. Music ensembles were contacted by e-mail or telephone and were asked whether they would like to participate.
Between December 2013 and June 2016, a total of 90 music ensembles (including 15 choirs) were approached to participate in this study. After permission was granted by the chairman, human relations manager, or the conductor of the ensemble to perform the study, each musician that was present received an information letter during one of the rehearsals with details about the survey and a questionnaire. An additional verbal explanation about the project and questionnaire was given to the musicians as well.
Participants were asked for written informed consent before completing the questionnaire. This study was considered by the
| Questionnaire
In order to screen for musculoskeletal complaints in the masticatory system, the 'Symptom Questionnaire' (SQ) of the Diagnostic Criteria for Temporomandibular Disorders (DC/TMD) 13 was implemented in the study questionnaire. The SQ solicits information for the most common types of TMDs (viz., TMD pain and TMJ sounds).
Other questions focused on demographics (age, sex), (adverse) oral behaviours (eg grinding, clenching, nail biting) 14 and psychological aspects (viz., daily stress and feeling depressed or down). 15,16 In addition, specific musician-related questions were formulated based on another study on this topic. 17 These questions aimed to determine the type and number of instruments (including singing) that were played, length of playing experience (years), hours of daily practice and the professional level of the musician. Draft versions of the questionnaire were discussed with colleagues and several musicians to assess whether the questions were unambiguous, and if they provided a good insight into the musculoskeletal loading related to the musical performance. Suggestions for improvement were integrated in the final version of the questionnaire (Table 1).
| Data analysis
From the total sample, two groups of musicians were selected: vocalists, and musicians for whom loading of the masticatory system is not required (eg cellist, percussionist, pianist; regarded as controls). Data from instrumental musicians for whose work loading of their masticatory system while playing their instrument is mandatory (eg violin, oboe, trumpet) were not included in this study.
For both vocalists and controls, descriptive statistics were used to summarise the study group characteristics. In order to investigate associations between the group characteristics and the prevalence of self-reported TMD pain and the prevalence of self-reported TMJ sounds, logistic regression analyses were used. For both self-reported TMD pain and self-reported TMJ sounds (outcome variables), the associations with type of musician (viz., vocalists versus controls) and the other independent variables (viz., gender, age, length of playing experience, hours of daily practice, number of musical instruments, amount of daily stress, feeling depressed or down and frequency of oral behaviours) were evaluated using single regression analyses. All independent variables that showed at least a weak association with the outcome variable (P-value < .10) were incorporated into a multiple regression model to estimate the mutually adjusted effects of predictors on the outcome variable. Predictors with the weakest association with the outcome variable were removed using a backward stepwise approach until all predictors in the final model showed a P-value < .05. The analyses were conducted using the IBM SPSS Statistics 25 software package (IBM Corp, Armonk, NY, USA).
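A hedged sketch of this two-stage strategy, in Python rather than the SPSS package the authors used, and with hypothetical variable names, might look like this:

```python
# Screen predictors singly at P < .10, then backward-eliminate until P < .05.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("musicians.csv")                  # hypothetical dataset
outcome = "tmd_pain"
candidates = ["vocalist", "female", "age", "years_playing",
              "practice_hours", "n_instruments", "stress",
              "depressed", "oral_behaviours"]

# Stage 1: single (unadjusted) logistic regressions
kept = [v for v in candidates
        if smf.logit(f"{outcome} ~ {v}", data=df)
              .fit(disp=False).pvalues[v] < 0.10]

# Stage 2: backward stepwise elimination
fit = smf.logit(f"{outcome} ~ " + " + ".join(kept), data=df).fit(disp=False)
while True:
    pvals = fit.pvalues.drop("Intercept")
    worst = pvals.idxmax()
    if pvals[worst] < 0.05 or len(kept) == 1:
        break
    kept.remove(worst)                             # its p-value is the P-to-Exit
    fit = smf.logit(f"{outcome} ~ " + " + ".join(kept),
                    data=df).fit(disp=False)

print(fit.summary())   # ORs are exp(coefficients)
```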
| RESULTS
Of the 1910 eligible musicians, 1470 completed the questionnaire (response rate 77.0%). Of these, 306 musicians were vocalists, and 209 were categorised as controls. In comparison with the control group, the vocalist group comprised more women, and the mean age was lower (see Table 2 for details). The prevalence of self-reported TMD pain among vocalists was 21.9%, as compared to 12.0% in the control group. TMJ sounds were reported by 20.0% of the vocalists and by 15.1% of the controls.
The single logistic regression analyses showed a positive association (P < .1) between being a singer and reporting of TMD pain.
In addition, female gender, having a younger age, being a (semi-)professional musician, a lower number of years of playing experience, a higher number of hours per day devoted to practise, a higher level of daily stress and a higher frequency of oral behaviours were potentially associated with a report of TMD pain as well (Table 3). In the final multiple regression model, being a singer was not retained. Instead, female gender (OR 2.24, 95% CI: 1.21-4.13), hours of daily practice (OR 1.16, 95% CI: 1.02-1.33) and frequency of oral behaviours (OR 3.04, 95% CI: 1.86-4.97) were best associated with the report of TMD pain.
In Table 4, the outcomes of the single and multiple logistic regression analyses with respect to the report of TMJ sounds are presented. It appeared that female gender, having a younger age and a higher frequency of oral behaviours were positively associated with TMJ sounds in the single regression models, of which only the variable 'oral behaviours' (OR 2.24, 95% CI: 1.40-3.57) was retained in the final model.
| DISCUSSION
'Temporomandibular disorders' (TMDs) is a broad term, used to characterise pain and functional complaints originating from the masticatory system (ie the TMJ, masticatory muscles, or both). 18 The complaints usually fluctuate and are function dependent. 7,18 Although regular use of the masticatory system will not necessarily lead to complaints related to TMDs, it is believed that playing a musical instrument often requires mandibular activities that are beyond normal physiological function. 19 Besides physiological overloading, many epidemiological studies have demonstrated the existence of an association between TMDs and psychopathology. 20 As performing musicians face various sources of psychological stress due to the demanding and highly competitive work environment, 21 they may also be considered at increased risk for TMDs. 11,23,24 Furthermore, except for a study that examined bodily pains in classical choral singers, including pain in the TMJ, 12 it has never been investigated whether singing is a risk indicator for TMDs.
Therefore, the present study investigated whether vocalists experience more TMDs (viz., self-reported TMD pain and self-reported TMJ sounds) as compared to musicians for whom loading of the masticatory system is not required for the musical performance. In addition, it was investigated whether specific musician-related factors played a role in their complaints. Initially, the unadjusted results of this study showed a positive association between being a singer and the report of TMD pain; for the report of TMJ sounds, being a vocalist appeared not to be a risk indicator. The initial association between singing and TMD pain lost significance in the multiple regression model after correction for the influence of gender, hours of daily practice and oral behaviours. This means that the observed higher occurrence of these pain complaints among singers (viz., 21.9%) as compared to controls (viz., 12.0%) might be explained by differences between the groups in gender distribution, daily practice and oral behaviour report. Indeed, the vocalist group comprised significantly more women than the control group. As women seem to be more affected with TMD pain than men, 7,25 it is essential to consider gender as confounding variable in this type of studies.
Interestingly, the present study indicates that musicians reported more TMD pain when they had practised more hours on a daily basis. This coincides with knowledge in the field of work physiology, namely that the length of daily working hours and perceived physical workload are risk factors for the development of work-related musculoskeletal disorders (WMSDs). 3,4 In line with this is the current finding that the factor 'oral behaviours' was the strongest predictor for the presence of self-reported TMD pain and self-reported TMJ sounds among musicians. This was not surprising, because a commonly held view in the literature and clinical practice is that TMDs may be caused by mechanical overloading.
Both heavy forces and less heavy forces (eg prolonged clenching) may lead to overloading of the jaw-closing muscles and compressive forces within the TMJ, and thus to TMD pain and joint sounds, respectively. [26][27][28] However, it should be remembered that support for this association mainly comes from questionnaire studies, and rarely from studies using instrumental techniques. 29 As questionnaire studies only indicate associations and not necessarily the direction of the relationship, it is impossible to establish how reports of oral behaviours and TMD complaints are related. In fact, it might be hypothesised that patients who experience complaints in the oro-facial area attribute these complaints to certain factors which they believe to be causal, such as stress or oral behaviours.
It might also be possible that a person with a popping TMJ sound or a nagging pain in the masticatory muscles is more aware of oral behaviours than someone who is free of such symptoms. In other words, the presence of complaints in the oro-facial area could actually drive self-reporting of oral behaviours. Future studies are needed to more fully explore the underlying mechanisms of the relationship between oral behaviours and TMDs.

TABLE 3 Single and multiple logistic regression models of variables associated with TMD pain among musicians (n = 515). Note: Associations are expressed as odds ratios (OR) and 95% confidence intervals (CI). For each removed independent variable, the P-to-Exit is reported.
A drawback of the current study deals with the fact that the selection of the two groups of musicians was based only on the question inquiring about the main instrument. However, it turned out that almost 40% of the musicians played more than one musical instrument (including singing). Even though the variable 'Playing multiple instruments' was not associated with the presence of self-reported TMD pain and self-reported TMJ sounds among musicians, the influence of this possible selection bias cannot be ruled out. Another limitation of the present study deals with its cross-sectional nature. As discussed before, the observed findings merely reveal associations that require further testing. Another drawback deals with the subjective nature of the outcomes: both TMD pain and TMJ sounds were assessed through self-report only. However, the question that screened for TMD pain is implemented in the validated DC/TMD Axis I protocol, 13 and is very similar to a TMD pain screening question that exhibited excellent content validity. 30 Since self-reported TMJ clicking has been found to be associated with objectively recorded TMJ clicking as well, 31 and the reliability of such self-reports appeared to be good, 33 the use of self-report seems acceptable. In summary, the present study indicates that singing was not associated with the reports of TMD pain and TMJ sounds, after adjusting for potentially confounding variables included in the models. The best predictors for self-reported TMD pain among musicians were female gender, together with the level of physical workload, depicted as the frequency of oral behaviours and the hours of daily practice. Musicians' report of TMJ sounds was associated with oral behaviours.
TABLE 4 Single and multiple logistic regression models of variables associated with TMJ sounds among musicians (n = 515). Note: Associations are expressed as odds ratios (OR) and 95% confidence intervals (CI). For each removed independent variable, the P-to-Exit is reported.

ACKNOWLEDGEMENTS

The authors would like to thank the following individuals, who were dental students at the time this study was performed, for all their efforts in distributing the questionnaires among musicians:
Upregulation of the receptor-interacting protein 3 expression and involvement in neural tissue damage after spinal cord injury in mice
Necroptosis is a newly identified type of programmed cell death that differs from apoptosis. Recent studies have demonstrated that necroptosis is involved in the pathology of various human diseases. Receptor-interacting protein 3 (RIP3) is known to be a critical regulator of necroptosis. This study investigated alterations in RIP3 expression and its involvement in neural tissue damage after spinal cord injury (SCI) in mice. Immunohistochemical analysis demonstrated that RIP3 expression was significantly increased in the lesion site after spinal cord hemisection. The increased expression of RIP3 started at 24 h, peaked at 3 days and lasted for at least 21 days after hemisection. RIP3 expression was observed in neurons, astrocytes and oligodendrocytes. Western blot analysis also demonstrated that RIP3 protein expression was significantly upregulated in the injured spinal cord. RIP3 staining of propidium iodide (PI)-labeled sections showed that most of the PI-labeled cells were RIP3-positive. Double staining for TUNEL and RIP3 demonstrated that TUNEL-positive cells exhibiting shrunken or fragmented nuclei, as generally observed in apoptotic cells, rarely expressed RIP3. The present study is the first to demonstrate that the expression of RIP3 is dramatically upregulated in various neural cells in the injured spinal cord, peaking at 3 days after injury. Additionally, most of the PI-labeled cells expressed RIP3 in response to neural tissue damage after SCI. These findings suggest that the upregulation of RIP3 expression may play a role as a novel molecular mechanism in secondary neural tissue damage following SCI. However, further study is needed to clarify the specific molecular mechanism underlying the relationship between RIP3 expression and cell death in the injured spinal cord.
Background
Necrosis was originally thought to be a purely passive and uncontrolled type of cell death. Necrosis, which is marked by cell swelling and membrane rupture, leads to inflammation via the release of intracellular signals [1][2][3]. In contrast, apoptosis, which is characterized by the activation of caspases and DNA fragmentation, was once considered the sole form of programmed cell death. Chromatin condensation, nuclear shrinkage and fragmentation and membrane blebbing are the result of the proteolytic activity of caspases and define the morphological characteristics of apoptosis [3][4][5].

"Necroptosis" is a newly identified type of programmed necrosis: a form of programmed cell death that is regulated by a caspase-independent pathway and exhibits the morphological features of necrosis [6,7]. Recent studies have shown that the receptor-interacting protein (RIP) family is specifically involved in regulating necroptosis [3]. RIP3, a member of the RIP family, is known to be a key mediator of necroptosis [8,9]. RIP3 acts as a nucleocytoplasmic shuttling protein and can localize not only in the cytoplasm but also in the nucleus [10]. Various stimuli can induce the formation of necrotic complexes that contain RIP1, TRADD, FADD and caspase-8. The interaction of RIP3 with RIP1 in the necrotic complex is an important step required for the execution of necroptosis [3,11]. Previous studies have demonstrated that RIP3 expression correlates with the induction of necroptosis in various types of cells [12][13][14][15][16][17][18]. RIP3 alone can trigger necroptosis, even in the absence of RIP1 [12,14,19]. Therefore, RIP3 is indispensable for necroptosis, whereas RIP1 may participate only in certain stimulus-induced types of necroptosis [9]. Furthermore, recent studies have shown an increased expression of RIP3 in lesions and the induction of necroptosis in various disease models [18,20], such as liver injury [14], ileitis [21], skin inflammation [22], and kidney ischemia-reperfusion injury [23]. The expression of RIP3 is also increased in retinal neurons in response to acute ischemic insults [24], and the upregulation of RIP3 contributes to the induction of necroptotic cell death in hippocampal neurons following cerebral ischemia [17].
Currently, it is considered that secondary damage after spinal cord injury (SCI) is caused by apoptosis, and most previous research related to cell death in the injured spinal cord has focused on apoptosis, not necroptosis.
In the present study, we investigated alterations in the RIP3 protein expression and the involvement of necroptosis in SCI using a spinal cord hemisection model in mice.
Animals
All experimental procedures were approved by the Institutional Animal Care and Use Committee of Tohoku University. All efforts were made to minimize the number of animals used and to decrease their suffering. Adult female C57BL/6J mice (8-10 weeks old; Charles River, Japan Inc., Yokohama, Japan) were used in this study. The mice were housed three or four per cage and kept at a temperature of 24 °C with free access to food and water before and after surgery.
Spinal cord hemisection model
The mice were anesthetized with 2 % sevoflurane. A 15-mm midline skin incision was made, and the laminae of T9-11 were exposed. Laminectomy was performed at the T10 thoracic vertebra, exposing the dorsal surface of the spinal cord without disrupting the dura mater. With a sharp scalpel, the spinal cord was hemi-transected on the left side only [25][26][27][28]. In mice with a compromised bladder function (a rare complication), the bladder was manually expressed twice a day until spontaneous voiding was noted. The sham-operated animals underwent the same surgical procedures, although hemisection was not applied to the spinal cord. During the surgery, the rectal temperature was monitored and maintained at 37.0 ± 0.5 °C with a heating pad.
Tissue preparation
At different time points (4 h, 24 h, 3, 7, and 21 days) after hemisection and immediately after the sham operation, the mice were overdosed via an intraperitoneal injection of 100 mg/kg sodium pentobarbital. The mice were then transcardially perfused with normal saline, followed by 4 % paraformaldehyde in 0.1 M PBS, pH 7.4. For immunohistochemical staining, the spinal cord segments containing the injured site were collected, postfixed in the same fixative overnight at 4 °C, cryoprotected in 30 % sucrose in PBS for 48 h at 4 °C and embedded in optimal cutting temperature (OCT) compound (Sakura Finetek, Japan). Serial 15-μm transverse cryostat sections obtained from around the injured site were mounted on slides. A total of 13 sequential sections were collected at 250-μm intervals, spanning 3000 μm (12 intervals × 250 μm) along the spinal cord centered on the lesion epicenter. The sections were used for both immunohistochemical and TUNEL staining, as described below.
Immunohistochemical staining of RIP3
For the immunohistochemical staining of RIP3, the sections were washed in PBS for 15 min, after which they were washed with PBS containing 0.3 % Tween for 10 min and blocked with 3 % milk and 5 % FBS in 0.01 M PBS for 2 h. The sections were incubated with rabbit anti-RIP3 antibodies (1:100; Sigma-Aldrich) diluted in PBS overnight at 4 °C. After rinsing with PBS, the sections were incubated with secondary antibodies. The sections were coverslipped with Vectashield containing DAPI to label the nuclei (Vector Laboratories). In each experiment, the sections were stained at the same time.
Counting and calculation of RIP3-positive cells
Following immunohistochemical staining for RIP3, as described above, each section was scanned using a fluorescence microscope (BX 51; Olympus, Tokyo, Japan). To quantify RIP3 expression in the spinal cord, the total number of RIP3-positive cells on the injured and contralateral sides was counted in the serial transverse sections taken at 250-μm intervals. For each animal, the section with the highest number of RIP3-positive cells on the injured side, together with the sections 250 μm rostral and caudal to it, was selected. The sum of the counts in these three sections was then compared among the injured side, the contralateral side and the sham group.
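To make the section-selection rule concrete, the following minimal R sketch illustrates it on hypothetical per-section counts for one animal; the count values and variable names are illustrative placeholders, not the authors' data or code.

```r
# Hypothetical RIP3+ cell counts for the 13 serial sections of one animal,
# ordered rostral to caudal at 250-um spacing (values are made up).
counts_injured <- c(12, 18, 25, 41, 67, 85, 92, 88, 70, 52, 30, 20, 14)

# Pick the section with the highest count on the injured side, plus its
# immediate rostral and caudal neighbors (i.e., 250 um to either side).
peak <- which.max(counts_injured)
idx  <- (peak - 1):(peak + 1)

# Per-animal summary entered into the group comparison.
sum(counts_injured[idx])
```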
Western blot analysis of RIP3
The mice were killed 3 days after spinal cord hemisection or the sham operation, and their spinal cords were removed. The spinal cord was homogenized in lysis buffer. The debris was removed via centrifugation, and the protein levels in the lysates were determined with a Bio-Rad protein assay (Bio-Rad Laboratories, USA). Protein (30 μg) from the lysates was resolved using SDS-polyacrylamide gel electrophoresis (PAGE) in 15 % gels and then electrophoretically transferred to a polyvinylidene difluoride membrane. The membranes were blocked for 1 h in TBST buffer (0.01 M Tris-HCl, pH 7.5, 0.15 M NaCl and 0.05 % Tween 20) containing 3 % milk and incubated with rabbit anti-RIP3 antibodies (1:100; Santa Cruz Biotechnology, USA) diluted in TBST buffer overnight at 4 °C. The membranes were then incubated with secondary antibodies linked to horseradish peroxidase (1:1000; Invitrogen). The immunoreactive bands were developed using an enhanced chemiluminescence reagent (Amersham Corp). The band density was quantified by densitometric analysis of the scanned blots using the Image Lab software program version 4.1 (Bio-Rad Laboratories). Band densities were normalized to the level of β-tubulin and compared between the injured and uninjured spinal cord samples.
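As a minimal sketch of the densitometry workflow, the following R code normalizes hypothetical Image Lab band densities to β-tubulin and expresses RIP3 levels as fold change over sham; all values are illustrative and do not reproduce the authors' measurements.

```r
# Hypothetical band densities exported from Image Lab (arbitrary units).
rip3_injured <- c(1.95, 2.40, 2.10, 2.65, 2.20)
rip3_sham    <- c(1.00, 0.90, 1.10, 0.95, 1.05)
tub_injured  <- c(1.00, 1.05, 0.98, 1.02, 1.00)
tub_sham     <- c(1.01, 0.99, 1.00, 1.03, 0.97)

# Normalize each RIP3 band to its beta-tubulin loading control.
norm_injured <- rip3_injured / tub_injured
norm_sham    <- rip3_sham    / tub_sham

# Fold change relative to the mean sham signal, reported as mean +/- SD.
fold <- norm_injured / mean(norm_sham)
c(mean = mean(fold), sd = sd(fold))
```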
Double staining for RIP3 and various cell type markers
To examine the expression of RIP3 in specific populations of cells, the transverse sections obtained 3 days after hemisection were double-stained for RIP3 and various cell type markers: NeuN for neurons, GFAP for astrocytes and Olig2 for oligodendrocytes. The sections were incubated with a mixture of rabbit anti-RIP3 antibody (1:100; Sigma-Aldrich) and either goat anti-Olig2 (1:100; Santa Cruz Biotechnology), mouse anti-GFAP (1:50; Dako) or mouse anti-NeuN (1:100; Chemicon) antibodies diluted in PBS overnight at 4 °C. After rinsing with PBS, the sections were incubated with secondary antibodies and then mounted with Vectashield containing DAPI to label the nuclei (Vector Laboratories). The specificity of the RIP3 antibody was evaluated by omitting the primary antibody (no primary immunoglobulin G control), which abrogated the staining and thereby demonstrated the specificity of the immunoreactivity [14,24].
RIP3 staining using propidium iodide-labeled spinal cord sections
To detect plasmalemma permeability, a hallmark of necrotic cell death, in the injured spinal cord at 3 days after hemisection, propidium iodide (PI) labeling was performed as previously described [29-31]. Briefly, PI (10 mg/mL; Sigma-Aldrich) was diluted in 0.1 mL of PBS and then intraperitoneally injected at a dose of 1 mg/kg body weight 1 h before sacrifice. The mice were transcardially perfused, and the spinal cords were collected and sectioned as described above. For immunohistochemical detection of RIP3 expression in the PI-labeled cells, the PI-labeled spinal cord transverse sections were stained for RIP3, as outlined above. Using a fluorescence microscope (BX 51; Olympus, Tokyo, Japan), propidium iodide labeling was detected using excitation and emission wavelengths of 568 and 585 nm, respectively.
Double staining of RIP3 and TUNEL
To identify DNA fragmentation in the cells expressing RIP3, double staining for RIP3 and terminal deoxynucleotidyl transferase-mediated dUTP nick end labeling (TUNEL) was performed using the transverse sections obtained 3 days after hemisection. Following immunohistochemical staining for RIP3, as described above, TUNEL staining was performed using the In Situ Cell Death Detection Kit (Roche Diagnostics).
Statistical analysis
Differences in the number of RIP3-positive cells and in the band densities of the Western blots were analyzed using the unpaired t test. In all analyses, a p value of <0.05 was considered statistically significant.
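A minimal R sketch of this comparison is shown below; the group values are hypothetical placeholders for the per-animal summaries described above, not the study's data.

```r
# Hypothetical per-animal summaries (n = 5 per group).
counts_injured_side <- c(210, 245, 198, 260, 232)  # summed RIP3+ counts
counts_contra_side  <- c(40, 55, 38, 62, 47)

# Unpaired (two-sample) t test; significance threshold p < 0.05.
res <- t.test(counts_injured_side, counts_contra_side, var.equal = TRUE)
res$p.value < 0.05
```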
Immunohistochemical staining for RIP3
The number of cells expressing RIP3 was increased on the injured side at 3 days after spinal cord hemisection (Fig. 1). Cells expressing RIP3 were observed in both the gray and white matter of the injured side; however, not all cells on the injured side expressed RIP3. On the contralateral side, the number of cells expressing RIP3 was markedly lower, similar to that observed in the sham group.
Time course of the RIP3 expression after hemisection
Representative images showed that the number of cells expressing RIP3 was increased on the injured side at each time point after hemisection compared with that observed in the sham group (Fig. 2). The population of cells expressing RIP3 on the injured side was largest at 3 and 7 days compared with the other time points.
When the RIP3-positive cells were counted (Fig. 3), their number on the injured side was significantly higher than that on the contralateral side and in the sham control group. A significant increase in the number of RIP3-positive cells was first noted at 24 h and lasted for at least 21 days (P < 0.05). The maximum number of RIP3-positive cells on the injured side was observed at 3 days after hemisection.
Western blot analysis of RIP3
In the Western blot analysis, the level of RIP3 protein was significantly higher in the injured spinal cord than in the uninjured spinal cord (Fig. 4a). In the analysis of the band density, RIP3 expression was 2.2 ± 0.5-fold higher in the injured spinal cord (Fig. 4b, P = 0.005).
Double staining of RIP3 and various cell type markers
To examine RIP3 expression in specific populations of cells, the transverse sections obtained at 3 days after hemisection were double-stained for RIP3 and various neural cell type markers. In the double staining, the expression of RIP3 was observed in the NeuN-, GFAP- and Olig2-labeled cells (Fig. 5), demonstrating RIP3 expression in neurons, astrocytes and oligodendrocytes. However, not all of these cells expressed RIP3.
RIP3 expression in the PI-labeled cells
Plasmalemma permeability, a hallmark of necrotic cell death, including necroptosis, was detected using PI labeling in vivo, followed by RIP3 immunostaining of the PI-labeled spinal cord sections. The numbers of both PI-labeled cells and RIP3-expressing cells increased on the injured side, and the PI-labeled cells were frequently observed to be RIP3-positive (Fig. 6). At higher magnification, most of the nuclei in the PI-labeled cells expressing RIP3 were round, as is generally observed in necrotic cell death.
Double staining of RIP3 and TUNEL
To detect DNA fragmentation in the cells expressing RIP3, we performed double staining for RIP3 and TUNEL. On the injured side, the numbers of TUNEL-positive and RIP3-positive cells were markedly increased (Fig. 7a-c). However, RIP3 expression was rarely observed in the TUNEL-positive cells. At higher magnification, most of the TUNEL-positive cells exhibiting shrunken or fragmented nuclei, as is typical of apoptotic cells, did not express RIP3 (Fig. 7d-f).
Discussion
In the present study, the level of RIP3 protein significantly increased at the injured site after spinal cord hemisection. An increase in the number of RIP3-positive cells at the injured site was observed starting at 24 h, peaked at 3 days and lasted for at least 21 days after injury. RIP3 expression was observed in neurons, astrocytes and oligodendrocytes. Most of the PI-labeled cells expressed RIP3 in response to neural tissue damage after SCI. These results suggest that the expression of RIP3 was dramatically upregulated in various neural cells and may be involved in the pathological mechanism at the lesion site following SCI.
[Figure legend fragment: Scale bars 100 μm (a-c, e-g, j-k, m-o, q-s, u-w), 20 μm (d, h, l, p, t, x). The schematic drawing illustrates the location of the micrographs (y).]
Traditionally, secondary damage of neural tissue following SCI has been considered to be caused by apoptosis, not necroptosis. Most previous research related to cell death in the injured spinal cord has focused on apoptosis. In the present study, both immunohistochemistry and Western blot analysis demonstrated that RIP3 expression was significantly upregulated at the lesion site after SCI. Importantly, the upregulation of the RIP3 protein was observed starting at 24 h, after which it peaked at 3 days and lasted for 21 days after injury. The time course of the RIP3 expression is quite similar to that of secondary damage following SCI [32-34]. Additionally, we have previously confirmed that the number of TUNEL-positive cells peaked at 3 days in the thoracic spinal cord hemisection model in mice (data not shown). TUNEL positivity potentially indicates apoptotic cell death as well as necrotic cell death if the morphological characteristics of the cells are ignored [35]. Whalen et al. [31] suggested that different phenotypes of cell death can occur at the lesion site after CNS injury. These findings suggest that the increased expression of RIP3 in the injured spinal cord may contribute to secondary neural tissue damage and is possibly associated with not only apoptosis, but also necrotic cell death after SCI.
[Fig. 4 legend: RIP3 protein expression at 3 days after hemisection. a Western blotting showed that RIP3 protein expression in the injured spinal cord was obviously increased compared with that in the uninjured spinal cord after the sham operation. b A quantitative analysis of the Western blots showed that the level of RIP3 protein in the injured spinal cord was significantly higher than that in the sham control samples (*p < 0.05, n = 5 per group). Band densities were normalized to the level of β-tubulin. The values are presented as the mean ± SD.]
[Fig. 5 legend: Double staining for RIP3 and various cell type markers on the injured side in the transverse sections after hemisection. RIP3 expression was observed in the NeuN-, GFAP- and Olig2-labeled cells (arrowheads), demonstrating that RIP3 expression was increased in neurons, astrocytes and oligodendrocytes, respectively. Scale bars: 100 μm (a-c, g-i, m-o), 50 μm (d-f, j-l, p-r). The schematic drawing illustrates the location of the micrographs (s).]
Recent studies have demonstrated that necroptosis plays a role in various pathological conditions in the CNS. Necroptosis can contribute to neural tissue damage in models of adult brain ischemia [17,36,37], neonatal hypoxia-ischemia [38], traumatic brain injury [29] and neurodegenerative diseases [39,40]. Retinal ischemia [24,41] and photoreceptor loss-associated retinal disorders [42] also involve neuronal necroptosis. Previous studies have shown that apoptosis is associated with peculiar morphological traits, including chromatin condensation and nuclear shrinkage and fragmentation, as well as blebbing of the intact plasma membrane and shedding of vacuoles containing cytoplasmic portions known as apoptotic bodies [1,4,5]. On the other hand, necroptosis involves a morphology similar to that of necrosis, such as minor ultrastructural modifications of the nucleus (dilatation of the nuclear membrane), osmotic swelling of organelles, an increased cell volume and breakdown of the plasma membrane [3,11]. In the present study, most of the PI-positive cells in the injured spinal cord expressed RIP3 and contained round nuclei, which are generally observed in cells undergoing necrotic cell death. Additionally, the TUNEL-positive cells exhibiting shrunken or fragmented nuclei rarely expressed RIP3 at the lesion site. Based on the increased expression of RIP3 and the morphology of the nuclei in these cells, necroptosis may be one of several types of cell death occurring in the injured spinal cord.
[Fig. 6 legend: Immunostaining for RIP3 using propidium iodide (PI)-labeled sections obtained 3 days after hemisection. The representative pictures show that the PI-labeled cells were frequently observed to be RIP3-positive (a-c). At higher magnification (d-f), most of the nuclei in the PI-labeled cells expressing RIP3 (arrowheads in f) were round, as normally observed in cells undergoing necrotic cell death, not fragmented or shrunken, as observed in apoptotic cells. Scale bars 100 μm (a-c), 20 μm (d-f). The schematic drawing illustrates the location of the micrographs (g).]
However, it has been reported that PI or TUNEL positivity cannot perfectly differentiate between necrotic and apoptotic cells. A previous study suggested that PI-positive cells can occasionally be TUNEL-positive after CNS injury [31]. PI positivity is a hallmark of necrosis but cannot be used by itself to identify necrotic cells, because PI can enter cells with activated pannexin channels [43]. TUNEL positivity potentially indicates apoptotic cell death as well as necrotic cell death [35]. Additionally, PI or TUNEL positivity did not perfectly reflect the RIP3 expression at the lesion site after hemisection in this study. Thus, from the data presented in this study, no firm conclusion about the mode of cell death in the RIP3-positive cells in the injured spinal cord can be drawn. The precise molecular mechanism underlying the relationship between RIP3 expression and PI and/or TUNEL positivity in neural tissue damage requires further clarification.
Previous studies have suggested that necroptosis is induced in various neural cells. RIP3 expression in retinal neurons is upregulated in response to acute ischemic insults [24], and cerebral ischemia induces necroptotic cell death in hippocampal neurons [17]. In addition, necroptosis drives motor neuron death in amyotrophic lateral sclerosis (ALS) [40]. Furthermore, hemin induces necroptotic cell death in cortical astrocyte cultures [44], and necroptosis is induced in cultured rat astrocytes by the administration of staurosporine [45]. Necroptosis also contributes to arachidonic acid-induced oxidative cell death in primary oligodendrocyte precursor cultures [46]. According to our results, the expression of RIP3 is observed in neurons, astrocytes and oligodendrocytes in the injured spinal cord. These findings suggest that necroptosis may occur in various neural cells and may contribute to multiple pathological mechanisms after SCI. However, the molecular mechanisms defining the features of necroptosis are not fully understood, and the precise pathological mechanisms induced by necroptosis after SCI remain unclear. Further studies to clarify the molecular and biological mechanisms of necroptosis will hopefully lead to the development of novel therapeutic strategies for the treatment of SCI.
Conclusion
In the present study, the expression of RIP3 was dramatically increased at the lesion site after SCI. The increased RIP3 expression was observed in neurons, astrocytes and oligodendrocytes. Most of the PI-labeled cells expressed RIP3 in response to neural tissue damage. This study is the first to provide evidence that increased RIP3 expression is involved in neural tissue damage after SCI.
Authors' contributions: HK and HO designed the experiment; HK, ST and KY performed the experiments and analyzed the data; HK drafted the manuscript; HK and HO revised the manuscript and participated in paper modification. All authors read and approved the final manuscript.
"year": 2015,
"sha1": "61bada36747e6f5082000887272688c033eb466c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12868-015-0204-0",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "61bada36747e6f5082000887272688c033eb466c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
Identification of female-enriched and disease-associated microglia (FDAMic) contributes to sexual dimorphism in late-onset Alzheimer's disease
Background: Late-onset Alzheimer's disease (LOAD) is the most common form of dementia; it disproportionately affects women in terms of both incidence rates and severity of progression. The cellular and molecular mechanisms underlying this clinical phenomenon remain elusive and ill-defined.
Methods: In-depth analyses were performed with multiple human LOAD single-nucleus transcriptome datasets to thoroughly characterize cell populations in the cerebral cortex. ROSMAP bulk human brain tissue transcriptome and DNA methylome datasets were also included for validation. Detailed assessments of microglial cell subpopulations and their relevance to sex-biased changes at the tissue level were performed. Clinical trait associations, cell evolutionary trajectories, and transcription regulon analyses were conducted.
Results: The relative numbers of functionally defective microglia were aberrantly increased uniquely among affected females. Substratification of the microglia into different subtypes according to their transcriptomic signatures identified a group of female-enriched and disease-associated microglia (FDAMic), the numbers of which were positively associated with disease severity. Phenotypically, these cells exhibit transcriptomic signatures that support active proliferation, MHC class II autoantigen presentation and amyloid-β binding, but they are also likely defective in phagocytosis. FDAMic are likely evolved from female activated response microglia (ARMic) with an APOE4 background and compromised estrogen receptor (ER) signaling that is deemed to be active among most subtypes of microglia.
Conclusion: This study offered important insights at both the cellular and molecular levels into how ER signaling affects microglial heterogeneity and function. FDAMic are associated with more advanced pathologies and severe trends of cognitive decline. Their emergence could, at least in part, explain the phenomenon of greater penetrance of the APOE4 genotype found in females. The biases of FDAMic emergence toward female sex and APOE4 status may also explain why hormone replacement therapy is more effective in APOE4 carriers. The pathologic nature of FDAMic suggests that selective modulation of these cells may help to regain brain neuroimmune homeostasis, serving as a new target for future drug development.
Supplementary Information: The online version contains supplementary material available at 10.1186/s12974-023-02987-4.
Background
According to the latest key facts from the World Health Organization, dementia is currently the seventh leading cause of death and one of the major drivers of disability and dependency among elderly people (World Health Organization, Dementia key facts 2023). Late-onset Alzheimer's disease (LOAD) is the most common form of dementia, a disease that disproportionately affects women [1]. Despite these clinical observations, the underlying cellular and molecular mechanisms remain elusive. Age is perceived as the most influential risk factor for LOAD [2]. The relatively longer life expectancy among females was previously proposed to explain the sex-biased observations in the disease [3]. However, modern studies now indicate that the longevity factor alone is insufficient to clarify this issue [4], as it fails to explain why the disease manifests at a much lower prevalence in young women than in age-matched men and why the reverse trend is found among postmenopausal women [4].
It is now believed that these observations are at least in part caused by the divergent alterations in brain structures and microenvironments that male and female subjects manifest in response to disease-causing insults [5]. Since the human immune system and its functions are sexually distinct [6], it has been proposed that sex-biased differences in the neuroimmune response may be involved [7]. Microglia are the resident innate immune cells in the brain [8,9]. In addition to exhibiting varying structures and functions in different brain regions, these cells display sex-specific transcriptomic and proteomic profiles [10,11]. On the basis of these earlier findings, microglial heterogeneity has recently gained much attention with the emergence of single cell-based technologies. Multiple studies have utilized bulk and single-cell transcriptomic data from elderly human brains to identify different microglial properties and their relationships with neuropathology. Consistent with classic AD pathologies, it has been suggested that the genetic risk of LOAD, such as APOE and TREM2 status, is functionally associated with the microglial response to amyloid-β pathology (i.e., amyloid-responsive microglia) [12-14]. On the other hand, activated response microglia (ARMs) [15], which share many overlapping homeostatic response signatures with disease-associated microglia (DAMs) [16], are generally considered a part of the normal aging response, although a subpopulation of ARMs of female origin, distinguished by the expression of genes involved in MHC class II presentation, tissue repair and LOAD genetic risk (APOE, BIN1, etc.), has been associated with progressive amyloid-β accumulation [17]. In the same way, losses in homeostatic phenotype, phagocytic activity and the activation response of DAMs are also associated with neurodegeneration [16,18] and the accumulation of degenerated myelin (i.e., white matter-associated microglia [WAMs]) [15].
With these updated findings describing the heterogeneous responses conferred by different subtypes of microglia in LOAD, recent studies have aimed to elaborate the underlying transcriptomic signatures associated with these changes. It has been repeatedly reported that microglial populations related to disease progression and proinflammatory responses exhibit heightened expression of APOE and lipid metabolism genes [19-21], suggesting that immunometabolic pathway perturbations possibly contribute to advanced disease pathogenesis [19]. While details on the molecular mechanisms underlying the heterogeneity of microglial phenotypes within the adult human brain are emerging, less is known about how female sex may affect the relative quantities of microglial subpopulations and whether estrogen receptor (ER) signaling plays a role in shaping certain properties of microglia related to the sex dimorphisms observed in LOAD pathogenesis.
To better understand whether sex-biased and disease-specific differences in microglial populations and molecular properties exist, an in-depth investigation was conducted in multiple human brain datasets. Here, we describe both established and previously unrecognized female-enriched and disease-associated microglia (FDAMic), the evolutionary trajectory relationships among microglial subtypes, and the ER signaling gene network related to the observed transcriptomic changes in FDAMic. We identified that the relative populations of FDAMic are more prevalent in female LOAD patients with an APOE4 background. This study therefore offers important insights at both the cellular and molecular levels into how ER signaling affects microglial heterogeneity and function. In addition, the biases of FDAMic emergence toward female sex and APOE4 status may explain how hormone replacement therapy is more effective among APOE4 carriers.
Single-nucleus RNA sequencing data sources, data processing and original code
All analyses were carried out using freely available software packages. All original code for each figure can be found at https://github.com/KimChow-Lab/FDAMic.
The following datasets were used: the Mathys et al. single-nucleus RNA sequencing dataset (syn18485175) was downloaded from Synapse.org [22] for use as the discovery cohort. A total of 70,634 high-quality nuclei were input into the Seurat 3 pipeline. The first 30 principal components were considered for UMAP visualization and cell-type identification. According to known cell-type labeling and cell type-specific markers, eight major cell types were identified. Twenty-two subcell clusters were obtained at a resolution setting of 0.5. Cluster 22 (one group of excitatory neurons containing 146 cells) was removed because all of its nuclei originated from one sample.
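A condensed R sketch of this kind of Seurat 3 workflow is given below; the input path, object names and QC thresholds are assumptions rather than the authors' code (which is available in their GitHub repository), and the cluster label to drop is used as a placeholder.

```r
library(Seurat)

# Load the count matrix (path and format are assumptions for illustration).
counts <- Read10X(data.dir = "filtered_count_matrix/")
obj <- CreateSeuratObject(counts, min.cells = 3, min.features = 200)

# Standard normalization, feature selection and scaling.
obj <- NormalizeData(obj)
obj <- FindVariableFeatures(obj)
obj <- ScaleData(obj)
obj <- RunPCA(obj, npcs = 30)

# First 30 principal components for UMAP and clustering, resolution 0.5.
obj <- RunUMAP(obj, dims = 1:30)
obj <- FindNeighbors(obj, dims = 1:30)
obj <- FindClusters(obj, resolution = 0.5)

# Drop a cluster whose nuclei all derive from a single sample.
obj <- subset(obj, idents = setdiff(levels(Idents(obj)), "22"))
```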
To validate the existence of FDAMic, the analyses were performed again with the Morabito et al. single-nucleus RNA sequencing dataset (GSE174367), which was downloaded from the GEO database [23]. Protein-coding genes were used to identify brain cell types, together with the markers used in the original publication [23]. Microglial populations were extracted, and batch effects between samples were removed using the align_cds function in Monocle 3 [24].
Pseudotime cell differentiation status analysis
The expression matrix of microglia was constructed with the GetAssayData function in Seurat [25]. The Monocle 3 [26] package was used to generate pseudotime evolutionary trajectories from the microglial expression matrix. The first 20 principal components were used to normalize the data with the preprocess_cds function, and the resolution in the cluster_cells function was set to 0.01 for clustering the cells. Microglial nuclei were ordered with the learn_graph and order_cells functions. To identify the roots of the trajectory lines, the signaling entropy for each cluster was computed with the SCENT R package [27] to estimate differentiation potential. In parallel, the CytoTRACE webserver [28] was used to validate the initial findings. Differentially expressed genes (DEGs) in different subcell clusters of microglia were identified with the FindMarkers function in Seurat with min.pct = 0.25 using the default Wilcoxon rank-sum test. DEGs with a p value < 0.01 were considered significantly changed genes in different subclusters of microglia.
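The following R sketch outlines the Monocle 3 steps named above; the conversion from a Seurat object named microglia and the root-cluster IDs are assumptions based on the text, not the authors' code.

```r
library(Seurat)
library(monocle3)

# Build a cell_data_set from the microglial Seurat subset (assumed object).
expr <- GetAssayData(microglia, slot = "counts")
cds  <- new_cell_data_set(expr, cell_metadata = microglia@meta.data)

cds <- preprocess_cds(cds, num_dim = 20)   # first 20 principal components
cds <- reduce_dimension(cds)
cds <- cluster_cells(cds, resolution = 0.01)
cds <- learn_graph(cds)

# Roots chosen from the clusters with the highest SCENT signaling entropy /
# CytoTRACE transcriptional diversity; cluster IDs "3" and "9" are assumed.
root_cells <- colnames(cds)[monocle3::clusters(cds) %in% c("3", "9")]
cds <- order_cells(cds, root_cells = root_cells)
plot_cells(cds, color_cells_by = "pseudotime")
```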
Identification of transcription regulons and their activities by the SCENIC algorithm
The SCENIC algorithm was developed for regulatory network analysis of transcription factors and the discovery of regulons (that is, transcription factors and their target genes) in individual cells [29]. The default databases hg19-500bp-upstream-7species.mc9nr.feather and hg19-tss-centered-10kb-7species.mc9nr.feather were used to analyze the transcription factor binding motifs of target genes. After calculating the coexpression relationships between transcription factors and target genes in each single cell, regulons were identified from the coexpression and binding location information. The activation status of each transcription regulon in each microglial subtype was subsequently estimated. Regulon activity was then analyzed with the AUCell software, and any transcription factors with area under the curve (AUC) values greater than the 0.05 threshold in any of the cell clusters were retained for further analysis.
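A skeleton of a typical SCENIC run matching this description is sketched below; the database directory, core count and expression-matrix preprocessing are assumptions, not the authors' exact settings.

```r
library(SCENIC)
library(Seurat)

# Point SCENIC at the cisTarget hg19 mc9nr databases named in the text.
scenicOptions <- initializeScenic(org = "hgnc",
                                  dbDir = "cisTarget_databases",
                                  nCores = 8)

# Expression matrix from the microglial Seurat subset (assumed object).
exprMat <- as.matrix(GetAssayData(microglia, slot = "counts"))
genesKept <- geneFiltering(exprMat, scenicOptions)
exprMat_filtered <- exprMat[genesKept, ]

# Coexpression network, module building and regulon creation.
runCorrelation(exprMat_filtered, scenicOptions)
runGenie3(log2(exprMat_filtered + 1), scenicOptions)
scenicOptions <- runSCENIC_1_coexNetwork2modules(scenicOptions)
scenicOptions <- runSCENIC_2_createRegulons(scenicOptions)

# Score regulon activity per cell with AUCell (AUC per regulon).
scenicOptions <- runSCENIC_3_scoreCells(scenicOptions,
                                        log2(exprMat_filtered + 1))
```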
Deconvolution of ROSMAP bulk RNA-seq datasets
The Scaden [30] and CIBERSORTx [31] algorithms were used to deconvolute the bulk RNA-seq datasets from the ROSMAP study, accessed from the AD Knowledge Portal. To reduce the computational burden, 5,000 nuclei were randomly sampled from the Mathys et al. dataset [22] and validated to cover all cell types before being used as the reference matrix. This read count matrix was then used to deconvolute the normalized bulk RNA-seq data with CIBERSORTx; S-mode was selected to remove any potential batch effects. The CPM value for each cell was subsequently used to generate the training data for Scaden.
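The reference-construction step could look like the following R sketch; sn_counts and cell_types are hypothetical objects holding the single-nucleus count matrix and its cell-type labels, and the output layout for Scaden is an assumption.

```r
set.seed(1)

# Randomly sample 5,000 nuclei as the single-cell reference and check that
# every annotated cell type is still represented before deconvolution.
idx <- sample(colnames(sn_counts), 5000)
ref_counts <- sn_counts[, idx]
stopifnot(all(levels(cell_types) %in% cell_types[idx]))

# Counts-per-million transformation used as Scaden training input.
cpm <- t(t(as.matrix(ref_counts)) / colSums(as.matrix(ref_counts))) * 1e6

# Export cells-by-genes CPM table (file name and orientation assumed).
write.table(t(cpm), "ref_cpm.txt", sep = "\t", quote = FALSE)
```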
Statistical methods for cell number variation and DEG analysis for bulk RNA-seq
The significance of cell number variations between ND and LOAD samples of different sexes was estimated with the Chi-square test. The significance of microglial cell proportions emerging under different pathological conditions was calculated with the two-sample Wilcoxon test. The significance of differentially expressed genes between ND and LOAD samples in the ROSMAP bulk RNA-seq dataset was assessed with a linear regression model for each gene, implemented in the limma package, with postmortem interval and age preset as covariate factors. Genes with p values less than 0.01 were regarded as significantly changed. Cell type-specific markers from the single-nucleus RNA-seq datasets were identified using FindMarkers in Seurat with min.pct set at 0.25. Only DEGs with p values less than 0.01 and expression in more than 20% of the cells in the corresponding subcluster were considered significant. GSEA was used to compare pathway variations among different clusters.
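A minimal limma sketch of the per-gene model with covariates is shown below; expr, pheno and the column names (diagnosis, pmi, age_death) are assumptions about the input layout, and the chi-square counts are placeholders.

```r
library(limma)

# 'expr' is a normalized genes x samples matrix; 'pheno' is a data frame of
# sample annotations (column names here are assumptions, not ROSMAP's).
design <- model.matrix(~ diagnosis + pmi + age_death, data = pheno)

fit <- eBayes(lmFit(expr, design))

# LOAD-vs-ND coefficient (assumes diagnosis levels ND, LOAD); p < 0.01.
tt   <- topTable(fit, coef = "diagnosisLOAD", number = Inf)
degs <- tt[tt$P.Value < 0.01, ]

# Chi-square test for cell-number variation between groups
# (2 x 2 contingency table of hypothetical cell counts).
chisq.test(matrix(c(120, 80, 90, 110), nrow = 2))
```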
Overrepresentation enrichment analysis
Overrepresentation enrichment analysis for the target gene sets was conducted on Metascape [32] using the default settings of the platform. The 20 most significantly enriched terms were visualized and analyzed as a network. The clusterProfiler [33] package was used to compare pathways enriched between various microglial subclusters. Gene set enrichment analysis (GSEA) was used to perform a global KEGG pathway comparison among microglial subtypes. The msigdbr package was used to extract the KEGG pathways from the MSigDB database [34]. The Wilcoxon test was performed with the wilcoxauc function (presto package) to prerank the genes, which were then used as the input for the fgsea package. Pathways with p values less than 0.01 were selected for subsequent analyses.
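A sketch of the KEGG GSEA step is given below; the Seurat object name, the cluster label "7" and the msigdbr arguments are assumptions (msigdbr argument names vary slightly across package versions), not the authors' exact code.

```r
library(msigdbr)
library(fgsea)
library(presto)

# KEGG gene sets from MSigDB (argument names per older msigdbr releases).
kegg <- msigdbr(species = "Homo sapiens", category = "C2",
                subcategory = "CP:KEGG")
pathways <- split(kegg$gene_symbol, kegg$gs_name)

# Wilcoxon AUC statistics prerank the genes for one microglial subcluster;
# "7" is a placeholder for the FDAMic cluster label.
stats <- wilcoxauc(microglia, group_by = "seurat_clusters")
ranks <- setNames(stats$auc[stats$group == "7"],
                  stats$feature[stats$group == "7"])

res <- as.data.frame(fgsea(pathways = pathways, stats = sort(ranks)))
res[res$pval < 0.01, c("pathway", "NES", "pval")]
```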
Characterization of different microglial subtypes
To characterize all possible microglial subtypes identified by our trajectory analysis, the gene expression signatures of these cells were compared with those identified in previous studies, as elaborated in the Results section. Genes with elevated expression in specific subgroups were selected as potential signature genes. With the fgsea R package [35], signature genes were mapped to the different microglial cell clusters according to their expression distribution. Normalized enrichment scores (NESs) were used to compare the relationship between paired microglial subtypes.
Identification of estrogen-responsive methylomic gene loci and their changes in LOAD
The brain DNA methylation data of ND and LOAD patients from the ROSMAP study were downloaded from the Synapse database (syn3157275). For the identification of estrogen-responsive methylomic loci, DNA methylation data from MCF-7 cells subjected to estrogen-depleted and estrogen-replete conditions were used; this dataset was downloaded from the GEO database (GSE132513). Differentially methylated loci were identified with the limma package using the default settings. Changes at loci with p values less than 0.01 were considered significant.
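The intersection logic for calling estrogen-responsive loci could be sketched as follows; tt_deplete and tt_replete stand for limma topTable results from the depletion and repletion contrasts (hypothetical object names), with positive logFC taken as hypermethylation.

```r
# Loci hypermethylated on estrogen depletion that reverse on repletion.
up_dep   <- rownames(tt_deplete)[tt_deplete$P.Value < 0.01 & tt_deplete$logFC > 0]
down_rep <- rownames(tt_replete)[tt_replete$P.Value < 0.01 & tt_replete$logFC < 0]
responsive_hyper <- intersect(up_dep, down_rep)

# The smaller set responding in the opposite direction.
down_dep <- rownames(tt_deplete)[tt_deplete$P.Value < 0.01 & tt_deplete$logFC < 0]
up_rep   <- rownames(tt_replete)[tt_replete$P.Value < 0.01 & tt_replete$logFC > 0]
responsive_hypo <- intersect(down_dep, up_rep)

# Combined list of estrogen-responsive methylation loci.
estrogen_responsive <- union(responsive_hyper, responsive_hypo)
length(estrogen_responsive)
```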
Microglia are aberrantly expanded but functionally compromised in female LOAD patients
Previous studies performed in laboratory mouse models showed that altered microglial physiology contributes to the sex-dimorphic effects observed in LOAD [17,36-38]. To better understand their relevance to humans, a pilot data analysis of a single-nucleus RNA-sequencing (snRNA-seq) dataset from Mathys et al. [22], consisting of 48 age- and sex-matched prefrontal cortex samples harvested from both nondementia [N = 24 (male N = 12; female N = 12)] and LOAD [N = 24 (male N = 12; female N = 12)] individuals, was performed. From a total of 7 major cell types (Fig. 1A, Additional file 10: Fig. S1A), 22 smaller clusters were substratified based on refined analysis of their transcription profiles (Fig. 1B). Subsequent analyses of the cell number distribution with respect to sex and disease status revealed significant differences in 14 out of 22 subclusters (Fig. 1C). Among them, Cluster 13 was the most significant, characterized by an enriched population of female microglial nuclei from affected individuals (Fig. 1C). The existence of this female-enriched and disease-associated group of microglia was confirmed in another dataset deposited by Lau et al., although the finding was not statistically significant due to the small sample size [39] (Additional file 10: Fig. S1B, C). With reference to an existing definition of disease severity in the original publication [22], which was preassigned based on an integrated consideration of multiple clinicopathological features (Additional file 1: Table S1), changes in total female-to-male cell number ratios between nondementia (ND) and disease-affected (LOAD) samples were investigated. No obvious sex biases were found at the whole-tissue level (Additional file 10: Fig. S1D); significant differences were found within 14 subclusters of cells (Fig. 1D). Of all possible brain cell types detected, microglia were the only ones that consistently exhibited a progressive increase in cell number as the disease advanced in females (Fig. 1D). Relative to nondementia controls, the increment in microglial cell number observed in female samples was more profound at advanced stages of the disease (Fig. 1E, F) and was positively associated with more severe neurofibrillary tangle (NFT) deposition (i.e., defined by Braak and CERAD scores) (Additional file 10: Fig. S1E, F). Despite being marginally insignificant, the relative quantities of microglia also trended upward in female subjects suffering from more severe cognitive decline (i.e., Cogdx score) (Fig. 1G).
To further validate these relationships, additional analyses of the bulk transcriptomic datasets of brain tissues harvested from 613 individuals (nondementia: male N = 133 and female N = 219; LOAD: male N = 88 and female N = 173) who took part in the Religious Orders Study and Memory and Aging Project (ROSMAP) [40] were performed. Similarly, the phenotypic status of these samples (i.e., LOAD vs. ND) was defined with reference to multiple clinicopathological features, including the neuritic plaque load (CERAD) [41], neurofibrillary tangle pathology (Braak) [42] and cognitive status [Cogdx [43] and DCFDX [44]] (Fig. 1H). Cell composition analysis was then performed with the deep learning-based Scaden method [30], which revealed selective increments in relative microglial cell number among affected females (Fig. 1I, J); this finding was alternatively validated by the CIBERSORTx deconvolution method [31,45] (Additional file 10: Fig. S1G, top). Further analyses also indicated that these observations were associated with a more advanced disease status, as reflected by poorer Cogdx (Fig. 1J), Braak and CERAD scores (Additional file 10: Fig. S1G).
The observed increase in microglial cell number suggested that the global homeostatic status of the cells may have changed. In the ROSMAP bulk transcriptomic dataset, DEG analysis between sex-specific LOAD and ND samples revealed a significantly higher number of DEGs in female than in male samples (DEGs for LOAD vs. nondementia: female samples = 4,643 genes; male samples = 446 genes) (Fig. 1K, Additional file 2: Table S2). Among these, the upregulated DEGs in the female LOAD group clustered significantly into pathways related to gliogenesis and glial cell differentiation (Fig. 1L), suggesting that these cells are possibly evolving to a more terminally differentiated state or adopting a more stable reactive phenotype. Notably, this sex-biased trend in the number of DEGs and the functional implication of the upregulated DEGs in female LOAD patients were validated even after downsizing the female sample number to equal that of males, achieved by randomly eliminating female samples 1,000 times in a permutation analysis (Additional file 2: Table S2). This finding validated that the DEGs identified were not biased by the preexisting differences in sample numbers. Cell-specific DEG analyses performed on sex-specific LOAD versus ND samples of the Mathys et al. dataset (Fig. 1M) revealed that key microglia-related proliferation genes, such as CSF1R [46] and CD81 [47], were significantly induced in affected females (Additional file 10: Fig. S1H-I). Further analysis of the DEGs that were downregulated in affected females but upregulated in affected males was performed (Fig. 1N). Genes related to microglial stress responses, such as HSPA1A and HSPA1B, which prevent protein aggregation [48]; DDIT4, which modulates Aβ cytotoxicity [49]; ZFP36, which downregulates proinflammatory cytokine production [50]; and MSR1, which supports alternatively activated (M2) polarization of macrophages [51], were identified (Fig. 1N). In contrast, the DEGs that manifested in the opposite manner (i.e., upregulated in affected females but downregulated in affected males) included EMID1 and CHN2, which promote cell proliferation and migration [52,53] (Fig. 1N). These findings suggested that enhanced cell proliferation but diminished stress response signaling networks may preferentially occur in the microglia of affected females. This assumption was supported by the ROSMAP dataset, as immune function pathways, such as T cell activation, interleukin signaling and positive regulation of proteolysis, were consistently downregulated uniquely in the affected female group (Fig. 1O). Similarly, these sex-biased changes at the pathway level were also found in Cluster 13 (i.e., microglia) of the Mathys et al. dataset (Additional file 10: Fig. S1J, left panel). Notably, these observations were readily evident even in the early stages of the disease (Additional file 10: Fig. S1J, right panel). Collectively, these data confirmed that the transcriptomic changes in microglia in LOAD patients were sexually biased.
Identification of female-enriched and disease-associated microglia (FDAMic)
To delineate whether the expansion of functionally compromised microglia in affected females is a general phenomenon occurring in the majority of microglial cells or in a distinct subgroup of microglia [54,55], a semisupervised pseudotime analysis was performed based on the microglial differentiation status [26] (Fig. 2A, Additional file 11: Fig. S2A). From the analysis, three major trajectory branches were defined. Subcluster 11 was completely disconnected from the others and, owing to its small cell number, was discarded in the subsequent analyses (Fig. 2A, Additional file 11: Fig. S2A). In this setting, the "root" of the trajectory, defined as the most undifferentiated state [28], was identified based on the assumption that undifferentiated cells possess more diverse gene expression profiles, while terminally differentiated cells are highly specialized [28]. Using the SCENT entropy-based method, cells labeled with higher entropies, representing greater gene expression profile diversity, were first deployed in the analyses [27,56,57], which revealed that Subclusters 3 and 9 (Fig. 2A) were likely the roots of the two separated branches (Additional file 11: Fig. S2B, top panel). This observation was alternatively validated by CytoTRACE, an algorithm that empirically uses the number of expressed genes per cell as a measure of transcriptional diversity (Additional file 11: Fig. S2B, bottom panel) [28]. To better understand the degree of cell differentiation and the functional status of the microglia located along the trajectory branches, mapping analysis of known marker genes was performed. The branch that consisted of Subclusters 1, 2, 3, 6 and 8 (Fig. 2A, B) exhibited enriched gene expression of classic homeostatic microglial (HomMic) markers, such as P2RY12 and CX3CR1 (Fig. 2C, Additional file 11: Fig. S2C), and was therefore considered the "HomMic branch". The branch constituted by Subclusters 5, 7, 9 and 10 exhibited a much more diverse set of microglial signatures (Fig. 2C, Additional file 11: Fig. S2C). For instance, Subcluster 10, located at the terminus, resembled disease-associated dystrophic microglia (DysMic) [14] due to its robust expression of FTL1, FTH and PLEKHA7 (Fig. 2C, Additional file 11: Fig. S2C). In the neighboring Subclusters 5, 7 and 9, however, robust expression levels of activated response microglia (ARMic) markers, such as SPP1 and C1QB, were observed (Fig. 2C, Additional file 11: Fig. S2C). Since Subcluster 9 was predicted to be a "root" (Fig. 2A, Additional file 11: Fig. S2B), this finding matched the established role of ARMic as a precursor of DysMic [17]. Subcluster 4, which dominated a distinct branch (Fig. 2A-B), resembled border-associated macrophage-like microglia (BAMic) due to its robust expression of F13A1, MRC1 and SEPP1 (Fig. 2C, Additional file 11: Fig. S2C) [58].
Using these subtype classifications, we then further analyzed the relationships with the disease status (Fig. 2D) and sex (Fig. 2E) distributions in each subcluster. Notably, Subcluster 7 was highly enriched with microglia from multiple affected female samples (Fig. 2F, Additional file 11: Fig. S2F). Further characterization of the subcluster's relationship to disease severity in terms of neurofibrillary tangle (NFT) load (Fig. 2G), neuritic plaque burden (Fig. 2H) and severity of cognitive impairment (Additional file 11: Fig. S2D) supported that Subcluster 7 was associated with a more advanced disease status in the absence of biases of age (Additional file 11: Fig. S2E) or donor effect (Additional file 11: Fig. S2F). Collectively, Subcluster 7 was defined as the "female-enriched and disease-associated microglia" (FDAMic) cluster (Fig. 2I), and its emergence was further validated in another cohort of samples [23] (Fig. S2G, validation cohort). Consistently, FDAMic were located between ARMic and DysMic on the pseudotime trajectory (Fig. 2J, Additional file 11: Fig. S2H). Moreover, this cluster of cells identified in the validation cohort was enriched with nuclei from affected female subjects (Fig. 2K) with a more advanced Braak status (Fig. 2L).
To elaborate the contribution of FDAMic to the global molecular changes detected in all microglia, cell nuclei belonging to this cluster were selectively removed from the entire discovery cohort. Upon their removal, the positive relationship between relative C13 cell number and disease severity was no longer significant (Fig. 1F versus Fig. 2M). Consistent with the list of DEGs curated from comparing C13 cells of female LOAD versus ND samples (Fig. 1M), a substantial number of upregulated DEGs were contributed uniquely by FDAMic (Fig. 2N, Additional file 11: Fig. S2I). Next, upon selective elimination of FDAMic nuclei, the statistical significance and fold changes of many DEGs were diminished (Additional file 11: Fig. S2J). These included the microglia-specific neuroinflammatory-stable gene CD81 [59]; the major brain cholesterol carrier APOE [60]; SPP1, which mediates phagocytic activities [61]; and the immunosuppressive ribosomal protein S19 (RPS19) [62] (Additional file 11: Fig. S2J, K). Notably, these changes were not observed when an equal number of random microglial nuclei were eliminated in the control permutation analyses (Additional file 3: Table S3), which alternatively validated the contribution of FDAMic to the detected global molecular changes.
Transcriptomic signatures suggested a proliferative and proinflammatory but phagocytosis-defective phenotype in FDAMic
To further characterize the phenotypic properties of FDAMic in comparison with the other subtypes of microglia, DEG analysis was first performed regardless of the sex or disease status of the samples. In the Mathys et al. discovery cohort, 373 DEGs were found, 183 of which were significantly upregulated in FDAMic (Fig. 3A, Additional file 4: Table S4). Subsequently, Metascape pathway and network-level analyses [32] revealed that the majority of these upregulated genes supported a network of TYROBP/DAP12-complement signaling pathways related to LOAD pathogenesis [63] (Fig. 3B). Furthermore, a group of genes encoding the major histocompatibility complex (MHC) class II autoantigens was identified, suggesting that FDAMic are likely proinflammatory in nature and may actively interact with brain-infiltrated peripheral T cells [64] (Fig. 3B). In addition to these downstream immune functions, upstream pathways such as ribosome biogenesis, which supports cell proliferation [65], as well as the pathway that negatively regulates protein ubiquitination [66], were identified (Fig. 3B). In contrast, the 190 downregulated DEGs in FDAMic were implicated mainly in the Rac and Rho GTPase signaling network (i.e., the "RAC GTPase cycle", "RHO GTPase cycle" and "Activation of GTPase activity" pathways) and cellular phagocytosis-related activities (i.e., "Fc gamma receptor dependent phagocytosis", "Fc gamma R-mediated phagocytosis", "Phagocytosis" and "Ubiquitin-dependent protein catabolic process") (Additional file 12: Fig. S3A). Together, these findings suggested that FDAMic are likely pathological and dysfunctional. At the individual gene level, the top upregulated set of DEGs identified (Fig. 3A) (i.e., RPS19, RPLP1, RPL13, FTL, SPP1, RPL28, ACTB and APOE) was also robustly expressed in the ARMic and DysMic clusters (Fig. 3C), supporting their interrelationships within the ARMic branch (Fig. 3C). The only differences among the three subtypes were found in several downregulated DEGs, including AKAP13, GAB2, ANKRD17, KANSL1, RAPGEF1, ZNF609 and PIK3R6, which were robustly expressed in ARMic and DysMic but distinctively suppressed in FDAMic (Fig. 3C, Additional file 12: Fig. S3B, C).
To better characterize and visualize the functional similarities and differences between FDAMic and the other subtypes of microglia in a more holistic manner, KEGG pathway analyses were performed. ARMic, as the name implies, exhibited a robustly activated signaling profile (Fig. 3D). In contrast, BAMic, HomMic and NorARMic (i.e., a subgroup of ARMic enriched with nuclei originating from nondementia samples) presented a generally inactive signaling profile (Fig. 3D). FDAMic, however, resembled a mixed functional profile. Pathways representing a set of autoimmune diseases (i.e., viral myocarditis, type I diabetes mellitus, systemic lupus erythematosus, autoimmune thyroid disease, allograft rejection, graft-versus-host disease, asthma, leishmaniasis and prion diseases) and disease-associated immune signatures (i.e., antigen processing and presentation, the intestinal immune network for IgA production, and complement and coagulation cascades) were activated, with MHC class II autoantigens as common lead genes (Fig. 3D, Additional file 12: Fig. S3D). Among the inactivated pathways (i.e., those highlighted in blue), common genes with a GO biological function belonging to Fc receptor signaling were suppressed and are versatilely involved in the Wnt (Additional file 12: Fig. S3E), VEGF (Additional file 12: Fig. S3F) and MAPK (Additional file 12: Fig. S3G) signaling networks, as well as the T cell and B cell receptor-mediated pathways (Additional file 12: Fig. S3H-I) and microglial gonadotropin-releasing hormone (GnRH) signaling, a hypothalamic hormonal pathway that mediates reproductive [67] and metabolic competence [68] (Additional file 12: Fig. S3J). In addition, the JAK-STAT (Additional file 12: Fig. S3K), insulin (Additional file 12: Fig. S3L) and glycosphingolipid biosynthesis (Additional file 12: Fig. S3M) pathways were also suppressed (Fig. 3D). To further validate the uniqueness of FDAMic relative to the other subclusters more systematically, comparisons with the signature gene expression profiles of microglial subtypes defined previously by different research groups were performed (Fig. 3E). Similar to DysMic (C10), FDAMic closely resembled the "ARM" defined by Frigerio et al. [17], the "DAM" defined by Keren-Shaul et al. [16], the "dystrophic microglia" defined by Nguyen et al. [14] and the "WAM" defined by Safaiyan et al. [15]. However, unlike DysMic, FDAMic also presented properties of the "microglial neurodegenerative phenotype" (MGnD) defined by Krasemann et al. [69], as well as the "neurodegenerative disease" (NeuroDegen) and "proliferation" phenotypes defined by Friedman et al. [70], which differentiated the two subtypes. In the comparison between FDAMic (C7) and ARMic (C9), the latter exhibited robust similarities to the "aged microglia phenotype" (AgedMic) defined by Olah et al. [71] but not the "proliferation" properties defined by Friedman et al. [70], which therefore differentiated the two subtypes. Collectively, these cell characterization data confirmed that FDAMic are distinct from the other microglial subtypes. The emergence of FDAMic is associated with sex dimorphism and the pathogenesis of LOAD.
FDAMic evolved from female ARMic associated with a compromised estrogen receptor signaling network
Our data supported that FDAMic possess a distinct set of molecular phenotypes. To identify the potential upstream drivers that shape this unique transcriptome profile, the SCENIC algorithm was deployed to dissect and compare the gene regulatory networks among all the subtypes of microglia [29] (Fig. 4A). A set of transcriptomic regulons (i.e., Set 1) constituting FOXP1, which supports cognitive functions [72], and FOXO3 and RBPJ, which protect against age-related vascular diseases [73,74] (Fig. 4A), was activated in BAMic. For the HomMic clusters (Clusters 1, 2, 3, 6 and 8), however, an alternative set of transcriptomic regulons (i.e., Set 2) was activated; these include BPTF [75] and MEF2C [76], which govern microglial homeostatic responses; MEF2A, which promotes autophagy [77]; and ZEB1, which mediates protective effects after brain ischemia [78] [75,79,80] (Fig. 4A, Set 2). Another set (i.e., Set 3), characterized by IKZF1 [81], RUNX1 [82], ETV6 [83], SMARCA4 [84], TFEC [85], RCOR1 [86] and TCF12 [87], which are all crucial for normal immune lineage commitment, was coactivated in these cells, as well as in NorARMic and ARMic in a different category (Fig. 4A, Set 3). In contrast, transcription regulon Sets 1-3 were not active in either FDAMic or DysMic; in the latter, however, a distinct set of transcription regulons that promote metabolic reprogramming (i.e., ESRRA [88], BACH1 [89]) and immune exhaustion (i.e., CREM [90], YBX1 [91]) was activated instead (Fig. 4A, Set 4). The FDAMic transcriptomic profile exhibited no activity in any of the 4 sets of transcriptomic regulons, suggesting that this cluster is molecularly distinct from the others. One obviously distinct property of FDAMic is female nuclei enrichment (Fig. 2E, F), but these cells are also likely defective in gonadotropin-releasing hormone (GnRH) signaling (Additional file 12: Fig. S3J) and insulin signaling (Additional file 12: Fig. S3L) [92-94], both of which are known to be estrogen-regulated, suggesting that this hormonal signaling axis is key. Supporting this, estrogen receptor-1 (ESR1/ERα) and estrogen receptor-2 (ESR2/ERβ) were both expressed at the lowest levels in FDAMic (Fig. 4B). One immediate consequence is the loss of protein-protein interactions and the regulatory effects on known coactivators and transcription factors [95]. Of the 55 microglia-relevant transcription factors identified by the SCENIC program (Fig. 4A), 39 (71%) were ERα or ERβ protein-binding partners, as predicted by the STRING and PPI networks available in the BIOGRID database (Fig. 4C, Additional file 5: Table S5). This high percentage of binding partners within the list was nonrandom in nature, since this number was much higher than that obtained from any random selection drawn from the entire genome (Fig. 4D). Alternatively, loss of the ER may directly downregulate its target gene expression, which includes some of the SCENIC-identified transcription factors. According to the CHEA transcription factor targets database [96], as much as 32% (18/55) of this set of microglia-relevant transcription factors were indeed transcription targets of ESR1/ERα (Fig. 4E). Taken together, these findings suggested that at least a significant part of the immune-related properties shared by the common microglial subtypes, except FDAMic, is associated with an active ER signaling network. In other words, the aberrant emergence of FDAMic is at least in part a result of defective ER signaling, in addition to other ER-independent mechanisms that shape its phenotype.
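As an illustration of how such an overlap can be tested against chance, the sketch below applies a one-sided hypergeometric test; the genome size and the size of the ER-partner universe are placeholder assumptions, not figures from the study, which used random selections from the genome instead.

```r
# Is an overlap of 39 ER-binding partners among 55 transcription factors
# larger than expected by chance? (Equivalent to Fisher's exact test.)
N <- 20000   # assumed number of protein-coding genes in the universe
K <- 2000    # assumed number of annotated ERalpha/ERbeta binding partners
n <- 55      # SCENIC-identified microglia-relevant transcription factors
k <- 39      # of these, the number that are ER binding partners

# P(X >= k) under the hypergeometric null of random draws from the genome.
phyper(k - 1, K, N - K, n, lower.tail = FALSE)
```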
Referring back to the SCENIC transcriptomic regulon profile, the proto-oncogene product of SPI1 (PU.1), implicated in the pathogenesis of LOAD [97,98], was the only activated regulon found in FDAMic (Fig. 4A); it is not known to interact with any isoform of ER (Fig. 4C) or to be a downstream ER gene target (Fig. 4E). Further pathway analysis of the 288 SCENIC-identified PU.1 targets that were highly expressed in the testing dataset suggested that they are likely involved in some of the activated features of FDAMic, including Aβ binding, MHC class II protein complex assembly and related activities (Fig. 4F, G, Additional file 6: Table S6). On the other hand, FDAMic was the only subcluster of microglia that exhibited the "proliferative" properties defined by Friedman et al. [70], which could be mediated in part by the transcriptional activities of PU.1, as it is closely associated with the expression of CD81 [47], CD74 [99] and TYROBP/DAP12 [100] (Fig. 4H). Indeed, an activated PU.1 transcription regulon was also found in ARMic (Fig. 4A), suggesting that FDAMic could have evolved from some of these cells facing compromised activities in the ER signaling-regulated transcription network (i.e., Sets 1-4). Intriguingly, ARMic are enriched with relatively more nuclei from male subjects (Fig. 2K); therefore, it is imperative to understand whether the activated PU.1 observed in this cluster was more related to a distinct subset of nuclei derived from female subjects. Analysis of DEGs between ARMic nuclei of female and male origin revealed that key targets of PU.1, including CD81, CD74, HLA-DPB1, TYROBP and CSF1R [101], were more robustly expressed in female nuclei (Fig. 4I), suggesting that this subset of cells is a potential precursor of FDAMic.
In addition to these trans-acting mechanisms, ER signaling may also epigenetically modulate DNA methylation to adjust chromatin accessibility for various transcription regulators in estrogen-sensitive cells [102]. Using the estrogen-sensitive MCF-7 DNA methylome as a testing model, depletion of the hormone from the culture environment resulted in a more pronounced gain in global DNA methylation than demethylation (Fig. 5A, left), and these observations were reversed when the hormone was repleted in the system (Fig. 5A, right). From these observations, 1,799 estrogen-responsive DNA methylation loci (i.e., mostly genetic regions that become hypermethylated under estrogen depletion but respond in a reverse manner upon estrogen repletion, together with the very few that respond the other way round) located on 1,125 genes were identified (Fig. 5B, Additional file 7: Table S7). Functionally, an ample number of these genes were intriguingly implicated in the signaling network of Rac and Rho GTPases (Fig. 5C). Considering that the regulatory regions of these genes are likely hypermethylated in the absence of ER signals, their expression is likely suppressed [103]; therefore, this epigenetic mechanism may explain how genes involved in the Rac and Rho GTPase signaling network are suppressed in FDAMic (Additional file 12: Fig. S3A). To validate whether these loci were affected in LOAD, comparisons of DNA methylome profiles between LOAD and ND brain samples were performed in a sex-specific manner. The analysis revealed that the changes in DNA methylation in female LOAD subjects were more dramatic than those found in males (Fig. 5D). In female LOAD subjects, a total of 39,394 hypermethylated loci corresponded to 9,032 genes, and 3,079 hypomethylated loci corresponded to 2,399 genes (Fig. 5D, Additional file 8: Table S8). Notably, a significant number of these loci were likely estrogen-responsive, as suggested by the comparative analysis (Fig. 5E) against the list curated from the MCF-7 analysis (Fig. 5A, B). From there, 162 hypomethylated estrogen-responsive genes were identified; however, they did not cluster into any meaningful pathways other than "signal transduction" (Fig. 5E-G). In contrast, among the 682 hypermethylated estrogen-responsive genes identified (Fig. 5E, F), many were implicated in multiple pathways of the Rac and Rho GTPase network (Fig. 5G). Although these findings reflect only changes at the bulk tissue level, they reveal a possible relationship between ER signaling and the Rac and Rho GTPase network, as well as their female-biased linkage in LOAD subjects. This finding may be useful in explaining how this signaling network is widely suppressed in FDAMic (Additional file 12: Fig. S3A).
APOE4 and female sex are risk factors associated with the emergence of FDAMic
The menopausal transition and the decline in estrogen hormonal signaling are natural and inevitable changes in all aging women [104]. Therefore, additional risk factors must be in play to confer selective vulnerability to disease pathogenesis among certain individuals. It has been reported previously that a greater penetrance of the APOE4 genotype in females might exist [105]; we therefore speculated that APOE genetic status may also affect the emergence of FDAMic. As presented on a pseudotime trajectory map, FDAMic were uniquely enriched with nuclei from APOE-44 samples but included the fewest cells from APOE-23 samples, an allelic combination that has been proposed to exhibit disease-protective effects (Fig. 6A, B) [106]. Characterization of APOE gene expression regardless of variant status revealed that FDAMic expressed the highest level of APOE among all subtypes (Fig. 6C), suggesting that FDAMic are likely more affected by defective APOE4 than other subtypes. The APOE gene encodes a 299-amino-acid cell surface glycoprotein that primarily functions as a lipid transporter [107]. However, a previous study indicated that APOE4 can indirectly affect cellular transcriptomic profiles [108], suggesting that it may facilitate the acquisition of FDAMic transcriptomic signatures in microglia. Using the ROSMAP dataset, gene expression trend analyses across samples belonging to 3 major groups of APOE variants (i.e., 22 and 23 versus 33 versus 34 and 44) were conducted in a sex-specific manner (Fig. 6D). In contrast to observations in male subjects, in whom gene inductions were enriched mainly in subjects of the APOE22 and APOE23 background, more extensive inductions in gene expression were found among female subjects of the APOE34 and APOE44 background (Fig. 6D, Additional file 9: Table S9). Notably, signature genes of FDAMic, including those implicated in microglial cell proliferation (i.e., CD74 [99], CSF1R [46] and RPS3 [109]), LOAD pathogenesis (i.e., TYROBP [63] and SPI1 [110]) and immune checkpoint signaling (i.e., VSIG4 [111]), were the most upregulated in the APOE34/44 group, specifically among females (Fig. 6E). To understand how these transcriptomic changes manifest in the relevant pathways across sexes and APOE statuses, pathway enrichment analysis was performed. Of the upregulated DEGs curated from the comparison between APOE34/44 females and females without the APOE4 allele (i.e., APOE22, 23 and 33), many were related to stress responses, RNA metabolism, cytokines and interleukin signaling (Fig. 6F). The upregulated DEGs curated from similar comparisons among male subjects were implicated in extracellular matrix organization, homeostasis and, unexpectedly, multiple pathways of the Rho GTPase signaling network (Fig. 6G). The absence of an activated Rho GTPase network in female APOE34/44 brain samples suggested that their microglia are likely more vulnerable to additional insults that further downregulate this signaling network and its associated phagocytic activities, a profile that resembles FDAMic. Additionally, a set of downregulated DEGs found in male APOE34/44 samples was associated with suppressed RNA metabolism and protein translation networks (Fig. 6H); this suppression is deemed neuroprotective, as it may restrain immunosenescence [112,113]. Quantitatively, female subjects carrying even a single APOE4 allele showed significant increases in total microglial populations compared to non-APOE4 carriers (i.e., 22, 23 and 33) of the same sex or to male subjects of the same APOE34/44 background (Fig. 6I), which could be caused, at least in part, by the emergence of FDAMic.
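The sex-specific trend analysis across the three APOE groups can be approximated as a per-gene linear trend on an ordered group factor. The inputs below (expr, apoe_group, sex) are hypothetical placeholders for the ROSMAP-derived objects, not the authors' actual pipeline:

# Minimal sketch: test a monotone expression trend across APOE groups
# (22/23 -> 33 -> 34/44) separately for each sex. `expr` is genes x samples;
# `apoe_group` and `sex` are per-sample factors.
trend_by_sex <- function(expr, apoe_group, sex, which_sex) {
  keep <- sex == which_sex
  g    <- as.numeric(factor(apoe_group[keep],
                            levels = c("22/23", "33", "34/44")))  # ordinal 1-3
  t(apply(expr[, keep], 1, function(y) {
    fit <- summary(lm(y ~ g))$coefficients
    c(slope = fit["g", "Estimate"], p = fit["g", "Pr(>|t|)"])
  }))
}
female_trend <- trend_by_sex(expr, apoe_group, sex, "F")
# Genes with a positive slope and small p in females but not in males would
# match the female-biased inductions described for Fig. 6D.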
To further validate the importance of APOE4 as a partnering risk factor with female sex in promoting the emergence of FDAMic during AD pathogenesis, a matching analysis of the upregulated DEGs found in FDAMic (Fig. 3A) was performed against a mouse microglia dataset, in which microglia were harvested from either control or 5xFAD mice of both sexes whose endogenous mouse ApoE alleles had been replaced by either the humanized APOE3 (i.e., 33) or APOE4 (i.e., 44) orthologs [114]. Consistent with the predictions in human brain samples, the gene expression patterns that most closely matched those of FDAMic were found in microglia harvested from female mice (i.e., 10/14 = 71.4% of mice on the left branch of the hierarchical clustering) or from those with the APOE-44 genotype (i.e., 8/14 = 57.1% of mice on the left branch) (Fig. 6J). Notably, among the latter, 5 (out of 7 in the entire study cohort, i.e., 5/7 = 71.4%) were female APOE44 mice (Fig. 6J). Intriguingly, these 5 mice were also 5xFAD (i.e., the remaining 2 of the 7 female APOE44 mice were non-5xFAD and fell on the right branch of the hierarchical cluster) (Fig. 6J), suggesting that Aβ pathologies may stand alone as a risk factor contributing to FDAMic signatures. However, the majority of microglia from male 5xFAD mice (i.e., 8/12 = 66.7%), regardless of their APOE status, failed to exhibit strong FDAMic signatures (Fig. 6J, right branch of the hierarchical cluster). Together, these discrepancies highlight that female sex in combination with APOE44 may constitute a strong set of risk factors favoring the emergence of FDAMic.
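The matching analysis can be approximated as hierarchical clustering of mouse samples on the FDAMic DEG panel. The sketch below uses base R only; the object names (mouse_expr, fdamic_degs, meta) are hypothetical placeholders rather than the authors' actual workflow:

# Minimal sketch: cluster mouse microglia samples by their expression of the
# FDAMic upregulated DEGs, then inspect sex/APOE/5xFAD composition per branch.
# `mouse_expr`: genes x samples matrix; `fdamic_degs`: character vector of
# human DEG symbols mapped to mouse orthologs; `meta`: per-sample data frame
# with columns `sex`, `apoe` and `fad`.
panel  <- mouse_expr[rownames(mouse_expr) %in% fdamic_degs, ]
scaled <- t(scale(t(panel)))                      # z-score each gene across samples
hc     <- hclust(dist(t(scaled)), method = "ward.D2")
branch <- cutree(hc, k = 2)                       # two major branches, as in Fig. 6J
table(branch, meta$sex)                           # e.g., 10/14 female on one branch
table(branch, interaction(meta$apoe, meta$fad))   # APOE44 x 5xFAD composition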
Discussion
Sex dimorphism in microglial function and neuroinflammation is implicated in AD pathogenesis. Our study revealed that microglia are the only detected cell type exhibiting sex differences in relative cell quantities during LOAD pathogenesis, characterized by a more pronounced increase in number in affected female subjects as the disease progresses. Compared to the respective sex-matched nondementia controls, more profound changes in transcriptomic profiles were also observed in female subjects with LOAD. These observations agree with previous conclusions drawn from mouse studies [17,115], supporting the idea that microglia are the key cell type contributing to the sex-dimorphic changes observed in the disease.
Microglia constitute a heterogeneous cell population, and changes in composition among various subtypes of microglia may greatly affect brain neuroimmune homeostasis and vulnerability to different neurodegenerative diseases. Our analyses of different subtypes of microglia confirmed the unique existence of FDAMic enriched in affected female subjects, particularly among APOE4 carriers. Quantitatively, the relative cell number ratio of FDAMic is positively associated with more advanced disease pathologies. The relevance of FDAMic to the sex-biased differences observed in the global microglial population was also confirmed, as selective exclusion of their nuclei from the analyses largely abolished these observations.
Compared to the rest of the microglial populations, FDAMic likely exhibit stronger cell proliferative properties, as defined by Friedman et al. [70]. Moreover, their transcriptomic signature also revealed higher expression levels of MHC class II autoantigens and Aβ binding receptor genes. It was previously reported that MHC class II autoantigen-expressing microglia were associated with various autoimmune-related neurodegenerative diseases and the development of chronic inflammatory lesions [116-119], hinting at an intrinsic pathological nature of these cells. In addition to these gain-of-function genes, downregulated genes in FDAMic also suggested a loss of phagocytic activities, which may render them ineffective in clearing Aβ and other protein aggregates from the region [120] despite exhibiting Aβ binding properties. Together, these phenotypic characteristics suggest that these cells are quantitatively associated with more severe pathologies of the disease.
Mechanistically, our analyses suggested that loss of ER signaling is associated with, and possibly in part underlies, the emergence of FDAMic. Among all subtypes of microglia, FDAMic exhibit the lowest expression levels of ESR1 and ESR2; they are therefore more likely to be defective in ER signaling, which is known to confer anti-inflammatory properties on microglia [121]. Our analyses revealed that ER signaling supports the activities of multiple transcription regulons that take part in shaping the homeostatic properties of HomMic, ARMic and BAMic by interacting with key transcription factors activated in these cells. Some of these transcription factors are gene targets of ESR1/ERα as well. On the other hand, ER signaling may modulate the global status of DNA methylation at gene regulatory regions such that the accessibility of these sites to transcription factors is altered. Our analyses revealed that many of these ER signaling-regulated DNA methylation gene targets are implicated in the Rac and Rho GTPase signaling network, which is known to be crucial in supporting microglial phagocytosis [122]. In LOAD, our analysis indicated that the majority of these genes were hypermethylated in affected females. Although the bulk tissue DNA methylome data cannot directly provide single-cell resolution to pinpoint the changes to microglia, the identified linkage between ER signaling and DNA methylation targets suggests that the diminishing sex hormone signals during the menopausal period may reshape the brain microenvironment in a way that favors the emergence of FDAMic.
The menopausal transition and the decline in ER signaling are inevitable in all women [104]. Therefore, we reasoned that additional risk factors must underlie selective disease vulnerability in affected individuals. Here, we report that the coexistence of APOE4 genetic status (the strongest genetic risk factor for LOAD [123]) with female sex constitutes a strong set of risk factors favoring the emergence of FDAMic. Since FDAMic are associated with more advanced pathologies and cognitive decline, their emergence explains, at least in part, why a greater penetrance effect is found among female carriers of APOE4 [124]. In the clinic, hormone replacement therapy (HRT) is used as a strategy to mitigate cognitive decline during the menopausal transition and postmenopausal periods. The sex- and APOE4-biased trigger for the emergence of FDAMic may explain why HRT is associated with higher efficacy among APOE4 carriers [124]. It has been suggested that HRT should be administered early in the menopausal transition to achieve a better protective effect. It is plausible that this beneficial effect is in part mediated by sustaining the ER signaling network in ARMic, thereby preventing their subsequent transformation along the cell fate trajectory into FDAMic, a pathogenic and dysfunctional subtype of microglia. Nevertheless, this pathogenic nature of FDAMic suggests that selective modulation of these cells and their precursors in the brain may help to regain neuroimmune homeostasis, making them a potential new target for future drug development.
While this work presents a set of in-depth analyses conducted with datasets from multiple sources to address how female sex may affect the relative quantities of microglial subpopulations and whether changes in the fidelity of estrogen receptor (ER) signaling may remodel the properties of these cells, our study still has limitations. One major limitation is the relatively low proportion of microglia in the brain, which makes further exploration of how these cells vary across different stages of LOAD challenging. This issue could be resolved by utilizing the large-sample-size datasets recently made available to the research community, such as those associated with the Sun et al. [20] and Green et al. [21] (a preprint manuscript) studies. Our preliminary analyses of their supplementary data indicated that the lipid-associated/processing microglial subtype highlighted in these studies is indeed enriched in female and disease-associated nuclei (Additional file 13: Fig. S4, Additional file 14: Fig. S5), and that their corresponding marker genes (i.e., PPARG+, APOE+, TREM2+) are also highly expressed in FDAMic (Additional file 13: Fig. S4A). This indirectly validated the existence of a population of "FDAMic-like" microglia in these independent datasets; whether they truly resemble the molecular signature of FDAMic warrants future investigation. Furthermore, according to Keren-Shaul et al. [16], activation of disease-associated microglia (DAM) is characterized by induced phagocytic and lipid metabolic activities via upregulated expression of APOE, LPL, CD9, CST7 and TREM2 and concurrent downregulation of microglial checkpoint genes (e.g., CX3CR1) [16]. Considering that FDAMic also exhibited the highest expression levels of APOE and TREM2 (Additional file 13: Fig. S4A) but the lowest expression level of CX3CR1 (Fig. 2C) among the microglial subtypes, we speculate that FDAMic resemble the activated state of DAM, while their predominant APOE4 and female sex status (Fig. 6) might have hindered the activation of phagocytic activities through the Rac and Rho GTPase signaling network [125-128] and associated autophagy pathways [129] (Additional file 12: Fig. S3A). However, the original DAM properties were mainly based on previous mouse studies [16] and are substantially different from the microglial signatures identified in human AD brains [130]. This discrepancy could be due to the fact that these cells constituted only a small number of nuclei in the entire microglial population of the original mouse study [16], and that greater variability exists in human versus mouse samples [16]. Consistent with this, a recent study also indicated that DAM-like signatures in the human LOAD brain do not encompass one single state but rather multiple substates [20]. Further speculation should therefore proceed with caution. Nevertheless, if our prediction of FDAMic as a subgroup of lipid-processing DAMs is correct, it would offer important insights into how APOE genetic status and the sex of the subject may alter the lipid-processing properties and neuroprotective nature of activated DAMs, and it will be tested in future studies.
Fig. 1 (See figure on next page.) Aberrant expansion of functionally compromised microglial populations in female subjects with LOAD. A T-SNE plot of 70,634 nuclei derived from 48 age- and sex-matched prefrontal cortex samples harvested from both nondementia [N = 24 (male N = 12; female N = 12)] and LOAD [N = 24 (male N = 12; female N = 12)] patients from the Mathys et al. dataset. Different cell types are abbreviated as follows: excitatory neurons, Ex; oligodendrocytes, Oli; inhibitory neurons, In; astrocytes, Ast; microglia, Mic; endothelial cells, En; and oligodendrocyte progenitor cells, Opc. B Cells were further subclustered and colored based on subcluster numbering. C Cell number distribution statistics according to sex and disease status within different subclusters. Bars on the right represent -log10(p value). D Trends of changes in the scaled female:male cell ratio against disease status. ***P < 0.05. E T-SNE plots showing how female and male nuclei from Clusters 13 and 21 are distributed in different stages of the disease. F, G Relative cell ratio of Cluster 13 in different F stages of disease or G status of cognitive functions (Cogdx scores). H Disease status of samples of the ROSMAP study was defined based on multiple clinicopathological parameters (x-axis). I, J Relative microglial cell ratio changes in ROSMAP samples calculated by the Scaden deep learning algorithm. Comparisons were made between various I disease statuses and J stages of cognitive function (Cogdx scores). K Number of DEGs identified from sex-specific comparisons made between LOAD and ND samples of the ROSMAP dataset. L Gene Ontology: biological process pathway enrichment analysis of downregulated DEGs in female LOAD samples. M Volcano plots illustrating DEGs curated from the comparison between Cluster 13 nuclei originating from LOAD versus ND samples in a sex-specific manner [females (top panel) and males (bottom panel)]. N UpSet plot illustrating how DEGs shown in M overlapped and were related to one another. O Metascape enrichment network of DEGs shown in M and corresponding boxes indicating how different pathways manifested in LOAD samples of different sexes. Pathways that shared similar trends of changes in a particular sex are grouped together in dashed boxes. Color code: light pink (downregulated pathways in female LOAD); dark pink (upregulated pathways in female LOAD); dark blue (upregulated pathways in male LOAD)
Fig. 2 (See figure on next page.) Identification of female-enriched and disease-associated microglia (FDAMic). A, B Single-cell trajectory analysis with the Monocle 3 algorithm identified an evolutionary relationship among all microglial nuclei in the Mathys et al. cohort. A UMAP plot showing the locations of various microglial subclusters based on existing knowledge from published studies. Numbers inside the brackets represent the number of nuclei. B Three major branches of microglial fates were identified: HomMic, BAMic and ARMic branches. C Violin plots showing marker gene expression of BAMic (black), HomMic (blue), ARMic (red) and DysMic (pink) subtypes. D, E Visualization of all subtypes of microglia on the evolutionary trajectory UMAP plot according to D disease diagnosis or E sex of the samples. F Sex- and disease status-specific cell ratio distribution among all subclusters of microglia. Subcluster numbers are color-labeled according to their branch location on the evolutionary plot, i.e., BAMic (black), HomMic (blue), and ARMic (red). G, H Visualization of all subtypes of microglia on the evolutionary trajectory UMAP plot according to G neurofibrillary tangle (NFT) burden or H neuritic plaque burden of the samples. I Detailed definition of microglial subtypes defined by our analysis, particularly the female-enriched and disease-associated microglia (FDAMic). J Single-cell trajectory analysis with the Monocle 3 algorithm identified an evolutionary relationship among all microglia in the Morabito et al. cohort. The expression intensities of the VSIG4, SPP1, RPS19 and C1QB genes are indicated on the right. K Sex- and status-specific cell ratio distribution among all subclusters of microglia identified in the Morabito et al. study. L Ratios of nuclei in samples of different Braak stages belonging to different subclusters of the Morabito et al. cohort. M Relative cell ratio changes in Cluster 13 of the Mathys et al. cohort after selective removal of FDAMic nuclei from the analysis. N Relative expression levels of the DEGs curated from the comparison between Cluster 13 nuclei originating from female LOAD versus ND samples (Fig. 1M, top panel) in different subtypes of microglia of the Mathys et al. cohort
Fig. 3 FDAMic are unique from the rest of the microglial population. A Volcano plot illustrating DEGs in FDAMic versus the rest of the microglial population in the Mathys et al. cohort. B Over-representation analysis (ORA) of upregulated DEGs in FDAMic shown in A using the Metascape platform. Every node represents an enriched term, and two nodes are linked if their Kappa similarities are higher than 0.3. Similar functional terms are clustered together and are displayed using the same color. Node size is proportional to the number of enriched genes. C Left: Violin plots of the top 8 upregulated DEGs specific to FDAMic. Right: Violin plots of 7 downregulated DEGs specific to FDAMic. D, E Pathway analysis of DEGs in each subtype of microglia (i.e., obtained when each specific subtype was compared to the rest of the microglial populations) referencing D the KEGG pathway database or E the signatures of microglial subtypes defined by different research groups as indicated
Fig. 4 FDAMic evolved from female ARMic associated with a compromised estrogen receptor signaling network. A Transcription regulons that possibly regulated the DEGs in all subtypes of microglia were predicted by the SCENIC algorithm. Their degrees of activation are indicated by the color. B Dot plot showing the relative expression levels of ESR1 and ESR2 in different subtypes of microglia. C Left (STRING network): physical interaction network between ESR1/ERα and the 4 sets of transcription regulons identified by the SCENIC algorithm. Transcription factor labels indicate positive interactions. Right (heatmap): physical binding prediction between transcription factors predicted with ESR1/ERα or ESR2/ERβ. D Random permutation of transcription factors (TFs) and their probabilities of interacting with ESR1/ERα. On average, 12 out of 55 (probability = 0.22) randomly selected TFs may interact with ESR1/ERα; this is in stark contrast to the 33 out of 55 (probability = 0.6) identified by the SCENIC algorithm. E Heatmap showing the scaled gene expression levels of ESR1/ERα-targeted TFs in different subclusters of microglia. F Functional enrichment analysis of the 288 SPI1 target genes highly expressed in microglia. G, H Correlations between the normalized expression level of SPI1/PU.1 and those of various G MHC class II autoantigens or H cell proliferation-associated genes. All of these are downstream targets of SPI1/PU.1. I Top: volcano plot showing DEGs identified from the comparison between ARMic of female versus male origin. Key microglial MHC autoantigens and proliferation-associated genes are labeled. Bottom: violin plots illustrating the expression profiles of these key genes across all subtypes of microglia
Fig. 5 Estrogen receptor signaling regulates the Rac and Rho GTPase signaling network by altering the global DNA methylation profile. A Volcano plots illustrating the DNA methylation profiles of the MCF-7 cell line subjected to estrogen-deprived (left) or -replete (right) conditions. B UpSet plot summarizing the total and intersecting hyper- or hypomethylated locus numbers identified in A. C Pathway enrichment analysis of 1,125 estrogen-responsive DNA methylation gene loci identified from A. The top enriched pathways are labeled. D Volcano plots showing the changes in DNA methylation profiles in brain tissues of LOAD versus ND individuals in a sex-specific manner (left: female; right: male). E, F Estrogen-responsive DNA methylation gene loci revealed significant overlap with altered DNA methylation loci found in female LOAD patient samples, illustrated in E Venn diagram and F heatmap formats. G Pathway enrichment analysis of estrogen-responsive hypomethylated (left) and hypermethylated (right) genes identified in female LOAD subjects
Fig. 6 APOE4 and female sex are risk factors for the emergence of FDAMic. A Visualization of APOE status distribution in all subtypes of microglia from the Mathys et al. cohort. B Ratios of nuclei of different APOE statuses in all subclusters of microglia. C Violin plot illustrating the normalized expression level of the APOE gene in all microglial subtypes. D Heatmap illustrating the major trends of changes in gene expression across individuals carrying different combinations of APOE allelic variants. E Violin plots illustrating the expression profiles of key microglial proliferation-associated genes across samples with different combinations of APOE allelic variants. F-H Functional enrichment analysis of the F upregulated DEGs in female subjects carrying APOE-34 or APOE-44 and the G upregulated and H downregulated DEGs in male subjects carrying APOE-34 or APOE-44 compared to subjects of the respective sex with combinations of APOE-22, APOE-23 or APOE-33. I Cell ratio changes in microglial populations of the ROSMAP samples predicted by the Scaden deep learning algorithm. Comparisons were made across either male or female subjects carrying different combinations of APOE allelic variants. J Heatmap illustrating the matching of DEGs in FDAMic (Fig. 3A) to transcriptome profiles of microglia harvested from control or 5xFAD mice in which endogenous ApoE alleles were replaced by either human APOE3 (i.e., 33) or APOE4 (i.e., 44)
| 2024-01-06T05:12:57.761Z | 2024-01-04T00:00:00.000 | {
"year": 2024,
"sha1": "95e4121223a2a8fa1eb5509ae56961c50832fa2e",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "95e4121223a2a8fa1eb5509ae56961c50832fa2e",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
269823008 | pes2o/s2orc | v3-fos-license | The "emotional brain" of adolescent Spanish-German heritage speakers: is emotional intelligence a proxy for productive emotional vocabulary?
Autobiographical memories (AMs) are partly influenced by people's ability to process and express their emotions. This study investigated the extent to which trait emotional intelligence (EI) contributed to the emotional vocabulary of 148 adolescents – 60 speakers of Spanish as a heritage language (HL) raised in Germany, 61 first-language (L1) German speakers and 27 L1 Spanish speakers – in their written AMs of anger and surprise. The results revealed that heritage speakers with high trait EI used more emotional words in their AMs. These bilinguals also used more positive, negative and high-arousal words in their HL and in their AMs of anger. Similar patterns were observed in the AMs produced in Spanish (HL and L1), but L1 Spanish speakers used more emotional words in their AMs of surprise. By contrast, L1 German speakers used more emotional words than bilinguals in their AMs in German, and AMs of anger in German included more emotional vocabulary than those addressing surprise events.
Introduction
Recalling and expressing emotions is unique to each individual. For example, while some people may display a calm attitude and self-control when confronted with emotionally charged situations, others will be unable to restrain their emotions. This becomes evident in the way that they behave, the words that they use and the intensity of their emotional reactions. Emotional intelligence (EI) is an essential component of emotional regulation, behaviour and communication. EI has been conceptualised from different perspectives. According to the ability model developed by Salovey and Mayer (1990), EI refers to the human ability to recognise and understand one's own and other people's emotions and to use this knowledge to regulate one's thoughts and behaviour (Mayer et al., 2000; Salovey & Mayer, 1990). The trait approach proposed by Petrides and Furnham (2000) suggests that EI consists of mental abilities and individual personality traits, such as empathy, adaptability, self-esteem and assertiveness, which influence the way in which people process affective information (Petrides & Furnham, 2000, 2001; Petrides et al., 2016). Trait EI is believed to be a stable component of personality across the life span (Petrides & Mavroveli, 2018; Petrides et al., 2016; Vernon et al., 2008). 1 This study adopted the trait approach 2 to investigate the extent to which trait EI contributed to the emotional vocabulary that adolescent bilinguals with Spanish as their heritage language (HL) and German as their second language (L2) used in their written autobiographical memories (AMs) of anger and surprise in both their HL and L2. We also collected data from first-language (L1) Spanish and German speakers in order to examine whether similar patterns of emotional vocabulary would emerge in HL Spanish and L1 Spanish, as well as in L1 German and L2 German. Although previous studies of adolescents' EI have mainly focused on L1 speakers, the multilingual societies in which we currently live, characterised by an increasing number of L2 users, third-culture individuals, migrants and heritage speakers, have increased the demand for examining how EI influences young bilinguals' abilities to express and regulate their emotions not only in their L1s but also in their L2s. It is therefore important to take the particularities of each individual into account, both in terms of their personality traits (trait EI in this case) and their language background (e.g., without merely using general labels such as "heritage or native speakers", which provide only a limited account of the richness and diversity of the characteristics of their languages; see Darvin & Norton, 2022), as well as the specific features of the emotion elicitation stimuli (e.g., valence and arousal). As emotional processing and emotional expression are very broad terms and require different methodologies and data collection procedures, each study must narrow down which component of processing or expression it aims to assess. The current study focuses on emotional expression, and particularly on emotional vocabulary elicited through AMs of different valence. 3
Our study contributes to bilingualism and emotion research in several ways. First, heritage speakers represent a unique type of bilingualism; they have acquired their heritage language(s) naturally and in affective contexts, which may influence how they regulate and express their emotions (Montrul, 2015, 2019; Montrul & Polinsky, 2021) – two core facets of EI. Moreover, the number of heritage speakers across Europe – particularly of HL Spanish speakers in Germany – is increasing rapidly (Loureda Lamas et al., 2020); therefore, more research on these still underrepresented HL contexts is needed to achieve an in-depth understanding of the complexities and particularities of the emotional expression of these minority language groups. Second, although anger is a prototypical negative emotion, surprise has variable affective valence and can generate mixed or ambivalent emotions, which is probably the norm rather than the exception in real-life situations (see Mavrou & Dewaele, 2020). Previous studies with L1 Spanish and L1 German speakers have found differences in the conceptualisation and emotional processing of anger and surprise (Bormann-Kischkel et al., 1990; Durst, 2001; Fontaine et al., 2013; Oster, 2019; Soriano Salinas et al., 2015), but very little is known about how bilinguals who are proficient in both Spanish and German express and verbalise these emotions. Furthermore, the experience of anger and surprise may be expressed differently depending not only on the language that the heritage speakers use (the HL or the L2) but also on their levels of EI (MacCann et al., 2020). This is supported by recent studies revealing that individuals with high trait EI are able to adjust the valence of their emotions and to regulate their arousal levels (Bodrogi et al., 2022), for example by experiencing negative emotions less intensely (Gao & Yang, 2023). Third, some recent studies have found that L2 learners with higher levels of trait EI are likely to perform better in L2 writing tasks (Beheshti et al., 2020) and in lexical retrieval tasks targeting emotional words (Mavrou, 2021; see also Barrett, 2017). However, these studies addressed general language abilities or specific language competences rather than "contextualised" emotional expression, and this is another gap that the present study aims to fill. The above findings, along with the extensive research on the role of EI in various aspects of emotional processing (e.g., Fiori et al., 2022; Lea et al., 2018; Mikolajczak et al., 2008), provide us with reasonable grounds to hypothesise that this positive influence of trait EI on L2 performance and use will also extend to the verbal expression of emotionally charged experiences among heritage speakers, which constitutes an underexplored area of enquiry. Lastly, despite the abundant research using decontextualised stimuli (e.g., pictures of emotional scenes, isolated emotionally charged words or sentences), the current study employed AMs, understood as the recall of emotionally meaningful, personal past experiences that provide people with good opportunities to reflect on and verbalise their emotions (Brewer, 1986; Fivush, 1994). It has been suggested that the characteristics of these AMs depend on the narrator's EI (Houle & Philippe, 2020; Yamamoto & Toyota, 2013; see also Bohanek et al., 2005). As Houle and Philippe's (2020) study illustrates, individuals who were able to regulate their emotions appropriately wrote coherent negative AMs in their L1 that were integrated into the story of the
personal self, leading to increased levels of personal well-being. We can therefore hypothesise a similar link between AMs narrated by heritage speakers and their levels of EI.
Emotional intelligence and bi-/multilingualism
Before discussing the role of EI in the emotional expression of heritage and bilingual speakers, which is the focus of the current study, it is important to understand how EI is involved in bi-/multilingualism in general. Recent evidence suggests that multilingualism and trait EI are mutually influential. Daga and Rajan (2023) argued that individuals who speak more than one language tend to have high levels of self-reported empathy and trait EI (assessed with the Trait Emotional Intelligence Questionnaire-Short Form [TEIQue-SF]). Dewaele et al.'s (2008) study revealed that multilinguals with high trait EI (TEIQue-SF scores) tended to experience less self-perceived anxiety when communicating in their different languages. It has also been suggested that the trait EI of adolescents raised in families with a heritage background is influenced by parenting practices. A study by Sung (2010) found that adolescents with Chinese or Korean as their HL and English as their L2 whose parents used rigid and directive parenting practices (i.e., practices based on the values of the heritage culture, such as hard work, discipline, saving face, strict family hierarchy or the use of anger to control behaviour) had very low trait EI, as assessed with the Bar-On Emotional Quotient Inventory (Bar-On, 2004). Conversely, adolescents with medium or high trait EI had parents who were open-minded and tolerant, adopted English cultural norms to some extent and supported their children's emotional development.
Additionally, trait EI has been found to correlate with bilinguals' linguistic practices and other personality factors. Bilinguals with more extroverted personalities and high trait EI (TEIQue-SF scores) were found to engage in more social interactions and thus developed greater L2 proficiency than those with more introverted personality traits (Ożańska-Ponikwia, 2013; Surahman & Sofyan, 2021). However, introverts were more successful in L2 written tasks, whereas extroverts outperformed their peers in L2 oral tasks (Ożańska-Ponikwia, 2018). Furthermore, certain aspects of trait EI, such as adaptability and emotional expression, had a positive influence on the frequency and degree of L2 English use by HL Polish speakers with a migrant background, while assertiveness and emotional regulation emerged as the most influential factors for communicating in L2 English among Polish-English bilinguals without a migrant background (Ożańska-Ponikwia, 2016). Findings such as these highlight the need to continue investigating EI in multilingual settings, including heritage contexts. Although comparisons between monolingual and heritage/bilingual speakers, and the interaction between EI and other personality traits, are topics beyond the scope of the current study, the aforementioned evidence points in the same direction: EI is an individual differences factor that appears to play a facilitative role in bilinguals' linguistic practices, and ultimately in the emotional vocabulary used by heritage speakers, as discussed in the next section.
Emotional intelligence and emotional vocabulary
Emotional experience and language use are closely related, as we use words to describe our emotional experiences and explain their affective meaning by providing specific information about their valence and arousal (Barrett, 2004). A rich emotional vocabulary in the L1 has been associated with a wide variety of emotional experiences, and the number of positive and negative words contained in the emotional lexicon has been linked to personal experiences of positive or negative emotional events, respectively (Vine et al., 2020). Emotional vocabulary develops and increases in sophistication and diversity throughout adolescence. Bazhydai et al. (2019) found that emotional vocabulary was richer and more precise in late adolescence, whereas it was less diverse in early adolescence and mainly consisted of descriptors of emotional states. Similar findings were reported by Ros-Morente et al. (2022) in a study of L1 Spanish speakers aged between 12 and 16 who performed a 3-minute emotional vocabulary retrieval task; older participants used a greater number of negative words, and females used more emotional (positive and negative) words than males.
The question that arises is whether having a richer and more diverse emotional vocabulary is a direct consequence of individual differences (such as those in EI), beyond the impact of age, previous experiences and other contextual factors that have already been established in the literature. The available evidence suggests that trait EI influences domains in which the use of emotional words is key, such as emotional granularity (i.e., the ability to describe positive and negative states using a wide range of emotional words and concepts; see Lindquist & Barrett, 2008), emotional awareness (Agnoli et al., 2019) and alexithymia (Davidson & Morales, 2022; Taylor & Taylor-Allan, 2007). Barrett (2017) argued that high-EI individuals are better equipped to use a wide variety of emotional concepts to describe their emotional states and experiences. Further links have been established between trait EI and emotional word recognition and retention (Lea et al., 2018; Mikolajczak et al., 2009). When it comes to the use of emotional words, Le Hoang and Grégoire (2021) found that L1 Vietnamese children with higher emotional awareness exhibited a richer emotional vocabulary when describing vignettes in which they had to explain the emotions of others. Another study by Mavrou (2021) revealed a positive association between the trait EI (TEIQue-SF scores) of 174 students who were L1 or L2 Spanish and English speakers and their ability to retrieve emotional words, particularly positive ones. Moreover, the most frequently retrieved emotional words varied in accordance with the language of retrieval, suggesting that emotional vocabulary in the L1 and in the L2 may be processed and represented differently by different speakers.
Research on emotional vocabulary acquisition and use among heritage speakers remains limited, and to our knowledge no study to date has investigated the role of EI in heritage speakers' emotional competencies and vocabularies. By contrast, general vocabulary acquisition in this population has received significant attention and appears to depend on multiple factors, including age of onset, the quantity and quality of exposure to the HL, linguistic competence in the societal language, language maintenance and use in the HL community and formal instruction, among others (Montrul, 2015; Polinsky & Scontras, 2020). Heritage speakers have been found to have a smaller general vocabulary in their HL than in their societal language or in comparison to L1 speakers of the HL (Montrul & Mason, 2020; Montrul & Polinsky, 2021). However, findings pertaining to their emotional – and particularly their productive – vocabulary are scant. Vañó and Pennebaker (1997) found that bilingual children with HL Spanish and L2 English exhibited a smaller emotional vocabulary in HL Spanish when describing emotion-evoking pictures. They attributed these results to the reduced opportunities for affective communication in the HL, which mainly took place at home. However, the authors examined only these children's emotional concepts referring to basic emotions (sadness, anger, happiness, fear and guilt), so the complete emotional vocabulary remained unexplored. In a more recent study, Driver (2022) investigated the impact of emotional valence on vocabulary learning with a sample of 64 HL Spanish-L2 English bilinguals and 57 students of Spanish as a foreign language from different language backgrounds. The participants were required to read three texts with positive, negative and neutral emotional valence and were asked to learn a set of positive, neutral and negative words included in each text. The results revealed that retention of words in the neutral and negative texts was more successful, although words with neutral valence had better overall recall. A qualitative analysis further indicated that the emotional meaning and the affective arousal of the texts played a beneficial role in vocabulary learning, particularly when the topics concerned emotionally significant personal experiences.
The current study
Previous research has suggested that EI (both trait EI and ability EI) influences individuals' abilities to express their emotions in both positive and negative ways (Bodrogi et al., 2022; Gao & Yang, 2023; Larsen, 2009; Larsen & Augustine, 2008). However, evidence concerning the specific linguistic elements that are determined by EI – specifically trait EI – is scarce and has been largely obtained from L1 speakers (see, e.g., Alba-Juez & Pérez-González, 2019). Studies of trait EI among bilingual speakers have mainly focused on general language proficiency (Ożańska-Ponikwia, 2013, 2018; Surahman & Sofyan, 2021) or on the emotional words used in somewhat neutral production tasks (Mavrou, 2021). Nevertheless, the role of EI may become more evident when we reflect on and describe affective states and emotions related to our personal experiences. To the best of our knowledge, the present study is the first to investigate the role of EI in the productive emotional vocabulary of adolescent HL Spanish-L2 German bilinguals as elicited via their personal AMs of anger and surprise in both their HL and L2. Furthermore, we collected data from L1 Spanish and L1 German speakers based on the same AMs in order to examine whether similar patterns of emotional vocabulary would emerge in HL Spanish and L1 Spanish, as well as in L1 German and L2 German. The following research questions guided our study:
(1) To what extent do trait EI, the language of retrieval and the valence of the AMs influence the number of emotional (positive and negative) words produced by adolescent HL Spanish-L2 German bilinguals in their written AMs of anger and surprise in the HL and L2? Are similar patterns observed in the same AMs of anger and surprise produced by L1 Spanish speakers and L1 German speakers?
(2) To what extent do trait EI, the language of retrieval and the valence of the AMs influence the number of high-arousal words produced by adolescent HL Spanish-L2 German bilinguals in their written AMs of anger and surprise in the HL and L2? Are similar patterns observed in the same AMs of anger and surprise produced by L1 Spanish speakers and L1 German speakers?
(3) To what extent do trait EI, the language of retrieval (HL/L2) and the valence of the AMs (anger versus surprise) influence the diversity of these participants' emotional vocabularies?
We hypothesised that high trait EI participants would describe their experiences of anger and surprise using a broader emotional vocabulary in terms of positive/negative and high-arousal words (Barrett, 2017; Mavrou, 2021). We further speculated that high trait EI participants would use a more diverse emotional vocabulary, regardless of the valence of their AMs (Beheshti et al., 2020; Le Hoang & Grégoire, 2021). With regard to language status, and following previous studies involving early or balanced bilinguals (Ferré et al., 2010; Vañó & Pennebaker, 1997; Vargas Fuentes et al., 2022), we expected that our bilingual participants would retrieve more emotional words in their L2 German. We also hypothesised that bilinguals' emotional vocabulary would be less diverse in their HL Spanish than in their L2 German due to the reduced availability of general vocabulary in the HL (Montrul & Polinsky, 2021). Regarding the valence of the AMs, our hypothesis was that AMs of anger would elicit more emotional words because events with negative valence and high levels of emotional arousal are likely to be recalled in more detail than neutral or positive events (Chae et al., 2011; Earles et al., 2016; Kensinger, 2007; Talarico et al., 2009). By contrast, we expected AMs of surprise to be described in a less elaborate way, i.e., with fewer negative, positive and high-arousal words.
Participants
The sample consisted of 148 adolescents aged between 13 and 18: 60 HL Spanish-L2 German bilinguals, 27 L1 Spanish speakers and 61 L1 German speakers. The Spanish-German bilinguals had been raised in Germany, and most of them had grown up in bicultural Spanish and German families (n = 53), while the remainder (n = 8) had L1 Spanish or HL Spanish parents. All of the bilingual speakers attended C1/C2 CEFR level (Council of Europe, 2001) Spanish language and culture classes (ALCE) in different German cities. ALCE is an extracurricular educational project supported by the Ministry of Education and Vocational Training of the Government of Spain that aims to promote the Spanish language and culture among child and adolescent heritage speakers in Germany. Although German was the main language of communication and socialisation in their environment (with friends and at school), the bilinguals had acquired Spanish from their parents and mainly used it with their families. The L1 Spanish and L1 German speakers attended secondary schools in Spain and Germany, respectively. All the L1 Spanish and L1 German speakers reported speaking only or mainly their L1 (Spanish and German, respectively) with their parents, teachers and peers. Information about the participants' demographic and language backgrounds, collected via the pencil-and-paper children's version of the Language Experience and Proficiency Questionnaire (Marian et al., 2007), is summarised in Table 1. 4
Procedure
Permission to conduct the study was obtained from the three governmental institutions involved in the education of the participants. The participants and their parents or legal caregivers were informed about the aims and procedures of the study via a letter distributed by the adolescents' teachers. All the parents or legal guardians, as well as the participants themselves, provided their written consent. Data collection took place during normal school hours. The participants wrote the AMs using writing templates provided by the first researcher (see the next section for a detailed description of the prompts). The order of the AMs that they had to write (anger/surprise) was counterbalanced, and the instructions were given in the language in which they had to write each AM. The bilingual participants wrote four AMs in total – two about anger and two about surprise in their respective languages (Spanish and German) – while the L1 speakers wrote two AMs, one for each emotion, in their L1. The bilingual participants started with the AMs in Spanish to avoid the influence of German, the language in which they were slightly more proficient. The participants were not allowed to use additional resources, such as paper or online dictionaries and electronic devices (mobile phones or laptops), nor were they allowed to consult their classmates or their teacher. However, if they had questions regarding the completion of the AMs, they were encouraged to ask the first researcher, who was present during the entire data collection. After writing the AMs, the participants were asked to complete the TEIQue version for adolescents (TEIQue-ASF). The study was conducted in compliance with the Declaration of Helsinki and the ethical principles for research developed by the American Psychological Association.
Autobiographical memories
AM retrieval is a valid and widely implemented method in cognitive psychology and in linguistic research (Mills & D'Mello, 2014; Rubin, 2005; Schrauf & Durazo-Arvizu, 2006). AM recall involves two components: the knowledge related to the memory of the event and the activation of the emotional state experienced during the event (Mills & D'Mello, 2014). The prompts for the AMs were created in accordance with previous studies that used a similar emotion elicitation method to examine bilingual emotional vocabulary (e.g., Ho, 2009; Marian & Kaushanskaya, 2008). All the participants were asked to write AMs of anger and surprise based on the following prompt: "Write about a real personal experience in which you felt particularly angry/surprised in Spanish/German. Your text should be about one page long and you should complete the task in about 15 minutes". Participants were instructed to recall and include in their AMs as much detail as possible about their emotions before, during and after the event, as well as what they said and how they said it, how they acted, their physical response, their age, the other people involved and the consequences. A total of 416 AMs were collected and analysed, of which 230 reported events that occurred during adolescence and 80 reported events that took place during childhood (5 in early childhood and 75 in middle and late childhood). For the remaining 106 AMs, no specific time frame was provided, although the context suggested that the events were recent, i.e., from adolescence.
Emotional intelligence
The participants' EI was assessed via the short version of the Trait Emotional Intelligence Questionnaire for adolescents (TEIQue-ASF; Petrides et al., 2016; Siegling et al., 2017). Bilingual participants had the option of completing the questionnaire either in Spanish or in German, while L1 speakers completed the questionnaire in their respective L1s. The TEIQue-ASF consists of 30 statements that assess four EI domains, namely well-being, self-control, emotionality and sociability. The participants were required to indicate their level of agreement with each statement using a 7-point Likert scale (1 = totally disagree, 7 = totally agree). The average score for each participant was calculated after applying reverse scoring procedures to a number of items, as recommended by the authors of the questionnaire. Internal consistency was satisfactory: for bilinguals, Cronbach's α = .826, 95% confidence interval (CI) [.783, .863]; for L1 German speakers, Cronbach's α = .856, 95% CI [.797, .902]; and for L1 Spanish speakers, Cronbach's α = .848, 95% CI [.743, .917].
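As an illustration of the scoring just described, the sketch below reverse-scores items and computes each participant's mean score and Cronbach's α in base R. The item matrix teique and the reversed-item indices are placeholders, not the instrument's actual scoring key:

# Minimal sketch of TEIQue-ASF scoring, assuming `teique` is a participants x
# 30 matrix of 1-7 Likert responses and `rev_items` indexes the reverse-keyed
# items (hypothetical here; the official key should be used in practice).
rev_items <- c(2, 4, 5)                          # placeholder indices
teique[, rev_items] <- 8 - teique[, rev_items]   # reverse a 1-7 scale
trait_ei <- rowMeans(teique)                     # one global score per participant

# Cronbach's alpha from first principles: k/(k-1) * (1 - sum of item
# variances / variance of the total score).
k     <- ncol(teique)
alpha <- (k / (k - 1)) * (1 - sum(apply(teique, 2, var)) / var(rowSums(teique)))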
Data analysis
Linear mixed-effects regression models were computed in RStudio 2023.03.1 (Posit Team, 2023) using the lmer function in the lme4 package (Bates et al., 2015) and the performance package (Lüdecke et al., 2021), which was used to calculate indices of model performance. We checked model assumptions using residuals-versus-fitted plots, normal probability plots and variance inflation factor (VIF) values. Although the assumptions were generally met, we also computed the same models using the robust function (Wang et al., 2022). The language used in the AMs of the heritage speakers (HL Spanish, L2 German), the valence of the AMs (anger versus surprise), trait EI and gender were entered as fixed effects, participant ID as a random effect and emotional words, high-arousal words and the diversity of the emotional vocabulary as the outcome variables (see Table 3 for a summary of these models). We also included an interaction term for the language used in the AMs and the valence of the AMs, but as this did not improve the fit of the models, it was removed from the final analyses, except in one case – the model testing high-arousal words in the bilingual group, in which the interaction term reached statistical significance and was therefore retained.
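A minimal sketch of this model specification is shown below; the data frame ams and its column names are hypothetical stand-ins for the authors' actual variables:

# Minimal sketch of the mixed-effects specification described above.
# `ams` is assumed to hold one row per AM, with columns high_arousal_words,
# language ("HL"/"L2"), valence ("anger"/"surprise"), trait_ei, gender and
# participant (the grouping factor for the random intercept).
library(lme4)

# Main-effects model with a by-participant random intercept.
m0 <- lmer(high_arousal_words ~ language + valence + trait_ei + gender +
             (1 | participant), data = ams)

# Adding the language x valence interaction, which was retained only where
# it improved the model (the bilinguals' high-arousal outcome).
m1 <- update(m0, . ~ . + language:valence)
anova(m0, m1)   # likelihood-ratio comparison of the two fits
summary(m1)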
Results
Descriptive statistics for the number of emotional (positive and negative) words, the number of high-arousal words and the diversity of the total number of emotional words per group (heritage speakers, L1 speakers), language (HL, L1, L2) and valence of the AMs (anger, surprise) are summarised in Table 2. The results of the statistical models (Table 3) revealed a main effect of the language of retrieval and the valence of the AMs on emotional vocabulary; that is, our bilingual participants used more emotional (positive and negative) and high-arousal words in their AMs in the HL and in their AMs of anger. In addition, a statistically significant interaction was found in that our bilingual participants used a significantly greater number of high-arousal words in the AMs of anger that they wrote in their HL. Trait EI was a statistically significant predictor of both the number of emotional words (B = 5.696, t = 3.449, p = .001) and the number of high-arousal words that the participants produced (B = 3.272, t = 2.756, p = .007). The diversity of the emotional vocabulary was predicted only by the language of retrieval; that is, the participants used more diverse emotional words in their AMs in the HL. Separate yet complementary models for negative and positive words revealed that the language of retrieval and the valence of the AMs predicted the number of negative words, with AMs in the HL (Spanish) and AMs about anger including a significantly higher number of negative words than the AMs in the L2 (German) and the AMs about surprise. Furthermore, the language of retrieval (t = −8.510, p < .001), EI (t = 3.700, p = .0004) and gender (t = 2.355, p = .022) contributed significantly to the number of positive words; that is, female participants and participants with higher trait EI produced more positive words in their AMs in the HL, particularly in the AMs about anger (t = −2.176, p = .031).
The aforementioned models for emotional words, high-arousal words and emotional vocabulary diversity were replicated in order to examine differences between the AMs in HL Spanish produced by our bilingual participants and the AMs in L1 Spanish written by our L1 Spanish speakers (see Table 4), as well as between the AMs in L2 German (bilingual participants) and the AMs in L1 German (L1 German participants) (see Table 5). In what follows, we focus only on the results that complement the previous analyses. Specifically, no considerable differences were observed between heritage speakers of Spanish and L1 Spanish speakers in the emotional vocabulary (number of positive/negative and high-arousal words) used in their AMs in Spanish. Moreover, both groups tended to use a greater number of these words in their AMs about anger than in their AMs about surprise. However, L1 Spanish speakers used a significantly greater number of both emotional words and high-arousal words in their AMs about surprise events. The comparison of the affective vocabulary in the AMs in L2 German (heritage speakers) and L1 German (L1 German speakers) further revealed a main effect of the language used and the valence of the AMs; that is, the AMs in L1 German included a significantly greater number of emotional (positive/negative) and high-arousal words than the AMs in L2 German, as did the AMs about anger compared to the AMs about surprise.
Discussion
EI is considered key to success in interpersonal relationships, in academic and professional settings and for mental well-being. Heritage speakers may also benefit from a well-developed EI when expressing their emotions; this hypothesis motivated our study. Specifically, we investigated the extent to which trait EI contributed to the emotional vocabulary that adolescent bilinguals with HL Spanish and L2 German used in their AMs of anger and surprise in both their HL and L2. Data from L1 Spanish and L1 German speakers were also gathered to explore whether similar emotional vocabulary patterns would be observed in Spanish L1 and HL and in German L1 and L2.
Overall, the results revealed that our bilingual participants who scored higher for trait EI used more emotional vocabulary, confirming our hypothesis that emotionally intelligent bilingual speakers are likely to describe their emotions in more detail (i.e., using more emotional words) because they are more aware of their emotional states and better equipped to reflect on them (Barrett, 2017; Mavrou, 2021; Petrides & Furnham, 2001; Petrides et al., 2016). Emotionally intelligent bilinguals may also be more adept in the use of a greater amount of emotional vocabulary because they have more emotional concepts at their disposal due to speaking two languages (Paradis, 2008; Pavlenko, 2005). 5 With regard to arousal, our results revealed that high trait EI heritage speakers exhibited greater emotional expressivity in that they used more high-arousal words in their AMs than did low trait EI heritage speakers. As previous studies have suggested, the recall of emotionally charged AMs is closely related to expressive writing: individuals with high EI appear to be more confident, possibly because they have fewer inhibitions about writing and communicating highly intense personal emotional experiences (Bohanek et al., 2005; Pennebaker & Chung, 2007; Yamamoto & Toyota, 2013). Moreover, as our participants were adolescents, they may have been more prone to expressing their emotions with particular intensity (Coe-Odess et al., 2019; Denham, 2019). However, it is important to note that trait EI was associated only with the number of emotional words, not with emotional vocabulary diversity. Although this result contradicts previous studies (Beheshti et al., 2020; Le Hoang & Grégoire, 2021), the discrepancies could be attributed to methodological differences; for example, Beheshti et al.
(2020) considered general (rather than emotional) lexical diversity, whereas Le Hoang and Grégoire (2021) focused on the link between emotional awareness and emotion-related types and tokens. Furthermore, the language of retrieval proved to be a significant determinant of the emotional vocabulary used by our bilingual participants. The results revealed that AMs in the HL contained a greater number of emotional and high-arousal words and more diverse affective vocabularies than the AMs in L2 German. In other words, the HL (Spanish in our study) emerged as a particularly emotional language when recalling AMs, even if our participants self-reported a slightly lower proficiency level in their HL Spanish compared with the societal language (German). Although this finding differs from previous studies among bilinguals that used retrieval tasks or decontextualised emotional situations (Ferré et al., 2010; Vañó & Pennebaker, 1997; Vargas Fuentes et al., 2022), it allows us to conclude that the use of AMs as a methodological technique may provide useful information by uncovering heritage speakers' emotional competences. Using this technique, our study found that emotional vocabulary is particularly salient in the HL, especially when heritage speakers have the opportunity to express and explain their emotions in relation to contextualised memories that have a personal meaning for them. Notably, previous evidence suggested that heritage speakers had limited general vocabulary in their HL due to a lack of input, fewer opportunities to practice the HL and fewer people to communicate with in the HL (Belpoliti & Bermejo, 2019; Montrul & Mason, 2020; Montrul & Polinsky, 2021). However, the same factors that were considered limiting (intimate and affective contexts of language use, and interlocutors who are family members and loved ones) may actually provide an advantage for emotional vocabulary development, or the development of this vocabulary in the HL may depend less on the frequency and more on the quality of interactions (Daskalaki et al., 2020; Gollan et al., 2015). To explain the above results, it is necessary to consider differences between the HL and the societal or majority language. The HL is acquired naturally and in emotionally charged contexts through interactions with parents and other family members (contexts in which positive affect and the expression of emotions tend to play important roles), whereas heritage speakers' L2s are usually acquired through later socialisation in educational contexts (Montrul, 2019; Pavlenko, 2008, 2012; Shablack & Lindquist, 2019). Moreover, the HL is acquired during early childhood, a sensitive and malleable life stage in which linguistic and emotional regulation systems develop simultaneously (Bloom & Beckwith, 1989; Cole et al., 2010), which could influence the way in which bilinguals (learn to) express their emotions in their HL. Another plausible explanation relates to the specific target languages examined in this study, in that Spanish (the HL) is considered to be a more emotionally expressive language than German (Barañano et al., 2004; Den Ouden, 2016; Rehbein, 2011).
The results of our study further suggested that the emotional event that was recalled influenced the emotional vocabulary, at least to some extent. In line with our hypothesis, AMs about anger contained more emotional words and more high-arousal words than did AMs about surprise (with AMs of surprise in L2 German being the least emotional). Both anger and surprise are high-arousal emotions but have different durations (Fontaine et al., 2013; Soriano Salinas et al., 2015). While anger tends to be a prolonged emotion, often involving rumination, surprise is experienced briefly, giving rise to other emotions (Scherer et al., 2004). Therefore, the above finding could be attributed to the increased memorability of negative emotions, because they serve adaptive functions or are discussed more frequently and in more detail (Chae et al., 2011; Earles et al., 2016; Kensinger, 2007; Talarico et al., 2009), as well as to the greater or more prolonged intensity with which anger is usually experienced, which could lead to more expressivity (Alia-Klein et al., 2020). The fact that the valence of the AMs did not influence emotional vocabulary diversity could be due either to methodological issues (e.g., the lexical diversity measure used in the current study) or to the wide range of extralinguistic factors that are more closely related to general lexical diversity, such as individual development of language skills (David & Wei, 2008), age and educational level (Sankoff & Lessard, 1975), the topics addressed in the AMs (see Van Gijsel et al., 2006) and cognitive anxiety (Bradac et al., 1980), to mention just a few.
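The distinction between the amount of emotional vocabulary (tokens, including repetitions) and its diversity (distinct types) that underlies these results can be made concrete computationally. The following minimal Python sketch illustrates the two measures with a hypothetical three-word anger lexicon; it is not the analysis pipeline used in this study, which relied on validated affective word norms and robust statistical models.

```python
import re

# Hypothetical mini-lexicon of HL Spanish anger words; real studies use
# validated affective norms covering valence and arousal.
EMOTION_LEXICON = {"furiosa", "rabia", "enfado"}

def emotional_counts(text: str):
    """Return (amount, diversity): emotional tokens vs. distinct emotional types."""
    tokens = re.findall(r"\w+", text.lower())
    emo_tokens = [t for t in tokens if t in EMOTION_LEXICON]
    return len(emo_tokens), len(set(emo_tokens))

am = "Estaba furiosa, sentia rabia, mucha rabia."
print(emotional_counts(am))  # (3, 2): three emotional tokens, only two types
```

On this operationalisation, a narrative can score high on amount while remaining low on diversity, which is exactly the pattern that allows trait EI to predict one measure but not the other.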
It is interesting that our bilingual participants employed a significantly higher number of high-arousal words in their AMs of anger in their HL. High-arousal words in an HL are acquired through emotional and sensory experiences (Bloom & Beckwith, 1989), which explains why these words may be used more frequently in the HL than in the L2. In addition, challenging behaviours may arise during adolescence (such as rebellious attitudes, breaking the norms, problems at school or disagreements with parents), thus triggering emotionally charged or aggressive behaviours that are more likely to be exteriorised in the HL, the language of the heart (McKay, 2005; Shooter & Bailey, 2010).
Finally, it is important to highlight both the similar and dissimilar emotional vocabulary patterns that emerged from the comparisons between AMs in L1 Spanish and HL Spanish and between AMs in L1 German and L2 German. First, despite the differences in self-reported Spanish proficiency level between our heritage speakers and L1 Spanish speakers, both groups used a similar number of emotional words in their AMs in Spanish, which was significantly greater in their AMs of anger. Anger is experienced very often during adolescence, especially at home with parents (Coe-Odess et al., 2019; Denham, 2019). As almost all the parents of our Spanish HL and L1 participants were Spanish speakers themselves, it is likely that they also made a similar use of emotional vocabulary in situations of anger with their children at home, vocabulary which was later acquired by or was more accessible to their children. However, L1 Spanish speakers produced more emotional words in their AMs of surprise in Spanish than did our heritage speakers, which could be explained by the fact that surprise is experienced less frequently than other emotions (Scherer et al., 2004). Therefore, our bilingual participants might have had fewer available emotional words in their HL that they could associate with the emotion of surprise, as they probably had fewer opportunities to experience this emotion and to acquire and practice the corresponding vocabulary in that HL, whereas L1 speakers might have had more opportunities to do so given their relatively more extensive use of their L1. Second, our L1 German speakers used more emotional words than did our bilingual participants in their AMs in L2 German. This may be due to emotional words being richer and more deeply encoded in the L1 than in the L2 (Altarriba et al., 1999); similarly, the retrieval of high-arousal words appears to be more prominent in the L1 than in the L2 (Baumeister et al., 2017). Taken together, these results indicate that for our bilingual group the HL functioned as the language of the heart, even though they were highly socialised in the majority language. We also found that AMs of anger in German (L1 and L2) included more emotional words than AMs of surprise. This result could be explained by the fact that people from individualistic societies (such as Germany) tend to express anger more openly, as they highly value their personal needs (Holodynski, 2006; Matsumoto et al., 2010; Mesquita & Frijda, 1992), and this may provide them with more emotional-linguistic resources to express this emotion in German.6
Nevertheless, our study is not without limitations. We only analysed written AMs, which might have conferred an advantage on our introverted participants, while oral AMs may have been more appropriate for extroverts (Ożańska-Ponikwia, 2018). Moreover, AMs are sensitive to time (Friedman & de Winstanley, 1998), so future studies need to consider the temporal distance of the past events narrated in AMs. In addition, trait EI is a multidimensional construct. The TEIQue-ASF used in this study assesses four components of trait EI, namely well-being, self-control, emotionality and sociability. Due to space limitations, we did not run statistical models for each trait EI dimension separately. However, we encourage future studies to conduct more in-depth analyses of the link between emotional vocabulary in the L1/HL and different facets of multidimensional constructs such as EI, as well as to use alternative theoretical models and measurements to test not only trait EI but also ability EI. Furthermore, future studies should include indices of general vocabulary knowledge to disentangle whether differences in emotional vocabulary between heritage speakers and L1 or late L2 users may be due to differences in overall vocabulary size; if not, they should proceed to collect data related to broader contextual and social factors (see Kupisch & Rothman, 2018; Rothman et al., 2023, for discussions on this matter). Another important caveat refers to the analysis of the emotional vocabulary, which was limited to the word level. To overcome this limitation, holistic perspectives that consider the affective tone of the AMs and the emotional effects that these AMs arouse in L1 speakers would advance our understanding of heritage speakers' emotional discourse. Less studied emotions, such as solitude, anxiety or blended emotions, are also worthy of consideration in this line of enquiry.
Conclusions and implications
This study led to three main conclusions. First, trait EI appears to be a proxy for the breadth of emotional vocabulary used by heritage speakers to express their emotional states and experiences, at least with regard to anger and surprise. Second, the HL of adolescent bilinguals remains the language of the heart: the language that triggers more emotional vocabulary, affective expressivity and a varied emotional repertoire in comparison to the societal language. Third, AMs about negative emotions (anger in this study) elicited a greater emotional and expressive vocabulary than did AMs about positive or dynamic emotions, such as surprise.
The relationship between EI and emotional vocabulary has broad implications for linguistic research, educational policies and health services. The ability to express emotions appears to be at least partly inherent to the individual rather than language specific. Therefore, our study calls for a more careful consideration of emotional abilities and personality factors in studies that investigate emotional vocabulary and the expression of emotion among bi-/multilinguals, including heritage speakers, migrants and third-culture individuals. Regarding language pedagogy, the teaching of emotional vocabulary and emotional expression should occupy an important position in L2/HL curricula and classrooms. HL teaching needs to be learner-centred and to make use of both the HL and the societal language. This would allow bilingual speakers to reflect on the differences between their HL and their L2 when expressing their moods and emotional states, and to increase their awareness of the cross-linguistic and cross-cultural differences between their languages; this would ultimately enable them to manage the communication challenges of multilingual societies in the twenty-first century.
Additionally, emotional vocabulary, particularly in the HL, becomes an external manifestation of adolescents' inner psycho-emotional states and personalities, and can serve as a diagnostic and therapeutic tool for educators and health professionals to identify and treat potential socio-emotional and behavioural maladjustments sufficiently early (Yun et al., 2019). This is becoming increasingly relevant in the case of young refugees who have had traumatic experiences, as well as with regard to multilingual adolescents in foster care or who are experiencing vulnerable mental-health circumstances. Yun et al. (2019) argued that "We can't start treatment until we hire a bilingual therapist" (p. 511). Therefore, language barriers should be addressed early and resolved effectively in the health care system. Furthermore, when an emotional barrier arises between young multilinguals and their parents, educators or therapists, being able to establish communication using the multilingual's different languages may be key to gaining access to their emotions and feelings and providing them with appropriate support (Serrani, 2023), as their ability to express their feelings may be more developed in one of their languages than in another.
The findings of our study have broader implications for other fields, such as human resources and population sciences. In the job market, recruiters may need to conduct interviews in multilinguals' or heritage speakers' different languages to gain insights into their personalities and to better assess their suitability for different positions. Governmental data-collection tools, such as surveys used to obtain population-representative statistics on parameters related to psychological well-being (for example, the psychological effects of isolation during a pandemic, or the impact of home schooling on the mental health of children and adolescents), should allow multilingual speakers to express their emotions and opinions in their preferred language, possibly in their L1, as providing these data in their L2 may lead to biased or skewed results. We can only achieve more inclusive societies and ensure that all citizens benefit from equal opportunities by taking into account the diversity of the individuals who form modern societies, particularly young people.

…tools that attempt to provide insights into the nature and complexity of EI. Ability EI is usually assessed with maximum-performance tests, whereas trait EI is assessed with self-reported Likert-type scales (see O'Connor et al., 2019, for a comprehensive discussion of the advantages and limitations of EI measures). 2 The present study uses the Trait Emotional Intelligence Questionnaire-Short Form (TEIQue-SF; Petrides et al., 2016), specifically the version for adolescents (TEIQue-ASF), which taps into well-being, self-control, emotionality and sociability (see section 5 for a detailed description).
3 Anger and surprise are discussed with reference to valence because, in terms of arousal, both are high-arousal emotions (see Fontaine & Scherer, 2013).

4 An anonymous reviewer argued that some results might be due to differences in language proficiency. The reason why we did not use a language proficiency exam to assess our participants' proficiency in their HL was that all of them attended C1/C2-level Spanish language and culture courses (ALCE) at the moment of data collection and spoke the HL at home. Regarding L2 German, all the heritage speakers in our study were living in Germany and attended German schools; therefore, German was the language in which they socialised and received instruction at school. This explains why their overall self-reported proficiency level in German (M = 9.22 out of 10, SD = 0.99) was slightly higher than their self-reported proficiency level in Spanish (M = 7.91 out of 10, SD = 1.15, p < .001). As expected, heritage speakers' self-reported proficiency level in HL Spanish was lower than L1 Spanish speakers' self-reported proficiency level in Spanish, the language they used both at home and to socialise at school (M = 9.53 out of 10, SD = 0.78, p < .001). However, no such differences were observed when we compared our heritage speakers' self-reported proficiency level in L2 German (M = 9.22 out of 10, SD = 0.99) and L1 German speakers' self-reported proficiency level in German (M = 8.94, SD = 0.78, p = .090). In this case heritage speakers slightly outperformed their peers and, interestingly, they also self-reported significantly higher writing skills in German (p = .008).

5 An anonymous reviewer argued that this interpretation lacks statistical support because EI was not a statistically significant predictor of emotional vocabulary diversity. However, it is important to clarify that the use of a greater amount of emotional vocabulary refers both to the ability to use more words (including repetitions) of this kind (amount) and to use more different words of this kind (diversity). Our results support the above view, at least partially. Regarding diversity, it is worth noting that there are more than 20 measures of lexical complexity/diversity (see Mavrou & Ainciburu, 2019, p. 128, for a review), and thus future studies should employ more varied and sophisticated lexical complexity measures to corroborate the findings of the current study. Other extralinguistic factors that influence lexical diversity are mentioned in section 7.

6 The collectivistic/individualistic labels are not rigid divisions. People belonging to a specific society share values and behaviours which are more associated with one or the other pole. The characterisation of Germany as an individualistic society is supported by studies on emotional expression and emotional regulation (see, e.g., Bender et al., 2012; Ogarkova & Soriano, 2014; Oster, 2019). While we do not intend to emphasise cultural differences (as this issue goes beyond the scope of the current study), this is perhaps a related background that could provide further explanations for our findings.
Table 1.
Descriptive statistics for participants' language background
Table 2.
Descriptive statistics for the affective vocabulary per group, language of retrieval and valence of the AMs
Table 3.
Emotional vocabulary as a function of language (HL Spanish versus L2 German), valence of the AMs (anger versus surprise), trait EI and gender (bilingual speakers only). *p < .05, **p < .01, ***p < .001. Statistically significant t-values for the robust models are marked in bold.
Table 4.
Comparison of the affective vocabulary in the AMs in HL Spanish (heritage speakers) versus L1 Spanish (L1 Spanish speakers)
Table 5.
Comparison of the affective vocabulary in the AMs in L2 German (heritage speakers) versus L1 German (L1 German speakers). *p < .05, **p < .01, ***p < .001. Statistically significant t-values for the robust models are marked in bold. | 2024-05-18T15:25:59.903Z | 2024-05-14T00:00:00.000 | {
"year": 2024,
"sha1": "b233d76ff3e320ea7b6700e1eca40067357d3853",
"oa_license": "CCBY",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/FD377362CD117292E2570C3ED60CF298/S1366728924000348a.pdf/div-class-title-the-emotional-brain-of-adolescent-spanish-german-heritage-speakers-is-emotional-intelligence-a-proxy-for-productive-emotional-vocabulary-div.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "41408a40502ebedf34aca70a5cfa29cb41c0b381",
"s2fieldsofstudy": [
"Psychology",
"Linguistics"
],
"extfieldsofstudy": []
} |
9542475 | pes2o/s2orc | v3-fos-license | Artemisinin resistance containment project in Thailand. (I): Implementation of electronic-based malaria information system for early case detection and individual case management in provinces along the Thai-Cambodian border
Background The Bureau of Vector-borne Diseases, Ministry of Public Health, Thailand, has implemented an electronic Malaria Information System (eMIS) as part of a strategy to contain artemisinin resistance. The effort corresponds to the WHO initiative, funded by the Bill & Melinda Gates Foundation, to contain anti-malarial drug resistance in Southeast Asia. The main objective of this study was to demonstrate the eMIS's functionality and outputs after implementation for use in the Thailand artemisinin-resistance containment project. Methods The eMIS has been functioning since 2009 in seven Thai provinces along the Thai-Cambodian border. It covers 61 malaria posts/clinics and 27 Vector-borne Disease Units, serving 12,508 hamlets at risk of malaria infection. The eMIS was designed as an evidence-based and near-real-time system to capture data for early case detection, intensive case investigation, monitoring of drug compliance and on/off-site tracking of malaria patients, as well as to collect data indicating potential drug resistance among patients. Data captured by the eMIS in 2008–2011 were extracted and presented. Results The core functionalities of the eMIS have been utilized by malaria staff at all levels, from local operational units to ministerial management. The eMIS case-detection module suggested decreasing trends during 2009–2011; the numbers of malaria cases detected in the project areas over the years studied were 3818, 2695, and 2566, with slide-positivity rates of 1.24, 0.98, and 1.16%, respectively. The eMIS case-investigation module revealed different trends in weekly Plasmodium falciparum case numbers when classified by responsible operational unit, local versus migrant status, and case-detection type. It was shown that most Thai patients were infected within their own residential district, while migrants were infected either at their working village or from across the border. The data mapped in the system suggested that P. falciparum-infected cases and potential drug-resistant cases were scattered mostly along the border villages. The mobile-technology application detected different follow-up rates, with particularly low rates among seasonal and cross-border migrants. Conclusion The eMIS demonstrated that it could capture essential data from individual malaria cases at local operational units, while effectively being used for situation and trend analysis at upper management levels. The system provides evidence-based information that could contribute to the control and containment of resistant parasites. Currently, the eMIS is expanding beyond the Thai-Cambodian project areas to the provinces that lie along the Thai-Myanmar border.
Keywords: Malaria containment, Electronic-based, Information system, Malaria surveillance

Background
During the last decades, the Greater Mekong Subregion (GMS) has been experiencing the highest level of Plasmodium falciparum resistance to anti-malarial medicines (as monotherapies or in combination) in the world. More recently, evidence of growing P. falciparum resistance to artemisinin and its derivatives has been reported along the Thai-Cambodian border [1,2]. Artemisinin-based combination therapy (ACT) is the most rapid, reliable and effective treatment for patients infected with P. falciparum malaria worldwide. Beyond possibly compromising results in the Asia-Pacific region, growing falciparum resistance to artemisinin derivatives was considered a global public health threat that could undermine global efforts and investments if no action were taken, with the potential for particularly severe effects in Africa [3]. Moreover, only a few replacement therapies in the latter phases of drug development are in the pipeline. With the serious concern that resistance may spread beyond the Greater Mekong Subregion to affect other regions, the World Health Organization (WHO) developed and supported a bi-country, multi-partner initiative to contain artemisinin-resistant parasites. The initiative was later consolidated into a global plan for artemisinin-resistance containment in 2011 [4]. The multi-pronged containment strategy was based on several elements: (1) stop the survival and spread of resistant parasites; (2) increase monitoring and surveillance to identify new foci rapidly, and provide information for containment and prevention activities; (3) increase access to diagnostics and treatment with ACT to improve patient outcomes and limit opportunities for resistance; (4) invest in anti-malarial drug-resistance-related research; and (5) motivate stakeholders at global, regional and national levels to support containment activities [4].
One key objective of the containment operation was to strengthen programme management through close observation and effective coordination with partners, which would enable rapid and high-quality strategy implementation. The main activities were laid out to ensure that all suspected patients had access to reliable diagnostic tools in the target areas, and that all infected patients had access to, and use of, radical treatments, including gametocytocidal drugs [5,6].
Thailand's Bureau of Vector-borne Diseases (BVBD) runs a vertically organized programme in the border provinces that remain endemic for malaria. This means that special malaria services operate in all endemic and remote villages in concert with general healthcare services. At the most peripheral points-of-care, such as malaria posts and clinics, and in malaria-endemic communities, village malaria volunteers and/or village health volunteers work in collaboration with malaria programme staff. At the upper levels, treatment and care activities are monitored vertically by the Vector-borne Disease Unit, the Vector-borne Disease Center, the Regional Offices of Disease Prevention and Control, and the BVBD. In addition to passive case detection, in which local febrile patients visit a malaria post/clinic or Vector-borne Disease Unit office for diagnosis by themselves, the local operational staff also conduct periodic active case detection in the villages for which they are responsible.
As part of the containment project implemented in seven provinces along the Thai-Cambodian border, the electronic Malaria Information System (eMIS) was developed on the basis of the aforementioned existing malaria-prevention and -control activities. The eMIS was built on the original hand-processed, paper-based system that had not been significantly altered by Thailand's National Malaria Prevention and Control Programme in the past five decades. The paper-based system was time-consuming, slow in sharing and consolidating data across peripheral units, and did not provide consistent information at the central level for timely decision-making and reporting to the highest political level.
Even though the eMIS was specifically developed to address the problem of artemisinin resistance, it was also designed eventually to replace the BVBD's existing paper-based system, to reduce redundant data across routine data-collection forms and different aggregate data reports, whilst enhancing the system with modern technology and communication features. Replacing paper-based, aggregated data flow with integrated web and mobile technology was expected to provide real-time, traceable evidence at all levels for case detection at the point-of-care, case investigation, and follow-up activities for individual cases. The information within the system was expected to be used at both operational and supervision levels for case management, individual case follow-up and intervention-coverage purposes. The primary objective of the study was to demonstrate the design and implementation of the eMIS's functionalities. The secondary objective was to present the preliminary results of the system after its use as an active, evidence-based malaria surveillance system for the Thailand malaria-containment project. The project's main containment outcome regarding parasite clearance is discussed in the second part of this manuscript series.
Implementation locations
For the purpose of incorporating the new system into the mainstream programme, the eMIS was developed by the Center of Excellence for Biomedical and Public Health Informatics (BIOPHICS) in close collaboration with the National Malaria Programme managed by the BVBD. The eMIS was initially piloted in endemic zones of two Thai provinces along the Cambodian border, where documented artemisinin resistance was highest (designated as zone 1 in the project). The BVBD, however, decided to implement the system progressively in all seven Thai provinces bordering Cambodia, where, at that time, there was no evidence of artemisinin resistance (designated as zone 2 in the project). Thus, the eMIS was functioning only on the Thai side of this bi-country containment project. Figure 1 is a map of the project's containment areas on both sides of the Thai-Cambodian border. At full scale, the eMIS covered 61 malaria posts/clinics, 27 Vector-borne Disease Units, 11,615 villages, and 12,508 hamlets.
Work and data flow
The eMIS was primarily designed to digitize malaria data generated from peripheral passive case-detection units and from intensive index-case investigations, to monitor anti-malarial drug compliance, to supervise on/off-site tracking of malaria patients under treatment, to ensure strict follow-up, and to detect and report in real time any inadequate clinical or parasitological response to the drug. In addition, disease mapping and spatial analysis were incorporated into the system to increase the performance of rapid-response teams in implementation areas. Moreover, although the eMIS was specifically developed to contain P. falciparum cases, it also records infection information on all other species, as well as the case-management routines and work activities of malaria-control programmes.
The eMIS combines web-based and mobile technology to achieve its functionalities. The four main modules of the web-based system are: (a) case registration, (b) case investigation, (c) case follow-up, and (d) VIVO laboratory results. Google Earth was built into the eMIS so that diseases can be mapped in near real time. Maps portraying malaria situations can be customized by local operational staff at each level to present information at village and household levels. The mobile-technology function employs smartphones for data capture and alert messages.
In the case-registration module, the data captured during passive and active case detection (including basic demographics) for both infected and non-infected cases were entered into the eMIS. Details of case investigations and treatments provided, as shown in Figure 2, were captured for all documented infected cases. In addition, particularly for P. falciparum-infected cases, attempts were made to capture and assess drug compliance on days 1, 2, and 3. P. falciparum and Plasmodium vivax cases were followed up routinely, recording temperature, symptoms, and blood draws, to monitor treatment outcomes and parasite clearance. Blood smears were collected at registration and during clinic or home follow-up visits for microscopic and PCR analysis; however, not all smears were analysed by PCR or utilized for routine malaria surveillance.
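To make the structure of these modules concrete, the sketch below models the kind of individual-case record described above as Python dataclasses. All field names and types are illustrative assumptions; the actual eMIS schema is not published here.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class FollowUpVisit:
    day: int                        # days since registration (day 0)
    temperature_c: Optional[float]  # recorded temperature, if taken
    smear_positive: Optional[bool]  # microscopy result; None if no blood draw
    drug_taken: Optional[bool]      # observed/reported compliance (days 1-3 for Pf)

@dataclass
class MalariaCase:
    case_id: str
    village_code: str
    species: str                    # e.g., "Pf" or "Pv"
    detection: str                  # "PCD" (passive) or "ACD" (active)
    citizenship: str                # "Thai", "M1" (long-term) or "M2" (short-term migrant)
    registered: date
    act_given: bool                 # whether ACT was prescribed
    follow_ups: list = field(default_factory=list)
```

Keeping the record at this per-case granularity, rather than as monthly aggregates, is what enables the individual tracking and drug-compliance monitoring described above.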
Ethical considerations
eMIS access and operational modules were integrated into the routine work performed by malaria and public-health personnel. All patient-management activities, and the database containing information associated with the activities described above, which formed part of the eMIS, utilized strict security and were used only by authorized personnel in charge of patient case-management. As with data captured within the original paper-based reporting system, the eMIS maintained the same crucial data integrity and confidentiality. No written informed-consent forms were signed by patients who visited the malaria clinics, since the activities were considered routine programmatic malaria procedures and interventions. However, the malaria staff informed all patients verbally and asked them to return to the clinic, or to agree to home visits, as part of their scheduled 28-day follow-up. Data extracted from the eMIS database were secondary data with no identification linked to any individual patient. The authors requested official permission from the Ministry of Public Health (Director of the BVBD) to extract these unlinked digital data for further analysis as part of the joint BIOPHICS-MOPH collaboration. The protocol for this methodology was reviewed and approved by the Ethics Committee of the Faculty of Tropical Medicine, Mahidol University.
Results
Development of the eMIS began in January 2009, and the program was launched in July 2009. By the end of December 2011, over 800,000 records had been entered into the system. As part of the requirements of the artemisinin resistance-containment strategy to ensure an "effective management, surveillance and coordination" system, the following eMIS functionalities and outputs were developed and scaled up for use in the field.
Reporting system
Since the eMIS was meant to be used as part of the routine programmatic work of the national malaria-prevention and -control programme, its core functionalities purposively maintained the original vertical reporting system whilst adding specific data-entry procedures for entering individual case-detection and investigation information directly into the system. The follow-up information recorded during home visits was captured online and offline using smartphones provided to malaria and healthcare staff. The data were then transmitted automatically to the system, or synchronized later if cell-phone use was not possible in a follow-up area. With near-real-time entry and transmission of individual case-detection and investigation data (from day 0) and case follow-up visits into the system, authorized staff at all levels, from local units to the BVBD, had the opportunity to gather, review and critically assess the malaria situation in their own areas of responsibility and elsewhere each day. These features and automatic procedures made it much simpler to aggregate data from different types of reports and from different operational units up to the highest level of decision-making. Routine reports of aggregated data with traceable records could be generated whenever required. For purposes of data-quality control, especially in the early phase of implementation, the malaria staff maintained, cross-checked, and reconciled the eMIS data captured with those collected from the original paper-based system. The data-completion rates and data quality of the paper-based mechanism were compared with the eMIS and presented at meetings of management-level malaria-control personnel; there was evidence of incomplete and missing data, as well as data inconsistencies, but the situation improved as the staff became more experienced. It should be noted that, in the early phase of eMIS implementation, data collection and reporting from both the routine paper-based and the implementation-phase electronic-based systems were done by trained malaria personnel working for the existing national vertical malaria-control programme. The paper-based mechanisms were then progressively phased out in the project areas. Figure 3 shows a screenshot of a public-access page that summarizes data from malaria cases collected to the present day (www.biophics.org/malariar10). Linking to this webpage, the most up-to-date statistics are presented on public pages, but some pages are restricted to authorized BVBD personnel. The summary of cases digitized in the eMIS database between January 2009 and December 2011 is shown in Table 1 and Figure 4. Even though the eMIS was officially launched in mid-2009, original paper-based data collected prior to the eMIS's launch were entered retroactively into the electronic system. Consolidated tables and graphs displayed by day, week, month, or year show a decreasing trend in confirmed malaria cases over the three years in the operational project area. As shown in Table 1, the overall slide-positivity rate (all malaria species) was 1.24% (3,818/307,214), 0.98% (2,695/275,399) and 1.16% (2,566/221,868) in 2009, 2010, and 2011, respectively. As expected, the proportion of P. vivax-infected patients among the confirmed cases increased from 72% to 88%, with more recorded positive cases among migrants in zone 1 than in zone 2.
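The slide-positivity rate reported above is simply the number of parasite-positive slides divided by the number of slides examined. The short Python sketch below reproduces the published yearly figures from the counts in Table 1.

```python
def slide_positivity(positives: int, examined: int) -> float:
    """Percentage of examined blood slides that were parasite-positive."""
    return 100.0 * positives / examined

for year, pos, exam in [(2009, 3818, 307214), (2010, 2695, 275399), (2011, 2566, 221868)]:
    print(year, f"{slide_positivity(pos, exam):.2f}%")
# 2009 1.24%   2010 0.98%   2011 1.16%
```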
It should be noted that not all cases were treated with ACT; as shown in Table 1, the percentages of cases receiving ACT treatment were 68%, 70%, and 64% in 2009, 2010, and 2011, respectively. The malaria statistics for the three years (2009-2011), summarized in Figure 4, represent cumulative annual statistics captured on the eMIS public-access page.
Situation and trend analysis
The eMIS offered customized motion graphs and reports, which could be generated from the near-real-time digitized databases. It allowed authorized malaria staff to access, and assess, the malaria situation and trends anywhere in the project area in a timely manner. Graphs and reports can be customized according to the skills and needs of field personnel, location and time. Peripheral operational units and upper management levels can access data and generate reports within their BVBD-assigned roles and responsibilities. This feature assisted supervising malaria staff in acting upon occurring cases in their localities of responsibility, strengthening the performance of individual case management (under Directly Observed Treatment, DOT) and individual follow-up by allowing staff to take remedial measures and appropriate action in a timely manner. Figure 5 shows some examples of system-generated graphs, which help staff at operational units and/or upper management at ministerial levels to assess the current malaria situation and trends. As shown in Figure 5 (a), trends in P. falciparum cases over the three years (2009-2011) reveal that the overall number of malaria cases appears to decrease over the weeks, but fluctuations are noted in certain periods. This prompted the management level to alert the operational units to monitor the situation carefully and act accordingly. Figure 5 (b) shows bar graphs generated for different purposes, e.g., cases in different operational units classified by nationality, parasite infection and method of detection. Caseloads differed among operational units according to the citizenship of patients: Thais, long-term migrants (M1 = staying in Thailand ≥ 6 months), and short-term/seasonal migrants (M2 = staying in Thailand < 6 months and/or mobile cross-border population). The number of infected patients detected by active case detection (ACD) was much lower than the number detected by passive case detection (PCD), reflecting some difficulty in performing ACD in certain remote localities. Over the years, the number of P. vivax-infected cases has been increasing across all operational units. Figure 5 (c) reveals different occupational risks in different border provinces. As seen in the example of two provinces, there was a high percentage of infection among soldiers in one province directly affected by border conflict, while a similarly high percentage was noticed among cross-border farm/orchard workers in another province. Figure 5 (d) indicates that most Thai patients were infected within their own district, while M1 migrants were infected in their working village, and M2 migrants were likely infected from abroad.
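Cross-tabulations like those behind Figure 5 can be derived directly from the individual-case records. The following pandas sketch illustrates the idea with a toy dataset; the column names are assumptions rather than the real eMIS export format.

```python
import pandas as pd

# Toy line-list; column names are illustrative, not the real eMIS export.
cases = pd.DataFrame({
    "unit":        ["VBDU-1", "VBDU-1", "VBDU-2", "VBDU-2", "VBDU-2"],
    "citizenship": ["Thai", "M2", "Thai", "M1", "M2"],
    "detection":   ["PCD", "PCD", "ACD", "PCD", "PCD"],
})

# Case counts per operational unit, split by citizenship (cf. Figure 5b).
print(cases.groupby(["unit", "citizenship"]).size().unstack(fill_value=0))

# ACD versus PCD totals; PCD typically dominates where ACD is hard to perform.
print(cases["detection"].value_counts())
```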
Geographical Information System (GIS) applications
Employing GIS technology, the motion spatial and temporal presentations within the eMIS were utilized by staff to identify and locate follow-up cases, and to assess temporal and spatial situations within their areas of responsibility. Figure 6 shows some examples of GIS presentations in the system. As shown in Figure 6 (a), P. falciparum-infected cases (Thais and non-Thais) in the seven provinces were scattered mostly along the border villages. There were more absolute cases in the northeastern provinces, probably due to the larger population size. Figure 6 (b) shows cases mapped at the household level, where local malaria staff performed home visits. Information acquired from individual follow-ups can be displayed for each case investigated. The GIS could also be used by malaria health staff to identify areas targeted for home visits. Appointment dates were automatically generated to process individual case investigations and to plan subsequent control measures if needed. From self-reported information gathered during case investigations, the locations where patients were most likely infected were mapped, as shown in Figure 6 (c). This feature had the potential to inform updates to prevention and control measures in problematic locations. It should be noted, however, that the potential location of infection was based on self-reporting during a typical case-investigation process.
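Because Google Earth was built into the eMIS, one plausible way to expose case locations to the mapping layer is as KML placemarks. The sketch below, with fabricated coordinates, writes a minimal KML file that Google Earth can open; it illustrates the data flow only and is not the eMIS's actual export routine.

```python
def cases_to_kml(cases: list) -> str:
    """Serialize case locations as a minimal KML document for Google Earth."""
    placemarks = "".join(
        f"<Placemark><name>{c['case_id']}</name>"
        f"<Point><coordinates>{c['lon']},{c['lat']},0</coordinates></Point></Placemark>"
        for c in cases
    )
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
            f"{placemarks}</Document></kml>")

# Fabricated example coordinates; one placemark per case household.
kml = cases_to_kml([{"case_id": "PF-0001", "lat": 13.736, "lon": 102.500}])
with open("cases.kml", "w") as f:
    f.write(kml)
```

Note that KML lists coordinates in longitude, latitude, altitude order, which is a frequent source of mapping errors.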
One of the main objectives of using the eMIS in the containment project was to map patients still positive after three days of ACT treatment. Mapping locations with suspected P. falciparum resistance to artemisinins in near real time allowed the containment project to perform case investigation instantly, and to detect and treat secondary cases radically. Day-3 positivity was recorded through routine surveillance data collection (simply counting days post-treatment, without details such as the hour of treatment) during case follow-up, either at a malaria clinic or through home visits by malaria personnel. Even though day-3 parasitaemia is an insensitive marker of artemisinin resistance, it was the best available indicator for the project to measure P. falciparum sensitivity to artemisinins routinely [7]. Therefore, such an indicator on the map could help screen for potential artemisinin resistance. Figure 6 (d) depicts GIS mapping of such cases at the village level for artemisinin-resistance containment.
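The screening rule itself is straightforward to express in code. Reusing the illustrative record classes sketched earlier, the function below flags a P. falciparum case as a resistance-mapping candidate if any follow-up smear on or after day 3 is still positive.

```python
from datetime import date

def day3_positive(case) -> bool:
    """Flag P. falciparum cases still smear-positive on day 3 or later after ACT."""
    return (case.species == "Pf" and case.act_given and
            any(v.day >= 3 and bool(v.smear_positive) for v in case.follow_ups))

case = MalariaCase(case_id="PF-0001", village_code="V123", species="Pf",
                   detection="PCD", citizenship="Thai", registered=date(2011, 6, 1),
                   act_given=True,
                   follow_ups=[FollowUpVisit(day=3, temperature_c=37.8,
                                             smear_positive=True, drug_taken=True)])
print(day3_positive(case))  # True -> candidate village for the resistance map
```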
Mobile technology applications and alert system
As part of the eMIS, an application running on a Windows-based mobile phone was developed to record patient information. The mobile device was selected for easy portability to the patient's location during home visits, to store collected data, and to send the data either immediately online or once signal coverage became available. The mobile-phone applications developed for case follow-up at offsite locations in remote areas captured text, images, and locations. Figure 7 shows that, during case follow-up activities outside a malaria clinic, the staff collected a blood specimen to monitor treatment outcomes while capturing information on the mobile phone to assess drug compliance, clinical signs (if any), and patient locations. Mobile-technology applications were also used to track patients and remind staff about specific follow-up activities. As soon as an infected patient was registered in the eMIS database, a follow-up schedule was automatically generated by malaria species. On the scheduled date, the system reminded the responsible staff, via the mobile phone provided, to perform an individual case follow-up; this functionality was particularly useful for ensuring home visits in remote areas, as sketched after this paragraph. Efforts were made to perform follow-up visits on all scheduled dates; however, due to difficulties in coordinating case follow-up, particularly for cross-border and/or unregistered migrant workers, many scheduled follow-ups had to be skipped. Moreover, it was not standard practice to conduct frequent follow-up visits across malaria-endemic zones in all seven provinces, as evidenced by the fact that few or no visits were carried out in some areas. As shown in Table 2, the overall follow-up rate (compared with scheduled visits) varied by location, between 50% and 90%, for both P. falciparum and P. vivax cases. The follow-up rate was slightly higher among Thai citizens than among migrants, with the exception of 2011, when the follow-up rate was higher among migrants than Thais. Even though follow-up was scheduled until day 42 for P. falciparum cases and day 90 for P. vivax cases, the follow-up rate remained high at the final visits (about 80% at day 42 and 70% at day 90). However, during several off-site follow-up visits, most notably in remote areas, some specimen collections for laboratory testing were not performed.
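A minimal sketch of this automatic scheduling follows. The paper only states that follow-up ran to day 42 for P. falciparum and day 90 for P. vivax; the intermediate visit days in the sketch are assumptions for illustration.

```python
from datetime import date, timedelta

# Assumed visit patterns; the paper states only the end points (day 42 / day 90).
VISIT_DAYS = {
    "Pf": [1, 2, 3, 7, 14, 21, 28, 42],
    "Pv": [1, 2, 3, 14, 28, 60, 90],
}

def follow_up_schedule(species: str, registered: date) -> list:
    """Generate reminder dates for a newly registered case, by species."""
    return [registered + timedelta(days=d) for d in VISIT_DAYS[species]]

for visit in follow_up_schedule("Pf", date(2011, 6, 1)):
    print(visit)  # dates on which the responsible staff member is reminded
```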
The Short Messaging Service (SMS) was incorporated into the eMIS as an alert system. As shown in Figure 7 (c), alerts for daily new infected case(s), as well as cumulative positive-case alerts, were set up to inform malaria staff at local and upper levels about the current malaria situation in their areas of responsibility. Daily infection counts and a weekly summary of all malaria cases were automatically sent to the head of each Vector-borne Disease Unit for the purpose of broadcasting the malaria situation in a given period of time. The daily and weekly messaging feature helped alert staff at the local level to manage each single case, and to assess the local situation on a quasi-real-time basis.
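Composing such alerts is a simple templating step on top of the daily case counts. The sketch below is illustrative only: the message wording, unit name and the send function are stand-ins, not the actual eMIS SMS gateway.

```python
def compose_daily_alert(unit_name: str, new_cases: int, due_follow_ups: int) -> str:
    """Build the day-0 alert text for one operational unit."""
    return (f"[eMIS] {unit_name}: {new_cases} new malaria case(s) today; "
            f"{due_follow_ups} case(s) due for follow-up.")

def send_sms(phone: str, message: str) -> None:
    # Stand-in for the real SMS gateway used by the programme.
    print(f"SMS to {phone}: {message}")

send_sms("+66-XX-XXX-XXXX", compose_daily_alert("VBDU-1", 2, 5))
```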
Discussion
The eMIS has evolved over several years with continuous input from end-users, who have provided useful feedback to BIOPHICS, helping to match the technical requirements and constraints of peripheral local units with the programmatic needs of regional and central management offices. Despite differing needs and requests across working units in the malaria-endemic areas of the seven provinces, the system eventually accommodated all concerns with standardized operational practices to generate data across units, thereby reducing double entries, repetitiveness and time-consuming effort, whilst generating timely, quality-controlled, cross-checked reports.
Figure 7. Screenshots of mobile-phone applications and SMS alert system. (a-1) Login screen of the password-secured mobile-phone system; (a-2) infected-case details with patient name and picture (as per permission), appointment date, and address, to be submitted online or offline during case investigation or home-visit follow-up; (b-1) case follow-up details on temperature, side effects, and other remarks; (b-2) check boxes for atypical malaria signs and symptoms captured during home-visit follow-ups; (c-1) daily SMS alert messages: a day-0 alert giving the number of cases within the specific malaria clinic's area of responsibility, and a case follow-up alert giving the number of cases to be followed up on a particular day; (c-2) weekly SMS alert messages giving the cumulative number of cases for each level of the vertical malaria-control programme, from local operational units to the ministerial management level.

During the early phase of system implementation, it took some time for staff to learn to use the system effectively. Several issues arose regarding the management of the hardware and the undeveloped skills of existing staff. While the electronic system evolved, there were physical and psychological effects on staff due to the additional workload resulting from two reporting systems running in parallel (paper- and electronic-based). Several training sessions, coupled with progressive improvements to the eMIS and additional features,
increased end-user interest in the new system. This, in turn, allowed BIOPHICS to receive better feedback from, and collaborate more efficiently with, peripheral health staff. Over time, upper management at the BVBD was also convinced of the added value of a system created to monitor and assess the progress of critical containment interventions and to support quick remedial actions in their areas of responsibility.
The eMIS improved the ability of the malaria-surveillance system to capture data daily from individual malaria patients, almost eliminating the need to collect aggregated monthly data from local operational units. It should be noted, however, that some discrepancies were identified between the original paper-based system and the new electronic-based system in the data reported to upper levels, especially at the beginning of the project, when the two systems were still running in parallel. Such findings were explained by double counting of certain paper records, data not being digitized at all, or data wrongly introduced into the database. These issues were progressively addressed by all parties. When cross-checked, consolidated numbers from the eMIS eventually matched the figures reported by the Bureau of Vector-borne Diseases (BVBD), which may differ slightly from the statistics provided by the Bureau of Epidemiology (BOE), which gathers data from additional health sources, such as hospitals. Even though sharing information is a routine practice between the two authorities, and almost all malaria cases are managed free of charge by official health facilities (off-the-shelf treatments for malaria are not permitted in Thailand), some patients go directly to general health facilities, clinics, and even hospitals, so they are not counted on paper or electronically. This has been an issue with malaria reporting in Thailand over the decades; however, in recent years, the numbers from the two reporting mechanisms have been quite close. It is predicted that a functioning eMIS encompassing all healthcare facilities, not just malaria services, will further address that issue in the next version of the eMIS. Individual data recorded in the eMIS can be exported in different formats for further epidemiological analysis. Raw data extracted from the eMIS can then be used to generate reports for authorized staff. Information gathered by the eMIS in 2009-2011 indicated a decline in the slide-positivity rate in the seven provinces from 1.24% to 1.16%. This may be due in part to the intensive early-case-detection and case-management efforts in the containment project. The statistical data collected electronically from each village in each specific jurisdiction, coupled with reclassification for different types of malaria endemicity, make calculating risk by village easier than under the paper-based system.
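One standard village-level risk measure that per-case digital records make easy to compute is the annual parasite incidence (API), i.e., confirmed cases per 1,000 population per year. The sketch below computes API and applies illustrative stratification thresholds; the thresholds are assumptions, not the official Thai endemicity classification.

```python
def annual_parasite_incidence(cases: int, population: int) -> float:
    """Confirmed malaria cases per 1,000 population per year (API)."""
    return 1000.0 * cases / population

def risk_stratum(api: float) -> str:
    # Illustrative cut-offs only; not the official Thai village classification.
    if api == 0:
        return "no reported local transmission"
    return "high" if api >= 10 else ("moderate" if api >= 1 else "low")

api = annual_parasite_incidence(cases=12, population=850)
print(f"API = {api:.1f} per 1,000 -> {risk_stratum(api)}")  # API = 14.1 -> high
```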
Several other issues regarding system implementation require attention. Operational practices were not consistent between different operational units in terms of detection methods, individual follow-up, recording of citizenship, occupation, etc. The issues of data integrity and standardization have been discussed elsewhere [8][9][10][11][12]. The collection of more detailed evidence from each operational unit will inform the redesign or fine-tuning of locally driven prevention and control measures. More comprehensive information will also assist in reallocating resources and efforts to address rapidly evolving situations for each operational unit, in contrast to generic, more static measures that are unlikely to achieve disease elimination. The mobile migrant population remained the major concern for the malaria-prevention and -control programme. The system highlighted the high percentage of mobile workers who could not be followed up by malaria staff in the Cambodian-border containment area. The case follow-up rate for migrants in the Thai-Cambodian containment area was rather low (as low as 20%) compared with the findings of the Microsoft Research-funded study [13] piloted in one district on the Thai-Myanmar border, where the malaria follow-up rate was > 80%. Even though the eMIS had mobile technology and a follow-up module for patient tracking similar to those of the Microsoft study, the results were different, because most migrants in the latter study were long-term residents of Thailand, and not highly mobile. The containment project, by contrast, was implemented in all border districts on the Thai-Cambodian border, where short-stay seasonal migrant workers are numerous; in addition, these areas included migrants from Cambodia seeking healthcare in Thailand. Ensuring that all mobile people, whatever their citizenship, have access to treatment and preventive measures is one of the main containment strategies [14][15][16][17]. Therefore, to eliminate artemisinin-resistant strains, more innovative strategies, and the involvement of more key stakeholders (e.g., national and international NGOs working on migrant issues), should be considered. The low follow-up rate among cross-border migrant workers could be improved by malaria staff and/or NGOs collaborating with the migrants' managers or orchard owners. In highly endemic areas, out-of-hours service at malaria clinics may improve achievement of the minimum required follow-up visits.
Despite these issues, the eMIS's features, combined with coordinated supervision, have demonstrated the capacity to provide malaria staff and the MOPH/BVBD with quality, quasi-real-time information, allowing them to make accurate decisions on disease control and planning in target areas. The BVBD, backed by BIOPHICS, took advantage of the eMIS to monitor the malaria situation in peripheral areas more closely, and could alert local staff to act upon events in a timely manner. The interconnected modules contributed to an improvement in case management and individual follow-up, which in turn improved real-time situation assessment and epidemiological knowledge, helping to identify the determinants of malaria spread. They also helped malaria workers to identify and track potential resistant parasites more effectively. The outcomes from the eMIS were quite positive, and the system could potentially be adopted and supported in the national policy to eliminate malaria in Thailand.
The outputs of the eMIS were similar to those of other information systems developed for malaria prevention and control. The quality of the system's data has enabled malaria personnel to perform their duties better. The systems developed elsewhere [18][19][20] were implemented in specific settings, and it was suggested that these systems could be integrated into national malaria-control programmes at ministry level. The eMIS, however, was implemented as part of the routine regional malaria-control programme. This approach was planned from the design phase of the eMIS, to assure the robustness and sustainability of the system should it prove effective in malaria-case management and become well-accepted for use as the official malaria information system. Several GIS applications have been developed elsewhere for case surveillance, vector control (coverage by community-based spraying, insecticide consumption and application rates) and preventive measures (coverage of insecticide-treated nets) [21][22][23][24]. The information and communication technology (ICT) used in malaria control in Tanzania [20] suggested that ICT could result in easier communication, improved training for doctors, and increased access to information by individuals and groups who were historically unaware of malaria. These information-system functionalities are planned for the next version of the eMIS.
A recent review of the research agenda for the monitoring, evaluation and surveillance of malaria, with the end-goal of eradication [3], suggested that surveillance technologies based on cell-phone or real-time internet web-based reporting be evaluated, since these new strategies could have major implications for programme implementation. An assessment of any delivery system should cover acceptability, feasibility, efficiency, cost-effectiveness, and community engagement. There has been no formal evaluation of the eMIS; however, the system has been embedded into routine work and appears to have been accepted by system users (based on informal interviews). The main system costs comprised the acquisition and maintenance of system hardware (computer systems and mobile phones), and the employment of new ICT personnel at some operational units. These costs were supported by the containment project initiative under the management of the BVBD. However, the project has not yet been evaluated for cost-effectiveness.
Conclusion
The eMIS was developed and employed in provinces where intensive containment operations took place, to increase the overall performance of the malaria surveillance system, to strengthen case management and strict follow-up of all malaria patients, and to improve the coordination of data management between local malaria operational units and upper programmatic management levels. The outputs from the eMIS have demonstrated that the system provides near-real-time, evidence-based information on individual malaria cases managed in local operational units, and can be used effectively for situation and trend analysis by senior management. The eMIS is an improvement on the paper-based system, in which information was aggregated vertically and submitted routinely. This timely information can contribute to more effective case management and thus to the coordinated control and containment of resistant parasites.
Uptake for routine malaria case management in all seven provinces along the Thai-Cambodia border took some time, and the development of a public-health informatics system is ongoing. Many additional features were developed during the project. Lessons learnt from the project were: (1) cooperation and support by central management is key to eliminating resistant parasites; (2) programme sustainability is required for continued system enhancement; and (3) participation and feedback from all operational levels can help generate viable results and improve performance.
Recently, the Global Fund Round 10 [25] has provided support for Thailand's expansion of containment efforts across its regional borders; the eMIS is now expanding into provinces along the Thai-Myanmar border. The limitations and drawbacks of the system revealed during the containment project along the Thai-Cambodian border are being resolved. Discrepancies between paper-based and electronic reports, and the different channels of reporting between the two disease control authorities, are being evaluated. It is anticipated that the paper-based system of malaria surveillance will be phased out as the project progresses. As part of the Global Fund Round 10 proposal, BIOPHICS' developers are designing additional prevention and control modules, including entomology data, quality control of microscopic-test data, behavioural-communication-change modules, and bed-net/indoor residual spraying (IRS) distribution. When all prevention and control modules are combined, it is expected that the new informatics tool will fully support vector-borne disease-control operations and provide an intelligent choice for assisting containment activities. This is an opportunity to prove that new information technology can be used as a tool for disease prevention and control, as well as an effective communication channel to the population at risk. It is anticipated that the enhanced eMIS could be applicable for Thailand at large, other countries in the region, and beyond.
"year": 2012,
"sha1": "1b25da456fb15a6a4968b609911caf1e1402217d",
"oa_license": "CCBY",
"oa_url": "https://malariajournal.biomedcentral.com/track/pdf/10.1186/1475-2875-11-247",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1dc8e1858a9c96b3885a72dfc123697ade94a709",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Implantation of Autologous Expanded Mesenchymal Stromal Cells in Hip Osteonecrosis through Percutaneous Forage: Evaluation of the Operative Technique
Bone forage to treat early osteonecrosis of the femoral head (ONFH) has evolved into the channel to percutaneously deliver cell therapy into the femoral head. However, its efficacy is variable, and the drivers towards higher efficacy are currently unknown. The aim of this study was to evaluate the forage technique and correlate it with the efficacy to heal ONFH in a multicentric, multinational clinical trial implanting autologous mesenchymal stromal cells expanded from bone marrow (BM-hMSCs). Methods: In the context of EudraCT 2012-002010-39, patients with small and medium-sized (mean volume = 13.3%, range: 5.4 to 32.2%) ONFH stage II (Ficat, ARCO, Steinberg) C1 and C2 (Japanese Investigation Committee (JIC)) lesions were treated with percutaneous forage and implantation of 140 million BM-hMSCs in a standardized manner. Postoperative hip radiographs (anteroposterior (AP) and lateral) and MRI sections (coronal and transverse) were retrospectively evaluated in 22 patients to assess the femoral head drilling orientation in both planes and its relation to the necrotic area. Results: Treatment efficacy was similar in C1 and C2 (coronal plane) and in anterior to posterior (transverse plane) osteonecrotic lesions. The drill crossed the sclerotic rim in all cases. The forage was placed slightly valgus, at 139.3 ± 8.4 degrees (range, 125.5–159.3), with higher dispersion (f = 2.6; p = 0.034) than the anatomical cervicodiaphyseal angle. The correlation between both angles was 0.50 (Bonferroni-corrected p = 0.028). More failures were seen with a varus drill positioning, aiming at the central area of the femoral head, outside the weight-bearing area (WBA) (p = 0.049). In the transverse plane, the anterior positioning of the drill did not result in better outcomes (p = 0.477). Conclusion: The forage drilling to deliver cells should be positioned within the WBA in the coronal plane, avoiding varus positioning, and central to anterior in the transverse plane. The efficacy of delivered MSCs to regenerate bone in ONFH could be influenced by the drilling direction. Standardization of this surgical technique is desirable.
Introduction
Treatments for osteonecrosis of the femoral head (ONFH) are still a matter of considerable debate. Non-operative treatments have been associated with high radiographic failure rates (at a mean of 72%) consistently throughout the years [1]. Due to a high degree of heterogeneity across various studies, best individual stage-dependent treatment options and especially the correct indications for surgical treatment are largely unknown. Treatment with forage drilling, the so-called core decompression (CD), is the classic joint-preserving alternative to treat early cases of ONFH, initially proposed to decrease the intraosseous pressure in avascular osteonecrosis of the femoral head [2]. However, the results of CD to avoid femoral head collapse and eventual total hip replacement (THR) are highly variable. In a systematic review, radiological progression after forage averaged 44% of treated cases in studies done before 1992, with an improvement to 37% failure in more recent studies [1]. The preoperative radiographic stage [3] or the extent and location of the osteonecrotic lesion have been related to the failure of forage treatment [4]. Particularly, CD treatment of stage III and beyond is associated with up to 100% failure (radiographic progression or THR conversion) [3][4][5]. Large necrotic lesions and osteonecrosis extension laterally to the acetabulum edge (the so-called C2 lesions according to Sugano et al. [6]) caused femoral head collapse even in asymptomatic hips without treatment [7]. The CD failure rate was found to be higher in hips with medium and large lesions (more than 15% estimated volume), and in hips with more lateral lesions, rather than medial or central [4].
The forage or CD technique has evolved substantially given its limited, variable efficacy. Different technical proposals include multiple small drillings [8][9][10][11][12] versus a single larger-diameter drilling [8,13], which carries the risk of occasional fractures [11]. Regarding the positioning of the drill, not only fluoroscopy but also computer-guided [14] and even magnetic resonance (MR)-guided techniques [15] have been proposed, with claims that up to 100% reach the target (the osteonecrotic lesion). Direct vision of the drilled tunnel through endoscopy [16] and even hip arthroscopy [17] have been used to assist CD with tunnel or intra-articular visualization. Other modifications include the incorporation of different augmentation grafts or substitutes, such as calcium phosphate and sulfate [18], demineralized bone matrix [19], grafts with bone morphogenetic protein (BMP) [20] and various combinations, as recently reviewed [21].
Considerable interest has been generated by the advances in cell therapy to regenerate bone, and Hernigou confirmed early on the benefits of bone marrow (BM) grafting injected through the forage [22]. The use of cell therapy approaches has increased since then, whether as bone marrow concentrate (BMC) or after cell expansion, aiming to deliver sufficient numbers of mesenchymal stromal cells (MSCs) [23] and offering a significant improvement over CD alone [24]. Yet, some patient-related aspects may affect the outcome, particularly considering the different etiological and possibly pathophysiological aspects within ONFH. Of note, autologous treatments, such as many cell therapy strategies, may also impact the therapy results, and therefore treatment standardization is paramount.
Efficacy may depend not only on patient- and disease-specific aspects (stage, volume, location of the necrosis, acute or chronic phase), but also on technical specificities that are poorly understood. While surgeons usually aim to perform the forage drilling towards the lesion, the cell distribution and subsequent efficacy may also be modulated by this drilling. Among the uncertain issues, some are technical, such as the drill diameter, single versus multiple drilling, or the location and direction of the drilling relative to the lesion, towards the central area of the femoral head or the weight-bearing area (WBA). In preclinical models, the biodistribution of MSCs was shown to remain within the injected femoral head [25], confirming the tropism of injected BM-derived MSCs for the bone surface. Despite this finding, the reach of the cells may vary depending on the drill location, and thus affect the efficacy. We hypothesize that the surgical technique, and particularly the drill location, may affect the treatment outcome.
The aim of this study was to evaluate the variability of the forage positioning and to identify how this variability may affect the efficacy of cell therapy, in the framework of a clinical trial injecting expanded autologous BM-derived MSCs into femoral heads with Ficat-Arlet or ARCO stage II osteonecrosis. To meet this aim, we evaluated the efficacy of the technique in relation to the location of the osteonecrosis and the location of the forage tunnel.
Materials and Methods
The material under study is anonymized imaging from 22 patients treated for osteonecrosis of the femoral head under the Ortho 2 clinical trial (EudraCT 2012-002010-39) in the REBORNE EU-funded project (Regenerating bone defects using new biomedical engineering approaches, FP7 HEALTH-2009-1.4-2, Grant Agreement 241879) [26]. Briefly, the trial was conducted in five clinical centers from four European countries (France, Germany, Italy and Spain) from March 2014 to June 2015. Patients were treated with percutaneous forage plus implantation of 140 million expanded, clinical-grade mesenchymal stromal cells from bone marrow in a single injection of up to 7 mL of albumin (dose of 20 million cells/mL). All included patients agreed to their participation and signed an appropriate informed consent form (Ethics Committee authorization code, coordinating center: HULP 3875). The patients were 19 males and 3 females with a mean ± sd (range) age of 43 ± 10 (21-62) years and a mean ± sd (range) time since initial diagnosis of 2.3 ± 2.2 (0.1-7.6) months. The ONFH was idiopathic (40%) or due to corticosteroid treatment (25%) or other non-traumatic causes (35%). All cases were classified as stage II by Arlet and Ficat [2,27], Steinberg [28] or ARCO [29], as all these classifications converge on this stage, even after very recent modifications [30]. The volume of the necrosis [31], as a percentage of the sphere circumscribing the femoral head, was estimated (mean = 13.3%, range: 5.4% to 32.2%). The Japanese Investigation Committee (JIC) osteonecrosis staging [6] was used to evaluate the location of the lesion.
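As a back-of-the-envelope illustration of the volume estimate mentioned above (necrotic volume expressed as a percentage of the sphere circumscribing the femoral head), the short sketch below shows the underlying arithmetic. The function names and example numbers are hypothetical; the actual estimation in reference [31] is performed on MR measurements.

```python
from math import pi

def sphere_volume(head_diameter_mm: float) -> float:
    """Volume (mm^3) of the sphere circumscribing the femoral head."""
    r = head_diameter_mm / 2.0
    return 4.0 / 3.0 * pi * r ** 3

def necrosis_percent(lesion_volume_mm3: float, head_diameter_mm: float) -> float:
    """Necrotic volume as a percentage of the circumscribing sphere."""
    return 100.0 * lesion_volume_mm3 / sphere_volume(head_diameter_mm)

# Illustrative values only: a 48 mm femoral head with a ~7,700 mm^3 lesion
# yields roughly 13%, close to the series mean of 13.3%.
print(f"{necrosis_percent(7_700, 48):.1f}%")
```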
During surgery, antimicrobial prophylaxis and analgesia were administered as per local protocol. After anesthesia (general 68%, spinal 32%), patients were positioned supine on a fracture table. A radiological C-arm was placed, and both anteroposterior (AP) and axial views of the femoral neck and head were checked under fluoroscopy. Following a minimally invasive approach, with a 1 cm incision lateral to the proximal femur, a guide wire was drilled from the lateral cortex of the subtrochanteric femur into the femoral head lesion, under fluoroscopic AP and axial control. Then, a 4 mm cannulated drill was introduced along the guide wire into the femoral head (Figure 1), guided by intraoperative fluoroscopy. Per protocol, one syringe was used to inject 7 mL within the forage tunnel in one single administration, progressing slowly to avoid overpressure (about 2 min were required to complete the injection). The guide wire was reintroduced to facilitate the clearing of the cannulated drill, and after 2-3 more minutes, the drill was removed without leakage. No sealing was required. The injected cell product consisted of a dose of 140 million MSCs suspended in 5% human albumin (20 million MSCs/mL). Cell expansion details have been published elsewhere [26,32]. Each patient underwent repeated radiographs (at 6 weeks, and 3, 6, and 12 months) and magnetic resonance imaging (MR at 3 and 6 months) during the clinical trial. The final evaluation of efficacy was completed after a minimum of 5 years' follow-up.

To determine the coronal location of the necrotic lesion, we used the 2001 classification system proposed by the Japanese Investigation Committee [6] on preoperative T1-MR and AP X-rays. The scheme consists of classifying the lesion in the weight-bearing area (WBA) as one of four types, A, B, C1 and C2, based on the central section of the femoral head on T1-weighted coronal MR or the AP radiograph (the image evaluated was the coronal section where the anterior trochanter appears). Type A lesions occupied up to the medial third of the WBA. Type B lesions occupied up to the medial two-thirds of the WBA. Type C was divided into C1, occupying more than the medial two-thirds of the WBA and not extending laterally to the acetabular rim, and C2, occupying more than the medial two-thirds of the WBA and extending laterally to the acetabular rim. Figure 2 shows the system used in two different cases.
Figure 2. Coronal location of the lesion in the weight-bearing area through the 2001 system, comparing T1-coronal MR with the AP radiograph of two cases. In case 105, lesion (a) was classified as C2, as it occupied three thirds of the weight-bearing area and extended laterally to the acetabular edge, while lesion (b) was classified as C1, not extending beyond the acetabular edge. In case 202, the lesions in both imaging techniques, (c) and (d), were classified as C1. I, II, III: weight-bearing area thirds.
To estimate the transverse lesion location from anterior to posterior, we defined a so-called anterior/central/posterior (ACP) method on preoperative T1-transverse MR sections and lateral or axial radiographs of the hip. The method consists of identifying the extension of the lesion across three regions of the femoral head (from/to: anterior/central/posterior), taking as a reference the osseous acetabular rim (from the anterior to the posterior edge). A value of anterior 2 (A2) or posterior 2 (P2) means that the lesion surpasses the anterior acetabular edge (A2) or the posterior acetabular edge (P2). To evaluate the transverse plane in MR sections, we set the height at the section where the anterior trochanter appears, using the comparison function of OsirixMD software v9.1 (Pixmeo SARL; Geneva, Switzerland) to adjust the coronal view, and used a radial angle circle tool to divide the region into three zones. Figure 3 shows an example. Finally, postoperative (1.5 or 3 months since surgery) anteroposterior radiographs were examined to measure the anatomical angle and the forage angle, taking as a reference the intersection point of the cervicodiaphyseal angle (caput-collum-diaphyseal angle, CCD), and to locate the forage tunnel in the WBA thirds (I, II, III), as seen in Figure 4a. All images were processed, measured, and classified with OsirixMD software [33].

Means and standard deviations were reported for continuous variables and percentages for categorical variables. For analytical analysis, we used Stata statistical software, release 12 (StataCorp LP; College Station, TX, USA). A percent-of-agreement test (kappa test) was conducted to compare the osteonecrosis classification (by the 2001 system and the ACP method) between MR and radiographic images. The degree to which both measurements were equivalent (agreement) was considered slight (if kappa was 0-0.20), fair (0.21-0.40), moderate (0.41-0.60), substantial (0.61-0.80) or almost perfect (if >0.80) [34]. The dependent variable for the analysis was the proportion of healed/non-healed cases. Comparisons of the lesion classification, location and forage were conducted using MRI. Mean differences and variance were reported using adequate parametric or non-parametric tests. Fisher's exact test was used for proportion comparisons. The Kaplan-Meier survival function and log-rank tests were used to evaluate the equality of failure rates across groups.
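For readers who wish to reproduce the agreement analysis, a minimal sketch is given below, assuming per-case JIC labels read on MRI and on AP radiographs (the labels shown are hypothetical, not the trial data). Stata was used in the study; this sketch uses Python's scikit-learn merely to illustrate the kappa computation and the interpretation bands cited above [34].

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-case JIC classifications (C1/C2) read on MRI vs. AP X-ray.
mri  = ["C2", "C1", "C2", "C2", "C1", "C2", "C1", "C2"]
xray = ["C2", "C1", "C1", "C2", "C2", "C2", "C1", "C2"]

kappa = cohen_kappa_score(mri, xray)

# Interpretation bands used in the text: slight/fair/moderate/substantial/almost perfect.
bands = [(0.20, "slight"), (0.40, "fair"), (0.60, "moderate"),
         (0.80, "substantial"), (1.00, "almost perfect")]
label = next(name for cutoff, name in bands if kappa <= cutoff)
print(f"kappa = {kappa:.2f} ({label} agreement)")
```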
Treatment Efficacy and Extension of the Lesion
Characteristics of the osteonecrosis lesion and the forage technique are listed in Table 1, by study case. ONFH lesions were classified following the abovementioned JIC 2001 staging system [6] in relation to the WBA, with C1 in 8/22 cases (36%) and C2 in 14/22 cases (63%) using the T1-coronal MR sections, and C1 in 11/22 cases (50%) and C2 in the remaining 11 cases using AP X-rays. Inconsistencies between radiographs and MR were found in 9/22 cases (41%). The percentage of agreement between MRI and X-rays in the classification was considered moderate at 60% (95% CI: 0.37-0.81; p = 0.001). There was no difference in efficacy related to the JIC extension of the lesion in the coronal plane, whether the extension was evaluated on radiographs (Fisher's exact test χ2 = 2.3; p = 0.311) or on MRI (Fisher's exact test χ2 = 0.8; p = 0.613). Therefore, the coronal extension of the lesion was not associated with an increase in treatment failure after the delivery of autologous, expanded BM-MSCs (log-rank test χ2 = 0.7; p = 0.397).
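The proportion comparisons above rely on Fisher's exact test applied to 2 × 2 contingency tables (healed vs. failed by lesion type). A minimal sketch of this computation follows; the counts are illustrative placeholders, not the exact trial contingency table.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: rows = JIC type (C1, C2), columns = (healed, failed).
table = [[7, 1],    # C1: healed, failed (illustrative counts)
         [10, 4]]   # C2: healed, failed (illustrative counts)

odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.3f}")
```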
The location of the osteonecrosis by the ACP method in MR studies was defined as A2CP in 11/22 cases (50%), ACP in 6/22 cases (27%) and A2C in 5/22 cases (23%). Using lateral or axial radiographs, the location was defined as ACP in 13/22 cases (59%) and A2CP in 9/22 cases (41%). The inconsistency between both types of images was 50% (11/22 cases). The percentage of classification agreement between MRI and radiographs in our cases was considered slight, at 18% (95% CI: 0.01-0.35; p = 0.042). The transverse evaluation was therefore performed on MRI for the final analysis. There was no difference in efficacy (bone healing) by the type of ACP lesion on MRI (Fisher's exact test χ2 = 2.5; p = 0.314). Therefore, the transverse extension of the lesion was not associated in this series with an increase in treatment failure after the delivery of autologous, expanded BM-MSCs (log-rank test χ2 = 2.2; p = 0.327).
Treatment Efficacy and Forage Location
The mean anatomical angle of the proximal femur was 133.7 ± 5.2 degrees (range, 125.8-142.4). The drilling crossed the sclerotic rim in all cases. Seven forages were placed inside WBA-I (32%), eight inside WBA-II (36%) and seven (32%) outside the WBA. The failure rate for cases with the forage outside the WBA (all in a varus position compared with the anatomical cervicodiaphyseal angle) was 42.8% (3/7), versus 13.3% (2/15) for forages placed within the WBA. The efficacy of the injected cells to heal the lesion (in terms of avoiding osteonecrosis progression and/or THR conversion) was thus significantly higher when the forage was performed in the weight-bearing area (log-rank test χ2 = 3.85; p = 0.049). There was no significant difference in the failure rate (ON progression and/or THR conversion) between forage performed within the 1st or the 2nd portion of the femoral head weight-bearing area (see Figure 5, WBA-I and WBA-II) (log-rank χ2 = 1.7; p = 0.280).
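The failure-rate comparison above uses the Kaplan-Meier framework with a log-rank test. The sketch below reproduces this kind of analysis with the Python lifelines package; the follow-up durations and event flags are hypothetical, chosen only to mirror the reported group sizes (15 forages within the WBA with 2 failures, 7 outside with 3 failures).

```python
from lifelines.statistics import logrank_test

# Hypothetical follow-up (months) and event flags (1 = ON progression or THR).
t_in  = [60, 60, 60, 48, 60, 60, 60, 60, 60, 60, 60, 60, 60, 24, 60]
e_in  = [0,  0,  0,  1,  0,  0,  0,  0,  0,  0,  0,  0,  0,  1,  0]
t_out = [60, 12, 60, 18, 60, 30, 60]
e_out = [0,  1,  0,  1,  0,  1,  0]

result = logrank_test(t_in, t_out, event_observed_A=e_in, event_observed_B=e_out)
print(f"chi2 = {result.test_statistic:.2f}, p = {result.p_value:.3f}")
```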
The transverse location of the forage (ACP) in the postoperative MR sections was central in 68% of cases (15/22) and anterior in the remaining 32% (7/22) of cases (Figure 6). The ACP location of the forage drill was not associated with bone healing (Fisher's exact test χ2 = 0.41; p = 0.477). The failure rate was 14% (1/7) for cases with anterior forage and 27% (4/15) for those with central forage (Figure 6), with no statistically significant difference (log-rank test χ2 = 0.17; p = 0.681). No differences in bone healing were found between the anterior or central location of the forage and the anterior extension of the lesion (Fisher's exact test χ2 = 0.42; p = 0.999; see Figures 3 and 4d).
The mean preoperative volume of the osteonecrosis was 13.4 ± 5.9% (range: 5.4-32.2%), as a percentage of the spherical model of the femoral head, and did not influence healing in this homogeneous series. No differences were seen in the preoperative volume between healed and non-healed cases (Mann-Whitney test p = 0.514), nor in relation to the coronal ONFH location per the JIC 2001 classification (Mann-Whitney test p = 0.185), nor the transverse ONFH location per the defined ACP method (Kruskal-Wallis test p = 0.709). When comparing cases with small ONFH lesions (volume under 15%) to those with medium-sized lesions (over 15%), the failure rate was not statistically different (log-rank test χ2 = 1.7; p = 0.19). Adjusting the small or medium preoperative volume category by the ACP location, no differences were observed in the failure rate (log-rank test χ2 = 1.2; p = 0.277), nor when adjusting by the JIC 2001 (A, B, C1, C2) type of lesion (log-rank test χ2 = 2.6; p = 0.451).
Discussion
Although non-surgical treatments may be an option in this type of patient, this study investigated the surgical technique, specifically the position of the forage used to deliver cell therapy. The relevance of this issue is that cell therapy may regenerate bone within the osteonecrotic femoral head, but its distribution is unclear, and the efficacy may therefore be affected by the way these cells are delivered. The main finding was the association of treatment failure with a more varus forage positioning in the coronal plane, close to but outside the weight-bearing area of the femoral head. In the transverse plane, we could not find an association with either the anterior or central location of the drilling.
The technique's efficacy may be related to the lesion. The influence of the lesion location on the treatment of ONFH has long been debated. After the original and extended Ficat and Arlet staging, Steinberg (later, the University of Pennsylvania classification) [28], the Association pour la Recherche de la Circulation Osseuse (Association Research Circulation Osseous, ARCO) and the Japanese Investigation Committee (JIC) offered different approaches to expand the ONFH classification and integrate the prognostic value of extended lesion locations. Even though a recent Delphi approach to the ARCO classification contradicts this view [30] and does not include a subdivision according to the size/location/length of the necrotic area, a large series using the JIC classification recommended surgery in type C2 lesions due to the increased risk of collapse [35]. In our study, we limited inclusions to stage II cases (abnormal X-ray, abnormal MRI, changes seen in the femoral head, no evidence of subchondral fracture, fracture in the necrotic portion or flattening of the femoral head), and this stage is stable across all classifications. However, in this homogeneous, controlled series, we did not find differences between healed and non-healed cases when adjusting for the JIC C1 and C2 distribution. We can therefore conclude that the expanded MSCs delivered in this study appear to heal all lesion extensions in the coronal plane equally. This includes C2 lesions, which are more prone to femoral head collapse, as shown by other authors [35].
Besides this debate, the imaging evaluation used to classify the lesion may also be a problem, as an assessment of the Ficat-Arlet and ARCO staging concluded that these classifications offer poor interobserver reliability and only fair intraobserver reliability [36]. We observed that the coronal evaluation of the lesion location on radiographs and MR was in moderate agreement, and in neither evaluation was efficacy associated with the location of the lesion.
We also defined anterior, central and posterior areas of the femoral head as a way to describe the transverse extension of the lesion, following the philosophy of the JIC classification for the coronal plane [6]. This transverse description of the lesion was needed for the planned analysis of the forage positioning not only in the coronal but also in the transverse plane. We observed that the agreement between the lesions' transverse location on radiographs and on MRI sections was only slight, possibly due to variability in radiograph positioning for lateral or axial hip projections. This being the case, our analysis was performed on MR transverse sections. No clear differences in efficacy were seen whether the lesion was more or less anterior in the femoral head, and we therefore conclude that even anterior lesions can be healed with the proposed technique.
The forage technique was then considered in relation to treatment efficacy. The forage or CD technique has evolved substantially to decrease potential risks, such as fracture secondary to drilling [3] or impaired biomechanical competence of the proximal femur [37]. In our case, we used a single drilling of 4 mm in diameter and experienced no mechanical complications. The introduction of grafts often requires larger drilling, up to 9 mm with expandable reamers [38], the so-called advanced core decompression to remove the necrotic tissue [18]. The introduction of grafts has been claimed to be superior to standard CD procedures in a large series at 10 years when adjusted for Ficat stage [39], even if the clinical relevance is limited (58.1 vs. 57.9% 10-year survivorship). However, the size of the lesion may be a determinant when selecting this more aggressive technique. Small lesions (under 15% head volume) required THR conversion in 7% of cases, while medium and large lesions (over 15%) required 31% and 33% THR conversion after CD plus graft [31]; therefore, the volume was associated with the prognosis. This view has been further supported by new MR techniques [40]. In our study, most of the lesions were medium-sized, and no clear association with treatment failure was seen. The still unclear mechanism of action of this medicinal product may explain the variable amounts of bone regeneration, which may be at the origin of the different treatment outcomes and deserves further study.
The positioning of the forage has not been previously investigated. Furthermore, when multiple drilling (3 mm) is planned instead of a single, larger drilling [10], the positioning becomes even more unpredictable. The aim of reaching the ischemic area [14] is well established, with the help of fluoroscopy or other techniques, but when the lesion extends in both the coronal and transverse planes, the advisable place to deliver cells is unclear. We understood that crossing the sclerotic rim was necessary, and this condition was fulfilled in all the studied procedures. However, it is unclear how the infused cell therapy is distributed in the femoral head. In experimental drilling and MSC injection in pig femoral heads [25], cells remained confined to the site of injection, attached to the bone trabeculae. Therefore, the positioning of the drill may affect the regenerative potential of the delivered cell therapy. Within the osteonecrotic lesion (particularly if widely extended), and to foster bone regeneration proximal to the sclerotic rim, the surgeon may aim at the weight-bearing area with a more valgus drill positioning, or else aim at the central-central or even central-inferior area (as in fracture fixation techniques) with a more varus drill. We investigated the drill positioning in relation to the WBA and found that the regeneration obtained with a more varus drilling (below the anatomical CCD angle of the proximal femur) was less efficacious in avoiding failure. This finding recommends delivering cell therapy within the WBA in the coronal plane, with a valgus orientation of the drill. No differences were seen between weight-bearing areas I and II; therefore, placing the drilling too valgus may not be required to improve the results. In the transverse plane, a more central or anterior drilling did not provide an advantage to bone regeneration. However, the postoperatively evaluated variability of the drilling was moderate. A clinical protocol with a precise drilling angle (relative to the patient's anatomical angle) may decrease this variability and help the surgeon make intraoperative decisions regarding the surgical technique of the forage.
This study is based on postoperative imaging from an early clinical trial, and the number of cases is limited. This is a major limitation of the study, as with many reports on osteonecrosis treatment. However, the fact that it is based on a precise protocol for including cases (specifically, stage II, symptomatic, early cases), performing the technique, delivering a standardized advanced treatment (140 million autologous expanded MSCs) and following patients is also a strength that helped us homogenize the results. Other limitations include the potential influence of different diagnoses, the variable regenerative potential of each patient's autologous cells, the unknown spatial proliferation of the delivered cells and the evolution of these cells within the necrotic tissue. All these limitations, arising from the biology of the treatment, may influence the surgical efficacy of the procedure. Bone regeneration within the osteonecrotic lesion is still poorly understood and will require further studies. Still, we believe that the role of adequately positioning the cells within the femoral head during the surgical procedure needs to be appraised and standardized, to avoid an important source of potential variability in the current treatment of early-stage osteonecrosis by means of cell therapy.
Conclusions
The drilling orientation of the percutaneous forage in the coronal plane within the weight-bearing area of the femoral head, slightly valgus compared with the anatomical CCD angle, increased the efficacy of bone regeneration when delivering cell therapy to the osteonecrotic femoral head in this study. In the transverse plane, central and anterior drilling were similarly efficacious. The drilling orientation should be standardized in a clinical protocol to percutaneously treat femoral head osteonecrosis with cell therapy, considering the lesion location and spread, because the efficacy of the delivered cells may be influenced by the surgical procedure.

Data Availability Statement: Data are available on request due to privacy and ethical restrictions.
"year": 2021,
"sha1": "a1ae9e785be25be5736f223808a640f838d6e159",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/10/4/743/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a1ae9e785be25be5736f223808a640f838d6e159",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Reflection on 50 Years of Friendship and Collaboration on Aerosol Science and Technology
I started my master's degree in 1971 and completed my Ph.D. degree in 1976 under the mentorship of Prof. Benjamin Liu. During these years, we worked on bipolar charging and established the criteria to neutralize charged aerosols (Liu and Pui, 1974a, 1974b) (Fig. 1). An electrical aerosol analyzer (EAA) was developed to measure atmospheric particle size distributions and led to the successful commercialization of the TSI 3030 EAA (Liu and Pui, 1975). TSI's founding aerosol instrument manager Gilmore Sem wrote that without the success of the EAA, TSI would have gotten out of the aerosol instrument business (Schmidt et al., 2022). Professor Kenneth Whitby and Dr. William Wilson of EPA invited me to participate in the LA smog measuring campaign (Whitby et al., 1975), which, together with other field measurements, provided results to help EPA set up the PM2.5 standard.
Yan was a senior director of Applied Materials and held 100 patents before his early retirement. He is now developing a sensor for real-time detection of nanoparticles and biological particles in air and liquid (Ye and Pui, 2021). Francisco worked at MSP/TSI for 25 years as a senior product manager before returning as a senior research engineer at the Center for Filtration Research (CFR). We published a series of papers on aerosol transport, deposition and charging (Pui et al., 1987; Tsai et al., 1990; Pui et al., 1990a, 1990b; Romay et al., 1991; Ye and Pui, 1990; Ye et al., 1991; Romay and Pui, 1992). During this period, I started an intense collaboration with Prof. Heinz Fissan, not only on research but also on organizing aerosol associations, IARA, the IAC, and several major conferences and workshops. Our collaboration built a strong basis that led us to receive the Max Planck Research Award, the highest award for engineers and scientists in Germany, and the Humboldt Research Award for Senior U.S. Scientists (Pui), and to the establishment of the Fissan-Pui-TSI Award for International Collaboration, presented every 4 years during the International Aerosol Conference (http://iara.org/FissanPuiTsi.htm). Our families are close friends (Fig. 6). In fact, together with Fissan, Kousaka, Pourprix and Szymanski, we sent our daughters to each other's families every summer for several years. We now meet every two years to continue our friendships, in Kyoto (2013)
Decade of 1990's
Chair Professor Da-Ren Chen started his Ph.D. study in the early 1990s. During his dissertation research and post-doctoral years, and his subsequent tenure as the PTL Manager, we together developed many exciting new aerosol technologies. Our two seminal papers on electrospray (Chen et al., 1995a; Chen and Pui, 1997) have received approximately 900 citations. We also explored technology for aerosolizing nanoparticles so that liquid-borne particles could be measured with the more sophisticated aerosol instruments. Following the development of a technique to produce nanoparticle medicines, we co-founded a start-up, Nanocopoeia, in St. Paul, Minnesota (https://nanocopoeia.com). Da-Ren also modeled and designed the Nano-DMA, the workhorse for nanoparticle measurements. Prof. Fissan and his student Dr. Detlev Hummes were also involved in this important development (Chen et al., 1998). Prof. Ben Liu and I also started the Center for Filtration Research (CFR) in 1991, which is still going strong with 20 leading international filter manufacturers and end users. During the early years, we also worked with Dr. Wilson Poon (WL Gore), Dr. Scott Earnest (NIOSH Division Director), and Dr. Shintaro Sato (Hitachi) on a variety of filtration projects. Besides Scott as a NIOSH director, we have four other CFR Ph.D.s, Drs. Chaolong Qi, Liming Lo, Seungkoo Kang, and Drew Thompson, working at NIOSH, an affiliated member of CFR and a good resource for our research. Prof. Da-Ren Chen has been a key contributor to the CFR. One of the early projects of great interest to CFR members was the design of pleated filters; Da-Ren performed a detailed study using the finite-element numerical method to provide design guidelines (Chen et al., 1995b). Many of his students also contributed to the CFR as student investigators or post-docs at UMN, including Dr. Chaolong Qi (at NIOSH), Dr. Lin Li (at MSP/TSI) and Dr. Qisheng Ou (CFR lab manager). Dr. Ta-Chih Hsiao performed CFR research at WashU and is now a Professor at the National Taiwan University. Other early CFR graduates included Dr. Ming Ouyang (Cummins), Dr. Bruce Forsyth (Boston Scientific), and Dr. Hee-Siew Han (TSI). A master's student I mentored, Xiang Zhang, is now the President of the University of Hong Kong. I also enjoyed helping Prof. Chuen-Jinn Tsai with Taiwan's TAAR, Prof. Kangho Ahn with Korea's KAPAR, and Prof. Junji Cao with CAAR during the startup of their respective aerosol associations. I have appreciated many years of friendship with Prof. Dr. Bernd Sachweh, who was a post-doc at UMN with Prof. Peter McMurry (1992-93). Bernd invited me to BASF's "Meeting with Professors" in Germany for several years and also to BASF conferences in Beijing and Singapore. He is currently Vice President, Special Projects Asia, at BASF (China) Co. Ltd. in Shanghai, China.
Decade of 2000's
One of the major programs during this period was funded by Intel: the Extreme UV Lithography (EUVL) Mask Study. The objectives were to develop methods to evaluate and control particulate contaminant generation, transport and deposition in a mask handling system (Fig. 7). Prof. Fissan was a key investigator, and his student Dr. Christof Asbach, a CFR post-doctoral research associate, was a key contributor; he is now the President of GAeF (Gesellschaft fuer Aerosolforschung), the oldest aerosol association in Europe and the world. Other students and post-docs working on the project included Prof. Se-Jin Yook, Prof. Jung-Hyeun Kim, Prof. Jing Wang and others. We developed a thermophoretic technique to protect the masks and an injection system to evaluate particle deposition under vacuum conditions (Asbach et al., 2006). We also deposited known-size nanoparticles on the masks as calibration masks. In all, we published 16 peer-reviewed journal papers (Kim et al., 2006a). Many of the techniques we developed are now industry practice for EUVL systems. I also had the opportunity of mentoring Dr. Seungki Chae, who became VP and Sr. VP of Samsung Electronics and Samsung Display. During this period, long-time collaborator Dr. George Mulholland worked with us on certifying NIST 60 nm and 100 nm primary standard particles using the DMA technique (Mulholland et al., 2006). He worked with Jung-Hyeun to obtain the slip correction in the large Knudsen number regime (Kim et al., 2005) and published a series of papers with Prof. Weon-Gyu Shin on agglomerate particle characterization (Shin et al., 2010). In 2006, I attended Prof. Jing Wang's Ph.D. final defense in Aerospace Engineering and Mechanics (AEM). I was so impressed with his thesis research in fluid mechanics that I immediately recruited him to join my group as a post-doctoral research associate to work on aerosol and filtration research. In just a few years, he published a series of 35 papers focusing on EUVL and filtration research (Wang et al., 2007). He also helped mentor junior students, Dr. Tze Yan Ling (Intel) and Dr. Weon-Gyu Shin (Chungnam National University). He has continued to contribute a great deal to the CFR, even after he left to join ETH Zurich in 2010.
I am most grateful to have received the Fuchs Memorial Award during the 2010 International Aerosol Conference in Helsinki, Finland. The award is sponsored jointly by the American Association for Aerosol Research (AAAR), the Gesellschaft fuer Aerosolforschung (GAeF), and the Japan Association of Aerosol Science and Technology (JAAST). An Award Committee of the International Aerosol Research Assembly (IARA, www.iara.org), consisting of 18 international member associations, selects the winner(s) every four years. The co-winner of the 2010 Fuchs Memorial Award was Prof. Markku Kulmala of the University of Helsinki. The IARA website states: "The Fuchs Memorial Award recognizes outstanding original research contributions to the field of aerosol science and technology. It is considered the highest honor for researchers in the field. Presented every four years at the International Aerosol Conference, the award memorializes the late Professor Nikolai Albertovich Fuchs, the great Russian scientist who is regarded by many as the 'father of aerosol science'."
Decade of 2010's and Beyond
Two milestones during this period were being named a member of the U.S. National Academy of Engineering (NAE) in 2016 and a Regents Professor at the University of Minnesota in 2019. There is a fixed number of 30 Regents Professors among the 4,000 faculty members at UMN. We explored industrial applications of filtration research during this period. Three major contributors were Prof. Sheng-Chieh (Shawn) Chen, Dr. Seong Chan Kim and Dr. Qisheng Ou; all have served a term as the manager of PTL/CFR. Shawn was Chuen-Jinn's Ph.D. student and came as a post-doc. He has worked on several topics: 1. evaluating membrane filter efficiency using sub-10 nm quantum dots, a collaboration with Prof. Doris Segets (Chen et al., 2016), a former post-doc at the University of Erlangen-Nuremberg with Prof. Wolfgang Peukert; 2. co-authoring a PM2.5 review paper which has received 700 citations in a few years; 3. exploring electret filter applications, collaborating with Prof. Ziyi Li of the University of Science and Technology Beijing on zeolite-coated electret media. Seong Chan spent 15 years at the CFR, separated by a 5-year stay at Entegris as a contamination engineer. He came as a post-doc from Prof. J.K. Lee at Pusan National University. He worked on agglomerate generation and characterization, and the health effects of nanoparticles. He performed in-vitro studies with Prof. Gunter Oberdorster at the University of Rochester (Kim et al., 2010), and many filtration application projects, particularly on respirators/masks and contamination transport problems to mitigate COVID spread (Kim et al., 2006b). I am pleased that Seong Chan has now started to work as a defect/contamination specialist at ASML, a major manufacturer of EUVL systems. Dr. Qisheng Ou started as a post-doctoral research associate from Washington University in St. Louis (with Da-Ren as his Ph.D. advisor) and is now manager of CFR/PTL. He has performed research on filtration topics and developed systems to: 1. produce high-temperature agglomerates to evaluate engine exhaust filters (Ou et al., 2019); 2. develop methods for coating nanoparticle membranes on wall filters to improve efficiency; and 3. evaluate respirators/masks, dental tool dispersion, and decontamination methods (Ou et al., 2020, 2021). He mentored several Chinese scholars who became professors at Chinese universities, including Prof. Xinjiao Tian, Prof. Qiang Lv, and Prof. Cheng Chang, and recent students Dr. Chenxing Pei (Midea Group), Weiqi Chen and Dongbin Kwak. Dr. Ou is currently focusing on starting an indoor air quality program. Two Japanese scholars came to our group during 2015-2017, Dr. Shigeru Kimoto and Dr. Maromu Yamada, and contributed to the CFR program. Dr. Kimoto worked on contamination control and instrument calibration. Dr. Yamada came from JNIOSH and returned to be a Senior Research Fellow in the Work Environment Research Group. Another Japanese scholar I remember well is Dr. Yoshiyuki Endo, who sent me his growing family photo each year. One other major contributor was Dr. Zhili Zuo, who started our bioaerosol program. I have also enjoyed my frequent visits with Professor Pratim Biswas over the years, first as an Advisory Board member for his Energy, Environmental and Chemical Engineering Department at Washington University in St. Louis, and now as an Advisory Board member for his College of Engineering at the University of Miami.
Two recent Korean graduates helped to develop new fields in my group. Dr. Changhuk Kim made use of the soft X-ray technique to detect sub-ppb concentration of airborne molecular contaminants in real-time (Kim et al., 2015). He is now an Associate Professor at the Pusan National University. Dr. Handol Lee started a systematic study on liquid filtration (Lee et al., 2017) and is now an assistant professor at Inha University. The latest Ph.D. to graduate from my group is Dongbin Kwak who did a fundamental study on particle formation, transport, deposition and filtration for semiconductor applications (Kwak et al., 2021).
There were many other former students, post-docs and scholars who have contributed to my career, and they are listed below. I like to tell my group that we should be kind to each other because the aerosol/particle discipline is a relatively small and specialized community; sooner or later, we will cross each other's paths. Collaboration and friendship can not only increase productivity in the profession but also bring happiness to colleagues and families (Fig. 8). Qingfeng is currently working with me on designing and operating two Air Cleaning Towers in Delhi, India, for the health and wellbeing of the residents. The Delhi Towers are the third-generation Air Cleaning Tower (Fig. 9). Starting in 2015, Qingfeng and I published 5 papers on developing the solar-assisted large-scale cleaning system (SALSCS) to remove PM2.5 in the urban atmosphere (Cao et al., 2015, 2018). We collaborated with Prof. Junji Cao of the Chinese Academy of Sciences (IEECAS), Prof. Wenquan Tao of Xi'an Jiaotong University, and Dr. Ningning Zhang of CAS to construct the first-generation SALSCS.
Center for Filtration Research (CFR)
We have just completed the 62nd Semi-Annual Review Meeting of the Center for Filtration Research. During the past 31 years, we have performed extensive filtration research and graduated a large number of students. Currently, we have 20 leading filtration manufacturers and end users (Fig. 10). Besides supporting UMN researchers, we are also funding subcontracts to former CFR researchers and collaborators who are now renowned experts in the field, including Prof. Da-Ren Chen, among others.

I often like to end my speech by showing the attached graph. An integrative approach, built on collaborations among academia, government, and industry, can accelerate the solution to the PM2.5 problem in the world (Fig. 11). These three sectors of academia, government and industry represent three gears driving the wheel of progress: Sources ⇒ Effects ⇒ Regulation ⇒ Control.

Fig. 11. An integrative approach, involving Academia, Government, and Industry to address PM2.5 issues (Pui et al., 2014).
Outreach Activities and Fingerson/TSI Distinguished Lectures
Academia can most effectively address the sources (coal burning and vehicle emissions) and their effects (visibility and health). To protect public and environmental health, the government can progressively set stricter regulations for PM2.5 and vehicle emission standards. Industry can respond by developing novel control technologies for baghouse filters and diesel/gasoline particulate filters, which will reduce pollution sources. Further, academia can also help the government set regulations and industry develop novel technologies. This will enable the development of green technologies to provide a sustainable environment for the world.
I am deeply gratified that the AAC organizers set up this commemorative session to honor my 50 years of service to the aerosol discipline. I hope that this will inspire commemorative sessions for many other well-deserving colleagues who have made major contributions to the aerosol discipline.
"year": 2023,
"sha1": "30ec638741fdbf2c944efc73f90aa21ae14db9e7",
"oa_license": "CCBY",
"oa_url": "https://aaqr.org/articles/aaqr-22-11-pui-0400.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3343597af27ad61ccd4455ae1b071b8d5a728db6",
"s2fieldsofstudy": [
"Art"
],
"extfieldsofstudy": []
} |